Column             Type      Observed range
id_paragraph       string    length 20 to 26
parag_1            string    length 101 to 3.02k
parag_2            string    length 173 to 2.77k
annot_1            dict
annot_2            dict
id_source          string    length 8 to 11
id_target          string    length 8 to 11
index_paragraph    int64     0 to 26
list_sentences_1   list      length 1 to 36
list_sentences_2   list      length 1 to 36

Each example row below lists its values in this column order: id_paragraph, parag_1, parag_2, annot_1, annot_2, id_source, id_target, index_paragraph, list_sentences_1, list_sentences_2.
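As a minimal sketch of how records with this schema could be loaded and inspected, assuming the data is packaged as a Hugging Face `datasets` dataset (the identifier `user/paragraph-revisions` and the `train` split below are placeholders, not confirmed names):

```python
# Minimal sketch, assuming the rows below come from a Hugging Face dataset with
# the schema above; "user/paragraph-revisions" is a placeholder identifier.
from datasets import load_dataset

ds = load_dataset("user/paragraph-revisions", split="train")

for record in ds.select(range(3)):
    # parag_1 is the original paragraph, parag_2 its revised version.
    print(record["id_paragraph"], "| paragraph", record["index_paragraph"],
          "of revision", record["id_source"], "->", record["id_target"])

    # annot_1 / annot_2 are per-annotator dicts; annot_2 may be null (None).
    for annot in (record["annot_1"], record["annot_2"]):
        if annot:
            print("  labels:", annot["annotation"],
                  "| instruction:", annot.get("instruction") or "(none)")

    # list_sentences_1 / list_sentences_2 segment the two paragraphs into
    # sentences, each stored as a {"text": ...} entry.
    print("  sentences:", len(record["list_sentences_1"]), "->",
          len(record["list_sentences_2"]))
```

Judging from the rows below, `id_paragraph` appears to concatenate `id_source`, `id_target`, and a zero-padded `index_paragraph`, so the record key can also be derived from the other fields.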
By2l1r_DB.BJLFqqnsr.00
Our initial experiments compare our agent to baseline agents trained on a single policy. For these experiments, we use the navigation environment defined previously with three objectives: stay on the road, avoid hazards, and move right. Note that the opposite of each of these objectives are also included in possible beh...
Our initial experiments compare our agent to baseline agents trained on a single policy. For these experiments, we use the navigation environment defined previously with three objectives: stay on the road, avoid hazards, and move right. Note that the opposite of each of these objectives are also included in possible beh...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
By2l1r_DB
BJLFqqnsr
0
[ { "text": "Our initial experiments compare our agent to baseline agents trained on a single policy." }, { "text": "For these experiments, we use the navigation environment defined previously with three objectives: stay on the road, avoid hazards, and move right." }, { "text": "Note that the oppos...
[ { "text": "Our initial experiments compare our agent to baseline agents trained on a single policy." }, { "text": "For these experiments, we use the navigation environment defined previously with three objectives: stay on the road, avoid hazards, and move right." }, { "text": "Note that the oppos...
SyF8k7bCW.HytIRPamf.00
Mirkovic, 2009; Binder & Desai, 2011). The idea of learning from the context information was first successfully applied to vector representation learning for words in Mikolov et al. (2013b) and learning from the occurrence of words also succeeded in Pennington et al.
Mirkovic, 2009; Binder & Desai, 2011). The idea of learning from the context information (Turney & Pantel, 2010) was recently successfully applied to vector representation learning for words in Mikolov et al. (2013); Pennington et al. Collobert et al.
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium", "Content_addition" ], "instruction": "", "annotator": "annotator_09" }
SyF8k7bCW
HytIRPamf
0
[ { "text": "Mirkovic, 2009; Binder & Desai, 2011)." }, { "text": "The idea of learning from the context information was first successfully applied to vector representation learning for words in Mikolov et al." }, { "text": "(2013b) and learning from the occurrence of words also succeeded in Penni...
[ { "text": "Mirkovic, 2009; Binder & Desai, 2011)." }, { "text": "The idea of learning from the context information (Turney & Pantel, 2010) was recently successfully applied to vector representation learning for words in" }, { "text": "Mikolov et al. (2013); Pennington et al. Collobert et al." ...
aomiOZE_m2.rxb2TiQ6bq.22
We further compare our network pruning method with representative model compression techniques for image SR. Specifically, we compare with neural architecture search based methods (Chu et al., 2019b;a) and knowledge distillation (KD) based methods (Lee et al., 2020). We provide quantitative results in Tab. 4. Our SRPN-...
We further compare our SRP to other representative efficient image SR approaches via model compression. Concretely, neural architecture search based methods (Chu et al., 2019b;a) and knowledge distillation (KD) based methods (Lee et al., 2020) are compared to. Quantitative results at × 2 scale are presented in Tab. 4, w...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Please, rewrite this paragraph, make it easier to read", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium", "Concision" ], "instruction": "Write in a more passive style and remove the last sentence", "annotator": "annotator_06" }
aomiOZE_m2
rxb2TiQ6bq
22
[ { "text": "We further compare our network pruning method with representative model compression techniques for image SR." }, { "text": "Specifically, we compare with neural architecture search based methods (Chu et al., 2019b;a) and knowledge distillation (KD) based methods (Lee et al., 2020)." }, { ...
[ { "text": "We further compare our SRP to other representative efficient image SR approaches via model compression." }, { "text": "Concretely, neural architecture search based methods (Chu et al., 2019b;a) and knowledge distillation (KD) based methods (Lee et al., 2020) are compared to." }, { "tex...
MXi6uEx-hp.rdZfFcGyf9.11
We collect interaction data for one month for a listwise online campaign recommender system. Users are represented by attributes such as age, occupation, and localities. Items attributes are also given such as text features, image features, and reward points of campaigns. We simulate a representative RL environment by...
We collect four-week interaction data in a listwise online campaign recommender system. Users are represented by attributes such as age, occupation, and localities. Item attributes include text features, image features, and reward points of campaigns. We train a VAE (Kingma & Welling, 2013) to learn item representation...
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite the majority of the paragraph, avoiding we and writing in a more neutral tone.", "annotator": "annotator_04" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite and reorganize the paragraph to convey the ideas more clearly.", "annotator": "annotator_07" }
MXi6uEx-hp
rdZfFcGyf9
11
[ { "text": "We collect interaction data for one month for a listwise online campaign recommender system." }, { "text": "Users are represented by attributes such as age, occupation, and localities." }, { "text": "Items attributes are also given such as text features, image features, and reward po...
[ { "text": "We collect four-week interaction data in a listwise online campaign recommender system." }, { "text": "Users are represented by attributes such as age, occupation, and localities." }, { "text": "Item attributes include text features, image features, and reward points of campaigns." ...
HJRpJl_vr.H115opYiS.00
The shot-number k only appears in first two terms of the denominator. It does not contribute to the last term of the denominator. This implies diminishing returns in expected accuracy when more support data is added without altering φ . 2. By observing the degree of terms in equation 7 (and treating the last term of th...
The shot-number k only appears in first two terms of the denominator, implying that the bound saturates quickly with increasing k . This is also in agreement with the empirical observation that meta-testing accuracy has diminishing improvements when more support data is added. By observing the degree of terms in equatio...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
HJRpJl_vr
H115opYiS
0
[ { "text": "The shot-number k only appears in first two terms of the denominator. It does not contribute to the last term of the denominator." }, { "text": "This implies diminishing returns in expected accuracy when more support data is added without altering φ . 2." }, { "text": "By observing th...
[ { "text": "The shot-number k only appears in first two terms of the denominator, implying that the bound saturates quickly with increasing k ." }, { "text": "This is also in agreement with the empirical observation that meta-testing accuracy has diminishing improvements when more support data is added." ...
BkxG1CvhWf.wcpE7maMLZ4.00
A gap in the literature seems to be a practical completeness threshold for cost optimal planning problems that have actions with 0-cost. This is one hurdle to the application of SAT-based planning to such problems since, without a reasonable completeness threshold, optimality can only be proved after solving the compil...
A gap in the literature seems to be a practical completeness threshold for cost optimal planning problems that have actions with 0-cost. This is one hurdle to the application of SAT-based planning to such problems, since without a reasonable completeness threshold, optimality can only be proved after solving the compil...
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02" }
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Concise the last sentence of this text.", "annotator": "annotator_07" }
BkxG1CvhWf
wcpE7maMLZ4
0
[ { "text": "A gap in the literature seems to be a practical completeness threshold for cost optimal planning problems that have actions with 0-cost." }, { "text": "This is one hurdle to the application of SAT-based planning to such problems since, without a reasonable completeness threshold, optimality c...
[ { "text": "A gap in the literature seems to be a practical completeness threshold for cost optimal planning problems that have actions with 0-cost." }, { "text": "This is one hurdle to the application of SAT-based planning to such problems, since without a reasonable completeness threshold, optimality c...
CVRUl83zah.I75TtW0V7.23
Evaluation We compute the accuracy at the sample-level, meaning a predicted set is considered correct if and only if every element is correct. The baselines are very volatile during training, further resulting in very large variances at the end of training. To reduce this variance, we pick the best model according to t...
Evaluation. We compute the accuracy at the sample-level, meaning a predicted set is considered correct only if every element is correct. The predicted ID for each predicted element is obtained by taking the argmax over the elements’ dimensions in the output. The baselines are very volatile during training, which result...
{ "annotation": [ "Content_addition", "Rewriting_light" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
23
[ { "text": "Evaluation We compute the accuracy at the sample-level, meaning a predicted set is considered correct if and only if every element is correct." }, { "text": "" }, { "text": "The baselines are very volatile during training, further resulting in very large variances at the end of traini...
[ { "text": "Evaluation. We compute the accuracy at the sample-level, meaning a predicted set is considered correct only if every element is correct." }, { "text": "The predicted ID for each predicted element is obtained by taking the argmax over the elements’ dimensions in the output." }, { "text...
CVRUl83zah.I75TtW0V7.01
Implicit DSPN. Motivated by our analysis on exclusive multiset-equivariance, we seek tofurther improve DSPN. We propose implicit DSPN (iDSPN) in Section 3: a version of DSPN that uses implicit differentiation, which enables better optimizers and more iterations to be used at a constant memory cost and less computati...
Implicit DSPN. Despite this beneficial property, DSPN is outperformed by the set-equivariant Slot Attention (Locatello et al., 2020), which motivates us to improve other aspects of DSPN. We propose implicit DSPN (iDSPN) in Section 3: a version of DSPN that uses approximate implicit differentiation. Implicit differentiat...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
1
[ { "text": "Implicit DSPN." }, { "text": "Motivated by our analysis on exclusive multiset-equivariance, we seek tofurther improve DSPN." }, { "text": "We propose implicit DSPN (iDSPN) in Section 3: a version of DSPN that uses implicit differentiation, which enables better optimizers and more i...
[ { "text": "Implicit DSPN." }, { "text": "Despite this beneficial property, DSPN is outperformed by the set-equivariant Slot Attention (Locatello et al., 2020), which motivates us to improve other aspects of DSPN." }, { "text": "We propose implicit DSPN (iDSPN) in Section 3: a version of DSPN that...
c-9Hob6rd2.H4aN8Z9LDS.01
FetchPush . As shown in Figure 5(c), in the fetch environment, the agent is trained to fetch an object from the initial position (rectangle in green) to a distant position (rectangle in red). Although the fetch tasks are more complicated than they reach ones in the maze, GSRL also yields large performance gain, as sh...
FetchPush . As shown in Figure 11(c), in the fetch environment, the agent is trained to fetch an object from the initial position (rectangle depicted in green) to a distant position (rectangle depicted in red). Let the origin (0 , 0 , 0) denote the projection of the gripper’s initial coordinate on the table. The object...
{ "annotation": [ "Development", "Content_addition" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
c-9Hob6rd2
H4aN8Z9LDS
1
[ { "text": "FetchPush ." }, { "text": "As shown in Figure 5(c), in the fetch environment, the agent is trained to fetch an object from the initial position (rectangle in green) to a distant position (rectangle in red). Although the fetch tasks are more complicated than they reach ones in the maze, GSRL...
[ { "text": "FetchPush ." }, { "text": "As shown in Figure 11(c), in the fetch environment, the agent is trained to fetch an object from the initial position (rectangle depicted in green) to a distant position (rectangle depicted in red). Let the origin (0 , 0 , 0) denote the projection of the gripper’s i...
nCTSF9BQJ.DGhBYSP_sR.01
Traditional computational approaches are mainly based on biophysics and statistics (Schymkowitz et al., 2005; Park et al., 2016; Alford et al., 2017). Though having dominated the area for years, their limitations are non-negligible. In general, biophysics-based methods face the trade-off between efficiency and accuracy...
Traditional computational approaches are mainly based on biophysics and statistics (Schymkowitz et al., 2005; Park et al., 2016; Alford et al., 2017). Although these methods have dominated the field for years, they have several limitations. Biophysics-based methods face a trade-off between efficiency and accuracy since...
{ "annotation": [ "Unusable", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_07" }
null
nCTSF9BQJ
DGhBYSP_sR
1
[ { "text": "Traditional computational approaches are mainly based on biophysics and statistics (Schymkowitz et al., 2005;" }, { "text": "Park et al., 2016; Alford et al., 2017)." }, { "text": "Though having dominated the area for years, their limitations are non-negligible." }, { "text": ...
[ { "text": "Traditional computational approaches are mainly based on biophysics and statistics (Schymkowitz et al., 2005;" }, { "text": "Park et al., 2016; Alford et al., 2017)." }, { "text": "Although these methods have dominated the field for years, they have several limitations." }, { ...
c8pZvSp-5r.zd4IIIuixp.01
In addition, both DeiT and ViT utilize an extra learnable class token to perform classification ( i.e ., cls token shown in Figure 1 (a) and (b)). By design, the class token is not translation-invariant although it can learn to be so. A simple alternative is to directly replace it with a global average pooling (GAP), wh...
In addition, both DeiT and ViT utilize an extra learnable class token to perform classification ( i.e ., cls token shown in Figure 1 (a) and (b)). By design, the class token is not translation-invariant although it can learn to be so. A simple alternative is to directly replace it with a global average pooling (GAP), wh...
{ "annotation": [ "Concision" ], "instruction": "Simplify the conclusions of this paragraph wo make it clearer and more concise.", "annotator": "annotator_03" }
{ "annotation": [ "Concision" ], "instruction": "Simplify the last sentence by removing the notion of translation-equivariant and just calling it conditional positional encodings.", "annotator": "annotator_07" }
c8pZvSp-5r
zd4IIIuixp
1
[ { "text": "In addition, both DeiT and ViT utilize an extra learnable class token to perform classification ( i.e ., cls token shown in Figure 1 (a) and (b))." }, { "text": "By design, the class token is not translation-invariant although it can learn to be so." }, { "text": "A simple alternative ...
[ { "text": "In addition, both DeiT and ViT utilize an extra learnable class token to perform classification ( i.e ., cls token shown in Figure 1 (a) and (b))." }, { "text": "By design, the class token is not translation-invariant although it can learn to be so." }, { "text": "A simple alternative ...
aomiOZE_m2.rxb2TiQ6bq.19
When compared with all previous methods, our SRPN-L performs the best on all the datasets with all scaling factors. Different from careful network designs as most compared methods have done, we start with the existing EDSR baseline (Lim et al., 2017) and prune it to a much smaller network, showing the effectiveness of ...
When compared with all previous methods, our SRPN-Lite performs the best on all the datasets under all scaling factors. Unlike most comparison methods, which achieve efficiency through careful network designs, our work starts with the existing EDSR baseline (Lim et al., 2017) and prunes it to a much smaller network, sho...
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite the following paragraph, make it more formal.", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the writing and change SRPN-L to SRPN-Lite", "annotator": "annotator_06" }
aomiOZE_m2
rxb2TiQ6bq
19
[ { "text": "When compared with all previous methods, our SRPN-L performs the best on all the datasets with all scaling factors." }, { "text": "Different from careful network designs as most compared methods have done, we start with the existing EDSR baseline (Lim et al., 2017) and prune it to a much smal...
[ { "text": "When compared with all previous methods, our SRPN-Lite performs the best on all the datasets under all scaling factors." }, { "text": "Unlike most comparison methods, which achieve efficiency through careful network designs, our work starts with the existing EDSR baseline (Lim et al., 2017) an...
nCTSF9BQJ.DGhBYSP_sR.22
Shan et al. (2022) identifies 5 single-point mutations on a human antibody against SARS-CoV-2 that enhance neutralization (effectiveness). There are 494 possible single-point mutations on the heavy chain CDR region of the antibody in total. We use the most competitive methods benchmarked in Section 4.1 to predict ∆∆ ...
In Shan et al. , the authors report five single-point mutations on a human antibody against SARS-CoV-2 that enhance neutralization effectiveness. These mutations are among the 494 possible single-point mutations on the heavy chain CDR region of the antibody. We use the most competitive methods benchmarked in Section 4....
{ "annotation": [ "Rewriting_medium" ], "instruction": "Fluidify this paragraph.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the English in this paragraph in an academic style.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
22
[ { "text": "Sequence-based models do not predict ∆∆ G accurately for protein-protein binding in accordance with the discussion in Section 2.2." }, { "text": "Figure 3 plots the distribution of per-complex correlation coefficients." }, { "text": "Please refer to Section B of the appendix for more...
[ { "text": "Sequence-based models do not accurately predict ∆∆ G for protein-protein binding, as discussed in Section 2.2." }, { "text": "Figure 3 shows the distribution of per-complex correlation coefficients." }, { "text": "Please refer to Section B of the appendix for more results and discussi...
aomiOZE_m2.rxb2TiQ6bq.10
Given the issue above, it is necessary to prune all the Conv layers in residual blocks if we seek acceleration of practical use. Thus, we need a method to align the pruned indices in all constrained Conv layers. Regularization then arises as a natural solution given its prevailing use to impose priors on the sparsity s...
Given this issue, it is imperative to prune all the Conv layers in residual blocks, thus calling for an approach to align the pruned indices of all constrained Conv layers. Regularization then arises as a promising solution considering it has been widely used before to impose priors on the sparsity structure in classifi...
{ "annotation": [ "Rewriting_medium" ], "instruction": "I want to use other words in my paragraph.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light", "Concision" ], "instruction": "Revise this text to make it a little more concise and fitting to the academic style.", "annotator": "annotator_07" }
aomiOZE_m2
rxb2TiQ6bq
10
[ { "text": "Given the issue above, it is necessary to prune all the Conv layers in residual blocks if we seek acceleration of practical use. Thus, we need a method to align the pruned indices in all constrained Conv layers." }, { "text": "Regularization then arises as a natural solution given its prevail...
[ { "text": "Given this issue, it is imperative to prune all the Conv layers in residual blocks, thus calling for an approach to align the pruned indices of all constrained Conv layers." }, { "text": "Regularization then arises as a promising solution considering it has been widely used before to impose p...
FKg16y0Y9A.ztJ9BPSr-.00
We have implemented Stars as part of[redacted for blind review] which is based on the Adaptive Massively Parallel Computation (AMPC) model [7]. Each logical unit of computation is automatically distributed across a number of worker machines, with the experiments in this paper scaling to thousands of individual workers.
We have implemented Stars as part of the Grale [25] graph building system using Flume - a C++ counterpart to FlumeJava [13]. which is based on the Adaptive Massively Parallel Computation (AMPC) model [7]. Each logical unit of computation is automatically distributed across a number of worker machines, with the experime...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
FKg16y0Y9A
ztJ9BPSr-
0
[ { "text": "We have implemented Stars as part of[redacted for blind review] which is based on the Adaptive" }, { "text": " Massively Parallel Computation (AMPC) model [7]." }, { "text": "Each logical unit of computation is automatically distributed across a number of worker machines, with the exp...
[ { "text": "We have implemented Stars as part of the Grale [25] graph building system using Flume - a C++ counterpart to FlumeJava [13]." }, { "text": "which is based on the Adaptive Massively Parallel Computation (AMPC) model [7]." }, { "text": "Each logical unit of computation is automatically ...
CVRUl83zah.I75TtW0V7.06
Note that exclusive multiset-equivariance is not always obtained in DSPN, but depends on the choice of encoder. For instance, a DeepSets encoder (Zaheer et al., 2017) – which is based on sum pooling – has the same gradients for equal elements, which would make DSPN set-equivariant. It is specifically the use of the exc...
Note that DSPN is not always exclusively multiset-equivariant, but it depends on the choice of encoder. A DeepSets encoder (Zaheer et al., 2017) – which is based on sum pooling – has the same gradients for equal elements, which would make DSPN set-equivariant. It is specifically the use of the exclusively multiset-equiv...
{ "annotation": [ "Rewriting_light" ], "instruction": "Change the subject in the first sentence.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light", "Concision" ], "instruction": "Lightly revise this paragraph for better readability while trying to make it a little shorter without loosing informations.", "annotator": "annotator_07" }
CVRUl83zah
I75TtW0V7
6
[ { "text": "Note that exclusive multiset-equivariance is not always obtained in DSPN, but depends on the choice of encoder." }, { "text": "For instance, a DeepSets encoder (Zaheer et al., 2017) – which is based on sum pooling – has the same gradients for equal elements, which would make DSPN set-equivar...
[ { "text": "Note that DSPN is not always exclusively multiset-equivariant, but it depends on the choice of encoder." }, { "text": "A DeepSets encoder (Zaheer et al., 2017) – which is based on sum pooling – has the same gradients for equal elements, which would make DSPN set-equivariant." }, { "te...
7VIguXRv9h.yPdniQMisK.00
• Implicit Curricula: Examples are learned in a consistent order (Section 2). We show that the order in which examples are learned is consistent across runs, similar training methods, and similar architectures. Furthermore, we show that it is possible to change this order by changing the order in which examples are pre...
• Implicit Curricula: Examples are learned in a consistent order (Section 2). We show that the order in which examples are learned is consistent across runs, similar training methods, and similar architectures. Furthermore, we show that it is possible to change this order by changing the order in which examples are pre...
{ "annotation": [ "Concision", "Content_deletion" ], "instruction": "Remove the less important details in the results.", "annotator": "annotator_03" }
{ "annotation": [ "Content_deletion" ], "instruction": "Remove unnecessary details.", "annotator": "annotator_07" }
7VIguXRv9h
yPdniQMisK
0
[ { "text": "• Implicit Curricula: Examples are learned in a consistent order (Section 2)." }, { "text": "We show that the order in which examples are learned is consistent across runs, similar training methods, and similar architectures." }, { "text": "Furthermore, we show that it is possible to ...
[ { "text": "• Implicit Curricula: Examples are learned in a consistent order (Section 2)." }, { "text": "We show that the order in which examples are learned is consistent across runs, similar training methods, and similar architectures." }, { "text": "Furthermore, we show that it is possible to ...
tUjROCVSs0.LG_Cl6t7Bt.00
We develop our approach focusing on the shape space of discrete shells, where shapes are given by triangle meshes and the manifold is equipped with an elasticity-based metric. In principle, our approach is also applicable to other shape spaces such as manifolds of images, and we will include remarks on how we propose t...
We develop our approach focusing on the shape space of discrete shells, where shapes are given by triangle meshes and the manifold is equipped with an elasticity-based metric. In principle, our approach is also applicable to other shape spaces such as manifolds of images, and we will include remarks on how we propose t...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
tUjROCVSs0
LG_Cl6t7Bt
0
[ { "text": "We develop our approach focusing on the shape space of discrete shells, where shapes are given by triangle meshes and the manifold is equipped with an elasticity-based metric." }, { "text": "In principle, our approach is also applicable to other shape spaces such as manifolds of images, and w...
[ { "text": "We develop our approach focusing on the shape space of discrete shells, where shapes are given by triangle meshes and the manifold is equipped with an elasticity-based metric." }, { "text": "In principle, our approach is also applicable to other shape spaces such as manifolds of images, and w...
NAxP0iFmBr.5QBuYp8GH.03
We trained all the learning-based control policies in a mix of 60 instances of the Blank Environment containing 1 to 6 humans (uniformly sampled). Each training run has 500,000 steps of environment interactions. For fair evaluation, we report all of the mean metrics based on the data of a hundred 500-step episodes. In ...
We trained all the learning-based control policies in a mix of 28 instances of the BlankEnv uniformly containing 1 to 6 humans. Each training run has 700,000 steps (1,000 training iterations). For fair evaluation, we report all of the mean metrics based on the data from the latest hundred 500-steps episodes. In all exp...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Content_substitution", "Rewriting_light" ], "instruction": "", "annotator": "annotator_08" }
NAxP0iFmBr
5QBuYp8GH
3
[ { "text": "We trained all the learning-based control policies in a mix of 60 instances of the Blank Environment containing 1 to 6 humans (uniformly sampled)." }, { "text": "Each training run has 500,000 steps of environment interactions." }, { "text": "For fair evaluation, we report all of the m...
[ { "text": "We trained all the learning-based control policies in a mix of 28 instances of the BlankEnv uniformly containing 1 to 6 humans." }, { "text": "Each training run has 700,000 steps (1,000 training iterations)." }, { "text": "For fair evaluation, we report all of the mean metrics based o...
OV5v_wBMHk.bw4cqlpLh.06
Kullback-Leibler divergence) fails (Seguy et al., 2018). In addition, it does not require adversarial training and is, therefore, easier to optimize than adversarial-based measures (Kallus, 2020).
Kullback-Leibler divergence) fails (Seguy et al., 2018). In addition, the calculated discrepancy can be optimized with the traditional supervised learning framework instead of the adversarial learning framework, and is therefore easier to optimize than adversarial-based methods (Kallus, 2020).
{ "annotation": [ "Development", "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
OV5v_wBMHk
bw4cqlpLh
6
[ { "text": "Kullback-Leibler divergence) fails (Seguy et al., 2018)." }, { "text": "In addition, it does not require adversarial training and is, therefore, easier to optimize than adversarial-based measures (Kallus, 2020)." } ]
[ { "text": "Kullback-Leibler divergence) fails (Seguy et al., 2018)." }, { "text": "In addition, the calculated discrepancy can be optimized with the traditional supervised learning framework instead of the adversarial learning framework, and is therefore easier to optimize than adversarial-based methods...
MXi6uEx-hp.rdZfFcGyf9.03
Action Graph The input to our policy framework consists of the state s and a list C = [ c a 0 , ..., c a k ] of action representations for each action a i ∈ A . We build a fully connected action graph G with vertices corresponding to each action. If certain action relations are predefined in domain knowledge, we can r...
Action Graph : The input to our policy framework consists of the state s and a list C = [ c a 0 , ..., c a k ] of action representations for each action a i ∈ A . We build a fully connected action graph G with vertices corresponding to each available action. If certain action relations are predefined via domain knowledg...
{ "annotation": [ "Rewriting_light", "Content_substitution" ], "instruction": "", "annotator": "annotator_04" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Review this paragraph, when needed try to make it clearer.", "annotator": "annotator_01" }
MXi6uEx-hp
rdZfFcGyf9
3
[ { "text": "Action Graph The input to our policy framework consists of the state s and a list C =" }, { "text": "[ c a 0 , ..., c a k ] of action representations for each action a i ∈ A ." }, { "text": "We build a fully connected action graph G with vertices corresponding to each action." }, ...
[ { "text": "Action Graph : The input to our policy framework consists of the state s and a list C =" }, { "text": "[ c a 0 , ..., c a k ] of action representations for each action a i ∈ A ." }, { "text": "We build a fully connected action graph G with vertices corresponding to each available acti...
UlHNcByJV.W1RxpkrWx8.03
Adult dataset. The de-biased classifier achieves higher recall while maintaining predictive ability evidenced by its precision value. We note that trade-off between precision and recall can be regulated by changing the number of epochs and not resetting the weights for each batch. Our code generates a full log of perfo...
Adult dataset. The de-biased classifier achieves higher recall while maintaining predictive ability evidenced by its precision value. Our code generates a full log of performance metrics for the biased and de-biased classifiers for every run of the algorithm.
{ "annotation": [ "Content_deletion" ], "instruction": "Remove non-essential sentences.", "annotator": "annotator_07" }
null
UlHNcByJV
W1RxpkrWx8
3
[ { "text": "Adult dataset." }, { "text": "The de-biased classifier achieves higher recall while maintaining predictive ability evidenced by its precision value." }, { "text": "We note that trade-off between precision and recall can be regulated by changing the number of epochs and not resetting t...
[ { "text": "Adult dataset." }, { "text": "The de-biased classifier achieves higher recall while maintaining predictive ability evidenced by its precision value." }, { "text": "" }, { "text": "Our code generates a full log of performance metrics for the biased and de-biased classifiers for...
nCTSF9BQJ.DGhBYSP_sR.07
Recently, deep learning-based approaches to predicting mutational effects on protein binding have emerged. We group them into three categories: end-to-end models, pre-training-based models, and unsupervised models. End-to-end models take both mutant and wild-type(not mutated) protein structuresalong with other features...
Recently, deep learning-based approaches have emerged. We group them into three categories: endto-end models, pre-training-based models, and unsupervised models. End-to-end models directly predict the difference in binding free energy by taking both mutant and wild-type protein structures as input (Shan et al., 2022). ...
{ "annotation": [ "Content_deletion" ], "instruction": "Give me a shorter version of this:", "annotator": "annotator_01" }
{ "annotation": [ "Content_deletion", "Concision" ], "instruction": "Make this paragraph twice as short by making the content more concise and deleting unnecessary details.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
7
[ { "text": "Recently, deep learning-based approaches to predicting mutational effects on protein binding have emerged." }, { "text": "We group them into three categories: end-to-end models, pre-training-based models, and unsupervised models." }, { "text": "End-to-end models take both mutant and w...
[ { "text": "Recently, deep learning-based approaches have emerged." }, { "text": "We group them into three categories: endto-end models, pre-training-based models, and unsupervised models." }, { "text": "End-to-end models directly predict the difference in binding free energy by taking both mutan...
CVRUl83zah.I75TtW0V7.20
Previous uses of implicit differentation in meta-learning [iMAML] and neural architecture search [iDARTS] also involve solving a linear system, but their Y corresponds to the neural network parameters and thus usually has millions of entries. In contrast, in our setting we work with a much smaller Y , which for example...
Previous uses of implicit differentiation in meta-learning (Rajeswaran et al., 2019) and neural architecture search (Zhang et al., 2021b) also involve solving a linear system, but their Y corresponds to the neural network parameters and thus usually has millions of entries. In contrast, in our setting we work with a mu...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
20
[ { "text": "Previous uses of implicit differentation in meta-learning [iMAML] and neural architecture search [iDARTS] also involve solving a linear system, but their Y corresponds to the neural network parameters and thus usually has millions of entries." }, { "text": "In contrast, in our setting we work...
[ { "text": "Previous uses of implicit differentiation in meta-learning (Rajeswaran et al., 2019) and neural architecture search (Zhang et al., 2021b) also involve solving a linear system, but their Y corresponds to the neural network parameters and thus usually has millions of entries." }, { "text": "In ...
HJi5QRusB.3ELqS2sPA.00
Main results Table 1 displays the accuracy of each model on the test set of each dataset, after they were training on ImageNet-only or all datasets. Traffic Signs and MSCOCO are not used for training in either case, as they are reserved for evaluation. We propose to use the average (over the datasets) rank of each metho...
Main results Table 1 displays the accuracy of each model on the test set of each dataset, after they were trained on ImageNet-only or all datasets. Traffic Signs and MSCOCO are not used for training in either case, as they are reserved for evaluation. We propose to use the average (over the datasets) rank of each method...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
HJi5QRusB
3ELqS2sPA
0
[ { "text": "Main results Table 1 displays the accuracy of each model on the test set of each dataset, after they were training on ImageNet-only or all datasets." }, { "text": "Traffic Signs and MSCOCO are not used for training in either case, as they are reserved for evaluation." }, { "text": "We ...
[ { "text": "Main results Table 1 displays the accuracy of each model on the test set of each dataset, after they were trained on ImageNet-only or all datasets." }, { "text": "Traffic Signs and MSCOCO are not used for training in either case, as they are reserved for evaluation." }, { "text": "We p...
oS9Uk_Rig.NE2g1bZGme.00
To encourage evasion with fewer mutation turns. We further update the reward over a mutation period using the reward function defined as R t = R t − 1 − nσ where the value for R t is either given by R l or R s depending the environment used, n is the number of mutation turns within current episode, and σ is the constant...
To encourage evasion with fewer mutation turns. We further update the reward over a mutation period using the reward function defined as R = R t − σt where the value for R t is either given by R l or R s at step t inside one episode, and σ is the constant step penalty, which is set to be 0.1 in our environment.
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
oS9Uk_Rig
NE2g1bZGme
0
[ { "text": "To encourage evasion with fewer mutation turns." }, { "text": "We further update the reward over a mutation period using the reward function defined as R t =" }, { "text": "R t − 1 − nσ where the value for R t is either given by R l or R s depending the environment used, n is the numbe...
[ { "text": "To encourage evasion with fewer mutation turns." }, { "text": "We further update the reward over a mutation period using the reward function defined as R =" }, { "text": "R t − σt where the value for R t is either given by R l or R s at step t inside one episode, and σ is the constant ...
SyF8k7bCW.HytIRPamf.04
We adopted the idea proposed in Chen et al. They aim to build a model for supervised SNLI task (Bowman et al., 2015), and the model concatenates the outputs from a global mean-pooling function and a global max-pooling function to serve as a sentence representation, and shows a performance boost on the SNLI dataset. Bes...
We followed the idea proposed in Chen et al. They built a model for supervised SNLI task (Bowman et al., 2015) that concatenates the outputs from a global mean pooling and a global max pooling to serve as a sentence representation, and showed a performance boost on the SNLI dataset. Also, Conneau et al. (2017) found th...
{ "annotation": [ "Rewriting_light" ], "instruction": "Rewrite this paragraph using more formal language", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Rephrase the text", "annotator": "annotator_06" }
SyF8k7bCW
HytIRPamf
4
[ { "text": "We adopted the idea proposed in Chen et al." }, { "text": "They aim to build a model for supervised SNLI task (Bowman et al., 2015), and the model concatenates the outputs from a global mean-pooling function and a global max-pooling function to serve as a sentence representation, and shows a ...
[ { "text": "We followed the idea proposed in Chen et al." }, { "text": "They built a model for supervised SNLI task (Bowman et al., 2015) that concatenates the outputs from a global mean pooling and a global max pooling to serve as a sentence representation, and showed a performance boost on the SNLI dat...
nCTSF9BQJ.DGhBYSP_sR.26
In this work, we propose the rotamer density estimator (RDE) that estimates the distribution of rotamers. We find the entropy of the estimated distributions and the unsupervised representations produced by the RDE enable more accurate prediction of binding ∆∆ G . One of the major limitations of this work is that it ca...
In this work, we introduce the Rotamer Density Estimator (RDE) which estimates the distribution of rotamers for protein sidechains. We demonstrate that RDE leads to improved accuracy in predicting binding ∆∆ G compared to other methods. One limitation of RDE is the inability to model backbone flexibility directly which...
{ "annotation": [ "Rewriting_heavy", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
nCTSF9BQJ
DGhBYSP_sR
26
[ { "text": "The rotamer density estimator (RDE) is a generative model for sidechain structures. It can be used to predict sidechain conformations by sampling from the estimated distribution." }, { "text": "We use the RDE to sample sidechain torsional angles (rotamers) for structures with sidechains rem...
[ { "text": "RDE is a generative model for protein sidechain structures, which can predict sidechain conformations by sampling from the estimated distribution." }, { "text": "We use RDE to sample sidechain torsional angles (rotamers) for structures with 10% sidechains removed in our test split of PDB-REDO...
u9NaukzyJ-.hh0KECXQLv.03
Reminders are among the most common technological interventions to improve adherence to medication [4,27,28]. Reminders can take many forms, including interventions of caregivers through video and voice calls [29] and text messages [30], smart pill boxes [10], and computer applications [13]. Focusing on systems that vi...
Reminders are among the most common technological interventions to improve adherence to medication [4,26,27]. Reminders can take many forms, including interventions of caregivers through video and voice calls [28] and text messages [29], smart pill boxes [10], and computer applications [13]. Focusing on systems that vi...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
u9NaukzyJ-
hh0KECXQLv
3
[ { "text": "Reminders are among the most common technological interventions to improve adherence to medication [4,27,28]." }, { "text": "Reminders can take many forms, including interventions of caregivers through video and voice calls [29] and text messages [30], smart pill boxes [10], and computer appl...
[ { "text": "Reminders are among the most common technological interventions to improve adherence to medication [4,26,27]." }, { "text": "Reminders can take many forms, including interventions of caregivers through video and voice calls [28] and text messages [29], smart pill boxes [10], and computer appl...
SRquLaHRM4.vI2x5N-YHC.01
Optimal Transport The Optimal Transport [30] is initially introduced to solve the problem of howto reduce the cost when moving several items simultaneously. Recently, OT theory has drawn wideattention in the machine learning and computer vision community by comparing distributions readilyavailable to them under the fo...
Optimal Transport The Optimal Transport [30] is initially introduced to solve the problem of howto reduce the cost when moving simultaneously several items. Recently, OT theory has drawn wideattention in the machine learning and computer vision community by comparing distributions readilyavailable to them under the for...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium", "Unusable" ], "instruction": "I want to improve the last sentence.", "annotator": "annotator_09" }
SRquLaHRM4
vI2x5N-YHC
1
[ { "text": "Optimal Transport The Optimal Transport [30] is initially introduced to solve the problem of howto reduce the cost when moving several items simultaneously." }, { "text": "Recently, OT theory has drawn wideattention in the machine learning and computer vision community by comparing distribut...
[ { "text": "Optimal Transport The Optimal Transport [30] is initially introduced to solve the problem of howto reduce the cost when moving simultaneously several items." }, { "text": "Recently, OT theory has drawn wideattention in the machine learning and computer vision community by comparing distributi...
x8CcXI4Ei.4yg90qT46L.01
Generalization of meta learning. The excess risk , as a metric of generalization ability of gradientbased meta learning has been analyzed recently [3,4,9,14,18,42]. The generalization of meta learninghas been studied in [27] in the context of mixed linear regression, where the focus is on investigatingwhen abundant tas...
Generalization of meta learning. The excess risk , as a metric of generalization ability of nestedmeta learning has been analyzed recently [3,4,9,13,17,41]. Generalization performance has also been studied in a relevant but different setting - representation based meta learning [12,15]. Informationtheoretical generaliz...
{ "annotation": [ "Content_deletion" ], "instruction": "Remove a redundant sentence. Use clearer expression.", "annotator": "annotator_08" }
{ "annotation": [ "Content_deletion" ], "instruction": "Improve the English and remove the second sentence.", "annotator": "annotator_02" }
x8CcXI4Ei
4yg90qT46L
1
[ { "text": "Generalization of meta learning." }, { "text": "The excess risk , as a metric of generalization ability of gradientbased meta learning has been analyzed recently [3,4,9,14,18,42]." }, { "text": "The generalization of meta learninghas been studied in [27] in the context of mixed linear...
[ { "text": "Generalization of meta learning." }, { "text": "The excess risk , as a metric of generalization ability of nestedmeta learning has been analyzed recently [3,4,9,13,17,41]." }, { "text": "" }, { "text": "Generalization performance has also been studied in a relevant but differe...
I6_1TEti_.kbRnfqVqh.00
Consistency layers. Approaches ensuring consistency by embedding the constraints into the predictive layer as in SPLs include MultiplexNet [37] and HMCCN [31]. MultiplexNet is able to encode only constraints in disjunctive normal form, which is problematic for generality (D4) and efficiency (D6) as neuro-symbolic SOP ta...
Consistency layers. Approaches ensuring consistency by embedding the constraints into the predictive layer as in SPLs include MultiplexNet [38] and HMCCN [32]. MultiplexNet is able to encode only constraints in disjunctive normal form, which is problematic for generality (D4) and efficiency (D6) as neuro-symbolic SOP ta...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
I6_1TEti_
kbRnfqVqh
0
[ { "text": "Consistency layers." }, { "text": "Approaches ensuring consistency by embedding the constraints into the predictive layer as in SPLs include MultiplexNet [37] and HMCCN [31]." }, { "text": "MultiplexNet is able to encode only constraints in disjunctive normal form, which is problemati...
[ { "text": "Consistency layers." }, { "text": "Approaches ensuring consistency by embedding the constraints into the predictive layer as in SPLs include MultiplexNet [38] and HMCCN [32]." }, { "text": "MultiplexNet is able to encode only constraints in disjunctive normal form, which is problemati...
S1BhqsOsB.1mgtDFRDc.01
Unimodal losses such as mean squared error are not very useful when predicting high dimensional data, due to the stochasticity of the output space. Researchers have tried to handle such stochasticity using latent variable models (Loehlin, 1987) or autoregressive prediction of the output pixel space, which involves samp...
Unimodal losses such as mean squared error are not very useful when predicting high dimensional data, due to the stochasticity of the output space. Researchers have tried to handle such stochasticity using latent variable models (Loehlin, 1987) or autoregressive prediction of the output pixel space, which involves samp...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
S1BhqsOsB
1mgtDFRDc
1
[ { "text": "Unimodal losses such as mean squared error are not very useful when predicting high dimensional data, due to the stochasticity of the output space." }, { "text": "Researchers have tried to handle such stochasticity using latent variable models (Loehlin, 1987) or autoregressive prediction of t...
[ { "text": "Unimodal losses such as mean squared error are not very useful when predicting high dimensional data, due to the stochasticity of the output space." }, { "text": "Researchers have tried to handle such stochasticity using latent variable models (Loehlin, 1987) or autoregressive prediction of t...
S1CMuZFor.H1NchtnoS.01
• The NTK is defined using the gradient of the DNN output with respect to weight parameter space . In contrast, the linear approximation Lemma in this paper) is defined using the gradient of the DNN output with respect to input parameter space . In other words, the variables to be differentiated are different. • Although...
• The NTK is defined using the gradient of the DNN output with respect to weight parameter space . In contrast, the linear approximation (Lemma 3 in this paper) is defined using the gradient of the DNN output with respect to input parameter space . In other words, the variables to be differentiated are different. • The r...
{ "annotation": [ "Content_deletion" ], "instruction": "Please exclude the content that seems unnecessary.", "annotator": "annotator_09" }
{ "annotation": [ "Content_deletion" ], "instruction": "Remove the second item of the list.", "annotator": "annotator_07" }
S1CMuZFor
H1NchtnoS
1
[ { "text": "• The NTK is defined using the gradient of the DNN output with respect to weight parameter space ." }, { "text": "In contrast, the linear approximation Lemma in this paper) is defined using the gradient of the DNN output with respect to input parameter space ." }, { "text": "In other wo...
[ { "text": "• The NTK is defined using the gradient of the DNN output with respect to weight parameter space ." }, { "text": "In contrast, the linear approximation (Lemma 3 in this paper) is defined using the gradient of the DNN output with respect to input parameter space ." }, { "text": "In other...
CVRUl83zah.I75TtW0V7.22
Dataset The input of every example is represented by a 64 × 4 matrix, where each row is a one-hot vector that is sampled i.i.d. from the multinomial distribution over the equally weighted 4 classes. We generate the target matrix of size 64 × 64 by counting the occurrences for each unique input class sequentially fr...
MSE as pairwise loss for DSPN/iDSPN and cross-entropy as pairwise loss for the other models. The baselines perform worse with MSE as pairwise loss. For each example, the input multiset has a size of 64 with 4-dimensional elements corresponding toclasses. This is represented as a 64 × 4 matrix where each row is a one-ho...
{ "annotation": [ "Rewriting_heavy", "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
22
[ { "text": "Dataset The input of every example is represented by a 64 × 4 matrix, where each row is a one-hot vector that is sampled i.i.d." }, { "text": "from the multinomial distribution over the equally weighted 4 classes." }, { "text": "We generate the target matrix of size 64 × 64 by cou...
[ { "text": "MSE as pairwise loss for DSPN/iDSPN and cross-entropy as pairwise loss for the other models. The baselines perform worse with MSE as pairwise loss. For each example, the input multiset has a size of 64 with 4-dimensional elements corresponding toclasses. This is represented as a 64 × 4 matrix where e...
nCTSF9BQJ.DGhBYSP_sR.05
Our method provides a solution to the aforementioned challenges. Training the rotamer density estimator requires only protein structures. Thus, itis an unsupervised learner of the effectof mutations on binding and it alleviates the difficulty rising from the scarcity of annotated mutation data. In addition, our metho...
Our method is an attempt to address the aforementioned challenges. The Rotamer Density Estimator is trained solely on protein structures, not requiring other labels, making it an unsupervised learner of the mutation effect on protein-protein interaction. This feature mitigates the challenge posed by the scarcity of ann...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Give me a more formal version of the following paragraph.", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite this paragraph in a more formal and academic way.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
5
[ { "text": "Our method provides a solution to the aforementioned challenges." }, { "text": "Training the rotamer density estimator requires only protein structures. Thus, itis an unsupervised learner of the effectof mutations on binding and it alleviates the difficulty rising from the scarcity of annot...
[ { "text": "Our method is an attempt to address the aforementioned challenges." }, { "text": "The Rotamer Density Estimator is trained solely on protein structures, not requiring other labels, making it an unsupervised learner of the mutation effect on protein-protein interaction. This feature mitigates ...
UlHNcByJV.W1RxpkrWx8.00
In this section we describe our main contribution, Adversarial Optimism (AdOpt) in detail. At a high level, AdOpt uses two classifiers. The first one is a classifier trained on all the accepted data thus far, without any de-biasing. We refer to this as the “biased” classifier hereafter. The second one is our adversari...
In this section we describe our main contribution, Adversarial Optimism (AdOpt) in detail. AdOpt uses two classifiers. The first one is a “biased” classifier trained on all the accepted data thus far. The second one is our adversarially de-biased classifier. AdOpt then proceeds as follows:
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
null
UlHNcByJV
W1RxpkrWx8
0
[ { "text": "In this section we describe our main contribution, Adversarial Optimism (AdOpt) in detail." }, { "text": "At a high level, AdOpt uses two classifiers." }, { "text": "The first one is a classifier trained on all the accepted data thus far, without any de-biasing." }, { "text":...
[ { "text": "In this section we describe our main contribution, Adversarial Optimism (AdOpt) in detail." }, { "text": "AdOpt uses two classifiers." }, { "text": "The first one is a “biased” classifier trained on all the accepted data thus far." }, { "text": "" }, { "text": "The sec...
usz0l2mwO.5ie3V0GP-.03
Besides improving fine-tuning on low-resource data by removing irrelevant features, we expect VIB to improve on out-of-domain data because it removes redundant features. In particular, annotation artifacts in a specific dataset are known to create shortcut features, which are superficial cues correlated with a label (G...
Besides improving fine-tuning on low-resource data by removing irrelevant features, we expect VIB to improve on out-of-domain data because it removes redundant features. In particular, annotation artifacts create shortcut features, which are superficial cues correlated with a label (Gururangan et al., 2018; Poliak et a...
{ "annotation": [ "Development", "Rewriting_heavy" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Concision", "Development" ], "instruction": "", "annotator": "annotator_07" }
usz0l2mwO
5ie3V0GP-
3
[ { "text": "Besides improving fine-tuning on low-resource data by removing irrelevant features, we expect VIB to improve on out-of-domain data because it removes redundant features." }, { "text": "In particular, annotation artifacts in a specific dataset are known to create shortcut features, which are s...
[ { "text": "Besides improving fine-tuning on low-resource data by removing irrelevant features, we expect VIB to improve on out-of-domain data because it removes redundant features." }, { "text": "In particular, annotation artifacts create shortcut features, which are superficial cues correlated with a l...
33RNh69fYq.kMvWVl725x.00
Feature reconstruction . A linear projection is first applied to these feature tokens to reduce C org to a smaller channel, C . Then these tokens are processed by NME and LQD. The learnable position embeddings [12, 13] are added in the attention module to inform the spatial information. Afterward, another linear projec...
Feature reconstruction . The feature map, f org , is first tokenized to H × W feature tokens, followedby a linear projection to reduce C org to a smaller channel, C . Then these tokens are processed by NME and LQD. The learnable position embeddings [14, 15] are added in attention modules to informthe spatial informatio...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
33RNh69fYq
kMvWVl725x
0
[ { "text": "Feature reconstruction ." }, { "text": "A linear projection is first applied to these feature tokens to reduce C org to a smaller channel, C ." }, { "text": "Then these tokens are processed by NME and LQD." }, { "text": "The learnable position embeddings [12, 13] are added in ...
[ { "text": "Feature reconstruction ." }, { "text": "The feature map, f org , is first tokenized to H × W feature tokens, followedby a linear projection to reduce C org to a smaller channel, C ." }, { "text": "Then these tokens are processed by NME and LQD." }, { "text": "The learnable pos...
atxti8SVk.3K9AmPwALM.04
Metric learning develops a feature representation based on data grouping and separation cues. Our method (Fig. 3) segments an image by learning a pixel-wise embedding with a contrastive loss between pixels and segments. We start from defining two disjoint sets – positive and negative segments (exemplars) with respect to...
Metric learning develops a feature representation based on data grouping and separation cues. Our method (Fig. 3) segments an image by learning a pixel-wise embedding with a contrastive loss between pixels and segments: For each pixel i , we learn a latent feature φ ( i ) such that i is close to its positive segments (...
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph heavily more concise in the explanations made.", "annotator": "annotator_03" }
{ "annotation": [ "Concision" ], "instruction": "Don't give to much details about the method of learning, just keep the main idea.", "annotator": "annotator_07" }
atxti8SVk
3K9AmPwALM
4
[ { "text": "Metric learning develops a feature representation based on data grouping and separation cues." }, { "text": "Our method (Fig. 3) segments an image by learning a pixel-wise embedding with a contrastive loss between pixels and segments. We start from defining two disjoint sets – positive and neg...
[ { "text": "Metric learning develops a feature representation based on data grouping and separation cues." }, { "text": "Our method (Fig. 3) segments an image by learning a pixel-wise embedding with a contrastive loss between pixels and segments: For each pixel i , we learn a latent feature φ ( i ) such ...
nCTSF9BQJ.DGhBYSP_sR.23
Statistical Significance To show that there is a statistically significant relationship between the entropy estimated by the RDE and the experimental ∆∆ G values, we conduct linear regression analysis using the RDE-Linear model defined in Eq.9. The linear model contains 7 coefficients and 1 bias: w bound W L , w bound ...
Statistical Significance To demonstrate a statistically significant relationship between the entropy estimated by RDE and experimental ∆∆ G values, we conduct linear regression analysis using the RDE-Linear model defined in Eq. The linear model consists of seven coefficients and one bias term: w bound W L , w bound W R...
{ "annotation": [ "Rewriting_medium", "Concision" ], "instruction": "Simplify the explanation of the merged w unbnd M R and w unbnd W R.", "annotator": "annotator_03" }
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Concise the penultimate sentence. Improve the English in this paragraph.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
23
[ { "text": " Shan et al." }, { "text": "(2022) identifies 5 single-point mutations on a human antibody against SARS-CoV-2 that enhance neutralization (effectiveness)." }, { "text": "There are 494 possible single-point mutations on the heavy chain CDR region of the antibody in total." }, { ...
[ { "text": "In Shan et al." }, { "text": ", the authors report five single-point mutations on a human antibody against SARS-CoV-2 that enhance neutralization effectiveness." }, { "text": "These mutations are among the 494 possible single-point mutations on the heavy chain CDR region of the antibo...
OzYyHKPyj7.O9Mk1uqXra.04
For all tasks, we see that our RNS-RNN (denoted NS+S+U) attains near-optimal cross-entropy (within 0.05 nats) on the validation set. All stack models effectively solve the deterministic marked reversal and Dyck tasks, although we note that on marked reversal the NS models do not generalize well on held-out lengths. Our ...
For all tasks, we see that our RNS-RNN (denoted NS+S+U) attains near-optimal cross-entropy (within 0.05 nats) on the validation set. All stack models effectively solve the deterministic marked reversal and Dyck tasks, although we note that on marked reversal the NS models do not generalize well on held-out lengths. Our ...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
OzYyHKPyj7
O9Mk1uqXra
4
[ { "text": "For all tasks, we see that our RNS-RNN (denoted NS+S+U) attains near-optimal cross-entropy (within 0.05 nats) on the validation set." }, { "text": "All stack models effectively solve the deterministic marked reversal and Dyck tasks, although we note that on marked reversal the NS models do not...
[ { "text": "For all tasks, we see that our RNS-RNN (denoted NS+S+U) attains near-optimal cross-entropy (within 0.05 nats) on the validation set." }, { "text": "All stack models effectively solve the deterministic marked reversal and Dyck tasks, although we note that on marked reversal the NS models do not...
8jLtSbSLbC.T9bU3kyKC4.00
Mathematically, any irreversible mapping y = f(args...) can be trivially transformed to its reversible form y += f(args...) or y (cid:89) = f(args...) ( (cid:89) is the bit-wise XOR ), where y is a pre-emptied variable. But in numeric computing with finite precision, this is not always true. The reversibility of arithme...
Mathematically, any irreversible mapping y = f(args...) can be trivially transformed to its reversible form y += f(args...) or y (cid:89) = f(args...) ( (cid:89) is the bit-wise XOR ), where y is a pre-emptied variable. But in numeric computing with finite precision, this is not always true. The reversibility of arithme...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_04" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
8jLtSbSLbC
T9bU3kyKC4
0
[ { "text": "Mathematically, any irreversible mapping y = f(args...) can be trivially transformed to its reversible form y += f(args...) or y (cid:89) = f(args...)" }, { "text": "( (cid:89) is the bit-wise XOR ), where y is a pre-emptied variable." }, { "text": "But in numeric computing with finite...
[ { "text": "Mathematically, any irreversible mapping y = f(args...) can be trivially transformed to its reversible form y += f(args...) or y (cid:89) = f(args...)" }, { "text": "( (cid:89) is the bit-wise XOR ), where y is a pre-emptied variable." }, { "text": "But in numeric computing with finite...
TFoRhVCpnb.yqo5NaW74.00
Image semantic segmentation is the task of pixel-level semantic label allocation for recognizing objects in an image. The development of Deep Neural Networks (DNNs) has promoted the rapid development of the semantic segmentation task [6, 58, 19] in recent years. However, training sucha fully-supervised semantic segment...
Image Semantic Segmentation is the task of pixel-level semantic label allocation for recognizing objects in an image. The development of Deep Neural Networks (DNNs) has promoted the rapid development of the semantic segmentation task [7, 20, 63] in recent years. However, training such a Fully-Supervised Semantic Segmen...
{ "annotation": [ "Rewriting_light" ], "instruction": "Use uppercases properly.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Use capital letters at the beginning of every word in the names of segmentation methods.", "annotator": "annotator_07" }
TFoRhVCpnb
yqo5NaW74
0
[ { "text": "Image semantic segmentation is the task of pixel-level semantic label allocation for recognizing objects in an image." }, { "text": "The development of Deep Neural Networks (DNNs) has promoted the rapid development of the semantic segmentation task [6, 58, 19] in recent years." }, { "...
[ { "text": "Image Semantic Segmentation is the task of pixel-level semantic label allocation for recognizing objects in an image." }, { "text": "The development of Deep Neural Networks (DNNs) has promoted the rapid development of the semantic segmentation task [7, 20, 63] in recent years." }, { "...
fDUdAYCQqZy.0cNiGAHFml.03
In real-world problems, the dynamics are often nearly deterministic. We leverage this assumption and remove the expectation over the next states in the operator, which leads to Expectile V -Learning, where we train the value network to minimize the following loss:
In real-world problems, the dynamics are often nearly deterministic. We leverage this assumption We consider the case where the dynamics are nearly-deterministic like robotic applications, and we remove the expectation over the next states in the operator. This leads to a practical algorithm, Expectile V -Learning, whe...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
fDUdAYCQqZy
0cNiGAHFml
3
[ { "text": "In real-world problems, the dynamics are often nearly deterministic." }, { "text": "We leverage this assumption and remove the expectation over the next states in the operator, which leads to Expectile V -Learning, where we train the value network to minimize the following loss:" } ]
[ { "text": "In real-world problems, the dynamics are often nearly deterministic." }, { "text": "We leverage this assumption We consider the case where the dynamics are nearly-deterministic like robotic applications, and we remove the expectation over the next states in the operator. This leads to a pract...
Byyb66j52G.hR5KKRfhQm.12
Interrupted augmentation . We wonder how generalization would change after regularizationstopped. Thus, we stop the DA during training, such as (0, 5), (0, 15). When we compare the graph Figure 2(d) and Figure 2(e), the generalization performance in Figure 2(d) rapidly decreasesafter interrupted on both (0, 5) and (0, ...
Interrupted augmentation . To determine how generalization would change after regularization stopped, we stop the DA during training, such as (0, 5), (0, 15). Jumper with easybg mode rapidly lost generalization performance (after interruption at both (0, 5) and (0, 15)) (Figure 2(d)), whereas Jumper with easy mode do n...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make sentences concise, add missing spaces.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_medium", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
Byyb66j52G
hR5KKRfhQm
12
[ { "text": "Interrupted augmentation ." }, { "text": "We wonder how generalization would change after regularizationstopped. Thus, we stop the DA during training, such as (0, 5), (0, 15)." }, { "text": "When we compare the graph Figure 2(d) and Figure 2(e), the generalization performance in Figur...
[ { "text": "Interrupted augmentation ." }, { "text": "To determine how generalization would change after regularization stopped, we stop the DA during training, such as (0, 5), (0, 15)." }, { "text": "Jumper with easybg mode rapidly lost generalization performance (after interruption at both (0, ...
SyF8k7bCW.HytIRPamf.03
While an RNN decoder, is designed to produce thenext word, a CNN decoder is free to find any relevant local patterns within the target sequences. The generation process is only conditioned on the sentence representation. Although the word order information is implicitly encoded in the CNN decoder, it is not emphasized ...
In our model, the CNN decoder predicts all words at once during training, which is different from autoregressive decoders, and we call it a predict-all-words CNN decoder. We want to compare the performance of the predict-all-words decoders and that of the autoregressive decoders separate from the RNN/CNN distinction, t...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Please rephrase the entire paragraph for better readability.", "annotator": "annotator_09" }
SyF8k7bCW
HytIRPamf
3
[ { "text": "While an RNN decoder, is designed to produce thenext word, a CNN decoder is free to find any relevant local patterns within the target sequences. The generation process is only conditioned on the sentence representation." }, { "text": "Although the word order information is implicitly encoded...
[ { "text": "In our model, the CNN decoder predicts all words at once during training, which is different from autoregressive decoders, and we call it a predict-all-words CNN decoder." }, { "text": "We want to compare the performance of the predict-all-words decoders and that of the autoregressive decoder...
Mu-tqfqX-.6NSudk3nD.03
As a quick example of selection collider bias, if we were to ask you the gender of some random person born in 1801,and one in 1999, you may toss a coin to determine your answer, asbirth date and gender are unconditionally independent in the real world. However, if instead we where to ask about the gender of a person bo...
If someone was to ask you the gender of a random person born in 1801, you may toss a coin to determine your answer, as gender at birth is invariant to time. However, if instead someone was to ask about the gender of a person born in 1801 on a random Wikipedia page, you may then inform your guess with the knowledge that...
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite this paragraph to improve its clarity.", "annotator": "annotator_02" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "This paragraph is confusing, rewrite to make it clearer and more readable.", "annotator": "annotator_07" }
Mu-tqfqX-
6NSudk3nD
3
[ { "text": "As a quick example of selection collider bias, if we were to ask you the gender of some random person born in 1801,and one in 1999, you may toss a coin to determine your answer, asbirth date and gender are unconditionally independent in the real world. However, if instead we where to ask about the ge...
[ { "text": "If someone was to ask you the gender of a random person born in 1801, you may toss a coin to determine your answer, as gender at birth is invariant to time. However, if instead someone was to ask about the gender of a person born in 1801 on a random Wikipedia page, you may then inform your guess with...
MXi6uEx-hp.rdZfFcGyf9.17
We validate the choice of using graph attention network as the relational architecture. In Figure 7, we compare GAT against a graph convolutional network (GCN) (Kipf & Welling, 2016) in the action graph of AGILE. We observe that for thesimple grid world and RecSim tasks, GCN achieves optimal performance. This is becau...
We validate the choice of using graph attention network as the relational architecture. In Figure 7, we compare GAT against a graph convolutional network (GCN) (Kipf & Welling, 2016) to act over AGILE’s action graph. We observe that GCN achieves optimal performance for the grid world and RecSys tasks. GCN can learn sim...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the middle part of this paragraph and improve the English in the remainder", "annotator": "annotator_10" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Make this paragraph a bit more concise.", "annotator": "annotator_03" }
MXi6uEx-hp
rdZfFcGyf9
17
[ { "text": "We validate the choice of using graph attention network as the relational architecture." }, { "text": "In Figure 7, we compare GAT against a graph convolutional network (GCN) (Kipf & Welling, 2016) in the action graph of AGILE." }, { "text": "We observe that for thesimple grid world ...
[ { "text": "We validate the choice of using graph attention network as the relational architecture." }, { "text": "In Figure 7, we compare GAT against a graph convolutional network (GCN) (Kipf & Welling, 2016) to act over AGILE’s action graph." }, { "text": "We observe that GCN achieves optimal p...
wnT56xFToh.QBiYZ6j1pM.00
CNNs to tag images of road scenes from 52 possible labels. In the medical domain, (Wang et al., 2017) present a chest X-ray dataset in which one image may contain multiple abnormalities. Multilabel classification is also prominent in natural language processing (Nam et al., 2014). Our proposed method is therefore releva...
CNNs to tag images of road scenes from 52 possible labels. In the medical domain, (Wang et al., 2017) present a chest X-ray dataset in which one image may contain multiple abnormalities. Multilabel classification is also prominent in natural language processing (Nam et al., 2014). Recent work also provides a theoretical...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
wnT56xFToh
QBiYZ6j1pM
0
[ { "text": "CNNs to tag images of road scenes from 52 possible labels." }, { "text": "In the medical domain, (Wang et al., 2017) present a chest X-ray dataset in which one image may contain multiple abnormalities." }, { "text": "Multilabel classification is also prominent in natural language proce...
[ { "text": "CNNs to tag images of road scenes from 52 possible labels." }, { "text": "In the medical domain, (Wang et al., 2017) present a chest X-ray dataset in which one image may contain multiple abnormalities." }, { "text": "Multilabel classification is also prominent in natural language proce...
S1BhqsOsB.1mgtDFRDc.00
Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint? Humans excel at this task. Our ability to imagine and fill in missing visual informa...
Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint? Humans excel at this task. Our ability to imagine and fill in missing information is...
{ "annotation": [ "Development", "Concision" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Shorten this paragraph while making it more precise, mainly on the sentence about prediction losses.", "annotator": "annotator_07" }
S1BhqsOsB
1mgtDFRDc
0
[ { "text": "Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction." }, { "text": "One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint?" }, { "text": "Humans excel at th...
[ { "text": "Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction." }, { "text": "One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint?" }, { "text": "Humans excel at th...
sIqSoZ9KiO.KLlOZMoJ9G.02
Summary. This paper introduced a novel neural network architecture, specifically suitable for modeling image generators (decoders) in the context of deep generative modeling. The proposed SDN layer was analyzed in the context of: (a) a complex hierarchical VAE model, where we obtained state-of-the-art performance in no...
Summary. This paper introduced a novel neural layer suitable for deep neural networks that produce images – image generators (decoders). Proposed SDN improves upon convolutional networks in terms of incorporating the prior on spatial coherence and modeling of long-range spatial dependencies. SDN was analyzed in the con...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
sIqSoZ9KiO
KLlOZMoJ9G
2
[ { "text": "The competing methods were evaluated on 3D-Shapes (Burgess & Kim, 2018), a synthetic dataset containing 64 × 64 images of scenes with rooms and objects of various colors and shapes." }, { "text": "Following related literature (Locatello et al., 2019), there was no training-test split, so the ...
[ { "text": "The competing methods were evaluated on 3D-Shapes (Burgess & Kim, 2018), a synthetic dataset containing 64 × 64 images of scenes with rooms and objects of various colors and shapes." }, { "text": "Following related literature (Locatello et al., 2019), there was no training-test split, so the ...
416QeRWm9c.EIldqblSQa.00
Gansbeke et al., 2021; Yang et al., 2020). However, existing analysis typically assumes that the pre-training data distribution is the same as the target distribution, while the difference between the two is the critical reason for the trade-off focused in this work. Thus, our work needs to propose new analysis approac...
Gansbeke et al., 2021; Yang et al., 2020). However, existing analysis typically assumes that the pre-training data distribution is the same as the target distribution, while the difference between the two is the critical reason for the trade-off focused in this work. Thus, our work proposes new analysis approaches. Rec...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the last sentence to give it a more modest town, and change some formulations to improve the flow of the paragraph.", "annotator": "annotator_04" }
{ "annotation": [ "Development", "Rewriting_light" ], "instruction": "", "annotator": "annotator_07" }
416QeRWm9c
EIldqblSQa
0
[ { "text": "Gansbeke et al., 2021; Yang et al., 2020)." }, { "text": "However, existing analysis typically assumes that the pre-training data distribution is the same as the target distribution, while the difference between the two is the critical reason for the trade-off focused in this work." }, { ...
[ { "text": "Gansbeke et al., 2021; Yang et al., 2020)." }, { "text": "However, existing analysis typically assumes that the pre-training data distribution is the same as the target distribution, while the difference between the two is the critical reason for the trade-off focused in this work." }, { ...
wSf7BpyxTb.ZCPjX5OcL.01
In this section, we further strengthour algorithm with SPIDER variance reduction technique [11], a variantof SARAH [31, 32], as stated in algorithm 3. Indeed, we use a large batchsize of b in every q iterations and use small batchsizes of b x and b y for the rest. We prove that SAPD+ using variance reduction, i.e., wit...
In this section, we equip SAPD+ with SPIDER variance reduction technique [12], a variant of SARAH [32, 33]. More precisely, for inexactly solving SCSC subproblems given in (4), we propose using VR-SAPD as stated in algorithm 3. Note VR-SAPD employs a large batchsize of b in every q iterations and use small batchsizes o...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
wSf7BpyxTb
ZCPjX5OcL
1
[ { "text": "In this section, we further strengthour algorithm with SPIDER variance reduction technique [11], a variantof SARAH [31, 32], as stated in algorithm 3." }, { "text": "Indeed, we use a large batchsize of b in every q iterations and use small batchsizes of b x and b y for the rest." }, { ...
[ { "text": "In this section, we equip SAPD+ with SPIDER variance reduction technique [12], a variant of SARAH [32, 33]. More precisely, for inexactly solving SCSC subproblems given in (4), we propose using VR-SAPD as stated in algorithm 3." }, { "text": "Note VR-SAPD employs a large batchsize of b in eve...
OCBr-AN0r.k4RhAVo6Ik.00
In general, the model parameter is fixed and the attacker can only provide crafted examples to foolthe model. Based on the amount of information the attacker can access, the adversarial attack canbe categorized into three classes. (1) White-box attack . The attacker has full accessto the system,including parameters and...
Note the adversarial attack happened in the testing stage, and the attackers cannot manipulate theforecasting model or its output. On the benign testing set, the forecasting model can perform well. Based on the amount of information the attacker can access in the testing stage, the adversarial attackcan be categorized ...
{ "annotation": [ "Development", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
OCBr-AN0r
k4RhAVo6Ik
0
[ { "text": "In general, the model parameter is fixed and the attacker can only provide crafted examples to foolthe model." }, { "text": "Based on the amount of information the attacker can access, the adversarial attack canbe categorized into three classes." }, { "text": "(1) White-box attack ." ...
[ { "text": "Note the adversarial attack happened in the testing stage, and the attackers cannot manipulate theforecasting model or its output. On the benign testing set, the forecasting model can perform well." }, { "text": "Based on the amount of information the attacker can access in the testing stage,...
v8Vdrwfrg.Hrx_LZTUq.00
We propose a new pruning approach to obtain sparse neural networks with state-of-the-art test accuracy. Our compression scheme uses a new saliency criterion that identifies important weights in the network throughout training to propose candidate masks. As a key feature, our algorithm not only evolves the pruned sparse ...
We propose a new pruning approach to obtain sparse neural networks with state-of-the-art test accuracy. Our compression scheme uses a new saliency criterion that identifies important weights in the network throughout training to propose candidate masks. As a key feature, our algorithm not only evolves the pruned sparse ...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
v8Vdrwfrg
Hrx_LZTUq
0
[ { "text": "We propose a new pruning approach to obtain sparse neural networks with state-of-the-art test accuracy." }, { "text": "Our compression scheme uses a new saliency criterion that identifies important weights in the network throughout training to propose candidate masks." }, { "text": "As...
[ { "text": "We propose a new pruning approach to obtain sparse neural networks with state-of-the-art test accuracy." }, { "text": "Our compression scheme uses a new saliency criterion that identifies important weights in the network throughout training to propose candidate masks." }, { "text": "As...
MXi6uEx-hp.rdZfFcGyf9.06
Listwise RL (CDQN) : To solve the combinatorial action space problem of listwise actions, we follow the Cascaded DQN (CDQN) framework of Chen et al. (2019a). The main challenge is that building the list all at once is not feasible due to the intractably large number of possible lists. Therefore, the key is to build th...
Listwise RL (CDQN) : For tasks with listwise actions, we follow the Cascaded DQN (CDQN) framework of Chen et al. (2019a). The main challenge is that building the action list all at once is not feasible due to a combinatorial number of possible list-actions. Therefore, the key is to build the list incrementally, one act...
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Make first sentence more concise. Rewrite phrases, prefer short formulations and avoid we.", "annotator": "annotator_04" }
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Make first sentence more concise. Rewrite phrases, prefer short formulations and avoid we.", "annotator": "annotator_01" }
MXi6uEx-hp
rdZfFcGyf9
6
[ { "text": "Listwise RL (CDQN) :" }, { "text": "To solve the combinatorial action space problem of listwise actions, we follow the Cascaded DQN (CDQN) framework of Chen et al. (2019a)." }, { "text": "The main challenge is that building the list all at once is not feasible due to the intractably ...
[ { "text": "Listwise RL (CDQN) :" }, { "text": "For tasks with listwise actions, we follow the Cascaded DQN (CDQN) framework of Chen et al. (2019a)." }, { "text": "The main challenge is that building the action list all at once is not feasible due to a combinatorial number of possible list-action...
7_CwM-IzWd.zcm6f5HDI.18
We conduct this study with ModelNet40, using the front and rear views. We choose ten values from an interval [10 − 9 , 10 − 3 ] as λ . We use SGD without momentum, set the learning rate to 0.1 and batch size to eight. For each combination of hyperparameters, we train three model repetitions.
We conduct this study with ModelNet40, using the front and rear views. We choose ten values from an interval [10 − 9 , 10 − 3 ] as λ . We use SGD without momentum, set the learning rate to 0.1 and batch size to eight. Using each combination of hyperparameters, we repeat training for three times with random initializati...
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_08" }
7_CwM-IzWd
zcm6f5HDI
18
[ { "text": "We conduct this study with ModelNet40, using the front and rear views." }, { "text": "We choose ten values from an interval [10 − 9 , 10 − 3 ] as λ ." }, { "text": "We use SGD without momentum, set the learning rate to 0.1 and batch size to eight." }, { "text": "For each combi...
[ { "text": "We conduct this study with ModelNet40, using the front and rear views." }, { "text": "We choose ten values from an interval [10 − 9 , 10 − 3 ] as λ ." }, { "text": "We use SGD without momentum, set the learning rate to 0.1 and batch size to eight." }, { "text": "Using each com...
BJ49j43UH.B15bvYjiH.00
Datasets: We consider 12 public datasets (3 public tabular datasets, 7 public image datasets, and 2 public language datasets) to evaluate DVRL in comparison to multiple benchmark methods.public tabular datasets are (1) Blog, ( 2) Adult, (3) Rossmann; 7 public image datasets are (4) HAM 10000, (5) MNIST, (6) USPS, (7) F...
Datasets: We consider 12 public datasets (3 public tabular datasets, 7 public image datasets, andpublic language datasets) to evaluate DVRL in comparison to multiple benchmark methods. 3 public tabular datasets are (1) Blog, ( 2) Adult, (3) Rossmann; 7 public image datasets are (4) HAM 10000, (5) MNIST, (6) USPS, (7) F...
{ "annotation": [ "Concision" ], "instruction": "Make the last sentence more concise.", "annotator": "annotator_04" }
{ "annotation": [ "Concision" ], "instruction": "Make the last sentence shorter.", "annotator": "annotator_07" }
BJ49j43UH
B15bvYjiH
0
[ { "text": "Datasets: We consider 12 public datasets (3 public tabular datasets, 7 public image datasets, and 2 public language datasets) to evaluate DVRL in comparison to multiple benchmark methods.public tabular datasets are (1) Blog, (" }, { "text": "2) Adult, (3) Rossmann; 7 public image datasets are...
[ { "text": "Datasets: We consider 12 public datasets (3 public tabular datasets, 7 public image datasets, andpublic language datasets) to evaluate DVRL in comparison to multiple benchmark methods. 3 public tabular datasets are (1) Blog, (" }, { "text": "2) Adult, (3) Rossmann; 7 public image datasets are...
u9NaukzyJ-.hh0KECXQLv.00
A prescription is a common and important form of medical interven- tion that is used in modern clinical settings. It comes as a recommendation from an healthcare provider to a patient [1]. It indicates actions such as taking medications, following a diet, or executing physical exercises [2]. When agreed upon between th...
A prescription is a common and important form of medical inter- vention provided a clinician to a patient [1]. It indicates actions such as taking medications, following a diet, or executing physical exercises [2]. When agreed upon between a patient and their healthcare provider, the patient is expected to follow their...
{ "annotation": [ "Concision" ], "instruction": "Revise this paragraph to be more concise.", "annotator": "annotator_02" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Merge the two first sentences in one shorter one. Improve the sentence defining adherence to make it clearer.", "annotator": "annotator_07" }
u9NaukzyJ-
hh0KECXQLv
0
[ { "text": "A prescription is a common and important form of medical interven- tion that is used in modern clinical settings. It comes as a recommendation from an healthcare provider to a patient [1]." }, { "text": "It indicates actions such as taking medications, following a diet, or executing physical ...
[ { "text": "A prescription is a common and important form of medical inter- vention provided a clinician to a patient [1]." }, { "text": "It indicates actions such as taking medications, following a diet, or executing physical exercises [2]." }, { "text": "When agreed upon between a patient and t...
UlHNcByJV.W1RxpkrWx8.01
• If a data point is accepted by the biased classifier, it is accepted and added to the dataset with the true label. • If a data point is instead rejected by the biased classifier we use our de-biased classifier to decide whether to add it to the Pseudo-label dataset from PLOT. • We then apply the Pseudo-label mechani...
• If a data point is accepted by the biased classifier, we accept it and add it to the dataset with the true label. • If a data point is instead rejected by the biased classifier, we use de-biased classifier to decide whether to add it to the pseudo-label dataset. • As in PLOT, retrain on the pseudo-label candidates wi...
{ "annotation": [ "Rewriting_light" ], "instruction": "Rewrite the bullet points, making them more independent and preferring active over passive formulations", "annotator": "annotator_04" }
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Shorten the last sentence. Make this paragraph more direct.", "annotator": "annotator_07" }
UlHNcByJV
W1RxpkrWx8
1
[ { "text": "• If a data point is accepted by the biased classifier, it is accepted and added to the dataset with the true label." }, { "text": "• If a data point is instead rejected by the biased classifier we use our de-biased classifier to decide whether to add it to the Pseudo-label dataset from PLOT...
[ { "text": "• If a data point is accepted by the biased classifier, we accept it and add it to the dataset with the true label." }, { "text": "• If a data point is instead rejected by the biased classifier, we use de-biased classifier to decide whether to add it to the pseudo-label dataset." }, { ...
atxti8SVk.3K9AmPwALM.03
Wang et al., 2020; Fan et al., 2020; Sun et al., 2020). Xu et al. (2015) formulates all types of weak supervision as linear constraints on a SVM. Recent works (Lin et al., 2016; Kolesnikov & Lampert, 2016;Pathak et al., 2015) typically use Class Activation Map (CAM) (Zhou et al., 2016) to obtain an initial dense mask a...
Wang et al., 2020; Fan et al., 2020; Sun et al., 2020). Xu et al. (2015) formulates all types of weak supervision as linear constraints on a SVM. Papandreou et al. bootstraps segmentation predictions via EM-optimization. Recent works (Lin et al., 2016; Kolesnikov & Lampert, 2016; Pathak et al., 2015) typically use CAM ...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Development", "Concision" ], "instruction": "", "annotator": "annotator_07" }
atxti8SVk
3K9AmPwALM
3
[ { "text": "Wang et al., 2020; Fan et al., 2020; Sun et al., 2020)." }, { "text": "Xu et al." }, { "text": "(2015) formulates all types of weak supervision as linear constraints on a SVM." }, { "text": " Recent works (Lin et al., 2016; Kolesnikov & Lampert, 2016;Pathak et al., 2015) typic...
[ { "text": "Wang et al., 2020; Fan et al., 2020; Sun et al., 2020)." }, { "text": "Xu et al." }, { "text": "(2015) formulates all types of weak supervision as linear constraints on a SVM." }, { "text": "Papandreou et al. bootstraps segmentation predictions via EM-optimization. Recent work...
MXi6uEx-hp.rdZfFcGyf9.22
Bi-LSTM : The raw action representations of candidate actions are passed on to the 2-layer MLP followed by ReLU. Then the output of the MLP is processed by a 2-layer bidirectional LSTM (Huang et al., 2015) followed by another 2-layer MLP to create the action-summary to be used in the subsequent utility network.
Bi-LSTM : The raw action representations of candidate actions are passed on to the 2-layer MLP followed by ReLU. Then, the output of the MLP is processed by a 2-layer bidirectional LSTM (Huang et al., 2015). Another 2-layer MLP follows this to create the action set summary to be used in the following utility network.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Update the last sentence and split it into two sentences to make it easier to understand", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Split this paragraph into smaller and more focused points.", "annotator": "annotator_03" }
MXi6uEx-hp
rdZfFcGyf9
22
[ { "text": "Bi-LSTM :" }, { "text": "The raw action representations of candidate actions are passed on to the 2-layer MLP followed by ReLU." }, { "text": "Then the output of the MLP is processed by a 2-layer bidirectional LSTM (Huang et al., 2015) followed by another 2-layer MLP to create the ac...
[ { "text": "Bi-LSTM :" }, { "text": "The raw action representations of candidate actions are passed on to the 2-layer MLP followed by ReLU." }, { "text": "Then, the output of the MLP is processed by a 2-layer bidirectional LSTM (Huang et al., 2015). Another 2-layer MLP follows this to create the ...
9B3Sn8E9.J-9pEjms.00
We thank Yasaman Bahri for significant code contributions, frequent discussion and useful feedback on the manuscript, Sergey Ioffe for feedback on the text, as well as Greg Yang, Ravid Ziv, and Jeffrey
We thank Yasaman Bahri for frequent discussion and useful feedback on the manuscript. We additionally appreciate both Yasaman Bahri and Greg Yang for the ongoing contributions to improve the library. We thank Sergey Ioffe for feedback on the text, as well as Ravid Ziv, and Jeffrey
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_04" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
9B3Sn8E9
J-9pEjms
0
[ { "text": "We thank Yasaman Bahri for significant code contributions, frequent discussion and useful feedback on the manuscript, Sergey Ioffe for feedback on the text, as well as Greg Yang, Ravid Ziv, and Jeffrey" } ]
[ { "text": "We thank Yasaman Bahri for frequent discussion and useful feedback on the manuscript. We additionally appreciate both Yasaman Bahri and Greg Yang for the ongoing contributions to improve the library. We thank Sergey Ioffe for feedback on the text, as well as Ravid Ziv, and Jeffrey" } ]
MnewiFDvHZ.iAYttXl-uH.02
Moreover, we replace the original constraint function g t p¨q with ˆ g ` t ´ 1 p¨q and the dual variables λ twith Q p t ´ 1 q such that Q p t ´ 1 q ˆ g ` t ´ 1 p x q is a rectified approximator of λ t g t p x q . We also added thethe regularization (or smooth) term α t } x ´ x t } 2 that helps the stability of the algor...
Moreover, we replace the original constraint function g t p¨q with ˆ g ` t ´ 1 p¨q and the dual variables λ twith Q p t ´ 1 q such that Q p t ´ 1 q ˆ g ` t ´ 1 p x q is a rectified approximator of λ t g t p x q . We also added thethe regularization (or smooth) term α t } x ´ x t } 2 that helps the stability of the algor...
{ "annotation": [ "Concision", "Development" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Content_deletion" ], "instruction": "Remove unnecessary content to make this paragraph shorter.", "annotator": "annotator_07" }
MnewiFDvHZ
iAYttXl-uH
2
[ { "text": "Moreover, we replace the original constraint function g t p¨q with ˆ g ` t ´ 1 p¨q and the dual variables λ twith Q p t ´ 1 q such that Q p t ´ 1 q ˆ g ` t ´ 1 p x q is a rectified approximator of λ t g t p x q ." }, { "text": "We also added thethe regularization (or smooth) term α t } x ´ x t...
[ { "text": "Moreover, we replace the original constraint function g t p¨q with ˆ g ` t ´ 1 p¨q and the dual variables λ twith Q p t ´ 1 q such that Q p t ´ 1 q ˆ g ` t ´ 1 p x q is a rectified approximator of λ t g t p x q ." }, { "text": "We also added thethe regularization (or smooth) term α t } x ´ x t...
b5cpxvkUTu.haUk--I4J.00
If you are using existing assets (e g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] (b) Did you mention the license of the assets? [No] (c) Did you include any new assets either in the supplemental material or as a URL? [No] (d) Did you...
If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] (b) Did you mention the license of the assets? [No] (c) Did you include any new assets either in the supplemental material or as a URL? [No] (d) Did you...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
b5cpxvkUTu
haUk--I4J
0
[ { "text": "If you are using existing assets (e g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators?" }, { "text": "[Yes] (b) Did you mention the license of the assets?" }, { "text": "[No] (c) Did you include any new assets ...
[ { "text": "If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators?" }, { "text": "[Yes] (b) Did you mention the license of the assets?" }, { "text": "[No] (c) Did you include any new assets ...
5t8NvKONr.tls-ZX2iE.02
The core of this proof is based on substituting a neural network for the inner product between the branch net and trunk net. The neural network with a low complexity could approximate the inner product with better performance since the inner product is an infinitely differentiable function. The DeepONet showed defects...
The core of this proof is showing that the inner product between the branch net and the trunk net could be replaced with a neural network that has a low complexity (Lemma 1). Therefore, the entire structure of DeepONet could be replaced with a neural network that receives [ E ( u ) , y ] ∈ R d y + m as input. It gives ...
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Remove unnecessary details. Include citation.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite the beginning of the paragraph to improve the argumentation.", "annotator": "annotator_07" }
5t8NvKONr
tls-ZX2iE
2
[ { "text": "The core of this proof is based on substituting a neural network for the inner product between the branch net and trunk net. The neural network with a low complexity could approximate the inner product with better performance since the inner product is an infinitely differentiable function." }, ...
[ { "text": "The core of this proof is showing that the inner product between the branch net and the trunk net could be replaced with a neural network that has a low complexity (Lemma 1)." }, { "text": "Therefore, the entire structure of DeepONet could be replaced with a neural network that receives [ E (...
uUr8LbBzx8.u296507jDZ.00
TransBoost’s performance is compared to the standard inductive (fully supervised) performance. Forinstance, using 20% of the training set and 25% of the test set, we obtained the best top-1 accuracygain of +3.58% in the transductive setting while the performance in the inductive setting degraded by-1.21%. We note that...
Table 4 presents 16 experiments of TransBoost’s procedure performed on all combinations of the instance, % of training set and 25% of the test set, we obtained the best top gain of +3.58% in the transductive setting while the performance in the inductive setting degraded by We note that the performance in the inductive...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
uUr8LbBzx8
u296507jDZ
0
[ { "text": " TransBoost’s performance is compared to the standard inductive (fully supervised) performance." }, { "text": "Forinstance, using 20% of the training set and 25% of the test set, we obtained the best top-1 accuracygain of +3.58% in the transductive setting while the performance in the inducti...
[ { "text": "Table 4 presents 16 experiments of TransBoost’s procedure performed on all combinations of the instance," }, { "text": "% of training set and 25% of the test set, we obtained the best top gain of +3.58% in the transductive setting while the performance in the inductive setting degraded by" ...
hegI87bI5S.fL6Q48sfx8.15
Four participants answered that they intentionally used edges (Figure 5 (iv)) in all conditions. However, they were affected by the cursor hidden by the notch, causing them to lose sight of the cursor. All participants answered that when the cursor was hidden by the notch, they tried to find the cursor by moving the mo...
Four participants answered that they intentionally used edges (Figure 5 (iv)) in all conditions. However, the notch hid the cursor, which caused them to lose sight of the cursor. All participants answered that they attempted to find the cursor by moving the mouse vigorously when the cursor was hidden by the notch.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Restructure the last two sentences in this paragraph", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Revise this paragraph to make it more clear and concise.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
15
[ { "text": "Four participants answered that they intentionally used edges (Figure 5 (iv)) in all conditions." }, { "text": "However, they were affected by the cursor hidden by the notch, causing them to lose sight of the cursor." }, { "text": "All participants answered that when the cursor was hi...
[ { "text": "Four participants answered that they intentionally used edges (Figure 5 (iv)) in all conditions." }, { "text": "However, the notch hid the cursor, which caused them to lose sight of the cursor." }, { "text": "All participants answered that they attempted to find the cursor by moving t...
CVRUl83zah.I75TtW0V7.19
To make this point more concrete, we implement push apart from the main text using the Jacobian of sorting. We use sort : R n → R n to denote sorting of a multiset with scalar elements inascending order. This is extended to FSPool with higher-dimensional elements by independently sorting each dimension across the multi...
To make this point more concrete, we implement push apart from the main text using the Jacobian of sorting. We omit the transposes in the following for brevity. We begin by defining the function g ([ a, b ]) = sort ([ a, b ]) · [ − 1 , 1] . This multiplies the smaller value with − 1 and the larger with(ties broken arbit...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
19
[ { "text": "To make this point more concrete, we implement push apart from the main text using the Jacobian of sorting." }, { "text": "We use sort : R n → R n to denote sorting of a multiset with scalar elements inascending order. This is extended to FSPool with higher-dimensional elements by independent...
[ { "text": "To make this point more concrete, we implement push apart from the main text using the Jacobian of sorting." }, { "text": "We omit the transposes in the following for brevity." }, { "text": "We begin by defining the function g ([ a, b ]) =" }, { "text": "sort ([ a, b ]) · [ − 1...
hegI87bI5S.fL6Q48sfx8.05
Because typical targets on GUIs are rectangular, target height ( H ) also affects the movement time [3,8,14,20,21,27]. Accot and Zhai [1] proposed a model for a bivariate (2D) pointing tasks that takes H . Zhang et al. [28] proposed to balance the effects of W and H (Eq. 2).
Target height ( H ) also affects the movement time because typical targets on grafical user interfaces (GUIs) are rectangular [3,8,14,20, 22,30]. Accot and Zhai [1] proposed a model for a bivariate (2D) pointing tasks that considers H . Further, Zhang et al. [31] proposed to balancing the effects of W and H .
{ "annotation": [ "Rewriting_light" ], "instruction": "Rewrite this paragraph and focus more on the first sentence", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Improve this paragraph for clarity, mainly the first sentence.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
5
[ { "text": "Because typical targets on GUIs are rectangular, target height ( H ) also affects the movement time [3,8,14,20,21,27]." }, { "text": "Accot and Zhai [1] proposed a model for a bivariate (2D) pointing tasks that takes H ." }, { "text": " Zhang et al." }, { "text": "[28] propose...
[ { "text": "Target height ( H ) also affects the movement time because typical targets on grafical user interfaces (GUIs) are rectangular [3,8,14,20, 22,30]." }, { "text": "Accot and Zhai [1] proposed a model for a bivariate (2D) pointing tasks that considers H ." }, { "text": "Further, Zhang et ...
7_CwM-IzWd.zcm6f5HDI.01
According to the greedy learner hypothesis, it is the speed at which a multi-modal DNN learns from each modality that leads to imbalance in conditional utilization rate. If we appropriately intervene in the learning process to adjust these speeds, we may be able to prevent the hurtful imbalance across input modalitie...
According to the greedy learner hypothesis, it is the diverged speed at which a multi-modal DNN learns from different modalities that leads to an imbalance in conditional utilization rate. If we intervene in the training process to adjust these speeds, we may be able to prevent the hurtful imbalance across input modali...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_08" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_02" }
7_CwM-IzWd
zcm6f5HDI
1
[ { "text": "According to the greedy learner hypothesis, it is the speed at which a multi-modal DNN learns from each modality that leads to imbalance in conditional utilization rate." }, { "text": "If we appropriately intervene in the learning process to adjust these speeds, we may be able to prevent th...
[ { "text": "According to the greedy learner hypothesis, it is the diverged speed at which a multi-modal DNN learns from different modalities that leads to an imbalance in conditional utilization rate." }, { "text": "If we intervene in the training process to adjust these speeds, we may be able to prevent...
hegI87bI5S.fL6Q48sfx8.00
Although the mouse cursor can enter the notch area at the top of the MacBook Pro (2021) display, it is partially or entirely hidden by the notch. Avoiding the notch or moving the cursor carefully around the notch can increase the movement time. In this study, we conducted a series of experiments to evaluate the effect...
The notch on the top edge of the MacBook Pro (2021) display hides the mouse cursor even though the cursor can move under this area. Avoiding the notch or moving the cursor carefully around the notch can increase the movement time. In this study, we perform three experiments to evaluate the effect of the notch on the mo...
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the words using in this paragraph", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Modify this paragraph to make it more direct and easy to read.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
0
[ { "text": "Although the mouse cursor can enter the notch area at the top of the MacBook Pro (2021) display, it is partially or entirely hidden by the notch." }, { "text": "Avoiding the notch or moving the cursor carefully around the notch can increase the movement time." }, { "text": "In this s...
[ { "text": "The notch on the top edge of the MacBook Pro (2021) display hides the mouse cursor even though the cursor can move under this area." }, { "text": "Avoiding the notch or moving the cursor carefully around the notch can increase the movement time." }, { "text": "In this study, we perfor...
u9NaukzyJ-.hh0KECXQLv.16
When dealing with conflicts, arrows are effective in communi- cating the suggested conflict resolution action. End-of-line arrows can be used to indicate that the medication entries which have been scheduled too close together should be taken apart and vice-versa. The calendar should also have support for indication th...
When dealing with conflicts, arrows are effective in communi- cating the suggested conflict resolution action. End-of-line arrows can be used to indicate that medication entries that have been sched- uled too close together should be taken apart and vice-versa. The calendar should also have support for indication that ...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Clarify the wording in this paragraph.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium", "Rewriting_light" ], "instruction": "Reword my sentence about entries.", "annotator": "annotator_09" }
u9NaukzyJ-
hh0KECXQLv
16
[ { "text": "When dealing with conflicts, arrows are effective in communi- cating the suggested conflict resolution action." }, { "text": "End-of-line arrows can be used to indicate that the medication entries which have been scheduled too close together should be taken apart and vice-versa." }, { ...
[ { "text": "When dealing with conflicts, arrows are effective in communi- cating the suggested conflict resolution action." }, { "text": "End-of-line arrows can be used to indicate that medication entries that have been sched- uled too close together should be taken apart and vice-versa." }, { "t...
CVRUl83zah.I75TtW0V7.15
The goal is to reconstruct the input with a permutation-invariant latent vector as bottleneck. Varying the set size n and dimensionality of set elements d allows us to control the difficulty of the task. Note that while this appears like a toy task, it is also likely harder than many real-world datasets with similar n a...
The goal is to reconstruct the input with a permutation-invariant latent vector as bottleneck. Varying the set size n and dimensionality of set elements d allows us to control the difficulty of the task. Note that while this appears like a toy task, it is also likely harder than many real-world datasets with similar n a...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
15
[ { "text": "The goal is to reconstruct the input with a permutation-invariant latent vector as bottleneck." }, { "text": "Varying the set size n and dimensionality of set elements d allows us to control the difficulty of the task." }, { "text": "Note that while this appears like a toy task, it is ...
[ { "text": "The goal is to reconstruct the input with a permutation-invariant latent vector as bottleneck." }, { "text": "Varying the set size n and dimensionality of set elements d allows us to control the difficulty of the task." }, { "text": "Note that while this appears like a toy task, it is ...
hegI87bI5S.fL6Q48sfx8.03
Patrick et al. proposed the Mouse Ether technique on finding out that when using multiple displays with different resolutions, a user loses the cursor because of unnatural cursor movement between displays [5]. The results showed that the technique improved performance by up to 28% by preventing unnatural warping when ...
Patrick et al. found out that a user loses the cursor when using multiple displays with different resolutions based on an unnatural cursor movement between displays, and proposed a Mouse Ether technique [5]. The proposed technique improved performance by up to 28% by preventing unnatural warping when the cursor was mov...
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the writing of this paragraph", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Modify the logical flow of ideas to improve the readability of the paragraph.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
3
[ { "text": "Patrick et al. proposed the Mouse Ether technique on finding out that when using multiple displays with different resolutions, a user loses the cursor because of unnatural cursor movement between displays [5]." }, { "text": "The results showed that the technique improved performance by up to...
[ { "text": "Patrick et al. found out that a user loses the cursor when using multiple displays with different resolutions based on an unnatural cursor movement between displays, and proposed a Mouse Ether technique [5]." }, { "text": "The proposed technique improved performance by up to 28% by preventing...
jyac3IgQ44.f4au9jfat5.05
To leverage the natural sparsity of point clouds andfurther improve efficiency, we sparsely implementall our window center searching, window gathering, and balanced window sampling into CUDAoperations. These operations are mainly based on a hash map that establishes the mapping fromcoordinate space to voxel index [23]....
We implement all our window center searching,window gathering, and balanced window samplingsparsely in cuda operations to leverage the natural sparsity of point clouds and improve efficiency. These operations are mainly based on a hash mapwhich establishes the mapping from coordinatespace to voxel index as in [20]. For...
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Revise this paragraph for better readability.", "annotator": "annotator_02" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Improve the flow of ideas for better readability.", "annotator": "annotator_07" }
jyac3IgQ44
f4au9jfat5
5
[ { "text": "Relative position encoding is necessary for transformer-based networks because fine-grained positioninformation may be lost in high-level features with the deepening of the network." }, { "text": "To make better useof position information to facilitate multi-scale feature learning in our case...
[ { "text": "The 3D point cloud feature generally contains the original coordinates information, which voxelswill inherit. However, the fine-grained location information may be blurred with the deepening ofthe network, so the relative position encoding is necessary." }, { "text": "Furthermore, since MsSVT...
nkOpNqg-ip.OwJsIhe_p.01
The surprising result of our experimental evaluation is that, for runtimes of 1h, the black box optimizers are hardly ever able to improve upon Naive AutoML. In fact, the experiments even show that the “Ex-def” baseline itself is already quite strong in this time frame. This is not a contradiction to the results in Tho...
The surprising result of our experimental evaluation is that, for runtimes of 1h, the black box optimizers are hardly ever able to improve upon Naive AutoML. In fact, the experiments even show that the “Ex-def” baseline itself is already quite strong in this time frame. This is not a contradiction to the results in (Th...
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Rewrite the last sentence, making it more concise.", "annotator": "annotator_04" }
{ "annotation": [ "Concision" ], "instruction": "Make the last sentence more concise.", "annotator": "annotator_07" }
nkOpNqg-ip
OwJsIhe_p
1
[ { "text": "The surprising result of our experimental evaluation is that, for runtimes of 1h, the black box optimizers are hardly ever able to improve upon Naive AutoML." }, { "text": "In fact, the experiments even show that the “Ex-def” baseline itself is already quite strong in this time frame." }, ...
[ { "text": "The surprising result of our experimental evaluation is that, for runtimes of 1h, the black box optimizers are hardly ever able to improve upon Naive AutoML." }, { "text": "In fact, the experiments even show that the “Ex-def” baseline itself is already quite strong in this time frame." }, ...
hegI87bI5S.fL6Q48sfx8.06
It is known that the edge target (placing the target adjacent to the edge of the screen) can reduce the movement time [3,9,10,24,25]. Pointing at a target in the center of the screen requires the cursor to stop just inside the target. When pointing an edge target, the cursor stops at the edge. As a result, the pointing...
An edge target (target adjacent to the edge of the screen) can reduce the movement time [3,9,10,27,28]. Pointing at a target at the center of the screen requires the cursor to stop inside the target. The cursor stops at the edge when pointing at an edge target. Thus, the pointing task can be completed by moving a curso...
{ "annotation": [ "Rewriting_light" ], "instruction": "Rewrite this paragraph and choose better words", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Reorganise the flow of ideas when a sentence is confusing. Try to shorten the paragraph a bit.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
6
[ { "text": "It is known that the edge target (placing the target adjacent to the edge of the screen) can reduce the movement time [3,9,10,24,25]." }, { "text": "Pointing at a target in the center of the screen requires the cursor to stop just inside the target." }, { "text": "When pointing an edg...
[ { "text": "An edge target (target adjacent to the edge of the screen) can reduce the movement time [3,9,10,27,28]." }, { "text": "Pointing at a target at the center of the screen requires the cursor to stop inside the target." }, { "text": "The cursor stops at the edge when pointing at an edge t...
ByZyHzZC-.HktKf7-AW.02
Given the probability density P ( θ ) , we are now interested in deriving the probability of ending at a given minimum, θ A , which we will denote by lowercase p A = ˜ p A C , where C is a normalization constant (the unnormalized probability ˜ p A is all we are interested in when estimating the relative probability of...
Given the probability density P ( θ ) , we are now interested in deriving the probability of ending at a given minimum, θ A , which we will denote by lowercase p A = ˜ p A C , where C is a normalization constant which is the same for every mimnima (the unnormalized probability ˜ p A is all we are interested in when est...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
ByZyHzZC-
HktKf7-AW
2
[ { "text": "Given the probability density P ( θ ) , we are now interested in deriving the probability of ending at a given minimum, θ A , which we will denote by lowercase p A = ˜ p" }, { "text": "A C , where C is a normalization constant (the unnormalized probability ˜ p" }, { "text": "A is all...
[ { "text": "Given the probability density P ( θ ) , we are now interested in deriving the probability of ending at a given minimum, θ A , which we will denote by lowercase p A = ˜ p" }, { "text": "A C , where C is a normalization constant which is the same for every mimnima (the unnormalized probability ...
hegI87bI5S.fL6Q48sfx8.12
We instructed the participants to (1) point the target as quickly and accurately as possible after clicking the starting position, (2) avoid any clutching action (floating the mouse in the middle of an operation) during the trail, and (3) check the presented conditions before starting the trial. Clutching action decrea...
We instructed the participants to (1) point the target as quickly and accurately as possible after clicking the starting position, (2) avoid any clutching action (floating the mouse in the middle of an operation) during the trial, and (3) check the presented conditions before starting the trial. The clutching action de...
{ "annotation": [ "Content_addition", "Rewriting_light" ], "instruction": "", "annotator": "annotator_07" }
null
hegI87bI5S
fL6Q48sfx8
12
[ { "text": "We instructed the participants to (1) point the target as quickly and accurately as possible after clicking the starting position, (2) avoid any clutching action (floating the mouse in the middle of an operation) during the trail, and (3) check the presented conditions before starting the trial." }...
[ { "text": "We instructed the participants to (1) point the target as quickly and accurately as possible after clicking the starting position, (2) avoid any clutching action (floating the mouse in the middle of an operation) during the trial, and (3) check the presented conditions before starting the trial." }...
kBsx5htyKn.qV5njV8W5.00
Active learning is a powerful technique to get the best out of an annotation budget. It consists of integrating the current model itself in the selection of which data points should be annotated next, which are selected based on an heuristic that is supposed to maximize the improvement of the model.
Active learning is a powerful technique to get the best out of an annotation budget for machine learning services in general, and Natural Language Processing ones in particular. It consists of integrating the current model itself in the selection of which data points should be annotated next, which are selected based o...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
kBsx5htyKn
qV5njV8W5
0
[ { "text": "Active learning is a powerful technique to get the best out of an annotation budget." }, { "text": "It consists of integrating the current model itself in the selection of which data points should be annotated next, which are selected based on an heuristic that is supposed to maximize the imp...
[ { "text": "Active learning is a powerful technique to get the best out of an annotation budget for machine learning services in general, and Natural Language Processing ones in particular." }, { "text": "It consists of integrating the current model itself in the selection of which data points should be ...
OV5v_wBMHk.bw4cqlpLh.00
Estimating individual treatment effects from observational data is very challenging due to the existence of treatment selection bias. Most existing representation-based methods mitigate this issue by aligning distributions of different treatment groups in the representation space. However, they still suffer from two cr...
Estimating individual treatment effects from observational data is highly challenging due to the existence of treatment selection bias. Most prevalent approaches mitigate this issue by aligning distributions of different treatment groups in the representation space. However, there are two critical problems circumvented...
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the english of this paragraph.", "annotator": "annotator_02" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Edit this paragraph by making more formal choices of wording.", "annotator": "annotator_07" }
OV5v_wBMHk
bw4cqlpLh
0
[ { "text": "Estimating individual treatment effects from observational data is very challenging due to the existence of treatment selection bias." }, { "text": "Most existing representation-based methods mitigate this issue by aligning distributions of different treatment groups in the representation spa...
[ { "text": "Estimating individual treatment effects from observational data is highly challenging due to the existence of treatment selection bias." }, { "text": "Most prevalent approaches mitigate this issue by aligning distributions of different treatment groups in the representation space." }, { ...
XnxT9Uofth.vN0Ie05Cbd.00
The learning problem can be arbitrarily difficult, especially when f ∗ ( z ) is close to 1 2 , in which case itwill be difficult to determine the true value of r ( z ) . To give a problem-dependent bound, we assume the following bounded noise assumption. In the literature on statistical learning, this assumption is als...
The learning problem can be arbitrarily difficult, especially when f ∗ ( z ) is close to 1 2 , in which caseit will be difficult to determine the true value of r ( z ) . However, the marginal case is rare in manyreal-world problems, and the learning goal is not that difficult to identify for human beings. To give a pro...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
XnxT9Uofth
vN0Ie05Cbd
0
[ { "text": "The learning problem can be arbitrarily difficult, especially when f ∗ ( z ) is close to 1 2 , in which case itwill be difficult to determine the true value of r ( z ) ." }, { "text": "" }, { "text": "To give a problem-dependent bound, we assume the following bounded noise assumption....
[ { "text": "The learning problem can be arbitrarily difficult, especially when f ∗ ( z ) is close to 1 2 , in which caseit will be difficult to determine the true value of r ( z ) ." }, { "text": "However, the marginal case is rare in manyreal-world problems, and the learning goal is not that difficult t...
HkW3nTM6X.S1d278zJ4.00
SDE model. Figures 2c-dclearly illustrate that time-variant drift function leads to reduced prediction error, which hints that the dynamics underlying a golf swing motion are learned better. We also see that the error consistently decreases as the number of inducing points M is increased, and reaches the minimum at M =...
SDE model. Figures 2c-d illustrate that time-variant drift function can also reduce prediction error, at least for golf swing trajectories that span approximately the same part of the state space. We also see that the error consistently decreases as the number of inducing points M is increased, and reaches the minimum ...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the sentence related to Figures 2c-d.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the first long sentence to better fit the academic style.", "annotator": "annotator_07" }
HkW3nTM6X
S1d278zJ4
0
[ { "text": "SDE model." }, { "text": "Figures 2c-dclearly illustrate that time-variant drift function leads to reduced prediction error, which hints that the dynamics underlying a golf swing motion are learned better." }, { "text": "We also see that the error consistently decreases as the number ...
[ { "text": "SDE model." }, { "text": "Figures 2c-d illustrate that time-variant drift function can also reduce prediction error, at least for golf swing trajectories that span approximately the same part of the state space." }, { "text": "We also see that the error consistently decreases as the n...
HU3k56fdo.UBQDwHj6Ebd.00
One can fine-tune the global model on the local data [41], perform MAML-based personalizedapproaches [37, 38], or achieve the personalization by local batch normalization layers [11]. Our proposed method is perpendicular to the above studies and potentially can be combined with them for further improvement.
One can fine-tune the global model on the local data [37], perform MAML-based personalizedapproaches [38, 39], or achieve the personalization by local batch normalization layers [11]. Thereare also many other emerging explorations dealing with FL data heterogeneity, such as heterogeneousoptimization [40, 41, 42], robus...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
HU3k56fdo
UBQDwHj6Ebd
0
[ { "text": "One can fine-tune the global model on the local data [41], perform MAML-based personalizedapproaches [37, 38], or achieve the personalization by local batch normalization layers [11]." }, { "text": "" }, { "text": "Our proposed method is perpendicular to the above studies and potentia...
[ { "text": "One can fine-tune the global model on the local data [37], perform MAML-based personalizedapproaches [38, 39], or achieve the personalization by local batch normalization layers [11]." }, { "text": "Thereare also many other emerging explorations dealing with FL data heterogeneity, such as het...
NwOG107NKJ.0PPYM22rdB.01
General classes of network formation methods include: 1) exponential random graph models (ERGMs) Lusher et al. [2013]Pattison and Wasserman [1999], meta-networks, and meta-matrices Carley and Hill [2001]Krackhardt and Carley [1998] for multilayer social networks. 2) block modeling Guimerà and Sales-Pardo [2009] 3) geog...
General classes of network formation methods include: 1) exponential random graph models (ERGMs) [Lusher et al., 2013][Pattison and Wasserman, 1999], meta-networks, and meta-matrices [Carley and Hill, 2001][Krackhardt and Carley, 1998] for multilayer social networks. 2) block modeling [Guimerà and Sales-Pardo, 2009] 3)...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
NwOG107NKJ
0PPYM22rdB
1
[ { "text": "General classes of network formation methods include: 1) exponential random graph models (ERGMs)" }, { "text": "Lusher et al. [2013]Pattison and Wasserman [1999], meta-networks, and meta-matrices Carley and Hill [2001]Krackhardt and Carley [1998] for multilayer social networks." }, { ...
[ { "text": "General classes of network formation methods include: 1) exponential random graph models (ERGMs)" }, { "text": "[Lusher et al., 2013][Pattison and Wasserman, 1999], meta-networks, and meta-matrices [Carley and Hill, 2001][Krackhardt and Carley, 1998] for multilayer social networks." }, { ...
F3z0hchpGy.xeuzrNJiNW.00
This is done by applying a matrix ρ ( g q → p ) ∈ R C out × C in to the coefficients of the feature at q , in order to obtain the coefficients of the feature vector transported to p , which can be used for the convolution at p . The transporter depends on the geometric type of the feature, denoted by ρ . Details of how ...
This is done by applying a matrix ρ ( g q → p ) ∈ R C out × C in to the coefficients of the feature at q , in order to obtain the coefficients of the feature vector transported to p , which can be used for the convolution at p . The transporter depends on the geometric type (group representation) of the feature, denoted ...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
F3z0hchpGy
xeuzrNJiNW
0
[ { "text": "This is done by applying a matrix ρ ( g q → p ) ∈ R C out × C in to the coefficients of the feature at q , in order to obtain the coefficients of the feature vector transported to p , which can be used for the convolution at p ." }, { "text": "The transporter depends on the geometric type of t...
[ { "text": "This is done by applying a matrix ρ ( g q → p ) ∈ R C out × C in to the coefficients of the feature at q , in order to obtain the coefficients of the feature vector transported to p , which can be used for the convolution at p ." }, { "text": "The transporter depends on the geometric type (grou...
OV5v_wBMHk.bw4cqlpLh.17
To avoid the significant variance in the IHDP benchmark, we conduct ablation study on the ACIC benchmark, to evaluate the effectiveness of ESCFR’s components and validate our claims in Section 3. In Table 2, ESCFR firstly augments TARNet with stochastic optimal transport in Section 3.1, which effectively reduces the o...
To verify the effectiveness of individual components, an ablation study is conducted on the ACIC benchmark in Table 2. Specifically, ESCFR first augments TARNet with stochastic optimal transport in Section 3.1, which effectively reduces the out-of-sample PEHE from 3.254 to 3.207. Then, it mitigates the MSE issue with ...
{ "annotation": [ "Rewriting_medium", "Concision" ], "instruction": "Make the first sentence more concise.", "annotator": "annotator_08" }
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Simplify the first sentence. Improve the connections between sentences.", "annotator": "annotator_07" }
OV5v_wBMHk
bw4cqlpLh
17
[ { "text": "Representation-based methods mitigate the treatment selection bias and enhance the overall performance." }, { "text": "In particular, CFR-WASS reaches an out-of-sample PEHE of 3.207 on ACIC, significantly outperforming most statistical methods." }, { "text": "It also gets an AUUC of 0...
[ { "text": "Representation-based methods mitigate the treatment selection bias and enhance overall performance." }, { "text": "In particular, CFR-WASS reaches an out-of-sample PEHE of 3.207 on ACIC, significantly outperforming most statistical methods." }, { "text": "" }, { "text": "Howev...
7_CwM-IzWd.zcm6f5HDI.10
We refer to the training steps at which we perform forward and backward passes normally as regular steps . We introduce re-balancing steps at which we force the model to update only one of the unimodal branches in order to accelerate learning from the associated input modality. See Appendix A. for the full explanation...
We refer to the training steps at which we perform forward and backward passes normally as regular steps . We introduce re-balancing steps at which we update one of the uni-modal branches intentionally in order to accelerate the model to learn from the corresponding modality. See Appendix A.2 for the full explanation o...
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the English of this paragraph", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Rewrite this paragraph", "annotator": "annotator_05" }
7_CwM-IzWd
zcm6f5HDI
10
[ { "text": "We refer to the training steps at which we perform forward and backward passes normally as regular steps ." }, { "text": "We introduce re-balancing steps at which we force the model to update only one of the unimodal branches in order to accelerate learning from the associated input modality...
[ { "text": "We refer to the training steps at which we perform forward and backward passes normally as regular steps ." }, { "text": "We introduce re-balancing steps at which we update one of the uni-modal branches intentionally in order to accelerate the model to learn from the corresponding modality." ...
UlHNcByJV.W1RxpkrWx8.02
We compare against four baseline methods: PLOT, NeuralUCB, Greedy (no exploration), and decayed ϵ -greedy method. For ϵ -greedy, we follow (Kveton et al., 2019) and use a decayed schedule, dropping to 0.1% exploration by T=2500. In addition we evaluate the performance of the Standalone Adversarial classifier as an abla...
We compare against four baseline methods: PLOT, NeuralUCB, Greedy (no exploration), and decayed ϵ -greedy method. For ϵ -greedy, we follow (Kveton et al., 2019) and use a decayed schedule, dropping to 0.1% exploration by T=2500. We combine PLOT with ϵ -greedy selection of pseudolabel candidates as in Pacchiano et al. I...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
UlHNcByJV
W1RxpkrWx8
2
[ { "text": "We compare against four baseline methods: PLOT, NeuralUCB, Greedy (no exploration), and decayed ϵ -greedy method." }, { "text": "For ϵ -greedy, we follow (Kveton et al., 2019) and use a decayed schedule, dropping to 0.1% exploration by T=2500." }, { "text": "" }, { "text": "In...
[ { "text": "We compare against four baseline methods: PLOT, NeuralUCB, Greedy (no exploration), and decayed ϵ -greedy method." }, { "text": "For ϵ -greedy, we follow (Kveton et al., 2019) and use a decayed schedule, dropping to 0.1% exploration by T=2500." }, { "text": "We combine PLOT with ϵ -gr...
aomiOZE_m2.rxb2TiQ6bq.02
Deep CNN is firstly utilized for image SR in SRCNN (Dong et al., 2014) and continuously shows promising SR performance. There are only three convolutional (Conv) layers in SRCNN, hindering its performance. Kim et al . increased the network depth in VDSR (Kim et al., 2016a) with residual learning and obtained notable im...
Deep CNN for image SR is pioneered by SRCNN (Dong et al., 2014) and has continuously shown promising SR performance. There are only three convolutional (Conv) layers in SRCNN, constraining its expressivity. Kim et al . increased the network depth in VDSR (Kim et al., 2016a) with residual learning and obtained notable i...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Can you make the last sentence simple?", "annotator": "annotator_09" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Use shorter, more direct formulations to make this paragraph more concise. Rewrite the last two sentences to make them more understandable.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
2
[ { "text": "Deep CNN is firstly utilized for image SR in SRCNN (Dong et al., 2014) and continuously shows promising SR performance." }, { "text": "There are only three convolutional (Conv) layers in SRCNN, hindering its performance." }, { "text": "Kim et al . increased the network depth in VDSR (...
[ { "text": "Deep CNN for image SR is pioneered by SRCNN (Dong et al., 2014) and has continuously shown promising SR performance." }, { "text": "There are only three convolutional (Conv) layers in SRCNN, constraining its expressivity." }, { "text": "Kim et al . increased the network depth in VDSR ...
6Olckfg0rk.4SAcuLPvUb.00
Our handcrafted backdoor attacks directly modify a pre-trained model’s parameters to introduce malicious functionality. Because our attack does not require training, knowledge of or access to the training data is unnecessary. More importantly, handcrafted attacks have more degrees of freedom in optimizing a model’s beh...
Our handcrafted backdoor attacks directly modify a pre-trained model’s parameters to introduce malicious functionality. Because our attack does not require training, knowledge of or access to the training data is unnecessary. More importantly, handcrafted attacks have more degrees of freedom in optimizing a model’s beh...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
6Olckfg0rk
4SAcuLPvUb
0
[ { "text": "Our handcrafted backdoor attacks directly modify a pre-trained model’s parameters to introduce malicious functionality." }, { "text": "Because our attack does not require training, knowledge of or access to the training data is unnecessary." }, { "text": "More importantly, handcrafted...
[ { "text": "Our handcrafted backdoor attacks directly modify a pre-trained model’s parameters to introduce malicious functionality." }, { "text": "Because our attack does not require training, knowledge of or access to the training data is unnecessary." }, { "text": "More importantly, handcrafted...
OV5v_wBMHk.bw4cqlpLh.09
We further derive the upper bound of PEHE in the stochastic batch form as in Theorem 3.1 based on Uri et al. (2017), which demonstrates that the PEHE can be optimized by iteratively minimizing the factual outcome estimation error and the optimal transport discrepancy at a mini-batch level .
To further investigate the effectiveness of this shortcut, Theorem 3.1 demonstrates that PEHE can be optimized by iteratively minimizing the factual outcome estimation error and the mini-batch group discrepancy (6). The proof of the theorem can be found in Appendix A.3.
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "At the last part, state that the proof will be shown in the appendix. Also, make the sentence more sophisticated.", "annotator": "annotator_05" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Make this sentence more concise. Add a reference to Appendix A.3. where the proof is.", "annotator": "annotator_07" }
OV5v_wBMHk
bw4cqlpLh
9
[ { "text": "We further derive the upper bound of PEHE in the stochastic batch form as in Theorem 3.1 based on Uri et al. (2017), which demonstrates that the PEHE can be optimized by iteratively minimizing the factual outcome estimation error and the optimal transport discrepancy at a mini-batch level ." } ]
[ { "text": "To further investigate the effectiveness of this shortcut, Theorem 3.1 demonstrates that PEHE can be optimized by iteratively minimizing the factual outcome estimation error and the mini-batch group discrepancy (6). The proof of the theorem can be found in Appendix A.3." } ]
S1qImCcFQ.Ske132uA7.02
To validate the model and inference procedure, we used the neural spike train data recorded from the primary visual cortex of an anesthetized macaque monkey collected by Graf et al. The dataset is composed of short trials where the monkey viewed periodic temporal pattern of motions of 72 orientations, each repeated 50 ...
To validate the model and inference procedure, we used the neural spike train data recorded from the primary visual cortex of an anesthetized macaque monkey collected by Graf et al. The dataset is composed of short trials where the monkey viewed periodic temporal pattern of motions of 72 orientations, each repeated 50 ...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
S1qImCcFQ
Ske132uA7
2
[ { "text": "To validate the model and inference procedure, we used the neural spike train data recorded from the primary visual cortex of an anesthetized macaque monkey collected by Graf et al." }, { "text": "The dataset is composed of short trials where the monkey viewed periodic temporal pattern of mot...
[ { "text": "To validate the model and inference procedure, we used the neural spike train data recorded from the primary visual cortex of an anesthetized macaque monkey collected by Graf et al." }, { "text": "The dataset is composed of short trials where the monkey viewed periodic temporal pattern of mot...
HJZRDGZCb.SkuHQG2zz.00
To evaluate the proposed spatial-wise and channel-wise sparse-complementary (SW-SC and CWSC) convolutional kernels, we applied them onto on state-of-the-art network architectures for the image classification task, including ResNet (He et al. (2016a;b)) and DenseNet (Huang et al. for the CIFAR-10/100 (Krizhevsky & Hinton...
To evaluate the proposed spatial-wise and channel-wise sparse-complementary (SW-SC and CWSC) convolutional kernels, we applied them onto on state-of-the-art network architectures for the image for the CIFAR-10/ Krizhevsky Hinton and ImageNet-1K ) datasets. For all experiments, we replace all 3 × 3 and 1 × 1 kernels by ...
{ "annotation": [ "Concision" ], "instruction": "Exclude unnecessary details.", "annotator": "annotator_08" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
HJZRDGZCb
SkuHQG2zz
0
[ { "text": "To evaluate the proposed spatial-wise and channel-wise sparse-complementary (SW-SC and CWSC) convolutional kernels, we applied them onto on state-of-the-art network architectures for the image classification task, including ResNet (He et al." }, { "text": "(2016a;b)) and DenseNet (Huang et al....
[ { "text": "To evaluate the proposed spatial-wise and channel-wise sparse-complementary (SW-SC and CWSC) convolutional kernels, we applied them onto on state-of-the-art network architectures for the image for the CIFAR-10/ Krizhevsky" }, { "text": "Hinton" }, { "text": "and ImageNet-1K ) datasets...
5t8NvKONr.tls-ZX2iE.00
In this section, we would like to clarify the complexity of the DeepONet required for the approximation A and reconstruction R based on the theory in Galanti & Wolf (2020). Furthermore,using the results on the upper bound for the complexity of hypernetwork Galanti & Wolf (2020), we will show that the HyperDeepONet enta...
In this section, we would like to clarify the complexity of the DeepONet required for the approximation A and reconstruction R based on the theory in Galanti & Wolf (2020). Furthermore, we will show that the HyperDeepONet entails a relatively lower complexity than the DeepONet using the results on the upper bound for t...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Use correct citation format.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Reorder the last sentence.", "annotator": "annotator_07" }
5t8NvKONr
tls-ZX2iE
0
[ { "text": "In this section, we would like to clarify the complexity of the DeepONet required for the approximation A and reconstruction R based on the theory in Galanti & Wolf (2020)." }, { "text": "Furthermore,using the results on the upper bound for the complexity of hypernetwork Galanti & Wolf (2020)...
[ { "text": "In this section, we would like to clarify the complexity of the DeepONet required for the approximation A and reconstruction R based on the theory in Galanti & Wolf (2020)." }, { "text": "Furthermore, we will show that the HyperDeepONet entails a relatively lower complexity than the DeepONet ...
CswFOyPyhT.FUeqrAFby.00
Previous work has shown GFlowNets are useful in settings with multi-modal posteriors. This is of particular interest to us where many admissible structures can explain the observed data equally well. Next, we discuss a toy system in which has many modes in section 5, then present our GFlowNet-based solution in section...
Previous work has shown GFlowNets are useful in settings with multi-modal posteriors. This is of particular interest to us where many admissible structures can explain the observed data equally well. Next, we present our GFlowNet-based solution in section 4, then discuss a toy system which has many modes in section 5.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make the citation in correct order.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Revise this paragraph to present the sections in a coherent order.", "annotator": "annotator_02" }
CswFOyPyhT
FUeqrAFby
0
[ { "text": "Previous work has shown GFlowNets are useful in settings with multi-modal posteriors." }, { "text": "This is of particular interest to us where many admissible structures can explain the observed data equally well." }, { "text": "Next, we discuss a toy system in which has many modes ...
[ { "text": "Previous work has shown GFlowNets are useful in settings with multi-modal posteriors." }, { "text": "This is of particular interest to us where many admissible structures can explain the observed data equally well." }, { "text": "Next, we present our GFlowNet-based solution in section...
nCTSF9BQJ.DGhBYSP_sR.10
We would like to highlight that, sequence-based(evolution-based) methodsfor single proteins are not suitable for protein-protein interactions due to the lacking of evolutionary information in most cases — protein-protein interactions involve two or more chains. The chains might belong to different species (e.g. host a...
However, it is important to note that sequence-based methods are not suitable for predicting mutational effects on general protein-protein interactions due to the lack of evolutionary information in many cases. Protein-protein interactions typically involve two or more chains, which may belong to different species or m...
{ "annotation": [ "Concision" ], "instruction": "Remove unnecessary examples.", "annotator": "annotator_01" }
{ "annotation": [ "Concision" ], "instruction": "Rewrite this paragraph to make it shorter while keeping all the informations.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
10
[ { "text": "We would like to highlight that, sequence-based(evolution-based) methodsfor single proteins are not suitable for protein-protein interactions due to the lacking of evolutionary information in most cases — protein-protein interactions involve two or more chains." }, { "text": "The chains migh...
[ { "text": "However, it is important to note that sequence-based methods are not suitable for predicting mutational effects on general protein-protein interactions due to the lack of evolutionary information in many cases." }, { "text": "Protein-protein interactions typically involve two or more chains, ...
X50LVGSli.jqJzurpUu.01
Previous works on unsupervised learning for CO have studied max-cut Yao et al. and TSP problems Hudson et al. (2021), while these works depend on carefully selected problem-specific objectives. Some works have investigated satisfaction problems Amizadeh et al. ; Toenshoff et al. Applying these approaches to general CO ...
Previous works on unsupervised learning for CO have studied max-cut (Yao et al., 2019) and TSP problems (Hudson et al., 2021), while these works depend on carefully selected problem-specific objectives. Some works have investigated satisfaction problems (Amizadeh et al., 2018; Toenshoff et al., 2019). Applying these ap...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
X50LVGSli
jqJzurpUu
1
[ { "text": "Previous works on unsupervised learning for CO have studied max-cut Yao et al. and TSP problems Hudson et al. (2021), while these works depend on carefully selected problem-specific objectives." }, { "text": "Some works have investigated satisfaction problems Amizadeh et al." }, { "te...
[ { "text": "Previous works on unsupervised learning for CO have studied max-cut (Yao et al., 2019) and TSP problems (Hudson et al., 2021), while these works depend on carefully selected problem-specific objectives." }, { "text": "Some works have investigated satisfaction problems (Amizadeh et al., 2018;"...