AerdnaNami committed
Commit 35c30dd · 1 Parent(s): d12caad

Restore repository state from commit 8081326
- annotations/Ed/.keep +0 -0
- annotations/Ed/paper_11.txt.json +26 -0
- annotations/Ed/paper_13.txt.json +34 -0
- annotations/Ed/paper_14.txt.json +26 -0
- annotations/Ed/paper_15.txt.json +10 -0
- annotations/Ed/paper_16.txt.json +26 -0
- annotations/Ekaterina/paper_64.txt.json +98 -0
- annotations/Ekaterina/paper_65.txt.json +90 -0
- annotations/Ekaterina/paper_66.txt.json +50 -0
- annotations/Ekaterina/paper_67.txt.json +18 -0
- annotations/Ekaterina/paper_68.txt.json +58 -0
- annotations/Iman/.keep +0 -0
- annotations/Iman/paper_100.txt.json +26 -0
- annotations/Iman/paper_11.txt.json +10 -0
- annotations/Iman/paper_13.txt.json +34 -0
- annotations/Iman/paper_14.txt.json +18 -0
- annotations/Iman/paper_15.txt.json +18 -0
- annotations/Iman/paper_16.txt.json +1 -0
- annotations/Iman/paper_17.txt.json +34 -0
- annotations/Iman/paper_18.txt.json +26 -0
- annotations/Iman/paper_19.txt.json +26 -0
- annotations/Iman/paper_20.txt.json +18 -0
- annotations/Iman/paper_64.txt.json +74 -0
- annotations/Iman/paper_65.txt.json +74 -0
- annotations/Iman/paper_66.txt.json +18 -0
- annotations/Iman/paper_67.txt.json +26 -0
- annotations/Iman/paper_68.txt.json +34 -0
- annotations/Iman/paper_69.txt.json +74 -0
- annotations/Iman/paper_70.txt.json +10 -0
- annotations/Iman/paper_71.txt.json +10 -0
- annotations/Iman/paper_72.txt.json +50 -0
- annotations/Iman/paper_73.txt.json +18 -0
- annotations/Iman/paper_74.txt.json +18 -0
- annotations/Iman/paper_75.txt.json +1 -0
- annotations/Iman/paper_76.txt.json +34 -0
- annotations/Iman/paper_78.txt.json +34 -0
- annotations/Iman/paper_79.txt.json +34 -0
- annotations/Iman/paper_80.txt.json +34 -0
- annotations/Iman/paper_81.txt.json +42 -0
- annotations/Iman/paper_82.txt.json +10 -0
- annotations/Iraa/.keep +0 -0
- annotations/Kaushal/.keep +0 -0
annotations/Ed/.keep
ADDED
File without changes
annotations/Ed/paper_11.txt.json
ADDED
@@ -0,0 +1,26 @@
[
  {
    "file": "paper_11.txt",
    "start": 78,
    "end": 562,
    "label": "Coherence",
    "user": "Ed",
    "text": "Cross-lingual information retrieval (CLIR) (Braschler et al., 1999;Shakery and Zhai, 2013;Jiang et al., 2020;Asai et al., 2021a), for example, can find relevant text in a high-resource language such as English even when the query is posed in a different, possibly low-resource, language. In this work, we develop useful CLIR models for this constrained, yet important, setting where a retrieval corpus is available only in a single high-resource language (English in our experiments)."
  },
  {
    "file": "paper_11.txt",
    "start": 797,
    "end": 960,
    "label": "Unsupported claim",
    "user": "Ed",
    "text": " alternative end-to-end approach that can tackle the problem purely cross-lingually, i.e., without involving MT, would clearly be more efficient and cost-effective"
  },
  {
    "file": "paper_11.txt",
    "start": 2235,
    "end": 2456,
    "label": "Lacks synthesis",
    "user": "Ed",
    "text": " KD (Hinton et al., 2014) is a powerful supervision technique typically used to distill the knowledge of a large teacher model about some task into a smaller student model (Mukherjee and Awadallah, 2020;Turc et al., 2020)"
  }
]
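Each annotation file in this commit is a JSON array of span annotations sharing the same six fields (`file`, `start`, `end`, `label`, `user`, `text`), where `start`/`end` are character offsets into the annotated paper. As an illustrative sketch (the loader function and its checks are assumptions, not code from this repository), loading and sanity-checking one file might look like:

```python
import json

REQUIRED_FIELDS = {"file", "start", "end", "label", "user", "text"}

def load_annotations(path):
    """Load one annotation file and check the span invariants assumed here."""
    with open(path, encoding="utf-8") as f:
        annotations = json.load(f)
    for ann in annotations:
        # Every entry carries the same six fields.
        assert REQUIRED_FIELDS <= ann.keys(), f"missing fields in {path}"
        # Character offsets delimit the annotated span in the source paper.
        assert 0 <= ann["start"] < ann["end"], f"bad span in {path}"
    return annotations
```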
annotations/Ed/paper_13.txt.json
ADDED
@@ -0,0 +1,34 @@
[
  {
    "file": "paper_13.txt",
    "start": 14,
    "end": 444,
    "label": "Lacks synthesis",
    "user": "Ed",
    "text": "Few-shot learning is the problem of learning classifiers with only a few training examples. Zero-shot learning (Larochelle et al., 2008), also known as dataless classification (Chang et al., 2008), is the extreme case, in which no labeled data is used. For text data, this is usually accomplished by representing the labels of the task in a textual form, which can either be the name of the label or a concise textual description."
  },
  {
    "file": "paper_13.txt",
    "start": 1321,
    "end": 2033,
    "label": "Lacks synthesis",
    "user": "Ed",
    "text": "These models embed both input and label texts into a common vector space. The similarity of the two items can then be computed using a similarity function such as the dot product. The advantage is that input and label text are encoded independently, which means that the label embeddings can be pre-computed. Therefore, at inference time, only a single call to the model per input is needed. In contrast, the models typically applied in the entailment approach are Cross Attention (CA) models which need to be executed for every combination of text and label. On the other hand, they allow for interaction between the tokens of label and input, so that in theory they should be superior in classification accurac"
  },
  {
    "file": "paper_13.txt",
    "start": 1713,
    "end": 1880,
    "label": "Unsupported claim",
    "user": "Ed",
    "text": "In contrast, the models typically applied in the entailment approach are Cross Attention (CA) models which need to be executed for every combination of text and label."
  },
  {
    "file": "paper_13.txt",
    "start": 3365,
    "end": 3395,
    "label": "Unsupported claim",
    "user": "Ed",
    "text": "In contrast to most prior work"
  }
]
annotations/Ed/paper_14.txt.json
ADDED
@@ -0,0 +1,26 @@
[
  {
    "file": "paper_14.txt",
    "start": 182,
    "end": 310,
    "label": "Unsupported claim",
    "user": "Ed",
    "text": "Unfortunately, for many languages, and especially low-resource languages, such taskspecific labelled data is often not available"
  },
  {
    "file": "paper_14.txt",
    "start": 2549,
    "end": 2644,
    "label": "Unsupported claim",
    "user": "Ed",
    "text": "as this is the only task for which high-quality data is available in a large number of language"
  },
  {
    "file": "paper_14.txt",
    "start": 2833,
    "end": 2980,
    "label": "Unsupported claim",
    "user": "Ed",
    "text": "a base understanding of syntactic structure in both the source and target language is necessary for any meaningful natural language processing task"
  }
]
annotations/Ed/paper_15.txt.json
ADDED
@@ -0,0 +1,10 @@
[
  {
    "file": "paper_15.txt",
    "start": 897,
    "end": 902,
    "label": "Format",
    "user": "Ed",
    "text": "2021)"
  }
]
annotations/Ed/paper_16.txt.json
ADDED
@@ -0,0 +1,26 @@
[
  {
    "file": "paper_16.txt",
    "start": 14,
    "end": 414,
    "label": "Coherence",
    "user": "Ed",
    "text": "To facilitate the study of text summarization, earlier datasets are mostly in the news domain with relatively short input passages, such as NYT (Sandhaus, 2008), Gigaword (Napoles et al., 2012), CNN/Daily Mail (Hermann et al., 2015), NEWSROOM (Grusky et al., 2018) and XSUM (Narayan et al., 2018). Datasets for long docu-ments include Sharma et al. (2019), Cohan et al. (2018), andFisas et al. (2016)"
  },
  {
    "file": "paper_16.txt",
    "start": 14,
    "end": 414,
    "label": "Lacks synthesis",
    "user": "Ed",
    "text": "To facilitate the study of text summarization, earlier datasets are mostly in the news domain with relatively short input passages, such as NYT (Sandhaus, 2008), Gigaword (Napoles et al., 2012), CNN/Daily Mail (Hermann et al., 2015), NEWSROOM (Grusky et al., 2018) and XSUM (Narayan et al., 2018). Datasets for long docu-ments include Sharma et al. (2019), Cohan et al. (2018), andFisas et al. (2016)"
  },
  {
    "file": "paper_16.txt",
    "start": 713,
    "end": 1264,
    "label": "Lacks synthesis",
    "user": "Ed",
    "text": "Researchers recently explore the peer review domain data for a few tasks, such as PeerRead (Kang et al., 2018) for paper decision predictions, AM-PERE for proposition classification in reviews, and RR (Cheng et al., 2020) for paired-argument extraction from review-rebuttal pairs. Additionally, a meta-review dataset is introduced by Bhatia et al. (2020) without any annotation. There are also some explorations on research articles (Teufel et al., 1999;Liakata et al., 2010;Lauscher et al., 2018), which differ in nature from the peer review domain."
  }
]
annotations/Ekaterina/paper_64.txt.json
ADDED
@@ -0,0 +1,98 @@
[
  {
    "file": "paper_64.txt",
    "start": 171,
    "end": 194,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Feinman et al., 2017) "
  },
  {
    "file": "paper_64.txt",
    "start": 367,
    "end": 384,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Ma et al., 2018)"
  },
  {
    "file": "paper_64.txt",
    "start": 564,
    "end": 587,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Carrara et al., 2019b)"
  },
  {
    "file": "paper_64.txt",
    "start": 782,
    "end": 784,
    "label": "Format",
    "user": "Ekaterina",
    "text": "( "
  },
  {
    "file": "paper_64.txt",
    "start": 1070,
    "end": 1087,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Li and Li, 2016)"
  },
  {
    "file": "paper_64.txt",
    "start": 1560,
    "end": 1577,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Li and Li, 2016)"
  },
  {
    "file": "paper_64.txt",
    "start": 1728,
    "end": 1746,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Mao et al., 2019)"
  },
  {
    "file": "paper_64.txt",
    "start": 2059,
    "end": 2079,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Cohen et al., 2019)"
  },
  {
    "file": "paper_64.txt",
    "start": 2293,
    "end": 2314,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Smith and Gal, 2018)"
  },
  {
    "file": "paper_64.txt",
    "start": 3527,
    "end": 3745,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "To tackle such word-level, semantically similar examples, designed a discriminator to classify each token representation as part of an adversarial perturbation or not, which is then used to 'correct' the perturbation."
  },
  {
    "file": "paper_64.txt",
    "start": 3939,
    "end": 3959,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Mozes et al., 2020)"
  },
  {
    "file": "paper_64.txt",
    "start": 14,
    "end": 4173,
    "label": "Lacks synthesis",
    "user": "Ekaterina",
    "text": "Previous work in the image domain has analysed the output of specific layers in an attempt to identify adversarial examples or adversarial subspaces. First, (Feinman et al., 2017) proposed that adversarial subspaces have a lower probability density, motivating the use of the Kernel Density (KD) metric to detect the adversarial examples. Nevertheless, (Ma et al., 2018) found Local Intrinsic Dimensionality (LID) was a better metric in defining the subspace for more complex data. In contrast to the local subspace focused approaches of KD and LID, (Carrara et al., 2019b) showed that trajectories of hidden layer features can be used to train a LSTM network to accurately discriminate between authentic and adversarial examples. Out performing all previous methods, ( introduced an effective detection framework using Mahalanobis Distance Analysis (MDA), where the distance is calculated between a test sample and the closest class-conditional Gaussian distribution in the space defined by the output of the final layer of the classifier (logit space). (Li and Li, 2016) also explored using the output of convolutional layers for image classification systems to identify statistics that distinguish adversarial samples from original samples. They find that by performing a PCA decomposition the statistical variation in the least principal directions is the most significant and can be used to separate original and adversarial samples. However, they argue this is ineffective as an adversary can easily suppress the tail distribution. Hence, (Li and Li, 2016) extract statistics from the convolutional layer output to train a cascade classifier to separate the original and adversarial samples. \nMost recently, (Mao et al., 2019) avoid the use of artificially designed metrics and combine the adversarial subspace identification stage and the detecting adversaries stage into a single framework, where a parametric model adaptively learns the deep features for detecting adversaries.\n\nIn contrast to the embedding space detection approaches, (Cohen et al., 2019) shows that influence functions combined with Nearest Neighbour distances perform comparably or better than the above standard detection approaches. Other detection approaches have explored the use of uncertainty: (Smith and Gal, 2018) argues that adversarial examples are out of distribution and do not lie on the manifold of real data. Hence, a discriminative Bayesian model's epistemic (model) uncertainty should be high. Therefore, calculations of the model uncertainty are thought to be useful in detecting adversarial examples, independent of the domain. However, Bayesian approaches aren't always practical in implementation and thus many different approaches to approximate this uncertainty have been suggested in literature (Leibig et al., 2017;Gal, 2016;Gal and Ghahramani, 2016).\n\nThere are a number of existing NLP specific detection approaches. For character level attacks, detection approaches have exploited the grammatical (Sakaguchi et al., 2017) and spelling (Mays et al., 1991;Islam and Inkpen, 2009) inconsistencies to identify and detect the adversarial samples. However, these character level attacks are unlikely to be employed in practice due to the simplicity with which they can be detected. Therefore, detection approaches for the more difficult semantically similar attack samples are of greater interest, where the meaning of the textual input is maintained without compromising the spelling or gram-matical integrity. \nTo tackle such word-level, semantically similar examples, designed a discriminator to classify each token representation as part of an adversarial perturbation or not, which is then used to 'correct' the perturbation. Other detection approaches (Raina et al., 2020;Han et al., 2020;Minervini and Riedel, 2018) have shown some success in using perplexity to identify adversarial textual examples. Most recently, (Mozes et al., 2020) achieved state of the art performance with the Frequency Guided Word Substitution (FGWS) detector, where a change in model prediction after substituting out low frequency words is revealing of adversarial samples."
  }
]
annotations/Ekaterina/paper_65.txt.json
ADDED
@@ -0,0 +1,90 @@
[
  {
    "file": "paper_65.txt",
    "start": 44,
    "end": 120,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "Open-Domain Passage Retrieval has been a hot research topic in recent years."
  },
  {
    "file": "paper_65.txt",
    "start": 777,
    "end": 782,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "BERT "
  },
  {
    "file": "paper_65.txt",
    "start": 920,
    "end": 1034,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "Although enjoying satisfying retrieval accuracy, the retrieval latency is often hard to tolerate in practical use."
  },
  {
    "file": "paper_65.txt",
    "start": 1035,
    "end": 1115,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "More recently, the Bi-Encoder structure has captured the researchers' attention."
  },
  {
    "file": "paper_65.txt",
    "start": 1269,
    "end": 1342,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": " first proposes to pretrain the Bi-Encoder with Inverse Cloze Task (ICT)."
  },
  {
    "file": "paper_65.txt",
    "start": 1850,
    "end": 1916,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "Our method follows the contrastive learning research line of ODPR."
  },
  {
    "file": "paper_65.txt",
    "start": 2199,
    "end": 2278,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "Contrastive learning recently is attracting researchers' attention in all area."
  },
  {
    "file": "paper_65.txt",
    "start": 2337,
    "end": 2353,
    "label": "Format",
    "user": "Ekaterina",
    "text": "He et al., 2020)"
  },
  {
    "file": "paper_65.txt",
    "start": 2407,
    "end": 2484,
    "label": "Format",
    "user": "Ekaterina",
    "text": "Karpukhin et al., 2020;Yan et al., 2021;Giorgi et al., 2021;Gao et al., 2021)"
  },
  {
    "file": "paper_65.txt",
    "start": 2505,
    "end": 2509,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "ODPR"
  },
  {
    "file": "paper_65.txt",
    "start": 2684,
    "end": 2707,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Manmatha et al., 2017)"
  }
]
annotations/Ekaterina/paper_66.txt.json
ADDED
@@ -0,0 +1,50 @@
[
  {
    "file": "paper_66.txt",
    "start": 421,
    "end": 457,
    "label": "Format",
    "user": "Ekaterina",
    "text": "Lourie et al., 2021;Li et al., 2021)"
  },
  {
    "file": "paper_66.txt",
    "start": 3356,
    "end": 3358,
    "label": "Format",
    "user": "Ekaterina",
    "text": " 1"
  },
  {
    "file": "paper_66.txt",
    "start": 3391,
    "end": 3493,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "Pretrained V&L models learn to combine vision and language through self-supervised multitask learning."
  },
  {
    "file": "paper_66.txt",
    "start": 3735,
    "end": 4100,
    "label": "Unsupported claim",
    "user": "Ekaterina",
    "text": "Major architectures are single-and dualstream multimodal transformers: single-stream models concatenate word and image features, and encode the resulting sequence with a single transformer stack; dual-stream models use distinct transformer stacks to handle visual and textual inputs, and additional layers (e.g. co-attention) to fuse these into multimodal features."
  },
  {
    "file": "paper_66.txt",
    "start": 5029,
    "end": 5048,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Levesque et al., 1"
  },
  {
    "file": "paper_66.txt",
    "start": 5158,
    "end": 5185,
    "label": "Format",
    "user": "Ekaterina",
    "text": "2012; Gardner et al., 2020)"
  }
]
annotations/Ekaterina/paper_67.txt.json
ADDED
@@ -0,0 +1,18 @@
[
  {
    "file": "paper_67.txt",
    "start": 1050,
    "end": 1094,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Houlsby et al., 2019;Merchant et al., 2020)"
  },
  {
    "file": "paper_67.txt",
    "start": 14,
    "end": 1627,
    "label": "Lacks synthesis",
    "user": "Ekaterina",
    "text": "The standard practice of using BERT is fine-tuning, i.e. the entirety of the model parameters is adjusted on the training corpus of the downstream task, so that the model is adapted to that specific task (Devlin et al., 2019). There is also an alternative feature-based approach, used by ELMo (Peters et al., 2018). In the latter approach, the pre-trained model is regarded as a feature extractor with frozen parameters. During the learning of a downstream task, one feeds a fixed or learnable combination of the model's intermediate representations as input to the task-specific module, and only the parameters of the latter will be updated. It has been shown that the fine-tuning approach is generally superior to the feature-based approach for BERT in terms of task performance (Devlin et al., 2019;Peters et al., 2019).\n\nA natural middle ground between these two approaches is partial fine-tuning, i.e. only fine-tuning some topmost layers of BERT while keeping the remaining bottom layers frozen. This approach has been studied in (Houlsby et al., 2019;Merchant et al., 2020), where the authors observed that finetuning only the top layers can almost achieve the performance of full fine-tuning on several GLUE tasks. The approach of partial fine-tuning essentially regards the bottom layers of BERT as a feature extractor. Freezing weights from bottom layers is a sensible idea as previous studies show that the mid layer representations produced by BERT are most transferrable, whereas the top layers representations are more task-oriented (Wang et al., 2019;Tenney et al., 2019b,a;Merchant et al., 2020).\n"
  }
]
annotations/Ekaterina/paper_68.txt.json
ADDED
@@ -0,0 +1,58 @@
[
  {
    "file": "paper_68.txt",
    "start": 812,
    "end": 1024,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Chalkidis and Kampas, 2018;Aletras et al., 2019Aletras et al., , 2020Zhong et al., 2020b; (Aletras et al., 2016;Sim et al., 2016;Katz et al., 2017;Zhong et al., 2018;Chalkidis et al., 2019a;Malik et al., 2021)"
  },
  {
    "file": "paper_68.txt",
    "start": 1070,
    "end": 1137,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Chalkidis et al., , 2019cChen et al., 2020;Hendrycks et al., 2021)"
  },
  {
    "file": "paper_68.txt",
    "start": 1308,
    "end": 1385,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Nallapati and Manning, 2008;Chalkidis et al., 2019bChalkidis et al., , 2020a"
  },
  {
    "file": "paper_68.txt",
    "start": 1503,
    "end": 1547,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Chalkidis et al., 2020b;Zheng et al., 2021;"
  },
  {
    "file": "paper_68.txt",
    "start": 1931,
    "end": 1970,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Wang et al., 2018(Wang et al., , 2019b"
  },
  {
    "file": "paper_68.txt",
    "start": 3591,
    "end": 3630,
    "label": "Format",
    "user": "Ekaterina",
    "text": "(Wang et al., 2018(Wang et al., , 2019b"
  },
  {
    "file": "paper_68.txt",
    "start": 4182,
    "end": 4184,
    "label": "Format",
    "user": "Ekaterina",
    "text": " 1"
  }
]
annotations/Iman/.keep
ADDED
File without changes
annotations/Iman/paper_100.txt.json
ADDED
@@ -0,0 +1,26 @@
[
  {
    "file": "paper_100.txt",
    "start": 217,
    "end": 1647,
    "label": "Lacks synthesis",
    "user": "Iman",
    "text": " A majority of existing works focus on perfect pinyin. Traditional models are typically based on statistical language models (Chen and Lee, 2000) and statistical machine translation (Yang et al., 2012). Recent works are usually built with neural network. For example, Moon IME (Huang et al., 2018) integrates attention-based neural network and an information retrieval module. Zhang et al. (2019) improves an LSTM-\nbased encoder-decoder model with online vocabulary adaptation. For abbreviated pinyin, CoCAT (Huang et al., 2015) uses machine translation technology to reduce the number of the typing letters. Huang and Zhao (2018) propose an LSTM-based encoder-decoder approach with the concatenation of context words and abbreviated pinyin as input. Our work differs from existing works in that we are the first one to exploit GPT and verify the pros and cons of GPT in different situations. In addition, there are some works handling\npinyin with typing errors. Chen and Lee (2000) investigate a typing model which handles spelling correction in sentence-based pinyin input method. CHIME (Zheng et al., 2011) is a error-tolerant Chinese pinyin input method. It finds similar pinyin which will be further ranked with Chinese specific features. Jia and Zhao (2014) propose a joint graph model to globally optimize the tasks of pinyin input method and typo correction. We leave error-tolerant pinyin input method as a future work. \n"
  },
  {
    "file": "paper_100.txt",
    "start": 217,
    "end": 1647,
    "label": "Coherence",
    "user": "Iman",
    "text": " A majority of existing works focus on perfect pinyin. Traditional models are typically based on statistical language models (Chen and Lee, 2000) and statistical machine translation (Yang et al., 2012). Recent works are usually built with neural network. For example, Moon IME (Huang et al., 2018) integrates attention-based neural network and an information retrieval module. Zhang et al. (2019) improves an LSTM-\nbased encoder-decoder model with online vocabulary adaptation. For abbreviated pinyin, CoCAT (Huang et al., 2015) uses machine translation technology to reduce the number of the typing letters. Huang and Zhao (2018) propose an LSTM-based encoder-decoder approach with the concatenation of context words and abbreviated pinyin as input. Our work differs from existing works in that we are the first one to exploit GPT and verify the pros and cons of GPT in different situations. In addition, there are some works handling\npinyin with typing errors. Chen and Lee (2000) investigate a typing model which handles spelling correction in sentence-based pinyin input method. CHIME (Zheng et al., 2011) is a error-tolerant Chinese pinyin input method. It finds similar pinyin which will be further ranked with Chinese specific features. Jia and Zhao (2014) propose a joint graph model to globally optimize the tasks of pinyin input method and typo correction. We leave error-tolerant pinyin input method as a future work. \n"
  },
  {
    "file": "paper_100.txt",
    "start": 1684,
    "end": 2529,
    "label": "Lacks synthesis",
    "user": "Iman",
    "text": "Our methodology also relates to pretrained models that use pinyin information. Sun et al. (2021) propose a general-purpose Chinese BERT with\nnew embedding layers to inject pinyin and glyph information of characters. There are also task-specific BERT models, especially for the task of grammatical error correction since an important type of error is caused by characters pronounced with the same pinyin. Zhang et al. (2021a) add a pinyin embedding layer and learns to predict characters from similarly pronounced candidates. PLOME (Liu et al., 2021) add two embedding layers implemented with two GRU networks to inject both pinyin and shape of characters, respectively. Xu et al. (2021) add a hierarchical encoder to inject the pinyin letters at character and sentence levels, and add a ResNet encoder to use graphic features of character image."
  }
]
annotations/Iman/paper_11.txt.json
ADDED
@@ -0,0 +1,10 @@
[
    {
        "file": "paper_11.txt",
        "start": 1960,
        "end": 2457,
        "label": "Coherence",
        "user": "Iman",
        "text": " Can a highperformance CLIR model be trained that can operate without having to rely on MT? To answer the question, instead of viewing the MT-based approach as a competing one, we propose to leverage its strength via knowledge distillation (KD) into an end-to-end CLIR model. KD (Hinton et al., 2014) is a powerful supervision technique typically used to distill the knowledge of a large teacher model about some task into a smaller student model (Mukherjee and Awadallah, 2020;Turc et al., 2020)."
    }
]
annotations/Iman/paper_13.txt.json
ADDED
@@ -0,0 +1,34 @@
[
    {
        "file": "paper_13.txt",
        "start": 14,
        "end": 105,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "Few-shot learning is the problem of learning classifiers with only a few training examples."
    },
    {
        "file": "paper_13.txt",
        "start": 1713,
        "end": 1881,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "In contrast, the models typically applied in the entailment approach are Cross Attention (CA) models which need to be executed for every combination of text and label. "
    },
    {
        "file": "paper_13.txt",
        "start": 1785,
        "end": 1814,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": " Cross Attention (CA) models "
    },
    {
        "file": "paper_13.txt",
        "start": 446,
        "end": 932,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "In recent years, there has been a surge in zeroshot and few-shot approaches to text classification. One approach (Yin et al., 2019, 2020; Halder et al., 2020;Wang et al., 2021) makes use of entailment models. Textual entailment (Dagan et al., 2006), also known as natural language inference (NLI) (Bowman et al., 2015), is the problem of predicting whether a textual premise implies a textual hypothesis in a logical sense. For example, Emma loves apples implies that Emma likes apples."
    }
]
annotations/Iman/paper_14.txt.json
ADDED
@@ -0,0 +1,18 @@
[
    {
        "file": "paper_14.txt",
        "start": 1993,
        "end": 2185,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "The language selection does, however, obfuscate the fact that for most non-Indo-European and low-resource languages no data is available for semantically rich tasks such as question answering."
    },
    {
        "file": "paper_14.txt",
        "start": 181,
        "end": 311,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": " Unfortunately, for many languages, and especially low-resource languages, such taskspecific labelled data is often not available."
    }
]
annotations/Iman/paper_15.txt.json
ADDED
@@ -0,0 +1,18 @@
[
    {
        "file": "paper_15.txt",
        "start": 896,
        "end": 902,
        "label": "Format",
        "user": "Iman",
        "text": " 2021)"
    },
    {
        "file": "paper_15.txt",
        "start": 649,
        "end": 1046,
        "label": "Coherence",
        "user": "Iman",
        "text": "Researchers also realize that the vision modality maybe redundant. Irrelevant images have little impact on the translation quality, and no significant BLEU drop is observed even the image is absent (Elliott, 2018). Encouraging results appeared in 2021) proposed a cross-lingual visual pretraining approach. In this work, we make a systematic study on whether stronger vision features are helpful."
    }
]
annotations/Iman/paper_16.txt.json
ADDED
@@ -0,0 +1 @@
[]
annotations/Iman/paper_17.txt.json
ADDED
@@ -0,0 +1,34 @@
[
    {
        "file": "paper_17.txt",
        "start": 1074,
        "end": 1083,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "BART-Gen "
    },
    {
        "file": "paper_17.txt",
        "start": 1650,
        "end": 2144,
        "label": "Coherence",
        "user": "Iman",
        "text": "It has been a rising interest in event extraction under less data scenario. Liu et al. (2020) uses a machine reading comprehension formulation to conduct event extraction in a low-resource regime. Text2Event (Lu et al., 2021), a sequence-to-structure generation paradigm, first presents events in a linearized format, and then trains a generative model to generate the linearized event sequence. Text2Event's unnatural output format hinders the model from fully leveraging pre-trained knowledge"
    },
    {
        "file": "paper_17.txt",
        "start": 1287,
        "end": 1397,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "All these fully supervised methods can achieve substantial performance with a large amount of annotated data. "
    },
    {
        "file": "paper_17.txt",
        "start": 1521,
        "end": 1528,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "DEGREE "
    }
]
annotations/Iman/paper_18.txt.json
ADDED
@@ -0,0 +1,26 @@
[
    {
        "file": "paper_18.txt",
        "start": 1195,
        "end": 1496,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "Recently, several works (Zhang and Yang, 2018;Gui et al., 2019;Li et al., 2020b) utilize external lexicon knowledge to help connect related characters and promote capturing the local composition. Nevertheless, building the lexicon is time-consuming and the quality of the lexicon may not be satisfied."
    },
    {
        "file": "paper_18.txt",
        "start": 3532,
        "end": 3574,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "(OntoNotes V4.0, OntoNotes V5.0, and MSRA)"
    },
    {
        "file": "paper_18.txt",
        "start": 3786,
        "end": 3791,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "CBLUE"
    }
]
annotations/Iman/paper_19.txt.json
ADDED
@@ -0,0 +1,26 @@
[
    {
        "file": "paper_19.txt",
        "start": 171,
        "end": 173,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "AL"
    },
    {
        "file": "paper_19.txt",
        "start": 2873,
        "end": 3072,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "Recently proposed alternatives to uncertaintybased query strategies leverage reinforcement learning and imitation learning (Fang et al., 2017;Liu et al., 2018;Vu et al., 2019;Brantley et al., 2020). "
    },
    {
        "file": "paper_19.txt",
        "start": 3422,
        "end": 3425,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "ASM"
    }
]
annotations/Iman/paper_20.txt.json
ADDED
@@ -0,0 +1,18 @@
[
    {
        "file": "paper_20.txt",
        "start": 762,
        "end": 1331,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "Templates in Data-Driven D2T Generation Using simple handcrafted templates for individual keys or predicates is an efficient way of introducing domain knowledge while preventing text-to-text models from overfitting to a specific data format (Heidari et al., 2021;Kale and Rastogi, 2020a;. Transforming individual triples to text is also used in Laha et al. (2020) whose work is the most similar to ours. They also build a three-step pipeline for zero-shot D2T generation, but they use handcrafted rules for producing the output text and do not address content planning."
    },
    {
        "file": "paper_20.txt",
        "start": 2087,
        "end": 2660,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "\n\nSentence Ordering Sentence ordering is the task of organizing a set of natural language sentences to increase the coherence of a text (Barzilay et al., 2001;Lapata, 2003). Several neural methods for this task were proposed, using either interactions between pairs of sentences Li and Jurafsky, 2017), global interactions (Gong et al., 2016;Wang and Wan, 2019), or combination of both (Cui et al., 2020). We base our ordering module ( §5.1) on the recent work of Calizzano et al. (2021), who use a pointer network (Wang and Wan, 2019;Vinyals et al., 2015) on top of a PLM."
    }
]
annotations/Iman/paper_64.txt.json
ADDED
@@ -0,0 +1,74 @@
[
    {
        "file": "paper_64.txt",
        "start": 290,
        "end": 309,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "Kernel Density (KD)"
    },
    {
        "file": "paper_64.txt",
        "start": 367,
        "end": 384,
        "label": "Format",
        "user": "Iman",
        "text": "(Ma et al., 2018)"
    },
    {
        "file": "paper_64.txt",
        "start": 564,
        "end": 588,
        "label": "Format",
        "user": "Iman",
        "text": "(Carrara et al., 2019b) "
    },
    {
        "file": "paper_64.txt",
        "start": 1070,
        "end": 1087,
        "label": "Format",
        "user": "Iman",
        "text": "(Li and Li, 2016)"
    },
    {
        "file": "paper_64.txt",
        "start": 1560,
        "end": 1577,
        "label": "Format",
        "user": "Iman",
        "text": "(Li and Li, 2016)"
    },
    {
        "file": "paper_64.txt",
        "start": 1728,
        "end": 1746,
        "label": "Format",
        "user": "Iman",
        "text": "(Mao et al., 2019)"
    },
    {
        "file": "paper_64.txt",
        "start": 2059,
        "end": 2079,
        "label": "Format",
        "user": "Iman",
        "text": "(Cohen et al., 2019)"
    },
    {
        "file": "paper_64.txt",
        "start": 2293,
        "end": 2314,
        "label": "Format",
        "user": "Iman",
        "text": "(Smith and Gal, 2018)"
    },
    {
        "file": "paper_64.txt",
        "start": 170,
        "end": 193,
        "label": "Format",
        "user": "Iman",
        "text": " (Feinman et al., 2017)"
    }
]
annotations/Iman/paper_65.txt.json
ADDED
@@ -0,0 +1,74 @@
[
    {
        "file": "paper_65.txt",
        "start": 606,
        "end": 711,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "they fail to capture non-lexical semantic similarity, thus performing unsatisfying on retrieval accuracy."
    },
    {
        "file": "paper_65.txt",
        "start": 776,
        "end": 782,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": " BERT "
    },
    {
        "file": "paper_65.txt",
        "start": 1035,
        "end": 1115,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "More recently, the Bi-Encoder structure has captured the researchers' attention."
    },
    {
        "file": "paper_65.txt",
        "start": 1316,
        "end": 1341,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": " Inverse Cloze Task (ICT)"
    },
    {
        "file": "paper_65.txt",
        "start": 1911,
        "end": 1915,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "ODPR"
    },
    {
        "file": "paper_65.txt",
        "start": 2337,
        "end": 2353,
        "label": "Format",
        "user": "Iman",
        "text": "He et al., 2020)"
    },
    {
        "file": "paper_65.txt",
        "start": 2406,
        "end": 2424,
        "label": "Format",
        "user": "Iman",
        "text": " Karpukhin et al.,"
    },
    {
        "file": "paper_65.txt",
        "start": 2684,
        "end": 2707,
        "label": "Format",
        "user": "Iman",
        "text": "(Manmatha et al., 2017)"
    },
    {
        "file": "paper_65.txt",
        "start": 3507,
        "end": 3511,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "ODPR"
    }
]
ADDED
|
@@ -0,0 +1,18 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"file": "paper_66.txt",
|
| 4 |
+
"start": 420,
|
| 5 |
+
"end": 440,
|
| 6 |
+
"label": "Format",
|
| 7 |
+
"user": "Iman",
|
| 8 |
+
"text": " Lourie et al., 2021"
|
| 9 |
+
},
|
| 10 |
+
{
|
| 11 |
+
"file": "paper_66.txt",
|
| 12 |
+
"start": 5029,
|
| 13 |
+
"end": 5048,
|
| 14 |
+
"label": "Format",
|
| 15 |
+
"user": "Iman",
|
| 16 |
+
"text": "(Levesque et al., 1"
|
| 17 |
+
}
|
| 18 |
+
]
|
annotations/Iman/paper_67.txt.json
ADDED
@@ -0,0 +1,26 @@
[
    {
        "file": "paper_67.txt",
        "start": 45,
        "end": 49,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "BERT"
    },
    {
        "file": "paper_67.txt",
        "start": 1216,
        "end": 1236,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": " several GLUE tasks."
    },
    {
        "file": "paper_67.txt",
        "start": 1237,
        "end": 1342,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "The approach of partial fine-tuning essentially regards the bottom layers of BERT as a feature extractor."
    }
]
annotations/Iman/paper_68.txt.json
ADDED
@@ -0,0 +1,34 @@
[
    {
        "file": "paper_68.txt",
        "start": 856,
        "end": 903,
        "label": "Format",
        "user": "Iman",
        "text": "2019Aletras et al., , 2020Zhong et al., 2020b; "
    },
    {
        "file": "paper_68.txt",
        "start": 1502,
        "end": 1548,
        "label": "Format",
        "user": "Iman",
        "text": " (Chalkidis et al., 2020b;Zheng et al., 2021;."
    },
    {
        "file": "paper_68.txt",
        "start": 703,
        "end": 1548,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "Natural language understanding (NLU) technologies can assist legal practitioners in a variety of legal tasks (Chalkidis and Kampas, 2018;Aletras et al., 2019Aletras et al., , 2020Zhong et al., 2020b; (Aletras et al., 2016;Sim et al., 2016;Katz et al., 2017;Zhong et al., 2018;Chalkidis et al., 2019a;Malik et al., 2021), information extraction from legal documents (Chalkidis et al., , 2019cChen et al., 2020;Hendrycks et al., 2021) and case summarization (Bhattacharya et al., 2019) to legal question answering (Ravichander et al., 2019;Kien et al., 2020;Zhong et al., 2020a,c) and text classification (Nallapati and Manning, 2008;Chalkidis et al., 2019bChalkidis et al., , 2020a. Transformer models (Vaswani et al., 2017) pre-trained on legal, rather than generic, corpora have also been studied (Chalkidis et al., 2020b;Zheng et al., 2021;."
    },
    {
        "file": "paper_68.txt",
        "start": 2171,
        "end": 2177,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "DALL-E"
    }
]
annotations/Iman/paper_69.txt.json
ADDED
@@ -0,0 +1,74 @@
[
    {
        "file": "paper_69.txt",
        "start": 490,
        "end": 507,
        "label": "Format",
        "user": "Iman",
        "text": "Lyu et al., 2020;"
    },
    {
        "file": "paper_69.txt",
        "start": 898,
        "end": 1058,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": " use a convolutional neural network with Monte Carlo dropout in order to obtain an uncertainty estimate for active learning in the task of image classification."
    },
    {
        "file": "paper_69.txt",
        "start": 510,
        "end": 1266,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "Despite their obvious advantage of modeling uncertainty, the main problem with Bayesian deep learning methods is the computational cost of full Bayesian inference. To tackle this problem, Gal and Ghahramani (2016) propose using standard dropout (Srivastava et al., 2014) as a practical approximation of Bayesian inference in deep neural networks and call this method Monte Carlo dropout. use a convolutional neural network with Monte Carlo dropout in order to obtain an uncertainty estimate for active learning in the task of image classification. Houlsby et al. (2011) sample many networks with Monte Carlo simulation and propose an objective function that takes into account the disagreement and confidence of the predictions coming from these networks."
    },
    {
        "file": "paper_69.txt",
        "start": 1268,
        "end": 1942,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "Similar methods have also been applied to NLP. In machine translation, extend the Transformer architecture with MC dropout to get a Variational Transformer, and use it to sample multiple translations from the approximate posterior distribution. They also introduce BLEUVar, an uncertainty metric based on the BLEU score (Papineni et al., 2002) between pairs of the generated translations. Lyu et al. (2020) extend the work of to question answering and propose an active learning approach based on a modified BLEUVar version. Similarly, use a conditional random field to obtain uncertainty estimates for active learning and apply their method to named entity recognition.\n"
    },
    {
        "file": "paper_69.txt",
        "start": 2332,
        "end": 2338,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "CNN/DM"
    },
    {
        "file": "paper_69.txt",
        "start": 2343,
        "end": 2348,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "XSum "
    },
    {
        "file": "paper_69.txt",
        "start": 2358,
        "end": 2366,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "PEGASUS "
    },
    {
        "file": "paper_69.txt",
        "start": 2370,
        "end": 2375,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "BART "
    },
    {
        "file": "paper_69.txt",
        "start": 1943,
        "end": 2547,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "Although summarization is a prominent NLP task, summarization uncertainty has not been widely studied. is the only work that focuses on uncertainty for summarization, but their work does not make use of Bayesian methods. They define a generated summary's uncertainty based on the entropy of each token generated by the model during the decoding phase. Their study includes experiments on CNN/DM and XSum using the PEGASUS and BART summarization models. Their main focus is on understanding different properties of uncertainty during the decoding phase, and their work is not directly comparable to ours."
    }
]
annotations/Iman/paper_70.txt.json
ADDED
@@ -0,0 +1,10 @@
[
    {
        "file": "paper_70.txt",
        "start": 14,
        "end": 1246,
        "label": "Coherence",
        "user": "Iman",
        "text": "Automatic Readability Assessment is the task of assigning a reading level for a given text. It is useful in many applications such as selecting age appropriate texts in classrooms (Sheehan et al., 2014), assessment of patient education materials (Sare et al., 2020) and clinical informed consent forms (Perni et al., 2019), measuring the readability of financial disclosures (Loughran and McDonald, 2014), and so on. Contemporary NLP approaches treat it primarily as a classification problem. This approach makes it non-transferable to situations where the reading level scale in the test data doesn't match the one in the training set. Applying learning to rank methods has been seen as a potential solution to this problem in the past. Ranking texts by readability is also useful in a range of application scenarios, from ranking search results based on readability (Kim et al., 2012;Fourney et al., 2018) to controlling the reading level of machine translation output (Agrawal and Carpuat, 2019;Marchisio et al., 2019). However, exploration of ranking methods has not been a prominent direction for ARA research. Further, recent developments in neural ranking approaches haven't been explored for this task yet, to our knowledge."
    }
]
annotations/Iman/paper_71.txt.json
ADDED
@@ -0,0 +1,10 @@
[
    {
        "file": "paper_71.txt",
        "start": 1481,
        "end": 1577,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "These findings confirm results from prior work using other methods, while revealing new details."
    }
]
annotations/Iman/paper_72.txt.json
ADDED
@@ -0,0 +1,50 @@
[
    {
        "file": "paper_72.txt",
        "start": 14,
        "end": 719,
        "label": "Coherence",
        "user": "Iman",
        "text": "Sparse attention mechanism The full attention mechanism has a quadratic memory cost. Prior research works have proposed different sparse attention mechanisms to reduce the memory cost. Longformer (Beltagy et al., 2020) uses a dilated sliding window of blocks and global attention patterns. BigBird (Zaheer et al., 2020) employs sliding window and random blocks. Reformer (Kitaev et al., 2020) uses the locality-sensitive hashing. In addition to optimizing the encoder self-attention, Huang et al. (2021) proposes head-wise positional strides to reduce the cost of the encoder-decoder attention. However, sparse attention diminishes the benefits of pretraining and sacrifices parts of the receptive field.\n"
    },
    {
        "file": "paper_72.txt",
        "start": 14,
        "end": 718,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "Sparse attention mechanism The full attention mechanism has a quadratic memory cost. Prior research works have proposed different sparse attention mechanisms to reduce the memory cost. Longformer (Beltagy et al., 2020) uses a dilated sliding window of blocks and global attention patterns. BigBird (Zaheer et al., 2020) employs sliding window and random blocks. Reformer (Kitaev et al., 2020) uses the locality-sensitive hashing. In addition to optimizing the encoder self-attention, Huang et al. (2021) proposes head-wise positional strides to reduce the cost of the encoder-decoder attention. However, sparse attention diminishes the benefits of pretraining and sacrifices parts of the receptive field."
    },
    {
        "file": "paper_72.txt",
        "start": 720,
        "end": 1470,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "Extract-then-generate method The model first extracts salient text snippets from the input, followed by generating a concise overall summary. Most two-stage summarization approaches (Zhang et al., 2019;Lebanoff et al., 2019;Xu and Durrett, 2019;Bajaj et al., 2021) are trained separately, which suffer from information loss due to the cascaded errors. Some approaches attempt to reduce that loss by bridging the two stages. Chen and Bansal (2018) adopts reinforcement learning with a sentence-level policy gradient method. Bae et al. (2019) proposes summary-level policy gradient. In addition to the drawbacks explained in Section 2.3, our model is different as we jointly train an extractthen-generate model for summarization using latent variables."
    },
    {
        "file": "paper_72.txt",
        "start": 720,
        "end": 1470,
        "label": "Coherence",
        "user": "Iman",
        "text": "Extract-then-generate method The model first extracts salient text snippets from the input, followed by generating a concise overall summary. Most two-stage summarization approaches (Zhang et al., 2019;Lebanoff et al., 2019;Xu and Durrett, 2019;Bajaj et al., 2021) are trained separately, which suffer from information loss due to the cascaded errors. Some approaches attempt to reduce that loss by bridging the two stages. Chen and Bansal (2018) adopts reinforcement learning with a sentence-level policy gradient method. Bae et al. (2019) proposes summary-level policy gradient. In addition to the drawbacks explained in Section 2.3, our model is different as we jointly train an extractthen-generate model for summarization using latent variables."
    },
    {
        "file": "paper_72.txt",
        "start": 1471,
        "end": 1903,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "\nDivide-and-conquer approach A common approach in long input summarization is divide-andconquer (Gidiotis and Tsoumakas, 2020;Grail et al., 2021). This approach breaks a long input into multiple parts, which are summarized separately and combined to produce a final complete summary. However, these models do not capture the contextual dependencies across parts and assumes a certain structure of the input (such as paper sections)."
    },
    {
        "file": "paper_72.txt",
        "start": 1904,
        "end": 2536,
        "label": "Lacks synthesis",
        "user": "Iman",
        "text": "\nHierarchical models Various hierarchical models have been proposed to handle the longer inputs. Cohan et al. (2018) models the document discourse structure with a hierarchical encoder and a discourse-aware decoder to generate the summary. HAT-Bart (Rohde et al., 2021) proposes a new Hierarchical Attention Transformer-based architecture that attempts to capture sentence and paragraphlevel information. HMNet (Zhu et al., 2020) builds a hierarchical structure that includes discourselevel information and speaker roles. However, these models focus mainly on model performance and not on reducing the memory and computational cost."
    }
]
annotations/Iman/paper_73.txt.json
ADDED
@@ -0,0 +1,18 @@
[
    {
        "file": "paper_73.txt",
        "start": 2097,
        "end": 2113,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "Google Assistant"
    },
    {
        "file": "paper_73.txt",
        "start": 2117,
        "end": 2130,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": " OpenAI GPT-3"
    }
]
annotations/Iman/paper_74.txt.json
ADDED
@@ -0,0 +1,18 @@
[
    {
        "file": "paper_74.txt",
        "start": 311,
        "end": 491,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "The languages of South Asia, moreover, have a long recorded history, and have undergone complex change through genetic descent, sociolinguistic interactions, and contact influence."
    },
    {
        "file": "paper_74.txt",
        "start": 1101,
        "end": 1230,
        "label": "Unsupported claim",
        "user": "Iman",
        "text": "There is much data to be extracted for even the most endangered languages (e.g. Burushaski, a language isolate of the northwest),"
    }
]
annotations/Iman/paper_75.txt.json
ADDED
|
@@ -0,0 +1 @@
|
|
|
|
|
|
|
| 1 |
+
[]
|
annotations/Iman/paper_76.txt.json
ADDED
@@ -0,0 +1,34 @@
[
  {
    "file": "paper_76.txt",
    "start": 696,
    "end": 703,
    "label": "Format",
    "user": "Iman",
    "text": "( 2021)"
  },
  {
    "file": "paper_76.txt",
    "start": 1374,
    "end": 1400,
    "label": "Format",
    "user": "Iman",
    "text": " Khashabi et al., 2020;"
  },
  {
    "file": "paper_76.txt",
    "start": 1623,
    "end": 1640,
    "label": "Format",
    "user": "Iman",
    "text": " Yin et al., 2020"
  },
  {
    "file": "paper_76.txt",
    "start": 2061,
    "end": 2083,
    "label": "Format",
    "user": "Iman",
    "text": " Pilault et al., 2021)"
  }
]
annotations/Iman/paper_78.txt.json
ADDED
@@ -0,0 +1,34 @@
[
  {
    "file": "paper_78.txt",
    "start": 1882,
    "end": 2113,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": "Despite promising improvements on directly predicting unseen relations, ZS-BERT still makes wrong predictions due to similar relations or similar entities. The same problem arises in supervised methods under the zero-shot settings."
  },
  {
    "file": "paper_78.txt",
    "start": 2918,
    "end": 2935,
    "label": "Format",
    "user": "Iman",
    "text": "Gao et al., 2021;"
  },
  {
    "file": "paper_78.txt",
    "start": 2826,
    "end": 2995,
    "label": "Lacks synthesis",
    "user": "Iman",
    "text": "Recently, Instancewise Contrastive Learning (Instance-CL) (He et al., 2020;Yan et al., 2021;Gao et al., 2021; has achieved remarkable success in representation learning."
  },
  {
    "file": "paper_78.txt",
    "start": 4433,
    "end": 4440,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": "K-Means"
  }
]
annotations/Iman/paper_79.txt.json
ADDED
@@ -0,0 +1,34 @@
[
  {
    "file": "paper_79.txt",
    "start": 14,
    "end": 582,
    "label": "Lacks synthesis",
    "user": "Iman",
    "text": "Entity Linking Entity linking has been widely studied (Milne and Witten, 2008;Cucerzan, 2007;Lazic et al., 2015b;Gupta et al., 2017;Raiman and Raiman, 2018;Kolitsas et al., 2018;Cao et al., 2021, inter alia). Dutta and Weikum (2015) combine clustering-based cross-document coreference decisions and linking around sparse bag-of-word representations not well suited for the embedding- (Bagga and Baldwin, 1998;Gooi and Allan, 2004;Singh et al., 2011;Barhom et al., 2019;Cattan et al., 2020;Caciularu et al., 2021;Ravenscroft et al., 2021;Logan IV et al., inter alia)."
  },
  {
    "file": "paper_79.txt",
    "start": 584,
    "end": 1022,
    "label": "Lacks synthesis",
    "user": "Iman",
    "text": "Alternatives to Cross-Encoders Our work demonstrates how clustering-based training and prediction improves dual-encoder based models for linking and discovery. If prediction efficiency, and not training efficiency, was the only concern, one could use model distillation (Hinton et al., 2015;Izacard and Grave, 2021, inter alia). We could also consider models such as poly-encoders as an alternative to dual-encoders (Humeau et al., 2020)."
  },
  {
    "file": "paper_79.txt",
    "start": 14,
    "end": 582,
    "label": "Coherence",
    "user": "Iman",
    "text": "Entity Linking Entity linking has been widely studied (Milne and Witten, 2008;Cucerzan, 2007;Lazic et al., 2015b;Gupta et al., 2017;Raiman and Raiman, 2018;Kolitsas et al., 2018;Cao et al., 2021, inter alia). Dutta and Weikum (2015) combine clustering-based cross-document coreference decisions and linking around sparse bag-of-word representations not well suited for the embedding- (Bagga and Baldwin, 1998;Gooi and Allan, 2004;Singh et al., 2011;Barhom et al., 2019;Cattan et al., 2020;Caciularu et al., 2021;Ravenscroft et al., 2021;Logan IV et al., inter alia)."
  },
  {
    "file": "paper_79.txt",
    "start": 583,
    "end": 1022,
    "label": "Coherence",
    "user": "Iman",
    "text": "\nAlternatives to Cross-Encoders Our work demonstrates how clustering-based training and prediction improves dual-encoder based models for linking and discovery. If prediction efficiency, and not training efficiency, was the only concern, one could use model distillation (Hinton et al., 2015;Izacard and Grave, 2021, inter alia). We could also consider models such as poly-encoders as an alternative to dual-encoders (Humeau et al., 2020)."
  }
]
annotations/Iman/paper_80.txt.json
ADDED
@@ -0,0 +1,34 @@
[
  {
    "file": "paper_80.txt",
    "start": 1382,
    "end": 1409,
    "label": "Format",
    "user": "Iman",
    "text": "Sidner (1981Sidner ( , 1983"
  },
  {
    "file": "paper_80.txt",
    "start": 1210,
    "end": 1329,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": "This problem did not appear with pre-neural models of coherence, since they compute coherence on the basis of entities."
  },
  {
    "file": "paper_80.txt",
    "start": 1697,
    "end": 1923,
    "label": "Lacks synthesis",
    "user": "Iman",
    "text": " Centering theory serves as basis for many researchers to develop systems computing local coherence based on the approximations of entities (Barzilay and Lapata 2008;Feng and Hirst 2012;Guinaudeau and Strube 2013, inter alia)."
  },
  {
    "file": "paper_80.txt",
    "start": 3755,
    "end": 3814,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": "many previous coherence models are evaluated on AES and AWQ"
  }
]
annotations/Iman/paper_81.txt.json
ADDED
@@ -0,0 +1,42 @@
[
  {
    "file": "paper_81.txt",
    "start": 14,
    "end": 83,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": "The dissemination of fake news has become an important social issue. "
  },
  {
    "file": "paper_81.txt",
    "start": 83,
    "end": 213,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": "For emergent complex events, human readers are usually exposed to multiple news documents, where some are real and others are fake"
  },
  {
    "file": "paper_81.txt",
    "start": 310,
    "end": 407,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": "We notice that articles about the same topic may contain conflicting or complementary information"
  },
  {
    "file": "paper_81.txt",
    "start": 1004,
    "end": 1094,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": "Most existing work in fake news detection is limited to judging each document in isolation"
  },
  {
    "file": "paper_81.txt",
    "start": 1800,
    "end": 1887,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": "Existing work on fine-grained misinformation detection detects fake knowledge triplets "
  }
]
annotations/Iman/paper_82.txt.json
ADDED
@@ -0,0 +1,10 @@
[
  {
    "file": "paper_82.txt",
    "start": 3931,
    "end": 3997,
    "label": "Unsupported claim",
    "user": "Iman",
    "text": " WMT14 English-German and WMT19 Chinese-English translation tasks."
  }
]
annotations/Iraa/.keep
ADDED
File without changes
annotations/Kaushal/.keep
ADDED
File without changes