{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:32:22.966601Z"
},
"title": "Understanding the Impact of Evidence-Aware Sentence Selection for Fact Checking",
"authors": [
{
"first": "Giannis",
"middle": [],
"last": "Bekoulis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vrije Universiteit Brussel",
"location": {
"postCode": "1050",
"settlement": "Brussels",
"country": "Belgium"
}
},
"email": "gbekouli@etrovub.be"
},
{
"first": "Christina",
"middle": [],
"last": "Papagiannopoulou",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Deligiannis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Vrije Universiteit Brussel",
"location": {
"postCode": "1050",
"settlement": "Brussels",
"country": "Belgium"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Fact Extraction and VERification (FEVER) is a recently introduced task that consists of the following subtasks: (i) document retrieval, (ii) sentence retrieval, and (iii) claim verification. In this work, we focus on the subtask of sentence retrieval. Specifically, we propose an evidence-aware transformer-based model that outperforms all other models in terms of FEVER score while using only a subset of the training instances. In addition, we conduct a large experimental study to gain a better understanding of the problem, and we summarize our findings by presenting future research challenges.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Fact Extraction and VERification (FEVER) is a recently introduced task that consists of the following subtasks: (i) document retrieval, (ii) sentence retrieval, and (iii) claim verification. In this work, we focus on the subtask of sentence retrieval. Specifically, we propose an evidence-aware transformer-based model that outperforms all other models in terms of FEVER score while using only a subset of the training instances. In addition, we conduct a large experimental study to gain a better understanding of the problem, and we summarize our findings by presenting future research challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently a lot of research in the NLP community has been focused on the problem of automated fact checking (Liu et al., 2020; Zhong et al., 2020) . In this work, we focus on the FEVER dataset that is the largest fact checking dataset (Thorne et al., 2018) . The goal of the task is to identify the veracity of a given claim based on Wikipedia documents. The problem is traditionally approached as a series of three subtasks, namely (i) document retrieval (select the most relevant documents to the claim), (ii) sentence retrieval (select the most relevant sentences to the claim from the retrieved documents), and (iii) claim verification (validate the veracity of the claim based on the relevant sentences).",
"cite_spans": [
{
"start": 107,
"end": 125,
"text": "(Liu et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 126,
"end": 145,
"text": "Zhong et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 234,
"end": 255,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several models have been proposed for the FEVER dataset (Hanselowski et al., 2018; Nie et al., 2019a; Soleimani et al., 2020). Most of the existing literature (Liu et al., 2020; Zhong et al., 2020) focuses on the task of claim verification, while little work has been done on the tasks of document retrieval and sentence retrieval. We suspect that this is because it is more straightforward for researchers to focus only on improving the performance of the last component (i.e., claim verification) instead of experimenting with the whole pipeline of the three subtasks. In addition, the performance of the first two components is already quite high (i.e., >90% in terms of document accuracy for the document retrieval step and >87% in terms of sentence recall). Our code is available at https://github.com/bekou/evidence_aware_nlp4if.",
"cite_spans": [
{
"start": 56,
"end": 82,
"text": "(Hanselowski et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 83,
"end": 101,
"text": "Nie et al., 2019a;",
"ref_id": "BIBREF6"
},
{
"start": 102,
"end": 125,
"text": "Soleimani et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 160,
"end": 178,
"text": "(Liu et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 179,
"end": 198,
"text": "Zhong et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike the aforementioned studies, in this work, we focus on the task of sentence retrieval on the FEVER dataset. Specifically, inspired by studies that investigate the impact of loss functions and sampling in other domains (e.g., computer vision (Wu et al., 2017; Wang et al., 2017), information retrieval (Pobrotyn et al., 2020)), this paper, to the best of our knowledge, is the first attempt to shed some light on the sentence retrieval task by performing the largest experimental study to date and by investigating the performance of a model that takes into account the relations between all potential evidences in a given list of evidences. The contributions of our work are as follows: (i) we propose a simple yet effective evidence-aware transformer-based model that outperforms all other models in terms of the FEVER score (i.e., the metric of the claim verification subtask) and improves a baseline model by 0.7% even when using a small subset of the training instances; (ii) we conduct an extensive experimental study on various settings (i.e., loss functions, sampling of instances), showcasing the effect of each architectural choice on the performance of the sentence retrieval and claim verification subtasks; (iii) the results of our study point researchers toward promising directions for improving the overall performance of the task.",
"cite_spans": [
{
"start": 247,
"end": 264,
"text": "(Wu et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 265,
"end": 283,
"text": "Wang et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 308,
"end": 331,
"text": "(Pobrotyn et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We frame the sentence selection subtask, where the input is a claim sentence and a list of candidate evidence sentences (i.e., as retrieved from the document retrieval step; we used the same input as in the work of Liu et al. (2020)), as an NLI problem. Specifically, the claim is the \"hypothesis\" sentence and the potential evidence sentence is a \"premise\" sentence. In Fig. 1 , we present the various architectures that we used in our experiments. Figure 1 : The architectures used for the sentence retrieval subtask. The pointwise loss considers each potential evidence independently. The pairwise loss considers the potential evidences in pairs (positive, negative). The proposed evidence-aware selection model uses self-attention to consider all the potential evidences in the evidence set simultaneously.",
"cite_spans": [
{
"start": 224,
"end": 241,
"text": "Liu et al. (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 308,
"end": 316,
"text": "Figure 1",
"ref_id": null
},
{
"start": 742,
"end": 748,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},
{
"text": "Pointwise: Our model is similar to the one described in the work of Soleimani et al. (2020). We use a BERT-based model (Devlin et al., 2019) to obtain the representation of the input sentences. For training, we use the cross-entropy loss, and the input to our model is the claim along with an evidence sentence. The goal of the sentence retrieval component paired with the pointwise loss is to predict whether a candidate evidence sentence is an evidence or not for a given claim. Thus, the problem of sentence retrieval is framed as a binary classification task.",
"cite_spans": [
{
"start": 68,
"end": 91,
"text": "Soleimani et al. (2020)",
"ref_id": "BIBREF10"
},
{
"start": 120,
"end": 141,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "2.1"
},
{
"text": "Pairwise: In our work, we also exploit the pairwise loss, where the goal is to maximize the margin between the positive and the negative examples. Specifically, we use a pairwise loss similar to the margin-based loss presented in the work of Wu et al. (2017). The pairwise loss is:",
"cite_spans": [
{
"start": 252,
"end": 268,
"text": "Wu et al. (2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distance-based",
"sec_num": "2.2"
},
{
"text": "L_pairwise(p, n) = [\u2212y_ij (f(x_p) \u2212 f(x_n)) + m]_+ (1). In Eq. (1), y_ij \u2208 {\u22121, 1}, f(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance-based",
"sec_num": "2.2"
},
{
"text": "is the representation that we obtain from the BERT-based model, m is the margin, and the indices p and n indicate a pair of a positive and a negative example. In order to obtain a claim-aware representation of the (positive-negative) instances, we concatenate the claim with the corresponding evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance-based",
"sec_num": "2.2"
},
{
"text": "Triplet: Unlike the pairwise loss, which considers only pairs of positive and negative examples, the triplet loss (Wu et al., 2017) uses triplets of training instances. Specifically, given an anchor sample a (i.e., the claim), the goal is for the distance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance-based",
"sec_num": "2.2"
},
{
"text": "D_ij = \u2016f(x_i) \u2212 f(x_j)\u2016_2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance-based",
"sec_num": "2.2"
},
{
"text": "between the anchor and a negative example to be greater than the distance between the anchor and a positive example. The triplet loss is shown in:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance-based",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L_triplet(a, p, n) = [D^2_ap \u2212 D^2_an + m]_+",
"eq_num": "(2)"
}
],
"section": "Distance-based",
"sec_num": "2.2"
},
{
"text": "Similar to the previous equation, in Eq. 2, m is the margin and the indices a, p and n indicate the triplet of the anchor, a positive and a negative example. As the anchor, we use the claim, while, similar to the pairwise loss, we concatenate the claim with the corresponding evidence for the positive and the negative examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance-based",
"sec_num": "2.2"
},
{
"text": "We have also experimented with the cosine loss. Specifically, we exploit positive and negative samples using the following formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine:",
"sec_num": null
},
{
"text": "L_cos(p, n) = y_ij (1 \u2212 cos(f(x_p), f(x_n))) + (1 \u2212 y_ij)[cos(f(x_p), f(x_n)) \u2212 m]_+ (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine:",
"sec_num": null
},
{
"text": "In Eq. 3, y_ij \u2208 {0, 1} and cos indicates the cosine similarity between the positive and the negative samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine:",
"sec_num": null
},
{
"text": "Angular: The angular loss (Wang et al., 2017) uses triplets of instances (i.e., similar to the triplet loss) while imposing angular constraints between the examples of the triplet. The formula is given by:",
"cite_spans": [
{
"start": 26,
"end": 45,
"text": "(Wang et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine:",
"sec_num": null
},
{
"text": "L_ang(a, p, n) = [D^2_ap \u2212 4 tan^2(r) D^2_nc]_+ (4). In Eq. (4), f(x_c) = (f(x_a) + f(x_p))/2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine:",
"sec_num": null
},
{
"text": "and r is a fixed margin (angle).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cosine:",
"sec_num": null
},
{
"text": "Unlike the aforementioned loss functions, the proposed model relies on a transformer-based model, similar to the retrieval model proposed in the work of Pobrotyn et al. (2020). This model exploits the use of self-attention over the potential evidence sentences in the evidence set. Unlike (i) the pointwise loss, which does not take into account the relations between the evidence sentences, and (ii) the distance-based losses (e.g., triplet), which consider only pairs of sentences, the transformer model considers subsets of evidence sentences simultaneously during the training phase. Specifically, the input to the transformer is a list of BERT-based representations of the evidence sentences. Despite its simplicity, the model is able to reason over and rank the evidence sentences by taking into account all the other evidence sentences in the list. On top of the transformer, we exploit a binary cross-entropy loss similar to the one presented in the case of the pointwise loss.",
"cite_spans": [
{
"start": 153,
"end": 175,
"text": "Pobrotyn et al. (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence-Aware Selection",
"sec_num": "2.3"
},
{
"text": "3 Experimental Study",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence-Aware Selection",
"sec_num": "2.3"
},
{
"text": "For the experiments conducted on the sentence retrieval task, for all the loss functions except the evidence-aware one, we present results using all the potential evidence sentences (retrieved from document retrieval). For the evidence-aware model, we conduct experiments using either 5 or 10 negative examples per positive instance during training. In addition, the overall (positive and negative) maximum number of instances that are kept is 20. This is because, unlike the other models, where the evidences are considered individually or in pairs, in the evidence-aware model we cannot consider all the evidences simultaneously. We also experiment with a limited number of instances in the other settings to have a fair comparison among the different setups. Note that for the distance-based losses, we conduct additional experiments only on the model that performs best when all instances are included (i.e., pairwise). We also present results on the claim verification task with all of the examined architectures. For the claim verification step, we use the model of Liu et al. (2020). We evaluate the performance of our models using the official evaluation metrics for sentence retrieval (precision, recall and F1 using the 5 highest-ranked evidence sentences) and claim verification (label accuracy and FEVER score) on the dev and test sets.",
"cite_spans": [
{
"start": 1070,
"end": 1087,
"text": "Liu et al. (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.1"
},
{
"text": "We use the official evaluation metrics of the FEVER task for the sentence retrieval and the claim verification subtasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2"
},
{
"text": "Sentence Retrieval: The organizers of the shared task proposed precision to count the number of correct evidences retrieved by the sentence retrieval component with respect to the number of predicted evidences. Recall has also been exploited. Note that a claim is considered correct in the case that at least one complete evidence group is identified. Finally, the F1 score is calculated based on the aforementioned metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2"
},
{
"text": "Claim Verification: The evaluation of the claim verification subtask is based on the label accuracy and the FEVER score metrics. The label accuracy measures the accuracy of the label predictions without taking the retrieved evidences into account. On the other hand, the FEVER score counts a claim as correct only if a complete evidence group has been correctly identified along with the corresponding label. Thus, the FEVER score is considered a strict evaluation metric, and it was the primary metric for ranking the systems on the leaderboard of the shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.2"
},
{
"text": "In Table 1 , we present our results on the sentence retrieval and claim verification tasks. The \"# Negative Examples\" column indicates the number of negative evidences that are randomly sampled for each positive instance during training, while the \"# Max Instances\" column indicates the maximum number of instances that we keep for each claim.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "The symbol denotes that we keep all the instances from this category (i.e., \"# Negative Examples\" or \"# Max Instances\"). Note that for the number of maximum instances, we keep as many as possible from the positive samples, and then we randomly sample from the negative instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "The evidence-aware model (see the setting with 5 negative examples and 20 maximum instances, denoted as (5, 20)) is the best performing one on both the dev and test sets in terms of FEVER score. The pairwise loss performs best in terms of label accuracy on the test set. However, the most important evaluation metric is the FEVER score, since it takes into account both the label accuracy and the predicted evidence sentences. The pointwise loss is the worst performing one when using all the evidence sentences. This is because, in the case that we use all the potential evidences, the number of negative samples is too large and we have a highly imbalanced problem, leading to low recall and FEVER score on both the dev and test sets. Note that the evidence-aware model relies on the pointwise loss (i.e., the worst performing one). Nevertheless, a benefit of the evidence-aware model (0.7% in terms of FEVER score) is reported (see pointwise (5, 20)). This showcases the important effect of ranking potential evidences simultaneously using self-attention. Among the distance-based loss functions, we observe that the angular and the cosine loss have worse performance compared to the pairwise and the triplet loss when using all the instances. We hypothesize that this is because the norm-based distance measures fit best for scoring pairs using the BERT-based representations.",
"cite_spans": [
{
"start": 930,
"end": 933,
"text": "(5,",
"ref_id": null
},
{
"start": 934,
"end": 937,
"text": "20)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benefit of Evidence-Aware Model:",
"sec_num": null
},
{
"text": "Performance Gain: Most recent research works (e.g., Liu et al. (2020)) focus on creating complex models for claim verification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benefit of Evidence-Aware Model:",
"sec_num": null
},
{
"text": "We conducted a small-scale experiment (that is not present in Table 1), where we replaced our model for claim verification (recall that we rely on the method of Liu et al. (2020)) with a BERT-based classifier. We observed that when using the model of Liu et al. (2020) instead of the BERT classifier (in our early experiments on the dev set), the benefit for the pointwise loss was 0.2 percentage points, the benefit for the triplet loss was 0.1 percentage points, and there was a drop of 1 percentage point in the performance of the cosine loss. Therefore, the seemingly small performance increase of our model (i.e., a benefit of 0.7% in terms of FEVER score) is in line with the performance benefit of complex architectures for the claim verification task. In our paper, we do not claim state-of-the-art performance on the task, but rather showcase the benefit of our proposed methodology over a strong baseline model that relies on BERT-base.",
"cite_spans": [
{
"start": 162,
"end": 179,
"text": "Liu et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 257,
"end": 274,
"text": "Liu et al. (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Benefit of Evidence-Aware Model:",
"sec_num": null
},
{
"text": "The evidence-aware model is the best performing one (5, 20), while using only a small fraction of the overall training instances. This is because the evidence-aware model is able to take into account all possible combinations of the sampled evidences while computing attention weights. However, the same model in the (10, 20) setting shows a reduced performance. This is due to the fact that the pointwise loss affects the model in a similar way as in the pointwise setting, leading to a lower performance (due to class imbalance). For the pairwise loss, we observe that the performance of the model when sampling constrained evidence sentences (see the (5, 20) and (10, 20) settings) is similar to the performance of the model when we do not sample evidence sentences. In addition, it seems that when one constrains the number of negative samples, one should also constrain the overall number of instances in order to achieve the same performance as in the non-sampling setting. We hypothesize that this is due to the fact that when we have a limited number of instances, it is better to have a more balanced version of the dataset.",
"cite_spans": [
{
"start": 652,
"end": 655,
"text": "(5,",
"ref_id": null
},
{
"start": 656,
"end": 659,
"text": "20)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Samples Matters:",
"sec_num": null
},
{
"text": "Outcome: Therefore, we conclude that the evidence-aware model achieves high performance by using few examples, and thus it can be used even in the case that we have a small amount of training instances. In the case of the pairwise loss, it is important to sample instances; otherwise, training becomes computationally intensive when we take all the possible combinations between the positive and negative training instances into account. In addition, it is crucial to sample negative sentences to control: (i) the computational complexity in the case of the distance-based loss functions, (ii) the memory constraints in the case of the evidence-aware model and (iii) the imbalance issue in the case of the pointwise loss. However, more sophisticated techniques than random sampling should be investigated to select examples that are more informative. Finally, as indicated by our performance gain, we motivate future researchers to work also on the sentence retrieval subtask, as the improvement in this subtask leads to improvements similar to those obtained with architectures proposed for the claim verification subtask.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Samples Matters:",
"sec_num": null
},
{
"text": "An extensive review of the task of fact extraction and verification can be found in Bekoulis et al. (2020). For the sentence retrieval task, several pipeline methods (Chernyavskiy and Ilvovsky, 2019; Portelli et al., 2020) rely on the sentence retrieval component of Thorne et al. (2018), which uses TF-IDF representations. An important line of research (Hanselowski et al., 2018; Nie et al., 2019a; Zhou et al., 2019) includes the use of ESIM-based models (Chen et al., 2017). Those works formulate the sentence selection subtask as an NLI problem where the claim is the \"premise\" sentence and the potential evidence sentence is a \"hypothesis\" sentence. Similar to the ESIM-based methods, language model based methods (Nie et al., 2019b; Zhong et al., 2020; Soleimani et al., 2020; Liu et al., 2020) transform the sentence retrieval task into an NLI problem using pre-trained language models. For language model based sentence retrieval, two types of losses have been exploited: (i) the pointwise loss, and (ii) the pairwise loss, as presented also in Section 2. Unlike the aforementioned studies that rely only on losses of type (i) and (ii), we conduct the largest experimental study to date by using various loss functions on the sentence retrieval subtask of the FEVER task. In addition, we propose a new evidence-aware model that is able to outperform all other methods using a limited number of training instances.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "Bekoulis et al. (2020)",
"ref_id": "BIBREF0"
},
{
"start": 167,
"end": 200,
"text": "(Chernyavskiy and Ilvovsky, 2019;",
"ref_id": "BIBREF2"
},
{
"start": 201,
"end": 222,
"text": "Portelli et al., 2020",
"ref_id": "BIBREF9"
},
{
"start": 269,
"end": 289,
"text": "Thorne et al. (2018)",
"ref_id": "BIBREF11"
},
{
"start": 353,
"end": 379,
"text": "(Hanselowski et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 380,
"end": 398,
"text": "Nie et al., 2019a;",
"ref_id": "BIBREF6"
},
{
"start": 399,
"end": 417,
"text": "Zhou et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 455,
"end": 474,
"text": "(Chen et al. (2017)",
"ref_id": "BIBREF1"
},
{
"start": 718,
"end": 737,
"text": "(Nie et al., 2019b;",
"ref_id": "BIBREF7"
},
{
"start": 738,
"end": 757,
"text": "Zhong et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 758,
"end": 781,
"text": "Soleimani et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 782,
"end": 799,
"text": "Liu et al., 2020;",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In this paper, we focus on the subtask of sentence retrieval of the FEVER task. In particular, we propose a simple and effective evidence-aware model, in which each potential evidence takes into account information about the other potential evidences, and which outperforms all other models. The model uses only a few training instances and improves on a simple pointwise loss by 0.7 percentage points in terms of FEVER score. In addition, we conduct a large experimental study, compare the pros and cons of the studied architectures and discuss the results in a comprehensive way, while pointing researchers to future research directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fact extraction and verification-the fever case: An overview",
"authors": [
{
"first": "Giannis",
"middle": [],
"last": "Bekoulis",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Papagiannopoulou",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Deligiannis",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.03001"
]
},
"num": null,
"urls": [],
"raw_text": "Giannis Bekoulis, Christina Papagiannopoulou, and Nikos Deligiannis. 2020. Fact extraction and verification-the fever case: An overview. arXiv preprint arXiv:2010.03001.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enhanced LSTM for natural language inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1657--1668",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1152"
]
},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1657-1668, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Extract and aggregate: A novel domain-independent approach to factual data verification",
"authors": [
{
"first": "Anton",
"middle": [],
"last": "Chernyavskiy",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Ilvovsky",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "69--78",
"other_ids": {
"DOI": [
"10.18653/v1/D19-6612"
]
},
"num": null,
"urls": [],
"raw_text": "Anton Chernyavskiy and Dmitry Ilvovsky. 2019. Ex- tract and aggregate: A novel domain-independent approach to factual data verification. In Proceed- ings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 69-78, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "UKP-athene: Multi-sentence textual entailment for claim verification",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Hanselowski",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zile",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Daniil",
"middle": [],
"last": "Sorokin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "103--108",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5516"
]
},
"num": null,
"urls": [],
"raw_text": "Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. UKP-athene: Multi-sentence textual entailment for claim verification. In Pro- ceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 103-108, Brus- sels, Belgium. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fine-grained fact verification with kernel graph attention network",
"authors": [
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7342--7351",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.655"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2020. Fine-grained fact verification with kernel graph attention network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7342-7351, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Combining fact extraction and verification with neural semantic matching networks",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Haonan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6859--6866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2019a. Combining fact extraction and verification with neural semantic matching networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6859-6866, Honolulu, Hawaii. AAAI Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Revealing the importance of semantic retrieval for machine reading at scale",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Songhe",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2553--2566",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1258"
]
},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Songhe Wang, and Mohit Bansal. 2019b. Revealing the importance of semantic retrieval for machine reading at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2553-2566, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Context-aware learning to rank with selfattention",
"authors": [
{
"first": "Przemys\u0142aw",
"middle": [],
"last": "Pobrotyn",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Bartczak",
"suffix": ""
},
{
"first": "Miko\u0142aj",
"middle": [],
"last": "Synowiec",
"suffix": ""
},
{
"first": "Rados\u0142aw",
"middle": [],
"last": "Bia\u0142obrzeski",
"suffix": ""
},
{
"first": "Jaros\u0142aw",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the ECOM'20: The SIGIR 2020 Workshop on eCommerce, Online. Association for Computing Machinery",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Przemys\u0142aw Pobrotyn, Tomasz Bartczak, Miko\u0142aj Synowiec, Rados\u0142aw Bia\u0142obrzeski, and Jaros\u0142aw Bojar. 2020. Context-aware learning to rank with self-attention. In Proceedings of the ECOM'20: The SIGIR 2020 Workshop on eCommerce, Online. Association for Computing Machinery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distilling the evidence to augment fact verification models",
"authors": [
{
"first": "Beatrice",
"middle": [],
"last": "Portelli",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Serra",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "47--51",
"other_ids": {
"DOI": [
"10.18653/v1/2020.fever-1.7"
]
},
"num": null,
"urls": [],
"raw_text": "Beatrice Portelli, Jason Zhao, Tal Schuster, Giuseppe Serra, and Enrico Santus. 2020. Distilling the evidence to augment fact verification models. In Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER), pages 47-51, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT for evidence retrieval and claim verification",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Soleimani",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Marcel",
"middle": [],
"last": "Worring",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "359--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Soleimani, Christof Monz, and Marcel Worring. 2020. BERT for evidence retrieval and claim verification. In Advances in Information Retrieval, pages 359-366, Cham. Springer International Publishing.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "FEVER: a large-scale dataset for fact extraction and VERification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "809--819",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1074"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deep metric learning with angular loss",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Shilei",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yuanqing",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "2593--2601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Wang, Feng Zhou, Shilei Wen, Xiao Liu, and Yuanqing Lin. 2017. Deep metric learning with angular loss. In Proceedings of the IEEE International Conference on Computer Vision, pages 2593-2601, Venice, Italy. IEEE.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sampling matters in deep embedding learning",
"authors": [
{
"first": "Chao-Yuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Manmatha",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Krahenbuhl",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "2840--2848",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao-Yuan Wu, R Manmatha, Alexander J Smola, and Philipp Krahenbuhl. 2017. Sampling matters in deep embedding learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2840-2848, Venice, Italy. IEEE.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Transformer-XH: Multi-evidence reasoning with extra hop attention",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Corby",
"middle": [],
"last": "Rosset",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, and Saurabh Tiwary. 2020. Transformer-XH: Multi-evidence reasoning with extra hop attention. In International Conference on Learning Representations, Online.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reasoning over semantic-level graph for fact checking",
"authors": [
{
"first": "Wanjun",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Zenan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiahai",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6170--6180",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.549"
]
},
"num": null,
"urls": [],
"raw_text": "Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2020. Reasoning over semantic-level graph for fact checking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6170-6180, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "GEAR: Graph-based evidence aggregating and reasoning for fact verification",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Changcheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "892--901",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1085"
]
},
"num": null,
"urls": [],
"raw_text": "Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 892-901, Florence, Italy. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Results of the (i) sentence retrieval task in terms of Precision (P), Recall (R), and F 1 scores and (ii) claim verification task in terms of the label accuracy (LA) and the FEVER score evaluation metrics in the dev and the test sets.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}