FigAgent / 2004.05773 /paper_text /intro_method.md

Introduction

When a potentially viral news item is rapidly or indiscriminately published by a news outlet, the responsibility of verifying the truthfulness of the item is often passed on to the audience. To alleviate this problem, independent teams of professional fact checkers manually verify the veracity and credibility of common or particularly check-worthy statements circulating the web. However, these teams have limited resources to perform manual fact checks, thus creating a need for automating the fact checking process.

> **Claim:** The last major oil spill from a drilling accident in America happened over 40 years ago in 1969.
>
> **Ruling Comments:** (...) **The last major oil spill from a drilling accident in America happened over 40 years ago in 1969.**
> (...) The largest in volume was the Santa Barbara spill of 1969 referenced by Murdock and Johnson, in which an estimated 100,000 barrels of oil spilled into the Pacific Ocean, according to the API. **The Santa Barbara spill was so big it ranked seventh among the 10 largest oil spills caused by marine well blowouts in the world, the report states.** Two other U.S. spills, both in 1970, rank eighth and 10th. **Fourteen marine blowouts have taken place in the U.S. between 1969 and 2007.** Six of them took place after 1990 and spilled a total of nearly 13,700 barrels.
> (...) We interviewed three scientists who said that the impact of a spill has little to do with its volume. **Scientists have proven that spills far smaller than Santa Barbara's have been devastating.**
>
> **Justification:** While the nation's largest oil well blowout did take place in 1969, it's not factually correct to call it the "last major oil spill". First of all, two of the largest blowouts in the world took place in the U.S. the following year. More importantly, experts agree that spills far smaller in volume than the 1969 disaster have been devastating. From a scientific perspective, Johnson's decision to single out the 1969 blowout as the last "major" one makes no sense.
>
> **Ruling:** Half-True

The current research landscape in automated fact checking comprises systems that estimate the veracity of claims based on available metadata and evidence pages. Datasets like LIAR [@wang2017liar] and the multi-domain dataset MultiFC [@augenstein-etal-2019-multifc] provide real-world benchmarks for evaluation. There are also larger-scale artificial datasets, e.g., the FEVER dataset [@Thorne18Fever] based on Wikipedia articles. As evident from the performance of state-of-the-art methods on both real-world data -- 0.492 macro F1 score [@augenstein-etal-2019-multifc] -- and artificial data -- 68.46 FEVER score (label accuracy conditioned on evidence provided for 'supported' and 'refuted' claims) [@stammbach-neumann-2019-team] -- the task of automating fact checking remains a significant and pressing research challenge.

A prevalent component of existing fact checking systems is a stance detection or textual entailment model that predicts whether a piece of evidence contradicts or supports a claim [@Ma:2018:DRS:3184558.3188729; @mohtarami-etal-2018-automatic; @Xu2019AdversarialDA]. Existing research, however, rarely attempts to directly optimise the selection of relevant evidence, i.e., the self-sufficient explanation for predicting the veracity label [@Thorne18Fever; @stammbach-neumann-2019-team]. On the other hand, @alhindi-etal-2018-evidence report a significant performance improvement of over 10% macro F1 score when the system is provided with a short human explanation of the veracity label. Still, there have been no attempts to automatically produce explanations, and automating the most elaborate part of the process -- producing the justification for the veracity prediction -- remains an understudied problem.

In the field of NLP as a whole, both explainability and interpretability methods have gained importance recently, because most state-of-the-art models are large, neural black-box models. Interpretability, on the one hand, provides an overview of the inner workings of a trained model such that a user could, in principle, follow the same reasoning to come up with predictions for new instances. However, with the increasing number of neural units in published state-of-the-art models, it becomes infeasible for users to track all decisions being made by the models. Explainability, on the other hand, deals with providing local explanations for single data points, which either highlight the most salient areas of the input or are themselves generated textual explanations for a particular prediction.

Saliency explanations have been studied extensively [@Adebayo:2018:SCS:3327546.3327621; @arras-etal-2019-evaluating; @poerner-etal-2018-evaluating], however, they only uncover regions with high contributions for the final prediction, while the reasoning process still remains behind the scenes. An alternative method explored in this paper is to generate textual explanations. In one of the few prior studies on this, the authors find that feeding generated explanations about multiple choice question answers to the answer predicting system improved QA performance [@rajani-etal-2019-explain].

Inspired by this, we research how to generate explanations for veracity prediction. We frame this as a summarisation task, where, provided with elaborate fact checking reports, later referred to as ruling comments, the model has to generate veracity explanations close to the human justifications, as in the example in Table 1. We then explore the benefits of training a joint model that learns to generate veracity explanations while also predicting the veracity of a claim.
In summary, our contributions are as follows:

  1. We present the first study on generating veracity explanations, showing that they can successfully describe the reasons behind a veracity prediction.

  2. We find that the performance of a veracity classification system can leverage information from the elaborate ruling comments, and can be further improved by training veracity prediction and veracity explanation jointly.

  3. We show that optimising the joint objective of veracity prediction and veracity explanation produces explanations that achieve better coverage and overall quality and serve better at explaining the correct veracity label than explanations learned solely to mimic human justifications.

Existing fact checking websites publish claim veracity verdicts along with ruling comments to support the verdicts. Most ruling comments span long pages and contain redundancies, making them hard to follow. Textual explanations, by contrast, are succinct and present the main arguments behind the decision. PolitiFact[^1] condenses each claim's ruling comments into a summary of just a few sentences.

[^1]: https://www.politifact.com/

We use the PolitiFact-based dataset LIAR-PLUS [@alhindi-etal-2018-evidence], which contains 12,836 statements with their veracity justifications. The justifications are automatically extracted from the long ruling comments, as their location is clearly indicated at the end of the ruling comments. Any sentences with words indicating the label, which @alhindi-etal-2018-evidence select to be identical or similar to the label, are removed. We follow the same procedure to also extract the ruling comments without the summary at hand.

We remove instances that contain fewer than three sentences in the ruling comments as they indicate short veracity reports, where no summary is present. The final dataset consists of 10,146 training, 1,278 validation, and 1,255 test data points. A claim's ruling comments in the dataset span over 39 sentences or 904 words on average, while the justification fits in four sentences or 89 words on average.
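The filtering step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the field names and the naive sentence splitter are assumptions.

```python
import re

def sentence_split(text):
    # Naive sentence splitter, for illustration only; the paper does
    # not specify which splitter was used.
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def filter_instances(instances, min_sentences=3):
    """Drop instances whose ruling comments contain fewer than
    `min_sentences` sentences (short reports without a summary)."""
    return [inst for inst in instances
            if len(sentence_split(inst["ruling_comments"])) >= min_sentences]

data = [
    {"claim": "A", "ruling_comments": "One. Two. Three sentences here."},
    {"claim": "B", "ruling_comments": "Too short. Really."},
]
kept = filter_instances(data)  # only claim "A" survives the filter
```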

Method

We now describe the models we employ: (1) an explanation extraction model and (2) a veracity prediction model, each trained separately, as well as (3) a joint model trained to optimise both objectives.

The models are based on DistilBERT [@sanh2019distilbert], which is a reduced version of BERT [@devlin2019bert] performing on par with it as reported by the authors. For each of the models described below, we take the version of DistilBERT that is pre-trained with a language-modelling objective and further fine-tune its embeddings for the specific task at hand.

Figure 1: Architecture of the Explanation (left) and Fact-Checking (right) models that optimise separate objectives.

Our explanation model, shown in Figure 1 (left), is inspired by the recent success of utilising the transformer model architecture for extractive summarisation [@liu-lapata-2019-text]. It learns to maximise the similarity of the extracted explanation with the human justification.

We start by greedily selecting the top $k$ sentences from each claim's ruling comments that achieve the highest ROUGE-2 F1 score when compared to the gold justification. We choose $k = 4$, as that is the average number of sentences in veracity justifications. The selected sentences, referred to as oracles, serve as positive gold labels $\mathbf{y}^E \in \{0,1\}^N$, where $N$ is the total number of sentences present in the ruling comments. The appendices provide an overview of the coverage that the extracted oracles achieve compared to the gold justification, as well as examples of the selected oracles.
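A minimal sketch of this oracle selection, with a simple bigram-overlap ROUGE-2 F1 implemented inline; the paper's exact ROUGE implementation and tokenisation may differ:

```python
from collections import Counter

def bigrams(tokens):
    return Counter(zip(tokens, tokens[1:]))

def rouge2_f1(candidate, reference):
    # Bigram-overlap F1 between two strings (lowercased whitespace tokens).
    c = bigrams(candidate.lower().split())
    r = bigrams(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def select_oracles(sentences, justification, k=4):
    # Score each ruling-comment sentence against the gold justification
    # and keep the top-k, returning binary labels y^E in document order.
    scored = sorted(range(len(sentences)),
                    key=lambda i: rouge2_f1(sentences[i], justification),
                    reverse=True)
    keep = set(scored[:k])
    return [1 if i in keep else 0 for i in range(len(sentences))]

sents = ["the spill was the largest",
         "nice weather today",
         "it was devastating in 1969"]
labels = select_oracles(sents, "the spill was devastating in 1969", k=2)
# → [1, 0, 1]: the first and third sentences share bigrams with the justification
```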

At training time, we learn a function $f(X) = \mathbf{p}^E$, $\mathbf{p}^E \in \mathbb{R}^{1, N}$ that, based on the input $X$ -- the text of the claim and the ruling comments -- predicts for each sentence whether it should be selected ($\{0,1\}$) to constitute the explanation. At inference time, we select the top $n = 4$ sentences with the highest confidence scores.

Our extraction model, represented by function $f(X)$, takes the contextual representations produced by the last layer of DistilBERT and feeds them into a feed-forward task-specific layer - $\mathbf{h} \in \mathbb{R}^{h}$. It is followed by the prediction layer $\mathbf{p}^{E} \in \mathbb{R}^{1,N}$ with sigmoid activation. The prediction is used to optimise the cross-entropy loss function $\mathcal{L}_{E}=\mathcal{H}(\mathbf{p}^{E}, \mathbf{y}^{E})$.
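The sentence-scoring head can be sketched in numpy as below. This is a toy illustration of the shapes and the loss only: the random matrix `H` stands in for DistilBERT's contextual sentence representations, whereas the real model fine-tunes the encoder end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 8                          # N sentences, hidden size d (toy values)
H = rng.normal(size=(N, d))          # stand-in for DistilBERT outputs
W = rng.normal(size=(d, 1)) * 0.1    # task-specific feed-forward layer
b = 0.0

logits = H @ W + b                   # (N, 1) score per sentence
p_E = (1.0 / (1.0 + np.exp(-logits))).ravel()   # sigmoid activation -> p^E

y_E = np.array([1, 0, 0, 1, 0])      # oracle labels y^E
# cross-entropy loss L_E = H(p^E, y^E), here in its binary form
loss = -np.mean(y_E * np.log(p_E) + (1 - y_E) * np.log(1 - p_E))

top4 = np.argsort(-p_E)[:4]          # inference: top-n sentences by confidence
```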

For the veracity prediction model, shown in Figure 1 (right), we learn a function $g(X) = \mathbf{p}^F$ that, based on the input $X$, predicts the veracity of the claim $\mathbf{y}^{F} \in Y_{F}$, where $Y_F = \{$true, false, half-true, barely-true, mostly-true, pants-on-fire$\}$.

The function $g(X)$ takes the contextual token representations from the last layer of DistilBERT and feeds them to a task-specific feed-forward layer $\mathbf{h} \in \mathbb{R}^{h}$. It is followed by the prediction layer with a softmax activation $\mathbf{p}^{F} \in \mathbb{R}^{6}$. We use the prediction to optimise a cross-entropy loss function $\mathcal{L}_{F}= \mathcal{H}(\mathbf{p}^{F}, \mathbf{y}^{F})$.
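Analogously, the veracity head reduces to a linear layer followed by a softmax over the six labels. Again a toy numpy sketch with random stand-in representations, not the trained model:

```python
import numpy as np

LABELS = ["true", "false", "half-true",
          "barely-true", "mostly-true", "pants-on-fire"]

rng = np.random.default_rng(1)
c = rng.normal(size=8)               # stand-in claim representation
W = rng.normal(size=(8, 6)) * 0.1    # task-specific feed-forward layer

z = c @ W
p_F = np.exp(z - z.max())            # softmax activation (numerically stable)
p_F /= p_F.sum()                     # p^F in R^6, sums to 1

y = LABELS.index("half-true")        # gold label index
loss = -np.log(p_F[y])               # cross-entropy L_F = H(p^F, y^F)
pred = LABELS[int(np.argmax(p_F))]   # predicted veracity label
```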

Figure 2: Architecture of the Joint model learning Explanation (E) and Fact-Checking (F) at the same time.

Finally, we learn a function $h(X) = (\mathbf{p}^E, \mathbf{p}^F)$ that, given the input $X$ -- the text of the claim and the ruling comments -- predicts both the veracity explanation $\mathbf{p}^E$ and the veracity label $\mathbf{p}^F$ of a claim. The model is shown in Figure 2. The function $h(X)$ takes the contextual embeddings $\mathbf{c}^E$ and $\mathbf{c}^F$ produced by the last layer of DistilBERT and feeds them into a cross-stitch layer [@misra2016cross; @ruder122019latent], which consists of two layers with two shared subspaces each: $\mathbf{h}_E^1$ and $\mathbf{h}_E^2$ for the explanation task, and $\mathbf{h}_F^1$ and $\mathbf{h}_F^2$ for the veracity prediction task. In each of the two layers, there is one subspace for task-specific representations and one that learns cross-task representations. The subspaces and layers interact through the $\alpha$ values, creating the linear combinations $\widetilde{h}^i_E$ and $\widetilde{h}^j_F$, where $i,j \in \{1,2\}$:

$$\begin{bmatrix} \widetilde{h}^i_E \\ \widetilde{h}^j_F \end{bmatrix} = \begin{bmatrix} \alpha_{EE} & \alpha_{EF} \\ \alpha_{FE} & \alpha_{FF} \end{bmatrix} \begin{bmatrix} {h^i_E}^T \\ {h^j_F}^T \end{bmatrix}$$

We further combine the two resulting subspaces for each task, $\widetilde{h}^1_P$ and $\widetilde{h}^2_P$, with parameters $\beta$ to produce one representation per task:

$$\widetilde{h}^T_P = \begin{bmatrix} \beta_P^1 \\ \beta_P^2 \end{bmatrix}^T \begin{bmatrix} \widetilde{h}^1_P \\ \widetilde{h}^2_P \end{bmatrix}$$

where $P \in \{E, F\}$ is the corresponding task.

Finally, we use the produced representations to predict $\mathbf{p}^{E}$ and $\mathbf{p}^{F}$ with feed-forward layers followed by sigmoid and softmax activations, respectively. We use the predictions to optimise the joint loss function $\mathcal{L}_{MT}= \gamma \mathcal{H}(\mathbf{p}^{E}, \mathbf{y}^{E}) + \eta \, \mathcal{H}(\mathbf{p}^{F}, \mathbf{y}^{F})$, where $\gamma$ and $\eta$ weight the combination of the two individual loss functions.
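The cross-stitch combination and the weighted joint loss can be traced numerically with a toy example. All $\alpha$, $\beta$, $\gamma$, $\eta$ values and the per-task losses below are illustrative stand-ins, not the learned parameters:

```python
import numpy as np

d = 4
h_E = [np.ones(d), np.full(d, 2.0)]      # explanation subspaces h_E^1, h_E^2
h_F = [np.full(d, 3.0), np.full(d, 4.0)] # fact-checking subspaces h_F^1, h_F^2

alpha = np.array([[0.9, 0.1],            # [[a_EE, a_EF],
                  [0.1, 0.9]])           #  [a_FE, a_FF]]
beta_E = np.array([0.5, 0.5])
beta_F = np.array([0.5, 0.5])

# cross-stitch: each h-tilde is a linear combination of both tasks' subspaces
ht_E = [alpha[0, 0] * h_E[i] + alpha[0, 1] * h_F[i] for i in range(2)]
ht_F = [alpha[1, 0] * h_E[j] + alpha[1, 1] * h_F[j] for j in range(2)]

# beta-weighted combination into one representation per task
rep_E = beta_E[0] * ht_E[0] + beta_E[1] * ht_E[1]   # each entry 1.7
rep_F = beta_F[0] * ht_F[0] + beta_F[1] * ht_F[1]   # each entry 3.3

# joint loss L_MT with hypothetical per-task losses L_E, L_F
gamma, eta = 0.9, 0.1
L_E, L_F = 0.4, 1.2
L_MT = gamma * L_E + eta * L_F                       # 0.48
```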

We first conduct an automatic evaluation of both the veracity prediction and veracity explanation models.