Title: End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models

URL Source: https://arxiv.org/html/2205.12487

Markdown Content:

Aditya Shah, Virginia Tech, Blacksburg, USA ([aditya31@vt.edu](mailto:aditya31@vt.edu)); Lichao Sun, Lehigh University, Bethlehem, USA ([lis221@lehigh.edu](mailto:lis221@lehigh.edu)); Jin-Hee Cho, Virginia Tech, Blacksburg, USA ([jicho@vt.edu](mailto:jicho@vt.edu)); and Lifu Huang, Virginia Tech, Blacksburg, USA ([lifuh@vt.edu](mailto:lifuh@vt.edu))

(2023)

###### Abstract.

We propose end-to-end multimodal fact-checking and explanation generation, where the input is a claim and a large collection of web sources, including articles, images, videos, and tweets, and the goal is to assess the truthfulness of the claim by retrieving relevant evidence, predicting a truthfulness label (e.g., support, refute, or not enough information), and generating a statement to summarize and explain the reasoning and ruling process. To support this research, we construct Mocheg, a large-scale dataset consisting of 15,601 claims, where each claim is annotated with a truthfulness label and a ruling statement, together with 33,880 textual paragraphs and 12,112 images in total as evidence. To establish baseline performance on Mocheg, we experiment with several state-of-the-art neural architectures on the three pipelined subtasks: multimodal evidence retrieval, claim verification, and explanation generation, and demonstrate that state-of-the-art end-to-end multimodal fact-checking does not yet provide satisfactory outcomes. To the best of our knowledge, we are the first to build a benchmark dataset and solutions for end-to-end multimodal fact-checking and explanation generation. The dataset, source code, and model checkpoints are available at [https://github.com/VT-NLP/Mocheg](https://github.com/VT-NLP/Mocheg).

Keywords: Multimodal Fact-Checking; Evidence Retrieval; Stance Detection; Explanation Generation; Explainable Fact-Checking

SIGIR ’23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, July 23–27, 2023, Taipei, Taiwan. DOI: 10.1145/3539618.3591879. ISBN: 978-1-4503-9408-6/23/07. CCS: Computing methodologies (Natural language processing, Natural language generation, Computer vision); Information systems (Multimedia and multimodal retrieval).
1. Introduction
---------------

Misinformation has been a growing public concern and has made it difficult to find reliable information online (Godfrey-Smith, [1989](https://arxiv.org/html/2205.12487#bib.bib21); Edelman and Edelman, [2001](https://arxiv.org/html/2205.12487#bib.bib16)). For example, as Islam et al. ([2020](https://arxiv.org/html/2205.12487#bib.bib29)) show, misinformation about COVID-19 has spread widely and led people to distrust medical treatment and even refuse vaccination. The situation has become even more complicated with the emergence of large language models such as ChatGPT (OpenAI, [2022](https://arxiv.org/html/2205.12487#bib.bib47)), since they can be intentionally misused to generate misinformation (Goldstein et al., [2023](https://arxiv.org/html/2205.12487#bib.bib22)) or unintentionally spread it due to hallucination (Zhuo et al., [2023](https://arxiv.org/html/2205.12487#bib.bib78)). To fight misinformation, many fact-checking websites have been created, such as Snopes ([https://www.snopes.com/](https://www.snopes.com/)) and PolitiFact ([https://www.politifact.com/](https://www.politifact.com/)), where journalists manually collect thousands of claims from news and social media and verify them by referring to reliable and relevant external documents. However, this manual process is time-consuming and hard to generalize to broader claims.
![Figure 1](https://arxiv.org/html/x1.png)

Figure 1. An example of end-to-end multimodal fact-checking and explanation generation.
Recently, researchers have started to investigate automatic misinformation detection and fact-checking by developing various benchmark datasets (Thorne et al., [2018](https://arxiv.org/html/2205.12487#bib.bib65); Wang, [2017](https://arxiv.org/html/2205.12487#bib.bib71); Shu et al., [2020](https://arxiv.org/html/2205.12487#bib.bib59); Nakamura et al., [2019](https://arxiv.org/html/2205.12487#bib.bib44); Papadopoulou et al., [2018a](https://arxiv.org/html/2205.12487#bib.bib48)) and state-of-the-art neural network architectures (Tan et al., [2020](https://arxiv.org/html/2205.12487#bib.bib64); Song et al., [2021a](https://arxiv.org/html/2205.12487#bib.bib61); Li et al., [2020](https://arxiv.org/html/2205.12487#bib.bib39); Zhou et al., [2020](https://arxiv.org/html/2205.12487#bib.bib77)). However, we find the following limitations in current fact-checking studies: (1) Most of them consider only text while ignoring the multimedia nature (e.g., images) of online articles, which is essential and useful for predicting the truthfulness of claims. A few multimodal fact-checking datasets exist (Nielsen and McConville, [2022](https://arxiv.org/html/2205.12487#bib.bib46); Abdelnabi et al., [2022](https://arxiv.org/html/2205.12487#bib.bib2); Mishra et al., [2022](https://arxiv.org/html/2205.12487#bib.bib42)); however, their truthfulness labels (Mishra et al., [2022](https://arxiv.org/html/2205.12487#bib.bib42)) or evidence (Nielsen and McConville, [2022](https://arxiv.org/html/2205.12487#bib.bib46); Abdelnabi et al., [2022](https://arxiv.org/html/2205.12487#bib.bib2)) are automatically generated and thus cannot be guaranteed to be consistent with human judgements. (2) While current studies simply predict a truthfulness label, it is also necessary to provide a textual statement to explain the prediction. These explanations are vital to justify how the conclusion is reached step by step from the external evidence, and they provide the public with a rationale to analyze the reasoning process and share it with others. (3) Some prior studies (Wang, [2017](https://arxiv.org/html/2205.12487#bib.bib71); Zlatkova et al., [2019](https://arxiv.org/html/2205.12487#bib.bib79); Reis et al., [2020](https://arxiv.org/html/2205.12487#bib.bib56)) assume that a short piece of evidence text has already been identified, based on which models can directly predict the truthfulness of the target claim. However, this is not realistic in practice, as a claim does not come with evidence, which must instead be retrieved from a knowledge base or the Internet.
| | Evidence Retrieval | Multimodal | Explainable Fact-checking | Annotated Label | Annotated Evidence |
| --- | --- | --- | --- | --- | --- |
| FEVER (Thorne et al., [2018](https://arxiv.org/html/2205.12487#bib.bib65)) | ✓ | ✗ | ✗ | ✓ | ✓ |
| Liar (Wang, [2017](https://arxiv.org/html/2205.12487#bib.bib71)) | ✗ | ✗ | ✗ | ✓ | ✓ |
| Snopes (Hanselowski et al., [2019](https://arxiv.org/html/2205.12487#bib.bib26)) | ✓ | ✗ | ✗ | ✓ | ✓ |
| PUBHEALTH (Kotonya and Toni, [2020](https://arxiv.org/html/2205.12487#bib.bib36)) | ✓ | ✗ | ✓ | ✓ | ✓ |
| FACTIFY (Mishra et al., [2022](https://arxiv.org/html/2205.12487#bib.bib42)) | ✗ | ✓ | ✗ | ✗ | ✗ |
| MuMiN (Nielsen and McConville, [2022](https://arxiv.org/html/2205.12487#bib.bib46)) | ✗ | ✓ | ✗ | ✓ | ✗ |
| FakeNewsNet (Shu et al., [2020](https://arxiv.org/html/2205.12487#bib.bib59)) | ✓ | ✓ | ✗ | ✓ | ✗ |
| Fauxtography (Zlatkova et al., [2019](https://arxiv.org/html/2205.12487#bib.bib79)) | ✗ | ✓ | ✗ | ✓ | ✗ |
| NewsBag (Jindal et al., [2020](https://arxiv.org/html/2205.12487#bib.bib31)) | ✗ | ✓ | ✗ | ✓ | ✗ |
| QProp (Barrón-Cedeno et al., [2019](https://arxiv.org/html/2205.12487#bib.bib8)) | ✗ | ✓ | ✗ | ✓ | ✗ |
| TABFACT (Chen et al., [2019](https://arxiv.org/html/2205.12487#bib.bib11)) | ✗ | ✓ | ✗ | ✓ | ✓ |
| CLAIMDECOMP (Chen et al., [2022](https://arxiv.org/html/2205.12487#bib.bib10)) | ✓ | ✗ | ✓ | ✓ | ✓ |
| MultiFC (Augenstein et al., [2019](https://arxiv.org/html/2205.12487#bib.bib6)) | ✓ | ✗ | ✗ | ✓ | ✗ |
| FEVEROUS (Aly et al., [2021](https://arxiv.org/html/2205.12487#bib.bib4)) | ✓ | ✗ | ✗ | ✓ | ✓ |
| Mocheg (Ours) | ✓ | ✓ | ✓ | ✓ | ✓ |

Table 1. Comparison between Mocheg and other related datasets. The columns indicate whether each dataset requires automatic evidence retrieval, multimodal reasoning, or explanation generation, and whether its labels and evidence are annotated by humans.
To tackle these challenges, we propose end-to-end multimodal fact-checking and explanation generation, where the input consists of a claim and a large collection of web sources, including articles, images, and tweets, and the goal is to automatically retrieve information sources relevant to the claim (Evidence Retrieval), predict the truthfulness of the claim based on the relevant evidence (Claim Verification), and generate a textual explanation of the reasoning and ruling process (Explanation Generation). An example (from [https://www.politifact.com/factchecks/2021/may/13/andrew-clyde/ridiculous-claim-those-capitol-jan-6-resembled-nor/](https://www.politifact.com/factchecks/2021/may/13/andrew-clyde/ridiculous-claim-those-capitol-jan-6-resembled-nor/)) is shown in Figure [1](https://arxiv.org/html/2205.12487#S1.F1). To support this research, we introduce Mocheg, a new benchmark dataset with 15,601 claims annotated with truthfulness labels, multimodal evidence, and ruling statements, along with a large collection of web articles and images as evidence sources. To set up baseline performance, we explore state-of-the-art pre-trained vision-language models for multimodal evidence retrieval, claim verification, and explanation generation. Experimental results show that there is still huge room for improvement on this end-to-end multimodal fact-checking and explanation generation task. Overall, the contributions of our work are as follows:

*   To the best of our knowledge, this is the first study to investigate the end-to-end multimodal fact-checking and explanation generation task.

*   We create the first benchmark dataset for end-to-end multimodal fact-checking and explanation generation. The baseline performance of state-of-the-art language models demonstrates that the task is still challenging, with large room for improvement.
2. Related work
---------------

#### Multimodal Fake News Detection and Fact-checking:

Most previous benchmark datasets (Wang, [2017](https://arxiv.org/html/2205.12487#bib.bib71); Alhindi et al., [2018](https://arxiv.org/html/2205.12487#bib.bib3); Aly et al., [2021](https://arxiv.org/html/2205.12487#bib.bib4); Thorne et al., [2018](https://arxiv.org/html/2205.12487#bib.bib65); Hanselowski et al., [2019](https://arxiv.org/html/2205.12487#bib.bib26); Kotonya and Toni, [2020](https://arxiv.org/html/2205.12487#bib.bib36); Augenstein et al., [2019](https://arxiv.org/html/2205.12487#bib.bib6); Shahi and Nandini, [2020](https://arxiv.org/html/2205.12487#bib.bib58)) for fake news detection and fact-checking are mainly based on text. As information is naturally multimodal, recent studies have started to take images (Boididou et al., [2015](https://arxiv.org/html/2205.12487#bib.bib9); Zlatkova et al., [2019](https://arxiv.org/html/2205.12487#bib.bib79); Shu et al., [2020](https://arxiv.org/html/2205.12487#bib.bib59); Nakamura et al., [2019](https://arxiv.org/html/2205.12487#bib.bib44); Jindal et al., [2020](https://arxiv.org/html/2205.12487#bib.bib31); Reis et al., [2020](https://arxiv.org/html/2205.12487#bib.bib56); Fung et al., [2021](https://arxiv.org/html/2205.12487#bib.bib19); Raj and Meel, [2022](https://arxiv.org/html/2205.12487#bib.bib52)) and videos (Papadopoulou et al., [2018b](https://arxiv.org/html/2205.12487#bib.bib49); Rayar et al., [2022](https://arxiv.org/html/2205.12487#bib.bib53); Micallef et al., [2022](https://arxiv.org/html/2205.12487#bib.bib41)) into consideration. Many methods for multimodal fake news detection are based on cross-modality consistency checking (Tan et al., [2020](https://arxiv.org/html/2205.12487#bib.bib64); Zhou et al., [2020](https://arxiv.org/html/2205.12487#bib.bib77); Song et al., [2021a](https://arxiv.org/html/2205.12487#bib.bib61); Wang et al., [2021](https://arxiv.org/html/2205.12487#bib.bib72); Abdelnabi et al., [2022](https://arxiv.org/html/2205.12487#bib.bib2); Roy and Ekbal, [2021](https://arxiv.org/html/2205.12487#bib.bib57)) or on computing a fused representation of multimodal (textual + visual) information for final classification (Khattar et al., [2019](https://arxiv.org/html/2205.12487#bib.bib34); Song et al., [2021b](https://arxiv.org/html/2205.12487#bib.bib62); Wang et al., [2022](https://arxiv.org/html/2205.12487#bib.bib69); Kamboj et al., [2020](https://arxiv.org/html/2205.12487#bib.bib32)). (Reis et al., [2020](https://arxiv.org/html/2205.12487#bib.bib56); Zlatkova et al., [2019](https://arxiv.org/html/2205.12487#bib.bib79); Nakamura et al., [2019](https://arxiv.org/html/2205.12487#bib.bib44)) directly predict the truthfulness of multimodal claims without considering explicit evidence. (Mishra et al., [2022](https://arxiv.org/html/2205.12487#bib.bib42); Nielsen and McConville, [2022](https://arxiv.org/html/2205.12487#bib.bib46); Abdelnabi et al., [2022](https://arxiv.org/html/2205.12487#bib.bib2)) are the most related to our work in that they consider explicit multimodal evidence. However, their labels or evidence are automatically generated without human validation, whereas our labels and evidence are annotated by fact-checking journalists, and we further provide the journalists' explanations of the truthfulness rulings. Compared with all these studies, our Mocheg is designed for end-to-end multimodal fact-checking and explanation generation, which requires systems to automatically retrieve multimodal evidence, predict the truthfulness of each claim, and generate a ruling statement to explain the reasoning and ruling process. Table [1](https://arxiv.org/html/2205.12487#S1.T1) compares Mocheg with the aforementioned datasets.
#### Explainable Fact-Checking:

Providing explanations for model predictions helps humans understand the truthfulness of claims (Guo et al., [2022](https://arxiv.org/html/2205.12487#bib.bib23); Uscinski and Butler, [2013b](https://arxiv.org/html/2205.12487#bib.bib67), [a](https://arxiv.org/html/2205.12487#bib.bib66); Gurrapu et al., [2022](https://arxiv.org/html/2205.12487#bib.bib24), [2023](https://arxiv.org/html/2205.12487#bib.bib25)). Current explainable fact-checking studies can be divided into four categories. The first directly takes the evidence used for claim verification as the explanation (Thorne et al., [2018](https://arxiv.org/html/2205.12487#bib.bib65); Alhindi et al., [2018](https://arxiv.org/html/2205.12487#bib.bib3); Hanselowski et al., [2019](https://arxiv.org/html/2205.12487#bib.bib26); Fan et al., [2020](https://arxiv.org/html/2205.12487#bib.bib17)). However, the evidence usually consists of several individual sentences extracted from a large collection of documents, which are not logically connected and thus can be hard for humans to interpret. The second incorporates external knowledge graphs to compute a set of semantic traces starting from the claim (Gad-Elrab et al., [2019](https://arxiv.org/html/2205.12487#bib.bib20)); the semantic traces then serve as explanations to justify the truthfulness of the claims. The third generates questions based on claims and links the claims and evidence by using these questions as a proxy (Yang et al., [2022](https://arxiv.org/html/2205.12487#bib.bib74); Chen et al., [2022](https://arxiv.org/html/2205.12487#bib.bib10); Dai et al., [2022](https://arxiv.org/html/2205.12487#bib.bib12)). Although these generated questions can improve explainability, they can be similar to each other or less relevant because claims are normally short. The fourth applies natural language generation to produce a paragraph describing the reasoning process (Atanasova et al., [2020](https://arxiv.org/html/2205.12487#bib.bib5); Kotonya and Toni, [2020](https://arxiv.org/html/2205.12487#bib.bib36); Kazemi et al., [2021](https://arxiv.org/html/2205.12487#bib.bib33); Zhang et al., [2021](https://arxiv.org/html/2205.12487#bib.bib76); Stammbach and Ash, [2020](https://arxiv.org/html/2205.12487#bib.bib63)), which is the most interpretable for humans. Previous studies usually summarize fact-checking articles written by journalists into shorter paragraphs as explanations. In stark contrast, our work generates explanations based on evidence that is automatically retrieved from the web, which is more realistic in practice. In addition, in our end-to-end multimodal setting, the system needs to sequentially or jointly perform all three subtasks: multimodal evidence retrieval, multimodal claim verification, and multimodal explanation generation.
3. Dataset Construction
-----------------------

### 3.1. Data Source

PolitiFact and Snopes are two widely used websites for fighting the spread of misinformation, where journalists manually check and verify each claim and write a ruling article to share their judgment. Considering this, we use these two websites as our data sources. (We have obtained permission from both Snopes and PolitiFact to publish the data for research purposes.) Specifically, we develop scripts based on (Hanselowski et al., [2019](https://arxiv.org/html/2205.12487#bib.bib26)) to collect all the necessary information from the two websites, including the claims, which are purely text-based; the truthfulness labels; the text and/or image evidence that journalists extract from external articles to help determine the truthfulness of the claims; the evidence references, which link to the external articles/images containing the text and image evidence; and the ruling articles, which explain and justify the truthfulness of the claims and can be viewed as short summaries of the various evidence. Note that the claims were originally collected manually by the journalists of the two websites from many sources, e.g., online speeches, public statements, news articles, and social media platforms such as Facebook, Twitter, Instagram, and TikTok. The truthfulness labels, evidence, evidence references, and ruling articles are also manually provided by fact-checkers of the two websites. (We illustrate the detailed fact-checking process of Snopes and PolitiFact in Section [8](https://arxiv.org/html/2205.12487#S8).)

Based on the evidence references, we further develop scripts to collect the articles and images that contain the evidence. Since the evidence references link to thousands of websites with distinct HTML templates, we utilize Boilerpipe (Kohlschütter et al., [2010](https://arxiv.org/html/2205.12487#bib.bib35)) to extract the text, newspaper ([https://newspaper.readthedocs.io/en/latest/](https://newspaper.readthedocs.io/en/latest/)) to obtain all image links contained in the webpages, and urllib ([https://docs.python.org/3/library/urllib.html](https://docs.python.org/3/library/urllib.html)) to download the images. Some evidence references link to Twitter. To collect them, we first extract the Tweet IDs from the URLs of the evidence references and then apply the Twitter API ([https://developer.twitter.com/en/docs/api-reference-index](https://developer.twitter.com/en/docs/api-reference-index)) to collect the text and images from the corresponding Tweets.
### 3.2. Data Preprocessing

Since fact-checking websites adjust their labels over time, the initial data contains more than 75 truthfulness labels, some of which overlap with each other, such as “True”, “TRUE”, and “Status: True.” Some labels also have only a few instances; for example, the label “Labeled Satire” has only 23 instances in total. Considering this, we follow (Hanselowski et al., [2019](https://arxiv.org/html/2205.12487#bib.bib26)) and map 68 of these labels into three general categories: Supported, Refuted, and NEI (Not Enough Information). We remove the claims annotated with other labels, so that each claim is assigned exactly one of the three target labels. The initial dataset also contains many advertisement images. To clean the dataset, we design several rules: (1) remove an image if its name contains any of the keywords “-ad-”, “logo”, “.gif”, “.ico”, “lazyload”, “.cgi”, “Logo”, “.php”, “icon”, “Bubble”, “svg”, “rating-false”, “rating-true”, “banner”, or “-line”, or if its size is smaller than 400 × 400; (2) remove a claim if we cannot crawl any evidence or the ruling article; (3) for each ruling article, there is usually a paragraph starting with “Our ruling” or “In sum” that summarizes the whole ruling and reasoning process, so we use this paragraph as the target explanation. As a result, we collect 15,601 claims with 33,880 pieces of text evidence, where each piece of text evidence is an individual paragraph extracted from a particular evidence reference article, and 12,112 pieces of image evidence. (Among the 15,601 claims, 19% have tweets as evidence, while the remaining 81% only use other sources such as news articles or government reports as evidence. Note that the image and text evidence may come from separate sources with no clear association.) Based on the evidence references, we finally collect 91,822 articles and 122,246 images, which are combined to form a constant collection of web resources for the evidence retrieval task. Within this collection, only 30% (27,566 out of 91,822) of the articles and 10% (12,112 out of 122,246) of the images contain evidence for the claims, making the evidence retrieval task realistic and challenging.
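Rule (1) amounts to a simple keyword-and-size filter. The sketch below illustrates it in Python; the function name and the use of Pillow to read image dimensions are our own illustrative choices, not the released preprocessing code.

```python
from PIL import Image

# Keywords from rule (1); an image whose file name contains any of these is dropped.
AD_KEYWORDS = ["-ad-", "logo", ".gif", ".ico", "lazyload", ".cgi", "Logo", ".php",
               "icon", "Bubble", "svg", "rating-false", "rating-true", "banner", "-line"]

def keep_image(path: str) -> bool:
    """Return False for advertisement-like images or images smaller than 400 x 400."""
    if any(keyword in path for keyword in AD_KEYWORDS):
        return False
    width, height = Image.open(path).size
    return width >= 400 and height >= 400
```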
### 3.3. Task Definition

We name the dataset Mocheg and propose End-to-End Multimodal Fact-Checking and Explanation Generation, with three subtasks. (The end-to-end setting means the system starts with only the claim and goes through evidence retrieval, claim verification, and explanation generation, which is almost the complete pipeline a journalist follows when fact-checking in real life. Note that we do not consider claim extraction as a subtask, as all input claims are considered worthy of being checked.)
Task 1. Multimodal Evidence Retrieval: Given a claim and a collection of web sources containing both documents and images, the evidence retrieval task is to determine which paragraphs from the documents and which images are related to the claim and can be further used to determine its truthfulness.

![Figure 2](https://arxiv.org/html/x2.png)

Figure 2. Overview of the framework, consisting of a text evidence retrieval module (top left), an image evidence retrieval module (bottom left), a claim verification module (bottom right), and an explanation generation module (top right).

Task 2. Multimodal Claim Verification: Based on the text and image evidence retrieved in Task 1, the multimodal claim verification task is to predict the truthfulness (Supported, Refuted, or NEI) of the claim. As both the input claim and the retrieved evidence may contain text and images, this task requires cross-modal reasoning.

Task 3. Explanation Generation: Given an input claim, the evidence retrieved in Task 1, and the truthfulness predicted in Task 2, the explanation generation task aims to generate a paragraph that summarizes the evidence based on the predicted truthfulness label and explains the ruling process.
### 3.4. Train / Dev / Test Split

We split the whole dataset into training (Train), development (Dev), and test (Test) sets with percentages of 75%, 10%, and 15%, respectively. Table [2](https://arxiv.org/html/2205.12487#S3.T2) shows the detailed statistics for each split.

| Data | Train | Dev | Test |
| --- | --- | --- | --- |
| # Claims | 11,669 | 1,490 | 2,442 |
| Ave. # Tokens in Claim | 20 | 20 | 21 |
| Max. # Tokens in Claim | 81 | 77 | 89 |
| # Text Evidence (Paragraphs) | 23,545 | 4,067 | 6,268 |
| # Image Evidence | 8,927 | 1,178 | 2,007 |
| # Refuted Labels | 4,542 | 488 | 825 |
| # Supported Labels | 3,826 | 501 | 817 |
| # NEI Labels | 3,301 | 501 | 800 |
| Ave. # Tokens in Explanation | 132 | 90 | 105 |
| Max. # Tokens in Explanation | 600 | 521 | 600 |
| # Documents / Images in Collection | 91,822 / 122,246 (shared across splits) | | |

Table 2. Dataset statistics of Mocheg.
4. Approach
-----------

To establish baseline performance on Mocheg, we design a framework for end-to-end multimodal fact-checking and explanation generation. As illustrated in Figure [2](https://arxiv.org/html/2205.12487#S3.F2), it consists of three components for the corresponding subtasks.

### 4.1. Evidence Retrieval

To solve this task, we apply two baseline models to retrieve text and image evidence separately.
#### Text Evidence Retrieval:

The top left of Figure [2](https://arxiv.org/html/2205.12487#S3.F2) illustrates the approach for text evidence retrieval. Given an input claim and a document corpus, we first split each document into sentences and then apply SBERT (Sentence-BERT) (Reimers and Gurevych, [2019](https://arxiv.org/html/2205.12487#bib.bib54), [2021](https://arxiv.org/html/2205.12487#bib.bib55)) to encode the input claim and each sentence from the document corpus into contextual representations, from which we compute a cosine similarity score for each claim-sentence pair. Based on these similarity scores, we rank all the sentences and select the top-1000 as candidate evidence. We fine-tune SBERT with the following InfoNCE loss (Van den Oord et al., [2018](https://arxiv.org/html/2205.12487#bib.bib68)):

$$\mathcal{L}(C_{i},T^{p},\mathcal{T})=-\log\frac{\exp(\text{cosine}(\boldsymbol{C}_{i},\boldsymbol{T}^{p}))}{\sum_{T_{j}\in\mathcal{T}}\exp(\text{cosine}(\boldsymbol{C}_{i},\boldsymbol{T}_{j}))}$$

where $T^{p}$ is a piece of positive evidence for a claim $C_{i}$, and $\mathcal{T}$ contains $T^{p}$ and a set of negative evidence for $C_{i}$. For each claim, we use the evidence of the other claims in the same batch as the negatives. (In Mocheg, only 37 sentences are labeled as positive evidence for two different claims, so the probability of a text being positive evidence for two claims in the same batch is very low.) $\boldsymbol{C}_{i}$, $\boldsymbol{T}^{p}$, and $\boldsymbol{T}_{j}$ are the sentence-level representations encoded by SBERT. In this work, we use bold symbols to denote vector representations.
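This in-batch negative scheme corresponds to a standard contrastive setup. As a minimal sketch (not the authors' released code), the fine-tuning could be written with the `sentence-transformers` library, whose `MultipleNegativesRankingLoss` treats the other pairs in a batch as negatives; the base checkpoint name and hyperparameters below are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# (claim, positive_evidence_sentence) pairs from the Mocheg training split.
pairs = [("Some claim text ...", "A paragraph that serves as its positive evidence ...")]

model = SentenceTransformer("all-mpnet-base-v2")  # base checkpoint: our assumption
train_data = [InputExample(texts=[claim, evidence]) for claim, evidence in pairs]
loader = DataLoader(train_data, shuffle=True, batch_size=32)

# In-batch InfoNCE: the positives of the other claims in the batch act as negatives.
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```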
We further apply a re-ranking model based on BERT (Devlin et al., [2018](https://arxiv.org/html/2205.12487#bib.bib13)), which encodes each pair of the input claim and a candidate evidence sentence and outputs a score through a linear classification layer. Based on these scores, we re-rank all the candidate evidence and select the top-$K$ as the text evidence. The BERT-based re-ranking model is pre-trained on the MS MARCO Passage Ranking dataset (Bajaj et al., [2016](https://arxiv.org/html/2205.12487#bib.bib7)), which is designed for text retrieval.
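A sketch of this re-ranking step, assuming a cross-encoder checkpoint pre-trained on MS MARCO (the specific model name is our assumption):

```python
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # MS MARCO re-ranker

def rerank(claim: str, candidates: list[str], k: int = 5) -> list[str]:
    """Score each (claim, sentence) pair and keep the top-k sentences."""
    scores = reranker.predict([(claim, sentence) for sentence in candidates])
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [sentence for _, sentence in ranked[:k]]
```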
#### Image Evidence Retrieval:

As shown in the bottom left of Figure [2](https://arxiv.org/html/2205.12487#S3.F2), given an input claim and the image corpus, we use CLIP (Radford et al., [2021](https://arxiv.org/html/2205.12487#bib.bib51)) as the encoder to learn an overall representation for the claim and a representation for each image, and then compute the cosine similarity between each image and the input claim. We sort all the images in the corpus by cosine similarity and take the top-$K$ as the candidate image evidence. We fine-tune CLIP with the same InfoNCE loss as for text evidence retrieval. Note that, during inference, we always retrieve the top-$K$ text and image evidence, even though the background corpus may contain no text or image evidence for a given claim.
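A hedged sketch of this scoring step with the Hugging Face CLIP implementation (the checkpoint choice is an assumption):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")  # checkpoint: assumption
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def top_k_images(claim: str, image_paths: list[str], k: int = 5) -> list[str]:
    """Rank candidate images by cosine similarity to the claim and keep the top-k."""
    images = [Image.open(path) for path in image_paths]
    inputs = processor(text=[claim], images=images, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    image_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    sims = (image_emb @ text_emb.T).squeeze(-1)            # cosine similarities
    top = sims.topk(k=min(k, len(image_paths))).indices
    return [image_paths[i] for i in top]
```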
### 4.2. Claim Verification

Based on the text and image evidence, we further design a claim verification approach to predict the truthfulness of each input claim, shown in the bottom right of Figure [2](https://arxiv.org/html/2205.12487#S3.F2).

#### Encoding with CLIP:

We formulate an input claim as $C=\{c_{0},c_{1},\ldots,c_{n}\}$, a piece of text evidence as $T_{i}=\{t_{i0},t_{i1},\ldots,t_{is}\}$, and a piece of image evidence as $M_{j}=\{m_{j0},m_{j1},\ldots,m_{jq}\}$, where $c_{k}$ denotes the $k$-th token of the claim, $t_{ik}$ is the $k$-th token of the $i$-th text evidence $T_{i}$, and $m_{jk}$ is the $k$-th patch of the $j$-th image evidence $M_{j}$. Given a claim $C$, its text evidence $\{T_{0},T_{1},\ldots\}$, and its image evidence $\{M_{0},M_{1},\ldots\}$, we concatenate them into an overall sequence $\{C,T_{0},T_{1},\ldots,M_{0},M_{1},\ldots\}$ and feed it into CLIP to obtain their contextual representations:

$$\boldsymbol{H}_{C}=\{\boldsymbol{h}_{c_{0}},\boldsymbol{h}_{c_{1}},\ldots,\boldsymbol{h}_{c_{n}}\},\quad\boldsymbol{H}_{T_{i}}=\{\boldsymbol{h}_{t_{i0}},\boldsymbol{h}_{t_{i1}},\ldots,\boldsymbol{h}_{t_{is}}\},\quad\boldsymbol{H}_{M_{j}}=\{\boldsymbol{h}_{m_{j0}},\boldsymbol{h}_{m_{j1}},\ldots,\boldsymbol{h}_{m_{jq}}\}.$$
#### Stance detection:

We then pair each piece of evidence with the input claim and detect the stance of the evidence towards the claim. As Figure [3](https://arxiv.org/html/2205.12487#S4.F3) describes, taking text evidence as an example, we first compute cross attention between the claim and the evidence, using $\boldsymbol{H}_{C}=\{\boldsymbol{h}_{c_{0}},\boldsymbol{h}_{c_{1}},\ldots,\boldsymbol{h}_{c_{n}}\}$ as the query and $\boldsymbol{H}_{T_{i}}=\{\boldsymbol{h}_{t_{i0}},\boldsymbol{h}_{t_{i1}},\ldots,\boldsymbol{h}_{t_{is}}\}$ as the key and value, and obtain an updated claim representation $\boldsymbol{H}_{T_{i}2C}=\{\boldsymbol{h}_{\tilde{c}_{0}},\boldsymbol{h}_{\tilde{c}_{1}},\ldots,\boldsymbol{h}_{\tilde{c}_{n}}\}$, where $\boldsymbol{h}_{\tilde{c}_{i}}$ is defined by:

$$\boldsymbol{h}_{\tilde{c}_{i}}=\text{Softmax}(\boldsymbol{h}_{c_{i}}\cdot\boldsymbol{H}^{\top}_{T_{i}})\cdot\boldsymbol{H}_{T_{i}}$$

We then fuse the updated claim representation $\boldsymbol{H}_{T_{i}2C}$ with the original representation $\boldsymbol{H}_{C}$ using two arithmetic operations, subtraction ($-$) and element-wise multiplication ($*$), which work best as comparison functions in (Wang and Jiang, [2016](https://arxiv.org/html/2205.12487#bib.bib70)), and obtain the stance representation $\boldsymbol{G}_{T_{i}2C}$ of evidence $T_{i}$ towards the claim $C$ via max pooling:

$$\boldsymbol{\tilde{G}}_{T_{i}2C}=\sigma([\boldsymbol{H}_{T_{i}2C}*\boldsymbol{H}_{C}:\boldsymbol{H}_{T_{i}2C}-\boldsymbol{H}_{C}]\cdot\boldsymbol{W}_{a}+\boldsymbol{b}_{a}),$$
$$\boldsymbol{G}_{T_{i}2C}=\text{Max\_Pooling}(\boldsymbol{\tilde{G}}_{T_{i}2C}),$$

where $[:]$ denotes the concatenation operation, $\boldsymbol{W}_{a}$ and $\boldsymbol{b}_{a}$ are learnable parameters for aggregating the representations, and $\sigma$ denotes a LeakyReLU activation function.
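A minimal PyTorch sketch of this stance module, under the assumption of a shared hidden size `d` for both modalities (the class and variable names are ours):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StanceDetector(nn.Module):
    """Sketch: cross attention from claim tokens to evidence tokens, fusion by
    element-wise product and difference, then max pooling over claim tokens."""
    def __init__(self, d: int = 512):
        super().__init__()
        self.aggregate = nn.Linear(2 * d, d)  # W_a, b_a
        self.activation = nn.LeakyReLU()

    def forward(self, H_C: torch.Tensor, H_T: torch.Tensor) -> torch.Tensor:
        # H_C: (n, d) claim token representations; H_T: (s, d) evidence tokens/patches
        attention = F.softmax(H_C @ H_T.T, dim=-1)        # (n, s) claim-to-evidence
        H_T2C = attention @ H_T                           # (n, d) updated claim repr.
        fused = torch.cat([H_T2C * H_C, H_T2C - H_C], dim=-1)
        G_tilde = self.activation(self.aggregate(fused))  # (n, d)
        return G_tilde.max(dim=0).values                  # (d,) stance representation
```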
![Figure 3](https://arxiv.org/html/x3.png)

Figure 3. Stance Detection

#### Prediction:

As each claim can have multiple pieces of text and image evidence, we further average the stance representations of all text evidence and all image evidence, respectively, to obtain $\boldsymbol{G}_{T2C}=\text{Mean\_Pooling}(\boldsymbol{G}_{T_{i}2C})$ and $\boldsymbol{G}_{M2C}=\text{Mean\_Pooling}(\boldsymbol{G}_{M_{j}2C})$. We then concatenate the overall stance representations $\boldsymbol{G}_{T2C}$ and $\boldsymbol{G}_{M2C}$ from both modalities to predict the truthfulness label, and optimize the claim verification approach with a cross-entropy objective. (Since the evidence in our corpus is annotated by journalists on PolitiFact and Snopes, we assume the evidence is reliable and fuse the stance of the evidence towards the claim to predict truthfulness; we leave checking the trustworthiness of evidence as future work.)

$$\boldsymbol{\hat{y}}_{cls}=\boldsymbol{W}_{h}^{\top}\cdot[\boldsymbol{G}_{T2C}:\boldsymbol{G}_{M2C}]+\boldsymbol{b}_{h},$$
$$\mathcal{L}(y_{i}|C)=-\log\frac{\exp(\boldsymbol{\hat{y}}_{cls,i})}{\sum_{j=0}^{2}\exp(\boldsymbol{\hat{y}}_{cls,j})}$$

where $\boldsymbol{\hat{y}}_{cls}$ denotes the probabilities over all possible labels and $y_{i}$ is the truthfulness label of claim $C$. During training, we fix the parameters of CLIP while tuning all the other parameters.
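Continuing the sketch above (and reusing the hypothetical `StanceDetector`), the prediction layer could be implemented as follows; whether the stance module is shared between modalities is our simplification:

```python
class ClaimVerifier(nn.Module):
    """Sketch: mean-pool per-evidence stance vectors for each modality,
    concatenate, and classify into {Supported, Refuted, NEI}."""
    def __init__(self, d: int = 512, num_labels: int = 3):
        super().__init__()
        self.stance = StanceDetector(d)
        self.classifier = nn.Linear(2 * d, num_labels)  # W_h, b_h

    def forward(self, H_C, text_evidence, image_evidence):
        # text_evidence / image_evidence: lists of (length_i, d) CLIP representations
        G_T2C = torch.stack([self.stance(H_C, H_T) for H_T in text_evidence]).mean(dim=0)
        G_M2C = torch.stack([self.stance(H_C, H_M) for H_M in image_evidence]).mean(dim=0)
        return self.classifier(torch.cat([G_T2C, G_M2C], dim=-1))

# Training: cross-entropy on the logits, with the CLIP encoder kept frozen, e.g.
# loss = nn.CrossEntropyLoss()(verifier(H_C, texts, images).unsqueeze(0), gold_label)
```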
### 4.3. Explanation Generation

To justify the truthfulness prediction, we further generate a ruling statement conditioned on the input claim, the predicted truthfulness label, and the text evidence. The top right of Figure [2](https://arxiv.org/html/2205.12487#S3.F2) illustrates the overall architecture for explanation generation.

Specifically, given an input claim $C$, its truthfulness label $y_{C}$, and text evidence $\{T_{1},T_{2},\ldots\}$, we concatenate them into an overall sequence $X$ with the separator </s>. We then feed this sequence into BART (Lewis et al., [2019](https://arxiv.org/html/2205.12487#bib.bib38)), a state-of-the-art pre-trained sequence-to-sequence model, to generate a ruling statement $S=\{s_{1},s_{2},\ldots,s_{q}\}$. During training, we use the gold truthfulness label of each claim as input; during evaluation, we use the label predicted by the claim verification model. The training objective is to minimize the following negative log-likelihood based on the gold ruling statement $\tilde{S}=\{\tilde{s}_{1},\tilde{s}_{2},\ldots,\tilde{s}_{q}\}$:

$$\mathcal{L}_{g}=-\sum_{i}\log p(\tilde{s}_{i}\,|\,\tilde{s}_{1:i-1},X;\phi)$$
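As a hedged sketch of this supervised stage with the Hugging Face BART implementation (the checkpoint, exact input formatting, and generation settings are our assumptions; the paper only specifies the </s> separator):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")  # checkpoint: assumption
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Concatenate claim, truthfulness label, and text evidence with </s> separators.
source = " </s> ".join([claim, label] + text_evidence)
batch = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
targets = tokenizer(gold_ruling_statement, return_tensors="pt", truncation=True)

# Teacher-forced negative log-likelihood L_g.
loss = model(**batch, labels=targets.input_ids).loss

# Inference: generate the ruling statement with beam search.
ids = model.generate(batch.input_ids, num_beams=4, max_length=200)
explanation = tokenizer.decode(ids[0], skip_special_tokens=True)
```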
To ensure that the generated ruling statement is consistent with the truthfulness label of the claim, we apply a truthfulness reward and optimize the generation model with reinforcement learning (RL) (Lai et al., [2021](https://arxiv.org/html/2205.12487#bib.bib37)). Specifically, we pre-train a truthfulness classification model based on BERT (Devlin et al., [2019](https://arxiv.org/html/2205.12487#bib.bib14)), which takes the ruling statement as input and outputs a confidence score for each truthfulness label. We use the difference between the confidence score of the correct label and the scores of the wrong labels as the reward $R_{cls}$:

$$R_{cls}=\boldsymbol{p}(y_{C}|S)-\sum_{y_{j}\neq y_{C},\,y_{j}\in Y}\boldsymbol{p}(y_{j}|S),$$
$$\boldsymbol{p}(y|S)=\text{Softmax}(\text{BERT}_{\theta}(S)),$$

where $y_{C}$ is the gold truthfulness label of $C$, $Y$ is the target label set, and $S$ is the generated ruling statement.

We then apply the reward $R_{cls}$ for policy learning, and the policy gradient is computed as:

$$\nabla_{\phi}\mathcal{J}(\phi)=\mathbb{E}\Big[R_{cls}\cdot\nabla_{\phi}\sum_{i}\log\boldsymbol{p}(s_{i}|s_{1:i-1},X;\phi)\Big],$$

where $X$ is the concatenated sequence of the input claim, its truthfulness label, and the text evidence, and $\phi$ denotes the model parameters.
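Continuing the sketch above, the reward and a REINFORCE-style update could look like the following; the classifier checkpoint and sampling settings are assumptions, and the released implementation may differ:

```python
import torch
import torch.nn.functional as F
from transformers import BertForSequenceClassification, BertTokenizer

clf_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # assumption
classifier = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

def truthfulness_reward(statement: str, gold_label: int) -> torch.Tensor:
    """R_cls: confidence of the gold label minus the summed confidence of the others."""
    inputs = clf_tokenizer(statement, return_tensors="pt", truncation=True)
    probs = F.softmax(classifier(**inputs).logits, dim=-1)[0]
    return probs[gold_label] - (probs.sum() - probs[gold_label])

# Sample a statement, score it, and scale its negative log-likelihood by the
# (detached) reward: minimizing reward * NLL follows the policy gradient above.
sample_ids = model.generate(batch.input_ids, do_sample=True, max_length=200)
sampled_text = tokenizer.decode(sample_ids[0], skip_special_tokens=True)
reward = truthfulness_reward(sampled_text, gold_label).detach()
rl_loss = reward * model(batch.input_ids, labels=sample_ids).loss
rl_loss.backward()
```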
5. Experiments
--------------

### 5.1. Evidence Retrieval

For each claim, we retrieve the top-$K$ text and image evidence from the corresponding text and image corpus and evaluate retrieval performance with Precision, Recall, NDCG (Järvelin and Kekäläinen, [2002](https://arxiv.org/html/2205.12487#bib.bib30)), MAP (Mean Average Precision), and S-Recall (Similarity-based Recall). S-Recall first computes a recall score for each piece of gold text or image evidence as its highest cosine similarity to any retrieved text or image evidence, where each piece of evidence is represented by a vector learned from SBERT or CLIP; the average recall over all gold evidence gives the S-Recall.
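A small sketch of S-Recall as described (function and variable names are ours):

```python
import numpy as np

def s_recall(gold_embeddings: np.ndarray, retrieved_embeddings: np.ndarray) -> float:
    """For each gold evidence vector, take its highest cosine similarity to any
    retrieved item, then average over all gold evidence. Embeddings are assumed
    to come from SBERT (text) or CLIP (images)."""
    gold = gold_embeddings / np.linalg.norm(gold_embeddings, axis=1, keepdims=True)
    retrieved = retrieved_embeddings / np.linalg.norm(retrieved_embeddings, axis=1, keepdims=True)
    return float((gold @ retrieved.T).max(axis=1).mean())
```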
| Media | K | Rec@K | Pre@K | NDCG | MAP | S-Rec |
| --- | --- | --- | --- | --- | --- | --- |
| Image | 5 | 17.01 | 4.71 | 13.81 | 11.93 | 68.22 |
| Image | 10 | 21.44 | 3.02 | 15.32 | 12.58 | 71.85 |
| Text w/o Re-ranking | 5 | 15.67 | 12.20 | 19.23 | 13.61 | 52.42 |
| Text w/o Re-ranking | 10 | 19.40 | 8.16 | 19.60 | 13.02 | 55.77 |
| Text | 5 | 19.72 | 14.92 | 23.66 | 14.34 | 54.57 |
| Text | 10 | 23.99 | 9.79 | 24.09 | 15.34 | 58.28 |

Table 3. Performance of text and image evidence retrieval (%). Pre denotes Precision and Rec denotes Recall.
We show the performance of text and image evidence retrieval on the Mocheg test set in Table [3](https://arxiv.org/html/2205.12487#S5.T3). The performance of both image and text evidence retrieval is low, indicating the difficulty of both tasks. Taking text evidence retrieval as an example, the model needs to retrieve, on average, 2 pieces of text evidence per claim from a collection of 2,792,639 sentences, which is very challenging. Moreover, the proposed evidence retrieval is based on semantic matching, but in many cases it is more important to find evidence that is relevant to the claim yet describes different aspects of it or argues against it, especially for refuted claims. For example, given the input claim “H.R. 6666 provides $100 billion to entities that perform COVID-19 testing but prohibits them from allowing any non-vaccinated persons into their facilities,” the retrieval model missed an important piece of evidence, “No provision in this bill would make testing or quarantining mandatory,” which argues against the claim and has a lower similarity score than the retrieved text “It would provide $100 billion to organizations that do COVID-19 testing or contact tracing or that provide services to people who are isolated at home.” In addition, for many claims, the evidence comes from comprehending long paragraphs rather than a few individual sentences. Although our approach successfully retrieves several relevant sentences, they are insufficient to cover all the background and indicate the truthfulness of the claims.
### 5.2. Claim Verification

For claim verification, we first design two common baselines: (1) Majority Label, which predicts the majority label in the training set (i.e., Refuted) for all claims in the test set; and (2) Average Similarity, which computes the average cosine similarity between the target claim and all the gold text and image evidence based on their CLIP embeddings. If the average similarity is higher than $\alpha_{1}\in\{0.5,0.6,0.7,0.75,0.8\}$, we predict Supported; if it is lower than $\alpha_{2}\in\{0.2,0.3,0.4,0.5,0.6,0.65,0.7\}$ with $\alpha_{2}<\alpha_{1}$, we predict Refuted; otherwise, we predict NEI. We search for the best values of $\alpha_{1}$ and $\alpha_{2}$ on the development set and then apply them to the test set. We then adapt Pre-CoFactv2 (Du et al., [2023](https://arxiv.org/html/2205.12487#bib.bib15)), a multimodal fact-checking model that achieves state-of-the-art results on the Factify 2 challenge (Mishra et al., [2023](https://arxiv.org/html/2205.12487#bib.bib43)) at AAAI 2023 ([https://aiisc.ai/defactify2/](https://aiisc.ai/defactify2/)), as a third baseline. As there is very little existing work on multimodal fact-checking, we further adapt SpotFakePlus (Singhal et al., [2020](https://arxiv.org/html/2205.12487#bib.bib60)), a multimodal fake news detection approach, to our fact-checking task by using their model to compare the consistency of the input claim and image evidence and adding a new component to check the consistency of the input claim and text evidence. (Most existing multimodal fake news detection studies detect fake news by comparing the consistency between news text and news images or between news articles and external knowledge graphs, and thus cannot be directly applied to the fact-checking task.) As shown in Table [4](https://arxiv.org/html/2205.12487#S5.T4), Majority Label and Average Similarity yield performance close to a random baseline, while Pre-CoFactv2 and SpotFakePlus underperform our approach, demonstrating that Mocheg does not contain label distribution bias and cannot be solved simply by comparing the semantics of claims and evidence.
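A sketch of the Average Similarity baseline (the threshold values shown are illustrative defaults; the actual values are selected on the development set):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def average_similarity_baseline(claim_emb, evidence_embs, alpha1=0.75, alpha2=0.65):
    """Predict from the mean cosine similarity between the claim's CLIP embedding
    and all gold evidence embeddings, using two tuned thresholds (alpha2 < alpha1)."""
    average = np.mean([cosine(claim_emb, e) for e in evidence_embs])
    if average > alpha1:
        return "Supported"
    if average < alpha2:
        return "Refuted"
    return "NEI"
```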
217
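+
+ For illustration, the Average Similarity baseline can be sketched as follows. The CLIP wrapper checkpoint (clip-ViT-B-32) and the default threshold values here are assumptions for readability; in our experiments, α₁ and α₂ are selected on the Development set.
+
+ ```python
+ # Sketch of the Average Similarity baseline (assumed CLIP wrapper checkpoint).
+ import torch
+ from PIL import Image
+ from sentence_transformers import SentenceTransformer, util
+
+ clip = SentenceTransformer("clip-ViT-B-32")  # embeds both text and images
+
+ def average_similarity_verdict(claim, text_evidence, image_paths,
+                                alpha1=0.75, alpha2=0.6):
+     """Predict Supported / Refuted / NEI from the average claim-evidence similarity."""
+     claim_emb = clip.encode(claim, convert_to_tensor=True)
+     text_emb = clip.encode(text_evidence, convert_to_tensor=True)
+     image_emb = clip.encode([Image.open(p) for p in image_paths],
+                             convert_to_tensor=True)
+     evidence_emb = torch.cat([text_emb, image_emb], dim=0)
+     avg_sim = util.cos_sim(claim_emb, evidence_emb).mean().item()
+     if avg_sim > alpha1:
+         return "Supported"
+     if avg_sim < alpha2:
+         return "Refuted"
+     return "NEI"
+ ```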
+
218
+ | Setting | F-score (%) |
+ | --- | --- |
219
+ | Majority Label | 33.78 |
220
+ | Average Similarity (Gold Evidence) | 32.72 |
221
+ | Pre-CoFactv2 (Du et al., [2023](https://arxiv.org/html/2205.12487#bib.bib15)) (Gold Evidence) | 47.17 |
222
+ | SpotFakePlus (Singhal et al., [2020](https://arxiv.org/html/2205.12487#bib.bib60)) (Gold Evidence) | 44.11 |
223
+ | w/o Evidence | 39.93 |
224
+ | w/ Text Evidence (Gold) | 47.54 |
225
+ | w/ Image Evidence (Gold) | 45.62 |
226
+ | w/ Text and Image Evidence (Gold) | 50.78 |
227
+ | w/ Text Evidence (System) | 42.79 |
228
+ | w/ Image Evidence (System) | 40.91 |
229
+ | w/ Text and Image Evidence (System) | 44.06 |
230
+ | Human w/o Evidence | 20.00 |
231
+ | Human w/ System Evidence | 62.00 |
232
+ | Human w/ Gold Evidence | 70.00 |
233
+
234
+ Table 4. Performance of claim verification. Gold Evidence denotes gold text and image evidence, while System Evidence denotes system-retrieved text and image evidence.
235
+
236
+ To evaluate the impact of each type of evidence on claim verification, we design ablated models of our approach that consider text evidence only, image evidence only, or no evidence. In addition, we compare performance based on system-retrieved evidence against gold evidence to show the impact of evidence retrieval. As shown in Table[4](https://arxiv.org/html/2205.12487#S5.T4 "Table 4 ‣ 5.2. Claim Verification ‣ 5. Experiments ‣ End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models"), even without any evidence, the model still outperforms the majority-label baseline on claim verification because some claims, such as “Paying taxes is optional!!,” contain obvious clues or contradict common sense, so the model can predict the truthfulness from the claim alone. Adding text and/or image evidence boosts claim verification performance, proving the usefulness of the evidence. Text evidence provides a more significant gain than image evidence for two reasons: (1) about 32% of the claims (787 out of 2,442) in the Test set only have text evidence without any associated image evidence, yet our approach always returns the top-5 most relevant images as evidence, which introduces noise; (2) text usually carries more information than images. However, we also observe many examples in which the image evidence complements the text evidence. For example, for claim #1, “A Boeing B-17E bomber from World War II was found in the jungle,” in Figure[4](https://arxiv.org/html/2205.12487#S5.F4 "Figure 4 ‣ 5.2. Claim Verification ‣ 5. Experiments ‣ End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models"), the image evidence plays a crucial role in confirming that the aircraft was found in the jungle.
237
+
238
+ ![Image 4: Refer to caption](https://arxiv.org/html/x4.png)
239
+
240
+ Figure 4. Examples of Multimodal Fact Checking. The Truthfulness column shows gold labels.
241
+
242
+ Finally, we also establish human performance for claim verification by randomly sampling 50 claims and asking two annotators to label their truthfulness given gold evidence, system evidence, or no evidence, which yields Fleiss κ scores (Fleiss, [1971](https://arxiv.org/html/2205.12487#bib.bib18)) of 0.67, 0.59, and 0.42, respectively. We count a human prediction as correct only if both annotators provide the correct label. As we can see, there is still a significant gap between machine and human performance.
243
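+
+ For reference, agreement of this kind can be computed with statsmodels; the ratings below are made-up toy values for illustration only.
+
+ ```python
+ # Toy illustration of Fleiss' kappa over two annotators' labels.
+ import numpy as np
+ from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa
+
+ # rows = claims, columns = annotators; 0 = Supported, 1 = Refuted, 2 = NEI
+ ratings = np.array([
+     [1, 1],
+     [0, 2],
+     [2, 2],
+     [1, 0],
+     [0, 0],
+ ])
+
+ # Convert per-annotator labels into a claims-by-categories count table.
+ table, _ = aggregate_raters(ratings)
+ print(f"Fleiss kappa: {fleiss_kappa(table):.2f}")
+ ```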
+
244
+ ### 5.3. Explanation Generation
245
+
246
+ | Setting | Model | ROUGE 1 | ROUGE 2 | ROUGE L | BLEU | BERTScore |
+ | --- | --- | --- | --- | --- | --- | --- |
247
+ | Gold Evidence | ORACLE | 40.22 | 23.80 | 25.97 | 20.03 | 86.82 |
248
+ | Gold Evidence | LEAD-3 | 32.10 | 16.97 | 22.17 | 8.41 | 86.77 |
249
+ | Gold Evidence | w/o Generation | 37.71 | 21.70 | 25.62 | 22.56 | 87.20 |
250
+ | System Evidence | w/o Generation | 28.69 | 9.93 | 17.18 | 7.38 | 83.95 |
251
+ | Gold Evidence + Gold Truthfulness | BART-large | 45.51 | 27.37 | 35.41 | 21.84 | 89.05 |
252
+ | Gold Evidence + System Truthfulness | BART-large | 43.87 | 26.37 | 34.10 | 20.86 | 88.87 |
253
+ | System Evidence + Gold Truthfulness | BART-large | 35.53 | 17.46 | 26.05 | 10.95 | 87.01 |
254
+ | System Evidence + System Truthfulness | BART-large | 33.88 | 16.51 | 24.83 | 10.08 | 86.95 |
255
+
256
+ Table 5. Performance of explanation generation (all scores in %).
257
+
258
+ We fine-tune BART based on a pre-trained bart-large checkpoint ([https://huggingface.co/facebook/bart-large](https://huggingface.co/facebook/bart-large)) (Wolf et al., [2019](https://arxiv.org/html/2205.12487#bib.bib73)) to generate the ruling statement. We use ROUGE (Lin, [2004](https://arxiv.org/html/2205.12487#bib.bib40)), BLEU (Papineni et al., [2002](https://arxiv.org/html/2205.12487#bib.bib50)), and BERTScore (Zhang et al., [2019](https://arxiv.org/html/2205.12487#bib.bib75)) as the evaluation metrics. The BERT-based classifier ([https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased)) is pre-trained on the gold explanations and reaches an F-score of 87.59%; we fix the classifier while training the generation model. To evaluate the impact of evidence retrieval and claim verification on explanation generation, we compare the performance of our approach based on gold evidence and/or gold truthfulness labels with that based on system-retrieved evidence and system-predicted truthfulness labels. Note that we only train the model on gold evidence and truthfulness but perform inference with different types of evidence or truthfulness as input. Similar to (Kotonya and Toni, [2020](https://arxiv.org/html/2205.12487#bib.bib36)), we further compare our method to LEAD-3, which selects the first three sentences of the evidence, and to the ORACLE baseline (Narayan et al., [2018](https://arxiv.org/html/2205.12487#bib.bib45)), which greedily selects ([https://github.com/pltrdy/extoracle_summarization](https://github.com/pltrdy/extoracle_summarization)) multiple evidence sentences that maximize the ROUGE-2 score. Table[5](https://arxiv.org/html/2205.12487#S5.T5 "Table 5 ‣ 5.3. Explanation Generation ‣ 5. Experiments ‣ End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models") shows the results, with the following observations: (1) Without generation, the explanation is directly the concatenation of all the text evidence; it may contain all the necessary information but is not interpretable to humans, as the sentences are not connected coherently or logically. (2) Evidence retrieval has a more significant impact on explanation generation than claim verification. This is reasonable because the evidence carries most of the content of the explanation, while the truthfulness is usually implied when comparing the evidence with the input claim. (3) The explanations in our corpus are highly abstractive, as corroborated by the low performance of the ORACLE baseline, which is the upper bound of extractive summarization, and of the LEAD-3 baseline.
259
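+
+ A minimal sketch of the generation step is shown below. The exact way the claim, truthfulness label, and retrieved text evidence are concatenated into the encoder input is a simplified assumption, and the generic facebook/bart-large checkpoint stands in for our fine-tuned model.
+
+ ```python
+ # Sketch of explanation generation with BART (generic checkpoint shown;
+ # in practice the model is first fine-tuned on gold ruling statements).
+ from transformers import BartForConditionalGeneration, BartTokenizer
+
+ tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
+ model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
+
+ claim = "By revoking the Keystone pipeline permit, Biden is destroying 11,000 jobs"
+ truthfulness = "refuted"
+ text_evidence = ["A 2014 report found that the company would need only 50 "
+                  "employees to maintain the Keystone XL pipeline."]
+
+ # Assumed input format: claim, label, and evidence in one sequence,
+ # truncated to BART's 1024-token encoder limit.
+ source = f"{claim} </s> {truthfulness} </s> " + " ".join(text_evidence)
+ inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
+
+ summary_ids = model.generate(**inputs, num_beams=4, min_length=30, max_length=200)
+ print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
+ ```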
+
260
+ ### 5.4. Implementation Details
261
+
262
+ We use 2 Quadro RTX 8000 GPUs to run our experiments. The retrieval models use 15 GB of GPU memory and are trained for about 20 epochs with a batch size of 256. The claim verification models use 3 GB and are trained for about 50 epochs with a batch size of 128. The explanation generation model uses 45 GB and is trained for about 30 epochs with a batch size of 10. We use grid search to tune the hyperparameters: for evidence retrieval, the learning rate ∈ {10⁻⁵, 10⁻⁶, 10⁻⁷} and batch size ∈ {256, 480, 512}; for claim verification, the learning rate ∈ {10⁻¹, 10⁻², 10⁻³, 10⁻⁴} and batch size ∈ {64, 128, 256, 512, 1024, 2048}; for explanation generation, the learning rate ∈ {5×10⁻², 5×10⁻³, 5×10⁻⁴, 5×10⁻⁵} and batch size ∈ {10, 12, 48, 192}.
263
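+
+ The grid search itself is a simple exhaustive loop; in the sketch below, train_and_evaluate is a hypothetical helper standing in for one training run followed by Development-set evaluation.
+
+ ```python
+ # Hypothetical grid search over the claim verification hyperparameters.
+ import itertools
+
+ learning_rates = [1e-1, 1e-2, 1e-3, 1e-4]
+ batch_sizes = [64, 128, 256, 512, 1024, 2048]
+
+ best_score, best_config = float("-inf"), None
+ for lr, bs in itertools.product(learning_rates, batch_sizes):
+     dev_f1 = train_and_evaluate(lr=lr, batch_size=bs)  # hypothetical helper
+     if dev_f1 > best_score:
+         best_score, best_config = dev_f1, (lr, bs)
+
+ print(f"Best dev F-score {best_score:.2f} with lr={best_config[0]}, "
+       f"batch_size={best_config[1]}")
+ ```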
+
264
+ 6. Remaining Challenges
265
+ -----------------------
266
+
267
+ ### 6.1. Claim Verification
268
+
269
+ We randomly sample 50 claims from the Test set that are incorrectly verified even with gold evidence, and identify the following remaining challenges for multimodal fact-checking:
270
+
271
+ Cross-modality Reasoning: Both text evidence and image evidence provide complementary information for verifying the truthfulness of claims. 30% of verification errors are due to deep cross-modality reasoning and evidence fusion. For example, for claim #2, “If you just count all the deaths in the red states, we are number two in the world in deaths.” in Figure[4](https://arxiv.org/html/2205.12487#S5.F4 "Figure 4 ‣ 5.2. Claim Verification ‣ 5. Experiments ‣ End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models"), since there are two different definitions of a red state, the model needs to refer to the map image to confirm which states are meant.
272
+
273
+ Cross Document/Sentence Reasoning: 30% of verification errors are due to reasoning across multiple pieces of text evidence or across multiple sentences. For example, given the claim “The Biden administration’s American Jobs Plan will be ‘the biggest non-defense investment in research and development in the history of our country.’”, the model first needs to learn that the current largest investment is $11 billion from the evidence “The largest increase in research and development came in 1964, and totaled $11 billion”, and then refer to another piece of evidence, “experts say the plan is likely to far exceed $11 billion in spending on research and development.”, to understand that the Plan will exceed $11 billion.
274
+
275
+ Deep Visual Understanding: For 6% of wrongly predicted claims, the image evidence consists of charts, tables, or even maps. Current visual understanding techniques, such as CLIP, cannot deeply understand the content and semantics of such images. For example, given claim #3, “San Francisco had twice as many drug overdose deaths as COVID deaths last year,” in Figure[4](https://arxiv.org/html/2205.12487#S5.F4 "Figure 4 ‣ 5.2. Claim Verification ‣ 5. Experiments ‣ End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models"), to determine the truthfulness of this claim, the model needs to obtain the number of drug overdose deaths from the image.
276
+
277
+ Other Complex Reasoning: Many claims also require other types of complex reasoning, such as mathematical calculation (4% of errors) and commonsense reasoning (8% of errors). For instance, the model needs to understand that “29,000 recipients” plus “12,700 recipients” equals “41,700 recipients”, that the period “from 1998 to 2019” spans “22 years”, and that “there are fifty states in the US”. In addition, the model has difficulty dealing with claims (12% of errors) that are only partially supported or refuted. For example, for the claim “Since 2010, student debt has increased by 102% and real wages have fallen by over 8%.”, it is true that “student debt has increased by 102%”, but “real wages have fallen by over 8%” is not correct.
278
+
279
+ ### 6.2. Explanation Generation
280
+
281
+ We also sample 50 system-generated explanations and analyze their error types as follows.
282
+
283
+ Limited Encoding and Decoding Length: Our approach is based on pre-trained language models, such as BERT and BART, which can only encode or decode sequences of limited length. In our dataset, some evidence and ruling statements exceed this maximal length; in those cases, we truncate the sequence and lose part of the information, as illustrated below.
284
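+
+ Concretely, the information loss happens at tokenization time; in the snippet below, an arbitrary repeated sentence stands in for long evidence, and everything past the encoder limit is simply dropped.
+
+ ```python
+ # Illustration of the length limitation: tokens past the encoder's
+ # maximum length are cut off, and their content is lost.
+ from transformers import BartTokenizer
+
+ tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
+ long_evidence = "No provision in this bill would make testing mandatory. " * 500
+ inputs = tokenizer(long_evidence, truncation=True, max_length=1024)
+ print(len(inputs["input_ids"]))  # capped at 1024; the remainder is discarded
+ ```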
+
285
+ Missing Evidence: As we construct the evidence source collection based on the evidence links listed on Snopes and PolitiFact, some evidence used in the ruling statements is not included. For example, given the claim “By revoking the Keystone pipeline permit, Biden is destroying 11,000 jobs”, the gold explanation contains the information “A 2014 report found that the company would need only 50 employees to maintain the Keystone XL pipeline”, which is not covered in any of the background documents. In addition, our current explanation generation approach only leverages text evidence, although image evidence can also provide complementary information.
286
+
287
+ Logical Coherence: One critical challenge for explanation generation is to determine the logical connections among the evidence sentences and organize them coherently, a common issue in long-form text generation (Hu et al., [2022a](https://arxiv.org/html/2205.12487#bib.bib27), [b](https://arxiv.org/html/2205.12487#bib.bib28)). For example, given the claim “A new, independent study found that at least 55 of our largest corporations used various loopholes to pay zero federal income tax in 2020.”, our explanation generation approach fails to correctly organize the following two pieces of evidence: “many of the relevant provisions are deliberate attempts to set incentives” and “Some critics say the financial disclosures used to compile the report are imperfect estimates.”
288
+
289
+ 7. Conclusion
290
+ -------------
291
+
292
+ We created Mocheg, an end-to-end multimodal fact-checking and explanation generation benchmark dataset consisting of 15,601 claims annotated with truthfulness labels, together with 33,880 pieces of text evidence, 12,112 pieces of image evidence, and explainable ruling statements. We explored state-of-the-art neural architectures to establish baseline performance on the three sub-tasks (i.e., multimodal evidence retrieval, claim verification, and explanation generation). Our experimental results show that the performance on all three sub-tasks is still far from satisfactory. For future work, an obvious next step is to explore more advanced techniques to improve the three sub-tasks and deep visual understanding. Furthermore, open-domain fact-checking is another promising direction for detecting hallucination errors in large language models like ChatGPT (OpenAI, [2022](https://arxiv.org/html/2205.12487#bib.bib47)). In the open-domain setting, evaluating the trustworthiness of evidence will play a critical role.
293
+
294
+ 8. Ethical Statement
295
+ --------------------
296
+
297
+ For the dataset release, we have obtained permission from both Snopes and PolitiFact to publish the data for research purposes. Our dataset is licensed under CC BY 4.0 ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)), while the code associated with Mocheg for the data crawler and baselines is licensed under Apache License 2.0 ([https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)). Our dataset contains 2,916 tweets. In accordance with the Twitter developer terms ([https://developer.twitter.com/en/developer-terms/more-on-restricted-use-cases](https://developer.twitter.com/en/developer-terms/more-on-restricted-use-cases)), we will only share the Twitter IDs and scripts to crawl the tweets via the Twitter API. Our work can be used to predict the truthfulness of various claims on the web and help stop the spread of misinformation. Our dataset does not use features or labels involving sensitive personally identifiable information, such as individual names. Since our dataset contains internet claims, some claims may be offensive; however, we crawl the articles from reputable fact-checking websites, such as PolitiFact and Snopes, to decrease the possibility of offensive content.
298
+
299
+ Given the importance of fact-checking in secular societies, we describe the fact-checking processes of Snopes and PolitiFact to show how our data sources reduce bias. According to PolitiFact ([https://www.politifact.com/article/2018/feb/12/principles-truth-o-meter-politifacts-methodology-i/](https://www.politifact.com/article/2018/feb/12/principles-truth-o-meter-politifacts-methodology-i/)) and Snopes ([https://www.snopes.com/transparency/](https://www.snopes.com/transparency/)), they always attempt to contact the person, website, or organization that made the statement they are fact-checking. They consult experts across a variety of fields, seek direct access to government reports, academic studies, and other data, and conduct one to two rounds of review. Finally, they accept error corrections from the public and mark the corrected articles. According to PolitiFact, its journalists avoid the public expression of political opinion and public involvement in the political process, setting their own opinions aside as they work to uphold principles of independence and fairness; 23 of its 36 journalists are women. According to Snopes, members of its editorial staff are precluded from donating to or participating in political campaigns, political party activities, or political advocacy organizations; 6 of its 10 journalists are women.
300
+
301
+ ###### Acknowledgements.
302
+
303
+ This research is based upon work supported by U.S. DARPA KMASS Program # HR001121S0034. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
304
+
305
+ References
306
+ ----------
307
+
309
+ * Abdelnabi et al. (2022) Sahar Abdelnabi, Rakibul Hasan, and Mario Fritz. 2022. Open-Domain, Content-based, Multi-modal Fact-checking of Out-of-Context Images via Online Resources. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 14940–14949.
310
+ * Alhindi et al. (2018) Tariq Alhindi, Savvas Petridis, and Smaranda Muresan. 2018. Where is your evidence: improving fact-checking by justification modeling. In _Proceedings of the first workshop on fact extraction and verification (FEVER)_. 85–90.
311
+ * Aly et al. (2021) Rami Aly, Zhijiang Guo, Michael Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. Feverous: Fact extraction and verification over unstructured and structured information. _arXiv preprint arXiv:2106.05707_ (2021).
312
+ * Atanasova et al. (2020) Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020. Generating fact checking explanations. _arXiv preprint arXiv:2004.05773_ (2020).
313
+ * Augenstein et al. (2019) Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A real-world multi-domain dataset for evidence-based fact checking of claims. _arXiv preprint arXiv:1909.03242_ (2019).
314
+ * Bajaj et al. (2016) Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. Ms marco: A human generated machine reading comprehension dataset. _arXiv preprint arXiv:1611.09268_ (2016).
315
+ * Barrón-Cedeno et al. (2019) Alberto Barrón-Cedeno, Israa Jaradat, Giovanni Da San Martino, and Preslav Nakov. 2019. Proppy: Organizing the news based on their propagandistic content. _Information Processing & Management_ 56, 5 (2019), 1849–1864.
316
+ * Boididou et al. (2015) Christina Boididou, Katerina Andreadou, Symeon Papadopoulos, Duc-Tien Dang-Nguyen, Giulia Boato, Michael Riegler, Yiannis Kompatsiaris, et al. 2015. Verifying multimedia use at mediaeval 2015. _MediaEval_ 3, 3 (2015), 7.
317
+ * Chen et al. (2022) Jifan Chen, Aniruddh Sriram, Eunsol Choi, and Greg Durrett. 2022. Generating Literal and Implied Subquestions to Fact-check Complex Claims. _arXiv preprint arXiv:2205.06938_ (2022).
318
+ * Chen et al. (2019) Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019. Tabfact: A large-scale dataset for table-based fact verification. _arXiv preprint arXiv:1909.02164_ (2019).
319
+ * Dai et al. (2022) Shih-Chieh Dai, Yi-Li Hsu, Aiping Xiong, and Lun-Wei Ku. 2022. Ask to know more: Generating counterfactual explanations for fake claims. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_. 2800–2810.
320
+ * Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ (2018).
321
+ * Devlin et al. (2019) Jacob Devlin, Ming Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference_, Vol.1. Association for Computational Linguistics (ACL), 4171–4186. arXiv:1810.04805 [https://github.com/tensorflow/tensor2tensor](https://github.com/tensorflow/tensor2tensor)
322
+ * Du et al. (2023) Wei-Wei Du, Hong-Wei Wu, Wei-Yao Wang, and Wen-Chih Peng. 2023. Team Triple-Check at Factify 2: Parameter-Efficient Large Foundation Models with Feature Representations for Multi-Modal Fact Verification. _arXiv preprint arXiv:2302.07740_ (2023).
323
* Edelman (2001) Murray Edelman. 2001. _The politics of misinformation_. Cambridge University Press.
324
+ * Fan et al. (2020) Angela Fan, Aleksandra Piktus, Fabio Petroni, Guillaume Wenzek, Marzieh Saeidi, Andreas Vlachos, Antoine Bordes, and Sebastian Riedel. 2020. Generating fact checking briefs. _arXiv preprint arXiv:2011.05448_ (2020).
325
+ * Fleiss (1971) Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. _Psychological bulletin_ 76, 5 (1971), 378.
326
+ * Fung et al. (2021) Yi Fung, Christopher Thomas, Revanth Gangi Reddy, Sandeep Polisetty, Heng Ji, Shih-Fu Chang, Kathleen McKeown, Mohit Bansal, and Avirup Sil. 2021. Infosurgeon: Cross-media fine-grained information consistency checking for fake news detection. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_. 1683–1698.
327
+ * Gad-Elrab et al. (2019) Mohamed H Gad-Elrab, Daria Stepanova, Jacopo Urbani, and Gerhard Weikum. 2019. Exfakt: A framework for explaining facts over knowledge graphs and text. In _Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining_. 87–95.
328
+ * Godfrey-Smith (1989) Peter Godfrey-Smith. 1989. Misinformation. _Canadian Journal of Philosophy_ 19, 4 (1989), 533–550.
329
+ * Goldstein et al. (2023) Josh A Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, and Katerina Sedova. 2023. Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. _arXiv preprint arXiv:2301.04246_ (2023).
330
+ * Guo et al. (2022) Zhijiang Guo, Michael Schlichtkrull, and Andreas Vlachos. 2022. A Survey on Automated Fact-Checking. _Transactions of the Association for Computational Linguistics_ 10 (2022), 178–206. [https://doi.org/10.1162/tacl_a_00454](https://doi.org/10.1162/tacl_a_00454) arXiv:2108.11896
331
+ * Gurrapu et al. (2022) Sai Gurrapu, Lifu Huang, and Feras A Batarseh. 2022. ExClaim: Explainable Neural Claim Verification Using Rationalization. In _2022 IEEE 29th Annual Software Technology Conference (STC)_. IEEE, 19–26.
332
+ * Gurrapu et al. (2023) Sai Gurrapu, Ajay Kulkarni, Lifu Huang, Ismini Lourentzou, Laura Freeman, and Feras A Batarseh. 2023. Rationalization for Explainable NLP: A Survey. _arXiv preprint arXiv:2301.08912_ (2023).
333
+ * Hanselowski et al. (2019) Andreas Hanselowski, Christian Stab, Claudia Schulz, Zile Li, and Iryna Gurevych. 2019. A richly annotated corpus for different tasks in automated fact-checking. _arXiv preprint arXiv:1911.01214_ (2019).
334
+ * Hu et al. (2022a) Zhe Hu, Hou Pong Chan, and Lifu Huang. 2022a. MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective. _arXiv preprint arXiv:2210.14650_ (2022).
335
+ * Hu et al. (2022b) Zhe Hu, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Hua Wu, and Lifu Huang. 2022b. PLANET: Dynamic Content Planning in Autoregressive Transformers for Long-form Text Generation. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. 2288–2305.
336
+ * Islam et al. (2020) Md Saiful Islam, Tonmoy Sarkar, Sazzad Hossain Khan, Abu-Hena Mostofa Kamal, SM Murshid Hasan, Alamgir Kabir, Dalia Yeasmin, Mohammad Ariful Islam, Kamal Ibne Amin Chowdhury, Kazi Selim Anwar, et al. 2020. COVID-19–related infodemic and its impact on public health: A global social media analysis. _The American journal of tropical medicine and hygiene_ 103, 4 (2020), 1621.
337
+ * Järvelin and Kekäläinen (2002) Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. _ACM Transactions on Information Systems (TOIS)_ 20, 4 (2002), 422–446.
338
+ * Jindal et al. (2020) Sarthak Jindal, Raghav Sood, Richa Singh, Mayank Vatsa, and Tanmoy Chakraborty. 2020. NewsBag: a multi-modal benchmark dataset for fake news detection. In _CEUR Workshop Proc._, Vol.2560. 138–145.
339
+ * Kamboj et al. (2020) Manvi Kamboj, Christian Hessler, Priyanka Asnani, Kais Riani, and Mohamed Abouelenien. 2020. Multimodal Political Deception Detection. _IEEE MultiMedia_ 28, 1 (2020), 94–102.
340
+ * Kazemi et al. (2021) Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, and Rada Mihalcea. 2021. Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News. _arXiv preprint arXiv:2104.12918_ (2021).
341
+ * Khattar et al. (2019) Dhruv Khattar, Jaipal Singh Goud, Manish Gupta, and Vasudeva Varma. 2019. MVAE: Multimodal Variational Autoencoder for Fake News Detection. _The World Wide Web Conference_ (2019).
342
+ * Kohlschütter et al. (2010) Christian Kohlschütter, Peter Fankhauser, and Wolfgang Nejdl. 2010. Boilerplate detection using shallow text features. In _Proceedings of the third ACM international conference on Web search and data mining_. 441–450.
343
+ * Kotonya and Toni (2020) Neema Kotonya and Francesca Toni. 2020. Explainable automated fact-checking for public health claims. _arXiv preprint arXiv:2010.09926_ (2020).
344
+ * Lai et al. (2021) Huiyuan Lai, Antonio Toral, and Malvina Nissim. 2021. Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer. _ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference_ 2 (may 2021), 484–494. [https://doi.org/10.48550/arxiv.2105.06947](https://doi.org/10.48550/arxiv.2105.06947) arXiv:2105.06947
345
+ * Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. arXiv:1910.13461[cs.CL]
346
+ * Li et al. (2020) Lily Li, Or Levi, Pedram Hosseini, and David A Broniatowski. 2020. A multi-modal method for satire detection using textual and visual cues. _arXiv preprint arXiv:2010.06671_ (2020).
347
+ * Lin (2004) Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In _Text Summarization Branches Out_. Association for Computational Linguistics, Barcelona, Spain, 74–81. [https://aclanthology.org/W04-1013](https://aclanthology.org/W04-1013)
348
+ * Micallef et al. (2022) Nicholas Micallef, Marcelo Sandoval-Castañeda, Adi Cohen, Mustaque Ahamad, Srijan Kumar, and Nasir Memon. 2022. Cross-Platform Multimodal Misinformation: Taxonomy, Characteristics and Detection for Textual Posts and Videos. In _Proceedings of the International AAAI Conference on Web and Social Media_, Vol.16. 651–662.
349
+ * Mishra et al. (2022) Shreyash Mishra, S Suryavardan, Amrit Bhaskar, Parul Chopra, Aishwarya Reganti, Parth Patwa, Amitava Das, Tanmoy Chakraborty, Amit Sheth, Asif Ekbal, et al. 2022. Factify: A multi-modal fact verification dataset. In _Proceedings of the First Workshop on Multimodal Fact-Checking and Hate Speech Detection (DE-FACTIFY)_.
350
+ * Mishra et al. (2023) Shreyash Mishra, S Suryavardan, Amrit Bhaskar, Parul Chopra, Aishwarya Reganti, Parth Patwa, Amitava Das, Tanmoy Chakraborty, Amit Sheth, Asif Ekbal, et al. 2023. Factify 2: A multimodal fake news and satire news dataset. In _proceedings of defactify 2: second workshop on Multimodal Fact-Checking and Hate Speech Detection, CEUR_.
351
+ * Nakamura et al. (2019) Kai Nakamura, Sharon Levy, and William Yang Wang. 2019. r/fakeddit: A new multimodal benchmark dataset for fine-grained fake news detection. _arXiv preprint arXiv:1911.03854_ (2019).
352
+ * Narayan et al. (2018) Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization. arXiv:1808.08745[cs.CL]
353
+ * Nielsen and McConville (2022) Dan Saattrup Nielsen and Ryan McConville. 2022. MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset. _arXiv preprint arXiv:2202.11684_ (2022).
354
+ * OpenAI (2022) OpenAI. 2022. _OpenAI: Introducing ChatGPT_. [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
355
+ * Papadopoulou et al. (2018a) Olga Papadopoulou, Markos Zampoglou, Symeon Papadopoulos, and Ioannis Kompatsiaris. 2018a. A corpus of debunked and verified user-generated videos. _Online Information Review_ 43 (11 2018). [https://doi.org/10.1108/OIR-03-2018-0101](https://doi.org/10.1108/OIR-03-2018-0101)
356
+ * Papadopoulou et al. (2018b) Olga Papadopoulou, Markos Zampoglou, Symeon Papadopoulos, and Ioannis Kompatsiaris. 2018b. A corpus of debunked and verified user-generated videos. _Online information review_ 43, 1 (2018), 72–88.
357
+ * Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting of the Association for Computational Linguistics_. 311–318.
358
+ * Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language Supervision. arXiv:2103.00020[cs.CV]
359
+ * Raj and Meel (2022) Chahat Raj and Priyanka Meel. 2022. ARCNN framework for multimodal infodemic detection. _Neural Networks_ 146 (2022), 36–68. [https://doi.org/10.1016/j.neunet.2021.11.006](https://doi.org/10.1016/j.neunet.2021.11.006)
360
+ * Rayar et al. (2022) Frederic Rayar, Mathieu Delalandre, and Van-Hao Le. 2022. A large-scale TV video and metadata database for French political content analysis and fact-checking. (2022).
361
+ * Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics. [https://arxiv.org/abs/1908.10084](https://arxiv.org/abs/1908.10084)
362
+ * Reimers and Gurevych (2021) Nils Reimers and Iryna Gurevych. 2021. The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_. Association for Computational Linguistics, Online, 605–611. [https://arxiv.org/abs/2012.14210](https://arxiv.org/abs/2012.14210)
363
+ * Reis et al. (2020) Julio CS Reis, Philipe Melo, Kiran Garimella, Jussara M Almeida, Dean Eckles, and Fabrício Benevenuto. 2020. A dataset of fact-checked images shared on whatsapp during the brazilian and indian elections. In _Proceedings of the International AAAI Conference on Web and Social Media_, Vol.14. 903–908.
364
+ * Roy and Ekbal (2021) Arjun Roy and Asif Ekbal. 2021. MulCoB-MulFaV: Multimodal Content Based Multilingual Fact Verification. In _2021 International Joint Conference on Neural Networks (IJCNN)_. IEEE, 1–8.
365
+ * Shahi and Nandini (2020) Gautam Kishore Shahi and Durgesh Nandini. 2020. FakeCovid–A multilingual cross-domain fact check news dataset for COVID-19. _arXiv preprint arXiv:2006.11343_ (2020).
366
+ * Shu et al. (2020) Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, and Huan Liu. 2020. Fakenewsnet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media. _Big data_ 8, 3 (2020), 171–188.
367
+ * Singhal et al. (2020) Shivangi Singhal, Anubha Kabra, Mohit Sharma, Rajiv Ratn Shah, Tanmoy Chakraborty, and Ponnurangam Kumaraguru. 2020. Spotfake+: A multimodal framework for fake news detection via transfer learning (student abstract). In _Proceedings of the AAAI conference on artificial intelligence_, Vol.34. 13915–13916.
368
+ * Song et al. (2021a) Chenguang Song, Nianwen Ning, Yunlei Zhang, and Bin Wu. 2021a. A multimodal fake news detection model based on crossmodal attention residual and multichannel convolutional neural networks. _Information Processing and Management_ 58, 1 (2021), 102437. [https://doi.org/10.1016/j.ipm.2020.102437](https://doi.org/10.1016/j.ipm.2020.102437)
369
+ * Song et al. (2021b) Chenguang Song, Nianwen Ning, Yunlei Zhang, and Bin Wu. 2021b. A multimodal fake news detection model based on crossmodal attention residual and multichannel convolutional neural networks. _Information Processing & Management_ 58, 1 (2021), 102437. [https://doi.org/10.1016/j.ipm.2020.102437](https://doi.org/10.1016/j.ipm.2020.102437)
370
+ * Stammbach and Ash (2020) Dominik Stammbach and Elliott Ash. 2020. e-fever: Explanations and summaries for automated fact checking. _Proceedings of the 2020 Truth and Trust Online (TTO 2020)_ (2020), 32–43.
371
+ * Tan et al. (2020) Reuben Tan, Bryan A Plummer, and Kate Saenko. 2020. Detecting cross-modal inconsistency to defend against neural fake news. _arXiv preprint arXiv:2009.07698_ (2020).
372
+ * Thorne et al. (2018) James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. _arXiv preprint arXiv:1803.05355_ (2018).
373
+ * Uscinski and Butler (2013a) Joseph E. Uscinski and Ryden W. Butler. 2013a. The Epistemology of Fact Checking. _Critical Review_ 25, 2 (June 2013), 162–180. [https://doi.org/10.1080/08913811.2013.843872](https://doi.org/10.1080/08913811.2013.843872)
374
+ * Uscinski and Butler (2013b) Joseph E. Uscinski and Ryden W. Butler. 2013b. The Epistemology of Fact Checking. _Critical Review_ 25, 2 (2013), 162–180. [https://doi.org/10.1080/08913811.2013.843872](https://doi.org/10.1080/08913811.2013.843872) arXiv:https://doi.org/10.1080/08913811.2013.843872
375
+ * Van den Oord et al. (2018) Aaron Van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. _arXiv e-prints_ (2018), arXiv–1807.
376
+ * Wang et al. (2022) Jingzi Wang, Hongyan Mao, and Hongwei Li. 2022. FMFN: Fine-Grained Multimodal Fusion Networks for Fake News Detection. _Applied Sciences_ 12, 3 (2022). [https://doi.org/10.3390/app12031093](https://doi.org/10.3390/app12031093)
377
+ * Wang and Jiang (2016) Shuohang Wang and Jing Jiang. 2016. A compare-aggregate model for matching text sequences. _arXiv preprint arXiv:1611.01747_ (2016).
378
* Wang (2017) William Yang Wang. 2017. “Liar, Liar Pants on Fire”: A new benchmark dataset for fake news detection. _arXiv preprint arXiv:1705.00648_ (2017).
379
+ * Wang et al. (2021) Yaqing Wang, Fenglong Ma, Haoyu Wang, Kishlay Jha, and Jing Gao. 2021. Multimodal Emergent Fake News Detection via Meta Neural Process Networks. _Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (aug 2021), 3708–3716. [https://doi.org/10.1145/3447548.3467153](https://doi.org/10.1145/3447548.3467153) arXiv:2106.13711
380
+ * Wolf et al. (2019) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface’s transformers: State-of-the-art natural language processing. _arXiv preprint arXiv:1910.03771_ (2019).
381
+ * Yang et al. (2022) Jing Yang, Didier Vega-Oliveros, Taís Seibt, and Anderson Rocha. 2022. Explainable Fact-Checking Through Question Answering. In _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 8952–8956.
382
+ * Zhang et al. (2019) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. _arXiv preprint arXiv:1904.09675_ (2019).
383
+ * Zhang et al. (2021) Zijian Zhang, Koustav Rudra, and Avishek Anand. 2021. Explain and Predict, and then Predict again. In _Proceedings of the 14th ACM International Conference on Web Search and Data Mining_. 418–426.
384
* Zhou et al. (2020) Xinyi Zhou, Jindi Wu, and Reza Zafarani. 2020. SAFE: Similarity-Aware Multi-modal Fake News Detection. _Advances in Knowledge Discovery and Data Mining_ 12085 (2020), 354.
385
+ * Zhuo et al. (2023) Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. 2023. Exploring ai ethics of chatgpt: A diagnostic analysis. _arXiv preprint arXiv:2301.12867_ (2023).
386
+ * Zlatkova et al. (2019) Dimitrina Zlatkova, Preslav Nakov, and Ivan Koychev. 2019. Fact-checking meets fauxtography: Verifying claims about images. _arXiv preprint arXiv:1908.11722_ (2019).