{
"File Number": "104",
"Title": "AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators",
"Limitation": "We have chosen this small set of social media data due to the limitation of the annotation budget. Similar observations as PoliClaimtest can be drawn. GPT-4 AFaCTA outperforms experts on perfectly consistent samples and underperforms on inconsistent samples. GPT-3.5 also achieves a moderate agreement with human experts on perfectly consistent samples. Error analysis shows that GPT3.5’s error concentrates on false negatives, similar to its behavior in the political speech domain (see Table 12). We also conduct the self-consistency CoT experiments on CheckThat!-2021-dev to verify the im-\nportance of a diversified source of self-consistency. The results are shown in Figure 6. It can be observed that the level of self-consistency calibrates accuracy, and the 3 predefined reasoning paths outperform automatically generated ones. One discrepancy is that self-consistency CoT slightly outperforms GPT-3.5 AFaCTA when sampling more than 7 reasoning paths. We attribute this to GPT3.5’s heavier hallucinations on Twitter domain (see Table 12 where it fails to identify apparent factual information). Therefore, complicated reasoning paths like AFaCTA Step 3 might be challenging in many cases. Importantly, due to the annotation budget, our experimental dataset on the social media domain is limited. We leave the extensive analysis of this domain to future work.",
"abstractText": "With the rise of generative AI, automated factchecking methods to combat misinformation are becoming more and more important. However, factual claim detection, the first step in a fact-checking pipeline, suffers from two key issues that limit its scalability and generalizability: (1) inconsistency in definitions of the task and what a claim is, and (2) the high cost of manual annotation. To address (1), we review the definitions in related work and propose a unifying definition of factual claims that focuses on verifiability. To address (2), we introduce AFaCTA (Automatic Factual Claim deTection Annotator), a novel framework that assists in the annotation of factual claims with the help of large language models (LLMs). AFaCTA calibrates its annotation confidence with consistency along three predefined reasoning paths. Extensive evaluation and experiments in the domain of political speech reveal that AFaCTA can efficiently assist experts in annotating factual claims and training highquality classifiers, and can work with or without expert supervision. Our analyses also result in PoliClaim, a comprehensive claim detection dataset spanning diverse political topics.1",
"1 Introduction": "The explosion of mis- and disinformation is a growing public concern, with misinformation being widely shared (Vosoughi et al., 2018). Manual factchecking is an important counter-measure to misinformation (Lewandowsky et al., 2020). However, fact-checking is a time-consuming and expensive endeavor, and computational remedies are required (Vlachos and Riedel, 2014).\nA first step to identify mis- and disinformation consists of factual claim detection, which filters out the claims with factual assertions that need checking (Arslan et al., 2020; Alam et al., 2021a; Stammbach et al., 2023b). Considering the sheer amount\n1https://github.com/EdisonNi-hku/AFaCTA.\nof daily online content and LLMs’ generative capability, we argue that a valid factual claim detection system should be efficient and easily deployable to monitor misinformation consistently. Therefore, we need a way to produce high-quality resources to build transparent, accurate and fair models to automatically detect such claims. However, there are two major challenges in the data collection process.\nDiscrepancies in task and claim definitions. By now, arguably, several different claim definitions exist, which confuse practitioners. What is a claim is unclear, leading to various claim detection tasks, e.g., in automated fact-checking and argument mining. For example, Alam et al. (2021a) dismiss all opinions from factual claims, but Gupta et al. (2021) includes “opinions with social impact” as factual claims. Many studies (Arslan et al., 2020; Nakov et al., 2022) aim at detecting “check-worthy” claims while Konstantinovskiy et al. (2020) argues the definition of “check-worthiness” is highly subjective and political. 
Such variances reflect a lack of clarity in conceptualizing critical distinctions, such as the overlap between opinions and verifiable facts (refer to Table 1, row 1), and the separate nature of verifiability and check-worthiness in the context of factual claim detection (see Table 1, rows 2 and 3). To address these inconsistencies, we propose a definition of factual claims based on verifiability: factual claims present verifiable facts; a fact is verifiable only if it provides enough specificity to guide evidence retrieval and fact-checking. We focus on verifiability to maximize the definition’s objectivity and clearly delineate facts from opinions.\nManual annotations are expensive. All existing datasets are manually annotated, which is time-consuming and expensive. Thus, most existing resources are inevitably restricted to certain topics for which it is feasible to annotate claims manually. Such examples include presidential debates (Hassan et al., 2015), COVID-19 tweets (Alam et al., 2021a), biomedical (Wührl and Klinger, 2021) and environmental claims (Stammbach et al., 2023a). This potentially limits models’ ability to generalize to future topics. However, manually annotating datasets with new topics is too expensive. In light of this, we propose AFaCTA, a multi-step reasoning framework that leverages LLMs to assist in claim annotation, making annotation more scalable and generalizable while rigorously following our factual claim definition.\nIn fact-checking, it is essential to have high annotation accuracy. However, LLM annotators are far from perfect (Ziems et al., 2023; Pangakis et al., 2023). Thus, to ensure the reliability of LLM annotations, AFaCTA calibrates the correctness of the annotations based on the consistency of different paths. 
Our evaluation shows that AFaCTA outperforms experts by a large margin when all reasoning paths achieve perfect consistency but fails to achieve expert-level performance on inconsistent samples. Nevertheless, we argue that AFaCTA can be an efficient tool in assisting factual claim annotation: perfectly consistent samples can be labeled automatically by the tool, which roughly saves 50% of expert time (see GPT-4-AFaCTA’s perfect consistency rate in Table 3). However, inconsistent ones may need expert supervision.\nUsing AFaCTA, we annotate PoliClaim, a high-quality claim detection dataset covering U.S. political speeches across 25 years, spanning various political topics. We split the 2022 speeches as the test set and the 1998 to 2021 speeches as the training set to imitate the real-world use case where a model learns from the past and predicts future claims. We evaluate hundreds of classifiers trained on various data combinations, finding that AFaCTA’s annotated data with perfect consistency can be a strong substitute for data annotated by human experts. In summary, our contributions include:\n1. We review common misconceptions and confounders in claim definitions, proposing a claim definition for fact-checking focused on verifiability.\n2. We propose AFaCTA, an LLM-based framework that assists factual claim annotation and ensures its reliability by calibrating annotation quality with consistency along different reasoning paths.\n3. We annotate PoliClaim, a high-quality factual claim detection dataset covering political speeches of 25 years and various topics.",
"2 Claim Definition for Fact-checking": "In this section, we first provide an overview of the discrepancies in claim definitions in prior work.\nThen, we propose our definition of a factual claim with respect to existing discrepancies.",
"2.1 Discrepancies in Prior Work": "Claim conceptions: The term “claim detection” is used not only in fact-checking but also in other areas of research, for example, argument mining (Boland et al., 2022). However, this term refers to different concepts in different research areas. In fact-checking, claim detection aims at identifying objective information in statements, which can be ruled factually wrong or correct according to evidence (Thorne et al., 2018; Arslan et al., 2020; Gangi Reddy et al., 2022), and unverifiable subjective statements are usually not considered as factual claims. In contrast, in argument mining, claim detection aims at identifying the core argument or point of view referring to what is being argued about (Habernal and Gurevych, 2017). Therefore, both objective and subjective information can be identified as claims depending on their role in the discourse (Daxenberger et al., 2017; Chakrabarty et al., 2019). The intermixing of such concepts has led to dataset misuse issues in research: for instance, Gupta et al. (2021) annotate a claim detection dataset for fack-checking COVID19 tweets. However, the dataset is jointly trained and evaluated with claim detection datasets for argument mining (Peldszus and Stede, 2015; Stab and Gurevych, 2017, inter alia), which potentially harms the soundness of the results.\nDiscrepancies in task definitions: Some prior work defines factual claim detection as identifying check-worthy claims (Arslan et al., 2020; Nakov et al., 2021, 2022; Stammbach et al., 2023b) while others aim at distinguishing factual claims and nonclaims (Konstantinovskiy et al., 2020; Gupta et al., 2021). Alam et al. (2021a) and Arslan et al. (2020) have both check-worthiness and claim vs non-claim labels. However, Konstantinovskiy et al. (2020) posits that the definition of check-worthiness is subjective, depending on an annotator’s knowledge or political stance about a topic. 
For example, the statement “human-induced climate change is an immediate and severe threat” might be deemed self-evident by climate scientists but check-worthy by others who are skeptical of climate models or prioritize economic growth. Some might argue that claims like this, which are subject to disagreement regarding their importance, are check-worthy due to their controversial nature. However, determining the controversy requires background knowledge beyond the claim itself. This could involve factors such as who made the claim and why it is controversial, making the task impossible to solve at the sentence level.\nCheck-worthiness labels also suffer from another serious problem: future prediction. Training a model to detect past check-worthy claims (e.g., about COVID-19) may fail to detect check-worthiness in future claims whose sociopolitical context and controversy are unknown.\nBlurry boundaries between factual claims and non-claims: In related work, personal opinions are usually defined as non-factual claims (Arslan et al., 2020; Alam et al., 2021a). However, many opinions are explicitly based on verifiable facts, lying between the definitions of factual claims and non-factual claims. For example: “Hydroxychloroquine cures COVID.” is a verifiable factual claim. But “I believe Hydroxychloroquine cures COVID.” becomes a personal opinion based on a verifiable fact. Alam et al. (2021a) exclude all opinions from factual claims, which is not a good practice. A false claim can be harmful in political speeches and social media, whether or not it is prefaced by \"I believe\". Gupta et al. (2021) define “opinions with societal implications” as factual claims, where “societal implications” is again ambiguous.\nThe first row of Table 1 showcases the prevalent entanglement of subjective and objective information. 
To the best of our knowledge, no previous work in factual claim detection discusses the intersection of opinions and facts and how to delineate facts from opinions.\nContext Unavailable: Related work focusing on sentence-level factual claim detection in political speech fails to discuss that sometimes sentences are not self-contained (Arslan et al., 2020; Barrón-Cedeño et al., 2023). However, resolving the coreferences is essential for semantic understanding. The last row of Table 1 shows such an example.",
"2.2 Our Definition of Factual Claims": "To avoid claim misconceptions, we always use “factual claim” or “claim detection for factchecking” to specify our focus on fact-checking rather than argument mining. We define facts focusing on verifiability following Arslan et al. (2020) and Alam et al. (2021a):\nFact: A fact is a statement or assertion that can\nbe objectively verified as true or false based on empirical evidence or reality.\nTo have a clear and objective task definition, we follow Konstantinovskiy et al. (2020) to focus on verifiability (factual vs. not factual claim) instead of check-worthiness (check-worthy vs. not check-worthy). Whether a sentence contains a verifiable fact or not depends only on its content (and sometimes on a little context surrounding it to clarify key statements), regardless of political or social contexts not captured by the text itself. This differs from many related works that annotate political opinions without verifiable facts as check-worthy and verifiable facts as not check-worthy. Examples of differences in checkworthiness and verifiability are showcased in rows two and three of Table 1. Controversial political opinions and interpretations are usually considered check-worthy due to their potential societal implications. However, they are often open to debate and can hardly be verified against certain evidence. Therefore, we argue that checkworthiness and verifiability are perpendicular dimensions of factual claim detection. In this work, we focus on verifiability for the scalability of data annotation and transferability to easy-todeploy smaller models.\nTo address the opinion-with-fact problem that is overlooked by prior work, we define opinions and factual claims as:\nOpinion: An opinion is a judgment based on facts, an attempt to draw a reasonable conclusion from factual evidence. 
While the underlying facts can be verified, the derived opinion remains subjective and is not universally verifiable.\nFactual claim: A factual claim is a statement that explicitly presents some verifiable facts. Statements with subjective components like opinions can also be factual claims if they explicitly present objectively verifiable facts.\nHow to define verifiability? The verifiability of information is not trivial to define because many assertions can be interpreted either subjectively or objectively. For instance, “MIT is one of the best universities in the world” can either express the speaker’s subjective feeling about MIT, which is not verifiable, or assert a verifiable fact, which can be checked with evidence like university rankings and public survey results. For clarity, we define a statement as verifiable if it provides enough specific information to guide fact-checkers in verification. Therefore, the above MIT claim is verifiable. Generally, we observe that a statement is verifiable when it provides specific details for evidence search. For example, “MIT is a good university” is less verifiable than “MIT is one of the best universities according to the QS ranking”.",
"3 AFaCTA": "This section introduces AFaCTA for assisting factual claim annotation. AFaCTA consists of three prompting steps and an aggregation step (illustrated in Figure 1), inspired by Kahneman (2011) and our claim definitions. The prompts can be found in Appendix C.\nStep 1: Direct Classification. We ask LLMs to answer whether a statement contains verifiable information without any chain of thought (CoT, Wang et al., 2023). This step corresponds to a human expert’s fast decision-making at first sight of a statement without deep thinking.\nStep 2: Fact-Extraction CoT. We instruct LLMs to conduct step-by-step reasoning over a statement: firstly, analyze the objective and subjective information covered; secondly, extract the factual part;\nthirdly, reason why it is verifiable or unverifiable; and finally, determine whether the factual part is verifiable. This step aims at identifying verifiable facts entangled with subjective opinions (row 1 of Table 1). The prompt and an illustrative example of this step can be found in Appendix C.3.\nStep 3: Reasoning with Debate. We note that the verifiability of many statements depends on their interpretation. Ambiguity between verifiable and unverifiable statements often arises from a lack of specificity, as shown in the examples in Appendix A.\nImitating a critical thinking process, we first prompt LLMs to argue that the statement contains some (or does not contain any) verifiable information. Then we pass the debating arguments to another LLM call to judge which aspect it leans towards. To address the position bias of LLM-asa-judge (Zheng et al., 2023), we prompt the final judging step twice, each time with the positions of the verifiable and unverifiable arguments swapped. The prompts and an illustrative example of this step can be found in Appendix C.4.\nFinal Step: Results Aggregation. We aggregate the results of three steps through majority voting. 
Labels from steps 1 and 2 each contribute one vote, while the two position-swapped labels from step 3 contribute 0.5 votes apiece (3 votes in total). Samples with more than 1.5 votes are classified as positive samples (factual claims), and the others as negative samples. See Appendix D for a discussion on tie-breaking. Ideally, if all steps have perfect consistency (0 or 3 votes), the annotation accuracy should be high.",
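The aggregation step can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the function name and the binary label encoding (1 = factual claim) are mine, not from the paper's released code.

```python
def aggregate_votes(step1, step2, judge_a, judge_b):
    """AFaCTA's final aggregation: weighted majority voting.

    step1, step2: binary labels (1 = factual claim) from Steps 1 and 2,
    worth one vote each. judge_a, judge_b: the two position-swapped
    judgments from Step 3, worth 0.5 votes apiece (3 votes in total).
    """
    votes = step1 + step2 + 0.5 * (judge_a + judge_b)
    label = votes > 1.5                     # > 1.5 of 3 votes => positive
    perfectly_consistent = votes in (0, 3)  # all reasoning paths agree
    return label, perfectly_consistent

# Steps 1 and 2 vote "claim"; Step 3's two judgments disagree:
# 1 + 1 + 0.5 = 2.5 votes => positive, but not perfectly consistent.
print(aggregate_votes(1, 1, 1, 0))  # (True, False)
```

Only the perfectly consistent samples (0 or 3 votes) would then be auto-labeled; the rest are routed to expert supervision, as described in Section 5.1.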
"4 PoliClaim Dataset": "We obtain a large political speech data from Picard and Stammbach (2022), which mainly consists of State of the State (SOTS) speeches (already cleaned and split into sentences). These speeches are governors’ major public addresses of the year, thus in-\ncluding meaningful political topics. We randomly sample two speeches from each year, from 1998 to 2021, as training data and four speeches from 2022 as test data.2 This design has two considerations: (1) We aim to replicate the real-world scenario where models are trained on previous claims (e.g., from 1998 to 2021) and used to predict future claims on potentially unseen topics (e.g., in 2022). (2) The test set will be used to evaluate the annotation performance of AFaCTA, and the 2022 speeches are likely unseen by June LLM checkpoints we use to better replicate the future-claimdetection scenario.\nThe PoliClaim test set (PoliClaimtest) was annotated by two human experts3, who had no access to AFaCTA’s output when annotating. The experts achieved a substantial Cohen’s Kappa of 0.69 in independent annotation before the discussion. Then, they had meetings to resolve disagreements and develop gold labels. Disagreements were mainly caused by ambiguous verifiability, see Appendix A for disagreement resolving. Our annotation guideline, an instantiation of our factual claim definition, can be found in Appendix B.\nTo test AFaCTA’s annotation performance on different domains, we re-annotate the development set of CheckThat!-2021 (Nakov et al., 2021), which originally contained check-worthiness labels of COVID-19 tweets, following the same annotation process (Cohen’s Kappa 0.58). Due to budget limitations, our explorations and annotations mainly focused on the domain of political speech. 
We leave the extensive study on the social media domain (and other potential domains for factual claim detection) to future work.\nAfter verifying the performance of AFaCTA using the test sets (see more in Section 5.1), we annotated the training set with the tool’s assistance, imitating its expected use case of assisting annotation. The perfectly consistent samples were labeled directly with GPT-4 AFaCTA, while the inconsistent samples were left for human annotation. We randomly sampled 8 speeches and manually relabeled the inconsistent annotations from AFaCTA, leading to PoliClaimgold where all annotations are labeled with perfect consistency or human supervision. The perfectly consistent samples in the rest of the speeches fall into PoliClaimsilver, while the inconsistent samples fall into PoliClaimbronze. The statistics of the datasets can be found in Table 2.\n2 We do speech-level random sampling to keep the sentence distribution of full speeches.\n3 PhD students who are familiar with the domain of political speeches in the U.S. and COVID-related claims and have good knowledge of the literature on claim detection.",
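The speech-level sampling from footnote 2 can be sketched as follows. This is a hypothetical helper under stated assumptions: the data layout and all names are mine, not from the paper's code.

```python
import random

def sample_speeches(speeches_by_year, n_train_per_year=2,
                    test_year=2022, n_test=4, seed=0):
    """Speech-level random sampling: whole speeches are drawn (rather
    than individual sentences), keeping the natural sentence
    distribution of full speeches.

    speeches_by_year maps year -> list of speeches, where each speech
    is a list of already-split sentences.
    """
    rng = random.Random(seed)
    train = [s for year in range(1998, 2022)  # two speeches per year
             for s in rng.sample(speeches_by_year[year], n_train_per_year)]
    test = rng.sample(speeches_by_year[test_year], n_test)
    return train, test
```

With 24 training years at two speeches each, this yields 48 training speeches and 4 test speeches, matching the split described above.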
"5 Experiments": "Since AFaCTA is an LLM-agnostic prompting framework, we test both GPT-3.5 (Ouyang et al., 2021) and GPT-4 (OpenAI, 2023) as the backbone LLM. We also test open-sourced LLMs which does not work well due to high position bias in Step 3 (see Appendix F). Detailed settings are in Appendix G to ensure reproducibility.",
"5.1 AFaCTA Annotation Performance": "It is unlikely for LLMs to produce expert-level annotation on all samples S. Therefore, AFaCTA (with LLM M) calibrates its performance with self-consistency, dividing S into two subsets: SMcon with perfect consistency across all steps (0 or 3 votes) and SMinc with inconsistency among some steps (0.5 to 2.5 votes). We use two criteria to compare AFaCTA with human experts: (1) Accuracy: AFaCTA’s accuracy vs. experts’ average accuracy, both are computed against gold labels; (2) Agreement (Cohen’s Kappa): AFaCTA’s average agreement to experts vs. agreement between experts. Both metrics should be compared on S, SMcon, and SMinc to evaluate AFaCTA’s reliability on entire, perfectly consistent, and inconsistent samples. See Appendix E for formulas and implementations of all metrics.\nThe results are presented in Table 3. On the full test set S, even GPT-4 AFaCTA underperforms the average performance of human experts on both accuracy and agreement. However, if we only consider the subset where AFaCTA has perfect consistency (SMcon), GPT-4 outperforms human experts by a large margin on accuracy (98.49% > 94.85%) and achieves better agreement with experts (0.833 > 0.743). On the contrary, LLMs achieve worse annotation performance than human experts on inconsistent subsets (SMinc). Comparable inter-human agreement is achieved on both subsets, but the accuracy and agreement on SMcon are higher, indicating that SMcon is slightly less challenging than S M inc.\nTakeaway: With AFaCTA’s self-consistency calibration, auto-annotation of perfectly consistent samples can be reliably adopted to reduce manual effort (also see Section 5.5). In the case of PoliClaimtest, only 51.22% needs further supervision, while 48.78% of manual effort is saved with\nGPT-4-AFaCTA.",
"5.2 Error Analysis": "Annotation errors in the fact-checking domain may lead to downstream model inaccuracies. Therefore, we also analyze AFaCTA’s errors within the perfectly consistent samples. We find that GPT-4 AFaCTA makes false positive errors due to oversensitivity to granular or implicit facts. It makes false negative errors due to context limitations. GPT-3.5 seems less capable of identifying implicit facts within opinions compared to GPT-4. It sometimes fails to identify facts that are specific enough for verification and asks for more “specific details”. Roughly 97% of its errors are false negatives caused by misunderstanding verifiability and other hallucinations, indicating that its positive predictions are more reliable.\nIn Appendix N, we analyze all errors rather than provide isolated examples to avoid cherry-picking. We hope that this thorough analysis can benefit future research in manual/automatic annotation about factual claims.",
"5.3 Predefined Reasoning Paths Matter": "Leveraging self-consistency to improve LLM reasoning is not new. Wang et al. (2023) show that LLMs can use self-sampled reasoning paths (i.e., CoTs) to improve predictions with self-consistency. In AFaCTA, we use pre-defined reasoning paths instead of LLM-sampled ones. To compare these approaches, we conduct self-consistency CoT with the prompt of Step 1: Direct Classification. Step 1\nis chosen since it (1) directly addresses verifiability, which is the core of our factual claim definition; (2) contains no predefined CoT; and (3) is simple but achieves decent performance compared to Steps 2 and 3 (see Appendix H where we separately evaluate each step’s performance).\nWe generate 11 CoTs (more details in Appendix I) for both GPT-3.5 and GPT-4 and then compute accuracy scores for different selfconsistency levels. The results are illustrated in the left figure of Figure 2. We observe that selfconsistency level, to some degree, calibrates accuracy: a higher self-consistency level generally indicates higher accuracy, and vice versa. However, self-consistency CoT underperforms AFaCTA on the perfectly consistent subset (84.18% < 98.49%) while the former samples 11 CoT reasoning paths, and the latter relies on only 3 predefined reasoning paths. One possible explanation is that the predefined paths encourage critical thinking and reasoning from different angles, making the achieved selfconsistency more comprehensive. We also observe that AFaCTA and self-consistency CoT achieve perfect consistency on 48.78% and 58.09% of the data, respectively, indicating that the perfectconsistency in AFaCTA is only slightly harder to achieve than in self-consistency CoT.\nFurthermore, we find that the accuracy on perfectly consistent samples grows with the number of CoT voters (see the right figure of Figure 2). This is intuitive as more consistent outputs indicate more confident predictions. 
However, the marginal benefit of adding more CoTs drops significantly: the accuracy of GPT-4 tends to converge to 85%. Since the accuracy of GPT-3.5 seems to grow linearly up to 11 CoTs, we further extend it to 19 CoTs and observe convergence to 84.1% (see Figure 5), which is still much lower than GPT-3.5 AFaCTA’s 90.4%.\nTakeaway: Auto-annotations with more self-consistency (especially the perfectly consistent ones) tend to be more accurate. However, the source of self-consistency needs to be diversified and well-defined to scale up annotation performance efficiently. In this case, we show that predefined reasoning paths with expertise outperform those automatically sampled by LLMs.",
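The self-consistency CoT baseline compared above can be sketched as follows; `classify_once`, a hypothetical wrapper around one LLM call with a sampled chain of thought, stands in for the actual prompting code:

```python
from collections import Counter

def self_consistency_vote(classify_once, statement, n_paths=11):
    """Majority-vote over n independently sampled CoT reasoning paths.

    classify_once(statement) -> 0/1 is assumed to wrap a single LLM
    call with a sampled chain of thought (temperature > 0). Returns the
    majority label and the consistency level, i.e., the fraction of
    paths agreeing with it; 1.0 means perfect consistency.
    """
    labels = [classify_once(statement) for _ in range(n_paths)]
    (label, count), = Counter(labels).most_common(1)
    return label, count / n_paths
```

Binning accuracy by the returned consistency level would reproduce the kind of calibration curve shown in Figure 2, with AFaCTA's three predefined paths playing the role of the sampled paths here.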
"5.4 Domain Agnostic AFaCTA": "The reasoning logic of AFaCTA is not restricted to the political speech domain. To verify its performance on the social media domain, we conduct the analyses in Section 5.1 and Section 5.3 again on the CheckThat!-2021 (Nakov et al., 2021) development set. Experiment results are similar to those on PoliClaimtest (see Appendix J). Therefore, AFaCTA may assist factual claim annotation in various domains.",
"5.5 AFaCTA Delivers Useful Annotations": "To explore whether AFaCTA’s annotation can replace or augment manual annotation in training classifiers, we train hundreds of classifiers with different combinations of PoliClaimgold (AFaCTA annotations + Human Supervision), PoliClaimsilver (AFaCTA perfectly consistent annotations), and PoliClaimbronze (AFaCTA inconsistent annota-\ntions). All results are averaged over random seeds of 42, 43, and 44, and are supported with statistical significance tests (see Appendix L). 4\nUsing only gold, silver, or bronze data: We first gradually increase the number of training data points (by 100 per step) of the same quality. Results are shown in Figure 3. We observe the same phenomenon as previous work (Stammbach et al., 2023b) where the marginal accuracy gain drops while adding more data. The PoliClaimgold and PoliClaimsilver curves roughly follow the same growing trend, approaching GPT-4’s aggregated performance. This indicates that the perfectly consistent annotations (silver) from AFaCTA can strongly substitute for manually annotated data. The PoliClaimgold curve is slightly higher, showing that learning from human-supervised hard samples (inconsistent annotations of AFaCTA) is beneficial. The PoliClaimbronze curve is much lower, showing that the noisy, inconsistent annotations harm the classifier training.\nAugmenting training with auto-annotated data: When the manual annotation budget is limited, can we augment the dataset with automatic annotation? In Figure 4, we gradually augment the PoliClaimgold data with automatically annotated ones (100 per step). It can be observed that: (1) The performance increases more with PoliClaimsilver data augmentation, showing that the data quality is important in data augmentation. (2) Compared to augmenting the full PoliClaimgold dataset, augmentation results in more improvement when there are only 500 PoliClaimgold data. Therefore,\n4This section presents RoBERTa (Liu et al., 2019) results. 
Appendix M presents similar DistilBERT (Sanh et al., 2019) results as side findings. Detailed fine-tuning settings are in Appendix K.\nhigh-quality automatic annotation is more helpful when the manual annotation budget is limited. (3) Combining gold and silver data leads to classifiers that outperform aggregated GPT-4 reasoning, demonstrating that extending training data with LLM annotation is a promising approach to achieving better performance. One of the best RoBERTa checkpoints trained on all PoliClaimgold and PoliClaimsilver is available on HuggingFace5.",
"6 Related Work": "Claim Detection: The term “claim detection” has different definitions in various research fields (Boland et al., 2022). Even inside the field of factchecking, its exact definition depends on the domain (Alam et al., 2021b; Stammbach et al., 2023b) or task objective (Arslan et al., 2020; Konstantinovskiy et al., 2020; Gangi Reddy et al., 2022) and is somewhat arbitrary. In this work, we propose a definition focusing on one important dimension of factual claims – verifiability, to minimize the conceptual uncertainty. Another important dimension of factual claims is check-worthiness (Arslan et al., 2020; Nakov et al., 2021, 2022; Barrón-Cedeño et al., 2023), whose definition is more arbitrary (Konstantinovskiy et al., 2020).\nAutomatic Annotation: Automatic data annotation using LLM is both promising (Pangakis et al., 2023) and necessary (Veselovsky et al., 2023). Early work observes that LLMs’ annotation performance highly depends on tasks: LLMs outperform human annotators on some tasks (Gilardi et al., 2023; Zhu et al., 2023; Törnberg, 2023) but fails to achieve human-level performance on others (Ziems et al., 2023; Reiss, 2023). Therefore, we argue that\n5https://huggingface.co/JingweiNi/roberta-base-afacta\na detailed task-specific study about LLM annotation reliability is essential.\nPangakis et al. (2023) recommend evaluating LLMs’ annotation against a small subset that is not in the LLMs’ training corpus and annotated by subject matter experts. We follow these suggestions in this work. Concurrent studies also explore self-consistency (Pangakis et al., 2023) and CoT (He et al., 2023) to improve the performance and reliability of LLM annotation. However, they do not compare predefined reasoning paths with automatically sampled CoTs.",
"7.1 Check-Worthiness": "The objective of factual claim detection is to prioritize claims that are both verifiable and checkworthy, maximizing the use of potentially limited fact-checking resources. However, in this project, we focus on verifiability without exploiting the other important aspect: checkworthiness. Konstantinovskiy et al. (2020) argues that the definition of check-worthiness is subjective. However, it is possible to define a claim’s checkworthiness according to its context. For example, is the claimer an influential person or media? Is the topic controversial? There has already been work that takes some contextual information (e.g., claimer, topic, etc.) into account (Gangi Reddy et al., 2022). Future work may explore deterministic and efficient ways to define and annotate checkworthiness leveraging rich contextual information.",
"7.2 Only GPT-4 Is Reliable": "We find that only GPT-4-AFaCTA outperforms human experts on perfectly consistent samples. GPT3.5 achieves promising results but tends to produce false negative errors. Although GPT-4 is much cheaper than human supervision, it is close-sourced and is comparatively more expensive than other LLMs. Future work may study how to use opensourced models to produce high-quality annotations. Specifically, future work may explore (1) training the model to better understand the annotation guideline; (2) leveraging internal certainties like output logits; and (3) extending the spectrum of self-consistency levels with cheaper inference.",
"8 Conclusion": "We propose AFaCTA, which leverages LLMs to assist in the annotation of factual claim detection. It\nensures reliability by calibrating annotation quality through consistency. AFaCTA’s consistent annotation proves effective for training and data augmentation even without human supervision.\nLimitations\nAFaCTA Prompt. The design of AFaCTA prompts is inspired by the fast and slow thinking patterns (Kahneman, 2011) and prior knowledge of factual claim definition. However, we do not explore other techniques (e.g., few-shot prompting, in-context learning, and putting whole annotation guidelines in context etc.) to improve AFaCTA performance further, for two reasons: (1) the current AFaCTA’s performance is good enough to show the potential of assisting claim detection annotation with LLMs; and (2) we annotated thousands of sentences with GPT-4-AFaCTA, which is very expensive. Extending the current prompts with more in-context information is not affordable for us.\nBesides, AFaCTA step 2 and 3 cost (approximately) 6.5x and 8.5x more tokens than step 1. Although step 2 and 3 bring self-consistency calibration and performance gain through aggregation, the marginal benefit of API cost is far from perfect.\nSocial Media and Other Domains. In this work, we only conduct extensive experiments and analyses on the political speech domain, only exploring the social media domain with a small dataset (due to the definition discrepancy, we cannot evaluate our methods with prior datasets). We believe a comprehensive study on one domain can provide deeper insights, and the conclusions might be transferable to other domains. Therefore, we do not split our budget across various domains. Future work may consider extending the large-scale analyses to other domains that need fact-checking.\nLimited Expert Annotators. We only evaluate AFaCTA’s annotation performance against two experts, which may lead to potential bias. 
We could not hire more expert annotators mainly because expert annotation is extremely expensive, and it is hard to find more experts with good knowledge of factual claim definitions. To compensate, we release all expert annotations and detailed error analyses in which the potential bias can be examined. Besides, adding unsupervised LLM-annotated data continuously improves the accuracy on PoliClaimtest, demonstrating that our human labeling on PoliClaimtest has very limited bias.\nEthics Statement\nIn this work, all human annotators are officially hired and have full knowledge of the context and utility of the collected data. We adhered strictly to ethical guidelines, respecting the dignity, rights, safety, and well-being of all participants.\nThere are no data privacy issues or biases against certain demographics with regard to the annotated data. Both the original SOTS data (Picard and Stammbach, 2022) and the CheckThat!-2021 (Nakov et al., 2021) datasets are widely used for NLP and other research. Our annotated datasets will also be publicly available for research purposes.",
"Acknowledgements": "This paper has received funding from the Swiss National Science Foundation (SNSF) under the project ‘How sustainable is sustainable finance? Impact evaluation and automated greenwashing detection’ (Grant Agreement No. 100018_207800). It is also funded by grant from Hasler Stiftung for the Research Program Responsible AI with the project “Scientific Claim Verification.”",
"A Ambiguities in Verifiability": "In political speeches and social media, not all statements are necessarily grounded with enough specific information and are undoubtedly verifiable. Many statements are a mixture of specificity and vagueness, which makes verifiability hard to define. The specificity required for verification may vary based on the topic. But generally, the more specific information a fact contains, the more verifiable it is. For example, a vague statement like \"Birmingham is small\" tends to be a not verifiable opinion since it lacks specificity (e.g., the standard of “being small”). In contrast, \"Birmingham is small in terms of population compared to London\" offers a clearer path for verification by comparing the population sizes of both cities. Such ambiguity in verifiability results in different expert annotations. To resolve disagreement and obtain gold labels, we have the experts debate “whether a statement provides enough specific information to guide factcheckers in verification” to achieve agreement.\nIn the following list, we showcase some examples with vague verifiability. We rely on our experts’ critical thinking and common sense to determine their verifiability. E1. “I promised that our roads would be the envy\nof the nation.” Analysis: “envy of the nation” seems to be an unverifiable subjective expression. However, this is a part of the speaker’s pledge about improving infrastructure and can be verified by comparing the roads with those in other states.\nE2. “Evil acts against innocent people in the places where we once ran errands or recreated have also made us feel less safe.” Analysis: the speaker claims the existance of evil acts which seems verifiable. However, no specific details are mentioned and different people may interpret or define “evil act” differently. Therefore, it is hard to verify.\nE3. 
“In my budget proposals, we will fully fund our rainy-day accounts.” Analysis: the \"rainy-day account\" seems to be an unspecific metaphor that is hard to verify. However, we know from the context that the speaker claims to fund emergency cases (i.e., rainy days). Therefore, it tends to be verifiable.\nE4. “Ensuring society provides a hand up when people need help.” Analysis: it seems that the speaker is pledging a helpful society. However, nothing specific is mentioned, making this claim hard to verify.\nE5. “Folks, no doubt, the last couple of years have been especially trying for our medical professionals.” Analysis: at first glance, the medical professionals’ personal feelings seem subjective and not verifiable. However, as COVID is a public event, this can be verified by checking data related to the workload, stress levels, and overall conditions of medical professionals.\nE6. “Authoritarian and illiberal impulses aren’t just rising overseas, they’ve been echoing here at home for some time.” Analysis: it claims that authoritarian and illiberal impulses are rising. However, no specific events or details are mentioned, so different people may interpret them differently, making it hard to verify.\nE7. “We are finally going to fix the darn roads.” Analysis: “darn roads” is a subjective expression. However, the speaker’s pledge of improving (at least some) roads is verifiable.\nE8. “I’ll call this nonsense what it is, and that is an un-American, outrageous breach of our federal law.” Analysis: the speaker interprets the COVID vaccination plan as “an un-American, outrageous breach of federal law”, which seems verifiable by checking the laws. However, this is a controversial issue where different people may have different interpretations of the laws. Importantly, no specific legal provisions are mentioned. Therefore, it leans towards an unverifiable opinion.\nWe make all our experts’ annotations publicly available. 
Challenging samples can be found by locating disagreements. Though we tried our best to make the annotations accurate, errors may still occur due to the challenging nature of these samples. We encourage future work to improve our definitions to resolve the existing vagueness.",
"B Annotation Guideline": "The task is to select verifiable statements from political speeches for fact-checking. Given a statement from a political speech and its context, answer\ntwo questions following the guidelines. Your annotation will be used to evaluate an LLM-based annotation assistant for factual claim definition.\nB.1 Guidelines\nContext: Make sure to consider a small context of the target statement (the previous and next sentence) when annotating. Some statements require context to understand the meaning. For example: E1. “... Just consider what we did last year for the\nmiddle class in California, sending 12 billion dollars back – the largest state tax rebate in American history. But we didn’t stop there. We raised the minimum wage. We increased paid sick leave. Provided more paid family leave. Expanded child care to help working parents ...” Without the context, the underlined sentence seems an incomplete sentence. With the context, we know the speaker is claiming a bunch of verifiable achievements of their administration.\nE2. “... When I first stood before this chamber three years ago, I declared war on criminals and asked for the Legislature to repeal and replace the catch-and-release policies in SB 91. With the help of many of you, we got it done. Policies do matter. We’ve seen our overall crime rate decline by 10 percent in 2019 and another 18.5 percent in 2020! ...” The underlined part claims that the policies against crimes have been “done”, which is verifiable. It needs context to understand it.\nOpinion with Facts: Opinions can also be based on factual information. For example: E1. “I am proud to report that on top of the lo-\ncal improvements, the state has administered projects in almost all 67 counties already, and like I said, we’ve only just begun.” The speaker’s “proud of” is a subjective opinion. However, the content of pride (administered projects) is factual information.\nE2. 
“I first want to thank my wife of 34 years, First Lady Rose Dunleavy.” The speaker expresses their thankfulness to their wife. However, there is factual information about the First Lady’s name and the length of their marriage.\nWhat is verifiable? The verifiability of factual information depends on how specific it is. If there is enough specific information to guide a general fact-checker in checking it, the factual information is verifiable. Otherwise, it is not verifiable. For example:\nE1. “Birmingham is small.” is not verifiable because it lacks any specific information for determining veracity. It leans more toward subjective opinion.\nE2. “Birmingham is small, compared to London” is more verifiable than E1. A fact-checker can retrieve the city size, population size, etc., of London and Birmingham to compare them. However, which aspect to compare to establish that Birmingham is “small” is not specific enough.\nE3. “Birmingham is small in population size, compared to London” is more verifiable than E1 and E2. A fact-checker now knows it is exactly the population size to be compared.\nWhen does an opinion explicitly present a fact? Many opinions are more or less based on some factual information. However, some facts are explicitly presented by the speakers, while others are not. Explicit presentation means the fact is directly entailed by the opinion without extrapolation:\nE1. “The pizza is delicious.” This opinion seems to be based on the fact that “pizza is a kind of food”. However, this fact is not explicitly presented.\nE2. “I first want to thank my wife of 34 years, First Lady Rose Dunleavy.” The name of the speaker’s wife and the length of their marriage are explicitly presented.\nAlong with these guidelines, the definitions in Section 2 are also presented to the annotators.\nB.2 Annotation Questions",
"Q1. Does the target statement explicitly present any verifiable factual information?": "• A - Yes, the statement contains factual information with enough specific details that a factchecker knows how to verify it. E.g., Birmingham is small in population compared to London.\n• B - Maybe, the statement seems to contain some factual information. However, there are certain ambiguities (e.g., lack of specificity) making it hard to determine the verifiability. E.g., Birmingham is small compared to London. (lack of details about what standard Birmingham is small)\n• C - No, the statement contains no verifiable factual information. Even if there is some, it is clearly unverifiable. E.g., Birmingham is small.\nIf your answer to Q1 is B - Maybe, then please answer Q2 below:",
"Q2. Do you think this statement needs fact-": "checking of any degree? In other words, does it lean more to checkable facts or subjective opinions?\n• A - Yes, it leans more to facts that need checking.\n• B - No, it leans more toward subjective opinion and does not need a fact-check.\nSamples labeled with A and B/A are positive samples, while those with C and B/B are negative samples.",
"C AFaCTA Prompts": "Following are the prompts of AFaCTA. In all prompts, we always include the previous and next sentence of the target statement if the context is available. “{sentence}”, and “{context}” are variables to be substituted with the target sentence and its contexts correspondingly. When annotating Twitter data, we simply change “political speech” to “Twitter” and remove the specifications about contexts (see exact prompts in our code base).\nC.1 System Prompt You are an AI assistant who helps fact -checkers\nto identify fact -like information in statements.\nC.2 Step 1: Direct Classification Given the <context > of the following <sentence >\nfrom a political speech , does it contain any objective information?\n<context >: \"...{ context }...\" <sentence >: \"{ sentence }\"\nAnswer with Yes or No only.\nC.3 Step 2: Fact-Extraction CoT In this prompt, we use the categorical definition for facts in Konstantinovskiy et al. (2020), removing the final category of “other statements you think are claims” to reduce uncertainty.\nStatements in political speech are usually based on facts to draw reasonable conclusions.\nCategories of fact: C1. Mentioning somebody (including the speaker)\ndid or is doing something specific and objective.\nC2. Quoting quantities , statistics , and data. C3. Claiming a correlation or causation.\nC4. Assertion of existing laws or rules of operation. C5. Pledging a specific future plan or making specific predictions about future.\nPlease first analyze the objective and subjective information that the following < statement > (from a political speech) covers. Then extract the fact that the <statement > is based on. Then carefully reason about if the extracted fact is objectively verifiable. 
Finally answer if the fact falls into the above categories (C1 to C5) or not (C0).\nContext for <statement> to help you understand it better: \"{context}\"\n<statement>: \"{sentence}\"\nFormat your answer in JSON with the following keys in order:\n{{\n\"ANALYSIS\": \"What are the objective and subjective information that <statement> covers?\",\n\"FACT_PART\": \"The extracted fact.\",\n\"VERIFIABLE_REASON\": \"Detailed reason about the extracted fact's verifiability. Note that a fact lacks important details or can be interpreted differently is not objectively verifiable. Future plans/pledge (C5) that are specific and clear can be verifiable. Citing others' words is verifiable and falls into C1.\",\n\"VERIFIABILITY\": \"A boolean value indicates the verifiability.\",\n\"CATEGORY\": \"C1 to C5, or C0.\"\n}}\nFollowing is an example showing GPT-4’s analysis of a statement, given its context:\nTarget statement: ``We are so thankful that we haven't suffered any loss of life, and it's always heartening to see and hear stories of Alaskans pitching in to help each other.''\nGPT-4's reply:\n{\n\"ANALYSIS\": \"The objective information in the statement is that there has been no loss of life due to the storms. 
The subjective information is the speaker's expression of gratitude and the heartening stories of Alaskans helping each other.\",\n\"FACT_PART\": \"There hasn't been any loss of life due to the storms.\",\n\"VERIFIABLE_REASON\": \"The fact that there hasn't been any loss of life due to the storms can be verified by checking official records and reports from relevant authorities such as emergency services, hospitals, and local government.\",\n\"VERIFIABILITY\": true,\n\"CATEGORY\": \"C1\"\n}\nC.4 Step 3: Reasoning with Debate\nArgument for objectivity:\nConcisely argue that the following <sentence> from a political speech does contain some objective information.\nContext of <sentence> in the speech: \"...{context}...\"\n<sentence>: \"{sentence}\"\nFollowing is an example argument from GPT-4 that there is some objective information in the target statement:\nTarget statement: ``We are so thankful that we haven't suffered any loss of life, and it's always heartening to see and hear stories of Alaskans pitching in to help each other.''\nGPT-4's reply: The sentence \"The storms have required state disaster declarations covering almost half of our boroughs and the communities they include\" does contain objective information. It provides specific details about the impact of the storms, stating that they have led to state disaster declarations in nearly half of the boroughs in the state. 
This is a factual claim that can be verified by checking the official records of disaster declarations.\nArgument for subjectivity:\nConcisely argue that the following <sentence> from a political speech does not contain any objective information.\nContext of <sentence> in the speech: \"...{context}...\"\n<sentence>: \"{sentence}\"\nFollowing is an example argument from GPT-4 that there is NO objective information in the target statement:\nTarget statement: ``We are so thankful that we haven't suffered any loss of life, and it's always heartening to see and hear stories of Alaskans pitching in to help each other.''\nGPT-4's reply: The sentence \"The storms have required state disaster declarations covering almost half of our boroughs and the communities they include\" does not contain objective information because it lacks specific details. It does not provide the exact number or names of the boroughs affected, the specific nature of the disaster declarations, or the precise extent of the damage. Without these details, the statement remains vague and subjective.\nFinal judgment:\nTwo AI assistants are debating about whether the following <sentence> (from a political speech) contains any objectively verifiable information.\nContext of <sentence> in the speech: \"...{context}...\"\n<sentence>: \"{sentence}\"\nAssistant A's View: \"{assistant_a}\"\nAssistant B's View: \"{assistant_b}\"\nBased on the above, does <sentence> contain any objectively verifiable information? Which perspective do you align with more closely? Please reply with \"Lean towards A\", or \"Lean towards B\" only.\nFollowing is an example of GPT-4’s judgment given arguments from different perspectives:\nTarget statement: ``We are so thankful that we haven't suffered any loss of life, and it's always heartening to see and hear stories of Alaskans pitching in to help each other.''\nGPT-4's reply: Lean towards A.\nHere, the objectivity argument is assigned to Assistant A.",
"D AFaCTA Tie-Breaking": "All samples with 1.5 votes are labeled as nonfactual claims. This tie-break is randomly chosen\nfor two reasons: (1) tie-break does not influence AFaCTA’s performance as it does not affect the perfectly consistent subset (i.e., the subset annotated with best confidence and may not need further human supervision). (2) Samples with 1.5 votes are the most confusing ones reported by AFaCTA, which require human supervision the most. They should not be dogmatically classified in practice. We only do it here to compute AFaCTA’s annotation performance on the test sets.",
"E Details of Evaluation Metrics": "This section presents the formulas of metrics used in Section 5. For conciseness, only formulas on perfectly consistent samples SMcon are showcased. Similar formulas are applied for inconsistent samples SMinc and all samples S.\nAverage accuracy of human expert on perfectly consistent samples SMcon is calculated as:\nAccHcon = 1\n2\n∑\nh∈{h1,h2} acc_score(Gcon, P hcon) (1)\nwhere Gcon and P hcon denote the gold labels and human-annotated labels of samples where AFaCTA achieves perfect self-consistency; and h1 and h2 denotes two human experts.\nAccuracy of AFaCTA against gold label on SMcon is calculated as:\nAccMcon = acc_score(Gcon, P M con) (2)\nwhere PMcon denotes AFaCTA’s prediction on perfectly consistent samples.\nAgreement (Cohen’s Kappa) between human annotators on SMcon is calculated as:\nKappaHcon = cohen_kappa(P h1 con, P h2 con) (3)\nAverage Cohen’s Kappa between AFaCTA and two human annotators on SMcon is calculated as:\nAccMcon = 1\n2\n∑\nh∈{h1,h2} cohen_kappa(P hcon, P M con) (4)\nWe use Sci-Kit Learn’s accuracy and Cohen’s Kappa implementations to calculate all metrics.",
"F AFaCTA with Open-sourced LLMs": "We tried AFaCTA framework on two popular opensourced LLMs: Llama-2-chat-13b (Touvron et al., 2023) and zephyr-7b-beta (Tunstall et al., 2023). Results are presented in Table 4. For both models,\nwe use the official checkpoints on huggingface and conduct greedy decoding when inference. We observe that both models suffer from heavy position bias in AFaCTA step 3: when putting arguments for verifiable and unverifiable to different positions, llama-2-chat-13b and zephyr-7b-beta predict inconsistently in 99% and 97% cases correspondingly. Therefore, there are seldom annotations with perfect consistency, and the consistency-based annotation strategy of AFaCTA does not help.\nWe also observe that zephyr-7b-beta achieves better performance than GPT-3.5 on CheckThat!2021-dev, showing the potential of using open-sourced LLMs as annotators. In future work, we will explore fine-tuning open-sourced LLMs to mitigate the position bias problem and improve annotation quality.",
"G Hyperparameter Settings": "For OpenAI models, we always use gpt-3.5-turbo0613 and gpt-4-0613. We use a temperature of 0, and top-p of 1 for all experiments except the selfconsistency CoT (Wang et al., 2023) experiments where we use a temperature of 0.7. We make all LLM generations publicly available. We always use a random seed of 42 if not specified. For opensourced LLM inference, we use greedy sampling, a top p of 1, and a maximum generation length of 3072.",
"H Performance of Each AFaCTA Step": "We compute the annotation performance of each AFaCTA reasoning step. For Step 3, we average the scores of labels 3.1 and 3.2 (see Figure 1). The results are presented in Table 5. It can be observed that Step 1, though simple, achieves promising performance. It outperforms other steps by a wide margin with GPT-4.",
"I Self-Consistency CoT": "We use the following prompt to generate Selfconsistency CoT. It keeps most of the prompt template of AFaCTA Step 1 to make them comparable. We use a temperature of 0.7 to sample different CoTs.\nGiven the <context > of the following <sentence > from a political speech , does it contain any objective information?\n<context >: \"...{ context }...\" <sentence >: \"{ sentence }\"\nFormat your reply as follows:\n[Chain of thought ]: your step -by-step reasoning about the question [Answer ]: a single word yes or no",
"J Experiments on Social Media Domain": "We compare AFaCTA’s annotation performance with human experts on the re-annotated CheckThat!-2021 development set. We have chosen this small set of social media data due to the limitation of the annotation budget.\nSimilar observations as PoliClaimtest can be drawn. GPT-4 AFaCTA outperforms experts on perfectly consistent samples and underperforms on inconsistent samples. GPT-3.5 also achieves a moderate agreement with human experts on perfectly consistent samples. Error analysis shows that GPT3.5’s error concentrates on false negatives, similar to its behavior in the political speech domain (see Table 12).\nWe also conduct the self-consistency CoT experiments on CheckThat!-2021-dev to verify the im-\nportance of a diversified source of self-consistency. The results are shown in Figure 6. It can be observed that the level of self-consistency calibrates accuracy, and the 3 predefined reasoning paths outperform automatically generated ones. One discrepancy is that self-consistency CoT slightly outperforms GPT-3.5 AFaCTA when sampling more than 7 reasoning paths. We attribute this to GPT3.5’s heavier hallucinations on Twitter domain (see Table 12 where it fails to identify apparent factual information). Therefore, complicated reasoning paths like AFaCTA Step 3 might be challenging in many cases.\nImportantly, due to the annotation budget, our experimental dataset on the social media domain is limited. We leave the extensive analysis of this domain to future work.",
"K Fine-tuning Settings": "For all RoBERTa and DistilBERT fine-tuning experiments, we keep all settings the same except for the training data. All models are fine-tuned for 5 epochs with a batch size of 64. We do not\nconduct checkpoint selection. For other hyperparameters, we keep the default setting of huggingface TrainingArgument: a learning rate of 5e-5, a max_grad_norm of 1, no warm-up and weight decay, etc. We use the huggingface checkpoints of “roberta-base” and “distilbert-base-uncased”. All experiments are conducted on a node with 4 32G V100 GPUs. It takes roughly 0.1 GPU hour to train a classifier. In this work, we always use Sci-kit Learn for score computing.",
"L Statistical Significance Test": "We conduct a statistical significance test to show that different training set combinations of PoliClaimgold, PoliClaimsilver, and PoliClaimbronze lead to statistically significant differences in fine-tuning claim detectors. We first conduct a Student-t test for each training combination based on the results of three random seeds and then aggregate p-values using Fisher’s method. For example, to compare “only PoliClaimgold” vs. only “PoliClaimsilver”, we use the following formula:\npx00 = Student-t({Accrx00g}, {Accrx00s}) (5) pagg = Fisher(p100, p200, ..., p2000) (6)\nwhere r denotes random seeds 42, 43, and 44; px00 denotes the p-value of the x00 step; and pagg denotes the aggregated p-value. The aggregated pvalues of all comparisons are shown in Table 7. It can be seen that all observations in Section 5.5 and Appendix M are statistically significant. Scipy’s implementations for Student-t test and Fisher’s Method are used.\nWe do not conduct statistical tests on experiments of Section 5.1 as obtaining independent samples of human / GPT-4 annotation can be very costly, and OpenAI API does not support random seeds at the moment of experimenting.",
"M Further Fine-tuning Experiments": "This section provides more supplementary results of the experiments in Section 5.5.\nM.1 Only Golen, Silver, or Bronze\nWe gradually increase the size of golden, silver, and bronze training data to fine-tune DistilBERT. The results are shown in Figure 7. The same observations can be drawn from Figure 3: perfectly consistent (silver) data achieve a similar growing trend as manually supervised (golden) data, while accuracy grows slower when adding (bronze) inconsistent data.\nM.2 Augmenting Gold Data with Silver/Bronze Data\nWe conduct the data augmentation experiments in Section 5.5 on both RoBERTa (Figure 8) and DistilBERT (Figure 9) with a different number of PoliClaimgold data (500, 1000, 1500, and 1936). Similar conclusions as Section 5.5 can be drawn: perfectly consistent (silver) data are better at aug-\nmentation than inconsistent (bronze) data. Figure 10 also shows a clear trend. When the manual annotation budget is more restricted, more augmentation data are needed to achieve a comparable performance.\nIn all experiments, the marginal benefit of adding data decreases quicker on DistilBERT than on RoBERTa, as expected. However, we suspect adding more high-quality annotated and diversified data might boost weaker models to outperform stronger models, though the marginal accuracy gain is low. We leave this exploration to future work.",
"N Error Analyses": "We conduct a thorough analysis on GPT-4 and GPT-3.5 AFaCTA. Errors on PoliClaimtest can be found in Table 8, Table 9, and Table 10. Errors on CheckThat!-2021-dev can be found in Table 11 and Table 12.\nIn both domains, we observe that GPT-4 is good at disentangling factual information from speeches or tweets. But it also leads to false positive errors due to over-sensitivity towards factual information. It also makes negative errors due to the lack of full context of the statements. In general, GPT-4 only makes mistakes on confusing samples that lie between factual and non-factual claims.\nGPT-3.5’s errors concentrate on false negatives. It regularly hallucinates about personal experience and quotations which are explicitly defined in the prompts. It is very conservative in identifying any-\nthing as verifiable fact arguing there not enough “specific details” to determine verifiability. However, many facts are already specific enough for verification (see row 2 of Table 9). Sometimes, it also fails to identify facts entangled with opinions (see row 1 of Table 10 and row 1 of Table 12)."
}