| { |
| "url": "http://arxiv.org/abs/2404.16627v1", |
| "title": "Incorporating Lexical and Syntactic Knowledge for Unsupervised Cross-Lingual Transfer", |
| "abstract": "Unsupervised cross-lingual transfer involves transferring knowledge between\nlanguages without explicit supervision. Although numerous studies have been\nconducted to improve performance in such tasks by focusing on cross-lingual\nknowledge, particularly lexical and syntactic knowledge, current approaches are\nlimited as they only incorporate syntactic or lexical information. Since each\ntype of information offers unique advantages and no previous attempts have\ncombined both, we attempt to explore the potential of this approach. In this\npaper, we present a novel framework called \"Lexicon-Syntax Enhanced\nMultilingual BERT\" that combines both lexical and syntactic knowledge.\nSpecifically, we use Multilingual BERT (mBERT) as the base model and employ two\ntechniques to enhance its learning capabilities. The code-switching technique\nis used to implicitly teach the model lexical alignment information, while a\nsyntactic-based graph attention network is designed to help the model encode\nsyntactic structure. To integrate both types of knowledge, we input\ncode-switched sequences into both the syntactic module and the mBERT base model\nsimultaneously. Our extensive experimental results demonstrate this framework\ncan consistently outperform all baselines of zero-shot cross-lingual transfer,\nwith the gains of 1.0~3.7 points on text classification, named entity\nrecognition (ner), and semantic parsing tasks. Keywords:cross-lingual transfer,\nlexicon, syntax, code-switching, graph attention network", |
| "authors": "Jianyu Zheng, Fengfei Fan, Jianquan Li", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Unsupervised cross-lingual transfer refers to the process of leveraging knowledge from one lan- guage, and applying it to another language without explicit supervision (Conneau et al., 2019). Due to the free requirement of the labeled data in tar- get language, it is highly preferred for low-resource scenarios. Recently, unsupervised cross-lingual transfer has been widely applied in various natural language processing (NLP) tasks, such as part-of- speech (POS) tagging (Kim et al., 2017; de Vries et al., 2022), named entity recognition (NER) (Fe- tahu et al., 2022; Xie et al., 2018), machine reading comprehension (Hsu et al., 2019; Chen et al., 2022), and question answering (QA) (Nooralahzadeh and Sennrich, 2023; Asai et al., 2021). The success of unsupervised cross-lingual trans- fer can be attributed to its ability to exploit connec- tions across languages, which are reflected in vari- ous linguistic aspects such as lexicon, semantics, and syntactic structures. Consequently, many stud- ies have sought to enhance models by encouraging them to learn these cross-lingual commonalities. For instance, in the lexical domain, Qin et al. (2021) utilize bilingual dictionaries to randomly replace certain words with their translations in other lan- guages, thereby encouraging models to implicitly align representations between the source language and multiple target languages. In the area of syntax, several works have developed novel neural archi- \u2217Equal Contribution \u2020 Jianquan Li is the corresponding author tectures to guide models in encoding the structural features of languages. Ahmad et al. (2021), for example, proposes a graph neural network (GNN) to encode the structural representation of input text and fine-tune the GNN along with the multilingual BERT (mBERT) for downstream tasks. Both lexical and syntactic approaches facilitate the alignment of linguistic elements across different languages, thereby enhancing the performance of cross-lingual transfer tasks. However, language is a highly intricate system (Ellis and Larsen-Freeman, 2009), with elements at various levels being interconnected. For exam- ple, sentences are composed of phrases, which in turn are composed of words. In cross-lingual transfer, we hypothesize that merely guiding mod- els to focus on a single linguistic aspect is inade- quate. Instead, by simultaneously directing models to learn linguistic knowledge across diverse levels, their performance can be further improved. Table 1 presents some example sentences extracted from the XNLI dataset (Conneau et al., 2018). These parallel sentence pairs demonstrate that the multi- lingual model makes incorrect predictions for sen- tence pairs in the target languages (French and Ger- man) when only one aspect of linguistic knowledge, such as lexical or syntactic knowledge, is incorpo- rated. However, when both types of knowledge are integrated into the model, the correct prediction is obtained. Despite this, most previous studies have focused on either syntactic or lexical information alone, without considering the integration of both types of information. arXiv:2404.16627v1 [cs.CL] 25 Apr 2024 Lang Premise(P)/Hypothesis(H) Label +Lex +Syn Ours fr P:Votre soci\u00e9t\u00e9 charitable fournit non seulement de les services sociaux communautaires efficaces \u00e0 les animaux et les personnes, mais sert \u00e9galement \u00e9galement de fourri\u00e8re pour la Ville de Nashua. H:La soci\u00e9t\u00e9 humaine est le refuge pour animaux de Nashua. 
entali contra contra entail de P:Ihre humane Gesellschaft erbringt nicht nur effektive gemeinschaftlich-soziale Dienstleistungen f\u00fcr Tiere und ihre Menschen, sondern dient auch als Zwinger der Stadt Nashua. H:Die Humane Society ist Nashuas Tierheim. entail contra contra entail en P:Your humane society provides not only effective community social services for animals and their people , but also serves as the pound for the City of Nashua . H:The humane society is Nashua\u2019s animal shelter . Table 1: The parallel sentence pairs in French and German from XNLI(Conneau et al., 2018), which are translated from English. Each sentence pair consist of a Premise sentence(P) and a Hypothesis sentence(H). The \"Label\" column indicates the relationship between each sentence pair, which can be contradiction(contra), entailment(entail) or neutral. \"+Lex\" and \"+Syn\" represent the prediction results from the multilingual models infused with lexical and syntactic knowledge, respectively. The \"ours\" column shows the results of integrating both types of knowledge into the model. Compared to the other two methods, our method can accurately predict the relationship between each sentence pair. In this work, we aim to enhance unsupervised cross-lingual transfer by integrating knowledge from different linguistic levels. To achieve this, we propose a framework called \"Lexicon-Syntax En- hanced Multilingual BERT\" (\"LS-mBERT\"), based on a pre-trained multilingual BERT model. Specifi- cally, we first preprocess the input source language sequences to obtain each word\u2019s part-of-speech information and dependency relationships between words in each sentence. Then, we replace some words in the sentence with their translations from other languages while preserving the established dependency relationships. Furthermore, we em- ploy a graph attention network(Veli\u010dkovi\u0107 et al., 2017) to construct a syntactic module, the output of which is integrated into the attention heads of the multilingual BERT. This integration guides the entire model to focus on syntactic structural rela- tionships. Finally, during the fine-tuning process, we simultaneously train the multilingual BERT and the syntactic module with the pre-processed text. As a result, our framework enables the multilingual BERT to not only implicitly learn knowledge related to lexical alignment but also encode knowledge about syntactic structure. To validate the effectiveness of our framework, we conduct experiments on various tasks, including text classification, named entity recognition (ner), and semantic parsing. The experimental results show that our framework consistently outperforms all baseline models in zero-shot cross-lingual trans- fer across these tasks. For instance, our method achieves the improvement of 3.7 points for mTOP dataset. Our framework also demonstrates sig- nificant improvements in generalized cross-lingual transfer. Moreover, we examine the impact of im- portant parameters, such as the replacement ra- tio of source words, and languages for replace- ment. To facilitate further research explorations, we release our code at https://github.com/ Tian14267/LS_mBert.", |
| "main_content": "Cross-lingual transfer is crucial in the field of natural language processing (NLP) as it enables models trained on one language to be applied to another. To enhance performance in transfer tasks, numerous studies focus on addressing the characteristics of various languages and their relationships. 2.1. Incorporating Lexical Knowledge for Cross-lingual Transfer A group of studies aims to incorporate lexical alignment knowledge into cross-lingual transfer research (Zhang et al., 2021a; Wang et al., 2022; Qin et al., 2021; Lai et al., 2021). For example, Zhang et al. (2021a) and Wang et al. (2022) employ bilingual dictionaries to establish word alignments and subsequently train cross-lingual models by leveraging explicit lexical associations between languages. Other methods (Qin et al., 2021; Lai et al., 2021) involve substituting a portion of words in a sentence with their equivalents from different languages, a technique commonly known as \"codeswitching.\" By increasing the diversity of input text, these approaches promote implicit alignments of language representations. However, this group of studies mainly offers insights into lexical translation across languages, while neglecting the learning of language-specific structural rules. 2.2. Incorporating Syntactic Knowledge for Cross-lingual Transfer Another research category focuses on integrating syntactic knowledge for cross-lingual transfer (Ahmad et al., 2021; Yu et al., 2021; Zhang et al., 2021b; He et al., 2019; Cignarella et al., 2020; Xu et al., 2022; Shi et al., 2022; Wang et al., 2021). Many studies in this group (Ahmad et al., 2021; Wang et al., 2021) develop graph neural networks to encode syntactic structures, a category to which our work also belongs. Taking inspiration from Ahmad et al. (2021), we adopt a similar architecture, specifically using a graph attention network to encode syntactic knowledge. Other methods (Cignarella et al., 2020; Xu et al., 2022) extract sparse syntactic features from text and subsequently incorporate them into the overall model. Although these approaches consider the relationships between language elements, they frequently overlook the alignments across languages, which impedes the effective transfer of linguistic elements and rules between languages. Consequently, we combine the strengths of these two categories of approaches. First, we replace the input sequence with translated words from other languages, which aids in guiding the entire model to acquire implicit alignment information. Then, we introduce an additional module to assist the model in encoding syntax. 3. Methodology In this section, we provide a detailed introduction to our framework \"LS-mBERT\", as illustrated in Figure 1. Our objective is to enhance the crosslingual transfer capabilities of multilingual BERT (mBERT) by incorporating both lexical and syntactic knowledge. Given an input sequence, we first pre-process it using a part-of-speech tagger and a universal parser(Section 3.1). This yields the part-of-speech tag for each word and dependency relationships among words in the sequence. To enable mBERT to implicitly encode word alignment information, we substitute some words with their translations from other languages using a code-switching technology (Section 3.2). Moreover, to guide mBERT in attending to syntactic relationships, we construct a graph attention network (GAT), introduced in Section 3.3. 
The output of the graph attention network is then used as input to the attention heads within BERT, effectively biasing attention information between words. Finally, to integrate both syntactic and lexical knowledge, we pass the code-switched text into both the GAT network and mBERT, which are trained simultaneously (Section 3.4). 3.1. Pre-processing Input Sequence The initial step involves pre-processing the input data to obtain prior knowledge for subsequent training. As our framework incorporates syntactic knowledge, we opt for an off-the-shelf parser with high accuracy to process the input text. In this case, we employ the UDPipe toolkit(Straka and Strakov\u00e1, 2017) to parse the inputs sentences, and Stanza(Qi et al., 2020) to annotate the part-of-speech information of each word. By utilizing both tools, given a sentence, we can obtain the dependency relationships between words and their part-of-speech information, which are then utilized to provide syntactic knowledge and enhance word representations, respectively. 3.2. Code-switching for Text (lexical knowledge) As our objective is to improve unsupervised crosslingual transfer, introducing explicit alignment signals would be inappropriate. Therefore, we employ an implicit strategy to guide the entire model to encode word alignment information. Inspired by the work of Qin et al. (2021), we opt for the codeswitching strategy. Specifically, we first randomly select a proportion \u03b1 of words within each source sentence. Then, for each selected word, we use a high-quality bilingual dictionary to substitute it with a corresponding translation from another target language. This method not only promotes the implicit alignment of representations across diverse languages within our model, but also enhances the model\u2019s robustness when processing input text. 3.3. Graph Attention Network (syntactic knowledge) To guide mBERT in acquiring syntactic knowledge better, we construct an external syntactic module by referring to the method introduced by Ahmad et al. (2021). The overview of this module is displayed in Figure 2. Given that there are n tokens in the input sequence, we first represent each token by combining its embedding representation with part-of-speech (POS) information. The representation of the i-th token can be calculated: xi = ciWc + posiWpos, where ci and posi represent the token representation and the part-ofspeech representation of the i-th token, respectively; while Wc and Wpos denote the token parameter matrix and the part-of-speech parameter matrix. Then, the encoded sequence s\u2032 = [x1, x2, \u00b7 \u00b7 \u00b7 , xn] is passed into the subsequent syntactic module, which is designed with a graph attention network (GAT) (Veli\u010dkovi\u0107 et al., 2017). The GAT module comprises a total of L layers, each with m attention heads. These attention heads play a crucial role in generating representations for individual tokens by attending to neighboring tokens in the graph. Each attention in GAT operates as follows: O = Attention(T, T, V, M), wherein T denotes the query and key matrices, and V represents the value matrix. Besides, M signifies the mask matrix, determining whether a pair of words in the dependency tree can attend each other. 
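Before the description of the syntactic module continues below, here is a minimal Python sketch of the code-switching substitution from Section 3.2, with toy dictionaries standing in for the MUSE lexicons the paper uses; the function name, dictionary format, and example words are illustrative assumptions rather than the released LS-mBERT code.

```python
import random

def code_switch(tokens, bilingual_dicts, alpha=0.5, seed=0):
    """Randomly replace a proportion alpha of tokens with dictionary translations.

    `bilingual_dicts` maps a target-language code to a {source_word: translation}
    dictionary (e.g. built from MUSE); words without an entry are left unchanged.
    """
    rng = random.Random(seed)
    n_replace = int(round(alpha * len(tokens)))
    positions = rng.sample(range(len(tokens)), k=n_replace)
    switched = list(tokens)
    for i in positions:
        lang = rng.choice(list(bilingual_dicts))  # pick a target language for this word
        switched[i] = bilingual_dicts[lang].get(tokens[i].lower(), tokens[i])
    return switched

# Toy dictionaries; real experiments would load MUSE lexicons instead.
dicts = {"de": {"guidelines": "leitlinien"}, "fr": {"iron": "fer"}, "es": {"needed": "necesitaba"}}
print(code_switch("The new iron guidelines mean more donors are needed".split(), dicts, alpha=0.4))
```

Because the dependency parse is computed before substitution, the original dependency edges can simply be kept for the switched sequence, as described in Section 3.4.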
Notably, the relationships between words in the attention matrix are modeled based on the distances between words in the dependency tree, rather than the positional information within the word sequence. Subsequently, the resulting representations produced by all attention heads are concatenated to form the output representations for each token. Finally, the output sequence from the final layer can be denoted as Y = [y1, y2, \u00b7 \u00b7 \u00b7 , yn], where yi represents the output representation for the i-th token. To maintain the lightweight nature of the architecture, certain elements in GAT have been excluded. Specifically, we do not employ feed-forward sub-layers, residual connections, or positional representations. We found that these modifications do not result in a significant performance gap. Figure 1: An overview of lexicon-syntax enhanced multilingual BERT ("LS-mBERT"). An example sentence is provided to explain how this framework works. To introduce lexical alignment knowledge, we utilize bilingual dictionaries to randomly replace some words in the sentence with equivalent words from other languages (pink for German, green for Spanish, light blue for Chinese, and orange for French). Then, a graph attention network (GAT) is developed to encode the syntactic structure of this sentence. The output representation of GAT is sent to the attention heads in multilingual BERT to guide them to focus on the language-specific structures. 3.4. Summary of the Framework: Lexicon-syntax Enhanced Multilingual BERT In this subsection, we provide an overview of our "LS-mBERT" framework, as illustrated in Figure 1. We first select multilingual BERT (mBERT) as the base model. Then, we process the input sequence using the code-switching strategy in Section 3.2, resulting in the code-switched sequence s\u2032. It is important to note that despite some words in each sentence being replaced with other languages, the original dependency relationships between words are still preserved in s\u2032. Next, we feed the code-switched text into both mBERT and the syntactic module (GAT), facilitating the fusion of the two types of knowledge. Furthermore, this step guides the entire model to better align different languages within the high-dimensional vector space during training. After GAT processes the code-switched sequence, the output from the final layer is utilized to bias the attention heads of mBERT. The calculation process can be described as O = Attention(Q + Y W_l^Q, K + Y W_l^K, V), where Q, K, and V represent the query, key, and value matrices, respectively, while W_l^Q and W_l^K are new parameters learned for biasing the query and key matrices.
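To make the two attention formulas above concrete, here is a minimal NumPy sketch of a single head: a dependency-masked GAT step producing Y (Section 3.3), followed by the biased mBERT attention O = Attention(Q + Y W_l^Q, K + Y W_l^K, V) (Section 3.4). The shapes, the scaled dot-product form, and all variable names are illustrative assumptions; the paper does not spell out these implementation details, so this is a sketch rather than the authors' code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gat_head(token_emb, pos_emb, W_c, W_pos, W_v, mask):
    """One GAT head: x_i = c_i W_c + pos_i W_pos, then O = Attention(T, T, V, M)."""
    X = token_emb @ W_c + pos_emb @ W_pos      # token + POS representations
    scores = X @ X.T / np.sqrt(X.shape[-1])    # T plays both the query and key roles
    scores = np.where(mask, scores, -1e9)      # M: only dependency-tree neighbours may attend
    return softmax(scores) @ (X @ W_v)

def syntax_biased_attention(Q, K, V, Y, W_q, W_k):
    """mBERT head biased by GAT outputs: O = Attention(Q + Y W_q, K + Y W_k, V)."""
    Qb, Kb = Q + Y @ W_q, K + Y @ W_k
    scores = Qb @ Kb.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

# Toy shapes only: 6 tokens, GAT width 16, mBERT head width 64.
n, d_gat, d_head = 6, 16, 64
rng = np.random.default_rng(0)
mask = np.eye(n, dtype=bool)
for head, dep in [(1, 0), (1, 2), (4, 1), (4, 3), (4, 5)]:   # toy dependency edges
    mask[head, dep] = mask[dep, head] = True
Y = gat_head(rng.normal(size=(n, 32)), rng.normal(size=(n, 8)),
             rng.normal(size=(32, d_gat)), rng.normal(size=(8, d_gat)),
             rng.normal(size=(d_gat, d_gat)), mask)
O = syntax_biased_attention(rng.normal(size=(n, d_head)), rng.normal(size=(n, d_head)),
                            rng.normal(size=(n, d_head)), Y,
                            rng.normal(size=(d_gat, d_head)), rng.normal(size=(d_gat, d_head)))
print(Y.shape, O.shape)  # (6, 16) (6, 64)
```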
Figure 2: The architecture of the graph attention network (Ahmad et al., 2021; Veli\u010dkovi\u0107 et al., 2017). Each input token is represented by combining its token embedding and part-of-speech embedding. Each attention head within the graph attention network (GAT) generates a representation for each token embedding by attending to its neighboring tokens in the dependency graph. Next, the resulting representations are concatenated to form the output representation for each token. Finally, we can obtain the representations of the output sequence embeddings from the final layer of GAT. 4. Experiments 4.1. Experimental Settings As mentioned above, we use UDPipe (Straka and Strakov\u00e1, 2017) and Stanza (Qi et al., 2020) for parsing sentences and obtaining words\u2019 part-of-speech information in all languages, and employ MUSE (Lample et al., 2018) as the bilingual dictionary for word substitution. For all tasks, we identify the optimal parameter combinations by searching within the candidate sets. The learning rate is set to 2e-5, utilizing AdamW as the optimizer. The batch size is 64, and the maximum length for input sequences is 128 tokens. For code-switching, we vary the replacement ratio (\u03b1) from 0.3 to 0.7 with a step of 0.1. For the GAT network, we adopt the identical parameter values as employed in the work of Ahmad et al. (2021). Specifically, we set L to 4 and k to 4. 4.2. Tasks Our framework is evaluated on the following tasks, using English as the source language. Some statistics are summarized in Table 2, along with the detailed descriptions provided below. Text Classification. Text classification is a task that assigns predefined categories to open-ended text. In our experiment, we utilize two publicly available datasets: XNLI and PAWS-X. In XNLI (Conneau et al., 2018), models need to predict whether a given pair of sentences is entailed, contradicted, or neutral; in PAWS-X (Yang et al., 2019), models are required to determine whether two given sentences or phrases convey the same meaning. When implementing the two tasks, to establish connections between the dependency trees of the two sentences, we introduce two edges from the [CLS] token to the root nodes. Subsequently, we apply the code-switching technique to randomly replace certain words in the sentence pairs. Named Entity Recognition. Named entity recognition (NER) is a task that involves the automatic identification and categorization of named entities. In our experiment, we employ the Wikiann (Pan et al., 2017) dataset. Wikiann consists of Wikipedia articles annotated with person, location, organization, and other tags in the IOB2 format. Our method is evaluated across 15 languages. To ensure that the models can obtain complete entity information, we exclusively substitute words that do not constitute named entities during the code-switching process. Task-oriented Semantic Parsing. In this task, the models are required to determine the intent of the utterance and then fill the relevant slots. The dataset for the experiment is mTOP (Li et al., 2021), which is an almost parallel corpus, containing 100k examples in total across 6 languages. Our experiments cover 5 languages. 4.3. Baselines We choose the following methods as baselines for comparison: \u2022 mBERT. We exclusively utilize the multilingual BERT model to perform zero-shot cross-lingual transfer for these tasks. \u2022 mBERT+Syn.
A graph attention network (GAT) is integrated with multilingual BERT, and these two components are jointly trained for all tasks. \u2022 mBERT+Code-switch. The multilingual BERT model is fine-tuned with the codeswitched text across various languages. 5. Results and analysis 5.1. Cross-Lingual Transfer Results The main experimental results are displayed in Table 3. Our method consistently demonstrates superior performance across all tasks compared to other baselines. This indicates our method\u2019s effectiveness for cross-lingual transfer, achieved through the incorporation of lexical and syntactic knowledge. Especially for the tasks Wikiann and mTOP, our method exhibits a significant improvement, with an increase of 2.2 and 3.7 points, respectively, when compared to the baseline with the best performance. In addition, since code-switching technique blends words from various language, we calculate the results across the languages excluding English, as shown in the column \"AVG/en\" in Table 3. We find that the performance gap between our method and each baseline in most tasks becomes wider. This also indicates that our method can more effectively align non-English languages within the same vector space implicitly. For each task, we discover most of languages can gain improvement by using our method, as compared to the top-performing baseline. Specifically, 84.6% (11/13), 100.0% (7/7), 80.0% (12/15) and 100.0% (5/5) languages demonstrate improvement in XNLI, PAWS-X, Wikiann and mTOP respectively. Furthermore, our method also provides improvement for non-alphabetic languages in many tasks, such as Chinese, Japan and Korean. This reflects that our method can be effectively generalized into various target languages, even in cases where significant differences exist between the source and target languages. Task Dataset |Train| |Dev| |Test| |Lang| Metric Classification XNLI 392K 2.5K 5K 13 Accuracy Classification PAWS-X 49K 2K 2K 7 Accuracy NER Wikiann 20K 10K 1-10K 15 F1 Semantic Parsing mTOP 15.7K 2.2K 2.8-4.4K 5 Exact Match Table 2: Evaluation datasets. |Train|, |Dev| and |Test| delegate the numbers of examples in the training, validation and testing sets, respectively. |Lang| is the number of target languages we use in each task. 
Tasks Methods en ar bg de el es fr hi ru tr ur vi zh ko nl pt ja AVG / en AVG XNLI (Conneau et al., 2018) mBERT 80.8 64.3 68.0 70.0 65.3 73.5 73.4 58.9 67.8 60.9 57.2 69.3 67.8 66.4 67.5 mBERT+Syn 81.6 65.4 69.3 70.7 66.5 74.1 73.2 60.5 68.8 62.4 58.7 69.9 69.3 67.4 68.5 mBERT+code-switch 80.9 64.2 70.0 71.5 67.1 73.7 73.2 61.6 68.9 58.6 57.8 69.9 70.0 67.2 68.3 our method 81.3 65.8 71.3 71.8 68.3 75.2 74.2 62.8 70.7 61.1 58.8 71.8 70.8 68.6 69.5 PAWS-X (Yang et al., 2019) mBERT 94.0 85.7 87.4 87.0 77.0 69.6 73.0 80.2 81.7 mBERT+Syn 93.7 86.2 89.5 88.7 78.8 75.5 75.9 82.7 83.9 mBERT+code-switch 92.4 85.9 87.9 88.3 80.2 78.0 78.0 83.4 84.3 our method 93.8 87.2 89.6 89.4 81.8 79.0 80.0 84.6 85.6 Wikiann(Pan et al., 2017) mBERT 83.7 36.1 76.0 75.2 68.0 75.8 79.0 65.0 63.9 69.1 38.7 71.0 58.9 81.3 79.0 66.9 68.1 mBERT+Syn 84.1 34.6 76.9 75.4 68.2 76.0 79.1 64.0 64.2 68.7 38.0 73.1 58.0 81.7 79.5 67.0 68.1 mBERT+code-switch 82.4 39.2 77.1 75.2 68.2 71.0 78.0 66.1 64.2 72.4 41.3 69.2 59.9 81.3 78.9 67.3 68.3 our method 84.5 41.4 78.9 77.3 70.2 75.3 80.3 67.6 63.9 73.1 46.8 72.6 62.2 81.8 80.8 69.4 70.5 mTOP(Li et al., 2021) mBERT 81.0 28.1 40.2 38.8 9.8 29.2 39.6 mBERT+Syn 81.3 30.0 43.0 41.2 11.5 31.4 41.4 mBERT+code-switch 82.3 40.3 47.5 48.2 16.0 38.0 46.8 our method 83.5 44.5 54.2 51.7 18.8 47.3 50.5 Table 3: The experimental results on four tasks. The best results in each task are highlighted in bold. The baselines include \"mBERT\", \"mBERT+Syn\" and \"mBERT+codeswitch\". They delegate \"only using mBERT\", \"using mBERT with a syntactic module (GAT)\" and \"mBERT with the code-switching technique\" for cross-lingual transfer. The results of \"mBERT\" is from Hu et al. (2020). For \"mBERT+Syn\" and \"mBERT+code-switch\", we adopt open-source code of the work of Ahmad et al. (2021) and Qin et al. (2021) to reproduce these experiments, and report the results. The evaluation metrics are F1 value for the NER task, Accuracy for classification tasks, and Exact Match for semantic parsing. The \"AVG\" column means the average performance across all language for each method, while the \"AVG /en\" indicates the average performance on the languages excluding English. 5.2. Generalized Cross-Lingual Transfer Results In practical scenarios, cross-lingual transfer could involve any language pair. For example, in a crosslingual question-answering (QA) task, the context passage may be in German, while the multilingual model is required to answer the question in French. Considering on this, we conduct zero-shot cross-lingual transfer experiments within a generalized setting. Since PAWS-X and mTOP are completely parallel, we evaluate the performance of our method and \"mBERT\" baseline on generalized cross-lingual transfer tasks using the two dataset. The experimental results are illustrated in Figure 3. For both classification and semantic parsing benchmarks, we have observed improvements among most language pairs. This reflects that our method is very effective for generalized crosslingual transfer. Furthermore, when English is included in the language pair, there is a substantial enhancement in performance. Specifically, when English serves as the source language, the average performance of target languages is increased over 10% and 3% in mTOP and PAWS-X dataset, respectively. This reflects the effectiveness of the code-switching in aligning other languages with English. 
For the PAWS-X dataset, we find that some non-Indo-European languages such as Japanese, Korean, and Chinese can achieve improvements, even when the source languages belong to the Indo-European language family, including English, Spanish, French, and German. This reflects that syntactic knowledge can effectively narrow the gap between language structures for this task, especially for language pairs without close linguistic relationships. 6. Analysis and Discussion 6.1. Impact on Languages We investigate whether our method can improve the performance of specific languages or language groups. As shown in Figure 4, we display the performance improvement of our method relative to the "mBERT" baseline. We find that almost all languages can obtain benefits from our method. Particularly, when the target language, such as German, Spanish or French, belongs to the Indo-European language family, the improvement is very significant. Furthermore, the performance in the mTOP task is improved significantly by our method among all languages. This may be because our method considers both syntax and lexicon simultaneously, which is beneficial for the semantic parsing task. Figure 3: Results for generalized zero-shot cross-lingual transfer on mTOP and PAWS-X. We report the performance differences between our method and the "mBERT" baseline across all languages. Figure 4: Performance improvements for XNLI, PAWS-X, Wikiann, and mTOP across languages. The languages on the x-axis are grouped by language family: IE.Germanic (en, de), IE.Romance (es, fr), IE.Slavic (bg, ru), Afro-asiatic (ar), Austro-asiatic (vi), Altaic (tr, ur), IE.Greek (el), IE.Indic (hi), Sino-tibetan (zh), Korean (ko). 6.2. Representation Similarities across Languages To evaluate the effectiveness of our method in aligning different languages, we employ the representation similarity between languages as the metric. Specifically, we utilize the testing set of XNLI (Conneau et al., 2018) as the dataset, which consists of parallel sentences across multiple languages. Then we take the vector of the [CLS] token from the final layer of our model, as well as the vectors from two baselines ("mBERT+Syn" and "mBERT+code-switch"), for each sentence. Following Libovick\u00fd et al. (2019), the centroid vector representing each language is calculated by averaging these sentence representations. Finally, we adopt cosine similarity as the indicator to assess the degree of alignment between English and each target language. Figure 5 illustrates the similarities between languages obtained by our method and the other two baselines. It can be easily found that our method outperforms the other two baselines in aligning language representations. This suggests that infusing the two types of knowledge is indeed effective in reducing the disparities in language typologies, which improves cross-lingual transfer performance. In addition, we observe that "mBERT+code-switch" performs better than "mBERT+Syn", which reflects that lexical knowledge is more useful than syntactic knowledge for this task.
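A minimal sketch of the similarity computation just described in Section 6.2 (per-language centroid of [CLS] vectors, then cosine similarity to English) is below; the array shapes and toy inputs are placeholders for real mBERT outputs on the parallel XNLI test sentences, not the authors' evaluation code.

```python
import numpy as np

def language_centroids(cls_vectors):
    """cls_vectors: {lang: (n_sentences, d) array of [CLS] vectors on parallel test data}."""
    return {lang: vecs.mean(axis=0) for lang, vecs in cls_vectors.items()}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_to_english(cls_vectors):
    cents = language_centroids(cls_vectors)
    en = cents["en"]
    return {lang: cosine(en, c) for lang, c in cents.items() if lang != "en"}

# Toy vectors standing in for real mBERT [CLS] embeddings (hidden size 768).
rng = np.random.default_rng(1)
toy = {lang: rng.normal(size=(100, 768)) for lang in ["en", "fr", "de", "zh"]}
print(similarity_to_english(toy))
```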
6.3. Impact of Code-switching The replacement ratio \u03b1 for code-switching is an important hyper-parameter in our method. Hence, we explore its impact on mTOP and PAWS-X by varying \u03b1 from 0 to 0.9 in increments of 0.1, as shown in Figure 6. When \u03b1 is set to 0, it represents the results of the baseline "mBERT+Syn". As \u03b1 increases, more source words are substituted with their equivalent words from other languages. The resulting performance improvement confirms the effectiveness of the code-switching technique. Notably, when about half of the words are replaced (0.5 for PAWS-X and 0.4 for mTOP), the performance reaches its peak. After that, both tasks experience a decline in performance. This decline might be because the expression of meaning and the sentence structure are severely affected when too many words are replaced. Therefore, it is an optimal choice to set \u03b1 between 0.4 and 0.5 for code-switching. Figure 5: The similarities between languages. We first calculate the centroid representation for each language following Libovick\u00fd et al. (2019). Then we adopt cosine similarity to evaluate the similarity between English and each target language. Figure 6: Performance on mTOP and PAWS-X with different replacement ratios \u03b1 in code-switching. Furthermore, we investigate whether the choice of the replacement language in code-switching impacts our model\u2019s performance. We select mTOP and PAWS-X as the testing tasks. In code-switching, we devise three different measures for language replacement: "Exclusively replacing with the target language", "Replacing with languages from the same language family as the target language", and "Replacing with languages selected randomly". The experimental results are illustrated in Figure 7. We can easily observe that "Exclusively replacing with the target language" performs best, while "Replacing with randomly selected languages" yields the poorest results. Hence, this also underscores the importance of selecting languages closely related to each target language for substitution when employing the code-switching technique. Figure 7: Performance on mTOP and PAWS-X with different replacement languages in code-switching. The source language for both tasks is English, and the results are averaged across all target languages excluding English. \u201cType1\u201d represents replacement with the target language; \u201cType2\u201d represents replacement with languages from the same language family as the target language; \u201cType3\u201d represents replacement with randomly selected languages. 6.4. Performance with XLM-R To validate the universality of our method, we substitute multilingual BERT with XLM-R in our framework. XLM-R is a more robust multilingual pre-trained model known for its exceptional cross-lingual transfer capabilities. Subsequently, we test its performance on the PAWS-X dataset, and the experimental results are displayed in Table 4. In Table 4, we also observe that our framework outperforms the other three baselines. This indicates that integrating lexical and syntactic knowledge is beneficial for enhancing performance, irrespective of the base model employed. Notably, our framework achieves only a slight performance improvement when utilizing XLM-R as the base model compared to employing multilingual BERT. This may be because the base model, XLM-R, was pre-trained on a larger corpus and therefore preserves richer language information; consequently, XLM-R itself already possesses superior cross-lingual transfer capabilities.
The additional benefit of incorporating external linguistic knowledge therefore appears relatively minor in comparison. Task Methods en ar bg de el es fr hi ru tr ur vi ko nl pt AVG PAWS-X XLM-R 84.2 48.5 80.5 77.0 77.8 76.1 79.8 67.5 70.4 76.0 54.2 78.5 59.1 83.3 79.3 72.8 XLM-R+Syn 83.5 46.4 80.1 76.0 78.9 77.6 79.1 72.1 70.6 76.1 55.3 77.6 59.0 83.1 79.2 73.0 XLM-R+code-switch 83.4 46.8 81.7 78.2 79.2 71.1 78.6 72.9 70.6 77.2 57.9 76.0 58.2 83.6 80.0 73.0 our method 83.1 44.9 82.7 76.8 78.4 76.9 79.6 71.1 70.1 76.6 60.4 78.2 58.1 83.5 79.7 73.3 Table 4: Results for PAWS-X with XLM-R. 6.5. Limitations and Challenges In our study, we adopt a bilingual dictionary, such as MUSE (Lample et al., 2018), to substitute words with their translations in other languages. However, we randomly choose a target-language word when there exist multiple translations for a source-language word. This approach, although convenient, neglects the context of the source-language word, potentially leading to inaccurate translations. This also motivates us to explore more precise word alignment methods in the future. Furthermore, the tasks we have evaluated are quite limited, with some of them involving only a few languages. In the future, we will extend our method to more cross-lingual tasks. Meanwhile, we also plan to develop datasets for these tasks to support more languages. 7. Conclusion In this paper, we present a framework called "lexicon-syntax enhanced multilingual BERT" ("LS-mBERT"), which infuses lexical and syntactic knowledge to enhance cross-lingual transfer performance. Our method employs code-switching technology to generate input text mixed in various languages, enabling the entire model to capture lexical alignment information during training. In addition, a syntactic module consisting of a graph attention network (GAT) is introduced to guide mBERT in encoding language structures. The experimental results demonstrate that our proposed method outperforms all the baselines across different tasks, which confirms the effectiveness of integrating both types of knowledge into mBERT for improving cross-lingual transfer. In the future, we plan to incorporate different linguistic knowledge into large language models (LLMs) to further enhance cross-lingual transfer performance. 8. Acknowledgements The authors would like to thank the anonymous reviewers for their feedback and suggestions. Additionally, this work was supported by the Major Program of the National Social Science Fund of China (18ZDA238), the National Social Science Fund of China (No.21CYY032), Beihang University Sponsored Projects for Core Young Researchers in the Disciplines of Social Sciences and Humanities (KG16183801) and the Tianjin Postgraduate Scientific Research Innovation Program (No.2022BKY024). 9. Bibliographical", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.03437v1", |
| "title": "Knowledge Graph Representation for Political Information Sources", |
| "abstract": "With the rise of computational social science, many scholars utilize data\nanalysis and natural language processing tools to analyze social media, news\narticles, and other accessible data sources for examining political and social\ndiscourse. Particularly, the study of the emergence of echo-chambers due to the\ndissemination of specific information has become a topic of interest in mixed\nmethods research areas. In this paper, we analyze data collected from two news\nportals, Breitbart News (BN) and New York Times (NYT) to prove the hypothesis\nthat the formation of echo-chambers can be partially explained on the level of\nan individual information consumption rather than a collective topology of\nindividuals' social networks. Our research findings are presented through\nknowledge graphs, utilizing a dataset spanning 11.5 years gathered from BN and\nNYT media portals. We demonstrate that the application of knowledge\nrepresentation techniques to the aforementioned news streams highlights,\ncontrary to common assumptions, shows relative \"internal\" neutrality of both\nsources and polarizing attitude towards a small fraction of entities.\nAdditionally, we argue that such characteristics in information sources lead to\nfundamental disparities in audience worldviews, potentially acting as a\ncatalyst for the formation of echo-chambers.", |
| "authors": "Tinatin Osmonova, Alexey Tikhonov, Ivan P. Yamshchikov", |
| "published": "2024-04-04", |
| "updated": "2024-04-04", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.SI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "A knowledge graph, also known as a semantic net- work, was initially introduced by C. Hoede and F.N. Stokman as a tool for representing the content of medical and sociological texts (Nurdiati and Hoede, 2008). Constructing increasingly larger graphs with the intent of accumulating knowledge was initially deemed to provide a resultant structure capable of operating as an expert system proficient in investi- gating causes and computing the consequences of certain decisions. The concept of knowledge graph co-evolved with the rise of computational social science (Conte et al., 2012) and digital data analysis methods (Rogers, 2013). Access to open sources on the Internet has facilitated the measurement of the dy- namics of political debates (Neuman et al., 2014). Platforms like Twitter and other microblogging ser- vices are widely utilized for studying and modeling social and political discourse (Graham et al., 2016), (Jungherr, 2014) , (Wang et al., 2018). Contempo- rary researchers even develop a conceptual frame- work for predicting the morality underlying political tweets(Johnson and Goldwasser, 2018). Moreover, knowledge graphs of fact-checked claims, such as ClaimsKG, have been designed. Such tools facili- tate structured queries about truth values, authors, dates, journalistic reviews, and various types of metadata (Tchechmedjiev et al., 2019). A significant group of studies, advocate usage of graphs for social, political, and business industry data, stating that \u201cgraphs greatly increases the clar- ity of presentation and makes it easier for a reader to understand the data being used\u201d(Kastellec and Leoni, 2007) . Additionally, (Abu-Salih and Be- heshti, 2021) explains that knowledge graphs serve as indispensable frameworks that underpin intelli- gent systems. This is achieved by extracting sub- tle semantic nuances from textual data sourced from a range of vocabularies and semantic repos- itories. In the past decade, there has been a no- table increase in the examination of political dis- course within social content in such a way. The authors discuss in detail the connection between political discussions and the language used in them (Chilton, 2004), (Parker, 2014). Furthermore, the literature examines opinion polarization (Banisch and Olbrich, 2019), attempts to characterize an intuition of the dynamics of the political debate (Yamshchikov and Rezagholi, 2019), and provides techniques for estimating them (Merz et al., 2016), (Subramanian et al., 2017), (Glava\u0161 et al., 2017), (Subramanian et al., 2018) or (Rasov et al., 2020). The extensively employed data sources in studies centered on automated text classification for politi- cal discourse analysis involve Manifesto Database (Lehmann et al., 2017) and the proceedings of the European Parliament (Koehn, 2005). The challenges arising in contemporary studies on observational and discourse analysis are the quality of data (Tweedie et al., 1994) and the credi- bility of data sources. It is crucial to apply statistical measures and tests to quantify the impact of poor data quality and bias on the results (Abu-Salih and Beheshti, 2021). However, quantifying such effects proves comprehensive in the realm of social sci- ences due to the numerous indigent properties of arXiv:2404.03437v1 [cs.CL] 4 Apr 2024 social datasets (Shah et al., 2015). 
One significant challenge is associated with the formation of so- called echo-chambers in social structures, which naturally obstruct the propagation of information, reinforcing disparities across various social strata (Goldie et al., 2014), (Colleoni et al., 2014), (Guo et al., 2015) or (Harris and Harrigan, 2015). Ad- dressing the credibility of sources, the phenomenon of fake news draws constant attention from media outlets and researchers. According to (Anderson and Auxier, 2020), 55% of online social network users believe they are accurately informed about re- cent political updates by the media. Consequently, misleading information and false news have the potential to shape certain beliefs and human be- haviors. As a solution, several studies (Allcott and Gentzkow, 2017), (Shu et al., 2017) or (Lazer et al., 2018) analyze and propose methods to enhance the quality of information. Additionally, these stud- ies imply the existence of a certain ground truth that could be universally accepted. Taking existing knowledge and challenges into account, in this work, we study the issue of news representation from a data analysis perspective. We construct two datasets comprising news arti- cles from \"alt-right\" and \"liberal\" news platforms, denoted as Breitbart News (BN) and the New York Times (NYT), spanning 11.5 consecutive years (from 2008 to Fall 2019). We demonstrate that infor- mation disparities between these news sources are fundamental regardless of the social structures that encapsulate the readers of the aforementioned out- lets. Upon analyzing the findings, we assert that one has to take into consideration these dispari- ties, since they signify fundamental differences in the foundational data that shapes the perspectives, beliefs, and, ultimately, the behavior of readers. Simply put, even if we had no social media infor- mation disparities by various news sources could contribute to echo-chamber formation.", |
| "main_content": "We have parsed two news sites Breitbart News1 that could be generally associated with the \"altWe have parsed two news sites Breitbart News that could be generally associated with the \"altright\" political views and the New York Times2 associated with \"liberal\" political views. The choice of these two media platforms was arbitrary to a certain extent. We parsed all news presented on both platforms in the period from 2008 till the fall of 2019. Using the texts of the news as input data we built an information extraction pipeline aimed to reconstruct a form of knowledge graph out of the news texts. To do that we have used state of the art open information extraction (Stanovsky et al., 2018) and named 1https://www.breitbart.com/ 2https://www.nytimes.com/ entity recognition (Peters et al., 2017) tools of AllenNLP3. The outputs of both models are noisy, so in order to stabilize the resulting signal we came up with the heuristics for substring-matching. We used only ARG0 and ARG1 items of open information extractor and all entities of named entity recognition to extract the most useful objects of the articles. For every entity recognized by both methods, we created a vertex in our knowledge graph. We also applied additional manual \u2019filtering\u2019 of the resulting named entities. The procedure to fix the problems of the different spelling and some artifacts of NER and OIE that crowded the list of entities. Finding longer overlapping substrings with high frequencies we matched longer entities with their shorter \"parents\". The recognized vertexes were connected with an edge that had an estimate of sentiment and subjectivity calculated with TextBlob4. This naive approach yielded a hypergraph of named entities out of both data sources. The weights of the vertexes corresponded to the number of mentions of a given entity. The edges of the graph had three attributes: frequency, polarity, and subjectivity. To facilitate further research of news coverage and political discourse we share the gathered data5. 3. Do You Know What I Know? In this chapter, we explore the acquired knowledge graphs. In Section 3.1, we present a bird\u2019s-eye view of the graph, including key properties, and delve into the most contrasting entities and topics with varying coverage in two sources. Section 3.2 revisits the graphs, highlighting aspects crucial for differences in political discourse. Figure 1: Breitbart News. Distribution of sentiment. 3.1. Bird\u2019s-eye View Figures 5 \u2013 6 show a visualization of two obtained graphs. One can see the divergence of topics: 3https://allennlp.org 4https://textblob.readthedocs.io/en 5https://shorturl.at/ntDOT Figure 2: New York Times. Distribution of sentiment. Figure 3: Breitbart News. Distribution of subjectivity. Figure 4: New York Times. Distribution of subjectivity. Breitbart is more focused around certain personalities, while the New York Times extensively covers foreign affairs. Table 1 shows the first interesting and counter-intuitive result that one can draw when studying obtained graph representations: both media sources are \"neutral\" on average. Figures 1 \u2013 2 show the distribution of polarity across all edges. The average neutral tone is not a consequence of negatively and positively charged news that balance each other. 
Distributions in Figures 3 \u2013 4 not only show that the average sentiment across all edges is very close to zero for both graphs, but also demonstrate that a vast majority of the analyzed relations are presented in a non-polarizing way (at least to the extent to which modern NLP methods can distinguish polarity). Data Radius Diameter Modularity: BN 6 11 0.43; NYT 7 13 0.53. Data Average Path Length Polarity Subjectivity: BN 3.76 0.00 0.12; NYT 3.52 -0.00 0.08. Table 1: Various parameters of the obtained graph representations. Both sources are neutral on average, with Breitbart being just above and NYT just below zero average polarity. Breitbart tends to be more subjective, yet average subjectivity for both sources is at around 10%, with NYT a bit more objective. One can also see the corresponding distributions of subjectivity, which are similar for both sources. For the NYT, the Spearman correlation between polarity and subjectivity is 31%; for Breitbart, it is 23%. Both media sites try to present themselves to the reader as neutral on average and moderately subjective. This stands to reason: an average reader probably neither wants to feel that she wears rose-tinted glasses nor wants to constantly read that the doom is nigh. The majority of the news is neutral; extremely positive and extremely negative news items are rare in both sources. At the same time, both sources tend to point to bias in the coverage \"on the other side\". Another interesting line of thought that could be developed when examining Table 1 is the connection between right-wing political actors and the propagation of conspiracy theories; see, for example, (Hellinger, 2018). Indeed, the Breitbart graph has smaller modularity and a comparable path length. This could imply a lower encapsulation of topics and a higher tendency to connect remote entities. Even a first bird\u2019s-eye view gives several fundamental insights: \u2022 when assessed formally, both right and left media demonstrate qualitatively comparable behavior; they try to cover the news in a relatively neutral tone with a pinch of subjectivity; \u2022 the coverage of various topics differs significantly; the entities that Breitbart constantly covers tend to be people and actors of domestic US politics, whereas the NYT pays more attention to institutions and international affairs; \u2022 the overall differences between the formally obtained knowledge structures that could proxy right and left world-views are minute, despite our intuition telling us otherwise. Figure 5: Breitbart News. Overall visualisation of the two graphs extracted out of the media sources. The classes found with modularity analysis (Blondel et al., 2008) are highlighted with different colours. Breitbart has a smaller number of classes and is centered around US political discourse. 3.2. Politics of Contrasts Figure 7 shows a joint graph of the most polarized edges. These are the edges between entities for which the polarity in the NYT and Breitbart has a different sign. Similarly, in Figure 8 one can see the most contrasting vertexes. These are the entities with the highest average polarity of the adjacent edges. Effectively, these are representations of the polarizing topics that are covered with different polarity in both news sources. An interesting difference between the graph of contrasting edges and the graph of contrasting nodes is that the former is mostly populated with domestic political actors, whereas the latter to a large extent consists of entities connected with foreign affairs. This is interesting.
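For readers who want to reproduce statistics like those in Table 1 from such a graph, a minimal networkx/scipy sketch follows. It assumes the edge attributes from the construction sketch earlier, restricts the distance-based metrics to the largest connected component, and uses networkx's Louvain implementation (networkx >= 2.8) for the modularity classes; all of these are assumptions rather than the authors' exact procedure, which only cites Blondel et al. (2008) for the modularity analysis.

```python
import networkx as nx
from networkx.algorithms import community
from scipy.stats import spearmanr

def summarize_graph(g, seed=0):
    """Compute Table 1 style statistics for one media graph."""
    core = g.subgraph(max(nx.connected_components(g), key=len))   # largest component
    comms = community.louvain_communities(core, seed=seed)
    polarity = [d["polarity"] for _, _, d in g.edges(data=True)]
    subjectivity = [d["subjectivity"] for _, _, d in g.edges(data=True)]
    return {
        "radius": nx.radius(core),
        "diameter": nx.diameter(core),
        "modularity": community.modularity(core, comms),
        "avg_path_length": nx.average_shortest_path_length(core),
        "avg_polarity": sum(polarity) / len(polarity),
        "avg_subjectivity": sum(subjectivity) / len(subjectivity),
        "spearman_pol_subj": spearmanr(polarity, subjectivity).correlation,
    }

# Tiny toy graph with the edge attributes assumed above.
g = nx.Graph()
g.add_edge("Nashua", "Humane Society", frequency=2, polarity=0.1, subjectivity=0.2)
g.add_edge("Nashua", "New Hampshire", frequency=1, polarity=0.0, subjectivity=0.1)
g.add_edge("Humane Society", "New Hampshire", frequency=1, polarity=-0.1, subjectivity=0.3)
print(summarize_graph(g))
```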
Certain relationships between entities tend to be more polarizing for domestic issues and local politicians, yet when averaged over several such relationships across time the foreign affairs and institutions come forward. This is the same pattern that we saw earlier. One could speculate that contrasting edges highlight certain local events centered around specific politicians. Such events could be highly polarizing yet temporal. At the same time institutions and global affairs might not be as polarizing as a local scandal, yet the position of both sides on them is persistent, so when averaging across adjacent edges one sees Figure 8. This highlights the fundamental difference between the sources. Though on macro-level both outlets prefer to stick to neutral coverage and refrain from subjectivity when it comes to certain entities and topics they provide different evaluations and tend to be more subjective in these cases. The combination of these two factors is extremely unfortunate since it facilitates social conflict. Indeed, every reader is perfectly convinced that her news source is relevant, objective, and non-biased. This also happens to be true in the vast majority of cases. Yet on a handful of key issues, the media takes a more polarizing and subjective position. Moreover, the local polarizing issues tend to be associated with personalities, while longer, fundamental differences are associated with institutions. This could be attributed to the idea of core political beliefs that could be less polarizing yet may be harder to change in the long run. Figure 6: New York Times.Overall visualisation of two graphs extracted out of the media sources. The classes found with modularity analysis (Blondel et al., 2008) are highlighted with different colours. NYT graph has almost twice as many modularity classes and pays way more attention to foreign affairs. This could be partially attributed to the bigger size of the resulting graph, since NYT had more articles published in the studied time period. 4. Discussion One of the key contributions of this paper is an attempt to demonstrate that an echo-chamber is not exactly a phenomenon based solely on the topology of human social networks. Using modern language processing methods and straight-forward knowledge representation we show that two different media sources paint two different pictures of the political reality. Yet these differences are less obvious than we tend to think and are more subtle. Surprisingly low average polarity and subjectivity for both knowledge structures are extremely intriguing. Assuming there is no ill will on the side of the publisher one can try to explain why the overwhelming amount of news articles try to be non-polarizing and non-subjective and with these attempts reinforce the echo-chambers around them without even trying. Could echo-chambers be a consequence of human psychological trust mechanisms on top of certain social structure formation? In (Levine, 2014) and (Clare and Levine, 2019) the authors discuss the truth-default theory. They demonstrate that when people cognitively process the content of others\u2019 communication, they typically do so in a manner characterized by unquestioned, passive acceptance. We could speculate that such behavior naturally transfers to the news sources. High neutrality and low subjectivity reinforce this truthdefault. 
Since the preferred news outlet is often objective and neutral the reader tends to ignore or accept rare polarizing and subjective articles and dismiss the counter-argument of the other side, since in an overwhelming majority of the cases the criticism was not applicable. This might be wild speculation that demands further experimental verification. However, the very idea that echo-chamber formation could be attributed to the personal rather than collective behavior is new to our knowledge. 5. Conclusion In this paper, we present the graphs of entities that correspond to two major \"alt-right\" and \"liberal\" news media and their coverage of the mentioned entities and relations between them. The graphs are obtained without any expert knowledge solely with NLP instruments and methods of knowledge representation. Analyzing obtained graphs we show that despite common intuition they exhibit a lot of structural similarities. We also highlight Figure 7: Sub-graph of contrasting edges. These are the edges for which the sign of polarity for BN and NYT is different. fundamental differences that could be attributed to the formation of echo-chambers and certain biases on the world perception. We suggest that the formation of echo-chambers has more to do with the structure of information consumption and certain core beliefs of the individual rather than social structure that encompasses the aforementioned person. Limitations The study covers the period from 2008 to the Fall of 2019, excluding updates beyond 2019. It refrains from a detailed examination of the political aspects and perspectives of Breitbart News and New York Times readers, and it does not develop additional discussions on the global order. Considering recent global crises like wars, economic downturns in specific nations, and the worldwide impact of the COVID-19 pandemic, we anticipate that applying our methodology to recent-year data may produce slightly different findings. Nonetheless, in an effort to encourage transparent research in knowledge representation for social sciences, we provide access to our collected datasets. Ethics Statement Our work prioritizes transparency and relies on data collected from open sources. We refrain from making political judgments in our discussion notes to prevent discrimination and minimize potential societal harm. Figure 8: Sub-graph of contrasting vertexes. These are the vertexes for which the average of polarity of the adjacent edges is the highest. Blue nodes are shifted towards NYT, red \u2014 towards BN. 6. Bibliographical" |
| }, |
| { |
| "url": "http://arxiv.org/abs/2403.08079v1", |
| "title": "BayesFLo: Bayesian fault localization of complex software systems", |
| "abstract": "Software testing is essential for the reliable development of complex\nsoftware systems. A key step in software testing is fault localization, which\nuses test data to pinpoint failure-inducing combinations for further diagnosis.\nExisting fault localization methods, however, are largely deterministic, and\nthus do not provide a principled approach for assessing probabilistic risk of\npotential root causes, or for integrating domain and/or structural knowledge\nfrom test engineers. To address this, we propose a novel Bayesian fault\nlocalization framework called BayesFLo, which leverages a flexible Bayesian\nmodel on potential root cause combinations. A key feature of BayesFLo is its\nintegration of the principles of combination hierarchy and heredity, which\ncapture the structured nature of failure-inducing combinations. A critical\nchallenge, however, is the sheer number of potential root cause scenarios to\nconsider, which renders the computation of posterior root cause probabilities\ninfeasible even for small software systems. We thus develop new algorithms for\nefficient computation of such probabilities, leveraging recent tools from\ninteger programming and graph representations. We then demonstrate the\neffectiveness of BayesFLo over state-of-the-art fault localization methods, in\na suite of numerical experiments and in two motivating case studies on the JMP\nXGBoost interface.", |
| "authors": "Yi Ji, Simon Mak, Ryan Lekivetz, Joseph Morgan", |
| "published": "2024-03-12", |
| "updated": "2024-03-12", |
| "primary_cat": "cs.SE", |
| "cats": [ |
| "cs.SE", |
| "stat.ME" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Software testing \u2013 the process of executing a program with the intent of finding errors (My- ers et al., 2004) \u2013 is an essential step in the development of robust software applications. Such testing aims to reveal (and subsequently fix) as many bugs as possible prior to the re- lease of a software application, thus greatly reducing the likelihood of encountering failures for the end-user. This is crucial in an era where nearly all facets of daily life involve human interaction with software applications. There are, however, two critical challenges. First, each software test can be time-consuming to perform. This involves not only running the software application itself, which can be intensive in an era of complex machine learning models with massive data, but also determining whether the software deviates from its expected behavior. The latter, known as the \u201coracle problem\u201d (Barr et al., 2014), typically requires an independent assessment of numerical accuracy of software outputs (Lekivetz and Morgan, 2021), which can be very costly. Second, the number of test cases required for thorough software examination can easily be overwhelming. As \u201cbugs [tend to] lurk in corners and congregate at boundaries\u201d (Beizer, 2003), software testing typically focuses on boundary values and the combinations of inputs, which can grow rapidly. For practical ap- plications, it is thus wholly infeasible to exhaustively test all input combinations (Kumar, 2019), which can easily require billions of test cases! These two fundamental challenges open up a world of exciting new statistical directions for this application of rising importance. Such directions can roughly be categorized into two topics. The first is a careful design of test cases to perform, with the joint goals of identifying failure settings and diagnosing its underlying root causes. Statistically, this can be viewed as an experimental design problem for software fault diagnosis. There has been notable work on this front. An early approach is the one-factor-at-a-time design (Frey et al., 2003; Wu and Hamada, 2009), which varies inputs sequentially one at a time; this is suitable for unit testing (Runeson, 2006), which focuses on investigating individual inputs. Another approach is pairwise testing (Bach and Schroeder, 2004), which examines all pairwise combinations of inputs; test case generation for this setting has been explored in Tai and Lei (2002). A more general approach is combinatorial testing (Nie and Leung, 2011b), which investigates combinations involving more than two inputs. For combinatorial testing, the design of choice is a covering array (CA; Colbourn, 2004); such designs aim to 2 represent (or \u201ccover\u201d) each combination of inputs (up to a specified order) at least once in the test runs (Dalal and Mallows, 1998). CAs are thus ideal for detecting failures from limited test runs; we will discuss CAs in greater detail later in Section 2. With the initial test cases conducted and failures detected, the second direction is fault localization (Wong et al., 2023): the use of this test data to pinpoint root causes. This is a highly challenging problem due to the overwhelming number of scenarios to consider for potential root causes. To see why, consider a software application for training boosted tree models, and suppose it has I = 10 input factors each with two levels. Such levels could represent, e.g., low / high learning rate or low / high tree depth. 
There are thus a total of $\sum_{i=1}^{10} \binom{10}{i} 2^i = 59048$ input combinations, e.g., the combination of low learning rate with high tree depth, that might be potential root causes. Since each combination is either a root cause or not, this results in a whopping $2^{59048}$ different scenarios to consider for potential root causes! Fault localization then requires gauging which of these many scenarios is likely given test set outcomes, which is clearly a computationally intensive task (Wong et al., 2023), even for systems with a moderate number of inputs I and few failed test cases. Due to this sheer number of potential root causes, researchers have developed deterministic (i.e., non-probabilistic) fault localization techniques that, based on the outcomes of initial test cases, select a handful of suspicious input combinations for further investigation. This includes the work of Nie and Leung (2011a), which proposed a minimal failure-causing schema and used it to narrow down the search range for potential root causes and to guide subsequent test case generation. Niu et al. (2013) proposed a notion of tuple relationship tree for visualizing the relationships among all input combinations. Such a tree is utilized to eliminate \u201chealthy\u201d combinations and to propose subsequent test cases for further examination of the system. More recently, Ghandehari (2016) and Ghandehari et al. (2018) introduced a two-phase approach for finding faulty statements in a software system. Such approaches have been integrated, either in full or in part, within the Covering Array Analysis module in the statistical software package JMP (henceforth called JMP; Jones and Sall, 2011). Despite the above body of work, a key weakness of such existing methods is that they are not probabilistic in nature. These methods thus provide little insight on the probability of a combination being a root cause given test outcomes. Such probabilities are critical for confident fault localization; they (i) provide a principled statistical approach for assessing root cause risks, and thus a principled measure of confidence that an identified suspicious combination is (or is not) a root cause. One way to achieve this is via a Bayesian modeling approach, where prior root cause probabilities are assigned to each input combination, then updated by conditioning on the observed test set results. Such a Bayesian framework, when carefully elicited and specified, offers three further advantages over the state-of-the-art. It (ii) gives a flexible framework for integrating prior structural knowledge on root cause behavior that is known to be present, which permits quicker fault localization with fewer tests. By integrating such structure, a Bayesian approach may also (iii) provide a more informed ranking of potential root causes, by disentangling the many potential effects returned by existing methods, which are often too numerous to fully explore in practice. Finally, a Bayesian approach can (iv) naturally incorporate prior domain knowledge from test engineers (Lekivetz and Morgan, 2018), which can further accelerate fault localization. We will demonstrate such advantages in later case studies. We thus propose a new Bayesian Fault LOcalization (BayesFLo) framework, which addresses the aforementioned limitations via the four advantages (i)-(iv). The main workhorse of BayesFLo is a new probabilistic model on root cause indicators over all possible input combinations.
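To make the scale of the search space over these input combinations concrete, the following short Python check (ours, not from the paper; the helper name count_combinations is introduced only for illustration) reproduces the count quoted above for a system with I = 10 two-level factors.

```python
from math import comb

def count_combinations(I: int, J: int) -> int:
    """Number of candidate combinations |C| = sum_{K=1}^{I} C(I, K) * J^K:
    every choice of K factors together with one level for each chosen factor."""
    return sum(comb(I, K) * J**K for K in range(1, I + 1))

# Example quoted in the text: I = 10 factors, each with J = 2 levels.
print(count_combinations(I=10, J=2))   # 59048 candidate root-cause combinations
# Each candidate is either a root cause or not, so the number of root-cause
# *scenarios* is 2**59048, which rules out brute-force enumeration.
```

Since the sum telescopes to (J + 1)^I - 1, the number of candidates grows exponentially in the number of inputs, which puts exhaustive reasoning out of reach even for moderately sized systems and motivates the probabilistic model described next.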
This model carefully embeds the desirable principles of combination hierarchy and heredity (Lekivetz and Morgan, 2021), which capture the structured nature by which software root causes arise. We show that the integration of such principles, which are derived from the well-known principles of effect hierarchy and heredity (Wu and Hamada, 2009) for analyzing experimental data, can improve the identification of root causes from limited test cases. A critical challenge for Bayesian computation is the sheer number of considered combinations; without careful manipulation, this renders the computation of posterior root cause probabilities wholly infeasible. We thus develop a new algorithmic framework for efficient computation of such posterior probabilities, leveraging recent tools from integer programming and graph representations. We then demonstrate the practical advantages of BayesFLo over the state-of-the-art, in a suite of numerical experiments and our motivating application on fault localization for JMP\u2019s interface of XGBoost, an open-source machine learning library for scalable tree boosting (Chen and Guestrin, 2016). The paper is organized as follows. Section 2 outlines our motivating application on fault localization of the JMP XGBoost interface, as well as limitations of the current state-of-the-art. Figure 1: The user interface for the XGBoost library in JMP Pro 17.0. Table 1: Considered hyperparameters (factors) for our motivating XGBoost case study (JMP XGBoost Case Study 2), each with two levels: max depth (6, 9), alpha (0, 1), lambda (0, 1), learning rate (0.05, 0.3), booster (gbtree, dart), sample type (uniform, weighted), normalize type (tree, forest). Section 3 presents the BayesFLo model, and describes the combination hierarchy and heredity principles embedded in its prior specification. Section 4 proposes novel algorithms for computing the desired posterior root cause probabilities of potential root cause combinations. Section 5 investigates the effectiveness of BayesFLo over the state-of-the-art in a suite of numerical experiments. Section 6 then explores the application of BayesFLo in two practical case studies on fault localization of the JMP XGBoost interface. Section 7 concludes the paper.", |
| "main_content": "2.1 Background & Challenges Our motivating application is the fault localization for JMP\u2019s interface of the XGBoost library (Chen et al., 2015; Chen and Guestrin, 2016). XGBoost, short for \u201ceXtreme Gradient Boosting\u201d, is a popular machine learning software package, which provides an efficient and scalable implementation of gradient boosting (see, e.g., Friedman, 2002). This library is widely used in the statistical and machine learning communities, and has gained widespread popularity in broad applications, including epidemiology (Ogunleye and Wang, 2019) and e-commerce (Song and Liu, 2020). The popularity of XGBoost can be attributed to several reasons: it offers an algorithmic optimization framework with built-in parallel and distributed computing capabilities, and is available as an open-source library in many coding environments, including Python, R, C++ and Julia. We focus in this work on its implementation within JMP (Jones and Sall, 2011), a subsidiary of SAS Institute focused on statistical analysis for scientists and engineers. 5 A critical challenge for a robust implementation of XGBoost (and indeed, of most machine learning software) is the verification of software performance over a broad range of hyperparameter settings. This verification is particularly important given the increasing dependence of modern machine learning algorithms on a careful tuning of hyperparameter settings. Figure 1 shows the XGBoost User Interface in JMP Pro Version 17.0. We see that there are many hyperparameters that users may freely vary for model training. With this flexibility, however, the verification of this software via a brute-force testing of all hyperparameter combinations is wholly infeasible. One solution is to first (i) construct and run the software system on a designed test set of hyperparameter settings. Upon encountering failures, one then (ii) identifies potential root causes for further investigation, namely, fault localization. Consider first step (i) for our XGBoost case study with I = 7 two-level factors; we return to this case study later in Section 6.2. Table 1 summarizes the considered factors and its levels. A popular test set design is a covering array (CA; Colbourn, 2004; Lekivetz and Morgan, 2021), which is defined as follows. Take a matrix with dimensions M \u00d7 I, and suppose its i-th column takes Ji distinct levels for some integer Ji \u22652. Then this array is a CA of strength s, if within any choice of s columns, each possible level combination involving these columns occurs at least once. For software testing, this CA can be used to design a test set with M runs and I inputs (where the i-th input has Ji levels); the levels in its m-th row then specify the input settings for performing the m-th test run. Table 2 (left) shows a strength-3 CA for the XGBoost case study. This CA achieves the desired coverage condition with a minimal number of M = 12 runs, thus greatly reducing the number of runs on the expensive software system. Note that, with this strength-3 CA, all twoand three-factor input combinations are investigated in at least one test run; if one of these combinations causes an error, we would observe a corresponding failure in a test run. Step (ii) then aims to pinpoint potential root causes from the test runs. This is the key problem explored in this work, and is highly challenging for several reasons. Table 2 (right) shows the test outcomes for our case study, where 0 indicates a passed case and 1 a failed one. 
Here, there are a total of P7 i=1 \u00007 i \u0001 2i = 2186 input combinations, e.g., max depth = 9 and alpha = 0, that may be potential root causes. Since each combination is either a root cause or not, there are thus a whopping 22186 different root cause scenarios to consider for fault diagnosis. A key challenge is the parsing of these many scenarios to 6 Test Cases & Outcomes for JMP XGBoost Case Study 2 max depth alpha lambda learning rate booster sample type normalize type Outcome 9 0 0 0.05 gbtree weighted forest 1 9 0 1 0.3 dart weighted tree 0 6 1 0 0.3 gbtree uniform tree 0 6 1 0 0.05 dart weighted forest 0 9 1 1 0.3 gbtree weighted forest 1 6 0 1 0.3 dart uniform forest 0 6 1 1 0.05 dart weighted tree 0 6 0 0 0.3 gbtree weighted tree 1 6 0 1 0.05 gbtree uniform forest 1 9 1 1 0.05 gbtree uniform tree 0 9 0 0 0.05 dart uniform tree 0 9 1 0 0.3 dart uniform forest 0 Table 2: The M = 12-run test design and corresponding test outcomes for our motivating XGBoost case study. Here, an outcome of 0 indicates a passed test case and 1 indicates a failed one. find which are likely given the observed test set, then how to use this analysis for efficient system diagnosis. Another challenge is the need for assessing confidence that an identified suspicious combination is indeed a root cause. This provides test engineers a principled way for deciding which combinations are likely root causes and need to be investigated in subsequent tests, and which are likely not root causes and can be safely ignored; such uncertainty estimation is thus critical for trustworthy fault diagnosis (Zhou et al., 2023). 2.2 State-of-the-Art and Its Limitations Existing fault localization methods, as described earlier, are largely deterministic in nature. This includes the work of Niu et al. (2013), who used a tuple relationship tree to capture relationships among all factor combinations based on testing results. For a given test case, this tree lists all considered combinations (tuples) along the branches of the tree, which can then be used to classify which class (faulty or healthy) each tuple belongs to. Nie and Leung (2011a) introduced the idea of a minimal failure-causing schema, defined as the smallest-order factor combination such that all test cases containing them trigger a failure. This schema is then applied for guiding fault localization and subsequent testing. Ghandehari (2016) and Ghandehari et al. (2018) developed a two-stage combinatorialtesting-based fault localization approach. The key idea is to identify potential failureinducing factor combinations from test results by eliminating combinations that appear in passed test cases, then rank such combinations based on two proposed \u201ccombination suspiciousness\u201d and \u201cenvironment suspiciousness\u201d metrics. Lekivetz and Morgan (2018) 7 proposed a deterministic ranking procedure that incorporates a domain-knowledge-guided weighting scheme. A key limitation of such methods is that they are not probabilistic in nature, and thus do not provide the desired probabilistic measure of confidence for how likely a particular combination is a root cause. The above methods also unfortunately do not have publicly-available code; we instead make use of the JMP Covering Array Analysis (JMP Statistical Discovery LLC, 2023) as the \u201cstate-of-the-art\u201d approach, which has integrated such methods either in full or in part. Returning to our XGBoost case study, Figure 2 shows the fault localization analysis from JMP\u2019s Covering Array module. 
Similar to Ghandehari et al. (2018), this analysis first removes all combinations that have been cleared in passed cases; we call these \u201ctested-andpassed\u201d combinations later. The remaining combinations are then ranked in terms of its failure count, i.e., the number of failed test cases for which this combination is present. For example, the two-factor combination of alpha = 0 and booster = gbtree has a failure count of 3, since it is present in three failed test runs: runs 1, 8 and 9. This ranking of potential root causes via its failure count is quite intuitive, as combinations that show up in more failed test cases should naturally be treated as more suspicious. Figure 2 shows the ranked combinations with two or three failure counts, where three is the highest count in this analysis. Despite the intuition behind this approach, there are several notable limitations. First, such an approach is deterministic and does not provide a probabilistic quantification of risk that a suspicious combination is indeed a root cause. This probabilistic risk is crucial for guiding the scope of subsequent diagnosis for likely root causes. For example, in Figure 2, if we find that the combinations with three and two failure counts have 95% and nearzero root cause probabilities, respectively, then it is economically reasonable to diagnose only the former combinations and not the latter. However, if the latter two failure count combinations have 75% probability, then it is prudent to diagnose those as well for software robustness. Such decisions cannot be made with current deterministic methods. Second, in ranking combinations by failure count, the JMP analysis (and existing methods) yields many \u201ctied\u201d combinations, e.g., in Figure 2, there are 15 tied combinations with a failure count of two. Such ties make the subsequent diagnosis process particularly difficult, since the investigation of all 15 combinations with two failure counts is typically too costly in practice. A probabilistic ranking of combinations can alleviate this issue by disentangling 8 Figure 2: JMP\u2019s Covering Array Analysis for our motivating XGBoost case study. Listed are the potential root cause combinations ranked by decreasing failure counts. tied combinations to facilitate targeted diagnosis. Finally, existing approaches largely do not provide a framework for integrating prior domain and/or structural knowledge on root cause behavior, e.g., the aforementioned combination hierarchy and heredity principles. The integration of such knowledge can improve fault localization with limited test runs, as we shall see later. 3 The BayesFLo Model To address these limitations, we propose a new Bayesian Fault Localization (BayesFLo) framework, which provides a principled statistical approach for assessing probabilistic risk of potential root causes via conditioning on test set outcomes. We first present the employed modeling framework, then show how it embeds the desirable structure of combination hierarchy and heredity (Lekivetz and Morgan, 2021) within its prior specification, thus enabling effective fault localization with limited (expensive) test runs. 3.1 Prior Specification We first introduce some notation. Consider a software system (or more broadly, a complex engineering system) with I \u22651 input factors, where a factor i can take on Ji \u22652 different levels. 
A K-input combination (with K \u2264I) is denoted as (i, j)K, where i = (i1, \u00b7 \u00b7 \u00b7 , iK), i1 < \u00b7 \u00b7 \u00b7 < iK is an ordered K-vector containing all inputs for this combination, and j = (j1, \u00b7 \u00b7 \u00b7 , jK), jk \u2208{1, \u00b7 \u00b7 \u00b7 , Jik} is a K-vector indicating the levels of each corresponding input. For example, the 2-input combination of the first factor at level 1 9 and the second factor at level 2 can be denoted as (i, j)2, where i = (1, 2) and j = (1, 2). In the case of K = 1, i.e., a single input i at level j, this may be simplified to (i, j). Now let CK denote the set of K-input combinations (i, j)K as described above, and let C = \u222aI K=1CK denote the set of combinations over all orders K = 1, \u00b7 \u00b7 \u00b7 , I. Further let Z(i,j)K \u2208{0, 1} be an indicator variable for whether the combination (i, j)K is truly a root cause. As this is unknown prior to running test cases, we model each Z(i,j)K a priori as an independent Bernoulli random variable: Z(i,j)K indep. \u223c Bern{p(i,j)K}, (i, j)K \u2208CK, K = 1, \u00b7 \u00b7 \u00b7 , I, (1) where p(i,j)K is the prior probability that this combination is a root cause. Here, the view that Z(i,j)K is random makes our approach Bayesian; this contrasts with existing fault localization approaches, which presume Z(i,j)K to be fixed but unknown. For K = 1, this notation simplifies to Z(i,j) and p(i,j). Whenever appropriate, we denote Z = (Z(i,j)K)(i,j)K\u2208C and p = (p(i,j)K)(i,j)K\u2208C for brevity. It is worth noting the sheer number of input combinations in C that needs to be considered as potential root causes. Assuming each factor has an equal number of levels J = J1 = \u00b7 \u00b7 \u00b7 = JI, one can show that CK contains \u0000 I K \u0001 JK distinct combinations of order K, thus the total number of consider input combinations is |C| = PI K=1 \u0000 I K \u0001 JK. Even with a moderate number of inputs, say I = 10, with each having J = 2 levels, this amounts to |C| = 59048 combinations. As we shall see later in Section 4, the size of C forms the key bottleneck for Bayesian inference, as the computation of posterior probabilities can require O(2|C|) work; this can thus be infeasible even for small software systems. Next, we adopt the following product form on the root cause probability for (i, j)K: p(i,j)K = K Y k=1 p(ik,jk), (i, j)K \u2208CK, K = 2, \u00b7 \u00b7 \u00b7 , I. (2) In words, the combination root cause probability p(i,j)K is modeled as the product of the root cause probabilities for its component inputs. This product form offers two advantages: it precludes the need for exhaustive prior elicitation over all combinations in C (discussed later), and nicely embeds the desired principles of combination hierarchy and heredity (Lekivetz and Morgan, 2021). These principles, which capture the structured nature of 10 typical software root causes, can be seen as extensions of the well-known principles of effect hierarchy and heredity (Wu and Hamada, 2009), which are widely used for analysis of factorial experiments. The first principle, combination hierarchy, asserts that combinations involving fewer inputs are more likely to be failure-inducing than those involving more inputs. Empirical evidence suggests this principle holds across software across various domains (Kuhn et al., 2004). 
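The following small sketch (ours; the function name and the illustrative probability 0.05 are not from the paper) evaluates the product-form prior in (2) and shows numerically how it encodes combination hierarchy: combinations involving more inputs automatically receive smaller prior root-cause probabilities.

```python
from math import prod

def prior_root_cause_prob(single_factor_probs, combo):
    """Product-form prior (2): the prior probability of a combination is the
    product of the elicited single-factor probabilities of its components.
    `combo` is a tuple of (factor, level) pairs; `single_factor_probs` maps
    each (factor, level) pair to its elicited probability p_{(i, j)}."""
    return prod(single_factor_probs[fl] for fl in combo)

# Illustrative sporadic-failure prior: every single-factor setting gets 0.05.
p = {(f, lvl): 0.05 for f in "ABCD" for lvl in (1, 2)}
print(prior_root_cause_prob(p, (("A", 2),)))                      # 0.05
print(prior_root_cause_prob(p, (("A", 2), ("C", 2))))             # ~0.0025
print(prior_root_cause_prob(p, (("A", 2), ("C", 2), ("D", 1))))   # ~1.25e-04
```

Raising the single-factor probability of a suspect factor raises the prior of every combination containing it, which is the heredity effect; the text next makes both effects precise.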
To see how our prior in (2) captures combination hierarchy, note that by its product form construction, the combination probability p(i,j)K is always less than the probability of any component input p(ik,jk). Thus, this prior assigns increasingly smaller root cause probabilities on combinations with a higher interaction order K, thus capturing the desired hierarchy structure. The second principle, combination heredity, asserts that a combination is more likely to be failure-inducing when some of its component inputs are more likely to be failure-inducing. From our product-form prior in (2), note that the combination root cause probability p(i,j)K cannot be large unless some of its component root cause probabilities in {p(ik,jk)}K k=1 are also large. This thus captures the desired combination heredity effect. Similar product-form weights have been used for modeling hierarchy and heredity in the context of predictive modeling (Tang et al., 2023) and data reduction (Mak and Joseph, 2017). With the product form (2), we require only the specification of the single-input root cause probabilities {p(i,j)}i,j. Such a specification, however, requires careful elicitation of important domain knowledge from test engineers. For most software systems at the testing stage, it may be reasonable to specify a small (i.e., near-zero) value for p(i,j), as this reflects the prior belief that failure-inducing root causes should occur sporadically. Oftentimes, however, an engineer has additional domain knowledge that permits a more informed prior specification. For example, the engineer may know that certain factors have been recently added to the system, and thus may be more suspicious of such factors. This heightened suspicion can be captured via a larger specification of its p(i,j) compared to other factors. We shall see how such domain knowledge can accelerate fault localization in later experiments. From a Bayesian perspective, the product-form prior (2) provides a way for propagating elicited domain knowledge over the many root cause probabilities in p. For example, suppose an engineer has heightened suspicions on factor i, and accordingly specifies a higher value for the single-factor root cause probabilities {p(i,j)}j. By (2), this induces larger prior root cause probabilities p(i,j)K for any combination (i, j)K involving factor i, thus \u201cpool11 ing\u201d this information over such combinations. This \u201cinformation pooling\u201d, guided by the embedded principles of combination hierarchy and heredity, can facilitate the disentangling of the large number of potential root causes from limited test runs. Recent work on related notions of information pooling have shown promise in various high-dimensional inference problems, e.g., matrix completion (Yuchi et al., 2023) and multi-armed bandits (Mak et al., 2022), and we show in later experiments that this is also important for effective fault localization. 3.2 Posterior Root Cause Probabilities In what follows, we suppress the notation (i, j)K to (i, j) for brevity. Using the above prior specification, we now need to condition on the observed test case data. Suppose we run the software system at M different test cases, where the m-th test case is performed at input levels tm = (tm,1, \u00b7 \u00b7 \u00b7 , tm,I), tm,i \u2208{1, \u00b7 \u00b7 \u00b7 , Ji}. Then the test data can be denoted as D = {(tm, ym)}M m=1, where ym \u2208{0, 1} is a binary variable with 1 indicating a failure and 0 if not. To make things concrete, consider the following example. 
Suppose the system has I = 3 input factors, each with two levels. Further assume there is only one true root cause ((1, 2), (1, 2)), i.e., the combination of the first input at level 1 and the second at level 2, which results in failure. Suppose we then run the first test case at input setting t1 = (1, 2, 1), i.e., with the three factors at levels 1, 2 and 1, respectively. Then, since the root cause is present in t1, this would result in a failure, namely y1 = 1. However, if we run the second test case at a different setting t2 = (2, 2, 2), then this test case would result in no failure, i.e., y2 = 0, as the root cause is not present in t2. Here, we presume that observed outcomes are deterministic, in that the same outcome ym is always observed whenever the software system is run with inputs tm. With this framework, the problem of fault localization then reduces to the evaluation of the posterior root cause probabilities for all considered combinations in C, namely: P(Z(i,j) = 1|D), for all (i, j) \u2208C. (3) Such a computation, however, can easily become computationally intractable. The key bottleneck lies in the complex conditioning structure from data D over the high-dimensional set of combinations C; as we see later, this can then induce an O(2|C|) complexity for a brute-force computation of posterior probabilities. Recall that, with the moderate setting 12 of I = 10 and J = 2, |C| consists of nearly 60000 combinations. Thus, without careful modifications to exploit problem structure, posterior computation can be intractable even for small systems! We adopt next the following categorization of input combinations in C for efficient computation of root cause probabilities: (a) Tested-and-Passed (TP): TP combinations for a passed test case tm are combinations in C that have been tested in tm. Continuing from the earlier example, suppose we run the test case tm = (2, 2, 2) with no failure, i.e., with ym = 0. Then it follows that the combination ((1, 2), (2, 2)), i.e., with the first factor A at level 2 and the second factor B at level 2, is a TP combination. (In what follows, we may denote such a combination as A2B2 for notational simplicity; this should be clear from context.) For this single passed case, the set of TP combinations is CTP,m = {A2, B2, C2, A2B2, A2C2, B2C2, A2B2C2}. (b) Tested-and-Failed (TF): TF combinations for a failed test case tm are combinations in C that have been tested in tm. For example, suppose we run the test case t = (1, 2, 1) and observe a failure, i.e., with ym = 1. Then, from this single failed case, the set of TF combinations becomes CTF,m = {A1, B2, C1, A1B2, A1C1, B2C1, A1B2C1}. (c) Untested (UT): UT combinations are combinations in C that have not been tested in any test case. For example, suppose we run the test case t = (1, 2, 1). Then one UT combination is A2B1, as such a combination was not tested in t. This partition of C naturally extends for multiple test runs in D. Here, the TP combinations CTP from D are the TP combinations over all passed test cases. The TF combinations CTF from D are the TF combinations over all failed test cases, with the combinations from CTP removed. CUT then consists of all remaining combinations in C. In other words: CTP = \u222am:ym=0 CTP,m, CTF = (\u222am:ym=1 CTF,m) \\ CTP, CUT = C \\ (CTP \u222aCTF). (4) Figure 3 (left) visualizes this partition of C from observed test runs for a simple example. With this partition of C, we now present efficient algorithms for computing the posterior root cause probabilities (3). 
For TP combinations, it is clear that such a combination cannot 13 Figure 3: [Left] Visualizing the use of passed and failed test cases for partitioning the set of considered combinations C into CTP, CTF and CUT. [Right] Workflow for the proposed BayesFLo fault localization approach. be a root cause, as it was cleared by a passed test case. In other words: P(Z(i,j) = 1|D) = 0, (i, j) \u2208CTP. (5) This is akin to Ghandehari et al. (2018), which removes TP combinations from consideration for root causes. Furthermore, for UT combinations, we have: P(Z(i,j) = 1|D) = P(Z(i,j) = 1), (i, j) \u2208CUT, (6) since the observed test set D does not provide any information on an untested combination (i, j). As such, its root cause probability given D simply reduces to its prior probability given in (2). The challenge thus lies in computing posterior probabilities on the remaining class of TF combinations. We detail next an approach for computing such probabilities, leveraging tools from integer programming and graph representations. Figure 3 (right) summarizes the proposed algorithmic workflow; we elaborate on this in the following section. 14 4 Computation of Root Cause Probabilities Consider the case of TF combinations, where we wish to compute the posterior root cause probability (3) for a given TF combination (i, j) \u2208CTF. One solution might be the \u201cbruteforce\u201d approach: P(Z(i,j) = 1|D) = P(Z(i,j) = 1, D) P(D) = P z\u2208{0,1}|C|,Z(i,j)=1 P(Z = z)P(D|Z = z) P z\u2208{0,1}|C| P(Z = z)P(D|Z = z) . (7) where P(Z = z) follows from Equation (2), and P(D|Z = z) can be deduced by reasoning. The limitation of such an approach is clear. For each (i, j) \u2208CTF, we need to compute the sum of 2|C|\u22121 terms in the numerator and the sum of 2|C| terms in the denominator. Hence, even for small software systems with |C| small, this brute-force approach can be infeasible. This sheer dimensionality of potential root cause scenarios is the key bottleneck for tractable computation of probabilities for Bayesian fault localization. To address this, we employ an alternate formulation, which allows for considerable speed-ups in computing probabilities. We first outline this reformulation, then show how this facilitates efficient computation via a connection to the related problem of minimal set covering. 4.1 An Alternate Formulation The following proposition provides a useful reformulation of the desired posterior root cause probability for a TF combination (i, j): Proposition 1. Let (i, j) \u2208CTF, and let: M(i,j) = {m = 1, \u00b7 \u00b7 \u00b7 , M : ym = 1, (i, j) \u2208CTF,m} (8) be the index set of failed test cases for which (i, j) is a potential root cause. Define the event: E(i,j) = {for each m \u2208M(i,j), there exists some c \u2208CTF,m \\ CTP such that Zc = 1}. (9) In words, this is the event that all failures in M(i,j) can be explained by the selected root causes {c \u2208CTF : Zc = 1}. The desired posterior root cause probability then reduces to: P(Z(i,j) = 1|D) = P(Z(i,j) = 1|E(i,j)) = p(i,j) P(E(i,j)) (10) 15 The proof of this can be found in Appendix A. There are two key advantages of this alternate form (10) over the brute-force approach (7). First, its numerator can be directly computed via Equation (2) with little work. Second, its denominator P(E(i,j)) can be effectively computed via a novel connection to a related minimal set covering problem for bipartite graphs (Asratian et al., 1998); we show this below. 
To compute P(E(i,j)), we first inspect condition (9) for E(i,j), which requires, for each failed test case in M(i,j), a corresponding TF combination that induces this failure. Figure 4 visualizes this condition in the form of a bipartite graph, where the left nodes are the TF combinations in CTF, and the right nodes are failed test cases in M(i,j). Here, an edge is drawn from a combination c (on left) to a test case index m (on right) if c \u2208CTF,m, i.e., if combination c is contained in the failed test inputs tm. Viewed this way, condition (9) is equivalent to finding a selection of potential root causes in {Zc}c\u2208CTF, such that every failed test case on the right is connected to (or \u201ccovered\u201d by) a selected combination on the left via an edge. Figure 4 visualizes two possible \u201ccovers\u201d. Such a cover of right-hand nodes can be interpreted as a selection of potential root causes (left-hand nodes) that explain the failed test cases. Thus, to compute the probability P(E(i,j)), we need to sum over the prior probabilities for all possible selections of potential root causes that cover the failed test cases in M(i,j). 4.2 Enumerating Minimal Covers With this insight, we now establish a useful link between the desired probability P(E(i,j)) and the related problem of minimal set covering. Formally, we define a cover of the failed test indices M(i,j) as a subset \u02dc C of the potential root causes CTF, such that for every m \u2208M(i,j), there exists an edge connecting some node in \u02dc C to m. A minimal cover of M(i,j) is then a cover \u02dc C of M(i,j) which, if any element is removed from \u02dc C, ceases to be a cover. Figure 4 visualizes this notion of a minimal cover. Using this definition, the following proposition reveals a useful connection: Proposition 2. The desired probability P(E(i,j)) can be simplified as: P(E(i,j)) = P({Zc = 1 for all c \u2208\u02dc C}, for at least one minimal cover \u02dc C of M(i,j)). (11) 16 Figure 4: Visualizing the bipartite graph representation and two minimal covers for failures involving the combination (i, j). Its proof can be found in Appendix B. In words, this shows that P(E(i,j)) amounts to finding the probability that, for at least one minimal cover \u02dc C, all combinations in \u02dc C are indeed root causes. To compute (11), a natural approach is to first enumerate all minimal covers of M(i,j). Fortunately, the set cover problem for bipartite graphs has been well-studied in the literature, and efficient polynomial-time algorithms have been developed for finding minimal covers (Skiena, 1998; Hopcroft and Karp, 1973). Leveraging such developments can thus greatly speed up the brute-force approach for posterior probability computation (see Equation (7)), which is doubly-exponential in complexity and thus infeasible for even small software systems. With recent developments in integer programming algorithms (Wolsey, 2020), a popular strategy for finding minimal set covers is to formulate and solve this problem as an integer linear program (ILP; Schrijver, 1998). We adopt such a strategy below. Let CTF,(i,j) be the set of potential root causes in CTF involving M(i,j); this is typically much smaller than CTF, which reduces the size of the optimization program below. We 17 propose the following feasibility program to find the first minimal cover for M(i,j): arg max 1 s.t. 
zc \u2208{0, 1} for all c \u2208CTF,(i,j), lg,m \u2208{0, 1} for all g \u2208CTF,(i,j), m \u2208M(i,j), [C1] X c\u2208CTF,(i,j) zc \u00b7 I(m \u2208Mc) \u22651 for all m \u2208M(i,j), [C2] X c\u2208CTF,(i,j),c\u0338=g zc \u00b7 I(m \u2208Mc) \u2264|CTF,(i,j)|(1 \u2212lg,m) for all g \u2208CTF,(i,j), m \u2208M(i,j), [C3] X m\u2208M(i,j) lg,m \u22651 for all g \u2208CTF,(i,j). (12) Here, the decision variables in this feasibility program are the binary variables {zc} and {lg,m}, with zc = 1 indicating combination c is included in the cover and zc = 0 otherwise. The first constraint [C1] requires the selected combinations {c : zc = 1} to cover all failed test cases in M(i,j). The next constraints [C2] and [C3] ensure the selected cover is indeed a minimal cover. To see why, note that via constraint [C2], the auxiliary indicator variable lg,m \u2208{0, 1} equals 1 if by removing g from the considered cover, we fail to cover failure case m. For the considered cover to be minimal, we thus need, for each g in the cover, at least one lg,m = 1 for some failure case m; this is ensured by constraint [C3]. One appealing property of the integer feasible program (12) is that the objective is (trivially) linear and all constraints are linear in the binary decision variables. Such an integer linear program thus admits nice structure for efficient large-scale optimization, particularly via recent developments in cutting plane and branch-and-bound algorithms (Balas et al., 1993; Stidsen et al., 2014). In our later implementation, we made use of the GurobiPy package in Python (Gurobi Optimization, LLC, 2023), which implements state-of-the-art optimization solvers for large-scale integer programming. Gurobi is widely used for solving large-scale optimization problems in the industry, including for the National Football League (North, 2020) and Air France (Richard, 2020). Here, with the ILP formulation (12), Gurobi can solve for a feasible minimal set cover in minutes for our later case studies. This formulation thus provides an efficient strategy for computing the desired probability P(E(i,j)). Of course, after finding a single minimal cover via (12), we still have to find subsequent distinct minimal covers to compute (11). This can easily be performed by iteratively solving the ILP (12) with an additional constraint that ensures subsequent covers are distinct from 18 found covers. More concretely, let {\u02dc zc}c\u2208CTF,(i,j) be a minimal cover found by (12). Then a subsequent cover can be found by solving the ILP (12) with the additional constraint: dc = zc \u2295\u02dc zc, X c\u2208CTF,(i,j) dc \u22651, c \u2208CTF,(i,j), [C4] where \u2295is the XOR operator. This new constraint [C4] ensures the next cover is distinct from the previous found cover. To see why, note that dc equals 1 only if the binary variables zc and \u02dc zc are different; the inequality constraint in [C4] thus ensures all considered covers are different from the previous cover {\u02dc zc}c\u2208CTF,(i,j). The resulting ILP is still a linear program here, as XOR can naturally be expressed as linear constraints (Magee and Glover, 1996). More specifically, the XOR condition in [C4] can be equivalently expressed as: dc \u2265zc \u2212\u02dc zc, dc \u2265\u02dc zc \u2212zc, dc \u2264zc + \u02dc zc, dc \u22642 \u2212zc \u2212\u02dc zc, c \u2208CTF,(i,j), (13) which are clearly linear in the binary decision variables {zc}c\u2208CTF,(i,j). 
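To make the enumeration procedure concrete, below is a rough GurobiPy sketch of our own (the authors' implementation is not shown in the paper; the function name, toy data, and exact constraint encoding are ours). It enforces the coverage constraint [C1], a minimality condition in the spirit of [C2]-[C3] (activated only for selected candidates), and a standard no-good cut that plays the role of the XOR exclusion [C4]. It assumes a working Gurobi installation and license; any MILP solver could be substituted.

```python
import gurobipy as gp
from gurobipy import GRB

def enumerate_minimal_covers(candidates, failure_sets):
    """Enumerate minimal covers of the failed test cases in M_{(i,j)}.
    `candidates` lists the combinations in C_{TF,(i,j)}; `failure_sets` maps
    each failed-test index m to the subset of candidates present in it.
    Returns the list of minimal covers found (as frozensets of candidates)."""
    model = gp.Model("minimal_covers")
    model.Params.OutputFlag = 0
    z = model.addVars(candidates, vtype=GRB.BINARY, name="z")
    l = model.addVars(candidates, list(failure_sets), vtype=GRB.BINARY, name="l")
    big_m = len(candidates)
    for m, present in failure_sets.items():
        # [C1]: every failed case must be covered by a selected candidate.
        model.addConstr(gp.quicksum(z[c] for c in present) >= 1)
        for g in candidates:
            # [C2]-style: l[g, m] = 1 only if no other selected candidate covers m.
            model.addConstr(gp.quicksum(z[c] for c in present if c != g)
                            <= big_m * (1 - l[g, m]))
    for g in candidates:
        # [C3]-style minimality: a selected candidate must be essential somewhere.
        model.addConstr(gp.quicksum(l[g, m] for m in failure_sets) >= z[g])
    covers = []
    while True:
        model.optimize()
        if model.Status == GRB.INFEASIBLE or model.SolCount == 0:
            return covers  # no feasible solution left: all minimal covers found
        cover = frozenset(c for c in candidates if z[c].X > 0.5)
        covers.append(cover)
        # No-good cut (role of [C4]): the next solution must differ from `cover`.
        model.addConstr(gp.quicksum(1 - z[c] for c in cover)
                        + gp.quicksum(z[c] for c in candidates if c not in cover) >= 1)

# Toy usage: two failed cases; A2C2 alone covers both, A2B1 and B1C2 jointly do.
print(enumerate_minimal_covers(
    ["A2C2", "A2B1", "B1C2"],
    {1: {"A2C2", "A2B1"}, 2: {"A2C2", "B1C2"}}))
# Expected (in some order): [frozenset({'A2C2'}), frozenset({'A2B1', 'B1C2'})]
```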
With this, if a feasible solution is found for the ILP (12) with [C4], the optimization solver will return a distinct minimal cover, which we add to the collection. If not, the solver will instead return a \u201cdual certificate\u201d (G\u00a8 uzelsoy et al., 2010) that guarantees the ILP has no feasible solutions; such a certificate is made possible by the linear nature of the above integer program. One then iteratively solves the ILP (12) with constraint [C4] (modified to exclude two or more found covers) until the solver returns a dual certificate, in which case no feasible solutions are possible and thus all minimal covers have been enumerated. 4.3 Computing Root Cause Probabilities After enumerating all minimal covers for M(i,j), we can then compute the probability P(E(i,j)) via Proposition 2. Let V = { \u02dc C1, \u00b7 \u00b7 \u00b7 , \u02dc C|V|} be the collection of all minimal covers of M(i,j) found by the above procedure. By the principle of inclusion-exclusion, it follows from (11) that: P(E(i,j)) = P({Zc = 1 for all c \u2208\u02dc C}, for at least one \u02dc C \u2208V) = X cover \u02dc C\u2208V Y c\u2208\u02dc C pc \u2212 X covers \u02dc C, \u02dc C\u2032\u2208V Y c\u2208\u02dc C\u222a\u02dc C\u2032 pc + \u00b7 \u00b7 \u00b7 + (\u22121)|V| Y c\u2208\u02dc C1\u222a\u00b7\u00b7\u00b7\u222a\u02dc C|V| pc, (14) 19 where pc = P(Zc = 1) is again the prior root cause probability of combination c. We can then plug the computed P(E(i,j)) into Equation (10) to finally compute the desired root cause probability P(Z(i,j) = 1|D) for a TF combination (i, j). For software systems with a small number of inputs, the set of minimal covers V may not be large, in which case the computation in (14) would not be intensive. For larger systems with |V| large, one can employ the following second-order truncation as an approximation: P(E(i,j)) \u2248 X cover \u02dc C\u2208V Y c\u2208\u02dc C pc \u2212 X covers \u02dc C, \u02dc C\u2032\u2208V Y c\u2208\u02dc C\u222a\u02dc C\u2032 pc, (15) which bypasses the need for computing higher-order terms involving more than two covers. Note that, by the inclusion-exclusion principle, the right-hand side of (15) underestimates the probability P(E(i,j)). This is by design: from (10), this then results in a slight overestimation of the posterior root cause probability P(Z(i,j) = 1|D). From a risk perspective, this is more preferable than an approximation procedure that underestimates such probabilities. 4.4 Algorithm Summary For completeness, we provide in Algorithm 1 a summary of the full BayesFLo procedure. Suppose a test set is performed, yielding test data D = {(tm, ym)}M m=1. Here, the test cases {tm}M m=1 should ideally be collected from a covering array to ensure good coverage of combinations, but this is not necessary for BayesFLo. With test data collected and priors elicited on the single-factor root cause probabilities {p(i,j)}i,j, we then partition the set of considered combinations C into TP, TF and UT combinations using Equation (4). Here, if the test engineer is confident that a root cause should not exceed a certain order, then posterior probabilities need to be computed only for combinations up to such an order. This can be justified as a stronger form of combination hierarchy, and can further reduce computation for evaluating posterior probabilities. Next, we compute posterior root cause probabilities within each category. For TP combinations, this is trivially zero as such combinations were cleared in passed cases. 
For UT combinations, this can be set as the prior probabilities from (2), as no information can be gleaned on such combinations from the test data. For TF combinations, its posterior probabilities can be computed via the minimal set cover approach in Section 4. Finally, with posterior probabilities computed, we can then rank the potential root causes in terms 20 Algorithm 1 BayesFLo: Bayesian Fault Localization Input: Test set D = {(tm, ym)}M m=1, consisting of each test case and its corresponding test outcome. Output: Potential root causes (i, j) \u2208C with posterior root cause probabilities P(Z(i,j) = 1|D). 1: Elicit root cause probabilities {p(i,j)}i,j from domain knowledge. 2: Partition the set of considered combinations C into TP, TF and UT combinations using Equation (4). 3: For TP combinations, set its posterior root cause probability to 0. 4: For UT combinations, set its posterior root cause probability as the prior probability (2). 5: For each TF combination (i, j), enumerate minimal covers for the failed cases in M(i,j), then compute its posterior root cause probability using Equation (15). 6: Rank potential root causes (TF and UT combinations) using its corresponding posterior probabilities. of their probabilities, which can be used for guiding software diagnosis; more on this later. Figure 3 (right) visualizes the full workflow behind the BayesFLo procedure. 5 Numerical Experiments We now explore the effectiveness of BayesFLo in a suite of experiments. We explore its performance compared to the state-of-the-art first in a simple four-factor single-root-cause experiment, then in a larger eight-factor single-root-cause experiment, and finally in a more complex eight-factor experiment with multiple root causes. 5.1 Experiment 1: Four Factors, Single Root Cause The first experiment considers a small system with I = 4 factors, labeled A, B, C, D, each with J = 2 levels, labeled 1 and 2. Here, we select a single true root cause A2C2, then generate M = 5 test runs via a strength-2 covering array. Table 3 (top left) shows the corresponding test design, which yields three passed and two failed runs. We then compare the proposed BayesFLo approach with the Covering Array Analysis module in JMP. The latter, as mentioned previously, integrates developments from existing literature, and serves as a good state-of-the-art method for comparison. For BayesFLo, we assume little prior knowledge aside from the belief that root causes are 21 Test Cases & Outcomes for Experiment 1 A B C D Outcome 1 1 1 1 0 2 2 2 1 1 2 2 1 2 0 2 1 2 2 1 1 2 2 2 0 Test Cases & Outcomes for Experiment 2 A B C D E F G H Outcome 1 1 1 1 1 1 1 1 0 2 2 2 2 2 2 1 1 0 2 2 2 1 1 1 2 2 1 2 1 1 2 2 1 2 2 1 1 2 1 2 1 2 2 1 0 1 1 2 1 2 2 1 2 0 Test Cases & Outcomes for Experiment 3 A B C D E F G H Outcome 1 1 1 1 1 1 1 1 0 2 2 2 2 2 2 1 1 0 2 2 2 1 1 1 2 2 0 2 1 1 2 2 1 2 2 0 1 2 1 2 1 2 2 1 1 1 1 2 1 2 2 1 2 1 2 1 2 2 1 1 2 1 1 1 2 2 2 1 2 1 1 0 Table 3: [Top left] The M = 5-run test design and corresponding outcomes for Experiment 1. [Top right] The M = 6-run test design and corresponding outcomes for Experiment 2. [Bottom] The M = 8-run test design and corresponding outcomes for Experiment 3. Here, an outcome of 0 indicates a passed test case and 1 indicates a failed one. sporadically occurring; as such, we set the prior single-factor root cause probabilities in (2) as p(i,j) = 0.125 for all i and j. 
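As a quick aside on what this choice implies (our arithmetic, applying the product form (2)): any two-factor combination then carries a prior root-cause probability of $0.125^2 \approx 0.016$, and any three-factor combination $0.125^3 \approx 0.002$, so higher-order candidates start from a substantial handicap before any test outcome is observed.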
We further presume the test engineer is confident that there are no root causes involving all four factors, so posterior probabilities are computed only for combinations with at most three factors. Root cause probabilities are then evaluated via the proposed workflow in Figure 3 (right). Finally, these probabilities are ranked from largest to smallest to highlight important root causes for subsequent fault diagnosis. Table 4 shows the top-ranked posterior root cause probabilities from BayesFLo for Experiment 1, and Figure 5 shows the corresponding analysis from JMP. We see that both approaches pinpoint the true root cause A2C2 as the most suspicious combination, which is desirable. The deterministic JMP analysis, however, does not provide a probabilistic quantification of risk for each combination. As such, it is unclear from such an analysis whether a test engineer should investigate just the top combination A2C2 (with two failure counts), or all 15 combinations with a single failure count. BayesFLo provides a much clearer picture of this probabilistic uncertainty. Our method yields an 85% posterior probability on A2C2, which suggests this is highly likely to be a root cause. For subsequent two-factor combinations, this probability drops considerably to 23%, which suggests a reduced need for diagnosis. Here, this is the correct advice as such combinations indeed do not contain the true root cause. Finally, the three-factor combinations in our ranking have 22 Figure 5: JMP\u2019s Covering Array Analysis for Experiment 1. Listed are the potential root cause combinations ranked by decreasing failure counts. BayesFLo Analysis for Experiment 1 Combination Posterior Probability Failure Count A2C2 0.85 2 A2B1 0.23 1 A2D1 0.23 1 B1C2 0.23 1 B2D1 0.23 1 B1D2 0.23 1 C2D1 0.23 1 A2B1C2 0.03 1 A2B2C2 0.03 1 A2B2D1 0.03 1 A2B1D2 0.03 1 A2C2D1 0.03 1 A2C2D2 0.03 1 B2C2D1 0.03 1 B1C2D2 0.03 1 Table 4: The top-ranked posterior probabilities from BayesFLo in Experiment 1, along with its corresponding failure counts. a small probability of 3%, which suggests little need for inspection (despite it having a single failure count). Such an analysis is thus much more nuanced and can better guide further diagnosis compared to existing methods. It is worth noting that the top-ranked combinations from BayesFLo (Table 4) are all TF combinations, i.e., they appear in at least one failed test case. The UT combinations in this experiment, which all involve three factors, have a posterior root cause probability of 0.0019 from BayesFLo. This is considerably smaller than the top-ranked combinations in Table 4, which is unsurprising as the latter has appeared in at least one failed test case and should thus be treated as more suspicious. The proposed BayesFLo approach captures this intuition quantitatively via its Bayesian analysis. 5.2 Experiment 2: Eight Factors, Single Root Cause The second experiment considers a larger system with I = 8 factors, each with J = 2 levels. Similar to before, we select a single true root cause A2G2, then generate M = 6 test runs via a strength-2 covering array. Table 3 (top right) shows the corresponding test design, which yields four passed and two failed runs. As before, BayesFLo is compared with the JMP analysis, which serves as the state-of-the-art. For BayesFLo, we investigate two choices of prior specifications for the root cause 23 probabilities. 
The first prior is similar to Experiment 1, where p(i,j) = 0.0625 for all i and j to reflect the belief that root causes occur sporadically. The second is a more informed prior that captures domain knowledge from the test engineer. Suppose A and G were two new factors added to the software system and have not been tested previously. A test engineer may find such factors to be more suspicious a priori, and thus may assign a higher prior probability p(i,j) = 0.25 on factors A and G, and a lower prior probability of p(i,j) = 0.0625 for other factors. The hope is that such domain knowledge can help tease out potential root causes given limited test runs. We further suppose the engineer is confident there are no root causes involving three or more factors, so posterior probabilities are computed only for combinations with at most two factors. Consider first the analysis with the first prior. Table 5 (top) shows the top-ranked posterior probabilities from BayesFLo using this prior, and Figure 6 shows the corresponding analysis from JMP. We see that the true root cause A2G2 is amongst the top-ranked combinations for both the BayesFLo and JMP analysis. But as before, the latter does not provide the desired probabilistic quantification of risk offered by BayesFLo. While the BayesFLo posterior probability for A2G2 is rather low at 16%, perhaps due to the small prior probability assigned, it is clear that it (along with the other five tied combinations) need to be investigated. Subsequent combinations have considerably lower probabilities, which suggests a reduced need for inspection. Such insights are thus more nuanced and can better guide further investigation by software test engineers. Consider next the second prior, which captures domain knowledge on the elevated suspiciousness of the new factors A and G. Table 5 (bottom) shows the top-ranked probabilities for the combinations using this prior. With additional domain knowledge from this prior, we see a much higher posterior probability of 48% on A2G2, which is unsurprising as this involves both new factors. This highlights two advantages of BayesFLo. First, this shows the flexibility of BayesFLo in incorporating useful domain knowledge for improving fault localization. By leveraging such information, we are able to pinpoint the true root cause A2G2 with much greater certainty. Second, this demonstrates the usefulness of combination heredity for disentangling the six top combinations, which were tied in the JMP analysis. With the heightened suspiciousness of factors A and G specified in the prior, this embedded heredity structure in BayesFLo then raises the prior probabilities on all combinations involving these factors, which allows our procedure to identify the true root cause with 24 Figure 6: JMP\u2019s Covering Array Analysis for Experiment 2. Listed are the potential root cause combinations ranked by decreasing failure counts. BayesFLo Analysis for Experiment 2 (Prior 1) Combination Posterior Probability (Prior 1) Failure Count A2G2 0.16 2 A2F1 0.16 2 A2H2 0.16 2 F1G2 0.16 2 G2H2 0.16 2 F1H2 0.16 2 A2B1 0.06 1 A2C1 0.06 1 . . . . . . . . . BayesFLo Analysis for Experiment 2 (Prior 2) Combination Posterior Probability (Prior 2) Failure Count A2G2 0.48 2 A2F1 0.12 2 A2H2 0.12 2 F1G2 0.12 2 G2H2 0.12 2 A2B1 0.08 1 A2C1 0.08 1 . . . . . . . . . Table 5: The top-ranked posterior probabilities from BayesFLo in Experiment 2 using Prior 1 (top) and Prior 2 (bottom), along with its corresponding failure counts. limited tests. 
5.3 Experiment 3: Eight Factors, Multiple Root Causes Finally, the third experiment investigates a system with I = 8 factors each with J = 2 levels, but with two true root causes B1C2 and G2H1. Table 3 (bottom) shows the test design with M = 8 runs, which yields five passed and three failed runs. For BayesFLo, we employ a similar prior as Experiment 2, with p(i,j) = 0.0625 for all i and j to reflect the belief that root causes occur sporadically. As in Experiment 2, we suppose the test engineer is confident that there are no root causes with three or more factors, thus we only compute posterior probabilities on combinations with at most two factors. Table 6 shows the top-ranked posterior probabilities from BayesFLo for Experiment 3, and Figure 7 shows the corresponding analysis from JMP. As before, while JMP correctly identified the top two combinations as the two root causes via failure counts, it does not yield a measure of probabilistic confidence, and thus it is unclear how many further combinations (with one failure count) need to be explored to debug the software. BayesFLo 25 Figure 7: JMP\u2019s Covering Array Analysis for Experiment 3. Listed are the potential root cause combinations ranked by decreasing failure counts. BayesFLo Analysis for Experiment 3 Combination Posterior Probability Failure Count G2H1 0.98 2 B1C2 0.97 2 A1G2 0.20 1 B2C1 0.20 1 C1F2 0.20 1 F2G2 0.20 1 A1E2 0.13 1 A1H2 0.13 1 B1F2 0.13 1 D1E2 0.13 1 D1F2 0.13 1 F2H2 0.13 1 G1H2 0.13 1 Table 6: The top-ranked posterior probabilities from BayesFLo in Experiment 3, along with its corresponding failure counts. provides a more informed analysis for guiding further investigation. Its top two combinations, which are the true root causes, have a near-certain posterior probability of being a root cause. Subsequent combinations have considerably reduced posterior probabilities, and are thus much less important for investigation, as desired. 6 Fault Localization of the JMP XGBoost Interface Finally, we return to our motivating problem on the fault localization of the XGBoost User Interface in JMP (Jones and Sall, 2011). Figure 1 shows this user interface in JMP Pro Version 17.0. As discussed in Section 2, a key challenge is the verification of software performance over the many hyperparameters that can be freely varied by users. We investigate next the effectiveness of BayesFLo in two complementary fault localization case studies for this interface. An important consideration is in deciding what software behavior constitutes as a failure. Wong et al. (2023) defines a failure as a scenario where the system \u201cdeviates from its correct behavior\u201d. The determination of such \u201ccorrect\u201d (or expected) behavior is known as the oracle problem in software testing (Lekivetz and Morgan, 2021). In this case, since our goal is to investigate the JMP XGBoost interface, we forgo the more tedious process of independently building a machine learning model on XGBoost for verification, and instead rely on the XGBoost Python API (Brownlee, 2016) as the \u201coracle\u201d for comparison. With this, we explore next two case studies that each investigates a different notion of 26 JMP XGBoost Case Study 1 Hyperparameter Level 1 Level 2 Level 3 max depth 3 6 9 subsample 0.1 0.3 0.65 colsample bytree 0.1 0.3 0.65 min child weight 1 5.5 10 alpha 0 1 2 lambda 0 1 2 learning rate 0.05 0.15 0.3 iterations 20 150 300 Table 7: Considered hyperparameters (factors) for the XGBoost Case Study 1. 
software failure; the first explores discrepancies in predictive performance, and the second explores discrepancies in warning messages. For the first case study, predictive performance is assessed via out-of-fold predictions from K-fold cross validation (James et al., 2013), and the discrepancy between predictions is measured via the log-relative error (LRE; see McCullough, 1998). Test outcomes with median LRE below 9.0 (as recommended in McCullough, 1998) are deemed a \u201cfailure\u201d, and suggest a mismatch between the JMP interface and the Python oracle. After reconciling predictions, the second case study investigates discrepancies in warning messages between the two implementations; details in Section 6.2.

6.1 Case Study 1
In the first case study, we focus on testing the I = 8 factors from the first column of Figure 1. As such factors are continuous, we apply the equivalence partitioning strategy (Myers et al., 2004; Lekivetz and Morgan, 2021) to choose J = 3 discretized levels for each factor, summarized in Table 7. With this, we generate a set of M = 15 test runs using a strength-2 covering array. For each test case, we then compute the prediction LREs between the JMP and Python implementation to assess failures. Table 8 summarizes the test cases and their corresponding outcomes.

Test Cases & Outcomes for JMP XGBoost Case Study 1
max depth | subsample | colsample bytree | min child weight | alpha | lambda | learning rate | iterations | Outcome
6 | 0.3 | 1 | 5.5 | 0 | 2 | 0.15 | 20 | 0
9 | 0.3 | 0.65 | 10 | 0 | 1 | 0.3 | 20 | 0
6 | 0.65 | 0.65 | 10 | 2 | 2 | 0.05 | 20 | 0
6 | 1 | 0.3 | 5.5 | 2 | 0 | 0.15 | 300 | 0
9 | 1 | 1 | 1 | 0 | 0 | 0.15 | 150 | 0
3 | 0.3 | 0.3 | 1 | 0 | 0 | 0.05 | 20 | 1
9 | 1 | 1 | 10 | 2 | 2 | 0.3 | 300 | 0
3 | 0.65 | 1 | 1 | 1 | 2 | 0.05 | 150 | 1
3 | 1 | 0.65 | 1 | 2 | 1 | 0.05 | 300 | 1
3 | 0.3 | 0.3 | 5.5 | 1 | 1 | 0.3 | 300 | 1
3 | 0.3 | 0.3 | 10 | 2 | 2 | 0.15 | 150 | 1
9 | 1 | 1 | 5.5 | 1 | 1 | 0.05 | 20 | 0
9 | 0.65 | 0.3 | 10 | 1 | 0 | 0.3 | 150 | 0
6 | 0.65 | 0.65 | 5.5 | 1 | 1 | 0.15 | 150 | 0
6 | 0.65 | 0.65 | 1 | 0 | 0 | 0.3 | 300 | 0

Table 8: The M = 15-run test design and corresponding outcomes for the XGBoost Case Study 1. Here, an outcome of 0 indicates a passed test case and 1 indicates a failed one.

For BayesFLo, since we have little prior information besides the belief that root causes occur rarely, we set p(i,j) = 1/24 for all factors i and levels j. After consulting with test engineers, we conclude that root causes in the interface are highly unlikely to involve more than two factors; thus we only compute posterior probabilities on combinations with at most two factors. Root cause probabilities are computed via the BayesFLo workflow in Figure 3 (right). Table 9 shows the five input combinations with the highest posterior root cause probabilities from BayesFLo. We observe that the single-factor setting max depth = 3 has a near-certain root cause probability of 0.999. Furthermore, the remaining four combinations all involve this setting of max depth = 3, with considerably lower probabilities. This suggests that the test engineer should first investigate the JMP interface at max depth = 3 prior to any other combinations. Indeed, after digging into the source code, we find that max depth = 3 is the culprit root cause, stemming from an out-of-sync issue for the default value of max depth in the JMP XGBoost interface. Further inspection of the interface shows that, after this out-of-sync issue is corrected, the remaining four combinations are not root causes, which is in line with the small BayesFLo posterior probabilities from Table 9. To contrast, the JMP analysis (which serves as the state-of-the-art) yields a more muddled picture for fault localization.
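As a concrete illustration of the failure criterion used in Case Study 1, the following Python sketch compares out-of-fold predictions from the two implementations via the log-relative error and applies the median-LRE threshold of 9.0. The exact LRE convention (and the handling of near-zero reference values) is an assumption based on the usual McCullough (1998) definition cited above, and the array names are purely illustrative.

```python
import numpy as np

def log_relative_error(candidate, reference, cap=16.0):
    """Digits of agreement between candidate values and reference ('oracle') values.
    Assumed convention following McCullough (1998): LRE = -log10(|c - r| / |r|),
    falling back to the absolute error when the reference is (near) zero."""
    candidate = np.asarray(candidate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    abs_err = np.abs(candidate - reference)
    with np.errstate(divide="ignore", invalid="ignore"):
        lre = np.where(
            np.abs(reference) > 1e-12,
            -np.log10(abs_err / np.abs(reference)),
            -np.log10(abs_err),
        )
    return np.clip(lre, 0.0, cap)   # exact agreement capped at `cap` digits

def is_failure(jmp_preds, python_preds, threshold=9.0):
    """A test case 'fails' when the median LRE over the out-of-fold predictions
    drops below the 9.0-digit threshold recommended in McCullough (1998)."""
    return float(np.median(log_relative_error(jmp_preds, python_preds))) < threshold

# Illustrative usage with made-up out-of-fold predictions for one test case.
jmp_preds    = np.array([0.123456789012, 0.987654321098])
python_preds = np.array([0.123456789013, 0.987654321097])
print(is_failure(jmp_preds, python_preds))   # pass/fail outcome fed to BayesFLo
```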
Figure 8 shows a screenshot from the JMP Covering Array Analysis module, which again ranks suspicious combinations by their failure counts. We see that the top-ranked combination is max depth = 3 (with a failure count of 5), which is desirable as this is indeed the true root cause. After this, however, there are multiple tied combinations with a failure count of three. From this deterministic analysis, it is unclear whether a test engineer should expend further budget on investigating these tied combinations, and if so, how many should be inspected. The BayesFLo probabilistic analysis clarifies such decisions for the test engineer: there is little need for further investigation beyond max depth = 3, as such combinations have near-zero root cause probabilities.

Figure 8: JMP\u2019s Covering Array Analysis for the XGBoost Case Study 1. Listed are the potential root cause combinations ranked by decreasing failure counts.

BayesFLo Analysis for JMP XGBoost Case Study 1
Combination | Posterior Probability | Failure Count
max depth = 3 | 0.999 | 5
max depth = 3, alpha = 1 | 0.040 | 2
max depth = 3, alpha = 2 | 0.040 | 2
max depth = 3, subsample = 0.3 | 0.037 | 3
max depth = 3, colsample bytree = 0.3 | 0.037 | 3

Table 9: The top-ranked posterior probabilities from BayesFLo in the XGBoost Case Study 1, along with their corresponding failure counts.

6.2 Case Study 2
After reconciling predictive discrepancies, the second case study then investigates failures in the form of warning message discrepancies between JMP and Python for XGBoost. The inspection of such discrepancies is important for a reliable software implementation that adheres to user specifications. Here, guided by domain knowledge from JMP test engineers, we explore I = 7 factors, including four from Case Study 1 and three new categorical factors booster, sample type and normalize type. With this, we generate a set of M = 12 test runs using a strength-3 covering array. For each test case, we then investigate warning discrepancies between JMP and Python. Table 2 summarizes these test cases and their outcomes. Note that this was the motivating case study from Section 2. For BayesFLo, our earlier analysis can be used as domain knowledge for prior elicitation in the second case study. For the three factors not investigated in Case Study 1, we have heightened suspicions on such factors a priori (as they were not tested previously), and thus set p(i,j) = 0.25 for these factors. For the remaining four factors, we adopt the same p(i,j) = 1/24 prior employed earlier, which reflects our belief that such factors are less suspicious as they have been tested in Case Study 1. After discussions with test engineers, we conclude that root causes here are highly unlikely to consist of more than three factors, so we evaluate posterior probabilities only for combinations with at most three factors. Table 10 shows the top five combinations with highest posterior probabilities from BayesFLo. We see that the top two combinations have considerably higher probabilities above 90%, whereas subsequent combinations have much lower probabilities.
BayesFLo Analysis for JMP XGBoost Case Study 2
Combination | Posterior Probability | Failure Count
booster = gbtree, sample type = weighted | 0.94 | 3
booster = gbtree, normalize type = forest | 0.94 | 3
booster = gbtree, alpha = 0 | 0.56 | 3
booster = gbtree, sample type = weighted, normalize rate = tree | 0.15 | 1
booster = gbtree, sample type = uniform, normalize rate = forest | 0.15 | 1

Table 10: The top-ranked posterior probabilities from BayesFLo in the XGBoost Case Study 2, along with their corresponding failure counts.

This suggests that the test engineer should focus on investigating the first two combinations, with others taking much less priority. Upon inspection of the source code, we find that the first two combinations in Table 10 are indeed root causes. The root issue stems from the setting of booster = gbtree; from the XGBoost documentation (Chen and Guestrin, 2016), such a setting should ignore user specifications for sample type and normalize type. With the first two combinations in Table 10, the Python oracle returns the desired warning that sample type and normalize type are ignored, whereas the JMP interface fails to output this warning. To contrast, the JMP analysis (see Figure 2 from Section 2) again yields a more opaque picture. There, we see that the top-ranked combinations are the same as those for BayesFLo, with a tied failure count of 3. Again, two such combinations are true root causes, which is desired. After this, there are however fifteen tied combinations, each with a slightly lower failure count of 2. Since such an analysis is deterministic, it is unclear whether a test engineer needs to allocate further costs for inspecting these fifteen combinations, which would be very costly! BayesFLo provides further insight via its probabilistic analysis: it suggests that the first three combinations involving booster = gbtree are considerably more suspicious, while remaining combinations are much less suspicious as they have near-zero root cause probabilities.

7 Conclusion
We proposed a new BayesFLo framework for Bayesian fault localization of complex software systems. Existing methods for fault localization are largely deterministic, and thus have key limitations for a probabilistic quantification of risk on potential root causes, and for integrating prior domain and/or structural knowledge from test engineers. BayesFLo addresses such limitations via a new Bayesian model on potential root cause combinations. A key feature of this model is its embedding of combination hierarchy and heredity (Lekivetz and Morgan, 2021), which capture the structured nature of software root causes. One critical challenge is the computation of posterior root cause probabilities, which can be infeasible even for small systems. We thus developed a new algorithmic framework for computing the desired posterior probabilities, leveraging recent tools from integer programming and graph representations. We then demonstrated the effectiveness of BayesFLo over the state-of-the-art in a suite of numerical experiments, and two case studies on our motivating application of fault localization on the JMP XGBoost interface. Given promising results, there are many immediate avenues for future work. One direction is the use of the BayesFLo modeling framework for sequential design of subsequent test sets. This adaptive testing of software, which can be facilitated by the proposed Bayesian model, can greatly accelerate the discovery of bugs in complex systems.
Another direction is the extension of BayesFLo for fault localization of systems with continuous and mixed factors. Such a setting would be more complex, as it requires the probabilistic modeling of the fault response surface; recent work in Chen et al. (2022) appears to be useful for this goal. Acknowledgements. This work was supported by NSF CSSI 2004571, NSF DMS 2210729, NSF DMS 2220496, NSF DMS 2316012 and DE-SC0024477." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.11034v1", |
| "title": "PAT-Questions: A Self-Updating Benchmark for Present-Anchored Temporal Question-Answering", |
| "abstract": "Existing work on Temporal Question Answering (TQA) has predominantly focused\non questions anchored to specific timestamps or events (e.g. \"Who was the US\npresident in 1970?\"). Little work has studied questions whose temporal context\nis relative to the present time (e.g. \"Who was the previous US president?\"). We\nrefer to this problem as Present-Anchored Temporal QA (PATQA). PATQA poses\nunique challenges: (1) large language models (LLMs) may have outdated\nknowledge, (2) complex temporal relationships (e.g. 'before', 'previous') are\nhard to reason, (3) multi-hop reasoning may be required, and (4) the gold\nanswers of benchmarks must be continuously updated. To address these\nchallenges, we introduce the PAT-Questions benchmark, which includes single and\nmulti-hop temporal questions. The answers in PAT-Questions can be automatically\nrefreshed by re-running SPARQL queries on a knowledge graph, if available. We\nevaluate several state-of-the-art LLMs and a SOTA temporal reasoning model\n(TEMPREASON-T5) on PAT-Questions through direct prompting and\nretrieval-augmented generation (RAG). The results highlight the limitations of\nexisting solutions in PATQA and motivate the need for new methods to improve\nPATQA reasoning capabilities.", |
| "authors": "Jannat Ara Meem, Muhammad Shihab Rashid, Yue Dong, Vagelis Hristidis", |
| "published": "2024-02-16", |
| "updated": "2024-02-16", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Large language models (LLMs) have demonstrated impressive performance across a wide spectrum of question-answering (QA) domains, thanks to an abundant amount of data spanning different QA tasks such as open-book question answer- ing (OBQA) (Ye et al., 2023; Zhao et al., 2023), knowledge-base question answering (KBQA) (Tan et al., 2023b), and multi-hop reasoning tasks (Ko- jima et al., 2022; Wang et al., 2023). Their ability to tackle temporal question answering (TQA) has also seen considerable advancements, as evidenced by recent literature (Dhingra et al., 2022; Jia et al., 2018a; Tan et al., 2023a). However, many past studies on Temporal Ques- tion Answering (TQA) focus on questions an- chored by specific timestamps or events, such as \u2018Who was the president of the US in 1985/during World War II?\u2019. In real life, we argue that many of the questions LLMs face are present-anchored with- out a specific timestamp, representing a crucial yet underexplored category of TQA. We refer to it as Present-Anchored Temporal QA (PATQA), where the time condition is relative to the present time, for example, \u2018Which team does Cristiano Ronaldo play for currently?\u2019. PATQA poses challenges due to several fac- tors: (1) LLMs\u2019 knowledge becomes outdated due to periodic training (He et al., 2022; Zhang and Choi, 2021; Liska et al., 2022). Efforts to miti- gate this through retrieval-augmented generation (RAG), providing current documents as context, are also often ineffective (Lewis et al., 2020; Kasai et al., 2022; Vu et al., 2023), as verified by our own experiments with New Bing1 (using GPT-4, shown in Section 4.2). (2) PATQA can contain complex temporal relationships (e.g. before, last, previ- ous) that are challenging. For example, \u2018Which team did Cristiano Ronaldo play for before the current team?\u2019 requires a sequential understand- ing of temporal expressions \u2018current\u2019 and \u2018before\u2019. (3) PATQA may require multi-hop reasoning that involves temporal reasoning in subsequent hops. For example, tracing Cristiano Ronaldo\u2019s current team to its current head coach in \u2018Who is the head coach of the team that Cristiano Ronaldo plays for currently?\u2019 (Cristiano Ronaldo \u2192team \u2192head coach), requires sequential temporal reasoning fol- lowed by multi-hop information integration (head coaches change with time too). (4) Creating and maintaining PATQA benchmarks is expensive 1https://www.bing.com/chat arXiv:2402.11034v1 [cs.CL] 16 Feb 2024 Cristiano Ronaldo Juventus F.C. Real Madrid F.C. Manchester United F.C. Al-Nassr Jos\u00e9 Mourinho Lu\u00eds Castro Ole Gunnar Solskj\u00e6r member of sports team (2009-2018) member of sports team (Jul 2018 \u2013 Aug 2021) member of sports team (Aug 2021- Nov2022) member of sports team (Jan 2023-Present) head coach (2010-2013) head coach (Dec 2018- Nov 2021) head coach (Jul 2023- Present) Question1: Which team does Cristiano Ronaldo play for currently? Gold Answer1: Al-Nassr (single-hop) Question2: Who is the head coach of the team that Cristiano Ronaldo plays for currently? 
Gold Answer2: Lu\u00eds Castro (multi-hop)

Figure 1: Illustration of the limitations of the LLMs in answering the present-anchored temporal questions. Panel (a) shows a subgraph from Wikidata around the subject Cristiano Ronaldo; panel (b) shows the responses of Flan-T5-xl, Llama-2-7B, Falcon-7B, Mistral-7B, and GPT-3.5 to the two questions, together with their release and data cutoff dates. The LLMs respond with an out-of-date answer due to knowledge outdating, or with false information due to lacking multi-hop PAT reasoning abilities.

because the gold answers to the questions keep changing and manual updates are not sustainable and scalable. Figure 1 illustrates examples showing that, due to these challenges, current LLMs perform poorly on our PATQA dataset. We introduce a novel benchmark, referred to as PAT-Questions (Present-Anchored Temporal Questions), comprising 6172 present time-sensitive factual question-answer pairs that possess the four features we have mentioned above. These challenges require both single and multi-hop temporal reasoning over complex temporal relations to answer correctly. A unique property of PAT-Questions is its capability to automatically update answers over time, resulting in distinct instances for different timestamps. We construct PAT-Questions by leveraging templates derived from time-dependent facts sourced from the Wikidata knowledge base (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014). This allows us to ground our questions on Wikidata facts, thereby ensuring data quality over time by associating a SPARQL query with each question to accurately retrieve answers from the most up-to-date Wikidata. As far as we know, there are only two datasets which contain present-anchored temporal QA examples, but they lack complex temporal relations like \u2018before\u2019 and \u2018previous\u2019 and have very few multi-hop temporal questions (Kasai et al., 2022; Vu et al., 2023). Further, these datasets do not offer a way to automatically update the answers over time, which limits their applicability to future PATQA algorithms. We benchmark several state-of-the-art (SOTA) LLMs on PAT-Questions, both directly prompting the LLMs with the questions, and in a RAG setting. To retrieve documents in RAG, we use Google Custom Search (GCS), following Kasai et al. (2022)\u2019s work, to retrieve relevant documents from up-to-date Wikipedia and Wikidata first, and provide the documents as context along with the initial prompt to the LLMs. We also evaluate the performance of a SOTA temporal reasoning system (Tan et al., 2023a), which fine-tunes the T5-SFT model (Raffel et al., 2020). In their setting, external context in the form of natural language text is provided. In contrast, we consider an open retrieval setting (Nguyen et al., 2016; Rashid et al., 2024) to retrieve the most relevant context for each question and provide that as context to the LLMs. Our empirical results highlight that the SOTA models significantly struggle on PAT-Questions, especially on multi-hop ones, with EM accuracy ranging from 1.5% to 15.5%.
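As a concrete illustration of the SPARQL-based answer refresh described above, here is a minimal Python sketch that re-resolves a single-hop "currently" question against the live Wikidata Query Service. The query follows the single-hop pattern shown later in the paper (a statement with a start-time qualifier P580 and no end-time qualifier P582, most recent first), but the function name, User-Agent string, and result format are illustrative assumptions rather than the benchmark's actual refresh code; the example IDs (Jos\u00e9 Sosa, Q314241, and "member of sports team", P54) are taken from the paper's own example.

```python
import requests

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

def refresh_current_answer(subject_qid: str, relation_pid: str) -> list[dict]:
    """Re-resolve a single-hop 'currently' question: Wikidata statements for the
    given property that have a start time (P580) and no end time (P582),
    ordered with the most recent first."""
    query = f"""
    SELECT ?item ?itemLabel WHERE {{
      wd:{subject_qid} p:{relation_pid} ?stmt .
      ?stmt ps:{relation_pid} ?item .
      ?stmt pq:P580 ?start .
      FILTER NOT EXISTS {{ ?stmt pq:P582 ?end . }}
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    ORDER BY DESC(?start)
    """
    resp = requests.get(
        WDQS_ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "pat-questions-refresh-sketch/0.1"},
        timeout=60,
    )
    resp.raise_for_status()
    return [
        {"id": b["item"]["value"].rsplit("/", 1)[-1],
         "label": b["itemLabel"]["value"]}
        for b in resp.json()["results"]["bindings"]
    ]

# Example: current team (P54) of Jose Sosa (Q314241), per the paper's example.
# print(refresh_current_answer("Q314241", "P54"))
```

Re-running such a query on a schedule (e.g., the quarterly cronjob mentioned in Section 3) is one way the gold answers can stay synchronized with Wikidata without manual intervention.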
Our main contributions are: \u2022 We publish a novel PATQA benchmark, PAT- Questions3, with annotated single-hop and multi-hop questions for two different times- tamps (December 2021, December 2023). We provide an automatic answer updating system for the research community to always get up- to-date answers to PAT-Questions. \u2022 We evaluate our benchmark on a wide range of LLMs in direct prompting and RAG set- tings, and identify limitations of the LLMs in tackling PAT-Questions. 3Our dataset and code for self-updates: https:// anonymous.4open.science/r/PAT-Questions-0EB4/ Dataset Creation KC Question Types PAT Auto. Ans- update #ques. m- hop Bef-event reasoning Temporal QA & Reasoning Datasets TempQuestions (2018a) Man.-Filt. Freebase \u2713 \u2713 \u2717 \u2717 1271 CRON-QUESTIONS (2021) Templ. Wikidata \u2713 \u2717 \u2717 \u2717 410k TimeQA (2021) Templ.- Wikidata Wikipedia \u2717 \u2717 \u2717 \u2717 20k SituatedQA-temporal (2021) Man.-Filt. Wikipedia \u2717 \u2717 \u2717 \u2717 12k TEMPLAMA (2022) Templ./ Cloze Custom-News \u2717 \u2717 \u2717 \u2717 50k StreamingQA (2022) Man.+Gen WMT news \u2713 \u2717 \u2717 \u2717 410k TEMPREASON (2023a) Templ./ Cloze Wikidata \u2717 \u2713 \u2717 \u2717 429k Present-Anchored Temporal QA Datasets REALTIME QA (2022) News websites News Articles \u2713 \u2717 some \u2717 \u223c5k FreshQA (2023) Man. Google search \u2713 \u2717 377 \u2717 600 PAT-Questions (ours) Templ.-Wikidata Wikipedia \u2713 \u2713 \u2713 \u2713 6172 Table 1: Comparison of temporal question-answering datasets. Abbreviations: Man.=created manually, Man.- Filt.=filtered from other datasets, Man.+Gen.=created by crowdsourcing and generated by LLMs, Templ.=created using templates, KC=Knowledge Corpus, PAT=Present Time-Anchored. \u2022 We modify a state-of-the-art temporal reason- ing system, Tan et al. (2023a), to answer our PAT-Questions, and experimentally show how it performs on our dataset.", |
| "main_content": "Temporal Question-Answering Datasets Research on understanding time in texts has led to the development of datasets aimed at enhancing temporal understanding in both knowledge-base question answering (KBQA) and natural language question-answering systems. Prior works on temporal KBQA have led to the creation of datasets like TempQuestions (Jia et al., 2018a), Tequila (Jia et al., 2018b), TimeQuestions (Jia et al., 2021), and CRONQuestions (Saxena et al., 2021), which focuses on integrating temporal data into knowledge bases for ranking entities related to a query (Talukdar et al., 2012; Chang and Manning, 2012). Recent efforts have shifted towards enhancing large language models (LLMs) for time-sensitive reasoning based on natural text only. Datasets like TimeQA (Chen et al., 2021), TEMPLAMA (Dhingra et al., 2022), and TEMPREASON (Tan et al., 2023b) have been introduced to test the ability of LLMs to reason and answer questions that involve understanding explicit temporal context (i.e. \u2018What team did Cristiano Ronaldo play for in 2021?\u2019) or complex temporal relations such as \u2018before\u2019 and \u2018after\u2019 (i.e. \u2018What team did Cristiano Ronaldo play for before Manchester United?\u2019) or to identify timedependent facts from unstructured text. Time-sensitive reasoning over Evolving data Existing benchmarks in temporal QA systems focus on static knowledge, annotating questions with single or explicit timestamps, which overlooks the dynamic nature of real-world information where answers can change over time. Notably, SituatedQAtemporal (Zhang and Choi, 2021) and StreamingQA (Liska et al., 2022) have attempted to incorporate temporal context by dating questions and sourcing from recent news, yet they still operate on static snapshots of knowledge. The dynamic REALTIME QA benchmark (Kasai et al., 2022) tests models on current events, however, they exclusively focus on news data and lack emphasis on evolving facts and multi-hop reasoning. FreshQA (Vu et al., 2023) is a contemporary dataset that attempts to update LLMs with current information through time-sensitive questions. Both REALTIME QA and FreshQA rely on the authors to update the answers to reflect new information or changes over time, which limits the datasets\u2019 effectiveness in supporting the real-time adaptation of LLMs. In contrast, our dataset, PAT-Questions, can be automatically updated over time ensuring its adaptability and accuracy in real-time, surpassing existing present-anchored datasets. Table 1 shows the comparison among all relevant datasets. 3 PAT-Questions Dataset Construction We extend the TEMPREASON dataset (Tan et al., 2023a) to construct PAT-Questions. Each question in TEMPREASON follows a time-sensitive template and is annotated with Wikidata IDs for TEMPREASON Templates (\ud835\udc47) 1. Which team did {\ud835\udc94} play for in \ud835\udf0f? 2. Which team did {\ud835\udc94} play for before \ud835\udefc? \u2026 PAT-Questions single-hop Templates (\ud835\udc47Pat) 1. Which team does {\ud835\udc94} play for currently? 2. Which team did {\ud835\udc94} play for before the current team? \u2026 TEMPREASON Templates Modification QA Instance Creation and Annotation Single-hop PAT-Questions (\ud835\udc43s) 1. Which team does Jos\u00e9 Sosa (Q314241) play for (P54) currently? 2. Which team did Jos\u00e9 Sosa (Q314241) play for (P54) before the current team? 
\u2026 Retrieve answers via SPARQL & Filter by human SPARQL (\u201cWhich team does Jos\u00e9 Sosa play for currently?\u201c, (\u201cQ314241\u201d, \u201cP54\u201d)) Wikidata Query Service 2023: {\u201cID\u201d: \u201cQ214940\u201d, \u201cLabel\u201d: \u201cEstudiantes de La Plata\u201d} 2021: {\u201cID\u201d: \u201cQ6601875\u201d, \u201cLabel\u201d: \u201cFenerbah\u00e7e SK\u201d} Filter common facts about the answers Common Facts/Relations (F) Sports team: home venue, owner (temporal), head coach(temporal) \u2026 Multi-hop PAT-Questions (\ud835\udc43m) 1. Who is the head coach (P286) of the team that Jos\u00e9 Sosa (Q314241) play for (P54) currently? (single-hop ans 2023: Q214940, single-hop ans 2021: \u201cQ6601875\u201d) 2. Who is the head coach (P286) of the team that Jos\u00e9 Sosa (Q314241) played for (P54) before the current team? \u2026 PAT-Questions multi-hop Templates (\ud835\udc47mPat) 1. Who is the head coach of the team that {\ud835\udc94} play for currently? 2. Who is the head coach of the team that {\ud835\udc94} played for before the current team? \u2026 Retrieve answers via SPARQL & Filter by human SPARQL (\u201cWho is the head coach of the team that Jos\u00e9 Sosa play for currently?\u201c, ( \u201cQ214940\u201d, \u201cP286\u201d), (\u201cQ6601875\u201d, \u201cP286\u201d) ) 2023: {\u201cID\u201d: \u201cQ3288172\u201d, \u201cLabel\u201d: \u201cEduardo Dom\u00ednguez\u201d} Step 1 Step 2.1 Step 2.2 Step 3 Step 4 Wikidata Query Service Complex Templates Creation Complex QA instances Creation and Annotation Step 5.1 Step 5.2 2021: {\u201cID\u201d: \u201cQ556610\u201d, \u201cLabel\u201d: \u201cV\u00edtor Pereira\u201d} Figure 2: Illustration of PAT-Questions dataset construction following Algorithm 1. Firstly, we modify the timesensitive templates from the TEMPREASON dataset (Tan et al., 2023a) to build PAT-Questions templates, and following the steps shown in the figure, we create a set of one-hop and multi-hop PAT-Questions with annotated answers for two different timestamps, Dec 2021 and Dec 2023. Here, \u03c4 and \u03b1 refer to a year and an entity respectively. the primary subject and relation, which facilitates us to generate structured SPARQL queries to automatically update the responses over time. The pre-existing annotations eliminate the need for entity linking as a pre-processing step. We leverage the, the December 31, 2023 Wikidata dump as the knowledge source. We annotate PAT-Questions for two different timestamps of Wikidata to compare the performance of the LLMs. The overall procedure of our data construction is illustrated in Figure 2 and formally defined in Algorithm 1. Step 1: TEMPREASON Templates Modification TEMPREASON templates by Dhingra et al. (2022), consist of time-sensitive facts (s, r, o, \u03c4s, \u03c4e), with s representing the subject, r the relation, o the object, and \u03c4s and \u03c4e denoting the start and end times of the fact. 
We adapt two types of TEMPREASON templates (T): i) (s, r, ?, \u03c4) (where \u03c4 lies between \u03c4s and \u03c4e) becomes our single-hop PAT template (s, r, ?, \u03c4cur) where \u03c4cur is the current time, and ii) (s, r, ?, \u03c4 \u227a\u03c4\u03b1) (where \u03b1 is an object related to (s, r) pair facts with distinct \u03c4s and \u03c4e, and \u03c4 \u227a\u03c4\u03b1 is the time range immediately preceding \u03c4s) transforms into (s, r, ?, \u03c4 \u227a\u03c4\u03b1cur) where \u03c4 \u227a\u03c4\u03b1cur represents the time range immediately preceding the start time of the current object of the (s, r) pair fact. These rules are outlined formally in Table 2. Our templates (TPat) are challenging as they don\u2019t explicitly specify the Algorithm 1 Construct PAT-Questions Dataset Require: TEMPREASON dataset, D, TEMPREASON templates T Ensure: PAT-Questions 1: TP at \u2190[] \\\\ Single-hop templates 2: for each template, t \u2208T do 3: \\\\t = (s, r, \u03c4), or t = (s, r, \u03b1) 4: if \u03c4 \u2208t then 5: TP at \u2190Replace(\u03c4, \u2018currently\u2019) \u222aTP at 6: else if \u03b1 \u2208t then 7: TP at \u2190Replace(\u03b1, \u2018current\u2019+equiv(r)) \u222aTP at { using rules from Table 6} 8: S = [subjects(D)] \\\\ Wikidata subjects for all TEMPREASON questions 9: Ps = CreateQAInstances(SPARQL(S, TP at))) 10: F = Filter(multiFacts(Ps)) 11: TmP at \u2190[] \\\\ Multi-hop templates 12: for each relation ri \u2208F do 13: TmP at \u2190 Insert(ri, TP at) \u222a TmP at { using rules from Table 7} 14: Pm =CreateQAInstances(SPARQL(S, TmP at)) 15: PAT-Questions = Ps \u222aPm 16: return PAT-Questions current time \u03c4cur or object \u03b1, unlike the original TEMPREASON templates. Steps 1-7 of Algorithm 1 depict Step 1, illustrated with examples in Figure 2. Our PAT-Questions single-hop templates are available in Table 6 in Appendix A. Step 2: Simple QA instances Creation and Annotation We filter the subject entities from original TEMPREASON questions for which the PATQuestions are valid. Based on Tan et al. (2023a)\u2019s KB relation, r Rule TEMPREASON Template PAT-Questions single-hop Template member of sports (s, r, ?, \u03c4) \u2192(s, r, ?, \u03c4cur) Which team did {s} play for in \u03c4? Which team does {s} play for currently? team (P54) (s, r, ?, \u03c4 \u227a\u03c4\u03b1) \u2192(s, r, ?, \u03c4 \u227a\u03c4\u03b1cur) Which team did {s} play for before \u03b1? Which team did {s} play for before the current team? Table 2: Conversion of the TEMPREASON templates (Step 1) for the \u2018member of sports team\u2019 relation to single-hop PAT-Question templates. TEMPREASON has two templates per relation r and we convert each of the templates following the two rules shown above. For example, s is Cristiano Ronaldo, \u03c4 is 2021 and \u03b1 is Real Madrid F.C., the single-hop PAT-Questions become \u2018Which team does Cristiano Ronaldo play for currently?\u2019 and \u2018Which team did Cristiano Ronaldo play for before the current team?\u2019 KB relation, r Common relations, ri Rule PAT-Questions singlehop Template PAT-Questions multi-hop Template member of sports home venue (P115), (s, r, ?, \u03c4cur) \u2192 ((s, r, ?, \u03c4cur), ri, ?, \u03c4cur) Which team does {s} play for currently What is the home venue of the team that {s} plays for currently? team (P54) head coach (286) (\u03c4cur)? Who is the head coach of the team that {s} plays for currently? 
(s, r, ?, \u03c4 \u227a\u03c4\u03b1cur) \u2192 ((s, r, ?, \u03c4 \u227a\u03c4\u03b1cur), ri, ?, \u03c4cur) Which team did {s} play for before the current team (\u03c4 \u227a\u03c4\u03b1cur)? What is the home venue of the team that {s} played for before the current team? Who is the head coach of the team that {s} played for before the current team? Table 3: Conversion of the PAT-Questions single-hop templates to multi-hop templates (Step 4) for the \u2018member of sports team\u2019 (P54) relation to PAT-Questions multi-hop templates. approach, we insert the subjects into the single-hop PAT-Questions templates (TPat) and annotate the questions with the Wikidata IDs of the subjects and relations. Since questions are in natural language, we establish a set of SPARQL query templates to convert each natural language question into its corresponding SPARQL query (see Appendix B). We insert each (s, r) pair into the appropriate SPARQL template, and retrieve the Wikidata ID and NL label of the gold answer using the Wikidata Query Service API(Algorithm 1, lines 8-9). Note that questions are annotated for two different timestamps. We temporally organize the facts linked with (s, r) pairs, fetching the latest objects (o) (current and previous) for the 2023 version, and filtering by end date \u03c4e \u2264Dec, 2021 for the 2021 version. Step 3: Filter common facts In this step, we randomly select a subset of single-hop questionanswer pairs, capturing all templates. We extract all facts (F), including both temporal and nontemporal ones, linked to the answer Wikidata entities (Algorithm 1, line 10). We then filter common facts, i.e., Wikidata triples (s, r, o) shared among these entities, which can be temporal or static. Notably, we prioritize single-hop answer facts over subjects to construct multi-hop templates. This decision is made because single-hop answers span various types, such as sports teams, employers, heads of government/company/organization, etc., resulting in a broader range of facts for our multihop questions compared to those associated with the subjects. Step 4: Complex Templates Creation We generate multi-hop PAT-Question templates (TmPat) by integrating facts from F into the single-hop templates, TPat, and converting them into natural language following the guidelines outlined in Table 3, where ri represents one of the relations in F (Algorithm 1, lines 11-13). Note that all answers to the multi-hop templates are grounded on the time the question is posed, denoted as \u03c4cur. The multi-hop PAT templates are listed in Table 7 in Appendix A. Step 5: Complex QA instances Creation and Annotation For each filtered question in step 2, we insert the subject into its multi-hop template (Algorithm 1, line 14), annotating it with the subject, relation, intermediate entity (gold answer to the single-hop question), and intermediate relation (ri). Answers are then retrieved following step 3. In this process, we select intermediate entities and relations for insertion into SPARQL templates for most of the questions (see Appendix C), rather than Who is the head coach (P286) of the team that Jos\u00e9 Sosa (Q314241) plays for (P54) currently? SELECT ?MIDSUB ?MIDSUBLabel WHERE { wd:SUB p:PROP1 ?s . ?s ps:PROP1 ?item . ?s pq:P580 ?starttime . FILTER NOT EXISTS{ ?s pq:P582 ?endtime. }. SERVICE wikibase:label { bd:serviceParam wikibase:language \"[AUTO_LANGUAGE],en\". 
} } order by desc(?starttime) Who is the head coach (P286) of the team that Jos\u00e9 Sosa (Q314241) played for (P54) before the current team? SELECT ?MIDSUB ?MIDSUBLabel ?endtime WHERE { wd:SUB p:PROP1 ?s . ?s ps:PROP1 ?item . ?s pq:P580 ?starttime . ?s pq:P582 ?endtime . FILTER(?endtime <=NOW()). SERVICE wikibase:label { bd:serviceParam wikibase:language \"[AUTO_LANGUAGE],en\". } } order by desc(?endtime) LIMIT 1 SELECT ?item ?itemLabel WHERE { wd: MIDSUB p: PROP2 ?s . ?s ps: PROP2 ?item . ?s pq:P580 ?starttime . FILTER NOT EXISTS{ ?s pq:P582 ?endtime .}. SERVICE wikibase:label { bd:serviceParam wikibase:language \"[AUTO_LANGUAGE],en\". } } order by desc(?starttime) SUB = Q314241 PROP1 = P54 SUB = Q314241 PROP1 = P54 MIDSUB = Q214940 PROP2 = P286 MIDSUB = Q6601875 PROP2 = P286 item : Q3288172 itemLabel : Eduardo Dom\u00ednguez NL Question Step 1 Step 2 Item : Q8080212 itemLabel : \u0130smail Kartal Current team\u2019s Head Coach Previous team\u2019s Head Coach MIDSUB: Q6601875 MIDSUBlabel : Fenerbah\u00e7e SK Previous team MIDSUB: Q214940 MIDSUBlabel : Estudiantes de La Plata Current team Answer Figure 3: Illustration of automatic answer-updates to two multi-hop PAT-Questions via SPARQL templates the original subject and relation. Some questions are filtered out due to missing facts, like spouse (P26), founder (P112), etc. Detailed statistics for PAT-Questions are provided in Table 4. We randomly select 1000 PAT-Questions and manually verify the accuracy of the annotated answers. We also filter out any questions where the answer cannot be retrieved via the SPARQL query. Type Category single-hop multi-hop current 1442 1617 before-current 1440 1673 Total # questions 2882 3290 6172 Table 4: Dataset Statistics of PAT-Questions Automatically Updating the Answers of PATQuestions. The questions in our dataset are timesensitive, with answers expected to change periodically. While the most recent object of Wikidata facts may change, the subject and relation remain constant (Example in Figure 1(a)). Thus, the SPARQL template associated with each question consistently retrieves the latest answer without requiring manual intervention. This functionality empowers users to update the answers to PATQuestions any time they want. An illustration of the answer update process is provided in Figure 3. Most questions include facts prone to change every six months or longer. To ensure that the research community has the latest answers, we commit to quarterly updates each year, executed through a cronjob running SPARQL queries automatically. 4 Experiments We conduct experiments on 5 LLMs that have been significantly successful in QA tasks but do not have access to up-to-date world knowledge, including Falcon-7B-Instruct (fal), Flan-T5-XL (Chung et al., 2022), Llama-2-7B (Touvron et al., 2023), Mistral7B (Jiang et al., 2023), and GPT-3.5 (Brown et al., 2020) in a direct prompting setting and a RAG setting. We also modify the existing setting of TEMPREASON-T5-SFT by Tan et al. (2023a) to evaluate PAT-Questions in a RAG setting. We compare the results of direct prompting setting at two different timestamps: December 2021 and December 2023. Given that the cutoff date of the LLMs\u2019 knowledge is \u22652021, they should ideally know the answers for December 2021. 4.1 Experimental Setup Directly Prompting the Pre-trained Models In this experimental setting, we feed each question to the LLMs and instruct the LLMs to answer the question in a few words to avoid verbosity for EM comparisons (see Section 4.1). 
We use the HuggingFace library for the open-source models and GPT-3.5-Chat API with a temperature of 0. For the 2021 evaluation of the open-source models, we prepend the question with \u201cAssume it is now December 2021,\" to ensure the fairness of the comparisons with the 2021 gold annotations and with GPT-3.5 for which the cutoff date is January 2022. Retrieval-Augmented Generation (RAG) In this setting, we augment the LLMs\u2019 answer-generation capabilities with retrieval. We retrieve up to five Wikipedia documents for each question usSingle-hop Multi-hop 2023 2021 2023 2021 EM F1 EM F1 EM F1 EM F1 Falcon-7B 4.4 5.7 7.8 5.8 2.5 5.6 4.4 6.5 Falcon-7B-w-RAG 8.1 4.9 4.7 2.9 Flan-T5-XL 2.0 5.5 2.1 6.0 1.5 5.4 2.8 9.7 Flan-T5-XL-w-RAG 14.9 15.8 5.1 9.5 Llama-2-7B 8.4 9.0 10.0 11.2 5.3 8.6 7.0 9.6 Llama-2-7B-w-RAG 13.9 8.7 6.6 6.0 Mistral-7B 7.4 6.4 10.5 7.5 5.7 4.7 6.1 4.8 Mistral-7B-w-RAG 12.7 5.5 5.9 2.7 GPT-3.5 11.7 11.3 13.6 13.3 9.3 7.7 9.7 8.1 GPT-3.5-w-RAG 15.5 16.5 7.6 6.6 TEMPREASON-T5-subWiki 12.0 21.4 2.3 7.9 TEMPREASON-T5-w-RAG 8.3 16.1 1.5 5.5 Table 5: The experimental results by EM Accuracy (%) and token-level F1 (%), for two categories of questions of PAT-Questions for two different snapshots of present data (Dec 2023 and Dec 2021) ing Google Custom Search (GCS) Engine 4, divide each document into chunks of 300 tokens, rank the relevance of these chunks using BM25 and finally assign the top 5 chunks as the retrieved evidence for a question from PAT-Questions dataset. Chunking is necessary in our case because the LLMs that we use have token limitations. We retrieve all the documents for all the questions on the same date (January 16, 2024) to maintain the fairness of our evaluation on the entire dataset. We prompt the LLMs using the question and the retrieved chunks and instruct the LLMs to answer in a few words using the information available in the chunks. We exclusively evaluate this method against December 2023 gold annotations since the retrieved documents contain current information. It would be illogical to retrieve data from a current knowledge source and compare it with outdated gold answers. TEMPREASON-T5 Experiments We evaluate our PAT-Questions with Tan et al. (2023b)\u2019 T5-SFT model fine-tuned on improving the reasoning capability of the large language model by temporal span extraction. Their Open-book QA (OBQA) setting assumes that the subject entity of the question is already known and they extract the Wikipedia page associated with the subject entity to provide as context to the model. However, this setting is not practical in traditional Open-Retrieval QA settings. As such, we modify their OBQA setting to suit the PATQA problem. We provide the Top 5 4https://programmablesearchengine.google.com/ BM25 Wikipedia chunks retrieved by GCS for the question as context and, and evaluate their finetuned model\u2019s performance (TEMPREASON-T5w-RAG. in Table 5). We also show a comparison with their version of the OBQA setting, meaning we extract the content of the subject entity\u2019s current Wikipedia page and provide that as context to the model (TEMPREASON-T5-subWiki in Table 5). Evaluation Metrics We employ token-level F1 (Rajpurkar et al., 2016) and Chen et al. (2023)\u2019s exact matching (EM) Accuracy metric for the LLMs where if the generated text contains an exact match to the answer or vice-versa, it is considered a correct answer. 
To address the issue where LLMs might produce an accurate yet differently phrased response to PAT-Questions, such as \"Man United\" instead of \"Manchester United F.C.,\" resulting in a zero exact match (EM) score, we annotate each answer with all possible aliases from Wikidata using SPARQL queries. For Tan et al. (2023a)\u2019s system, we use traditional Exact Match and F1 to be consistent with their evaluation. 4.2 Results and Discussion Our findings, presented in Table 5, indicate that pretrained Large Language Models (LLMs) face challenges with PAT-Questions, both single and multihop, showing very low EM scores between 1.5% to 15.5% and F1 scores ranging from 2.9% to 16.5%. Accuracy improves with document retrieval, especially for single-hop questions, due to the retrieval of up-to-date and relevant documents. Open-source Falcon-7b Flan-T5-xl Llama-2-7b Mistral-7b GPT-3.5 0 5 10 15 20 25 Outdated Answers (%) single-hop single-hop-wRAG multi-hop multi-hop-wRAG (a) outdated responses (%) by LLMs Falcon-7b Flan-T5-xl Llama-2-7b Mistral-7b GPT-3.5 0 10 20 30 40 50 60 Information not avilable (%) single-hop single-hop-wRAG multi-hop multi-hop-wRAG (b) \u2018information not available\u2019 responses (%) by LLMs Figure 4: Error distribution of the incorrect LLM responses LLMs, significantly underperform in direct prompting settings compared to GPT-3.5 for multi-hop questions. These models considerably benefit from document retrieval due to their lower initial baseline. However, the success of the RAG approach largely depends on the retrieval engine\u2019s efficiency, which in our case struggles more with multi-hop than single-hop questions as evidenced by the performance degradation of GPT-3.5 for multi-hop questions. Despite the LLMs\u2019 knowledge cut-off date being \u22652021, the performance compared to 2021 annotations is still very low (though better than the up-to-date annotations). This highlights the LLMs\u2019 performance gap in both PAT and multihop reasoning. Note that the F1 scores for different models show considerable variation. Flan-T5-XL and GPT-3.5 generally adhere to instructions for concise responses, leading to brief and focused answers. Conversely, other models, including GPT3.5 in certain instances, tend to produce longer responses, which, despite being accurate, result in lower F1 scores due to their verbosity. We also compare the performances of TEMPREASON-T5 model with two different contexts: the subject\u2019s Wikipedia page and the documents retrieved by GCS. Although the model is specialized in temporal reasoning on the subject\u2019s Wikipedia content, it shows low accuracy on both single and multi-hop PAT-Questions. However being fine-tuned on single-hop temporal facts from Wikidata, the model demonstrates comparable results with the open-source LLMs on single-hop questions. The performance degrades significantly for multi-hop questions and open-retrieval RAG settings due to the lack of multi-hop and PAT reasoning capabilities. We presented a random subset of 50 multi-hop PAT-Questions to New Bing and GPT-4 Web. New Bing accurately answered 9 questions but failed or provided incorrect responses for the remaining 41. GPT-4, on the other hand, correctly answered 6 questions, inaccurately responded to 6, and indicated that information was unavailable for the remaining 38 questions. This comparison highlights the challenges both the services face in handling multi-hop PAT-Questions (see Appendix D). 
Error Analysis Figure 4 shows the error distribution of the LLM-generated answers. Figure 4a shows the percentage of outdated answers and Figure 4b shows the percentage of \u2018information not available\u2019 or similar responses out of the incorrect responses of the LLMs based on EM. The responses of Llama-7b, Mistral-7b and GPT-3.5 (especially GPT-3.5) are more grounded to the information available in their parametric memory till the cut-off date for single-hop questions, whereas FlanT5-XL and Falcon-7b are more likely to generate fake or misinformed responses when not prompted with RAG. Almost all the LLMs struggle in multihop reasoning. GPT-3.5 is more cautious in answering present-centric questions and is more likely to respond with \u2018I do not have real-time information\u2019 than responding with an incorrect or outdated answer (see Appendix D and E for more details). 5 Conclusion In this paper, we introduced a novel self-updating dataset, PAT-Questions, of present-anchored temporal questions requiring both single and multi-hop reasoning on complex temporal relations. We provide a detailed evaluation in both direct prompting and RAG settings of the SOTA LLMs and TEMPREASON-T5 on PAT-Questions, and present the limitations of the LLMs in PATQA. The findings indicate a significant gap in LLMs\u2019 reasoning capabilities when addressing PAT-Questions. We provide an automatic answer updating system for the research community to retrieve the up-to-date answers of PAT-Questions. Limitations Our self-updating system depends on an up-to-date knowledge base. We use the Wikidata knowledge base (KB), which may occasionally experience refreshing delays, potentially desynchronizing some gold annotations. Further, we retrieved documents for the PAT-Questions in our RAG pipeline solely using Google Custom Search API. However, this aspect is less significant given that our primary focus is not improving retrieval accuracy. Additionally, the scope of our multi-hop questions is currently limited to 2-hops, which already pose significant challenges for LLMs. We leave 2+-hop questions for future work. Ethics Statement We built our dataset entirely from publicly available information on Wikidata. No personal or restricted data were collected from any source or subject. Although the LLMs may sometimes generate fake information i.e. hallucinate, our experiments do not involve LLMs in creating any harmful content and, thus raise no ethical concern. We adhere to the Code of Ethics with our work." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2403.09724v1", |
| "title": "ClaimVer: Explainable Claim-Level Verification and Evidence Attribution of Text Through Knowledge Graphs", |
| "abstract": "In the midst of widespread misinformation and disinformation through social\nmedia and the proliferation of AI-generated texts, it has become increasingly\ndifficult for people to validate and trust information they encounter. Many\nfact-checking approaches and tools have been developed, but they often lack\nappropriate explainability or granularity to be useful in various contexts. A\ntext validation method that is easy to use, accessible, and can perform\nfine-grained evidence attribution has become crucial. More importantly,\nbuilding user trust in such a method requires presenting the rationale behind\neach prediction, as research shows this significantly influences people's\nbelief in automated systems. It is also paramount to localize and bring users'\nattention to the specific problematic content, instead of providing simple\nblanket labels. In this paper, we present $\\textit{ClaimVer, a human-centric\nframework}$ tailored to meet users' informational and verification needs by\ngenerating rich annotations and thereby reducing cognitive load. Designed to\ndeliver comprehensive evaluations of texts, it highlights each claim, verifies\nit against a trusted knowledge graph (KG), presents the evidence, and provides\nsuccinct, clear explanations for each claim prediction. Finally, our framework\nintroduces an attribution score, enhancing applicability across a wide range of\ndownstream tasks.", |
| "authors": "Preetam Prabhu Srikar Dammu, Himanshu Naidu, Mouly Dewan, YoungMin Kim, Tanya Roosta, Aman Chadha, Chirag Shah", |
| "published": "2024-03-12", |
| "updated": "2024-03-12", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.CY", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Misinformation and disinformation are longstanding issues, but the proliferation of AI tools that can generate informa- tion on demand has amplified these issues. Tools for fact- checking are not keeping pace with sophisticated text gener- ation techniques. Even when they are effective, they lack ap- propriate explainability and granularity to be useful to users. Studies have shown that explanations are crucial for users to build trust in AI systems [Rechkemmer and Yin, 2022; Weitz et al., 2019; Shin, 2021]. There is a need for a novel \u2217Work does not relate to position at Amazon. HealthFeedback.org ClaimVer \u201cAutism used to be 1 in 10,000. Now it's 1 in 50. Now, where it all coming from? Vaccines are doing it.\u201d 1Autism used to be 1 in 10,000. Now it's 1 in 50. 2Now, where it all coming from? 3Vaccines are doing it. R1: Prevalence of autism is not directly supported or refuted. R2: : Origin of the increase in autism prevalence is not addressed. R3: Statement that vaccines are causing the increase in autism prevalence is directly contradicted by the triplet [('autism', 'does not have cause', 'vaccine')] \u201cImage shows mismatch between Neil Armstrong\u2019s spacesuit and boot print left on the Moon, therefore Moon landing was a hoax.\u201d ClaimVer 1Image shows mismatch between Neil Armstrong\u2019s spacesuit and boot print left on the Moon, 2therefore Moon landing was a hoax. R1: specific claim about the mismatch between the spacesuit and boot print is not directly supported or refuted R2: : The triplets directly state that the Moon landing was a significant event and an instance of the Apollo 11 mission, which contradicts the claim that the Moon landing was a hoax. [('Apollo 11', 'crew member(s)', 'Neil Armstrong'), ('Apollo 11','significant event', 'Moon landing'), ('Apollo 11', 'instance of', 'Moon landing')] Q38404 (Vaccine): neurodevelopmental condition Q134808 (Autism): biological preparatory medicine that improves immunity to a particular disease Inaccurate: The link between vaccines and autism has already been disproved in several studies. B) A) AFP Fact Check rating: False Q1615 (Neil Armstrong): American astronaut; first person to walk on the moon Q495307 (Moon landing): arrival of a spacecraft on the surface of the Moon Q223571 Q190868 Q405 Q190084 Q190084 Q18218093 \u2026. Figure 1: Demonstration of ClaimVer for claim verification and evi- dence attribution. (A) Text labeled as Inaccurate by HealthFeedback and ClaimVer\u2019s predictions, rationale, and evidence. (B) Text la- beled as False by Google Fact Check Tools and ClaimVer\u2019s outputs. Predictions are color-coded (amber: extrapolatory, red: contradic- tory); Ri: rationale; related wiki entities are displayed in boxes. human-centric approach to text verification that provides us- able and appropriately granular explanations that can not only inform but also educate the user. Most fact-checkers, including widely used ones in deploy- ment, issue blanket predictions that can lead to user misun- derstanding. For instance, in Figure 1 (A), we observe that HealthFeedback,1, a fact-checker for medical text, indicates that a misleading statement about the increase in Autism is inaccurate. However, there are multiple claims made in that text, which are not addressed by this tool. In fact, research 1https://healthfeedback.org/ arXiv:2403.09724v1 [cs.CL] 12 Mar 2024 does show that Autism cases have increased, but this is mostly attributed to increased testing [Russell et al., 2015]. 
Our method accurately breaks down the text into multiple claims and shows that the specific claim that vaccines are causing autism is indeed incorrect, attributing it to a fact from the Wikidata [Vrande\u02c7 ci\u00b4 c and Kr\u00a8 otzsch, 2014]. It also provides a clear rationale as to why the first two claims cannot be deter- mined, as there\u2019s no conclusive evidence present in the KG. Such granular predictions, supported by justifications, signifi- cantly improve user confidence [Rechkemmer and Yin, 2022; Weitz et al., 2019; Shin, 2021]. Similarly, in Figure 1 (B), we notice that Google Fact Check Tools2 simply provides a blanket label for an utterance denying the moon landing. In contrast, ClaimVer identifies the exact text span that can be conclusively proven incorrect and proceeds to provide specific information about the Apollo 11 mission and its crew members to refute the claim. All ver- ified entities present in the text, along with their Wiki IDs and descriptions, are displayed for user reference. Prior research [Rashkin et al., 2023; Yue et al., 2023; Thorne et al., 2019; Aly et al., 2021] typically validates text at the paragraph or sentence level without adequately enhanc- ing user awareness by supplying key details such as rationale, match scores, or evidence. A KG-based approach allows for finer granularity, aiding in pinpointing specific inaccuracies like hallucinations in LLM-generated text or false claims in misleading text. Furthermore, if needed, broader-level met- rics can be extracted from this detailed attribution. The assumption of one-to-one mapping between input and reference texts, prevalent in previous methods [Rashkin et al., 2023; Yue et al., 2023; Thorne et al., 2019; Aly et al., 2021], does not hold if the given text consists of claims that can be mapped to more than one source. In contrast, utilizing a KG, which represents a consolidated body of knowledge, results in a more comprehensive evaluation. While most pre- vious methods may not support scenarios with information spread across various references, querying a KG can yield triplets originally sourced from multiple documents. Addi- tionally, procuring the specific spans of text required to eval- uate claims, from large text sources that may span several pages, presents many challenges. On the other hand, a KG captures only the most important relationships as nodes and links, and offers a more efficient way to evaluate the claims. Prior methods that depend on document indices or vector databases are not easy to maintain or audit. In contrast, exist- ing trusted KGs that are constructed through human curation provide an effective and human-centered approach for eval- uating text at scale. Therefore, we leverage KGs to build a framework that realizes our goal of performing fine-grained text verification and evidence attribution. Our framework also generates insights that boost user awareness, thereby foster- ing increased trust in automated systems.", |
| "main_content": "Research on validating text has been ongoing for the past decade, while the concept of evidence attribution has gained 2https://toolbox.google.com/factcheck/explorer increased attention in recent years, following the advent of generative models. Our method integrates fact verification and evidence attribution; therefore, we discuss recent advancements in both domains in this section. 2.1 Fact Verification Fact verification is a task that is closely related to natural language inference (NLI) [Conneau et al., 2017; Schick and Sch\u00a8 utze, 2020], in which given a premise, the task is to verify whether a hypothesis is an entailment, contradiction, or neutral. Similarly, in fact verification, the task is to check if a given text can be supported, refuted, or indeterminable, given a reference text. Recent studies in this domain show that LLMs can achieve high performance, and can be considerably reliable for verification tasks, even though they are prone to hallucations [Guan et al., 2023]. In [Lee et al., 2020], the authors show that the inherent knowledge of LLMs could be used to perform fact verification. Other works [Yao et al., 2022; Jiang et al., 2023b] have shown that using external knowledge is helpful for many reasoning-intensive tasks, and report enhanced performance on HotPotQA [Yang et al., 2018] and FEVER [Thorne et al., 2018]. A wide variety of studies have established LLMs are suitable for fact verification. For example, [Dong and Smith, 2021] enhanced accuracy of table-based fact verification by incorporating column-level cell rank information into pre-training. In FactScore, authors [Min et al., 2023], introduce a new evaluation that breaks a long-form text generated by large language models (LMs) into individual atomic facts and calculates the proportion of these atomic facts that are substantiated by a credible knowledge base. 2.2 Evidence Attribution The distinction between evidence attribution and fact verification lies in the emphasis on identifying a source that can be attributed to the information. This task is becoming increasingly important, as generative models produce useful and impressive outputs, but without a frame of reference to validate them. In [Rashkin et al., 2023], the authors present a framework named AIS (Attributable to Identified Sources) that specifies annotation guidelines and underlines the importance of attributing text to an external, verifiable, and independent source. [Yue et al., 2023] demonstrate that LLMs can be utilized for automatic evaluation of attribution, operationalizing the guidelines presented in [Rashkin et al., 2023]. However, both of these works are primarily designed for the question-answering (QA) task, a primary end-user application for LLMs like ChatGPT [Achiam et al., 2023]. In contrast, our method is not restricted to QA and is designed to work with text in general. Furthermore, while these previous studies focus on sentence or paragraph levels, our approach extends to a more detailed and granular level of analysis. 3 Methodology In this section, we present the methodology for retrieving relevant triplets from the KG, fine-tuning LLM to process text at Preprocessing \u2022 NER \u2022 Coreference \u2022 KG Entity Linking \u2022 Compartmentalization KG Triplet Retrieval Algorithm Knowledge Graph + Finetuned ClaimVer LLM Outputs Steven Tyler has never been a part of the band Aerosmith. Input Text Steven Tyler has never been a part of the band Aerosmith. 
3.1 Preprocessing
Preprocessing involves the steps required to make the input text suitable for the subsequent operations. Since the nodes in a KG typically represent entities, performing Named Entity Recognition (NER) is necessary. In our work, we chose Wikidata [Vrandečić and Krötzsch, 2014] as the KG source; thus, we use an NER module suitable for Wiki entities [Gerber, 2023]. However, the framework is sufficiently generic to support any kind of KG that models information in the form of triplets. As our analysis is performed at the claim level, coreference resolution [Lee et al., 2017] becomes a necessary step to form localized claims that are semantically self-contained. If the input text exceeds the context length, which depends on design choices, compartmentalization is required. As a final step in preprocessing, we perform KG entity linking, which tags all entities in the text that are present in the KG as nodes.
3.2 Relevant Triplets Retrieval
Retrieving relevant triplets is a complex problem that has attracted attention from various research communities and has resulted in multiple approaches. While retrieving direct links between two given nodes in a KG is relatively straightforward, identifying complex paths that involve multiple hops is challenging. In our framework, we use Woolnet [Gutiérrez and Patricio, 2023], a multi-node Breadth-First Search (BFS) algorithm, to retrieve the most relevant triplets present in the KG for a given claim. This BFS algorithm initiates from multiple starting points and, at each step, searches for and processes all adjacent neighbors before advancing. It constructs a subgraph of visited nodes, tracking their origins and distances from each BFS's start. The algorithm expands each search tree one node at a time until paths intersect or reach a predefined maximum length. Upon intersection, it assesses whether the discovered path meets the length criteria. If so, it logs the route, using backtracking to trace the path to its origins while ensuring there are no repetitions or cycles, thus maintaining a connection to a starting node. In our experiments, we allow a maximum of three hops between any two given nodes and a maximum of four potential paths; relaxing these constraints yields less relevant triplets.
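To make the retrieval idea concrete, the following is a minimal Python sketch of a multi-source BFS over KG triplets that connects pairs of seed entities within a hop limit. It is an illustration under stated assumptions (in-memory triplet list, undirected traversal, simple de-duplication), not the Woolnet implementation; names such as retrieve_paths, max_hops, and max_paths are ours.

```python
from collections import defaultdict, deque

def retrieve_paths(triplets, seed_entities, max_hops=3, max_paths=4):
    # Build an undirected adjacency list: entity -> [(neighbor, triplet)].
    adj = defaultdict(list)
    for s, p, o in triplets:
        adj[s].append((o, (s, p, o)))
        adj[o].append((s, (s, p, o)))

    paths = []
    # One BFS frontier per seed entity; each queue item tracks its origin seed,
    # the current node, and the list of triplets traversed so far.
    queue = deque((seed, seed, []) for seed in seed_entities)
    while queue and len(paths) < max_paths:
        origin, node, path = queue.popleft()
        if len(path) >= max_hops:
            continue
        for neighbor, triple in adj[node]:
            if triple in path:  # do not reuse an edge (no cycles over the same triplet)
                continue
            new_path = path + [triple]
            # Keep a path once it reaches a different seed entity within the hop limit.
            if neighbor in seed_entities and neighbor != origin and new_path not in paths:
                paths.append(new_path)
            queue.append((origin, neighbor, new_path))
    return paths

# Toy usage with hypothetical triplets:
kg = [("Aerosmith", "has part(s)", "Steven Tyler"), ("Aerosmith", "genre", "hard rock")]
print(retrieve_paths(kg, {"Steven Tyler", "Aerosmith"}))
# -> [[('Aerosmith', 'has part(s)', 'Steven Tyler')]]
```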
3.3 Objective Function
Previous works on evidence attribution tasks have established definitions for the categorization of input text with reference to a supporting source [Rashkin et al., 2023; Gao et al., 2023; Bohnet et al., 2022; Yue et al., 2023]. Similar to the formulation in [Yue et al., 2023], we use three categories: Attributable, Extrapolatory, and Contradictory. However, two main differences distinguish our approach from previous methods. First, we verify the input text against facts present in a KG, an aggregated information source constructed by integrating numerous data sources into a structure of triplets, instead of relying on a single reference. This eliminates the one-to-one dependency between the text and its information source. Second, we perform attribution with finer granularity, specifically at the claim level, which involves a subtask of decomposing the input text into individual claims. We define our categories as follows:
• Attributable: The triplets fully support the claim.
• Extrapolatory: The triplets lack sufficient information to evaluate the claim.
• Contradictory: The triplets contradict the claim.
We formulate the objective function of our task as follows:
f(\mathrm{input\_text}, \mathrm{ret\_triplets}) = \{(\mathrm{claim\_span}_i, \mathrm{claim\_pred}_i, \mathrm{rel\_triplets}_i, \mathrm{rationale}_i)\}_{i=1}^{n} \quad (1)
where:
• input_text: the input text containing the claim(s).
• ret_triplets: the set of triplets retrieved for the input text.
• claim_span_i: the i-th claim, extracted as a substring of input_text.
• claim_pred_i: the predicted label for claim_span_i.
• rel_triplets_i: the subset of ret_triplets that supports, refutes, or is extrapolatory for claim_span_i.
• rationale_i: the justification for claim_pred_i.
• n: the total number of claims automatically extracted from input_text.
This objective function encompasses two main sub-tasks: (1) decomposing input_text into claims, and (2) generating a prediction and a corresponding rationale for each claim by identifying the relevant supporting triplets.
3.4 Fine-tuning LLMs
The objective function shares similarities with the well-studied task of NLI [Conneau et al., 2017; Schick and Schütze, 2020]. LLMs achieve state-of-the-art performance on NLI [Chowdhery et al., 2023], making them a suitable choice to operationalize the objective function. Additionally, [Yue et al., 2023] shows that LLMs can be used to automatically evaluate attribution to a given information source. However, these prior methods do not involve a complex sub-task that is central to the proposed objective function: decomposing the input text into text spans that correspond to separate claims when multiple claims are present. It is crucial to perform both claim decomposition and attribution for all claims in a single step, as processing each claim individually can lead to an exponential increase in LLM queries, resulting in significantly higher computational costs and latency. To perform attribution at the claim level, we need to fine-tune LLMs specifically for the proposed objective function (see §3.3) using a custom dataset. This is necessary because, as of this writing, even the state-of-the-art model, OpenAI's GPT-4 [Achiam et al., 2023], does not perform satisfactorily out of the box. Further details on the dataset are provided in §4. On every prediction, a membership check on the relevant triplets is performed for additional verification.
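As a hedged illustration of the per-claim output implied by Eq. (1) and of the membership check mentioned above, the following Python sketch defines one possible container for a claim-level result; the field names mirror Eq. (1), while the helper is an assumption of ours rather than the paper's exact procedure.

```python
from dataclasses import dataclass
from typing import List, Tuple

Triplet = Tuple[str, str, str]

@dataclass
class ClaimResult:
    """One element of the set produced by f(input_text, ret_triplets) in Eq. (1)."""
    claim_span: str              # substring of the input text
    claim_pred: str              # "Attributable" | "Extrapolatory" | "Contradictory"
    rel_triplets: List[Triplet]  # subset of the retrieved triplets cited as evidence
    rationale: str               # justification for the prediction

def triplets_are_members(results: List[ClaimResult], ret_triplets: List[Triplet]) -> bool:
    """Membership check: every cited triplet must come from the retrieved set."""
    retrieved = set(ret_triplets)
    return all(t in retrieved for r in results for t in r.rel_triplets)
```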
We selected five open-source LLMs with diverse sizes, ranging from 2.7B to 13B parameters, for the fine-tuning: Phi-2 2.7B [Javaheripi et al., 2023], Mistral-Instruct 7B [Jiang et al., 2023a], Zephyr-Beta 7B [Tunstall et al., 2023], Solar-Instruct 10.7B [Kim et al., 2023], and Llama2-chat 13B [Touvron et al., 2023]. The models were fine-tuned using LoRA [Hu et al., 2021] with 4-bit quantization and adapters of rank 8 [Dettmers et al., 2024]. The context length was set to 1024 tokens. All models converged after 2 epochs, and each achieved a ROUGE-L [Lin, 2004] score greater than 0.635. The instruction prompt used for fine-tuning is presented in Figure 3.
Figure 3 (instruction prompt for operationalizing the objective function): Analyze text against provided triplets, classifying claims as "Attributable", "Contradictory", or "Extrapolatory". Justify your classification using the following structure: "text span": Text under evaluation. "prediction": Category of the text (Attributable/Contradictory/Extrapolatory). "triplets": Relevant triplets (if any, else "NA"). "rationale": Reason for classification. For multiple claims, number each component (e.g., "text span1", "prediction1", ...). Use "NA" for inapplicable keys. Example: "text span1": "Specific claim", "prediction1": "Attributable/Contradictory/Extrapolatory", "triplets1": "Relevant triplets", "rationale1": "Prediction justification", ... Input for analysis: -Text: {Input Text} -Triplets: {Retrieved Triplets}
3.5 Computing Attribution Scores
For various downstream tasks, such as ranking and filtering, a continuous score that reflects the validity of a given piece of text with respect to a KG is desirable. We propose the KG Attribution Score (KAS), which provides this at a high level of granularity and is detailed in this section.
Claim Scores
cs(y_i) = \begin{cases} 2 & \text{if } y_i = \text{Attributable} \\ 1 & \text{if } y_i = \text{Extrapolatory and } |\mathrm{rel\_triplets}_i| > 0 \\ 0 & \text{if } y_i = \text{Extrapolatory and } |\mathrm{rel\_triplets}_i| = 0 \\ 0 & \text{if } y_i = \text{No attribution} \\ -1 & \text{if } y_i = \text{Contradictory} \end{cases} \quad (2)
where y_i is claim_pred_i. For each claim, we assign a score that reflects its level of validity, ranging from -1 (contradictory) to 2 (attributable). If a claim is predicted to be extrapolatory yet has one or more relevant triplets, we assign it a score of 1, as there is still relevant information available even though it may not be sufficient to completely support or refute the claim. However, if there are no triplets at all along with an extrapolatory prediction, we assign 0, as it does not add much information. While decomposing claims, the model might occasionally omit words, typically stop-words, and we assign 0 in those cases as well.
Triplets Match Score (TMS)
This score reflects the extent of the match between the relevant triplets and the corresponding claim, and it can also serve as a proxy for prediction confidence. Even though the prediction is made at the claim level, the triplets match score considers word-level matches in its computation. It is computed as follows:
TMS(E(\mathrm{claim\_span}_i), E(\mathrm{rel\_triplet}_i)) = \alpha \cdot SS(E(\mathrm{claim\_span}_i), E(\mathrm{rel\_triplet}_i)) + \beta \cdot EPR(E(\mathrm{claim\_span}_i), E(\mathrm{rel\_triplet}_i)) \quad (3)
where E(claim_span_i) and E(rel_triplet_i) represent the sets of entities in claim_span_i and rel_triplet_i, respectively. SS is the semantic similarity, computed as the cosine similarity of text embeddings, and EPR is the ratio of entities in E(claim_span_i) that are also present in E(rel_triplet_i). The parameters α and β can be adjusted as needed; in our experiments, we use 0.5 for both. In cases where examples of an entity retrieved from the KG are used to support the prediction instead of the entity itself, there may be no direct overlap, and semantic similarity is therefore helpful; EPR, in turn, rewards the direct use of the entity, so a balance between the two is ideal in most cases.
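A small Python sketch of Eqs. (2) and (3) follows. The embedding function is left abstract (any sentence-embedding model would do), and joining the entity sets into strings before embedding is our assumption; the paper does not specify how SS is computed over entity sets.

```python
import numpy as np

def claim_score(pred: str, rel_triplets) -> int:
    """cs(y_i) from Eq. (2)."""
    if pred == "Attributable":
        return 2
    if pred == "Extrapolatory":
        return 1 if rel_triplets else 0
    if pred == "Contradictory":
        return -1
    return 0  # "No attribution" / omitted spans

def triplets_match_score(claim_entities: set, triplet_entities: set,
                         embed, alpha: float = 0.5, beta: float = 0.5) -> float:
    """TMS from Eq. (3): alpha * SS + beta * EPR."""
    a = embed(" ".join(sorted(claim_entities)))
    b = embed(" ".join(sorted(triplet_entities)))
    ss = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))  # cosine similarity
    epr = len(claim_entities & triplet_entities) / max(len(claim_entities), 1)
    return alpha * ss + beta * epr
```

With α = β = 0.5, as used in the experiments, a perfect entity overlap together with an embedding similarity of 1 yields a TMS of 1.0 under this sketch.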
KG Attribution Score (KAS)
For the final KG Attribution Score (KAS), a continuous score between 0 and 1 is desirable, as this facilitates various downstream applications such as ranking, fine-tuning, and filtering. This can be achieved using a sigmoid function. However, the standard sigmoid treats positive and negative scores equally, whereas in most cases erroneous text should be penalized more strongly than valid text is rewarded. This requirement is met with a modified sigmoid function that penalizes mistakes by a factor of γ:
\sigma_{\mathrm{mod}}(x, \gamma) = \frac{1}{1 + e^{-\gamma \cdot x}}, \qquad \gamma = \begin{cases} 3 & \text{if } x < 0 \\ 1 & \text{if } x \ge 0 \end{cases} \quad (4)
In our experiments, we set the value of γ to 3. Finally, the modified sigmoid, applied to the aggregation of the triplet match scores and claim scores, is used to generate KAS:
KAS = \sigma_{\mathrm{mod}}\left( \sum_{i=1}^{n} TMS_i \cdot cs(y_i), \; \gamma \right) \quad (5)
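To show how Eqs. (4) and (5) combine the per-claim scores, here is a short sketch. As a hedged observation, the KAS values reported in Figure 2 and Table 2 appear consistent with averaging the TMS_i · cs(y_i) terms over the n claims rather than summing them as Eq. (5) is written, so the aggregation below exposes both options.

```python
import math

def sigma_mod(x: float, gamma_neg: float = 3.0, gamma_pos: float = 1.0) -> float:
    """Modified sigmoid of Eq. (4): steeper slope for negative aggregates."""
    gamma = gamma_neg if x < 0 else gamma_pos
    return 1.0 / (1.0 + math.exp(-gamma * x))

def kas(tms_scores, claim_scores, reduce="mean") -> float:
    """KAS of Eq. (5); 'mean' appears to match the reported values (assumption)."""
    terms = [t * c for t, c in zip(tms_scores, claim_scores)]
    x = sum(terms) / len(terms) if reduce == "mean" else sum(terms)
    return sigma_mod(x)

# Single contradictory claim from Figure 2: TMS 1.0, cs = -1  ->  ~0.047, matching the figure.
print(round(kas([1.0], [-1]), 3))
# Two attributable claims from Table 2, example 1: TMS 0.852 and 0.637, cs = 2 each -> ~0.816.
print(round(kas([0.852, 0.637], [2, 2]), 3))
```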
4 Dataset
Open-domain Question Answering (QA) datasets, such as WikiQA [Yang et al., 2015], HotPotQA [Yang et al., 2018], PopQA [Mallen et al., 2022], and EntityQuestions [Sciavolino et al., 2021], as well as fact verification datasets like FEVER [Thorne et al., 2019], FEVEROUS [Aly et al., 2021], TabFact [Chen et al., 2019], and SEM-TAB-FACTS [Wang et al., 2021], provide texts along with corresponding reference contexts or attributable information sources. However, these datasets differ significantly from the type of data required to train and test our proposed objective function, primarily for two reasons. First, these datasets predominantly offer samples that are inherently attributable. To address this limitation, prior work in attribution evaluation [Yue et al., 2023] introduced new samples by modifying correct answers to generate contradictory instances. However, this adjustment alone is not adequate for our use case because our method requires attribution at the claim level and necessitates the automatic decomposition of claims. Consequently, as this task represents a novel challenge, we developed a new dataset that enables effective training and testing of the objective function. Considering the choice of our KG, which is Wikidata [Vrandečić and Krötzsch, 2014], we opted for WikiQA [Yang et al., 2015], as it is closely associated with the Wiki ecosystem. Given that our method is designed for text validation in general, not limited to question answering, we retain only the answers and discard the questions. Subsequently, we processed the answers following the steps detailed in Section 3.1, selecting entries containing two or more Wiki entities. This approach resulted in the exclusion of most single-word answers and of other responses that depend on their corresponding questions and may lack comprehensibility without them. We utilize GPT-4 [Achiam et al., 2023] to generate the initial version of the ground truth, as knowledge distillation [Gou et al., 2021] has proven to be an effective strategy. Although GPT-4 can adhere to the instructions (refer to Figure 3) to a reasonable degree and responds in the required format with all necessary keys, it still underperforms on the overall task. The most frequent issue observed is the erroneous assignment of prediction labels. After post-processing, we conducted manual checks to ensure only high-quality samples were retained, as research indicates that high alignment can be achieved with as few as 1,000 samples, provided they are of superior quality [Zhou et al., 2023]. The final dataset comprises two splits: a training split, based on the training split of WikiQA [Yang et al., 2015], and a test split, derived from both the test and validation splits. The training split contains 3,400 samples, and since some entries feature multiple claims, there are a total of 5,343 claims within this split. Similarly, the test split includes 1,000 samples and 1,675 claims. The label counts for the claims are tabulated in Table 1.
Table 1: Distribution of the fine-tuning dataset (Att: Attributable, Ext: Extrapolatory, Con: Contradictory).
Split | Samples | Claims | Att | Ext | Con
Train | 3400 | 5343 | 2964 | 1485 | 894
Test | 1000 | 1675 | 858 | 492 | 325
5 Experiments and Results
In this section, we present the evaluation of our claim-level attribution method. The performance metrics of the fine-tuned LLMs, which operationalize the objective function, are presented in Tables 3 and 4. In Table 3, we observe that all models converge and achieve sufficiently high ROUGE-L and ROUGE-1 scores, with Solar-Instruct achieving the highest, at 0.655 and 0.693, respectively. We also observe that the smaller model, Phi-2, with just 2.7B parameters, is sufficiently capable for this task, attaining a decent ROUGE-L score of 0.635. The average accuracies on the test set, however, vary significantly across the models, with Solar-Instruct reporting the highest at 89.31%. The reason behind this variation is that these scores account for both sub-tasks of the objective function (see §3.3): decomposing the input text into claims and generating predictions for each claim. The first task, decomposing text into multiple claims, is somewhat subjective, and there can be multiple valid decompositions due to linguistic complexities. We impose a strict strategy while computing accuracy: the text span of the claim, the identified relevant triplets, and the prediction label must all exactly match the ground truth to be considered accurate.
Table 2: Examples of claim-level attribution by the proposed method (model: Solar-Instruct). For each example we list the numbered claims in the input text, the relevant triplets per claim, the prediction with its Triplets Match Score (TMS), the rationale behind each prediction, and the overall Knowledge Graph Attribution Score (KAS).
Example 1 (KAS 0.818). Input: (1) George O'Malley is a fictional character from the medical drama television series Grey's Anatomy, (2) which airs on the American Broadcasting Company (ABC) in the United States. Relevant triplets: 1: [("Grey's Anatomy", 'characters', "George O'Malley")]; 2: [("Grey's Anatomy", 'original broadcaster', 'American Broadcasting Company'), ('American Broadcasting Company', 'country', 'United States of America')]. Predictions (TMS): 1: Attributable (0.852); 2: Attributable (0.637). Rationale: 1: The triplet directly supports the claim that George O'Malley is a character in Grey's Anatomy. 2: The triplets confirm that Grey's Anatomy airs on ABC, which is based in the United States, directly supporting the claim about the show's broadcasting and location.
Example 2 (KAS 0.752). Input: (1) Bane was portrayed as a tertiary villain by Robert Swenson in Batman & Robin, (2) directed by Joel Schumacher, (3) and Batman's Back. Relevant triplets: 1: [('Batman & Robin', 'cast member', 'Robert Swenson')]; 2: [('Batman & Robin', 'director', 'Joel Schumacher')]; 3: NA. Predictions (TMS): 1: Attributable (0.788); 2: Attributable (0.882); 3: Extrapolatory (0.0). Rationale: 1: The triplet directly supports the claim that Robert Swenson was involved in Batman & Robin, which is a requirement for the statement about his portrayal of Bane. 2: The triplet directly supports the claim about the director of Batman & Robin, which is relevant to the context of the film. 3: There are no triplets that directly support or refute the claim about Batman's Back.
Example 3 (KAS 0.719). Input: (1) Crater Lake is the main feature of Crater Lake National Park (2) and famous for its deep blue color and water clarity. Relevant triplets: 1: [('Crater Lake', 'located in protected area', 'Crater Lake National Park')]; 2: NA. Predictions (TMS): 1: Attributable (0.942); 2: Extrapolatory (0.0). Rationale: 1: The triplet directly supports the claim that Crater Lake is a significant feature within Crater Lake National Park, as it is located within the protected area. 2: There are no triplets provided that directly support or refute the claim about the deep blue color and water clarity of Crater Lake.
Example 4 (KAS 0.583). Input: (1) Based in Blagnac, France, a suburb of Toulouse, (2) and with significant activity across Europe, (3) airbus produces approximately half of the world's jet airliners. Relevant triplets: 1: [('Airbus Operations S.A.S.', 'country', 'France'), ('Airbus Corporate Jets', 'headquarters location', 'Toulouse'), ('Blagnac', 'country', 'France')]; 2: NA; 3: NA. Predictions (TMS): 1: Attributable (0.505); 2: Extrapolatory (0.0); 3: Extrapolatory (0.0). Rationale: 1: The triplets confirm that Airbus Operations S.A.S. is in France, Airbus Corporate Jets is headquartered in Toulouse, and Blagnac is a suburb of Toulouse in France, supporting the statement about Airbus's location in France and its proximity to Toulouse. 2: The triplets do not provide information about Airbus's activity across Europe. 3: The triplets do not provide any information about Airbus's production output or market share.
Example 5 (KAS 0.248). Input: (1) Pope Benedict XVI never appointed anyone significant within the Catholic Church, (2) nor did he ever teach the importance of understanding God's redemptive love. Relevant triplets: 1: [('Rutilio del Riego Jáñez', 'appointed by', 'Benedict XVI'), ('Rutilio del Riego Jáñez', 'religion or worldview', 'Catholic Church')]; 2: [('God', 'said to be the same as', 'love')]. Predictions (TMS): 1: Contradictory (0.781); 2: Extrapolatory (0.065). Rationale: 1: The triplets directly contradict the claim by showing that Pope Benedict XVI did indeed appoint someone (Rutilio del Riego Jáñez) who is associated with the Catholic Church, indicating that he did appoint significant individuals within the Church. 2: While the triplets indicate that God is equated with love, they do not directly address whether Pope Benedict XVI taught the importance of understanding God's redemptive love.
Example 6 (KAS 0.057). Input: (1) Southwest Airlines has never operated any Boeing 737 models. Relevant triplets: 1: [('Boeing 737 MAX', 'operator', 'Southwest Airlines'), ('Boeing 737 #1491', 'operator', 'Southwest Airlines')]. Predictions (TMS): 1: Contradictory (0.933). Rationale: 1: The triplets directly contradict the claim by indicating that Southwest Airlines has operated both the Boeing 737 MAX and Boeing 737 #1491, which are specific models of the Boeing 737. This refutes the statement that Southwest Airlines has never operated any Boeing 737 models.
Since these LLMs may decompose the claims slightly differently, as multiple valid options are possible, the accuracy values may appear low even while the objective function is correctly executed. For instance, example 4 in Table 2 has been decomposed into three claims, but the first could arguably be further decomposed to verify whether Blagnac is in France and whether it is a suburb of Toulouse. Controlling the precise manner of decomposition is challenging and might necessitate an additional step before prediction, involving separate processing for each claim. However, this option could prove impractical, as the number of LLM queries could increase exponentially. While the overall accuracy may not fully reflect the models' performance due to the combined assessment of the two sub-tasks, focusing solely on the prediction task offers better insight into how the models perform in terms of categorization. In Table 4, the second column indicates the number of claims with text spans exactly matching the ground-truth responses. Columns 3 to 6 present the accuracy, precision, recall, and F1 scores for these matching claims. The most performant model is Solar-Instruct, with 1,052 exact matches out of 1,675 claims in the test set. Across all models, the classification scores on all metrics are above 98%, which clearly demonstrates that the models can reliably differentiate between the classes attributable, extrapolatory, and contradictory.
Table 3: ROUGE scores and average accuracies on the test set (n = 1,000).
Model | Size | ROUGE-L | ROUGE-1 | Avg. Acc.
Phi-2 | 2.7B | 0.635 | 0.673 | 75.16%
Mistral-Instruct | 7B | 0.645 | 0.680 | 83.04%
Zephyr-Beta | 7B | 0.638 | 0.676 | 79.88%
Solar-Instruct | 10.7B | 0.655 | 0.693 | 89.31%
Llama2-Chat | 13B | 0.6395 | 0.677 | 79.41%
Table 4: Scores on matching claims in the test set (n = 1,675). #MC: number of matching claims.
Model | #MC | Acc | Prec | Rec | F1
Phi-2 | 819 | 98.29 | 98.33 | 98.29 | 98.26
Mistral-Instruct | 928 | 99.35 | 99.36 | 99.35 | 99.35
Zephyr-Beta | 757 | 98.34 | 98.38 | 98.34 | 98.32
Solar-Instruct | 1052 | 99.80 | 99.81 | 99.80 | 99.80
Llama2-Chat | 869 | 99.53 | 99.54 | 99.53 | 99.53
Table 2 showcases the claim-level attribution performed by our method. Each claim in the input text is numbered and color-coded to reflect its prediction: green for attributable, amber for extrapolatory, and red for contradictory. The examples are sorted in descending order by their KAS scores, which reflect the validity of the text. As expected, we observe more green at the top of the table and more amber and, eventually, red as we move down. Since the Wiki ecosystem is open-domain, the examples cover a wide range of topics, demonstrating that the method is adaptable to diverse inputs. In the first example in Table 2, the input text is decomposed into two claims, both of which are attributable. The first claim is supported by a single triplet in the KG, while the second claim can be supported by combining two triplets. The second example presents more challenges for evaluation due to its complex sentence structure, but ClaimVer accurately identifies that the third claim, regarding Batman's Back, is neither supported nor refuted by the triplets, as indicated in the rationale.
In the third example, we note that the first claim is predicted to be attributable with a high triplet match score of 0.942, since there is a triplet that clearly supports the location description of Crater Lake. However, as there is no information regarding the water characteristics, the second claim is categorized as extrapolatory. In the fourth example, the first claim alone requires three triplets combined as supporting evidence, illustrating the method's ability to handle complex multi-hop paths within the KG. The second and third claims are predicted to be extrapolatory, since there are no triplets concerning Airbus's market share or its activities in Europe, as highlighted in the model's rationale. It is noteworthy that the context provided in the third claim is crucial for the first claim to be comprehensible, demonstrating why individual claim evaluation may be suboptimal. Interestingly, in the fifth example, the method identifies a specific instance from the KG to refute a general claim, citing the appointment of Rutilio del Riego Jáñez. Similarly, in the sixth example, the method provides specific instances, quoting two distinct Boeing 737 models to demonstrate contradiction with a high triplet match score.
6 Discussion
The susceptibility of LLMs to generating factually incorrect statements is an alarming concern as LLM-powered services become increasingly popular for seeking advice and information. The democratization of generative models has also had adverse effects, such as increasing misinformation [Monteith et al., 2024]. To arm end-users with the tools necessary to combat being misinformed, it is crucial to develop text-validation methods that are human-centric and that prioritize user engagement, understanding, and informativeness. We design our method with these principles in mind: we make predictions at the claim level and identify text spans within the given text that can be color-coded and presented to the user. The proposed method also generates easily comprehensible explanations along with the prediction and evidence, thus reducing the cognitive burden on the end-user and making the process user-friendly. The usability and evaluation of these systems should align with human needs and capabilities. Chatbots, such as ChatGPT [Achiam et al., 2023], serve a wide array of tasks; therefore, the text validation method should be adaptable to various domains. While KGs like Wikidata [Vrandečić and Krötzsch, 2014] are considered open-domain, the implementation of more specialized KGs, along with corresponding routing algorithms, may be necessary to support a broader range of topics. For instance, a common-sense KG [Hwang et al., 2020] would be more useful in validating non-factoid answers that involve logic. Furthermore, the maintenance efficiency of our approach aligns well with the need for sustainable, long-term AI solutions. In a world where information is constantly evolving, the ability to update and maintain AI systems with minimal effort is not just a convenience but a necessity. This directly ties into the ethical implications of AI, where outdated or incorrect information can lead to harmful decisions. By leveraging existing, well-maintained KGs, we can ensure that AI systems remain accurate and relevant over time. While there are several advantages associated with using KGs, we also acknowledge the presence of known issues, such as knowledge coverage and the effort required to keep these sources up-to-date.
For our solution, we assume that the KG is up-to-date and possesses adequate coverage. However, this may not always be the case, and thus the most suitable technique should be adopted after considering the specific requirements of a particular use case. Another point to consider is that the proposed method does not provide traditional citations to articles, although it may be possible to retrieve that information from the KG if information-source mapping has been properly maintained.
7 Conclusion
In this paper, we present ClaimVer, a framework that facilitates text verification and evidence attribution at the claim level by leveraging information present in KGs. We have prioritized human-centric design principles to make the framework more informative, intuitive, and user-friendly. Additionally, our methodology incorporates design choices that ensure open access, sustainability, and reliability. ClaimVer presents several advantages, as outlined below:
1. Human-centric design: In addition to its primary functions of text verification and evidence attribution, the system generates considerable information conducive to user awareness. This information serves to educate users and enhance their trust in the automated system.
2. Finer Granularity: Performs validation at the claim level, enabling localization of hallucinations or false claims.
3. Enhanced Coverage: Eliminates the one-to-one mapping between input and reference text, allowing for layered interpretation and the handling of distributed information.
4. Domain Adaptability: Offers flexibility in adapting to new domains by switching to a more suitable KG.
5. Maintenance Efficiency: Simplifies auditing and updating of the knowledge base, ensuring the data remains current and accurate." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.00414v1", |
| "title": "Prompt-Time Symbolic Knowledge Capture with Large Language Models", |
| "abstract": "Augmenting large language models (LLMs) with user-specific knowledge is\ncrucial for real-world applications, such as personal AI assistants. However,\nLLMs inherently lack mechanisms for prompt-driven knowledge capture. This paper\ninvestigates utilizing the existing LLM capabilities to enable prompt-driven\nknowledge capture, with a particular emphasis on knowledge graphs. We address\nthis challenge by focusing on prompt-to-triple (P2T) generation. We explore\nthree methods: zero-shot prompting, few-shot prompting, and fine-tuning, and\nthen assess their performance via a specialized synthetic dataset. Our code and\ndatasets are publicly available at https://github.com/HaltiaAI/paper-PTSKC.", |
| "authors": "Tolga \u00c7\u00f6pl\u00fc, Arto Bendiken, Andrii Skomorokhov, Eduard Bateiko, Stephen Cobb, Joshua J. Bouw", |
| "published": "2024-02-01", |
| "updated": "2024-02-01", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI", |
| "I.2.7" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Large language models (LLMs) are transforming human-machine interaction with their adeptness in carrying out conversations. However, despite their proficiency in responding to queries, LLMs cannot be considered good listeners. Their limitation lies in their inability to learn from user-provided information. To utilize any data received from users beyond their context window, LLMs require support from an external system, a significant gap in their interactive capabilities. Primarily, LLMs capture knowledge during their training phase. This phase, which enables the implicit encoding of knowledge into the model\u2019s parameters, is deemed efficient in terms of data compression. However, the substantial computational power, time, and cost required for training, par- ticularly in the pre-training stage, render it impractical for prompt-driven continuous learning. LLMs are unable to capture knowledge obtained from users or through external integrations/plugins, which presents significant challenges for many AI applications. For instance, the ability of AI assistants to capture and utilize personal information in future interactions is crucial. This limitation is currently being addressed through various Retrieval-Augmented Generation (RAG) approaches. Within this realm, knowledge graphs are distinguished by their clear structures, symbolic representations, and capacity for factual reasoning, making their integration with LLMs a vibrant area of ongoing research [1, 2]. In this paper, we focus on the building blocks of prompt-driven symbolic knowledge capture. We investigate the extraction of prompts in subject-predicate-object triples1 for a predefined context (relation) through in-context learning and fine-tuning approaches. Utilizing a specially designed dataset, we aim to assess the efficacy of these methods, highlighting their strong points and identifying areas for enhancement. 1https://www.w3.org/TR/rdf12-concepts/ Preprint. Under review. arXiv:2402.00414v1 [cs.CL] 1 Feb 2024 The structure of this paper is as follows: Section 2 introduces the proposed in-context learning and fine-tuning approaches by providing examples. Section 3 describes the experimental setup by presenting the development framework, the language model selection, and the dataset creation process. Section 4 outlines our test results and their interpretations. Finally, Section 5 concludes the paper and suggests future directions.", |
| "main_content": "Triples, composed of (\u2019subject\u2019, \u2019predicate\u2019, \u2019object\u2019), are considered a universal data model thanks to their inherent simplicity and versatility. This format reflects the fundamental structure of human language and cognition, capturing the essence of any asserted statement or fact. Each triple represents a distinct atom of knowledge, with the subject and object identifying entities and the predicate describing their relationship. In our study, we have chosen triples as our data model for these characteristics. Furthermore, for ease of presentation, we employ an informal free-form triple format, which allows for greater flexibility in our discussions and examples. Generating triples based on a predefined context from user prompts can be viewed as a specific aspect of the broader text-to-graph (T2G) generation problem [3, 4]. This perspective led us to define our research problem as prompt-to-triple (P2T) generation. P2T generation entails extracting \u2019subject\u2019 and \u2019object\u2019 terms from user prompts that correspond with a \u2019predicate\u2019 drawn from a restricted vocabulary. This vocabulary consists of predefined, user-specific relations such as birthdays, anniversaries, locations, and events. A key aspect is ensuring that the \u2019predicate\u2019 term of the generated triple accurately reflects the relevant predefined relation. For example, from the prompt \u2019I was born in 1979\u2019, our goal is to generate the triple (\u2019I\u2019, \u2019birthday\u2019, \u20191979\u2019), aligning with the \u2019birthday\u2019 relation. In our research, we began by pinpointing some relevant predefined relations (aka a restricted vocabulary for the \u2018predicate\u2019 term). Following on this, we developed the requisite training and test datasets essential for addressing the problem. Building on this groundwork, we have formulated the following three methodologies to effectively tackle the P2T generation challenge 2.1 P2T zero-shot prompting Zero-shot prompting is an in-context learning technique that enables LLMs to apply their inherent knowledge to complete tasks without additional training. This approach is relevant in the T2G generation domain, especially in the multi-turn question answering form, as noted in [5]. However, the P2T generation task requires tailored zero-shot prompts due to specific predefined relations. Our approach has evolved through a series of iterative developments and tests. The critical aspects of zero-shot prompting, as explored in our research, include: \u2022 For the prompt context to match the predefined relations, the set of predefined relations must be present in the system prompt. This leads to scalability issues, as both the prompt size and processing time vary with the size of the relation set. \u2022 Due to pre-training biases, LLMs often default to assigning the sentence\u2019s verb to the \u2019predicate\u2019 term. This can result in incorrect triples when the verb doesn\u2019t match the predefined relation, despite explicit instructions for relation matching. To improve accuracy, we introduced an extra term for relation matching, which is then incorporated into the \u2018predicate\u2019 term via post-processing. Relevant cases are presented in Figure.1. Case 1 demonstrates that, in the absence of explicit instructions, the LLM generates the \u2018predicate\u2019 by utilizing the verb from the provided sentence, aligning with expectations. 
In Case 2, despite clear instructions for relation matching, the selection of either 'anniversary' or 'birthday' as the predicate is suppressed. Case 3 resolves this by using a new 'relation' term specifically for matching 'birthday' or 'anniversary'.
• LLMs can recognize that a sentence context falls outside the predefined relation set, even without explicit instruction in the zero-shot prompt; the LLM even adds a justification in the response. An instance of this scenario is depicted in Figure 2.
Figure 1: Cases of zero-shot prompting to demonstrate the evaluation. Figure 2: Zero-shot prompting case for out-of-context input.
2.2 P2T few-shot prompting
Few-shot prompting, an in-context learning technique, provides an LLM with a small set of examples to enhance its task understanding before generating a response [6]. This method stands apart from zero-shot prompting, which requires no examples, and fine-tuning, which demands extensive training data. Few-shot prompting seeks a balance by offering sufficient context for efficient model operation. Although there are criticisms in the literature [7, 8] regarding the efficiency of few-shot prompting, in our study we aimed to evaluate this method using our own dataset. Mirroring the approach used for zero-shot prompting, we adopted an iterative development and testing cycle for few-shot prompting. The significant aspects of few-shot prompting are outlined below:
• In few-shot prompting, providing an example for every predefined relation is necessary. This requirement, similar to zero-shot prompting, leads to scalability challenges.
• The variety of examples has a direct impact on performance. To effectively match the 'birthday' relation, examples must cover various sentence structures, such as "I was born in 1979" and "My birthday is in November". Relevant cases are presented in Figure 3 (Figure 3: the impact of example diversity, shown with two different few-shot prompts).
• When the LLM encounters a sentence context outside the predefined relation set, it relies on its implicit knowledge to perform triple extraction, due to the absence of a corresponding example in the few-shot prompt. Figure 4 represents a case of this particular scenario (Figure 4: few-shot prompting example for out-of-context input).
2.3 P2T generation using a fine-tuned LLM
Fine-tuning is a process in which a pre-trained LLM is further trained on a specific dataset. This technique adjusts the model's parameters to make it more adept at a particular task, such as P2T generation in our case. The following points highlight the key aspects of P2T fine-tuning:
• The training dataset is critical to the success of the fine-tuning process. It provides a targeted environment with specific examples for relation matching and for extracting subjects and objects, requiring a diverse array of examples to address all potential issues. Details regarding the dataset and its generation are presented in Section 3.3.
• For each predefined relation, the training dataset must contain varied examples. This requirement does not lead to scalability issues as in zero-shot or few-shot prompting, but an enlarged training set might increase the risk of performance degradation on other tasks for the LLM.
• When the LLM encounters a sentence context not present in the predefined relation set, it resorts to its implicit knowledge for triple extraction, similar to the few-shot prompting scenario. Figure 5 presents cases for relations both included in and excluded from the training set (Figure 5: fine-tuning examples for in-context and out-of-context prompts).
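The sketch below illustrates, under our own assumptions, how a P2T zero-shot prompt with a restricted relation vocabulary and the post-processing of the extra 'relation' term into the 'predicate' slot could be wired together. The actual prompt wording is shown only in the paper's figures, and the function names here are hypothetical.

```python
import json

RELATIONS = ["birthday", "anniversary"]  # restricted vocabulary for the 'predicate' term

def build_zero_shot_prompt(user_prompt: str) -> str:
    # The extra 'relation' key is the term discussed in Section 2.1; it is folded
    # into the 'predicate' slot during post-processing.
    return (
        "Extract a (subject, predicate, object) triple from the sentence below. "
        f"Also output a 'relation' chosen from {RELATIONS} that matches the sentence context. "
        'Answer as JSON with keys "subject", "predicate", "object", "relation".\n'
        f"Sentence: {user_prompt}"
    )

def postprocess(llm_json: str):
    """Replace the verb-biased predicate with the matched relation."""
    out = json.loads(llm_json)
    if out.get("relation") in RELATIONS:
        out["predicate"] = out["relation"]
    return (out["subject"], out["predicate"], out["object"])

# For "I was born in 1979" an LLM answer might look like this:
print(postprocess('{"subject": "I", "predicate": "was born", "object": "1979", "relation": "birthday"}'))
# -> ('I', 'birthday', '1979')
```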
3 Experimental setup
This section describes the components of our experimental setup.
3.1 Development framework
The methods suggested in this paper have been implemented using the Apple MLX framework [9]. MLX is a specialized array framework designed for machine learning applications, akin to NumPy, PyTorch, or JAX, with the distinction of being exclusive to Apple silicon. P2T fine-tuning was conducted using the parameter-efficient QLoRA approach [10] on our custom dataset, comprising randomly selected, non-overlapping sets of 1,000 training, 200 validation, and 200 test samples. The fundamental QLoRA parameters used are as follows:
• Optimizer: Adam
• Learning rate: 1×10^-5
• Number of layers to fine-tune: 16
• Minibatch size: 4
• Iterations: 1,000
3.2 LLM
The methods we have developed do not have a structural dependency on a particular underlying foundation model. The key factors guiding our LLM selection were proven effectiveness across diverse domains in community benchmarks and prevalence in the field. Owing to its performance on the Hugging Face Leaderboard benchmark [11] and its robust ecosystem, Mistral-7B-Instruct-v0.2 [12], based on the Llama 2 [13] architecture, was selected for our research. We ran all examples, tests, and benchmarks on a 4-bit quantized version of this model.
3.3 Dataset
In the field of natural language processing (NLP), the creation of robust and diverse datasets is crucial for training and evaluating LLMs, especially for tasks such as knowledge extraction, which involves identifying structured information in unstructured text. Aligning with these needs, we created a synthetic dataset focused on 'birthday' and 'anniversary' content for the P2T generation study, adhering to the stages described below. Our synthetic dataset creation process consists of three distinct stages:
1. We engaged native speakers to write templates for 86 user prompts and model responses. This initial step ensures the dataset's foundational accuracy and contextual relevance.
2. We leveraged Python's capabilities, particularly a random date generator, to expand this dataset from 86 to 860 prompt-response pairs (a toy sketch of this expansion step follows at the end of this section).
3. The final stage of our dataset development involved using an LLM, specifically Llama-2-7b-chat-hf, to paraphrase each prompt from the previous dataset five times. This resulted in a new dataset of 4,300 prompt-response pairs.
Paraphrasing is a critical step, as it introduces linguistic variations and nuances, thereby enriching the dataset and making it more representative of natural language variability [14, 15]. This approach is supported by recent studies which highlight the importance of paraphrasing in dataset creation for NLP tasks, as it significantly contributes to model generalizability and to understanding diverse linguistic expressions.
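To make the stage-2 expansion tangible, here is a toy Python sketch that fills hand-written templates with randomly generated dates; the template wording, counts, and response format are our assumptions, not the authors' actual templates.

```python
import random
from datetime import date, timedelta

# Hypothetical templates standing in for the 86 hand-written ones.
TEMPLATES = [
    ("My birthday is on {d}.", "('I', 'birthday', '{d}')"),
    ("Our wedding anniversary falls on {d}.", "('We', 'anniversary', '{d}')"),
]

def random_date() -> str:
    start = date(1950, 1, 1)
    return (start + timedelta(days=random.randrange(27000))).strftime("%B %d, %Y")

def expand(n_per_template: int = 10):
    """Stage 2: multiply each template into several prompt-response pairs."""
    pairs = []
    for prompt_tpl, response_tpl in TEMPLATES:
        for _ in range(n_per_template):
            d = random_date()
            pairs.append({"prompt": prompt_tpl.format(d=d), "response": response_tpl.format(d=d)})
    return pairs

print(expand(2))  # 4 synthetic prompt-response pairs
```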
4 Performance evaluation
We evaluated the proposed methods using a non-overlapping test dataset comprising 128 'birthday' and 72 'anniversary' relations. The outputs generated during the evaluation were compared with manually crafted ground truths. The comparisons were conducted in two distinct manners: 'relation-based' and 'triple-based'.
• Relation-based comparison: This approach focuses solely on the 'predicate' term. In this scenario, equality between the test result and the ground-truth value is reported as a True Positive (TP). 'Predicate' values falling outside the predefined relation set are reported as False Negatives (FN), while those within the set but differing from the ground truth are reported as False Positives (FP).
• Triple-based comparison: This method involves comparing all terms of the generated triple. The comparison of the 'predicate' term follows the same approach as the relation-based method. However, the 'subject' and 'object' values are compared based on a relationship of inclusion rather than direct equality. For example, a generated triple ('I', 'birthday', 'November 14th') compared with the ground truth ('I me', 'birthday', 'November 14') is classified as TP.
The macro precision, recall, and F1-score calculated for both the relation-based and triple-based approaches are presented in Table 1.
Table 1: Relation and triple generation performance based on macro precision, recall, and F1-score.
Method | Relation Precision | Relation Recall | Relation F1 | Triple Precision | Triple Recall | Triple F1
zero-shot prompting | 0.815 | 1.0 | 0.8981 | 0.6636 | 0.4479 | 0.5348
few-shot prompting | 0.49 | 1.0 | 0.6577 | 0.3855 | 0.6531 | 0.4848
fine-tuning | 1.0 | 1.0 | 1.0 | 1.0 | 0.96 | 0.9796
As indicated in Table 1, the recall for relation-based evaluation is perfect across all methods. We believe this outcome is associated with the tests being conducted on only two relations. When considering precision and F1-score, both zero-shot prompting and fine-tuning emerge as the stronger methods. We assess that the clear guidance provided by the zero-shot prompt, instructing the LLM to select one of the predefined relations, plays a significant role in its superior performance compared to few-shot prompting. The fact that the fine-tuning method yields such good results clearly demonstrates its success in learning straightforward tasks. Upon examining the triple columns in Table 1, we observe that, despite differing precision and recall, zero-shot prompting and few-shot prompting exhibit similar F1-scores. On this part of the task, fine-tuning demonstrated superior performance compared to the other methods. These outcomes motivate us to focus on fine-tuning and to conduct more comprehensive studies on it.
5 Conclusion
In this paper, we first discussed prompt-driven symbolic knowledge capture and its significance in the LLM domain. We then projected the prompt-driven symbolic knowledge capture problem onto prompt-to-triple (P2T) generation, which involves generating triples based on predefined relations from user prompts. To address P2T, we developed approaches built on fundamental LLM techniques, including in-context learning and fine-tuning, and concluded our work with performance evaluations of these proposed methods. Our findings indicate that fine-tuning is particularly effective in addressing P2T. In future work, we aim to refine the fine-tuning approach and comprehensively examine its impact on the overall performance of the model across various scenarios. Please see the corresponding GitHub repository at https://github.com/HaltiaAI/paper-PTSKC" |
| } |
| ] |
| } |