diff --git "a/title_30K/test_title_long_2404.16627v1.json" "b/title_30K/test_title_long_2404.16627v1.json"
new file mode 100644
--- /dev/null
+++ "b/title_30K/test_title_long_2404.16627v1.json"
@@ -0,0 +1,109 @@
+{
+  "url": "http://arxiv.org/abs/2404.16627v1",
+  "title": "Incorporating Lexical and Syntactic Knowledge for Unsupervised Cross-Lingual Transfer",
+  "abstract": "Unsupervised cross-lingual transfer involves transferring knowledge between\nlanguages without explicit supervision. Although numerous studies have been\nconducted to improve performance in such tasks by focusing on cross-lingual\nknowledge, particularly lexical and syntactic knowledge, current approaches are\nlimited as they only incorporate syntactic or lexical information. Since each\ntype of information offers unique advantages and no previous attempts have\ncombined both, we attempt to explore the potential of this approach. In this\npaper, we present a novel framework called \"Lexicon-Syntax Enhanced\nMultilingual BERT\" that combines both lexical and syntactic knowledge.\nSpecifically, we use Multilingual BERT (mBERT) as the base model and employ two\ntechniques to enhance its learning capabilities. The code-switching technique\nis used to implicitly teach the model lexical alignment information, while a\nsyntactic-based graph attention network is designed to help the model encode\nsyntactic structure. To integrate both types of knowledge, we input\ncode-switched sequences into both the syntactic module and the mBERT base model\nsimultaneously. Our extensive experimental results demonstrate this framework\ncan consistently outperform all baselines of zero-shot cross-lingual transfer,\nwith the gains of 1.0~3.7 points on text classification, named entity\nrecognition (ner), and semantic parsing tasks. Keywords:cross-lingual transfer,\nlexicon, syntax, code-switching, graph attention network",
+  "authors": "Jianyu Zheng, Fengfei Fan, Jianquan Li",
+  "published": "2024-04-25",
+  "updated": "2024-04-25",
+  "primary_cat": "cs.CL",
+  "cats": [
+    "cs.CL"
+  ],
+  "label": "Original Paper",
+  "paper_cat": "Knowledge AND Graph",
+  "gt": "Incorporating Lexical and Syntactic Knowledge for Unsupervised Cross-Lingual Transfer",
+  "main_content": "Introduction

Unsupervised cross-lingual transfer refers to the process of leveraging knowledge from one language and applying it to another language without explicit supervision (Conneau et al., 2019). Because it requires no labeled data in the target language, it is highly suitable for low-resource scenarios. Recently, unsupervised cross-lingual transfer has been widely applied in various natural language processing (NLP) tasks, such as part-of-speech (POS) tagging (Kim et al., 2017; de Vries et al., 2022), named entity recognition (NER) (Fetahu et al., 2022; Xie et al., 2018), machine reading comprehension (Hsu et al., 2019; Chen et al., 2022), and question answering (QA) (Nooralahzadeh and Sennrich, 2023; Asai et al., 2021). The success of unsupervised cross-lingual transfer can be attributed to its ability to exploit connections across languages, which are reflected in various linguistic aspects such as lexicon, semantics, and syntactic structures. Consequently, many studies have sought to enhance models by encouraging them to learn these cross-lingual commonalities. For instance, in the lexical domain, Qin et al.
(2021) utilize bilingual dictionaries to randomly replace certain words with their translations in other languages, thereby encouraging models to implicitly align representations between the source language and multiple target languages. In the area of syntax, several works have developed novel neural architectures to guide models in encoding the structural features of languages. Ahmad et al. (2021), for example, propose a graph neural network (GNN) to encode the structural representation of input text and fine-tune the GNN along with multilingual BERT (mBERT) for downstream tasks. Both lexical and syntactic approaches facilitate the alignment of linguistic elements across different languages, thereby enhancing the performance of cross-lingual transfer tasks. However, language is a highly intricate system (Ellis and Larsen-Freeman, 2009), with elements at various levels being interconnected. For example, sentences are composed of phrases, which in turn are composed of words. In cross-lingual transfer, we hypothesize that merely guiding models to focus on a single linguistic aspect is inadequate. Instead, by simultaneously directing models to learn linguistic knowledge across diverse levels, their performance can be further improved. Table 1 presents some example sentences extracted from the XNLI dataset (Conneau et al., 2018). These parallel sentence pairs demonstrate that the multilingual model makes incorrect predictions for sentence pairs in the target languages (French and German) when only one aspect of linguistic knowledge, such as lexical or syntactic knowledge, is incorporated. However, when both types of knowledge are integrated into the model, the correct prediction is obtained. Despite this, most previous studies have focused on either syntactic or lexical information alone, without considering the integration of both types of information. [\u2217 Equal Contribution. \u2020 Jianquan Li is the corresponding author.]

Lang | Premise(P) / Hypothesis(H) | Label | +Lex | +Syn | Ours
fr | P: Votre soci\u00e9t\u00e9 charitable fournit non seulement de les services sociaux communautaires efficaces \u00e0 les animaux et les personnes, mais sert \u00e9galement \u00e9galement de fourri\u00e8re pour la Ville de Nashua. H: La soci\u00e9t\u00e9 humaine est le refuge pour animaux de Nashua. | entail | contra | contra | entail
de | P: Ihre humane Gesellschaft erbringt nicht nur effektive gemeinschaftlich-soziale Dienstleistungen f\u00fcr Tiere und ihre Menschen, sondern dient auch als Zwinger der Stadt Nashua. H: Die Humane Society ist Nashuas Tierheim. | entail | contra | contra | entail
en | P: Your humane society provides not only effective community social services for animals and their people, but also serves as the pound for the City of Nashua. H: The humane society is Nashua\u2019s animal shelter. | | | |

Table 1: Parallel sentence pairs in French and German from XNLI (Conneau et al., 2018), translated from English. Each sentence pair consists of a Premise sentence (P) and a Hypothesis sentence (H). The \"Label\" column indicates the relationship between each sentence pair, which can be contradiction (contra), entailment (entail), or neutral. \"+Lex\" and \"+Syn\" represent the prediction results from the multilingual models infused with lexical and syntactic knowledge, respectively. The \"Ours\" column shows the results of integrating both types of knowledge into the model.
Compared to the other two methods, our method can accurately predict the relationship between each sentence pair.

In this work, we aim to enhance unsupervised cross-lingual transfer by integrating knowledge from different linguistic levels. To achieve this, we propose a framework called \"Lexicon-Syntax Enhanced Multilingual BERT\" (\"LS-mBERT\"), based on a pre-trained multilingual BERT model. Specifically, we first preprocess the input source language sequences to obtain each word\u2019s part-of-speech information and the dependency relationships between words in each sentence. Then, we replace some words in the sentence with their translations from other languages while preserving the established dependency relationships. Furthermore, we employ a graph attention network (Veli\u010dkovi\u0107 et al., 2017) to construct a syntactic module, the output of which is integrated into the attention heads of the multilingual BERT. This integration guides the entire model to focus on syntactic structural relationships. Finally, during the fine-tuning process, we simultaneously train the multilingual BERT and the syntactic module with the pre-processed text. As a result, our framework enables the multilingual BERT to not only implicitly learn knowledge related to lexical alignment but also encode knowledge about syntactic structure. To validate the effectiveness of our framework, we conduct experiments on various tasks, including text classification, named entity recognition (NER), and semantic parsing. The experimental results show that our framework consistently outperforms all baseline models in zero-shot cross-lingual transfer across these tasks. For instance, our method achieves an improvement of 3.7 points on the mTOP dataset. Our framework also demonstrates significant improvements in generalized cross-lingual transfer. Moreover, we examine the impact of important parameters, such as the replacement ratio of source words and the languages used for replacement. To facilitate further research, we release our code at https://github.com/Tian14267/LS_mBert.

2. Related Work

Cross-lingual transfer is crucial in the field of natural language processing (NLP) as it enables models trained on one language to be applied to another. To enhance performance in transfer tasks, numerous studies focus on addressing the characteristics of various languages and their relationships.

2.1. Incorporating Lexical Knowledge for Cross-lingual Transfer

A group of studies aims to incorporate lexical alignment knowledge into cross-lingual transfer research (Zhang et al., 2021a; Wang et al., 2022; Qin et al., 2021; Lai et al., 2021). For example, Zhang et al. (2021a) and Wang et al. (2022) employ bilingual dictionaries to establish word alignments and subsequently train cross-lingual models by leveraging explicit lexical associations between languages. Other methods (Qin et al., 2021; Lai et al., 2021) involve substituting a portion of words in a sentence with their equivalents from different languages, a technique commonly known as \"code-switching\". By increasing the diversity of input text, these approaches promote implicit alignment of language representations. However, this group of studies mainly offers insights into lexical translation across languages, while neglecting the learning of language-specific structural rules.

2.2.
Incorporating Syntactic Knowledge for Cross-lingual Transfer

Another research category focuses on integrating syntactic knowledge for cross-lingual transfer (Ahmad et al., 2021; Yu et al., 2021; Zhang et al., 2021b; He et al., 2019; Cignarella et al., 2020; Xu et al., 2022; Shi et al., 2022; Wang et al., 2021). Many studies in this group (Ahmad et al., 2021; Wang et al., 2021) develop graph neural networks to encode syntactic structures, a category to which our work also belongs. Taking inspiration from Ahmad et al. (2021), we adopt a similar architecture, specifically using a graph attention network to encode syntactic knowledge. Other methods (Cignarella et al., 2020; Xu et al., 2022) extract sparse syntactic features from text and subsequently incorporate them into the overall model. Although these approaches consider the relationships between language elements, they frequently overlook the alignments across languages, which impedes the effective transfer of linguistic elements and rules between languages. Consequently, we combine the strengths of these two categories of approaches. First, we replace words in the input sequence with translated words from other languages, which aids in guiding the entire model to acquire implicit alignment information. Then, we introduce an additional module to assist the model in encoding syntax.

3. Methodology

In this section, we provide a detailed introduction to our framework \"LS-mBERT\", as illustrated in Figure 1. Our objective is to enhance the cross-lingual transfer capabilities of multilingual BERT (mBERT) by incorporating both lexical and syntactic knowledge. Given an input sequence, we first pre-process it using a part-of-speech tagger and a universal parser (Section 3.1). This yields the part-of-speech tag for each word and the dependency relationships among words in the sequence. To enable mBERT to implicitly encode word alignment information, we substitute some words with their translations from other languages using the code-switching technique (Section 3.2). Moreover, to guide mBERT in attending to syntactic relationships, we construct a graph attention network (GAT), introduced in Section 3.3. The output of the graph attention network is then used as input to the attention heads within BERT, effectively biasing the attention information between words. Finally, to integrate both syntactic and lexical knowledge, we pass the code-switched text into both the GAT network and mBERT, which are trained simultaneously (Section 3.4).

3.1. Pre-processing Input Sequence

The initial step involves pre-processing the input data to obtain prior knowledge for subsequent training. As our framework incorporates syntactic knowledge, we opt for an off-the-shelf parser with high accuracy to process the input text. In this case, we employ the UDPipe toolkit (Straka and Strakov\u00e1, 2017) to parse the input sentences, and Stanza (Qi et al., 2020) to annotate the part-of-speech information of each word. By utilizing both tools, given a sentence, we can obtain the dependency relationships between words and their part-of-speech information, which are then utilized to provide syntactic knowledge and enhance word representations, respectively.

3.2. Code-switching for Text (lexical knowledge)

As our objective is to improve unsupervised cross-lingual transfer, introducing explicit alignment signals would be inappropriate. Therefore, we employ an implicit strategy to guide the entire model to encode word alignment information. Inspired by the work of Qin et al.
(2021), we opt for the code-switching strategy. Specifically, we first randomly select a proportion \u03b1 of words within each source sentence. Then, for each selected word, we use a high-quality bilingual dictionary to substitute it with a corresponding translation from another target language. This method not only promotes the implicit alignment of representations across diverse languages within our model, but also enhances the model\u2019s robustness when processing input text.

[Figure 1: An overview of lexicon-syntax enhanced multilingual BERT (\"LS-mBERT\"), illustrated with the example sentence \"The new iron guidelines mean more donors are needed\". An example sentence is provided to explain how this framework works. To introduce lexical alignment knowledge, we utilize bilingual dictionaries to randomly replace some words in the sentence with equivalent words from other languages (pink for German, green for Spanish, light blue for Chinese, and orange for French). Then, a graph attention network (GAT) is developed to encode the syntactic structure of this sentence. The output representation of the GAT is sent to the attention heads in multilingual BERT to guide them to focus on language-specific structures.]

3.3. Graph Attention Network (syntactic knowledge)

To better guide mBERT in acquiring syntactic knowledge, we construct an external syntactic module by referring to the method introduced by Ahmad et al. (2021). An overview of this module is displayed in Figure 2. Given that there are n tokens in the input sequence, we first represent each token by combining its embedding representation with part-of-speech (POS) information. The representation of the i-th token is calculated as x_i = c_i W_c + pos_i W_pos, where c_i and pos_i represent the token representation and the part-of-speech representation of the i-th token, respectively, while W_c and W_pos denote the token parameter matrix and the part-of-speech parameter matrix. Then, the encoded sequence s\u2032 = [x_1, x_2, ..., x_n] is passed into the subsequent syntactic module, which is designed as a graph attention network (GAT) (Veli\u010dkovi\u0107 et al., 2017). The GAT module comprises a total of L layers, each with m attention heads. These attention heads play a crucial role in generating representations for individual tokens by attending to neighboring tokens in the graph. Each attention head in the GAT operates as follows: O = Attention(T, T, V, M), wherein T denotes the query and key matrices, and V represents the value matrix. Besides, M signifies the mask matrix, determining whether a pair of words in the dependency tree can attend to each other. Notably, the relationships between words in the attention matrix are modeled based on the distances between words in the dependency tree, rather than the positional information within the word sequence. Subsequently, the resulting representations produced by all attention heads are concatenated to form the output representations for each token.
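A minimal sketch of one such masked attention head follows, assuming T and V are (n x d) token matrices and M is an (n x n) 0/1 mask derived from dependency-tree distances; this is our illustration of the O = Attention(T, T, V, M) operation described above, not the authors' released code.

```python
import numpy as np

def masked_attention(T: np.ndarray, V: np.ndarray, M: np.ndarray) -> np.ndarray:
    """O = Attention(T, T, V, M): scaled dot-product attention in which the
    mask M blocks word pairs that may not attend to each other."""
    d = T.shape[-1]
    scores = (T @ T.T) / np.sqrt(d)
    scores = np.where(M > 0, scores, -1e9)  # mask disallowed pairs
    # numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V
```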
Finally, the output sequence from the final layer can be denoted as Y = [y_1, y_2, ..., y_n], where y_i represents the output representation of the i-th token. To maintain the lightweight nature of the architecture, certain elements of the GAT have been excluded. Specifically, we do not employ feed-forward sub-layers, residual connections, or positional representations. We found that these modifications do not result in a significant performance gap.

3.4. Summary of the Framework: Lexicon-syntax Enhanced Multilingual BERT

In this subsection, we provide an overview of our \"LS-mBERT\" framework, as illustrated in Figure 1. We first select multilingual BERT (mBERT) as the base model. Then, we process the input sequence using the code-switching strategy of Section 3.2, resulting in the code-switched sequence s\u2032. It is important to note that although some words in each sentence are replaced with other languages, the original dependency relationships between words are still preserved in s\u2032. Next, we feed the code-switched text into both mBERT and the syntactic module (GAT), facilitating the fusion of the two types of knowledge. Furthermore, this step guides the entire model to better align different languages within the high-dimensional vector space during training. After the GAT processes the code-switched sequence, the output from its final layer is utilized to bias the attention heads of mBERT. The calculation process can be described as O = Attention(Q + Y W_l^Q, K + Y W_l^K, V), where Q, K, and V represent the query, key, and value matrices, respectively, while W_l^Q and W_l^K are new parameters learned to bias the query and key matrices.

[Figure 2: The architecture of the graph attention network (Ahmad et al., 2021; Veli\u010dkovi\u0107 et al., 2017). Each input token is represented by combining its token embedding and part-of-speech embedding. Each attention head within the graph attention network (GAT) generates a representation for each token embedding by attending to its neighboring tokens in the dependency graph. Next, the resulting representations are concatenated to form the output representation for each token. Finally, we obtain the representations of the output sequence embeddings from the final layer of the GAT.]

4. Experiments

4.1. Experimental Settings

As mentioned above, we use UDPipe (Straka and Strakov\u00e1, 2017) and Stanza (Qi et al., 2020) for parsing sentences and obtaining words\u2019 part-of-speech information in all languages, and employ MUSE (Lample et al., 2018) as the bilingual dictionary for word substitution. For all tasks, we identify the optimal parameter combinations by searching within the candidate sets. The learning rate is set to 2e-5, utilizing AdamW as the optimizer. The batch size is 64, and the maximum length for input sequences is 128 tokens. For code-switching, we vary the replacement ratio (\u03b1) from 0.3 to 0.7 with a step of 0.1. For the GAT network, we adopt the identical parameter values employed in the work of Ahmad et al. (2021). Specifically, we set L to 4 and k to 4.

4.2. Tasks

Our framework is evaluated on the following tasks, using English as the source language. Some statistics are summarized in Table 2, along with the detailed descriptions provided below.

Text Classification.
Text Classification is a task that assigns predefined categories to open-ended text. In our experiment, we utilize two publicly available datasets: XNLI and PAWS-X. In XNLI (Conneau et al., 2018), models need to predict whether a given pair of sentences is entailed, contradicted, or neutral; in PAWS-X (Yang et al., 2019), models are required to determine whether two given sentences or phrases convey the same meaning. When implementing the two tasks, to establish connections between the dependency trees of the two sentences, we introduce two edges from the [CLS] token to the root nodes. Subsequently, we apply the code-switching technique to randomly replace certain words in the sentence pairs.

Named Entity Recognition. Named Entity Recognition (NER) is a task that involves the automatic identification and categorization of named entities. In our experiment, we employ the Wikiann (Pan et al., 2017) dataset. Wikiann consists of Wikipedia articles annotated with person, location, organization, and other tags in the IOB2 format. Our method is evaluated across 15 languages. To ensure that the models can obtain complete entity information, we exclusively substitute words that do not constitute named entities during the code-switching process.

Task-oriented Semantic Parsing. In this task, the models are required to determine the intent of the utterance and then fill the relevant slots. The dataset for the experiment is mTOP (Li et al., 2021), which is an almost parallel corpus containing 100k examples in total across 6 languages. Our experiments cover 5 languages.

4.3. Baselines

We choose the following methods as baselines for comparison:

\u2022 mBERT. We exclusively utilize the multilingual BERT model to perform zero-shot cross-lingual transfer for these tasks.
\u2022 mBERT+Syn. A graph attention network (GAT) is integrated with multilingual BERT, and these two components are jointly trained for all tasks.
\u2022 mBERT+Code-switch. The multilingual BERT model is fine-tuned with the code-switched text across various languages.

5. Results and Analysis

5.1. Cross-Lingual Transfer Results

The main experimental results are displayed in Table 3. Our method consistently demonstrates superior performance across all tasks compared to the other baselines. This indicates our method\u2019s effectiveness for cross-lingual transfer, achieved through the incorporation of lexical and syntactic knowledge. Especially on Wikiann and mTOP, our method exhibits a significant improvement, with an increase of 2.2 and 3.7 points, respectively, compared to the best-performing baseline. In addition, since the code-switching technique blends words from various languages, we calculate the results across the languages excluding English, shown in the column \"AVG/en\" in Table 3. We find that the performance gap between our method and each baseline in most tasks becomes wider. This also indicates that our method can more effectively align non-English languages within the same vector space implicitly. For each task, we find that most languages gain improvement by using our method, as compared to the top-performing baseline. Specifically, 84.6% (11/13), 100.0% (7/7), 80.0% (12/15) and 100.0% (5/5) of languages demonstrate improvement in XNLI, PAWS-X, Wikiann and mTOP respectively. Furthermore, our method also provides improvement for non-alphabetic languages in many tasks, such as Chinese, Japanese, and Korean.
This reflects that our method can be effectively generalized to various target languages, even in cases where significant differences exist between the source and target languages.

Task / Dataset / |Train| / |Dev| / |Test| / |Lang| / Metric
Classification / XNLI / 392K / 2.5K / 5K / 13 / Accuracy
Classification / PAWS-X / 49K / 2K / 2K / 7 / Accuracy
NER / Wikiann / 20K / 10K / 1-10K / 15 / F1
Semantic Parsing / mTOP / 15.7K / 2.2K / 2.8-4.4K / 5 / Exact Match

Table 2: Evaluation datasets. |Train|, |Dev| and |Test| denote the numbers of examples in the training, validation and testing sets, respectively. |Lang| is the number of target languages we use in each task.

Tasks / Methods, with per-language columns drawn from: en ar bg de el es fr hi ru tr ur vi zh ko nl pt ja, followed by AVG/en and AVG (each task covers a subset of the languages):

XNLI (Conneau et al., 2018):
mBERT 80.8 64.3 68.0 70.0 65.3 73.5 73.4 58.9 67.8 60.9 57.2 69.3 67.8 66.4 67.5
mBERT+Syn 81.6 65.4 69.3 70.7 66.5 74.1 73.2 60.5 68.8 62.4 58.7 69.9 69.3 67.4 68.5
mBERT+code-switch 80.9 64.2 70.0 71.5 67.1 73.7 73.2 61.6 68.9 58.6 57.8 69.9 70.0 67.2 68.3
our method 81.3 65.8 71.3 71.8 68.3 75.2 74.2 62.8 70.7 61.1 58.8 71.8 70.8 68.6 69.5

PAWS-X (Yang et al., 2019):
mBERT 94.0 85.7 87.4 87.0 77.0 69.6 73.0 80.2 81.7
mBERT+Syn 93.7 86.2 89.5 88.7 78.8 75.5 75.9 82.7 83.9
mBERT+code-switch 92.4 85.9 87.9 88.3 80.2 78.0 78.0 83.4 84.3
our method 93.8 87.2 89.6 89.4 81.8 79.0 80.0 84.6 85.6

Wikiann (Pan et al., 2017):
mBERT 83.7 36.1 76.0 75.2 68.0 75.8 79.0 65.0 63.9 69.1 38.7 71.0 58.9 81.3 79.0 66.9 68.1
mBERT+Syn 84.1 34.6 76.9 75.4 68.2 76.0 79.1 64.0 64.2 68.7 38.0 73.1 58.0 81.7 79.5 67.0 68.1
mBERT+code-switch 82.4 39.2 77.1 75.2 68.2 71.0 78.0 66.1 64.2 72.4 41.3 69.2 59.9 81.3 78.9 67.3 68.3
our method 84.5 41.4 78.9 77.3 70.2 75.3 80.3 67.6 63.9 73.1 46.8 72.6 62.2 81.8 80.8 69.4 70.5

mTOP (Li et al., 2021):
mBERT 81.0 28.1 40.2 38.8 9.8 29.2 39.6
mBERT+Syn 81.3 30.0 43.0 41.2 11.5 31.4 41.4
mBERT+code-switch 82.3 40.3 47.5 48.2 16.0 38.0 46.8
our method 83.5 44.5 54.2 51.7 18.8 47.3 50.5

Table 3: The experimental results on four tasks. The best results in each task are highlighted in bold. The baselines include \"mBERT\", \"mBERT+Syn\" and \"mBERT+code-switch\". They denote \"only using mBERT\", \"using mBERT with a syntactic module (GAT)\" and \"mBERT with the code-switching technique\" for cross-lingual transfer. The results of \"mBERT\" are from Hu et al. (2020). For \"mBERT+Syn\" and \"mBERT+code-switch\", we adopt the open-source code of Ahmad et al. (2021) and Qin et al. (2021) to reproduce these experiments, and report the results. The evaluation metrics are F1 value for the NER task, Accuracy for classification tasks, and Exact Match for semantic parsing. The \"AVG\" column gives the average performance across all languages for each method, while \"AVG/en\" indicates the average performance on the languages excluding English.

5.2. Generalized Cross-Lingual Transfer Results

In practical scenarios, cross-lingual transfer could involve any language pair. For example, in a cross-lingual question-answering (QA) task, the context passage may be in German, while the multilingual model is required to answer the question in French. Considering this, we conduct zero-shot cross-lingual transfer experiments within a generalized setting. Since PAWS-X and mTOP are completely parallel, we evaluate the performance of our method and the \"mBERT\" baseline on generalized cross-lingual transfer tasks using these two datasets. The experimental results are illustrated in Figure 3.
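To make the generalized setting concrete, the following is a minimal, self-contained sketch of the analysis behind Figure 3: given accuracy tables indexed by (source, target) language for our method and for the mBERT baseline, it computes the per-pair difference matrix. The language list and the helper name `diff_matrix` are illustrative, not from the paper's code.

```python
from itertools import product

LANGS = ["en", "de", "es", "fr", "ja", "ko", "zh"]  # PAWS-X languages

def diff_matrix(ours: dict, baseline: dict) -> dict:
    """Performance difference ours - baseline for every (source, target)
    language pair; keys of both inputs are (source, target) tuples."""
    return {(s, t): ours[s, t] - baseline[s, t] for s, t in product(LANGS, LANGS)}
```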
For both classification and semantic parsing benchmarks, we observe improvements for most language pairs. This reflects that our method is very effective for generalized cross-lingual transfer. Furthermore, when English is included in the language pair, there is a substantial enhancement in performance. Specifically, when English serves as the source language, the average performance on target languages is increased by over 10% and 3% on the mTOP and PAWS-X datasets, respectively. This reflects the effectiveness of code-switching in aligning other languages with English. For the PAWS-X dataset, we find that some non-Indo-European languages such as Japanese, Korean, and Chinese achieve improvements, even when the source languages belong to the Indo-European language family, including English, Spanish, French, and German. This reflects that syntactic knowledge can effectively narrow the gap between language structures for this task, especially for language pairs without close linguistic relationships.

6. Analysis and Discussion

6.1. Impact on Languages

We investigate whether our method can improve the performance of specific languages or language groups. As shown in Figure 4, we display the performance improvement of our method relative to the \"mBERT\" baseline. We find that almost all languages obtain benefits from our method. Particularly, when the target language, such as German, Spanish and French, belongs to the Indo-European language family, the improvement is very significant. Furthermore, the performance on the mTOP task is improved significantly by our method across all languages. This may be because our method considers both syntax and lexicon simultaneously, which is beneficial for the semantic parsing task.

[Figure 3: Results for generalized zero-shot cross-lingual transfer on mTOP (a) and PAWS-X (b), plotted as source language versus target language. We report the performance differences between our method and the \"mBERT\" baseline across all languages.]

[Figure 4: Performance improvements for XNLI, PAWS-X, Wikiann, and mTOP across languages. The languages on the x-axis are grouped by language families: IE.Germanic (en, de), IE.Romance (es, fr), IE.Slavic (bg, ru), Afro-asiatic (ar), Austro-asiatic (vi), Altaic (tr, ur), IE.Greek (el), IE.Indic (hi), Sino-tibetan (zh), Korean (ko).]

6.2. Representation Similarities across Languages

To evaluate the effectiveness of our method in aligning different languages, we employ the representation similarity between languages as the metric. Specifically, we utilize the testing set of XNLI (Conneau et al., 2018) as the dataset, which consists of parallel sentences across multiple languages. Then we take the vector of the [CLS] token from the final layer of our model, as well as the vectors from two baselines (\"mBERT+Syn\" and \"mBERT+code-switch\"), for each sentence. Following Libovick\u00fd et al. (2019), the centroid vector representing each language is calculated by averaging these sentence representations. Finally, we adopt cosine similarity as the indicator to assess the degree of alignment between English and each target language. Figure 5 illustrates the similarities between languages using our method and the other two baselines. It can be easily seen that our method outperforms the other two baselines in aligning language representations.
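The similarity computation just described can be sketched as follows, assuming `cls_vectors[lang]` holds the final-layer [CLS] vectors of the parallel XNLI test sentences for each language (shape: num_sentences x hidden_size); the function names are ours.

```python
import numpy as np

def centroid(vectors: np.ndarray) -> np.ndarray:
    """Centroid representation of a language (following Libovicky et al., 2019)."""
    return vectors.mean(axis=0)

def lang_similarity(cls_vectors: dict, src: str = "en") -> dict:
    """Cosine similarity between the source centroid and each language centroid."""
    c_src = centroid(cls_vectors[src])
    sims = {}
    for lang, vecs in cls_vectors.items():
        c_tgt = centroid(vecs)
        sims[lang] = float(
            c_src @ c_tgt / (np.linalg.norm(c_src) * np.linalg.norm(c_tgt))
        )
    return sims
```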
These results suggest that infusing the two types of knowledge is indeed effective in reducing the disparities in language typologies, which improves cross-lingual transfer performance. In addition, we observe that \"mBERT+code-switch\" performs better than \"mBERT+Syn\", which reflects that lexical knowledge is more useful than syntactic knowledge for this task.

[Figure 5: The similarities between languages. We first calculate the centroid representation for each language following Libovick\u00fd et al. (2019). Then we adopt cosine similarity to evaluate the similarity between English and each target language.]

6.3. Impact of Code-switching

The replacement ratio \u03b1 for code-switching is an important hyper-parameter in our method. Hence, we explore its impact on mTOP and PAWS-X by varying \u03b1 from 0 to 0.9 in increments of 0.1, as shown in Figure 6. When \u03b1 is set to 0, it represents the results of the baseline \"mBERT+Syn\". As \u03b1 increases, more source words are substituted with their equivalent words from other languages. The resulting performance improvement demonstrates the effectiveness of the code-switching technique. Notably, when about half of the words are replaced (0.5 for PAWS-X and 0.4 for mTOP), the performance reaches its peak. After that, both tasks experience a decline in performance. This decline might be because the expression of meaning and sentence structure are severely affected when too many words are replaced. Therefore, it is an optimal choice to set \u03b1 between 0.4 and 0.5 for code-switching.

[Figure 6: Performance on mTOP and PAWS-X with different replacement ratios \u03b1 in code-switching.]

Furthermore, we investigate whether the choice of the replacement language in code-switching impacts our model\u2019s performance. We select mTOP and PAWS-X as the testing tasks. In code-switching, we devise three different strategies for language replacement: \"Exclusively replacing with the target language\", \"Replacing with languages from the same language family as the target language\", and \"Replacing with languages selected randomly\". The experimental results are illustrated in Figure 7. We can easily observe that \"Exclusively replacing with the target language\" performs best, while \"Replacing with randomly selected languages\" yields the poorest results. Hence, this also underscores the importance of selecting languages closely related to each target language for substitution when employing the code-switching technique.

[Figure 7: Performance on mTOP and PAWS-X with different replacement languages in code-switching. The source language for both tasks is English, and the results are averaged across all target languages excluding English. \u201cType1\u201d represents replacement with the target language; \u201cType2\u201d represents replacement with languages from the same language family as the target language; \u201cType3\u201d represents replacement with randomly selected languages.]

6.4. Performance with XLM-R

To validate the universality of our method, we substitute multilingual BERT with XLM-R in our framework. XLM-R is a more robust multilingual pre-trained model known for its exceptional cross-lingual transfer capabilities. Subsequently, we test its performance on the PAWS-X dataset, and the experimental results are displayed in Table 4.
In Table 4, we also observe that our framework outperforms the other three baselines. This indicates that integrating lexical and syntactic knowledge is beneficial for enhancing performance, irrespective of the base model employed. Notably, our framework achieves only a slight performance improvement when utilizing XLM-R as the base model compared to employing multilingual BERT. This may be because the base model, XLM-R, was pre-trained on a larger corpus and thus preserves richer language information. Consequently, XLM-R itself already possesses superior cross-lingual transfer capabilities, and the gain from incorporating external linguistic knowledge is relatively minor in comparison.

Task / Methods, with per-language columns: en ar bg de el es fr hi ru tr ur vi ko nl pt AVG

PAWS-X:
XLM-R 84.2 48.5 80.5 77.0 77.8 76.1 79.8 67.5 70.4 76.0 54.2 78.5 59.1 83.3 79.3 72.8
XLM-R+Syn 83.5 46.4 80.1 76.0 78.9 77.6 79.1 72.1 70.6 76.1 55.3 77.6 59.0 83.1 79.2 73.0
XLM-R+code-switch 83.4 46.8 81.7 78.2 79.2 71.1 78.6 72.9 70.6 77.2 57.9 76.0 58.2 83.6 80.0 73.0
our method 83.1 44.9 82.7 76.8 78.4 76.9 79.6 71.1 70.1 76.6 60.4 78.2 58.1 83.5 79.7 73.3

Table 4: Results for PAWS-X with XLM-R.

6.5. Limitations and Challenges

In our study, we adopt a bilingual dictionary, namely MUSE (Lample et al., 2018), to substitute words with other languages. However, we randomly choose a target language word when multiple translations exist for a source language word. This approach, although convenient, neglects the context of the source language word, potentially leading to inaccurate translations. This also motivates us to explore more precise word alignment methods in the future. Furthermore, the tasks we have evaluated are quite limited, with some of them involving only a few languages. In the future, we will extend our method to more cross-lingual tasks. Meanwhile, we will also develop datasets for these tasks to support more languages.

7. Conclusion

In this paper, we present a framework called \"lexicon-syntax enhanced multilingual BERT\" (\"LS-mBERT\"), which infuses lexical and syntactic knowledge to enhance cross-lingual transfer performance. Our method employs the code-switching technique to generate input text mixed in various languages, enabling the entire model to capture lexical alignment information during training. In addition, a syntactic module consisting of a graph attention network (GAT) is introduced to guide mBERT in encoding language structures. The experimental results demonstrate that our proposed method outperforms all the baselines across different tasks, which confirms the effectiveness of integrating both types of knowledge into mBERT for improving cross-lingual transfer. In the future, we plan to incorporate different linguistic knowledge into large language models (LLMs) to further enhance cross-lingual transfer performance.

8. Acknowledgements

The authors would like to thank the anonymous reviewers for their feedback and suggestions. Additionally, this work was supported by the Major Program of the National Social Science Fund of China (18ZDA238), the National Social Science Fund of China (No.21CYY032), Beihang University Sponsored Projects for Core Young Researchers in the Disciplines of Social Sciences and Humanities (KG16183801) and the Tianjin Postgraduate Scientific Research Innovation Program (No.2022BKY024).

9.
Bibliographical", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2402.00969v1", + "title": "SPARQL Generation with Entity Pre-trained GPT for KG Question Answering", + "abstract": "Knowledge Graphs popularity has been rapidly growing in last years. All that\nknowledge is available for people to query it through the many online databases\non the internet. Though, it would be a great achievement if non-programmer\nusers could access whatever information they want to know. There has been a lot\nof effort oriented to solve this task using natural language processing tools\nand creativity encouragement by way of many challenges. Our approach focuses on\nassuming a correct entity linking on the natural language questions and\ntraining a GPT model to create SPARQL queries from them. We managed to isolate\nwhich property of the task can be the most difficult to solve at few or\nzero-shot and we proposed pre-training on all entities (under CWA) to improve\nthe performance. We obtained a 62.703% accuracy of exact SPARQL matches on\ntesting at 3-shots, a F1 of 0.809 on the entity linking challenge and a F1 of\n0.009 on the question answering challenge.", + "authors": "Diego Bustamante, Hideaki Takeda", + "published": "2024-02-01", + "updated": "2024-02-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.DB", + "cs.IR", + "68P20, 68T50", + "H.2.3; H.3.3; I.2.7" + ], + "label": "Original Paper", + "paper_cat": "Knowledge AND Graph", + "gt": "SPARQL Generation with Entity Pre-trained GPT for KG Question Answering", + "main_content": "Introduction Knowledge Graphs (KG) [10] are structures often used to represent knowledge. Nowadays, more than ever, the amount of open source information stored in KG throughout the web is beyond comprehension for us humans, e.g., just Wikidata has more than 108 million items1. A clever way to store Knowledge Graphs are graph databases [10]. This approach solves a lot of important problems regarding information management, some of them are: storage, structure, query and update. Although access points to this data are readily available, for example via web browser, information extraction is not a simple task for a user that does not know how to make graph databases queries given a particular language (e.g. SPARQL2). With the recent rise of generative LLMs it is natural for a user to think that ChatGPT3, Copilot4, Bard5 or any other model, could help them in the task of creating a SPARQL query in order to answer a question about a particular KG. Taking in consideration that generating queries it is not the main interest of the mentioned LLMs, the problem is that a simple qualitative analysis using this AI tools can expose that only a few of them do well on this task. 1Wikidata:Statistics https://www.wikidata.org/wiki/Wikidata:Statistics 2SPARQL 1.1 Query Language https://www.w3.org/TR/sparql11-query/ 3OpenAI ChatGPT https://chat.openai.com/ 4Microsoft Copilot https://copilot.microsoft.com/ 5Google Bard https://bard.google.com/ 1 arXiv:2402.00969v1 [cs.CL] 1 Feb 2024 \fThe goal of this work is to identify the main property that makes this problem a difficult one, and propose a training methodology and model architecture such to improve the accuracy of this type of models on this particular task. 2 Related Work There has been a lot of effort from researchers on improving results on the task of Knowledge Graph Question Answering. 
Given a KG, the problem of KGQA consists of inputs such as \u201cWhat are the papers written by the person X?\u201d and outputs such as the triples of the KG that correspond to the answer to the question. With the goal of motivating competition and innovation, research groups have published challenges and datasets of varied difficulty. Some of the most recent data collections in the field of scholarly knowledge are DBLP-QuAD [3] and SciQA [2]. The Scholarly QALD challenge recently took place at ISWC 20236. Many groups had great results and, as expected, most of them used LLMs in some aspect of their approaches [17, 14, 15, 11]. Since Natural Language Processing tools have proven to be very useful for text-to-text tasks [8, 16], it is natural to ask where those tools can be integrated to tackle the inputs of KGQA. The solution seems to be to treat SPARQL queries as plain text, as a natural language with its own vocabulary and grammar. Then, technologies like GPT [16] are a clear study path to solve the Scholarly QALD challenge. We managed to replicate the results obtained by Rajpal & Usbeck [14] and studied how to address the lack of multi-hops after entity linking in their approach. Inspired by the popularity of LLMs, we managed to isolate the most difficult characteristic of this task, created a training process to solve it, and propose future improvements to our approach.

3 Approach

Our approach uses the DBLP-QuAD dataset [3]. This data is composed of 10,000 items; every data point has a question, a paraphrased question, a SPARQL query, entities, relations and an answer (triples). This structure enables us to approach the challenge in many different ways, stimulating innovation. Instead of directly trying to obtain the triples that answer the question, we first focused on dividing the task and translating the natural language question into a SPARQL query.

3.1 Entity linking

Our first models did not do very well. Our hypothesis was that translating a sub-string entity into a link while trying to learn the SPARQL grammar was too much work for our model size (3.47 M parameters) and for the amount of training data we had (10,000 items). To solve that problem, we subdivided the task into entity linking and SPARQL generation. We assumed the entity linking task can be done with perfect accuracy and then performed the translation. We built a new dataset where the entities are replaced by their IRI; e.g., if the question was \u201cWhat are the papers written by the person Wei Li?\u201d, the new question will be \u201cWhat are the papers written by the person <entity-iri>?\u201d. This reduced the number of different tokens (vocabulary) from more than 46,000 to only 10,399, a significant reduction that can affect how the model performs. We used the entity linking methods of Rajpal & Usbeck [14, 4] with less overhead (and worse accuracy). We ruled out the TP61 template [3] data points due to errors detected in the SPARQL queries: entities were not used correctly in the template. Since the final challenge can certainly have TP61-type questions, this decision is a source of error. However, our goal is to find new training strategies that improve performance, and this choice affects all our models equally in the comparison.

6Scholarly QALD at ISWC 2023 https://kgqa.github.io/scholarly-QALD-challenge/2023/
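A minimal sketch of the entity-linking preprocessing described above follows, assuming gold (surface_form, iri) annotations per question as provided by DBLP-QuAD; the function name `link_entities` is ours, not part of any released code.

```python
def link_entities(question: str, entities: list[tuple[str, str]]) -> str:
    """Replace each entity surface form with its IRI token, shrinking the
    vocabulary, e.g. '... the person Wei Li?' -> '... the person <iri>?'."""
    for surface, iri in entities:
        question = question.replace(surface, f"<{iri}>")
    return question
```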
Dataset | Input example | Output example
Original | Show the Wikidata ID of the person Robert Schober. | SELECT DISTINCT ?answer WHERE { <...> <...> ?answer }
Entity linked | show the Wikidata ID of the person <...> | SELECT DISTINCT ?answer WHERE { <...> <...> ?answer }
Pre-train | <...> <...> ... | <...> <...> ...

Table 1: Input and output examples for the GPT model by dataset. The original dataset (20,000 items), the entity linked dataset (9,289 items) and the pre-train dataset (7,617 items).

From 20,000 items (questions and paraphrased questions), we ended up with 9,289 successful entity-linked data points in the new dataset. We minimized type I error, i.e., we did many checks to ensure perfect entity linking. In production or real testing we would bet on doing it right to improve accuracy, but in this phase we are building a new training dataset. It is evident that our EL performance is not state-of-the-art, but that was not our goal. The difference between the original and the entity-linked dataset is made clearer by Table 1. We also performed normalization on the questions, described in Section 4.

3.2 Query generation

With the entity linking phase done, a translation model has to learn the different types of query templates and complete the variable information with data from the input. The architecture of our model is based on the GPT implementation by Andrej Karpathy7, which is, in turn, inspired by the paper \u201cAttention Is All You Need\u201d [16]. Karpathy implemented a decoder-only transformer, so we completed the encoder-decoder model to take the questions as inputs on the encoder while the decoder generates the queries. The complete architecture is shown in Figure 1a. As we said, the model has to learn i) the query templates and ii) how to fill a template with the corresponding information. Simplifying, the second task can be described as an identity function f(string) = string over the entity IRIs found in the input. The problem with our first solution was that while learning the grammatical aspects of the problem, the model stopped learning the rest. Our GPT preferred lazily minimizing the number of wrong tokens over learning how to perform the entity replacement. We attribute this behavior to the fact that there are only a few entities per question/query, so their contribution to the loss function is very small. The parameter optimizer settled on shortcut learning [9] and could not improve further. It is well studied that pre-training can improve the performance of generative transformer models [1, 13, 8, 12], especially when fine-tuning for question answering follows [12]. Therefore, based on the literature [1, 13], our hypothesis is that performance should improve when we first train the model to perform the identity function on entities and then transfer that learning to the KGQA task. The number of different queries is exponentially large, hence the need for an AI model to do the translation. In contrast, under the Closed World Assumption [10] the number of entities is fixed. Thus, we pre-train the model on all entities to perform the identity task. At first we tried to teach the model the function f(entity) = entity, but since entities can appear anywhere in the input and output, the improvement was not meaningful. When we pre-trained the function f(entity entity entity ...) = entity entity entity ..., we saw an important performance enhancement. Our GPT implementation can be found at GitHub8. When combined with an entity linker and the desired KG database to query, the complete process looks like Figure 1b.
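Building the identity pre-training set can be sketched as follows, assuming `entities` is the closed-world list of 7,617 IRIs; the repetition count is our illustrative choice, bounded in practice by the model's 34-token input limit.

```python
def identity_examples(entities: list[str], repeats: int = 3) -> list[tuple[str, str]]:
    """One (input, target) pair per entity: both sides are the entity
    repeated, so the model learns f(entity entity ...) = entity entity ..."""
    pairs = []
    for iri in entities:
        seq = " ".join([iri] * repeats)
        pairs.append((seq, seq))  # encoder input == decoder target
    return pairs
```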
7Let\u2019s build GPT: from scratch, in code, spelled out https://youtu.be/kCc8FmEb1nY
8Code https://github.com/DiegoEmilio01/SPARQL-generation-with-pre-trained-GPT-for-KG-Question-Answering/

[Figure 1: Our solution, the components of our model and how to integrate it in a production setting. (a) Diagram of our GPT architecture, very similar to the original model. (b) Complete process when every phase is integrated; we focused on GPT.]

4 Experiments

In addition to the entity linking, we also modify the questions to achieve better generalization in the model. We eliminate punctuation symbols, first-word capitalization, and spaced parentheses, because our tokenization is by space-split words; for example, a question mark attached to a word and a space-separated question mark should yield the same entity. The distribution of the 9,289 data points is 8,648 for training (93.1%), 456 for validation (5.9%) and 185 for testing (2%). These proportions were found by trial and error; note that we did not follow the original distribution, due to the creation of the new entity-linked dataset. In the case of the pre-training, all 7,617 entities were used as training, validation and testing. The original dataset had 7,143 entities, but we added new IRIs from the final 500 questions of the Scholarly QALD challenge to the vocabulary, justified by our Closed World Assumption [10]. The average number of entities per query is 1.231. Examples of items in all datasets are tabulated in Table 1. In the case of the pre-train dataset, the inputs are 34 tokens long and the outputs 49 tokens, that is, the maximum input and output size of our model. Our vocabulary was originally of size 9,339, but with the inclusion of the final 500 questions it ended up being of size 10,399 (including new entities and other words). The model hyperparameters were also found by trial and error: dropout of 1%, learning rate of 0.0007, internal vector of dimension 128, 8 heads, and 4 layers each for the decoder and encoder. We instantiated two models, one with pre-training and one without, to compare improvements. Both were of size 3.47 million parameters, were trained for the same total number of epochs (19,200) and with the exact same data point distribution between training, validation and testing, in order to be comparable. In the case of the pre-trained model, it was trained 14,400 epochs on entities and 4,800 on the new dataset.

Model | Acc@1(%) | Acc@3(%) | aHD (# tokens) | Precision(%) | Recall(%) | F1
Not pre-trained | 31.892 | 43.784 | 1.724 | 0.502 | 0.605 | 0.005
Pre-trained | 49.189 | 62.703 | 2.011 | 0.725 | 1.291 | 0.009

Table 2: Results for the two types of models. The metrics are query accuracy at 1 shot, query accuracy at 3 shots, query average Hamming distance, challenge precision, challenge recall and challenge F1.

5 Results and Discussion

Our results are shown in Table 2. Pre-training the model improves almost every metric.
A 17.3% improvement on accuracy@1 is remarkable, though precision, recall and F1 are less meaningful, because getting a few questions with many triples right can produce a largely random increase in F1 like ours. We observed that the model without pre-training tried to minimize how many tokens it gets wrong, i.e., to improve the average Hamming distance. In contrast, with pre-training it learned how to maximize the number of exact-match answers despite getting more tokens wrong when generating an incorrect answer (hence the higher aHD). We have overcome the initial shortcut learning [9]. This suggests that changes to the loss function could help training; for example, rewarding exact matches more strongly, because in our experiments the reward difference between getting 1 and 2 tokens wrong is the same as between 0 and 1. Due to the type of model and training, we expected our model to extrapolate the rules to queries it has never seen before [6]. Considering our accuracy in testing, we think that it managed to learn the different templates. However, the identity function is still a problem when the author has never been seen in training. The architecture of our model allows it to be very good at few-shot [5], and we attribute the success of our results with so few data points (9,289) to this property. We tried to generate zero-shot queries with entities we knew were not in the dataset. Even when all IRIs were included in the pre-training, the model was not capable of generating the exact match, just the template. Moreover, when including the final 500 questions in the vocabulary we added 474 new entities to the problem, so this hints that the reason behind our poor results on the challenge is zero-shot entities. This analysis is also backed up by our metrics: the 4th column of Table 2 shows that, on average, the model gets approximately as many tokens wrong as the number of entities per query (1.231). Thus, considering that entities are the context in which the question should be answered, we identify comprehension-topic hallucinations [7] of the identity function at zero-shot as the main difficulty of this task. Our approach has the potential to be refined and combined to improve results; for example, it is heavily dependent on the performance of the entity linking phase before the GPT phase. Our entity linking process achieved 92.931% precision, 71.635% recall and 0.80905 F1. Better entity linkers could be used to improve performance, but since we are competitive with the state of the art this should not bring a large increase in performance. We propose better training to tackle zero-shot inaccuracies: since our framework operates under the Closed World Assumption [10], we can add a few queries per entity on top of our pre-training. This enrichment of the dataset should be enough to improve performance, especially on the final 500 questions of the Scholarly QALD challenge.

9[DBLP-QuAD] Scholarly QALD @ ISWC 2023 https://codalab.lisn.upsaclay.fr/competitions/14264

Furthermore, we think that the value of our results lies in at least returning the correct template for difficult questions and queries. In a qualitative analysis, most of the popular LLMs cannot help users formulate queries as hard as the ones in the DBLP-QuAD dataset [3]. In that sense, being competitive with the most popular LLMs using a comparatively very small model and training set shows an optimistic path for solving this task. Finally, a light model like ours can be trained with little data, time, and cost.
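A self-contained sketch of the test metrics reported in Table 2 follows: exact-match accuracy over generated SPARQL strings and the token-level average Hamming distance, using the same space-split tokenization as the model's vocabulary; the function names are our own.

```python
from itertools import zip_longest

def exact_match_accuracy(preds: list[str], golds: list[str]) -> float:
    """Fraction of generated queries that match the gold query exactly."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def avg_hamming(preds: list[str], golds: list[str]) -> float:
    """Mean number of token positions where prediction and gold differ;
    missing positions from length mismatches count as differences."""
    dists = [
        sum(a != b for a, b in zip_longest(p.split(), g.split()))
        for p, g in zip(preds, golds)
    ]
    return sum(dists) / len(dists)
```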
When a new open-source Knowledge Graph appears on the internet, it is not yet known by LLMs. The publishers of such a KG may therefore find this method useful for helping their non-programmer users by putting into production a GPT model that suggests queries.

Acknowledgement

The research was partially funded by the National Institute of Informatics, Japan. D.B. acknowledges partial support from ANID, Subdirecci\u00f3n de Capital Humano (Mag\u00edster Nacional, 2023, folio 22231282). Big thanks to Trinidad Gatica and El\u00edas Sabja for their help with code debugging."
+    },
+    {
+      "url": "http://arxiv.org/abs/2404.10305v2",
+      "title": "TC-OCR: TableCraft OCR for Efficient Detection & Recognition of Table Structure & Content",
+      "abstract": "The automatic recognition of tabular data in document images presents a\nsignificant challenge due to the diverse range of table styles and complex\nstructures. Tables offer valuable content representation, enhancing the\npredictive capabilities of various systems such as search engines and Knowledge\nGraphs. Addressing the two main problems, namely table detection (TD) and table\nstructure recognition (TSR), has traditionally been approached independently.\nIn this research, we propose an end-to-end pipeline that integrates deep\nlearning models, including DETR, CascadeTabNet, and PP OCR v2, to achieve\ncomprehensive image-based table recognition. This integrated approach\neffectively handles diverse table styles, complex structures, and image\ndistortions, resulting in improved accuracy and efficiency compared to existing\nmethods like Table Transformers. Our system achieves simultaneous table\ndetection (TD), table structure recognition (TSR), and table content\nrecognition (TCR), preserving table structures and accurately extracting\ntabular data from document images. The integration of multiple models addresses\nthe intricacies of table recognition, making our approach a promising solution\nfor image-based table understanding, data extraction, and information retrieval\napplications. Our proposed approach achieves an IOU of 0.96 and an OCR Accuracy\nof 78%, showcasing a remarkable improvement of approximately 25% in the OCR\nAccuracy compared to the previous Table Transformer approach.",
+      "authors": "Avinash Anand, Raj Jaiswal, Pijush Bhuyan, Mohit Gupta, Siddhesh Bangar, Md. Modassir Imam, Rajiv Ratn Shah, Shin'ichi Satoh",
+      "published": "2024-04-16",
+      "updated": "2024-04-19",
+      "primary_cat": "cs.CV",
+      "cats": [
+        "cs.CV"
+      ],
+      "label": "Original Paper",
+      "paper_cat": "Knowledge AND Graph",
+      "gt": "TC-OCR: TableCraft OCR for Efficient Detection & Recognition of Table Structure & Content",
+      "main_content": "INTRODUCTION

As the global digital transformation continues to progress, there is a notable and accelerating trend toward replacing traditional physical paper-based documents with their digitized counterparts. These digital documents frequently contain tables that display various formats and layouts, presenting a diverse range of information. Tables play a pivotal role in succinctly conveying extensive data, allowing readers to efficiently explore, compare, and comprehend the content. Nevertheless, the compact nature of tables often presents significant challenges for machine parsing and comprehension processes. Automatic Information Extraction from tables involves two essential sub-tasks: Table Identification and Table Structure Recognition.
Several studies [9, 11, 29, 32, 33] have made significant contributions to the advancement of table detection, while others [15, 23, 28] have focused on improving table structure recognition. These tasks are of utmost importance in the field of image analysis, as they facilitate the extraction of critical information from tables in a digital format. Table detection is concerned with accurately identifying the precise spatial region within an image that contains the table. Conversely, table structure recognition involves the precise identification of table rows and columns, thereby enabling the extraction of individual table cells. In the field of table recognition (TR), computer vision-based pattern recognition methods are used to efficiently exploit the data in table images. Table detection (TD), table structure recognition (TSR), and table content recognition (TCR) are the three main tasks involved in TR. TD focuses on locating tables within images, TSR aims to recognize their internal structures, and TCR involves extracting textual contents from the tables. The current emphasis is on developing end-to-end TR systems capable of seamlessly integrating all three sub-tasks. The primary goal is to address real-world scenarios where the system performs TD, TSR, and TCR simultaneously, thus enhancing the efficiency and effectiveness of table recognition in practical applications. Despite the advancements in current open-source and commercial document analysis algorithms, such as the Table Transformer model, certain limitations persist. For instance, due to the computational complexity and maximum sequence length constraint of Transformers, capturing long-range dependencies between cells can be challenging. As a result, lengthy tables may suffer from information loss, affecting the model's ability to understand the context accurately. Additionally, when encountering tables with numerous empty cells or sparse content, the model might struggle to distinguish meaningful empty cells from those with missing data. To address these limitations, we present our innovative solution that aims to overcome these challenges and enhance the overall performance of table analysis and recognition. With the help of our proposed approach, table extraction methods can gain a better understanding of the inherent characteristics of tables, leading to improved accuracy in detecting and extracting table structures from document images. The main contributions of this paper can be summarized as follows: \u2022 We have proposed a novel integrated pipeline that combines three state-of-the-art models: DETR, CascadeTabNet, and PP OCR v2, to achieve end-to-end table recognition from image-based data. This innovative pipeline effectively addresses the significant challenges posed by variations in table styles, intricate structures, and image distortions commonly encountered in document images. \u2022 Through rigorous experimentation and evaluation, we have demonstrated that our integrated pipeline outperforms existing methods in terms of both accuracy and efficiency for table recognition. The results highlight the pipeline's remarkable ability to preserve complex table structures and accurately extract tabular data from document images. These findings contribute to the advancement of image-based table recognition techniques and offer practical insights for handling diverse table layouts in real-world scenarios.
2 RELATED WORK The task of table structure identification has long been a challenging and unresolved issue within the document-parsing community, leading to the organization of several public challenges to address it [7, 10, 16]. The difficulty of this problem can be attributed to various factors. Firstly, tables exhibit a wide range of shapes and sizes, necessitating a flexible approach to effectively handle their diversity. This is particularly crucial when dealing with complex column and row headers, which can be highly intricate and demanding. Secondly, one of the complexities arises from the scarcity of data specifically tailored for table structure analysis. Nevertheless, there has been significant progress in recent years with the introduction of valuable datasets such as PubTabNet [40], FinTabNet [39], and TableBank [19], addressing this data deficiency. 2.1 Table Detection Several significant contributions have been made in the field of table detection for document analysis. Hao et al. [11] proposed a table detection method based on convolutional neural networks (CNN) specifically designed for PDF documents. Siddiqui et al. [30] introduced an innovative strategy that combines deformable CNN with faster region-based convolutional neural network (R-CNN) or feature pyramid network (FPN) to address the complexities arising from variable table sizes and orientations. Anand et al. [1] propose a noisy document-image dataset for document layout detection and show improved performance in detecting tables in document images. Hole\u010dek et al. [13] extended the application of graph neural networks to structured documents, focusing on bills, where they utilized graph convolutions to facilitate table understanding. Casado et al. [5] extensively explored object detection techniques, including Mask R-CNN, YOLO, SSD, and RetinaNet, and demonstrated that fine-tuning from a domain closer to the target domain can significantly improve table detection performance. Nguyen et al. [25] proposed TableSegNet, a compact fully convolutional network capable of simultaneously performing table separation and detection. Zhang et al. [38] introduced a YOLO-based table detection methodology that enhances spatial-arrangement learning and improves efficiency by including an involution operator in the network's core and using a straightforward feature pyramid network. These studies collectively showcase the effectiveness of deep learning models, such as CNN and YOLO, in the context of table detection. Moreover, they highlight the benefits of incorporating specific techniques like deformable CNN, graph convolutions, and involution, which have proven instrumental in overcoming the inherent challenges associated with this task. 2.2 Table Structure Recognition Early approaches to table structure recognition heavily relied on hand-crafted features and heuristic rules [14, 17, 36]. These methods were particularly suitable for simple table structures or predefined data formats. However, in recent times, inspired by the remarkable success of deep learning in various computer vision tasks like object detection and semantic segmentation, several novel deep learning-based methods [27, 29] have emerged for table structure recognition.
Figure 1: Our TC-OCR achieves simultaneous Table Detection (TD), Table Structure Recognition (TSR), and Table Content Recognition (TCR), preserving table structures and accurately extracting tabular data from document images.

Schreiber et al. (2017) [29] introduced DeepDeSRT, a two-fold system that effectively combines Faster R-CNN and FCN for accurate table detection and precise row/column segmentation. Raja et al. (2020) [27] presented TabStruct-Net, an innovative framework for recognizing table structures that incorporates customized cell detection and interaction modules to precisely identify cells and predict their row and column relationships with the other detected cells. These cutting-edge deep learning-based methods, exemplified by DeepDeSRT and TabStruct-Net, leverage the intrinsic capabilities of neural networks to significantly enhance table structure recognition by automatically learning relevant and discriminative features while capturing complex interrelationships within the tables. 2.3 Table Recognition Prior studies in table recognition have predominantly focused on non-end-to-end methodologies, dividing the problem into two distinct sub-tasks: table structure recognition and cell-content recognition. These approaches attempted to tackle each sub-problem independently using separate systems. TableMASTER, introduced by [12, 20, 21, 37], is a Transformer-based model specifically designed for table structure recognition. The method combines the Transformer model with a text-line detector to identify text lines within individual table cells. Furthermore, they employed a text-line recognizer based on the work of [21] to extract text content from the identified lines. Another Transformer-based model called TableFormer was proposed by [24], which not only recognizes table structure but also predicts the bounding boxes of each table cell. These predicted bounding boxes were then utilized to extract the cell contents from PDF documents, resulting in a comprehensive table recognition system. Recently, researchers have been shifting towards end-to-end approaches due to the advancements in deep learning and the increased availability of tabular data [22]. As an example, [22] introduced the encoder-dual-decoder (EDD) model, which is capable of jointly recognizing both table structure and content for each cell. In addition to the model, they also introduced the PubTabNet dataset, which specifically focuses on table recognition and is made accessible to the research community. Notably, the ICDAR 2021 competition on scientific literature parsing, organized by IBM Research in collaboration with IEEE ICDAR [2], has further contributed to advancements in table recognition. In summary, the field of table recognition has witnessed significant progress through various techniques, from non-end-to-end to end-to-end approaches, and the development of new datasets and competitions has been instrumental in driving further advancements. 3 DATASET Researchers have developed TableBank [19], an extensive standardized open-domain table benchmark dataset, to address the need for large-scale table analysis in various domains. The dataset surpasses existing human-labeled datasets in terms of size and contains 417,234 tables, each with its original document.
TableBank includes a diverse range of domains, such as business documents, official filings, and research papers. The dataset is created by manipulating the mark-up tags for tables present in electronic documents like Microsoft Word (.docx) and LaTeX (.tex) files. Bounding boxes are added using the mark-up language to provide high-quality labeled data. The image-based table analysis approach used in TableBank is versatile, as it can handle different document types, including PDF, HTML, PowerPoint, and scanned versions. This robustness allows for the extraction of tables from various sources, enabling large-scale table analysis tasks.

Figure 2: Architecture of the proposed methodology, which incorporates three distinct models: DETR for table detection, CascadeTabNet for table structure recognition, and PP OCRv2 for text detection and recognition. (The pipeline flows from a document image/PDF to table detection, then in parallel to table structure recognition and text detection/recognition, whose outputs are merged by word-to-table-cell mapping into a structured table output.)

4 METHODOLOGY We have developed a comprehensive pipeline that integrates three distinct models to address various challenges associated with diverse table styles, complex structures, and image distortions commonly encountered in document images. 4.1 DETR Object Detection Model The DEtection TRansformer (DETR) [4] revolves around key elements, including a set-based global loss that ensures unique predictions through bipartite matching and a transformer encoder-decoder architecture. Its authors presented a method for tackling object detection by formulating it as a direct set prediction problem. The approach employs an encoder-decoder architecture based on transformers, which are renowned for their effectiveness in sequence prediction tasks. Transformers [34] leverage self-attention mechanisms to explicitly model interactions between elements within a sequence. This characteristic makes transformers highly suitable for handling specific constraints in set prediction, such as eliminating duplicate predictions. By using this strategy, the detection pipeline reduces the requirement for manually created elements, such as anchor generation or non-maximum suppression, which frequently need prior task-specific expertise. We leverage DETR as an end-to-end, transformer-based solution for object detection, directly producing sets of bounding boxes and class labels. This ensures clear and distinct predictions, addressing issues related to duplicate detections. Moreover, the transformer encoder-decoder architecture significantly boosts detection performance by effectively capturing contextual relationships within the images. 4.2 CascadeTabNet We used CascadeTabNet [26], an advanced end-to-end deep learning framework, which effectively tackles both table recognition sub-problems using a unified model. This methodology accomplishes pixel-level table segmentation, accurately identifying each table instance within an input image. Additionally, it performs table cell segmentation, predicting segmented regions corresponding to individual cells, thereby enabling the extraction of the table's structural information. The model accomplishes cell region predictions collectively in a single inference pass.
Moreover, the model has the capability to classify tables into two types: bordered (ruling-based) and borderless (non-ruling-based) tables. For borderless tables, the model predicts cell segmentation directly. The key components of the architecture involve leveraging Cascade R-CNN [3], a multi-stage model specifically designed to address the challenges of high-quality object detection in convolutional neural networks (CNNs). Additionally, a modified version of HRNet [35] is incorporated, providing reliable high-resolution representations and multi-level features that prove beneficial for the semantic segmentation tasks related to table recognition. Through the fusion of these two approaches, CascadeTabNet achieves state-of-the-art performance in table recognition, effectively delivering precise table segmentation, cell segmentation, and accurate classification of table types. 4.3 PP OCRv2 The PP OCRv2 system [8] is designed to achieve high accuracy and computational efficiency for practical OCR applications. For text detection, it adopts Collaborative Mutual Learning (CML) and CopyPaste, a data augmentation method that has been successful in improving accuracy for object detection and instance segmentation tasks. CML involves training two student networks and a teacher network to develop a more robust text detector, and it also proves to be beneficial for text detection. Moreover, for text recognition, PP OCRv2 introduces the Lightweight CPU Network (PP-LCNet) [6], Unified-Deep Mutual Learning (U-DML), and CenterLoss. U-DML makes use of two student networks to improve text recognition precision, while CenterLoss helps reduce errors caused by visually similar characters. We used the PP OCRv2 model to perform text-to-cell mapping in three phases. In the first phase, the mapping process links words to table cells TC using centroid coordinates, ensuring accurate associations within the table boundary. As shown in Equation (1), the centroid TCN of table cell (i, j) is computed from the cell's corner coordinates:

$$\mathrm{TCN}_{i,j} = \left( \frac{\mathrm{TC}_{i,j}(x_1) + \mathrm{TC}_{i,j}(x_2)}{2},\; \frac{\mathrm{TC}_{i,j}(y_1) + \mathrm{TC}_{i,j}(y_2)}{2} \right) \quad (1)$$

In the next phase, a flexible threshold, set at half the cell's width and height, accommodates variations in word positioning. Equation (2) computes the centroid coordinates ECN of a text cell k as the average of the coordinates EC of its two reference corners:

$$\mathrm{ECN}_{k} = \left( \frac{\mathrm{EC}_{k}(x_1) + \mathrm{EC}_{k}(x_2)}{2},\; \frac{\mathrm{EC}_{k}(y_1) + \mathrm{EC}_{k}(y_2)}{2} \right) \quad (2)$$

Lastly, the assignment rule in Equation (3) preserves empty cells and avoids incorrect mappings, preventing text misalignment and enhancing word-to-cell precision:

$$\mathrm{EC}_{k} = \begin{cases} (R_i, C_j) & \text{if } |\mathrm{ECN}_k(x) - \mathrm{TCN}_{i,j}(x)| \le \frac{W(\mathrm{TC}_{i,j})}{2} \ \text{and}\ |\mathrm{ECN}_k(y) - \mathrm{TCN}_{i,j}(y)| \le \frac{H(\mathrm{TC}_{i,j})}{2} \\ \varnothing & \text{otherwise} \end{cases} \quad (3)$$

In the pipeline, we utilize PP OCRv2 for text detection and recognition purposes.
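As an illustration of Equations (1)-(3), the following is a minimal sketch of the centroid-based word-to-cell assignment; the function and variable names are ours, and the released pipeline may differ:

```python
# Minimal sketch (our own helpers, not the released pipeline) of the
# centroid-based word-to-cell mapping in Equations (1)-(3): a word box is
# assigned to cell (i, j) only if its centroid lies within half the cell's
# width/height of the cell centroid; otherwise it is left unmapped.
def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def map_word_to_cell(word_box, cells):
    """cells: dict mapping (row, col) -> (x1, y1, x2, y2) cell boxes."""
    wx, wy = centroid(word_box)
    for (row, col), cell_box in cells.items():
        cx, cy = centroid(cell_box)
        half_w = (cell_box[2] - cell_box[0]) / 2.0
        half_h = (cell_box[3] - cell_box[1]) / 2.0
        if abs(wx - cx) <= half_w and abs(wy - cy) <= half_h:
            return (row, col)
    return None  # empty/unmapped words are preserved rather than forced
```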
The text cells detected by PP OCRv2 are compared with the cells identified by CascadeTabNet. Once a correspondence is found between the detected text and the cells, we calculate their centroids. By determining the minimum distance between any two cells, we are able to accurately identify the structure, i.e., the placement of the text within the rows R and columns C. In our proposed methodology for image-based table recognition, we present a comprehensive pipeline that incorporates three distinct models: DETR for table detection, CascadeTabNet for table structure recognition, and PPOCR for text detection and recognition, as shown in Figure 2. This pipeline is specifically designed to tackle the challenges arising from various table styles, complex structures, and image distortions commonly encountered in document images. Initially, the input document, which can be in image or PDF format, is preprocessed to ensure a standardized input for subsequent analysis. The document image is then fed into the DETR model, an object detection approach, which accurately localizes tables by generating a fixed-size set of S predictions. It is crucial for S to be larger than the typical number of objects in an image. During training, the loss computation involves an optimization procedure that finds the optimal bipartite matching between the predicted and ground-truth objects. To address the limitations of existing table-structure identification models, we evaluated the Table Transformer [31], which introduces a robust table-structure decomposition algorithm. This algorithm is designed to be language-agnostic and effectively utilizes data from original PDF documents, enabling faster and more accurate text-cell extraction while establishing a direct link between table cells and their corresponding bounding boxes in the image. However, it is worth noting that the performance of the object detection decoder for table cells heavily relies on the availability of high-quality programmatic PDFs containing well-structured tabular content. In cases where the PDFs are poorly formatted or include non-standard table layouts, the model's performance may suffer, leading to less accurate content extraction.

$$\hat{\sigma} = \arg\min_{\sigma \in \mathfrak{S}_S} \sum_{i=1}^{S} \mathcal{L}_{\mathrm{match}}(y_i, \hat{y}_{\sigma(i)}) \quad (4)$$

In Equation (4), y denotes the ground-truth set of objects and \hat{y} = \{\hat{y}_i\}_{i=1}^{S} the set of S predictions. L_match(y_i, \hat{y}_{\sigma(i)}) is a pairwise matching cost between ground truth y_i and the prediction with index \sigma(i). The matching cost takes into account both the class predictions and the similarity of predicted and ground-truth boxes. Each element i of the ground-truth set can be seen as y_i = (c_i, b_i), where c_i is the target class label and b_i \in [0, 1]^4 is a vector that specifies the center coordinates of the ground-truth box together with its height and width relative to the image size.
For index \sigma(i), we define the probability of class c_i as \hat{p}_{\sigma(i)}(c_i) and the predicted box as \hat{b}_{\sigma(i)}.

$$\mathcal{L}_{\mathrm{Hungarian}}(y, \hat{y}) = \sum_{i=1}^{S} \left[ -\log \hat{p}_{\hat{\sigma}(i)}(c_i) + \mathbb{1}_{\{c_i \neq \varnothing\}}\, \mathcal{L}_{\mathrm{box}}(b_i, \hat{b}_{\hat{\sigma}(i)}) \right] \quad (5)$$

The remaining component of the matching cost and of the Hungarian loss (5) is L_box(\cdot), which scores the bounding boxes. In Equation (6), the \ell_1 loss and the generalized IoU loss L_iou are combined, with weights \lambda_iou, \lambda_L1 \in \mathbb{R}:

$$\mathcal{L}_{\mathrm{box}}(b_i, \hat{b}_{\sigma(i)}) = \lambda_{\mathrm{iou}}\, \mathcal{L}_{\mathrm{iou}}(b_i, \hat{b}_{\sigma(i)}) + \lambda_{L1}\, \| b_i - \hat{b}_{\sigma(i)} \|_1 \quad (6)$$

In this study, we propose a comprehensive approach for automatic table understanding in images. The process involves several key steps, starting with the detection of table regions through a region proposal technique. The identified table regions are then isolated from the original image and utilized as input for the CascadeTabNet model, a specialized deep-learning architecture designed for precise table structure recognition. CascadeTabNet is capable of accurately determining the number of rows and columns within a table and their corresponding spatial coordinates. Subsequently, we employ the PPOCR (Pixel-level Patch-wise Object Character Recognition) method for precise text detection and recognition within the identified table cells. PPOCR extracts the spatial coordinates of the detected text, and we establish a mapping process based on the nearest-neighbor approach to align this text with the original coordinates of the table cells obtained from CascadeTabNet. This integrated methodology offers a robust and efficient solution for the automatic extraction and understanding of tabular data from images, enhancing the organization and accessibility of such information in various applications.

$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{truth}} + \mathcal{L}_{\mathrm{dml}} + \mathcal{L}_{\mathrm{distill}} \quad (7)$$

Equation (7) consists of three losses: 1) Truth Loss, 2) DML Loss, and 3) Distill Loss. The Truth Loss ensures that training is supervised by the true labels. The DML Loss uses the KL divergence to compute the distance between the two student models. The Distill Loss reflects the supervision of the teacher model on the sub-student models.
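To make the three-part objective of Equation (7) concrete, here is a minimal PyTorch sketch of the truth, DML and distillation terms. It is our own simplification under assumed classification-style logits, not PaddleOCR's implementation:

```python
# Minimal sketch (our own simplification, not PaddleOCR's code) of the
# three-part objective in Equation (7): ground-truth supervision, KL-based
# deep mutual learning between the two students, and distillation from the
# frozen teacher.
import torch
import torch.nn.functional as F

def dml_loss(logits_s1, logits_s2):
    # Symmetric KL divergence between the two student distributions.
    p1 = F.log_softmax(logits_s1, dim=-1)
    p2 = F.log_softmax(logits_s2, dim=-1)
    return 0.5 * (F.kl_div(p1, p2.exp(), reduction="batchmean")
                  + F.kl_div(p2, p1.exp(), reduction="batchmean"))

def total_loss(logits_s1, logits_s2, logits_teacher, targets):
    # Truth loss: both students are supervised by the true labels.
    truth = F.cross_entropy(logits_s1, targets) + F.cross_entropy(logits_s2, targets)
    dml = dml_loss(logits_s1, logits_s2)
    with torch.no_grad():
        teacher = F.softmax(logits_teacher, dim=-1)  # teacher is not updated
    distill = 0.5 * (F.kl_div(F.log_softmax(logits_s1, -1), teacher, reduction="batchmean")
                     + F.kl_div(F.log_softmax(logits_s2, -1), teacher, reduction="batchmean"))
    return truth + dml + distill
```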
$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{ctc}} + \mathcal{L}_{\mathrm{dml}} + \mathcal{L}_{\mathrm{feat}} \quad (8)$$

The total loss function (8) consists of three parts ([8], Section 2.2): \u2022 CTC Loss: since both networks are trained from scratch, the CTC loss allows them to converge. \u2022 DML Loss: because the final output distributions of the two networks are expected to be identical, the DML loss is necessary to make their distributions consistent. \u2022 Feature Loss: since the two network designs are similar, their intermediate feature maps are expected to be similar; the feature loss reduces the gap between them. By leveraging the known structural characteristics of tables, we have devised a systematic pipeline for the precise, structured extraction of text from document images while preserving the original table organization. The pipeline consists of three interconnected models: table localization, structure recognition, and structured text detection and recognition. The extracted data is then presented in a CSV file format, adhering to the same structure as the original table in the document. The word-level accuracy WAcc is computed with the following formula:

$$\mathrm{WAcc} = \frac{X}{Y} \times 100 \quad (9)$$

where X is the number of words correctly recognized by OCR and Y is the total number of words in the ground truth. The proposed end-to-end solution demonstrates its effectiveness in image-based table recognition, addressing various challenges in the process. These challenges encompass table localization, structure recognition, and the accurate detection and recognition of text within the structured table. The successful implementation of this comprehensive approach allows for the accurate extraction of tabular data from document images, which in turn enhances data analysis and search engine capabilities, and contributes to knowledge graph enrichment. 5 EXPERIMENT We conducted a comparative analysis of inference time for our proposed model and the Table Transformer (TATR) [31] on the TableBank dataset [19] comprising 47,053 table images, as shown in Table 2. As observed, our model outperforms TATR in terms of efficiency, demonstrating faster inference times across all measured aspects. Specifically, our model achieves a maximum inference time of 12.7 seconds, a minimum of 5.42 seconds, and an average of 8.23 seconds. In contrast, TATR's corresponding figures are 15.48 seconds, 4.95 seconds, and 12.43 seconds, respectively.
Table 2: Inference time in seconds for our model compared against the Table Transformer (TATR) on the TableBank [19] dataset of 47,053 images.
Model | Max | Min | Avg
TATR [31] | 15.48 | 4.95 | 12.43
TC-OCR | 12.7 | 5.42 | 8.23

These findings underscore the effectiveness of our approach in jointly representing and integrating textual and visual information within tables, leading to enhanced performance and reduced inference times. The superior inference speed of our model positions it as a promising solution for real-world applications, where time-sensitive tasks demand swift and accurate data comprehension. We also carried out a comparative analysis of our proposed model against the state-of-the-art (SOTA) Table Transformer model, for which we took 8,000 samples from TableBank [19]. As shown in Table 3, our model outperforms the Table Transformer in both Intersection over Union (IOU) and Optical Character Recognition (OCR) accuracy, showcasing its superior performance. Specifically, our model achieves an impressive IOU of 0.96, indicating its effectiveness in accurately delineating and localizing table elements. Moreover, our model demonstrates a significant advancement in OCR accuracy, reaching an impressive 78%, thereby excelling in the crucial task of accurately recognizing and understanding the textual content within tables. Another comprehensive comparison between the Table Transformer (TATR) model and our proposed method, shown in Table 1, evaluates performance on different columns of a dataset containing a total of 240 images and 2,785 rows.

Table 1: Comprehensive comparison of results between the Table Transformer (TATR) model and our proposed method.
Column No. | No. of Images | Rows | TATR | TC-OCR | TATR Accuracy (%) | TC-OCR Accuracy (%) | Improvement (TC-OCR - TATR)
Total | 240 | 2785 | 1818 | 2485 | 65 | 89 | 24
2 | 100 | 1130 | 838 | 1075 | 74 | 95 | 21
3 | 100 | 1085 | 760 | 1010 | 70 | 91 | 21
4 | 40 | 570 | 220 | 400 | 39 | 70 | 31

Our method demonstrates superior accuracy across all columns, outperforming TATR significantly. Particularly noteworthy is the overall improvement achieved by our approach, with an impressive 24% increase in accuracy compared to TATR. These findings underscore the effectiveness of our proposed method in tackling the multimodal table problem, indicating its potential for enhancing data comprehension and extraction of meaningful insights from diverse tabular data. 6 CONCLUSION In conclusion, we propose an integrated pipeline for end-to-end image-based table recognition, leveraging the capabilities of three state-of-the-art models: DETR, CascadeTabNet, and PP OCR v2. By combining these models, we effectively tackle the challenges posed by diverse table styles and complex structures in document images. Our approach facilitates the accurate reconstruction of table layouts and the extraction of cell content from PDF or OCR through bounding boxes. Empirical evaluations demonstrate the superior performance and efficiency of our method compared to existing techniques, as it excels in preserving table structures and extracting tabular data with high efficacy. It is important to note that while our research serves as a strong foundation for advancing image-based table recognition, further refinements and optimizations are essential to enhance its applicability across a wider range of scenarios. Ultimately, our work contributes to the advancement of data extraction and comprehension in digitized documents, fostering innovation in the field of document analysis.
Table 3: Comparison of our model (TC-OCR) with the SOTA, trained on 8,000 samples of the TableBank [19] dataset.
Model | IOU | OCR Accuracy
Table Transformer [31] | 0.94 | 62%
Our Model (TC-OCR) | 0.96 | 78%

7 FUTURE SCOPE The multi-modal tables problem presents a significant challenge in the realm of AI research, necessitating effective understanding and processing of tables that incorporate both textual and visual elements, such as images or graphs [18]. Successfully addressing this challenge requires AI models to not only interpret the content within individual cells but also grasp the intricate relationships between textual and visual information. Therefore, the primary objective of this research is to devise novel methods that can jointly represent and seamlessly integrate these modalities, leading to more comprehensive data comprehension and extraction of meaningful insights across diverse domains. By delving into this unexplored territory, this study aims to pave the way for innovative approaches that advance the capabilities of AI systems in handling multimodal tables and offer valuable contributions to real-world applications. 8 ACKNOWLEDGMENT Dr. Rajiv Ratn Shah is partly supported by the Infosys Center for AI, the Center of Design and New Media, and the Center of Excellence in Healthcare at Indraprastha Institute of Information Technology, Delhi. We sincerely appreciate the guidance and unwavering support provided by Ms. Astha Verma and Mr. Naman Lal throughout our research. Their expertise and insightful feedback have greatly influenced the direction and quality of our study. We are grateful for their time, dedication, and willingness to share knowledge, which significantly contributed to the completion of this work. Their encouragement and constructive discussions served as a constant source of motivation, and we feel privileged to have benefited from their wisdom and mentorship. 9 LIMITATIONS One notable limitation of our proposed approach is its inability to accurately recognize complex tables with merged cells, nested tables, or irregular structures. Dealing with such intricate layouts poses challenges in comprehending the intricate relationships between cells and headers. As a result, our current method may not be suitable for handling these specialized cases, and further research and enhancements are required to address these complexities effectively." + }, + { + "url": "http://arxiv.org/abs/2404.05545v1", + "title": "Evaluating Interventional Reasoning Capabilities of Large Language Models", + "abstract": "Numerous decision-making tasks require estimating causal effects under\ninterventions on different parts of a system. As practitioners consider using\nlarge language models (LLMs) to automate decisions, studying their causal\nreasoning capabilities becomes crucial. A recent line of work evaluates LLMs\nability to retrieve commonsense causal facts, but these evaluations do not\nsufficiently assess how LLMs reason about interventions. Motivated by the role\nthat interventions play in causal inference, in this paper, we conduct\nempirical analyses to evaluate whether LLMs can accurately update their\nknowledge of a data-generating process in response to an intervention. We\ncreate benchmarks that span diverse causal graphs (e.g., confounding,\nmediation) and variable types, and enable a study of intervention-based\nreasoning. These benchmarks allow us to isolate the ability of LLMs to\naccurately predict changes resulting from their ability to memorize facts or\nfind other shortcuts.
Our analysis on four LLMs highlights that while GPT-4\nmodels show promising accuracy at predicting the intervention effects, they\nremain sensitive to distracting factors in the prompts.", + "authors": "Tejas Kasetty, Divyat Mahajan, Gintare Karolina Dziugaite, Alexandre Drouin, Dhanya Sridhar", + "published": "2024-04-08", + "updated": "2024-04-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "stat.ME" + ], + "label": "Original Paper", + "paper_cat": "Knowledge AND Graph", + "gt": "Evaluating Interventional Reasoning Capabilities of Large Language Models", + "main_content": "Introduction Large language models (LLMs) based on Transformers demonstrate remarkable1 performance on a variety of human-relevant tasks, from conversing to summarizing web-based information and answering complex questions (Chen et al., 2021; Brown et al., 2020; Li et al., 2022; Katz et al., 2024). These breakthroughs have prompted an interest in integrating LLMs into decision-making. Consider two hypothetical scenarios: Example 1 (Scientific discovery assistant) A group of researchers provide an LLM with a text representation of a knowledge graph that is available in their discipline based on current scientific evidence. They want the LLM to generate candidate experiments based on putative cause-and-effect relationships. Importantly, as they come across new experimental conditions tried in the literature, they want the LLM to provide updated recommendations. Example 2 (Automated A/B testing) An e-commerce site runs A/B tests on features of their site, collecting relevant engagement metrics as outcomes. They want an LLM to process these tabular datasets that record which experimental condition each user was assigned to and what their outcome was, and determine which feature value between A and B to adopt. Importantly, they plan to re-query the LLM habitually based on evidence from newly run A/B tests. Both scenarios require reasoning about how new experimental conditions generated by interventions change the state of our knowledge. Example 1 involves assessing which causal links become plausible upon receiving information about interventions that were performed. Example 2 requires knowing how interventions \u2013 new A/B tests in this case \u2013 change the causal conclusions we previously drew. In these examples, to support decision-making, LLMs need to integrate information about interventions and appropriately update their beliefs about the data-generating model they have. (\u2217Correspondence to tejas.kasetty@mila.quebec. 1Whether or not LLMs perform remarkably in an absolute sense is an active scholarly debate. Here, we mean that LLMs performance on tasks such as question-answering, summarization or serving as conversational agents is remarkable relative to earlier variants of statistical language models.) Contributions. Motivated by the role that the calculus of interventions plays in decision-making, this paper makes the following contributions: 1. We introduce intervention effect (IE) prediction, where, given a causal graph and an intervention on a variable in that graph, one must determine how the directed path between two specific variables changes. In essence, IE prediction provides a concrete task that tests the ability of LLMs to appropriately update their beliefs after receiving information about an intervention experiment. 2.
To assess the degree to which LLMs can accurately reason about interventions, we propose a methodology to turn instances of these classification problems into a prompt and introduce an intervention effect prediction benchmark. 3. Recognizing the sensitivity that LLMs show to prompt design, we design studies to further disentangle the effects on IE prediction performance due to spurious properties of the prompt, like the names of variables and the causal relations they form. This paper contributes to the emerging body of work on evaluating algorithmic causal reasoning in LLMs (Cai et al., 2023; Jin et al., 2023; 2024; Binz & Schulz, 2023). In Section 5, we summarize the connections to several threads of related work. The empirical studies we conduct on four LLMs suggest that, under certain choices for generating prompts based on intervention effects, variants of GPT-4 achieve promising accuracy at predicting what happens to causal graph relationships under interventions. However, we find that LLMs are sensitive to the choice of variable names when we instantiate prompts. When prompts contain facts that LLMs might have been trained on, their performance drops significantly. This result underscores the importance of careful design when creating benchmarks to avoid drawing spurious conclusions. 2 Preliminaries We summarize the aspects of large language models (LLMs) and causality that are necessary to understand the empirical study we design in this paper. Large language models. For our purposes, it suffices to view LLMs (Touvron et al., 2023; OpenAI, 2023) as models of conditional distributions P\u03b8(Y = y|x), where x is a sequence of tokens called the context, the random variable Y is the next item in the sequence, and y is a token (the smallest unit that text is broken down to for LLM processing). The parameter vector \u03b8 captures all relevant LLM parameters, such as the self-attention and multi-layer perceptron (MLP) weights at each layer. In this paper, we focus on inference, the task of evaluating P\u03b8\u2217(Y = y|x) on unseen contexts x given an LLM with trained parameters \u03b8\u2217. (2The parameters \u03b8\u2217approximately maximize the log probability P\u03b8(Y = y|x) on web-scale text.) Specifically, we design benchmarks where the contexts pose yes/no causal reasoning questions and we evaluate the output produced by the LLM, generated by y \u223c P\u03b8\u2217(Y = y|x). Causal directed acyclic graphical models and perfect interventions. Causal directed acyclic graphical models (causal DAGs) provide a way to encode a particular class of interventions using minimal changes to the graph structure (Peters et al., 2017). More technically, in a causal DAG G = (V, E) over variables V with directed edges in the set E, we represent a perfect intervention do(Vi = v) that sets the value of variable Vi to v by modifying the DAG G to delete the edges going into Vi.

Figure 1: Causal DAGs. In the empirical studies, we define intervention effects based on three causal DAGs: bivariate, confounding and mediation graphs.

While causal DAGs and perfect interventions do not allow us to express all possible causal questions, we focus on them in this paper because the presence/absence of edges is well-suited to constructing binary classification tasks, which simplify LLM evaluation by enabling us to evaluate the probability of just two tokens, yes and no, rather than needing to evaluate arbitrary utterances.
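As a concrete example of scoring just the two tokens yes and no under P\u03b8(Y = y|x), here is a minimal sketch with an open-source causal LM; the model choice, and taking only the first sub-token of each candidate, are our simplifying assumptions:

```python
# Sketch of treating a language model as P_theta(Y = y | x) and comparing
# the next-token probabilities of "yes" vs "no". GPT-2 is an illustrative
# stand-in for the (closed) models studied in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def p_next(context, candidates=(" yes", " no")):
    ids = tok(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits[0, -1]     # logits for the next token
    probs = logits.softmax(-1)
    # Approximation: score only the first sub-token of each candidate.
    return {c.strip(): probs[tok.encode(c)[0]].item() for c in candidates}
```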
Understanding how interventions modify the data-generating process is key to knowing which causal inferences a dataset can (or cannot) support (Pearl, 2009; Peters et al., 2017). To understand this distinction, consider the confounding DAG in the middle of Figure 1. Suppose this graph represents the beliefs of a scientist who is trying to reason about a possible causal relationship between variables B and C. In the absence of experimental data, they know that their inference is hopeless, since a third variable A, called a confounding variable or confounder, can equally well explain all of the association observed between the variables B and C. However, if they learned that an experiment had been conducted to intervene on B, the scientist knows that associations between B and C would now capture the causal effect of B on C, since the intervention removed the variable A's confounding influence on B. If, unlike human experts, LLMs cannot reliably predict the graphical changes made by intervening, they run the risk of drawing invalid conclusions for causal questions that are in fact non-identifiable given available evidence. In Section 4, we design empirical studies based on the bivariate DAG on the left of Figure 1, the confounding DAG in the center of Fig. 1, and the mediation DAG on the right of Fig. 1, where a third variable B mediates the effect that A has on C. In the bivariate DAG, intervening on B severs the link between A and B, and in the mediation DAG, intervening on B similarly means that the variable A can no longer affect C. 3 Methodology: Defining the causal reasoning task In this section, we introduce and define intervention effects, which quantify the changes to graphical relations entailed by a causal DAG G under different do-interventions. 3.1 Intervention effect To formalize effects that interventions have on causal DAGs, we need to define graphical features of DAGs that are of interest. Definition 1 (Causal relation) Given a causal DAG G = (V, E), a causal relation C_uv(G) involving variables V_u and V_v in DAG G is 1 if there is a directed path V_u \to \dots \to V_v in G (where each edge along the path is in G) and 0 otherwise. In words, a causal relation captures whether or not a variable exerts an indirect or direct causal influence on another variable in a particular DAG G that contains these variables. In this paper, we focus on causal relations as a way to assess how interventions affect causal DAG structure. With causal relations in hand, we now define the main quantity of interest in this paper, the intervention effect (IE). Definition 2 (Intervention effect) Given a causal DAG G = (V, E), a variable V_i on which we perform a perfect intervention captured by do(V_i = \u2217) (where we use \u201c*\u201d to indicate that we do not care about the value that V_i is set to), and a query causal relation C_uv, an intervention effect (IE) is

$$\kappa_i^G(C_{uv}) = C_{uv}(G) - C_{uv}(G_i), \quad (1)$$

where the DAG G_i is the modification of DAG G under an intervention to the variable V_i. An IE \u03ba_i^G, which is defined relative to a base DAG G and intervention target V_i, is a function of a particular causal relation C_uv of interest. Since interventions only require us to remove the edges incoming to V_i, note that we expect many intervention effects \u03ba_i^G(C_uv) to evaluate to 0, indicating that the intervention left at least one directed path between the variables V_u and V_v unaffected.
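Definitions 1 and 2 are straightforward to operationalize. The following is a minimal sketch using networkx; the library choice is ours, as the paper does not specify an implementation:

```python
# Minimal sketch of Definitions 1 and 2 (our choice of library, not the
# paper's released code).
import networkx as nx

def intervene(G: nx.DiGraph, target) -> nx.DiGraph:
    """Perfect intervention do(target = *): delete all edges into target."""
    G_i = G.copy()
    G_i.remove_edges_from(list(G_i.in_edges(target)))
    return G_i

def causal_relation(G: nx.DiGraph, u, v) -> int:
    """C_uv(G): 1 iff a directed path from u to v exists in G."""
    return int(nx.has_path(G, u, v))

def intervention_effect(G: nx.DiGraph, i, u, v) -> int:
    """kappa_i^G(C_uv) = C_uv(G) - C_uv(G_i), Equation (1)."""
    return causal_relation(G, u, v) - causal_relation(intervene(G, i), u, v)

# Example: the mediation DAG A -> B -> C. Intervening on B cuts A's
# influence on C, so the IE for the relation C_AC equals 1.
G = nx.DiGraph([("A", "B"), ("B", "C")])
assert intervention_effect(G, "B", "A", "C") == 1
```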
3.2 LLM predictor of intervention effects In this paper, we ask the question: how can we leverage LLMs to predict intervention effects (IEs), given that they take textual contexts as input? To evaluate LLMs on this prediction task, we need to instantiate mapping functions,

$$m_G : G \mapsto x_G; \quad m_I : V_i \mapsto x_i; \quad m_C : C_{uv}(G) \mapsto x^G_{uv}, \quad (2)$$

which take a DAG G, an intervention target V_i and a causal relation C_uv(G) in DAG G as input and output text sequences x_G, x_i and x^G_uv that verbalize these facts as inputs to an LLM. Additionally, the concatenation function c(x_G, x_i, x^G_uv) produces a coherent context x while keeping the information in the input sequences intact. In the empirical studies in Section 4, we instantiate these mapping functions concretely, but for now, assume that these mapping functions are given. To ease notation later, we will define one more function,

$$O(x) = \mathbb{1}[\, y \sim P_\theta(Y = y \mid x) = \text{yes} \,], \quad (3)$$

which returns 1 when the next word sampled by the LLM given the context x is yes. An LLM-based predictor of the intervention effect is a function,

$$\hat{\kappa}_i^G(C_{uv}) = O(c(x_G, x^G_{uv})) - O(c(x_G, x_i, x^{G_i}_{uv})). \quad (4)$$

In words, we turn LLMs into predictors by contrasting their responses to two different sequences x and x_0. In the sequence x_0 = c(x_G, x^G_uv), we describe a causal DAG G, describe no intervention, and ask about a causal relation in this base graph G. In the other sequence x = c(x_G, x_i, x^{G_i}_uv), we describe the same causal DAG G, but now introduce the description of an intervention on some target V_i, and ask about the same causal relation, but in the modified graph G_i. Note that we never describe the complete post-intervention graph (x_{G_i}) in the sequence x. Hence, the LLM has to infer G_i and subsequently use it to reason about the causal relation post intervention (C_uv(G_i)). 3.3 Accuracy metric for LLM prediction With a concrete question, intervention effects, in mind and a strategy for producing predictions of intervention effects using LLMs, what remains is defining a metric that measures the fidelity of LLM predictions. That is, we want to compare an estimated IE \hat{\kappa}_i^G(C_uv) against the ground-truth IE \kappa_i^G(C_uv). The straightforward and natural choice for such a comparison would be accuracy,

$$\mathrm{acc}(G, i, u, v) = \mathbb{1}[\kappa_i^G(C_{uv}) = \hat{\kappa}_i^G(C_{uv})]. \quad (5)$$

However, this accuracy metric can return 1 in scenarios such as

$$C_{uv}(G) = 1, \quad C_{uv}(G_i) = 1, \quad \hat{C}_{uv}(G) = 0, \quad \hat{C}_{uv}(G_i) = 0, \quad (6)$$

where (with some slight abuse of notation) \hat{C}_uv(\u00b7) denotes the LLM's prediction about whether or not a causal relation is true in a given DAG.

Figure 2: An illustration of the mapping functions that verbalize the information for a single intervention-effect estimation task into a prompt. (The template describes the graph, x_G = m_G(G), e.g., "I'm going to describe a causal directed acyclic graphical model. There are three variables: [A], [B] and [C]. [A] causes [B]. [A] causes [C]. These are all the causal relationships in the graph."; the intervention, x_i = m_I(V_i), e.g., "Suppose we perform the perfect intervention, do([B] = b)."; and the query, x^{G_B}_uv = m_C(C_uv(G_B)), e.g., "In this intervened causal graphical model, does [B] cause a change in [C]?". Variable names [A], [B], [C] are sampled from a candidate set V using an experiment-specific sampling strategy.)

In this example, a target causal
relation C_uv is true in both the base causal DAG G and its modified, post-intervention counterpart G_i, but the LLM incorrectly predicts that these causal relations are false under both graphical scenarios. The accuracy metric misleads us when the LLM correctly predicts that the causal relation does not vary, but does not parse the causal relation correctly in either graphical scenario. Thus, to ensure that the accuracy is 0 in such cases, we slightly modify the accuracy metric to be

$$\mathrm{acc}(G, i, u, v) = \mathbb{1}[\kappa_i^G(C_{uv}) = \hat{\kappa}_i^G(C_{uv})] \cdot \mathbb{1}[C_{uv}(G) = \hat{C}_{uv}(G)]. \quad (7)$$

4 Experiments We introduce benchmarks that verbalize intervention effects as prompts to LLMs. Using these benchmarks, we empirically investigate four research questions: 1) how accurately do LLMs predict causal relation changes under interventions to causal DAGs on benchmarks generated with randomly chosen variable names? 2) to what extent is LLM performance affected when the instantiated causal relations reflect potentially memorized facts? 3) could LLMs be predicting the effects of interventions accurately by learning a shortcut instead? 4) how robust are LLMs to the way interventions are described in-context? We find that while some LLMs demonstrate good performance on the first benchmark, if the prompts are designed so that some textual graph relationships are well-known causal facts, the performance can drop, pointing to the importance of stress-testing LLMs when evaluating them for abilities such as causal reasoning. We provide the full experimental details in Appendix A and additional experimental results in Appendix B. (3Code for experiments: https://github.com/tejaskasetty/interventional-reasoning-llms) 4.1 Dataset Generation In this paper, we design three benchmarks, each consisting of three scenarios that correspond to the three causal DAGs in Figure 1. In each scenario, we generate intervention effects by considering an intervention on each variable in the DAG and enumerating over all causal relations in the DAG per intervention target. This corresponds to 22 intervention effects across all scenarios. To create prompts for each intervention effect, we need to define the mapping functions in Equation (2). Figure 2 illustrates the template we use to map DAGs, interventions, as well as causal relations to prompts. To instantiate these templates, we need to sample names for the variables in the DAG. The three benchmarks differ in how these names are sampled, with each benchmark designed to help us investigate a different research question. We summarize the differences across benchmarks: 1. Random (D1): variable names are randomly chosen English characters. 2. T\u00fcbingen (D2): variable names are chosen from entities that appear in the T\u00fcbingen pairs (TP) dataset of causal relations (Mooij et al., 2016). 3. Anti-commonsense (D3): variable names are chosen so that some of the causal relations are the opposite of those found in TP. For each benchmark and intervention effect therein that we want to turn into a prompt, we sample variable names fifteen times, so as to have multiple independently generated examples per intervention effect (allowing us to report significant differences later). 4.2 Empirical studies on benchmarks We investigate the four research questions (RQs) on the proposed benchmarks, studying four LLMs: GPT-3.5, GPT-4, GPT-4-turbo (OpenAI, 2023), and LLaMA-2 (Touvron et al., 2023). For reporting the IE prediction accuracy, unless mentioned otherwise, we aggregate the performance over different causal relation queries using the mean for each scenario (causal graph, intervened variable). Further, we always report the mean \u00b1 standard error across 15 choices for the prompt entities. RQ1: How accurate are LLMs at predicting the effects of interventions? To study this first RQ, we evaluate the performance of the four LLMs on D1, the Random benchmark, measuring their accuracy at predicting intervention effects using the metric we introduced in Equation (7). Table 1 summarizes these results.

Table 1: IE prediction accuracy on the Random (D1) benchmark. GPT-4 variants are the best-performing models, while LLaMA-2 appears to struggle with interventional reasoning.
Model | Bivariate A | Bivariate B | Confounding A | Confounding B | Confounding C | Mediation A | Mediation B | Mediation C
GPT-3.5 | 0.83 \u00b1 0.08 | 0.87 \u00b1 0.06 | 0.80 \u00b1 0.09 | 0.69 \u00b1 0.12 | 0.36 \u00b1 0.09 | 0.58 \u00b1 0.11 | 0.36 \u00b1 0.12 | 0.67 \u00b1 0.12
GPT-4 | 1.0 \u00b1 0.0 | 1.0 \u00b1 0.0 | 1.0 \u00b1 0.0 | 1.0 \u00b1 0.0 | 1.0 \u00b1 0.0 | 0.78 \u00b1 0.09 | 0.82 \u00b1 0.08 | 0.96 \u00b1 0.03
GPT-4-turbo | 1.0 \u00b1 0.0 | 0.97 \u00b1 0.03 | 0.96 \u00b1 0.03 | 0.93 \u00b1 0.05 | 1.0 \u00b1 0.0 | 0.98 \u00b1 0.02 | 1.0 \u00b1 0.0 | 1.0 \u00b1 0.0
LLaMA-2 | 0.50 \u00b1 0.12 | 0.40 \u00b1 0.12 | 0.56 \u00b1 0.12 | 0.53 \u00b1 0.11 | 0.16 \u00b1 0.06 | 0.69 \u00b1 0.09 | 0.56 \u00b1 0.12 | 0.64 \u00b1 0.12

We see that GPT-based models perform notably better than LLaMA-2, with GPT-4-turbo demonstrating near-perfect accuracy across all effects. LLaMA's performance suggests that it is not a reliable model for predicting the impacts of interventions, at least in a zero-shot way. In what follows in the main paper, we focus on results for GPT models, deferring LLaMA results to the appendix (Section B). RQ2: To what extent is LLM performance affected by possibly memorized causal relations? While we might be tempted to conclude that GPT-4 reliably predicts changes to models after interventions are performed, we consider spurious factors that can affect model performance. In particular, K\u0131c\u0131man et al. (2023) found that GPT models reliably retrieved information about TP causal relations, suggesting that these relations could have been included in the training data for LLMs. (We reproduce their findings in Appendix Table 6.) This leads to a worrying possibility: after interventions, could LLMs fail to update their beliefs about causal relations that they have potentially memorized? To study this RQ, we consider the anti-commonsense benchmark, D3. We consider the intervention effects defined as follows: 1) for the bivariate DAG in Figure 1, an intervention on B with the query causal relation C_AB; and 2) for the mediation DAG in Figure 1, interventions on B and on C with the query causal relation C_AC.

Table 2: IE prediction performance on specific scenarios for benchmarks (D2, D3) to understand the role of memorization. Even the best-performing models in Table 1, like GPT-4 and GPT-4-turbo, are negatively impacted by memorization effects in the Mediation scenario.
Model | Graph | Intervened Variable | T\u00fcbingen (D2) | Anti-commonsense (D3)
GPT-3.5 | Bivariate | B | 0.83 \u00b1 0.10 | 0.63 \u00b1 0.12
GPT-3.5 | Mediation | B | 0.33 \u00b1 0.12 | 0.40 \u00b1 0.13
GPT-3.5 | Mediation | C | 0.67 \u00b1 0.12 | 0.67 \u00b1 0.12
GPT-4 | Bivariate | B | 1.0 \u00b1 0.0 | 0.97 \u00b1 0.05
GPT-4 | Mediation | B | 1.0 \u00b1 0.0 | 0.73 \u00b1 0.11
GPT-4 | Mediation | C | 1.0 \u00b1 0.0 | 1.0 \u00b1 0.0
GPT-4-turbo | Bivariate | B | 0.90 \u00b1 0.08 | 0.57 \u00b1 0.13
GPT-4-turbo | Mediation | B | 1.0 \u00b1 0.0 | 1.0 \u00b1 0.0
GPT-4-turbo | Mediation | C | 1.0 \u00b1 0.0 | 1.0 \u00b1 0.0
Crucially, these intervention effects were chosen so that if an LLM attends only to the plausibly memorized causal relation, e.g., C_AB(G) in the bivariate DAG G, then under an intervention on B, where the causal relation C_AB(G_B) changes, the LLM will be penalized if it relies on its memorized fact. This reasoning holds true for the intervention effects defined on the mediation DAG as well. We contrast the performance that LLMs achieve on this anti-commonsense benchmark against their performance on the T\u00fcbingen benchmark, D2. In D2, although the names of variables come from the same distribution (the TP dataset), the target causal relations such as C_AB are not ground-truth relations in TP, meaning that LLMs cannot rely on their memory when being evaluated on this benchmark. If we saw no statistically significant differences in prediction when contrasting the same exact intervention effect with prompts instantiated according to D2 versus D3, then we would have stronger evidence that LLMs might not be distracted by the presence of plausibly memorized causal relations. Table 2 summarizes the results across the two benchmarks. We see that the previously top-performing GPT models are all affected by distracting causal relations when considering effects in the bivariate DAG, with GPT-4-turbo's accuracy decreasing to 57%. Interestingly, these models fare better in the mediation DAG setting. However, these findings point to the significant impacts that distracting aspects of prompts can have on LLM reasoning, suggesting continued caution if using LLMs for decision-making tasks. RQ3: Could LLMs be learning a shortcut to predict intervention effects? Consider the confounding DAG (Figure 1) and the causal relation C_BC(G), which does not change under an intervention on the variable A. In such examples, LLMs that accurately parse causal relations from text descriptions of DAGs would also obtain accurate IE estimates. Hence, predicting causal relations from the input graph in text (call this task relation retrieval) offers a shortcut: an LLM can attend to tokens in the context to solve relation retrieval and still perform well at IE estimation, thereby confounding the conclusions that can be drawn with this benchmark. However, the intervention effects we defined to study RQ2 offer an insight into how we can disentangle relation retrieval and accurate IE prediction. Notice that the IEs we defined to study RQ2 characterize scenarios where causal relations differ between the base DAG G and the post-intervention DAG. Thus, LLMs that rely only on relation retrieval cannot accurately estimate these IEs. Building on this insight, we divide all the IEs \u03ba_i^G(C_uv) into two groups: 1. IE = 0: the graph does not change as a result of the intervention; C_uv(G) = C_uv(G_i). 2. IE = 1: the graph changes as a result of the intervention; C_uv(G) \u2260 C_uv(G_i). Note that a drop in performance on the group IE = 1 compared to the group IE = 0 can indicate reliance on shortcuts based on relation retrieval. We consider the average performance of the LLMs on the effects in each group, selecting only those LLMs that achieve an accuracy \u2265 0.95 on relation retrieval (which we report in Appendix Table 9). We focus on the Random benchmark (D1) to exclude any impacts due to memorized variable names. Table 3 summarizes this study.
We find that the general trend does not show strong reliance on shortcuts across LLMs; only GPT-3.5 has a significant drop in its relative performance on the group (IE = 1) versus the group (IE = 0) for the confounding DAG case.

Table 3: IE prediction performance across sub-cases to isolate the effect of relation retrieval on the Random (D1) benchmark. LLMs do not significantly rely on shortcuts related to relation retrieval from the input prompt. Since intervening on variable A never changes the causal graph in any scenario, we do not consider those cases for this analysis and leave them blank.

Graph Type          | Bivariate   | Confounding | Confounding | Mediation   | Mediation
Intervened Variable | B           | B           | C           | B           | C
GPT-3.5, IE = 0     | 0.93 ± 0.06 | 0.6 ± 0.13  | 0.47 ± 0.13 | 0.33 ± 0.12 | 0.67 ± 0.12
GPT-3.5, IE = 1     | 0.8 ± 0.1   | 0.67 ± 0.12 | 0.2 ± 0.1   | 0.37 ± 0.12 | 0.67 ± 0.12
GPT-4, IE = 0       | 1.0 ± 0.0   | 1.0 ± 0.0   | 1.0 ± 0.0   | 0.87 ± 0.09 | 1.0 ± 0.0
GPT-4, IE = 1       | 1.0 ± 0.0   | 1.0 ± 0.0   | 1.0 ± 0.0   | 0.8 ± 0.1   | 0.93 ± 0.06
GPT-4-turbo, IE = 0 | 1.0 ± 0.0   | 0.96 ± 0.05 | 1.0 ± 0.0   | 1.0 ± 0.0   | 1.0 ± 0.0
GPT-4-turbo, IE = 1 | 0.93 ± 0.06 | 0.93 ± 0.06 | 1.0 ± 0.0   | 1.0 ± 0.0   | 1.0 ± 0.0

Table 4: IE prediction accuracy on the Random (D1) benchmark for the substitution task. The performance of GPT-3.5 and GPT-4 is worse under substitution compared to the non-substitution case (Table 1), while GPT-4-turbo does not show a significant change.

Graph Type          | Bivariate   | Bivariate   | Confounding | Confounding | Confounding | Mediation   | Mediation   | Mediation
Intervened Variable | A           | B           | A           | B           | C           | A           | B           | C
GPT-3.5             | 0.67 ± 0.12 | 0.83 ± 0.06 | 0.56 ± 0.11 | 0.58 ± 0.11 | 1.0 ± 0.0   | 0.62 ± 0.10 | 0.44 ± 0.10 | 0.4 ± 0.12
GPT-4               | 0.97 ± 0.03 | 1.0 ± 0.0   | 0.89 ± 0.05 | 0.82 ± 0.07 | 1.0 ± 0.0   | 0.96 ± 0.03 | 0.76 ± 0.10 | 1.0 ± 0.0
GPT-4-turbo         | 0.9 ± 0.07  | 0.97 ± 0.03 | 0.96 ± 0.03 | 1.0 ± 0.0   | 1.0 ± 0.0   | 1.0 ± 0.0   | 1.0 ± 0.0   | 1.0 ± 0.0

RQ4: Are LLMs robust to descriptions of interventions in-context? Consider the mapping function M_I(·) that verbalizes an intervention in Figure 2. We ask whether LLMs can achieve the same performance on the intervention effects in the Random benchmark if we vary the mapping function M_I(·) to instead "teach" an LLM (Patel et al., 2023) about a new graphical operation that behaves identically to an intervention. We randomly generate strings to instantiate this operation. Figure 3 in the appendix illustrates how we generate prompts for this task, which we refer to as the substitution task. Table 4 summarizes IE prediction accuracy for the substitution task on the Random benchmark D1. Contrasting these results against Table 1, we see that the performance of GPT-4 and GPT-3.5 generally suffers. However, GPT-4-turbo appears to be robust to changes in the way interventions are described.
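The substitution prompts themselves appear only in the paper's appendix figure; purely as an illustration of the idea (the template wording, function names, and example graph description below are our own inventions, not the authors' prompts), one can rename the intervention operation with a random string and define it in-context:

```python
# Illustrative sketch of the substitution idea: teach the model a
# nonsense-named operation that behaves exactly like an intervention.
# The wording is ours; the paper's actual prompts differ.
import random
import string

def random_op_name(length: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=length))

def substitution_prompt(graph_desc: str, target: str, query: str) -> str:
    op = random_op_name()
    return (
        f"Consider the causal graph: {graph_desc}\n"
        f"We define an operation '{op}' on a variable: applying '{op}' "
        f"removes every arrow pointing into that variable and leaves the "
        f"rest of the graph unchanged.\n"
        f"We apply '{op}' to {target}.\n"
        f"Question: in the resulting graph, {query} Answer Yes or No."
    )

print(substitution_prompt("A causes B, and B causes C.", "B",
                          "does A cause B?"))
```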
5 Related Work

This paper relates most closely to recent papers that develop benchmarks to evaluate LLMs on various causal reasoning tasks. Kıcıman et al. (2023) introduced multiple causal reasoning benchmarks for LLMs, including evaluating the ability of LLMs to recover the bivariate causal DAGs introduced in the Tübingen pairs dataset (Mooij et al., 2016). Kıcıman et al. (2023) found that GPT models recovered known causal relationships with up to 96% accuracy when experimenting with various prompting strategies such as including system prompts. Building on these results, several recent papers use LLMs to augment the discovery of causal DAGs using natural language descriptions of variables (Abdulaal et al., 2024; Castelnovo et al., 2024; Jiralerspong et al., 2024; Takayama et al., 2024; Long et al., 2023b;a; Ban et al., 2023; Vashishtha et al., 2023). However, evaluating LLMs on their ability to retrieve causal knowledge about known variables constitutes commonsense causal reasoning. In contrast, this paper contributes to work that evaluates abstract causal reasoning (Binz & Schulz, 2023; Jin et al., 2024; 2023; Cai et al., 2023; Liu et al., 2024), assessing the ability of LLMs to use axioms of causality to solve tasks involving general or even new variables.

In the vein of causal reasoning, Jin et al. (2023) studied whether LLMs could correctly infer some causal relationships based on conditional independence statements, comparing LLM predictions to those made by an oracle causal discovery algorithm for observational data, where not all causal relations can be resolved. In contrast to observational causal discovery, this paper focuses on reasoning about interventions. Also focusing on interventions, Jin et al. (2024) introduced CLadder, a comprehensive benchmark that includes the estimation of causal effects from quantitative data. Causal effect estimation is a complex task that requires solving multiple sub-tasks, such as: (i) parsing the prompt to extract a causal DAG, (ii) inferring a function that estimates the effect given the DAG, and (iii) applying that function to the given quantitative data. Concurrently, Cai et al. (2023) introduced a task that asks LLMs to output only causal relationships given a tabular dataset that includes variable names. They focus on disentangling the impacts that prior knowledge (e.g., variable names) and quantitative data have on LLM performance. The empirical study we conduct to assess whether LLMs are sensitive to the presence of plausibly memorized causal relations is similar to experiments conducted by Cai et al. (2023). In contrast to these benchmarks, intervention effects target a narrower question than the general estimation of causal effects, since intervention effects involve binary classification only (i.e., the absence/presence of causal relations in DAGs). We argue that the evaluations we design better isolate causal reasoning from sub-tasks like drawing statistical inferences from quantitative data provided in-context, which both CLadder and the work of Cai et al. (2023) require.

In focusing on intervention effects, we build on the work of Binz & Schulz (2023), who were motivated by prior work in psychology (Waldmann & Hagmayer, 2005) showing that humans weigh collected observational evidence and experimental evidence differently when drawing causal conclusions. Binz & Schulz (2023) adapted this psychology study for LLMs, creating prompts that describe observational and post-interventional findings to LLMs to see if they update their beliefs about a system after interventions. They found that GPT-3, unlike human subjects, fared poorly at understanding the implications of interventions. Motivated by their focus on intervention-based reasoning, we significantly expand on the evaluations designed by Binz & Schulz (2023), systematically generating intervention effects with varying degrees of difficulty to further explore the effects of plausible memorization and shortcuts like relation retrieval.
Finally, we note that recent work (Zečević et al., 2023) has called into question whether LLMs are causal parrots, mimicking behavior seen during training without being able to generalize the use of causal logic. Kıcıman et al. (2023); Cai et al. (2023); Jin et al. (2024) as well as the benchmarks designed in this paper aim to stress-test LLMs more extensively by varying prompt generation to assess different shortcuts that LLMs could have learned.

6 Discussion and Limitations

The goal of this paper is to introduce a causal reasoning benchmark that stress-tests the ability of LLMs to accurately predict how knowledge should be updated after interventions are performed, without conflating other aspects of reasoning such as statistical inference on quantitative data. The research questions that we investigated point to both some optimism and caution. While, on one hand, in some scenarios GPT-4 appears to accurately predict how interventions modify given causal relations, on the other hand, its performance can be negatively affected when prompts describe causal knowledge that it has plausibly memorized from training. Overall, these findings point to the continued importance of designing benchmarks and studies that evaluate varied aspects of abstract causal reasoning in LLMs, especially if practitioners wish to use LLMs to generate candidate decisions.

While the intervention effect prediction task we define in this paper has the benefit of being easy to evaluate, since it requires binary responses, the findings that this task can suggest are also limited. For example, IE prediction cannot help us assess how accurately LLMs perform causal identification, the process of deciding which causal inferences can be made given a causal DAG. Moreover, we focus on evaluation in this paper and do not propose methods for improving causal reasoning in LLMs via few-shot learning or fine-tuning. Both of these limitations point to future research directions that we think are worth exploring.

Finally, keeping in mind Goodhart's law, we stress that evaluations like ours and those of others do not serve as metrics for LLMs to beat before being deployed in high-stakes situations. We discourage gaming these benchmarks, and instead intend for this study to, like psychology studies, shed light on LLM behavior.

Acknowledgements We would like to thank Tejas Vaidhya for contributing to experiments during the initial phase of the project. This work is supported by CIFAR and enabled in part by resources from the Digital Research Alliance of Canada (https://alliancecan.ca), Mila (https://mila.quebec), and NVIDIA." + }, + { + "url": "http://arxiv.org/abs/2402.13750v1", + "title": "Breaking the Barrier: Utilizing Large Language Models for Industrial Recommendation Systems through an Inferential Knowledge Graph", + "abstract": "Recommendation systems are widely used in e-commerce websites and online\nplatforms to address information overload. However, existing systems primarily\nrely on historical data and user feedback, making it difficult to capture user\nintent transitions. Recently, Knowledge Base (KB)-based models have been proposed\nto incorporate expert knowledge, but they struggle to adapt to new items and the\nevolving e-commerce environment. To address these challenges, we propose a\nnovel Large Language Model based Complementary Knowledge Enhanced\nRecommendation System (LLM-KERec).
It introduces an entity extractor that\nextracts unified concept terms from item and user information. To provide\ncost-effective and reliable prior knowledge, entity pairs are generated based\non entity popularity and specific strategies. The large language model\ndetermines complementary relationships in each entity pair, constructing a\ncomplementary knowledge graph. Furthermore, a new complementary recall module\nand an Entity-Entity-Item (E-E-I) weight decision model refine the scoring of\nthe ranking model using real complementary exposure-click samples. Extensive\nexperiments conducted on three industry datasets demonstrate the significant\nperformance improvement of our model compared to existing approaches.\nAdditionally, detailed analysis shows that LLM-KERec enhances users' enthusiasm\nfor consumption by recommending complementary items. In summary, LLM-KERec\naddresses the limitations of traditional recommendation systems by\nincorporating complementary knowledge and utilizing a large language model to\ncapture user intent transitions, adapt to new items, and enhance recommendation\nefficiency in the evolving e-commerce landscape.", + "authors": "Qian Zhao, Hao Qian, Ziqi Liu, Gong-Duo Zhang, Lihong Gu", + "published": "2024-02-21", + "updated": "2024-02-21", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Knowledge AND Graph", + "gt": "Breaking the Barrier: Utilizing Large Language Models for Industrial Recommendation Systems through an Inferential Knowledge Graph", + "main_content": "INTRODUCTION

The Recommendation System (RS) has been widely used in online service platforms (e.g., Amazon and Taobao) as an effective tool for alleviating information overload. The primary objective of RS is to infer user preferences from their past behaviors and recommend the most suitable items that align with their interests. Hence, existing recommendation systems are mostly trained on historical exposure and click logs. Here, we summarize existing recommendation tasks as the combination of the following sub-tasks: 1) Recommend substitutive items based on the exposure or click feedback from users. 2) Recommend complementary items based on the conversion feedback from users. 3) Conduct traffic exploration or business intervention to explore users' other potential interests. Traditional deep Click-Through Rate (CTR) prediction models[8, 14, 20, 21], equipped with well-designed feature interaction techniques built on deep neural networks, have been widely applied to tackle these sub-tasks in major e-commerce systems. These methods provide personalization in RS by extracting user preferences from historical exposure-click samples. Despite achieving notable performance improvements in RS, we argue they still suffer from the following two major challenges in real-world scenarios. 1) These models rely heavily on exposed samples and user feedback, which limits the performance of RS in cold-start scenarios and makes it difficult to cope with the continuous emergence of new items. 2) The sparsity of user interaction samples results in existing CTR models being more
effective in recommending substitutes (sub-task 1) than complementary items (sub-task 2). While models based on expert-crafted complementary rules or knowledge graphs can aid in recommending complementary items, they are not a sustainable solution in the ever-evolving landscape of e-commerce due to efficiency and expenditure challenges. Therefore, it is indispensable to incorporate efficient knowledge and Large Language Models (LLMs), as carriers of human reasoning and logic, to improve the performance of RS[1, 4, 5, 19]. However, due to the difficulty of large-scale deployment and the long inference time of large language models, in RS they have only been used as tools for text embedding in previous work[2, 9], making it difficult to fully utilize their powerful reasoning ability.

In light of the above limitations and challenges, we propose a novel LLM-KERec for recommendation. Our method combines the efficient collaborative signal processing capability of traditional models with large language models and a complementary graph to help users quickly find their preferred items. This method not only reduces the homogeneity of traditional model recommendation results, but also improves overall click-through and conversion rates. Specifically, we first use our designed entity extractor to extract unified concept terms (referred to as entities) from the information of all items and from user billing information. Next, we generate entity pairs based on the popularity of entities and carefully designed strategies. Then we construct a complementary graph based on a large language model, where each edge in the graph represents a complementary purchasing relationship between the corresponding entities. Finally, we launch a new complementary recall module and train the E-E-I weight decision model on real exposure-click samples. This model applies the edge weights of the graph, corrected by real feedback, to the fine-ranking layer model to achieve recommendation of complementary items. It is worth mentioning that both the entity extractor and the complementary graph are periodically updated to adapt to new items and the changing e-commerce environment. The main contributions of this paper can be summarized as follows:

• For the first time, we utilize the inference ability of large language models as a medium to improve the scenario preference when recommending items to each user, achieving large-scale application of large language models in industrial scenarios.
• Our method continuously adjusts the weights of graph edges based on real exposure samples of complementary item pairs, addressing the language model's weakness in determining user preference strength.
• Extensive experiments are conducted on three industry scenarios, demonstrating that our approach is consistently better than a number of competitive baselines.

2 SYSTEM OVERVIEW

In this section, we present an overview of the LLM-KERec System, including the Traditional Recommendation Module and the LLM-based Complementary Knowledge Enhancement, shown in Fig. 1.

[Figure 1: Overall framework of our proposed LLM-KERec System. The Traditional Recommendation Module (entity extractor, recall module, coarse-ranking, fine-ranking, and re-ranking models) is augmented by the LLM-based Complementary Knowledge Enhancement (complementary graph, E-E-I weight decision model, and U2E2I complementary recall), with a large language model such as ChatGPT, ChatGLM, or Claude injecting world knowledge and commonsense reasoning.]
2.1 Traditional Recommendation Module

In the traditional recommendation architecture, when a user opens an application, the application automatically sends a request to the server. This process follows these steps: 1) The server triggers the recall module based on the user's request information, including popular item recall, LBS recall, personalized recall, etc. The recall module returns a large number of candidate items. 2) These candidate items are then input into the coarse-ranking model for filtering. The coarse-ranking model produces a smaller set of candidate items. 3) Finally, the fine-ranking model and re-ranking model make the final decision on the display order of these items. Additionally, manual intervention may occur at each step, such as assigning weights to items to be published. The fine-ranking model and re-ranking model are typically trained using historical exposure and click logs. As a result, existing recommendation models often prioritize recommending similar items based on positive user feedback. This poses a challenge when it comes to providing reliable recommendations for supplementary items that have potential reasoning behind them, such as suggesting complementary item B after a user has purchased item A.

2.2 LLM-based Complementary Knowledge Enhancement

In this paper, the LLM-KERec system retains the ability to efficiently process the large number of collaborative signals of the existing recommendation system. It also overcomes the above challenge through the LLM-based Complementary Knowledge Enhancement Module. To establish connections between different content in Alipay, LLM-KERec creates a unified entity (category) system for users' billing behaviors and all items. Each item or bill is classified into a unique entity, which serves as a bridge between the various contents. Utilizing world knowledge and commonsense knowledge, we employ a large language model to determine the existence of a complementary relationship between two entities and construct a complementary graph. The nodes of this graph are all entities, while the edges indicate the complementary relationship between the corresponding entities. Subsequently, using the real exposure and click feedback of complementary items, we train an entity-entity-item (E-E-I) weight decision model. This model is then used to inject knowledge into the ranking model. By adopting this approach, we can provide personalized recommendations for both favorite items and complementary items. This solution has been successfully implemented in Alipay marketing scenarios, and experimental results have demonstrated its effectiveness.

[Figure 2: Extracting entities from item information and user bills with a BERT-CRF model, e.g., "Shuke baking soda toothpaste 120g" -> toothpaste; "100 disposable tear-free plastic wrap covers" -> plastic wrap cover; "Sulfur Mite Removal Liquid Soap 400ml" -> soap.]
3 DIVING INTO THE LLM-KEREC SYSTEM

In this section, we zoom into each module of the LLM-KERec System.

3.1 Entity Extractor

3.1.1 Entity Dict. In real-world applications like Alipay, users' behaviors span various scenarios, each with diverse content. To align information and knowledge from these diverse sources, it is crucial to establish a unified association pattern. This is where our Entity Dict comes into play, serving as a bridge between different content types. In the Entity Dict, each entity represents a specific concept, such as "phone" or "cola". Our dedicated group of experts meticulously designed the Entity Dict, incorporating tens of thousands of entities. Importantly, the Entity Dict is updated every week to ensure its adaptability to new items and content.

3.1.2 Extracting Entities. Building upon the Entity Dict, our focus shifts to extracting entities from various user behaviors within Alipay, including bills, visit logs, and the entity information of items in marketing scenarios. This extraction process can be viewed as a Named Entity Recognition (NER) task, which has been extensively studied in the field of Natural Language Processing (NLP) [13, 15, 23]. To perform entity extraction, we utilize the BERT-CRF model, which combines the transfer capabilities of BERT[4] with the structured predictions of CRF[11]. The BERT-CRF model enables us to accurately extract entities from user behaviors in Alipay. In the LLM-based Complementary Knowledge Enhancement, our primary objective is to establish connections between user purchase behaviors and the items to be recommended. To achieve this, we extract entities from each user's recent bills, forming their recent entity transaction sequence. Furthermore, we extract entities from item information and assign a unique entity as the item's category. The detailed procedure is illustrated in Fig. 2.
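The BERT-CRF extractor itself is not released with the paper. To make the CRF half concrete, here is a self-contained sketch (ours) of Viterbi decoding over toy emission scores, i.e., the step that turns BERT's per-token scores into a consistent BIO tag sequence. All numbers, labels, and the example tokens (taken loosely from the toothpaste item in Figure 2) are invented for illustration.

```python
# Sketch: Viterbi decoding, the inference step of a CRF tagging head.
# In BERT-CRF, per-token emission scores come from BERT; here they are
# hand-made toy numbers.
import numpy as np

LABELS = ["O", "B-ENT", "I-ENT"]  # BIO scheme over entity mentions

def viterbi(emissions: np.ndarray, transitions: np.ndarray) -> list:
    """Return the highest-scoring label path for one token sequence."""
    n_tokens, n_labels = emissions.shape
    score = emissions[0].copy()               # best score ending in each label
    backptr = np.zeros((n_tokens, n_labels), dtype=int)
    for t in range(1, n_tokens):
        # cand[prev, cur]: score of moving from label `prev` to label `cur`
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n_tokens - 1, 0, -1):      # backtrack the best path
        path.append(int(backptr[t, path[-1]]))
    return [LABELS[i] for i in reversed(path)]

tokens = ["baking", "soda", "toothpaste", "120g"]
emissions = np.array([[0.1, 1.0, 0.2],    # "baking"     -> likely B-ENT
                      [0.1, 0.2, 1.0],    # "soda"       -> likely I-ENT
                      [0.1, 0.3, 1.1],    # "toothpaste" -> likely I-ENT
                      [1.0, 0.1, 0.1]])   # "120g"       -> likely O
# Transition scores: e.g., jumping O -> I-ENT is strongly discouraged.
transitions = np.array([[0.5, 0.5, -5.0],
                        [0.2, -1.0, 1.0],
                        [0.2, -1.0, 1.0]])
print(list(zip(tokens, viterbi(emissions, transitions))))
```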
3.2 Complementary Graph Construction

We utilize the results of the entity extractor to construct a complementary graph, which helps us gain insights into users' purchasing patterns. Specifically, we aim to understand which item B (e.g., paper towels) users typically buy after purchasing item A (e.g., utensils), by leveraging natural language understanding and commonsense reasoning. The construction of the complementary graph involves two main steps: 1) We generate candidate entity pairs from the Entity Dict, ensuring both execution efficiency and comprehensive item coverage. 2) Through a combination of carefully designed prompt engineering and the utilization of a large language model, we perform reasoning tasks to extract meaningful insights from the data.

[Figure 3: Long-tail distribution in the Entity Dict. Panels: (a) rank entities (extremely popular, e.g., chicken wings and roll paper; popular, e.g., sauce; unpopular, e.g., pencil, glove, tablet computer bracket), (b) construct entity pairs, (c) construct the entity graph with a large language model (e.g., ChatGPT, ChatGLM, Claude) via world knowledge and commonsense reasoning.]

3.2.1 Entity Pair Construction. Firstly, it is important to recognize that certain items have complementary relationships with specific concepts, and these concepts often encompass more specific items. In industrial e-commerce scenarios, where the number of items can reach millions or more, there are only a few thousand concept categories. By using concepts as entities instead of individual items, computational resources can be significantly conserved. In Section 3.1, we have already assigned unique entities to all items using the entity extractor. To construct entity pairs, a straightforward approach would involve taking elements from a set containing n entities and combining them pairwise, resulting in n(n-1)/2 candidate entity pairs. However, this method is not cost-effective due to the slower inference speed of the downstream large language models. Additionally, real-world scenarios often exhibit a long-tail distribution in which a few entities are frequently purchased while the majority of entities are rarely consumed (as depicted in Fig. 3). Focusing solely on tail entity combinations makes it challenging to improve the overall performance of the recommendation system. To tackle this challenge, we have devised a cost-effective segmented combination strategy as follows: 1) Initially, we sort entities in descending order based on metrics like total conversions and clicks. This allows us to classify them into three categories: extremely popular, popular, and unpopular entities. 2) We focus on constructing entity pairs exclusively within the popular entities. This approach enhances the performance and coverage specifically for popular items. 3) Additionally, we construct entity pairs that include both extremely popular and unpopular entities. This ensures comprehensive coverage in the complementary graph for unpopular items. By merging and eliminating duplicates from all entity pairs, we obtain the final output. This segmented combination strategy ensures reliable support for downstream modules while minimizing resource waste.
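To make the segmented strategy concrete, the following sketch (our own; the thresholds, fractions, and popularity counts are invented, and the paper does not specify its cut-offs) builds the candidate pair set from popularity-ranked entities:

```python
# Sketch of the segmented combination strategy of Section 3.2.1.
# Thresholds and entity statistics are illustrative assumptions.
from itertools import combinations, product

def build_entity_pairs(entity_stats: dict,
                       top_frac: float = 0.05,
                       popular_frac: float = 0.30) -> set:
    """entity_stats maps entity -> total conversions/clicks."""
    ranked = sorted(entity_stats, key=entity_stats.get, reverse=True)
    n = len(ranked)
    cut_top = max(1, int(n * top_frac))
    cut_pop = max(1, int(n * popular_frac))
    extremely_popular = ranked[:cut_top]
    popular = ranked[:cut_pop]
    unpopular = ranked[cut_pop:]

    pairs = set()
    # 1) all pairs within the popular segment (covers head items well)
    pairs.update(combinations(popular, 2))
    # 2) extremely popular x unpopular (gives tail items some coverage)
    pairs.update(product(extremely_popular, unpopular))
    return pairs  # merged, duplicates eliminated by the set

stats = {"cola": 900, "chicken wings": 800, "sauce": 500,
         "roll paper": 120, "pencil": 30, "tablet bracket": 5}
for a, b in sorted(build_entity_pairs(stats)):
    print(f"Candidate pair: ({a}, {b})")
```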
3.2.2 Large Language Model. Large language models have garnered significant attention from researchers due to their remarkable understanding and reasoning abilities in natural language processing. A specific research direction, exemplified by methods like Prompt-Tuning[12] and LoRA[7], explores prompt engineering techniques built on these large language models. In these approaches, researchers can obtain desired answers from the large language model by providing a simple task description and a small number of examples. By fine-tuning the model efficiently on annotated samples using techniques like LoRA[7], researchers can enhance its support for the current task. In this study, we also leverage the capabilities of large language models to determine the existence of a complementary relationship in an entity pair. Specifically, we utilize Claude 2 (https://www.anthropic.com/index/claude-2) as the underlying language model and thoughtfully design reliable prompts to guide the model in conducting a step-by-step analysis and providing dependable reasoning evidence. The ultimate goal is to enhance the interpretability of the reasoning results. Upon completing the reasoning process, we sample thousands of examples for manual annotation and continuously refine the prompts to attain an acceptable level of accuracy in the reasoning outcomes. The prompts we have designed encompass the following aspects: 1) Description of the input data format, where each line consists of two entities representing real-world concepts. 2) Task description, which involves determining whether there is a likelihood of a person purchasing entity B shortly after purchasing entity A. 3) Multiple data examples and their corresponding reasons. For instance, we provide examples like the complementary relationship between bread and milk, as they form a popular breakfast combination. Conversely, we highlight that there is no complementary relationship between a phone and milk, as they are unrelated. 4) Explanation of the output format, which includes a concise description of the purposes of the two entities, whether a complementary relationship exists between them, and a detailed explanation. Ultimately, the answer is denoted as either Y or N. Moreover, we have also explored ChatGPT 3.5 (https://openai.com/blog/chatgpt) and ChatGLM 2[5, 22]. A comprehensive comparison between these methods can be found in Section 4.5.
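The production prompt is not published; the template below merely mirrors the four listed ingredients (input format, task, few-shot examples with reasons, output format) with wording of our own, and the bread/milk and phone/milk examples are the ones cited above.

```python
# Illustrative prompt following the four ingredients listed above;
# the actual production prompt used with Claude 2 is not public.
PROMPT_TEMPLATE = """\
Each input line contains two entities, A and B, that name real-world concepts.
Task: decide whether a person who has just purchased A is likely to purchase B
shortly afterwards (a complementary purchasing relationship).

Examples:
- bread, milk -> Y (a popular breakfast combination)
- phone, milk -> N (unrelated purposes)

For the pair below, briefly describe the purpose of each entity, state whether
a complementary relationship exists and why, and end with a single line
containing only Y or N.

Pair: {entity_a}, {entity_b}
"""

def build_prompt(entity_a: str, entity_b: str) -> str:
    return PROMPT_TEMPLATE.format(entity_a=entity_a, entity_b=entity_b)

print(build_prompt("chicken wings", "cola"))
```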
3.2.3 Automatic Update Strategy. In a real e-commerce environment, users and merchants continually rely on each other's cognitive updates and mutually promote one another. This means that the popularity of entities is not static. For instance, certain merchants may employ marketing strategies to rapidly gain public attention for their products, and over time older products may be phased out. To address this dynamic nature of popularity, we have implemented an automatic daily schedule for constructing the incremental complementary graph. By promptly recognizing such changes and updating our complementary graph accordingly, we can ensure the effective and sustained operation of the entire system. This proactive approach is crucial for maintaining optimal system performance in the long run.

3.3 E-E-I Weight Decision Model

At present, we have successfully linked each user's recent bills and each item to entities in the complementary graph. Our objective is to recommend complementary items (entity2) based on user bills (entity1), where the relation entity1-entity2 exists in the complementary graph. However, due to the limited ability of the LLM to accurately assess user preferences, we require an E-E-I (entity1-entity2-item) weight decision model to effectively accomplish this task.

3.3.1 Model Overview. Intuitively, the success of the LLM-KERec System relies heavily on the construction of a high-quality E-E-I weight decision model. Therefore, we propose a Two-stage Complementary Knowledge Enhancement Procedure, which consists of the Ranking Stage and the Integration Stage, as shown in Fig. 4. In the following sections, we take a closer look at each well-designed stage.

3.3.2 Ranking Stage. As shown in Fig. 4(a.0), our model adopts a dual-tower architecture, where the outputs of the two towers represent the representations of the complementary item and the bill entity, respectively. The dot product of these outputs serves as the preference level indicator. For the representation of an item, we can extract a rich set of features from the database, including basic features, statistical features, and interaction features. However, for the entity representation we face a challenge, as we lack specific information to describe entities aside from a pre-assigned ID. To overcome this limitation, we employ a Graph Neural Network[10] and Contrastive Learning to represent each entity from two distinct perspectives: the first-order substitutable view and the second-order complementary view. The Ranking Stage can be further subdivided into the following modules.

Graph Construction. Graph Neural Networks (GNNs) have demonstrated promising results for recommender systems, as they can effectively leverage high-order relationships. These methods represent interaction data as graphs, such as the user-item interaction graph, and iteratively propagate neighborhood information to learn effective node representations. Similarly, as shown in Fig. 4(a.1), we have designed the following edge relationships to better represent entities: 1) Establish edges for click behaviors between user nodes and item nodes. 2) Establish edges for dependency relationships between item nodes and entity nodes. 3) Establish edges for complementary relationships between entity nodes and entity nodes. Given the user set $\mathcal{U} = \{u\}$, the item set $\mathcal{I} = \{i\}$ and the entity set $\mathcal{E} = \{e\}$, the number of nodes is $n = |\mathcal{U}| + |\mathcal{I}| + |\mathcal{E}|$. Our method formulates the available data as a user-item-entity graph $\mathcal{G} = (\mathcal{V}, \mathbf{A})$, where $\mathcal{V} = \mathcal{U} \cup \mathcal{I} \cup \mathcal{E}$ and $\mathbf{A} \in \mathbb{R}^{n \times n}$ is the adjacency matrix.

[Figure 4: Overall framework of the Two-stage Complementary Knowledge Enhancement Procedure. Panels: (a.0) Overview of the dual-tower Ranking Stage, (a.1) Graph Construction over user, item, and entity nodes, (a.2) First-order Substitutable View, (a.3) Second-order Complementary View (meta-path based), (a.4) Contrastive Learning; (b) Integration Stage with the U2E2I complementary recall and E-E-I weight decision injected into coarse/fine ranking.]

First-order Substitutable View. In order to model substitutable relationships, we consider two different sources of information for each entity: (1) From an item sub-perspective, we need to explore the common features of items that have a dependency relationship on the current entity. (2) Similarly, from a user sub-perspective, we need to explore the common features of the users who frequently click on the current entity. Specifically, we aggregate information using the Graph Attention Network (GAT), denoted by $h'_i = f_t(h, i, \mathcal{N}_i; \theta_t)$. Here, $h$ represents the embeddings of all nodes, $i$ denotes the current node index, $\mathcal{N}_i$ is the set of neighbors of node $i$, $\theta_t$ is the network parameters, and the function $f_t(\cdot, \cdot, \cdot; \cdot)$ is defined as:

$$h'_i = \sum_{j \in \mathcal{N}_i} \sigma\left(\alpha_{ij} \mathbf{W}_1 h_j\right), \quad (1)$$

where $\alpha_{ij}$ is defined as:

$$\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(\mathbf{W}_2 [\mathbf{W}_1 h_i \,\|\, \mathbf{W}_1 h_j]\right)\right)}{\sum_{k \in \mathcal{N}_i} \exp\left(\mathrm{LeakyReLU}\left(\mathbf{W}_2 [\mathbf{W}_1 h_i \,\|\, \mathbf{W}_1 h_k]\right)\right)}, \quad (2)$$

where $h_i$ represents the embedding of node $i$, $\mathbf{W}_1 \in \mathbb{R}^{d \times d}$ and $\mathbf{W}_2 \in \mathbb{R}^{2d}$ are trainable parameters, $\sigma(\cdot)$ is a non-linear activation function, $\mathrm{LeakyReLU}(\cdot)$ is the LeakyReLU activation function, and $[\cdot \| \cdot]$ is the concatenation operation.
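As a shape-level check of Eqs. (1)-(2), the following numpy sketch (our own code, with random toy weights) aggregates one node's neighborhood. Two assumptions are ours: the concrete choice of $\sigma$ as tanh (the paper only says "non-linear activation"), and, following Eq. (1) as printed, applying $\sigma$ inside the sum rather than after it as in standard GAT.

```python
# Toy numpy sketch of Eqs. (1)-(2). Weights are random; sigma is chosen
# as tanh and is applied inside the sum, following Eq. (1) as printed.
import numpy as np

rng = np.random.default_rng(0)
d = 4                              # embedding dimension
W1 = rng.normal(size=(d, d))       # W1 in R^{d x d}
W2 = rng.normal(size=2 * d)        # W2 in R^{2d}

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_aggregate(h: np.ndarray, i: int, neighbors: list) -> np.ndarray:
    """h'_i = sum_{j in N_i} tanh(alpha_ij * W1 h_j), alpha from Eq. (2)."""
    hi = W1 @ h[i]
    # e_ij = LeakyReLU(W2 [W1 h_i || W1 h_j]), then softmax over N_i
    logits = np.array([leaky_relu(W2 @ np.concatenate([hi, W1 @ h[j]]))
                       for j in neighbors])
    alpha = np.exp(logits) / np.exp(logits).sum()
    return np.sum([np.tanh(a * (W1 @ h[j]))
                   for a, j in zip(alpha, neighbors)], axis=0)

h = rng.normal(size=(6, d))        # six toy nodes (users/items/entities)
print(gat_aggregate(h, i=0, neighbors=[1, 2, 3]))
```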
Then we can fuse the information from the different sub-perspectives, i.e., the user side and the item side, with an attention mechanism to obtain the embedding of entity node $i$:

$$z^f_i = \mathrm{Attention}\left(f_t(h, i, \mathcal{N}_i \cap \mathcal{I}; \theta_1),\; f_t(h, i, \mathcal{N}_i \cap \mathcal{U}; \theta_1)\right). \quad (3)$$

As shown in Fig. 4(a.2), $i_1$ and $i_3$ are aggregated to $e_1$ on the item side, and $u_1$ and $u_2$ are aggregated to $e_1$ on the user side.

Second-order Complementary View. In the modeling of complementary relationships, we also consider two different sources of information for each entity: (1) From the complementary graph, we design a meta-path MP1: item (database) -> entity (graph) -> entity (bill), which represents the collection of item features complementary to the current entity from the perspective of semantic reasoning. (2) From users' daily behaviors, we also design a meta-path MP2: item1 (bill) -> user -> item2 (bill) -> entity (bill), which indicates which items have been recently consumed by users who consumed item2 in the short term. Similarly, we obtain the representation of entity node $i$ through Eq. (4):

$$z^s_i = \mathrm{Attention}\left(f_t(h, i, \mathcal{N}_{i\text{-MP1}}; \theta_2),\; f_t(h, i, \mathcal{N}_{i\text{-MP2}}; \theta_2)\right), \quad (4)$$

where $\mathcal{N}_{i\text{-MP1}}$ and $\mathcal{N}_{i\text{-MP2}}$ are the target node sets explored through meta-paths MP1 and MP2, respectively, starting from entity node $i$. As shown in Fig. 4(a.3), $i_4$ and $i_5$ are aggregated to $e_1$, and $i_3$ is aggregated to $e_1$.

Contrastive Learning. $z^f_i$ and $z^s_i$ aggregate information from the first-order substitutable view and the second-order complementary view, respectively, representing the characterization of entity $i$ from two independent and complementary perspectives. Since $z^f_i$ and $z^s_i$ are interrelated and complementary, they can supervise each other during the training process. Therefore, we utilize the contrastive loss InfoNCE [17] to maximize the agreement of positive pairs and minimize that of negative pairs:

$$\mathcal{L}_{cl} = \sum_{i \in \mathcal{E}} -\log \frac{\exp\left(s(z^f_i, z^s_i)/\tau\right)}{\sum_{j \in \mathcal{E}} \exp\left(s(z^f_i, z^s_j)/\tau\right)}, \quad (5)$$

where $s(\cdot)$ measures the similarity between two vectors and is set to the cosine similarity function, and $\tau$ is a hyper-parameter known as the temperature in the softmax. Finally, the representation of node $i$ is the weighted sum of $z^f_i$ and $z^s_i$, which will be used for the downstream recommendation tasks.
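A minimal numpy sketch of the objective in Eq. (5) follows (our own code, for illustration only): positives are the two views $(z^f_i, z^s_i)$ of the same entity, negatives are cross-entity view pairs, similarity is cosine, and the temperature value is an arbitrary choice of ours.

```python
# Numpy sketch of the InfoNCE loss in Eq. (5). Embeddings are random
# toy data; tau = 0.2 is an arbitrary illustrative temperature.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def info_nce(z_f: np.ndarray, z_s: np.ndarray, tau: float = 0.2) -> float:
    """L_cl = sum_i -log exp(s(zf_i, zs_i)/tau) / sum_j exp(s(zf_i, zs_j)/tau)."""
    n = len(z_f)
    sim = np.array([[cosine(z_f[i], z_s[j]) for j in range(n)]
                    for i in range(n)]) / tau
    # row-wise log-softmax; the diagonal holds the positive pairs
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_softmax.diagonal().sum())

rng = np.random.default_rng(0)
z_f = rng.normal(size=(8, 16))   # first-order (substitutable) view
z_s = rng.normal(size=(8, 16))   # second-order (complementary) view
print(info_nce(z_f, z_s))
```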
Training Process. We leverage a multi-task training strategy to jointly optimize the main E-E-I weight decision task and the auxiliary tasks, including the contrastive learning task and the L2 regularization task:

$$\mathcal{L} = \mathcal{L}_{main} + \lambda_1 \mathcal{L}_{cl} + \lambda_2 \|\Theta\|^2, \quad (6)$$

where $\Theta$ is the set of model parameters, and $\lambda_1$, $\lambda_2$ are hyper-parameters that control the strengths of the auxiliary losses. $\mathcal{L}_{main}$ is the cross-entropy loss of the main E-E-I weight decision task.

3.3.3 Integration Stage. To effectively and efficiently recommend items that complement users' recent bills to the users with higher demand, we optimize both the recall module and the fine-ranking model, as shown in Fig. 4(b). Specifically, for the recall module we add a new complementary recall route. To avoid excessive recall, we prepare a set of at most top-k newly recalled complementary items based on the scores from the E-E-I weight decision model and the recent bill entities retrieved from real-time requests by user ID. As for the fine-ranking model, during the training phase we also introduce the E-E-I weight decision model to provide scores, entity embeddings, and item embeddings for the current samples. The new recall route enables the downstream fine-ranking model to pay attention to complementary items, overcoming the limited input of complementary items caused by exposure bias in previous recommendation systems. The fine-ranking model combines the features of the current complementary items with user profiles and behaviors to rank candidate items comprehensively and in a personalized way.

4 EXPERIMENTS

To verify the effectiveness of the proposed LLM-KERec, we conduct extensive offline experiments utilizing real industrial datasets procured from the Alipay online environment and report detailed analysis results. Moreover, we conduct online A/B tests in real-world marketing recommendation scenarios to evaluate the performance of LLM-KERec in real industrial applications. This section encompasses a series of experiments designed to answer the following key questions:

• Q1: How does LLM-KERec perform when compared with other state-of-the-art (SOTA) baseline methods? (see Subsection 4.2)
• Q2: How does LLM-KERec perform in real-world industrial applications? (see Subsection 4.3)
• Q3: How do the distinct modules of LLM-KERec contribute to performance improvements? (see Subsection 4.4)
• Q4: How do different large language models impact the performance of LLM-KERec? (see Subsection 4.5)

4.1 Experimental Setups

4.1.1 Datasets. This paper mainly focuses on recommendation in digital marketing scenarios; we utilize real-world industrial datasets from Alipay. (The datasets do not contain any Personally Identifiable Information (PII); they are desensitized and encrypted. Adequate data protection was carried out during the experiments to prevent the risk of data copy leakage, and the datasets were destroyed after the experiments.) They cover three major marketing and recommendation scenarios within Alipay: Super 567 (Dataset A), Consumer Channel (Dataset B), and Payment Result Page (Dataset C). The Alipay application (APP) presents numerous coupons to users through Super 567 and the Payment Result Page, with the intention of encouraging user engagement by prompting users to click and collect these coupons and subsequently redeem them through purchases made on Alipay.

Table 1: The statistics of the three datasets.

Dataset   | #Users   | #Items | #Click   | #Conversion
Dataset A | 155209   | 84846  | 285001   | 26780
Dataset B | 1301782  | 376111 | 1448213  | 19437
Dataset C | 28361313 | 172786 | 15336011 | 143502
Moreover, within the Consumer Channel, the APP directly showcases goods that align with users' potential interests, aiming to stimulate clicks and subsequent purchases. Each day, a substantial user base, amounting to tens of millions, is exposed to the assortment of coupons and goods available on Alipay. To conduct our study, we randomly selected instances spanning various dates over a one-month duration. The primary objective underlying our data optimization efforts is to increase user conversions. These scenarios exhibit significant differences in terms of user population distribution, as well as user intentions and behaviors. Each dataset is further randomly divided into disjoint training, validation, and test sets. The statistics of these datasets are presented in Table 1.

4.1.2 Evaluation Metrics. In order to assess the overall system performance, we employ AUC (Area Under Curve) as the evaluation metric for offline experiments. Although the actual industrial scenario is a ranking scenario, we simplify the offline experiments by treating them as a binary classification problem during modeling. In this approach, the model produces a score indicating whether the user likes (clicks or converts on) the recommended item. Therefore, AUC is utilized for offline evaluation purposes. For online experiments, we directly measure the quality of different models by counting the number of clicks and conversions made by real users in the various experimental groups. Consequently, the experimental group exhibiting a higher number of recommended items clicked and converted by users signifies better model performance.

4.1.3 Baselines. We choose state-of-the-art recommendation system models as baselines for comparison: DNN[6], Wide&Deep[3], DCN[20], ESMM[16], PLE[18], and MaskNet[21].

4.2 Offline Performance Comparison

Table 2 presents the AUC results of the offline performance comparison for all methods. The Click and Conv. columns indicate the click AUC and conversion AUC values for the three datasets, respectively. In order to facilitate a more comprehensive comparison, we have also incorporated the i-i graph into the baseline models, denoted as "+ ii graph"; this adjustment was made because our methodology capitalizes on graph-based techniques. The superior results are emphasized in bold, while the second-best results are underlined. We utilize the symbol "†" to indicate that LLM-KERec exhibits a significant difference from the top-performing baseline, as determined by paired t-tests at a significance level of 0.01. Upon careful examination of the table, it is evident that LLM-KERec surpasses the other methods in terms of AUC across the three datasets, exhibiting superior performance across the majority of experimental outcomes.
DNN                | 0.61182   0.75844  | 0.77597   0.76092  | 0.86060   0.93010
DNN + ii graph     | 0.61580   0.80684  | 0.77751   0.73187  | 0.86061   0.93997
LLM-KERec          | 0.62882†  0.82460† | 0.78523†  0.76271† | 0.85972   0.94608
Wnd                | 0.60599   0.72751  | 0.77766   0.74243  | 0.86025   0.93069
Wnd + ii graph     | 0.62822   0.81782  | 0.77507   0.75197  | 0.86064   0.93384
LLM-KERec          | 0.63207†  0.81896† | 0.77897†  0.77140† | 0.86059   0.94064†
DCN                | 0.62457   0.81760  | 0.78053   0.75231  | 0.84966   0.92919
DCN + ii graph     | 0.63121   0.80487  | 0.778024  0.75280  | 0.85706   0.93370
LLM-KERec          | 0.67284   0.82507† | 0.78285   0.76789† | 0.85732†  0.94174†
ESMM               | 0.61259   0.78366  | 0.76509   0.75246  | 0.85090   0.91357
ESMM + ii graph    | 0.61927   0.82100  | 0.77920   0.76136  | 0.85378   0.92483
LLM-KERec          | 0.62488†  0.82239† | 0.78168†  0.76263† | 0.85398†  0.92832†
PLE                | 0.60652   0.77282  | 0.77157   0.73986  | 0.85640   0.93391
PLE + ii graph     | 0.59870   0.80117  | 0.77185   0.73817  | 0.85672   0.93683
LLM-KERec          | 0.62576†  0.82238† | 0.78636†  0.74725† | 0.85681†  0.93897†
Masknet            | 0.59360   0.81263  | 0.69159   0.61034† | 0.82782   0.86086
Masknet + ii graph | 0.63998   0.8166†  | 0.72044   0.58009  | 0.82889   0.89715
LLM-KERec          | 0.65137†  0.81631  | 0.72863†  0.59534  | 0.84161†  0.90086

4.3 Online Performance Comparison

To assess the effectiveness of LLM-KERec in real-world industrial scenarios, online A/B tests were conducted across the three recommendation scenarios in Alipay: Super 567, Consumer Channel, and Payment Result Page. Evaluation metrics differed by dataset. For Dataset A (Super 567) and Dataset C (Payment Result Page), both representing coupon issuance scenarios, we employed #Click and #Conv as evaluation metrics: #Click denotes the number of coupons clicked by users, and #Conv the number of converted items. For Dataset B (Consumer Channel), which represents a goods-selling scenario, we used #Click and GMV: #Click is the number of goods clicked by users, and GMV (Gross Merchandise Volume) is the total monetary value spent by users on purchased goods. Our objective was to increase both coupon conversion and goods GMV. For the A/B tests, 10 percent of the actual online traffic was allocated, with the testing traffic assigned randomly and evenly to two experimental groups. LLM-KERec was compared against the online baseline approach, i.e., the existing model version serving all online users. Over a period of one month, data on #Click, #Conv, and GMV were collected for the different experimental groups. The results of the online experiments are summarized in Table 3. Due to commercial confidentiality, specific figures are withheld and masked with the symbol "*". The percentage of relative improvement achieved by our method over the baseline is presented in the last row. The results show that our proposed LLM-KERec approach achieved a 6.24% and a 10.07% increase in #Conv for Dataset A and Dataset C, respectively, and a 6.45% increase in GMV for Dataset B. These A/B test results demonstrate the significant improvements achieved by our method in real-world industrial recommendation scenarios.

Table 3: The overall online performance comparison, where #Conv. is the number of coupon conversions and GMV is Gross Merchandise Volume. Note that the improvements achieved by LLM-KERec are statistically significant (p-value ≪ 0.05).

         | Dataset A         | Dataset B         | Dataset C
Methods  | #Click   #Conv.   | #Click   GMV      | #Click   #Conv.
Baseline | 3* *7    2* *0    | 3* *0    2* *1    | 7* *6    1* *1
Ours     | 3* *7    3* *6    | 3* *9    2* *2    | 7* *7    1* *2
Improv.  | +2.67%   +6.24%   | +6.18%   +6.45%   | +4.39%   +10.07%

4.4 Ablation Study

To comprehensively evaluate the impact of the U2E2I recall module and the E-E-I model ranking module on LLM-KERec, we conducted deeper ablation studies on Dataset A by selectively removing either the recall or the ranking module. The annotation w/o indicates the absence of the U2E2I recall module or the E-E-I model ranking module, while w/ signifies the inclusion of these modules. The results are shown in Table 4, and the final row reports the improvement achieved by retaining each respective module compared to removing it. The experimental findings in Table 4 demonstrate that both the U2E2I recall and the E-E-I model ranking modules contribute to an increase in clicks and conversions, affirming the effectiveness of our U2E2I recall module and E-E-I model.

Table 4: The online ablation performance comparison for Dataset C, where w/o and w/ represent without and with, respectively.

        | U2E2I recall      | E-E-I model for ranking
Method  | #Click   #Conv.   | #Click   #Conv.
w/o     | 7* *1    1* *1    | 4* *7    8* *5
w/      | 7* *7    1* *2    | 4* *5    8* *2
Improv. | +3.33%   +2.95%   | +1.05%   +0.59%

4.5 Different LLMs Comparison

In this subsection, we perform a comparative analysis of different large language models, namely ChatGPT, ChatGLM, and Claude. To assess their performance, we randomly selected 1,000 complementary entity pairs from the complementary graphs generated by these models. These entity pairs were manually evaluated and assigned scores based on their relevance, on a five-level scale: 1 - Completely unrelated, 2 - Somewhat unrelated, 3 - Uncertain, 4 - Somewhat related, and 5 - Completely related. The numbers of entity pairs falling into each of these five levels are reported in Table 5. We then calculate the weighted average over these entity pairs as

$\text{Mean Score} = (1 \times N_1 + 2 \times N_2 + 3 \times N_3 + 4 \times N_4 + 5 \times N_5) / 1000$,

where $N_i$ is the number of entity pairs assigned level $i$; for example, for Claude 2 this gives $(1 \times 109 + 2 \times 36 + 3 \times 127 + 4 \times 146 + 5 \times 582)/1000 = 4.056$. This calculation yields the final manual judgment score, presented in the last row of Table 5. Based on the manual judgment scores reported in Table 5, it is evident that the complementary entity pairs recommended by Claude exhibit a higher level of correlation.

Table 5: Comparing the performance of the complementary graphs generated by different LLMs using five levels of manual annotation (1,000 randomly sampled entity pairs); a higher Mean Score indicates that the model's predictions are closer to human judgments.

Level                    | ChatGLM 2 | ChatGPT 3.5 | Claude 2
(1) Completely unrelated | 191       | 171         | 109
(2) Somewhat unrelated   | 40        | 26          | 36
(3) Uncertain            | 145       | 145         | 127
(4) Somewhat related     | 242       | 263         | 146
(5) Completely related   | 382       | 395         | 582
Mean Score               | 3.584     | 3.685       | 4.056

To provide a more comprehensive picture, we also extract and present instances of misjudgment made by ChatGPT and ChatGLM, where the models considered certain entity pairs relevant but manual evaluation determined them to be irrelevant. These instances are listed in Table 6.
An analysis of the table reveals that ChatGPT associates "Presbyopic Glasses" with "Makeup Remover Oil" on the reasoning that makeup remover oil needs to be carefully applied by hand, and wearing presbyopic glasses after makeup removal allows enhanced observation of the facial skin condition. ChatGLM, on the other hand, links "Cake" with "Pajamas" by suggesting that people may wear pajamas while eating cakes at night. We consider these explanations provided by the language models to be excessively imaginative, as they forcefully establish connections between the entity pairs.

Table 6: Instances of problematic complementary entity pairs generated by the large language models (LLMs) in their respective complementary graphs.

LLM Model   | Entity Pairs                           | Bad Reason
ChatGPT 3.5 | Presbyopic Glasses, Makeup Remover Oil | Makeup remover oil needs to be carefully applied by hand, and using presbyopic glasses after makeup removal can provide enhanced observation of the facial skin condition.
ChatGLM 2   | Cake, Pajamas                          | People may wear pajamas while eating cakes at night.

[Figure 5: a heatmap of relative CVR improvement over randomly sampled complementary entity pairs (e.g., piggy bank, chicken nuggets, supermarket, milk tea, shared bike, movie tickets, train tickets, convenience stores, roll paper), with a color scale ranging from -1.00 to 1.00.]

Figure 5: The relative improvement of conversion rate (CVR) for randomly sampled complementary pairs in LLM-KERec compared to the baseline.

4.6 Case Study

In this subsection, we present an additional case study focusing on the online experiment conducted on Dataset A. Specifically, we calculate and compare the conversion rate (CVR) of a sample set of complementary entity pairs recommended by LLM-KERec and by the baseline model. The comparison results are depicted in Figure 5. In the figure, blank squares indicate no associative relationship between the two entity words, while colored squares indicate the change in CVR of the experimental group relative to the baseline group: red squares represent a higher CVR in the experimental group, and blue squares a lower one. As Figure 5 shows, the complementary pairs recommended in the experimental group generally exhibit a higher CVR than those recommended in the baseline group.

5 CONCLUSION

In this paper, we propose a novel LLM-based Complementary Knowledge Enhanced Recommendation (LLM-KERec) system. It uses an entity extractor to extract unified concept terms from the information available for all items and user bills. To construct a complementary graph, we first generate candidate entity pairs based on entity popularity and designed strategies. Next, we leverage a large language model to determine whether a complementary purchasing relationship exists for each entity pair. Furthermore, we incorporate a new complementary recall module and train the E-E-I weight decision model to enhance the ranking model's knowledge and facilitate the recommendation of complementary items.
Comprehensive experiments demonstrate the effectiveness of our proposed LLM-KERec system." + }, + { + "url": "http://arxiv.org/abs/2402.09911v1", + "title": "Enhancing Large Language Models with Pseudo- and Multisource- Knowledge Graphs for Open-ended Question Answering", + "abstract": "Mitigating the hallucinations of Large Language Models (LLMs) and enhancing\nthem is a crucial task. Although some existing methods employ model\nself-enhancement techniques, they fall short of effectively addressing unknown\nfactual hallucinations. Knowledge Graph (KG) enhancement approaches, in turn,\nfail to address generalization across different KG sources and the enhancement\nof open-ended answer questions simultaneously. To tackle these limitations, we\npropose a framework that combines Pseudo-Graph Generation and Atomic Knowledge\nVerification. Enhancement of the LLM using a KG in an open-ended\nquestion-answering setting is implemented by leveraging Pseudo-Graph\nGeneration, while Atomic Knowledge Verification utilizes atomic-level knowledge\nquerying and verification to achieve generalizability across different KG\nsources. Compared to the baseline, this approach yields a minimum improvement\nof 11.5 in the ROUGE-L score for open-ended questions. For precise questions,\nwe observe a minimum accuracy improvement of 7.5. Moreover, we also demonstrate\nthat this framework exhibits generalizability across different KG sources. In\nsummary, our results pave the way for enhancing LLMs by incorporating Pseudo-\nand Multisource-KGs, particularly in the context of open-ended questions.", + "authors": "Jiaxiang Liu, Tong Zhou, Yubo Chen, Kang Liu, Jun Zhao", + "published": "2024-02-15", + "updated": "2024-02-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Knowledge AND Graph", + "gt": "Enhancing Large Language Models with Pseudo- and Multisource- Knowledge Graphs for Open-ended Question Answering", + "main_content": "Introduction Large language models (LLMs) (Brown et al., 2020; Ouyang et al., 2022; OpenAI, 2023; Touvron et al., 2023; Chowdhery et al., 2022) have achieved remarkable results in the field of question answering. They acquire the capability to handle various questions through pre-training on large-scale data with a massive number of parameters. However, LLMs still face issues of hallucination and a lack of specific domain knowledge when dealing with complex problems (Huang et al., 2023; Ye et al., 2023). To mitigate model hallucinations and thus improve the accuracy of model responses, various methods have been proposed. The first kind uses the model's own capabilities to address its hallucinations about uncertain knowledge. The Chain-of-Thought (CoT) prompting method (Wei et al., 2022), by having the model generate intermediate reasoning steps in its responses, improves the accuracy of the model's answers. The Self-Consistency (SC) method (Wang et al., 2023b) enhances the robustness of CoT by aggregating multiple reasoning paths from the model. However, these methods cannot fundamentally solve the problem of hallucinations in LLMs, because of errors or missing knowledge in the LLMs' training data (Ye et al., 2023).
Therefore, we need to introduce external knowledge to enhance the LLMs, thereby mitigating hallucinations. The second approach is to use knowledge graphs (KGs) to enhance LLMs. Knowledge graphs, like Wikidata, Freebase, and YAGO, are highly valued in LLM tasks due to their structured knowledge, high accuracy, and timely updates (Pan et al., 2024). Therefore, how to extract knowledge from knowledge graphs to enhance large models is an important research field. A straightforward approach is to prompt (Chang and Fosler-Lussier, 2023) or fine-tune (SQL-PALM; Sun et al., 2023) LLMs to generate Structured Query Language (SQL). However, the schema of different knowledge graphs may vary, limiting the generalization ability of this method. To address the generalization issue across different knowledge graphs, one approach is to encode the knowledge graph semantically and enhance the LLM through retrieval-based methods (Lewis et al., 2020). However, for questions where the relationship is not explicitly stated, or for open-ended answer questions, the effectiveness of semantic retrieval may be limited. For example, it would be difficult to find the entity "Leonardo da Vinci" in Wikidata solely based on the question "Who is the most famous painter in the world?".

Table 1: Abilities of some representative methodologies. 1) No training means that the method requires no training; 2) No linking indicates that the method does not require entity linking within KGs; 3) Knowledge enhanced indicates that the method uses external knowledge to enhance LLMs; 4) Multi graph means the method exhibits good generalization across various KG sources; 5) Robustness refers to the property that current errors have minimal impact on subsequent steps; 6) Open-ended QA indicates that the method can enhance LLMs on questions with open-ended answers.

Method   | No training | No linking | Knowledge enhanced | Multi graph | Robustness | Open-ended QA
CoT      | ✔           | ✔          | ✘                  | ✘           | ✘          | ✔
RAG      | ✔           | ✔          | ✔                  | ✘           | ✔          | ✘
SQL-PALM | ✘           | ✔          | ✔                  | ✘           | ✘          | ✘
ToG      | ✔           | ✘          | ✔                  | ✔           | ✘          | ✘
KGR      | ✔           | ✘          | ✔                  | ✘           | ✔          | ✘
Ours     | ✔           | ✔          | ✔                  | ✔           | ✔          | ✔

As for other methods like KGR (Guan et al., 2023) and ToG (Anonymous, 2024), although they have achieved good results, they have limitations: ToG leaks the QIDs of entities in the KG during the reasoning process, and KGR does not explicitly specify entity linking. These limitations affect the generalization ability of these models in practical applications. In summary, previous KG-enhanced LLM methods cannot simultaneously address the following two problems: 1) utilizing KGs to enhance open-ended question answering, and 2) generalization across different knowledge graphs.

To address these issues, Pseudo-Graph Generation and Atomic Knowledge Verification are proposed. In Pseudo-Graph Generation, the framework first uses the LLM to generate pseudo-triples relevant to the question; in this way, the LLM makes explicit the knowledge needed to answer open-ended questions. This approach can handle open-ended question answering because the hallucination property of LLMs can be leveraged: even if the LLM hallucinates during generation, it can still effectively construct the scaffold of the required knowledge, allowing us to perform queries for open-ended answers.
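To make this concrete, consider the running example above. The triples below are a hypothetical illustration, not output reported in the paper: even if the LLM hallucinates while answering "Who is the most famous painter in the world?", the pseudo-triples it emits still fix which entities and relations need to be checked against the KG.

```python
# Hypothetical pseudo-triples an LLM might generate for the question
# "Who is the most famous painter in the world?". Even a partially
# hallucinated object (marked below) still yields a queryable scaffold:
# the subject and relation slots tell us what to look up in the KG.
pseudo_triples = [
    ("Leonardo da Vinci", "occupation", "painter"),
    ("Leonardo da Vinci", "notable work", "Mona Lisa"),
    ("Leonardo da Vinci", "date of birth", "1452-04-15"),  # possibly hallucinated; verified later
]
```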
For Atomic Knowledge Verification, we perform semantic querying over the semantically encoded KG based on these pseudo-triples. This step generalizes across different KGs because both the querying and the verification processes operate on atomic-level knowledge, independent of the KG schema. Finally, the LLM verifies the pseudo-triples against the triples retrieved from the different KG sources, producing the desired answer. The method can therefore alleviate factual hallucinations by enhancing LLMs with external knowledge from different KG sources. Our contributions are listed below:

• Pseudo-Graph Generation leverages the LLM to generate pseudo-triples relevant to the question, allowing the framework to use KGs as knowledge augmentation for LLMs in open-ended question answering.

• Atomic Knowledge Verification uses atomic-level knowledge, ensuring that the framework generalizes well across different KG sources.

• We introduce an open-ended question-answering dataset in a KG-enhanced setting, named Nature Questions. Our experimental results show that our method not only performs excellently on this dataset but also demonstrates strong performance on existing datasets such as QALD-10 (Perevalov et al., 2022) and SimpleQuestions (Bordes et al., 2015a).

2 Related Work

Table 1 compares the capabilities of the representative methods.

2.1 Self-Enhanced LLMs

Directly fine-tuning LLMs to achieve performance improvements is difficult due to the huge computational resources required. Wei et al. (2022) have shown that the Chain-of-Thought (CoT) prompting method can stimulate reasoning in LLMs; they enhance LLMs by having them generate reasoning processes during answer generation.

[Figure 1: an overview diagram tracing the example question "Who is acknowledged as the trailblazer in the field of artificial intelligence?" through the four steps, with candidate subjects such as Allen Newell, Marvin Minsky, and John McCarthy and their triples (e.g., MADE: LISP; award: Turing Award).]

Figure 1: The overview of our method. In step 1, we prompt the LLM to generate a pseudo-graph Gp related to the question. In step 2, the generated triples are used to query the semantic KG and obtain the ground graph Gg. In step 3, the LLM verifies Gp. Finally, the LLM provides the answer based on the fixed Gp.
The introduction of this method has sparked a series of follow-up works. Zero-shot-CoT (Kojima et al., 2022) uses "Let's think step by step" to elicit effective CoT reasoning. Auto-CoT (Zhang et al., 2023) automates the construction of high-quality CoTs. The Self-Consistency (SC) method (Wang et al., 2023b) aggregates multiple reasoning paths from the model. Additionally, other methods have incorporated knowledge into CoT, such as Knowledge-driven CoT (Wang et al., 2023a) and KAM-CoT (Mondal et al., 2024).

2.2 KG-Enhanced LLMs

Simply enhancing LLMs through prompts is far from sufficient. For instance, new questions like "What kind of chips does the Apple Vision Pro use?" involve new knowledge not covered by LLMs, so enhancement through simple prompts is inadequate. Because of their structured knowledge, high accuracy, and timely updates (Pan et al., 2024), enhancing LLMs with KGs is a practical method. A straightforward approach is to prompt (Chang and Fosler-Lussier, 2023) or fine-tune (SQL-PALM; Sun et al., 2023) LLMs to generate Structured Query Language (SQL). However, for the prompting method, generating SQL without providing entity IDs to the LLM is difficult. For example, when we ask ChatGPT¹ "Please tell me what is the QID of Yellow River in Wikidata?", it returns Q1826, but the Yellow River's real QID is Q2066882. For the fine-tuning approach, first, it requires a significant amount of computational resources; second, the schema of different knowledge graphs may vary, limiting the generalization ability of this method. Embedding representations of KGs, like TransE (Bordes et al., 2013), are a good way to address the issue of differing schemas among KGs. However, for multi-hop relationships and large-scale KGs, the embedding approach makes it difficult to memorize knowledge (Li et al., 2023). ToG (Anonymous, 2024) and KGR (Guan et al., 2023) are methods that augment models by introducing knowledge from KGs during inference. ToG utilizes the model to search for relevant entities and relationships within KGs to solve complex problems. KGR identifies, within the answers of LLMs, entities that exist in KGs, thereby correcting the answers. However, these methods all exhibit ambiguity in the entity-linking step: ToG directly leaks the entity's ID, while KGR does not explicitly specify the entity linking. This significantly weakens the generalizability of these methods in practical applications. Moreover, to the best of our knowledge, methods for enhancing LLMs based on KGs have not yet been applied to questions with open-ended answers.

¹ https://chat.openai.com/

3 Methodology

In this section, we describe the approach in detail. The general flow of the method can be seen in Figure 1. First, we define a graph as a set of triples G = {S, R, O}, where S denotes the set of subjects, R the set of relations, and O the set of objects.

3.1 Generation of Pseudo-Graph

To generate the pseudo-graph, we initially aimed to have the LLM directly generate fact-related triples. However, when the LLM has not been fine-tuned, getting it to understand and generate fact-related triples is a relatively challenging task. Since LLMs are trained on large-scale natural corpora (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023; Chowdhery et al., 2022), they are more inclined to respond in continuous language rather than in discrete triples. Relying on directly generated triples may lead the model to produce triples that do not conform to the rules, for example: <Yangtze River> <flows Hubei>, where the relation and object are fused into a single slot. Considering that LLMs have decent code capabilities (Yetiştiren et al., 2023), programming languages are adopted as an intermediary bridge between natural language and triples.
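As a minimal sketch of this Cypher-as-bridge idea (the Cypher text, entity names, and regex below are illustrative assumptions, not the paper's actual parser), generated CREATE clauses can be decoded into pseudo-triples roughly as follows:

```python
import re

# Illustrative Cypher output in the style of Figure 2; in the pipeline it
# would come from prompting the LLM with two in-context Cypher examples.
cypher = """
CREATE (apple_vision_pro)-[:COMES_WITH]->(some_chip)
CREATE (apple_vision_pro)-[:MADE_BY]->(apple)
"""

# Decode clauses of the form CREATE (subject)-[:RELATION]->(object)
# into (subject, relation, object) pseudo-triples.
pattern = re.compile(r"CREATE\s*\((\w+)\)-\[:(\w+)\]->\((\w+)\)")
pseudo_triples = pattern.findall(cypher)
print(pseudo_triples)
# [('apple_vision_pro', 'COMES_WITH', 'some_chip'),
#  ('apple_vision_pro', 'MADE_BY', 'apple')]
```

In the paper's pipeline, the generated Cypher is instead executed on Neo4j and the resulting graph is decoded into triples, which is more robust than regex parsing.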
[Figure 2: Generation of the pseudo-graph. The prompt consists of a task description, two in-context examples ("Who has the largest area of the Great Lakes in the United States?" and "Who covers more countries, the Andes or the Himalayas?"), and the task question ("What kind of chips does the Apple Vision Pro use?"); the output is a set of Cypher statements such as CREATE (vision pro)-[:COMES_WITH]->(...).]

Specifically, the LLM is instructed with two examples in the Cypher language, asking it to generate Cypher queries that could solve the problem. Then, we run the Cypher queries on Neo4j² and decode the result into the form of triples. This ensures that the model produces output in a format it is familiar with; additionally, the resulting statements are semantically close to triples, which benefits the subsequent semantic querying. Finally, we obtain the pseudo-graph Gp = {Sp, Rp, Op} from the LLM.

² https://neo4j.com/

3.2 Atomic Knowledge Verification

3.2.1 Semantic Querying

First, for the semantic knowledge graph, we extract a subgraph Gbase = {Sbase, Rbase, Obase} from Wikidata or Freebase based on the questions. We then use Sentence-BERT (Reimers and Gurevych, 2019) to encode the triples after parsing them into semantic form. Following that, the cosine similarity between each triple in Gp and each triple in Gbase is calculated, and for each triple in Gp we select the top 10 triples in Gbase as the query result, forming Gt = {St, Rt, Ot}. From Gt we can extract relevant entities and relationships. However, due to the large number of triples in Gp, the retrieved relationships and entities can be quite extensive, sometimes even exceeding the maximum token limit of the LLM, so a pruning method is needed. The ToG (Anonymous, 2024) method uses LLM scoring to prune relationships; however, doing the same here would not make efficient use of the knowledge graph Gp generated by the LLM, and relying heavily on the LLM can lead to accumulated errors. We therefore propose a two-step pruning method that eliminates the need for LLM judgment. First, we use the size k of Sp in Gp to select, from St, the top k entities with the highest number of triples as candidates. This step helps eliminate less popular entities with the same name. For example, in Wikidata there are 7 entities labeled "Yao Ming", but the basketball player Yao Ming is the most popular one, so for the question "Where was Yao Ming born?" the method is more inclined to select the basketball player as the answer entity. Next, the cosine similarity computed during semantic querying is utilized to further prune and rank entities. For each subject s in St, we calculate the average semantic score (the cosine similarity when querying with Gp) of all triples with s as the subject and use it as the entity confidence score; entities with scores below 0.7 are filtered out. Finally, we obtain the ground graph Gg. It is worth noting that our method differs significantly from previous approaches: ToG (Anonymous, 2024) leaks the entity's KG ID during model decision-making, and the KGR (Guan et al., 2023) method does not clearly specify how to perform entity linking. These methods have limitations in practical applications.

3.2.2 Pseudo-Graph Verification

When verifying Gp with the LLM, the triples of entities s in Sg with higher confidence scores are placed closer to Gp. This arrangement helps the LLM establish better attention between Gp and Gg.
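As a concrete illustration of the semantic scores used above, here is a minimal sketch assuming the sentence-transformers library; the model checkpoint, triple verbalizations, and top-k value are illustrative, not the paper's settings.

```python
from sentence_transformers import SentenceTransformer, util

# Verbalized triples from the pseudo-graph Gp and the KG subgraph Gbase.
model = SentenceTransformer("all-MiniLM-L6-v2")
pseudo = ["John McCarthy | made | LISP"]
base = [
    "John McCarthy | notable work | LISP",
    "John McCarthy | award | Turing Award",
    "Mark Twain | date of birth | 1835-11-30",
]

scores = util.cos_sim(
    model.encode(pseudo, convert_to_tensor=True),
    model.encode(base, convert_to_tensor=True),
)  # shape: (|Gp|, |Gbase|)

# For each pseudo-triple, keep the top-k most similar KG triples (the paper
# keeps the top 10); entities averaging below 0.7 are later filtered out.
k = 2
top = scores[0].topk(k)
for value, idx in zip(top.values, top.indices):
    print(f"{pseudo[0]} -> {base[int(idx)]} (cos={float(value):.3f})")
```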
Then, two simple in-context examples are used to enable the LLM to perform self-verification of the knowledge graph, after which we obtain the final knowledge graph Gf. Park et al. (2023) also utilize an LLM to verify a KG generated by an LLM; however, their method focuses on verifying the consistency of the model in solving subproblems of multi-hop questions. Moreover, they enhance the LLM through retrieval over a text corpus using those subproblems, which establishes only a weak coupling between the knowledge graph, the question, and the model.

3.3 Answer Generation

In this step, we use two examples to teach the LLM how to answer questions based on the knowledge graph. The model is then instructed to generate answers using the examples, the question itself, and Gf.

4 Experiments

4.1 Models

In our experiments, we used GPT-3.5 and GPT-4 (OpenAI, 2023) as the large language models to generate graphs, perform verification, and produce the final answers. Sentence-BERT (Plenz et al., 2023) is chosen as the encoder for the semantic KG and as the query module for retrieving knowledge related to the triples generated by the LLM.

4.2 Datasets

To verify that the method is valid for natural question answering, it is tested on three different types of datasets, covering single-hop questions, multi-hop questions, and open-ended questions: SimpleQuestions (Bordes et al., 2015a), QALD-10 (Perevalov et al., 2022), and Nature Questions.

• SimpleQuestions (Bordes et al., 2015a) uses a manual annotation method to generate questions from facts in the knowledge base, using Freebase as the source of answers.

• QALD-10 (Perevalov et al., 2022) is a multilingual, multi-hop question answering dataset that uses Wikidata as its answer knowledge base and covers multiple languages. In our experiments, English is chosen for the question-answering tasks.

• Nature Questions is a dataset we compiled, featuring questions people commonly ask in daily life, including open-ended answers, multiple-answer responses, and queries about new knowledge. We manually constructed 50 questions for this dataset, writing three answers for each question so that the reference answers are sufficiently comprehensive.

Details. Due to the closure of the Freebase API, we used a subset of FB2M (Bordes et al., 2015b) as our knowledge base on Freebase. Furthermore, because the number of questions in SimpleQuestions (Bordes et al., 2015b) is too large (100k), 1,000 questions were randomly selected for GPT-3.5; due to the cost of GPT-4, 150 questions were chosen for its SimpleQuestions test. For both QALD-10 and Nature Questions, we use the full dataset for testing and construct the corresponding semantic KG based on the questions.

Evaluation Metrics. For both SimpleQuestions (Bordes et al., 2015a) and QALD-10 (Perevalov et al., 2022), we adopt the Hit@1 metric as the measure of question answering accuracy. For the Nature Questions dataset, ROUGE-L F1 (Lin, 2004) is used to evaluate the accuracy and comprehensiveness of the LLM's answers.
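For reference, a minimal sketch of the ROUGE-L F1 computation used for Nature Questions, assuming Google's rouge-score package; the example strings are illustrative.

```python
from rouge_score import rouge_scorer

# ROUGE-L F1 between a reference answer and a model answer; with three
# reference answers per question one would typically take the best score.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
reference = "Leonardo da Vinci is widely regarded as the most famous painter."
prediction = "The most famous painter in the world is Leonardo da Vinci."
score = scorer.score(reference, prediction)["rougeL"].fmeasure
print(f"ROUGE-L F1: {score:.3f}")
```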
4.3 Baselines

To judge the validity of the Pseudo-Graph Generation and Atomic Knowledge Verification method, the following baselines are chosen for comparison:

• IO (Brown et al., 2020): We use the standard input-output (IO) prompt for direct input-output testing of the model, with 6 in-context examples.

• Chain-of-Thought (CoT) (Wei et al., 2022): This encourages the model to generate its thinking process during output generation, with 6 in-context examples.

• Self-Consistency (SC) (Wang et al., 2023b): We use a sampling temperature of 0.7 and perform three sampling iterations, using voting to aggregate the results.

• Question Semantic Matching (QSM): Directly matching the question against the semantic knowledge graph for retrieval.

Table 2: The main results of our experiments. Bold indicates the best-performing method on the dataset for the model. SimpleQuestions and QALD-10 are evaluated with Hit@1; Nature Questions is evaluated with ROUGE-L.

                | Hit@1                     | ROUGE-L
Model   Method  | SimpleQuestions | QALD-10 | Nature Questions
GPT-3.5 IO      | 20.2            | 38.7    | 20.5
        CoT     | 22.0            | 40.5    | 23.2
        SC      | 21.2            | 41.1    | -
        QSM     | 27.5            | 34.2    | 23.8
        Ours    | 34.3            | 48.6    | 37.5
GPT-4   IO      | 29.9            | 44.7    | 20.9
        CoT     | 32.2            | 48.9    | 27.7
        SC      | 36.0            | 48.9    | -
        QSM     | 31.3            | 46.2    | 27.0
        Ours    | 40.0            | 56.5    | 39.2

4.4 Main Results

Our main results can be seen in Table 2, which demonstrates the effectiveness of the framework on open-ended questions as well as on traditional precise questions.

4.4.1 Comparison With Other Methods

Firstly, we can observe from Table 2 that the Pseudo-Graph Generation and Atomic Knowledge Verification method achieves better results than the baselines across different LLMs and datasets. It can also be observed that QSM performs worst on QALD-10; with GPT-3.5 in particular, it scores 4.5 points lower than the IO method. This indicates the challenge of directly matching the semantics of the question to obtain triples for multi-hop questions: the continuous nature of question phrasing contrasts with the discrete nature of semantic triples, leaving a gap between the two.

Advantage in deterministic question answering: For deterministic question answering, the Pseudo-Graph Generation and Atomic Knowledge Verification method achieves the highest accuracy on the QALD-10 dataset, where it even outperforms the ToG method, which achieves 54.7 with GPT-4 (Anonymous, 2024). Furthermore, with all models our method achieves a higher accuracy than the fine-tuned SOTA (Borroto et al., 2022), whose accuracy is 45.4. The improvement is also evident on SimpleQuestions, where the method improves by 10.9 (GPT-3.5) and 5.3 (GPT-4) points, respectively, compared to the second baseline on the two models. We also found that, with this enhancement, the factual hallucination of GPT-3.5 can be effectively addressed: it even outperforms various GPT-4 baselines on the SimpleQuestions dataset.

Better performance on natural question answering: On the Nature Questions dataset, the method also achieves significant improvements in the ROUGE-L evaluation metric. With our method, on open-ended problems GPT-3.5 performs even better than GPT-4 under CoT prompt enhancement, and GPT-4 improves by at least 11.5 ROUGE-L points. This indicates that the method can be effectively applied to real-world problems as well. Taking into account the improvements on QALD-10 (multi-hop questions) and Nature Questions (open-ended questions), we conclude that the method is effective for complex questions.
In conclusion, the above results indicate that LLMs with this method achieve good performance in both precise question-answering and open-ended question-answering tasks.

4.4.2 Generalization Across Different KG Sources

Additionally, we demonstrate the generalization capability of our method across different knowledge graph sources by evaluating it on the same set of questions with varying KG sources. GPT-3.5 is selected as the model for this test, and SimpleQuestions and Nature Questions are selected as the datasets. We compare the improvement of the model over CoT on the different datasets and knowledge graphs to validate its performance.

Table 3: Performance on SimpleQuestions and Nature Questions with different KG sources. SimpleQuestions is built on Freebase as its KG source.

Method          | SimpleQuestions | Nature Questions
CoT             | 22.0            | 23.2
Ours / Freebase | 38.2            | 26.7
Gain            | +16.2           | +3.5
Ours / Wikidata | 28.1            | 37.5
Gain            | +6.1            | +14.3

From Table 3, it is evident that the method improves over the CoT method on the same set of questions across different KG sources. It is worth mentioning that SimpleQuestions is based on Freebase as its KG source; certain single-hop relations in Freebase might require multiple hops in Wikidata, a factor that was not considered during the construction of the knowledge graph and that led to a smaller improvement there. In conclusion, the results above demonstrate, to some extent, the generalization capability of the method across different KG sources for the same set of questions.

4.5 Ablation Study

In this section, we aim to verify whether the components of the method are each doing useful work. The most important question is: are the verification steps useful? It is possible that merely having the model generate a pseudo-graph activates the internal knowledge of the model and thereby improves its performance, while the pseudo-graph verification steps slightly reduce the accuracy of the pseudo-graph, making the results only superficially better than the other baselines. Therefore, GPT-3.5 and GPT-4 are tested on QALD-10 and Nature Questions, and we compare against the accuracy obtained by directly providing the model with the pseudo-graph, to determine whether the verification steps perform well.

Table 4: GPT-3.5's performance on QALD-10 and Nature Questions.

Method                 | QALD-10 | Nature Questions
CoT                    | 40.5    | 23.2
Pseudo-Graph           | 44.4    | 24.3
Gain from CoT          | +3.9    | +1.1
Verification           | 48.6    | 37.5
Gain from Pseudo-Graph | +4.2    | +13.2

Table 5: GPT-4's performance on QALD-10 and Nature Questions.

Method                 | QALD-10 | Nature Questions
CoT                    | 48.9    | 27.7
Pseudo-Graph           | 53.9    | 24.4
Gain from CoT          | +5.0    | -3.3
Verification           | 56.5    | 39.2
Gain from Pseudo-Graph | +2.6    | +14.8

Pseudo-Graph Generation stimulates the model's knowledge: From Table 4, we can see that for GPT-3.5, using the model to generate pseudo-graphs, as opposed to the traditional CoT, can to some extent activate the model's factual capabilities. Regarding Table 5, on the QALD-10 dataset the pseudo-graph likewise improves over the traditional CoT.
Atomic Knowledge Verification increases the precision and breadth of knowledge: Additionally, from Table 4, the verification steps did not degrade the model's performance; instead, they improved accuracy over the pseudo-graph alone. In Table 5, for Nature Questions, the pseudo-graph actually caused a certain decline in the model's performance. This could be because, when generating the pseudo-graph, the model becomes more inclined to output only knowledge it is certain of, so the pseudo-graph is not comprehensive in terms of facts; moreover, the performance of GPT-4 with CoT is already quite good, hence the slight decline in this case. However, this further demonstrates that the Atomic Knowledge Verification steps indeed perform effectively. In summary, the results above show that the verification steps work effectively, and they also explain, to some extent, the function of the two steps: using the model to generate pseudo-graphs stimulates the model's own factual recall more effectively than the traditional CoT method, while the Atomic Knowledge Verification steps not only correct erroneous facts in precise questioning but also significantly enhance the factual accuracy and comprehensiveness of the model's answers to open-ended questions.

4.6 Error Analysis

We aim to identify some of the causes of errors by analyzing the four steps. Details can be found in Appendix A.2.

4.6.1 Generation of Pseudo-Graph

In our approach, pseudo-graphs are generated by first having the LLM produce Cypher statements and then parsing them, so errors may occur during the LLM's generation of Cypher. However, since Cypher is a relatively simple and formally structured language, the model seldom makes mistakes in practice: GPT-3.5 exhibited a 0.6% error rate on the QALD-10 and SimpleQuestions datasets, and in the other cases the model made no mistakes in generating Cypher statements. We found that the primary cause of Cypher generation errors is that the model mistakenly believes a query needs to be made against the KG and thus emits "MATCH" clauses (instead of the expected "CREATE" statements). This may be related to the fact that the model mostly saw Cypher used for querying knowledge graphs during training.

4.6.2 Semantic Querying

During semantic querying, there were instances where entities were not matched, which could be due to the threshold settings in the querying process. Additionally, during pseudo-graph-based pruning, answer entities might occasionally be inadvertently removed.

4.6.3 Pseudo-Graph Verification

Errors may also occur during fact verification with the LLM; for example, inaccuracies in the model during verification can introduce errors into the final graph. We calculated, for both models on the QALD-10 dataset, the proportion of total errors that are new errors introduced by verification compared to directly using the pseudo-graph: 15.2% for GPT-3.5 and 13.8% for GPT-4. This is within our acceptable range, and it also reflects, to some degree, that GPT-4's verification performance is superior.
We found that the main errors in the model's verification process were caused by directly appending the base graph after the pseudo-graph without actually modifying the pseudo-graph.

4.6.4 Answer Generation

We found that, in answering, the model largely follows the graph in its responses.

5 Conclusion & Future Work

A framework that combines Pseudo-Graph Generation and Atomic Knowledge Verification is proposed to enhance LLMs for open-ended questions. Pseudo-Graph Generation exploits the hallucination property of LLMs: even when the generated triples are wrong, they provide a scaffold of the required knowledge points. Atomic Knowledge Verification performs atomic-level semantic querying and verification of facts, solving the generalization issue for the same question across different KG sources. We thereby implement KG-based augmentation of LLMs in open-ended answer scenarios. Experimental results show that the framework not only achieves good results on traditional precise questions but also obtains an equally good boost on natural question answering. Our approach points out a feasible direction for enhancing LLMs with KGs in practical applications. In future work, we would like to utilize better semantic encoding models to improve semantic querying; whether better pruning strategies can improve the quality of the acquired knowledge is also worth exploring. We also plan to build an additional Pseudo-Graph Verification module to better enhance the knowledge of the LLM. Finally, we want to engineer our framework into a practically applicable means of enhancing LLMs.

6 Limitations

The method still has certain limitations. For example, during Semantic Querying, the answer entity may be omitted during pruning, resulting in errors. Additionally, during Pseudo-Graph Verification, because the LLM is used to verify itself, there may be a bias towards the LLM's own pseudo-graph, leading to unsuccessful verification." + } + ] +} \ No newline at end of file