diff --git "a/title_31K_G/test_title_long_2405.03085v1.json" "b/title_31K_G/test_title_long_2405.03085v1.json" new file mode 100644--- /dev/null +++ "b/title_31K_G/test_title_long_2405.03085v1.json" @@ -0,0 +1,52 @@ +{ + "url": "http://arxiv.org/abs/2405.03085v1", + "title": "Compressing Long Context for Enhancing RAG with AMR-based Concept Distillation", + "abstract": "Large Language Models (LLMs) have made significant strides in information\nacquisition. However, their overreliance on potentially flawed parametric\nknowledge leads to hallucinations and inaccuracies, particularly when handling\nlong-tail, domain-specific queries. Retrieval Augmented Generation (RAG)\naddresses this limitation by incorporating external, non-parametric knowledge.\nNevertheless, the retrieved long-context documents often contain noisy,\nirrelevant information alongside vital knowledge, negatively diluting LLMs'\nattention. Inspired by the supportive role of essential concepts in\nindividuals' reading comprehension, we propose a novel concept-based RAG\nframework with the Abstract Meaning Representation (AMR)-based concept\ndistillation algorithm. The proposed algorithm compresses the cluttered raw\nretrieved documents into a compact set of crucial concepts distilled from the\ninformative nodes of AMR by referring to reliable linguistic features. The\nconcepts explicitly constrain LLMs to focus solely on vital information in the\ninference process. We conduct extensive experiments on open-domain\nquestion-answering datasets to empirically evaluate the proposed method's\neffectiveness. The results indicate that the concept-based RAG framework\noutperforms other baseline methods, particularly as the number of supporting\ndocuments increases, while also exhibiting robustness across various backbone\nLLMs. This emphasizes the distilled concepts are informative for augmenting the\nRAG process by filtering out interference information. To the best of our\nknowledge, this is the first work introducing AMR to enhance the RAG,\npresenting a potential solution to augment inference performance with\nsemantic-based context compression.", + "authors": "Kaize Shi, Xueyao Sun, Qing Li, Guandong Xu", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "Compressing Long Context for Enhancing RAG with AMR-based Concept Distillation", + "main_content": "Introduction Large Language Models (LLMs) have emerged as indispensable tools for daily information acquisition, owing to their extensive knowledge base and ability to fulfil diverse user instructions [6, 47, 1]. By leveraging large-scale pre-training on massive datasets, LLMs memorize vast amounts of knowledge within their parameters as internal memory, known as parametric knowledge [33]. However, the presence of outdated or incorrect knowledge within internal memory can lead to hallucinations, hindering the performance of LLMs\u2019 inferencing process [46]. This limitation is particularly pronounced when handling long-tail knowledge for domain-specific or highly specialized queries, as the inherent difficulty in memorizing rare entities persists even in the most robust models. Consequently, the overreliance on potentially flawed parametric knowledge can significantly interfere with the reliability of LLMs\u2019 outputs, especially in scenarios with fine-grained knowledge requirements [58, 36]. 
Retrieval Augmented Generation (RAG) employs additional retrievers to augment LLMs with external, non-parametric knowledge, effectively expanding their internal knowledge boundaries [27, 14]. This allows LLMs to access up-to-date, query-focused information that may not be adequately memorized within their parametric memory, alleviating the aforementioned limitations [24]. In contrast to fine-tuning by updating the model parameters, RAG preserves pre-trained knowledge while dynamically incorporating relevant external context. This paradigm offers greater flexibility and scalability, as the retrievers can be easily plugged in and out without modifying the underlying language model's parameters, thus circumventing complex computational hurdles [17, 16]. However, RAG is easily confused when dealing with long-context retrieved supporting documents, which often contain multiple shreds of evidence providing vital knowledgeable context but are also accompanied by noisy and irrelevant information [56]. The distracting contexts can dilute the LLMs' attention and adversely affect their performance with misrepresentation [30, 25]. Compressing lengthy contexts to distil vital knowledge is therefore crucial for enhancing LLMs and ensuring factually consistent responses in the RAG process.

Figure 1: The examples of concept-based RAG.¹

Numerous studies have demonstrated that individuals tend to directly search for key concepts when reading long documents, as the brain completes the remaining details based on prior knowledge, expectations, background, and motivations [15, 22]. This selective attention to critical information allows readers to ignore redundant details and rearrange the text informatively [51]. As illustrated in Fig. 1, even given only the key concepts of the question-related supporting documents, we can still grasp the crucial semantics. LLMs parameterize massive common knowledge, enabling them to exhibit a similar ability in context understanding even when word- or character-level information is disrupted [43, 7]. This raises the question of whether LLMs can comprehend scenarios based solely on discrete informative concepts.

Linguistic features, such as semantics and syntax, have significantly improved the interpretability, controllability, and diversity of Natural Language Generation (NLG) [28]. Language models can implicitly discover these features during pre-training to ensure the logic of the generated text [21]. It has been demonstrated that explicitly leveraging linguistic features for downstream tasks is beneficial, as it refactors the source documents into concise representations that reduce entropy by focusing on the critical information, thereby aiding in a comprehensive understanding of the described scenarios [41, 48, 44, 28, 23, 55]. This stability enables linguistic features to reliably assist context understanding.

Inspired by the aforementioned insights, we propose enhancing RAG's performance with the crucial concepts distilled from the raw retrieved supporting documents. To effectively capture the informative concepts, we introduce Abstract Meaning Representation (AMR), a semantic formalism that encodes the meaning of serialized texts as a rooted, directed, labelled, acyclic graph [3].
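To make the AMR formalism concrete, below is a minimal sketch (not from the paper) that decodes a PENMAN-serialized AMR graph for the classic example sentence "The boy wants to go" using the open-source penman library; the graph string is a standard textbook example, not one of the paper's retrieved documents.

```python
# A minimal sketch of inspecting an AMR graph with the `penman` library
# (pip install penman). The graph is the classic AMR example for
# "The boy wants to go"; it is illustrative only.
import penman

amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
"""

graph = penman.decode(amr)

# Concept nodes ("instances") carry the core meaning the paper distills.
for instance in graph.instances():
    print(instance.source, "->", instance.target)   # e.g. w -> want-01

# Edges such as :ARG0/:ARG1 are the relations that the distillation
# algorithm deliberately ignores when flattening the graph.
for edge in graph.edges():
    print(edge.source, edge.role, edge.target)
```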
Compared to other linguistic representations, AMR prioritizes semantic consistency among the concepts carried by nodes when representing sentences. This offers the advantage of automatically rectifying surface-level variations and understanding abbreviated terms, ensuring that the structured concepts represent the underlying meaning and transcend the limitations of linguistic noise [59]. Specifically, we propose the concept-based RAG framework with the AMR-based concept distillation algorithm, which formats the concepts for augmenting LLMs by compressing the lengthy context to concentrate exclusively on crucial information. We empirically experiment on two open-domain Q&A datasets, PopQA [32] and EntityQuestions [40]. The results show that the performance of our method improves significantly as the number of supporting documents increases, outperforming baselines with various compression methods and backbone LLMs.

The contributions of this paper can be summarized as follows:

• This paper proposes the concept-based RAG framework that explicitly integrates AMR, a semantic representation, to enable LLMs to focus on essential rather than messy knowledge when processing long-context retrieved supporting documents. To the best of our knowledge, this is the first research introducing AMR to enhance RAG for more reliable inference.

• We propose an AMR-based concept distillation algorithm, which compresses long-context raw supporting documents into concepts by formatting the informative nodes. The distilled concepts are more knowledge-centralized than the raw supporting documents, reducing the interference of irrelevant information during the inference process of LLMs.

• We conduct extensive experiments on open-domain Q&A datasets. The results indicate that our framework effectively enhances inference performance as the number of supporting documents increases, outperforming baselines with various context compression methods and backbone LLMs. This demonstrates its applicability in long-context RAG scenarios.

¹ The corresponding complete sentences: [1] The Outfit is a 1973 crime film directed by John Flynn. [2] It stars Robert Duvall, Karen Black, Joe Don Baker and Robert Ryan. [3] Flynn's screenplay is an adaptation of the novel of the same name by Richard Stark. [4] Two hitmen drive to Eddie Macklin's house to assassinate him as he builds a brick wall in his backyard.

2 Related Works

2.1 Long-context Understanding

The increasing complexity of downstream tasks and the demand for models capable of capturing intricate dependencies have driven significant attention to the long-context understanding of LLMs [37, 19, 53]. One prominent research avenue involves modifying the basic architecture of LLMs. For instance, Dai et al. [11] introduced a segment-level recurrence mechanism with their Transformer-XL model, enabling it to retain longer contextual information than the standard Transformer structure. Similarly, Beltagy et al. [4] extended the self-attention mechanism in their Longformer model to handle longer sequences by introducing a sparse attention pattern, thereby facilitating the efficient processing of documents with thousands of tokens. However, a significant drawback of modifying the model architecture is the necessity for complex re-training processes. In contrast, research on prompt compression aims to understand long-token prompts by compressing them into low-dimensional soft prompts [50, 9, 34].
While offering a more efficient alternative to architecture modification, this approach constrains the transferability of learned prompts across various LLMs. Recent research has advanced to a more intuitive level, aiming to comprehensively understand the context by directly expanding the context window or through explicit compression. Chen et al. [8] introduced position interpolation to extend the context window of pre-trained LLMs, scaling LLaMA's context window to 32k tokens with few fine-tuning steps. Ding et al. [12] proposed LongRoPE to extend LLMs' context window to 2048k tokens while maintaining the performance of the original short context window through a positional and interpolation progressive extension strategy. However, the long context window raises another challenge of diluting core information with redundant data [53]. To address this, Li et al. [29] filtered out irrelevant context with low self-information to compress long prompts. Chuang et al. [10] proposed the Nano-Capsulator to compress original prompts into capsule prompts, decreasing inference latency across diverse LLMs. Compression methods can benefit RAG by allowing LLMs to focus on essential knowledge in supporting documents [54].

2.2 Linguistics-augmented NLG

Incorporating linguistic principles into LLMs has shown promise in improving the coherence and semantic fidelity of generated text [55]. Augmentation techniques like syntactic trees [35] and lexical patterns [28] assist in linguistic feature injection, enabling language models to generate more faithful text. Ahmed et al. [2] proposed automatic semantic augmentation of prompts to enhance LLMs with tagged facts, resulting in improved code summarization performance. Zhou et al. [60] introduced InstructCTG, a framework for controlling LLMs' generation based on syntax constraints, facilitating flexibility and adaptation to new conditions without complex model modification. By leveraging linguistic insights, LLMs can be explicitly guided to mitigate biases inherent in parameterized-only approaches, thereby enhancing performance in tasks demanding strict factual consistency.

Abstract Meaning Representation (AMR) has proven its efficacy in enhancing downstream generation tasks by providing a structured semantic representation that encapsulates static concepts [18]. Frisoni et al. [13] integrated AMR with pre-trained language models to enhance biomedical summarization by capturing inter-entity relations. Ribeiro et al. [38] employed AMR to improve factuality evaluation in abstractive summarization by identifying content verifiability errors and subsentence-level factual inconsistencies. Shi et al. [42] proposed AMR-TST, which generates fluent and reliable texts with the target style by optimizing core concept nodes. Jangra et al. [20] preserved style-agnostic content while generating transferred text by utilizing AMR as an intermediate representation. These studies illustrate AMR's advantages in capturing essential concepts containing informative linguistic features.

3 Method

3.1 Concept-based RAG Framework

This section introduces the proposed concept-based RAG framework for inference utilising the concepts distilled from the raw supporting documents. The overview of the framework is in Fig. 2.

Figure 2: The overview of the concept-based RAG framework, which consists of three main components: (a) information retrieval, (b) concept distillation, and (c) concept-based inference.
Given an input question Q, the (a) information retrieval component aims to utilize a retriever to return the top-K knowledgeable supporting documents D = {D1, ..., DK} relevant to Q from sources such as Wikipedia or other information repositories. At this stage, the retriever's performance significantly influences the resulting answer set A = {A1, ..., AM} [33, 14]. However, the retriever's performance is beyond this paper's scope. We hypothesize that all retrieved supporting documents D contain the correct answer corresponding to Q, expressed as a proposition: ∀Dk ∈ D, ∃Am ∈ A, Am ⊆ Dk.

The (b) concept distillation component is devised to format the concept C from the retrieved supporting document D by the proposed AMR-based concept distillation algorithm. This algorithm converts the supporting documents from continuous sequences to discrete concepts formatted from the AMR graph, denoted as G. Further details of this algorithm will be elucidated in the subsequent section.

After obtaining the distilled concept C, the (c) concept-based inference component proceeds to integrate it with various backbone LLMs to derive answers A using a faithful-intensive prompt template as follows: [Refer to the following facts to answer the question. Facts: C. Question: Q]. The intensity of prompts has been demonstrated to influence LLMs' adherence to knowledge from internal memory and retrieved documents [52]. Since our hypothesis is that the retrieved documents contain correct answers, we encourage the LLMs to leverage the knowledge encapsulated in C when responding to queries. This strategy helps minimize potential conflicts caused by their memorized parametric knowledge. To achieve this objective, we designate the concept as a "fact" within the instructional prompt, explicitly delineating a delimited sandbox in which LLMs presuppose the absolute correctness of the knowledge conveyed by C. This non-parametric knowledge can seamlessly integrate into LLMs in a plug-and-play manner. The overarching framework can be represented as Eq. 1.

P(A|Q) = P(A|C, Q)P(C|D, Q)P(D|Q). (1)
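To summarize the framework operationally, here is a minimal, hypothetical sketch of the three components in Eq. 1. The retriever, distiller, and LLM client are placeholder callables, not the authors' implementation; only the prompt template follows the paper's wording.

```python
# A minimal sketch of the three-component pipeline (Eq. 1). The retriever
# and LLM client are hypothetical placeholders.
from typing import Callable, List

def concept_based_rag(question: str,
                      retrieve_top_k: Callable[[str, int], List[str]],
                      distill_concepts: Callable[[str], List[str]],
                      llm_generate: Callable[[str], str],
                      k: int = 5) -> str:
    # (a) information retrieval: top-K supporting documents D for Q
    documents = retrieve_top_k(question, k)
    # (b) concept distillation: compress each D_k into a concept set C
    concepts = [c for doc in documents for c in distill_concepts(doc)]
    # (c) concept-based inference with the faithful-intensive template
    prompt = (f"Refer to the following facts to answer the question. "
              f"Facts: {', '.join(concepts)}. Question: {question}")
    return llm_generate(prompt)
```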
3.2 AMR-based Concept Distillation

Abstract Meaning Representation (AMR) serves as a logical formal semantic structure proficient in encapsulating the common-sense knowledge necessary for representing events, time, participants, and other elements within serialized texts [39]. Given a supporting document Dk ∈ D, an AMR parser is utilized to parse Dk into the corresponding AMR graph G = <N, E>, where N represents the nodes for concepts and E denotes the edges for the correlation relationships. In this context, we utilize an mBART-based [31] parser (https://github.com/BramVanroy/multilingual-text-to-amr) trained on the AMR 3.0 corpus (https://catalog.ldc.upenn.edu/LDC2020T02) to address potential multilingual concerns. The detailed illustration of the AMR graph parsing is depicted in Table A1.

Algorithm 1: Concept Distillation
Input:  AMR Graph (G)
Output: concept (C)
 1  Function Concept_Distillation(G):
 2      concept ← [], role ← [];
 3      for Gsntn in SplitSnt(G) do
 4          for N in DFS(Gsntn) do
 5              if IsRole(N) then
 6                  if IsName(N) then
 7                      AppendRole(HandleName(N))
 8                  if IsWiki(N) then
 9                      AppendRole(HandleWiki(N))
10                  if IsDate(N) then
11                      AppendRole(HandleDate(N))
12              else
13                  if role is not None then
14                      AppendConcept(HandleRole(role));
15                      role ← [];
16                  AppendConcept(N);
17              if (N is Last) and (role is not None) then   // repeat Algorithm Lines 5-11
18                  AppendConcept(HandleRole(role));
19      concept ← ConceptFormat(concept);
20      concept ← ConceptBacktrace(concept);
21      return C ← concept

We propose the concept distillation algorithm to format the concepts represented in G, as described in Algorithm 1. The supporting document Dk encompasses multiple sentences (sntn), and the AMR parser can structurally parse Dk into a pre-defined multi-sentence structure. The SplitSnt(·) function is designed to partition G and organize the resulting sentence-based sub-graphs according to their sequential order. Notably, we simplify G by disregarding the agent and patient of the concepts, i.e., the edges denoting relations between the connected concepts (Frame args, ARGX). Consequently, G is streamlined into a unidirectional connecting structure. Leveraging this structure, we perform a Depth First Search, DFS(·), on the nodes N of G to traverse the concepts while maintaining the relative positional correlation of adjacent nodes. This approach preserves the connections as they exist in the preceding sequential representation, and the process is elaborated in Fig. A1. Previous research has investigated the influence of context order on LLMs [30]; we examine various traversal methods and test their potential impact in Section D.

AMR defines a set of roles to meticulously delineate the semantic fabric of sentences. This paper underscores the careful handling of three roles, namely :name, :wiki, and date-entity, employing IsRole(·) to identify the predefined roles comprehensively. The :name role denotes a property node within the AMR graph, representing entities such as individuals, organizations, or geographic locations. In instances where the concept expressed by :name spans multiple words, the AMR parsing process decomposes each word within the :name into predicate roles (:op), thereby dispersing the holistic concept across multiple nodes. During the DFS(·) traversal process, fragmented nodes can potentially confuse LLMs due to incomplete meaning expressions. To maintain the integrity of concepts carried by :name, we introduce HandleName(·), organizing predicates in a stack structure. The :wiki role provides reliable external concept references sourced from Wikipedia. To standardize concepts' diverse expressions referring to the same named entities, we utilize the HandleWiki(·) function, which aligns the concepts with the corresponding definitions in Wikipedia. If the concept in :name differs from :wiki, we designate the concept expressed by this node as :wiki to avoid semantic ambiguity. In addition, there is a date-entity role that depicts temporal concepts. In our algorithm, we specifically manage the roles :year, :month, and :day by HandleDate(·). This function consolidates roles under the same date-entity into a single concept, translating numerical months into textual representations (e.g., "19 04 2024" becomes "19 April 2024") for clear expression.
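To make the traversal concrete, below is a simplified, hypothetical sketch of Algorithm 1 over a penman-decoded graph. It covers only the :name and date-entity handling described above; HandleWiki, the role stack, ConceptFormat, and ConceptBacktrace are omitted, and traversal follows the serialization order rather than a full DFS.

```python
# A simplified sketch of the concept distillation traversal. This is an
# approximation of Algorithm 1, not the authors' implementation.
import calendar
import penman

def distill_concepts(amr_string: str) -> list[str]:
    graph = penman.decode(amr_string)
    concepts = []
    for src, role, tgt in graph.triples:
        if role != ':instance':
            continue                       # only concept (instance) nodes
        if tgt == 'name':
            # HandleName: merge the :opN predicates into one holistic concept
            ops = sorted(((a.role, a.target)
                          for a in graph.attributes(source=src)
                          if a.role.startswith(':op')),
                         key=lambda rt: int(rt[0][3:]))
            concepts.append(' '.join(t.strip('"') for _, t in ops))
        elif tgt == 'date-entity':
            # HandleDate: consolidate :day/:month/:year, month as text
            parts = {a.role: a.target.strip('"')
                     for a in graph.attributes(source=src)}
            month = (calendar.month_name[int(parts[':month'])]
                     if ':month' in parts else '')
            concepts.append(' '.join(p for p in (parts.get(':day', ''),
                                                 month,
                                                 parts.get(':year', '')) if p))
        else:
            concepts.append(tgt)           # ordinary concept node
    return concepts

print(distill_concepts(
    '(f / film :wiki "The_Outfit_(1973_film)" '
    ':name (n / name :op1 "The" :op2 "Outfit") '
    ':time (d / date-entity :year 1973))'))
# -> ['film', 'The Outfit', '1973']
```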
AMR incorporates special numerical annotations for certain parsing nodes, such as work-01, where the number appended to the word indicates different meanings of the same word in distinct contexts, as defined in OntoNotes [49]. In the RAG scenario, we provide LLMs with supporting documents comprising a set of concepts. This means that concepts are understood in relation to relevant contexts rather than in isolation. Therefore, the proposed concept-based RAG framework depends on the contextual learning capability of LLMs to distinguish between polysemous concepts, instead of relying on intricate semantic references. The nodes belonging to the aforementioned roles are integrated into the preliminary concept set by HandleRole(·), while AppendConcept(·) directly integrates the remaining nodes based on the corresponding instances.

The structure of AMR comprises a collection of canonical nodes (city-district, market-sector, etc.) designed to enforce knowledge and prevent hallucination regarding entity types. However, in the concept-based RAG scenario, the inference process is not directly based on AMR but on the distilled concepts. The auxiliary semantics embedded within these nodes, which are absent from the source supporting documents, may dilute the essence of the core concepts. To address this concern, we employ ConceptFormat(·) to filter out these nodes and reduce the potential interference. Additionally, frequently occurring concepts are filtered out based on their Inverse Document Frequency (IDF). Furthermore, the selection of representations in AMR follows the principle of abstraction and generalization rather than exact lexical items. This abstraction may cause the nodes to ignore variations such as tense, which are informative for concept-based RAG without reference annotations. To mitigate this, we develop the ConceptBacktrace(·) function to maintain consistency with the concepts expressed in the source supporting documents. This function backtracks the formatted concepts by incorporating representations from the supporting documents, ensuring they closely adhere to the original semantics without deviation. The backtraced concepts then serve as the finalized concepts C, providing conceptual support for LLMs in RAG inference.

4 Experiments

4.1 Datasets

We conducted extensive experiments to verify the efficacy of the concept-based RAG framework on two open-domain Q&A datasets: PopQA [32] and EntityQuestions [40]. Each dataset includes a label ("hasanswer") for every supporting document, indicating whether it contains the answer to the associated question. To ensure a focused evaluation, we selected the question-document pairs where hasanswer=True. This selection criterion accommodates scenarios where all retrieved documents contribute positively to answering questions, thus mitigating interference from extraneous factors. The experiments involved verifying the LLMs' inference performance with different K, which denotes the number of supporting documents for Q. For the PopQA dataset, we filtered out questions with subject entities having monthly Wikipedia pageviews (spop) ≥ 500. This step excludes frequently accessed entities, keeping the dataset focused on long-tail knowledge. This approach serves the dual purpose of preventing data contamination and encouraging LLMs to rely more on retrieved documents than memorized knowledge, mitigating potential knowledge conflicts in the RAG process.
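As a concrete illustration, here is a minimal sketch of this screening step. The record fields ("ctxs", "hasanswer", "s_pop") are assumptions modeled on public retrieval-augmented dumps of these datasets, not the authors' code.

```python
# A sketch of the dataset screening described above. The record layout
# is an assumption, not taken from the paper.
def screen_examples(examples: list[dict], k: int) -> list[dict]:
    kept = []
    for ex in examples:
        docs = ex["ctxs"][:k]                      # top-K supporting documents
        if len(docs) < k:
            continue
        if not all(d["hasanswer"] for d in docs):  # every D_k must contain A
            continue
        if ex.get("s_pop", 0) >= 500:              # drop frequently viewed
            continue                               # entities (PopQA only)
        kept.append(ex)
    return kept
```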
The statistics of the number of selected pairs under different K settings are in Table 1.

Table 1: Statistical results of the number of selected pairs from the datasets.

                       K=1   K=2   K=3  K=4  K=5  K=6  K=7  K=8  K=9  K=10
PopQA [32]             738  1307   422  262  161  151  108   79   66    70
EntityQuestions [40]  1671  1127   670  454  335  264  196  166  163   103

4.2 Baselines

The baseline evaluations encompass two aspects: (1) exploration of diverse backbone LLMs, and (2) experimentation with different context compression methods. Specifically, we consider various mainstream LLMs as backbones, including GPT-Neo-1.3B, GPT-Neo-2.7B [5], OPT-1.3b, OPT-2.7b [57], bloom-560m, bloom-1b1, bloom-1b7, bloom-3b [26], LLaMA-2-7b-chat-hf, and LLaMA-2-13b-chat-hf [47]. The backbone LLMs coupled with the original supporting documents serve as the Vanilla methods.

Regarding the second aspect, we explore three context compression methods: context keyword extraction, context summarization, and Selective Context (SelCon) [29]. These methods aim to validate the efficacy of context compression while preserving essential information for inference, emphasizing discrete key features, fluent representation, and non-redundant information, respectively. Inspired by Chuang et al. [10], we employ a recent open-access LLM, LLaMA-2-13b-chat-hf [47], for context keyword extraction and summarization. This process involves extracting key phrases or terms from the context and generating a concise summary of the provided content, constrained by the prompts "[Extract a few keywords from the following content.]" and "[Generate a short summary of the following content.]", respectively. The detailed prompts are available in Appendix B. SelCon enhances the efficiency of LLMs' inference by identifying and eliminating redundant content from the source context for compression; its reduction ratio is set to 0.5 here. These baseline settings effectively demonstrate the comprehensive advantages of the proposed algorithm in capturing informative concepts when compared to various alternative compression techniques, whether generative-based or semantic-based.

4.3 Evaluation Metrics

We employ two metrics to evaluate the concept-based RAG: accuracy (Acc.) and integration (Intg.). Accuracy (Acc.) is determined by assessing whether any answer A matches any of the gold answers corresponding to the question Q. The integration metric (Intg.) is designed to comprehensively evaluate the performance across various K of the retrieved supporting documents D. Specifically, Intg. is the area beneath the accuracy curve of each model plotted against the X-axis (K). The calculation of Intg. is given in Eq. 2, where K ∈ [xs, xe], and xs and xe represent the minimum and maximum number of supporting documents, respectively. A higher value of Intg. indicates superior overall performance. Given that the proposed framework aims to enhance long-context RAG, we segment the evaluation of Intg. into two distinct intervals: the normal interval (In = [1, 10], K ∈ In) and the longer interval (Il = [6, 10], K ∈ Il). This division is intended to emphasize the effectiveness of the concept-based RAG framework, particularly in scenarios involving longer contexts.

Intg. = ∫_{xs}^{xe} Acc(x) dx ≈ (1/2) Σ_{i=1}^{xe−xs+1} (xi − xi−1) [Acc(xi) + Acc(xi−1)]    (2)
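For clarity, the following minimal sketch computes Intg. from an accuracy curve with the trapezoidal rule in Eq. 2; the accuracy values are dummy placeholders, not results from the paper.

```python
# A small sketch of the Intg. metric: the area under the accuracy curve
# over a K interval, via the trapezoidal rule (Eq. 2). Illustrative only.
import numpy as np

def integration_metric(acc_by_k: dict[int, float], k_lo: int, k_hi: int) -> float:
    ks = [k for k in sorted(acc_by_k) if k_lo <= k <= k_hi]
    accs = [acc_by_k[k] for k in ks]
    return float(np.trapz(accs, ks))   # trapezoidal area under Acc(K)

acc = {k: 60.0 + k for k in range(1, 11)}   # dummy accuracy curve
print(integration_metric(acc, 1, 10))       # normal interval I_n
print(integration_metric(acc, 6, 10))       # longer interval I_l
```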
5 Results and Analysis

The evaluation results for the PopQA and EntityQuestions datasets are depicted in Fig. 3 and Fig. 4, respectively, providing intuitive graphical trends of Acc. as K increases. Furthermore, Table 2 and Table 3 present the quantitative Intg. results for the datasets. These tables include the calculation of Δ, quantifying the improvement achieved by our proposed method over the Vanilla methods; specifically, Δ = Intg.ours − Intg.vanilla. The detailed quantitative evaluation results of Acc. are provided in Table A3 and Table A4. Sections E and F examine the compression ratio and inference latency comparisons to demonstrate the advantages of concept-compressed contexts.

Figure 3: The evaluation results of the Acc. ↑ trends and Intg. ↑ on the PopQA dataset. The vertical axis represents Acc., and the horizontal axis represents the number of supporting documents, K. The polyline reflects the changing trend of Acc. with different K, and the area under it is Intg.

Figure 4: The evaluation results of the Acc. ↑ trends and Intg. ↑ on the EntityQuestions dataset. The definitions of the axes and symbols are the same as in Fig. 3.

Table 2: The quantitative results of Intg. ↑ for the PopQA dataset, where the full-name order of the LLMs is: GPT-Neo-1.3B, GPT-Neo-2.7B, OPT-1.3b, OPT-2.7b, bloom-560m, bloom-1b1, bloom-1b7, bloom-3b, LLaMA-2-chat-7b, LLaMA-2-chat-13b. The best results are in bold, and the second-best results are underlined. The increased and decreased Δ are marked differently.

D         K    G-1.3    G-2.7    O-1.3    O-2.7    b-560    b-1b1    b-1b7    b-3      L-7      L-13
Vanilla   In   620.68   631.39   656.68   687.15   619.86   692.68   707.25   671.88   682.30   672.03
          Il   291.08   275.32   300.85   322.23   294.94   325.37   326.29   305.91   337.19   312.62
Keywords  In   468.94   484.98   554.67   571.38   502.70   610.69   621.85   600.65   628.78   617.06
          Il   257.12   244.24   297.70   305.64   275.39   327.70   338.01   318.37   326.41   315.93
Summary   In   517.57   513.37   619.78   575.32   573.95   608.41   637.55   591.12   564.51   553.24
          Il   263.14   260.64   316.80   290.50   304.55   313.36   336.20   297.44   291.50   291.39
SelCon    In   444.29   524.54   615.78   607.12   423.22   634.81   606.15   625.66   715.90   703.29
          Il   237.49   262.78   313.39   323.69   230.20   318.64   306.72   314.07   344.10   332.51
Ours      In   625.31   652.71   668.86   688.47   608.31   686.29   698.91   681.22   738.82   716.55
          Il   322.37   321.73   329.65   344.31   314.34   347.71   355.52   344.08   357.56   339.38
Δ         In   +4.63    +21.32   +12.18   +1.32    -11.55   -6.93    -8.34    +9.34    +56.52   +44.52
          Il   +31.29   +46.41   +28.8    +22.08   +19.40   +22.34   +29.23   +38.17   +20.37   +26.76

Table 3: The quantitative results of Intg. ↑ for the EntityQuestions dataset. The LLMs' order and symbol definitions are the same as in Table 2.
D         K    G-1.3    G-2.7    O-1.3    O-2.7    b-560    b-1b1    b-1b7    b-3      L-7      L-13
Vanilla   In   531.54   605.06   602.52   634.28   488.95   594.88   608.85   619.30   607.22   632.24
          Il   247.50   284.47   277.47   299.03   222.99   266.91   284.00   289.26   289.95   287.48
Keywords  In   280.76   360.00   403.37   439.73   295.02   428.54   465.15   462.65   584.67   574.61
          Il   134.96   167.13   196.04   215.41   143.68   207.59   227.84   223.38   287.84   284.53
Summary   In   366.73   406.72   501.51   446.50   388.36   415.61   501.90   435.49   425.70   438.31
          Il   179.97   205.02   255.51   210.93   187.75   197.43   257.16   211.83   210.34   222.92
SelCon    In   298.49   405.22   471.36   468.18   215.52   460.37   451.41   539.49   623.91   641.01
          Il   144.69   195.05   231.76   223.55   108.45   214.94   217.40   261.79   295.33   304.57
Ours      In   551.50   618.18   609.88   652.48   483.02   600.72   624.53   621.36   664.18   703.67
          Il   267.12   298.74   285.06   303.49   243.55   286.20   295.45   300.29   303.39   320.87
Δ         In   +19.96   +13.12   +7.36    +18.2    -5.93    +5.84    +15.58   +2.06    +56.96   +71.43
          Il   +19.62   +14.27   +7.59    +4.45    +20.56   +19.29   +11.45   +11.03   +13.44   +33.39

A key intuitive finding reflected by Fig. 3 and Fig. 4 is the superior performance of our method in long-context scenarios, particularly evident when K is high. As K increases, especially within the longer context setting (Il), the Acc. of our method consistently outperforms that of the various backbone LLMs coupled with other context compression methods. This trend suggests that the concepts distilled by our method help reduce interference and enable the LLMs to concentrate on key knowledge. Moreover, the positive values of Δ in Table 2 and Table 3 for the Il interval further underscore the improvement achieved by our framework over baseline methods when handling longer contexts. This observation emphasizes the effectiveness of the AMR-based concept distillation algorithm in capturing essential semantic information from supporting documents, thereby enabling LLMs to generate more accurate answers even when confronted with messy contexts.

When the bloom-560m model serves as the backbone LLM, an interesting finding is that Δ exhibits negative trends in the In interval of both datasets, and SelCon does not perform ideally either. We hypothesize that this is due to small-scale models' limited ability to associate semantic scenarios through discrete concepts, which leaves them unable to understand the core information expressed in the compressed supporting documents. Conversely, when coupled with advanced LLMs such as LLaMA-2, the contexts compressed by the proposed method and SelCon exhibit the most significant and second most significant enhancements, respectively. This observation likely arises from these large-scale models' superior contextual understanding capabilities, which corroborates our hypothesis.

Regarding the improvements of Δ on the Il interval of the two datasets, our method's enhancement on the PopQA dataset is more pronounced. This is because PopQA was released recently, and its knowledge is less likely to have been memorized by earlier models such as GPT-Neo and OPT. Moreover, the screening for long-tail knowledge further accentuates the unique scenario provided by PopQA, making it an ideal testbed for evaluating context compression methods.

The proposed AMR-based concept distillation method demonstrates clear advantages over the generative compression methods of keyword extraction and summarization. While these methods utilise LLMs to generate compressed representations and show competitive results in certain cases, they may inadvertently introduce noise or lose essential details during the compression process.
Moreover, the generative nature of these methods makes them inherently difficult to control, even when provided with instructions as constraints. Consequently, the generated keywords and summaries may exhibit randomness, potentially deviating from the core concepts conveyed in the original supporting documents. In contrast, our framework leverages the inherent structured semantic representation of AMR to capture the core concepts explicitly. This semantic-level abstraction enables the framework to faithfully format the concepts and provide more reliable, informative support for the RAG process.

Compared to the linguistic context compression baseline, SelCon, which identifies and prunes redundant content based on self-information computed at the lexical level, the proposed semantic-level method achieves superior results. SelCon's effectiveness depends on determining the right granularity for redundancy removal, making it sensitive to the choice of lexical unit. In contrast, our method takes a macro view by focusing on the semantic consistency carried by the AMR structure, making it insensitive to subtle lexical bias. This characteristic enables it to be a reliable plug-and-play component in various RAG systems dealing with supporting documents containing irrelevant information and potential lexical errors.

The robustness of the proposed framework is demonstrated by its consistent performance improvements across various LLMs. The experimental results on both datasets showcase the generalizability of our method, irrespective of the underlying LLM architecture. This finding suggests that the concept-based RAG framework can be effectively coupled with diverse LLMs, making it a versatile solution for enhancing inference performance in long-context scenarios.

6 Conclusion and Future Research

This paper introduces a novel concept-based RAG framework that utilizes AMR to distil essential concepts from long-context supporting documents, enabling LLMs to efficiently focus on the most supportive knowledge for accurate question answering. The proposed AMR-based concept distillation algorithm systematically traverses the AMR graph to format key concept nodes with informative semantic features, transforming redundant supporting documents into a concise concept set. The proposed framework significantly enhances RAG performance compared with baselines comprising various backbone LLMs and context compression methods. To the best of our knowledge, this is the first work to augment RAG with AMR, offering a novel direction for integrating reliable structured semantic representations with RAG to handle tasks requiring high fidelity to the knowledge.

It has been demonstrated that LLMs with fewer parameters within the proposed framework can exhibit comparable or superior performance to larger models in certain cases. Consequently, it is plausible to speculate on the feasibility of employing small-scale LLMs equipped only with general natural language understanding capabilities, coupled with comprehensive and informative concept sets, to implement lightweight Q&A systems. This approach would alleviate the constraints imposed by the computational complexity of large-scale LLMs during practical application and deployment. Exploring this possibility will be a focus of our future research.
10", + "additional_graph_info": { + "graph": [], + "node_feat": { + "Kaize Shi": [ + { + "url": "http://arxiv.org/abs/2405.03085v1", + "title": "Compressing Long Context for Enhancing RAG with AMR-based Concept Distillation", + "abstract": "Large Language Models (LLMs) have made significant strides in information\nacquisition. However, their overreliance on potentially flawed parametric\nknowledge leads to hallucinations and inaccuracies, particularly when handling\nlong-tail, domain-specific queries. Retrieval Augmented Generation (RAG)\naddresses this limitation by incorporating external, non-parametric knowledge.\nNevertheless, the retrieved long-context documents often contain noisy,\nirrelevant information alongside vital knowledge, negatively diluting LLMs'\nattention. Inspired by the supportive role of essential concepts in\nindividuals' reading comprehension, we propose a novel concept-based RAG\nframework with the Abstract Meaning Representation (AMR)-based concept\ndistillation algorithm. The proposed algorithm compresses the cluttered raw\nretrieved documents into a compact set of crucial concepts distilled from the\ninformative nodes of AMR by referring to reliable linguistic features. The\nconcepts explicitly constrain LLMs to focus solely on vital information in the\ninference process. We conduct extensive experiments on open-domain\nquestion-answering datasets to empirically evaluate the proposed method's\neffectiveness. The results indicate that the concept-based RAG framework\noutperforms other baseline methods, particularly as the number of supporting\ndocuments increases, while also exhibiting robustness across various backbone\nLLMs. This emphasizes the distilled concepts are informative for augmenting the\nRAG process by filtering out interference information. To the best of our\nknowledge, this is the first work introducing AMR to enhance the RAG,\npresenting a potential solution to augment inference performance with\nsemantic-based context compression.", + "authors": "Kaize Shi, Xueyao Sun, Qing Li, Guandong Xu", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Large Language Models (LLMs) have emerged as indispensable tools for daily information acquisition, owing to their extensive knowledge base and ability to fulfil diverse user instructions [6, 47, 1]. By leveraging large-scale pre-training on massive datasets, LLMs memorize vast amounts of knowledge within their parameters as internal memory, known as parametric knowledge [33]. However, the presence of outdated or incorrect knowledge within internal memory can lead to hallucinations, hindering the performance of LLMs\u2019 inferencing process [46]. This limitation is particularly pronounced when handling long-tail knowledge for domain-specific or highly specialized queries, as the inherent difficulty in memorizing rare entities persists even in the most robust models. Consequently, the overreliance on potentially flawed parametric knowledge can significantly interfere with the reliability of LLMs\u2019 outputs, especially in scenarios with fine-grained knowledge requirements [58, 36]. Retrieval Augmented Generation (RAG) employs additional retrievers to augment LLMs with external, non-parametric knowledge, effectively expanding their internal knowledge boundaries [27, 14]. This Preprint. Under review. 
arXiv:2405.03085v1 [cs.CL] 6 May 2024 \fallows LLMs to access up-to-date, query-focused information that may not be adequately memorized within their parametric memory to alleviate the aforementioned limitations [24]. In contrast to finetuning by updating the model parameters, RAG preserves pre-trained knowledge while dynamically incorporating relevant external context. This paradigm offers greater flexibility and scalability, as the retrievers can be easily plug-and-play without modifying the underlying language model\u2019s parameters, thus circumventing complex computational hurdles [17, 16]. However, RAG is easily confused when dealing with long contextual retrieved support documents, which often consist of multiple shreds of evidence for providing vital knowledgeable context but are also accompanied by noisy and irrelevant information [56]. The distracting contexts can dilute the LLMs\u2019 attention and adversely affect their performance with misrepresentation [30, 25]. Compressing lengthy contexts to distil vital knowledge is crucial for enhancing LLMs and ensuring factually consistent responses in the RAG process. Figure 1: The examples of concept-based RAG1. Numerous studies have demonstrated that individuals tend to directly search for key concepts when reading long documents as the brain will complete the remaining details based on prior knowledge, expectations, background, and motivations [15, 22]. This selective attention to critical information allows ignoring redundant details and rearranging the text informatively [51]. As illustrated in Fig. 1, given only the key concepts of the question-related supporting documents that still enable us to grasp the crucial semantics. LLMs parameterize massive common knowledge, enabling them to exhibit a similar ability in context understanding even when the word or character-level information is disrupted [43, 7]. This provides the possibility of whether LLMs can comprehend scenarios solely based on discrete informative concepts. Linguistic features, such as semantic and syntactic, have significantly improved the interpretability, controllability, and diversity of Natural Language Generation (NLG) [28]. Language models can implicitly discover these features during pre-training to ensure the logic of the generated text [21]. It has been demonstrated that explicitly leveraging linguistic features for downstream tasks is beneficial, as it refactors the source documents into concise representations that reduce entropy by focusing on the critical information, thereby aiding in a comprehensive understanding of the described scenarios [41, 48, 44, 28, 23, 55]. This advantage enables the stable linguistic features to reliably assist context understanding. Inspired by the aforementioned insights, we propose enhancing RAG\u2019s performance with the crucial concepts distilled from the raw retrieved supporting documents. To effectively capture the informative concepts, we introduce Abstract Meaning Representation (AMR), a semantic formalism that encodes the meaning of serialized texts by a rooted, directed, labelled, acyclic graph [3]. Compared to other linguistic representations, AMR prioritizes semantic consistency among concepts carried by nodes when representing sentences, offering the advantage of automatically rectifying surfacelevel variations or understanding abbreviated terms, ensuring the structured concepts represent the underlying meaning to transcend the limitations of linguistic noise [59]. 
Specifically, we propose the concept-based RAG framework with the AMR-based concept distillation algorithm, which formats the concepts for augmenting LLMs by compressing the lengthy context to concentrate on crucial information exclusively. We empirically experiment on two open-domain Q&A datasets, PopQA [32] and EntityQuestions [40]. The results show that the performance of our method improves significantly as the number of supporting documents increases, outperforming baselines with various compression methods and backbone LLMs. The contributions of this paper can be summarized as follows: \u2022 This paper proposes the concept-based RAG framework that explicitly integrates AMR, a semantic representation, to enable LLMs to focus on essential rather than messy knowledge 1The corresponding complete sentences: [1] The Outfit is a 1973 crime film directed by John Flynn. [2] It stars Robert Duvall, Karen Black, Joe Don Baker and Robert Ryan. [3] Flynn\u2019s screenplay is an adaptation of the novel of the same name by Richard Stark. [4] Two hitmen drive to Eddie Macklin\u2019s house to assassinate him as he builds a brick wall in his backyard. 2 \fwhen processing long-context retrieved supporting documents. To the best of our knowledge, this is the first research introducing AMR to enhance RAG for more reliable inference. \u2022 We propose an AMR-based concept distillation algorithm, which compresses long-context raw supporting documents into concepts by formatting the informative nodes. The distilled concepts are more knowledge-centralized than the raw supporting documents, reducing the interference of irrelevant information during the inference process of LLMs. \u2022 We conduct extensive experiments on open-domain Q&A datasets. The results indicate that our framework effectively enhances inference performance as the number of supporting documents increases, outperforming baselines with various context compression methods and backbone LLMs. This demonstrates its applicability in long-context RAG scenarios. 2 Related Works 2.1 Long-context Understanding The increasing complexity of downstream tasks and the demand for models capable of capturing intricate dependencies have driven significant attention to the long-context understanding of LLMs [37, 19, 53]. One prominent research avenue involves modifying the basic architecture of LLMs. For instance, Dai et al.[11] introduced a segment-level recurrence mechanism with their Transformer-XL model, enabling it to retain longer contextual information than the standard Transformer structure. Similarly, Beltagy et al.[4] extended the self-attention mechanism in their Longformer model to handle longer sequences by introducing a sparse attention pattern, thereby facilitating the efficient processing of documents with thousands of tokens. However, a significant drawback of modifying model architecture is the necessity for complex re-training processes. In contrast, research on prompt compression aims to understand long-token prompts by compressing them into low-dimensional soft prompts [50, 9, 34]. While offering a more efficient alternative to architecture modification, this approach constrains the transferability of learned prompts across various LLMs. Recent research has advanced to a more intuitive level, aiming to comprehensively understand the context by directly expanding the context window or explicit compression. 
Chen et al.[8] introduced position interpolation to extend the context window of pre-trained LLMs, scaling LLaMA\u2019s context window to 32k tokens with few fine-tuning steps. Ding et al.[12] proposed LongRoPE to extend LLMs\u2019 context window to 2048k tokens while maintaining the performance of the original short context window through a positional and interpolation progressive extension strategy. However, the long context window raises another challenge of diluting core information with redundant data [53]. To address this, Li et al.[29] filtered out irrelevant context with low self-information for compressing the long prompts. Chuang et al.[10] proposed the Nano-Capsulator to compress original prompts into capsule prompts, decreasing inference latency across diverse LLMs. Compression methods can benefit the RAG by allowing LLMs to focus on essential knowledge in supporting documents [54]. 2.2 Linguistics-augmented NLG Incorporating linguistic principles into LLMs has shown promise in improving the coherence and semantic fidelity of generated text [55]. Augmentation techniques like syntactic trees [35] and lexical patterns [28] assist in linguistic feature injection, enabling language models to generate more faithful text. Ahmed et al. [2] proposed automatic semantic augmentation of prompts to enhance LLMs with tagged facts, resulting in improved code summarization performance. Zhou et al. [60] introduced InstructCTG, a framework for controlling LLMs\u2019 generation based on syntax constraints, facilitating flexibility and adaptation to new conditions without complex model modification. LLMs can be explicitly guided by leveraging linguistic insights to mitigate biases inherent in parameterized-only approaches, hereby enhancing performance in tasks demanding strict factual consistency. Abstract Meaning Representation (AMR) has proven its efficacy in enhancing downstream generation tasks by providing a structured semantic representation that encapsulates static concepts [18]. Frisoni et al. [13] integrated AMR with pre-trained language models to enhance biomedical summarization by capturing inter-entity relations. Ribeiro et al. [38] employed AMR to improve factuality evaluation in abstractive summarization by identifying content verifiability errors and subsentence-level factual inconsistencies. Shi et al. [42] proposed AMR-TST, which generates fluent and reliable texts with the target style by optimizing core concept nodes. Jangra et al. [20] preserved style-agnostic content 3 \fwhile generating transferred text by utilizing AMR as an intermediate representation. These studies illustrate AMR\u2019s advantages in capturing essential concepts containing informative linguistic features. 3 Method 3.1 Concept-based RAG Framework This section introduces the proposed concept-based RAG framework for inference utilising the concepts distilled from the raw supporting documents. The overview of the framework is in Fig. 2. Figure 2: The overview of the concept-based RAG framework, which consists of three main components: (a) information retrieval, (b) concept distillation, and (c) concept-based inference. Given an input question Q, the (a) information retrieval component aims to utilize a retriever to return the top-K knowledgeable supporting documents D = {D1, ..., DK} relevant to Q from sources such as Wikipedia or other information repositories. At this stage, the retriever\u2019s performance significantly influences the resulting answer set A = {A1, ..., AM} [33, 14]. 
However, the retriever\u2019s performance is beyond this paper\u2019s scope. We hypothesize that all retrieved supporting documents D contain the correct answer corresponding to Q, expressed as a proposition: \u2200Dk \u2208D, \u2203Am \u2208A, Am \u2286Dk. The (b) concept distillation component is devised to format the concept C from the retrieved supporting document D by the proposed AMR-based concept distillation algorithm. This algorithm converts the supporting documents from continuous sequences to discrete concepts formatted from the AMR graph, denoted as G. Further details of this algorithm will be elucidated in the subsequent section. After obtaining the distilled concept C, the (c) concept-based inference component proceeds to integrate it with various backbone LLMs to derive answers A using a faithful-intensive prompt template as follows: [Refer to the following facts to answer the question. Facts: C. Question: Q]. The intensity of prompts has been demonstrated to influence LLMs\u2019 adherence to knowledge from internal memory and retrieved documents [52]. Since our hypothesis is that the retrieved documents contain correct answers, we encourage the LLMs to leverage the knowledge encapsulated in C when responding to queries. This strategy helps minimize potential conflicts caused by their memorized parametric knowledge. To achieve this objective, we designate the concept as a \"fact\" within the instructional prompt, explicitly delineating a delimited sandbox for LLMs to presuppose the absolute correctness of the knowledge conveyed by C. This non-parametric knowledge can seamlessly integrate into LLMs in a plug-and-play manner. The overarching framework can be represented as Eq. 1. P(A|Q) = P(A|C, Q)P(C|D, Q)P(D|Q). (1) 3.2 AMR-based Concept Distillation Abstract Meaning Representation (AMR) serves as a logical formal semantic structure proficient in encapsulating common-sense knowledge necessary for representing events, time, participants, and other elements within serialized texts [39]. Given a supporting document Dk \u2208D, the AMR parser is utilized to parse Dk into the corresponding AMR graph G =< N, E >, where C represents the nodes for concepts and E denotes the edges for the correlation relationships. In this context, we utilize a 4 \fmBart-based [31] parser2 trained on the AMR 3.0 corpus3 to address potential multilingual concerns. The detailed illustration of the AMR graph parsing is depicted in Table A1. Algorithm 1: Concept Distillation Input :AMR Graph (G) Output :concept (C) 1 Function Concept_Distillation(G): 2 concept \u2190[], role \u2190[]; 3 for Gsntn in SplitSnt (G) do 4 for N in DFS(Gsntn) do 5 if IsRole(N) then 6 if IsName(N) then 7 AppendRole(HandleName(N)) 8 if IsWiki(N) then 9 AppendRole(HandleWiki(N)) 10 if IsDate(N) then 11 AppendRole(HandleDate(N)) 12 else 13 if role is not None then 14 AppendConcept(HandleRole(role)); 15 role \u2190[]; 16 AppendConcept(N); 17 if (N is Last) and (role is not None) then repeat :Algorithm.Line 5-11 18 AppendConcept(HandleRole(role)); 19 concept \u2190ConceptFormat (concept); 20 concept \u2190ConceptBacktrace (concept); 21 return C \u2190concept We propose the concept distillation algorithm to format the concepts represented in G, as described in Algorithm 1. The supporting document Dk encompasses multiple sentences (sntn), and the AMR parser can structurally parse Dk into a pre-defined multi-sentence structure. 
The SplitSnt(\u00b7) function is designed to partition G and organize the resulting sentence-based sub-graphs according to the sequential order. Notably, we simplify G by disregarding the agent and patient of the concepts, i.e., the edges denoting relations between the connected concepts (Frame args, ARGX). Consequently, G is streamlined into a unidirectional connecting structure. Leveraging this structure, we perform a Depth First Search, DFS(\u00b7) on the N of G to traverse the concepts while maintaining the relative positional correlation of adjacent nodes. This approach emphasizes the connection as it exists in the preceding sequential representation, and the process is elaborated in Fig. A1. Previous research has investigated the influence of context order on LLMs [30]. We delve into the various traversal methods for testing their potential impact in Section D. The AMR defines a set of roles to meticulously delineate the semantic fabric of sentences. This paper underscores the meticulous handling of three roles, namely :name, :wiki, and date-entity, employing IsRole(\u00b7) to identify the predefined roles comprehensively. The :name role signifies a property node within the AMR graph, signifying entities such as individuals, organizations, or geographic locations. In instances where the concept expressed by :name spans multiple words, the parsing process of AMR decomposes each word within the :name into predicate roles (:op), thereby dispersing the holistic concept across multiple nodes. During the DFS(\u00b7) traversal process, fragmented nodes can potentially confuse LLMs due to incomplete meaning expressions. To maintain the integrity of concepts carried by :name, we introduce HandleName(\u00b7), organizing predicates in a stack structure. The :wiki role provides reliable external concept references sourced from Wikipedia. For standardizing concepts\u2019 diverse expressions referring to the same named entities, we utilize the HandleWiki (\u00b7) function, which aligns the concepts with the corresponding definitions in Wikipedia. If the concept in :name differs from :wiki, we designate the concept expressed by this node as :wiki to avoid semantic disambiguation. In addition, there is a date-entity role that depicts temporal concepts. In our algorithm, we specifically manage the roles :year, :month, and :day by HandleDate (\u00b7). This function consolidates roles under the same date-entity, forming concepts like \"19 04 2024\" with numerical months translated into textual representations, \"19 April 2024\", for clear expression. AMR incorporates special numerical annotations for certain parsing nodes, such as work-01, where the number appended to the word indicates different meanings of the same word in distinct contexts as defined in OntoNotes [49]. In the RAG scenario, we provide 2https://github.com/BramVanroy/multilingual-text-to-amr 3https://catalog.ldc.upenn.edu/LDC2020T02 5 \fLLMs with supporting documents comprising a set of concepts. This suggests that concepts are understood in relation to relevant contexts rather than in isolation. Therefore, the proposed conceptbased RAG framework depends on the contextual learning capability of LLMs to distinguish between polysemous concepts, instead of relying on intricate semantic references. The nodes belonging to the aforementioned roles are integrated into the preliminary concept set with the HandleRole(\u00b7), while the AppendConcept(\u00b7) directly integrate the remaining nodes based on the corresponding instances. 
The structure of AMR comprises a collection of canonical nodes (city-district, market-sector, etc.) designed to enforce knowledge of, and prevent hallucination regarding, entity types. However, in the concept-based RAG scenario, the inference process is based not on the AMR itself but on the distilled concepts. The auxiliary semantics embedded within these canonical nodes, which are absent from the source supporting documents, may dilute the essence of the core concepts. To address this concern, we employ ConceptFormat(·) to filter out these nodes and reduce the potential interference; in addition, frequently occurring concepts are filtered out based on their Inverse Document Frequency (IDF). Furthermore, AMR selects representations following the principle of abstraction and generalization rather than exact lexical items. This representation may cause nodes to ignore variations such as tense, which are informative for concept-based RAG without reference annotations. To mitigate this, we develop the ConceptBacktrace(·) function to maintain consistency with the concepts expressed in the source supporting documents. This function backtracks the formatted concepts to the surface forms in the supporting documents, ensuring they closely adhere to the original semantics without deviation. The backtraced concepts then serve as the finalized concept set C, providing conceptual support for LLMs in RAG inference.

4 Experiments

4.1 Datasets

We conducted extensive experiments to verify the efficacy of the concept-based RAG framework on the open-domain Q&A datasets PopQA [32] and EntityQuestions [40]. Each dataset includes a label ("hasanswer") for every supporting document, indicating whether it contains the answer to the associated question. To ensure a focused evaluation, we retained only the pairs where hasanswer=True. This selection criterion matches the scenario where all retrieved documents contribute positively to answering questions, mitigating interference from extraneous factors. The experiments verify the LLMs' inference performance under different K, which denotes the number of supporting documents retrieved for Q. For the PopQA dataset, we filtered out questions whose subject entities have monthly Wikipedia pageviews (spop) ≥ 500. This step excludes frequently accessed entities, keeping the dataset focused on long-tail knowledge. It serves the dual purpose of preventing data contamination and encouraging LLMs to rely more on retrieved documents than on memorized knowledge, mitigating potential knowledge conflicts in the RAG process. The numbers of selected pairs under different K settings are given in Table 1.

Table 1: Statistical results of the number of screened pairs from the datasets.
K=                      1     2     3     4     5     6     7     8     9    10
PopQA [32]            738  1307   422   262   161   151   108    79    66    70
EntityQuestions [40] 1671  1127   670   454   335   264   196   166   163   103

4.2 Baselines

The baseline evaluations encompass two aspects: (1) exploration of diverse backbone LLMs, and (2) experimentation with different context compression methods. Specifically, we consider various mainstream LLMs as backbones, including GPT-Neo-1.3B, GPT-Neo-2.7B [5], OPT-1.3b, OPT-2.7b [57], bloom-560m, bloom-1b1, bloom-1b7, bloom-3b [26], LLaMA-2-7b-chat-hf, and LLaMA-2-13b-chat-hf [47]. The backbone LLMs coupled with the original supporting documents serve as the Vanilla methods.
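As a concrete reading of the screening step above, the sketch below filters records for a given K; the field names ("documents", "hasanswer", "s_pop") are assumptions about how such dumps might be stored, not the datasets' actual schema.

def screen(records, k, spop_cap=500):
    kept = []
    for r in records:
        docs = r["documents"][:k]
        # Keep only questions whose top-K documents all contain the answer...
        if len(docs) < k or not all(d["hasanswer"] for d in docs):
            continue
        # ...and (for PopQA) whose subject entity is long-tail.
        if r.get("s_pop", 0) >= spop_cap:
            continue
        kept.append(r)
    return kept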
Regarding the second aspect, we explore three context compression methods: context keyword extraction, context summarization, and Selective Context (SelCon) [29]. These baselines validate the efficacy of context compression while preserving essential information for inference, emphasizing discrete key features, fluent representation, and non-redundant information, respectively. Inspired by Chuang et al. [10], we employ an open-access LLM, LLaMA-2-13b-chat-hf [47], for context keyword extraction and summarization. This process extracts key phrases or terms from the context and generates a concise summary of the provided content, constrained by the prompts "[Generate a short summary of the following content.]" and "[Extract a few keywords from the following content.]"; the detailed prompts are available in Appendix B. SelCon improves the efficiency of LLMs' inference by identifying and eliminating redundant content from the source context; its reduction ratio is set to 0.5 in this comparison. These baseline settings demonstrate the comprehensive advantages of the proposed algorithm in capturing informative concepts compared to various alternative compression techniques, whether generative-based or semantic-based.

4.3 Evaluation Metrics

We employ two metrics to evaluate the concept-based RAG: accuracy (Acc.) and integration (Intg.). Acc. is determined by assessing whether any answer A matches any of the gold answers corresponding to the question Q. The integration metric (Intg.) comprehensively evaluates performance across various K of the retrieved supporting documents D: it is the area beneath a model's accuracy curve plotted against the X-axis (K). Intg. is calculated as Eq. 2, where K ∈ [xs, xe], and xs and xe represent the minimum and maximum number of supporting documents, respectively. A higher value of Intg. indicates superior overall performance. Given that the proposed framework aims to enhance long-context RAG, we segment the evaluation of Intg. into two distinct intervals: the normal interval (In = [1, 10], K ∈ In) and the longer interval (Il = [6, 10], K ∈ Il). This division emphasizes the effectiveness of the concept-based RAG framework in scenarios involving longer contexts.

Intg. = ∫_{x_s}^{x_e} Acc(x) dx ≈ (1/2) Σ_{i=1}^{x_e−x_s+1} (x_i − x_{i−1}) [Acc(x_i) + Acc(x_{i−1})]  (2)

5 Results and Analysis

The evaluation results for the PopQA and EntityQuestions datasets are depicted in Fig. 3 and Fig. 4, respectively, providing graphical trends of Acc. as K increases. Furthermore, Table 2 and Table 3 present the quantitative Intg. results for the two datasets, including Δ, which quantifies the improvement achieved by our proposed method over the Vanilla methods and is computed as Δ = Intg.ours − Intg.vanilla. The detailed quantitative Acc. results are provided in Table A3 and Table A4. Sections E and F examine compression ratio and inference latency to demonstrate the advantages of concept-compressed contexts.

Figure 3: The evaluation results of the Acc. ↑ trends and Intg. ↑ on the PopQA dataset. The vertical axis represents Acc., and the horizontal axis represents the number of supporting documents, K. The polyline reflects the changing trend of Acc.
with different K, and the under area is Intg. A key intuitive finding reflected by Fig. 3 and Fig. 4 is the superior performance of our method in long-context scenarios, particularly evident when K is high. As K increases, especially within 7 \fFigure 4: The evaluation results of the Acc. \u2191trends and Intg. \u2191on the EntityQuestion dataset. The definitions of the axis and symbols are the same with the Fig. 3. Table 2: The quantitative results of Intg. \u2191for the PopQA dataset, where the full name order of the LLMs is: GPT-Neo-1.3B, GPT-Neo-2.7B, OPT-1.3b, OPT-2.7b, bloom-560m, bloom-1b1, bloom-1b7, bloom-3b, LLaMA-2-chat-7b, LLaMA-2-chat-13b. The best results are in bold, and the second best results are in underlined. The increased and decreased \u2206are marked differently. D K G-1.3 G-2.7 O-1.3 O-2.7 b-560 b-1b1 b-1b7 b-3 L-7 L-13 Vanilla In 620.68 631.39 656.68 687.15 619.86 692.68 707.25 671.88 682.30 672.03 Il 291.08 275.32 300.85 322.23 294.94 325.37 326.29 305.91 337.19 312.62 Keywords In 468.94 484.98 554.67 571.38 502.70 610.69 621.85 600.65 628.78 617.06 Il 257.12 244.24 297.70 305.64 275.39 327.70 338.01 318.37 326.41 315.93 Summary In 517.57 513.37 619.78 575.32 573.95 608.41 637.55 591.12 564.51 553.24 Il 263.14 260.64 316.80 290.50 304.55 313.36 336.20 297.44 291.50 291.39 SelCon In 444.29 524.54 615.78 607.12 423.22 634.81 606.15 625.66 715.90 703.29 Il 237.49 262.78 313.39 323.69 230.20 318.64 306.72 314.07 344.10 332.51 Ours In 625.31 652.71 668.86 688.47 608.31 686.29 698.91 681.22 738.82 716.55 Il 322.37 321.73 329.65 344.31 314.34 347.71 355.52 344.08 357.56 339.38 \u2206 In +4.63 +21.32 +12.18 +1.32 -11.55 -6.93 -8.34 +9.34 +56.52 +44.52 Il +31.29 +46.41 +28.8 +22.08 +19.40 +22.34 +29.23 +38.17 +20.37 +26.76 Table 3: The quantitative results of Intg. \u2191for the EntityQuestions dataset. The LLMs\u2019 order and symbol definitions are the same as Table 2. D K G-1.3 G-2.7 O-1.3 O-2.7 b-560 b-1b1 b-1b7 b-3 L-7 L-13 Vanilla In 531.54 605.06 602.52 634.28 488.95 594.88 608.85 619.30 607.22 632.24 Il 247.50 284.47 277.47 299.03 222.99 266.91 284.00 289.26 289.95 287.48 Keywords In 280.76 360.00 403.37 439.73 295.02 428.54 465.15 462.65 584.67 574.61 Il 134.96 167.13 196.04 215.41 143.68 207.59 227.84 223.38 287.84 284.53 Summary In 366.73 406.72 501.51 446.50 388.36 415.61 501.90 435.49 425.70 438.31 Il 179.97 205.02 255.51 210.93 187.75 197.43 257.16 211.83 210.34 222.92 SelCon In 298.49 405.22 471.36 468.18 215.52 460.37 451.41 539.49 623.91 641.01 Il 144.69 195.05 231.76 223.55 108.45 214.94 217.40 261.79 295.33 304.57 Ours In 551.50 618.18 609.88 652.48 483.02 600.72 624.53 621.36 664.18 703.67 Il 267.12 298.74 285.06 303.49 243.55 286.20 295.45 300.29 303.39 320.87 \u2206 In +19.96 +13.12 +7.36 +18.2 -5.93 +5.84 +15.58 +2.06 +56.96 +71.43 Il +19.62 +14.27 +7.59 +4.45 +20.56 +19.29 +11.45 +11.03 +13.44 +33.39 8 \fthe longer context setting (Il), the Acc. of our method consistently outperforms that of various backbone LLMs coupled with other context compression methods. This trend suggests that the concepts distilled by our method are supportive of reducing interference and enabling the LLMs to concentrate on key knowledge. Moreover, the positive values of \u2206in Table 2 and Table 3 for the Il interval further underscore the improvement achieved by our framework over baseline methods when handling longer contexts. 
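To make the Intg. and Δ quantities reported in Tables 2 and 3 concrete, the following minimal sketch evaluates Eq. 2 with the trapezoidal rule; the accuracy values passed in are placeholders, not numbers from the paper.

def integration(acc_by_k, k_start, k_end):
    # Trapezoidal rule over the accuracy curve Acc(K) on [k_start, k_end].
    ks = list(range(k_start, k_end + 1))
    return sum(0.5 * (b - a) * (acc_by_k[b] + acc_by_k[a])
               for a, b in zip(ks, ks[1:]))

def delta(acc_ours, acc_vanilla, interval):
    # The improvement over the Vanilla setting on a given interval.
    return integration(acc_ours, *interval) - integration(acc_vanilla, *interval)

# Example: delta(acc_ours, acc_vanilla, (6, 10)) evaluates the longer
# interval Il, while (1, 10) evaluates the normal interval In.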
These positive Δ values emphasize the effectiveness of the AMR-based concept distillation algorithm in capturing essential semantic information from supporting documents, thereby enabling LLMs to generate more accurate answers even when confronted with messy contexts. When the bloom-560m model serves as the backbone LLM, an interesting finding is that Δ exhibits negative trends in the In interval of both datasets, while SelCon does not perform ideally either. We hypothesize that this is due to the limited ability of small-scale models to associate semantic scenarios through discrete concepts, which leaves the model unable to understand the core information expressed in the compressed supporting documents. Conversely, when coupled with advanced LLMs such as LLaMA-2, the contexts compressed by the proposed method and by SelCon yield the most significant and second most significant enhancements, respectively. This likely arises from the superior contextual understanding capabilities of these large-scale models, which corroborates our hypothesis. Regarding the improvements of Δ on the Il interval of the two datasets, our method's enhancement on the PopQA dataset is more pronounced. This is because PopQA was released recently, and its knowledge is less likely to be memorized by earlier models such as GPT-Neo and OPT. Moreover, the screening of long-tail knowledge further accentuates the unique scenario provided by PopQA, making it an ideal testbed for evaluating context compression methods.

The proposed AMR-based concept distillation method demonstrates clear advantages over the generative compression methods of keyword extraction and summarization. While these methods utilise LLMs to generate compressed representations and show competitive results in certain cases, they may inadvertently introduce noise or lose essential details during the compression process. Moreover, the generative nature of these methods makes them inherently difficult to control, even when provided with instructions as constraints. Consequently, the generated keywords and summaries may exhibit randomness, potentially deviating from the core concepts conveyed in the original supporting documents. In contrast, our framework leverages the inherently structured semantic representation of AMR to capture the core concepts explicitly. This semantic-level abstraction enables the framework to faithfully format the concepts and provide more reliable and informative support for the RAG process.

Compared to the linguistic context compression baseline SelCon, which identifies and prunes redundant content based on self-information computed at the lexical level, the proposed method, operating at the semantic level, achieves superior results. SelCon's effectiveness depends on determining the right granularity for redundancy removal, making it sensitive to the choice of lexical unit. In contrast, our method takes a macro view by focusing on the semantic consistency carried by the AMR structure, making it insensitive to delicate lexical bias. This characteristic enables it to serve as a reliable plug-and-play component in RAG systems whose supporting documents contain irrelevant information and potential lexical errors. The robustness of the proposed framework is demonstrated by its consistent performance improvements across various LLMs. The experimental results on both datasets showcase the generalizability of our method, irrespective of the underlying LLM architecture.
This finding suggests that the concept-based RAG framework can be effectively coupled with diverse LLMs, making it a versatile solution for enhancing inference performance in long-context scenarios. 6 Conclusion and Future Research This paper introduces a novel concept-based RAG framework that utilizes AMR to distil essential concepts from long-context supporting documents, enabling LLMs to focus on the most supportive knowledge for accurate question-answering efficiently. The proposed AMR-based concept distillation algorithm systematically traverses the AMR graph to format key concept nodes with informative semantic features, transforming redundant supporting documents into a concise concept set. The proposed framework significantly enhances RAG performance compared with baselines comprising various backbone LLMs and context compression methods. To the best of our knowledge, this is the first work to augment RAG with AMR, offering a novel direction for integrating reliable structured semantic representations with RAG to handle tasks requiring high fidelity to the knowledge. 9 \fIt has been demonstrated that the LLMs with fewer parameters within the proposed framework can also exhibit comparable or superior performance to larger models in certain cases. Consequently, it is plausible to speculate on the feasibility of employing small-scale LLMs solely equipped with the general natural language understanding capabilities, coupled with comprehensive and informative concept sets, to implement the lightweight Q&A systems. This approach would alleviate the constraints imposed by the computational complexity of large-scale LLMs during their practical application and deployment. Exploring this possibility will be one of the focus of our future research. 10" + }, + { + "url": "http://arxiv.org/abs/2308.04913v1", + "title": "LLaMA-E: Empowering E-commerce Authoring with Multi-Aspect Instruction Following", + "abstract": "E-commerce authoring involves creating attractive, abundant, and targeted\npromotional content to drive product sales. The emergence of large language\nmodels (LLMs) introduces an innovative paradigm, offering a unified solution to\naddress various authoring tasks within this scenario. However, mainstream LLMs\ntrained on general corpora with common sense knowledge reveal limitations in\nfitting complex and personalized features unique to e-commerce products and\ncustomers. Furthermore, LLMs like GPT-3.5 necessitate remote accessibility,\nraising concerns about safeguarding voluminous customer privacy data during\ntransmission. This paper proposes the LLaMA-E, the unified and customized\ninstruction-following language models focusing on diverse e-commerce authoring\ntasks. Specifically, the domain experts create the seed instruction set from\nthe tasks of ads generation, query-enhanced product title rewriting, product\nclassification, purchase intent speculation, and general Q&A. These tasks\nenable the models to comprehensively understand precise e-commerce authoring\nknowledge by interleaving features covering typical service aspects of\ncustomers, sellers, and platforms. The GPT-3.5 is introduced as a teacher\nmodel, which expands the seed instructions to form a training set for the\nLLaMA-E models with various scales. The experimental results show that the\nproposed LLaMA-E models achieve state-of-the-art results in quantitative and\nqualitative evaluations, also exhibiting the advantage in zero-shot scenes. 
To\nthe best of our knowledge, this study is the first to serve the LLMs to\nspecific e-commerce authoring scenarios.", + "authors": "Kaize Shi, Xueyao Sun, Dingxian Wang, Yinlin Fu, Guandong Xu, Qing Li", + "published": "2023-08-09", + "updated": "2023-08-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR" + ], + "main_content": "INTRODUCTION E-commerce authoring encompasses creating diverse and innovative textual content for scenario-based services, such as product copywriting, advertisements, and Q&A material [39]. Automatically generating authoring content compellingly and persuasively can improve promotion efficiency, attract potential customers, and ultimately drive sales and conversions [10]. Present task-specific authoring models predominantly focus on independent features, missing the capacity to interleave the multiple aspects of features inherent in diverse service objectives within interactive e-commerce scenarios. These limitations constrain the model\u2019s understanding of multidimensional authoring tasks, disregarding their potential to fit and apply positively promoted features in the correlated tasks [2]. Generating high-quality authoring content consistently for intricate and extensive business scenarios poses an ongoing challenge [36]. Natural language processing (NLP) has recently witnessed a significant transformation with the emergence of large language models (LLMs) [40]. These powerful models have revolutionized how NLP tasks are approached, introducing a unified paradigm arXiv:2308.04913v1 [cs.CL] 9 Aug 2023 \fConference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Kaize Shi and Xueyao Sun, et al. Figure 1: We train the LLaMA-E models based on the instructions set consisting of multiple e-commerce authoring tasks, which enables the LLaMA-E models with the capability of generalization to unseen instructions and items2 with potential for advancements [17]. LLMs, such as the widely known ChatGPT 1, acquire a broad spectrum of knowledge trained on vast corpora, enabling them to demonstrate remarkable performance across domains and deliver impressive results in numerous applications such as text summarization, question-answering, text style transfer, etc. [23]. The comprehensive corpora allow LLMs to capture the logic of language representation and acquire a macro understanding of common sense, linguistic patterns, and semantics. However, the general LLMs encounter challenges in comprehending the intricacies of personalized and specialized scenarios due to data barriers that isolate precise domain-specific knowledge [41]. Furthermore, certain LLMs rely on remote centralized services, which raises concerns regarding privacy protection in data transmission. The multi-aspect instruction following presents significant opportunities for empowering LLMs with the comprehension of interleaving features within complex e-commerce scenarios, which enables the handling of diverse authoring tasks in a unified manner [11]. This procedure injects domain-specific knowledge into the general LLMs based on the common sense knowledge learned in pretraining. Consequently, LLMs enhance the capacity for generalization and feature-fitting through contextually sensitive instructions, thereby releasing their ability for various practical business applications [24]. Moreover, the customization of LLMs locally maximizes privacy by mitigating potential breaches related to the external sharing of sensitive information during the inference process [20]. 
This paper proposes the LLaMA-E, a series of instructions following large language models specifically tailored for e-commerce authoring tasks. Recent studies have demonstrated impressive performance gains through automatic, self-instructional fine-tuning techniques [29]. Inspired by this, we create a seed instruction set according to the common aspect of different authoring tasks in the e-commerce scenario by interleaving the features of seller, customer, and platform; and utilize the GPT-3.5-turbo-301 model as a teacher model to expand the seed instruction set for model training. Specifically, domain experts are engaged to craft the seed set containing 300 instructions pairs manually, focusing on the tasks of ads 1https://www.openai.com 2The LLaMA-E icon was generated by DALL\u00b7E: https://labs.openai.com/. generation, query-enhanced product title rewriting, product classification3, query intent speculation, and general e-commerce Q&A. Following this, the GPT-3.5-turbo-301 model is tasked with expanding these seed instructions, resulting in an expanded instruction set of 120k pairs after pruning and postprocessing. The LLaMA-E models have been trained locally based on the expanded instruction set and are evaluated using metrics designed to assess their applicability in real-world applications. The quantitative and qualitative results demonstrate that LLaMA-E models achieve state-of-the-art performance in authoring tasks that even include zero-shot unseen tasks. The contributions of this paper can be summarized as follows: \u2022 We design an instruction set for e-commerce authoring tasks by interleaving the task requirements and the scenario features of customers, sellers, and the platform, which are essential service objectives in the authoring scenario. \u2022 Based on the expanded instruction set, we develop the LLaMAE, a series of large language models that uniformly focus on serving various e-commerce authoring tasks. \u2022 The proposed LLaMA-E models outperform other baselines, achieving state-of-the-art results in quantitative and qualitative evaluations. To the best of our knowledge, LLaMA-E is the first work in the e-commerce scenario that specifically introduces LLMs to focus on authoring services. 2 RELATED WORKS 2.1 E-commerce Authoring E-commerce authoring covers the tasks of fulfilling various aspects and requirements from different objects in e-commerce services. It aims to create diverse and captivating content that effectively highlights the product\u2019s features, thereby promoting customers\u2019 purchasing behaviour [6]. One straightforward approach is based on modifying fixed patterns. Wang et al. [28] proposed a statistical framework that generates product descriptions using templates extracted from product attributes. Xiao et al. [31] generated summaries corresponding to product titles by defining the necessary keyword categories. The advent of novel NLG paradigms like Transformers [27] has enhanced the ability to represent complex text features. Consequently, models in this field are progressively capable of incorporating domain-specific features into general NLG tasks, leading to practical research focused on scenario features. From the e-commerce authoring task of generating product descriptions, Zhang et al. [39] introduced APCG, a system that offers seller services by providing feedback on transformer-generated content through human screening. This method has improved the click-through rate (CTR) and conversion rate (CVR) when applied at JD.com. 
Wang et al. [30] generate product descriptions by incorporating the product title, attributes, and auxiliary descriptions created by marketers using an autoencoder structure. By combining product descriptions with user-personalized characteristics, the overall appeal of the descriptions can be enhanced for specific customers, aiding them in making informed decisions. Building on this idea, Chen et al. [4] combined product aspects, user categories, and 3The product taxonomy in this paper is defined as clothing, accessories, home and living, weddings, art and collectibles, craft supplies and tools, jewellery, paper and party supplies, toys and games, electronics and accessories, books movies and music, bath and beauty, bags and purses, shoes, pet supplies. \fLLaMA-E: Empowering E-commerce Authoring with Multi-Aspect Instruction Following Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY a knowledge base to generate informative and personalized product descriptions. Additionally, there is research focusing on generating product advertisements. Chan et al. [2] generate ads for multiple products by selecting representative products corresponding to the post topic and generating ad content based on the multi-agent framework. Similarly, Zhang et al. [38] propose SMPACG, a model for generating ads based on multiple products. This model emphasises generating ads that align the features of multiple products with specific display requirements in various scenarios. 2.2 E-commerce Language Model LLMs have emerged as a new paradigm for addressing multi-class tasks in a unified manner. E-commerce platforms can leverage LLMs to enhance user experience, boost sales results, foster user interaction, and provide personalized services [3]. LLMs in e-commerce primarily concentrate on tasks such as auto Q&A, product summarization, sentiment analysis, etc [26]. Zhang et al. [34] proposed E-BERT, a BERT-based language model incorporating phrase-level and product-level knowledge. Experimental results demonstrate that this knowledge augmentation improves the performance of various downstream e-commerce tasks like Q&A and product classification. Xu et al. [32] introduced K-PLUG, a transformer-based pre-trained language model that incorporates e-commerce knowledge derived from product entities and an e-commerce knowledge base. Zhang et al. [37] presented DSGPT, a GPT structure-based language model fine-tuned specifically for tasks involving product titles and comments summarization. In summary, the key to successfully inheriting general knowledge from pre-trained LLMs while emphasizing domain-specific knowledge lies in designing a fine-tuning method guided by specific downstream requirements. Several research has successfully applied the LLMs for additional e-commerce applications, particularly in recommender systems and information retrieval, complementing the existing NLP tasks [14]. For instance, Geng et al. [5] developed a path language model to generate explainable recommendations for Amazon products. Lu et al. [16] introduced a multilingual product retrieval model based on the BERT model serviced for the e-commerce engine. Huang et al. [8] used a variety of Amazon-fine-tuned LLMs to predict query similarity. Their fine-tuned LLM significantly improves search ranking and matching applications over the general BERT model. 3 METHOD The development process of the LLaMA-E models is illustrated in Figure 2. 
Initially, domain experts in e-commerce formulate the seed instruction set that concentrates on five common authoring tasks covering the features from the aspects of customers, sellers, and the e-commerce platform. Then the instructions are expanded by the GPT-3.5-turbo-0301 model and refined by experts to generate diverse expressions. The resulting expanded instruction set is then utilized for instruction tuning to introduce e-commerce authoring knowledge. The following sections will detail each sub-process. 3.1 Instruction Formulating Designing the innovative instruction format that considers the key objectives of the e-commerce authoring scenario is crucial for guiding LLMs to carry out the authoring tasks effectively. This paper specifically emphasizes the e-commerce features from three aspects: the seller, customer, and platform, which are the primary agent and service objectives in the e-commerce authoring services. The following section details the features of each aspect. Seller (\ud835\udc46): The seller assumes a critical role in e-commerce authoring, acting as both the service agent and object. Their contribution to the e-commerce authoring scenario is manifested through creating product titles that encompass essential features, such as the name, style, and model of the intended products. These product titles serve as an intuitive channel to effectively communicate specific features to potential customers and the official platform. Customer (\ud835\udc36): The customer serves as the primary audience for e-commerce authoring services. They actively participate in the authoring process by providing personalized product preferences. The customer query corresponding to a specific product plays a vital role as the textual carrier for associating product features and personalized preferences. The features can be subdivided as follows: +Explicit feature (\ud835\udc360): This feature is intuitively reflected through the textual information in the customer query, which can provide specific feedback on the explicit features of the customer\u2019s intended product. The query text acts as an indicator of the specific features or attributes that the customer is seeking in the product. +Implicit feature (\ud835\udc361): This feature encapsulates the potential purchase interest that can be inferred from the user query, thereby supporting the authoring process correlated with the specific customer intention. The features are semantically abstracted from the query text and can be mined to associate with other features from different aspects based on specific authoring scenarios. Platform (\ud835\udc43): As the service carrier of the e-commerce authoring model, the features of the platform offer a comprehensive and macro perspective. Their main objective is to establish the abstract connection that unifies the features from both the seller and the customer aspects. This holistic feature helps ensure the authoring content aligns with the platform\u2019s characteristics as follows: +Product correlation (\ud835\udc430): This feature is derived from the product taxonomy, which encompasses the distinctions and associations among different products. The incorporation of this feature enhances e-commerce authoring by providing a comprehensive understanding of product semantics through coherent and official ground-truth taxonomy labels based on expert knowledge. +Platform background (\ud835\udc431): This feature pertains to the background knowledge of specific e-commerce platforms. 
It aids the authoring process by aligning linguistic habits and policy knowledge corresponding to the platform. The textual information of blogs and Q&A pairs serve as official reflections of these features. Specifying a unified instruction set containing the tasks that interleave essential features can enhance the generalization ability of LLMs [15]. In the e-commerce scenario, a productive approach involves formulating fine-tuning tasks that interleave features from the aforementioned aspects while meeting specific scenario requirements. We identify the tasks encompassing interaction scenarios that utilise the defined features: Ads Generation, Queryenhanced Title Rewriting, Product Classification, Purchase Intent Speculation, and General Q&A. Table 1 shows the instantiated instructions, providing references for instruction formulating. The ads generation aims to create compelling content that highlights product features and incorporates persuasive language to \fConference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Kaize Shi and Xueyao Sun, et al. Figure 2: The development process of the LLaMA-E models, which includes the instruction formulating, instruction expansion, and instruction tuning for e-commerce authoring scenarios stimulate purchasing. This is also the most prevalent task in ecommerce authoring. The query-enhanced title rewriting focuses on personalizing the original product titles based on user queries, making them more appealing and aligned with user purchasing preferences. Since the semantic features in product titles and queries have domain prominence, we utilize the product classification and purchase intention speculation tasks to map these semantic features to specific product taxonomy, making the model shifts its focus from general to e-commerce knowledge. The intention speculation establishes semantic associations between queries and taxonomy for enhancing product recommendation. The general Q&A introduces background knowledge through Q&A pairs defined by the platform. Its testing scenario can be seen as a zero-shot learning task for platform understanding. The experts formulated the seed instruction set containing 300 instructions focusing on these tasks. 3.2 Instruction Expansion We employ the GPT-3.5-turbo-0301 as the teacher model to expand the diversity of instructions in the seed set for fine-tuning the LLaMA-E models. The instruction expansion algorithm is illustrated in Algorithm 1. The instructions in the seed set are rewritten by the teacher model while keeping its semantic consistency. Due to the strict predefined responses, we only rewrite the instructions for the tasks of product classification and intention speculation. For the generative tasks that encourage the generation of varied linguistic expressions, we adopt two strategies to expand responses corresponding to the instructions: response generation and rewriting. Specifically, response generation utilizes the teacher model to generate appropriate responses based on the rewritten instructions, while response rewriting involves the teacher model rewriting the original responses that align with the instructions in the seed set. Subsequently, the expanded instruction set undergoes a postprocessing phase by experts, where instruction-response pairs with duplicate instructions and responses are filtered out. 
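The expansion procedure just described, formalized as Algorithm 1 in the next subsection, maps directly onto the legacy OpenAI chat-completion client it names; the following is a runnable rendering under that assumption, with error handling and rate limiting omitted.

import openai  # legacy openai-python (<1.0) interface, as in Algorithm 1

def ask_gpt(messages):
    # Query the teacher model used for instruction expansion.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301", messages=messages)
    return response["choices"][0]["message"]["content"]

def expand_instruction(seed_instruction, seed_input, seed_resp, mode):
    # Rewrite the seed instruction while keeping its semantics.
    expanded_instruction = ask_gpt([{
        "role": "user",
        "content": "Rewrite the following instruction while maintaining "
                   "semantic consistency: " + seed_instruction}])
    if mode == "generation":
        # Strategy 1: generate a fresh response to the rewritten instruction.
        msgs = [{"role": "user",
                 "content": expanded_instruction + " " + seed_input}]
    else:  # mode == "rewriting"
        # Strategy 2: diversify the expression of the original seed response.
        msgs = [{"role": "user",
                 "content": "Rewrite the following generated response to "
                            "diversify its expression: " + seed_resp}]
    return {"instruction": expanded_instruction,
            "input": seed_input,
            "output": ask_gpt(msgs)}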
The screened instructions are then uniformly distributed across the aforementioned tasks, culminating in the final instruction set encompassing 120k instructions, which is employed for training LLaMA-E. Examples of the expanded instructions are given in Appendix A.

Algorithm 1: Instruction Expansion
Function askGPT(msg):
    response ← openai.ChatCompletion.create(model="gpt-3.5-turbo-0301", messages=msg)
    return response["choices"][0]["message"]["content"]

Function Instruction_Expanding(Seed_Input, Seed_Resp, Seed_Instruction, Mode):
    EI_messages ← [{"role": "user", "content": "Rewrite the following instruction while maintaining semantic consistency:" + Seed_Instruction}]
    Expanded_Instruction ← askGPT(EI_messages)
    if Mode == "generation" then
        ER_messages ← [{"role": "user", "content": Expanded_Instruction + Seed_Input}]
    else if Mode == "rewriting" then
        ER_messages ← [{"role": "user", "content": "Rewrite the following generated response to diversify its expression:" + Seed_Resp}]
    Expanded_Response ← askGPT(ER_messages)
    return {"instruction": Expanded_Instruction, "input": Seed_Input, "output": Expanded_Response}

3.3 Instruction Tuning

We inject the e-commerce knowledge into the general pre-trained LLaMA [25] models by fine-tuning with the expanded instructions. The LLaMA models with parameter scales of 7b, 13b, and 30b serve as the base models. However, deploying LLaMA models with a large number of parameters in a customer-specific scenario is challenging due to the computational complexity involved. To overcome this challenge, we employ LoRA [7], a Parameter-Efficient Fine-Tuning (PEFT) strategy that enables cost-effective fine-tuning and achieves results competitive with full-model fine-tuning.

Table 1: Examples of the instantiated instructions in e-commerce authoring interaction scenarios, where the tasks are interleaved to cover features from the aspects of seller, customer, and platform.
Task                           | Instantiated Instruction
Ads Generation                 | Generate a short advertisement for the following product: [product title]
Query-enhanced Title Rewriting | Rewrite the product title of [product title] according to the following query: [query].
Product Classification         | What [product category] does the following product belong to? [product title]
Intent Speculation             | Given the query of [query], which of the following [product category] is the customer interested in?
General Q&A                    | [How am I charged for Ads?]

Specifically, LoRA is designed for low-rank adaptation, reducing the number of trainable parameters in the LLM fine-tuning process by learning pairs of rank-decomposition matrices while keeping the original weights frozen. By significantly reducing the requirements of task-specific LLM adaptation, LoRA allows efficient task-switching for deployment without introducing inference latency. In our experiments, the forward pass of a linear layer, represented by h = W0 x, of the LLaMA models is modified by LoRA following Eq. 1.
h = W0 x + B A x,  (1)

where W0 ∈ R^{d×k} represents the frozen pre-trained weight matrix from the base LLaMA model, whereas B ∈ R^{d×r} and A ∈ R^{r×k} are the trainable parameters, initialized with zeros and a Gaussian distribution, respectively, with the rank r ≪ min(d, k). Given that the training data for the LLaMA models is derived from general corpora such as Wikipedia and C4, it is essential to focus specifically on the unique e-commerce semantic features, such as rare stylistic words (e.g., Boho, Berber), when it comes to authoring tasks. This emphasis is particularly important for modelling product descriptions and personalized customer queries. To enable the model to fit these nuanced features, we utilize LoRA to fine-tune the trainable parameters attached to Wq, Wk, Wv, and Wo, the weight matrices in the self-attention module.
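A minimal PyTorch rendering of Eq. 1 for a single linear layer is given below; it is an illustrative sketch rather than the authors' training code, which applies LoRA to the attention weight matrices named above.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d, k, r):
        super().__init__()
        self.w0 = nn.Linear(k, d, bias=False)           # pre-trained weight W0
        self.w0.weight.requires_grad = False            # kept frozen
        self.a = nn.Parameter(torch.randn(r, k) * 0.01) # A: Gaussian init
        self.b = nn.Parameter(torch.zeros(d, r))        # B: zero init, so BA = 0 at start

    def forward(self, x):
        # h = W0 x + B A x; only A and B receive gradients.
        return self.w0(x) + x @ self.a.T @ self.b.T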
4 EXPERIMENT

4.1 Implementation Details

The dataset for constructing the instruction set is derived from real-world application scenarios, encompassing essential information such as the product title, taxonomy, and customer query. Additionally, it includes an action element that represents the customer's actions on the retrieved product list, taking the values "no action", "click", and "purchase". To ensure that the data expresses the potential purchase interest established by the correlation between the customer's query and the product information, we filter out data marked as "no action", which indicates that the customer skipped the specific disinterested products. Furthermore, all screened data is post-processed to remove emojis and other interfering characters. The test set comprises 19,367 previously unseen product information instances, including the additional product description element. For evaluating LLaMA-E on the general Q&A task, we utilized 30 authentic Q&A pairs from the platform's "Help Center" that are not included in the training instruction set. The LLaMA-E models are trained using two Nvidia A40 GPUs. The numbers of trainable parameters and the training time (per epoch) are provided in Table 2.

Table 2: Training details of the proposed LLaMA-E models
Model       | Trainable Parameters | GPU Hours
LLaMA-E-7b  | 8.39m                | 3.93
LLaMA-E-13b | 13.11m               | 9.51
LLaMA-E-30b | 25.56m               | 41.14

4.2 Evaluation System

The evaluation system is designed to test the LLaMA-E models' generalization capability in practical applications, which requires the generated content to cover the essential features in the task requirements rather than strictly adhere to fixed responses for specific instructions. The metrics for evaluating the LLaMA-E models on each authoring task are as follows:

Ads Generation: Evaluation metrics for this task encompass BLEU [19] and ROUGE [13], commonly used in combination in NLG tasks [18]. We calculate the BLEU and ROUGE-L scores between the generated ads and the product title and description separately, represented as BL_Adt, BL_Add, RL_Adt, and RL_Add. This evaluation setting considers the motivation of seller-written ads, assessing whether the generated ads incorporate the crucial features mentioned in the title and the significant details in the description.

Query-enhanced Title Rewriting: We calculate the BLEU and ROUGE-L scores between the rewritten title and the original product title and customer query separately, represented as BL_Tt, RL_Tt, BL_Tq, and RL_Tq. These metrics measure how comprehensively the rewritten title covers the features of the original title and the customer query. The original product titles are dysfluent text stacked with keywords, making readability a criterion for evaluating whether a rewritten title can be used in publicity scenes such as online banners. To evaluate this characteristic, we calculate the perplexity (PPL) [9] of the rewritten title, taking GPT-2-XL (https://huggingface.co/gpt2-xl) as the evaluation model, which has 1.5 billion parameters and is pre-trained on the WebText dataset with extensive general semantic features.

Product Classification: This task evaluates whether the LLaMA-E model can accurately classify products into the predefined taxonomy based on the title. The evaluation metrics include the macro-average Precision (P_pt), Recall (R_pt), and F1-score (F1_pt).

Intent Speculation: This task evaluates the performance of LLaMA-E in analyzing the customer's potential interest expressed by queries, grounded in the product taxonomy. It is quantitatively measured using the classification metrics of macro-average Precision (P_qs), Recall (R_qs), and F1-score (F1_qs).

General E-commerce Q&A: The BLEU and ROUGE-L metrics measure the explicit overlap and similarity between the generated and standard answers, evaluating generalization on unseen questions; they are represented as BL_qa and RL_qa. We also introduce the average BERT Score (BE_qa) [35] to evaluate the implicit semantic similarity between the generated and standard answers. This can be regarded as a measure of the platform-specific background knowledge conveyed by the semantics injected into the general LLMs.
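Of the metrics above, PPL is the only model-scored one. The paper does not spell out its scoring configuration, so the following is the standard language-model perplexity recipe with the referenced GPT-2-XL checkpoint, offered as a plausible sketch rather than the authors' script.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").eval()

@torch.no_grad()
def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    # With labels=ids, the model returns the mean token-level cross-entropy.
    loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()  # lower PPL = more fluent title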
Overall: Following Yi et al. [33], we calculate the geometric mean of the aforementioned metrics to derive an intuitive uniform metric, GM. The PPL is transformed to 1/ln(PPL) for this calculation.

4.3 Baseline Methods

We compare the proposed LLaMA-E models with the LLMs GPT-2 [21], BART [12], T5-base [22], GPT-Neo [1], and LLaMA [25]. We employ the expanded instruction set to fine-tune the GPT-2 and BART models for each e-commerce authoring task individually. This setting evaluates the performance difference between comprehensive instruction fine-tuning and distributed task-specific fine-tuning, while aligning the baselines with a data distribution similar to that of the LLaMA-E models to ensure fairness. The other baselines are introduced with their pre-trained models, and the comparison against the LLaMA-7/13/30b models can be regarded as an ablation study evaluating the advantage of the designed fine-tuning instructions in enabling general LLMs to learn e-commerce authoring knowledge. Additionally, we report the performance of the teacher model, GPT-3.5-turbo-0301, on each of the evaluation tasks.

5 RESULT AND ANALYSIS

5.1 Quantitative Evaluation

The quantitative evaluation results are shown in Table 3. The LLaMA-E models generally achieve better results than the baseline models on most quantitative evaluation metrics. The LLaMA-E-7b model significantly outperforms the other baseline models on the GM metric, showing the best overall performance on the e-commerce authoring tasks. Within the internal comparison of the LLaMA-E models, a significant trend is the gradual enhancement of performance on the classification tasks as the parameter scale increases. This demonstrates that a larger parameter scale aids in fitting the more granular scenario features within the instruction set. However, one potential drawback is overfitting, stemming from the limited scale and diversity of the current instruction set, which may affect the models' ability to generalize knowledge from general corpora and effectively model natural language. Compared with the teacher model, GPT-3.5, the LLaMA-E models achieve competitive performance on the BLEU and ROUGE-L metrics, which evaluate text overlap, as well as on the PPL metric, which evaluates text quality. These results demonstrate that the text generated by the LLaMA-E models converges with GPT-3.5 in terms of information coverage and readability. On the product classification, intent speculation, and general Q&A tasks, which require professional domain knowledge, the LLaMA-E models achieve better performance. This shows that general LLMs cannot yet meet the fine-grained requirements of specific tasks, demonstrating the necessity of designing LLMs based on scenario features. This comparison proves the feasibility of training LLMs that comprehensively serve e-commerce authoring applications while ensuring multi-agent data privacy through the designed multi-aspect instructions. Compared with the task-specific fine-tuned GPT-2 and BART models, the LLaMA-E models achieve the best performance, demonstrating that the designed instruction set is more fine-grained in fitting the overall features of the given tasks than task-specific fine-tuning.
Both of these models also outperformed the other baselines on the F1_qs, F1_pt, and BE_qa metrics, and GPT-2 achieved the best performance on the P_pt metric. These results prove that domain knowledge significantly improves the serviceability of LLMs in specific scenarios. However, these models require cumbersome task-specific fine-tuning, and their limited in-context learning ability further restricts the efficient utilization of the available training data; these defects prevent them from being easily applied in practical scenarios. The remaining baselines, T5-base, GPT-Neo, and LLaMA, at similar scales to the LLaMA-E models, are incorporated to assess the applicability of extensive general knowledge in specific application scenarios. The findings indicate that these models excel on certain generative metrics but fall short on the classification and BE_qa metrics. We infer that large-scale general knowledge only equips these models to generate readable yet context-independent text, which limits their ability to represent detailed scenario knowledge and provide precise support for e-commerce authoring services. This hypothesis is further examined in the qualitative evaluations. Compared to the LLaMA models in the ablation studies, the LLaMA-E models perform better on all metrics, validating the positive support of the proposed instruction set for the authoring scenarios.

5.2 Qualitative Evaluation

We conduct the qualitative evaluation on the tasks of ads generation (Table 4), query-enhanced title rewriting (Table 5), purchase intent speculation (Table 6), and general e-commerce Q&A (Table 7).

5.2.1 Ads Generation. This task requires the generation of attractive ads covering the product features. The results are shown in Table 4. A significant advantage of the LLaMA-E models is that they generate stimulating words ("Order Now", "Perfect") that encourage purchases, endowing the generated text with an advertising function. Moreover, the LLaMA-E models derive a more concrete description of the product's function from the abstract adjectives ("Pure Authentic") and the product words while ensuring factual consistency. This quality is absent from the ads generated by the other baselines except GPT-3.5 and BART. In contrast, the text generated by the general LLMs T5-base and LLaMA-13b hallucinated, while the GPT-2 model generated unreadable repetitive words.

5.2.2 Query-enhanced Title Rewriting. Query-enhanced title rewriting requires the rewritten title to contain the key features of both the original title and the query. The results are in Table 5. The complex requirements of this task led several baselines to rewrite titles that are not readable (GPT-2, BART, T5-base). The GPT-Neo rewritten title lost the crucial product feature of "shirt".
In contrast, the titles \fLLaMA-E: Empowering E-commerce Authoring with Multi-Aspect Instruction Following Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Table 3: The quantitative evaluation results of the LLaMA-E models and baselines Model Ads Generation Query-enhanced Title Rewriting Product Classification Intent Speculation General Q&A GM\u2191 \ud835\udc35\ud835\udc3f\ud835\udc34\ud835\udc61 \ud835\udc45\ud835\udc3f\ud835\udc34\ud835\udc61 \ud835\udc35\ud835\udc3f\ud835\udc34\ud835\udc51 \ud835\udc45\ud835\udc3f\ud835\udc34\ud835\udc51 \ud835\udc35\ud835\udc3f\ud835\udc47\ud835\udc61 \ud835\udc45\ud835\udc3f\ud835\udc47\ud835\udc61 \ud835\udc35\ud835\udc3f\ud835\udc47\ud835\udc5e \ud835\udc45\ud835\udc3f\ud835\udc47\ud835\udc5e \ud835\udc43\ud835\udc43\ud835\udc3f \ud835\udc43\ud835\udc5d\ud835\udc61 \ud835\udc45\ud835\udc5d\ud835\udc61 \ud835\udc391\ud835\udc5d\ud835\udc61 \ud835\udc43\ud835\udc5e\ud835\udc60 \ud835\udc45\ud835\udc5e\ud835\udc60 \ud835\udc391\ud835\udc5e\ud835\udc60 \ud835\udc35\ud835\udc3f\ud835\udc5e\ud835\udc4e \ud835\udc45\ud835\udc3f\ud835\udc5e\ud835\udc4e \ud835\udc35\ud835\udc38\ud835\udc5e\ud835\udc4e GPT-3.5 16.76 47.65 0.56 11.15 26.08 60.04 9.10 35.00 120.86 49.48 49.23 49.35 19.58 19.18 19.38 2.83 14.41 85.53 15.06 GPT-2 14.85 25.03 0.29 6.83 16.57 39.48 1.64 19.98 253.73 87.50 24.01 33.18 56.25 6.33 10.69 2.14 11.42 85.66 10.26 BART 13.05 36.04 0.37 8.37 18.64 41.40 5.75 20.33 389.35 73.75 54.82 62.39 66.67 47.97 54.71 3.32 14.02 86.02 15.83 T5-base 14.55 37.96 0.92 9.10 21.16 53.42 7.95 23.82 300.02 40.04 9.52 9.62 26.17 9.98 9.01 3.25 13.99 85.33 11.03 GPT-Neo 12.93 30.62 0.97 8.16 21.43 49.04 7.21 25.49 306.83 9.88 5.86 2.42 2.61 5.05 1.61 2.41 10.10 83.56 6.65 LLaMA 7b 10.05 21.63 0.77 8.52 12.00 27.32 3.22 13.86 206.71 28.64 4.29 4.12 9.64 3.01 2.29 2.01 11.17 84.81 6.31 13b 6.31 16.35 0.75 7.94 15.28 30.40 3.35 13.61 181.54 19.64 1.78 2.62 13.62 3.48 4.79 0.86 11.53 84.39 5.72 30b 12.67 22.93 0.91 7.44 18.03 32.03 3.15 12.95 159.18 32.15 6.12 9.27 11.54 4.25 5.73 2.49 11.38 84.55 7.79 LLaMA-E 7b 15.18 46.96 0.45 9.87 18.88 54.36 4.66 25.69 132.86 60.03 63.80 59.01 59.52 61.09 59.71 4.04 15.86 86.43 17.41 13b 13.08 46.99 0.32 8.99 15.07 50.48 4.15 23.21 152.23 72.51 68.92 69.99 72.87 68.08 69.62 3.32 12.36 86.14 16.77 30b 14.23 47.23 0.41 10.32 15.96 52.95 4.27 24.60 177.75 74.32 73.16 71.75 74.51 72.18 70.53 2.28 13.29 86.01 17.28 Table 4: Qualitative evaluation of Ads generation Model Generated Ads Instr. Generate an ad for the following product. Input Himalayan pink salt lamp bowl massage balls. Pure authentic. Dimmer switch. Night light. Home decor. GPT-3.5 Relax with pure Himalayan bliss. Authentic salt lamp bowl with massage balls. Dimmable night light. GPT-2 Himalayan pink salt lamp bowl massage balls. Pure authentic. Pure authentic. Pure authentic... BART Enhance ambiance with salt lamp bowl massage balls. Pure authentic ambiance. Perfect for home decor. T5-base Introducing our luxurious and stylish linen spherical pillow that offers an authentic, dimmer switch for a cozy night light in your home. GPT-Neo Introducing our new Himalayan Pink Salt Lamp Bowl Massage Balls! You can feel the power of the natural light in your bedroom... LLaMA 7b Himalayan Pink Salt Lamp Bowl Massage Balls are perfect for any room in your house. 13b This light gives your room a warm glow and is made of real salt from Pakistan. 30b Himalayan Pink Lamp Bowl! 
I hope you like it :) LLaMA-E 7b Purchase the Himalayan Pink Salt Lamp for a calming ambience and aesthetic. Buy now! 13b Himalayan Salt Lamp Bowl: Relaxing, Dimmable, Massage Balls. Pure Authentic. Order it for Home Decor. 30b Himalayan salt lamp bowl with massage balls and dimmer: night light, decor. Buy now! rewritten by the LLaMA-E models cover the key features, while LLaMA-7b deduces and visualizes the \"fiftieth\" in \"Input\" as a male character of the corresponding age group, which proves the ability of the chain-of-thought reasoning in LLMs. The LLaMA models cover titles and queries by generating redundant descriptors. This phenomenon can be understood as the model does not understand the instructions accurately and deviates from the task requirements. Table 5: Qualitative evaluation of title rewriting Model Rewritten Title Instr. Rewrite the following product title combined with the query of \"birthday gift men\" Input Vintage birthday shirt fiftieth gift ideas GPT-3.5 Vintage 50th birthday shirt: Unique gift ideas for men\u2019s milestone celebrations. GPT-2 vintage birthday shirt fiftieth gift ideasth birthday gift men\u2019s shirt fiftieth gift ideasth birthday shirt fiftieth gift... BART vintage birthday shirt fiftieth birthday gift ideas for gifting birthday gifts and more! T5-base Personalised Birthday Gift for Father\u2019s Birthday Gift Men GPT-Neo Men\u2019s Vintage Birthday Gift Ideas LLaMA 7b The birthday man can get a gift from this vintage shirt as it\u2019s for his fiftieth birthday... 13b A good example of a birthday gift for a fifty-year-old man would be a customized T-shirt... 30b Fifty Birthday Shirts For Men, Unique Funny T-Shirts With Slogans For The Big Five-Oh! LLaMA-E 7b Vintage Shirt for Men\u2019s Birthday Gift, Ideal for Husbands, Dads, and Brothers. 13b Vintage 50th Birthday Shirt, Perfect for Men. Customize with Name, Age, and Number. Ideal Gift. 30b Vintage Birthday Shirt A Perfect Gift Idea for Men\u2019s Fiftieth Birthday. Shop Now. 5.2.3 Purchase Intent Speculation. This evaluation can be achieved by measuring whether the generated intention contains the corresponding taxonomy keywords based on the given query. The results are shown in Table 6. The fine-tuned LLMs (BART, GPT-2, and LLaMA-E) can accurately identify specific categories from the product taxonomy. Although GPT-2 incorrectly inferred the customer\u2019s purchase intention as \"jewellery,\" this category still falls within the standard taxonomy. In contrast, due to the lack of domain-specific knowledge and adherence to instruction constraints, the intention generated by general LLMs is diverse and unrelated to the instructions, which limits their serviceability in practical applications. This phenomenon also supports the hypothesis in quantitative evaluations that such models have better text generation metrics but \fConference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Kaize Shi and Xueyao Sun, et al. Table 6: Qualitative evaluation of purchase intent speculation Model Purchase Intent Instr. Which category does the following query suggest the user is interested in? Input Himalayan salt lamp Taxonomy home and living GPT-3.5 Home and Living GPT-2 jewelry BART home and living T5-base The query suggests that the user is interested in \"toys and games\" category. GPT-Neo The query suggests that the user is interested in accessories, specifically a \"Himalayan salt lamp\". 
5.2.4 General E-commerce Q&A. This task can be regarded as a zero-shot evaluation since the testing questions are unseen in the training set. The results are shown in Table 7. Most general LLMs (such as T5-base, GPT-Neo, and LLaMA-13/30b) are unable to effectively incorporate both the features of the e-commerce platform ("Etsy") and the enquired entity ("Stats") in the given example, resulting in generated answers that are generic and semantically distant from the standard answer. In contrast, the answers generated by the LLaMA-E models all reflect the core semantics that this is a store evaluation tool provided by the platform, which is attributed to the background platform knowledge acquired through other related Q&A instructions. We show more zero-shot generative tasks, namely scenario-based ads generation and product recommendation, in Appendix B and C.

Table 7: Qualitative evaluation of general e-commerce Q&A
| Model | Generated Answer |
| Instr. | How to Use Etsy Stats for Your Shop? |
| Answer | Etsy Stats provide you with data for your shop. With Stats, you can see how your shop is performing, where your traffic to your shop comes from and how shoppers are engaging with your listings. |
| GPT-3.5 | I can provide a list of ways to use Etsy Stats for your shop, as follows: 1. Track your shop's progress. 2. Learn about your customers. 3. Keep track of your promotions. 4. Analyze your item's performance. 5. Plan for the future. |
| GPT-2 | Use Etsy Stats for your shop performance. |
| BART | Use Etsy's Stats to track sales, categories, and shipments. Optimize and expand your campaign with data insights. |
| T5-base | To use Etsy Stats for Your Shop, you can follow these steps: 1. Choose a platform. 2. Create a website. 3. Set up your account on the Etsy platform. 4. Add content to your website. 5. Use analytics tools to track sales. |
| GPT-Neo | You can use Etsy's latest features and features for your favorite handmade items, such as personalized designs, custom prints, and affiliate marketing |
| LLaMA 7b | Etsy Stats are important when it comes to running your shop. You can easily track the sales and view other statistics to help you know about your customers and business. |
| LLaMA 13b | How often do you check your Etsy stats and shop page views in the last 30 days, or even 7 days time period? To maximize sales on Etsy, you should be checking these daily. |
| LLaMA 30b | The guest post by Maria will share with us her tips and experience on how to use your stats on Etsy in order to help grow your business! |
| LLaMA-E 7b | Etsy Stats: View sales, views, traffic, and search trends. Use to understand your shop's performance. |
| LLaMA-E 13b | Etsy Stats is used to calculate the shop performance like traffic sources, views, and favourites. |
| LLaMA-E 30b | Etsy Stats to is for analyzing your shop's performance and customer behaviour. |

5.3 Human Evaluation
We invite ten volunteer annotators with extensive experience in English understanding and e-commerce to conduct human evaluations on ads generation and query-enhanced title rewriting. Each annotator is asked to anonymously rate ten randomly selected generated texts from the progressive perspectives of readability, coverage, and attractiveness based on the following criteria (a sketch of aggregating these ratings follows the list):
• Rate-A: Captivates customers and encourages purchases while covering the essential features of the products or queries.
• Rate-B: Encompasses the essential features of products or queries but lacks attractiveness or persuasive appeal.
• Rate-C: The generated text is legible and presented in fluent natural language but lacks some essential features.
• Rate-D: The generated text cannot be understood.
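A minimal sketch of turning per-sample Rate-A..Rate-D labels into the per-model rating distributions plotted in Figure 3; the annotator labels used here are invented for illustration:

```python
# Minimal sketch: aggregate annotator ratings (A-D) into the per-model
# distribution visualised in Figure 3. The ratings below are invented.
from collections import Counter

def rating_distribution(ratings):
    """Fraction of rated samples at each level A-D for one model/task."""
    counts = Counter(ratings)
    return {level: counts[level] / len(ratings) for level in "ABCD"}

print(rating_distribution(list("AAABABBACB")))
# -> {'A': 0.5, 'B': 0.4, 'C': 0.1, 'D': 0.0}
```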
Figure 3: Human evaluation of ads generation and query-enhanced title rewriting. The ranking is marked in the legend.

Figure 3 illustrates the results of the human evaluation, displaying the rating distribution of the test samples. The findings indicate that the LLaMA-E models attain competitive rating scores compared to GPT-3.5 while outperforming the other baseline models. Additionally, the annotators report that the title rewriting outputs generated by the LLaMA-E models are more attractive. This advantage can be primarily attributed to the ads generation task during instruction fine-tuning, which encourages the model to produce captivating phrases like "order now" and other persuasive language. This phenomenon indicates a beneficial correlation within the instruction set. GPT-3.5 shows stronger robustness, as it generates text in both tasks without any unreadable content.

6 CONCLUSION AND FUTURE RESEARCH
This paper introduces LLaMA-E, a series of large language models for e-commerce authoring scenarios. With a focus on data privacy concerns and limited instructions, the training instructions of the LLaMA-E models establish interleaving correlations between key aspect features and the specific requirements of real-world e-commerce authoring tasks. This approach enables the efficient utilization of features from various service agents, providing a reliable and privacy-conscious solution for LLM-based e-commerce authoring. Compared with other baselines, the LLaMA-E models achieve state-of-the-art results in both quantitative and qualitative evaluations. Extending the current models to a wider range of authoring tasks is valuable for future research. Most mainstream e-commerce authoring models are built on Chinese corpora, while our research is based on an English corpus; extending the current models to a multilingual environment is therefore another valuable direction. Incorporating user personalization features into e-commerce authoring tasks, thus enabling more customized content-based recommendation services, is also an attractive topic.

7 ETHICAL STATEMENTS
All human experts involved in customising the seed instructions have been informed and have agreed that their privacy is strictly protected. The query information in the dataset has been desensitized and does not contain any personal privacy of customers. The proposed LLaMA-E models follow the commercial specification of the LLaMA models and are for research purposes only. These models will not be applied in commercial scenarios, ensuring that they strictly comply with ethical considerations and avoid potential risks or conflicts of interest." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file