diff --git "a/title_30K/test_title_long_2404.16766v1.json" "b/title_30K/test_title_long_2404.16766v1.json" new file mode 100644--- /dev/null +++ "b/title_30K/test_title_long_2404.16766v1.json" @@ -0,0 +1,106 @@ +{ + "url": "http://arxiv.org/abs/2404.16766v1", + "title": "Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model", + "abstract": "While supervised fine-tuning (SFT) has been a straightforward approach for\ntailoring the output of foundation large language model (LLM) to specific\npreferences, concerns have been raised about the depth of this alignment, with\nsome critiques suggesting it is merely \"superficial\". We critically examine\nthis hypothesis within the scope of cross-lingual generation tasks, proposing\nthat the effectiveness of SFT may be constrained by its reliance on prior\ntokens to guide cross-lingual generation. Based on this crucial insight, and in\nresponse to the challenges posed by the costly and limited availability of\nnon-English data for SFT, we introduce a novel training-free alignment method\nnamed PreTTY, which employs minimal task-related prior tokens to bridge the\nfoundation LLM and the SFT LLM, achieving comparable performance without\ntraining. Experiments on machine translation and part-of-speech tagging across\neight languages demonstrate the efficacy of PreTTY in cross-lingual settings.\nRemarkably, by initiating the decoding process with only one or two prior\ntokens, foundation LLMs can achieve performance comparable to their SFT\ncounterparts. This method presents a cost-effective alternative to SFT and\nadvances the democratization of multilingual LLMs.", + "authors": "Runzhe Zhan, Xinyi Yang, Derek F. Wong, Lidia S. Chao, Yue Zhang", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model", + "main_content": "Introduction Supervised fine-tuning (SFT) refines large language models (LLMs) using task-specific instruction data to enhance their capability to follow instructions (Touvron et al., 2023; Peng et al., 2023) and to align their outputs with human preferences and safety considerations (Ouyang et al., 2022; Rafailov et al., 2023; Dong et al., 2023b; Yuan et al., 2023). This process is often termed \u201calignment\u201d, signifying the tailoring of model outputs *Work was done during a visit to Westlake University. \u0000 Co-corresponding authors. to conform to specific downstream requirements. Nevertheless, current research casts doubt on the necessity and potential adverse impacts of SFT. But the alignment achieved through SFT is often considered to be \u201csuperficial\u201d, with the process potentially repurposing pre-existing knowledge from pre-training to merely reshape outputs to meet specific criteria (Zhou et al., 2023; Lin et al., 2023). It has been observed that even a small-scale SFT training dataset can produce significant alignment effects (Liu et al., 2023; Xia et al., 2024). On the other hand, recent empirical studies (Luo et al., 2023; Dong et al., 2023a) have raised concerns that SFT might hurt the knowledge acquired during its pre-training phase, leading to serious consequences like catastrophic forgetting. Not only is there no definitive consensus on the necessity of SFT, but the majority of these studies also focus on monolingual tasks. 
LLMs still encounter challenges in handling complex cross-lingual generation tasks (Schioppa et al., 2023; Wang et al., 2023). Current research on cross-lingual alignment primarily seeks to extrapolate or align English capabilities to other languages using the SFT paradigm (Zhang et al., 2023; Chai et al., 2024; Xu et al., 2024), yet there remains a gap in exploring the specific impacts of SFT-based cross-lingual alignment. Furthermore, given the potential risk of SFT leading to the forgetting of pre-training knowledge, the question of how to achieve cross-lingual alignment without training remains underexplored. To bridge these gaps, our study conducts an in-depth examination of the impact of SFT on cross-lingual generation. We investigate the influence of SFT on the decoding patterns of foundation models in cross-lingual contexts, hypothesizing that the success of SFT largely hinges on the selection of initial prior tokens that are critical for eliciting task-specific generation in the target language. Furthermore, the observed decoding similarities between foundation and SFT models support the extension of the superficial alignment hypothesis to cross-lingual scenarios.

Figure 1: Illustration of our research question and proposed Prefix TexT as a Yarn (PRETTY) framework.

Responding to these insights, we introduce a training-free alignment method named \u201cPRETTY\u201d for cross-lingual and non-English tasks. The Prefix TexTs act as a Yarn (PRETTY), linking the foundation LLM and the SFT LLM and eliciting the foundation LLM to exhibit near-SFT performance levels. Specifically, we augment the original input with a few tokens that serve as decoding priors, and then prompt the foundation LLM to resume decoding based on this modified input. In most cases, only one or two task-related prior tokens are needed, and the method for constructing these prior tokens is flexible across various kinds of language resources, fostering the democratization of multilingual LLMs.
We conducted experiments on machine translation (Goyal et al., 2022), cross-lingual summarization (Bhattacharjee et al., 2023) and non-English part-of-speech (POS) tagging (Liang et al., 2020) tasks across eight languages. These tasks exemplify cross-lingual generation and multilingual language understanding, and they provide ample non-English test data to evaluate effectiveness across varying levels of resource availability. The experimental results demonstrate that PRETTY can effectively align the foundation model to match the SFT model's performance without training, by merely adding two prior tokens in the decoding.

2 Iceberg Model of SFT

2.1 Preliminaries

Pre-training. The pre-training (PT) of LLMs is primarily conducted through language modeling tasks on large-scale unlabeled data (Touvron et al., 2023; Achiam et al., 2023). During this phase, given a sequence $X_{\mathrm{PT}}$ of length $N$ and a context window $k$, the optimization objective is maximizing the joint probability $P_{\mathrm{LM}}$ as:

$$P_{\mathrm{LM}}(X_{\mathrm{PT}}) = \prod_{i=1}^{N} P(x_i \mid x_{i-k:i-1}) \quad (1)$$

which encourages the model to generate text that naturally follows from the preceding context. However, this \u201ctext completion\u201d behavior can become a bottleneck when models are prompted to switch languages or follow specific instructions of cross-lingual generation. It is frequently observed that when prompted with English input and instructed to produce text in a different language, as illustrated in the upper example of Figure 1, the foundation model often continues to decode in English.

SFT. SFT leverages labeled data pairs $(X_{\mathrm{ins.}}, Y)$ to empower models with the ability to follow instructions. This stage aims to maximize the probability of the expected answer $Y$ conditioned on the input text $X_{\mathrm{ins.}}$, where $X_{\mathrm{ins.}}$ consists of the task instruction and the task input.

$$P_{\mathrm{SFT}}(Y \mid X_{\mathrm{ins.}}) = \prod_{j=1}^{T} P(y_j \mid y_{1:j-1}, X_{\mathrm{ins.}}) \quad (2)$$

SFT is crucial for aligning foundation models to perform task-specific instructions, effectively transforming a general-purpose LLM into an instruction-following assistant. However, data quality, training costs, and the imbalance of multilingual data hinder the democratization of assistant LLMs. As mentioned before, SFT may be harmful to pre-training knowledge. Thus, it is meaningful and important to understand the underlying mechanism of SFT-based alignment and to propose a more efficient alignment method.
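To make Eq. (2) concrete, here is a minimal sketch of the SFT objective as it is typically implemented, assuming a HuggingFace-style causal LM; the function name and the simple length-based masking are our illustrative assumptions, not the authors' code:

import torch
import torch.nn.functional as F

def sft_loss(model, input_ids, instruction_len):
    # input_ids holds [X_ins; Y] concatenated. We maximize P(Y | X_ins)
    # by excluding the instruction positions from the loss (Eq. 2).
    logits = model(input_ids).logits[:, :-1, :]      # predict token t+1 from prefix up to t
    labels = input_ids[:, 1:].clone()
    labels[:, : instruction_len - 1] = -100          # ignore targets inside X_ins
    return F.cross_entropy(logits.reshape(-1, logits.shape[-1]),
                           labels.reshape(-1), ignore_index=-100)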
2.2 Beneath the SFT-based Alignment

Prior Knowledge Hypothesis. It is worth noting that pre-training corpora also contain sequences that naturally express task-specific information, which imparts certain capabilities to the foundation LLMs. For example, the presence of semantically equivalent expressions in the pre-training text may enable the LLM to acquire machine translation ability during the pre-training stage (Radford et al., 2019). Despite its extensive prior knowledge, the foundation LLM still struggles with complex cross-lingual generation tasks. Beyond existing studies, we provide more concrete insights into this issue by prompting foundation LLMs with various instructions (Bawden and Yvon, 2023). Notably, only 31.8% of these prompts successfully elicit translation capability from the foundation LLMs.1 This deficiency may stem from two main factors. First, the proportion of text with the aforementioned characteristics in the pre-training corpus $X_{\mathrm{PT}}$ is still relatively small, and most of it is far from resembling human instruction text $X_{\mathrm{ins.}}$. Consequently, the model is more likely to predict tokens suitable for completing formal texts than those required for task-specific instructions. As a result, the foundation LLM often fails to produce tokens $y \in Y_{1:T}$ in the intended target language. Secondly, the predominance of English in the pre-training data skews the token generation probabilities of the foundation LLM. Given a cross-lingual context, the model favors predicting tokens in English, while the token probabilities for other languages remain comparatively low. For example, English data comprises up to 90% of the Llama2 pre-training data (Touvron et al., 2023), which may lead models to generate text with an English-centric bias.

1 For detailed information, please refer to Appendix B.3.

The above hypothesis might be reasonable when we revisit Equation (1) and Equation (2). The probability $P_{\mathrm{LM}}(X_{\mathrm{PT}})$ of the next-token prediction for the foundation model is conditioned on the distribution of the pre-training text $X_{\mathrm{PT}}$. SFT narrows the probability space for token selection, adjusting the parameters to better align with the instruction distribution, i.e., the probability $P_{\mathrm{SFT}}(y \mid X_{\mathrm{ins.}})$ is conditioned on the distribution of the instruction text $X_{\mathrm{ins.}}$.

Experimental Settings. To validate the aforementioned hypothesis, we selected the representative cross-lingual task of machine translation as our analytical testbed. The main research method involved quantifying the differences and similarities in the decision space and token selection behavior between the foundation LLM and the SFT-aligned LLM. For the model selection, we chose the foundation Llama2 7B model and conducted supervised fine-tuning on it using the Alpaca dataset2 (Taori et al., 2023). The optimization was carried out using a cosine learning rate scheduler, with the maximum learning rate set to 2e-5 and a warmup ratio of 0.03. Training was performed on two Nvidia H800 GPUs using the LoRA parameter-efficient fine-tuning technique (Hu et al., 2022), with a cumulative batch size of 64. Other hyper-parameters follow those of the original Alpaca settings.

2 https://github.com/tatsu-lab/stanford_alpaca
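For reproducibility, a minimal sketch of this SFT setup using the transformers and peft libraries is given below; the checkpoint identifier, target modules, and per-device batch split are assumptions chosen to match the stated cumulative batch size, and dataset preparation is omitted:

import torch
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

# Illustrative checkpoint; any Llama2-7B foundation weights would do here.
model = AutoModelForCausalLM.from_pretrained('meta-llama/Llama-2-7b-hf',
                                             torch_dtype=torch.bfloat16)
model = get_peft_model(model, LoraConfig(task_type='CAUSAL_LM',
                                         target_modules=['q_proj', 'v_proj']))
args = TrainingArguments(
    output_dir='alpaca-lora',
    learning_rate=2e-5,             # maximum learning rate, as above
    lr_scheduler_type='cosine',     # cosine schedule, as above
    warmup_ratio=0.03,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # 8 x 4 x 2 GPUs = cumulative batch size 64
)
# Trainer(model=model, args=args, train_dataset=...).train()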
A Prior Token Elicits the Silent Majority. Inspired by the categorization of token shifts by Lin et al. (2023), we propose to quantify the agreement of token selection between the foundation LLM $\theta_{\mathrm{PT}}$ and the SFT LLM $\theta_{\mathrm{SFT}}$. Given the same prefix input $\hat{X}$, we aim to measure whether the next token selected by the SFT LLM, $y_{\mathrm{SFT}}$, is among the top-$K$ tokens, $y_{\mathrm{PT}}$, with the highest probabilities in the decision space of the foundation LLM, which can be formally expressed as follows:

$$y_{\mathrm{SFT}} = \arg\max_{y \in V} P(y \mid \hat{X}; \theta_{\mathrm{SFT}})$$
$$y_{\mathrm{PT}} = \{\, y \mid \arg\mathrm{top}K_{y \in V}\, P(y \mid \hat{X}; \theta_{\mathrm{PT}}) \,\}$$
$$\mathrm{Agreement}@K = \frac{1}{L} \sum_{l=1}^{L} \mathbb{1}_{y_{\mathrm{SFT}} \in y_{\mathrm{PT}}} \quad (3)$$

where $V$ is the vocabulary shared by the two models, and $L$ is the size of the dataset. We compare the agreement of the token selection made by the models under the same prefix text $\hat{X}$ in two different experimental setups. The first setup uses the instruction text as the prefix, i.e., $\hat{X} = X_{\mathrm{ins.}}$; the second takes the first token decoded by the SFT model as a prior token, appending it to the original instruction prefix, i.e., $\hat{X} = [X_{\mathrm{ins.}}, y^{(1)}_{\mathrm{SFT}}]$. For the SFT model, the second setup is equivalent to continuing its own decoding behavior, whereas for the foundation model, it becomes decoding with the addition of a prior token.

Figure 2 illustrates the agreement between the foundation model's predictions and those of the SFT model regarding the selection of the next token, given an identical text prefix. Across the entire translation dataset, it is observed that after incorporating merely one prior token, the foundation model exhibits a high degree of agreement with the SFT model in terms of token selection. This demonstrates that the alignment effect of SFT in cross-lingual generation tasks is also somewhat superficial. Even in instances where the token with the highest probability differs between the two models, 90.8% of the tokens chosen by the SFT model are present within the \u201csilent majority\u201d in the decision space of the foundation model, specifically, among the top 20 most probable token choices.

Figure 2: The agreement between the SFT model and the foundation model in terms of the selection of the next token. Once the prior token is provided, the token chosen by the SFT model can also be found within the top-K candidate words of the foundation model.

Lens of Distribution. Instead of focusing on the coverage of token selection outcomes, we also observe the decision dynamics and similarities from the perspective of the overall probability distribution, with the data settings consistent with the previous setup. First, as shown in Figure 3, after adding a prior token, the probabilities of the next tokens chosen by both models follow closely aligned distributions. The foundation model exhibits a high probability given the instruction text as a prefix because it prefers to continue the instruction text rather than complete the cross-lingual semantic transformation.

Figure 3: The probability distribution of tokens selected by various models. Incorporation of a prior token causes the decision probabilities of both models to converge across all data instances.

Additionally, we quantify the distribution disparities between the two models through the probability distribution over the vocabulary. The disparity metrics used include Kullback-Leibler (KL) divergence, Jensen-Shannon (JS) divergence, and cross-entropy (Kullback, 1997). As depicted in Figure 4, the disparity of the decision space of the foundation model significantly decreases after adding the prior token, aligning more closely with the SFT model. These findings indicate that such prior tokens serve a dual function: they not only steer the foundation model towards generating tokens pertinent to cross-lingual generation but also modulate the decision space to align more closely with the task-specific distribution.

Figure 4: The divergence in probability distributions across the entire vocabulary during decoding. The prior token significantly reduces the discrepancy between the foundation model and the SFT model.
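A minimal sketch of how Agreement@K (Eq. 3) can be measured per example, assuming two HuggingFace-style causal LMs sharing one vocabulary; the function name is ours:

import torch

@torch.no_grad()
def agreement_at_k(pt_model, sft_model, prefix_ids, k=20):
    # Next-token distributions of both models on the same prefix X_hat.
    pt_logits = pt_model(prefix_ids).logits[:, -1, :]
    sft_logits = sft_model(prefix_ids).logits[:, -1, :]
    y_sft = sft_logits.argmax(dim=-1)            # token chosen by the SFT model
    y_pt = pt_logits.topk(k, dim=-1).indices     # top-K candidates of the foundation model
    # 1 if the SFT choice falls within the foundation model's top-K, else 0.
    return (y_pt == y_sft.unsqueeze(-1)).any(dim=-1).float().mean().item()

Averaging this indicator over the dataset yields Agreement@K; the same forward passes also expose the full vocabulary distributions needed for the KL/JS divergence comparison above.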
3 Pretty: Prefix TexT as a Yarn

3.1 Motivation

The observations discussed earlier confirm that SFT effectively narrows the decision space of the foundation model during text generation conditioned on instruction text. The disparity in token selection between the foundation LLM and the SFT LLM, however, need not be reduced through a training-based transfer methodology. By appending a prior token to the instruction text, the choices of the next token between the two models tend to become largely consistent, and in the vast majority of cases, the tokens chosen by the SFT model are also found within the high-probability candidate words of the foundation model. These phenomena show that the alignment elicited by SFT is somewhat superficial in cross-lingual generation tasks and motivate us to propose a training-free alignment method by leveraging these prior tokens.

3.2 Formulation

Upon revisiting Equation (1) and Equation (2), the goal of proposing a training-free approach is to enable the conditional decoding probability of the foundation model to approximate that of the SFT model. Therefore, ideally, the selected prior tokens $X_{\mathrm{pri.}} = \{x_{\mathrm{pri.}}\}$ should satisfy the following criterion:

$$P(y_{\mathrm{PT}} \mid [X_{\mathrm{ins.}}, X_{\mathrm{pri.}}]; \theta_{\mathrm{PT}}) \approx P(y_{\mathrm{SFT}} \mid X_{\mathrm{ins.}}; \theta_{\mathrm{SFT}}) \quad (4)$$

where $y_{\mathrm{PT}}$ and $y_{\mathrm{SFT}}$ represent the outputs of the foundation and the SFT models, respectively. It is important to note that a single prior token may not serve as an optimal solution due to its non-derivable characteristic. Hence, we extend our methodological approach to include appending multiple prior tokens, grouping them to form a prefix text.

3.3 Construction of Prior Tokens

To ensure that the proposed method is applicable to a wide array of languages, we propose three construction strategies based on the availability of language resources, aiming to guarantee the universality of our approach.

SFT Prior represents an ideal scenario where the first few tokens generated by an SFT model are used as priors. This method is theoretically rational when the SFT model is derived from the same foundation model, because it directly approximates Equation (4) by sampling $x_{\mathrm{pri.}} \sim \{y_{\mathrm{SFT}}\}$. In practical applications, this might be suitable for high-resource languages due to the imbalanced language capabilities across other languages. Additionally, SFT could potentially degrade the knowledge and abilities that the foundation model has already acquired. In such cases, using prior tokens from the SFT model can contribute to generating better results. This situation will be discussed further in the subsequent section.

Refined Prior is more readily accessible for most languages and tasks. We can utilize the output tokens generated by a smaller model trained for specific downstream tasks and use them as prior tokens to achieve weak-to-strong generalization (Burns et al., 2023).

Pseudo Prior. For extremely low-resource language pairs, where there is no labeled data for downstream tasks, both SFT and Refined priors are difficult to obtain. For cross-lingual tasks, we can instead create pseudo labels in the target language as prior tokens. For instance, in machine translation tasks, we might use bilingual dictionaries to acquire pseudo prior tokens. However, the quality and accuracy of pseudo labels remain uncertain, and the extent of their impact on the generative performance of the foundation LLM is not yet clear. We will explore this problem further in the context of the experimental results discussed later in the paper.
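A minimal sketch of prior-token decoding under Eq. (4), assuming a HuggingFace-style foundation model; pretty_generate is our illustrative name, not the authors' released code:

def pretty_generate(model, tokenizer, instruction, prior_text, max_new_tokens=256):
    # Append the prior tokens X_pri to the instruction X_ins and let the
    # foundation model resume decoding from the modified input.
    inputs = tokenizer(instruction + prior_text, return_tensors='pt').to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    continuation = tokenizer.decode(output_ids[0, inputs.input_ids.shape[1]:],
                                    skip_special_tokens=True)
    # The prior text is the start of the answer, so it is kept in the output.
    return prior_text + continuation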
4 Experiments

We examine the effectiveness of our proposed training-free alignment method on three distinct tasks: machine translation, cross-lingual summarization, and non-English POS tagging. Machine translation serves as a prototypical cross-lingual generation task, entailing the transformation of a sequence from a source language to a target language (Bahdanau et al., 2015; Vaswani et al., 2017; Zhan et al., 2023). As for cross-lingual summarization, it requires the model to generate a summary of an article in a different language (Bhattacharjee et al., 2023; Chen et al., 2023). Although POS tagging (Manning, 2011; Nivre et al., 2017; Chiche and Yitagesu, 2022) primarily assesses the model's ability to understand monolingual text, we include it in our multilingual experiments to show the universality of our method.

4.1 Experimental Settings

Data. We use Flores-101 (Goyal et al., 2022) and CrossSum (Bhattacharjee et al., 2023) as benchmarks for the machine translation and cross-lingual summarization tasks, respectively. For the POS tagging task, we choose the POS test split from the XGLUE benchmark (Liang et al., 2020), which is derived from the Universal Dependencies Treebank v2.5. To investigate performance across languages of varying resource levels, we carefully selected eight languages based on the pre-training data proportions disclosed in the Llama2 technical report (Touvron et al., 2023). These languages are French, German, Chinese, Russian, Ukrainian, Portuguese, Hindi and Arabic. Among these, the first four languages each account for more than 0.1% of the pre-training data of Llama2, while Ukrainian and Portuguese fall below 0.1%, and Hindi and Arabic are below 0.05%. For the Llama2 model, we can accordingly categorize these three groups as high-resource, low-resource, and extremely low-resource languages, respectively.

Models and Baselines. The settings of the Llama2 foundation model and the SFT model are consistent with those described in Section 2.1. To further demonstrate the generality of our proposed method, we incorporated the Mistral-7B LLM family (Jiang et al., 2023) into our experiments, covering both out-of-the-box SFT and foundation models. In the machine translation task, the Llama2 foundation model does not tend to generate translations when given explicit translation instructions. While this is a normal phenomenon according to our previous discussion, to ensure a fair comparison, we also searched for a better prompt for the foundation model. This prompting approach is referred to as \u201cLlama2-7BPROMPTING\u201d in subsequent sections. For POS tagging, we experimented with various instructions and selected one that consistently prompts both the foundation model and the SFT model to reliably generate classification results in text. Although we report the zero-shot performance for the aforementioned tasks, we found that even out-of-the-box SFT models cannot produce stable output for the cross-lingual summarization task. Hence, we prepend a constant demonstration before the input to also assess the effectiveness of our proposed method under the in-context learning paradigm (Dong et al., 2023c).

Sources of Prior Tokens. The sources for crafting prior tokens include:

\u2022 SFT Prior: We took the first k tokens of the output produced by the SFT model as the prior tokens. When multiple SFT models are available, we select the model that demonstrates better performance.

\u2022 Refined Prior: We use downstream task models with smaller parameter sizes as the source of refined priors.
For the different tasks, we utilized the distilled 600M variant of the NLLB-200 translation model3 (Costa-jussà et al., 2022), the mT5 cross-lingual summarization model4, and the Unicoder-NLU model5 (Huang et al., 2019), respectively.

\u2022 Pseudo Prior: The pseudo prior is applied to the two cross-lingual tasks, since it can utilize cross-lingual language resources. We create pseudo prior tokens for the machine translation task by referencing dictionary6 entries. For cross-lingual summarization, we initially extract keywords from each passage using KeyBERT (Grootendorst, 2020) and then perform word-by-word translation. However, not all initial sentence tokens will be covered by the dictionary. To handle such instances, a back-off strategy is implemented, where the target-language equivalent of the first available dictionary token is used as the prior token (a sketch of this procedure follows below).

3 https://huggingface.co/facebook/nllb-200-distilled-600M
4 https://hf.co/csebuetnlp/mT5_m2m_crossSum
5 https://github.com/microsoft/Unicoder/
6 Please refer to Appendix B.4 for dictionary information.
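A minimal sketch of the dictionary-based pseudo-prior construction with back-off, as referenced above; the dictionary format (a plain word-to-word mapping) is an assumption for illustration:

def build_pseudo_prior(source_tokens, bilingual_dict, k=1):
    # Word-by-word lookup: back off to the first source token covered by the
    # dictionary and use its target-language equivalent as the prior token(s).
    for token in source_tokens:
        target = bilingual_dict.get(token.lower())  # assumed: word -> target word
        if target is not None:
            return target.split()[:k]
    return []  # no dictionary coverage: decode without a pseudo prior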
English-Centric Models (spBL. / CoM. per direction):

| Model | En-Zh | En-Uk | Zh-En | Uk-En | Avg. | %SFT. |
|---|---|---|---|---|---|---|
| Llama2-7B-Alpaca | 13.6 / 80.9 | 24.0 / 83.3 | 23.5 / 85.1 | 34.4 / 85.5 | 23.9 / 83.7 | - |
| Llama2-7B-Chat | 7.8 / 67.2 | 18.1 / 71.0 | 18.5 / 81.3 | 30.4 / 83.3 | 18.7 / 75.7 | - |
| Llama2-7BPROMPTING | 5.9 / 64.1 | 11.0 / 60.9 | 24.3 / 84.8 | 34.2 / 85.0 | 18.9 / 73.7 | 80.4 |
| Llama2-7B | 7.7 / 72.0 | 0.2 / 32.4 | 12.0 / 74.4 | 9.3 / 59.2 | 7.3 / 59.5 | 52.5 |
| +PRETTY (SFT Prior) | 13.3 / 80.0 | 23.0 / 83.1 | 23.7 / 84.9 | 33.6 / 85.3 | 23.4 / 83.3 | 98.8 |
| +PRETTY (Pseudo Prior) | 12.0 / 75.7 | 18.1 / 74.1 | 16.9 / 80.3 | 27.2 / 78.3 | 18.6 / 77.1 | 85.4 |
| +PRETTY (Refined Prior) | 14.2 / 80.5 | 24.1 / 83.8 | 24.0 / 84.9 | 34.6 / 85.6 | 24.2 / 83.7 | 100.9 |
| Mistral-7B-Instruct | 6.6 / 64.6 | 20.3 / 78.2 | 20.5 / 83.2 | 32.9 / 84.8 | 20.1 / 77.7 | - |
| Mistral-7B | 1.2 / 42.6 | 0.3 / 30.8 | 19.9 / 77.1 | 21.5 / 69.4 | 10.7 / 55.0 | 46.2 |
| +PRETTY (SFT Prior) | 13.8 / 78.1 | 23.1 / 79.2 | 20.0 / 82.3 | 32.1 / 83.3 | 22.3 / 80.7 | 117.2 |
| +PRETTY (Pseudo Prior) | 13.3 / 75.8 | 20.1 / 75.7 | 16.5 / 79.7 | 24.9 / 77.3 | 18.7 / 77.1 | 107.2 |
| +PRETTY (Refined Prior) | 15.9 / 81.3 | 24.9 / 82.9 | 21.5 / 83.0 | 32.3 / 83.9 | 23.7 / 82.7 | 124.6 |

Non-English-Centric Models (spBL. / CoM. per direction):

| Model | De-Fr | Fr-De | Zh-Pt | Pt-Zh | Avg. | %SFT. |
|---|---|---|---|---|---|---|
| Llama2-7B-Alpaca | 29.8 / 81.5 | 24.1 / 80.9 | 16.6 / 81.4 | 11.3 / 78.6 | 20.5 / 80.6 | - |
| Llama2-7B-Chat | 6.2 / 68.0 | 7.3 / 64.5 | 3.0 / 67.8 | 6.2 / 66.6 | 5.7 / 66.7 | - |
| Llama2-7BPROMPTING | 22.2 / 77.4 | 15.4 / 73.3 | 14.4 / 78.9 | 4.4 / 64.1 | 14.1 / 73.4 | 78.5 |
| Llama2-7B | 1.0 / 51.1 | 3.2 / 54.0 | 0.9 / 61.4 | 7.3 / 70.0 | 3.1 / 59.1 | 47.6 |
| +PRETTY (SFT Prior) | 28.2 / 80.6 | 23.0 / 80.4 | 16.3 / 81.1 | 10.5 / 77.4 | 19.5 / 79.9 | 97.2 |
| +PRETTY (Pseudo Prior) | 18.3 / 68.9 | 17.3 / 72.2 | 11.6 / 70.4 | 5.0 / 65.6 | 13.1 / 69.3 | 73.9 |
| +PRETTY (Refined Prior) | 29.1 / 81.4 | 22.9 / 80.4 | 17.1 / 81.1 | 12.2 / 79.4 | 20.3 / 80.6 | 100.4 |
| Mistral-7B-Instruct | 22.1 / 76.1 | 20.4 / 75.9 | 10.5 / 74.8 | 3.3 / 60.2 | 14.1 / 71.8 | - |
| Mistral-7B | 1.2 / 46.1 | 1.6 / 40.6 | 1.0 / 52.8 | 0.4 / 43.6 | 1.1 / 45.8 | 36.5 |
| +PRETTY (SFT Prior) | 20.1 / 73.3 | 20.7 / 75.1 | 11.0 / 74.7 | 6.8 / 67.3 | 14.7 / 72.6 | 113.8 |
| +PRETTY (Pseudo Prior) | 18.1 / 66.4 | 17.3 / 70.4 | 5.9 / 65.6 | 3.7 / 59.4 | 11.3 / 65.5 | 87.7 |
| +PRETTY (Refined Prior) | 28.3 / 78.8 | 22.3 / 78.5 | 14.2 / 78.6 | 13.6 / 80.6 | 19.6 / 79.1 | 153.8 |

Table 1: Translation performance of different models on Flores-101 subsets. Bold values indicate the best performance among foundation models; the overall best results are underlined. \u201c%SFT.\u201d denotes the relative performance compared to the best SFT model of each family.

For the two cross-lingual tasks, the first k = 2 tokens are chosen as the prior tokens. This helps to avoid inadequate guidance from single non-informative tokens like punctuation or numbers. In the case of the pseudo prior, due to the back-off strategy, only one token is used for fair comparison. For the POS tagging task, the strategy is more straightforward, with only the first k = 1 label considered as the prior token.

4.2 Evaluation

To ensure the integrity of the output data from all models, we standardized the output by cleaning it in accordance with the specific output style of each model. Subsequently, we conducted a manual inspection to guarantee that only the required labels were retained.

Task-specific Metrics. We use two metrics to evaluate translation quality: spBLEU7 (Goyal et al., 2022) and COMET8 (Rei et al., 2020). We employed the ROUGE (Lin, 2004) and LaSE (Bhattacharjee et al., 2023) metrics for the evaluation of summarization quality. For the POS tagging task, we report both the precision score and the F1 score.

7 https://github.com/mjpost/sacrebleu/
8 https://github.com/Unbabel/COMET

Relative Performance. We further compute the ratio of the performance scores of the foundation model to the scores of the SFT model under the application of different strategies. This ratio serves as a metric for assessing the extent to which the foundation model approximates the SFT model's performance when different strategies are applied.

Llama2-7B family, w/ constant 1-shot demonstration (R2 / RL / LS per direction):

| Model | En-Zh | En-Hi | Uk-Pt | Ar-Ru | Avg. | %SFT. |
|---|---|---|---|---|---|---|
| Llama2-7B-Alpaca | 7.0 / 12.4 / 11.9 | 1.7 / 10.7 / 17.3 | 1.5 / 6.1 / 5.8 | 0.1 / 0.5 / 1.3 | 2.6 / 7.4 / 9.1 | - |
| Llama2-7B-Chat | 6.3 / 11.6 / 8.7 | 1.5 / 11.7 / 27.1 | 2.5 / 8.3 / 7.1 | 0.0 / 0.3 / 0.2 | 2.6 / 8.0 / 10.7 | - |
| Llama2-7B | 9.3 / 16.6 / 29.2 | 1.6 / 10.2 / 15.3 | 0.8 / 4.0 / 1.9 | 0.6 / 4.1 / 15.5 | 3.1 / 7.6 / 12.1 | 262.4 |
| +PRETTY (SFT Prior) | 7.4 / 13.9 / 25.9 | 1.5 / 9.7 / 12.9 | 1.9 / 6.7 / 9.8 | 0.1 / 0.4 / 0.8 | 2.7 / 6.7 / 9.8 | 106.3 |
| +PRETTY (Pseudo Prior) | 8.0 / 14.5 / 29.1 | 1.4 / 9.9 / 14.5 | 2.5 / 9.1 / 13.6 | 1.2 / 5.9 / 23.5 | 3.3 / 8.5 / 15.4 | 387.5 |
| +PRETTY (Refined Prior) | 11.2 / 19.0 / 32.6 | 1.6 / 10.8 / 15.9 | 3.4 / 10.5 / 11.3 | 1.5 / 7.9 / 30.1 | 4.4 / 10.5 / 17.5 | 490.6 |

Mistral-7B family, w/ constant 1-shot demonstration (R2 / RL / LS per direction):

| Model | En-Zh | En-Hi | Uk-Pt | Ar-Ru | Avg. | %SFT. |
|---|---|---|---|---|---|---|
| Mistral-7B-Instruct | 5.9 / 12.2 / 17.2 | 1.0 / 10.3 / 23.4 | 1.5 / 6.2 / 17.7 | 0.4 / 2.6 / 12.8 | 2.2 / 7.8 / 17.8 | - |
| Mistral-7B | 12.3 / 20.9 / 44.5 | 1.6 / 10.6 / 17.6 | 4.8 / 12.9 / 27.7 | 1.8 / 6.5 / 23.3 | 5.1 / 11.2 / 21.6 | 206.1 |
| +PRETTY (SFT Prior) | 9.7 / 17.6 / 40.7 | 1.4 / 10.0 / 17.0 | 2.3 / 7.9 / 17.5 | 0.2 / 1.1 / 3.2 | 3.4 / 8.0 / 15.0 | 114.5 |
| +PRETTY (Pseudo Prior) | 9.9 / 17.5 / 41.0 | 1.4 / 9.9 / 17.4 | 3.1 / 11.6 / 35.1 | 1.7 / 7.9 / 32.9 | 4.0 / 10.2 / 23.5 | 195.8 |
| +PRETTY (Refined Prior) | 15.0 / 24.1 / 49.6 | 1.8 / 11.3 / 19.7 | 5.5 / 16.5 / 46.9 | 2.6 / 10.9 / 42.0 | 6.2 / 13.8 / 29.7 | 275.6 |

Table 2: Summarization performance of different models on CrossSum subsets. \u201cR2/RL\u201d and \u201cLS\u201d refer to the ROUGE and LaSE scores, respectively. Bold values indicate the best performance among foundation models; the overall best results are underlined. \u201c%SFT.\u201d denotes the relative performance compared to the best SFT model.

| Model | Fr (Prec. / F1) | Zh (Prec. / F1) | Pt (Prec. / F1) | Ru (Prec. / F1) | Ar (Prec. / F1) | Avg. Prec. | %SFT. |
|---|---|---|---|---|---|---|---|
| Llama2-7B-Alpaca | 48.2 / 42.8 | 38.6 / 36.3 | 40.7 / 35.9 | 42.3 / 36.7 | 34.4 / 30.8 | 38.7 | - |
| Llama2-7B | 45.0 / 37.9 | 39.8 / 36.2 | 39.8 / 33.2 | 42.5 / 33.8 | 36.5 / 32.1 | 37.7 | 97.4 |
| +PRETTY (SFT Prior) | 54.8 / 50.0 | 38.0 / 33.5 | 49.1 / 45.3 | 49.7 / 44.1 | 35.1 / 31.1 | 43.1 | 111 |
| +PRETTY (Refined Prior) | 59.3 / 54.8 | 43.0 / 38.8 | 54.5 / 50.6 | 55.3 / 49.2 | 44.0 / 39.6 | 48.9 | 126 |

Table 3: POS tagging performance of different Llama2 models on XGLUE subsets. Bold values indicate the best performance among foundation models; the overall best results are underlined. \u201c%SFT.\u201d denotes the relative performance compared to the Alpaca model.
4.3 Main Results

Machine Translation. As shown in Table 1, for the machine translation task, using up to two prior tokens as decoding guidance allows the base model to achieve performance comparable to that of a model after SFT. Moreover, on some language pairs, the translation performance even outperforms the SFT model when guided by Refined Prior tokens from a smaller model. For the Llama2 model family, the prior tokens provided by the SFT model, although slightly less effective, still allow the foundation model to achieve 98% of the performance of the SFT model. On the other hand, the use of pseudo labels derived from a dictionary exhibits the least effectiveness, yet this strategy still surpasses the results achieved through costly prompt engineering.

Cross-lingual Summarization. The results presented in Table 2 indicate that the foundation model exhibited superior performance compared to the SFT model in this in-context learning scenario. For prior-guided decoding, the performance of the foundation model was degraded when using prefix tokens from the SFT model, and the small performance gap in this setting suggests that the alignment achieved by the SFT model is relatively \u201csuperficial\u201d. Notably, the performance of the Llama2 foundation model significantly improved when other priors were provided, even when using translated keywords as pseudo labels.

Non-English POS tagging. The performance results of the POS tagging task are presented in Table 3. These results align with the insights gleaned from the machine translation task, specifically regarding the strategy of prior token construction. Notably, for the POS tagging task, the performance of the SFT model on most languages falls short of the foundation model, suggesting that SFT detrimentally affects the knowledge learned at the pre-training stage. Encouragingly, the foundation model, when empowered by an auxiliary prior token, surpasses the performance of the SFT model as well as its own prompting results, highlighting the potential of our proposed method in mitigating the catastrophic forgetting problem associated with SFT.

5 Analysis and Discussion

5.1 Quality of Prior Tokens

To investigate the quality of prior tokens from different sources and how they impact the final performance, we further analyze why the prior tokens given by the SFT model are less effective than those from external auxiliary models in the POS tagging task. Unlike the machine translation task, the positional result for the POS task is definite, so we are able to verify whether it corresponds to a ground-truth label. The results in Table 4 confirm two points. First, even if the prior tokens provided by the SFT model are of low quality, the foundation model does not suffer from severe error propagation. Second, the final performance of the proposed method is still associated with the quality of prior tokens. This suggests that prior tokens closely aligned with the ground truth can steer the foundation model towards a more accurate decision trajectory, thereby yielding superior performance.

| Prior source | Fr | Zh | Pt | Ru | Ar |
|---|---|---|---|---|---|
| SFT Prior | 18.3 | 18.3 | 3.74 | 16.3 | 12.1 |
| Refined Prior | 88.9 | 88.9 | 88.54 | 87.7 | 79.6 |

Table 4: Accuracy of prior tokens used in the POS tagging task. SFT prior tokens are of inferior quality.

5.2 Choice of Prior Tokens

Based on the findings from the previous section, if incorrect labels used as prior tokens can still elicit the ability of the foundation model, could random prior tokens in the target language likewise trigger cross-lingual generative capabilities?
To investigate this, we attempted to use random tokens of different parts of speech as the prior tokens in the English-Chinese machine translation task. For instance, \u201cModal Prior\u201d refers to the use of a randomly picked modal verb in Chinese as the initial token. The results shown in Table 5 indicate that the model could not be aligned to a better decision trajectory by these random prior tokens, whether they were function words or tokens with actual meaning. This supports the validity of our proposed methods for constructing prior tokens and also supplements previous findings. From this, we can summarize a rule about prior tokens: they can be of low quality, but they should not be completely unrelated to the target sequence.

| Model | spBLEU | COMET | BLEU |
|---|---|---|---|
| Llama2-7B | 7.7 | 72.01 | 16.1 |
| + Modal Prior | 8.0 | 68.29 | 16.0 |
| + Adverb Prior | 6.4 | 63.72 | 13.1 |
| + Random Prior | 6.2 | 57.11 | 11.5 |

Table 5: Comparison of translation performance using three types of random prior tokens.

5.3 Number of Prior Tokens

Figure 5 depicts the relationship between the number of prior tokens provided and the resulting changes in translation performance. It becomes apparent that performance generally improves with the addition of more tokens. Additionally, we note that introducing two prior tokens appears to be a performance inflection point, which may be due to instances where the initial token is a punctuation mark or a number.

Figure 5: Impact of incrementally adding refined prior tokens on performance across Flores-101 subsets.

6 Conclusions

In this paper, we investigate and analyze the decision-making discrepancies between the foundation model and the SFT model within cross-lingual generation contexts. Drawing from our analysis, we introduce a novel cross-lingual alignment method that requires no additional training and is resource-efficient. The proposed method aligns the foundation LLM to perform comparably with the SFT model solely by utilizing prefix text as priors during generation. In the future, we aim to broaden our research to encompass additional alignment scenarios, such as those involving reinforcement learning from human feedback.

Limitations

The primary limitations of our study stem from the scope of model validation. Our research is limited to 7B models. Future endeavors should aim to extend the validation to a broader scope of models and incorporate various parameter scales to support the universality of our findings. Furthermore, the availability of language resources is still a practical problem, particularly for low-resource languages where access to SFT Prior and Refined Prior sources is limited. Despite these challenges, our experimental results indicate that Pseudo Prior tokens still exhibit promising potential. It is important to note, however, that the development of pseudo tags may require a dedicated investigation into the linguistic rules specific to each downstream task. This process is inherently time-intensive and resource-demanding.

Acknowledgements

This work was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/0070/2022/AMJ, FDCT/060/2022/AFJ), Ministry of Science and Technology of China (Grant No. 2022YFE0204900), National Natural Science Foundation of China (Grant No. 62261160648), the Multi-year Research Grant from the University of Macau (Grant No. MYRG-GRG2023-00006FST-UMDF), and Tencent AI Lab Rhino-Bird Gift Fund (Grant No. EF2023-00151-FST).
This work was performed in part at SICC, which is supported by SKL-IOTSC, and at HPCC, supported by ICTO of the University of Macau.", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2404.14469v1", + "title": "SnapKV: LLM Knows What You are Looking for Before Generation", + "abstract": "Large Language Models (LLMs) have made remarkable progress in processing\nextensive contexts, with the Key-Value (KV) cache playing a vital role in\nenhancing their performance. However, the growth of the KV cache in response to\nincreasing input length poses challenges to memory and time efficiency. To\naddress this problem, this paper introduces SnapKV, an innovative and\nfine-tuning-free approach that efficiently minimizes KV cache size while still\ndelivering comparable performance in real-world applications.\n We discover that each attention head in the model consistently focuses on\nspecific prompt attention features during generation. Meanwhile, this robust\npattern can be obtained from an `observation' window located at the end of the\nprompts. Drawing on this insight, SnapKV automatically compresses KV caches by\nselecting clustered important KV positions for each attention head. Our\napproach significantly reduces the growing computational overhead and memory\nfootprint when processing long input sequences. Specifically, SnapKV achieves a\nconsistent decoding speed with a 3.6x increase in generation speed and an 8.2x\nenhancement in memory efficiency compared to baseline when processing inputs of\n16K tokens. At the same time, it maintains comparable performance to baseline\nmodels across 16 long sequence datasets. Moreover, SnapKV can process up to\n380K context tokens on a single A100-80GB GPU using HuggingFace implementation\nwith minor changes, exhibiting only a negligible accuracy drop in the\nNeedle-in-a-Haystack test. Further comprehensive studies suggest SnapKV's\npotential for practical applications.", + "authors": "Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, Deming Chen", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "SnapKV: LLM Knows What You are Looking for Before Generation", + "main_content": "Introduction

Many inspiring works have successfully expanded LLMs to handle longer contexts, overcoming the difficulties in context maintenance and attention mechanism scalability, such as GPT-4 [1] and Command-R [2] with context length 128K, Claude-3 [3] with 200K, and Gemini-Pro-1.5 with 1M [4]. Despite their impressive capabilities, LLMs still face significant challenges when dealing with long context inputs. Specifically, the KV caches in attention calculation become an obstacle to efficiently processing long context. During inference time, as input length increases, the decoding speed per step grows linearly due to the computation of attention across past KVs. Moreover, the large KV cache created during prompting requires significant on-chip and off-chip memory, increasing hardware demands and limiting model scalability.
Figure 1: The graph shows the simplified workflow of SnapKV, where the orange area represents the group of positions per head clustered and selected by SnapKV. These clustered features are then used to form a new Key-Value pair concatenated with the tokens in the observation window (denoted as \u2018Window\u2019). Together, the selected prefix and observation windows constitute the new KV cache utilized for the generation.

There are many perspectives to mitigate these problems, including KV cache eviction during token generation [5\u20138]. However, most of these methods lack a detailed evaluation of the generated context in a long-context setting. Moreover, they mainly focus on optimizing the KV cache appended during generation steps, while overlooking the realistic problem of compressing the KV cache for input sequences, which is typically the bottleneck in memory efficiency. In practical applications such as chatbots and agents, where inputs can be multi-turn conversations, extensive articles or codebases [1, 9, 10], input sizes are often much larger than the sizes of generated responses, resulting in significant overhead. An additional challenge lies in compressing such vast inputs without losing crucial information for accurate generation, especially in scenarios with various noisy contexts. In our paper, we identify the patterns of these important prompt attention features during generation. To validate the robustness of this finding, we also design a thorough set of experiments across diverse inputs in terms of length, format, and content. Based on our observations, we derive an innovative and intuitive method, SnapKV, which can effectively compress the KV cache for long sequence inputs without compromising the model's accuracy. Our contributions are as follows:

\u2022 We design experiments to explore the patterns of attention features in output generation, focusing on three key questions:

1. Is there a consistent pattern in the attention allocated to prompt tokens?
2. How does the context and instruction positioning influence this attention allocation pattern?
3. Does the nature of the user's instructions play a role in shaping these attention patterns?

Our findings suggest that most of the LLM's attention allocation over the input sequence remains unchanged during generation. Thus, LLMs know what you are looking for before generation.

\u2022 We develop an efficient algorithm, SnapKV, inspired and validated by extensive observations and testing. SnapKV intelligently identifies important KVs with minimal modification (see Fig. 1). The algorithm can be easily integrated into popular deep-learning frameworks with just a few code adjustments.

\u2022 We evaluate SnapKV for accuracy and efficiency across diverse LLMs and long-sequence datasets, affirming its improvement over previous work and comparability to conventional KV caching. Furthermore, we conduct the Needle-in-a-Haystack test to demonstrate its memory efficiency and illustrate decoding speed enhancements through varied batch sizes and input lengths. In addition, SnapKV's integration with a leading RAG model showcases its extended performance capabilities. We also show that SnapKV can be combined orthogonally with other acceleration strategies such as parallel decoding.
2 Related Works

Many previous works address KV cache compression by evicting entries with different algorithms. For example, StreamLLM [5] maintains the first few tokens and the local tokens to effectively reduce the KV cache size. However, it faces the challenge of losing important information, since it continuously evicts the KV cache.2 Another perspective is to compress the KV cache for generation steps. Heavy-Hitter Oracle [6] introduces a KV cache eviction policy that greedily selects tokens during generation steps based on a scoring function derived from cumulative attention. While this approach effectively compresses the KV cache for generated tokens, it overlooks compression of the input-sequence KV cache, which is crucial for reducing memory and computational overhead. Building on a similar concept, Adaptive KV Compression (FastGen) [8] implements a dual-phase algorithm that encompasses four KV cache compression policies. Initially, it identifies optimal policies through profiling results obtained from prompt encoding. Subsequently, it dynamically evicts caches during the generation phase based on these policies. Nonetheless, it faces a problem similar to H2O. ScissorHands [7] focuses on identifying and retaining pivotal tokens that exhibit a consistent attention-weight pattern with previous token windows during generation steps. However, this method concentrates solely on the window of previous pivotal tokens in generation and neglects the extensive input that contains essential information for generating accurate responses. This oversight could lead to an inability to extract detailed information from prompts. In summary, existing compression methods barely address the challenges encountered in real-world applications, such as document processing and multi-round chats, where prompts are exceptionally long yet require accurate information retrieval. In common use cases, the generated outputs, like summaries, code pieces, or retrieved data, are significantly shorter compared to the extensive input sequences from novels, entire code bases, or annual financial reports. Although these techniques may effectively reduce the KV cache size during the generation phase, they do not tackle the primary overhead and challenges arising from a lack of comprehension of complex input contexts, thus leaving the critical issues unresolved.

2 https://github.com/mit-han-lab/streaming-llm?tab=readme-ov-file#faq

3 Observations

In this section, we present our observations regarding the patterns in the Query-Key matrix during token generation. We discuss how these patterns can potentially be exploited for KV cache compression. Our findings are based on the analysis of various generation contexts and the behavior of attention mechanisms in LLMs, and are summarized in three key observations as follows:

1. Pattern consistency across contexts: Irrespective of the generation context length, we observed that specific keys within the prompt consistently exhibit higher attention weights. Such \u201cactive\u201d keys tend to follow stable patterns that appear to be intrinsically related to the structure and content of the prompt. (Sec. 3.1)

2. Invariance to question positions in summarization tasks: In the context of long summarization and question-answering tasks, the positioning of questions within the prompt (either at the beginning or the end) does not significantly alter the consistency of attention patterns observed. This suggests a level of robustness in how the attention of relevant features can be obtained trivially, regardless of the position of questions. (Sec. 3.2.1)
3. Contextual dependency of patterns: The observed attention patterns are highly context-sensitive, indicating a strong association with the specific instructions posed by the user (Sec. 3.2.2). Thus, a context-aware KV compression approach can potentially lead to better performance.

To structure our experimental analysis coherently, we introduce the following terminology:

Prompt Length (L_prompt): The total length of the user-provided input.
Prefix Length (L_prefix): The length of the input preceding the observation window. It is part of the prompt and does not include the observation window.
Observation Window (L_obs): The last segment of the prompt. This window is crucial for analyzing the influence of different contexts on attention patterns.

These definitions are interconnected as follows:

$$L_{\mathrm{prompt}} = L_{\mathrm{prefix}} + L_{\mathrm{obs}} \quad (1)$$

Voting: The process of calculating attention weights for each query within the observation window across all heads, aggregating these weights to highlight the prefix positions that are considered most significant. For a single batch of sequence, formally:

$$C = \sum_{i=0}^{L_{\mathrm{obs}}} W_{\mathrm{obs}}[:, i, :] \quad (2)$$
$$I = \mathrm{Topk}(C, k) \quad (3)$$

where $\mathrm{Topk}(T, k)$ selects the indices of the top $k$ values in tensor $T$ per head, and $k$ is defined as $\lfloor p \times L_{\mathrm{prefix}} \rfloor$. The tensor $W_{\mathrm{obs}} \in \mathbb{R}^{N \times L_{\mathrm{obs}} \times L_{\mathrm{prefix}}}$ represents the subset of the prompt's softmax-normalized attention features over $N$ heads.

Hit Rate: The hit rate, $H$, quantifies the effectiveness of the voting mechanism by measuring the ratio of attention features identified as significant by the voting process that are also essential in the generation outcome, calculated as:

$$M_{\mathrm{vote\_obs}} = \mathrm{zeros\_like}(A_{\mathrm{cur}}) \quad (4)$$
$$M_{\mathrm{vote\_obs}}[I] = 1 \quad (5)$$
$$M_{\mathrm{threshold\_cur}} = \mathbb{1}(A_{\mathrm{cur}} > \theta) \quad (6)$$
$$O = M_{\mathrm{threshold\_cur}} \wedge M_{\mathrm{vote\_obs}} \quad (7)$$
$$H = \frac{\sum O}{\sum M_{\mathrm{threshold\_cur}}} \quad (8)$$

$A_{\mathrm{cur}} \in \mathbb{R}^{N \times L_{\mathrm{prefix}}}$ represents the attention features between the current generated query and the prefix keys. The threshold operation filters $A_{\mathrm{cur}}$ to retain only values exceeding $\theta$, indicating significant attention activations. The overlap $O$ between these significant activations and the mask $M_{\mathrm{vote\_obs}}$ quantifies the alignment of the current attention with previously identified significant features. The hit rate $H$ is then computed as the ratio of the sum of the overlap $O$ to the sum of significant activations $M_{\mathrm{threshold\_cur}}$, providing a metric for the efficacy of the attention mechanism in recognizing and emphasizing important attention features within the context. We use $H(M_{\mathrm{threshold\_cur}}, M_{\mathrm{vote\_obs}})$ to denote the combination of Eq. 7 and Eq. 8. We use $p = 0.05$ (top 5% of locations per head) and $\theta = 0.05$ (note this is a large value due to the softmax function over a long sequence) for the observation experiments. The model we probe is Mistral-7B-Instruct-v0.2.
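A minimal sketch of the voting and hit-rate computations defined above (Eq. 2-8); tensor layouts follow the paper's notation and the helper names are ours:

import torch

def vote(w_obs, p=0.05):
    # w_obs: (N, L_obs, L_prefix) softmax attention of observation-window
    # queries over prefix keys. Aggregate over the window (Eq. 2), then
    # select the top-k prefix positions per head (Eq. 3).
    c = w_obs.sum(dim=1)
    k = int(p * w_obs.shape[-1])
    return c.topk(k, dim=-1).indices

def hit_rate(a_cur, indices, theta=0.05):
    # a_cur: (N, L_prefix) attention of the current generated query (Eq. 4-8).
    m_vote = torch.zeros_like(a_cur, dtype=torch.bool).scatter_(-1, indices, True)
    m_thr = a_cur > theta
    overlap = (m_thr & m_vote).sum()
    return (overlap / m_thr.sum().clamp(min=1)).item()

The same machinery also supports Sec. 3.2.2: applying the overlap of Eq. 7-8 to two voting masks from different instruction-response pairs gives H(M_vote_A, M_vote_B).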
3.1 Observations in Multi-Turn Conversations

This study examines whether the positions of features identified as crucial in the observation window maintain their significance in the subsequent token generation. The analysis utilizes samples from Ultrachat [11], a multi-turn, high-quality instruction dataset consisting of 1.4 million dialogues. We further filter for sequences with response length greater than 512 and prompt length greater than 3k. In the experiment, we split the generated tokens into 4 context windows, each spanning 128 tokens, to compute the averaged hit rates of these windows versus an observation window of size 32. According to the findings presented in Fig. 2, important keys in prefixes obtained from voting in observation windows exhibit remarkable consistency throughout the generation process, as evidenced by high hit rates.

Figure 2: The layer-wise average hit rate of important positions utilized along token generation, with an average input length exceeding 3k.

3.2 Observations in Long Document QA

To further validate this finding, we also run observations on multiple long-document QA datasets, including QMSum [12], a query-based multi-domain meeting summarization dataset; Openreview [13], a collection of papers from openreview.net; and SPACE [14], an extractive opinion summarization dataset in quantized transformer spaces.

3.2.1 Effectiveness of Instruction Positions

Our investigation also extends to the significance of instruction positioning on the interpretability of LLMs and their selection of important features. We calculate the average hit rate for the responses using the same observation window size of 32 as in the previous experiment. Our results, shown in Fig. 3, indicate that across all three datasets the hit rates are consistently high regardless of whether instructions are positioned before or after extensive supplementary contexts. This consistency suggests that the patterns identified by observation windows are independent of the question's position.

Figure 3: The layer-wise average hit rate of important positions utilized by prompts with questions at the beginning and the end.

3.2.2 Effectiveness of Various Instructions for One Document

Furthermore, we investigate whether instructions affect the selection of important features even when the provided context is the same. Our experiment utilizes different instructions on the same document and selects the important features based on the observation window that consists of both the instructions and their corresponding responses. Then we calculate the hit rates between important features selected by different instruction-response pairs within the same document, using H(M_vote_A, M_vote_B). By varying the instructions, we observe that different instructions prioritize different prefix keys, as indicated by the descending trend in hit rates shown in Fig. 4. Our findings reveal an interesting aspect of KV cache management in LLMs: the important attention features change with different instructions.

Figure 4: The layer-wise overlap of important positions utilized by different question-answer pairs in the same dataset.
This variability challenges the effectiveness of static compression methods that depend on constant weighted importance or fixed policies [7, 6, 8]. Thus, the complex relationship between context and related KV cache emphasizes the need for context-aware compression strategies and highlights the limitations of current methods that ignore this dynamic.

4 SnapKV

4.1 Basic Method

In the attention mechanism, keys and values are tensors containing information from the previous context. The linear growth in prompts leads to exploding time complexity for generation due to the Query-Key matrix multiplication. SnapKV addresses this by keeping the prompt KV cache count constant during generation, significantly reducing serving times for long-context LLMs. The fundamental approach of SnapKV involves identifying and selecting the most crucial attention features per head to create the new KV cache. SnapKV operates in two stages, as shown in Fig. 1:

\u2022 Voting for Important Previous Features. By the voting process defined previously (Eq. 2 and Eq. 3), we select the important features based on the observation window, defined as the last segment of the prompt. Sec. 3.1 highlights the consistency of these attention features throughout the sequence, suggesting that these features are vital for subsequent generation. Besides, we implement clustering to retain the features surrounding the selected features (see Sec. 4.2).

\u2022 Update and Store Truncated Key and Value. We concatenate these selected features with the window features, which encompass all features containing prompt information. We store the concatenated KV caches back for later use in generation, saving memory usage.

1  def snap_kv(query_states, key_states, value_states, window_size, max_capacity_prompt, kernel_size):
2      bsz, num_heads, q_len, head_dim = query_states.shape
3      # Ensure it is the prompt phase.
4      assert key_states.shape[-2] == query_states.shape[-2]
5      if q_len < max_capacity_prompt:
6          return key_states, value_states
7      else:
8          # Compute attention weights of the observation window's queries over the prefix context's keys.
9          attn_weights = compute_attn(query_states[..., -window_size:, :], key_states, attention_mask)
10         # (bsz, num_heads, window_size, k_len)
11         # Sum the weights along the query dimension.
12         attn_weights_sum = attn_weights[..., -window_size:, :-window_size].sum(dim=-2)
13         # Apply 1D pooling for clustering.
14         attn_cache = pool1d(attn_weights_sum, kernel_size=kernel_size, padding=kernel_size // 2, stride=1)
15         # Select top-k indices per head based on the pooled weights to identify important positions.
16         indices = attn_cache.topk(max_capacity_prompt - window_size, dim=-1).indices
17         # Expand the indices to match the head dimension for gathering.
18         indices = indices.unsqueeze(-1).expand(-1, -1, -1, head_dim)
19         # Gather the compressed past key and value states based on the selected indices.
20         k_past_compress = key_states[..., :-window_size, :].gather(dim=2, index=indices)
21         v_past_compress = value_states[..., :-window_size, :].gather(dim=2, index=indices)
22         k_obs = key_states[..., -window_size:, :]
23         v_obs = value_states[..., -window_size:, :]
24         key_states = torch.cat([k_past_compress, k_obs], dim=2)
25         value_states = torch.cat([v_past_compress, v_obs], dim=2)
26         return key_states, value_states

Listing 1: Implementation of SnapKV in pseudo PyTorch style.
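Listing 1 assumes two helpers, compute_attn and pool1d, that are not defined in the paper; one plausible PyTorch realization is sketched below, under the assumption that attention_mask follows the usual additive-mask convention:

import math
import torch
import torch.nn.functional as F

def compute_attn(queries, keys, attention_mask=None):
    # Standard scaled dot-product attention weights (softmax over key positions).
    scores = torch.matmul(queries, keys.transpose(-2, -1)) / math.sqrt(queries.shape[-1])
    if attention_mask is not None:
        scores = scores + attention_mask[..., -queries.shape[-2]:, :]
    return F.softmax(scores, dim=-1, dtype=torch.float32).to(queries.dtype)

def pool1d(x, kernel_size, padding, stride):
    # x: (bsz * num_heads treated as channels is unnecessary; max_pool1d accepts
    # (N, C, L) directly, pooling over the last axis, i.e., the prefix length).
    # Sec. 5.2 notes max and average pooling performed similarly in the authors' tests.
    return F.max_pool1d(x, kernel_size=kernel_size, padding=padding, stride=stride)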
4.2 Efficient Clustering via Pooling In LLMs, information retrieval and generation rely on features with high attention weights, supplemented by copying the rest of the context using induction heads [15]. Hence, naively selecting the top features retains only portions of the details and loses the completeness of the information. For example, such compression might cause the LLM to retrieve only the country code of a phone number and hallucinate the rest. Our experiments also revealed that selecting only the features with the highest weights is insufficient (Sec. 5.2). Such sparse selection risks compromising the contextual integrity encapsulated between features, thereby reducing accuracy. Based on these insights, we propose a fine-grained clustering algorithm utilizing a pooling layer, shown in Line 14. 5 Experiments In our experimental setup, we explore the performance of SnapKV across models that can handle extended sequence contexts. First, we deliver a pressure test and benchmark the speed of LWM-Text-Chat-1M [16], which is state-of-the-art regarding its context length. We then conduct an ablation study on Mistral-7B-Instruct-v0.2 to understand the influence of pooling on the model's information retrieval performance. We assess model performance using the LongBench [17] dataset. Further, we dive into a comprehensive examination of the Command-R [2] model, another leading open-source model in the field. Lastly, we show that SnapKV can be combined with other acceleration strategies such as parallel decoding. 5.1 Benchmarks on LWM-Text-Chat-1M LWM-Text-Chat-1M [16] is a 7B instruction-finetuned model with up to one million tokens of context length. In this section, we conduct a pressure test on this model and examine its algorithmic efficiencies through the lens of hardware optimization. 5.1.1 Needle-in-a-Haystack The Needle-in-a-Haystack test [18] challenges the model to accurately retrieve information from a specific sentence (the "needle") hidden within a lengthy document (the "haystack"), with the sentence placed at a random location. To rigorously evaluate SnapKV's capabilities, we extended the document length to 380k tokens, which is the longest content that can be processed by a single A100-80GB GPU. We configured the prompt KV cache size to 1024, enabling SnapKV to select the most crucial 1024 attention features from the prompt using our algorithm for answer generation, with a max-pooling kernel size of 5 and an observation window size of 16. The compelling outcomes in Fig. 5 from the Needle-in-a-Haystack test underscore SnapKV's potential to precisely manage small details in extremely long input contexts with a 380x compression ratio. Figure 5 (panel: LWM-Text-Chat-1M with SnapKV; x-axis: Token Limit; y-axis: Depth Percent; color: Score): Needle-in-a-Haystack test performance comparison on a single A100-80GB GPU, native HuggingFace implementation with only a few lines of code changed. The x-axis denotes the length of the document (the "haystack"); the y-axis indicates the position at which the "needle" (a short sentence) is located within the document, from 1K to 380K tokens.
For example, 50% indicates that the needle is placed in the middle of the document. Here, LWMChat with SnapKV is able to retrieve the needle correctly up to 160k tokens, with only a small accuracy drop beyond that. Meanwhile, the original implementation encounters an OOM error at 33k input tokens. 5.1.2 Decoding Speed and Memory Bound We further benchmark the speed of LWM-Text-Chat-1M under different batch-size settings using SnapKV. We set the maximum prompt KV cache size to 2048 for SnapKV. Figure 6: Decoding speed comparison of the baseline implementation and SnapKV-optimized solutions across various batch sizes. The x-axis denotes the input sequence length; the y-axis indicates decoding speed (ms/token). All experiments are conducted on an A100 80GB GPU. The red dotted line denotes the context length of current state-of-the-art open-source models. There are two main takeaways from our experiment on decoding speed and input sequence length across various batch sizes, as shown in Fig. 6. First, as the input sequence length increases, the per-token decoding time of the baseline implementation grows rapidly. Conversely, the SnapKV-optimized model maintains a constant decoding speed, since the KV cache stays the same size and there is no extra update during inference. For instance, at a sequence length of 16k and a batch size of 2, the decoding time for the baseline model surpasses 0.1 seconds, whereas the SnapKV-optimized model consistently remains below 0.04 seconds, achieving approximately a 3.6x speedup. Second, at the same batch size, the model optimized with SnapKV can decode significantly longer sequences. For example, at a batch size of 2, the baseline model encounters an OOM issue beyond 16k input tokens, whereas the SnapKV-enhanced model extends this limit to 131k input tokens, an approximately 8.2x improvement. This demonstrates SnapKV's effectiveness in minimizing memory consumption. 5.2 Ablation Study of the Effectiveness of Pooling We perform an ablation study to assess the impact of our pooling technique, a straightforward but efficient method for consolidating information through clustering. Our evaluation utilizes the modified LongEval-Lines benchmark [19], incorporating randomly generated key-value pairs and averaged scores. LongEval-Lines presents a greater challenge compared to Needle-in-a-Haystack because it involves identifying key-value pairs in noisy contexts of the same format, while in Needle-in-a-Haystack the relevant information is more distinctly separated from the rest of the context. We apply max pooling with a kernel size of 5 and use an observation window of size 16. The findings, illustrated in Fig. 7, indicate that pooling significantly enhances retrieval accuracy compared to not using pooling. We hypothesize that this is because strong attention tends to concentrate on the initial tokens of a relevant span; without the consolidation provided by pooling, the compressed KV cache keeps only those peak positions, and the model then replicates the subsequent tokens incorrectly, producing the partially correct retrievals we observed. Note that throughout our experiments, the choice between max pooling and average pooling did not yield significant differences in performance.
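The LongEval-Lines probes referenced above are straightforward to reproduce. Below is a minimal sketch of generating such inputs, following the line format quoted in the Figure 7 caption; the word lists and the helper name are our own assumptions, and key collisions are ignored for brevity.

import random

ADJECTIVES = ["makeshift", "quiet", "crimson", "oblong"]
NOUNS = ["penguin", "lantern", "harbor", "violin"]

def make_lines(n):
    # Each line pairs an adjective-noun key with a random 5-digit value.
    lines, answers = [], {}
    for _ in range(n):
        key = f"{random.choice(ADJECTIVES)}-{random.choice(NOUNS)}"
        value = random.randint(10000, 99999)
        lines.append(f"line {key}: REGISTER_CONTENT is <{value}>")
        answers[key] = value
    return lines, answers

lines, answers = make_lines(500)
probe = random.choice(list(answers))
prompt = "\n".join(lines) + f"\nWhat is the REGISTER_CONTENT in line {probe}?"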
Figure 7 (panels: Mistral-7B-Instruct-v0.2 with and without pooling; x-axis: Token Limit; y-axis: Depth Percent; color: Score): Ablation study of pooling on LongEval-Lines. The evaluation includes inputs, each composed of lines formatted as "line makeshift-penguin: REGISTER_CONTENT is <10536>", where the key is an adjective-noun pair and the value is a random 5-digit number. The model needs to retrieve the value based on a given key. The x-axis denotes the length of the input; the y-axis indicates the position of the ground truth, from 5K to 30K tokens. With pooling, the model can retrieve correct values up to 16k and performs significantly better than the variant without pooling. 5.3 Experiments on LongBench We evaluate SnapKV on four models using LongBench [17], a multi-task benchmark designed to rigorously evaluate long-context understanding capabilities across various datasets, spanning single- and multi-document QA, summarization, few-shot learning, synthetic tasks, and code completion. We choose LWM-Text-Chat-1M with a 1 million token context length, and LongChat-7b-v1.5-32k, Mistral-7B-Instruct-v0.2, and Mixtral-8x7B-Instruct-v0.1 with 32k context lengths as our baselines. For each model, we test SnapKV with various settings: compressing the prompt KV cache to 1024, 2048, and 4096 tokens. We use max pooling with kernel size 7 and observation window size 32. Table 1 illustrates a negligible performance drop for models with SnapKV compared with the original implementations across 16 different datasets, even with the prompt KV cache capped at 1024 tokens. Some models even outperform the baseline. Our results substantiate that SnapKV can grasp the key information in the long context and give comprehensive summaries with details. Moreover, our results also indicate the effectiveness of SnapKV in compressing the prompt KV cache. For LongChat-7b-v1.5-32k, the average input token length is 12521; for LWM-Text-Chat-1M, 13422; for Mistral, 13160. Thus, using 1024 tokens, SnapKV achieves an average compression rate of 92%, and using 4096, it reaches 68%, all with negligible drops in accuracy. We also compare SnapKV and H2O on the LongBench dataset to further demonstrate the performance of SnapKV. To fairly evaluate accuracy, we set the prompt capacity for H2O to 4096. Table 1: Performance comparison of SnapKV and H2O across various LLMs on LongBench.
Columns: Single-Document QA (NrtvQA, Qasper, MF-en); Multi-Document QA (HotpotQA, 2WikiMQA, Musique); Summarization (GovReport, QMSum, MultiNews); Few-shot Learning (TREC, TriviaQA, SAMSum); Synthetic (PCount, PRe); Code (Lcc, RB-P).
LWMChat | All KV: 18.18 25.56 40.94 | 24.57 19.39 10.49 | 27.97 24.9 24.81 | 71.0 60.9 39.73 | 3.17 3.5 | 44.4 43.82
LWMChat | SnapKV 1024: 18.02 23.73 40.25 | 24.61 19.84 10.77 | 19.79 24.44 23.53 | 70.0 61.42 39.64 | 1.67 3.0 | 43.34 44.0
LWMChat | SnapKV 2048: 17.92 25.03 41.38 | 24.49 19.38 11.34 | 21.6 24.22 24.36 | 70.0 61.11 39.91 | 2.17 4.0 | 44.46 44.92
LWMChat | SnapKV 4096: 17.92 25.47 40.76 | 24.92 19.53 11.27 | 25.34 25.42 24.58 | 70.5 61.08 39.62 | 3.17 4.0 | 44.49 44.08
LWMChat | H2O 4096: 13.17 24.82 20.01 | 16.86 9.74 7.2 | 25.77 23.26 23.83 | 71.0 61.06 40.33 | 0.0 0.0 | 41.52 40.97
LongChat | All KV: 20.88 29.36 43.2 | 33.05 24.58 14.66 | 30.89 22.76 26.61 | 66.5 83.99 40.83 | 0.0 30.5 | 54.89 59.05
LongChat | SnapKV 1024: 19.32 26.6 37.93 | 34.15 23.34 12.71 | 23.45 21.81 24.93 | 65.0 80.88 38.19 | 0.0 31.0 | 53.63 57.62
LongChat | SnapKV 2048: 19.28 28.81 40.26 | 35.31 23.75 13.44 | 26.3 22.29 25.73 | 66.0 79.93 39.59 | 0.0 31.0 | 56.05 58.61
LongChat | SnapKV 4096: 20.68 29.34 42.21 | 33.95 24.88 14.15 | 28.55 23.11 26.45 | 66.0 81.25 40.52 | 0.0 29.5 | 54.79 58.81
LongChat | H2O 4096: 19.31 28.3 37.75 | 30.51 23.06 11.76 | 27.55 21.37 26.49 | 66.0 75.8 39.92 | 0.0 25.5 | 53.56 55.53
Mistral | All KV: 26.82 33.06 49.28 | 42.77 27.33 19.27 | 32.85 24.25 27.06 | 71.0 86.23 42.98 | 2.75 86.98 | 55.51 52.88
Mistral | SnapKV 1024: 25.54 29.51 49.25 | 40.94 25.7 19.42 | 25.89 23.82 26.11 | 69.5 86.48 42.06 | 2.98 88.56 | 55.65 51.87
Mistral | SnapKV 2048: 25.89 32.47 48.6 | 41.71 27.31 18.69 | 28.81 24.5 26.6 | 70.0 86.27 42.47 | 3.09 87.43 | 55.93 52.01
Mistral | SnapKV 4096: 26.41 33.36 49.81 | 42.32 27.93 18.76 | 30.74 24.19 27.08 | 71.0 86.25 43.01 | 2.73 86.18 | 55.62 52.65
Mistral | H2O 4096: 22.61 29.06 47.22 | 36.54 20.6 16.25 | 30.0 23.8 26.75 | 70.5 86.16 42.97 | 3.46 86.38 | 53.72 51.1
Mixtral | All KV: 26.81 37.06 51.55 | 47.77 32.46 26.59 | 34.25 26.05 27.91 | 76.0 90.57 46.98 | 5.5 100.0 | 69.07 69.65
Mixtral | SnapKV 1024: 26.01 34.65 51.58 | 48.23 32.67 25.92 | 27.77 25.0 27.25 | 74.5 90.42 46.48 | 5.5 99.5 | 69.02 68.98
Mixtral | SnapKV 2048: 27.12 36.9 51.91 | 47.46 33.23 26.27 | 30.19 25.84 27.8 | 76.0 90.24 46.31 | 5.5 100.0 | 68.72 70.01
Mixtral | SnapKV 4096: 26.46 37.03 52.62 | 47.71 33.35 26.45 | 32.64 25.87 27.94 | 75.5 90.71 47.14 | 5.5 100.0 | 68.81 69.56
Mixtral | H2O 4096: 20.45 32.09 48.02 | 34.76 25.69 16.5 | 29.76 23.53 26.84 | 74.5 90.24 47.1 | 7.06 99.42 | 64.91 63.52
(Credit to Jin et al. [20] for the template used in the table.)
As Table 1 shows, SnapKV delivers significantly better performance than H2O. Even with 1024 prompt KV caches, SnapKV on Mistral-7B-Instruct-v0.2 achieves better performance than H2O with 4096 caches on 11 out of 16 benchmarks. 5.4 Experiments on Command-R To further assess the performance of SnapKV, we conduct experiments using Cohere's Command-R model [2], a 35B-parameter open-source model capable of handling sequences of up to 128k tokens. Command-R is designed for complex tasks requiring long context, such as retrieval-augmented generation (RAG). We extensively test Command-R on NarrativeQA and a modified version of Needle-in-a-Haystack, where it achieves promising results. To evaluate SnapKV's impact on RAG, we run tests on bioasq [21], multi-hop question answering with HotpotQA [22], and an internal benchmark on tool use, which further demonstrate its effectiveness. Throughout all experiments, we limit the KV cache to a maximum of 4096 tokens, while the pooling kernel size and window size are set to 13 and 64, respectively. For our evaluations, these hyper-parameters give a KV cache compression ratio between 2x and 32x, depending on the sequence length.
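As a quick check on the compression rates quoted in this section (our own arithmetic from the LongBench averages reported above, not an additional result):

\[ \text{compression rate} = 1 - \frac{\text{cache capacity}}{\text{avg. input length}}, \qquad 1 - \frac{1024}{12521} \approx 0.92, \qquad 1 - \frac{4096}{12521} \approx 0.67, \]

with the 4096-token setting giving roughly 0.67-0.69 across the three models' average input lengths, consistent with the reported 92% and 68% averages.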
5.4.1 Needle-in-a-Haystack In previous experiments [23], it was noted that the Needle-in-a-Haystack [18] evaluation was heavily influenced by the specific context used. To address this issue, we modify the evaluation by permuting the context compositions for each length and depth combination. This approach, which we ran eight times, yielded more robust results. We observe a slight decrease in scores across all models tested under this setting compared to the original setup with no context shuffling. For simplicity, we aggregate the scores across all depths and lengths for the baseline model and the one with SnapKV. As seen in Table 2, applying SnapKV to Command-R shows no degradation in performance, even with a 128k sequence length resulting in 32x compression of the KV cache. Table 2: Needle-in-a-Haystack test results. Score: Command-R 9.866; Command-R + SnapKV 9.819; difference -0.5%. 5.4.2 Retrieval Augmented Generation (RAG) We assess SnapKV's effectiveness in RAG tasks, which are more intricate than synthetic long-context tasks like Needle-in-a-Haystack and closer to real use cases than tasks like NarrativeQA. RAG tasks require selecting pertinent documents from an indexed corpus based on the given prompt. An expanded context window enables the retrieval of additional documents, which can lead to improved model performance. However, this also increases memory requirements and latency, highlighting the delicate balance between retrieval scope and system resources. SnapKV proves beneficial in these tasks by reducing memory usage while enhancing performance. We evaluate SnapKV's impact on RAG tasks with sequence lengths up to approximately 40,000 tokens. RAG Citation. We begin by assessing SnapKV's impact on the model's ability to select relevant documents, a crucial aspect of effective RAG. We evaluate on an internal benchmark from Cohere. The setup of the benchmark is as follows: for each prompt, we gather a set of topic-related documents that includes the ground-truth answers, along with a sample of negative documents, ensuring a total of 100 documents per prompt. We measure the model's performance by the F1-score of successfully retrieving the ground-truth documents. The dataset employed in this experiment spans context lengths from 20,000 to 40,000 tokens. Given our KV cache size of 4096, we achieve a compression of 5-10x. As observed in Table 3, SnapKV demonstrates a remarkable ability to retain nearly 98.8% of Command-R's performance. Table 3: RAG test results (% difference vs. baseline). RAG Citation (F1 score): -1.2%; RAG End-to-end (F1 score): -2.1%. Generation. As the quality of generation is important to a model's RAG capability, we evaluate Command-R on lost-in-the-middle behavior and generation quality. Lost-in-the-middle analysis examines whether model performance varies when the position of the ground-truth information in the context is altered [24]. The latter is a relatively simple metric, where we define the accuracy of the model as the proportion of responses in which the ground-truth answer phrase appears. We conduct 3 experiments with 30, 100, and 200 sampled documents for each ground truth. We repeat each experiment 3 times, inserting the relevant documents at the beginning, middle, and end of the context to test SnapKV's robustness. We report the relative difference to the baseline model.
The dataset used in this phase is based on the bioasq dataset [21] with the RAG-style formulation from Cohere [25]. As Table 4 shows, SnapKV is robust in terms of generation quality and does not suffer from the well-known lost-in-the-middle pathology. Moreover, SnapKV improves performance over the baseline model when the context contains close to 200 documents. One potential explanation is that by adequately compressing the KV cache, we effectively reduce the noise from negative documents and push the model to construct attention scores more focused on the relevant information. End-to-End RAG. To assess SnapKV's robustness in a comprehensive manner, we integrate it into a complete RAG pipeline. This evaluation starts by retrieving 200 documents using Cohere's embedding service [26] in response to a given query. These documents are then re-ranked using Cohere's re-ranking model [27], which filters out half of the candidates, resulting in a list of 100 documents. We prompt Command-R using this list and calculate the accuracy metric as described in Section 5.4.2. We employ a modified version of the HotpotQA dataset [22] and leverage Wikipedia as the document source. This setup introduces a more challenging set of documents, as all documents, relevant or not, are semantically similar. Table 3 showcases SnapKV's robust performance in a production-like RAG setting. With an average dataset length of around 16,000 tokens, the KV cache benefits from a compression ratio of approximately 4x. Table 4: RAG generation test results on bioasq. For each number of sampled documents, we report the approximate context length and the difference from the baseline at each ground-truth position. 30 documents (approx. 8k context): position 0, -1.8%; position 14, 0%; position 30, -3.4%; avg, -1.7%. 100 documents (approx. 14k context): position 0, -1.2%; position 14, +0.9%; position 30, -0.9%; avg, -0.6%. 200 documents (approx. 24k context): position 0, +4.9%; position 14, +4.9%; position 30, +6.4%; avg, +5.4%. Figure 8: Comparison of generation speed (ms/token) versus prompt length for Medusa with SnapKV, Medusa, and the baseline. The baseline is the Huggingface implementation of naive decoding. 5.5 Case Study: Compatibility with Parallel Decoding In this section, we provide a novel perspective on employing KV cache compression synergistically with parallel decoding [28-32]. Parallel decoding leverages a lightweight model or an adaptor to draft initial tokens, which are subsequently verified by larger LLMs. This strategy effectively reduces memory overhead, a critical concern given the autoregressive nature of LLMs, which makes them memory-bound rather than compute-bound. Specifically, in LLMs each decoding step generates a single token, and the transfer of weights between High Bandwidth Memory (HBM) and cache contributes significant overhead [33, 34]. Our investigation incorporates SnapKV with Medusa [35] (https://github.com/FasterDecoding/Medusa), a cutting-edge parallel decoding framework that utilizes multiple classifiers and tree attention mechanisms for drafting tokens, which are subsequently verified by LLMs. One identified challenge is that speculative decoding struggles when processing long sequences, since generating multiple tokens per decoding step introduces computational bottlenecks, such as query-key matrix multiplication tiling [36].
By maintaining a constant size for the KV cache associated with the prompt during generation, SnapKV enhances generation efficiency. Empirical results shown in Figure 8 highlight the performance across various prompt lengths, with Mistral-7B-Instruct-v0.2 undergoing a maximum of 128 generation steps unless preemptively halted. The experiments utilize a subset of QASPER [37], with a fixed prompt instructing the LLM to summarize the paper. The truncation strategy aligns with LongBench [17] standards, removing context in the middle to achieve the desired sequence length for benchmarking. The findings indicate a slowdown in Medusa's performance as sequence lengths extend, a challenge effectively mitigated by SnapKV, which achieves a 1.3x speedup for 10k-length sequences compared to Medusa alone and a 2.2x speedup compared to native decoding. This improvement underscores the potential of combining KV cache compression with parallel decoding frameworks to enhance LLM efficiency, particularly in long-context scenarios. 6 Discussions SnapKV emerges as a potent yet straightforward solution, adeptly compressing the KV caches of models to mitigate the computational and memory burdens associated with processing extensive inputs. Originating from the nuanced observation that specific tokens within prompts garner consistent attention from each head during generation, our methodology not only conserves crucial information but also enhances processing efficiency. Despite its strengths, SnapKV's scope is primarily confined to the generative aspect of models, specifically targeting the KV caches during generation. This limitation implies that SnapKV cannot extend a model's long-context capability if the model inherently struggles with long contexts or exhibits poor performance. Additionally, SnapKV's design does not cover prompt-phase inference itself, which limits its effectiveness in scenarios where the system cannot handle prompts of extensive length. Nonetheless, our contributions offer significant insights and tools for the community, paving the way for more refined approaches to managing the challenges of large-scale language modeling." }, { "url": "http://arxiv.org/abs/2404.14372v1", "title": "Beyond Scaling: Predicting Patent Approval with Domain-specific Fine-grained Claim Dependency Graph", "abstract": "Model scaling is becoming the default choice for many language tasks due to\nthe success of large language models (LLMs). However, it can fall short in\nspecific scenarios where simple customized methods excel. In this paper, we\ndelve into the patent approval prediction task and unveil that simple\ndomain-specific graph methods outperform enlarging the model, using the\nintrinsic dependencies within the patent data. Specifically, we first extend\nthe embedding-based state-of-the-art (SOTA) by scaling up its backbone model\nwith various sizes of open-source LLMs, then explore prompt-based methods to\nharness proprietary LLMs' potential, but find the best results close to random\nguessing, underlining the ineffectiveness of model scaling-up. Hence, we\npropose a novel Fine-grained cLAim depeNdency (FLAN) Graph through meticulous\npatent data analyses, capturing the inherent dependencies across segments of\nthe patent text. As it is model-agnostic, we apply cost-effective graph models\nto our FLAN Graph to obtain representations for approval prediction.
Extensive\nexperiments and detailed analyses prove that incorporating FLAN Graph via\nvarious graph models consistently outperforms all LLM baselines significantly.\nWe hope that our observations and analyses in this paper can bring more\nattention to this challenging task and prompt further research into the\nlimitations of LLMs. Our source code and dataset can be obtained from\nhttp://github.com/ShangDataLab/FLAN-Graph.", "authors": "Xiaochen Kev Gao, Feng Yao, Kewen Zhao, Beilei He, Animesh Kumar, Vish Krishnan, Jingbo Shang", "published": "2024-04-22", "updated": "2024-04-22", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "label": "Original Paper", "paper_cat": "LLM Fairness", "gt": "Beyond Scaling: Predicting Patent Approval with Domain-specific Fine-grained Claim Dependency Graph", "main_content": "Introduction Scaling up language models has demonstrated predictable improvement and unprecedented abilities in many language tasks (Chung et al., 2022; Wei et al., 2022a; Zhao et al., 2023). However, emerging evidence shows that simply scaling up backbone models to large language models (LLMs) may not guarantee success (Peng et al., 2023; Hou et al., 2023; Wang et al., 2023). (\u2217The first two authors contributed equally to this paper. The listing order is fully decided by dice rolling.) Figure 1: An illustration of the patent approval prediction task approached by LLMs and graph models, where each node of the graph is an informative segment decomposed from the original claim text. In addition, scaling up models imposes demanding computational costs that prevent it from being widely adopted for real-world applications. Such limitations necessitate cost-effective methods beyond scaling, especially for domain-specific tasks that have distinct traits. In this paper, we look into the task of patent approval prediction, a challenging yet straightforward classification task that scaling struggles to address, and explore customized cost-effective solutions. As shown in Figure 1, the objective is to determine whether each claim in a patent application will be approved or rejected by the U.S. Patent and Trademark Office (USPTO). This matters for intellectual property (IP) protection, which accounts for up to 40% of the U.S. GDP and over 30% of employment. Due to the demanding requirements for knowledge in both technology and law, patent examination is conducted manually.
To exploit LLMs' emergent abilities, we utilize prompt-based methods tailored to these open-source LLMs, as well as the closed-source GPT-3.5 (OpenAI, 2022) and GPT-4 (OpenAI, 2023), but the results are still unsatisfying. The shattered hope in LLMs motivates us to dive into patent data analyses, which leads us to the standardized writing of claims and the nature of the dependencies among them. As depicted in Figure 2, Claim 1 comprises multiple sub-components, which are then referenced in subsequent claims. Such intricate inner-claim (between sub-components in Claim 1) and inter-claim dependencies (between Claims 1&2 as well as Claims 1&3) have critical implications for the patent approval prediction task, as patent examination is conducted on each claim and the rejection of one claim can result in the automatic rejection of its dependents. Inspired by the observations and domain-specific knowledge acquired from extensive data analyses, we propose the Fine-grained cLAim depeNdency (FLAN) Graph for patent approval prediction, which represents each claim by a single graph that encapsulates both inner- and inter-claim dependencies. Specifically, as shown in Figure 1, we first design a novel algorithm to automatically construct the FLAN Graph at scale, where each node is an informative segment of the claim text. Then, the model-agnostic FLAN Graph is fed into a generic graph model for prediction. Examples of the FLAN Graphs and the corresponding claims are shown in Appendix C. Figure 2: A brief example of the typical patent claim writing style and hierarchical dependencies within claims from a real-world patent application. Claim 1: A system for [...], the system comprising: an authentication component configured to [...]; a tracking component configured to [...]; and a control component configured to: receive authentication information [...]; receive location information from [...]; and deliver a message to the touchpoint authorizing [...]. Claim 2: The system of claim 1, where the control component is also configured [...]. Claim 3: The system of claim 1, where the authentication component includes [...]. In the experiments, we adopt a variety of cost-effective graph models such as GCN (Chen et al., 2020), GAT (Velickovic et al., 2018), and TreeLSTM (Tai et al., 2015) to verify the effectiveness of our proposed FLAN Graph for patent approval prediction. All models with the FLAN Graph applied outperform the previous SOTA, among which GraphSage (Hamilton et al., 2017) achieves the highest improvements of 7.4% in AUC and 7.8% in Macro-F1 scores, reaching absolute scores of 66.04 and 58.22, respectively. To summarize, our contributions are two-fold: (1) We propose a novel algorithm to automatically construct the Fine-grained cLAim depeNdency (FLAN) Graph at scale that consistently improves the SOTA by a large margin. (2) We conduct comprehensive experiments and analyses of modern LLMs on patent approval prediction, which identify the limitations of LLMs and provide valuable references for developing LLM-based solutions in the future. The source code (https://github.com/ShangDataLab/FLAN-Graph) and dataset (https://huggingface.co/datasets/ShangDataLab-UCSD/PatentAP) are publicly released to facilitate future research. 2 Problem Formulation In this section, we formally introduce the definition of the patent approval prediction task and analyze the dataset we construct for experiments. 2.1 Task Definition As illustrated in Figure 1, patent applications are initially submitted to the USPTO in the form of documents.
The examination process, however, focuses on approving or rejecting each individual claim. Therefore, given a patent application $A_i = \{C_j^{(i)}\}_{j=1}^{n}$ containing $n$ claims, the task of patent approval prediction is to determine whether each claim $C_j^{(i)}$ will be approved or rejected by the USPTO, indicated by a binary label $y_j^{(i)} \in \{0, 1\}$. In practice, patent claims are reviewed according to the legal section 35 U.S. Code \u00a7 102, where the core criterion is novelty-based, bringing in the distinct challenges below. (1) Time-sensitive. Unlike traditional text classification, novelty assessment depends on the application filing date, allowing opposite decisions for the same claim over time. (2) Structure-dependent. Many claims (e.g., Claims 2&3 in Figure 2) are dependent on others within the same application, and such structure can influence novelty evaluation. (3) Knowledge-intensive. Evaluating novelty requires up-to-date knowledge of both technologies and patent law. (4) Outcome-inconsistent. Novelty examination outcomes are subject to preferences across patent examiners, which may introduce inconsistencies in patent data. Table 1: Statistics of the PATENTAP dataset. "Approval (%)" indicates the percentage of approved claims. Train: 1,485,693 claims, 87,883 applications, 81.36% approval. Valid: 278,215 claims, 16,955 applications, 83.41% approval. Test: 185,477 claims, 11,148 applications, 84.92% approval. 2.2 Dataset Collection We collect data on real-world patent applications from Gao et al. (2022) and filter out outdated data from before 2018. The data is initially merged and derived from the publicly available resources officially released on the USPTO websites (https://ped.uspto.gov/peds/#/). Considering the real-world scenario, we utilize historical data for training and more recent data for evaluation. Specifically, we sort the applications based on filing dates and then split them into training, validation, and test sets. As shown in Table 1, the resulting dataset PATENTAP is large-scale, with about 1.5M claims for training and 0.5M for evaluation. It is also highly imbalanced, with most claims being approved, further adding to the difficulty. The claims are relatively short: 92% have fewer than 128 tokens, and the average length is 54. Each application has 17 claims on average. 3 Methodology In this section, we delve into the details of our proposed Fine-grained cLAim depeNdency (FLAN) Graph. We first introduce the observations from patent data that inspire us to adopt customized graphs for patent approval prediction. Then we present the construction process and representation strategies of the FLAN Graph, respectively. 3.1 Observations In principle, patent claims are filed to seek legal protection for complex systems that usually comprise multiple (sub-)components. Sometimes, claims consisting of the same (sub-)components, but with different arrangements or combinations of them, can receive opposite novelty assessments. Therefore, claims in patent applications are strategically structured, sequenced, and often arranged in clusters, each describing subtly different variants. In this case, we identify two types of dependency relationships across patent claims that may influence the outcomes of novelty examination. Inner-claim Dependency. Some lengthy claims are internally hierarchical, explicitly describing a system having multiple (sub-)components.
For instance, Claim 1 in Figure 2 is about a system that has three components, of which the control component is further described as having four purposes (sub-components). Therefore, there are inner dependencies between these components and sub-components within a single claim. Inter-claim Dependency. Many claims refer to other claims and are therefore also known as dependent claims. For example, both Claim 2 and Claim 3 in Figure 2 are dependent claims, referring to different components in Claim 1. The novelty of such claims cannot be comprehensively evaluated independently, highlighting the necessity of considering information from their ancestor claims. Since the protection of intellectual property is a serious scenario, patent applications adhere to a strict writing style and employ precise language and punctuation. As illustrated in Figure 2, the (sub-)components with inner-claim dependencies are delimited by colons and semicolons (Claim 1), while inter-claim dependency is expressly indicated by referring to a specific claim at the beginning of the claim (Claims 2&3). Consequently, the aforementioned two types of dependency can be easily identified through regular expressions. 3.2 Graph Construction Based on the observations above, we construct the Fine-grained cLAim depeNdency (FLAN) Graph utilizing both inner-claim and inter-claim dependencies. The general idea is to decompose each claim into text segments as nodes and match the nodes describing the same (sub-)component together to build a graph that models the dependency relationships. Figure 3: Flowchart of constructing the FLAN Graph. Here, "identities" refers to the anchor words/phrases extracted from the claim or claim segments for node matching. The constructed FLAN Graph for each claim consists not only of nodes directly derived from the claim itself, but also of those inherited from the claim it refers to. Therefore, the FLAN Graph can comprehensively encode dependency information beyond a single piece of claim text. The detailed construction process is described as follows. Node Construction. Each node of the constructed FLAN Graph is the full text or a segment of a single patent claim. If a claim has inner-claim dependencies, we decompose the claim text into segments of (sub-)components according to not only itemization and punctuation, which are common writing practices of patent claims, but also special "patentese" (Singer and Smith, 1967), a series of conjunctions that indicate the hierarchy and have legal implications, such as "comprising," "consisting," and "whereby." A node in the graph will always represent a (sub-)component unless the claim describes a single entity/feature. We must also check whether it is a dependent claim or not. If not, the (sub-)component nodes constitute the graph. If yes, we attach the nodes to the duplicated parent graph; how the connections are made is discussed next, and a minimal sketch of the segmentation step itself is given below.
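The following is a minimal sketch of the segmentation heuristic described under Node Construction, using itemizing punctuation plus a small set of patentese conjunctions. It is our own illustration, not the authors' released code, and the conjunction list is deliberately abbreviated.

import re

# Abbreviated list of hierarchy-indicating "patentese" conjunctions.
PATENTESE = r"\b(?:comprising|consisting of|wherein|whereby|including)\b"

def split_claim(text):
    # First split on itemizing punctuation (colons/semicolons), then on
    # patentese conjunctions; each surviving piece is a candidate node.
    segments = []
    for part in re.split(r"[:;]", text):
        for piece in re.split(PATENTESE, part, flags=re.IGNORECASE):
            piece = piece.strip(" ,.")
            if piece:
                segments.append(piece)
    return segments

claim = ("A system for secure access, the system comprising: "
         "an authentication component configured to verify users; "
         "a tracking component configured to log locations.")
print(split_claim(claim))
# Prints the preamble and the two component segments as separate node texts.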
Edge Construction. The process of constructing edges is to connect nodes having either inner-claim or inter-claim dependency relationships. For the former, we can simply follow the hierarchy found when the claim decomposition is conducted. The latter requires meticulously formulated heuristics. As each of the nodes is simply plain text, we connect them based on text similarities instead of relying on text embeddings. We extract keywords/phrases from the node text as anchors for more accurate node matching, using StanfordCoreNLP (Toutanova and Manning, 2000; Toutanova et al., 2003) to conduct POS tagging. The keyword/phrase can be the representative noun phrase of the (sub-)component in the claim, or sometimes a verbal or adjectival phrase that describes a functionality or a characteristic. We term these phrases identities for simplicity. Note that an identity belongs to the (sub-)component level. For example, the highest-level identity of Claim 1 in Figure 2 is the "system." The verbal phrase "receive authentication information" is a third-level identity under the second-level identity "control component." Identity extraction is performed when the claim is decomposed and the (sub-)components are determined. When a child claim is processed, the decomposed (sub-)components are excluded, and only the preamble text segment is matched onto all (sub-)component identities in the parent graph. It is worth noting that the matching targets are not limited to the nodes newly created by the parent claim but potentially originate from all ancestor claims. (If the child claim has no inner dependency, the entire claim text is used.) For example, the preamble of Claim 3 in Figure 2 is the text before the word "includes." (Note that "control component" or "authentication component" is not the identity for Claim 2 or Claim 3; identities correspond to new (sub-)components/features introduced in the claim.) If there exist multiple matches, we prioritize the lowest-level parent identity (e.g., "control component" over "system" in Claim 2) and those led by a special conjunction ("where"). The resulting FLAN Graph of Claim 2 is illustrated in Figure 4, where the nodes come from both Claim 1 and Claim 2. Figure 4: FLAN Graph for Claim 2 in Figure 2. Here, the blue texts are the "identities" for node matching. Nodes with red backgrounds are directly derived from Claim 2, while the rest are inherited from Claim 1. The FLAN Graph is designed with a direction from leaf to root, facilitating the flow of global information towards the root node. The entire process of constructing FLAN Graphs is summarized by the flowchart depicted in Figure 3, and an illustrative example of the constructed FLAN Graph for Claim 2 is presented in Figure 4. For further insights into the construction process, additional examples along with the corresponding claims are provided in Appendix C. We manually verified that the graph constructions serve the intended purpose by closely reviewing all claims in 100 full applications, refining the details of the heuristics to cover atypical writing patterns and irregular applicants. 3.3 Graph Representation The topology and the nodes of the FLAN Graphs are finalized during the construction stage, resulting in a distinct graph for each of the claims.
We propose to adopt graph neural networks to obtain a graph-level representation for each claim that encodes both the text semantics and the structural dependencies of the claim. We first convert the text-level FLAN Graph into its embedding-level version by encoding each of the nodes into a vector representation using SentenceTransformer (Reimers and Gurevych, 2019). Then we feed the embedding-level graph into a graph neural network to facilitate the interaction of different nodes and update the embedding of each node with the dependency information. The choice of graph neural network is flexible, and our specifications are discussed in Section 4.3. We then further aggregate the representations of the nodes to obtain the graph-level representation for the claim. Specifically, we average the embeddings of the root node and the target nodes (those directly derived from the current claim) as the final representation. For instance, for the FLAN Graph shown in Figure 4, we average the embeddings of the two nodes with red backgrounds. Since the FLAN Graph propagates from leaf to root, averaging the root and target nodes encapsulates both global and local information of the relevant claims. 4 Experiments In this section, we elaborate on our experiments and the corresponding results with both (1) scaling with LLMs and (2) customized graph methods using the FLAN Graphs. The objectives are to probe the effect of scaling up model parameters and to validate the effectiveness of our proposed FLAN Graphs in addressing this challenging task. Table 2: Performance (%) of embedding-based methods, reported as AUC / Macro-F1 with plain text and with features added. Models other than BERT are fine-tuned with LoRA by default; "w. Full-FT" means with full fine-tuning. Random Guess: 50.00 / 50.00 (plain), 50.00 / 50.00 (feature added). BERT-base: 52.66 / 45.98, 61.47 / 53.99. BERT-large: 54.79 / 46.92, 63.53 / 54.83. BERT-patent: 55.81 / 47.46, 63.63 / 54.91. LLaMA-7B: 51.02 / 42.64, 58.18 / 51.24. LLaMA-7B w. Full-FT: 52.38 / 44.91, 59.02 / 52.85. Mistral-7B: 51.88 / 43.38, 59.22 / 52.99. Mistral-7B w. Full-FT: 53.63 / 45.89, 60.34 / 53.20. Vicuna-7B: 51.14 / 43.04, 58.88 / 51.10. Vicuna-7B w. Full-FT: 53.10 / 45.24, 59.22 / 52.21. LLaMA-13B: 51.44 / 43.23, 59.68 / 53.03. Vicuna-13B: 51.97 / 43.70, 60.12 / 53.18. LLaMA-70B: 52.11 / 44.12, 60.44 / 53.46. 4.1 Experiment Settings Dataset. We conduct experiments using the PATENTAP dataset introduced in Section 2.2; the data statistics are shown in Table 1. Evaluation Metric. Following Gao et al. (2022) and considering the imbalance of approved and rejected claims in the dataset, we adopt the Area Under the Curve (AUC) of the ROC plot (Fawcett, 2004) as the primary evaluation metric and the Macro-F1 score as the secondary metric. Baseline Model. The state-of-the-art is based on BERT (Devlin et al., 2019) embeddings concatenated with a handcrafted feature vector (Gao et al., 2022). These features mainly consist of the patent class, the number of citations, and a novelty score calculated by comparing the similarities between the current application and the five most relevant prior arts. 4.2 Scaling with LLM Manipulations We are interested in re-evaluating the task using LLMs and investigating whether scaling up the model can surpass the existing performance standard. Specifically, we adopt LLaMA2 (Touvron et al., 2023), Mistral (Jiang et al., 2023), and Vicuna (Chiang et al., 2023) in their 7B, 13B, and 70B versions. 4.2.1 Embedding-based We first extend the SOTA to several BERT variants and then to multiple LLMs of various sizes, using both plain text embeddings and embeddings concatenated with feature vectors. Specifically, we obtain the text embeddings from the final hidden state of the [CLS] token for BERT-series models and of the last token for modern LLMs; a small sketch of this extraction follows.
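For concreteness, extracting such sentence-level embeddings can be sketched as follows. This is a minimal illustration using the HuggingFace transformers API; the classifier head, the pooling choices, and the example checkpoint mirror the description above but are otherwise our assumptions.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

claim = "A system for secure access, the system comprising: ..."
inputs = tok(claim, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, seq_len, dim)

cls_embedding = hidden[:, 0]       # BERT-style models: [CLS] token
last_tok_embedding = hidden[:, -1] # decoder-style LLMs: final (non-padding) token

# The claim embedding, optionally concatenated with the handcrafted feature
# vector, is then fed to a binary classification head:
head = torch.nn.Linear(cls_embedding.shape[-1], 2)
logits = head(cls_embedding)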
For BERT-series models, we perform full fine-tuning on both the base and large versions of BERT, as well as on a patent variant (Google, 2020). Regarding modern LLMs, we apply LoRA (Hu et al., 2022) fine-tuning to all of them and full fine-tuning specifically to the 7B versions. The hyper-parameters are listed in Appendix B.1.1. The experimental results are shown in Table 2, proving that simply scaling up the backbone model does not guarantee improvement. More in-depth analyses can be found in Appendix A.1. 4.2.2 Prompt-based The embedding-based manipulations of LLMs fall short unexpectedly. To exploit the emergent abilities and harness the full potential of modern LLMs, we dive into prompt engineering by crafting precise and effective prompts. Model. For the aforementioned open-source LLMs, we use the LLaMA2-chat series and the Mistral-instruct version, which are pre-trained with instruction tuning. In addition, we extend our repertoire to include GPT-3.5-Turbo (OpenAI, 2022) and GPT-4 (OpenAI, 2023) for addressing this task. Prompt Template. Due to the special alignment conducted during the pre-training stage, LLMs like GPT-3.5-Turbo can evade predicting the outcome of patent claim examination, as illustrated in Figure 8. Therefore, we delicately design structured prompts for the LLaMA, Vicuna, and OpenAI model series; the corresponding templates are shown in Codes 1, 2 & 3, respectively. Moreover, we adopt the Chain-of-Thought (CoT) prompt (Wei et al., 2022b) to elicit the reasoning abilities of LLMs by providing a step-by-step analysis of the claim before predicting approval or rejection. Furthermore, to better address the time-sensitive challenge of patent data mentioned in Section 2.1, we incorporate the filing date of every single claim into the prompt templates of all model series. Table 3: Macro-F1 scores (%) of prompt-based methods with modern LLMs. Here, "w. time" indicates adding the filing date of the claim to the prompt, and * means the value is calculated on a sub-set of 1K testing claims. Columns: LLaMA-7B, Vicuna-7B, Mistral-7B, LLaMA-13B, Vicuna-13B, LLaMA-70B, GPT-3.5, GPT-4. Vanilla Prompt: 47.81, 49.83, 31.00, 32.62, 49.43, 37.44, 48.38*, 43.01*. Vanilla w. time: 47.80, 48.38, 29.75, 35.54, 47.82, 13.82, 48.81*, 44.91*. CoT Prompt: 39.83, 37.84, 22.65, 23.51, 46.01, 38.77, 23.93*, 40.75*. CoT w. time: 46.73, 34.32, 20.64, 28.81, 44.23, 35.33, 10.27*, 36.57*. Adapting Strategy. The sheer size of the test set makes evaluation computationally and economically expensive. Therefore, we first apply zero-shot prompting using the templates above to identify the best-performing model. Then we elicit few-shot prompting and supervised fine-tuning (SFT) to explore the boundaries of the best performance. The details of the corresponding few-shot prompt and the hyper-parameters for supervised fine-tuning are provided in Appendix B.1.2. Figure 5: Performance (%) of the Vicuna-7B model with few-shot prompting and supervised fine-tuning (SFT), plotted against the number of shots. Here, SFT does not include any few-shot examples.
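Before turning to the results, here is a purely illustrative example of the vanilla prompt structure with the filing date injected. The actual templates live in the appendix (Codes 1-3); this wording is hypothetical and our own, not the authors' exact template.

PROMPT = """You are a USPTO patent examiner. Judge the claim below under
35 U.S. Code 102 (novelty), as of the filing date.

Filing date: 2018-06-14
Claim: {claim_text}

Answer with exactly one word: APPROVED or REJECTED."""

print(PROMPT.format(claim_text="A system for secure access, comprising..."))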
Performance. Since the output probabilities are hardly accessible, we only report the Macro-F1 scores of the prompt-based methods in Table 3, where the values for closed-source LLMs are calculated on a sub-set of 1K testing claims due to budget constraints. Among the remaining models, Vicuna-7B performs best with the vanilla prompt and without the filing date injected. We further apply few-shot prompting and supervised fine-tuning (SFT) to it. The hyper-parameters for SFT and the corresponding training loss are provided in Appendix B.1.2. Figure 5 presents the results. From the plot, we find that increasing the number of shots does not yield improvement and can even hurt (e.g., 10-shot). Applying SFT is also far from satisfying. More in-depth analyses of model sizes, the CoT prompt, and the added time feature are provided in Appendix A.2. Table 4: Performance (%) of different GNNs using the plain FLAN Graph and with extra features added, respectively (AUC / Macro-F1). FLAN Graph: GCN 59.36 \u00b1 0.18 / 53.98 \u00b1 0.35; GAT 58.44 \u00b1 0.20 / 53.29 \u00b1 0.94; GCN-II 58.28 \u00b1 0.26 / 53.92 \u00b1 0.13; GraphSage 60.67 \u00b1 0.36 / 54.66 \u00b1 0.22; TreeLSTM 59.88 \u00b1 0.32 / 51.74 \u00b1 0.46. Feature Added: GCN 66.03 \u00b1 0.36 / 58.06 \u00b1 0.19; GAT 65.82 \u00b1 0.34 / 58.05 \u00b1 0.21; GCN-II 65.91 \u00b1 0.31 / 58.11 \u00b1 0.14; GraphSage 66.04 \u00b1 0.26 / 58.22 \u00b1 0.17; TreeLSTM 65.46 \u00b1 1.14 / 57.78 \u00b1 0.75. The LLM experiments prove that massively scaled-up LLMs provide no benefit over the SOTA. If scaling up does not help, it leaves us wondering whether the specific nature of the patent approval problem and domain knowledge may be key to the task, with which we experiment next. 4.3 Customized Graph Methods It turns out that both embedding-based and prompt-based manipulations of LLMs fail to compete with the previous state-of-the-art method. The model scale proves not to be beneficial; hence, we bring our expertise in the patent domain to bear on the performance bottleneck. We apply our proposed FLAN Graphs, constructed based on domain knowledge, to various cost-effective graph neural networks (GNNs) to comprehensively model both the semantics of the text and the dependency relationships within the claims. Model. The proposed FLAN Graph is model-independent and specially designed according to domain-specific knowledge, and its backbone topology can be easily tweaked to suit particular models (e.g., adding self-loops). Hence, we employ various cost-effective graph models to obtain the graph-level representation, including GCN (Chen et al., 2020), GAT (Velickovic et al., 2018), GCN-II (Chen et al., 2020), GraphSage (Hamilton et al., 2017), and TreeLSTM (Tai et al., 2015). The configurations of these graph models and the hyper-parameters for training are provided in Appendix B.2. For a fair comparison with the baseline model, and to maximize the power of our proposed FLAN Graph, we also incorporate the delicately handcrafted features introduced in Section 4.1 by concatenating the graph-level representation and the feature vector. Figure 6: Performance comparison between utilizing the FLAN Graph, the Coarse Graph, and the Solitary Node, with AUC and Macro-F1 panels over GCN, GAT, GCN-II, GraphSage, and TreeLSTM. The detailed score values are provided in Table 5. The final representation of the claim is further fed into a multi-layer perceptron
(MLP) layer to conduct binary classification over being approved or rejected. Performance. The AUC and Macro-F1 scores of all graph models, with both plain FLAN Graphs and extra features added, are presented in Table 4. Consistent with the experimental results of the embedding-based LLM manipulations reported in Section 4.2.1, adding the features to the FLAN Graph also yields a performance gain over the plain FLAN Graph. Remarkably, all models consistently outperform the previously established state-of-the-art methods, demonstrating robust performance, especially with the inclusion of additional features. Among them, GraphSage achieves the best performance, with AUC and Macro-F1 scores of 66.04 and 58.22, surpassing the baseline model by 7.4% in AUC and 7.8% in Macro-F1, respectively. Table 5: Ablation study on performance (%) of different GNNs using the FLAN Graph, the Coarse Graph, and the Solitary Node, with features added (AUC / Macro-F1). FLAN Graph: GCN 66.03 \u00b1 0.36 / 58.06 \u00b1 0.19; GAT 65.82 \u00b1 0.34 / 58.05 \u00b1 0.21; GCN-II 65.91 \u00b1 0.31 / 58.11 \u00b1 0.14; GraphSage 66.04 \u00b1 0.26 / 58.22 \u00b1 0.17; TreeLSTM 65.46 \u00b1 1.14 / 57.78 \u00b1 0.75. Coarse Graph: GCN 62.21 \u00b1 0.25 / 54.69 \u00b1 0.28; GAT 62.61 \u00b1 0.21 / 54.98 \u00b1 0.53; GCN-II 60.28 \u00b1 0.24 / 53.69 \u00b1 0.30; GraphSage 63.80 \u00b1 0.14 / 56.64 \u00b1 0.16; TreeLSTM 60.17 \u00b1 0.10 / 55.47 \u00b1 0.17. Solitary Node: MLP 59.33 \u00b1 0.51 / 54.45 \u00b1 0.31. Ablation study. Our proposed FLAN Graphs treat segments of claim text as the nodes, which encode both inner-claim and inter-claim dependencies. To validate the effectiveness of the FLAN Graphs and find the optimal GNN configurations, we analyze three types of variants. \u2022 Applying Coarse Graph. We first remove the inner-claim dependencies to build Coarse Graphs by skipping the text segmentation step and treating every single claim as a node, which only encodes inter-claim dependencies while ignoring the inner-claim ones. The classification of the claims is then conducted over each node, which represents a single claim. \u2022 Utilizing Solitary Node. We further remove the inter-claim dependencies by utilizing only the node representation for classification. Figure 6 illustrates the comparison of model performance between applying the FLAN Graph, the Coarse Graph, and the Solitary Node, verifying the effectiveness of incorporating both inter-claim and inner-claim dependencies. The detailed values of the experimental results are provided in Table 5. \u2022 Adopting Deeper GNN. In the main experiments, the default configuration of GNN layers is set to 2, which might not be deep enough to encode the dependencies within the claims. Therefore, we increase the number of layers to 4 and adopt the same FLAN Graphs with the handcrafted features added. The corresponding results are shown in Table 6, implying that a deeper GNN does not necessarily bring improvement in performance. Table 6: Expanding the GNNs to 4 layers makes little difference compared to using only 2 layers (AUC / Macro-F1). GCN: 65.98 \u00b1 0.06 / 58.16 \u00b1 0.02; GAT: 65.91 \u00b1 0.46 / 58.02 \u00b1 0.28; GCN-II: 65.28 \u00b1 1.34 / 57.64 \u00b1 1.02; GraphSage: 65.86 \u00b1 0.25 / 58.10 \u00b1 0.12; TreeLSTM: 65.66 \u00b1 1.15 / 58.17 \u00b1 0.61. Through the extensive experiments and analyses above, we demonstrate that our proposed FLAN Graph, applied with cost-effective graph models, brings consistent and significant improvement over scaling up backbone models; a compact sketch of the resulting pipeline is given below.
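To make the overall pipeline concrete, here is a minimal sketch combining the node encoding of Section 3.3 with a two-layer GNN and the root/target-node averaging described above. It is our own illustration using PyTorch Geometric and sentence-transformers; the checkpoint name, dimensions, and toy graph are assumptions, not the authors' released configuration.

import torch
from sentence_transformers import SentenceTransformer
from torch_geometric.nn import SAGEConv

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder checkpoint

segments = ["A system for secure access ...",
            "an authentication component configured to ...",
            "a tracking component configured to ..."]
x = torch.tensor(encoder.encode(segments))         # (num_nodes, 384) node embeddings
edge_index = torch.tensor([[1, 2], [0, 0]])        # edges point leaf -> root

class ClaimGNN(torch.nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.conv1 = SAGEConv(dim, hidden)
        self.conv2 = SAGEConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 2)

    def forward(self, x, edge_index, pool_ids):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        # Average the root node and the target nodes of the current claim.
        graph_repr = h[pool_ids].mean(dim=0)
        return self.head(graph_repr)

model = ClaimGNN(x.shape[-1])
logits = model(x, edge_index, pool_ids=torch.tensor([0, 2]))  # root + target node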
Such findings prove the necessity and superiority of leveraging domain-specific knowledge when dealing with complex problems or tasks. 5 Related Work Patent documents are receiving increasing attention in the NLP community due to their structured language and extensive content. The survey by Krestel et al. (2021) summarizes current deep learning work in the patent domain, including subject matter classification (Grawe et al., 2017; Lee and Hsiang, 2019; Li et al., 2018; Zhu et al., 2020), retrieval (Helmers et al., 2019; Lei et al., 2019; Choi et al., 2019), and data generation (Lee and Hsiang, 2020; Lee, 2019). We highlight a few works that are specifically relevant or more recent. Yoshikawa et al. (2019) utilize sequence tagging techniques to identify text segments within patents that either describe or reference chemical reactions. Lagus and Klami (2022) tackle patent retrieval tasks using matrix similarity measures. Hashimoto et al. (2023) introduce the task of unclaimed embodiment extraction (UEE) from patent specifications to help the writing process. Zuo et al. (2023) explore data-centric strategies for the French patent classification task. The state-of-the-art (SOTA) work on our task, Gao et al. (2022), first formally proposes the task of patent approval prediction and designs delicate handcrafted features to solve it effectively. There has also been work utilizing graphs on patent data. Fang et al. (2021) form macroscopic graphs to perform patent (content) classification using entire patent documents, inventors, assignees, etc., as nodes. Siddharth et al. (2022) model published patents (grants) as knowledge graphs, but on a single hierarchical level and not constructed on the basis of individual claims. Bj\u00f6rkqvist and Kallio (2023) follow a similar approach to our graph construction, incorporating dependencies among elements in claims; however, their graphs are designed for prior art search rather than approval prediction. 6 Conclusions and Future Work In this paper, we delve into a domain-specific task, patent approval prediction, where simply scaling up the backbone model of the previous SOTA falls short and simple customized graph methods work well. We conduct comprehensive evaluations of multiple modern LLMs at various scales through delicate manipulations, observing that simply scaling up the model does not guarantee improvement and that delicately designed prompt engineering may yield unexpected outcomes. In addition, based on analyses of real-world patent data, we propose the Fine-grained cLAim depeNdency (FLAN) Graph, a simple yet effective graph method that encodes the inner-claim and inter-claim dependencies and thus consistently outperforms complicated LLM manipulations, dispelling the overconfidence in LLMs for this task. In the future, we will explore explaining, empirically and theoretically, why LLMs fall short in the patent approval prediction task, and augmenting LLMs with simple customized methods to make the most of the power of LLMs and task-specific knowledge. Limitations The major limitations of our work are three-fold: (1) We use only a single dataset for all experiments because few datasets are publicly available in this domain. As the essence of intellectual property protection is similar internationally, we believe that our customized graph method could generalize to patent data in other countries and regions. (2) In the experiments on LLM manipulations, we only train and evaluate the models at the claim level.
Although an increasing number of modern LLMs support extremely long contexts, it remains unclear whether feeding the entire application into the LLM can solve this task. (3) For experiments with the FLAN Graph, we only adopt cost-effective graph neural networks. Though we did not adopt pre-trained graph models, which may bring further improvements, our proposed FLAN Graph is model-decoupled and can be applied to different types of graph models, including GraphLLMs. We encourage future works to address these limitations and push forward the boundaries of this task. Ethical Considerations This paper focuses on patent approval prediction, which facilitates the protection of intellectual property. We collect our dataset from the USPTO open data portal, in accordance with the published ACL paper (Gao et al., 2022). The patent application data that the USPTO releases are publicized by law. Anyone is legally entitled to utilize the data. In fact, the USPTO encourages different usages of the released patent data, such as in academic and business scenarios. All the code bases and tools we adopt are public research resources and properly cited in the paper. Therefore, we do not observe significant ethical risks in our work." + }, + { + "url": "http://arxiv.org/abs/2404.15458v1", + "title": "Can Large Language Models Learn the Physics of Metamaterials? An Empirical Study with ChatGPT", + "abstract": "Large language models (LLMs) such as ChatGPT, Gemini, LLaMA, and Claude are\ntrained on massive quantities of text parsed from the internet and have shown a\nremarkable ability to respond to complex prompts in a manner often\nindistinguishable from humans. We present an LLM fine-tuned on up to 40,000 data\npoints that can predict electromagnetic spectra over a range of frequencies given a\ntext prompt that only specifies the metasurface geometry. Results are compared\nto conventional machine learning approaches including feed-forward neural\nnetworks, random forest, linear regression, and K-nearest neighbor (KNN).\nRemarkably, the fine-tuned LLM (FT-LLM) achieves a lower error across all\ndataset sizes explored compared to all machine learning approaches including a\ndeep neural network. We also demonstrate the LLM's ability to solve inverse\nproblems by providing the geometry necessary to achieve a desired spectrum.\nLLMs possess some advantages over humans that may give them benefits for\nresearch, including the ability to process enormous amounts of data, find\nhidden patterns in data, and operate in higher-dimensional spaces. We propose\nthat fine-tuning LLMs on large datasets specific to a field allows them to\ngrasp the nuances of that domain, making them valuable tools for research and\nanalysis.", + "authors": "Darui Lu, Yang Deng, Jordan M. Malof, Willie J. Padilla", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "physics.optics", + "cats": [ + "physics.optics", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Can Large Language Models Learn the Physics of Metamaterials? An Empirical Study with ChatGPT", + "main_content": "Introduction Deep learning, and particularly deep neural networks (DNNs), have recently emerged as a valuable tool in the field of metamaterials research and have produced many novel results. [1, 2, 3] This data-driven approach to metamaterial design has profound capabilities for both forward [4] and inverse processes.
[5, 6] Once trained, DNNs can accelerate the simulation of material systems by orders of magnitude compared to traditional numerical simulations [7, 8, 9, 10], enabling faster prototyping and exploration. Similarly, in inverse design, these models have successfully discovered state-of-the-art solutions that push the boundaries of what is achievable with metamaterials [11, 12, 13, 14]. Despite these advances, the implementation of DNNs still faces several challenges [1, 2, 3]. As a data-driven method, DNNs necessitate large datasets for training to achieve high accuracy and generalizability. [15] This so-called \"data bottleneck\" issue is compounded by the question of interpretability: understanding and explaining model predictions remains a significant hurdle. [16, 17] This has led to the pursuit of models that are capable of learning effectively from smaller datasets, leading to the development of techniques such as transfer learning [18, 19, 20, 21] and physics-informed/driven DNNs [22, 9, 23, 24, 25, 26, 27]. Foundational models are sophisticated, large-scale DNNs trained on extensive, diverse datasets. [28] This training enables them to generalize knowledge across various domains without the need for substantial task-specific data. [28, 29] However, the extravagant cost of acquiring large, diverse datasets for training a physics foundational model is prohibitive for academic researchers. [30] In this work we hypothesize that with LLMs, such as ChatGPT, we may be able to leverage their broad domain capabilities to reason about physical systems with far less training data than existing DNN-based models. Rather than building a foundational physics model from scratch, we explore the potential of repurposing existing foundational models to address problems in metamaterial design, and show promising results. [29, 31] Large language models (LLMs) like generative pre-trained transformers (GPTs) have recently emerged as foundational models primarily designed to handle natural language processing tasks. [29] By harnessing vast amounts of text data, these models learn to predict the next word in a sentence, thus acquiring an ability to construct coherent and contextually relevant text. Their design incorporates a deep understanding of language structure and encapsulates broad knowledge across diverse domains, enabling them to perform reasoning tasks. [32] For instance, LLMs can engage in conversations, translate languages, summarize texts, and even generate content that mimics human writing styles. [29, 31] The multifaceted capabilities of LLMs are rooted in their extensive training on diverse datasets. [33] First, they integrate a vast range of information from their training sets, making them repositories of wide-reaching knowledge about much of the world. This allows them to recall and leverage facts, concepts, and relationships when generating responses. Second, LLMs can perform reasoning tasks based on the information they have been trained on, enabling them to handle queries that require logical deductions, problem-solving, or creative generation. This can range from solving mathematical problems to crafting detailed narratives or technical explanations.
Lastly, the ability of LLMs to explain their reasoning process adds a layer of interpretability, often allowing users to understand the steps the model took to arrive at a conclusion or response, thus providing insights into the model\u2019s thought process. In the case of metamaterial design, LLMs such as GPT could revolutionize how physical systems are modeled and understood with minimal training data. [34] The diverse, extensive textual training of LLMs encompasses fundamental physics concepts, which are crucial for understanding the dynamics of metamaterials. By internalizing multiple laws of physics and their applications, LLMs can potentially extrapolate and make accurate predictions about new metamaterial scenarios. Such potential could lead to a more efficient process for predicting metamaterial properties and behaviors by leveraging learned physical laws rather than relying solely on the extensive empirical data typical of traditional DNN approaches. LLMs could significantly speed up the design and simulation processes in metamaterial engineering. In light of this, our study aims to initiate the field by examining the feasibility of employing currently available LLMs to tackle a metamaterials challenge. Building on recent findings that suggest LLMs\u2019 proficiency in scientific regression and classification tasks [35], we explore their potential in predicting the electromagnetic spectra of metamaterials \u2013 a problem expressible in textual terms. This adaptability of LLMs has already been demonstrated in chemistry [36, 37], optics [38], and mechanics [39], signaling their versatility across various scientific fields. Our investigation looks explicitly at all-dielectric metasurfaces, comparing the capabilities of LLMs with established machine learning models. To our knowledge, there has yet to be a study of the use of LLMs on regression tasks with high-dimensional outputs, such as those often encountered in metamaterial problems. We find that a fine-tuned LLM (FT-LLM) achieves a lower error metric on all dataset sizes than several machine-learning-based approaches, including a deep neural network. We also probe the capability of the FT-LLM to generate physically meaningful insights on all-dielectric metasurfaces (ADMs) and compare results to an out-of-the-box LLM. Through this work, we highlight the potential of LLMs as powerful tools in scientific exploration, potentially broadening the horizons for future innovations and discoveries in metamaterials and beyond. 2 Related Work In this section, we focus mainly on previous research on deep learning in metamaterials and the application of LLMs in science. Deep Learning for Metamaterial Simulation Machine learning approaches offer an effective solution to the above challenges and have gained significant attention over the last decade. Due to their strong generalizability, machine learning and deep learning methods can discover a general mapping from geometry to electromagnetic spectra. There are many options for representation of the metamaterial geometry as input to the DNN, which directly influences the deep learning architecture. Most research efforts in metamaterials employ a one-dimensional (1D) vector to represent metamaterial structures; thus, they apply fully connected deep neural networks (DNNs). [40] This approach has been explored in various works. [13, 41, 7] In other work, a transformer was used to perform forward prediction and achieved superior prediction accuracy compared to standard MLP models.
[42] On the other hand, some papers conceptualized metamaterials as two-dimensional (2D) arrays, where binary values (0 and 1) represent the different materials, effectively mapping the geometry. This mapping aligns well with the convolutional neural network (CNN) architecture, and good results have been shown. [43, 44, 45] For example, the developed CNN models could make accurate predictions of the dispersion relation for a given structural configuration. Advancements in Large Language Models for Science With the appearance of GPT-3.5 [29], LLMs have seen rapid development. Indeed, various models, such as GPT-3.5 [29] and LLaMA [31], have shown great capabilities in language understanding and text generation. [33] These advancements have extended beyond traditional text-based applications, and LLMs are increasingly being used in various scientific endeavors, such as knowledge extraction [46] and automating experimental procedures [34, 47]. Recently, LLMs have been shown to possess good capabilities for regression and classification tasks. For example, the so-called language-interfaced fine-tuning for non-language machine learning tasks (LIFT) was shown to transform tabular data into text sentences for fine-tuning of GPT-J without altering the structure and loss of the model. [35] Subsequent studies have further explored the utility of LLMs in specific scientific domains. A survey on GPT-3 performance for chemistry tasks was undertaken which demonstrated that LLMs have better data efficiency than a base machine learning model for classification tasks. [36] One study used in-context learning to predict the compressive strength of concrete, [37] while another found that material structures can be encoded as linear text descriptions to fine-tune Llama 2. [39] However, the application of LLMs for metamaterials research remains underexplored. In addition, previous works have focused on regression problems with relatively small output dimensions and limited datasets. Our study seeks to bridge this gap by examining the performance of LLMs in addressing the multidimensional regression challenges inherent in the simulation of electromagnetic metamaterials. Our objective is to elucidate the potential of LLMs to advance the field of metamaterials. 3 Methodology In this study, we employed LIFT [35], allowing for the adaptation of LLMs for metamaterial regression tasks without the need for architectural modifications or alterations to the loss function. The workflow of our method is shown in Figure 1. It contains three parts: data transformation, fine-tuning, and inference. Figure 1: The figure illustrates our comprehensive workflow, which begins with simulation to acquire geometry-spectrum datasets. Subsequently, these numerical geometric vectors are transformed into textual descriptions. We fine-tune GPT-3.5 via OpenAI\u2019s API. For the inference phase, we employ the fine-tuned model to predict the absorptance. Data Transformation Our first step is to transform the numerical data into a suitable format for input to the LLM. The all-dielectric metamaterial we explore is a unit cell consisting of four elliptical resonators, and therefore has 14 geometrical parameters of height, periodicity, semi-major axis, semi-minor axis, and rotation angles \u2013 see Figure 1.
These 14 parameters are denoted as (h, p, r_{ma,1}, r_{mi,1}, r_{ma,2}, r_{mi,2}, r_{ma,3}, r_{mi,3}, r_{ma,4}, r_{mi,4}, \u03b8_1, \u03b8_2, \u03b8_3, \u03b8_4). We employ a gene vector to encode the geometry of the metamaterial. [39] This encoding uses a series of numbers to define the geometry, and follows the template: \"The All-dielectric metasurface suspend in free space is: Get the absorptivity\". For the output spectrum, we maintain three-decimal-point precision, generating completions such as '[0.001, ..., 0.678]'. Model Fine-Tuning and Inference The generated sentence data is then used to fine-tune the large language model, here GPT-3.5. Since GPT-3.5 is a black-box model, we use OpenAI\u2019s API for fine-tuning. The fine-tuning phase adapts the model to the output structure of our dataset, focusing on generating 50-length vectors to represent absorptivity spectra. During inference, the fine-tuned model outputs a 50-length vector as the absorptivity curve, such as '[0.001, ..., 0.864, ...]'. We convert this text list back to numbers for comparison with ground-truth data. It\u2019s important to note that the output of an LLM may not match the expected output length because LLMs are generative models. For a given geometry g, the predicted spectrum s_p produced by the LLM is a vector of length L_a, while the ground truth s_t has length L_b. It may happen that L_a \u2260 L_b. To solve this, we implemented an alignment strategy that compares only the first min(L_a, L_b) elements of the predicted and true spectra vectors. Notably, we find that the most common invalid value of L_a is 51. Also, we may fail to convert the LLM\u2019s textual output to numerical values. For example, an output such as \"0.0.13\" that cannot be directly converted to a valid numeric value is identified as an anomaly. In these cases, the output is flagged for regeneration to ensure that the model predictions are in the expected format. 4 Experimental Design and Resources Dataset Our study harnesses the dataset initially introduced and benchmarked in previous studies [13, 40], chosen for its relevance to understanding the capability of LLMs in designing metasurfaces. This dataset is open-access, and the structured vector formats of its geometrical inputs and spectral outputs offer an advantageous footing for prompt engineering in LLMs. Such characteristics are helpful for physically meaningful data manipulation and model training, enabling a higher degree of interpretability of feature importance. Below, we delineate the metasurface geometry and spectra in the dataset. The all-dielectric metasurface is fashioned from silicon carbide and operates within the 150-500 THz frequency range. This complex structure is defined by a supercell comprising four elliptical resonators, each positioned at the center of a subdivided quadrant within a square supercell. This metasurface\u2019s geometric configuration is given as a 14-dimensional vector: [h, p, r_{ma,1}, r_{mi,1}, r_{ma,2}, r_{mi,2}, r_{ma,3}, r_{mi,3}, r_{ma,4}, r_{mi,4}, \u03b8_1, \u03b8_2, \u03b8_3, \u03b8_4]. The periodicity parameter p specifies the side length of the supercell, setting the foundational operating range for the resonator array, with the height parameter h establishing the uniform height of all resonators. The dimensions of each elliptical resonator along the x and y axes are proportionally scaled to the supercell\u2019s periodicity through the radius x-axis ratio r_{ma,i} and radius y-axis ratio r_{mi,i} for each resonator, respectively.
Additionally, the orientation of each elliptical resonator is adjusted through a rotational angle \u03b8_i, measured in radians, about the x-axis. All parameters are integral for defining the electromagnetic response of the metasurface, with units in \u00b5m. Given the challenges associated with processing high-dimensional data by LLMs, the spectrum output was manipulated by first downsampling from 2000 frequency points to 100 frequency points. Then, we select only 50 points from the 150-350 THz frequency range, aiming to refine the LLM\u2019s predictive accuracy and computational efficiency within the expansive operational bandwidth. Data Handling To ensure experimental integrity, we divided the dataset into three distinct, independently sampled sets: training, validation, and test. The training set was randomly selected for each dataset size\u2019s training session, whereas the validation and test sets, comprising 1,000 samples each, were specified prior to the experiments. Optimal model selection was based on validation set performance, with the test set used for final evaluation. This approach guarantees the reliability and reproducibility of our comparison between baseline and large language models for metamaterials research. Scoring Metrics This section outlines the metrics employed for evaluating the performance of both baseline and LLM models. It is pertinent to acknowledge that the selected baseline models all utilize Mean Squared Error (MSE) as a training criterion, which could inherently bias the evaluation in their favor as opposed to LLMs, which are predominantly trained using a variant of cross-entropy loss. Despite this discrepancy, MSE was selected for the regressive metamaterials problem as it is well suited for training on regression tasks due to its robustness in quantifying the average squared difference between predicted and actual values. In predicting electromagnetic spectra for metamaterials, the MSE is modified to be:

MSE = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{f} \sum_{j=1}^{f} (S_{i,j} - \hat{S}_{i,j})^2,   (1)

where n is the number of samples, f is the number of frequency points in the spectrum, S_{i,j} represents the absorptivity value for the i-th sample at the j-th frequency point, and \hat{S}_{i,j} denotes the model-predicted value. For a thorough evaluation of all regression models in the study, we benchmark their performance using two additional regression metrics: Mean Absolute Error (MAE) and Mean Absolute Relative Error (MARE). Specifically for the metasurface problem, these metrics have been adapted as follows:

MAE = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{f} \sum_{j=1}^{f} |S_{i,j} - \hat{S}_{i,j}|,   (2)

MARE = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{f} \sum_{j=1}^{f} \frac{|S_{i,j} - \hat{S}_{i,j}|}{|S_{i,j}|},   (3)

Moreover, the relevance of cross-entropy loss, particularly for LLMs, cannot be overlooked. This loss function is defined as:

\text{Cross-Entropy Loss} = -\sum_{c=1}^{M} y_{o,c} \log(p_{o,c}),   (4)

where M represents the number of classes, y_{o,c} is a binary indicator of whether class label c is the correct classification for observation o, and p_{o,c} is the predicted probability of observation o being in class c. To ensure a fair and uniform assessment across all models, performance metrics are exclusively reported in terms of MSE, MARE, and MAE on the test set. This standardized evaluation criterion facilitates a straightforward comparison across different models applied to the all-dielectric metasurface problem.
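Tying the methodology together, the following is a minimal sketch, assuming numpy and illustrative helper names, of the prompt construction, completion parsing, length alignment, and the per-spectrum errors behind Eqs. (1)-(3); it is not the authors' implementation, and the placement of the gene vector inside the quoted template is an assumption.

# Minimal sketch (not the authors' code) of the LIFT-style pipeline:
# geometry -> text prompt, completion text -> numbers, length alignment,
# and the per-spectrum error terms of Eqs. (1)-(3).
import re
import numpy as np

def geometry_to_prompt(g):
    # g: the 14 geometric parameters (h, p, radius ratios, rotation angles).
    values = " ".join(f"{v:.3f}" for v in g)
    # Where the gene vector sits inside the paper's template is an assumption.
    return ("The All-dielectric metasurface suspend in free space is: "
            f"{values} Get the absorptivity")

def parse_completion(text):
    # Extract floats from completions like '[0.001, ..., 0.678]'. A stricter
    # validator would also flag malformed tokens such as '0.0.13' for regeneration.
    tokens = re.findall(r"-?\d+\.\d+", text)
    return np.array([float(t) for t in tokens]) if tokens else None

def spectrum_errors(pred, true):
    # Align to the first min(L_a, L_b) points, then compute the per-sample
    # terms; averaging over n samples gives the MSE/MAE/MARE of Eqs. (1)-(3).
    m = min(len(pred), len(true))
    p, t = pred[:m], true[:m]
    return (np.mean((t - p) ** 2),               # squared error, Eq. (1)
            np.mean(np.abs(t - p)),              # absolute error, Eq. (2)
            np.mean(np.abs(t - p) / np.abs(t)))  # relative error, Eq. (3)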
Baseline Models To benchmark the performance of LLMs, we incorporate four other machine learning algorithms: Feed-forward Neural Networks (NN), Random Forests (RF), K-Nearest Neighbors (KNN), and Linear Regression (LR). The selection of the NN was motivated by its published efficacy in various practical applications, particularly within the domain of metamaterials research, where its ability to model complex, nonlinear relationships is highly valued [1, 2, 3]. Conversely, RF, KNN, and LR represent classical machine learning algorithms designed to address regression challenges. Given that the design of metamaterials predominantly poses regression problems, these algorithms were deemed especially suitable for benchmarking against the metasurface design challenges alongside LLMs. Our choice of algorithms aims to encompass both the cutting-edge capabilities of neural networks and the robust, well-established methodologies of classical machine learning, ensuring a comprehensive and fair evaluation of LLM applicability and performance in all-dielectric metasurface design. Prior to model training, our dataset undergoes preprocessing steps to ensure optimal model performance. The geometry inputs are normalized to a range of [-1,1], promoting convergence in the NN training process. The absorptivity spectra, already within the range of [0,1], require no additional preprocessing. We allow 30 iterations of Bayesian optimization to fine-tune each model\u2019s hyperparameters. The selection of optimal hyperparameters is determined based on performance metrics evaluated on the validation set, with the final model performance reported against the test set. Experimental Resources and Accessibility For the training and inference of NNs, we utilize NVIDIA RTX 3090 GPUs, employing the PyTorch library to facilitate our computations. The execution of RF, KNN, and LR models is conducted on an Intel\u00ae Xeon\u00ae Gold 6226 CPU, leveraging their computational efficiency for these specific algorithms. 5 Experimental Results and Discussion In this section, our primary focus is investigating the performance of large language models and comparing them with other baseline models (Sect. 5.1). We also examine the impact of temperature (Sect. 5.2) and the influence of the prompt template (Sect. 5.3). Additionally, we explore model performance on inverse design (Sect. 5.4) and interpretability (Sect. 5.5). 5.1 Data Size Influence We trained our Large Language Model (LLM) and other baseline models using varying sizes of training data and evaluated their performance on a consistent test set comprising 1000 samples. Figure 2 illustrates the performance of the models in different data size scenarios. We set the temperature to 0.5 when testing the GPT model. A more detailed discussion of the impact of temperature on performance can be found in Section 5.2. Figure 2: Evaluations of model performance with varying dataset sizes. (a) MARE and (b) MSE trends for baseline models and the fine-tuned GPT model as dataset size increases. All the results presented are averages from three models. However, the GPT model results at the 10,000, 20,000, and 40,000 data points are exceptions, as computational constraints limited these to single trials. Error bars indicate the standard deviation of the three models. Regarding MARE, Fig. 2(a) shows the superior performance of GPT across all data scenarios.
Particularly in relatively low-data (1,000-10,000 samples) environments, GPT outperforms the neural network models by a wide margin. With more than 10,000 training samples, the performance of the NN approaches that of GPT. However, the analysis based on MSE offers a different perspective. Based on Fig. 2(b), we observed that fine-tuned GPT-3.5 performs the poorest among all models in the low-data scenario (\u22641000 samples). However, as the size of the dataset increases, the fine-tuned GPT shows a remarkable improvement in performance, outpacing some baseline models. In particular, for 40,000 training samples, the fine-tuned GPT-3.5 emerges as the top-performing model, with its MSE only slightly inferior to that of neural networks. This improvement underscores the LLM\u2019s capacity to identify and apply complex patterns from extensive datasets. Even with 40,000 training samples, the performance of GPT-3.5 has not yet reached convergence. Furthermore, the slope of the GPT-3.5 learning curve is much steeper than that of the other models. We anticipate that with more training data, the performance of our GPT model will narrow the gap with neural networks and may even surpass them. The observed differences in model performance between the MSE and MARE evaluations can be attributed to the sensitivity of these metrics to different error types. MSE penalizes large deviations in absolute values between predicted and actual values. However, MARE provides a normalized error measure that reflects the accuracy of the model relative to the magnitude of the actual values. This metric focuses on the proportionality of the error. Given that our baseline models are optimized towards minimizing MSE, they inherently focus more on reducing large absolute-value errors. However, the GPT model is fine-tuned on a cross-entropy loss, which maintains a consistent performance across all ranges of values. 5.2 Temperature Influence Temperature plays a crucial role in LLM output by influencing the level of randomness in the generated results. In the low-temperature setting, the model tends to produce the most probable results based on its training data. Conversely, a high-temperature setting increases the randomness and diversity of the model\u2019s output, making it more likely to generate less probable values. Our tests explored the effects of varying temperature settings, specifically [0, 0.25, 0.5, 0.75, 1], on model performance, as shown in Fig. 3(a). Our findings indicate that the impact of temperature is related to the size of the dataset. Specifically, for small datasets (\u226410,000 samples), the setting temperature = 0 leads to poor predictions, indicative of an overreliance on training data that may not capture the input-output relationship effectively with limited data. In contrast, as the dataset expands, a low temperature near zero is conducive to minimizing the Mean Squared Error (MSE), as the LLM\u2019s predictions are increasingly informed by the enriched data. However, an extremely high temperature (1.0) decreases performance across all data sizes, as over-randomized output is undesirable in regression tasks. Interestingly, as the volume of training data increases, the optimal temperature setting tends to be lower, as shown in Fig. 3(b). Although moderate randomness can improve output quality in data-constrained scenarios, it decreases output accuracy in data-rich environments.
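As a concrete illustration of this sweep, the sketch below queries a fine-tuned model at each temperature studied; it assumes the openai Python client (v1 interface), and the model ID and gene-vector values are placeholders rather than the authors' actual artifacts.

# Minimal sketch (not the authors' code) of the temperature sweep in Sec. 5.2.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
FT_MODEL = "ft:gpt-3.5-turbo:my-org::placeholder"  # hypothetical model ID

example_prompt = ("The All-dielectric metasurface suspend in free space is: "
                  "0.50 2.50 0.30 0.20 0.35 0.25 0.30 0.20 0.25 0.15 "
                  "0.10 0.40 0.80 1.20 Get the absorptivity")  # illustrative values

for temperature in [0.0, 0.25, 0.5, 0.75, 1.0]:
    resp = client.chat.completions.create(
        model=FT_MODEL,
        messages=[{"role": "user", "content": example_prompt}],
        temperature=temperature,
    )
    text = resp.choices[0].message.content  # e.g. '[0.001, ..., 0.864]'
    # parse with parse_completion() and score with spectrum_errors() against
    # the simulated ground truth, as sketched in Sec. 4 above
    print(temperature, text[:60])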
Figure 3: Evaluations of model performance with varying temperature settings. (a) MSE trends for the GPT model fine-tuned on different numbers of training samples as the temperature increases. (b) The temperature that gives the best MSE for different data sizes. The results are averaged from three trials. 5.3 Prompt Influence Large language models, trained on extensive datasets, are adept at processing information about a wide array of topics, including electromagnetism. We tested whether providing a more detailed geometric description prompt improves model performance by leveraging this knowledge. To this end, we evaluated the impact of prompt designs on the performance of fine-tuned GPT. In addition to the vector representation template, we propose a detailed description prompt template, aimed at enhancing the model\u2019s understanding of our task. This approach hypothesizes that a detailed contextual introduction might help the model to grasp the physical implications of the parameters. Examples of these templates are provided in Table 1. We fine-tuned GPT-3.5 using both prompt designs across datasets of varying sizes. The MSE comparison is shown in Figure 4. Our findings reveal an intriguing observation: both prompt designs yield nearly indistinguishable performance across all training set sizes. This consistency suggests that whether the input data is presented in a concise vector form or in a detailed description does not significantly influence the predictive accuracy of the model. The minimal impact of the feature names on the performance of the model is consistent with the insights from the previous paper [35]. Figure 4: Evaluations of model performance with two different prompt templates. These templates are provided in Table 1. All the results presented are averages from three models. However, the results at the 10,000 data points are exceptions, as computational constraints limited these to single trials. 5.4 Inverse Design An important goal of deep learning in metamaterials is to use a model to generate a geometry with a desired spectrum, which is called inverse design. Compared to regression, it is a difficult, one-to-many problem [40]. In this approach, we follow the concept of the neural-adjoint method [13], in which the inverse design is achieved by directly querying the well-trained forward LLM. Table 2 illustrates the example prompt and outputs. Unfortunately, this strategy did not produce successful results. On the one hand, models trained on datasets exceeding 10,000 samples were prone to producing invalid output. In the majority of instances, despite the imposition of strict constraints on the output format, our model disregarded the instruction and merely provided a list of numbers. In some cases, our model provided responses that were not only erroneous but also nonsensical. This may be attributed to the lack of diversity in our training dataset. As all the completions are lists of numbers, this dominance in the training data appears to have skewed the model\u2019s learning process. On the other hand, models fine-tuned with small datasets could generate geometries, yet the majority of these are erroneous. Since our model is not trained to design metamaterials, it lacks this ability.
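For illustration only, a direct inverse query of the forward FT-LLM could look like the sketch below; since the exact prompt of Table 2 is not reproduced in this text, the wording here is a hypothetical stand-in, as is the model ID. A neural-adjoint-style check would then feed the returned geometry back through the forward model and compare the predicted spectrum with the target.

# Minimal sketch (not the authors' code) of the direct inverse query of Sec. 5.4.
# The prompt wording is hypothetical; Table 2's exact template is not shown here.
import re
from openai import OpenAI

def inverse_design(client, ft_model, target_spectrum):
    spectrum_text = "[" + ", ".join(f"{v:.3f}" for v in target_spectrum) + "]"
    prompt = ("Given the absorptivity " + spectrum_text + ", return the 14 "
              "geometric parameters of the metasurface as a list of numbers.")
    resp = client.chat.completions.create(
        model=ft_model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.5,
    )
    numbers = [float(t) for t in
               re.findall(r"-?\d+\.?\d*", resp.choices[0].message.content)]
    return numbers if len(numbers) == 14 else None  # invalid output: regenerate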
5.5 Interpretability The previous section discussed the fine-tuned GPT as a standard regression model. In this section, we evaluate the fine-tuned GPT\u2019s comprehension of electromagnetic metamaterials by asking questions about the impact of altering the geometry features. Table 3 presents the questions and answers. Contrary to expectations, the fine-tuned GPT did not demonstrate a significantly better understanding than the original model. This observation suggests that training on geometry-spectra pairs might not give the model a holistic grasp of the physical concepts of our task. Additionally, we observed a stylistic difference: the fine-tuned GPT tends to produce one-paragraph answers, in contrast to the original GPT\u2019s piecewise interpretation. The difference in output style could be due to the format of our training data. The completions are a single paragraph, which can significantly affect the output style of the model. 6 Conclusion In this paper, we fine-tuned LLMs to predict electromagnetic spectra and address design challenges in metamaterials. The experimental results indicate that LLMs, particularly when fine-tuned with extensive training data, can achieve competitive performance in high-dimensional regression tasks. In terms of MARE, the LLM exhibited superior performance. This indicates that LLMs can effectively capture complex relationships between geometry and spectra. However, we also face some limitations. We tested LLMs\u2019 performance in inverse design. The unexpected results show that the performance of LLMs in one-to-many tasks remains inadequate. Additionally, the reliance on extensive training datasets and the high cost of fine-tuning LLMs limit their practical application, especially in limited-data or limited-budget scenarios. Future work could focus on improving the geometry representation and fine-tuning adjustments. Inspired by the success of SMILES [48] in cheminformatics, a notation that uses ASCII strings to represent the structure of a molecule, developing encoding schemes to represent the geometry of metamaterials in linear text could improve LLM performance. Additionally, the cross-entropy loss used for LLM fine-tuning may not align well with our regression tasks. Therefore, exploring fine-tuning algorithms specifically designed for regression, or reformulating regression challenges as classification tasks, could enhance the performance of LLMs. Acknowledgments D.L. and Y.D. acknowledge the assistance of ChatGPT, developed by OpenAI, for editing and language improvement." + }, + { + "url": "http://arxiv.org/abs/2404.15406v1", + "title": "Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs", + "abstract": "Multimodal LLMs are the natural evolution of LLMs, and enlarge their\ncapabilities so as to work beyond the pure textual modality. As research is\nbeing carried out to design novel architectures and vision-and-language\nadapters, in this paper we concentrate on endowing such models with the\ncapability of answering questions that require external knowledge. Our\napproach, termed Wiki-LLaVA, aims at integrating an external knowledge source\nof multimodal documents, which is accessed through a hierarchical retrieval\npipeline. Relevant passages, using this approach, are retrieved from the\nexternal knowledge source and employed as additional context for the LLM,\naugmenting the effectiveness and precision of generated dialogues.
We conduct\nextensive experiments on datasets tailored for visual question answering with\nexternal data and demonstrate the appropriateness of our approach.", + "authors": "Davide Caffagni, Federico Cocchi, Nicholas Moratelli, Sara Sarto, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.MM" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs", + "main_content": "Introduction Recently, Large Language Models (LLMs) have demonstrated impressive performance in zero-shot textual tasks. Specifically, recent literature has devised models capable of tackling diverse tasks, as instructed by the user [6, 30, 41]. In this context, the classical approach is that of fine-tuning a model on varied tasks that are described through natural language [7, 34], thus empowering the model to assimilate externally provided instructions and facilitating robust generalization across multiple domains. Following these advancements, the computer vision community has started to investigate the extension of such models to vision-and-language contexts, thus generating Multimodal Large Language Models (MLLMs). Along this line, the fusion of visual features into LLM backbones through vision-to-language adapters [1, 21, 23, 48] has induced notable performance improvements, enabling extensive generalization to vision-and-language tasks requiring elaborate visual descriptions. \u2217Equal contribution. Figure 1. Comparison between a standard multimodal LLM and Wiki-LLaVA. Our model integrates knowledge retrieved from an external knowledge base of documents through a hierarchical retrieval pipeline. As a result, it provides more precise answers when tasked with questions that require external knowledge. In this context, MLLMs excel by simply including a small module (i.e., an adapter) that aligns visual features with textual ones. However, despite these models being built upon LLMs trained on large-scale data, they exhibit notable limitations when confronted with highly specific user queries or when a certain degree of compositional reasoning is required to formulate the response. Moreover, certain knowledge proves challenging to encode within the parameters of an MLLM, due to the scarcity of long-tail information in the training data. In response to this challenge, different benchmarks have recently been introduced for evaluating the capabilities of MLLMs to tackle queries related to external data, such as InfoSeek [5] and Encyclopedic-VQA [28]. While different works [8, 20, 21, 32] have been evaluated on these benchmarks, underscoring the significance of this area, none of them has developed architectures specifically designed for tackling external knowledge.
Building on these considerations, in this paper we propose the first MLLM augmented with a retrieval module, thus shifting the focus towards teaching the model to leverage diverse information in its responses and learning to discern the relative importance of each. In particular, our model retrieves appropriate information from an external knowledge base of documents and employs a hierarchical retrieval approach to identify relevant passages. This additional knowledge is then fed to an MLLM, without changing its structure but improving its answering capabilities. To the best of our knowledge, our work represents the first MLLM to harness the retrieval capability of external sources. We assess the quality of the proposed approach by conducting extensive experiments and comparisons with respect to recent MLLMs [8, 21, 24] and by showcasing the effectiveness of our design choices. Experimental results demonstrate the advantage of retrieving from external sources and the appropriateness of our model design. Overall, we conceive our work as a first step in the direction of retrieval-augmented MLLMs, which could foster future works in the same area. 2. Related Work Multimodal LLMs. LLMs have significantly reshaped the landscape of AI research and applications, spearheaded by notable examples like OpenAI\u2019s ChatGPT and GPT-4. These models leverage alignment techniques such as instruction tuning [30] and reinforcement learning from human feedback [39] and achieve remarkable capabilities in language understanding and reasoning. Open-source LLMs like Flan-T5 [7], Vicuna [6], LLaMA [41], and Alpaca [40] have further accelerated the advancement within the research community. This surge in the development of LLMs subsequently led to the emergence of MLLMs [3], which can combine the understanding of visual inputs with natural language generation. Early attempts at building MLLMs such as VisualGPT [4] and Frozen [42] used pre-trained language models to enhance vision-and-language models specifically for tasks like image captioning and visual question answering. This initial investigation paved the way for subsequent research in this domain, with the introduction of solutions such as Flamingo [1] or BLIP-2 [21], which integrate image features into LLMs through, respectively, trainable cross-attention layers directly within the LLM or Q-Former blocks that combine image and textual features via learnable queries. Building upon these advancements, subsequent models like FROMAGe [19], Kosmos-1 [14], and MiniGPT-4 [48] have been introduced to further refine the interplay between visual and language modalities within the LLM architecture. Concurrently, the LLaVA family of models [23\u201325] introduced the usage of instruction tuning in the multimodal domain, by training on a curated dataset collected with GPT-4. This strategy is now among the most promising recipes for building MLLMs. Retrieval-augmented language models. In recent years, retrieval-augmentation has been applied to language models by expanding their input space with relevant text passages extracted from external sources [10] or retrieved directly from the web [29]. These techniques have demonstrated large improvements in knowledge-intensive tasks and significant savings in terms of model size. Traditionally, the integration of external knowledge into textual generation has been confined to the initial stages.
Different solutions [17] proposed to adaptively retrieve passages for generation on top of a proprietary LLM. Some works [10], instead, focused on capturing knowledge in a more modular and interpretable way, by augmenting the language model pre-training with a latent knowledge retriever. This allows the model to retrieve and attend to documents taken from a large corpus such as Wikipedia. While much attention has been directed towards textual augmentation, similar research efforts have recently been dedicated to vision-and-language tasks [2, 13, 31, 37]. Following this direction, the work presented in [13] proposed a retrieval-augmented visual-language model that encodes world knowledge into a large-scale memory. Other approaches [35, 36] also apply retrieval to specific downstream tasks such as image captioning. Differently from all the aforementioned approaches, our work is the first to apply retrieval-augmentation to MLLMs. We do this by applying a hierarchical retrieval strategy on top of a knowledge base made of multimodal documents. Knowledge-based visual question answering. Recently, the emergence of new benchmarks like Encyclopedic-VQA [28] and InfoSeek [5] has raised the difficulty of standard knowledge-based VQA [16, 27, 38] with questions that require intensive knowledge about specific entities, such that even LLM-based models perform poorly without retrieving information from external sources. Often, contrastive image-text encoders are employed to retrieve the target entity given the query image [44, 46]. Then, the entity name is used as a key to access an external knowledge base, which is typically composed of several text passages that encompass the correct answer. In this work, we design a hierarchical retrieval scheme based on CLIP [33] and the Contriever model [15] to extract relevant passages, and we feed them to an MLLM to help the answer generation. Figure 2. Overview of the architecture of Wiki-LLaVA, which augments a multimodal LLM with external knowledge through a hierarchical retrieval pipeline. 3. Proposed Method Our goal is to equip Multimodal LLMs (MLLMs) with the ability to answer complex and specific questions that cannot be addressed solely through the image content and pre-trained knowledge. To achieve this, we propose Wiki-LLaVA, which integrates external knowledge derived from an external memory into the LLaVA model, without significantly altering its design. Instead, we augment the capabilities of the model by incorporating retrieval information as additional input context. Overall, Wiki-LLaVA comprises three components, as shown in Fig. 2: a visual encoder, which is employed to provide the MLLM with visual context and as a query to retrieve from an external knowledge base, the knowledge base itself (e.g., Wikipedia), and a hierarchical retrieval module which retrieves relevant documents and passages from the external knowledge base, to be employed as additional context for the MLLM. 3.1. Knowledge-based Augmentation Multimodal integration and autoregressive generation.
An MLLM usually takes as input a multimodal query, comprising both image and text, and generates a textual output in an autoregressive manner. Formally, the architecture is trained to model a probability distribution p(w_t | I, w_0, w_1, ..., w_{t-1}, \u03b8), where \u03b8 denotes the parameters of the model, I represents an input image, and w_0, ..., w_{t-1} denotes the textual prompt. The textual prompt usually includes a pre-defined system-level prompt and a question related to the input image, given by the user. Clearly, a standard MLLM can only rely on the user prompt, the input image, and the knowledge stored in its internal parameters (i.e., \u03b8) to accommodate requests, thus limiting its ability to answer questions that rely on external knowledge. In the rest of the paper, we employ LLaVA [24] as our reference MLLM. LLaVA exploits the capabilities of a pre-trained LLM (i.e., Vicuna [6]) and a pre-trained visual model (i.e., a CLIP-based visual encoder [33]), which are interconnected through an MLP adapter, in charge of converting CLIP features to dense input tokens. For an input image I, therefore, LLaVA utilizes a pre-trained CLIP visual encoder E_v, extracts a dense grid of visual features Z_v = E_v(I), which is then projected via a learnable MLP to produce a sequence of dense embedding tokens v_0, v_1, ..., v_N. Finally, these are prepended to the system prompt, and the full sequence of visual and textual tokens is then given as input to the LLM component of the model. Augmentation with external knowledge. To augment the MLLM with external knowledge, we enrich the input context by injecting relevant textual data from an external memory composed of documents. Formally, the distribution of the MLLM is conditioned on additional textual retrieval-knowledge tokens, leading to

p(w_t | v_0, v_1, ..., v_N, w_0, w_1, ..., w_{t-1}, e_0, e_1, ..., e_{\u03c4}),   (1)

where v_0, ..., v_N are the visual tokens, w_0, ..., w_{t-1} is the system and user prompt, and e_0, ..., e_{\u03c4} represents the added tokens retrieved from the external memory. Differently from the standard formulation of MLLMs, by enriching the input context we allow the model to generate more specific answers by exploiting tokens retrieved from the memory. Hierarchical retrieval from an external memory. The external memory comprises a collection of (document, image, text-title) triplets taken from documents, denoted as D = {(d_i, t_i)}_i. Within this memory, we conduct a hierarchical two-step search to retrieve appropriate information. Initially, we locate the most pertinent document, followed by identifying the relevant passage inside a particular document, which is subsequently exploited as additional input context in the MLLM. In the first stage, given an input query image I we perform an approximate k-nearest neighbor search into the external memory, using document titles as retrievable keys. The similarity between the query image and the text titles is modeled as the inner product between their respective embeddings, which are computed through the visual and textual CLIP encoders (i.e., E_v and E_t), as follows:

sim(I, t_i) = E_v(I) \cdot E_t(t_i)^T.   (2)

Then, the knowledge retriever returns the top-k documents associated with the most relevant items retrieved using the aforementioned procedure.
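As an illustration of the two stages, the sketch below scores document titles against the query image with CLIP and then scores 600-character chunks of the retrieved documents against the question with Contriever; it assumes the Hugging Face checkpoints openai/clip-vit-large-patch14-336 and facebook/contriever, and its brute-force scoring stands in for the approximate index described in Sec. 4.2.

# Minimal sketch (not the authors' code) of the hierarchical retrieval:
# stage 1 ranks document titles against the query image with CLIP (Eq. 2);
# stage 2 ranks document chunks against the question with Contriever.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
ctr_tok = AutoTokenizer.from_pretrained("facebook/contriever")
ctr = AutoModel.from_pretrained("facebook/contriever")

def mean_pool(hidden, mask):
    # Contriever embeddings are mean-pooled token states.
    hidden = hidden.masked_fill(~mask[..., None].bool(), 0.0)
    return hidden.sum(dim=1) / mask.sum(dim=1)[..., None]

@torch.no_grad()
def retrieve(image: Image.Image, question: str, titles, docs, k=1, n=1):
    # Stage 1: sim(I, t_i) = E_v(I) . E_t(t_i)^T over the document titles.
    inputs = clip_proc(text=titles, images=image,
                       return_tensors="pt", padding=True)
    top_docs = clip(**inputs).logits_per_image[0].topk(k).indices.tolist()
    # Stage 2: inner product between the question and 600-character chunks.
    passages = []
    for d in top_docs:
        chunks = [docs[d][i:i + 600] for i in range(0, len(docs[d]), 600)]
        enc = ctr_tok([question] + chunks, padding=True, truncation=True,
                      return_tensors="pt")
        emb = mean_pool(ctr(**enc).last_hidden_state, enc["attention_mask"])
        scores = emb[0] @ emb[1:].T
        best = scores.topk(min(n, len(chunks))).indices
        passages += [chunks[i] for i in best]
    return passages  # up to k * n chunks to place in the prompt of Eq. (3)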
Retrieving document passages. In the second step, we analyze each of the retrieved documents to identify the most relevant passages corresponding to the user\u2019s question. Each document is defined as a sequence of chunks, denoted as d_i = [c_{i,0}, ..., c_{i,T}], and, given the input question, we retrieve the chunks with the highest similarity to the question. We employ the Contriever architecture [15] to embed each chunk of the selected document, along with the query (i.e., the question provided by the user), and compute the similarity as an inner product between embeddings. By retrieving the n most appropriate passages inside each of the retrieved documents, overall we obtain k \u00b7 n passages. Context enrichment. Once we find the most relevant chunks, we employ their raw contents as an additional input to the MLLM. Specifically, the final prompt that we employ includes the image tokens, the retrieved raw chunks, the system-level prompt, and the user question. Formally, considering three retrieved passages, the final prompt is defined as follows:

\nGiven the following context:\n<passage 1>\n<passage 2>\n<passage 3>\n\n<question> Give a short answer. ASSISTANT:   (3)

where the bracketed placeholders stand for the retrieved passages and the user question. 3.2. Training While the aforementioned approach could work in a zero-shot fashion, using the original weights \u03b8 of the pre-trained MLLM, we also investigate the case of fine-tuning the model to augment its capabilities of exploiting retrieved passages. In particular, in this case, the model is trained on pairs of questions and ground-truth answers requiring external knowledge. As this would potentially reduce the capabilities of the MLLM on tasks not requiring external knowledge (i.e., all the other tasks on which the model has been originally trained), we apply a data mixing approach in which ground-truth pairs requiring external knowledge are mixed with ground-truth pairs not requiring external knowledge in the same mini-batch. 4. Experiments In this section, we first introduce the experimental settings, describing the datasets employed, the evaluation protocol, and the implementation and training details used to perform the experiments. Then, we present our experimental results, analyzing the effectiveness of CLIP fine-tuning and evaluating how it is possible to incorporate retrieved knowledge in an MLLM. Finally, limitations of the proposed approach and possible future works are reported. 4.1. Datasets Encyclopedic-VQA [28]. The dataset contains around 221k question-answer pairs associated with 16.7k different fine-grained entities, with up to 5 images representing the same entity. Overall, there are more than 1M triplets composed of an image, a question, and the corresponding answer. Fine-grained entities and related images are extracted from iNaturalist 2021 [43] and Google Landmarks Dataset V2 [45], which are associated with the corresponding Wikipedia article. Questions are divided into four different categories, namely single-hop, automatically generated, multi-answer, and two-hop. In particular, single-hop questions have been manually annotated and a single Wikipedia article is needed to answer them. Automatically generated questions are similar to the single-hop questions but have been generated by automatic models. Multi-answer questions, instead, can be answered with a list of terms, but always refer to a single fine-grained entity. Finally, two-hop questions require two retrieval steps to answer them.
The dataset also comes with a knowledge base composed of 2M Wikipedia articles, suitable for answering dataset questions. Dataset triplets are divided into training, validation, and test splits respectively composed of 1M, 13.6k, and 5.8k samples. In our experiments, we employ the training split to fine-tune the LLaVA model and report the results on the test set of the dataset. During testing, we filter out two-hop questions, resulting in 4,750 test triplets. InfoSeek [5]. The dataset contains 1.3M image-question-answer triplets corresponding to around 11k different entities (i.e., Wikipedia articles). The vast majority of questions have been obtained with an almost entirely automatic procedure, by filling human-authored templates with knowledge triples from Wikidata. In this case, images are derived from the OVEN dataset [12]. Triplets are divided into training, validation, and test sets, with around 934k, 73k, and 348k samples respectively. At the time of submission, the ground-truth answers for the test set were not available. Therefore, we report our results on the validation split. Both validation and test sets contain questions related to new entities not included in the training split and questions not seen during training. Along with image-question-answer triplets, a knowledge base composed of 6M Wikipedia entities is provided. In our experiments, we consider a randomly extracted subset of 100k entities, in which we guarantee the presence of the 11k entities associated with the dataset questions. 4.2. Implementation Details LLaVA fine-tuning. We employ two distinct fine-tuning approaches, with each being exclusively applied to one of the datasets. In order to maintain the performance of the LLaVA model on well-established MLLM datasets, we supplement fine-tuning data with samples from the LLaVA-Instruct dataset [24]. Specifically, given its size of 158k, we double the probability of having examples from this dataset in each mini-batch. To reduce the number of trainable parameters, we train using low-rank adapters [11] with a total batch size of 512 samples. Retrieval. Textual documents sourced from Wikipedia content are embedded using the Contriever architecture [15], segmenting the text into chunks of 600 characters each. Furthermore, for streamlined efficiency, the process involves utilizing a single visual encoder. Specifically, following the LLaVA architecture [24], we employ the CLIP ViT-L/14@336 backbone to embed images to give as input to the MLLM, while simultaneously leveraging it to extract query visual features in the initial hierarchical retrieval step, facilitating the integration of an external memory component. To perform entity retrieval, we employ approximate kNN search rather than exact kNN search because it significantly improves the computational speed of the entire pipeline. To this aim, we employ the Faiss library [18] and a graph-based HNSW index with 32 links per vertex.
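A minimal sketch of such an index is shown below, assuming pre-computed CLIP title embeddings; the embedding size and the L2 normalization (i.e., cosine-style inner product) are assumptions, while the HNSW index with 32 links per vertex follows the description above.

# Minimal sketch (not the authors' code) of the approximate kNN entity index.
import faiss
import numpy as np

d = 768  # assumed CLIP text-embedding size
title_embs = np.random.randn(100_000, d).astype("float32")  # stand-in embeddings
faiss.normalize_L2(title_embs)  # normalization is an assumption

index = faiss.IndexHNSWFlat(d, 32, faiss.METRIC_INNER_PRODUCT)  # 32 links/vertex
index.add(title_embs)

query = np.random.randn(1, d).astype("float32")  # CLIP embedding of a query image
faiss.normalize_L2(query)
scores, entity_ids = index.search(query, 10)  # top-10 candidate entities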
4.3. Evaluation Protocol We evaluate our models in two settings: without external knowledge base and with external knowledge base. The former means that we ask the model to directly answer a visual question, by solely relying on the competencies learned during pre-training and/or fine-tuning. On the other hand, in the latter setting, we leverage the proposed hierarchical retrieval method to search for additional information in the external knowledge base. In practice, this is represented by two dumps of Wikipedia comprising 2M and 100k pages, respectively for Encyclopedic-VQA and InfoSeek. Concerning the evaluation metrics, we report the accuracy over the Encyclopedic-VQA test split and the InfoSeek validation split, following the official evaluation scripts provided along with the datasets.

Dataset           KB    R@1   R@10  R@20  R@50
Encyclopedic-VQA  2M    3.3   9.9   13.2  17.5
InfoSeek          100k  36.9  66.1  71.9  78.4
Table 1. Entity retrieval results on the Encyclopedic-VQA test set and InfoSeek validation set. To comply with the visual encoder employed in LLaVA, all results are obtained using CLIP ViT-L/14@336.

4.4. Experimental Results Analyzing CLIP performance. We start by evaluating entity retrieval results using CLIP. In this setting, we consider images from the Encyclopedic-VQA test set and InfoSeek validation set and measure CLIP\u2019s ability to find the correct entity within the knowledge base of each respective dataset (i.e., composed of 2M entries for Encyclopedic-VQA and 100k entries for InfoSeek). As previously mentioned, we perform retrieval using images as queries and Wikipedia titles as retrievable items. Results are reported in Table 1 in terms of recall@k (R@k) with k = 1, 10, 20, 50, which measures the percentage of times the correct entity is found among the top-k retrieved elements. Notably, correctly retrieving the Wikipedia entity associated with the input image strongly depends on the size of the employed knowledge base. In fact, when using 100k items, as in the case of InfoSeek, the correct entity is retrieved as the first item 36.9% of the time and among the top-10 66.1% of the time. Instead, when using a significantly larger knowledge base as in the case of Encyclopedic-VQA, which contains 2M items, retrieval results are significantly lower, with 3.3% and 9.9% respectively in terms of R@1 and R@10. Results on Encyclopedic-VQA and InfoSeek. We then report visual question-answering results in Table 2. We include the performance of zero-shot models like BLIP-2 [21], InstructBLIP [8], and the LLaVA-1.5 baseline model [24], which are not fine-tuned on the considered datasets and do not leverage the external knowledge base. Moreover, we consider the accuracy results of LLaVA-1.5 when fine-tuned on the training set of Encyclopedic-VQA and InfoSeek, but not augmented with retrieved context. The results of our approach (i.e., Wiki-LLaVA) are reported both in the standard setting, in which CLIP is used to retrieve the most representative entity from the knowledge base, and in its oracle version, which employs the entity corresponding to the input image-question pair. For both cases, we consider a different number n of retrieved textual chunks, all corresponding to the top-1 (or ground-truth) entity. When employing CLIP, we also vary the number k of retrieved entities (i.e., k = 1, 2, 3), using n = 1 when k is greater than 1. This choice is given by the maximum context length that Vicuna takes as input, which is set to 2,048 tokens.
Table 2. Accuracy results on the Encyclopedic-VQA test set and InfoSeek validation set. Rows marked \u201coracle\u201d employ ground-truth entities, while the remaining Wiki-LLaVA rows employ the CLIP model to perform entity retrieval. k denotes the number of retrieved entities, and n represents the number of textual chunks retrieved for each entity that are given to the MLLM as additional context.

                                            Enc-VQA            InfoSeek
Model                LLM        KB   k  n   Single-Hop  All    Unseen-Q  Unseen-E  All
Zero-shot models
BLIP-2 [21]          Flan-T5XL  \u2717          12.6        12.4   12.7      12.3      12.5
InstructBLIP [8]     Flan-T5XL  \u2717          11.9        12.0    8.9       7.4       8.1
LLaVA-1.5 [23]       Vicuna-7B  \u2717          16.3        16.9    9.6       9.4       9.5
Fine-tuned models
LLaVA-1.5 [23]       Vicuna-7B  \u2717          23.3        28.5   19.4      16.7      17.9
Wiki-LLaVA           Vicuna-7B  \u2713  1  1    21.8        26.4   26.6      24.6      25.5
Wiki-LLaVA           Vicuna-7B  \u2713  1  2    19.9        23.2   29.1      26.3      27.6
Wiki-LLaVA           Vicuna-7B  \u2713  1  3    17.7        20.3   30.1      27.8      28.9
Wiki-LLaVA           Vicuna-7B  \u2713  2  1    21.3        25.4   27.8      24.6      26.1
Wiki-LLaVA           Vicuna-7B  \u2713  3  1    20.5        24.3   27.4      24.5      25.3
Wiki-LLaVA (oracle)  Vicuna-7B  \u2713  1  1    34.7        37.2   41.1      41.1      41.1
Wiki-LLaVA (oracle)  Vicuna-7B  \u2713  1  2    39.2        40.2   49.1      46.5      47.8
Wiki-LLaVA (oracle)  Vicuna-7B  \u2713  1  3    38.5        38.6   52.7      50.3      51.5

As can be seen, zero-shot MLLMs face difficulties in correctly answering the given questions, as these models can only rely on the knowledge embedded inside the LLM. When instead using an external knowledge base, the accuracy results increase significantly, especially on the InfoSeek dataset with 100k retrievable items. The limited performance of the CLIP model in retrieving the correct entity on larger knowledge bases, instead, leads to a slight degradation of accuracy scores. This is due to the noisy textual passages provided to the MLLM as additional external context, which, being related to a different entity, often do not contain informative content. Overall, retrieving passages from different entities does not always help increase the results. Instead, using more than one textual chunk as additional context for the MLLM generally improves the final accuracy on the InfoSeek validation set, with an overall improvement of 2.1 and 3.4 accuracy points with n = 2 and n = 3 respectively. Furthermore, it is worth noting that employing oracle entities significantly boosts the final accuracy. In particular, oracle entities lead to an improvement of 13.8% on Encyclopedic-VQA and 22.6% on InfoSeek, comparing the best-performing configuration with CLIP-based entity retrieval (i.e., k = 1 and n = 1 for Encyclopedic-VQA, and k = 1 and n = 3 for InfoSeek) with the best-performing oracle-based version (i.e., k = 1 and n = 2 for Encyclopedic-VQA, and k = 1 and n = 3 for InfoSeek). These results confirm the effectiveness of directly employing retrieved passages to augment a pre-trained MLLM, and further highlight the importance of having a good entity retrieval model to limit the possibility of feeding the MLLM with irrelevant content.

Table 3. Performance analysis when using the LLaVA-Instruct dataset during fine-tuning. All results are obtained without external knowledge retrieval.

                            Enc-VQA            InfoSeek
Fine-tuning                 Single-Hop  All    Unseen-Q  Unseen-E  All
\u2717                      16.3        16.9    9.6       9.4       9.5
\u2713                      23.4        29.0   17.1      15.0      16.0
\u2713 + LLaVA-Instruct     23.3        28.5   19.4      16.7      17.9

Some qualitative results on sample image-question pairs from Encyclopedic-VQA (first row) and InfoSeek (second row) are reported in Fig. 3, comparing the answers given by Wiki-LLaVA with those coming from the original LLaVA-1.5 model. For completeness, we also report some failure cases (third row) in which both models are not able to correctly answer the given question.
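For reference, the metrics behind Tables 1 and 2 reduce to simple computations. The sketch below shows a plain recall@k and an exact-match accuracy; the official evaluation scripts mentioned above may apply further answer normalization, so this is a simplified view rather than the datasets' exact scorers.

```python
# Simplified metric sketches for entity retrieval (Table 1) and VQA (Table 2).
def recall_at_k(retrieved_ids, gt_id, k):
    """1 if the ground-truth entity appears among the top-k retrieved ids."""
    return int(gt_id in retrieved_ids[:k])

def exact_match_accuracy(predictions, references):
    """Fraction of questions whose predicted answer matches the reference."""
    hits = sum(p.strip().lower() == r.strip().lower()
               for p, r in zip(predictions, references))
    return hits / len(references)
```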
[Figure 3. Qualitative results on sample image-question pairs from Encyclopedic-VQA (first row) and InfoSeek (second row), comparing the proposed approach with the original LLaVA-1.5 model. Some failure cases are shown in the third row with the corresponding ground-truth.]

Evaluating the importance of the fine-tuning datasets. As described in Sec. 3.2 and Sec. 4.2, the MLLM fine-tuning is done with a mixture of data containing image-question-answer triplets from the Encyclopedic-VQA or InfoSeek training set and visual instruction tuning data from LLaVA-Instruct [24], which was used to originally fine-tune the LLaVA model. In Table 3, we evaluate the effect of mixing fine-tuning data for the knowledge-based VQA task. In this setting, we only report the results of the fine-tuned models without external knowledge retrieval. Notably, using visual instruction tuning data can help to regularize the fine-tuning phase on the InfoSeek dataset, leading to an overall improvement of 1.9 accuracy points compared to the model fine-tuned only on image-question-answer triplets from the training set of the dataset. On Encyclopedic-VQA, instead, training with instruction tuning data does not lead to a performance improvement, although it does not degrade the original results.
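A minimal sketch of the mini-batch mixing strategy evaluated above, under the assumption that it can be approximated by weighted sampling that doubles the draw probability of LLaVA-Instruct examples; the dataset objects and names are placeholders, not the authors' training code.

```python
# Weighted mixing of task triplets and LLaVA-Instruct samples in a mini-batch.
import random

def sample_minibatch(vqa_data, instruct_data, batch_size, instruct_boost=2.0):
    pool = [("vqa", i) for i in range(len(vqa_data))] + \
           [("instruct", i) for i in range(len(instruct_data))]
    # LLaVA-Instruct examples are drawn with doubled probability by default.
    weights = [1.0 if src == "vqa" else instruct_boost for src, _ in pool]
    picks = random.choices(pool, weights=weights, k=batch_size)
    return [vqa_data[i] if src == "vqa" else instruct_data[i] for src, i in picks]
```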
Preservation of LLaVA performance. Finally, we analyze the impact of LLaVA fine-tuning on knowledge-based VQA datasets when evaluating the model on common MLLM evaluation benchmarks [3]. In particular, we include results on MME [9], which contains image-question pairs covering 14 different tasks grouped into two macro-categories (i.e., cognition and perception); MMMU [47], which is composed of multiple-choice and open-ended questions, possibly interleaved with one or more images, extracted from diverse university textbooks and online courses; MMBench (MMB) [26], which includes multiple-choice questions across 20 different domains; and POPE [22], which focuses on evaluating object hallucinations and comprises binary classification entries, each related to an image. More details about the evaluation metrics and the number of samples can be found in the original paper of each dataset.

Table 4. Performance preservation analysis with respect to the original LLaVA-1.5 model (first row) on diverse benchmarks for MLLM evaluation.

                            MME                  MMMU   MMB    POPE
Fine-tuning                 Cogn    Perc         Acc    Acc    Acc    F1
None (original LLaVA-1.5)   355.7   1513.3       35.1   71.6   86.9   85.8
Enc-VQA                     200.7    802.8       36.6   67.7   72.9   63.4
Enc-VQA + LLaVA-Instruct    290.0   1170.1       36.6   70.4   87.2   86.6
InfoSeek                    296.8   1377.2       35.2   71.7   82.0   79.6
InfoSeek + LLaVA-Instruct   341.3   1438.9       35.6   71.1   85.8   84.2

Results are shown in Table 4, comparing the original LLaVA model with the two versions fine-tuned on Encyclopedic-VQA and InfoSeek, with and without the use of visual instruction tuning data. Overall, employing samples from the LLaVA-Instruct dataset better preserves the results of the original model, only partially degrading the performance on the considered benchmarks. While the most significant deterioration occurs on the MME dataset, in the other settings the original performance is better preserved, with even a slight improvement on the MMMU and POPE benchmarks compared to the LLaVA-1.5 results.

4.5. Limitations and Future Works

While our work provides an initial step towards MLLMs that can properly exploit external multimodal data, it is worth mentioning that significant research is still needed in two directions. The first is defining proper embedding spaces in which documents can be retrieved from questions and input images, so as to improve the performance of the higher level of our hierarchical retrieval. The second is modeling an efficient and sustainable paradigm for selecting from one or more documents. Here, the challenge is to increase the capability of the MLLM to distinguish the appropriateness of retrieved items. This point might also require novel architectural designs, which might go beyond the pure inclusion of retrieved items in the context. Regardless of its current limitations, our research testifies to the potential of adding multimodal external knowledge to an MLLM, and inherits all the advantages of retrieval-augmented approaches, such as the adaptability to different domains and the loosely-coupled relationship between pre-trained information and retrievable data.

5. Conclusion

We have presented Wiki-LLaVA, an architecture for augmenting an existing MLLM with external knowledge. Our proposal leverages an external knowledge source of documents to improve the effectiveness of an MLLM when tasked with questions and dialogues. In particular, we devise a hierarchical architecture for retrieving documents and eliciting selected parts to be included in the MLLM input context. Extensive experiments demonstrate the effectiveness of the proposed solution and its capability to maintain the proficiency of the MLLM across different tasks.

Acknowledgments

We acknowledge the CINECA award under the ISCRA initiative, for the availability of high-performance computing resources and support. This work has been conducted under two research grants, one co-funded by Leonardo S.p.A.
and the other co-funded by Altilia s.r.l., and supported by the PNRR M4C2 project \u201cFAIR Future Artificial Intelligence Research\u201d, funded by the European Commission, and by the PNRR project \u201cItalian Strengthening of Esfri RI Resilience\u201d (ITSERR) funded by the European Union NextGenerationEU (CUP B53C22001770006)." + }, + { + "url": "http://arxiv.org/abs/2404.12957v1", + "title": "Towards Reliable Latent Knowledge Estimation in LLMs: In-Context Learning vs. Prompting Based Factual Knowledge Extraction", + "abstract": "We propose an approach for estimating the latent knowledge embedded inside\nlarge language models (LLMs). We leverage the in-context learning (ICL)\nabilities of LLMs to estimate the extent to which an LLM knows the facts stored\nin a knowledge base. Our knowledge estimator avoids reliability concerns with\nprevious prompting-based methods, is both conceptually simpler and easier to\napply, and we demonstrate that it can surface more of the latent knowledge\nembedded in LLMs. We also investigate how different design choices affect the\nperformance of ICL-based knowledge estimation. Using the proposed estimator, we\nperform a large-scale evaluation of the factual knowledge of a variety of open\nsource LLMs, like OPT, Pythia, Llama(2), Mistral, Gemma, etc. over a large set\nof relations and facts from the Wikidata knowledge base. We observe differences\nin the factual knowledge between different model families and models of\ndifferent sizes, that some relations are consistently better known than others\nbut that models differ in the precise facts they know, and differences in the\nknowledge of base models and their finetuned counterparts.", + "authors": "Qinyuan Wu, Mohammad Aflah Khan, Soumi Das, Vedant Nanda, Bishwamittra Ghosh, Camila Kolling, Till Speicher, Laurent Bindschaedler, Krishna P. Gummadi, Evimaria Terzi", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Towards Reliable Latent Knowledge Estimation in LLMs: In-Context Learning vs. Prompting Based Factual Knowledge Extraction", + "main_content": "1 Introduction

Conversational chatbots (e.g., OpenAI\u2019s ChatGPT) built around large language models (e.g., OpenAI\u2019s GPT) are increasingly being used for a variety of information retrieval tasks, such as searching for information or seeking recommendations related to real-world entities like people or places (Wu et al., 2023; Zhu et al., 2023). A worrisome concern in such scenarios is the factual correctness of the information generated by the LLMs (Peng et al., 2023; Hu et al., 2023a; Snyder et al., 2023; Yao et al., 2023; Ji et al., 2023; Zhang et al., 2023; Wang et al., 2023).

The latent knowledge estimation problem: To avoid making false assertions about a real-world entity, an LLM first needs to have factual (true) knowledge about the entity. Given a prompt like \u201cEinstein was born in the year\u201d, LLMs may generate both the correct answer (\u201c1879\u201d) and wrong answers (e.g., \u201c1878\u201d or \u201c1880\u201d) with some probabilities. If an LLM knows the fact, one can hope that the probability with which it generates the correct answer is much higher than that of the wrong answers (Jiang et al., 2021). As LLMs are typically pretrained over a Web corpus (including Wikipedia data) with millions of facts about real-world entities, they have the opportunity to learn factual knowledge about our world and latently embed this knowledge in their parameters. But how can we estimate the extent to which LLMs have knowledge of real-world facts?
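As a concrete illustration of this probability comparison (our sketch, not code from the paper), one can score candidate completions with an open LLM and compare their sequence log-probabilities; the use of gpt2 here is purely illustrative.

```python
# Compare the log-probability an LLM assigns to a correct vs. a wrong answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def answer_logprob(prefix, answer):
    prefix_ids = tok(prefix, return_tensors="pt").input_ids
    full_ids = tok(prefix + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits.log_softmax(-1)
    # Sum log-probabilities of the answer tokens only; the token at position
    # `pos` is predicted by the logits at position `pos - 1`.
    lp = 0.0
    for pos in range(prefix_ids.shape[1], full_ids.shape[1]):
        lp += logits[0, pos - 1, full_ids[0, pos]].item()
    return lp

print(answer_logprob("Einstein was born in the year", " 1879"))
print(answer_logprob("Einstein was born in the year", " 1878"))
```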
Reliability of latent knowledge estimates: Prior works (Jiang et al., 2020; Bouraoui et al., 2020) followed (Petroni et al., 2019) and represented factual knowledge in the form of triplets \u27e8x, r, y\u27e9, where the subject x has a relation of type r with the object y (e.g., \u27e8Einstein, birth-year, 1879\u27e9). The central challenge of latent knowledge estimation is to infer y given x and r by only using information extracted from the LLM. Typically, the inference relies on probing the LLM with prompts constructed using x and r and analyzing the responses. Current approaches have few well-defined rules to avoid prompt engineering and prompt hacking, raising serious concerns about the reliability of their estimates. Against this background, in this paper, we make four primary contributions:

1. A simple yet reliable latent knowledge estimator (LKE) leveraging in-context learning (ICL): We propose a latent knowledge estimator (LKE) that leverages in-context learning (ICL), called IC-LKE, in a simple yet clever way to avoid the many reliability concerns of previous prompting-based knowledge estimators.

2. Exploring the nuances of using ICL for knowledge estimation: We investigate the impact of different ICL design choices on the estimation of latent knowledge, such as the number of in-context examples, the effect of some examples being unknown to the model or simply incorrect, as well as the sequence in which they appear. While we focus on knowledge estimation, our findings can inform the application of ICL in other contexts.

3. A comparison of IC-LKE with previous approaches: We empirically demonstrate that IC-LKE outperforms previous knowledge estimation approaches that rely on human-generated or machine-mined prompts, across a variety of different open-source models and different types of factual relations. In contrast to prompting-based methods, which are relation-specific and LLM-specific, IC-LKE\u2019s design is straightforward to apply.

4. A systematic comparison of the latent knowledge of open-source LLMs at scale: We use IC-LKE to evaluate the knowledge of 49 open-source LLMs spanning many families, such as Llama(2), Gemma, Mistral, OPT, Pythia, etc., across a wide range of sizes, both with and without instruction-finetuning, over 50 different relations and 20,000 facts from Wikidata. We find that models from some families, such as Llama2, Mistral, and Gemma, and larger models know more facts than others; that models within the same family differ in the specific facts they know, despite being trained on the same data; and that fine-tuning reduces the amount of factual knowledge that can be extracted from the models.

Related Work: Researchers have proposed several approaches to estimate latent knowledge from LLMs, which can be categorized in two ways: (i) Model-internals based approaches leverage the LLM attention map (Wang et al., 2020), activation functions (Burns et al., 2022), or model parameters (Kazemnejad et al., 2023) to decide whether factual information can be extracted from the LLM.
In our study, we rely on the probability distribution of generated tokens in an LLM \u2013 thereby our method belongs to the model-responses based approach. (ii) Model-responses based approaches \u2013 generally applicable to a wide range of LLM models \u2013 often propose different prompting techniques to nudge the LLM to validate whether a target fact is stored in it (Chern et al., 2023; Sun et al., 2023; Wang et al., 2020; Petroni et al., 2019; Jiang et al., 2021; Newman et al., 2022; Jiang et al., 2020). Prompt-based methods differ subtly in the choice of prompts and evaluation criteria. Besides, the prompts are often brittle (Zamfirescu-Pereira et al., 2023; Arora et al., 2023; Sclar et al., 2023) \u2013 their success depends on the hypothesis that the LLM indeed understands the prompts. In our study, we instead require only a minimal understanding of prompts by an LLM and design a knowledge estimation method based on in-context learning. As a test bed (Elsahar et al., 2018; Hu et al., 2023b; Sun et al., 2023; Petroni et al., 2019; Zhu and Li, 2023; Kry\u015bci\u0144ski et al., 2019), we consider facts from existing knowledge graphs for performing knowledge estimation of LLMs.

2 Designing Reliable LKEs

Today, there exist many general-purpose as well as domain-specific factual knowledge bases that contain a very large number (millions to billions) of facts. The facts can be encapsulated as triplets, represented as \u27e8subject (x), relation (r), object (y)\u27e9. These triplets offer a general way to represent factual knowledge about real-world entities in knowledge graphs or other structured knowledge bases. The goal of latent knowledge estimation is to infer what fraction of the facts are known to an LLM. We call methods that estimate the amount of latent knowledge inside an LLM latent knowledge estimators (LKEs).

2.1 Reliability concerns with existing LKEs

Existing approaches to estimating latent knowledge in LLMs use a variety of factual knowledge tests. Below, we identify several reliability concerns with current designs that motivate our new LKE design.

1. LLM-specific restrictions on test topics: Many prior works (Petroni et al., 2019; Jiang et al., 2020) limit the choice of facts that can be used in tests to those where the surface form of the object (y) is represented by a single token by the LLM\u2019s tokenizer. As different LLMs use different tokenizers, this limitation prevents us from comparing latent knowledge across different LLMs. Furthermore, only popular objects tend to be represented by a single token, and so the resulting estimates are not representative of the LLM\u2019s knowledge of facts with multi-token object representations.

2. Unrestricted choice of test prompts: Many past works have attempted to use test prompts without any restrictions, including both human-generated and machine-mined prompts (Jiang et al., 2020; Zamfirescu-Pereira et al., 2023; Arora et al., 2023; Sclar et al., 2023). They typically intersperse the subject x and object y between additional relationship context-communicating tokens. Some analyze the performance of a variety of prompts and then pick the best-performing one, or use an ensemble of the best-performing prompts (Jiang et al., 2020; Newman et al., 2022; Fernando et al., 2023).
However, these approaches raise two important concerns. First, the generated prompts, particularly those that are machine-mined, may include tokens that implicitly or explicitly introduce additional (side-channel) information that makes it easier to answer the question. As a specific example, in a prior work (Jiang et al., 2020), for the relation \u201cposition held\u201d, the prompt \u201cx has the position of y\u201d performed worse than \u201cx is elected y\u201d. But note that the second prompt potentially introduces a side-channel: it implicitly rules out answer choices for unelected positions like Professor and favors elected positions like President. Second, selecting from an unbounded number of potential prompt choices raises concerns about the complexity of LKEs (the size of the set of all considered prompts) and the potential for over-fitting, which in turn brings the reliability of the estimates into question.

3. Reliance on LLMs\u2019 meta-linguistic judgments: Prior works used prompts (Chern et al., 2023; Sun et al., 2023; Wang et al., 2020; Petroni et al., 2019; Jiang et al., 2021; Newman et al., 2022; Jiang et al., 2020) for communicating the question as well as the expected format of answers. But the scores (estimates) resulting from such prompt-based testing conflate an LLM\u2019s latent knowledge of the facts with the LLM\u2019s meta-linguistic judgments, i.e., the LLM\u2019s ability to comprehend the prompt, understand the question embedded within the prompt, and output the answer in some expected format (Hu and Levy, 2023). The impact of meta-linguistic judgments can be seen from the fact that multiple semantically-equivalent prompts result in different responses from an LLM and, thereby, different estimates of latent knowledge (Hu and Levy, 2023).

Motivated by the above, we derive the following three design principles for LKEs. A reliable LKE design should:

\u2022 DP1: generate estimates for any factual topic and tokenization scheme.
\u2022 DP2: limit arbitrary prompt engineering to minimize over-fitting and side-channels.
\u2022 DP3: minimize reliance on meta-linguistic prompts.

2.2 A new in-context learning based LKE (IC-LKE)

Our goal is to estimate whether an LLM knows a fact f = \u27e8x, r, y\u27e9. The challenge is to probe the LLM and evaluate its responses in a way compatible with the design principles set out in Section 2.1.

Key idea: Leverage in-context learning. LLMs have been shown to exhibit in-context learning (ICL) abilities (Brown et al., 2020) that allow them to infer and extrapolate patterns in their inputs. We leverage this ability to communicate information about relation r without additional instructions to the LLM (DP3), by providing it with a list of facts based on r.

Example 1. Assume that we want to probe whether an LLM knows the fact \u27e8Einstein, birth-year, 1879\u27e9. We can use other facts for the birth-year relation, such as \u27e8Feynman, birth-year, 1918\u27e9 and \u27e8Heisenberg, birth-year, 1901\u27e9, to construct the input \u201cFeynman 1918 Heisenberg 1901 Einstein\u201d. By providing in-context examples to the model, we communicate the relation between subjects and objects. To correctly extrapolate the pattern, the model needs to retrieve Einstein\u2019s birth-year as the completion of the sequence.

More formally, given a training dataset of facts F_r = {\u27e8x_i, r, y_i\u27e9}_{i=1}^{n} for relation r, as well as a test fact f = \u27e8x, r, y\u27e9, we leverage ICL to construct prompts that elicit information about f as

\sigma(x, r) = x_1 \, y_1 \, \ldots \, x_n \, y_n \, x    (1)

We use r to pick facts from F_r and concatenate the tokens corresponding to the subjects and objects, but do not include any other information about r (DP2). We use the space character \u201c \u201d as the separator token and discuss this choice in more detail in Section 4.1. We discuss other design choices for the IC-LKE construction in Section 3. When further details are not needed, we simply refer to some input as \u03c3.

Evaluating model outputs. We evaluate the output of model \u03b8 for input \u03c3(x, r) based on the probabilities \u03b8 assigns to the tokens of the corresponding object y. To allow for objects y consisting of multiple tokens, and to be independent of the specific tokenization scheme (DP1), we compute the object probability over multiple tokens as follows:

P_\theta(y \mid \sigma) = \prod_{i=2}^{|y|} P_\theta(y^{(i)} \mid y^{[i-1:1]} \, \sigma) \cdot P_\theta(y^{(1)} \mid \sigma),    (2)

where |y| denotes the number of tokens in y and P_\theta(y^{(i)} \mid y^{[i-1:1]} \sigma) is the conditional probability of predicting the i-th token y^{(i)} of y given the preceding tokens y^{(i-1)}, \ldots, y^{(1)} and \u03c3.

Multiple-choice testing. To determine whether model \u03b8 knows a fact f = \u27e8x, r, y\u2217\u27e9, we test whether, given input \u03c3(x, r), \u03b8 can choose the correct object y\u2217 from among a set of M unique alternatives. Specifically, given fact f, we derive a test instance called a choice c = \u27e8x, r, y\u2217, Y\u27e9, where Y is a set of M plausible but incorrect alternatives. We discuss the choice of Y in Section 4. The prediction of \u03b8 for choice c = \u27e8x, r, y\u2217, Y\u27e9 is

\mathrm{pred}_\theta(c) \triangleq \operatorname{argmax}_{y \in \{y^*\} \cup Y} P_\theta(y \mid \sigma(x, r))    (3)

i.e., the predicted object has the maximal object probability within {y\u2217} \u222a Y.

Evaluation Metric. We evaluate the factual knowledge of model \u03b8 over a dataset of choices D = {c_i}_{i=1}^{n} using the multiple-choice accuracy:

\mathrm{acc}(\theta, D) \triangleq \frac{\sum_{c \in D} \delta(y^* = \mathrm{pred}_\theta(c))}{|D|}    (4)

where \u03b4(\u00b7) is the indicator function.

The IC-LKE design satisfies the knowledge estimation design principles. The IC-LKE design proposed here satisfies the design principles from Section 2.1, since:

\u2022 DP1: its relative probability comparisons between different answer options make it applicable to arbitrary types of facts.
\u2022 DP2: it uses the same, minimal prompt design based on ICL across all relations.
\u2022 DP3: its only requirement is that the LLM is able to use ICL; no further assumptions about any meta-linguistic abilities are made.

3 Exploring the design space of IC-LKE

By design, IC-LKE avoids many limitations of prior works. However, IC-LKE introduces a few design choices for the input, i.e., \u03c3(x, r) in Equation (1). One must decide the right n, the number of in-context examples included in \u03c3(x, r). Further, it is unclear how IC-LKE would be impacted when some of the chosen examples are unknown to the model or are incorrect. We study both these factors in detail by varying n and introducing unknown or incorrect examples within these n examples. These experiments allow us to better understand the number of in-context examples needed and how robust IC-LKE is to several types of noise in these in-context examples. We perform an in-depth empirical analysis on a Nobel Laureate dataset for the relation \u2018birth year\u2019 (details in A.1). The dataset consists of facts formatted as \u27e8Person (x), birth-year (r), YYYY (y)\u27e9.
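Putting Eqs. (1)-(4) together, a compact sketch of the estimator could look as follows. Here score_fn is an assumed callable returning the log of the object probability of Eq. (2) (e.g., the answer_logprob routine sketched in the Introduction), and all function names are ours, not the paper's.

```python
# Sketch of IC-LKE: prompt construction (Eq. 1), multiple-choice prediction
# (Eq. 3), and multiple-choice accuracy (Eq. 4).
def make_sigma(train_facts, x):
    """Eq. (1): concatenate 'x1 y1 ... xn yn' pairs followed by the test subject x."""
    return " ".join(f"{xi} {yi}" for xi, yi in train_facts) + f" {x}"

def predict(score_fn, sigma, candidates):
    """Eq. (3): pick the candidate object with maximal probability given sigma."""
    return max(candidates, key=lambda y: score_fn(sigma, " " + y))

def multiple_choice_accuracy(score_fn, train_facts, choices):
    """Eq. (4): fraction of choices c = (x, y*, alternatives) answered correctly."""
    hits = 0
    for x, y_star, alternatives in choices:
        sigma = make_sigma(train_facts, x)
        hits += predict(score_fn, sigma, [y_star] + alternatives) == y_star
    return hits / len(choices)
```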
[Figure 1: Influence of the number of in-context examples. We examine how varying numbers of in-context examples influence the accuracy (calculated as defined in Eq. 5) across different LLMs (Pythia-12B, Falcon-7B, Llama2-7B, Mistral-7B, Gemma-7B). The vertical dashed line indicates the number of examples at which the models achieve 95% of their respective stable accuracy at 50 examples.]

More knowledgeable models need fewer in-context examples, but a small number suffices for most models. In Figure 1, we report the knowledge estimation accuracy (Eq. (5)) of different LLMs evaluated on 900 test samples, with varying numbers of in-context examples (n) sampled randomly from the training set using five random seeds. With an increasing number of in-context examples, the mean accuracy increases while the standard deviation decreases across the different LLMs, i.e., the models gradually converge to a stable performance. Using dashed vertical lines, we report the minimum number of examples required by different LLMs to achieve 95% of their accuracy at 50 in-context examples. Interestingly, LLMs with higher estimation accuracy tend to require fewer in-context examples than those with lower accuracy. A potential explanation for this behavior is that, in order to infer the relation r, models need to comprehend the examples presented in the prompt. Therefore, less knowledgeable models need to see more examples in order to infer r. To further investigate which individual facts may be known or unknown to a model, we look at the generation probability of the in-context objects in 200 correct subject (x)-object (y) pairs using the Mistral-7B model, as shown in Figure 2a. Similar results for additional models are presented in Appendix E. Note that here we only look at the probabilities of the objects (y) of in-context examples, given the previous x y pairs in the input, to understand which of these samples are known by the LLM.

[Figure 2: Variation in object probabilities of Nobel laureate data using Mistral-7B. Panel (a) shows the probability of each object at various positions in the prompt ((subject, object) examples in a prompt). Panels (b) and (c) show the impact on probabilities after replacing objects with unknown ones at randomly distributed positions and at continuous positions, respectively. Panels (d) and (e) similarly show the impact of incorrect examples at randomly distributed and continuous positions. In all plots, the horizontal dashed line shows the average probability of the correct examples (blue dots).]

The Mistral-7B model demonstrates a gradual increase in probability for generating correct objects as we go from left to right on the x-axis of Figure 2a (note that, for a point on the x-axis, the points before it are in context; thus, points further to the right have more context to leverage), stabilizing at a mean probability of approximately 85%. We also see that some objects at later positions have a lower generation probability. This suggests that the LLM may be less confident about its knowledge of the facts corresponding to them. We can leverage the token generation probability as a signal of the LLM\u2019s confidence when evaluating LKEs (see Appendix D).
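The position-wise analysis behind Figure 2a can be reproduced along these lines; score_fn is again an assumed log-probability scorer, and the function is our sketch rather than the authors' implementation.

```python
# For each in-context pair (x_i, y_i), score the object y_i given everything
# that precedes it in the prompt; earlier pairs become context for later ones.
def object_scores_by_position(score_fn, pairs):
    scores, prefix = [], ""
    for x, y in pairs:
        prefix += f"{x}"                          # subject of the current pair
        scores.append(score_fn(prefix, f" {y}"))  # score the object at this position
        prefix += f" {y} "                        # pair joins the context
    return scores                                 # one log-probability per position
```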
Models are robust to unknown examples. Next, we investigate the robustness of the estimates to the occurrence of unknown examples. We insert unknown examples in two distinct ways: one where we randomly distribute the unknown examples throughout \u03c3(x, r), and another, more extreme, scenario where we replace a continuous block of examples with unknown ones. We chose 40 out of the 200 examples and replaced them with unknown examples created using fictitious names and birth years (generated via https://en.namefake.com/api). Our findings are shown in Figures 2b and 2c for random and continuous replacement, respectively. Unknown examples are marked by red dots, examples immediately following unknown ones by cyan dots, and the rest by blue dots. The unknown examples show generation probabilities close to zero, confirming the LLM\u2019s tendency to assign low probabilities to unknown data. Interestingly, however, unknown examples minimally impact the surrounding data in both settings.

Models are vulnerable to incorrect examples. We investigate the impact of including incorrect examples in \u03c3(x, r). Similar to the setup for unknown examples, we also insert 40 (out of 200) incorrect examples, both randomly distributed (Figure 2d) and in a continuous block (Figure 2e). In our experiments, these incorrect examples are created by altering the birth years of known Nobel laureates, and are marked by red dots in the plots. In contrast to the insertion of unknown examples, the LLM significantly struggles with incorrect examples. Injecting such examples detrimentally affects the LLM\u2019s performance in both settings. We highlight one randomly chosen example, marked with a yellow star in Figure 2a, Figure 2b, and Figure 2d, to show how the presence of incorrect samples brings down the probability of surrounding points.

Summary: LLMs can identify the relation pattern of subject-object pairs even with a small set of in-context examples in the prompt. LLMs are relatively robust to unknown examples, but their ability to recollect factual knowledge is vulnerable to incorrect examples, particularly when they appear in a continuous sequence. Our findings point to the effectiveness of designing an IC-LKE where we carefully place correct examples from a training dataset and proceed to estimate the latent knowledge of the LLM on examples from the test set. Furthermore, the findings also motivate us to design a more efficient in-context learning based LKE, called EIC-LKE, that can process multiple test examples simultaneously in a single prompt, where training examples are placed preceding each test example; see Appendix F for more details.

4 Experiments and Results

We present the empirical findings of IC-LKE (as well as its efficient version, EIC-LKE) on the knowledge-estimation task over 49 open-source (pretrained and fine-tuned) LLMs across different LLM families and sizes.
We list the models and their simplified names used in this paper in Table 6 in the Appendix, and provide a leaderboard of models based on IC-LKE in Table 7.

[Figure 3: Performance comparison for different latent knowledge extractors. We compare the accuracy of IC-LKE and EIC-LKE with the baseline methods HGP and MMP (Jiang et al., 2020) across 12 relations from T-REx-MC, for Mistral-7B, Llama2-7B, Falcon-7B, and Pythia-12B.]

Dataset: We evaluate the knowledge of models on a large set of facts from the T-REx dataset (https://huggingface.co/datasets/relbert/t_rex) (Elsahar et al., 2018). We selected relations from T-REx with at least 500 samples that are linked to a minimum of 100 unique objects. This filtering leads to 50 distinct relations spanning categories like birth dates, directorial roles, parental relationships, and educational lineage. The resulting T-REx Multiple Choice (T-REx-MC) dataset comprises 5,000 training and 20,000 test facts. Appendix A contains detailed information on the dataset and relations.

Choosing the set Y and its impact on test difficulty: For each fact \u27e8subject (x), relation (r), object (y\u2217)\u27e9, we generate alternative objects Y to create multiple choices. Note that the alternative objects in Y are viable choices and cannot be easily eliminated. Therefore, for each fact \u27e8x, r, y\u2217\u27e9, we select y \u2208 Y from other facts in the dataset that share the same relation r. For computational feasibility, we sample |Y| = 99 alternative objects per fact, so that a random guess between {y\u2217} \u222a Y has a 0.01 probability of being correct.

4.1 IC-LKE vs. prompt-based approaches

We compare the performance of IC-LKE and EIC-LKE with existing prompt-based approaches (Jiang et al., 2020) and report two key takeaways.

IC-LKE outperforms prompt-based approaches. We randomly sample three human-generated prompts (HGP) and three machine-mined prompts (MMP) from (Jiang et al., 2020) for the 12 relations common to T-REx-MC and (Jiang et al., 2020). The HGPs and MMPs for all relations are in Appendix G. In Figure 3, IC-LKE and EIC-LKE outperform HGP and MMP in terms of mean accuracy across different models and the 12 relations. IC-LKE and EIC-LKE also have a lower standard deviation than HGP and MMP, indicating a higher consistency of IC-LKE and EIC-LKE on knowledge estimation tasks. In Appendix H.2, we report relation-specific results, where IC-LKE and EIC-LKE estimate higher factual knowledge than the existing works for most relations, thereby demonstrating the superiority of IC-LKE and EIC-LKE over existing methods.

[Figure 4: Influence of different separators. We replace the \u2018[space]\u2019 token separating the subject-object pairs with human-generated prompts (HGP) and machine-mined prompts (MMP) for the relation \u2018original broadcaster\u2019 (IC-LKE vs. IC-HGP1-3 and IC-MMP1-3, for Mistral-7B, Llama2-7B, Falcon-7B, and Pythia-12B). Accuracy performance is agnostic to the separators.]

IC-LKE is a flexible and effective knowledge estimator. We adapt IC-LKE by replacing the separator \u2018[space]\u2019 with three separators each from HGP and MMP for the relation \u2018original broadcaster\u2019, and report the estimation accuracy in Figure 4. We can observe that the \u2018[space]\u2019 token demonstrates performance equivalent to that of the semantically meaningful separators derived from HGP and MMP. Therefore, adding relation-specific separators has a limited impact on factual knowledge estimation, as long as the subject-object pairs are correctly presented. Furthermore, finding relation-specific prompts often requires hand-crafted effort, in contrast to an automatic in-context based approach like ours, where only (subject, object) pairs are used. Therefore, IC-LKE can potentially extend to any facts from knowledge graphs over any LLM, while HGP and MMP require additional supervision and relation-specific validation.
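The separator variation behind Figure 4 amounts to swapping the \u2018[space]\u2019 separator of Eq. (1) for a relation-specific template. A hypothetical sketch, with the template syntax being our own convention:

```python
# Variant of the Eq. (1) prompt builder with a configurable separator template;
# the default "{x} {y}" reproduces plain [space]-separated IC-LKE.
def make_sigma_with_template(train_facts, x, template="{x} {y}"):
    pairs = " ".join(template.format(x=xi, y=yi) for xi, yi in train_facts)
    head = template.split("{y}")[0].format(x=x).rstrip()  # template up to the object slot
    return f"{pairs} {head}"

# e.g., template="{x} was originally broadcast on {y}" mimics an HGP-style separator.
```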
4.2 Evaluating Diverse Models and Relations

We investigate the performance of 35 pre-trained LLMs and 14 fine-tuned LLMs across 50 relations using the IC-LKE framework. Our analysis is designed to uncover nuanced insights into the knowledge levels and structures within these models. We examine the results through two primary lenses: (1) the variations in knowledge across different model families, and (2) the influence of model size and fine-tuning within the same model family on their knowledge attributes.

[Figure 5: Accuracy of 35 pre-trained LLMs on the 50 different relations in T-REx-MC. Models are grouped by family and arranged from left to right based on the accuracy of the model closest to 7 billion parameters. Within each family, models are ordered by their average accuracy.]

[Figure 6: Pearson correlation coefficients between model families (Mistral, Llama2, Gemma, Llama, Falcon, MPT, GPT-NEOX, OPT, GPT-J, Pythia, Bloom). We compute the Pearson correlation coefficients between each pair of models and then average the correlations within each model family.]

4.2.1 Comparing different LLM families

Some model families are consistently more knowledgeable than the rest. We sort the model families based on the performance of the model closest to 7B parameters (7B is a good reference point since all model families except GPT-NEO-X have models within a gap of \u2264 1B parameters: Mistral-7B, Gemma-7B, Llama-7B, Falcon-7B, MPT-7B, OPT-6.7B, GPT-J-6B, Pythia-6.9B, and Bloom-7.1B), and we sort the models within each family based on their average accuracy across the 50 relations. Figure 5 shows that the Mistral, Llama2, Gemma, and Llama families achieve higher performance on most of the relations than Pythia, Bloom, and OPT, indicating that the latter embed less factual knowledge.

Different model families align in their relative factual knowledge. We investigate the correlations between each pair of models\u2019 performance over the 50 relations to assess the agreement in their knowledge levels of these relations. We compute the average correlations within each model family (e.g., Llama2 7B, 13B, 70B) in Figure 6. Despite differences in architecture and training datasets among model families, there is a significant consensus (correlation > 0.6, see Figure 14) regarding the hierarchy of knowledge across the various relations. We also compile the three best- and worst-performing relations for each model in Table 9, illustrating the consensus among all models.
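The agreement numbers in Figure 6 boil down to Pearson correlations between per-relation accuracy vectors; a small sketch with stand-in data:

```python
# Pearson correlation between two models' per-relation accuracies (50 relations).
import numpy as np

def relation_agreement(acc_a, acc_b):
    """acc_a, acc_b: arrays of per-relation accuracies for two models."""
    return np.corrcoef(acc_a, acc_b)[0, 1]

# Example with stand-in accuracy vectors over 50 relations:
rng = np.random.default_rng(0)
a, b = rng.uniform(size=50), rng.uniform(size=50)
print(relation_agreement(a, b))
```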
4.2.2 Comparing within the same LLM family

Larger models embed more knowledge. We show in Figure 5 that, within each model family, bigger models (e.g., Llama-65B) generally outperform their smaller counterparts (e.g., Llama-13B) in terms of accuracy, with an exception in the OPT family. Models within the same family are typically pre-trained on the same datasets (Biderman et al., 2023; Zhang et al., 2022; Touvron et al., 2023). Thus, this observation suggests that, when trained on identical datasets, the larger models capture a broader set of facts.

Despite being trained on the same data, models might remember different facts. From these results, however, it is not clear whether the larger models subsume smaller models in their factual knowledge, i.e., are the larger models also correct on the facts that the smaller models are correct on? To assess this, we compute the subsumption rate \u03b7:

\eta(\theta_1 \mid \theta_2, F) = \frac{|\phi(\theta_1, F) \cap \phi(\theta_2, F)|}{|\phi(\theta_1, F)|}

i.e., with \u03d5(\u03b8, F) denoting the set of facts from F known by model \u03b8, the fraction of facts from F known by the smaller model \u03b8_1 that the larger model \u03b8_2 also knows. A subsumption rate of \u223c1 indicates that all of the smaller model\u2019s knowledge is also contained in the larger model. To ensure a meaningful comparison across scales, we only consider models that were pre-trained using the same training data. Table 1 shows the average subsumption rate (\u03b7) between the largest and smallest models in a family, as well as the average accuracy, over all relations for different model families. Interestingly, \u03b7 is relatively low (< 0.5) for OPT, Pythia, and Bloom (i.e., the larger models know less than 50% of what the smaller models know), and it only reaches up to 0.8 for Gemma, Llama, and Llama-2. Therefore, even though models within each family are trained on the same datasets and generally agree on the relative knowledge of different relations (Figure 6), there are differences in the knowledge of the specific facts they retain from their training data.
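The subsumption rate \u03b7 translates directly into set arithmetic; in the sketch below, each argument is the set \u03d5(\u03b8, F) of facts a model answered correctly.

```python
# eta(theta_1 | theta_2, F): fraction of the smaller model's known facts that
# the larger model also knows.
def subsumption_rate(known_by_small, known_by_large):
    if not known_by_small:
        return float("nan")  # undefined when phi(theta_1, F) is empty
    return len(known_by_small & known_by_large) / len(known_by_small)

# Example: eta close to 1 means the larger model subsumes the smaller one.
print(subsumption_rate({"f1", "f2", "f3"}, {"f2", "f3", "f4"}))  # 2/3
```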
Table 1: Average subsumption rate (\u03b7) for different model families over the relations in T-REx-MC. Despite being trained on the same datasets, models of different sizes differ in the specific facts that they know (low \u03b7).

          Smallest model            Largest model
Family    #Parameters  Accuracy     #Parameters  Accuracy     \u03b7
Llama     7B           0.699        65B          0.836        0.769
Llama-2   7B           0.741        70B          0.846        0.801
Gemma     2B           0.666        7B           0.750        0.710
OPT       125m         0.430        30B          0.588        0.481
Pythia    70m          0.334        12B          0.648        0.403
Bloom     560m         0.410        7.1B         0.548        0.498

[Figure 7: Accuracy of base vs. chat-finetuned models. Fine-tuned versions obtain lower accuracy across the relations in T-REx-MC than their pre-trained counterparts.]

Fine-tuning reduces latent knowledge. Finally, we investigate the effects of chat-based fine-tuning on the factual knowledge of models. Base language models are often fine-tuned (using a mix of supervised and reinforcement learning (Ouyang et al., 2022)) to make them better at following instructions. While prior works have shown that this makes the models better at various benchmarks, it is unclear how such fine-tuning affects latent knowledge. Figure 7 illustrates the comparative accuracy of pre-trained models and their fine-tuned counterparts. In almost all cases, the fine-tuned models obtain lower accuracy than their base versions. This suggests that fine-tuning reduces the amount of extractable latent knowledge in the models. A similar observation was also made by Yu et al. (2024). We observe a similar trend using EIC-LKE in Appendix H.6, Figure 15. Additional results on evaluating generated outputs (using 50 tokens) in Figure 16 reveal the same pattern. To further assess whether the fine-tuned models acquire new knowledge, we compute the subsumption rate between the pre-trained and fine-tuned versions (Table 10). We find that most of the latent knowledge in fine-tuned models is already present in the base models (high \u03b7), indicating that fine-tuned models may not be obtaining additional knowledge.

5 Concluding Discussion

In this work, we investigate a new way to estimate latent factual knowledge from an LLM. Unlike prior approaches that use prompting, our method relies on in-context learning. Our method not only addresses many reliability concerns with prompting, but it also recollects (at times significantly) more factual knowledge than prompting. In contrast to prompting, which requires relationship-specific and LLM-specific prompt engineering, our method can be applied with minimal effort to test the factual knowledge of relations across a variety of structured knowledge bases and LLMs. This ability enables us to compare the latent knowledge captured by many different families of open-source LLMs; we expect our results to be of interest to the designers of these LLMs. Finally, to design our in-context learning based LKE, we explore the impact of the number and ordering of correct, incorrect, and unknown examples used as inputs; our findings may be of independent interest for developing a better understanding of in-context learning.

A fundamental question is posed by our and prior work on estimating latent knowledge in LLMs: What does it mean for an LLM to know a fact?
Suppose we tried to infer whether an LLM knows the capital of Germany using the input \"France Paris; Spain Madrid; Germany \", and suppose the answer were Berlin. What we have learnt is that the LLM knows that the relationship r between Germany and Berlin is similar to that between France and Paris or Spain and Madrid. What we have not learned is whether the LLM knows that the relation r is called \"capital\" in English or \"Hauptstadt\" in German. The latter is revealed by prompts such as \"The capital of Germany is \". But such prompts do not reveal whether the LLM knows that what Berlin means to Germany is similar to what Paris means to France. Is one type of knowing facts better than the other? This is difficult to answer in general. Neither type of knowing guarantees that the knowledge can be put to use in different contexts and tasks, such as when we ask the LLM where the parliament of Germany is located. Nevertheless, one clear takeaway from our study relates to how factual knowledge is latently embedded in an LLM. We show that more factual knowledge can be recollected using in-context learning, i.e., using the representations of subjects and objects that share the same relationship, than by prompting with the name of their relationship.

6 Limitations

This study contributes to advancing our understanding of latent factual knowledge in LLMs through an innovative in-context learning approach. However, it is essential to acknowledge the inherent limitations of our work. While the use of in-context learning aims to mitigate the influence of prompt engineering and the reliability issues associated with previous prompting methods, it introduces its own biases based on the selection and formulation of in-context examples. We discuss these in detail in Section 3. For example, the choice of which examples to include, their order, and their factual accuracy can influence model responses, and thus these in-context examples must be carefully curated for reliable latent knowledge estimation. Additionally, our study\u2019s limitation to testing simple-format facts underlines a critical gap in assessing LLMs\u2019 complex reasoning abilities. The knowledge estimation framework employed predominantly hinges on the LLM\u2019s capacity to correctly recall or recognize factual information from a given set of triplets or structured prompts. This narrows the scope of evaluation to straightforward factual recall, thereby overlooking the models\u2019 capability to engage in more sophisticated cognitive processes such as reasoning, synthesis, and inference, which we leave as open avenues for future work." + } + ] +} \ No newline at end of file