| { |
| "url": "http://arxiv.org/abs/2404.16899v1", |
| "title": "mlr3summary: Concise and interpretable summaries for machine learning models", |
| "abstract": "This work introduces a novel R package for concise, informative summaries of\nmachine learning models.\n We take inspiration from the summary function for (generalized) linear models\nin R, but extend it in several directions:\n First, our summary function is model-agnostic and provides a unified summary\noutput also for non-parametric machine learning models;\n Second, the summary output is more extensive and customizable -- it comprises\ninformation on the dataset, model performance, model complexity, model's\nestimated feature importances, feature effects, and fairness metrics;\n Third, models are evaluated based on resampling strategies for unbiased\nestimates of model performances, feature importances, etc.\n Overall, the clear, structured output should help to enhance and expedite the\nmodel selection process, making it a helpful tool for practitioners and\nresearchers alike.", |
| "authors": "Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "mlr3summary: Concise and interpretable summaries for machine learning models", |
| "main_content": "Introduction Machine learning (ML) increasingly supports decision-making processes in various domains. A data scientist has a wide range of models available, ranging from intrinsically interpretable models such as linear models to highly complex models such as random forests or gradient boosted trees. Intrinsically interpretable models can come at the expense of generalization performance, i.e., the model\u2019s capability to predict accurately on future data. Being able to interpret predictive models is often either a strict requirement for scientific inference or at least a very desirable property for auditing models in other (more technical) contexts. Many methods have been proposed for interpreting black-box ML models in the field of interpretable ML (IML). For comparing (generalized) linear models (GLMs), the stats package in R offers a summary function, which only requires the model (fitted with lm or glm) as input. As an example, glm is applied to a preprocessed version of the German credit dataset (Hofmann 1994) (available in the package via data(\"credit\", package = \"mlr3summary\")): arXiv:2404.16899v1 [cs.LG] 25 Apr 2024 \f2 mlr3summary: Concise and interpretable summaries for machine learning models > logreg = glm(risk ~ ., data = credit, + family = binomial(link = \"logit\")) > summary(logreg) Call: glm(formula = risk ~ ., data = credit, family = binomial(link = \"logit\")) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 1.057e+00 3.646e-01 2.900 0.00373 ** age 9.103e-03 8.239e-03 1.105 0.26925 ... Residual deviance: 656.19 on 515 degrees of freedom AIC: 670.19 ... This (shortened) summary informs about the significance of variables (Pr(>|z|)), their respective effect size and direction (Estimate), as well as the goodness-of-fit of the model (Residual deviance and AIC). 
Unfortunately, many other non-parametric ML models currently cannot be analyzed similarly: either targeted implementations exist for specific model classes, or an array of different model-agnostic interpretability techniques (e.g., to derive feature importance) scattered across multiple packages (Molnar, Bischl, and Casalicchio 2018; Biecek 2018; Zargari Marandi 2023) must be employed. However, especially in applied data science, a user often performs model selection or model comparison across a diverse pool of candidate models, so a standardized diagnostic output becomes highly desirable. Another issue is that in the glm-based summary, the goodness-of-fit is only evaluated on the training data, but not on hold-out/test data. While this might be appropriate for GLM-type models \u2013 provided proper model diagnosis has been performed \u2013 this is not advisable for non-parametric and non-linear models, which can overfit the training data.1 Here, hold-out test data or, in general, resampling techniques like cross-validation should be used for proper estimation of the generalization performance (Simon 2007). Such resampling-based performance estimation should also be used for loss-based IML methods. For interpretability methods that only rely on predictions, this might also be advisable but might not lead to huge differences in results (Molnar, K\u00f6nig, Herbinger, Freiesleben, Dandl, Scholbeck, Casalicchio, Grosse-Wentrup, and Bischl 2022; Molnar, Freiesleben, K\u00f6nig, Herbinger, Reisinger, Casalicchio, Wright, and Bischl 2023). Contributions With the mlr3summary package, we provide a novel model-agnostic summary function for ML models and learning algorithms in R. This is facilitated by building upon mlr3 (Lang, Binder, Richter, Schratz, Pfisterer, Coors, Au, Casalicchio, Kotthoff, and Bischl 2019; Bischl, Sonabend, Kotthoff, and Lang 2024) \u2013 a package ecosystem for applied ML, including resampling-based performance assessment. 
The summary function returns a structured overview that gives information on the underlying dataset and model, generalization performances, complexity of the model, fairness metrics, and feature importances and effects. For the latter two, the function relies on model-agnostic methods from the field of IML. The 1For completeness\u2019 sake: Overfitting can happen for GLMs, e.g., in high-dimensional spaces with limited sample size. \fSusanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann 3 output is customizable via a flexible control argument to allow adaptation to different application scenarios. The mlr3summary package is released under LGPL-3 on GitHub (https://github.com/mlr-org/mlr3summary) and CRAN (https://cran.r-project.org/package=mlr3summary). Documentation in the form of help pages is available, as well as unit tests. The example code of this manuscript is available via demo(\"credit\", package = \"mlr3summary\"). 2. Related work Most R packages that offer model summaries are restricted to parametric models and extend the stats summary method (e.g., modelsummary (Arel-Bundock 2022), broom (Robinson, Hayes, and Couch 2023)). Performance is only assessed based on training data \u2013 generalization errors are not provided. Packages that can handle diverse ML models focus primarily on performance assessment (e.g., mlr3 (Lang et al. 2019), caret (Kuhn 2008)). Packages that primarily consider feature importances and effects do not provide overviews in a concise, decluttered format but provide extensive reports (e.g., modelDown (Romaszko, Tatarynowicz, Urba\u0144ski, and Biecek 2019) and modelStudio (Baniecki and Biecek 2019) based on DALEX (Biecek 2018), or explainer (Zargari Marandi 2023)). While it is possible to base the assessment on hold-out/test data, assessment based on resampling is not automatically supported by these packages. 
Overall, to the best of our knowledge, there is no R package yet that allows for a concise yet informative overview based on resampling-based performance assessment, model complexity, feature importance and effect directions, and fairness metrics. 3. Design, functionality, and example The core function of the mlr3summary package is the S3-based summary function for mlr3 Learner objects. It has three arguments: object reflects a trained model \u2013 a model of class Learner fitted with mlr3; resample_result reflects the results of resampling \u2013 a ResampleResult object fitted with mlr3; control reflects some control arguments \u2013 a list created with summary_control (details in Section 3.2). The mlr3 package is the basis of mlr3summary because it provides a unified interface to diverse ML models and resampling strategies. A general overview of the mlr3 ecosystem is given in Bischl et al. (2024). With mlr3, the modelling process involves the following steps: (1) initialize a regression or classification task, (2) choose a regression or classification learner, (3) train a model with the specified learner on the initialized task, (4) apply a resampling strategy. The last step is necessary to obtain valid estimates for performances, importances, etc., as mentioned in Section 1. The following lines of code illustrate steps (1)-(4) on the (preprocessed) credit dataset from Section 1 using a ranger random forest. As a resampling strategy, we conduct 3-fold cross-validation. 
> task = TaskClassif$new(id = \"credit\", backend = credit, + target = \"risk\") > rf = lrn(\"classif.ranger\", predict_type = \"prob\") > rf$train(task) > cv3 = rsmp(\"cv\", folds = 3L) > rr = resample(task = task, learner = rf, resampling = cv3, + store_models = TRUE) Internally, the resample function fits, in each iteration, the model on the respective training data, uses the model to predict the held-out test data, and stores the predictions in the result object. To compute performances, complexities, importances, and other metrics, the summary function iteratively accesses the models and datasets within the resulting resample object, which requires setting the parameter store_models = TRUE within the resample function. For the final summary output, the results of each iteration are aggregated (e.g., averages and standard deviations (sds)). 3.1. Summary function and output This section shows the summary call and output for the random forest of the previous credit example and provides some details on each displayed paragraph. > summary(object = rf, resample_result = rr) General provides an overview of the task, the learner (including its hyperparameters), and the resampling strategy.2 Residuals display the distribution of residuals of hold-out data over the resampling iterations. For regression models, the residuals display the difference between true and predicted outcome. For classifiers that return class probabilities, the residuals are defined as the difference between predicted probabilities and a one-hot-encoding of the true class. For classifiers that return classes, a confusion matrix is shown. 
Performance displays averages and sds (in [ ]) of performance measures over the iterations.3 The shown performance values are the area-under-the-curve (auc), the F-score (fbeta), the binary Brier score (bbrier), and the Matthews correlation coefficient (mcc). The arrows display whether lower or higher values refer to a better performance. \u201c(macro)\u201d indicates a macro aggregation, i.e., measures are computed for each iteration separately before averaging. \u201c(micro)\u201d would indicate that measures are computed across all iterations (see Bischl et al. (2024) for details). Complexity displays averages and sds of two model complexity measures proposed by Molnar, Casalicchio, and Bischl (2020): sparsity shows the number of used features that have a non-zero effect on the prediction (evaluated by accumulated local effects (ale) (Apley and Zhu 2020)); interaction_strength shows the scaled approximation error between a main effect model (based on ale) and the prediction function.4 Importance shows the averages and sds of feature importances over the iterations. The first column (pdp) displays importances based on the sds of partial dependence curves (Friedman 2001; Greenwell, Boehmke, and McCarthy 2018), the second column (pfi.ce) shows the results for permutation feature importance (Breiman 2001; Fisher, Rudin, and Dominici 2019). Effects shows average effect plots over the iterations \u2013 partial dependence plots (pdp) and ale plots (Friedman 2001; Apley and Zhu 2020). For binary classifiers, the effect plots are only shown for the positively-labeled class (here, task$positive = \"good\"). For multi-class classifiers, the effect plots are given for each outcome class separately (one vs. all). For categorical features, the bars are ordered according to the factor levels of the feature. 
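The macro/micro distinction above can be made concrete with a small base-R sketch; the per-fold counts below are hypothetical and not taken from the credit example:

```r
# Hypothetical hold-out results of a 3-fold CV: per fold, the number of
# correctly classified observations and the fold size.
n_correct <- c(40, 45, 33)
n_total   <- c(50, 50, 40)

# "macro": compute the measure per fold first, then average over folds
acc_per_fold <- n_correct / n_total        # 0.800 0.900 0.825
macro_acc <- mean(acc_per_fold)
macro_sd  <- sd(acc_per_fold)              # the sd reported in [ ]

# "micro": pool all hold-out predictions across folds, then compute once
micro_acc <- sum(n_correct) / sum(n_total)

round(c(macro = macro_acc, micro = micro_acc), 3)  # macro 0.842, micro 0.843
```

With equal fold sizes the two aggregations coincide; here the smaller third fold makes them differ slightly.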
The learner can also be a complete pipeline from mlr3pipelines (Binder, Pfisterer, Lang, Schneider, Kotthoff, and Bischl 2021), where the most common case would be an ML model with associated pre-processing steps. Then, the summary output also shows some basic information about the pipeline.5 Since preprocessing steps are treated as being part of the learner, the summary output is displayed on the original data (e.g., despite one-hot encoding of categorical features, importance results are not shown for each encoding level separately). The learner can also be an AutoTuner from mlr3tuning, where automatic processes for tuning the hyperparameters are conducted. Examples on pipelining and tuning are given in the demo of the package. 3.2. Customizations 2Currently, this is the only paragraph that is based on object, all other paragraphs are based on resample_result. 3Please note that there is no unbiased estimator of the variance, see (Nadeau and Bengio 1999) and Section 5 for a discussion. 4The interaction strength has a value in [0, 1]: 0 means no interactions, 1 means no main effects but only interactions. 5Linear pipelines can be displayed in the console, non-linear parts are suppressed in the output. The output of the summary function can be customized via a control argument which requires a list created with the function summary_control as an input. If no control is specified, the following default setting is used: > summary_control(measures = NULL, + complexity_measures = c(\"sparsity\", \"interaction_strength\"), + importance_measures = NULL, n_important = 15L, + effect_measures = c(\"pdp\", \"ale\"), + fairness_measures = NULL, protected_attribute = NULL, + hide = NULL, digits = max(3L, getOption(\"digits\") - 3L)) Performances are adaptable via measures, complexities via complexity_measures, importances via importance_measures and effects via effect_measures within summary_control. 
Examples are given in the demo of the package. The default for measures and importance_measures is NULL, which results in a collection of commonly reported measures being chosen, based on the task type \u2013 for concrete measures see the help page (?summary_control). n_important reflects that, by default, only the 15 most important features are displayed in the output. This is especially handy for high-dimensional data. With hide, paragraphs of the summary output can be omitted (e.g., \"performance\") and with digits, the number of printed digits is specified. Fairness assessment for classification and regression models is also available in mlr3summary based on the mlr3fairness package (Pfisterer, Siyi, and Lang 2023). For this, a protected attribute must be specified. This can be done either within the task by updating the feature roles or by specifying a protected_attribute in summary_control. The following shows the code and output when specifying sex as a protected attribute. The shown default fairness measures are demographic parity (dp), conditional use accuracy equality (cuae) and equalized odds (eod); other measures are possible via fairness_measures in summary_control. > summary(object = rf, resample_result = rr, + control = summary_control(protected_attribute = \"sex\")) 4. Runtime assessment To assess how the runtime scales with differing numbers of features p \u2208 {5, 10, 25, 50, 100} and numbers of observations n \u2208 {50, 100, 500, 1000, 2000}, we conducted a simulation study. Given X1, X2, X3 \u223c U(0, 1) and X4 \u223c Bern(0.75), the data generating process is y = f(x) + \u03f5 with f(x) = 4x1 + 4x2 + 4x4x3^2 and \u03f5 \u223c N(0, 0.1 \u00b7 f(x)). As noise variables, X5 as a categorical feature with five classes, and X6, ..., Xp \u223c N(0, 1) were added to the data. We trained random forests and linear main effect models on the datasets and conducted 3-fold cross-validation. 
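The data-generating process of the simulation study can be sketched in base R as follows; reading N(0, 0.1·f(x)) as a normal with standard deviation 0.1·f(x), and the concrete choices n = 500 and p = 10, are assumptions for illustration:

```r
# Sketch of the simulation's data-generating process (assumed reading:
# the noise term N(0, 0.1*f(x)) has standard deviation 0.1*f(x)).
set.seed(1)
n <- 500; p <- 10
X1 <- runif(n); X2 <- runif(n); X3 <- runif(n)    # X1, X2, X3 ~ U(0, 1)
X4 <- rbinom(n, size = 1, prob = 0.75)            # X4 ~ Bern(0.75)
fx <- 4 * X1 + 4 * X2 + 4 * X4 * X3^2             # f(x)
y  <- fx + rnorm(n, mean = 0, sd = 0.1 * fx)      # heteroscedastic noise
X5 <- factor(sample(letters[1:5], n, replace = TRUE))  # 5-class noise feature
noise <- matrix(rnorm(n * (p - 5)), nrow = n)          # X6, ..., Xp ~ N(0, 1)
dat <- data.frame(y, X1, X2, X3, X4, X5, noise)
dim(dat)  # n rows, p + 1 columns
```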
The first two figures in Figure 1 show that runtimes of the linear model were lower compared to the random forest. To improve runtimes, we added parallelization over the resampling iterations (via the future package (Bengtsson 2021)) as another feature to mlr3summary \u2013 results for the random forest (with 3 cores) are on the right. Overall, scaling of runtimes is worse in p than in n. Figure 1: Runtimes of the summary function for linear models (left), and random forests without (middle) and with (right) parallelization, for differing numbers of features p and observations n. 5. Outlook and discussion In conclusion, this paper introduces a novel R package for concise model summaries. The summary output is highly adaptable due to a control argument and might be extended in the future. We also plan to offer a report function for detailed visualizations and model comparisons. To assess importance and effects of single features, mlr3summary builds upon the iml and fastshap packages. These packages only offer a limited set of interpretation methods. Recommended alternatives to permutation feature importance, like conditional feature importance (Molnar et al. 2022), are currently not available in a proper R package (published on CRAN). Our summary also currently lacks proper statistical tests for importances or confidence intervals for performances. This is because unbiased estimates of the variance are required, which is a challenge for resampling strategies, and the available methods that propose unbiased estimates are computationally infeasible (e.g., due to many model refits) (Molnar et al. 2023; Bates and Tibshirani 2023). Addressing this issue requires some concerted efforts from the research community. If methods are readily available in R, we are happy to integrate them in mlr3summary. 
Acknowledgments This work has been partially supported by the Federal Statistical Office of Germany.", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.13558v2", |
| "title": "LASER: Tuning-Free LLM-Driven Attention Control for Efficient Text-conditioned Image-to-Animation", |
| "abstract": "Revolutionary advancements in text-to-image models have unlocked new\ndimensions for sophisticated content creation, e.g., text-conditioned image\nediting, allowing us to edit the diverse images that convey highly complex\nvisual concepts according to the textual guidance. Despite being promising,\nexisting methods focus on texture- or non-rigid-based visual manipulation,\nwhich struggles to produce the fine-grained animation of smooth\ntext-conditioned image morphing without fine-tuning, i.e., due to their highly\nunstructured latent space. In this paper, we introduce a tuning-free LLM-driven\nattention control framework, encapsulated by the progressive process of LLM\nplanning, prompt-Aware editing, StablE animation geneRation, abbreviated as\nLASER. LASER employs a large language model (LLM) to refine coarse descriptions\ninto detailed prompts, guiding pre-trained text-to-image models for subsequent\nimage generation. We manipulate the model's spatial features and self-attention\nmechanisms to maintain animation integrity and enable seamless morphing\ndirectly from text prompts, eliminating the need for additional fine-tuning or\nannotations. Our meticulous control over spatial features and self-attention\nensures structural consistency in the images. This paper presents a novel\nframework integrating LLMs with text-to-image models to create high-quality\nanimations from a single text input. We also propose a Text-conditioned\nImage-to-Animation Benchmark to validate the effectiveness and efficacy of\nLASER. Extensive experiments demonstrate that LASER produces impressive,\nconsistent, and efficient results in animation generation, positioning it as a\npowerful tool for advanced digital content creation.", |
| "authors": "Haoyu Zheng, Wenqiao Zhang, Yaoke Wang, Hao Zhou, Jiang Liu, Juncheng Li, Zheqi Lv, Siliang Tang, Yueting Zhuang", |
| "published": "2024-04-21", |
| "updated": "2024-04-23", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "LASER: Tuning-Free LLM-Driven Attention Control for Efficient Text-conditioned Image-to-Animation", |
| "main_content": "INTRODUCTION Diffusion models [8, 12, 24] form a category of deep generative models that has recently become one of the hottest topics in multimodal intelligence, showcasing impressive capabilities of text-to-image (T2I) generation, ranging from the high level of details to the diversity of the generated examples. Such diffusion models also unlock a new world of creative processes in content creation, e.g., text-guided image editing [5, 6, 10], which involves editing the diverse images that convey highly complex visual concepts with text-to-image models arXiv:2404.13558v2 [cs.CV] 23 Apr 2024 \fHaoyu Zheng Wenqiao Zhang Yaoke Wang Hao Zhou Jiang Liu and Juncheng Li Zheqi Lv Siliang Tang Yueting Zhuang solely through the textual guidance. Broadly, the contemporary image editing paradigm can be summarized in two aspects: i) Texture editing [5, 6, 10], manipulating a given image\u2019s stylization and appearance while maintaining the input structure and scene layout; ii) Non-rigid editing [6, 18], enabling non-rigid image editing (e.g., posture changes) while preserving its original characteristics. Despite achieving impressive image-level editing effects, the aforementioned methods fail to harness editing animation, i.e., the smooth transition of a sequence of intermediary images according to the user\u2019s textual requirement, including fine-grained texture and non-rigid transformation. Such text-conditioned image-to-animation serves as an imperative component in various real-world content creation tasks, ranging from cinematic effects to computer games, as well as photo-editing tools for artistic and entertainment purposes to enrich people\u2019s imagination. Nevertheless, realizing animation-level editing is highly challenging, primarily due to the highly unstructured latent space of the intermediary images. 
Of course, we can introduce more animation data to fine-tune the entire T2I diffusion models, thereby capturing smooth animation edits. However, this comes at a tremendous cost and deteriorates the flexibility of the pre-trained diffusion models under the animation-level editing setting. Based on the above insights, one question is thrown: given the input image and textual description, could we achieve a high-quality animation editing effect with pre-trained text-to-image models without fine-tuning? In this paper, we introduce a novel tuning-free LLM-driven attention control framework for text-conditioned image-to-animation, through LLM planning \u2192 prompt-Aware Editing \u2192 StablE moRphing, named LASER. The core of our framework is to leverage large language models (LLMs) [1, 21, 37, 48], with their significant potential in natural language processing, to effectively parse the textual description into relevant and continuous control statements for pre-trained T2I diffusion models, thereby transforming the given image into an animation. Specifically, LASER comprises the following progressive steps: Step 1, given a multimodal input, i.e., a description of the animation P0 and an initial image I0 (which can be optional, allowing the T2I model generation), the LLM decomposes the general and coarse-grained description P0 into multiple fine-grained and consistent prompts. These prompts are closely aligned and exhibit subtle variations, aiding in the guided editing of subsequently corresponding keyframes; Step 2, the LLM analyzes these prompts to derive the feature and attention injection control signals, adapting to the nuanced differences between adjacent prompts. This enables tailored injection strategies for editing different keyframe types. 
Notably, the injection strategy delineates into two base categories: Feature and Association Injection (FAI) for texture-based editing and Key-Value Attention Injection (KVAI) for non-rigid editing. Moreover, to facilitate the simultaneous portrayal of both texture and non-rigid editing within a singular animation phase, we propose the Hybrid Attention Injection (HAI) for image editing; Step 3, effectively synthesizing intermediate frames between keyframes, ensuring animations are coherent and fluid. This generator utilizes advanced interpolation methods, such as spherical linear interpolation, to ensure smooth transitions and reduce artifacts. Additionally, Adaptive Instance Normalization (AdaIN) is applied to enhance color and brightness consistency. The Hybrid Attention Injection (HAI) strategy is also employed to integrate texture and structural transformations within a single animation phase, further enhancing the animation\u2019s overall quality and coherence. Additionally, we inaugurate a Text-conditioned Image-to-Animation Benchmark, a comprehensive collection designed to challenge and quantify the adaptability and precision of the proposed LASER. Summing up, our contributions can be concluded as: \u2022 We introduce the tuning-free text-conditioned image-to-animation task, designed to craft high-quality animations based on multimodal input using pre-trained text-to-image models, without additional fine-tuning or annotations. To evaluate the efficacy of our approach, we introduce the Text-conditioned Image-to-Animation Benchmark, hoping that it may support future studies within this domain. \u2022 The proposed LASER is encapsulated by the progressive process of LLM planning \u2192 prompt-aware editing \u2192 stable morphing, enabling smooth texture- and non-rigid animation generation. 
\u2022 Both qualitative and quantitative assessments underscore the superior efficacy of the proposed framework, showcasing its proficiency in generating animations that are not only smooth and of high quality but also diverse. 2 RELATED WORK Text-to-Image Generation. In artificial intelligence [49\u201351], text-to-image (T2I) generation aims to generate high-quality images based on text descriptions. Previous text-conditioned image generation approaches were primarily based on Generative Adversarial Networks (GANs) [4, 41, 43, 44, 52], leveraging their robust capabilities for high-fidelity image synthesis. These models, through multimodal vision-language learning, have endeavored to align text descriptions with synthesized image contents, yielding gratifying synthesis results on specific domain datasets. Recently, diffusion models [8, 12, 24] have demonstrated exceptional generative capabilities, achieving state-of-the-art results in terms of generation quality and diversity. By incorporating text prompts into diffusion models, various text-to-image diffusion models [28, 30, 31] have been developed. They are intricately conditioned on the provided text via cross-attention layers, ensuring that the generated images are not only visually coherent but also semantically consistent with the input descriptions. Text-guided Image Editing. Text-guided image editing is a challenging task that aims to edit images based on textual descriptions, enabling users to achieve desired changes in natural language. Previous deep-learning-based approaches based on GANs [20, 23, 27, 40] have achieved certain success, but they are limited to specific domain datasets and exhibit limited applicability and generalization. VQGAN-CLIP [7] is an autoregressive model that combines VQGAN [9] and CLIP [29] to produce high-quality images and enable precise editing, yielding diverse and controllable results. However, this method suffers from slow generation speed and high computational cost. 
Recently, diffusion models trained on large-scale text-image pairs such as Imagen [31] and Stable Diffusion [30] have achieved unprecedented success in text-to-image generation. Therefore, they serve as a robust prior for various editing tasks, including text-guided image manipulation [5, 6, 10, 18, 26, 38]. Prompt-to-Prompt \fLASER: Tuning-Free LLM-Driven Attention Control for Efficient Text-conditioned Image-to-Animation [10] and Plug-and-Play [38] utilize cross-attention or spatial features to edit both global and local aspects of the image by directly modifying the text prompt. MasaCtrl [6] and Imagic [18] can handle non-rigid transformations such as changing object poses. Particularly, Plug-and-Play [38] considers the task of text-guided image-to-image translation, which aims to estimate a mapping of an image from a source domain to a target domain, where the target domain is not specified through a dataset of images but rather via a target text prompt. However, most of these approaches directly generate the final edited image, with limited exploration concerning continuous animations such as image morphing. Image Morphing. Image morphing is a task in computer graphics and image processing that aims to obtain reasonable intermediate images in the smooth transition between two images [2, 53]. With the advent of deep learning, neural networks have been used for image morphing, learning to identify correspondences and generate intermediate frames through latent interpolations. For instance, works on GANs [15\u201317, 32, 33] have demonstrated that their latent embedding space is highly continuous, and linear interpolation between two latent codes yields impressive image morphing results. Recent studies on diffusion models have also indicated the feasibility of generating plausible intermediate images through latent noise interpolation and text embedding interpolation [3, 36, 39]. 
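The latent-noise interpolation mentioned above is often done spherically rather than linearly. A minimal base-R sketch of spherical linear interpolation (slerp) between two latent vectors, using illustrative 8-dimensional vectors in place of a full noise tensor:

```r
# slerp between two latent vectors z0 and z1 at interpolation weight a in [0, 1]
slerp <- function(z0, z1, a) {
  # angle between the two latents (via their cosine similarity)
  theta <- acos(sum(z0 * z1) / (sqrt(sum(z0^2)) * sqrt(sum(z1^2))))
  (sin((1 - a) * theta) * z0 + sin(a * theta) * z1) / sin(theta)
}
set.seed(7)
z0 <- rnorm(8); z1 <- rnorm(8)              # toy "latent noise" vectors
alphas <- seq(0, 1, length.out = 5)
frames <- lapply(alphas, function(a) slerp(z0, z1, a))
```

At a = 0 and a = 1 the formula returns the endpoints exactly; intermediate weights trace a great-circle path, which keeps the interpolated noise on (approximately) the same norm shell as Gaussian samples, unlike linear interpolation.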
IMPUS [42] explored the application of diffusion models in image morphing tasks, performing interpolation in the locally linear continuous text embedding space and Gaussian latent space. DiffMorpher [45] utilizes pre-trained diffusion models to achieve smooth and natural image interpolation and morphing. It performs spherical linear interpolation on the latent noise obtained through DDIM inversion for two images and combines it with text-conditioned linear interpolation, thus addressing the limitations of smooth interpolation between two image samples within the unstructured latent space of diffusion models. 3 METHODOLOGY Given a user-defined descriptor P* and an initial image I0 (provided or generated), our method generates the animation sequence {x_0^(\u03b1), x_1^(\u03b1), . . . , x_n^(\u03b1)}, where \u03b1 varies from 0 to 1. The length of each sequence x_i^(\u03b1) is set by n_f, and the number of sequences corresponds to the number of transformation stages n_t. The resulting animation is expected to visually manifest the smooth transition of I0 to I_n and the characteristics described by P*. To guide this generative process, a series of descriptive prompts {P_0, P_1, . . . , P_{n_t}} are derived to anchor each keyframe in the animation\u2019s continuity. 3.1 Preliminary for Diffusion Models Diffusion models [12] [35] [24] are a series of probabilistic generative models that produce images by gradual denoising from a noise distribution, e.g., a Gaussian distribution. The generation process consists of two main phases: the forward (diffusion) process and the reverse (denoising) process. 
The forward process gradually adds noise to the initial data x_0 to generate noisy data x_t given a variance schedule \u03b1_t \u2208 (0, 1) at time-step t: q(x_t | x_0) = N(x_t; \u221a(\u03b1\u0304_t) x_0, (1 \u2212 \u03b1\u0304_t) I), (1) where \u03b1\u0304_t = \u220f_{i=1}^{t} \u03b1_i. After T steps, we obtain noise x_T \u223c N(0, I). The reverse process aims to gradually remove the noise. By utilizing Bayes\u2019 rule and the Markov property, we can express the conditional probabilities as: q(x_{t-1} | x_t) = N(x_{t-1}; (1/\u221a\u03b1_t)(x_t \u2212 ((1 \u2212 \u03b1_t)/\u221a(1 \u2212 \u03b1\u0304_t)) \u03f5), \u03b2\u0303_t I), (2) where \u03b2\u0303_t is a time-dependent constant and the added noise \u03f5 can be predicted by a neural network \u03f5_\u03b8. By sampling x_{t-1} iteratively, we finally obtain a clean image x_0 from the initial Gaussian noise x_T. We employ text-conditioned Stable Diffusion (SD) [30], which operates within a lower-dimensional latent space rather than pixel space. It begins with encoding images to a latent representation by a variational auto-encoder (VAE) [19], followed by a diffusion-denoising process within the latent space. After denoising, the latent representation is decoded back into the image space via a decoder network, culminating in the final generated image. 
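The closed-form forward sampling of Eq. (1) can be written in a few lines of base R; the linear beta schedule and the toy 3-pixel "image" are assumptions for illustration, not specified in the paper:

```r
# DDPM-style forward process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
T_steps <- 1000
beta  <- seq(1e-4, 0.02, length.out = T_steps)  # assumed linear schedule
alpha <- 1 - beta
abar  <- cumprod(alpha)                         # abar_t = prod_{i=1}^{t} alpha_i

set.seed(42)
x0  <- c(0.5, -1.2, 2.0)                        # toy 3-pixel "image"
t   <- 500
eps <- rnorm(length(x0))                        # eps ~ N(0, I)
xt  <- sqrt(abar[t]) * x0 + sqrt(1 - abar[t]) * eps  # one draw from q(x_t | x_0)
```

Because abar_t decreases monotonically toward 0, x_T is (almost) pure Gaussian noise, matching the statement that x_T ~ N(0, I) after T steps.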
In the noise-predicting network $\epsilon_\theta$, residual blocks process image features to generate intermediate features $f_t^l$, which are then used in the self-attention module to produce $Q$, $K$, $V$ for capturing long-range interactions. Subsequently, cross-attention integrates the textual prompt $P$, merging text and image semantics. The attention mechanism can be formulated as follows:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\frac{QK^T}{\sqrt{d_k}}\Big)V, \quad (3)$$

where $Q$, $K$, and $V$ represent queries, keys, and values, respectively, with $d_k$ denoting the key/query dimension used to scale the dot product. In this model, $Q$ originates from spatial features, while $K$ and $V$ come from spatial features and text embeddings for self- and cross-attention, respectively. The attention layers within the SD model significantly affect image composition and development [10][38], guiding image editing and synthesis by manipulating attention-related information during denoising [6].

3.2 LLM-driven Controller

In this section, we first utilize an LLM to extract aligned textual prompts for each key animation stage. Our approach supports two input modalities: text-image pairs and text-only inputs. If the user provides an image, it is directly utilized as the initial image $I_0$. In cases where the initial image is absent, we leverage pre-trained Stable Diffusion models to generate $I_0$. To generate animations that adhere to the semantics of a specified text description $P^*$, we require text prompts {P_0, P_1, ...
, P_{n_t}} for each key animation stage, as these prompts directly guide the animation process. High-quality, detailed text prompts are crucial when no initial image is provided, as the model generates $I_0$ from $P_0$'s semantic cues; prompts for Stable Diffusion should be richly descriptive to accurately produce a high-quality starting image. To enhance the quality and stability of the process, we introduce two agents based on large language models: the "Stage Image Text Prompt Agent" (SIA) and the "Stable Diffusion Prompt Generator Agent" (PGA). Initially, SIA generates the text prompts that guide image generation for each key stage, as illustrated in Fig. 2 (a). SIA generates text prompts based on two fundamental principles: i) by decomposing the animation descriptor $P^*$ into multiple independent processes, SIA reduces semantic differences between adjacent prompts, enhancing the overall quality of the results; ii) the prompts must be highly aligned to facilitate high-quality intermediate results through linear interpolation.

Haoyu Zheng, Wenqiao Zhang, Yaoke Wang, Hao Zhou, Jiang Liu, Juncheng Li, Zheqi Lv, Siliang Tang, Yueting Zhuang

Figure 2: Overview of the proposed LASER. (a) The LLM-driven Controller first parses the descriptive prompt to generate the descriptive prompts for the corresponding frames of the animation. (b) The LLM then analyzes these prompts to derive the feature and attention injection control signals, facilitating the simultaneous portrayal of both texture and non-rigid editing. (c) The animation generator leverages spherical linear interpolation and adaptive instance normalization to generate the intermediate images between keyframes, achieving smooth animation generation. (The figure's example uses the guidance "A sitting cat turns into a jumping dog." with stage prompts P0 "A cat sitting on the ground.", P1 "A cat jumping on the ground.", and P2 "A dog jumping on the ground.")
Given the local linearity within the CLIP text embedding space [18], minimizing the gap between adjacent embeddings is essential. A practical method is to use consistent sentence structures across prompts, such as "A cat [action] on the ground" and "A [animal] jumping on the ground" [42]. This approach ensures that while the prompts are semantically distinct, they share a common categorical root, thus streamlining the generation process. This generation method successfully mitigates the non-linearity and discontinuity commonly encountered between text embeddings. With the deployment of the Stage Image Text Prompt Agent (SIA), we significantly bolster our model's capacity to generate semantically coherent, high-quality images.

The Stable Diffusion Prompt Generator Agent (PGA) converts the broad, high-level concepts from SIA into richly detailed, vividly descriptive prompts specifically crafted for Stable Diffusion. As depicted in Fig. 2 (b), once PGA receives the initial text prompt from SIA, it refines this input into a more detailed prompt. This enhanced prompt not only delineates the subject and action but also enriches the scene with specific elements such as texture, lighting, and artistic style, instructing Stable Diffusion to produce images of higher fidelity and complexity [22].

3.3 Hybrid Prompt-aware Editor

This section utilizes the aligned textual prompts to obtain the keyframe images. During the editing process, $Z^*_{1,T}$ is a direct copy of $Z^*_{0,T}$. For $i \geq 1$, each keyframe $x_i$ undergoes DDIM inversion to produce $Z_{i,T}$, which is then cloned to form $Z^*_{i+1,T}$ for the subsequent keyframe.
Despite using aligned prompts for text-guided image editing, we still observe a marked discrepancy in semantic identity between the images, which results in animations that do not transition smoothly. To overcome this challenge, we draw inspiration from previous image editing techniques [6, 38] and propose a feature and attention injection method controlled by the LLM, tailored to query semantically similar content from the previous keyframe according to the changing nature of the corresponding stage. Utilizing DDIM inversion on the prior keyframe, we obtain the initial state $Z_{i,T}$. Past work [38] has demonstrated that injecting features $f_t^l$ within residual blocks and self-attention projections $q_t^l$, $k_t^l$ significantly boosts text-guided image editing tasks. The encoding in the fourth layer, $f_t^4$, specifically captures the shared semantics necessary for structure retention during generation. Moreover, the self-attention injections are underpinned by the attention scores, which arise from the product of query and key vectors and exhibit a profound connection to well-established self-referential paradigms within neural attention schemas. By injecting the specific features $f_t^4$ into the fourth layer of the residual blocks and introducing the self-attention elements $q_t^l$ and $k_t^l$ throughout all decoder layers, we successfully achieve texture variations between keyframes.
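The injected projections above feed the scaled dot-product attention of Eq. (3); for reference, a minimal NumPy version (shapes are illustrative; in SD, $Q$ comes from spatial features while $K$ and $V$ come from spatial features or text embeddings):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q: (n_q, d_k), K: (n_kv, d_k), V: (n_kv, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_q, n_kv) similarity matrix
    return softmax(scores, axis=-1) @ V  # (n_q, d_v) weighted combination of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((8, 16))
K = rng.standard_normal((10, 16))
V = rng.standard_normal((10, 32))
out = attention(Q, K, V)  # shape (8, 32)
```

Swapping the $K$/$V$ arguments for those cached from a previous keyframe's denoising pass is exactly the mechanism KVAI exploits.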
LASER: Tuning-Free LLM-Driven Attention Control for Efficient Text-conditioned Image-to-Animation

Figure 3: Overview of Feature and Association Injection (features $f_t^l$ and self-attention projections $q_t^l$, $k_t^l$ from keyframe $x_i$ replace those of keyframe $x_{i+1}$ at each denoising step).

We refer to this injection strategy as "Feature and Association Injection" (FAI). However, the aforementioned method struggles with non-rigid keyframe modifications. The usual solution, limiting the injection range so that rigid changes from the prompts can be reflected, risks losing image identity. To navigate this, especially for non-rigid edits, we avoid injecting into the residual blocks, thereby maintaining the image's structural integrity without it being obscured by local semantics. Our strategy instead uses targeted attention injections. As the image layout solidifies early in denoising and self-attention queries align semantically [6], they can extract content from various objects. After the initial denoising steps, we inject the keys $k_t^l$ and values $v_t^l$ from the previous keyframe's self-attention block, as shown in Fig. 4. This process first forms the objects' outlines following the text prompts and then enriches the generative structure with detailed content from the source image. Consequently, we achieve semantically coherent images that also support non-rigid transitions. We refer to this injection strategy as "Key-Value Attention Injection" (KVAI).
Up to this point, the model has acquired the capability to generate diverse keyframes, enabling production of the expected animations. Recognizing the need for a systematic approach to select the optimal injection strategy for each stage of the animation, we developed the Injection Control Agent (ICA), as shown in Fig. 2 (a). ICA's primary role is to process the text prompts from the Stage Image Text Prompt Agent (SIA) and perform an in-depth analysis of the semantic differences between the text prompts at consecutive key stages. This analysis yields tailored control signals: "0" signals ICA to deploy the FAI strategy for stages where texture changes are dominant, and "1" signals the use of the KVAI strategy for stages with non-rigid transformations. By precisely managing the type of attention injection at each stage, ICA ensures that the generated animations are both visually coherent and closely aligned with the textual descriptors.

3.4 Animation Generator

In this section, we generate intermediate images between the keyframe images to obtain consistent and smooth animations. After generating the text prompts $P_0, P_1, \ldots, P_{n_t}$ corresponding to each key stage in Section 3.2, we obtain the respective text embeddings $e_0, e_1, \ldots, e_{n_t}$. When generating intermediate images, we perform a simple linear interpolation between the text embeddings of two adjacent key stages to obtain the corresponding text embedding $e$.
Figure 4: Overview of Key-Value Attention Injection (keys $k_t^l$ and values $v_t^l$ from keyframe $x_i$'s self-attention replace those of keyframe $x_{i+1}$; residual-block features are left untouched).

$$e^i_\alpha = (1 - \alpha)\, e_i + \alpha\, e_{i+1} \quad (4)$$

In the construction of the animation sequence, the interpolation parameter $\alpha$ is discretized into a series of values that facilitate a smooth transition between frames. This discretization is achieved by defining a set of equidistant points within the closed interval $[0, 1]$, where the number of points corresponds to the intended number of frames in an animation stage, denoted $n_f$. Thus, $\alpha$ takes on the values $\alpha_0, \alpha_1, \ldots, \alpha_{n_f-1}$, where $\alpha_0 = 0$ represents the starting frame and $\alpha_{n_f-1} = 1$ the ending frame. The intermediate values of $\alpha$ correspond to proportionally spaced frames within the animation sequence, ensuring linear spacing. This arrangement guarantees that each frame represents a weighted blend of the preceding and subsequent key-stage embeddings, facilitating a smooth and continuous transformation across the animation. To ensure visual continuity in the sequence of intermediate images, we interpolate the latent noise of these images using the latent noise from the adjacent key stages. However, standard linear interpolation may introduce artifacts.
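The $\alpha$ discretization and the per-stage embedding blend of Eq. (4) amount to a few lines (embedding dimension and frame count below are illustrative):

```python
import numpy as np

def stage_alphas(n_f):
    # n_f equidistant points in [0, 1]: alpha_0 = 0 (start frame),
    # alpha_{n_f - 1} = 1 (end frame)
    return np.linspace(0.0, 1.0, n_f)

def lerp_embeddings(e_i, e_next, n_f):
    # Eq. (4): e^i_alpha = (1 - alpha) * e_i + alpha * e_{i+1} per frame
    return [(1 - a) * e_i + a * e_next for a in stage_alphas(n_f)]

e0, e1 = np.zeros(4), np.ones(4)     # stand-ins for CLIP text embeddings
frames = lerp_embeddings(e0, e1, 5)  # 5 embeddings from e0 to e1
```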
To address this, we adopt spherical linear interpolation (slerp) [34], which effectively minimizes artifacts and enhances the smoothness of transitions:

$$z^i_{T,\alpha} = \frac{\sin((1-\alpha)\xi)}{\sin \xi}\, Z_{i,T} + \frac{\sin(\alpha \xi)}{\sin \xi}\, Z_{i+1,T} \quad (5)$$

where $\xi = \arccos\Big(\frac{Z_{i,T} \cdot Z_{i+1,T}}{\|Z_{i,T}\|\, \|Z_{i+1,T}\|}\Big)$.

To maintain consistency in the color and luminance of the generated and source images, we apply a variant of Adaptive Instance Normalization (AdaIN) [14] to adjust the interpolated latent noise $z^i_{0,\alpha}$ before denoising. We calculate and then interpolate the per-channel means ($\mu$) and standard deviations ($\sigma$) of the latent noises:

$$\mu^i_\alpha = (1 - \alpha)\,\mu_i + \alpha\,\mu_{i+1} \quad (6)$$

$$\sigma^i_\alpha = (1 - \alpha)\,\sigma_i + \alpha\,\sigma_{i+1} \quad (7)$$

$$\tilde{z}^i_{0,\alpha} = \sigma^i_\alpha \left( \frac{z^i_{0,\alpha} - \mu(z^i_{0,\alpha})}{\sigma(z^i_{0,\alpha})} \right) + \mu^i_\alpha \quad (8)$$

Subsequently, the adjusted latent noise $\tilde{z}^i_{0,\alpha}$ supplants the original $z^i_{0,\alpha}$ during the denoising steps, thereby improving the brightness and color consistency of the resulting images.
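Eqs. (5)-(8) can be sketched together: spherical interpolation of the inverted latents followed by the AdaIN-style moment correction. The sketch below operates globally on a flattened latent for brevity, whereas the paper computes the moments per channel:

```python
import numpy as np

def slerp(z0, z1, alpha, eps=1e-8):
    # Eq. (5): interpolate along the great circle between the two latents
    cos_xi = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1) + eps)
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    if np.sin(xi) < eps:  # nearly parallel latents: fall back to plain lerp
        return (1 - alpha) * z0 + alpha * z1
    return (np.sin((1 - alpha) * xi) * z0 + np.sin(alpha * xi) * z1) / np.sin(xi)

def adain_adjust(z_alpha, z_i, z_next, alpha):
    # Eqs. (6)-(8): renormalize the interpolated latent so its mean/std match
    # the interpolated moments of the two endpoint latents
    mu = (1 - alpha) * z_i.mean() + alpha * z_next.mean()
    sigma = (1 - alpha) * z_i.std() + alpha * z_next.std()
    return sigma * (z_alpha - z_alpha.mean()) / (z_alpha.std() + 1e-8) + mu

rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(64), rng.standard_normal(64)
z_mid = adain_adjust(slerp(z0, z1, 0.5), z0, z1, 0.5)
```

Slerp keeps the interpolated latent near the Gaussian shell that diffusion models expect, which is why it avoids the contrast and saturation loss that plain linear interpolation tends to cause.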
Figure 5: Qualitative evaluation under the textual guidance "A standing horse turns into a running zebra." and "A bird starts to fly." (compared methods: Ours, DiffMorpher, Diff.Interp, MasaCtrl, PnP, DDIM). Our method produces animations that significantly outperform previous methods in terms of quality, smoothness, and alignment with user input.

Finally, during the denoising process of each intermediate image $x^i_\alpha$, we also perform feature and self-attention injection. When the stage number $n_t$ is not "-1", we implement the standard injection strategy, wherein, while generating $x^i_\alpha$, the injection is obtained from $x_i$. When SIA's feedback on $n_t$ is "-1", this indicates a special request from the user for "single-stage generation", which involves both texture changes and non-rigid transformations within a single animation stage. In such cases, ICA leads the model to execute the Hybrid Attention Injection (HAI) strategy. HAI solves the issue that, with the normal injection strategies, the model is unable to produce animations that simultaneously exhibit changes in texture and structure within a single stage; this phenomenon is further discussed in Section 4. The HAI process begins by editing $x_0$ to produce $x_1$ using Feature and Association Injection (FAI); subsequently, $x_2$ is edited from $x_1$ using KVAI. Following these edits, DDIM inversion is applied to extract the latent representations $Z_{0,T}$ and $Z_{2,T}$, which are then interpolated to construct the intermediate latent representation $z^i_{T,\alpha}$.
During the denoising phase, injections are strategically administered based on the interpolation parameter $\alpha$: injections of $\{k_t^l, v_t^l\}$ corresponding to $x_0$ are applied in the initial $(1-\alpha)T$ steps, and those corresponding to $x_2$ in the subsequent $\alpha T$ steps. This method effectively conveys the semantic and structural information of significantly transformed images, ensuring smooth and consistent animations by querying local structures and textures from the input images throughout the denoising process.

4 EXPERIMENTS

We employ the publicly available Stable Diffusion v2.1-base [30] as our diffusion model and GPT-4 8k [25] as the LLM in our experiments. For generating the initial image $I_0$, we utilize two pre-trained models, the real-style model dreamshaper-8 and the anime-style model MeinaMix, to assess our model's capability to produce animations across diverse styles. In creating intermediate images, we apply DDIM deterministic sampling. For keyframe synthesis, to balance efficiency and quality, we perform deterministic DDIM inversion with 100 forward steps followed by deterministic DDIM sampling with 100 backward steps. When implementing Feature and Association Injection (FAI), we inject features and self-attention within the first 25 of the 50-step sampling process, specifically targeting layers 4 to 10 of the U-Net decoder. In the case of Key-Value Attention Injection (KVAI), injections commence after the initial five sampling steps and are applied within layers 6 to 10 of the decoder. The Hybrid Attention Injection (HAI) method follows the same timing and targets the same layers as KVAI. These injection strategies can be customized to align with different input images $I_0$. For sampling, the classifier-free guidance scale is set to 7.5.
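The $\alpha$-dependent switch above can be written as a tiny per-step schedule. The function name and string labels are our own illustration; the real implementation hooks the K/V caches of the U-Net's self-attention layers:

```python
def hai_injection_source(step, T, alpha):
    # First (1 - alpha) * T denoising steps inject K/V cached from keyframe x0;
    # the remaining alpha * T steps inject K/V cached from keyframe x2.
    return "x0" if step < (1 - alpha) * T else "x2"

# For a frame at alpha = 0.4 over T = 50 denoising steps:
schedule = [hai_injection_source(s, T=50, alpha=0.4) for s in range(50)]
```

Frames early in the stage (small $\alpha$) thus mostly inherit structure from $x_0$, while frames late in the stage inherit from $x_2$, which is what makes the single-stage texture-plus-structure transition coherent.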
Runtime evaluations are performed on an NVIDIA RTX 4090 GPU.

4.1 Text-conditioned Image-to-Animation Benchmark

Our method enables text-guided image-to-animation transitions, leveraging either image-textual or solely textual descriptions through pre-trained text-to-image diffusion models. Due to the lack of benchmarks for such configurations, we propose a new mini dataset, the Text-conditioned Image-to-Animation Benchmark, consisting of 100 sets of textual descriptions, categorized as follows: 20 sets of animal actions and appearance transformations, 20 sets focused on animal appearance and species changes, 20 sets depicting transitions in natural landscapes and objects, 20 sets related to human figures and alterations in painting styles, 10 sets featuring character identity transformations, and 10 sets concerning changes in object colors and materials. Our model uses these textual prompts to generate 100 corresponding animation sequences. This benchmark serves as a preliminary evaluation of our model's performance, and we hope it will facilitate further research in this direction.

4.2 Qualitative Evaluation

We present a visual comparison of our method against prior approaches to underscore its superiority. Although no other tuning-free methods for text-controlled image-to-animation are currently available, we draw a detailed comparison with state-of-the-art baselines promising for text-controlled image morphing. These include: 1) diffusion-based deep interpolation methods such as DDIM [35], Diff.Interp [39], and DiffMorpher [45], all utilizing Stable Diffusion v2.1-base; 2) text-driven, tuning-free image editing methods such as PnP [38] and MasaCtrl [6].
For the first category of methods, which depend on multiple pre-existing image inputs and lack the capability to generate content directly from text, our experimental procedure includes: i) utilizing LLM control for generating consistent outputs, which involves creating initial images and key-stage prompts through Stable Diffusion prompt generation, similar to our method; ii) generating initial images using the same Stable Diffusion checkpoint as employed in our experiments; iii) producing subsequent key-stage images via DDIM inversion. For the second category, which also leverages LLM control, we maintain consistent application of the text embedding interpolation rules to generate intermediate images, aligning with our approach.

Generation Results. As illustrated in Fig. 5, our method outperforms previous approaches in alignment with user input, transition smoothness, semantic coherence, and maintenance of the animation subject's semantic identity. Previous methods often fail to respond accurately to user-input changes in appearance and motion: they typically struggle to generate the intended motions or introduce noticeable artifacts after motion changes, often resulting in a significant loss of primary-subject information in the images. In contrast, our approach consistently generates coherent animations that closely align with the semantic content of the user input, yielding visually satisfactory outputs. For additional examples of generated results, we encourage readers to consult the appendix.

Generation Diversity. Furthermore, the extensive prior knowledge and generative capabilities of the LLM enhance our model's ability to produce diverse outputs, as shown in Fig. 6. When users request multiple distinct results, our model meets this demand by generating high-quality, varied animations, significantly broadening its creative potential.
4.3 Quantitative Evaluation

Drawing on established objectives from prior research [6, 42, 45], we quantitatively evaluate the models using the following metrics:

(1) Learned Perceptual Image Patch Similarity (LPIPS, ↓) [47]: LPIPS is employed to assess the perceptual deviation within an animation sequence. We compute the total LPIPS (LPIPS$_T$) to quantify the overall perceptual variance throughout the sequence, highlighting the dynamic range of visual changes. Additionally, the maximum LPIPS to the nearest endpoint (LPIPS$_M$) is determined to identify the maximum perceptual variance, providing insight into the most significant changes within the animation. These measurements are crucial for assessing the directness of the animation, i.e., finding the most efficient transition.

(2) CLIP Score (↑) [11]: The CLIP score quantifies the alignment between images and textual descriptions, serving as a tool for evaluating the coherence and relevance of generated images with respect to their textual prompts. For an intermediate image, we compute its CLIP score as its average similarity to the prompt before editing and the prompt after editing (e.g., $x^0_\alpha$ with $P_0$ and $P_1$, $x^1_\alpha$ with $P_1$ and $P_2$, etc.).

(3) Perceptual Path Length (PPL, ↓) [17]: To evaluate smoothness, i.e., that transitions within the generated animation sequence are seamless between any two consecutive images, we compute

$$\mathrm{PPL}_\epsilon = \mathbb{E}_{\alpha \sim U(0,1)} \Big[ \frac{1}{\epsilon^2}\, \mathrm{LPIPS}\big(x(\alpha), x(\alpha+\epsilon)\big) \Big],$$

where $\epsilon$ is a small constant, set to $\frac{1}{n_f - 1}$. Note that we regard the entire sequence as a single animation process, even though it consists of multiple stages.
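With $\epsilon$ fixed to $\frac{1}{n_f - 1}$, the PPL expectation reduces to an average over consecutive frame pairs. A sketch with a stand-in distance function (LPIPS in the paper; the toy 1-D "frames" below are purely illustrative):

```python
def perceptual_path_length(frames, dist, eps):
    # PPL_eps = E_alpha[ (1/eps^2) * d(x(alpha), x(alpha + eps)) ],
    # estimated over the consecutive frame pairs of the generated sequence.
    # `dist` is a perceptual distance (LPIPS in the paper; any callable here).
    d = [dist(a, b) for a, b in zip(frames[:-1], frames[1:])]
    return sum(d) / (len(d) * eps ** 2)

# Toy example: scalar "frames" with absolute difference standing in for LPIPS
frames = [0.0, 0.1, 0.2, 0.3, 0.4]
ppl = perceptual_path_length(frames, dist=lambda a, b: abs(a - b),
                             eps=1.0 / (len(frames) - 1))
```

The $1/\epsilon^2$ scaling penalizes sequences that concentrate their perceptual change into a few abrupt steps rather than spreading it evenly.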
The quantitative evaluation results of all methods are presented in Table 1.

Generation Quality. Our method achieves a leading CLIP score, demonstrating its semantic alignment with user input. While DDIM may excel in CLIP score, it often compromises structural coherence in favour of textual alignment, an issue our approach avoids, as demonstrated in Fig. 5. Due to PnP's inability to perform non-rigid edits effectively, its generated animations often exhibit only appearance changes, thereby achieving higher smoothness; this limitation hinders its capability to handle a diverse range of animation generation tasks. Similarly, MasaCtrl, which struggles with texture transformations, also falls short in producing a diverse range of animations. Even when benchmarked against deep interpolation techniques that require fine-tuning, our results consistently exhibit superior smoothness. This performance not only underscores the effectiveness of our method but also affirms its suitability for a wide range of text-conditioned image-to-animation generation scenarios.

Figure 6: The rich prior knowledge of the LLM grants the model the ability to generate diverse outcomes from the same input text and image (textual guidance: "A photo of an anime-style girl transforms into the painting styles of different artists. 3 groups with different artists."; three groups shown).

Table 1: Comparison of current methods. Superscript ♣ indicates that the model employs an external network (e.g., ControlNet [46]) to generate intermediate images, and † indicates that the model fine-tunes by training a LoRA [13].
"TE" stands for texture editing, a process that alters the surface appearance of objects within an image to match a specific texture style while preserving the underlying structure and layout of the scene. "NRIE" refers to non-rigid image editing, which alters the shape and structure of objects in images, such as changing facial expressions or body poses. "AG", standing for animation generation, refers to producing intermediate images between keyframes.

Method | TE | NRIE | AG | CLIP Score ↑ | LPIPS_T ↓ | LPIPS_M ↓ | PPL ↓ | Runtime ↓
DDIM [35] | | | ✓ | 27.37 | 3.13 | 0.49 | 36.91 | 32s
PnP [10] | ✓ | | | 26.57 | 0.90 | 0.23 | 10.51 | 2min
MasaCtrl [6] | | ✓ | | 26.56 | 1.54 | 0.28 | 17.96 | 37s
Diff.Interp♣ [39] | | | ✓ | 20.05 | 5.14 | 0.58 | 72.29 | 2min6s
DiffMorpher† [45] | | | ✓ | 26.94 | 0.99 | 0.40 | 14.78 | 1min46s
Ours | ✓ | ✓ | ✓ | 26.99 | 1.22 | 0.25 | 14.14 | 41s

Generation Efficiency. In terms of efficiency, our method uniquely blends quality with speed, setting it apart from other deep interpolation techniques. This superior performance stems primarily from operating without fine-tuning. Although it may not lead in every image editing benchmark, our model excels at managing a diverse array of edits, enabling it to tackle a broad spectrum of generative tasks; given this versatility, the efficiency of our method is exceptionally high.

4.4 Ablation Study

We conducted an ablation study to evaluate the effectiveness of the proposed components, with experimental results shown in Table 2 and Fig. 2 (in Appendix). The findings demonstrate that using DDIM alone cannot accurately restore the structure of the input image. In contrast, our feature and self-attention injections address the loss of texture and structural information during the DDIM generation process, significantly enhancing the quality of the generated animations.
However, remnants of structural features from the initial state are still noticeable in the generated animations, including in the intermediate segments. The implementation of latent interpolation addresses this issue: while it may slightly elevate the LPIPS$_T$ and PPL metrics, it ensures that subsequent frames more accurately reflect the semantic information of the transformed state. After applying the AdaIN adjustment to the latent noise, the consistency of brightness and color across the image sequence improves.

Figure 7: The comparative effects of different injection strategies (standard FAI, standard KVAI, and HAI) given the textual description "A sitting cat turns into a jumping dog".

Table 2: Ablation study results. Injection, Latent Interp, and AdaIN represent the different components studied in the ablation.

Method | Injection | Latent Interp | AdaIN | CLIP Score ↑ | LPIPS_T ↓ | LPIPS_M ↓ | PPL ↓
DDIM [35] | | | | 27.37 | 3.13 | 0.49 | 36.91
 | ✓ | | | 26.73 | 1.09 | 0.26 | 12.85
 | ✓ | ✓ | | 26.81 | 1.22 | 0.26 | 14.03
Ours | ✓ | ✓ | ✓ | 26.99 | 1.22 | 0.25 | 14.14

To demonstrate the effectiveness of Hybrid Attention Injection (HAI) in producing single-stage animations that incorporate both texture changes and non-rigid transformations, we conducted qualitative experiments, generating animations with the basic injection strategies (FAI and KVAI) and with HAI; the results are displayed in Fig. 7. When employing only FAI, the images fail to respond to non-rigid changes; using KVAI alone does not yield significant texture modifications. Our proposed HAI strategy successfully handles both texture and non-rigid changes, effectively fulfilling the task of single-stage animation generation.

5 CONCLUSION

We introduce LASER, a tuning-free LLM-driven attention control framework that utilizes pre-trained text-to-image models to generate high-quality and smooth animations from multimodal inputs.
Experimental results validate the superior performance of our method, which consistently produces diverse and high-quality animations. We believe our approach demonstrates significant potential and serves as an inspiration for future research in this field.
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.12737v1", |
| "title": "LLM App Store Analysis: A Vision and Roadmap", |
| "abstract": "The rapid growth and popularity of large language model (LLM) app stores have\ncreated new opportunities and challenges for researchers, developers, users,\nand app store managers. As the LLM app ecosystem continues to evolve, it is\ncrucial to understand the current landscape and identify potential areas for\nfuture research and development. This paper presents a forward-looking analysis\nof LLM app stores, focusing on key aspects such as data mining, security risk\nidentification, development assistance, etc. By examining these aspects, we aim\nto provide a vision for future research directions and highlight the importance\nof collaboration among stakeholders to address the challenges and opportunities\nwithin the LLM app ecosystem. The insights and recommendations provided in this\npaper serve as a foundation for driving innovation, ensuring responsible\ndevelopment, and creating a thriving, user-centric LLM app landscape.", |
| "authors": "Yanjie Zhao, Xinyi Hou, Shenao Wang, Haoyu Wang", |
| "published": "2024-04-19", |
| "updated": "2024-04-19", |
| "primary_cat": "cs.SE", |
| "cats": [ |
| "cs.SE" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "LLM App Store Analysis: A Vision and Roadmap", |
| "main_content": "INTRODUCTION Large Language Models (LLMs), such as GPT-4 [2] and LLaMA [81], are trained on vast amounts of text data, allowing them to capture the intricacies of language and perform a wide range of natural language processing tasks. The advent of LLMs has opened up new possibilities for various applications, including chatbots, content generation, language translation, and sentiment analysis. As the capabilities of LLMs continue to expand, there has been a growing interest in making these models accessible to a broader audience. This has led to the emergence of LLM app stores, such as OpenAI\u2019s GPT Store [63], Poe [69], and FlowGPT [89], which provide a platform for developers to showcase their LLM-powered apps and for users to discover and engage with these apps. LLM app stores offer a centralized marketplace where users can browse, download, and use LLM-based apps across various domains, such as productivity, education, entertainment, and personal assistance. 1.1 Definitions To provide the foundation for our analysis and discussions, it is essential to establish clear definitions for the key concepts and terms related to LLM app and LLM app store. The following definitions explain these elements, serving as a reference point for the subsequent sections of this paper. \u2022 LLM app: A specialized application (app) powered by an LLM, distinct from conventional mobile apps that may incorporate LLM technology. These apps, typically found on platforms like OpenAI\u2019s GPT store, Poe, and FlowGPT, are specifically designed to harness the advanced capabilities of LLMs for a variety of purposes, tasks, or scenarios. \u2022 LLM app store: A centralized platform that hosts, curates, and distributes LLM apps, enabling users to discover and access tailored intelligent services. 
As illustrated in Figure 1, the LLM app ecosystem presents a collaborative environment that harnesses the power of LLMs to create tailored AI apps for a wide range of users. In this ecosystem, LLM app store managers play a key role by boosting the visibility and reach of LLM apps. They support LLM app developers by providing essential resources such as comprehensive documentation, technical support, and marketing assistance, facilitating the creation and launch of cutting-edge LLM apps. Additionally, they ensure a user-friendly experience for end-users with easy-to-use search and navigation features, helping them find the LLM apps that best meet their needs. Moreover, these managers smooth out the transaction process, enabling developers to profit from their innovations. LLM app developers (creators) are the driving force behind the ecosystem\u2019s innovation. They create and customize LLM apps tailored to unique requirements and use cases. During the development process, developers design instructions and define the desired capabilities for their apps, such as web browsing, image generation, or code interpretation. They can further enrich their apps by uploading external knowledge sources or integrating third-party services through API keys, OAuth protocols [47], or other authentication mechanisms, enhancing the apps\u2019 functionality and versatility. Once developed, LLM app developers can deploy and submit their apps to the platforms, making them accessible to other users. These distribution channels include the official LLM app stores, third-party LLM app stores, as well as social media platforms such as Twitter and Reddit, and search engines such as Google, Bing, Yandex, and Baidu, providing multiple avenues for discoverability. End-users, encompassing individuals, businesses, or organizations, form the consumer base of the LLM app ecosystem. 
They can browse and discover available LLM apps through the various distribution channels, purchase or acquire them, and provide reviews and feedback to help improve the ecosystem.

[Figure 1: LLM app ecosystem components and operating mechanisms.]

arXiv:2404.12737v1 [cs.SE] 19 Apr 2024. SE 2030, November 2024, Puerto Galin\u00e0s (Brazil).

1.2 Importance of analyzing LLM app stores The LLM app store ecosystem is experiencing rapid growth and diversification, as evidenced by several key players and milestones in the field. FlowGPT [89] stands as a prime example of the vast potential within LLM app stores, boasting over 4 million monthly active users. Moreover, it has recently secured a significant milestone by completing a $10 million Pre-A funding round [62], underscoring its growing influence and success in the sector. Additionally, OpenAI\u2019s GPT Store [63] is leading this evolution by hosting over 3 million apps [64]. In the burgeoning third-party LLM app store arena, as of April 1, 2024, the landscape is diverse and expansive: GPTs App [67] dominates with 801,185 apps, and GPTs Hunter [38] is not far behind, offering a substantial repository of 519,000 apps. Meanwhile, GPTStore.AI [31] provides a solid selection of 179,895 apps, and GPTs Works [40] contributes 103,739 apps, each platform adding unique value and perspective to the LLM app ecosystem. 
This rapid expansion parallels the earlier trajectories observed in traditional mobile app stores [36], where the proliferation of apps necessitated advanced analytical approaches to ensure quality, security, and relevance. Just as mobile app store analysis has become indispensable in optimizing user experience and app performance [102], a similar emphasis on LLM app store analysis is crucial [99]. This new domain presents unique challenges and opportunities, from ensuring the ethical deployment of LLM technologies to navigating the complex dynamics of user engagement and content moderation. Unfortunately, the academic landscape in this area remains starkly underexplored, presenting an extensive frontier teeming with opportunities for inquiry. The burgeoning LLM app ecosystem offers a fertile ground for in-depth exploration. Investigating LLM app stores is pivotal for gaining insights into the dynamics of LLM apps in real-world scenarios, encompassing user engagement, market dynamics, and technological trends. This examination can highlight best practices, pinpoint prevailing challenges, and spotlight areas ripe for enhancement. Furthermore, delving into LLM app stores can illuminate the broader societal impacts of LLM-driven applications. As they gain ubiquity, it becomes imperative to scrutinize their utility, the nature of the content they deliver, and their influence on user choices and behaviors. Analysis of elements such as user feedback, app narratives, and promotional content within these platforms could reveal underlying biases, potential misinformation issues, or privacy concerns linked to LLM apps. This exploration can, therefore, provide critical guidance for developers and policymakers in crafting more ethically aligned and user-centric LLM apps, ensuring that these tools contribute positively to societal progress. 
1.3 Overview This paper aims to provide a forward-looking analysis of LLM app stores, focusing on key aspects that shape the user experience, developer strategies, and the dynamics of the ecosystem. Through an exploration of LLM app data, security, privacy, and market dynamics, we aim to uncover trends, pinpoint challenges, and highlight opportunities that could inform future research directions. Rather than proposing a specific framework or solution, this paper serves as a visionary document, highlighting the importance of collaboration and shared responsibility among stakeholders in addressing the challenges and leveraging the opportunities presented by LLM app stores. We believe that by providing insights and recommendations based on a comprehensive analysis of the current landscape, this paper can contribute to the development of a thriving, user-centric, and responsible LLM app ecosystem. In the following sections, we present the roadmap for mining and analyzing LLM app stores, comprising three key stages as shown in Figure 2. The data collection stage (\u00a72) involves gathering and preprocessing LLM app raw data, metadata, and user feedback from LLM app stores. The security and privacy analysis stage (\u00a73) focuses on identifying potential risks and regulatory compliance issues. The ecosystem and market analysis stage (\u00a74) leverages the collected data to gain insights into developer engagement, market trends, and strategic decision-making within the LLM app ecosystem. \u00a75 discusses the implications of our analysis, challenges faced by the ecosystem, and recommendations for stakeholders. Finally, \u00a76 concludes the whole paper. 
[Figure 2: LLM app store mining and analysis roadmap. The roadmap spans three stages: data collection (LLM app raw data, LLM app metadata, and user feedback), security and privacy analysis from a data perspective (security risks and privacy protection), and ecosystem and market analysis from a human perspective (developer engagement, competitive landscape, and market trends and innovations).]

2 DATA COLLECTION AND PREPROCESSING To conduct a comprehensive analysis of LLM app stores, researchers must identify the key data types to collect and preprocess. This section outlines the essential data categories for understanding the LLM app ecosystem, including LLM app raw data, LLM app metadata, and user feedback, as illustrated in Figure 2. 
The following subsections will delve into each data category, followed by a discussion on the importance of data preprocessing in ensuring data quality and preparing the dataset for analysis. 2.1 LLM app raw data LLM app raw data encompasses various components that define the behavior and capabilities of the LLM apps. Instructions play a vital role in specifying the desired functionality and behavior of the app, outlining actions to perform and those to avoid. Knowledge files provide custom information that the LLM app can access to inform its responses, retrieving relevant sections based on user input. These files may be viewable by other users through LLM app responses or citations, enhancing transparency and trust. Authentication mechanisms, such as API keys or OAuth protocols [47], are necessary when LLM apps require integration with third-party services, ensuring secure access. Additionally, LLM apps must adhere to the privacy policies of any integrated third-party platforms to maintain user confidentiality. Conversation starters are designed to guide new users in asking better questions, providing a smooth onboarding experience. Lastly, custom temperature settings allow for controlling the creativity of the LLM app\u2019s responses, balancing variation and predictability to suit different use cases. 2.2 LLM app metadata LLM app metadata plays a crucial role in helping users navigate the LLM app store, providing essential information about each app to facilitate discovery, understanding, and comparison. The app name and creator are fundamental pieces of metadata, allowing users to identify and attribute each app to its respective developer. A detailed description of the app\u2019s purpose and features is essential for users to grasp the app\u2019s intended use case and capabilities quickly. Capabilities provide users with a clear understanding of the app\u2019s functionalities. 
They can include a wide range of features, such as web browsing, which enables the app to access and retrieve information from the internet; image generation, allowing users to create visual content through the app; and code interpretation, enabling the app to understand and execute programming languages. Other potential capabilities include speech generation, video generation, etc. Categories group LLM apps by their primary function or domain, such as productivity, entertainment, or education. Tags provide more granular information about the app\u2019s features, use cases, or compatibility. For example, tags may indicate whether an app is suitable for beginners, works offline, or integrates with specific platforms. The updated time informs users about the app\u2019s currency, ensuring they can access the latest features and content. Sample chats showcase the app\u2019s conversational abilities, response quality, and potential use cases, giving users a realistic preview of what to expect. Frequently Asked Questions (FAQs) constitute a critical component that systematically addresses prevalent user inquiries. They provide quick answers to common queries about the app\u2019s functionality, limitations, and best practices. 2.3 User feedback User feedback is a valuable source of data for assessing the performance and popularity of LLM apps. One of the key metrics is the number of conversations, which indicates the level of user engagement with the app. A high number of conversations suggests that users find the app valuable and engaging, regularly interacting with it to fulfill their needs. The retention rate measures the percentage of users who continue to use the app over a specific period. Daily active users (DAU) provide a snapshot of the app\u2019s active user base, representing the number of unique users who engage with the app daily. 
Tracking DAU over time offers insights into the app\u2019s ongoing appeal and growth trajectory. Ratings and the number of ratings offer a quantitative measure of user satisfaction, allowing users to express their opinions on a standardized scale. A high average rating and a large number of ratings signify that users generally have a positive experience with the app and are willing to share their feedback. Rankings provide a comparative measure of an app\u2019s performance against other similar apps within the store. User reviews offer qualitative feedback, allowing users to share detailed opinions, experiences, and suggestions. Positive reviews highlight an app\u2019s strengths and the value it provides to users, while negative reviews can reveal weaknesses, bugs, or areas for improvement. Analyzing user reviews can help developers prioritize updates, fix issues, and enhance features based on user preferences. Information about reviewers, such as their user profile or history with the app, can provide additional context and credibility to their feedback. Social media mentions capture an LLM app\u2019s broader impact and popularity beyond the confines of the LLM app store. Users may share their experiences, recommend the app to others, or engage in discussions related to the app on various social media platforms. 2.4 Data preprocessing Once the data is collected, it must undergo a rigorous preprocessing phase to ensure its quality, security, and compliance with privacy regulations. Preprocessing steps should be applied to ensure data quality and consistency [4, 24, 59]. This involves removing duplicate entries, handling missing values, and normalizing text data. Text preprocessing techniques such as tokenization, lowercasing, and removing stop words and punctuation should be employed. Data cleaning steps, such as removing irrelevant or spam reviews and filtering out apps with insufficient information or user engagement, are also necessary. 
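The cleaning steps just described (dropping empty or duplicate entries, lowercasing, tokenization, and stop-word removal) can be sketched in Python; the helper names and the tiny stop-word list below are illustrative, not part of any store's actual pipeline:

```python
import re
from collections import OrderedDict

# Illustrative subset; real pipelines use a full stop-word list (e.g., from NLTK or spaCy).
STOP_WORDS = {"the", "a", "an", "and", "or", "is", "it", "to", "of"}

def preprocess_review(text: str) -> list[str]:
    """Lowercase, strip punctuation, tokenize, and drop stop words."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def deduplicate_reviews(reviews: list[str]) -> list[str]:
    """Drop empty/missing entries and exact duplicates while preserving order."""
    seen = OrderedDict()
    for r in reviews:
        if r and r.strip():
            seen.setdefault(r.strip(), None)
    return list(seen)

reviews = ["Great app!", "Great app!", "", "The app is slow and it crashes."]
clean = [preprocess_review(r) for r in deduplicate_reviews(reviews)]
# clean == [['great', 'app'], ['app', 'slow', 'crashes']]
```

Real preprocessing would add normalization of unicode and whitespace, language detection, and spam filtering on top of this skeleton.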
This preprocessing phase is crucial for obtaining high-quality, reliable data for analysis. Furthermore, this phase may involve filtering out sensitive or personal information, removing malicious content, and ensuring adherence to established policies and guidelines. Data normalization and formatting procedures are also applied to facilitate efficient storage, retrieval, and analysis of the collected information. By following this comprehensive data collection and preprocessing approach, researchers can gain a holistic understanding of the LLM app ecosystem, enabling them to conduct in-depth analyses, identify trends and patterns, and ultimately contribute to the advancement and growth of this rapidly evolving field. 3 SECURITY AND PRIVACY ANALYSIS As shown in Figure 2, in the evolving landscape of LLM app stores, security and privacy emerge as paramount concerns, necessitating a comprehensive and multifaceted analysis to ensure the integrity and trustworthiness of the ecosystem. 3.1 Security risks LLM app raw data-related risks. App cloning, where an unauthorized party copies a legitimate app, infringes on intellectual property rights and can introduce security threats or subpar user experiences. In the mobile app ecosystem, app cloning has been a persistent issue, and app stores have employed techniques like code signing [17, 20] and similarity analysis [3, 27] to detect and prevent cloned apps. For LLM app stores, researchers should explore adapting these existing methods, such as code fingerprinting and advanced similarity analysis techniques tailored to LLM apps, to combat app cloning effectively. App vulnerabilities refer to security weaknesses within LLM apps that attackers can exploit, potentially leading to data breaches or unauthorized activities. For example, these vulnerabilities could arise from inadequate input validation, allowing attackers to perform injection attacks with crafted inputs to elicit unintended responses. 
This can manipulate LLMs into generating sensitive information, violating content policies, or executing unauthorized actions, compromising app integrity and user data. Additionally, insufficient input checks may render apps vulnerable to jailbreaking, enabling LLMs to output content or perform tasks against terms of service or regulations, raising legal and ethical concerns. These security flaws are also frequently the result of substandard development practices, such as insecure data storage, where sensitive information is poorly protected, making it accessible to unauthorized parties. Weak encryption methods or a lack of robust database security can further exacerbate these issues. Moreover, inadequate authentication mechanisms, including predictable passwords or the absence of multi-factor authentication, can simplify unauthorized access to app functionalities and data. App vulnerabilities are not uncommon in the mobile app ecosystem [60, 91], with a wealth of established detection techniques available [1, 16, 76]. Accordingly, one of the future research directions should be developing tailored solutions to identify and mitigate vulnerabilities specific to LLM apps. This may involve techniques for securing input validation, preventing jailbreaking, enforcing robust authentication, and ensuring secure data storage and transmission within the LLM app ecosystem. Malicious apps pose a substantial risk in LLM app stores. The malice can manifest in several ways. For example, developers may create LLM apps using instructions or knowledge files containing malicious content, resulting in the app\u2019s knowledge base being tainted with harmful information. Moreover, LLM apps themselves may output content lacking proper constraints, including pornographic or gambling-related information, or even links directing users to malicious websites. 
Another example is low description-to-behavior fidelity, i.e., when the actual performance or actions of an app diverge significantly from its documented descriptions or expected behaviors. These phenomena are also prevalent in the mobile app ecosystem. Various tools and techniques have been developed to detect and mitigate malicious mobile apps, such as static and dynamic analysis [55, 68, 74], machine learning-based malware detection [48, 79, 92], and app vetting processes [11, 98]. The unique challenges posed by malicious LLM apps necessitate the development of tailored detection and mitigation strategies. Researchers should focus on developing novel techniques specifically designed to identify and address the distinct threats posed by malicious LLM apps, ensuring a safe and trustworthy ecosystem. Third-party service integration is another area of concern, as integrating external services or APIs into an app can introduce vulnerabilities or data privacy issues. For example, if the third-party service provider experiences a data breach or has weak security measures, it could compromise the security and privacy of the LLM app and its users. In the mobile app domain, various mature methods have been proposed to address similar issues, including extensive research on third-party library analysis [22, 37, 44, 85]. To effectively mitigate the risks associated with third-party service integration in LLM apps, developers should adhere to the principle of least privilege, granting only the minimum necessary permissions and access required for the service to function within the app. Robust authentication and authorization mechanisms should be implemented to ensure that only authorized users and processes can interact with the integrated services. Furthermore, encrypting sensitive data both in transit and at rest is crucial when exchanging information with third-party services to protect the confidentiality and integrity of the data. 
Regular monitoring and auditing of third-party services should also be conducted to detect any suspicious activities or changes in their security posture. User tracking and profiling without proper consent is another risk, where excessive tracking of user data, behavior, or activities occurs, often for targeted advertising or analyzing user preferences. This can manifest in various harmful ways, such as identity theft, personalized phishing attacks, or unwanted exposure to tailored yet intrusive advertising [49, 93, 96]. Moreover, the accumulation and analysis of such data could result in biased or discriminatory outcomes, where decisions made by these LLM apps might favor or disfavor individuals based on their profiled characteristics. This not only undermines user trust but also raises ethical concerns about the fairness and transparency of LLM-powered apps. To mitigate risks associated with user tracking and profiling, LLM app stores should enforce strict privacy policies, obtain explicit user consent, and employ privacy-preserving techniques like differential privacy and data anonymization. Strategies such as regular audits should be adopted to ensure fairness, accountability, and transparency in LLM app decision-making. Similar to mobile app protection techniques that often involve obfuscation [19, 90], encryption [78, 84], and packing [15], LLM apps may employ comparable app protection techniques to safeguard their models and data. For example, a potential risk arises from the reliance on third-party frameworks for app protection. To safeguard against model stealing [65, 82] and unauthorized model reuse [42], developers might obfuscate their LLM apps to protect the model itself. However, this obfuscation process could introduce new security risks. It might inadvertently obscure crucial monitoring and debugging features, making it harder to identify and respond to genuine security threats. 
Additionally, the complexity added by obfuscation could lead to performance degradation, not only affecting the user experience but also potentially introducing vulnerabilities that attackers could exploit. Advertisement fraud can occur during the user\u2019s interaction with the LLM app, involving deceptive or misleading ad practices, such as hidden payments, unauthorized data collection, or intrusive ad experiences. Mobile app stores have employed ad network monitoring [34], real-time ad analysis [75], and user feedback analysis [32] to combat advertisement fraud. For LLM app stores, researchers should explore adapting these techniques and developing new methods tailored to the unique challenges of LLM apps, ensuring a transparent and trustworthy advertising ecosystem. Market policy violations, where LLM apps breach the LLM app store\u2019s terms of service, content policies, or other regulations governing app publication and monetization, can undermine the LLM app store\u2019s integrity and user trust. Mobile app stores have implemented automated policy compliance checks [53, 103] and app vetting processes [98] to enforce market regulations. In the context of LLM app stores, researchers should focus on developing automated policy compliance checks tailored to the unique characteristics and challenges of LLM apps, ensuring a secure and trustworthy LLM app ecosystem. LLM app metadata-related risks. Fake apps, designed to impersonate legitimate LLM apps and deceive users or steal sensitive information, pose a significant risk to users. Mobile app stores have implemented app vetting processes [98] and leveraged techniques like app analysis [71] and user feedback [54] monitoring to identify fake apps. In the context of LLM app stores, researchers should investigate developing advanced natural language processing and multimedia analysis methods to aid in the detection of fake LLM apps, ensuring user safety and trust. User feedback-related risks. 
In the context of LLM app stores, security risks should be carefully considered and addressed. One significant risk is ranking fraud, where attackers attempt to manipulate the LLM app store rankings through illegal methods, such as using bot programs to generate fake ratings, downloads, or reviews, or engaging in keyword stuffing. Similar to the mobile app market, where researchers have proposed systems to detect ranking fraud by analyzing leading sessions, rating patterns, and review behaviors [100, 101], app stores may need to employ advanced techniques to identify and mitigate fraudulent activities aimed at artificially inflating app rankings and popularity. \fSE 2030, November 2024, Puerto Galin\u00e0s (Brazil) Y Zhao, X Hou, S Wang, and H Wang Another concern is malicious ASO (i.e., App Store Optimization), where attackers exploit irregular methods to falsify user feedback, such as user engagement metrics or app ratings, to artificially boost an app\u2019s search result rankings and discoverability, ultimately gaining higher exposure and usage when users search for related keywords. This issue is analogous to the collusive promotion groups in the mobile app ecosystem, where developers pay service providers to organize groups of attackers to post fraudulent reviews, inflate download numbers, or manipulate app ratings in an attempt to boost their app\u2019s ranking and visibility [14, 66, 88], which can ultimately undermine the integrity of the app store\u2019s ecosystem if not addressed. Spam reviews in LLM app stores can also contain malicious content or involve large-scale fake reviews manipulated by bots or manual efforts, intending to inflate the app\u2019s reputation artificially. This issue is well-documented in the mobile app industry, where spam reviews and review fraud have been a persistent challenge. Various detection methods have been established in the mobile app domain [26, 73]. 
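As a minimal illustration of the kind of heuristic such detection methods build on, the sketch below flags review texts that recur near-verbatim, a crude signal of bot-generated or colluding spam campaigns; real systems combine many more signals (timing bursts, reviewer graphs, rating patterns). The record layout and threshold are hypothetical:

```python
from collections import Counter

def flag_suspicious_reviews(reviews: list[dict], burst_threshold: int = 3) -> set[str]:
    """Flag review texts that appear near-verbatim many times.

    `reviews` items look like {"user": ..., "text": ...} -- an assumed,
    illustrative schema, not any store's real data model.
    """
    normalized = Counter(r["text"].strip().lower() for r in reviews)
    return {text for text, n in normalized.items() if n >= burst_threshold}

reviews = (
    [{"user": f"bot{i}", "text": "Best GPT ever!!!"} for i in range(4)]
    + [{"user": "alice", "text": "Useful for drafting emails."}]
)
print(flag_suspicious_reviews(reviews))  # {'best gpt ever!!!'}
```

Exact-duplicate counting is the weakest possible baseline; published detectors extend it with fuzzy text similarity and behavioral features of the posting accounts.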
Similarly, LLM app stores must adopt and refine such techniques to preserve the authenticity and reliability of their review mechanisms. 3.2 Privacy protection Protecting privacy is a critical aspect of LLM app stores. For developers, it is essential to filter out any privacy data that may be included in the instructions or knowledge base files provided to the LLM app. This includes not only personally identifiable information (PII) [72] such as addresses, contact details, and other sensitive data that could compromise individual user privacy, but also extends to sensitive information related to businesses, governmental bodies, and other entities. Protecting this wider range of data ensures the privacy and security of related stakeholders, safeguarding against potential misuse, data breaches, or other forms of exploitation that could have far-reaching consequences. This is similar to the principles and practices adopted in the mobile app industry, where developers are required to implement appropriate data protection measures to safeguard user privacy and comply with relevant regulations, such as the General Data Protection Regulation (GDPR) [87]. Furthermore, developers must also comply with the LLM app store\u2019s privacy policies and relevant legal regulations when collecting user information for personalized fine-tuning or optimization of the app. This involves clearly informing users about the purpose, scope, and manner of data collection and usage, and obtaining user consent. The LLM app store should review the app to ensure compliance with these requirements. Again, this aligns with the standard practices in mobile app stores, where apps are vetted for their data collection and privacy practices, and users are provided with clear information about how their data is being used [5, 77]. From the user\u2019s perspective, privacy data filtering is crucial when interacting with LLM apps. 
Users\u2019 input may contain private information, and the app should have filtering mechanisms in place to identify and remove this sensitive data, preventing it from being leaked to developers or stored in the knowledge base. This is analogous to the privacy protection measures implemented in mobile apps, where user inputs and data are often processed locally on the device or through secure channels to protect user privacy [52, 61]. Additionally, users can expect LLM app stores to provide transparent information about the privacy practices of listed apps, similar to how mobile app stores provide privacy labels and summaries to help users make informed decisions [9]. 4 ECOSYSTEM AND MARKET ANALYSIS In the dynamic ecosystem of LLM app stores, the interplay between developer engagement, competitive landscape, and market trends drives innovation and growth. As displayed in Figure 2, developer support mechanisms, strategies for navigating competitive pressures, and responsiveness to evolving market dynamics are crucial for cultivating a vibrant, sustainable marketplace that caters to diverse user needs and preferences while fostering technological advancements. 4.1 Developer engagement Enhancing support for LLM app developers is essential to fostering a thriving ecosystem. Implementing effective requirements engineering processes and tools can help developers gain clarity on app specifications and functionalities. Although mobile app development benefits from established practices like user story mapping [18] and wireframing [13], the LLM app ecosystem should develop specialized tools that cater to the unique needs of conversational AI, such as dialogue flow designers, intent mappers, and entity recognizers. Providing comprehensive development assistance, including documentation, examples, and best practices, can lower entry barriers and guide developers in creating high-quality LLM apps. 
Drawing inspiration from the mobile app domain’s extensive resources, such as Apple’s Human Interface Guidelines [7] and Google’s Material Design [29], the LLM app ecosystem should create similar guides tailored to conversational AI, covering topics like prompt engineering [10], context management [25], and multi-turn dialogue handling [94]. Offering robust analysis and testing tools or frameworks can assist developers in evaluating app performance, identifying vulnerabilities, and optimizing the user experience, ensuring high-quality output. Mobile app development has benefited from tools like Appium [86] and Espresso [28], which have revolutionized automated testing, enabling developers to catch bugs early and ensure app stability. Similarly, the LLM app ecosystem needs to invest in developing testing frameworks that can simulate user interactions and detect potential biases or inconsistencies in the generated output. Standardized third-party service interfaces can simplify the integration process for developers. The LLM app store can provide a list of certified service providers or establish partnerships with leading companies in LLMs and knowledge bases, similar to how mobile app stores have streamlined integration with payment gateways [41, 97] and analytics [35, 58] providers. Cross-platform migration tools and support can help developers deploy LLM apps across multiple platforms. In the mobile app development domain, frameworks like React Native [21] and Flutter [45, 95] have greatly simplified the process of building cross-platform apps. The LLM app ecosystem could explore similar solutions that allow developers to write once and deploy across various conversational AI platforms. LLM App Store Analysis: A Vision and Roadmap SE 2030, November 2024, Puerto Galinàs (Brazil) Implementing a comprehensive bonus system that rewards developers based on app quality and user feedback can incentivize continuous optimization.
Similar to Apple’s App Store Small Business Program [6] offering financial incentives and recognition for high-performing developers, the LLM app ecosystem should consider initiatives that encourage innovation and user satisfaction. 4.2 Competitive landscape Leveraging user preferences, history, and ratings, LLM app stores can develop sophisticated recommendation algorithms to suggest potentially interesting LLM apps to users. This not only improves app discoverability but also increases user satisfaction and engagement. Mobile app stores have successfully implemented such recommendation systems, with examples like Apple’s App Store featuring “Apps You Might Like” and Google Play’s “Recommended for You” sections [39, 83]. However, in the context of LLM app stores, academic research on tailoring recommendation algorithms to this novel domain remains largely unexplored. To help developers effectively promote their LLM apps, LLM app stores can offer a range of promotion tools and channels. Mobile app stores usually use ads, featured spots, and events to promote apps, leveraging influencer partnerships for added trust and visibility [46, 70]. Similarly, LLM app stores could also include advertising placement options, such as sponsored search results or featured app listings, allowing developers to increase their app’s visibility to potential users. Additionally, LLM app stores can provide promotional opportunities through curated collections, themed showcases, or developer spotlights, highlighting noteworthy LLM apps and their creators. Curating and featuring top LLM apps is a strategic move by LLM app stores to influence the app market landscape [31, 38, 63]. Through spotlighting apps that excel in innovation and quality, LLM app stores could establish benchmarks and inspire developers to aim high. This not only guides users to superior apps but also rewards developers for outstanding user experiences.
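A preference-based recommender of the kind described above can be sketched with user-based collaborative filtering; the rating matrix, app names, and weighting are toy assumptions (a real store would combine ratings with usage history and content signals):

```python
import math

# Toy user -> {app: rating} data; app names are purely illustrative.
ratings = {
    "alice": {"WriterBot": 5, "CodePal": 4, "TravelGuru": 1},
    "bob":   {"WriterBot": 4, "CodePal": 5},
    "carol": {"TravelGuru": 5, "WriterBot": 1},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[a] * v[a] for a in shared)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user, k=1):
    """Score unseen apps by similarity-weighted ratings of other users."""
    seen = set(ratings[user])
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = cosine(ratings[user], ratings[other])
        for app, r in ratings[other].items():
            if app not in seen:
                scores[app] = scores.get(app, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Here `recommend("bob")` surfaces TravelGuru because the user most similar to bob rated apps bob has not tried.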
To improve LLM app discoverability, LLM app stores are encouraged to deploy a coherent app classification system. Sorting LLM apps by their functionality, usage scenarios, industries, or target audiences simplifies the search process for users. This not only elevates the user experience but also supports developers in strategically showcasing their apps. For instance, renowned stores like Google Play [23, 30], Apple App Store [8, 43], and Blackberry World App Store [36] typically employ a consolidated category structure. 4.3 Market trends and innovations Establishing a trend analysis mechanism is crucial for LLM app stores to uncover and predict LLM app market trends by mining user behavior data, download volumes, and reviews. This helps stores and developers formulate future strategies, such as identifying increasingly popular features or scenarios. LLM app stores can draw inspiration from the well-established practices in traditional mobile app stores, which have successfully employed trend analysis techniques to identify emerging app trends and user preferences [23, 51, 57]. For instance, analyzing in-app user behavior patterns, feature usage, and navigation paths can provide valuable insights into user preferences and emerging trends for LLM apps. Analyzing and comparing consumer preferences for LLM apps across different regional markets and cultural backgrounds can reveal market differences, enabling LLM app stores to adjust product strategies and operational approaches accordingly. Cross-cultural research on mobile app adoption has highlighted the importance of tailoring app interfaces, content, and functionality to cater to diverse cultural norms and expectations, which can significantly impact user engagement and retention [50, 80]. LLM app stores can leverage these learnings and adapt their offerings, marketing strategies, and localization efforts to better resonate with users from various cultural backgrounds.
User review analysis is a vital channel for LLM app stores to understand genuine user feedback and identify areas for improvement. By applying natural language processing and sentiment analysis to a vast number of user reviews, stores can gain insights into user pain points, app deficiencies, bugs, user expectations, suggestions, and overall acceptance and trust levels for LLM apps. Just as in the mobile app domain, where user reviews have been extensively leveraged to improve app quality and user experience [33], LLM app stores can benefit from similar techniques to mine valuable feedback from user reviews. As artificial intelligence applications, LLM apps must be developed to adhere to human ethical values and maintain a high degree of alignment with humanistic ideals, which is essential for gaining public trust and recognition. App stores should establish review standards that prohibit the listing of LLM apps containing content that violates social morality or harms public interests. The design of LLM apps should embody human-centric values, such as respect for privacy, explainability, and controllability. The functions, algorithms, knowledge bases, and other aspects of LLM apps must align with human interests and avoid producing harmful effects. Similar to the guidelines and best practices established in the mobile app industry for protecting user privacy, ensuring data security, and promoting ethical app development [12, 56], LLM app stores can adopt and adapt these principles to address the unique challenges and risks associated with AI-powered apps. 5 DISCUSSION The analysis of LLM app stores following the proposed roadmap has several implications for the development and regulation of these platforms. In addition, this section also discusses the key challenges and provides recommendations for LLM app store stakeholders. 5.1 Implications The burgeoning LLM app store ecosystem presents a unique blend of opportunities and challenges.
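The review sentiment mining described above can start as simply as lexicon-based scoring; the tiny word lists here are illustrative assumptions, not a production lexicon (a store would use a trained sentiment model):

```python
# Tiny illustrative sentiment lexicon; real pipelines would use a
# trained classifier rather than hand-picked word lists.
POS = {"great", "helpful", "accurate", "love"}
NEG = {"crash", "slow", "wrong", "hallucinate", "hallucinates"}

def review_sentiment(review: str) -> float:
    """Return a crude sentiment score in [-1, 1]: positive-word hits
    minus negative-word hits, normalized by review length."""
    words = [w.strip(".,!?").lower() for w in review.split()]
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return score / max(len(words), 1)
```

Aggregating such scores per app over time is one way a store could flag apps whose perceived quality is degrading.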
For developers, the potential to create innovative, AI-driven apps is immense. Yet, this potential comes with the responsibility to ensure that apps are secure, privacy-compliant, and ethically aligned. The detailed analysis of LLM app raw data, metadata, and user feedback is crucial for developers to understand user needs and preferences, enabling them to design more engaging and useful apps. For regulators and LLM app store managers, the rapid evolution of LLM app stores necessitates a proactive approach to governance. Ensuring a safe, trustworthy, and inclusive platform requires continuous monitoring for security threats such as malicious apps, spam reviews, and ranking fraud. Furthermore, privacy protection remains paramount, demanding stringent measures to safeguard user data from unauthorized tracking, profiling, and third-party service vulnerabilities. 5.2 Challenges The analysis of LLM app stores following the proposed roadmap reveals several challenges that need to be addressed to ensure the sustainable growth and responsible development of this ecosystem. Data privacy and security. The integration of third-party services and the collection of user data by LLM apps raise significant privacy and security concerns. Ensuring compliance with data protection regulations, such as GDPR and CCPA, and implementing robust security measures to prevent data breaches and unauthorized access to user information are critical challenges that require attention from both developers and platform providers. Intellectual property protection. The prevalence of app cloning and the potential for intellectual property infringement within LLM app stores pose a significant challenge to developers and platform owners.
Detecting and preventing the unauthorized copying or reuse of app code, designs, and features is crucial to maintaining a fair and competitive environment that rewards innovation and original work. Ensuring app quality and reliability. With the rapid growth of LLM app stores, maintaining high standards of app quality and reliability becomes increasingly challenging. Implementing effective app review processes, establishing clear guidelines for developers, and continuously monitoring app performance and user feedback are essential to provide users with a consistent and trustworthy experience. Addressing algorithmic biases and fairness. LLM apps rely on complex algorithms and models that may inadvertently perpetuate biases or discriminate against certain user groups. Identifying and mitigating these biases, ensuring fairness in app recommendations and search results, and promoting diversity and inclusivity within the app ecosystem are significant challenges that require ongoing research and collaboration between developers, researchers, and LLM app store managers. Balancing innovation and responsibility. The rapid advancements in LLM technologies and the increasing capabilities of LLM apps present both significant opportunities for innovation and formidable challenges in terms of responsible development and deployment. Striking the right balance between pushing the boundaries of what is possible and considering the ethical, social, and long-term implications of LLM apps is a critical challenge that requires input from multiple stakeholders, including developers, researchers, policymakers, and users. User education and awareness. As LLM apps become more prevalent and influential in various domains, educating users about their capabilities, limitations, and potential risks becomes increasingly important. 
Providing clear and accessible information about how LLM apps work, what data they collect, and how users can control their interactions with these apps is a significant challenge that requires collaboration between developers, platform providers, and educational institutions. Regulatory and policy challenges. The rapid growth and evolving nature of the LLM app ecosystem present challenges for regulatory bodies and policymakers. Developing appropriate legal frameworks, guidelines, and standards that promote innovation while protecting user rights and ensuring accountability is a complex task that requires ongoing dialogue and collaboration between industry stakeholders and policymakers. 5.3 Recommendations for LLM app store stakeholders For LLM app store managers, it is crucial to implement robust vetting processes for app submissions, incorporating automated and manual review mechanisms to ensure compliance with store policies and ethical standards. Establishing transparent guidelines and providing resources for developers on best practices, security, and privacy can enhance the overall quality of the ecosystem. Furthermore, operators should invest in advanced algorithms for fraud detection and user review analysis to proactively address security risks and improve user trust. Developers are encouraged to prioritize security and privacy in the design and development of their LLM apps. This includes adhering to best practices for data handling, implementing robust authentication mechanisms, and ensuring transparency in app functionalities and data usage policies. Engaging with the user community to gather feedback and continuously iterating on app features based on user insights can also drive improvement and innovation. Researchers and policymakers play a pivotal role in shaping the future of LLM app stores. Conducting in-depth studies on user behavior, market trends, and security challenges can provide valuable insights for all stakeholders. 
Moreover, developing frameworks and guidelines for ethical AI use, privacy protection, and security in LLM apps can guide developers and store operators in creating a responsible and user-centric ecosystem. For users, staying informed about the apps they use is key. This includes reviewing app permissions, understanding data usage policies, and providing constructive feedback to developers. By actively participating in the ecosystem, users can contribute to the improvement of LLM apps and help foster a culture of transparency and accountability. The rapidly evolving nature of LLM app stores offers vast potential for innovation and growth. However, realizing this potential requires concerted efforts from all stakeholders to address the challenges of security, privacy, and ethical considerations, ensuring a thriving and sustainable ecosystem for the future. 6 CONCLUSION This paper provides a forward-looking analysis of LLM app stores, focusing on key aspects such as app data collection, security and privacy analysis, and ecosystem and market analysis. Through this exploration, we underscore the importance of user-centric design, data privacy, intellectual property protection, and collaboration among stakeholders in shaping the future of the LLM app ecosystem. As the LLM app landscape continues to evolve, ongoing research and collaboration among researchers, developers, LLM app store managers, and policymakers are crucial to address challenges, leverage opportunities, and drive responsible innovation." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.15660v1", |
| "title": "KS-LLM: Knowledge Selection of Large Language Models with Evidence Document for Question Answering", |
| "abstract": "Large language models (LLMs) suffer from the hallucination problem and face\nsignificant challenges when applied to knowledge-intensive tasks. A promising\napproach is to leverage evidence documents as extra supporting knowledge, which\ncan be obtained through retrieval or generation. However, existing methods\ndirectly leverage the entire contents of the evidence document, which may\nintroduce noise information and impair the performance of large language\nmodels. To tackle this problem, we propose a novel Knowledge Selection of Large\nLanguage Models (KS-LLM) method, aiming to identify valuable information from\nevidence documents. The KS-LLM approach utilizes triples to effectively select\nknowledge snippets from evidence documents that are beneficial to answering\nquestions. Specifically, we first generate triples based on the input question,\nthen select the evidence sentences most similar to triples from the evidence\ndocument, and finally combine the evidence sentences and triples to assist\nlarge language models in generating answers. Experimental comparisons on\nseveral question answering datasets, such as TriviaQA, WebQ, and NQ,\ndemonstrate that the proposed method surpasses the baselines and achieves the\nbest results.", |
| "authors": "Xinxin Zheng, Feihu Che, Jinyang Wu, Shuai Zhang, Shuai Nie, Kang Liu, Jianhua Tao", |
| "published": "2024-04-24", |
| "updated": "2024-04-24", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "KS-LLM: Knowledge Selection of Large Language Models with Evidence Document for Question Answering", |
| "main_content": "Introduction Large language models (LLMs) have made significant progress in the field of natural language processing, achieving remarkable results in tasks such as text generation [Lin et al., 2023], machine translation [Moslem et al., 2023], and dialogue systems [Sun et al., 2023b]. However, despite notable successes in certain areas, LLMs suffer from severe hallucination problems, which may generate contents that deviate from the facts or contain fabricated information [Rawte et al., 2023]. [Figure 1: The large language model generates the incorrect answer with the given evidence document, while obtaining the correct answer with evidence sentences selected from the evidence document. Example: for the question 'What star sign is Jamie Lee Curtis?', the full evidence document yields the wrong answer Sagittarius, while the selected evidence sentence 'She was born on November 22, 1958.' yields the correct answer Scorpio.] It has always been a challenge for large language models to handle knowledge-intensive tasks [Petroni et al., 2021], such as question answering [Wu et al., 2022; Andrus et al., 2022] and fact checking [Atanasova et al., 2020], since they may potentially provide incorrect or misleading information, leading to task failures or inaccurate results. In the question answering task, introducing supporting knowledge related to the input question can effectively alleviate the hallucination problem of large language models [Sun et al., 2023a]. Previous methods use evidence documents as the external supporting knowledge, providing extra information and validation for the model when generating answers. 
There are currently two main approaches for acquiring evidence documents: retrieval-based methods [Izacard and Grave, 2021; Abdallah and Jatowt, 2023] and generation-based methods [Yu et al., 2023; Sun et al., 2023c]. Retrieval-based methods involve retrieving evidence documents relevant to the input question from a large-scale corpus, such as Wikipedia. In contrast, generation-based methods leverage the internal knowledge of large language models to generate evidence documents or background knowledge related to the input question. Existing results demonstrate that generation-based methods significantly improve the accuracy of answering questions, even without incorporating new external information [Yu et al., 2023]. Although the above methods provide extra knowledge for LLMs to better understand questions, they still suffer from two drawbacks. First, previous methods directly integrate all the contents in the evidence document into LLMs, which may lead to information overload and decrease the accuracy and efficiency of answering questions. Considering an evidence document that involves a large amount of contents, if LLMs need to process and understand the entire contents, they may struggle to accurately extract and utilize the knowledge relevant to the question. As shown in Figure 1, using the complete evidence document fails to facilitate large language models to answer the question correctly, while providing precise evidence sentences can lead to an accurate answer. Second, existing methods only use a single form of data source for knowledge augmentation [Gao et al., 2023], ignoring the interaction and complementary relationship between different forms of knowledge. For example, structured knowledge can provide relations between entities, while textual knowledge can offer more detailed descriptions and contextual information. 
Interestingly, when performing the question answering task, humans leverage their comprehensive capabilities to select key knowledge associated with the question from the evidence document to produce accurate answers. Inspired by this, we propose a novel Knowledge Selection of Large Language Models (KS-LLM) method, which aims to enhance the performance of large language models in the QA task by extracting relevant and useful knowledge from evidence documents. Specifically, we first construct triples based on the input questions, and then select evidence sentences from the evidence document that are most relevant to the triples. Finally, we incorporate the selected evidence sentences with constructed triples as supporting knowledge for LLMs to generate the final answer. We conduct comprehensive experiments on three widely used datasets, i.e., TriviaQA-verified, WebQ, and NQ, using three representative large language models, i.e., Vicuna-13B, Llama 2-13B, and Llama 2-7B. Experimental results demonstrate that KS-LLM can significantly improve the performance of large language models on the question answering task, indicating that our method is capable of effectively selecting relevant knowledge from evidence documents for generating accurate answers. In summary, our main contributions are as follows: • We propose a novel method that can select knowledge snippets that are highly relevant to the input question from the evidence document, improving the accuracy and reliability of large language models in answering questions and alleviating the hallucination problem. • Our proposed method combines multiple forms of knowledge, including textual evidence sentences and structured triples, taking full advantage of the interaction and complementary relationship between different forms of knowledge. • We demonstrate the effectiveness of the proposed KS-LLM method in the QA task. 
Extensive experimental results show that our method surpasses different baselines and achieves the best performance on three datasets. 2 Related Work 2.1 Question Answering with Evidence Documents Evidence documents typically refer to documents containing information relevant to the query question, which are used to facilitate accurate answers or support the reasoning process. Question answering methods with evidence documents are mainly divided into two categories: retrieval-based methods and generation-based methods. Retrieval-based methods retrieve documents that may contain the answer strings from a large-scale corpus, and then use the retrieved documents to generate correct answers. Early research utilizes sparse retrieval methods, such as BM25 [Chen et al., 2017], or neural ranking models [Guo et al., 2016; Qaiser and Ali, 2018] to retrieve documents. Representative works of early research include DrQA [Chen et al., 2017] and BiDAF [Seo et al., 2016]. Subsequently, dense retrieval models like ORQA [Lee et al., 2019] and DPR [Karpukhin et al., 2020] are proposed, which encode contextual information to obtain dense representations of documents. Recent works [Qu et al., 2021; Raffel et al., 2020] enhance the performance of retrievers to obtain more effective evidence documents, further improving the accuracy of models in answering questions. Rather than relying on external knowledge, generation-based methods extract knowledge from the parameters of large language models to generate evidence documents. Recent research shows that large-scale pre-trained models can form an implicit knowledge base after pre-training [Radford et al.; Yang et al., 2019], which contains a vast amount of knowledge. GenRead [Yu et al., 2023] is the first work to propose using documents generated by large language models instead of retrieved documents. 
GenRead [Yu et al., 2023] and RECITE [Sun et al., 2023c] generate contextual documents with the implicit knowledge of large language models, and then read the documents to predict final answers. Although evidence documents can provide additional knowledge to help answer questions, the above method utilizes all the information in the evidence documents as supporting knowledge, which may introduce noise irrelevant to the query question. Our proposed method effectively extracts the most relevant sentences from the evidence documents to assist the large language model, improving the accuracy and efficiency of answering questions. [Figure 2: The proposed KS-LLM framework consists of three components: (1) triple construction, (2) evidence sentence selection, and (3) answer generation. The triple construction and answer generation steps are implemented by large language models, while the evidence sentence selection step is implemented by the vector database. The dashed line indicates the input of each step and the solid line indicates the output of each step. Given a question and its corresponding evidence document as input, our method can effectively extract valuable knowledge from the evidence document to acquire the correct answer.] 2.2 Question Answering with Knowledge Graphs Knowledge graphs (KGs) store factual knowledge in the real world, and have advantages in dynamic, explicit, and structured knowledge representation [Pan et al., 2023]. Question answering methods with knowledge graphs utilize structured knowledge graphs as auxiliary information to improve the performance of question answering systems, usually involving knowledge bases such as Wikidata [Vrandečić and Krötzsch, 2014] and Freebase [Bollacker et al., 2008]. 
Early studies [Zhang et al., 2019; Peters et al., 2019; Wang et al., 2021] require models to learn structured knowledge in knowledge graphs during the training or fine-tuning process, which consumes a large amount of computing resources. Recent methods leverage knowledge by incorporating knowledge graphs into the prompts of large language models and express knowledge graphs in the form of triples. ToG [Sun et al., 2023a] explores entities and relations through external knowledge bases, dynamically discovering multiple reasoning paths on the knowledge graph to enhance the multi-hop reasoning capabilities of large language models. KGR [Guan et al., 2023] uses factual knowledge stored in the knowledge graph to correct errors that may occur during the reasoning process, which can automatically alleviate the hallucination problem of large language models. CoK [Li et al., 2023] leverages query languages to obtain knowledge from structured knowledge sources, improving the factual correctness of large language models. Although the above methods improve the performance of large language models on the question answering task, they only utilize a single form of knowledge. Our proposed method simultaneously combines structured triples and textual sentences from evidence documents, taking full advantage of multiple forms of knowledge. 3 Method The goal of this study is to enhance the performance of large language models on knowledge-intensive tasks by leveraging triples for effective knowledge selection from evidence documents. In this section, we present a detailed description of our proposed approach, KS-LLM, for solving QA tasks. 
As shown in Figure 2, KS-LLM consists of three stages: (1) triple construction, which generates a set of triples based on the subject entities in the query question; (2) evidence sentence selection, where the most relevant evidence sentences to the triples are extracted from the evidence document; (3) answer generation, which utilizes the triples and evidence sentences as supporting knowledge to generate the final answer. Next, we will describe each component in KS-LLM respectively. 3.1 Triple Construction The process of triple construction employs the large language model to generate structured triples based on the natural language question, facilitating the precise capture of the intent and crucial information of the question. Given a query question Q, the process of triple construction aims to generate a set of triples T = {(h_i, r_i, t_i)}, i = 1, ..., m using the large language model, where m is the number of triples, h and t are the head entity and tail entity respectively, and r denotes the relation between the head entity and tail entity. Formally, T is obtained by: T = LM(Q) (1) where LM represents a specific large language model. Taking a query question as input, the process of triple construction first identifies the subject entity in the query question, and then generates a set of triples with rich information based on the subject entity. Specifically, we extract the entity related to the topic of the query question, referred to as the subject entity. This entity can be individuals, locations, organizations, or other entities that reflect the core contents of the query question. Next, we construct a set of triples utilizing the subject entity as the head entity. The expanded triples cover various aspects of knowledge closely related to the query question, providing contextual information to the model from multiple perspectives. 
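Since the large language model in Eq. (1) returns triples as free text, a post-processing step is needed to turn that text into the set T. The '(head, relation, tail)' line format assumed below is a sketch; the paper leaves the prompt and output format open:

```python
import re

def parse_triples(llm_output: str):
    """Parse '(head, relation, tail)' spans emitted by the LLM in the
    triple-construction step into (h, r, t) tuples. Splitting on the
    first two commas lets tail entities themselves contain commas,
    e.g. dates like 'November 22, 1958'."""
    triples = []
    for match in re.finditer(r"\(([^()]+)\)", llm_output):
        parts = [p.strip() for p in match.group(1).split(",", 2)]
        if len(parts) == 3:  # keep only well-formed triples
            triples.append(tuple(parts))
    return triples
```

On the paper's running example, an output containing "(Jamie Lee Curtis, occupation, actress)" and "(Jamie Lee Curtis, birthdate, November 22, 1958)" parses into the two corresponding tuples.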
As illustrated in Figure 2, we first extract the subject entity Jamie Lee Curtis from the question “What star sign is Jamie Lee Curtis?”, and then construct several triples with Jamie Lee Curtis as the head entity, such as (Jamie Lee Curtis, occupation, actress). By focusing on the subject entity, we ensure that the constructed triples capture the most relevant information necessary for answering the query question. The constructed triples not only help the model better comprehend the query question but also guide large language models in performing complex reasoning, ultimately generating accurate and consistent answers. The process of triple construction is automatically executed by large language models, without requiring additional manual efforts. 3.2 Evidence Sentence Selection Evidence documents refer to documents that provide background information, relevant facts, or supporting knowledge to query questions. However, evidence documents typically contain a large amount of information, and inputting the entire document into a large language model may introduce noise information, making it more difficult for the model to understand and filter relevant knowledge. Therefore, it is crucial to select valuable evidence sentences from evidence documents, which can significantly improve the quality and accuracy of large language models on the question answering task. Given the constructed triples T and an evidence document D = (s_1, s_2, ..., s_n), where s represents a sentence and n is the number of sentences, the process of evidence sentence selection extracts the evidence sentences S most relevant to the triples T from the evidence document D. Specifically, we initially employ the BERT [Kenton and Toutanova, 2019] model to obtain the embedding representations of constructed triples and each sentence in the evidence document. 
This can be formulated as: q = Bert(T), K = {k_i | k_i = Bert(s_i)} (2) where q and K denote the embeddings of triples and the evidence document respectively. The BERT model captures the semantic information and contextual features of sentences by encoding them into dense vectors. Subsequently, in order to measure the semantic similarity, we calculate the Euclidean distance between each sentence and the triples based on embedding representations, and select the top k sentences with the closest distance as evidence sentences. The indices of the evidence sentences are calculated by: L = arg min_i^(k) {‖k_i − q‖_2} (3) Here, L = (l_1, l_2, ..., l_k) represents the indices of the top k minimum values returned by arg min^(k), and ‖·‖ denotes the Euclidean distance. Finally, S = {s_{l_i} | l_i ∈ L} is the set of evidence sentences selected from the evidence document. We set k = 2 in the experiments. As shown in Figure 2, we compute the Euclidean distance between the triples (Jamie Lee Curtis, occupation, actress), (Jamie Lee Curtis, birthdate, November 22, 1958), (Jamie Lee Curtis, notable work, Halloween) and each sentence in the evidence document. Then we select the top two sentences with the closest distances as the evidence sentences, i.e., “She was born on November 22, 1958.” and “Scorpio corresponds to the solar calendar time from October 23 to November 22.”. During the evidence sentence selection step, we can extract the most relevant evidence sentences to the triples from voluminous evidence documents. These sentences contain crucial information related to the query question and provide supporting knowledge for subsequent answer generation. In addition, compared to directly using the entire document as evidence, effective evidence sentence selection eliminates irrelevant information in the evidence document that may hinder the answer. 
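Equations (2)-(3) amount to a k-nearest-neighbor lookup in embedding space. A minimal sketch with placeholder vectors standing in for the BERT embeddings (the function name and interface are illustrative, not from the paper):

```python
import math

def select_evidence(triple_emb, sent_embs, sentences, k=2):
    """Rank sentences by Euclidean distance to the triple embedding
    (Eq. 3) and keep the k closest, preserving document order."""
    dists = [
        (math.dist(triple_emb, emb), idx)
        for idx, emb in enumerate(sent_embs)
    ]
    top = sorted(idx for _, idx in sorted(dists)[:k])
    return [sentences[i] for i in top]
```

In the paper this step runs inside a vector database for efficiency; the brute-force scan above is the same computation on a small document.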
The evidence sentence selection process is implemented through a vector database, which offers the advantage of high efficiency.

3.3 Answer Generation
We integrate the triples and evidence sentences as supporting knowledge, combine them with the query question, and leverage the reasoning capability of large language models to obtain the final answer. Formally, the final answer A is generated by:

$A = \mathrm{LM}(Q, T, S)$  (4)

Previous methods [Sun et al., 2023c; Sun et al., 2023a] only utilize knowledge graphs or evidence documents as external knowledge to assist large language models in question answering, without considering the interaction between different forms of knowledge. The triples provide structured knowledge from knowledge graphs, while evidence sentences provide detailed information from the evidence document in textual form. By fusing multiple forms of knowledge at different granularities, we provide the model with richer context and factual knowledge, helping large language models generate more accurate and consistent answers. Because models of different sizes possess varying abilities to follow instructions, the output format control in the prompt may differ slightly when generating answers. We expect the model to generate a single entity as the answer so that a fair comparison can be made.

4 Experiments
In this section, we conduct comprehensive experiments on our proposed KS-LLM method on the QA task with evidence documents. We report empirical evaluations of KS-LLM on three widely adopted datasets: TriviaQA-verified [Joshi et al., 2017], WebQ [Berant et al., 2013], and NQ [Kwiatkowski et al., 2019]. Following previous works, we use the exact match (EM) score to evaluate model performance on the QA task. We also evaluate the effectiveness of KS-LLM on two different base LLMs with various sizes: Vicuna [Zheng et al., 2023] and Llama 2 [Touvron et al., 2023].
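In practice, Eq. (4) amounts to assembling the question, triples, and evidence sentences into a single prompt for the LLM. The paper does not publish its prompt template, so the wording below is a hypothetical illustration of the fusion step:

```python
def build_prompt(question, triples, evidence_sentences):
    """Combine question Q, triples T, and evidence sentences S into one
    LLM prompt, i.e., A = LM(Q, T, S) in Eq. (4).
    The template wording is an assumption; the paper does not publish it."""
    triple_lines = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)
    evidence_lines = "\n".join(f"- {s}" for s in evidence_sentences)
    return (
        "Answer the question with a single entity.\n"
        f"Triples:\n{triple_lines}\n"
        f"Evidence:\n{evidence_lines}\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What star sign is Jamie Lee Curtis?",
    [("Jamie Lee Curtis", "birthdate", "November 22, 1958")],
    ["She was born on November 22, 1958.",
     "Scorpio corresponds to the solar calendar time from October 23 to November 22."],
)
# The prompt would then be sent to Vicuna or Llama 2; the actual model
# call (e.g., answer = llm.generate(prompt)) is left abstract here.
```

The instruction "Answer the question with a single entity" mirrors the paper's stated expectation that the model output a single entity for fair comparison.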
Table 1: Performance comparison on the TriviaQA-verified, WebQuestions (WebQ), and Natural Questions (NQ) datasets using three large language models with different sizes. We report the exact match (EM) score; each cell lists Vicuna-13B / Llama 2-13B / Llama 2-7B, and the best result for each model is bolded in the original table.

| Method | TriviaQA-verified | WebQ | NQ |
| Standard (without evidence document) | 51.45 / 62.07 / 51.03 | 17.91 / 23.18 / 17.08 | 14.93 / 16.59 / 15.10 |
| Standard+doc | 52.69 / 56.97 / 49.24 | 18.31 / 23.47 / 17.52 | 17.04 / 21.06 / 19.39 |
| CoT+doc | 50.34 / 55.45 / 49.52 | 18.26 / 22.98 / 18.46 | 17.12 / 20.16 / 20.36 |
| KS-Q | 52.10 / 62.76 / 49.66 | 20.47 / 24.66 / 19.29 | 15.07 / 20.00 / 17.92 |
| KS-T | 52.28 / 53.66 / 40.59 | 21.79 / 23.82 / 20.42 | 15.40 / 16.92 / 11.25 |
| KS-S | 51.59 / 57.79 / 44.55 | 20.23 / 24.11 / 18.85 | 15.62 / 18.89 / 17.26 |
| KS-LLM (Ours) | 58.48 / 55.72 / 44.14 | 21.85 / 24.70 / 21.11 | 17.59 / 21.69 / 20.95 |

(All methods except Standard use the evidence document.)

4.1 Experimental Setup
Datasets and Evaluation. We conduct experiments on three representative QA datasets. TriviaQA-verified [Joshi et al., 2017] is a reading comprehension dataset that covers a wide range of topics, consisting of over 650K question-answer-evidence triples. The evidence documents are collected via distant supervision and may include noise irrelevant to the question. Therefore, in this paper, we utilize the verified set of TriviaQA, which is manually verified to ensure that each document contains the relevant facts necessary for answering the question. WebQ [Berant et al., 2013] refers to WebQuestions, an open-domain question answering dataset containing numerous question-answer pairs. WebQ includes questions sourced from the web, aiming to evaluate the performance of QA systems on real-world questions without domain restrictions. NQ [Kwiatkowski et al., 2019] refers to Natural Questions, a widely used open-domain question answering dataset created by the Google AI team.
This dataset contains real-world questions selected from Google search logs and is of significant importance for evaluating and advancing research in question answering systems. Due to the absence of evidence documents in the WebQ and NQ datasets, we follow previous work [Yu et al., 2023] and employ a large language model to generate an evidence document for each question in WebQ and NQ. Specifically, we use Vicuna 13B for evidence document generation. We report the exact match (EM) score on the validation set of each dataset to evaluate model performance. An answer is considered correct if and only if its normalized form has a match in the list of acceptable answers.

Fundamental Models. We conduct experiments on three representative fundamental large language models of various sizes. Vicuna [Zheng et al., 2023] is an open-source large language model launched by the Large Model Systems Organization (LMSYS Org). Vicuna comes in three versions, 7B, 13B, and 33B, and is fine-tuned from Llama on the open conversation dataset collected from ShareGPT. The latest release, Vicuna 1.5, is fine-tuned from Llama 2 and supports inputs with a maximum context length of 16K. We utilize the 13B version of Vicuna 1.5. Llama 2 [Touvron et al., 2023] is an open-source large language model released by Meta, available in three versions: 7B, 13B, and 70B. Llama 2 is trained on datasets comprising over 2 trillion tokens, and the fine-tuning data includes publicly available instruction datasets, along with over 1 million new annotated examples. We utilize the 7B and 13B versions of Llama 2.

Baselines. We set up six different baselines. The Standard baseline prompts the large language model to directly output answers to the questions. The Standard+doc baseline combines the question and the corresponding evidence document as input, prompting the large language model to output the answer.
For the TriviaQA-verified dataset, since the length of its evidence documents may exceed the maximum input length of the model, we set a unified parameter, max token, to limit the length of evidence documents. In this paper, we set max token to 300. The CoT+doc baseline follows the same setup as the Standard+doc baseline while incorporating the Chain of Thought (CoT) [Wei et al., 2022] approach. KS-Q calculates the embeddings of the question and of each sentence in the evidence document, and selects the top k sentences most similar to the question as evidence sentences. Subsequently, KS-Q inputs the question and evidence sentences into the large language model. Instead of using the question, our approach leverages triples to select evidence sentences. The KS-T and KS-S baselines utilize, respectively, the triples generated in the triple construction step and the evidence sentences obtained in the evidence sentence selection step as their supporting knowledge. In contrast, our proposed method integrates both triples and evidence sentences as supporting knowledge. KS-T then inputs the question and triples, and KS-S the question and evidence sentences, into the large language model.

4.2 Main Results
As reported in Table 1, the proposed KS-LLM method demonstrates superior performance by outperforming multiple baselines and achieving significant advancements across all three datasets. Specifically, in the case of utilizing open-source models, the KS-LLM method achieves impressive EM scores of 58.48, 24.70, and 21.69 on the TriviaQA-verified, WebQ, and NQ datasets, respectively. Moreover, the KS-LLM method maintains superior performance compared to methods with evidence documents. For example, compared to the CoT+doc method using Vicuna-13B, our KS-LLM yields substantial enhancements of 8.14 and 3.59 on the TriviaQA-verified and WebQ datasets, respectively.
These results fully demonstrate that our method effectively extracts valuable knowledge from evidence documents, thereby significantly improving the accuracy of large language models in answer generation. Furthermore, our method outperforms the KS-T and KS-S methods, which exploit only a single form of knowledge, in the vast majority of cases. This indicates that integrating different forms of knowledge enables the effective utilization of the interaction and complementary relationship between them, further enhancing the knowledge absorption capability of large language models. We also find that directly incorporating appropriate evidence documents often leads to minor performance improvements (Standard vs. Standard+doc). In seven out of nine cases across three datasets and three large language models, incorporating evidence documents results in higher accuracy. However, applying the chain-of-thought (CoT) technique does not consistently enhance the performance of large language models (Standard+doc vs. CoT+doc). This could be due to the use of zero-shot prompts in our experiments, where the performance of zero-shot CoT is not stable. Moreover, the choice of fundamental model significantly affects the utilization of knowledge. For example, the Llama 2 model struggles to effectively utilize valid knowledge on the TriviaQA-verified dataset. This may be attributed to the fact that evidence documents for TriviaQA-verified are obtained from the web through distant supervision [Joshi et al., 2017] and contain non-typical natural language expressions, such as "Sam Smith releases new James Bond title song — Film — DW.COM — 25.09.2015", a form of knowledge that Llama 2 is not well adapted to handle.
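The EM scores above rest on answer normalization ("an answer is considered correct if and only if its normalized form has a match in the acceptable answer list"). The paper does not specify its normalizer, so the sketch below assumes the common SQuAD-style convention of lowercasing and stripping punctuation, articles, and extra whitespace:

```python
import re
import string

def normalize(text):
    """Lowercase, drop punctuation, articles (a/an/the), and extra spaces.
    This mirrors the widely used SQuAD-style normalization; the paper's
    exact normalizer is not published, so this is an assumption."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, acceptable_answers):
    """Return 1 if the normalized prediction equals any normalized
    acceptable answer, else 0."""
    pred = normalize(prediction)
    return int(any(pred == normalize(a) for a in acceptable_answers))

score = exact_match("The Boston Braves.", ["Boston Braves", "Braves"])  # -> 1
```

A dataset-level EM score is then simply the mean of these per-question 0/1 values.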
| Method | Length (tokens) | EM | Time |
| Standard | 0 | 51.45 | 3min40s |
| Standard+doc | 300 | 52.69 | 4min41s |
| Standard+doc | 500 | 49.52 | 5min22s |
| Standard+doc | 1000 | 47.45 | 7min10s |
| Standard+doc | 2000 | 37.66 | 11min31s |
| KS-LLM | – | 58.48 | – |

Table 2: Impact of evidence document length on the TriviaQA-verified dataset with Vicuna-13B. Time refers to the running time of the large language model for inference on an NVIDIA A100 80G. In the original table, the best baseline result is underlined and the best overall result is bolded.

4.3 Impact of Evidence Document Length
The length of external knowledge may affect the performance of large language models. Providing appropriate external knowledge can enhance the performance of the model, while excessively long knowledge may degrade it. The length of evidence documents can vary significantly in the real world, making it necessary to select an appropriate document length to help large language models answer questions. To investigate the impact of different evidence document lengths on large language models, we conduct experiments using Vicuna-13B on the TriviaQA-verified dataset. Specifically, we report the performance of the Standard+doc baseline under different evidence document lengths and compare it with the Standard baseline and the proposed KS-LLM method. In addition, we report the running time required for inference with various document lengths on an NVIDIA A100 80G. Through this experiment, we aim to gain a deeper understanding of the adaptability of large language models to different evidence document lengths. This is of great help in choosing the optimal evidence document length in practical applications, balancing the requirements of performance and efficiency. As shown in Table 2, the question answering performance of large language models is significantly influenced by the length of the evidence document.
Using a 300-token evidence document results in a 1.24-point increase in the evaluation metric compared to not using any evidence document. However, as the length of the evidence document increases, there is a corresponding decrease in performance. This is consistent with the hypothesis from previous research that appropriate external knowledge can improve the performance of the model, while excessively long knowledge has a negative impact on the performance of large language models. The length of the evidence document also affects the inference time of large language models. As the length of the evidence document increases, the inference time grows proportionally. Using a 2000-token evidence document takes approximately three times longer for inference than using no evidence document.

4.4 Impact of Parameter k
The parameter k represents the number of sentences selected from the evidence document during the evidence sentence selection process. We extract the top k sentences with the highest semantic similarity to the triples from the evidence document and use them in the subsequent answer generation process. The parameter k indicates how the quantity of supporting knowledge affects the performance of large language models. If k is too small, large language models may not have sufficient knowledge for reasoning. If k is too large, additional noisy knowledge may be introduced, interfering with the decision-making of large language models. We evaluate the impact of parameter k on the performance of large language models using Vicuna-13B on the TriviaQA-verified and WebQ datasets. Specifically, we report the performance of our KS-LLM method under different k values and compare it with the Standard baseline and the Standard+doc baseline. Through this experiment, we were able

Question: For which team did Babe Ruth blast his last Major League home run?
Answer: Boston Braves
Output: Babe Ruth (Standard) ✗; Philadelphia Athletics (Standard+doc) ✗; Philadelphia Athletics (CoT+doc) ✗; Boston Braves (KS-LLM) ✓

Intermediate output of KS-LLM:
- Triple Construction: (Babe Ruth, played for, Boston Red Sox), (Babe Ruth, played for, New York Yankees), (Babe Ruth, played for, Baltimore Orioles), (Babe Ruth, played for, St. Louis Browns), (Babe Ruth, played for, Boston Braves)
- Evidence Sentence Selection: "On May 25, 1935, with the team on a road trip and playing at Forbes Field in Pittsburgh, Ruth hammered three home runs and a single, driving in six runs." "In 1935, Babe Ruth was forty years old, in poor physical shape, and playing out the string with the Boston Braves."
- Answer Generation: Boston Braves

Table 3: Case study on the TriviaQA-verified dataset with Vicuna-13B. Answer strings that appear during the inference process of the KS-LLM approach are marked in red in the original table.

Figure 3: Impact of parameter k on the TriviaQA-verified and WebQ datasets with Vicuna-13B. We report the exact match (EM) score.

to determine the optimal value of k, balancing the quantity of supporting knowledge against the risk of introducing noisy knowledge, which is conducive to improving the performance of large language models. From Figure 3, it can be observed that the KS-LLM method achieves the best performance on both datasets when k = 2, reaching 58.48 and 21.85, respectively. As the value of parameter k increases, there is a gradual decline in the performance of the model. This indicates that the parameter k plays an important role in the evidence sentence selection process, and the number of evidence sentences directly affects the accuracy of large language models. When the parameter k is set to 2, we achieve the best results using large language models on the question answering task.
Furthermore, across different values of k, the proposed KS-LLM method consistently outperforms the Standard and Standard+doc baselines. In the case of utilizing evidence documents, compared with the Standard+doc baseline, the KS-LLM approach achieves a maximum performance improvement of up to 5.79 on the TriviaQA-verified dataset, while on the WebQ dataset the maximum improvement reaches 3.94. This demonstrates that effective knowledge selection from evidence documents can significantly enhance the performance of large language models, showcasing the superiority of our KS-LLM method.

4.5 Case Study
To better understand how the proposed KS-LLM method works, we provide a detailed example in Table 3. Given the question "For which team did Babe Ruth blast his last Major League home run?" as input, the large language model gives the incorrect answer Babe Ruth when directly answering the question. Given the question and its corresponding evidence document as input, the large language model yields the wrong answer Philadelphia Athletics even though the evidence document contains the correct answer string Boston Braves. As for the KS-LLM method, it first indicates in the triple construction step that Babe Ruth played for multiple teams, such as (Babe Ruth, played for, Boston Braves). Then, in the evidence sentence selection step, the crucial sentence "In 1935, Babe Ruth was forty years old, in poor physical shape, and playing out the string with the Boston Braves.", which contains the answer string, is successfully identified in the evidence document. Finally, our proposed KS-LLM approach generates the precise answer Boston Braves based on the triples and evidence sentences. This example fully demonstrates that large language models may not be able to effectively utilize the contents of evidence documents, while KS-LLM can extract valuable information from the evidence documents to generate accurate answers.
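The three steps walked through in the case study can be chained end-to-end. The sketch below is a control-flow illustration only: the LLM is stubbed with canned responses, the prompts and the `(head, relation, tail)` parsing are assumptions, and `select_evidence` is injected (e.g., the BERT-plus-Euclidean top-k selection described in Section 3.2).

```python
import re

def parse_triples(llm_output):
    """Extract (head, relation, tail) triples written as '(h, r, t)'.
    The output format is an assumed convention, not the paper's spec."""
    return re.findall(r"\(([^,()]+),\s*([^,()]+),\s*([^()]+)\)", llm_output)

def ks_llm(question, document_sentences, llm, select_evidence):
    # Step 1: triple construction from the question alone.
    triples = parse_triples(llm("Construct triples for: " + question))
    # Step 2: evidence sentence selection (e.g., BERT embeddings + top-k).
    triple_text = " ".join(" ".join(t) for t in triples)
    evidence = select_evidence(triple_text, document_sentences)
    # Step 3: answer generation from question + triples + evidence.
    return llm(f"Question: {question}\nTriples: {triples}\n"
               f"Evidence: {evidence}\nAnswer:")

# Stub LLM with canned outputs, purely to exercise the control flow.
def stub_llm(prompt):
    if prompt.startswith("Construct triples"):
        return "(Babe Ruth, played for, Boston Braves)"
    return "Boston Braves"

answer = ks_llm(
    "For which team did Babe Ruth blast his last Major League home run?",
    ["In 1935, Babe Ruth was playing out the string with the Boston Braves."],
    stub_llm,
    select_evidence=lambda t, sents: sents[:2],
)
```

Swapping `stub_llm` for a real Vicuna or Llama 2 call and `select_evidence` for the embedding-based selector would reproduce the overall pipeline shape, though not the paper's exact prompts.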
5 Conclusion
In this paper, we introduce KS-LLM, a novel knowledge selection method for large language models designed to tackle the question answering problem. Given the corresponding evidence documents, the KS-LLM approach effectively identifies the knowledge relevant to the question from evidence documents, thereby enhancing the performance and efficiency of large language models in the question answering task. The proposed method first constructs triples according to the query question, then extracts sentences from the evidence document that are most similar to the triples as evidence sentences, and finally integrates the triples and evidence sentences into the input of large language models to generate accurate answers. Experimental results demonstrate that our method achieves remarkable improvements on three datasets, indicating that KS-LLM is capable of selecting valuable knowledge snippets from evidence documents to assist large language models in answering questions." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.13207v1", |
| "title": "STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases", |
| "abstract": "Answering real-world user queries, such as product search, often requires\naccurate retrieval of information from semi-structured knowledge bases or\ndatabases that involve blend of unstructured (e.g., textual descriptions of\nproducts) and structured (e.g., entity relations of products) information.\nHowever, previous works have mostly studied textual and relational retrieval\ntasks as separate topics. To address the gap, we develop STARK, a large-scale\nSemi-structure retrieval benchmark on Textual and Relational Knowledge Bases.\nWe design a novel pipeline to synthesize natural and realistic user queries\nthat integrate diverse relational information and complex textual properties,\nas well as their ground-truth answers. Moreover, we rigorously conduct human\nevaluation to validate the quality of our benchmark, which covers a variety of\npractical applications, including product recommendations, academic paper\nsearches, and precision medicine inquiries. Our benchmark serves as a\ncomprehensive testbed for evaluating the performance of retrieval systems, with\nan emphasis on retrieval approaches driven by large language models (LLMs). Our\nexperiments suggest that the STARK datasets present significant challenges to\nthe current retrieval and LLM systems, indicating the demand for building more\ncapable retrieval systems that can handle both textual and relational aspects.", |
| "authors": "Shirley Wu, Shiyu Zhao, Michihiro Yasunaga, Kexin Huang, Kaidi Cao, Qian Huang, Vassilis N. Ioannidis, Karthik Subbian, James Zou, Jure Leskovec", |
| "published": "2024-04-19", |
| "updated": "2024-04-19", |
| "primary_cat": "cs.IR", |
| "cats": [ |
| "cs.IR", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases", |
| "main_content": "Introduction
Natural-language queries are the primary form in human society for acquiring information, involving diverse knowledge in the real world [14, 18, 22].

∗‡Equal contribution. Correspondence: {shirwu, jamesz, jure}@cs.stanford.edu. The work of Vassilis N. Ioannidis and Karthik Subbian does not relate to their position at Amazon.

Example queries and titles of ground-truth nodes:

STARK-AMAZON
- "Looking for durable Dart World brand dart flights that resist easy tearing. Any recommendations?" → <Amazon Standard Flights>, <Dart World Broken Glass Flight> (12 more)
- "What are recommended scuba diving weights for experienced divers that would fit well with my Gorilla PRO XL waterproof bag?" → <Sea Pearls Vinyl Coated Lace Thru Weight>

STARK-MAG
- "Search publications by Hao-Sheng Zeng on non-Markovian dynamics." → <Distribution of non-Markovian intervals...>, <Comparison between non-Markovian...>
- "What are some nanofluid heat transfer research papers published by scholars from Philadelphia University?" → <A Numerical Study on Convection Around A Suqare Cylinder using AL2O3-H2O Nanofluid>

STARK-PRIME
- "Could you provide a list of investigational drugs that interact with genes or proteins active in the epididymal region?" → <(S)-3-phenyllactic Acid>, <Anisomycin>, <Puromycin>
- "Search for diseases without known treatments and induce pruritus in pregnant women, potentially associated with Autoimmune." → <Intrahepatic Cholestasis>
- "Please find pathways involving the POLR3D gene within nucleoplasm." → <RNA Polymerase III Chain Elongation>
- "Which gene or protein associated with lichen amyloidosis can bind interleukin-31 to activate the PI3K/AKT and MAPK pathways?" → <OSMR>, <IL31RA>

Table 1: Example queries from STARK, which involve semi-structured information, where the relational and textual aspects are highlighted.

For example, users on e-commerce web search
can express complex information needs by combining free-form elements or constraints, such as "Can you help me find a push-along tricycle from Radio Flyer that's both fun and safe for my kid?". Here, the underlying knowledge can be represented in semi-structured formats [29, 32, 40], referred to as semi-structured knowledge bases (SKBs), which integrate unstructured data, such as textual descriptions and natural language expressions (e.g., descriptions of a tricycle), with structured data, like entity relations in a knowledge graph or database (e.g., a tricycle linked by a "belongs to" relation to the brand Radio Flyer). This allows semi-structured knowledge bases to represent comprehensive knowledge for specific applications, making them indispensable in domains such as e-commerce [12], social media [26], and biomedicine [5, 15]. In this work, we focus on semi-structured knowledge bases (SKBs) with textual and relational information, two prevalent modalities in existing knowledge bases. In this context, as shown in Figure 1, information retrieval serves as a predominant step by identifying relevant data in the semi-structured knowledge bases, such as specific tricycles that match the request, which are then processed to generate accurate answers to user queries [34]. Retrieval accuracy on semi-structured knowledge bases is crucial for enhancing user experience, supporting informed decision-making, and preventing hallucination.

Limitations of existing works. However, previous works and benchmarks have primarily focused on either purely textual queries over unstructured knowledge [11, 17, 21, 24, 43] or structured SQL or knowledge graph queries [1, 36, 45, 47], which are inadequate for studying the complexities of retrieval tasks involving semi-structured knowledge bases. Another line of work [9, 20, 37, 41, 42, 45] has explored the middle ground via multi-hop reasoning within a single document or across multiple documents.
However, these works are either limited in the span of knowledge or lacking in diverse textual properties and explicit relational information. Recently, large language models (LLMs) have demonstrated significant potential on information retrieval tasks [11, 25, 33, 51]. Nevertheless, it remains an open question how effectively LLMs can be applied to the specific challenge of retrieval from SKBs. It is also important to note that existing works focus mainly on general knowledge, e.g., from Wikipedia. In fact, the knowledge may also come from private sources, necessitating information retrieval systems adaptable to private semi-structured knowledge bases. Therefore, there is a gap in our understanding of how current LLM retrieval systems handle the complex interplay between textual and relational requirements in queries that optionally involve private knowledge.

Challenges. To address this gap, our goal is to develop a benchmark for LLM retrieval on SKBs. However, the lack of real queries on SKBs is an obstacle. Although retrieval datasets with synthetic queries are a valuable supplement, accurately simulating user queries on SKBs is particularly difficult. This difficulty arises from the interdependence of textual and relational information, which can lead to challenges in precisely filtering through millions of candidates to construct the ground-truth answers. Additionally, ensuring that queries are relevant to real-world applications and resemble real-world scenarios adds further complexity to the benchmarking process.

Our contribution: STARK. As shown in Figure 1, we present a large-scale Semi-structure retrieval benchmark on Textual and Relational Knowledge Bases (STARK). We propose a novel pipeline that simulates user queries and constructs precise ground-truth answers using three SKBs built from extensive texts and millions of entity relations from public sources.
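An SKB as described here, entities carrying free text plus typed relations between them, can be held in a minimal structure like the one below. The field names and methods are illustrative assumptions, not STARK's released schema:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """An SKB entity: a typed node with free-text attributes."""
    id: int
    type: str                                   # e.g., "product" or "brand"
    text: dict = field(default_factory=dict)    # e.g., {"title": ..., "reviews": ...}

@dataclass
class SKB:
    """Entities plus typed relations stored as (head_id, relation, tail_id)."""
    entities: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)

    def add(self, entity):
        self.entities[entity.id] = entity

    def relate(self, head_id, relation, tail_id):
        self.relations.append((head_id, relation, tail_id))

    def neighbors(self, entity_id, relation=None):
        """IDs of entities linked from entity_id, optionally filtered by relation type."""
        return [t for h, r, t in self.relations
                if h == entity_id and (relation is None or r == relation)]

skb = SKB()
skb.add(Entity(0, "product", {"title": "Ultimate Stroll 'N Tricycle"}))
skb.add(Entity(1, "brand", {"title": "Radio Flyer"}))
skb.relate(0, "has_brand", 1)
```

A retrieval system over such a structure must score candidates against both the `text` attributes (textual requirements) and the typed edges (relational requirements), which is exactly the interplay the benchmark targets.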
We validate the quality of queries in our benchmark through detailed analysis and human evaluation, focusing on their naturalness, diversity, and practicality. With STARK, we delve deeper into retrieval tasks on SKBs, evaluate the capability of current retrieval systems, and provide insights for future advancement. We point out the key features of STARK as follows:

• Natural-sounding queries on semi-structured knowledge: As illustrated in Table 1, the queries in our benchmark are crafted to incorporate rich relational information and complex textual properties. Additionally, these queries closely mirror the types of questions users would naturally ask in real-life scenarios, e.g., with flexible query formats and possibly with extra context.

• Context-specific reasoning: The queries entail reasoning capabilities specific to the context. This includes the ability to infer customer interests, understand specialized field descriptions, and deduce relationships involving multiple subjects mentioned within the query. For example, the context "I had a dozen 2.5-inch Brybelly air hockey pucks, so I'm trying to find matching strikers." entails the user's interest in complementary products. Such reasoning capabilities are crucial for accurately interpreting and responding to the nuanced requirements of each query.

• Diverse domains: Our benchmark spans a wide range of content and domains, reflecting the diverse nature of real-world information needs. It covers product recommendation, academic paper search, and precision medicine inquiry. Therefore, STARK provides a comprehensive evaluation of retrieval systems across various contexts and knowledge domains.

Moreover, we perform extensive experiments on each benchmark dataset with the prevailing retrieval systems.
We highlight research challenges, especially the capability to handle textual and relational requirements and the latency of retrieval systems on large-scale SKBs that involve million-scale entities or relations. Finally, we provide insights into future directions for building more capable retrieval systems. The benchmark datasets and code are available at https://github.com/snap-stanford/stark.

2 Related Work
We identify three primary categories of retrieval-oriented question answering (QA) datasets based on the knowledge sources they utilize: unstructured, structured, and semi-structured knowledge bases. We discuss the differences and limitations of the previous works that we seek to address.

Unstructured QA Datasets. This line of work explores retrieving answers from a single document [31] or multiple document sources [9, 20, 39, 41, 44]. For example, SQuAD [31] focuses on extracting answers from a single document, testing a model's ability to understand and retrieve information within a specific context. For multi-document retrieval, HotpotQA [44] demands reasoning over multiple documents and includes sentence-level supporting facts, enhancing the complexity of the retrieval task. TriviaQA [20] combines questions from trivia games with evidence documents, evaluating both retrieval and reasoning over multiple sources. Moreover, some works [23, 28] leverage results generated by search engines as ground-truth answers or information sources for retrieval. However, unstructured QA datasets and knowledge bases often lack the depth of relational reasoning commonly required to answer complex user queries.

Structured QA Datasets. Within this category, the datasets challenge models to retrieve answers from structured knowledge bases, either knowledge graphs [2-4, 10, 13, 38, 48] or tabular data [49, 50].
For example, ComplexWebQuestions [38] challenges a model's ability to decompose complex queries and interpret them using entities and relationships defined in knowledge graphs. GraphQA [13] textualizes graphs by describing node attributes and edges in natural language, offering language models a way to handle KBQA. For datasets with tabular data, WikiSQL [50] and Spider [49] both address the task of converting natural language questions to SQL queries, with WikiSQL concentrating on single-table databases and Spider extending to complex multi-table settings. While these datasets primarily focus on relational information, the lack of textual information limits questions to predefined relationships and entities, which constrains the breadth of available information, resulting in less comprehensive and nuanced responses compared to the extensive insights drawn from abundant textual data.

Table 2: Data statistics of our constructed semi-structured knowledge bases.

| | #entity types | #relation types | avg. degree | #entities | #relations | #tokens |
| STARK-AMAZON | 2 | 3 | 3.0 | 1,032,407 | 3,886,603 | 592,067,882 |
| STARK-MAG | 4 | 4 | 10.6 | 1,872,968 | 19,919,698 | 212,602,571 |
| STARK-PRIME | 10 | 18 | 62.6 | 129,375 | 8,100,498 | 31,844,769 |

Figure 2: Demonstration of Semi-structured Knowledge Bases, where each knowledge base combines both textual and relational information in a complex way, making the retrieval tasks challenging. (The figure depicts example entities with textual attributes, such as a paper with title, abstract, publication date, and venue, a product with features, description, reviews, and customer Q&A, and a gene and a disease with names and descriptions, connected by relation types such as also_bought, also_viewed, has_brand, author_writes_paper, paper_cites_paper, paper_has_field_of_study, and drug, gene, protein, disease, and pathway interactions.)

Semi-Structured QA Datasets. Another line of work focuses on semi-structured QA with tabular and textual data. WikiTableQuestions [30] emphasizes comprehension of table structure and textual content, reflecting the semi-structured nature of many real-world datasets. TabFact [6] and HybridQA [7] combine textual and tabular data, requiring models to validate statements or answer questions using both data types. TabMCQ [19] introduces multiple-choice questions based on tables, adding complexity in understanding and reasoning over semi-structured data. However, different from this work, these existing datasets mainly focus on tables as structured sources, which may not incorporate the rich relational information that naturally exists between entities. Additionally, previous attempts to combine text and tabular data through external web links or additional text columns often result in data structures that are hard to navigate.
Therefore, there is an emerging need to benchmark retrieval tasks on textual and relational knowledge bases, offering more insights beyond the scope of structured tabular sources.

3 Benchmarking Retrieval Tasks over Textual and Relational Knowledge

We develop three large-scale retrieval datasets on three semi-structured knowledge bases (SKBs). In Section 3.1, we introduce the schema and scale of the SKBs, which integrate entities with rich relational and textual information. In Section 3.2, we introduce the retrieval datasets and the key characteristics of the queries. We then highlight our novel dataset construction pipeline in Section 3.3, which simulates user queries involving textual and relational knowledge and automatically generates the ground truth answers in the retrieval datasets. Finally, we conduct a data distribution analysis and a human evaluation of the queries in Section 3.4.

Table 3: Dataset statistics on STARK.

|              | #queries | #queries w/ multiple answers | avg. #answers | train / val / test |
| STARK-AMAZON | 9,100    | 7,082                        | 17.99         | 0.65 / 0.17 / 0.18 |
| STARK-MAG    | 13,323   | 6,872                        | 2.78          | 0.60 / 0.20 / 0.20 |
| STARK-PRIME  | 11,204   | 4,188                        | 2.56          | 0.55 / 0.20 / 0.25 |

3.1 Semi-structured Knowledge Bases

As shown in Figure 2, we construct three SKBs from the relational structure between entities and the textual information associated with a subset of the entities. We present the statistics of the relational structure in Table 2 and introduce each SKB as follows:

Amazon Semi-structured Knowledge Base. This knowledge base is derived from the Sports and Outdoors category of Amazon Product Reviews [12] and Amazon Question and Answer Data [27]. It features two entity types, product and brand, and three types of relational information: also_bought and also_viewed between product entities, and the has_brand relation between product and brand entities.
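To make the structure concrete, the following minimal sketch (our illustration, not the released data format; node ids, texts, and the helper `neighbors` are invented) represents an SKB as nodes carrying free text plus typed edges, and shows how a relational requirement such as "belongs to Radio Flyer" induces a candidate set:

```python
# Toy semi-structured knowledge base: free-text node attributes + typed edges.
# Purely illustrative; the actual SKBs use their own storage format.
skb_nodes = {
    0: {"type": "product", "text": "Ultimate Stroll 'N Tricycle. 4 ways to ride..."},
    1: {"type": "brand", "text": "Radio Flyer"},
    2: {"type": "product", "text": "Classic push-along tricycle for toddlers..."},
}
skb_edges = [
    (0, "has_brand", 1),
    (2, "has_brand", 1),
    (0, "also_viewed", 2),
]

def neighbors(node_id, relation):
    """All nodes connected to node_id by `relation`, in either direction."""
    out = [t for (s, r, t) in skb_edges if s == node_id and r == relation]
    out += [s for (s, r, t) in skb_edges if t == node_id and r == relation]
    return out

# Candidate set for the relational requirement "belongs to Radio Flyer":
radio_flyer_products = neighbors(1, "has_brand")  # -> [0, 2]
```

Retrieval over such a structure must combine this relational filtering with matching against the free-text attributes of each candidate node.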
In total, it comprises around 1.03M entities (product: 0.96M, brand: 0.07M) and 3.9M relations (also_bought: 1.7M, also_viewed: 1.3M, has_brand: 0.9M). We obtain the textual information by combining the metadata from Amazon Product Reviews with the customer Q&A records from Amazon Question and Answer Data. This provides a rich amount of textual data, including product titles, descriptions, prices, customer reviews, and related customer Q&A for each product. For brand entities, we extract brand titles as the textual attribute. Notably, the Amazon SKB features extensive textual data, largely contributed by customer reviews and Q&A.

MAG Semi-structured Knowledge Base. This knowledge base is constructed from ogbn-MAG [16], ogbn-papers100M [16], and the Microsoft Academic Graph (version 2019-03-22) [35, 40]. It contains around 1.9M entities of four entity types (author: 1.1M, paper: 0.7M, institution: 9K, field_of_study: 0.06M) and 20M relations of four relation types (author_writes_paper: 7.9M, paper_has_field_of_study: 7.2M, paper_cites_paper: 4.9M, author_affiliated_with_institution: 1.0M). We construct the knowledge base by selecting papers¹ shared between ogbn-MAG and ogbn-papers100M. The relational structure comes from the subgraph of ogbn-MAG induced by the selected papers. The textual information, such as titles and abstracts, is sourced from ogbn-papers100M. Additionally, we augment the textual data by integrating details from the Microsoft Academic Graph database, providing extra information such as paper venues and author and institution names. This SKB demonstrates a large number of entities and relations associated with paper nodes, especially for the citation and authorship relation types.

Prime Semi-structured Knowledge Base.
We leverage the existing knowledge graph PrimeKG [5], which contains ten entity types, including disease, drug, and gene/protein, and eighteen relation types, such as associated_with, synergistic_interaction, indication, and expression_present. The entity count in our knowledge base is approximately 129K, with around 8M relations. Details on entity counts and relation tuples are available in Appendix A.2. Compared to the Amazon and MAG SKBs, the Prime SKB is denser and features a greater variety of relation types. While PrimeKG already provides textual information on disease and drug entities, including drug mechanisms and disease descriptions, we additionally integrate textual details from multiple databases for gene/protein and pathway entities, such as genomic position, gene activity summary, and pathway orthologous events. We list the comprehensive public sources used to construct the SKBs in Table 8 of Appendix A.

3.2 Retrieval Tasks on Semi-structured Knowledge Bases

We develop three novel retrieval datasets. Specifically, the task is to retrieve node entities from the SKBs given any input query. To highlight, the queries feature the interplay and fusion between relational and textual knowledge. Moreover, to elevate their applicability in practical scenarios, these queries are designed to mimic real-life query patterns in terms of natural-sounding phrasing and flexible formats. Each query may have a single answer or multiple answers, formulated as a subset of entities from the knowledge base, which requires the model to accurately identify the ground truth entities out of potentially millions in response to each query. In Table 1, we demonstrate example queries from these benchmarks. We detail the scale of our retrieval benchmarks in Table 3 and introduce the specifics of each retrieval dataset as follows:

¹ We filter out non-English papers, as we only consider monolingual queries.

STARK-AMAZON.
This dataset comprises 9,100 customer-focused queries aimed at product search, with a notable 68% of these queries yielding multiple answers. The dataset prioritizes customer-oriented criteria, highlighting textual elements such as product quality, functionality, and style. Additionally, it accentuates relational aspects, including brand and product connections (e.g., complementary or substitute items). The queries are framed in detailed, conversation-like formats, enriching the context and enhancing the dataset's relevance to real-world scenarios.

STARK-MAG. This dataset comprises 13,323 paper queries, with roughly half yielding multiple answers. Beyond single-hop relational requirements, STARK-MAG also emphasizes the fusion of textual requirements with multi-hop queries. For example, "Are there any papers from King's College London..." highlights the metapath (institution → author → paper) on the relational structure. We designed three single-hop and four multi-hop relational query templates, cf. Appendix A.1. The textual aspects of the queries diversify, focusing on different elements such as a paper's topic, methodology, or the advancements it introduces, sourced mainly from the abstracts.

STARK-PRIME. This dataset, comprising 11,204 queries across all ten entity types, incorporates single-hop and multi-hop relational information similar to STARK-MAG. We developed 28 multi-hop query templates, detailed in Appendix A.2, to cover various relation types and ensure their practical relevance. For instance, the query "What is the drug that targets the genes or proteins expressed in <anatomy>?" serves applications in precision medicine and pharmacogenomics, aiding researchers and healthcare professionals in identifying drugs that act on genes or proteins associated with specific anatomical areas and enabling more targeted treatments.
For entities like drug, disease, gene/protein, and pathway, the queries are a hybrid of relational requirements and textual requirements drawn from documents. A notable feature is the deliberate simulation of three distinct roles (medical scientist, doctor, and patient) for certain queries related to drugs and diseases. This is intended to diversify the language used across various user types and to evaluate the robustness of the retrieval system under varied linguistic scenarios. For entities such as effect/phenotype, the queries rely solely on relational data due to the limited textual information in the knowledge base.

We divide each dataset into training, validation, and testing subsets, with the ratios detailed in Table 3. For the STARK-AMAZON dataset, we randomly allocate a subset of queries with 20 or fewer answers to the validation and testing sets, while the remaining queries are assigned to the training set. For STARK-MAG and STARK-PRIME, we use random splits to divide the data.

3.3 Benchmark Construction

Here we detail the novel pipeline used to synthesize the retrieval datasets presented in Section 3.2. The core idea is to entangle relational information and textual properties while constructing ground truth answers in an accurate and efficient way. As shown in Figure 3, the construction of the retrieval datasets generally involves four steps, and the specific processes vary depending on the characteristics of each dataset. These steps are as follows:

• Sample Relational Requirements: Initially, we sample a relation template, such as "(a product) belongs to <brand>", and instantiate it into a relational requirement, e.g., "belongs to Radio Flyer", shown in the first step of Figure 3. Each relational requirement yields a set of candidate entities that meet the requirement, i.e., products belonging to Radio Flyer. Therefore, we obtain the relational requirements and the corresponding candidates.
The relation templates for the three SKBs are available in Appendix A, including 28 relation templates for the Prime SKB. Each relation template in the SKBs serves a distinct and practical purpose. For instance, the template "What is the drug that targets the genes or proteins which are expressed in <anatomy>?" is particularly valuable for precision medicine inquiries. It aids in pinpointing medications aimed at specific genetic or protein targets within defined anatomical areas, thereby contributing to the development of targeted treatments.

Figure 3: The process of constructing semi-structured retrieval datasets involves four main steps: 1) Sample Relational Requirement: based on relation templates, sample a relational requirement on a SKB. 2) Extract Textual Properties: from a node that meets the relational requirement, extract relevant textual properties. 3) Combine Information: merge the relational information and textual properties to form a natural-sounding query. 4) Construct Ground Truth Nodes: check whether nodes satisfy the textual properties using multiple language models to establish the ground truth nodes. [The figure walks through the example relational requirement "belongs to Radio Flyer", the extracted textual properties "push-along tricycle" (from the description) and "fun and safe for kids" (from a review), the resulting query "Can you help me find a push-along tricycle from Radio Flyer that's both fun and safe for my kid?", and the filtering of candidate nodes into ground truth nodes.]
• Extracting Textual Properties: Subsequently, we select one of the candidate nodes that satisfies the relational requirement, referred to as the gold answer. From the textual information of the gold answer, we extract textual properties that align with the interests of specific roles. The roles can be customers, researchers, or medical scientists, differing across the SKBs. For example, in the second step of Figure 3, we extract phrases about functionality and user experience from a Radio Flyer product, which come from different sources in the product document. We use GPT-3.5-turbo-16k for STARK-AMAZON and Claude-v2 for the other two datasets to conduct this step; the prompts used are included in Appendix B.

• Combining Textual and Relational Information: With the textual and relational requirements, we simulate queries by fusing them using LLMs. We conduct a two-stage fusion with two different LLMs, Claude-v2 and GPT-4-Turbo, to avoid the bias that might arise from relying on a single LLM and to ensure a more diverse set of simulated queries. Specifically, the first-stage integration is guided by various criteria, such as ensuring a natural-sounding query, adhering to the style of arXiv searches, or aligning with specific roles. The second stage then instructs the LLM to enrich the context and rephrase the language, thereby posing a more demanding reasoning challenge in comprehending the requirements of the query. In our example, by combining the relational information "belongs to Radio Flyer" with the textual information "push-along tricycle" and "fun and safe for kids", we arrive at the final query in the third step.

• Filtering Additional Answers: In the final step, we assess whether each remaining candidate, excluding the gold answer used for extracting textual properties, meets the additional textual requirement. For the verification, we employ three Claude models.
Only candidates that pass the verification across all models are included in the final ground truth answer set. This stringent verification ensures the accuracy of the nodes in the ground truth answers. During this process, we also evaluate the accuracy of the gold nodes that pass the verification criteria. The accuracy rates for STARK-AMAZON, STARK-MAG, and STARK-PRIME are 86.6%, 98.9%, and 92.3%, respectively, demonstrating the effectiveness of our filtering approach in maintaining high-quality ground truth answers.

This dataset construction pipeline is automatic, efficient, and broadly applicable to the SKBs within our context. We include the complete prompts for the above steps in Appendix B.

3.4 Benchmark Data Distribution Analysis and Human Evaluation

We study the data distribution to understand the characteristics of our benchmark datasets and conduct a human evaluation of benchmark quality. We examine the data distribution along three dimensions:

• Query and Answer Length. We analyze the query length distribution (measured in the number of words) and the number of ground truth answers for each query. The query length reflects the amount of context information provided by users, while the number of ground truth answers indicates the level of inclusiveness or ambiguity of queries within the specific SKB.

Figure 4: Distribution of query and answer lengths on STARK datasets. [Density plots of query length and answer length for STARK-AMAZON, STARK-MAG, and STARK-PRIME.]

Table 4: Query diversity measurement on STARK.

|              | Shannon Entropy | Type-Token Ratio |
| STARK-AMAZON | 10.39           | 0.179            |
| STARK-MAG    | 10.25           | 0.180            |
| STARK-PRIME  | 9.63            | 0.143            |
| Reference¹   | 10.44           | 0.261            |

Figure 5: Average relative composition of relational vs. textual information for STARK-AMAZON, STARK-MAG, and STARK-PRIME.

As illustrated in Figure 4, all three datasets exhibit similar query length distributions, with most queries containing around 16 words.
Queries with up to 50 words typically involve mentions of other entities, such as product or paper titles, or provide more detailed context, such as specific symptom descriptions. Interestingly, the answer length distribution for STARK-AMAZON shows a more pronounced long-tail pattern, with approximately 22% of answers exceeding 30 entities, whereas the answer lengths for STARK-PRIME and STARK-MAG are all within 20 entities. On average, as shown in Table 3, STARK-AMAZON presents an average answer length of 5.32, indicative of the e-commerce recommendation domain, where general customer inquiries frequently result in diverse recommendations. STARK-PRIME, in contrast, has the smallest average answer length, partially attributable to the smaller size of the Prime SKB compared to the other two SKBs.

• Query Diversity. A diverse set of queries poses challenges for retrieval systems and supports broader applicability to varying user demands. Specifically, we measure query diversity using Shannon entropy, which quantifies the uncertainty in the word distribution across all queries, and the Type-Token Ratio (TTR), which calculates the proportion of unique words to the total number of words. Higher values of Shannon entropy and TTR indicate greater lexical diversity, reflecting a wider range of topics and linguistic expressions in the query set. As shown in Table 4, we observe high Shannon entropy values on all of the datasets. For reference, we compute the metrics for the Wikipedia page of Barack Obama¹. While TTR is more sensitive to document length, the TTR values consistently demonstrate a steady presence of unique words relative to the large total length of our query collection.

• Ratio of Relational vs. Textual Information. A key feature of our benchmark datasets is the composition of textual and relational information. Therefore, it is crucial to understand the proportionality between these two types of information.
We calculate the ratio of the length of relational requirements to the length of textual requirements for each query and then average these ratios across each dataset. Intuitively, the ratio serves as an approximation of the relative distribution based on the length of requirements, and does not directly reflect the importance of each type of information in determining the final answers. As shown in Figure 5, the ratios vary across the datasets, highlighting a differing emphasis on textual versus relational information. This variation presents additional challenges for retrieval systems, requiring them to adapt to the characteristics of each dataset and its specific balance of information.

¹ https://en.wikipedia.org/wiki/Barack_Obama

Table 5: Positive/Non-negative rates (%) from human evaluation.

|              | Naturalness | Diversity   | Practicality |
| STARK-AMAZON | 73.6 / 89.5 | 68.4 / 89.5 | 89.5 / 94.7  |
| STARK-MAG    | 94.7 / 100  | 73.7 / 84.2 | 68.4 / 84.2  |
| STARK-PRIME  | 67.8 / 92.8 | 71.4 / 82.1 | 71.4 / 89.3  |
| Average      | 78.7 / 94.1 | 71.0 / 85.3 | 76.4 / 89.4  |

Human evaluation. Moreover, we qualitatively assess sampled queries from our benchmark to determine whether they are natural-sounding (resembling natural conversation), diverse (covering a wide range of question structures and complexity levels), and practical (relevant and useful in real-life situations), with participation from 63 individuals. The evaluation results are converted from a 5-point Likert-like scale to a positive/tie/negative scale for reporting. We report the positive and non-negative rates in Table 5. On average across the three datasets, 94.1%, 85.3%, and 89.4% of participants rated neutral or above in terms of naturalness, diversity, and practicality, respectively. These results attest to the quality of our benchmark and highlight its potential for diverse and realistic retrieval tasks.
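The two diversity metrics reported in Table 4 can be computed as in the following sketch (our illustration, not the authors' code; we assume simple lowercased whitespace tokenization):

```python
# Shannon entropy of the word distribution across all queries (in bits),
# and Type-Token Ratio (unique words / total words).
from collections import Counter
import math

def shannon_entropy(queries):
    """Entropy of the word distribution pooled over all queries."""
    words = [w for q in queries for w in q.lower().split()]
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def type_token_ratio(queries):
    """Proportion of unique words to total words in the pooled query set."""
    words = [w for q in queries for w in q.lower().split()]
    return len(set(words)) / len(words)

# Hypothetical two-query example:
queries = ["what papers discuss nonlinear modeling", "find a sturdy tricycle"]
h = shannon_entropy(queries)    # higher -> more lexically diverse
ttr = type_token_ratio(queries)
```

Since TTR depends on the total token count, comparisons against the reference text are most meaningful at comparable lengths, which is why both metrics are reported together.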
4 Experiments

In this section, we aim to explore the following questions:

• Q1: How well do current retrieval approaches perform on our benchmark datasets?
• Q2: What challenges are revealed by the experiments, and, more importantly, what are the potential future directions for improvement?

4.1 Retrieval Models

Vector Similarity Search (VSS). VSS embeds both the query and the concatenated textual and relational information of each candidate entity. The similarity score is then computed as the cosine similarity between the query and candidate embeddings. We use the text-embedding-ada-002 model from OpenAI to generate the embeddings.

Multi-Vector Similarity Search (Multi-VSS). Multi-VSS represents candidate entities with multiple vectors, capturing detailed features for complex retrieval tasks. Here, we chunk and separately embed the textual and relational information of each candidate entity. The final score is aggregated from the similarities between the chunks and the query.

Dense Retriever. The Dense Retriever fine-tunes a query encoder and a document encoder separately using the query-answer pairs from the training dataset. We optimize the encoders using a contrastive loss. Specifically, the positive pairs are constructed from the training questions and their ground truth answers, while we construct 20 hard negative answers for each query from the top false-positive predictions of VSS. We use roberta-base as the base model and fine-tune the encoders over the entire training split for each dataset.

QAGNN [46]. QAGNN constructs a graph whose nodes include entities found in the question or answer choices, incorporating their neighboring nodes as intermediate context. It extracts a subgraph from a larger knowledge graph, enabling a comprehensive understanding by leveraging the relational information from the knowledge graph.
The approach integrates this with semantic embeddings from a language model, jointly modeling relational and semantic information to enhance question answering.

VSS + LLM Reranker [8, 52]. This method improves the precision of the top-v results from VSS by reranking them with language models, taking advantage of their advanced language understanding capabilities. For this purpose, we employ two different language models: GPT-4-turbo (gpt-4-1106-preview) and Claude-v2. We set v = 20 for the reranking process to balance precision and computational efficiency. Specifically, we construct a prompt that instructs the language model to assign a score between 0 and 1 to a node based on its combined textual and relational information, with certain criteria provided for rating the node. We find that this approach, which involves directly scoring nodes, performs better than asking for strict satisfaction of conditions [52].

Table 6: Main experimental results.

|              | Dense Retriever (roberta) | QAGNN (roberta) | VSS (ada-002) | Multi-VSS (ada-002) | VSS+Claude2 Reranker | VSS+GPT4 Reranker |
STARK-AMAZON
| Hit@1        | 0.1529 | 0.2656 | 0.3916 | 0.4007 | 0.4266 | 0.4479 |
| Hit@5        | 0.4793 | 0.5001 | 0.6273 | 0.6498 | 0.6746 | 0.7117 |
| Recall@20    | 0.4449 | 0.5205 | 0.5329 | 0.5512 | 0.5376 | 0.5535 |
| MRR          | 0.3020 | 0.3775 | 0.5035 | 0.5155 | 0.5329 | 0.5569 |
STARK-MAG
| Hit@1        | 0.1051 | 0.1288 | 0.2908 | 0.2592 | 0.3202 | 0.4090 |
| Hit@5        | 0.3523 | 0.3901 | 0.4961 | 0.5043 | 0.5334 | 0.5818 |
| Recall@20    | 0.4211 | 0.4697 | 0.4836 | 0.5080 | 0.4834 | 0.4860 |
| MRR          | 0.2134 | 0.2912 | 0.3862 | 0.3694 | 0.4129 | 0.4900 |
STARK-PRIME
| Hit@1        | 0.0446 | 0.0885 | 0.1263 | 0.1510 | 0.1611 | 0.1828 |
| Hit@5        | 0.2185 | 0.2135 | 0.3149 | 0.3356 | 0.3582 | 0.3728 |
| Recall@20    | 0.3013 | 0.2963 | 0.3600 | 0.3805 | 0.3598 | 0.3405 |
| MRR          | 0.1238 | 0.1473 | 0.2141 | 0.2349 | 0.2466 | 0.2655 |

4.2 Evaluation Metrics

We use the following metrics for a holistic evaluation of model performance.

Hit@k. The Hit@k metric assesses whether a correct item is among the top-k results from the model. We use k = 1 and k = 5 for evaluation.
At k = 1, it evaluates the accuracy of the top recommendation; at k = 5, it examines the model's precision in a wider recommendation set. It is a straightforward yet effective way to measure the immediate relevance of answers.

Recall@k. Recall@k measures the proportion of relevant items retrieved in the top-k results. In our study, k = 20 is used, as the answer length of every query in our benchmarks is equal to or smaller than 20. This metric offers insight into the model's ability to identify all relevant items, particularly in scenarios where missing any could be critical.

Mean Reciprocal Rank (MRR). MRR evaluates the average effectiveness of a predictive model by computing the reciprocal of the rank at which the first relevant item appears in the list of predictions. MRR is particularly useful for understanding the model's ability to present the correct item early, without the need for a predefined k. It emphasizes the rank of the first correct answer, which is crucial in many practical applications where the first correct answer is often the most impactful.

4.3 Analysis

Main results. We report the experimental results in Table 6. First, we notice a significant performance gap between models trained or fine-tuned on our training datasets, such as the Dense Retriever and QAGNN, and models like VSS and Multi-VSS that use embeddings generated by text-embedding-ada-002. For the Dense Retriever, training the encoders to handle both textual and relational information as text proves challenging. The difficulty is exacerbated by the stark differences in length between long documents and short relational pairs, making it tough for the model to learn a new knowledge base's structure from scratch during fine-tuning. Without the advantage of pre-trained embeddings, it also struggles to align with the textual requirements.
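The evaluation metrics defined in Section 4.2 can be sketched as follows (an assumed minimal implementation, not the authors' evaluation code; MRR is obtained by averaging the reciprocal rank over all queries):

```python
# Per-query retrieval metrics over a ranked candidate list and a ground-truth set.
def hit_at_k(ranked, answers, k):
    """1.0 if any ground-truth entity appears in the top-k results, else 0.0."""
    return float(any(x in answers for x in ranked[:k]))

def recall_at_k(ranked, answers, k):
    """Fraction of ground-truth entities retrieved in the top-k results."""
    return len(set(ranked[:k]) & set(answers)) / len(answers)

def reciprocal_rank(ranked, answers):
    """1 / rank of the first relevant item (0.0 if none is retrieved)."""
    for i, x in enumerate(ranked, start=1):
        if x in answers:
            return 1.0 / i
    return 0.0

# Hypothetical example: a model ranking of entity ids and the ground truth set.
ranked = ["C", "A", "D", "B"]
answers = {"A", "B"}
# hit_at_k(ranked, answers, 1) -> 0.0 (top-1 answer "C" is wrong)
# recall_at_k(ranked, answers, 20) -> 1.0 (both A and B retrieved)
# reciprocal_rank(ranked, answers) -> 0.5 (first correct answer at rank 2)
```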
QAGNN faces its own challenges due to its high computational demands during training. Finally, the superiority of VSS and Multi-VSS demonstrates the strength of pre-trained embeddings obtained from a large corpus. Furthermore, the Multi-VSS strategy, which breaks documents into chunks and embeds them separately to capture more fine-grained context, generally outperforms VSS. However, it falls short on the Hit@1 metric on the STARK-MAG dataset, likely because this dataset requires understanding more extended contexts, or interaction and reasoning between different chunks.

Significantly, the methods combining VSS with an LLM reranker consistently outperform other approaches across most metrics. Notably, reranking the top 20 predictions with an LLM leads to significant improvements in Hit@k and MRR compared to using VSS alone. This improvement underscores the limitations of traditional embedding methods, which lack the advanced reasoning capabilities provided by LLMs. Additionally, using GPT-4 as a reranker tends to yield better results than using Claude, further demonstrating the effectiveness of better reasoning and context understanding in improving retrieval performance.

Table 7: Latency (s) of the retrieval systems on STARK.

|              | Dense Retriever | QAGNN | VSS  | Multi-VSS | VSS+Claude | VSS+GPT4 |
| STARK-AMAZON | 2.34            | 2.32  | 5.71 | 4.87      | 27.24      | 24.76    |
| STARK-MAG    | 0.94            | 1.35  | 2.25 | 3.14      | 22.60      | 23.43    |
| STARK-PRIME  | 0.92            | 1.29  | 0.54 | 0.90      | 29.14      | 26.97    |
| Average      | 1.40            | 1.65  | 2.83 | 2.97      | 26.33      | 25.05    |

Figure 6: A case study on STARK-MAG, where VSS mistakenly ranks non-ground-truth papers C and D higher due to repeated keywords in the relational information "cites paper." After reranking with Claude, it correctly prioritizes the ground truth papers A and B. This correction is attributed to more accurate reasoning and analysis of the combined textual and relational information. [The figure shows four candidate papers A-D for the query "Are there any Physics research papers from École nationale d'ingénieurs du Val de Loire that discuss nonlinear modeling of piezoelectric elements?", with ground truth [A, B].]
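The VSS + LLM reranker pipeline described in Section 4.1 can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: `score_fn` stands in for a prompted LLM call returning a relevance score in [0, 1], and `toy_score` is an invented keyword-overlap stand-in used only so the sketch is runnable.

```python
# Rerank the top-v vector-search candidates by an externally supplied scorer.
def rerank(query, vss_ranking, score_fn, v=20):
    """Re-sort the top-v VSS candidates by score; keep the tail unchanged."""
    head, tail = vss_ranking[:v], vss_ranking[v:]
    scored = sorted(head, key=lambda node: score_fn(query, node), reverse=True)
    return scored + tail

# Stand-in scorer for illustration only; a real system would prompt GPT-4 or
# Claude with the node's combined textual and relational information.
def toy_score(query, node):
    q = set(query.split())
    return len(q & set(node.split())) / len(q)

ranking = rerank(
    "nonlinear piezoelectric modeling",
    ["hysteretic piezoceramic", "nonlinear piezoelectric model", "acoustic energy"],
    toy_score,
)
# The candidate sharing the most query terms moves to the front.
```

Because only the top-v candidates are rescored, the extra LLM cost is bounded per query, which is the precision/latency trade-off discussed above.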
However, even for the VSS+GPT4 Reranker, we observe insufficient performance: e.g., Hit@1 is only around 18% on STARK-PRIME and 41% on STARK-MAG, meaning that for a significant portion of queries, the top-ranked answer is not the correct one. Moreover, Recall@20 values are lower than 60% across all of the datasets, which indicates that the ranking results may not be comprehensive, potentially missing relevant answers in specific applications. We also notice that the MRR scores generally follow a similar trend to the Hit@1 metric. However, the MRR values remain relatively low, particularly for STARK-PRIME, indicating substantial room for further optimization of the ranking process.

Retrieval Latency. Another aspect vital to the practical application of retrieval systems is their latency. As shown in Table 7, we test the latency of the retrieval methods on a single NVIDIA A100-SXM4-80GB. The Dense Retriever and QAGNN models exhibit lower average latency, making them more suitable for time-sensitive applications. In contrast, the VSS and Multi-VSS models show moderate latency, but when combined with LLM rerankers, such as Claude and GPT-4, the latency increases significantly. Therefore, in the context of semi-structured retrieval tasks, the balance between enhanced accuracy and increased latency is crucial. Effective retrieval methods should consider such trade-offs, particularly when addressing complex queries involving reasoning over both textual and relational information.

Case study. Finally, we highlight the importance of reasoning ability in achieving good retrieval results on our benchmark datasets. In Figure 6, we conduct a case study to compare the performance of VSS and the VSS+Claude Reranker. Specifically, we examine a user query that requests papers from an institution on a particular topic. In this scenario, VSS fails to adequately address the relational requirement because it directly embeds the entire documents without detailed analysis.
As a result, papers containing frequently repeated keywords, such as "nonlinear modeling" and "piezoelectric elements", are mistakenly assigned high scores. However, the result improves significantly after reranking with an LLM: the LLM is equipped to reason about the relationship between the query and the information contained in each paper, making the scores more accurately reflect each paper's relevance to the query. This case study underscores the need for reasoning ability in grasping the complexities of queries. While embedding methods may capture some aspects of a query, they often fail to capture the detailed interaction between textual and relational information. Thus, integrating LLMs with advanced reasoning into the retrieval process can be crucial for retrieval accuracy.

5 Conclusion

We introduce STARK, the first benchmark designed to thoroughly evaluate the capability of LLM-driven retrieval systems in handling semi-structured knowledge bases (SKBs). This benchmark features a diverse set of queries that are semi-structured and natural-sounding, requiring context-specific reasoning across various domains, thereby setting a new standard for assessing retrieval systems in the context of SKBs. We utilize public datasets to construct three SKBs and develop an automated, general pipeline to simulate user queries that mimic real-world scenarios. Our extensive experiments on STARK reveal significant challenges for current models in handling both textual and relational information effectively and flexibly. Overall, STARK offers valuable opportunities for future research to advance the field of complex and multimodal retrieval systems, where reducing retrieval latency and incorporating strong reasoning ability into the retrieval process are identified as two prospective future directions.

6 Acknowledgement

We thank group members of Jure Leskovec's and James Zou's labs for providing valuable suggestions.
We also acknowledge the support of Amazon, DARPA under Nos. N660011924033 (MCS); NSF under Nos. OAC-1835598 (CINES), CCF-1918940 (Expeditions), DMS-2327709 (IHBEM); Stanford Data Applications Initiative, Wu Tsai Neurosciences Institute, Chan Zuckerberg Initiative, Genentech, GSK, Hitachi, Juniper Networks, KDDI, and UCB." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.16164v1", |
| "title": "Towards a Holistic Evaluation of LLMs on Factual Knowledge Recall", |
| "abstract": "Large language models (LLMs) have shown remarkable performance on a variety\nof NLP tasks, and are being rapidly adopted in a wide range of use cases. It is\ntherefore of vital importance to holistically evaluate the factuality of their\ngenerated outputs, as hallucinations remain a challenging issue.\n In this work, we focus on assessing LLMs' ability to recall factual knowledge\nlearned from pretraining, and the factors that affect this ability. To that\nend, we construct FACT-BENCH, a representative benchmark covering 20 domains,\n134 property types, 3 answer types, and different knowledge popularity levels.\nWe benchmark 31 models from 10 model families and provide a holistic assessment\nof their strengths and weaknesses. We observe that instruction-tuning hurts\nknowledge recall, as pretraining-only models consistently outperform their\ninstruction-tuned counterparts, and positive effects of model scaling, as\nlarger models outperform smaller ones for all model families. However, the best\nperformance from GPT-4 still represents a large gap with the upper-bound. We\nadditionally study the role of in-context exemplars using counterfactual\ndemonstrations, which lead to significant degradation of factual knowledge\nrecall for large models. By further decoupling model known and unknown\nknowledge, we find the degradation is attributed to exemplars that contradict a\nmodel's known knowledge, as well as the number of such exemplars. Lastly, we\nfine-tune LLaMA-7B in different settings of known and unknown knowledge. In\nparticular, fine-tuning on a model's known knowledge is beneficial, and\nconsistently outperforms fine-tuning on unknown and mixed knowledge. We will\nmake our benchmark publicly available.", |
| "authors": "Jiaqing Yuan, Lin Pan, Chung-Wei Hang, Jiang Guo, Jiarong Jiang, Bonan Min, Patrick Ng, Zhiguo Wang", |
| "published": "2024-04-24", |
| "updated": "2024-04-24", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Towards a Holistic Evaluation of LLMs on Factual Knowledge Recall", |
| "main_content": "Introduction Recent advancements of large language models (LLMs), exemplified by ChatGPT (https://platform.openai.com/docs/models) and GPT-4 (OpenAI, 2023), are leading to their widespread adoption in various domains. Despite their remarkable performance on NLP tasks, they are still plagued by the issue of hallucinations (Ji et al., 2023). It is therefore important to conduct holistic assessments of how well LLMs capture factual knowledge and of the factors that affect their ability to recall knowledge learned from pretraining. Previous factuality benchmarks created from knowledge bases (Mallen et al., 2023; Yu et al., 2023) focus on a few domains and property types, and their questions are created from templates with limited patterns (Sun et al., 2023). Evaluation of LLMs on these benchmarks reveals a large gap from mastery of factual knowledge. However, it is unclear whether this gap is caused by design challenges, such as ambiguity of the questions and the presence of multiple plausible answers, which could lead to biased results. In this work, we introduce FACT-BENCH, a comprehensive factuality benchmark consisting of 20K question-answer (QA) pairs and featuring four characteristics: (1) Simplicity: we create simple questions from Wikidata triplets (subject, property, object) using Claude (https://www.anthropic.com/index/introducing-claude; specifically, claude-v1.3-100k) to elicit knowledge from LLMs. (2) Validity: to make sure the answers are grounded, we select triplets whose subject has a Wikipedia article and whose object also appears in the same article. (3) Diversity: FACT-BENCH covers 20 domains, 134 property types, and 3 answer types (entities, dates, and numbers). (4) Specificity: we manually select property types that are highly likely to yield unique answers and perform prompt engineering to generate specific questions.
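The triplet-to-question step above can be sketched as a prompt builder. The instruction wording and the `make_question_prompt` helper below are illustrative assumptions, not the authors' pipeline; the paper's actual question-generation prompt is given in its Table 5.

```python
# Sketch (assumed format): build an instruction asking an LLM to turn a
# Wikidata (subject, property, object) triplet into one specific question
# whose unique answer is the object, without revealing the answer itself.

def make_question_prompt(subject: str, prop: str, obj: str) -> str:
    return (
        "Write a single, specific question about the entity below.\n"
        f"Entity: {subject}\n"
        f"Property: {prop}\n"
        f"The question must have exactly one correct answer ({obj!r}), "
        "but must not mention that answer."
    )

prompt = make_question_prompt("Jacob Viner", "doctoral advisor", "F. W. Taussig")
print(prompt.splitlines()[1])  # Entity: Jacob Viner
```

The resulting string would then be sent to the question-generation model; the API call itself is omitted here.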
arXiv:2404.16164v1 [cs.CL] 24 Apr 2024

We benchmark 31 models across 10 model families on FACT-BENCH. Our results reveal that instruction-tuning hurts knowledge recall, as pretraining-only models consistently outperform their instruction-tuned counterparts. We observe positive effects of model scaling: for all model families, larger models outperform smaller ones across all metrics. However, the best performance, from GPT-4, still represents a large gap with the upper-bound. To identify where the gap lies, we conduct evaluation from multiple perspectives and find that LLMs struggle with long-tail entities and certain property types, consistent with the findings of Mallen et al. (2023) and Sun et al. (2023). In addition, we perform counterfactual in-context learning (ICL) experiments to examine the role of in-context exemplars. Our results indicate that counterfactual exemplars lead to significant degradation of factual knowledge recall for large models. By further decoupling model known and unknown knowledge, we find the degradation is attributed to exemplars that contradict a model's known knowledge, as well as to the number of such exemplars. Lastly, we fine-tune LLaMA-7B in different settings of known and unknown knowledge. In particular, fine-tuning on a model's known knowledge is beneficial and consistently outperforms fine-tuning on unknown and mixed knowledge, which empirically verifies the hypothesis in Schulman (2023) that fine-tuning on unknown knowledge teaches the model to hallucinate. Our contributions include: (1) A comprehensive benchmark to evaluate LLMs' ability to recall factual knowledge learned from pretraining. (2) A holistic assessment of the strengths and weaknesses of 31 LLMs, and of the factors that affect their recall of factual knowledge.
(3) Counterfactual ICL experiments to study the role of in-context exemplars, where we find that exemplars contradicting a model's known knowledge, and the number of such exemplars, lead to significant degradation of knowledge recall. (4) Fine-tuning experiments that show the advantage of using known knowledge over mixed and unknown knowledge.

2 FACT-Bench

2.1 Dataset Construction

We formulate the factuality evaluation task as closed-book question answering (Roberts et al., 2020), where a question is fed to the model without any context, and the model needs to leverage its parametric knowledge to answer the question. As simple as the setup is, we identify four challenges: (1) How to make the questions simple enough that they solely require knowledge recall rather than complex reasoning or multi-source information? (2) What types of questions are fair to ask? It is unfair to query knowledge that does not exist in the pretraining data of all LLMs. (3) How to make the questions diverse and representative? (4) How to make the questions specific enough that the answer is unique and grounded in some knowledge source? We address these challenges from the following four aspects.

Simplicity. Although LLMs have shown remarkable performance in solving composite questions (Wei et al., 2022; Zhou et al., 2023b), we aim to decouple the ability to reason from the ability to recall factual knowledge. Therefore, we focus on a simple QA setting to elicit knowledge from LLMs and build the questions from sampled Wikidata triplets (we use the dump from https://dumps.wikimedia.org/wikidatawiki/20230601/). The knowledge in Wikidata is in the format of (subject, property, object) triplets, where a simple question can be asked about the property of the subject, and the answer is the object.

Validity. To benchmark performance across various models, we take steps to make sure questions in FACT-BENCH are answerable from their pretraining corpora. Although the exact pretraining corpora are not disclosed for some LLMs, it is reasonable to assume that they have all been pretrained on Wikipedia articles. Therefore, we normalize the main content of the Wikipedia page for each subject (using the 20220301.en subset from the Hugging Face datasets library: https://huggingface.co/datasets/wikipedia) and only select triplets whose objects also appear in the same Wikipedia page.

Diversity. We diversify FACT-BENCH in five aspects: (1) Multi-domain. We leverage the knowledge domain categories from Freebase (Bollacker et al., 2008) and select triplets whose subject has a Wikipedia article page as well as a Freebase ID. We manually aggregate the 99 top-level domains from Freebase into 20 general domains, such as finance, travel, and literature. (2) Multi-answer-type. Unlike previous work, we include not only questions with textual answers, but also dates and numbers. (3) Multi-property-type. We manually select a total of 134 diverse properties, which is much more comprehensive than previous benchmarks. The full list of property types by answer type can be found in Appendix C. (4) Multi-knowledge-popularity. Following previous work (Mallen et al., 2023), we use the view count of the subject's Wikipedia article over the whole year of 2021 to approximate the popularity of knowledge and sample triplets from the top-25% and bottom-25% most popular triplet sets within each domain. (5) Diverse questions. Previous benchmarks typically use templates to construct questions from triplets, whereas we leverage an LLM to generate syntactically rich questions.

Specificity. A challenging issue for the open-domain QA task is that multiple plausible answers may exist for certain questions. We tackle this challenge at two levels. First, we select proper triplets.
For example, the triplet [Örjan Sandred, student of, Sven-David Sandström] may not be a good triplet, since anyone can have multiple teachers, whereas the triplet [Jacob Viner, doctoral advisor, F. W. Taussig] is more restricted. We manually select property types that are highly likely to yield unique answers. Second, we ask specific questions. Given a proper triplet, there can be multiple ways to ask a question. For example, given [Dan Wickline, place of birth, "Norwalk, California"], the question "Where was Dan Wickline born?" has multiple valid answers, such as Norwalk and California, even though the place of birth is unique for everyone. The question "What city and state was Dan Wickline born in?" is more specific. We test multiple prompts for question generation and select the one that works best for us (prompt shown in Table 5). Additionally, we filter out triplets whose subjects contain "()" in their Wikipedia titles, as "()" is used for disambiguation (https://en.wikipedia.org/wiki/Wikipedia:Article_titles#Disambiguation). We also remove triplets that share the same subject and property. Lastly, for specific numerical answers, we check the number together with its unit. For example, for length, we check for 500 kilometers or 500 km instead of just 500, and for temperature, we check for 98 °C instead of just 98.

2.2 Dataset Statistics and Evaluation Metrics

We manually select 90 properties with textual answers, 22 properties with date answers, and 22 properties with numerical answers. We randomly sample 1000 triplets from each of the 20 domains, where 500 are from the top-25% most popular triplets and 500 from the bottom-25%. The resulting 20k QA pairs are split into a training and an evaluation set, with sizes of 5K and 15K, respectively. The 5K training set is released to facilitate exemplar sampling for ICL and small-scale fine-tuning.
We keep the distribution consistent for any subset, i.e., there is an equal number of examples from each domain, of which half come from the top-25% and the other half from the bottom-25%. For evaluation, we use standard metrics for QA tasks such as SQuAD (Rajpurkar et al., 2016): Exact Match (EM) and F1 score. For answers that are entities, we collect their aliases from Wikidata as additional ground-truth answers. Dates are normalized to the format month, day, year. In zero-shot experiments, we observe that models that have not been instruction-tuned tend to generate verbose answers, which leads to low EM and F1 scores but does not necessarily mean that the prediction is wrong. Therefore, we introduce an additional metric, Contains, which simply checks whether any of the ground-truth answers appears in the prediction.

2.3 Dataset Validation

To validate that FACT-BENCH is of high quality after the triplet sampling and question generation efforts, we provide a solid estimate of the upper-bound through human validation. Concretely, we sample a 2k subset from the 15k evaluation set while keeping the distribution of questions consistent, and manually check the validity and specificity of the questions by examining supporting evidence from Wikipedia articles. We identify 201 questions from the 2k subset that are either ambiguous or not supported by Wikipedia, and replace them with valid ones. Empirically, the upper-bound is 90% for the 15k set and 100% for the 2k subset, which we denote as PREMIUM2K.

3 Benchmarking LLMs

3.1 Experimental Setup

We consider LLMs with different model architectures, sizes, and pretraining-only/instruction-tuned variants, and conduct zero-shot and few-shot ICL experiments.
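The three evaluation metrics described in Section 2.2 (EM, token-level F1, and Contains) can be sketched as follows. The normalization step is a simplified assumption in the style of the SQuAD evaluation (lowercasing, punctuation and article removal); alias handling and date normalization are omitted.

```python
# Sketch of EM / F1 / Contains over a prediction and a list of
# ground-truth answers (e.g., an entity plus its Wikidata aliases).
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, answers: list[str]) -> bool:
    return any(normalize(prediction) == normalize(a) for a in answers)

def f1(prediction: str, answer: str) -> float:
    pred, gold = normalize(prediction).split(), normalize(answer).split()
    common = Counter(pred) & Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def contains(prediction: str, answers: list[str]) -> bool:
    # A verbose prediction still scores if any ground-truth answer appears in it.
    return any(normalize(a) in normalize(prediction) for a in answers)

print(exact_match("The Royal Navy", ["Royal Navy"]))             # True
print(contains("He served in the Royal Navy.", ["Royal Navy"]))  # True
```

The Contains check is what rescues verbose zero-shot answers from pretraining-only models, as discussed above.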
Specifically, we benchmark GPT-4, GPT-3.5-turbo (we access the OpenAI APIs from the week of July 3rd to the week of July 17th, 2023), BLOOM/BLOOMZ (7B) (Scao et al., 2023), LLaMA (7B, 13B, 33B, 65B) (Touvron et al., 2023), Vicuna (7B, 13B, 33B) (Chiang et al., 2023), OpenLLaMA (7B, 13B) (Geng & Liu, 2023), FLAN-T5-XXL (11B) (Chung et al., 2022), T0++ (11B) (Sanh et al., 2021), UL2/FLAN-UL2 (20B) (Tay et al., 2023), Falcon/Falcon-instruct (7B, 40B, 180B) (Almazrouei et al., 2023), MPT/MPT-instruct (7B, 30B) (MosaicML NLP Team, 2023), Pythia (6.9B, 12B) (Biderman et al., 2023), and Mistral/Mistral-instruct (7B) (Jiang et al., 2023). For all LLMs, we use the same prompts, shown in Tables 6 and 7. The exemplars in the few-shot experiments are shared across models and are randomly sampled from the training set, covering all 3 answer types (entities, dates, and numbers). All our experiments are conducted on the PREMIUM2K subset to reduce the cost of running LLMs; for the full 15k evaluation set, we provide zero-shot and 10-shot results in Appendix B.4 for reference.

3.2 Results

Benchmarking results are presented in Table 1; full results including F1 scores can be found in Appendix B.3.

Table 1: Benchmarking results on PREMIUM2K (EM / Contains).

Model                  zero-shot      1-shot         6-shot         10-shot
GPT-4                  58.60 / 64.65  59.85 / 63.20  63.35 / 66.45  65.90 / 69.15
GPT-3.5-turbo          49.75 / 52.60  51.25 / 53.70  52.65 / 55.80  53.55 / 56.40
BLOOM-7.1B             03.20 / 20.30  18.95 / 19.95  17.85 / 19.90  18.15 / 19.75
BLOOMZ-7.1B            18.00 / 19.45  14.05 / 15.35  14.40 / 17.05  15.20 / 17.70
LLaMA-7B               14.65 / 35.20  33.25 / 34.15  35.55 / 37.15  35.05 / 36.75
LLaMA-13B              21.35 / 39.95  36.45 / 37.30  41.15 / 42.75  41.20 / 42.95
LLaMA-33B              27.25 / 46.55  45.25 / 46.70  48.30 / 50.30  48.90 / 51.10
LLaMA-65B              35.25 / 49.20  47.15 / 48.45  52.15 / 53.80  52.45 / 54.10
Vicuna-7B-v1.3         24.65 / 33.25  31.15 / 33.80  30.10 / 35.05  31.00 / 34.65
Vicuna-13B-v1.3        32.95 / 35.15  36.45 / 37.60  38.00 / 41.20  38.40 / 41.15
Vicuna-33B-v1.3        34.30 / 44.15  41.39 / 44.75  44.10 / 48.10  44.00 / 48.05
OpenLLaMA-7B           14.05 / 32.30  31.75 / 32.80  32.55 / 34.70  33.80 / 35.95
OpenLLaMA-13B          25.70 / 37.35  37.05 / 38.40  38.75 / 40.70  39.70 / 41.55
FLAN-T5-XXL (11B)      20.60 / 21.60  20.45 / 21.45  21.05 / 22.15  20.95 / 22.00
T0++ (11B)             16.05 / 21.25  16.75 / 19.95  16.80 / 20.00  17.05 / 19.85
UL2 (20B)              03.40 / 23.55  23.50 / 24.40  24.15 / 25.75  23.50 / 25.00
FLAN-UL2 (20B)         24.05 / 25.20  24.10 / 25.25  24.10 / 25.30  23.90 / 24.95
Falcon-7B              23.60 / 30.05  30.25 / 31.90  30.70 / 32.60  30.45 / 32.25
Falcon-7B-instruct     10.85 / 25.10  21.75 / 24.60  22.45 / 25.45  22.45 / 25.20
Falcon-40B             26.55 / 30.90  39.10 / 40.50  42.05 / 43.60  42.25 / 43.80
Falcon-40B-instruct    21.95 / 40.25  38.85 / 40.75  40.40 / 42.20  40.00 / 41.85
Falcon-180B            44.90 / 47.45  49.25 / 50.60  53.55 / 55.05  53.45 / 55.00
Falcon-180B-chat       39.95 / 47.10  47.00 / 49.30  49.05 / 51.50  49.30 / 51.60
MPT-7B                 03.45 / 30.35  28.85 / 29.85  29.75 / 31.15  30.45 / 31.55
MPT-7B-instruct        03.55 / 30.40  21.55 / 29.25  26.35 / 29.30  27.85 / 29.60
MPT-30B                25.30 / 35.00  34.35 / 35.55  35.80 / 37.55  36.05 / 37.75
MPT-30B-instruct       19.05 / 33.50  28.80 / 31.20  31.00 / 33.50  31.50 / 33.85
Pythia-6.9B            11.00 / 13.15  21.20 / 22.45  21.85 / 23.05  21.70 / 23.25
Pythia-12B             15.25 / 22.00  22.75 / 23.70  22.95 / 24.35  23.20 / 24.65
Mistral-7B             28.45 / 29.25  38.90 / 39.80  40.45 / 41.85  40.75 / 42.60
Mistral-7B-instruct    26.00 / 29.30  26.80 / 30.05  26.80 / 30.35  27.20 / 30.75

Large gap with upper-bound. GPT-4 outperforms all the other models we consider on our benchmark. However, its performance of 65.9% EM in the 10-shot setting still represents a large gap with the estimated upper-bound, which shows the challenge of mastering factuality, as well as the potential risks of using LLMs in certain tasks.

Positive effect of model scaling. Overall, we observe positive effects of model scaling. For all model families (i.e., LLaMA, Falcon, and MPT), larger model sizes translate to better performance across settings. Closed-source GPT models significantly outperform open-source models, with the notable exception of LLaMA-65B, which is competitive with GPT-3.5-turbo in the 10-shot setting.

Figure 1: 10-shot EM by knowledge popularity. Knowledge popularity is a strong predictor of knowledge recall: LLMs struggle with long-tail entities (Bottom-25%), as shown by the large gap with popular entities (Top-25%).

Figure 2: 10-shot EM by property type. LLMs do well on certain property types, such as country-related properties, while struggling on others, such as date-related properties. Due to space, results are shown for GPT and LLaMA models and the most common of the full set of 134 property types.

Figure 3: 10-shot EM by domain. Compared to knowledge popularity and property type, domain is less predictive of knowledge recall, as model performances across domains are flatter. Due to space, results are shown for GPT and LLaMA models.

Figure 4: 10-shot EM by answer type. LLMs are less capable on date and numerical knowledge. Due to space, results are shown for GPT and LLaMA models.

Negative impact of instruction-tuning. Comparing models in their pretraining-only form with their instruction-tuned counterparts, such as LLaMA/Vicuna, Falcon/Falcon-instruct, and MPT/MPT-instruct, in the few-shot setting all instruction-tuned models display inferior performance on all metrics. In the zero-shot setting, pretraining-only models tend to generate verbose answers, which leads to low EM and F1 scores, but the Contains metric reveals that they outperform their instruction-tuned counterparts. This result empirically verifies the hypothesis in Zhou et al. (2023a) that most knowledge of LLMs is learned during pretraining and alignment only helps with output style and format. We hypothesize that the alignment tax (Ouyang et al., 2022) from instruction-tuning leads to the performance drop. Overall, the best performance for each model family is achieved by few-shot ICL with the pretraining-only version of the model.

Diminishing returns from adding more exemplars. Going from zero-shot to 1-shot, all open-source models benefit greatly from learning the answer format of the in-context exemplar, which is reflected in their improved EM scores; this is especially the case for pretraining-only models. By the Contains metric, results are mixed. As k increases to 6, all models except BLOOM and T0++ show improvements over zero-shot and 1-shot. However, going from 6-shot to 10-shot, model performances mostly stay flat, with the exception of GPT-4, which improves by 2.7%.
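A k-shot closed-book prompt of the kind used in these experiments can be assembled as in the sketch below. The instruction wording and layout are assumptions for illustration; the paper's actual prompts appear in its Tables 6 and 7.

```python
# Sketch (assumed format): prepend k question-answer demonstrations, then
# the test question with an empty answer slot for the model to complete.

def build_prompt(exemplars: list[tuple[str, str]], question: str) -> str:
    lines = ["Answer the following questions concisely."]
    for q, a in exemplars:              # k in-context demonstrations
        lines.append(f"Q: {q}\nA: {a}")
    lines.append(f"Q: {question}\nA:")  # the model completes the answer
    return "\n\n".join(lines)

shots = [
    ("What city and state was Dan Wickline born in?", "Norwalk, California"),
    ("Who was Jacob Viner's doctoral advisor?", "F. W. Taussig"),
]
prompt = build_prompt(shots, "In which military branch did Henry Curtis serve?")
print(prompt.endswith("A:"))  # True
```

Sharing the same exemplar list across models, as done above, keeps few-shot comparisons fair.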
To further validate this, we run zero to 10 shots with the LLaMA models; results are shown in Figure 5, where the curves flatten after 3–5 exemplars.

Figure 5: LLaMA zero-to-10-shot results by EM.

3.3 Fine-grained Evaluation

To gain a better understanding of where the gap with the upper-bound lies, we examine model performance from multiple perspectives.

Knowledge popularity and property type are predictive of knowledge recall. Figure 1 shows 10-shot performance by knowledge popularity and Figure 2 by property type. We observe, similarly to Mallen et al. (2023), that knowledge popularity and property type are strong predictors of knowledge recall. LLMs struggle with long-tail entities (Bottom-25%), as shown by the large gap with popular entities (Top-25%). This result suggests that the knowledge distribution of the pretraining data (if known to the model user) can potentially be leveraged as a predictor for factual knowledge recall. LLMs do well on certain property types, such as country-related properties, while struggling on other property types, such as date-related properties. Further results by answer type (Figure 4) show that LLMs are less capable on date and numerical knowledge.

Domain is less predictive of knowledge recall. On the other hand, domain is not a strong predictor of model performance, as shown in Figure 3, where model performances across different domains are flatter compared to knowledge popularity levels and property types.

4 The Role of In-context Exemplars

Previous work (Min et al., 2022) suggests that ground-truth labels play an insignificant role in ICL: replacing ground-truth labels with random labels on classification and multi-choice tasks results in only a marginal loss of accuracy. Compared to classification and multi-choice tasks, the label space of our task is much larger.
We design a set of experiments to investigate how counterfactual in-context exemplars affect a model's ability to recall factual knowledge.

4.1 Counterfactual ICL

Experimental setup. In this set of experiments, we replace the ground-truth answers of our regular 10-shot exemplars with random answers chosen from the 5k training set, under the additional constraint that the random answer is chosen from within the same property type; we denote this corruption as shuffle. For example, we change the ground-truth answer for "In which military branch did Henry Curtis serve?" from "Royal Navy" to the counterfactual answer "United States Marine Corps". Without the prior knowledge required to answer the question, the new input-label pair looks reasonable but is in fact not factual.

Results. Figure 6 shows the results. Notably, LLaMA-65B experiences a major drop from 52.45% EM (regular 10-shot) to 29.45%, followed by Falcon-180B from 53.45% to 37.05% and LLaMA-33B from 48.9% to 43.2%, while the performance of smaller models remains flat and unaffected. In addition, we observe that instruction-tuned models are less affected by counterfactual exemplars than their pretraining-only counterparts: compared to LLaMA, Falcon, and MPT, the drop is less significant for the Vicuna, Falcon-chat, and MPT-instruct models, respectively.

4.2 Counterfactual ICL with known and unknown knowledge

The results in the previous section show that counterfactual exemplars lead to significant degradation of factual knowledge recall for large models. However, it is not clear what factors, besides model scale, lead to this behavior. LLaMA-65B, Falcon-180B, and LLaMA-33B are the three most capable open-source models on our benchmark (Table 1). Since the 10 in-context exemplars are randomly sampled, it is expected that these three models have more knowledge about the exemplars than the other models. Therefore, we further decouple known and unknown knowledge in the exemplars to study their role.
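The shuffle corruption described in Section 4.1 can be sketched as follows; the function name and data layout are illustrative, not the authors' code.

```python
# Sketch: replace each exemplar's ground-truth answer with a random answer
# drawn from a *different* exemplar of the same property type.
import random

def shuffle_within_property(exemplars, seed=0):
    """exemplars: list of dicts with 'question', 'answer', 'property' keys."""
    rng = random.Random(seed)
    by_prop = {}
    for ex in exemplars:
        by_prop.setdefault(ex["property"], []).append(ex["answer"])
    corrupted = []
    for ex in exemplars:
        pool = [a for a in by_prop[ex["property"]] if a != ex["answer"]]
        fake = rng.choice(pool) if pool else ex["answer"]
        corrupted.append({**ex, "answer": fake})  # originals left untouched
    return corrupted

data = [
    {"question": "In which military branch did Henry Curtis serve?",
     "answer": "Royal Navy", "property": "military branch"},
    {"question": "In which military branch did John Basilone serve?",
     "answer": "United States Marine Corps", "property": "military branch"},
]
out = shuffle_within_property(data)
print(out[0]["answer"])  # United States Marine Corps
```

Because the replacement stays within the property type, the corrupted pair remains plausible to a reader without the relevant prior knowledge, which is the point of the manipulation.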
Experimental setup. We conduct controlled experiments on the LLaMA models and text-davinci-002 (in Wei et al. (2023), experiments using in-context exemplars with flipped labels show that text-davinci-002 experiences the largest drop on binary classification tasks, so we include this model here). To approximate model known and unknown knowledge, we sample k = 32 questions that are correctly answered by each model as known knowledge, and k = 32 incorrectly answered questions as unknown knowledge. We corrupt the exemplars with the same shuffling method as in the previous experiment.

Contradicting LLMs' known knowledge teaches them to lie. Results are shown in Figure 7. Comparing known-shuffle with unknown-shuffle in the 10-shot setting, LLaMA-65B drops from 52.45% EM (regular 10-shot) to 26.60% with known-shuffle, while the drop with unknown-shuffle is much less significant, from 52.45% EM to 42.90%. For LLaMA-33B, performance drops from 48.30% to 42.20% with known-shuffle and from 48.30% to 46.25% with unknown-shuffle. For the larger text-davinci-002 model, performances are near identical with known-shuffle and unknown-shuffle (41.60% vs. 41.30%). However, as we increase k, the gap between known- and unknown-shuffle becomes increasingly wide (i.e., 34.10% vs. 42.80% in 20-shot and 29.95% vs. 40.55% in 32-shot). A similar effect from increasing k is observed for LLaMA-33B and LLaMA-65B. Notably, as k increases, the smaller LLaMA-13B also starts experiencing sharp drops with known-shuffle: in the 32-shot setting, its performance drops from 41.29% (regular 10-shot) to 24.45%, while it remains flat with unknown-shuffle at 40.80%. For the smallest LLaMA-7B, performance stays flat across different settings.

Figure 6: Comparison of regular 10-shot and counterfactual 10-shot by Exact Match. LLaMA-65B experiences a major drop with counterfactual exemplars, followed by Falcon-180B and LLaMA-33B, while the performance of smaller models remains flat and unaffected.

Figure 7: Counterfactual few-shot with known and unknown knowledge, evaluated by Exact Match. The results show that the degradation in factual knowledge recall is primarily due to exemplars that contradict models' known knowledge, and to the number of such exemplars.

The results suggest that the degradation in factual knowledge recall is primarily due to exemplars that contradict models' known knowledge, i.e., counterfactual ICL with known knowledge is essentially teaching LLMs to lie, leading to unexpected results. Additionally, the number of counterfactual exemplars plays a prominent role: as k increases, models experience sharper drops, and even smaller models (LLaMA-13B in our experiments) can suffer significant performance drops. In practical applications, it is therefore important to pair in-context exemplars with the correct answers, if known to the model, in order to maximally elicit their parametric knowledge.
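The known/unknown approximation used above (a fact counts as "known" to a model if the model answers its question correctly) can be sketched as follows; the `model_answer` callable is a toy stand-in for querying the actual LLM.

```python
# Sketch: partition questions into known/unknown knowledge by checking
# whether a model's answer matches the gold answer.

def split_known_unknown(questions, model_answer, is_correct):
    known, unknown = [], []
    for q in questions:
        target = known if is_correct(model_answer(q["question"]), q["answer"]) else unknown
        target.append(q)
    return known, unknown

# Toy stand-in model that only "knows" one fact.
facts = {"Who was Jacob Viner's doctoral advisor?": "F. W. Taussig"}
model_answer = lambda q: facts.get(q, "I don't know")
is_correct = lambda pred, gold: pred.strip().lower() == gold.strip().lower()

qs = [
    {"question": "Who was Jacob Viner's doctoral advisor?", "answer": "F. W. Taussig"},
    {"question": "Where was Dan Wickline born?", "answer": "Norwalk, California"},
]
known, unknown = split_known_unknown(qs, model_answer, is_correct)
print(len(known), len(unknown))  # 1 1
```

In the actual experiments the correctness check would be the benchmark's EM metric rather than the strict string comparison used here.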
Finally, we observe comparable performances for known-unshuffle and unknown-unshuffle across different models.

5 Fine-tuning

In this section, we examine how fine-tuning affects a model's ability to recall factual knowledge, using LLaMA-7B for the experiments.

5.1 Regular fine-tuning

Experimental setup. We fine-tune LLaMA-7B on the 5k training set and sample 4k additional examples, using the same procedure described in Section 2, as the validation set. We train for 40 steps, at which point training stabilizes based on validation loss, and report results on the PREMIUM2K subset. For model input and output, we use the same input-label format as in the prompting experiments (i.e., the input consists of an instruction and a question, and the output is the answer to the question).

Results. In the zero-shot setting, we compare models using the Contains metric instead of EM, since the predictions of pretraining-only LLaMA are verbose. Table 2 shows that our fine-tuned LLaMA underperforms Vicuna, and both models underperform the pretraining-only LLaMA. The results of this experiment further verify the hypothesis in Zhou et al. (2023a) that a model's knowledge is mostly learned from pretraining, and instruction-tuning only helps align the answer format.

Table 2: Comparison of LLaMA, Vicuna, and our fine-tuned LLaMA (zero-shot).

Model                   EM     F1     Contains
LLaMA-7B                14.65  27.66  35.20
Vicuna-7B               24.65  33.33  33.25
LLaMA-7B (fine-tuned)   28.75  35.22  29.85

5.2 Counterfactual fine-tuning

Experimental setup. In the counterfactual ICL experiments (Section 4), our results indicate that LLaMA-7B is mostly unaffected by counterfactual exemplars. We set up similar experiments in the fine-tuning setting, where we corrupt the training data with inner-property shuffling.

Results. Table 3 shows the results. Factuality of the training data plays a critical role for fine-tuning. The model can recover part of its capability as training goes on.
However, its performance is still significantly worse than that from regular fine-tuning (11.25% EM vs. 29.1%).

Table 3: Fine-tuning LLaMA-7B with counterfactual knowledge (zero-shot).

Setup for fine-tuning        EM     F1     Contains
Regular fine-tuning          28.75  35.22  29.85
Counterfactual fine-tuning   10.75  15.61  12.45

5.3 Fine-tuning with known, unknown and mixed knowledge

Experimental setup. We fine-tune LLaMA-7B with three types of factual knowledge separately: (1) known, (2) unknown, and (3) mixed. To approximate known and unknown knowledge, we use the same method described in Section 4.2. We use our evaluation set (excluding PREMIUM2K) as the candidate pool for selecting training data, since we need to distinguish between known and unknown knowledge and the 5k training set is insufficient. We then randomly choose 2.5k training examples for known and unknown knowledge, respectively.

Results. Table 4 shows the results. Training with known knowledge consistently outperforms training with mixed knowledge, and training with unknown knowledge leads to the worst performance. The results verify the claim in Schulman (2023) that fine-tuning on knowledge unknown to the model teaches the model to hallucinate.

Table 4: Fine-tuning LLaMA-7B with known, unknown and mixed knowledge (zero-shot).

Setup for fine-tuning   EM     F1     Contains
Known knowledge         33.00  39.54  33.85
Unknown knowledge       27.55  34.10  28.75
Mixed knowledge         29.30  36.36  30.25

6 Related Work

Factuality Benchmarks. Question answering datasets such as Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), WebQuestions (Berant et al., 2013), and TruthfulQA (Lin et al., 2022) have been used to evaluate the factuality of language models. LAMA (Petroni et al., 2019, 2020) leverages 4 knowledge sources and converts fact triplets into cloze-style questions. More recent works, such as POPQA (Mallen et al., 2023) and KoLA (Yu et al., 2023), construct benchmarks from Wikidata using templates and cover a limited set of property types and domains.
Head-to-Tail (Sun et al., 2023) builds its benchmark from DBpedia (Auer et al., 2007), with a focus on evaluating LLMs on knowledge at different popularity levels. Compared to previous benchmarks, FACT-BENCH is more diverse and representative, covering 134 property types, 20 general domains, and 3 answer types. We strictly filter Wikidata triplets and generate valid, specific questions whose answers are grounded in Wikipedia.

The role of in-context exemplars. Min et al. (2022) studies the role of in-context exemplars and shows that ground-truth labels are not required for ICL. Yoo et al. (2022) revisits these findings and proposes additional metrics that reveal the importance of ground-truth labels. Wei et al. (2023) conducts similar experiments and finds that overriding semantic priors is an emergent ability of large models. Our counterfactual ICL experiments corroborate this finding: large models suffer a significant degradation in knowledge recall. We additionally find that contradicting a model's known knowledge is the primary factor behind this behavior, along with the number of such exemplars. Pan et al. (2023) separates task recognition from task learning in studying how ICL leverages demonstrations, and finds that task recognition does not drastically improve with model scaling and more exemplars, while task learning does.

7 Conclusion

In this paper, we introduce FACT-BENCH, a comprehensive benchmark for evaluating the factual knowledge of LLMs. We conduct experiments on 31 models from 10 model families and investigate the factors that affect their knowledge recall. We find that instruction-tuning can hurt knowledge recall. In studying the effects of counterfactual in-context exemplars, we highlight the roles of known and unknown knowledge. We also conduct fine-tuning experiments, where we highlight the importance of factuality in the training data.
We hope that the release of our benchmark will be beneficial to the community and help facilitate future research.

8 Limitations

In this work, we strive to benchmark and analyze as many popular LLMs as our resources allow. However, due to the fast pace at which new models are released and our limited resources, we pick representative models that were available at the time of our experimentation. Additionally, distinguishing between a model's known and unknown knowledge is an ongoing research topic; in Sections 4.2 and 5, we use whether the model answers a question correctly as a proxy for known and unknown knowledge.
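The correctness proxy described above could be sketched as follows; `split_known_unknown` and `model_answer` are hypothetical names introduced for illustration, not part of any released code, and the containment check stands in for whichever correctness criterion is used.

```python
def split_known_unknown(model_answer, qa_pairs):
    """Partition QA pairs by whether the model already answers them correctly.

    model_answer: a callable mapping a question string to the model's generated
    answer (e.g., a greedy-decoding call); hypothetical, for illustration only.
    qa_pairs: iterable of (question, gold_answer) tuples.
    Returns (known, unknown) lists of QA pairs.
    """
    known, unknown = [], []
    for question, gold in qa_pairs:
        pred = model_answer(question)
        # Case-insensitive containment as the correctness proxy.
        if gold.lower() in pred.lower():
            known.append((question, gold))
        else:
            unknown.append((question, gold))
    return known, unknown
```

Under this proxy, "known" only means the model answered correctly under one decoding setting, which is one reason the distinction remains an open research topic.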