| { |
| "url": "http://arxiv.org/abs/2404.16745v1", |
| "title": "Statistical Inference for Covariate-Adjusted and Interpretable Generalized Factor Model with Application to Testing Fairness", |
| "abstract": "In the era of data explosion, statisticians have been developing\ninterpretable and computationally efficient statistical methods to measure\nlatent factors (e.g., skills, abilities, and personalities) using large-scale\nassessment data. In addition to understanding the latent information, the\ncovariate effect on responses controlling for latent factors is also of great\nscientific interest and has wide applications, such as evaluating the fairness\nof educational testing, where the covariate effect reflects whether a test\nquestion is biased toward certain individual characteristics (e.g., gender and\nrace) taking into account their latent abilities. However, the large sample\nsize, substantial covariate dimension, and great test length pose challenges to\ndeveloping efficient methods and drawing valid inferences. Moreover, to\naccommodate the commonly encountered discrete types of responses, nonlinear\nlatent factor models are often assumed, bringing further complexity to the\nproblem. To address these challenges, we consider a covariate-adjusted\ngeneralized factor model and develop novel and interpretable conditions to\naddress the identifiability issue. Based on the identifiability conditions, we\npropose a joint maximum likelihood estimation method and establish estimation\nconsistency and asymptotic normality results for the covariate effects under a\npractical yet challenging asymptotic regime. Furthermore, we derive estimation\nand inference results for latent factors and the factor loadings. We illustrate\nthe finite sample performance of the proposed method through extensive\nnumerical studies and an application to an educational assessment dataset\nobtained from the Programme for International Student Assessment (PISA).", |
| "authors": "Jing Ouyang, Chengyu Cui, Kean Ming Tan, Gongjun Xu", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "stat.ME", |
| "cats": [ |
| "stat.ME" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "In the era of data explosion, statisticians have been developing\ninterpretable and computationally efficient statistical methods to measure\nlatent factors (e.g., skills, abilities, and personalities) using large-scale\nassessment data. In addition to understanding the latent information, the\ncovariate effect on responses controlling for latent factors is also of great\nscientific interest and has wide applications, such as evaluating the fairness\nof educational testing, where the covariate effect reflects whether a test\nquestion is biased toward certain individual characteristics (e.g., gender and\nrace) taking into account their latent abilities. However, the large sample\nsize, substantial covariate dimension, and great test length pose challenges to\ndeveloping efficient methods and drawing valid inferences. Moreover, to\naccommodate the commonly encountered discrete types of responses, nonlinear\nlatent factor models are often assumed, bringing further complexity to the\nproblem. To address these challenges, we consider a covariate-adjusted\ngeneralized factor model and develop novel and interpretable conditions to\naddress the identifiability issue. Based on the identifiability conditions, we\npropose a joint maximum likelihood estimation method and establish estimation\nconsistency and asymptotic normality results for the covariate effects under a\npractical yet challenging asymptotic regime. Furthermore, we derive estimation\nand inference results for latent factors and the factor loadings. We illustrate\nthe finite sample performance of the proposed method through extensive\nnumerical studies and an application to an educational assessment dataset\nobtained from the Programme for International Student Assessment (PISA).", |
| "main_content": "Introduction Latent factors, often referred to as hidden factors, play an increasingly important role in modern statistics to analyze large-scale complex measurement data and find wide-ranging applications across various scientific fields, including educational assessments (Reckase 2009, Hambleton & Swaminathan 2013), macroeconomics forecasting (Stock & Watson 2002, Lam et al. 2011), and biomedical diagnosis (Carvalho et al. 2008, Frichot et al. 2013). For instance, in educational testing and social sciences, latent factors are used to model unobservable traits of respondents, such as skills, personality, and attitudes (von Davier Matthias 2008, Reckase 2009); in biology and genomics, latent factors are used to capture underlying genetic factors, gene expression patterns, or hidden biological mechanisms (Carvalho et al. 2008, Frichot et al. 2013). To uncover the latent factors and analyze large-scale complex data, various latent factor models have been developed and extensively investigated in the existing literature (Bai 2003, Bai & Li 2012, Fan et al. 2013, Chen et al. 2023b, Wang 2022). In addition to measuring the latent factors, the observed covariates and the covariate effects conditional on the latent factors hold significant scientific interpretations in many applications (Reboussin et al. 2008, Park et al. 2018). One important application is testing fairness, which receives increasing attention in the fields of education, psychology, and social sciences (Candell & Drasgow 1988, Belzak & Bauer 2020, Chen et al. 2023a). In educational assessments, testing fairness, or measurement invariance, implies that groups from diverse backgrounds have the same probability of endorsing the test items, controlling for individual proficiency levels (Millsap 2012). Testing fairness is not only of scientific interest to psychometricians and statisticians but also attracts widespread public awareness (Toch 1984). 
In the era of rapid technological advancement, international large-scale educational assessments are becoming increasingly prevalent. One example is the Programme for International Student Assessment (PISA), a large-scale international assessment with substantial sample size and test length (OECD 2019). PISA assesses the knowledge and skills of 15-year-old students in the mathematics, reading, and science domains (OECD 2019). In PISA 2018, over 600,000 students from 37 OECD (Organisation for Economic Co-operation and Development) countries and 42 partner countries/economies participated in the test (OECD 2019). To assess the fairness of the test designs in such large-scale assessments, it is important to develop modern and computationally efficient methodologies for interpreting the effects of observed covariates (e.g., gender and race) on the item responses, controlling for the latent factors. However, the discrete nature of the item responses, the increasing sample size, and the large number of test items in modern educational assessments pose great challenges for estimation and inference for the covariate effects as well as for the latent factors. In educational and psychological measurement, this testing-fairness issue (measurement invariance) is typically assessed by differential item functioning (DIF) analysis of item response data, which aims to detect DIF items: a DIF item has a response distribution that depends not only on the measured latent factors but also on respondents' covariates (such as group membership). Although many statistical methods have been developed for DIF analysis, existing methods often require domain knowledge to pre-specify DIF-free items, namely anchor items, which may be misspecified and lead to biased estimation and inference results (Thissen 1988, Tay et al. 2016).
To address this limitation, researchers developed item purification methods to iteratively select anchor items through stepwise selection models (Candell & Drasgow 1988, Fidalgo et al. 2000, Kopf et al. 2015). More recently, tree-based methods (Tutz & Berger 2016), regularized estimation methods (Bauer et al. 2020, Belzak & Bauer 2020, Wang et al. 2023), item pair functioning methods (Bechger & Maris 2015), and many other non-anchor-based methods have been proposed. However, these non-anchor-based methods do not provide valid statistical inference guarantees for testing the covariate effects. It remains an open problem to perform statistical inference on the covariate effects and the latent factors in educational assessments. To address this open problem, we study statistical estimation and inference for a general family of covariate-adjusted nonlinear factor models, which includes the popular factor models for binary, count, continuous, and mixed-type data that commonly occur in educational assessments. The nonlinear model setting poses great challenges for estimation and statistical inference. Despite recent progress in the factor analysis literature, most existing studies focus on estimation and inference under linear factor models (Stock & Watson 2002, Bai & Li 2012, Fan et al. 2013) and covariate-adjusted linear factor models (Leek & Storey 2008, Wang et al. 2017, Gerard & Stephens 2020, Bing et al. 2024). The techniques employed in linear factor model settings are not applicable here due to the nonlinearity inherent in the general models under consideration. Recently, several researchers have also investigated parameter estimation and inference for generalized linear factor models (Chen et al. 2019, Wang 2022, Chen et al. 2023b). However, they either focus only on the overall consistency properties of the estimation or do not incorporate covariates into the models.
In concurrent work motivated by applications in single-cell omics, Du et al. (2023) considered a generalized linear factor model with covariates and studied its inference theory, where the latent factors are used as surrogate variables to control for unmeasured confounding. However, they imposed relatively stringent assumptions on the sparsity of the covariate effects and the dimension of the covariates, and their theoretical results also rely on data splitting. Moreover, Du et al. (2023) focused only on statistical inference on the covariate effects, while inference on the factors and loadings, which is often of great interest in educational assessments, was left unexplored. Establishing inference results for covariate effects and latent factors simultaneously under nonlinear models remains an open and challenging problem, due to the identifiability issues arising from the incorporation of covariates and the nonlinearity of the considered general models. To overcome these issues, we develop a novel framework for performing statistical inference on all model parameters and latent factors under a general family of covariate-adjusted generalized factor models. Specifically, we propose a set of interpretable and practical identifiability conditions for identifying the model parameters, and further incorporate these conditions into the development of a computationally efficient likelihood-based estimation method. Under these identifiability conditions, we develop new techniques to address the aforementioned theoretical challenges and obtain estimation consistency and asymptotic normality for the covariate effects under a practical yet challenging asymptotic regime. Furthermore, building upon these results, we establish estimation consistency and provide valid inference results for the factor loadings and latent factors that are often of scientific interest, advancing our theoretical understanding of nonlinear latent factor models. The rest of the paper is organized as follows.
In Section 2, we introduce the model setup of the covariate-adjusted generalized factor model. Section 3 discusses the associated identifiability issues and further presents the proposed identifiability conditions and estimation method. Section 4 establishes the theoretical properties for not only the covariate effects but also the latent factors and factor loadings. In Section 5, we perform extensive numerical studies to illustrate the performance of the proposed estimation method and the validity of the theoretical results. In Section 6, we analyze an educational testing dataset from the Programme for International Student Assessment (PISA) and identify test items that may lead to potential bias among different test-takers. We conclude with some potential future directions in Section 7.

Notation: For any integer $N$, let $[N] = \{1, \ldots, N\}$. For any set $S$, let $\#S$ denote its cardinality. For any vector $r = (r_1, \ldots, r_l)^\top$, let $\|r\|_0 = \#\{j : r_j \neq 0\}$, $\|r\|_\infty = \max_{j=1,\ldots,l} |r_j|$, and $\|r\|_q = (\sum_{j=1}^l |r_j|^q)^{1/q}$ for $q \geq 1$. We define $1^{(y)}_x$ to be the $y$-dimensional vector whose $x$-th entry is 1 and all other entries are 0. For any symmetric matrix $M$, let $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ denote the smallest and largest eigenvalues of $M$. For any matrix $A = (a_{ij})_{n \times l}$, let $\|A\|_{\infty,1} = \max_{j=1,\ldots,l} \sum_{i=1}^n |a_{ij}|$ be the maximum absolute column sum, $\|A\|_{1,\infty} = \max_{i=1,\ldots,n} \sum_{j=1}^l |a_{ij}|$ be the maximum absolute row sum, $\|A\|_{\max} = \max_{i,j} |a_{ij}|$ be the maximum absolute entry, $\|A\|_F = (\sum_{i=1}^n \sum_{j=1}^l |a_{ij}|^2)^{1/2}$ be the Frobenius norm of $A$, and $\|A\| = \sqrt{\lambda_{\max}(A^\top A)}$ be the spectral norm of $A$. Let $\|\cdot\|_{\varphi_1}$ be the subexponential norm. Define $A_v = \mathrm{vec}(A) \in \mathbb{R}^{nl}$ to be the vectorization of the matrix $A \in \mathbb{R}^{n \times l}$. Finally, we denote by $\otimes$ the Kronecker product.
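As a quick reference, the matrix norms defined above correspond to simple array reductions; a small numeric illustration with an arbitrary $3 \times 2$ matrix:

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0], [-5.0, 6.0]])   # an arbitrary 3x2 example

norm_inf1 = np.abs(A).sum(axis=0).max()   # ||A||_{inf,1}: max absolute column sum
norm_1inf = np.abs(A).sum(axis=1).max()   # ||A||_{1,inf}: max absolute row sum
norm_max = np.abs(A).max()                # ||A||_max: largest absolute entry
norm_F = np.sqrt((A ** 2).sum())          # Frobenius norm
norm_spec = np.linalg.norm(A, 2)          # spectral norm: sqrt(lambda_max(A'A))

print(norm_inf1, norm_1inf, norm_max)     # 12.0 11.0 6.0
```

The spectral norm always sits between the largest absolute entry and the Frobenius norm, which is a handy sanity check.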
2 Model Setup

Consider $n$ independent subjects with $q$ measured responses and $p^*$ observed covariates. For the $i$th subject, let $Y_i \in \mathbb{R}^q$ be a $q$-dimensional vector of responses corresponding to $q$ measurement items and $X_i^c \in \mathbb{R}^{p^*}$ be a $p^*$-dimensional vector of observed covariates. Moreover, let $U_i$ be a $K$-dimensional vector of latent factors representing unobservable traits such as skills and personalities, where the dimension $K$ is assumed to be specified, as in many educational assessments. We assume that the $q$-dimensional responses $Y_i$ are conditionally independent given $X_i^c$ and $U_i$. Specifically, we model the $j$th response for the $i$th subject, $Y_{ij}$, by the following conditional distribution:
$$Y_{ij} \sim p_{ij}(y \mid w_{ij}), \quad \text{where } w_{ij} = \beta_{j0} + \gamma_j^\top U_i + \beta_{jc}^\top X_i^c. \quad (1)$$
Here $\beta_{j0} \in \mathbb{R}$ is the intercept parameter, $\beta_{jc} = (\beta_{j1}, \ldots, \beta_{jp^*})^\top \in \mathbb{R}^{p^*}$ are the coefficients of the observed covariates, and $\gamma_j = (\gamma_{j1}, \ldots, \gamma_{jK})^\top \in \mathbb{R}^K$ are the factor loadings. For ease of presentation, we write $\beta_j = (\beta_{j0}, \beta_{jc}^\top)^\top$ as the assembled vector of intercept and coefficients and define $X_i = (1, (X_i^c)^\top)^\top$ with dimension $p = p^* + 1$, which gives $w_{ij} = \gamma_j^\top U_i + \beta_j^\top X_i$. Given $w_{ij}$, the function $p_{ij}$ is a specified probability density (mass) function. We consider a general and flexible modeling framework by allowing different types of $p_{ij}$ functions to model diverse response data in wide-ranging applications, such as binary item response data in educational and psychological assessments (Mellenbergh 1994, Reckase 2009) and mixed types of data in educational and macroeconomic applications (Rijmen et al. 2003, Wang 2022); see also Remark 1. A schematic diagram of the proposed model setup is presented in Figure 1.
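To make model (1) concrete, the following sketch simulates binary responses from a covariate-adjusted logistic factor model; all dimensions and parameter values are hypothetical choices for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, K, p_star = 500, 20, 2, 3  # subjects, items, factors, covariates (hypothetical)

U = rng.normal(size=(n, K))                   # latent factors U_i
Xc = rng.normal(size=(n, p_star))             # observed covariates X_i^c
Gamma = rng.uniform(0.5, 1.5, size=(q, K))    # loadings gamma_j
beta0 = rng.normal(size=q)                    # intercepts beta_j0
Bc = np.zeros((q, p_star))                    # covariate (DIF) effects beta_jc
Bc[:2] = 0.8                                  # only the first two items are biased

# w_ij = beta_j0 + gamma_j' U_i + beta_jc' X_i^c, as in equation (1)
W = beta0[None, :] + U @ Gamma.T + Xc @ Bc.T
P = 1.0 / (1.0 + np.exp(-W))                  # logistic link: P(Y_ij = 1 | w_ij)
Y = rng.binomial(1, P)                        # binary item responses

print(Y.shape)  # (500, 20)
```

Rows of `Y` are subjects and columns are items; only items with nonzero rows of `Bc` exhibit DIF.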
Figure 1: A schematic diagram of the proposed model in (1). The subscript $i$ indicates the $i$th subject, out of $n$ independent subjects. The response variable $Y_{ij}$ can be discrete or continuous.

Our proposed covariate-adjusted generalized factor model in (1) is motivated by applications in testing fairness. In the context of educational assessment, a subject's responses to questions depend on latent factors $U_i$, such as students' abilities and skills, and are potentially affected by observed covariates $X_i^c$, such as age, gender, and race (Linda M. Collins 2009). The intercept $\beta_{j0}$ is often interpreted as the difficulty level of item $j$ and referred to as the difficulty parameter in psychometrics (Hambleton & Swaminathan 2013, Reckase 2009). The capability of item $j$ to further differentiate individuals based on their latent abilities is captured by $\gamma_j = (\gamma_{j1}, \ldots, \gamma_{jK})^\top$, whose entries are also referred to as discrimination parameters (Hambleton & Swaminathan 2013, Reckase 2009). The effects of the observed covariates $X_i^c$ on the subject's response to the $j$th question, $Y_{ij}$, conditional on the latent abilities $U_i$, are captured by $\beta_{jc} = (\beta_{j1}, \ldots, \beta_{jp^*})^\top$, which are referred to as DIF effects in psychometrics (Holland & Wainer 2012). This setting gives rise to the fairness problem of validating whether the response probabilities to the measurements differ across genders, races, or countries of origin while holding abilities and skills at the same level. Given the observed data from $n$ independent subjects, we are interested in studying the relationships between $Y_i$ and $X_i^c$ after adjusting for the latent factors $U_i$ in (1).
Specifically, our goal is to test the statistical hypothesis $H_0: \beta_{js} = 0$ versus $H_a: \beta_{js} \neq 0$ for $s \in [p^*]$, where $\beta_{js}$ is the regression coefficient for the $s$th covariate and the $j$th response, after adjusting for the latent factor $U_i$. In many applications, the latent factors and factor loadings also carry important scientific interpretations, such as students' abilities and test items' characteristics. This motivates us to perform statistical inference on the parameters $\beta_{j0}$, $\gamma_j$, and $U_i$ as well.

Remark 1. The proposed model setup (1) is general and flexible, as various functions $p_{ij}$ can be used to model diverse types of response data in wide-ranging applications. For instance, in educational assessments, the logistic factor model (Reckase 2009) with $p_{ij}(y \mid w_{ij}) = \exp(w_{ij} y)/\{1 + \exp(w_{ij})\}$, $y \in \{0, 1\}$, and the probit factor model (Birnbaum 1968) with $p_{ij}(y \mid w_{ij}) = \{\Phi(w_{ij})\}^y \{1 - \Phi(w_{ij})\}^{1-y}$, $y \in \{0, 1\}$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution, are widely used to model binary responses indicating correct or incorrect answers to the test items. Such models are often referred to as item response theory models (Reckase 2009). In economics and finance, linear factor models with $p_{ij}(y \mid w_{ij}) \propto \exp\{-(y - w_{ij})^2/(2\sigma^2)\}$, where $y \in \mathbb{R}$ and $\sigma^2$ is the variance parameter, are commonly used to model continuous responses, such as GDP, interest rates, and consumer indices (Bai 2003, Bai & Li 2012, Stock & Watson 2016). Moreover, depending on the observed responses, different types of functions $p_{ij}$ can be used to model the response from each item $j \in [q]$. Therefore, mixed types of data, which are common in educational measurement (Rijmen et al. 2003) and macroeconomic applications (Wang 2022), can also be analyzed by our proposed model. Remark 2.
In addition to testing fairness, the considered model finds wide-ranging real-world applications. For instance, in genomics, gene expression status may depend on unmeasured confounders or latent biological factors and may also be associated with variables of interest, including medical treatment, disease status, and gender (Wang et al. 2017, Du et al. 2023). The covariate-adjusted generalized factor model helps to investigate the effects of the variables of interest on gene expression, controlling for the latent factors (Du et al. 2023). This setting also applies to other scenarios, such as brain imaging, where the activity of a brain region may depend on the measurable spatial distance from neighboring regions as well as on latent structures due to unmodeled factors (Leek & Storey 2008).

To analyze large-scale measurement data, we aim to develop a computationally efficient estimation method and to provide inference theory for quantifying uncertainty in the estimation. Motivated by recent work in high-dimensional factor analysis, we treat the latent factors as fixed parameters and apply a joint maximum likelihood method for estimation (Bai 2003, Fan et al. 2013, Chen et al. 2020). Specifically, we let the collection of item responses from the $n$ independent subjects be $Y = (Y_1, \ldots, Y_n)^\top_{n \times q}$ and the design matrix of observed covariates be $X = (X_1, \ldots, X_n)^\top_{n \times p}$. For the model parameters, the discrimination parameters for all $q$ items are denoted by $\Gamma = (\gamma_1, \ldots, \gamma_q)^\top_{q \times K}$, while the intercepts and covariate effects for all $q$ items are denoted by $B = (\beta_1, \ldots, \beta_q)^\top_{q \times p}$. The latent factors of all $n$ subjects are $U = (U_1, \ldots, U_n)^\top_{n \times K}$.
Then, the joint log-likelihood function can be written as
$$L(Y \mid \Gamma, U, B, X) = \frac{1}{nq} \sum_{i=1}^n \sum_{j=1}^q l_{ij}(\beta_{j0} + \gamma_j^\top U_i + \beta_{jc}^\top X_i^c), \quad (2)$$
where $l_{ij}(w_{ij}) = \log p_{ij}(Y_{ij} \mid w_{ij})$ is the individual log-likelihood function with $w_{ij} = \beta_{j0} + \gamma_j^\top U_i + \beta_{jc}^\top X_i^c$. We aim to obtain $(\hat{\Gamma}, \hat{U}, \hat{B})$ by maximizing the joint likelihood function $L(Y \mid \Gamma, U, B, X)$. While the estimators can be computed efficiently by maximizing the joint likelihood function through an alternating maximization algorithm (Collins et al. 2002, Chen et al. 2019), challenges emerge for performing statistical inference on the model parameters.

• One challenge concerns model identifiability. Without additional constraints, the covariate effects are not identifiable due to the incorporation of covariates and their potential dependence on the latent factors. The latent factors and factor loadings encounter similar identifiability issues as in traditional factor analysis (Bai & Li 2012, Fan et al. 2013). Ensuring that the model is statistically identifiable is the fundamental prerequisite for achieving model reliability and making valid inferences (Allman et al. 2009, Gu & Xu 2020).

• Another challenge arises from the nonlinearity of our proposed model. In the existing literature, most studies focus on statistical inference for our proposed setting in the context of linear models (Bai & Li 2012, Fan et al. 2013, Wang et al. 2017). On the other hand, settings with a general log-likelihood function $l_{ij}(w_{ij})$, including covariate-adjusted logistic and probit factor models, are less investigated. Common techniques for linear models are not applicable to the considered general nonlinear model setting.

Motivated by these challenges, we propose interpretable and practical identifiability conditions in Section 3.1. We then incorporate these conditions into the joint-likelihood-based estimation method in Section 3.2.
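For the logistic factor model of Remark 1, the individual terms in (2) are Bernoulli log-likelihoods, so the joint log-likelihood is a simple array computation; a minimal sketch with hypothetical dimensions:

```python
import numpy as np

def joint_loglik(Y, Gamma, U, B, X):
    """Average joint log-likelihood (2) for a logistic factor model.
    B stacks (beta_j0, beta_jc') row-wise; X carries a leading column of ones,
    so w_ij = gamma_j' U_i + beta_j' X_i as in the assembled parameterization."""
    W = U @ Gamma.T + X @ B.T
    # Bernoulli log-likelihood: l_ij = w_ij * y_ij - log(1 + exp(w_ij))
    return np.mean(W * Y - np.logaddexp(0.0, W))

# Hypothetical data generated from the model itself.
rng = np.random.default_rng(1)
n, q, K, p = 200, 10, 2, 4
U = rng.normal(size=(n, K))
Gamma = rng.normal(size=(q, K))
B = rng.normal(size=(q, p))
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, p - 1))])
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(U @ Gamma.T + X @ B.T))))
print(joint_loglik(Y, Gamma, U, B, X))
```

`np.logaddexp(0, W)` evaluates `log(1 + exp(W))` stably, which matters once some $w_{ij}$ are large.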
Furthermore, we introduce a novel inference framework for performing statistical inference on $\beta_j$, $\gamma_j$, and $U_i$ in Section 4.

3 Method

3.1 Model Identifiability

Identifiability issues commonly occur in latent variable models (Allman et al. 2009, Bai & Li 2012, Xu 2017). The proposed model in (1) has two major identifiability issues. The first is that the model remains unchanged after certain linear transformations of both $B$ and $U$, causing the covariate effects together with the intercepts, represented by $B$, and the latent factors, denoted by $U$, to be unidentifiable. The second is that the model is invariant under an invertible transformation of both $U$ and $\Gamma$, as in linear factor models (Bai & Li 2012, Fan et al. 2013), causing the latent factors $U$ and factor loadings $\Gamma$ to be undetermined. Specifically, under the model setup in (1), we define the joint probability distribution of the responses to be $P(Y \mid \Gamma, U, B, X) = \prod_{i=1}^n \prod_{j=1}^q p_{ij}(Y_{ij} \mid w_{ij})$. The model parameters are identifiable if and only if for any response $Y$, there does not exist $(\Gamma, U, B) \neq (\tilde{\Gamma}, \tilde{U}, \tilde{B})$ such that $P(Y \mid \Gamma, U, B, X) = P(Y \mid \tilde{\Gamma}, \tilde{U}, \tilde{B}, X)$. The first issue, concerning the identifiability of $B$ and $U$, is that for any $(\Gamma, U, B)$ and any transformation matrix $A$, the choices $\tilde{\Gamma} = \Gamma$, $\tilde{U} = U + XA^\top$, and $\tilde{B} = B - \Gamma A$ satisfy $P(Y \mid \Gamma, U, B, X) = P(Y \mid \tilde{\Gamma}, \tilde{U}, \tilde{B}, X)$. This identifiability issue leads to the indeterminacy of the covariate effects and latent factors. The second issue relates to the identifiability of $U$ and $\Gamma$. For any $(\tilde{\Gamma}, \tilde{U}, \tilde{B})$ and any invertible matrix $G$, the choices $\bar{\Gamma} = \tilde{\Gamma}(G^\top)^{-1}$, $\bar{U} = \tilde{U}G$, and $\bar{B} = \tilde{B}$ satisfy $P(Y \mid \tilde{\Gamma}, \tilde{U}, \tilde{B}, X) = P(Y \mid \bar{\Gamma}, \bar{U}, \bar{B}, X)$. This causes the latent factors and factor loadings to be unidentifiable. Remark 3.
Intuitively, the unidentifiable $\tilde{B} = B - \Gamma A$ can be interpreted as including both direct and indirect effects of $X$ on the response $Y$. We take the intercept and covariate effect on the first item, $\tilde{\beta}_1$, as an example and illustrate it in Figure 2. One part of $\tilde{\beta}_1$ is the direct effect of $X$ on $Y$ (the orange line in the left panel), whereas another part of $\tilde{\beta}_1$ may be explained through the latent factors $U$, as the latent factors are unobserved and there are potential correlations between the latent factors and the observed covariates. The latter part of $\tilde{\beta}_1$ can be considered the indirect effect (the blue line in the right panel).

Figure 2: The direct effects (orange solid line in the left panel) and the indirect effects (blue solid line in the right panel) for item 1.

The first identifiability issue is a new challenge introduced by the covariate adjustment in the model, whereas the second is common in traditional factor models (Bai & Li 2012, Fan et al. 2013). Considering the two issues together, for any $(\Gamma, U, B)$, $A$, and invertible $G$, the transformations $\tilde{\Gamma} = \Gamma(G^\top)^{-1}$, $\tilde{U} = (U + XA^\top)G$, and $\tilde{B} = B - \Gamma A$ satisfy $P(Y \mid \Gamma, U, B, X) = P(Y \mid \tilde{\Gamma}, \tilde{U}, \tilde{B}, X)$. In the rest of this subsection, we propose identifiability conditions to address these issues. For notational convenience, throughout the rest of the paper, we define $\phi^* = (\Gamma^*, U^*, B^*)$ as the true parameters.

Identifiability Conditions

As described earlier, the correlation between the design matrix of covariates $X$ and the latent factors $U^*$ results in the identifiability issue of $B^*$.
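Both invariances can be checked numerically: the systematic part $w_{ij}$, and hence the likelihood, is unchanged under either transformation. A sketch with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q, K, p = 100, 8, 2, 3
U = rng.normal(size=(n, K))
Gamma = rng.normal(size=(q, K))
B = rng.normal(size=(q, p))
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, p - 1))])
W = U @ Gamma.T + X @ B.T                    # systematic part of model (1)

# Issue 1: shift the factors by X A' and compensate in B.
A = rng.normal(size=(K, p))
U1, B1 = U + X @ A.T, B - Gamma @ A
assert np.allclose(W, U1 @ Gamma.T + X @ B1.T)

# Issue 2: transform the factors by an invertible G and compensate in Gamma.
G = rng.normal(size=(K, K)) + 3.0 * np.eye(K)   # invertible with high probability
U2, Gamma2 = U1 @ G, Gamma @ np.linalg.inv(G).T
assert np.allclose(W, U2 @ Gamma2.T + X @ B1.T)
print("both transformations leave w_ij unchanged")
```

Because $w_{ij}$ is identical before and after each transformation, no amount of data can distinguish the transformed parameters from the originals, which is exactly the identifiability problem.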
In the psychometrics literature, the intercept $\beta^*_{j0}$ is commonly referred to as the difficulty parameter, while $\beta^*_{jc}$ represents the effects of the observed covariates, namely the DIF effects, on the response to item $j$ (Reckase 2009, Holland & Wainer 2012). These different scientific interpretations motivate us to develop separate identifiability conditions for $\beta^*_{j0}$ and $\beta^*_{jc}$. Specifically, we propose a centering condition on $U^*$ to ensure the identifiability of the intercepts $\beta^*_{j0}$ for all items $j \in [q]$. On the other hand, to identify the covariate effects $\beta^*_{jc}$, a natural idea is to require the covariate effects $\beta^*_{jc}$ across items $j \in [q]$ to be sparse, as in many regularized methods and item purification methods (Candell & Drasgow 1988, Fidalgo et al. 2000, Bauer et al. 2020, Belzak & Bauer 2020). In Chen et al. (2023a), an interpretable identifiability condition is proposed for selecting sparse covariate effects, yet this condition is specific to uni-dimensional covariates. Motivated by Chen et al. (2023a), we propose the following minimal $\ell_1$ condition applicable to the general case where the covariates are multi-dimensional. To better present the identifiability conditions, we write $A = (a_0, a_1, \ldots, a_{p^*}) \in \mathbb{R}^{K \times p}$ and define $A_c = (a_1, \ldots, a_{p^*}) \in \mathbb{R}^{K \times p^*}$ as the part applied to the covariate effects.

Condition 1. (i) $\sum_{i=1}^n U^*_i = 0_K$. (ii) $\sum_{j=1}^q \|\beta^*_{jc}\|_1 < \sum_{j=1}^q \|\beta^*_{jc} - A_c^\top \gamma^*_j\|_1$ for any $A_c \neq 0$.

Condition 1(i) assumes the latent abilities $U^*$ are centered, which ensures the identifiability of the intercepts $\beta^*_{j0}$ and is commonly assumed in the item response theory literature (Reckase 2009). Condition 1(ii) is motivated by practical applications.
For instance, in educational testing, practitioners need to identify and remove biased test items, that is, items with non-zero covariate effects ($\beta^*_{js} \neq 0$). In practice, most of the designed items are unbiased, and therefore it is reasonable to assume that the majority of items have no covariate effects, that is, the covariate effects $\beta^*_{jc}$ are sparse (Holland & Wainer 2012, Chen et al. 2023a). Next, we present a necessary and sufficient condition for Condition 1(ii) to hold.

Proposition 1. Condition 1(ii) holds if and only if for any $v \in \mathbb{R}^K \setminus \{0_K\}$,
$$\sum_{j=1}^q |v^\top \gamma^*_j| I(\beta^*_{js} = 0) > \sum_{j=1}^q \mathrm{sign}(\beta^*_{js}) v^\top \gamma^*_j I(\beta^*_{js} \neq 0), \quad \forall s \in [p^*]. \quad (3)$$

Remark 4. Proposition 1 implies that Condition 1(ii) holds when $\{j : \beta^*_{js} \neq 0\}$ is separated into $\{j : \beta^*_{js} > 0\}$ and $\{j : \beta^*_{js} < 0\}$ in a balanced way. With diversified signs of $\beta^*_{js}$, Proposition 1 holds when a considerable proportion of test items have no covariate effect ($\beta^*_{js} = 0$). For example, when $\gamma^*_j = m 1^{(K)}_k$ with $m > 0$, Condition 1(ii) holds if and only if $\sum_{j=1}^q |m|\{-I(\beta^*_{js}/m > 0) + I(\beta^*_{js}/m \leq 0)\} > 0$ and $\sum_{j=1}^q |m|\{-I(\beta^*_{js}/m \geq 0) + I(\beta^*_{js}/m < 0)\} < 0$. With slightly more than $q/2$ items corresponding to $\beta^*_{js} = 0$, Condition 1(ii) holds. Moreover, if $\#\{j : \beta^*_{js} > 0\}$ and $\#\{j : \beta^*_{js} < 0\}$ are comparable, then Condition 1(ii) holds even when fewer than $q/2$ items correspond to $\beta^*_{js} = 0$ and more than $q/2$ items correspond to $\beta^*_{js} \neq 0$. Though assuming a \u201csparse\u201d structure, our assumption differs from the existing high-dimensional literature.
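Inequality (3) quantifies this balance. The following sketch spot-checks it over many random directions $v$ for a hypothetical loading matrix with few, sign-balanced DIF items; this is heuristic numerical evidence rather than a proof, since (3) must hold for every $v$:

```python
import numpy as np

def prop1_holds_for(v, gamma, beta_s):
    """Check inequality (3) of Proposition 1 for one direction v and one covariate s."""
    proj = gamma @ v                                 # v' gamma_j for each item j
    lhs = np.sum(np.abs(proj) * (beta_s == 0))       # items with no covariate effect
    rhs = np.sum(np.sign(beta_s) * proj * (beta_s != 0))
    return lhs > rhs

rng = np.random.default_rng(3)
q, K = 30, 2
gamma = rng.uniform(0.5, 1.5, size=(q, K))           # hypothetical positive loadings
beta_s = np.zeros(q)
beta_s[:4] = [0.5, -0.5, 0.8, -0.8]                  # few DIF items, balanced signs

# Spot-check over many random directions v (heuristic evidence, not a proof).
ok = all(prop1_holds_for(v, gamma, beta_s)
         for v in rng.normal(size=(2000, K)))
print(ok)
```

With 26 of 30 items DIF-free and the four DIF effects sign-balanced, the left side of (3) dominates by a wide margin, matching the intuition in Remark 4.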
In high-dimensional regression models, the coefficient vector obtained when regressing the dependent variable on high-dimensional covariates is often assumed to be sparse, with the proportion of non-zero coefficients asymptotically approaching zero. In our setting, Condition 1(ii) allows for relatively dense settings where the proportion of items with non-zero covariate effects is a positive constant. To perform simultaneous estimation and inference on $\Gamma^*$ and $U^*$, we consider the following identifiability conditions to address the second identifiability issue.

Condition 2. (i) $(U^*)^\top U^*$ is diagonal. (ii) $(\Gamma^*)^\top \Gamma^*$ is diagonal. (iii) $n^{-1}(U^*)^\top U^* = q^{-1}(\Gamma^*)^\top \Gamma^*$.

Condition 2 is a set of widely used identifiability conditions in the factor analysis literature (Bai 2003, Bai & Li 2012, Wang 2022). For practical and theoretical benefits, we impose Condition 2 to address the identifiability issue related to $G$. It is worth mentioning that this condition can be replaced by other identifiability conditions. For true parameters satisfying any such alternative condition, we can always find a transformation such that the transformed parameters satisfy our proposed Conditions 1–2, and the proposed estimation method and theoretical results in the subsequent sections still apply, up to such a transformation.

3.2 Joint Maximum Likelihood Estimation

In this section, we introduce a joint-likelihood-based method for simultaneously estimating the covariate effects $B$, the latent factors $U$, and the factor loadings $\Gamma$. Incorporating Conditions 1–2 into the estimation procedure, we obtain maximum joint-likelihood estimators for $\phi^* = (\Gamma^*, U^*, B^*)$ that satisfy the proposed identifiability conditions. With Condition 1, we address the identifiability issue related to the transformation matrix $A$.
Specifically, for any parameters $\phi = (\Gamma, U, B)$, there exists a matrix $A^* = (a^*_0, A^*_c)$ with $A^*_c = \operatorname{argmin}_{A_c \in \mathbb{R}^{K \times p^*}} \sum_{j=1}^{q} \|\beta_{jc} - A_c^\top \gamma_j\|_1$ and $a^*_0 = -n^{-1} \sum_{i=1}^{n} (U_i + A^*_c X^c_i)$ such that the transformed matrices $U^* = U + X(A^*)^\top$ and $B^* = B - \Gamma A^*$ satisfy Condition 1. The transformation idea naturally leads to the following estimation methodology for $B^*$. To estimate $B^*$ and $U^*$ that satisfy Condition 1, we first obtain the maximum likelihood estimator $\hat\phi = (\hat\Gamma, \hat U, \hat B)$ by
$$\hat\phi = \operatorname{argmin}_{\phi \in \Omega_\phi} -L(Y \mid \phi, X), \tag{4}$$
where the parameter space is $\Omega_\phi = \{\phi : \|\phi\|_{\max} \leq C\}$ for some large $C$. To solve (4), we employ an alternating minimization algorithm. Specifically, for steps $t = 0, 1, \ldots$, we compute
$$\hat\Gamma^{(t+1)}, \hat B^{(t+1)} = \operatorname{argmin}_{\Gamma \in \mathbb{R}^{q \times K},\, B \in \mathbb{R}^{q \times p}} -L(Y \mid \Gamma, U^{(t)}, B, X); \qquad \hat U^{(t+1)} = \operatorname{argmin}_{U \in \mathbb{R}^{n \times K}} -L(Y \mid \Gamma^{(t+1)}, U, B^{(t+1)}, X),$$
until the quantity $\max\{\|\hat\Gamma^{(t+1)} - \hat\Gamma^{(t)}\|_F, \|\hat U^{(t+1)} - \hat U^{(t)}\|_F, \|\hat B^{(t+1)} - \hat B^{(t)}\|_F\}$ is less than some pre-specified tolerance value for convergence. We then estimate $A_c$ by minimizing the $\ell_1$ norm
$$\hat A_c = \operatorname{argmin}_{A_c \in \mathbb{R}^{K \times p^*}} \sum_{j=1}^{q} \|\hat\beta_{jc} - A_c^\top \hat\gamma_j\|_1. \tag{5}$$
Next, we estimate $\hat a_0 = -n^{-1} \sum_{i=1}^{n} (\hat U_i + \hat A_c X^c_i)$ and let $\hat A = (\hat a_0, \hat A_c)$. Given the estimators $\hat A$, $\hat\Gamma$, and $\hat B$, we then construct $\hat B^* = \hat B - \hat\Gamma \hat A$ and $\tilde U = \hat U + X \hat A^\top$ such that Condition 1 holds. Recall that Condition 2 addresses the identifiability issue related to the invertible matrix $G$.
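To make the pipeline concrete, the sketch below alternates gradient steps on a logistic joint log-likelihood (a few gradient updates stand in for each exact block minimization in the display above) and then solves the column-wise $\ell_1$ problem (5) as a linear program. All data, step sizes, and iteration counts are hypothetical, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.special import expit

rng = np.random.default_rng(1)
n, q, K, p = 200, 30, 2, 3
X = rng.standard_normal((n, p))
U_true = rng.standard_normal((n, K))
Gamma_true = rng.uniform(0.5, 1.5, (q, K))
B_true = np.zeros((q, p)); B_true[:5, :] = 0.5
Y = rng.binomial(1, expit(U_true @ Gamma_true.T + X @ B_true.T)).astype(float)

def loglik(G, U, B):
    W = U @ G.T + X @ B.T
    return np.sum(Y * W - np.logaddexp(0.0, W))   # Bernoulli log-likelihood

# (4): alternating minimization of -L, here via small gradient steps
G = 0.1 * rng.standard_normal((q, K))
U = 0.1 * rng.standard_normal((n, K))
B = np.zeros((q, p))
ll_init = loglik(G, U, B)
for _ in range(300):
    R = Y - expit(U @ G.T + X @ B.T)   # residuals l'_ij at the current fit
    G = G + (1.0 / n) * R.T @ U        # update (Gamma, B) with U fixed
    B = B + (1.0 / n) * R.T @ X
    R = Y - expit(U @ G.T + X @ B.T)
    U = U + (1.0 / q) * R @ G          # update U with (Gamma, B) fixed

def l1_regression(Z, y):
    """argmin_a sum_j |y_j - Z_j a|, posed as an LP: y - Za = u - v, u, v >= 0."""
    m, k = Z.shape
    c = np.r_[np.zeros(k), np.ones(2 * m)]
    A_eq = np.hstack([Z, np.eye(m), -np.eye(m)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * m)
    return linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds).x[:k]

# (5): column-wise median regression of the beta estimates on the loadings
A_c = np.column_stack([l1_regression(G, B[:, s]) for s in range(p)])  # K x p
B_star = B - G @ A_c            # transformed estimates toward Condition 1
U_tilde = U + X @ A_c.T
```

In practice each block update is an exact minimization (e.g., by Newton steps), and (5) can equivalently be solved by any quantile-regression routine at the median; the LP form above is just one self-contained way to write it.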
Specifically, for any parameters $(\Gamma, U)$, there exists a matrix $G^*$ such that Condition 2 holds for $U^* = (U + X(A^*)^\top)G^*$ and $\Gamma^* = \Gamma(G^*)^{-\top}$. Let $\Upsilon = \operatorname{diag}(\varrho_1, \ldots, \varrho_K)$ be a diagonal matrix that contains the $K$ eigenvalues of $(nq)^{-1}(\Gamma^\top\Gamma)^{1/2}(U + XA^\top)^\top(U + XA^\top)(\Gamma^\top\Gamma)^{1/2}$, and let $V$ be a matrix that contains its corresponding eigenvectors. We set $G^* = (q^{-1}\Gamma^\top\Gamma)^{1/2} V \Upsilon^{-1/4}$. To further estimate $\Gamma^*$ and $U^*$, we need an estimator for the invertible matrix $G^*$. Given the maximum likelihood estimators obtained in (4) and $\hat A$ in (5), we estimate $G^*$ via $\hat G = (q^{-1}\hat\Gamma^\top\hat\Gamma)^{1/2}\hat V \hat\Upsilon^{-1/4}$, where $\hat\Upsilon$ and $\hat V$ are matrices that contain the eigenvalues and eigenvectors of $(nq)^{-1}(\hat\Gamma^\top\hat\Gamma)^{1/2}(\hat U + X\hat A^\top)^\top(\hat U + X\hat A^\top)(\hat\Gamma^\top\hat\Gamma)^{1/2}$, respectively. With $\hat G$ and $\hat A$, we now obtain the following transformed estimators that satisfy Condition 2: $\hat\Gamma^* = \hat\Gamma(\hat G^\top)^{-1}$ and $\hat U^* = (\hat U + X\hat A^\top)\hat G$.

To quantify the uncertainty of the proposed estimators, we will show that they are asymptotically normally distributed. Specifically, in Theorem 2 of Section 4, we establish the asymptotic normality result for $\hat\beta^*_j$, which allows us to make inference on the covariate effects $\beta^*_j$. Moreover, as the latent factors $U^*_i$ and factor loadings $\gamma^*_j$ often have important interpretations in domain sciences, we are also interested in inference on the parameters $U^*_i$ and $\gamma^*_j$. In Theorem 2, we also derive the asymptotic distributions of the estimators $\hat U^*_i$ and $\hat\gamma^*_j$, providing inference results for these parameters.
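The eigendecomposition construction of $G^*$ can be verified numerically: applying it to arbitrary $(\Gamma, U)$ should yield Gram matrices that are diagonal and equal after scaling, as Condition 2 requires. A minimal numpy sketch (synthetic inputs; here the input factors stand for the already-shifted $U + XA^\top$):

```python
import numpy as np

def sqrtm_sym(S):
    """Symmetric positive-definite matrix square root via eigendecomposition."""
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

def rotate_to_condition2(Gamma, U):
    """Compute G* so that U* = U G* and Gamma* = Gamma (G*)^{-T} satisfy
    Condition 2: both Gram matrices diagonal, n^{-1}U*'U* = q^{-1}Gamma*'Gamma*."""
    n, q = U.shape[0], Gamma.shape[0]
    S = sqrtm_sym(Gamma.T @ Gamma)
    M = S @ (U.T @ U) @ S / (n * q)
    evals, V = np.linalg.eigh(M)
    order = np.argsort(evals)[::-1]        # decreasing eigenvalue order
    evals, V = evals[order], V[:, order]
    G = sqrtm_sym(Gamma.T @ Gamma / q) @ V @ np.diag(evals ** -0.25)
    return U @ G, Gamma @ np.linalg.inv(G.T), G

rng = np.random.default_rng(0)
n, q, K = 500, 80, 3
U_star, Gamma_star, G = rotate_to_condition2(rng.standard_normal((q, K)) + 1.0,
                                             rng.standard_normal((n, K)))
```

A short calculation shows both scaled Gram matrices equal $\Upsilon^{1/2}$ after the rotation, which is exactly Condition 2(iii); the distinct-eigenvalue requirement in Assumption 1(iv) is what pins down $V$ up to sign.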
4 Theoretical Results

We propose a novel framework to establish the estimation consistency and asymptotic normality of the proposed joint-likelihood-based estimators $\hat\phi^* = (\hat\Gamma^*, \hat U^*, \hat B^*)$ in Section 3. To establish the theoretical results for $\hat\phi^*$, we impose the following regularity assumptions.

Assumption 1. There exist constants $M > 0$ and $\kappa > 0$ such that:
(i) $\Sigma^*_u = \lim_{n\to\infty} n^{-1}(U^*)^\top U^*$ exists and is positive definite. For $i \in [n]$, $\|U^*_i\|_2 \leq M$.
(ii) $\Sigma^*_\gamma = \lim_{q\to\infty} q^{-1}(\Gamma^*)^\top \Gamma^*$ exists and is positive definite. For $j \in [q]$, $\|\gamma^*_j\|_2 \leq M$.
(iii) $\Sigma_x = \lim_{n\to\infty} n^{-1}\sum_{i=1}^n X_i X_i^\top$ exists and $1/\kappa^2 \leq \lambda_{\min}(\Sigma_x) \leq \lambda_{\max}(\Sigma_x) \leq \kappa^2$. For $i \in [n]$, $\max_i \|X_i\|_\infty \leq M$.
(iv) $\Sigma^*_{ux} = \lim_{n\to\infty} n^{-1}\sum_{i=1}^n U^*_i X_i^\top$ exists and $\|\Sigma^*_{ux}\Sigma_x^{-1}\|_{1,\infty} \leq M$. The eigenvalues of $(\Sigma^*_u - \Sigma^*_{ux}\Sigma_x^{-1}(\Sigma^*_{ux})^\top)\Sigma^*_\gamma$ are distinct.

Assumption 1 is commonly used in the factor analysis literature. In particular, Assumptions 1(i)-(ii) correspond to Assumptions A-B in Bai (2003) under linear factor models, ensuring the compactness of the parameter space on $U^*$ and $\Gamma^*$. Under nonlinear factor models, such conditions on compact parameter spaces are also commonly assumed (Wang 2022, Chen et al. 2023b). Assumption 1(iii) is a standard regularity condition for the nonlinear setting that is needed to establish the concentration of the gradient and the estimation error for the model parameters when $p$ diverges.
In addition, Assumption 1(iv) is a crucial identification condition; similar conditions have been imposed in the existing literature, such as Assumption G in Bai (2003) in the context of linear factor models and Assumption 6 in Wang (2022) in the context of nonlinear factor models without covariates.

Assumption 2. For any $i \in [n]$ and $j \in [q]$, assume that $l_{ij}(\cdot)$ is three times differentiable, and denote the first, second, and third order derivatives of $l_{ij}(w_{ij})$ with respect to $w_{ij}$ as $l'_{ij}(w_{ij})$, $l''_{ij}(w_{ij})$, and $l'''_{ij}(w_{ij})$, respectively. There exist $M > 0$ and $\xi \geq 4$ such that $E(|l'_{ij}(w_{ij})|^\xi) \leq M$ and $|l'_{ij}(w_{ij})|$ is sub-exponential with $\|l'_{ij}(w_{ij})\|_{\varphi_1} \leq M$. Furthermore, we assume $E\{l'_{ij}(w^*_{ij})\} = 0$. Within a compact space of $w_{ij}$, we have $b_L \leq -l''_{ij}(w_{ij}) \leq b_U$ and $|l'''_{ij}(w_{ij})| \leq b_U$ for some $b_U > b_L > 0$.

Assumption 2 imposes smoothness on the log-likelihood function $l_{ij}(w_{ij})$. In particular, it assumes sub-exponential distributions and finite fourth moments for the first order derivatives $l'_{ij}(w_{ij})$. For commonly used linear or nonlinear factor models, the assumption is not restrictive and can be satisfied with a large $\xi$. For instance, consider the logistic model with $l'_{ij}(w_{ij}) = Y_{ij} - \exp(w_{ij})/\{1 + \exp(w_{ij})\}$; we have $|l'_{ij}(w_{ij})| \leq 1$, and $\xi$ can be taken as $\infty$. The boundedness conditions for $l''_{ij}(w_{ij})$ and $l'''_{ij}(w_{ij})$ are necessary to guarantee the convexity of the joint likelihood function. In the special case of linear factor models, $l''_{ij}(w_{ij})$ is a constant and the boundedness conditions naturally hold. For popular nonlinear models such as logistic factor models, probit factor models, and Poisson factor models, the boundedness of $l''_{ij}(w_{ij})$ and $l'''_{ij}(w_{ij})$ can also be easily verified.

Assumption 3.
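For the logistic model, the derivative bounds cited above can be checked numerically on a grid; a small sketch (the grid range is an arbitrary compact set):

```python
import numpy as np
from scipy.special import expit

w = np.linspace(-10, 10, 10001)       # a compact range of w_ij values
for y in (0.0, 1.0):
    d1 = y - expit(w)                 # l'_ij(w) = Y_ij - e^w / (1 + e^w)
    assert np.all(np.abs(d1) <= 1.0)  # bounded, so xi can be taken arbitrarily large

d2 = -expit(w) * (1 - expit(w))                        # l''_ij(w), free of Y_ij
d3 = -expit(w) * (1 - expit(w)) * (1 - 2 * expit(w))   # l'''_ij(w)
assert np.all(-d2 > 0) and np.all(-d2 <= 0.25)  # 0 < -l'' <= 1/4 on a compact set
assert np.all(np.abs(d3) <= 0.25)               # third derivative also bounded
```

Here $-l''_{ij}(w) = \sigma(w)\{1-\sigma(w)\}$ is strictly positive on any compact range, so both $b_L$ and $b_U$ in Assumption 2 exist.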
For $\xi$ specified in Assumption 2 and a sufficiently small $\epsilon > 0$, we assume, as $n, q, p \to \infty$,
$$\frac{p\,(nq)^{\epsilon + 3/\xi}}{\sqrt{n \wedge (pq)}} \to 0. \tag{6}$$

Assumption 3 is needed to ensure that the derivative of the likelihood function equals zero at the maximum likelihood estimator with high probability, a key property in the theoretical analysis. In particular, we need the estimation errors of all model parameters to converge to 0 uniformly with high probability. Such uniform convergence results involve a delicate analysis of the convexity of the objective function, for which we technically need Assumption 3. For most of the popularly used generalized factor models, $\xi$ can be taken as any large value as discussed above; thus $(nq)^{\epsilon+3/\xi}$ is of a smaller order than $\sqrt{n \wedge (pq)}$, given small $\epsilon$. Specifically, Assumption 3 implies $p = o(n^{1/2} \wedge q)$ up to a small order term, an asymptotic regime that is reasonable for many educational assessments.

Next, we impose additional assumptions crucial to establishing the theoretical properties of the proposed estimators. One challenge for the theoretical analysis is to handle the dependence between the latent factors $U^*$ and the design matrix $X$. To address this challenge, we employ the following transformed $U^0$ that is orthogonal to $X$, which plays an important role in establishing the theoretical results (see Supplementary Materials for details). In particular, for $i \in [n]$, we let $U^0_i = (G^\ddagger)^\top(U^*_i - A^\ddagger X_i)$. Here $G^\ddagger = (q^{-1}(\Gamma^*)^\top\Gamma^*)^{1/2} V^*(\Upsilon^*)^{-1/4}$ and $A^\ddagger = (U^*)^\top X(X^\top X)^{-1}$, where $\Upsilon^* = \operatorname{diag}(\varrho^*_1, \ldots, \varrho^*_K)$ with diagonal elements being the $K$ eigenvalues of $(nq)^{-1}((\Gamma^*)^\top\Gamma^*)^{1/2}(U^*)^\top(I_n - P_x)U^*((\Gamma^*)^\top\Gamma^*)^{1/2}$, with $P_x = X(X^\top X)^{-1}X^\top$ and $V^*$ containing the matrix of corresponding eigenvectors.
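The de-correlation step $U^*_i - A^\ddagger X_i$ is an ordinary least-squares projection: stacking over $i$ gives $(I_n - P_x)U^*$, which is orthogonal to $X$ by construction. A quick numpy sketch on synthetic, deliberately correlated factors confirms this:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, p = 300, 2, 4
X = rng.standard_normal((n, p))
# latent factors deliberately correlated with X (illustration only)
U = X @ rng.standard_normal((p, K)) * 0.5 + rng.standard_normal((n, K))

A_ddag = U.T @ X @ np.linalg.inv(X.T @ X)  # A-double-dagger = (U*)' X (X'X)^{-1}
U0 = U - X @ A_ddag.T                      # rows U*_i - A! X_i, before rotating by G!
print(np.abs(U0.T @ X).max())              # numerically zero: U0 is orthogonal to X
```

The subsequent rotation by $(G^\ddagger)^\top$ does not affect this orthogonality, since it acts on the factor coordinates only.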
Under this transformation for $U^0_i$, we further define $\gamma^0_j = (G^\ddagger)^{-1}\gamma^*_j$ and $\beta^0_j = \beta^*_j + (A^\ddagger)^\top\gamma^*_j$ for $j \in [q]$, and write $Z^0_i = ((U^0_i)^\top, X_i^\top)^\top$ and $w^0_{ij} = (\gamma^0_j)^\top U^0_i + (\beta^0_j)^\top X_i$. These transformed parameters $\gamma^0_j$'s, $U^0_i$'s, and $\beta^0_j$'s give the same joint likelihood value as that of the true parameters $\gamma^*_j$'s, $U^*_i$'s, and $\beta^*_j$'s, which facilitates our theoretical understanding of the joint-likelihood-based estimators.

Assumption 4. (i) For any $j \in [q]$, $-n^{-1}\sum_{i=1}^n l''_{ij}(w^0_{ij}) Z^0_i (Z^0_i)^\top \stackrel{p}{\to} \Psi^0_{jz}$ for some positive definite matrix $\Psi^0_{jz}$, and $n^{-1/2}\sum_{i=1}^n l'_{ij}(w^0_{ij}) Z^0_i \stackrel{d}{\to} N(0, \Omega^0_{jz})$. (ii) For any $i \in [n]$, $-q^{-1}\sum_{j=1}^q l''_{ij}(w^0_{ij})\gamma^0_j(\gamma^0_j)^\top \stackrel{p}{\to} \Psi^0_{i\gamma}$ for some positive definite matrix $\Psi^0_{i\gamma}$, and $q^{-1/2}\sum_{j=1}^q l'_{ij}(w^0_{ij})\gamma^0_j \stackrel{d}{\to} N(0, \Omega^0_{i\gamma})$.

Assumption 4 is a generalization of Assumptions F(3)-(4) in Bai (2003) for linear models to the nonlinear setting. Specifically, we need Assumption 4(i) to derive the asymptotic distributions of the estimators $\hat\beta^*_j$ and $\hat\gamma^*_j$, and Assumption 4(ii) is used for establishing the asymptotic distribution of $\hat U^*_i$. Note that these assumptions are imposed on the log-likelihood derivative functions evaluated at the true parameters $w^0_{ij}$, $Z^0_i$, and $\gamma^0_j$. In general, for the popular generalized factor models, such assumptions hold under mild conditions. For example, under linear models, $l'_{ij}(w_{ij})$ is the random error and $l''_{ij}(w_{ij})$ is a constant. Then $\Psi^0_{jz}$ and $\Psi^0_{i\gamma}$ naturally exist and are positive definite by Assumption 1.
The limiting distributions of $n^{-1/2}\sum_{i=1}^n l'_{ij}(w^0_{ij})Z^0_i$ and $q^{-1/2}\sum_{j=1}^q l'_{ij}(w^0_{ij})\gamma^0_j$ can be derived by the central limit theorem under standard regularity conditions. Under logistic and probit models, $l'_{ij}(w_{ij})$ and $l''_{ij}(w_{ij})$ are both finite inside a compact parameter space, and similar arguments can be applied to show the validity of Assumption 4.

We present the following assumption to establish the theoretical properties of the transformed matrix $\hat A$ as defined in (5). In particular, we define $A^0 = (G^\ddagger)^\top A^\ddagger$ and write $A^0 = (a^0_0, \ldots, a^0_{p^*})^\top$. Note that the estimation problem in (5) is related to a median regression problem with measurement errors. To understand the properties of this estimator, following the existing M-estimation literature (He & Shao 1996, 2000), we define $\psi^0_{js}(a) = \gamma^0_j \operatorname{sign}\{\beta^0_{js} + (\gamma^0_j)^\top(a - a^0_s)\}$ and $\chi_s(a) = \sum_{j=1}^q \psi^0_{js}(a)$ for $j \in [q]$ and $s \in [p^*]$. We further define a perturbed version of $\psi^0_{js}(a)$, denoted $\psi_{js}(a, \delta_{js})$, as follows:
$$\psi_{js}(a, \delta_{js}) = \Big(\gamma^0_j + \big[\tfrac{\delta_{js}}{\sqrt{n}}\big]_{[1:K]}\Big) \operatorname{sign}\Big\{\beta^0_{js} + \big[\tfrac{\delta_{js}}{\sqrt{n}}\big]_{K+1} - \big(\gamma^0_j + \big[\tfrac{\delta_{js}}{\sqrt{n}}\big]_{[1:K]}\big)^\top (a - a^0_s)\Big\}, \quad s \in [p^*],$$
where the perturbation
$$\delta_{js} = \begin{pmatrix} I_K & 0 \\ 0 & (1^{(p)}_s)^\top \end{pmatrix} \Big(-\sum_{i=1}^n l''_{ij}(w^0_{ij}) Z^0_i (Z^0_i)^\top\Big)^{-1} \Big(\sqrt{n}\sum_{i=1}^n l'_{ij}(w^0_{ij}) Z^0_i\Big)$$
is asymptotically normally distributed by Assumption 4. We define $\hat\chi_s(a) = \sum_{j=1}^q E\,\psi_{js}(a, \delta_{js})$.

Assumption 5. For $\chi_s(a)$, we assume that there exists some constant $c > 0$ such that $\min_{a \neq 0} |q^{-1}\chi_s(a)| > c$ holds for all $s \in [p^*]$. Assume there exists $a_{s0}$ for each $s \in [p^*]$ such that $\hat\chi_s(a_{s0}) = 0$ with $p\sqrt{n}\|a_{s0}\| \to 0$.
In a neighbourhood of $a_{s0}$, $\hat\chi_s(a)$ has a nonsingular derivative such that $\{q^{-1}\nabla_a \hat\chi_s(a_{s0})\}^{-1} = O(1)$ and $q^{-1}|\nabla_a \hat\chi_s(a) - \nabla_a \hat\chi_s(a_{s0})| \leq k|a - a_{s0}|$. We assume $\iota_{nq,p} := \max\big\{\|a_{s0}\|,\ q^{-1}\sum_{j=1}^q \psi_{js}(a_{s0}, \delta_{js})\big\} = o\big((p\sqrt{n})^{-1}\big)$.

Assumption 5 is crucial in addressing the theoretical difficulties of establishing consistent estimation for $A^0$, a challenging problem related to median regression with weakly dependent measurement errors. In Assumption 5, we treat the minimizer of $|\sum_{j=1}^q \psi_{js}(a, \delta_{js})|$ as an M-estimator and adopt the Bahadur representation results in He & Shao (1996) for the theoretical analysis. For an ideal case where the $\delta_{js}$ are independent and normally distributed with finite variances, which corresponds to the setting of median regression with measurement errors (He & Liang 2000), these assumptions can be easily verified. Assumption 5 goes beyond such an ideal case and covers general settings. In addition to independent and Gaussian measurement errors, this condition also accommodates the case where the $\delta_{js}$ are asymptotically normal and weakly dependent with finite variances, as implied by Assumption 4 and the conditional independence of $Y_{ij}$. We want to emphasize that Assumption 5 allows for both sparse and dense settings of the covariate effects. Consider an example with $K = p = 1$ and $\gamma_j = 1$ for $j \in [q]$. Suppose $\beta^*_{js}$ is zero for all $j \in [q_1]$ and nonzero otherwise. Then this condition is satisfied as long as $\#\{j : \beta^*_{js} > 0\}$ and $\#\{j : \beta^*_{js} < 0\}$ are comparable, even when the sparsity level $q_1$ is small.

Under the proposed assumptions, we next present our main theoretical results.

Theorem 1 (Average Consistency). Suppose the true parameters $\phi^* = (\Gamma^*, U^*, B^*)$ satisfy identifiability Conditions 1-2.
Under Assumptions 1-5, we have
$$q^{-1}\|\hat B^* - B^*\|_F^2 = O_p\Big(\frac{p^2 \log(qp)}{n} + \frac{p \log n}{q}\Big); \tag{7}$$
if we further assume $p^{3/2}(nq)^{\epsilon+3/\xi}(p^{1/2}n^{-1/2} + q^{-1/2}) = o(1)$, then we have
$$n^{-1}\|\hat U^* - U^*\|_F^2 = O_p\Big(\frac{p \log(qp)}{n} + \frac{\log n}{q}\Big); \tag{8}$$
$$q^{-1}\|\hat\Gamma^* - \Gamma^*\|_F^2 = O_p\Big(\frac{p \log(qp)}{n} + \frac{\log n}{q}\Big). \tag{9}$$

Theorem 1 presents the average convergence rates of $\hat\phi^*$. Consider an oracle case with $U^*$ and $\Gamma^*$ known; the estimation of $B^*$ then reduces to an M-estimation problem. For M-estimators under general parametric models, it can be shown that the optimal convergence rate in squared $\ell_2$-norm is $O_p(p/n)$ under $p(\log p)^3/n \to 0$ (He & Shao 2000). In terms of our average convergence rate for $\hat B^*$, the first term in (7), $n^{-1}p^2\log(qp)$, approximately matches the convergence rate $O_p(p/n)$ up to a relatively small order term of $p\log(qp)$. The second term in (7), $q^{-1}p\log n$, is mainly due to the estimation error for the latent factors $U^*$. In educational applications, it is common to assume that the number of subjects $n$ is much larger than the number of items $q$. Under such a practical setting with $n \gg q$ and $p$ relatively small, the term $q^{-1}\log n$ in (8) dominates the derived convergence rate of $\hat U^*$, which matches the optimal convergence rate $O_p(q^{-1})$ for factor models without covariates (Bai & Li 2012, Wang 2022) up to a small order term.

Remark 5. The additional condition $p^{3/2}(nq)^{\epsilon+3/\xi}(p^{1/2}n^{-1/2} + q^{-1/2}) = o(1)$ in Theorem 1 is used to handle the challenges related to the invertible matrix $G$ that affects the theoretical properties of $\hat U^*$ and $\hat\Gamma^*$. It is needed for establishing the estimation consistency of $\hat U^*$ and $\hat\Gamma^*$, but not for that of $\hat B^*$.
With sufficiently large $\xi$ and small $\epsilon$, this assumption is approximately $p = o(n^{1/4} \wedge q^{1/3})$ up to a small order term.

Remark 6. One challenge in establishing the estimation consistency of $\hat\phi^*$ arises from the unrestricted dependence structure between $U^*$ and $X$. If we consider the ideal case where the columns of $U^*$ and $X$ are orthogonal, i.e., $(U^*)^\top X = 0_{K\times p}$, then we can achieve comparable or superior convergence rates under less stringent assumptions. Specifically, with Assumptions 1-3 only, we can obtain the same convergence rates for $\hat U^*$ and $\hat\Gamma^*$ as in (8) and (9), respectively. Moreover, with Assumptions 1-3, the average convergence rate for the consistent estimator of $B^*$ is $O_p(n^{-1}p\log(qp) + q^{-1}\log n)$, which is tighter than (7) by a factor of $p$.

With the estimation consistency results established, we next derive the asymptotic normal distributions of the estimators, which enable us to perform statistical inference on the true parameters.

Theorem 2 (Asymptotic Normality). Suppose the true parameters $\phi^* = (\Gamma^*, U^*, B^*)$ satisfy identifiability Conditions 1-2. Under Assumptions 1-5, we have the asymptotic distributions as follows. Denote $\zeta^{-2}_{nq,p} = n^{-1}p\log(qp) + q^{-1}\log n$. If $p^{3/2}\sqrt{n}(nq)^{3/\xi}\zeta^{-2}_{nq,p} \to 0$, then for any $j \in [q]$ and $a \in \mathbb{R}^p$ with $\|a\|_2 = 1$,
$$\sqrt{n}\, a^\top (\Sigma^*_{\beta,j})^{-1/2} (\hat\beta^*_j - \beta^*_j) \stackrel{d}{\to} N(0, 1), \tag{10}$$
where $\Sigma^*_{\beta,j} = (-(A^0)^\top, I_p)(\Psi^0_{jz})^{-1}\Omega^0_{jz}(\Psi^0_{jz})^{-1}(-(A^0)^\top, I_p)^\top$; and for any $j \in [q]$,
$$\sqrt{n}\,(\Sigma^*_{\gamma,j})^{-1/2}(\hat\gamma^*_j - \gamma^*_j) \stackrel{d}{\to} N(0, I_K), \tag{11}$$
where $\Sigma^*_{\gamma,j} = G^\ddagger(I_K, 0)(\Psi^0_{jz})^{-1}\Omega^0_{jz}(\Psi^0_{jz})^{-1}(I_K, 0)^\top(G^\ddagger)^\top$.
Furthermore, for any $i \in [n]$, if $q = O(n)$ and $p^{3/2}\sqrt{q}(nq)^{3/\xi}\zeta^{-2}_{nq,p} \to 0$,
$$\sqrt{q}\,(\Sigma^*_{u,i})^{-1/2}(\hat U^*_i - U^*_i) \stackrel{d}{\to} N(0, I_K), \tag{12}$$
where $\Sigma^*_{u,i} = (G^\ddagger)^{-\top}(\Psi^0_{i\gamma})^{-1}\Omega^0_{i\gamma}(\Psi^0_{i\gamma})^{-1}(G^\ddagger)^{-1}$.

The asymptotic covariance matrices in Theorem 2 can be consistently estimated. Due to space limitations, we defer the construction of the consistent estimators $\hat\Sigma^*_{\beta,j}$, $\hat\Sigma^*_{\gamma,j}$, and $\hat\Sigma^*_{u,i}$ to the Supplementary Materials. Theorem 2 provides the asymptotic distributions for all individual estimators. In particular, with the asymptotic distributions and the consistent estimators $\hat\Sigma^*_{\beta,j}$ of the asymptotic covariance matrices, we can perform hypothesis testing on $\beta^*_{js}$ for $j \in [q]$ and $s \in [p^*]$. We reject the null hypothesis $\beta^*_{js} = 0$ at significance level $\alpha$ if $|\sqrt{n}(\hat\sigma^*_{\beta,js})^{-1}\hat\beta^*_{js}| > \Phi^{-1}(1 - \alpha/2)$, where $(\hat\sigma^*_{\beta,js})^2$ is the $(s+1)$-th diagonal entry of $\hat\Sigma^*_{\beta,j}$.

For the asymptotic normality of $\hat\beta^*_j$, the condition $p^{3/2}\sqrt{n}(nq)^{3/\xi}(n^{-1}p\log(qp) + q^{-1}\log n) \to 0$, together with Assumption 3, gives $p = o\{n^{1/5} \wedge (q^2/n)^{1/3}\}$ up to a small order term, and further implies $n \ll q^2$, which is consistent with established conditions in the existing factor analysis literature (Bai & Li 2012, Wang 2022). For the asymptotic normality of $\hat U^*_i$, the additional condition that $q = O(n)$ is a reasonable assumption in educational applications, where the number of items $q$ is much smaller than the number of subjects $n$. In this case, the scaling conditions imply $p = o\{q^{1/3} \wedge (n^2/q)^{1/5}\}$ up to a small order term. Similarly, for the asymptotic normality of $\hat\gamma^*_j$, the proposed conditions give $p = o\{n^{1/5} \wedge (q^2/n)^{1/3}\}$ up to a small order term.
Remark 7. Similar to the discussion in Remark 6, the challenges arising from the unrestricted dependence between $U^*$ and $X$ also affect the derivation of the asymptotic distributions of the proposed estimators. If we consider the ideal case with $(U^*)^\top X = 0_{K\times p}$, we can establish the asymptotic normality of all individual estimators under Assumptions 1-4 only and weaker scaling conditions. Specifically, when $(U^*)^\top X = 0_{K\times p}$, the scaling condition becomes $p\sqrt{n}(nq)^{3/\xi}(n^{-1}p\log(qp) + q^{-1}\log n) \to 0$ for deriving the asymptotic normality of $\hat\beta^*_j$ and $\hat\gamma^*_j$, which is milder than that required for (10) and (11).

5 Simulation Study

In this section, we study the finite-sample performance of the proposed joint-likelihood-based estimator. We focus on the logistic latent factor model in (1) with $p_{ij}(y \mid w_{ij}) = \exp(w_{ij}y)/\{1 + \exp(w_{ij})\}$, where $w_{ij} = (\gamma^*_j)^\top U^*_i + (\beta^*_j)^\top X_i$. The logistic latent factor model is commonly used in the context of educational assessment and is also referred to as the item response theory model (Mellenbergh 1994, Hambleton & Swaminathan 2013). We apply the proposed method to estimate $B^*$ and to perform statistical inference for testing the null hypothesis $\beta^*_{js} = 0$.

We start by presenting the data generating process. We set the number of subjects $n \in \{300, 500, 1000, 1500, 2000\}$, the number of items $q \in \{100, 300, 500\}$, the covariate dimension $p \in \{5, 10, 30\}$, and the factor dimension $K = 2$. We jointly generate $X^c_i$ and $U^*_i$ from $N(0, \Sigma)$, where $\Sigma_{ij} = \tau^{|i-j|}$ with $\tau \in \{0, 0.2, 0.5, 0.7\}$. In addition, we set the loading matrix $\Gamma^*_{[,k]} = 1^{(K)}_k \otimes v_k$, where $\otimes$ is the Kronecker product and $v_k$ is a $(q/K)$-dimensional vector with each entry generated independently and identically from $\mathrm{Unif}[0.5, 1.5]$.
For the covariate effects $B^*$, we set the intercept terms $\beta^*_{j0} = 0$. For the remaining entries of $B^*$, we consider the following two settings: (1) sparse setting: $\beta^*_{js} = \rho$ for $s = 1, \ldots, p$ and $j = 5s-4, \ldots, 5s$, with all other $\beta^*_{js}$ set to zero; (2) dense setting: $\beta^*_{js} = \rho$ for $s = 1, \ldots, p$ and $j = R_s q/5 + 1, \ldots, (R_s + 1)q/5$ with $R_s = s - 5\lfloor s/5 \rfloor$, and all other $\beta^*_{js}$ set to zero. Here, the signal strength is set as $\rho \in \{0.3, 0.5\}$. Intuitively, in the sparse setting 5 items are biased for each covariate, whereas in the dense setting 20% of the items are biased for each covariate. For better empirical stability, after reaching convergence in the proposed alternating minimization algorithm and transforming the obtained MLEs into ones that satisfy Conditions 1-2, we repeat another round of minimization and transformation. We take the significance level at 5% and calculate the average type I error over all entries with $\beta^*_{js} = 0$ and the average power over all non-zero entries, based on 100 replications. The averaged hypothesis testing results are presented in Figures 3-6 for $p = 5$ and $p = 30$ across different settings. Additional numerical results for $p = 10$ are presented in the Supplementary Materials.
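The data generating process above can be sketched directly; the ordering of $(X^c_i, U^*_i)$ inside the joint normal vector is one plausible choice, since the paper does not pin it down:

```python
import numpy as np

def simulate(n=300, q=100, p=5, K=2, tau=0.5, rho=0.3, sparse=True, seed=0):
    """Sketch of the simulation design: logistic responses with jointly normal
    (X_i, U_i), AR(1)-type correlation tau^|i-j|, and sparse/dense B*."""
    rng = np.random.default_rng(seed)
    d = p + K
    Sigma = tau ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
    XU = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    X, U = XU[:, :p], XU[:, p:]
    # loadings: column k is 1_k^(K) (Kronecker) v_k with v_k ~ Unif[0.5, 1.5]
    Gamma = np.zeros((q, K))
    block = q // K
    for k in range(K):
        Gamma[k * block:(k + 1) * block, k] = rng.uniform(0.5, 1.5, block)
    B = np.zeros((q, p))
    for s in range(1, p + 1):
        if sparse:
            B[5 * s - 5:5 * s, s - 1] = rho              # 5 biased items per covariate
        else:
            R = s - 5 * (s // 5)                          # R_s = s - 5*floor(s/5)
            B[R * q // 5:(R + 1) * q // 5, s - 1] = rho   # 20% biased items
    W = U @ Gamma.T + X @ B.T
    Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-W)))
    return Y, X, U, Gamma, B

Y, X, U, Gamma, B = simulate()
```

Intercepts are omitted here since the design sets $\beta^*_{j0} = 0$; a replication study would loop this generator over the $(n, q, p, \tau, \rho)$ grid.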
Figure 3: Powers and type I errors under the sparse setting at $p = 5$. Red circles denote correlation parameter $\tau = 0$, green triangles $\tau = 0.2$, blue squares $\tau = 0.5$, and purple crosses $\tau = 0.7$.

Figure 4: Powers and type I errors under the sparse setting at $p = 30$. Red circles denote correlation parameter $\tau = 0$, green triangles $\tau = 0.2$, blue squares $\tau = 0.5$, and purple crosses $\tau = 0.7$.
Figure 5: Powers and type I errors under the dense setting at $p = 5$. Red circles denote correlation parameter $\tau = 0$, green triangles $\tau = 0.2$, blue squares $\tau = 0.5$, and purple crosses $\tau = 0.7$.

Figure 6: Powers and type I errors under the dense setting at $p = 30$. Red circles denote correlation parameter $\tau = 0$, green triangles $\tau = 0.2$, blue squares $\tau = 0.5$, and purple crosses $\tau = 0.7$.
From Figures 3-6, we observe that the type I errors are well controlled at the significance level 5%, which is consistent with the asymptotic properties of $\hat B^*$ in Theorem 2. Moreover, the power increases to one as the sample size $n$ increases across all of the settings we consider. Comparing the left panels ($\rho = 0.3$) to the right panels ($\rho = 0.5$) in Figures 3-6, we see that the power increases as we increase the signal strength $\rho$. Comparing the plots in Figures 3-4 to the corresponding plots in Figures 5-6, we see that the powers under the sparse setting (Figures 3-4) are generally higher than those under the dense setting (Figures 5-6). Nonetheless, our proposed method is generally stable under both sparse and dense settings. In addition, we observe similar results when we increase the covariate dimension $p$ from $p = 5$ (Figures 3 and 5) to $p = 30$ (Figures 4 and 6). We refer the reader to the Supplementary Materials for additional numerical results for $p = 10$. Moreover, we observe similar results when we increase the test length $q$ from $q = 100$ (top row) to $q = 500$ (bottom row) in Figures 3-6. In terms of the correlation between $X$ and $U^*$, we observe that while the power converges to one as we increase the sample size, the power decreases as the correlation $\tau$ increases.

6 Data Application

We apply our proposed method to analyze the Programme for International Student Assessment (PISA) 2018 data. PISA is a worldwide testing program that compares the academic performances of 15-year-old students across many countries (OECD 2019). More than 600,000 students from 79 countries/economies, representing a population of 31 million 15-year-olds, participated in this program. PISA 2018 used a computer-based assessment mode, and the assessment lasted two hours for each student, with test items mainly evaluating students' proficiency in the mathematics, reading, and science domains.
A total of 930 minutes of test items were used, and each student took a different combination of the test items. In addition to the assessment questions, background questionnaires were provided to collect students' information. (The data can be downloaded from https://www.oecd.org/pisa/data/2018database/.)

In this study, we focus on the PISA 2018 data from Taipei. The observed responses are binary, indicating whether students' responses to the test items are correct, and we use the popular item response theory model with the logit link (i.e., the logistic latent factor model; Reckase 2009). Due to the block design of the large-scale assessment, each student was only assigned a subset of the test items, and for the Taipei data, 86% of the response matrix is unobserved. Note that this missingness can be considered conditionally independent of the responses given the students' characteristics. Our proposed method and inference results naturally accommodate such missing data and can be directly applied. Specifically, to accommodate the incomplete responses, we modify the joint log-likelihood function in (2) into $L_{\mathrm{obs}}(Y \mid \Gamma, U, B, X) = \sum_{i=1}^n \sum_{j \in Q_i} l_{ij}(\gamma_j^\top U_i + \beta_j^\top X_i)$, where $Q_i$ denotes the set of questions to which the responses from student $i$ are observed.

We include gender and 8 school-stratum variables as covariates ($p^* = 9$). These variables record whether the school is public, in an urban location, etc. After data preprocessing, we have $n = 6063$ students and $q = 194$ questions. Following the existing literature (Reckase 2009, Millsap 2012), we take $K = 3$ to interpret the three latent abilities measured by the math, reading, and science questions. We apply the proposed method to estimate the effects of gender and the school-stratum variables on students' responses. We obtain the estimators of the gender effect for each PISA question and construct the corresponding 95% confidence intervals.
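The observed-data log-likelihood above simply restricts the double sum to observed entries; with the missingness pattern stored as a boolean mask, this is a single masked reduction. A minimal numpy sketch with synthetic inputs (not the PISA data):

```python
import numpy as np

def loglik_obs(Y, M, Gamma, U, B, X):
    """Observed-data Bernoulli log-likelihood: sum over pairs (i, j) with
    M[i, j] = True of l_ij(w_ij) = Y_ij * w_ij - log(1 + exp(w_ij))."""
    W = U @ Gamma.T + X @ B.T
    ll = Y * W - np.logaddexp(0.0, W)
    return ll[M].sum()

rng = np.random.default_rng(0)
n, q, K, p = 50, 20, 3, 2
U, X = rng.standard_normal((n, K)), rng.standard_normal((n, p))
Gamma, B = rng.standard_normal((q, K)), rng.standard_normal((q, p))
Y = rng.binomial(1, 0.5, (n, q)).astype(float)
M = rng.random((n, q)) > 0.86   # roughly 14% observed, as in the Taipei data
print(loglik_obs(Y, M, Gamma, U, B, X))
```

Because unobserved entries are simply dropped from the sum, the same alternating minimization applies with gradients masked by $M$; this matches the conditional-independence view of the missingness described above.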
The constructed 95% confidence intervals for the gender coefficients are presented in Figure 7. Ten questions are highlighted in red because their estimated gender effects are statistically significant after the Bonferroni correction. Among the reading items, there is only one significant item, and the corresponding confidence interval is below zero, indicating that this question is biased towards female test-takers, conditioning on the students\u2019 latent abilities. Most of the confidence intervals corresponding to the biased items in the math and science sections are above zero, indicating that these questions are biased towards male test-takers. In social science research, it is documented that female students typically score better than male students during reading tests, while male students often outperform female students during math and science tests (Quinn & Cooc 2015, Balart & Oosterveen 2019). Our results indicate that there may exist potential measurement biases resulting in such an observed gender gap in educational testing. Our proposed method offers a useful tool to identify such biased test items, thereby contributing to enhancing testing fairness by providing practitioners with valuable information for item calibration.

Figure 7: Confidence intervals for the effect of the gender covariate on each PISA question using the Taipei data, in panels for the Math, Reading, and Science sections. Red intervals correspond to confidence intervals for questions with significant gender bias after the Bonferroni correction. (For illustration purposes, we omit the confidence intervals with upper bounds exceeding 6 or lower bounds below \u22126 in this figure.)

To further illustrate the estimation results, Table 1 lists the p-values for testing the gender effect for each of the 10 identified significant questions, along with the proportions of female and male test-takers who answered each question correctly.
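The per-item Wald tests, 95% confidence intervals, and Bonferroni correction described above can be sketched as follows; the coefficient estimates and standard errors below are made-up placeholders, not the paper's estimates:

```python
import numpy as np
from math import erfc, sqrt

def gender_effect_tests(beta_hat, se, q, alpha=0.05):
    """Two-sided Wald z-tests for each item's gender coefficient,
    with 95% CIs and a Bonferroni-corrected decision over q items."""
    beta_hat, se = np.asarray(beta_hat), np.asarray(se)
    z = beta_hat / se
    # Two-sided normal p-value: 2 * P(Z > |z|) = erfc(|z| / sqrt(2))
    pvals = np.array([erfc(abs(zi) / sqrt(2)) for zi in z])
    crit = 1.959964  # standard normal 97.5% quantile
    ci = np.column_stack([beta_hat - crit * se, beta_hat + crit * se])
    significant = pvals < alpha / q  # Bonferroni threshold alpha / q
    return pvals, ci, significant

# Hypothetical estimates for five items, corrected over q = 194 items
beta_hat = [0.9, -0.1, 0.02, -1.2, 0.4]
se = [0.15, 0.2, 0.1, 0.25, 0.3]
pvals, ci, sig = gender_effect_tests(beta_hat, se, q=194)
print(sig.tolist())  # [True, False, False, True, False]
```

Only items whose p-value falls below alpha/q survive the correction, mirroring how the 10 significant questions in Figure 7 are selected.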
We can see that the signs of the gender effects estimated by our proposed method align with the disparities in the reported proportions between females and males. For example, the estimated gender effect corresponding to the item \u201cCM496Q01S Cash Withdrawal\u201d is positive with a p-value of 2.77 \u00d7 10\u22127, implying that this question is statistically significantly biased towards male test-takers. This is consistent with the observation in Table 1 that 58.44% of male students correctly answered this question, which exceeds the proportion of females, 51.29%.

Item code   Item Title                  Female (%)  Male (%)  p-value
Mathematics
CM496Q01S   Cash Withdrawal             51.29       58.44     2.77\u00d710\u22127 (+)
CM800Q01S   Computer Games              96.63       93.61     < 1\u00d710\u22128 (\u2212)
Reading
CR466Q06S   Work Right                  91.91       86.02     1.95\u00d710\u22125 (\u2212)
Science
CS608Q01S   Ammonoids                   57.68       68.15     4.65\u00d710\u22125 (+)
CS643Q01S   Comparing Light Bulbs       68.57       73.41     1.08\u00d710\u22125 (+)
CS643Q02S   Comparing Light Bulbs2      63.00       57.50     4.64\u00d710\u22124 (\u2212)
CS657Q03S   Invasive Species            46.00       54.36     8.47\u00d710\u22125 (+)
CS527Q04S   Extinction of Dinosours3    36.19       50.18     8.13\u00d710\u22125 (+)
CS648Q02S   Habitable Zone              41.69       45.19     1.34\u00d710\u22124 (+)
CS607Q01S   Birds and Caterpillars      88.14       91.47     1.99\u00d710\u22124 (+)

Table 1: Proportions of full credit among females and males for the significant items of PISA 2018 in Taipei. (+) and (\u2212) denote the items with positively and negatively estimated gender effects, respectively.

Besides the gender effects, we estimate the effects of the school strata variables on students\u2019 responses and present the point and interval estimation results in the left panel of Figure 8. All the detected biased questions are from the math and science sections, with 6 questions showing significant effects of whether the student attends a public school and 5 of whether the school is in a rural area.
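The sign check described above, comparing each significant item's estimated effect direction against the raw female/male correct-response rates, can be sketched as follows (the rates are four of those reported in Table 1; coding a positive sign as favoring male test-takers follows the discussion above, and the helper itself is illustrative):

```python
# Each entry: item code -> (female %, male %, sign of estimated gender effect),
# using four of the rates reported in Table 1.
items = {
    "CM496Q01S": (51.29, 58.44, "+"),
    "CM800Q01S": (96.63, 93.61, "-"),
    "CR466Q06S": (91.91, 86.02, "-"),
    "CS608Q01S": (57.68, 68.15, "+"),
}

def sign_consistent(female_pct, male_pct, sign):
    """True when a positive estimated effect coincides with a higher
    male proportion of correct answers, and vice versa."""
    return (male_pct > female_pct) == (sign == "+")

print(all(sign_consistent(f, m, s) for f, m, s in items.values()))  # True
```

Note that this is only a descriptive consistency check; unlike the model-based test, the raw proportions do not condition on the latent abilities.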
To further investigate the importance of controlling for the latent ability factors, we compare the results from our proposed method, which accounts for the latent factors, to the results from directly regressing the responses on the covariates without latent factors. From the right panel of Figure 8, we can see that without conditioning on the latent factors, an excessive number of items is detected for the covariate indicating whether the school is public or private. On the other hand, no biased items are detected if we only apply generalized linear regression to estimate the effect of the covariate indicating whether the school is in a rural area.

Figure 8: Confidence intervals for the effect of each school stratum covariate on each PISA question, shown in four panels (\u201cPublic\u201d, \u201cPublic \u2212 without latent variable\u201d, \u201cRural\u201d, and \u201cRural \u2212 without latent variable\u201d) across the Math, Reading, and Science sections. Red intervals correspond to confidence intervals for questions with significant school stratum bias after the Bonferroni correction.

7 Discussion

In this work, we study the covariate-adjusted generalized factor model that has wide interdisciplinary applications such as educational assessments and psychological measurements. In particular, new identifiability issues arise due to the incorporation of covariates in the model setup. To address these issues and identify the model parameters, we propose novel and interpretable conditions, which are crucial for developing the estimation approach and inference results. With model identifiability guaranteed, we propose a computationally efficient joint-likelihood-based estimation method for the model parameters.
Theoretically, we establish estimation consistency and asymptotic normality not only for the covariate effects but also for the latent factors and the factor loadings.

There are several future directions motivated by the proposed method. In this manuscript, we focus on the case in which p grows at a slower rate than the number of subjects n and the number of items q, a common setting in educational assessments. It would be interesting to further develop estimation and inference results under the high-dimensional setting in which p is larger than n and q. Moreover, in this manuscript, we assume that the dimension of the latent factors K is fixed and known. One possible generalization is to allow K to grow with n and q. Intuitively, an increasing latent dimension K makes the identifiability and inference issues more challenging due to the increasing degrees of freedom of the transformation matrix. With the theoretical results in this work, another interesting related problem is to further develop simultaneous inference on group-wise covariate coefficients, which we leave for future investigation.", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.14395v1", |
| "title": "PARAMANU-GANITA: Language Model with Mathematical Capabilities", |
| "abstract": "In this paper, we present Paramanu-Ganita, a 208 million parameter novel Auto\nRegressive (AR) decoder based language model on mathematics. The model is\npretrained from scratch at context size of 4096 on our curated mixed\nmathematical corpus. We evaluate our model on both perplexity metric and GSM8k\nmathematical benchmark. Paramanu-Ganita despite being 35 times smaller than 7B\nLLMs, outperformed generalist LLMs such as LLaMa-1 7B by 28.4% points, LLaMa-2\n7B by 27.6% points, Falcon 7B by 32.6% points, PaLM 8B by 35.3% points, and\nmath specialised LLMs such as Minerva 8B by 23.2% points, and LLEMMA-7B by 3.0%\npoints in GSM8k test accuracy metric respectively. Paramanu-Ganita also\noutperformed giant LLMs like PaLM 62B by 6.4% points, Falcon 40B by 19.8%\npoints, LLaMa-1 33B by 3.8% points and Vicuna 13B by 11.8% points respectively.\nThe large significant margin improvement in performance of our math model over\nthe existing LLMs signifies that reasoning capabilities of language model are\njust not restricted to LLMs with humongous number of parameters.\nParamanu-Ganita took 146 hours of A100 training whereas math specialised LLM,\nLLEMMA 7B, was trained for 23,000 A100 hours of training equivalent. Thus, our\napproach of pretraining powerful domain specialised language models from\nscratch for domain adaptation is much more cost-effective than performing\ncontinual training of LLMs for domain adaptation. Hence, we conclude that for\nstrong mathematical reasoning abilities of language model, we do not need giant\nLLMs and immense computing power to our end. In the end, we want to point out\nthat we have only trained Paramanu-Ganita only on a part of our entire\nmathematical corpus and yet to explore the full potential of our model.", |
| "authors": "Mitodru Niyogi, Arnab Bhattacharya", |
| "published": "2024-04-22", |
| "updated": "2024-04-22", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "In this paper, we present Paramanu-Ganita, a 208 million parameter novel Auto\nRegressive (AR) decoder based language model on mathematics. The model is\npretrained from scratch at context size of 4096 on our curated mixed\nmathematical corpus. We evaluate our model on both perplexity metric and GSM8k\nmathematical benchmark. Paramanu-Ganita despite being 35 times smaller than 7B\nLLMs, outperformed generalist LLMs such as LLaMa-1 7B by 28.4% points, LLaMa-2\n7B by 27.6% points, Falcon 7B by 32.6% points, PaLM 8B by 35.3% points, and\nmath specialised LLMs such as Minerva 8B by 23.2% points, and LLEMMA-7B by 3.0%\npoints in GSM8k test accuracy metric respectively. Paramanu-Ganita also\noutperformed giant LLMs like PaLM 62B by 6.4% points, Falcon 40B by 19.8%\npoints, LLaMa-1 33B by 3.8% points and Vicuna 13B by 11.8% points respectively.\nThe large significant margin improvement in performance of our math model over\nthe existing LLMs signifies that reasoning capabilities of language model are\njust not restricted to LLMs with humongous number of parameters.\nParamanu-Ganita took 146 hours of A100 training whereas math specialised LLM,\nLLEMMA 7B, was trained for 23,000 A100 hours of training equivalent. Thus, our\napproach of pretraining powerful domain specialised language models from\nscratch for domain adaptation is much more cost-effective than performing\ncontinual training of LLMs for domain adaptation. Hence, we conclude that for\nstrong mathematical reasoning abilities of language model, we do not need giant\nLLMs and immense computing power to our end. In the end, we want to point out\nthat we have only trained Paramanu-Ganita only on a part of our entire\nmathematical corpus and yet to explore the full potential of our model.", |
| "main_content": "Introduction Pretrained Large Language Models (LLMs) (LLaMa (Touvron et al., 2023a), LLaMa-2 (Touvron et al., 2023b), PaLM (et al., 2022), Falcon (Almazrouei et al., 2023), Code LlaMa (Rozi\u00e8re et al., 2024), MPT 1, etc.) have demonstrated multidimensional abilities, such as open-ended dialogue or instruction-following (Ouyang et al., 2022) capabilities, and are typically generalist language models that balance performance across the entire distribution of natural language tasks. However, these generalist models are humongous in size and require millions of dollars to train, aside from the high engineering and inference costs involved. Traditionally, to optimize performance within specific domains such as finance (Wu et al., 2023), medicine (Singhal et al., 2023), etc., these models have been continually trained on domain-specific data. However, in our opinion, domain-specific continual pretraining of LLMs is also very expensive. Employing a domain-specific LLM involves substantial computation and inference costs, along with a high requirement of GPUs. For example, to improve the mathematical reasoning capabilities of LLMs, LLEMMA 7B (Azerbayev et al., 2024) was trained on 256 A100 40GB GPUs for roughly 23,000 A100 training hours, which is very expensive. Instead of following the domain adaptation method of LLMs for better mathematical reasoning, we focused on pretraining a generative mathematical language model from scratch, only on our curated high-quality mathematical corpus. This avoids the immense compute power, the heavy engineering maneuvers and techniques needed to load LLMs in memory, the high cost of training, and the non-specialised tokenizer issue of existing LLMs. 
Following our previous work on domain adaptation (Niyogi and Bhattacharya, 2024b), we continued our exploration to see whether we can develop a strong mathematical reasoning language model from scratch, and compared how well it performs with respect to LLMs on mathematical reasoning benchmarks. We trained a powerful mathematical language model from scratch which required only 146 hours of A100 training. Yet, our mathematical language model, Paramanu-Ganita, outperformed the LLEMMA 7B math-specialised model on the GSM8K (Cobbe et al., 2021) benchmark by a significant margin of 3 percentage points despite being 35 times smaller in size. On memory requirements, the LLEMMA 7B checkpoint size is 13.5GB whereas our model, Paramanu-Ganita, has a checkpoint size of less than 1GB. Compared with the LLEMMA 7B training, we dropped the requirement from 23,000 A100 hours of continual training to 146 hours of pretraining our mathematical language model from scratch. Our math model is based on Paramanu (Niyogi and Bhattacharya, 2024a), released earlier by us. We have trained an auto-regressive model from scratch at a context size of 4096 on a single NVidia A100-PCIE-40GB GPU. Our work is an attempt to build a dedicated mathematical specialised model from scratch rather than performing continual pretraining of existing LLMs for domain adaptation. Our models are smaller than LLMs by a large order of magnitude, having only 208 million parameters. Hence, our models are very fast at inference without requiring any quantization of weights, and our math model can be run on a CPU without the need for a GPU.

1 https://www.databricks.com/blog/mpt-7b
Our main contributions are as follows:

\u2022 We have curated an exclusive mathematical pretraining corpus of high-quality mathematical text, including textbooks, lecture notes, web-crawled mathematical text, mathematical source code from various programming languages, mathematical ArXiv papers, and mathematical question-answer pairs from forums like StackExchange and Reddit. We also developed a math-domain-specialised tokenizer from scratch.

\u2022 We developed Paramanu-Ganita, the first exclusively mathematical auto-regressive decoder model, with only 208 million parameters, trained from scratch at a context size of 4096 on a single GPU. We pretrained it only on a part of our curated mathematical corpus and are yet to explore the full potential of the capabilities of our model.

\u2022 We evaluated our mathematical pretrained model on validation perplexity and on the Model FLOPs Utilization (MFU) metric for pretraining. Table 1 shows the validation perplexity and MFU metrics of pretraining.

\u2022 We also benchmarked our math model on the popular math benchmark GSM8k with CoT prompting and compared it with generalist LLMs and math-domain-specialised LLMs.

\u2022 Our model, Paramanu-Ganita 208M, outperformed LLaMa-1 (33B, 13B, 7B), LLaMa-2 (7B, 13B), Falcon (40B, 7B) (Almazrouei et al., 2023), PaLM (62B, 8B), MPT (30B, 7B), Vicuna 13B (Chiang et al., 2023), and math-specialised LLMs like Minerva 8B (Lewkowycz et al., 2022) and LLEMMA-7B on the GSM8k benchmark by large margins despite being smaller by multiple orders of magnitude in size.

2 Background

2.1 Language Modeling

The objective of language modeling can be formally described as maximizing the probability of a sequence of tokens w_1, w_2, \u2026, w_n:

P(w_1, w_2, \u2026, w_n) = \u220f_{i=1}^{n} P(w_i | w_1, w_2, \u2026, w_{i\u22121})   (1)

where P(w_i | w_1, \u2026, w_{i\u22121}) is the probability of token w_i given the sequence of previous tokens w_1, \u2026, w_{i\u22121}. 
The performance of a language model is generally evaluated using the total cross-entropy loss, i.e., the negative log-likelihood of the observed data under the model under consideration, which for a given dataset is defined as:

Avg Loss = \u2212(1/N) \u2211_{i=1}^{N} log P(w_i | w_1, w_2, \u2026, w_{i\u22121})   (2)

The lower the loss, the better the model, but the raw loss value alone is not very intuitive. Therefore, perplexity, which is the exponential of the average loss, is used as a metric to evaluate the performance of a given language model.

2.2 Model FLOPs Utilization (MFU)

Model FLOPs Utilization (MFU) (Korthikanti et al., 2022) is the ratio of the observed throughput (tokens per second) relative to the theoretical maximum throughput of a system at peak FLOPs. It estimates the number of FLOPs performed per iteration and quantifies how efficiently the GPUs are utilized in model training.

3 Data

We have curated high-quality mathematical text from mathematics textbooks, lecture notes, the web such as OpenWebMath (Paster et al., 2023), blogs, articles, AlgebraStack (Azerbayev et al., 2024), mathematical question-answer pairs from StackExchange, and math-classified ArXiv scientific papers. We templatised the mathematical question-answer tuples as CoT (Wei et al., 2023) prompts. The following template was used to templatise the mathematical question-answer pairs: \u201cBelow is an instruction that describes a task. Write a response that appropriately completes the request. ### Q:{question} ### A: Let\u2019s think step by step. {answer}\u201d We also included the templatised training set of GSM8k in the pretraining dataset. Therefore, our combined mathematical corpus is a mixed dataset of mathematical text, source code of programming languages like TeX, Python, C, Matlab, etc., and mathematical question-answer tuples in CoT templatised format. 
4 Related Work

Chain-of-thought (CoT) prompting (Wei et al., 2023) boosts the reasoning capacity of LLMs by supplementing the output with a series of intermediate steps leading to the answer. Several approaches have been suggested to enhance the quality of these reasoning paths. For instance, Complexity-based CoT (Fu et al., 2023) picks examples with more steps as in-context demonstrations, demonstrating that prompting with additional reasoning steps improves performance. Self-Consistency (Wang et al., 2023b) generates multiple reasoning paths and selects the final answer through majority voting. Another set of techniques involves finetuning-based methods, which adapt open-source models (like LLaMA) using insights from advanced closed-source LLMs (GPT-4 and GPT-3.5-Turbo). (Magister et al., 2023) explore the transfer of reasoning abilities through knowledge distillation. (Yuan et al., 2023) advocate the use of rejection sampling finetuning (RFT) to enhance mathematical reasoning performance. WizardMath (Choi et al., 2022) introduces a reinforced evol-instruct method for strengthening reasoning abilities through supervised fine-tuning and PPO training (Schulman et al., 2017). MAmmoTH (Yue et al., 2023) integrates CoT and Program-of-Thought (Chen et al., 2023) rationales to teach LLMs how to utilize external tools (such as a Python interpreter) for solving mathematical problems. (Wang et al., 2023a) propose a constraint alignment loss for finetuning LLMs to improve calibration.

5 Training

We pretrained our math model, Paramanu-Ganita, from scratch at a context size of 4096 on a part of our curated corpus. However, we excluded ArXiv math papers from the training of our math model, as we believe they are not required for learning basic mathematical concepts and acquiring mathematical logical reasoning; they generally serve beyond high-school-level mathematics. 
We started with a simple strategy of using a part of our curated corpus which generally covers various mathematical and logical concepts up to secondary school education. We performed mix-training combining mathematical plain text, source code of programming languages, and templatised mathematical question-answer pairs in the pretraining phase. For pretraining Paramanu-Ganita (4096 context size), we used a 95%-5% data split, as we wanted to use most of the dataset for pretraining. We report the validation perplexity of our pretrained mathematical language model in Table 1. We then fine-tuned the math model on the templatised GSM8k training dataset for 2 epochs. However, we are also working on training multiple pretrained models from scratch to check whether different combinations of mathematical books, web-crawled mathematical text, ArXiv math papers, source code of relevant programming languages, and mathematical question-answer pairs from popular forums such as StackExchange and Reddit improve the reasoning ability of our models on the popular math benchmark, GSM8K.

6 Evaluation

We evaluate the model\u2019s ability to solve mathematics problems using chain-of-thought reasoning. Our evaluations include GSM8k (Cobbe et al., 2021), the de facto standard benchmark for evaluating quantitative reasoning in language models. We report the Pass@1 accuracy of Paramanu-Ganita as shown in Table 2. We used the following evaluation prompt on the GSM8k test set for our math model: \u201cBelow is an instruction that describes a task. Write a response that appropriately completes the request. ### Q:{question} ### A: Let\u2019s think step by step. \u201d

Model                    Perplexity  MFU
Paramanu-Ganita (4096)   4.34927     40.39193

Table 1: Perplexity and MFU metrics of the Paramanu-Ganita pretrained model.

Table 2 also reports the various LLMs and their scores as quoted from the respective publications. 
Paramanu-Ganita, despite being 35 times smaller than 7B LLMs, outperformed LLaMa-1 7B by 28.4% points, LLaMa-2 7B by 27.6% points, Falcon 7B by 32.6% points, PaLM 8B by 35.3% points, Minerva 8B by 23.2% points, and LLEMMA-7B by 3% points respectively. Paramanu-Ganita also outperformed PaLM 62B by 6.4% points despite being smaller by 305 times, Falcon 40B by 19.8% points (smaller by 197 times), LLaMa-1 33B by 3.8% points (smaller by 162 times), and Vicuna 13B by 11.8% points (smaller by 64 times in model parameters). LLEMMA 34B, Minerva 62B, and Minerva 540B are the giant LLMs that performed better than Paramanu-Ganita on the GSM8k benchmark. However, as we have trained our math model on only a part of our entire corpus, we believe this is not a fair comparison of the full potential of our math model; we also did not perform DPO or PPO training to improve the performance of our Paramanu-Ganita 208M compared to other math-specialised LLMs. 7" |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.13558v2", |
| "title": "LASER: Tuning-Free LLM-Driven Attention Control for Efficient Text-conditioned Image-to-Animation", |
| "abstract": "Revolutionary advancements in text-to-image models have unlocked new\ndimensions for sophisticated content creation, e.g., text-conditioned image\nediting, allowing us to edit the diverse images that convey highly complex\nvisual concepts according to the textual guidance. Despite being promising,\nexisting methods focus on texture- or non-rigid-based visual manipulation,\nwhich struggles to produce the fine-grained animation of smooth\ntext-conditioned image morphing without fine-tuning, i.e., due to their highly\nunstructured latent space. In this paper, we introduce a tuning-free LLM-driven\nattention control framework, encapsulated by the progressive process of LLM\nplanning, prompt-Aware editing, StablE animation geneRation, abbreviated as\nLASER. LASER employs a large language model (LLM) to refine coarse descriptions\ninto detailed prompts, guiding pre-trained text-to-image models for subsequent\nimage generation. We manipulate the model's spatial features and self-attention\nmechanisms to maintain animation integrity and enable seamless morphing\ndirectly from text prompts, eliminating the need for additional fine-tuning or\nannotations. Our meticulous control over spatial features and self-attention\nensures structural consistency in the images. This paper presents a novel\nframework integrating LLMs with text-to-image models to create high-quality\nanimations from a single text input. We also propose a Text-conditioned\nImage-to-Animation Benchmark to validate the effectiveness and efficacy of\nLASER. Extensive experiments demonstrate that LASER produces impressive,\nconsistent, and efficient results in animation generation, positioning it as a\npowerful tool for advanced digital content creation.", |
| "authors": "Haoyu Zheng, Wenqiao Zhang, Yaoke Wang, Hao Zhou, Jiang Liu, Juncheng Li, Zheqi Lv, Siliang Tang, Yueting Zhuang", |
| "published": "2024-04-21", |
| "updated": "2024-04-23", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Revolutionary advancements in text-to-image models have unlocked new\ndimensions for sophisticated content creation, e.g., text-conditioned image\nediting, allowing us to edit the diverse images that convey highly complex\nvisual concepts according to the textual guidance. Despite being promising,\nexisting methods focus on texture- or non-rigid-based visual manipulation,\nwhich struggles to produce the fine-grained animation of smooth\ntext-conditioned image morphing without fine-tuning, i.e., due to their highly\nunstructured latent space. In this paper, we introduce a tuning-free LLM-driven\nattention control framework, encapsulated by the progressive process of LLM\nplanning, prompt-Aware editing, StablE animation geneRation, abbreviated as\nLASER. LASER employs a large language model (LLM) to refine coarse descriptions\ninto detailed prompts, guiding pre-trained text-to-image models for subsequent\nimage generation. We manipulate the model's spatial features and self-attention\nmechanisms to maintain animation integrity and enable seamless morphing\ndirectly from text prompts, eliminating the need for additional fine-tuning or\nannotations. Our meticulous control over spatial features and self-attention\nensures structural consistency in the images. This paper presents a novel\nframework integrating LLMs with text-to-image models to create high-quality\nanimations from a single text input. We also propose a Text-conditioned\nImage-to-Animation Benchmark to validate the effectiveness and efficacy of\nLASER. Extensive experiments demonstrate that LASER produces impressive,\nconsistent, and efficient results in animation generation, positioning it as a\npowerful tool for advanced digital content creation.", |
| "main_content": "INTRODUCTION Diffusion models [8, 12, 24] form a category of deep generative models that has recently become one of the hottest topics in multimodal intelligence, showcasing impressive capabilities in text-to-image (T2I) generation, ranging from a high level of detail to the diversity of the generated examples. Such diffusion models also unlock a new world of creative processes in content creation, e.g., text-guided image editing [5, 6, 10], which involves editing diverse images that convey highly complex visual concepts with text-to-image models solely through textual guidance. 
Of course, we could introduce more animation data to fine-tune the entire T2I diffusion models, thereby capturing smooth animation editing. However, this comes at a tremendous cost and deteriorates the flexibility of the pre-trained diffusion models under the animation-level editing setting. Based on the above insights, one question arises: given an input image and a textual description, could we achieve a high-quality animation editing effect with pre-trained text-to-image models without fine-tuning? In this paper, we introduce a novel tuning-free LLM-driven attention control framework for text-conditioned image-to-animation, through LLM planning \u2192 prompt-Aware Editing \u2192 StablE moRphing, named LASER. The core of our framework is to leverage large language models (LLMs) [1, 21, 37, 48], with their significant potential in natural language processing, to effectively parse the textual description into relevant and continuous control statements for pre-trained T2I diffusion models, thereby transforming the given image into an animation. Specifically, LASER comprises the following progressive steps: Step 1, given a multimodal input, i.e., a description of the animation \ud835\udc430 and an initial image \ud835\udc3c0 (which can be optional, allowing the T2I model to generate it), the LLM decomposes the general and coarse-grained description \ud835\udc430 into multiple fine-grained and consistent prompts. These prompts are closely aligned and exhibit subtle variations, aiding in the guided editing of the subsequently corresponding keyframes; Step 2, the LLM analyzes these prompts to produce the feature and attention injection control signals, adapting to the nuanced differences between adjacent prompts. This enables tailored injection strategies for editing different keyframe types. 
Notably, the injection strategy divides into two base categories: Feature and Association Injection (FAI) for texture-based editing and Key-Value Attention Injection (KVAI) for non-rigid editing. To facilitate the simultaneous portrayal of both texture and non-rigid editing within a single animation phase, we propose the forward Hybrid Attention Injection (HAI) for image editing; Step 3, effectively synthesizing intermediate frames between keyframes, ensuring animations are coherent and fluid. This generator utilizes advanced interpolation methods, such as spherical linear interpolation, to ensure smooth transitions and reduce artifacts. Additionally, Adaptive Instance Normalization (AdaIN) is applied to enhance color and brightness consistency. The Hybrid Attention Injection (HAI) strategy is also employed to integrate texture and structural transformations within a single animation phase, further enhancing the animation\u2019s overall quality and coherence. Additionally, we inaugurate a Text-conditioned Image-to-Animation Benchmark, a comprehensive collection designed to challenge and quantify the adaptability and precision of the proposed LASER. Summing up, our contributions can be concluded as:

\u2022 We introduce the tuning-free text-conditioned image-to-animation task, designed to craft high-quality animations based on multimodal input using pre-trained text-to-image models, without additional fine-tuning or annotations. To evaluate the efficacy of our approach, we introduce the Text-conditioned Image-to-Animation Benchmark, hoping that it may support future studies within this domain.

\u2022 The proposed LASER is encapsulated by the progressive process of LLM planning \u2192 prompt-aware editing \u2192 stable morphing, enabling smooth texture- and non-rigid animation generation. 
• Both qualitative and quantitative assessments underscore the superior efficacy of the proposed framework, showcasing its proficiency in generating animations that are not only smooth and of high quality but also diverse.

2 RELATED WORK

Text-to-Image Generation. In artificial intelligence [49-51], text-to-image (T2I) generation aims to generate high-quality images from text descriptions. Previous text-conditioned image generation approaches were primarily based on Generative Adversarial Networks (GANs) [4, 41, 43, 44, 52], leveraging their robust capabilities for high-fidelity image synthesis. Through multimodal vision-language learning, these models endeavored to align text descriptions with synthesized image content, yielding gratifying synthesis results on specific domain datasets. Recently, diffusion models [8, 12, 24] have demonstrated exceptional generative capabilities, achieving state-of-the-art results in generation quality and diversity. By incorporating text prompts into diffusion models, various text-to-image diffusion models [28, 30, 31] have been developed. They are conditioned on the provided text via cross-attention layers, ensuring that the generated images are not only visually coherent but also semantically consistent with the input descriptions.

Text-guided Image Editing. Text-guided image editing is a challenging task that aims to edit images based on textual descriptions, enabling users to achieve desired changes in natural language. Previous deep-learning approaches based on GANs [20, 23, 27, 40] achieved some success, but they are limited to specific domain datasets and exhibit limited applicability and generalization. VQGAN-CLIP [7] combines VQGAN [9] and CLIP [29] to produce high-quality images and enable precise editing, yielding diverse and controllable results. However, this method suffers from slow generation speed and high computational cost.
Recently, diffusion models trained on large-scale text-image pairs, such as Imagen [31] and Stable Diffusion [30], have achieved unprecedented success in text-to-image generation. They therefore serve as a robust prior for various editing tasks, including text-guided image manipulation [5, 6, 10, 18, 26, 38]. Prompt-to-Prompt [10] and Plug-and-Play [38] utilize cross-attention or spatial features to edit both global and local aspects of an image by directly modifying the text prompt. MasaCtrl [6] and Imagic [18] can handle non-rigid transformations such as changing object poses. In particular, Plug-and-Play [38] considers text-guided image-to-image translation, which aims to estimate a mapping of an image from a source domain to a target domain, where the target domain is specified not by a dataset of images but by a target text prompt. However, most of these approaches directly generate the final edited image, with limited exploration of continuous animations such as image morphing.

Image Morphing. Image morphing is a task in computer graphics and image processing that aims to obtain plausible intermediate images for a smooth transition between two images [2, 53]. With the advent of deep learning, neural networks have been used for image morphing, learning to identify correspondences and generate intermediate frames through latent interpolation. For instance, work on GANs [15-17, 32, 33] has demonstrated that their latent embedding space is highly continuous, and linear interpolation between two latent codes yields impressive image morphing results. Recent studies on diffusion models have also indicated the feasibility of generating plausible intermediate images through latent noise interpolation and text embedding interpolation [3, 36, 39].
Impus [42] explored the application of diffusion models to image morphing, performing interpolation in the locally linear continuous text embedding space and the Gaussian latent space. DiffMorpher [45] utilizes pre-trained diffusion models to achieve smooth and natural image interpolation and morphing. It performs spherical linear interpolation on the latent noise obtained through DDIM inversion of two images and combines it with linear interpolation of the text conditions, thus addressing the difficulty of smooth interpolation between two image samples within the unstructured latent space of diffusion models.

3 METHODOLOGY

Given a user-defined descriptor $P^*$ and an initial image $I_0$ (provided or generated), our method generates the animation sequence $\{x^{(\alpha)}_0, x^{(\alpha)}_1, \ldots, x^{(\alpha)}_n\}$, where $\alpha$ varies from 0 to 1. The length of each sequence $x^{(\alpha)}_i$ is set by $n_f$, and the number of sequences corresponds to the number of transformation stages $n_t$. The resulting animation is expected to visually manifest a smooth transition from $I_0$ to $I_n$ with the characteristics described by $P^*$. To guide this generative process, a series of descriptive prompts $\{P_0, P_1, \ldots, P_{n_t}\}$ is derived to anchor each keyframe in the animation's continuity.

3.1 Preliminary for Diffusion Models

Diffusion models [12, 24, 35] are probabilistic generative models that produce images by gradually denoising samples from a noise distribution, e.g., a Gaussian distribution. The generation process consists of two main phases: the forward (diffusion) process and the reverse (denoising) process.
The forward process gradually adds noise to initial data $x_0$ to generate noisy data $x_t$, given a variance schedule $\alpha_t \in (0, 1)$ at time step $t$:
$$q(x_t \mid x_0) = \mathcal{N}\big(x_t;\ \sqrt{\bar\alpha_t}\, x_0,\ (1-\bar\alpha_t)\mathbf{I}\big), \qquad (1)$$
where $\bar\alpha_t = \prod_{i=1}^{t} \alpha_i$. After $T$ steps, we obtain noise $x_T \sim \mathcal{N}(0, \mathbf{I})$. The reverse process aims to gradually remove the noise. By utilizing Bayes' rule and the Markov property, we can express the conditional probability as:
$$q(x_{t-1} \mid x_t) = \mathcal{N}\Big(x_{t-1};\ \tfrac{1}{\sqrt{\alpha_t}}\Big(x_t - \tfrac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\,\epsilon\Big),\ \tilde\beta_t \mathbf{I}\Big), \qquad (2)$$
where $\tilde\beta_t$ is a time-dependent constant and the added noise $\epsilon$ can be predicted by a neural network $\epsilon_\theta$. By sampling $x_{t-1}$ iteratively, we finally obtain a clean image $x_0$ from the initial Gaussian noise $x_T$. We employ text-conditioned Stable Diffusion (SD) [30], which operates in a lower-dimensional latent space rather than pixel space. It begins by encoding images into latent representations with a variational auto-encoder (VAE) [19], followed by the diffusion-denoising process within the latent space. After denoising, the latent representation is decoded back into image space via a decoder network, yielding the final generated image.
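As a minimal numerical sketch (not the authors' implementation), the forward step of Eq. (1) and the reverse mean of Eq. (2) can be written as:

```python
import numpy as np

def forward_diffuse(x0, t, alphas, rng):
    """Sample x_t ~ q(x_t | x_0) as in Eq. (1)."""
    alpha_bar = np.prod(alphas[:t])                      # cumulative \bar{alpha}_t
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

def reverse_mean(xt, t, alphas, eps_pred):
    """Mean of q(x_{t-1} | x_t) as in Eq. (2), given predicted noise eps_pred."""
    alpha_bar = np.prod(alphas[:t])
    return (xt - (1.0 - alphas[t - 1]) / np.sqrt(1.0 - alpha_bar) * eps_pred) \
           / np.sqrt(alphas[t - 1])
```

In practice the noise `eps_pred` comes from the trained network $\epsilon_\theta$; here it is simply a function argument.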
In the noise-predicting network $\epsilon_\theta$, residual blocks process image features to generate intermediate features $f^l_t$, which are then used in the self-attention module to produce $Q$, $K$, $V$ for capturing long-range interactions. Subsequently, cross-attention integrates the textual prompt $P$, merging text and image semantics. The attention mechanism can be formulated as:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\tfrac{QK^{T}}{\sqrt{d_k}}\Big)V, \qquad (3)$$
where $Q$, $K$, and $V$ represent queries, keys, and values, respectively, and $d_k$ denotes the key/query dimension used to scale the dot product. In this model, $Q$ originates from spatial features, while $K$ and $V$ come from spatial features for self-attention and from text embeddings for cross-attention. The attention layers within the SD model significantly affect image composition and development [10, 38], guiding image editing and synthesis by manipulating attention-related information during denoising [6].

3.2 LLM-driven Controller

In this section, we first utilize the LLM to extract aligned textual prompts for each key animation stage. Our approach supports two input modalities: text-image pairs and text-only inputs. If the user provides an image, it is directly used as the initial image $I_0$. When the initial image is absent, we leverage a pre-trained Stable Diffusion model to generate $I_0$. To generate animations that adhere to the semantics of a specified text description $P^*$, we require text prompts $\{P_0, P_1, \ldots$
, $P_{n_t}\}$ for each key animation stage, as these prompts directly guide the animation process. High-quality, detailed text prompts are crucial when no initial image is provided, since the model generates $I_0$ from the semantic cues of $P_0$. Prompts for Stable Diffusion should be richly descriptive to accurately produce a high-quality starting image. To enhance the quality and stability of the process, we introduce two agents based on large language models: the "Stage Image Text Prompt Agent" (SIA) and the "Stable Diffusion Prompt Generator Agent" (PGA). Initially, SIA generates the text prompts that guide image generation for each key stage, as illustrated in Fig. 2 (a). SIA generates text prompts based on two fundamental principles: i) by decomposing the animation descriptor $P^*$ into multiple independent processes, SIA reduces semantic differences between adjacent prompts, enhancing the overall quality of the results; ii) the prompts must be highly aligned to facilitate high-quality intermediate results through linear interpolation.

Figure 2: Overview of the proposed LASER. (a) The LLM-driven Controller parses the textual guidance (e.g., "A sitting cat turns into a jumping dog.") into descriptive prompts for the corresponding keyframes of the animation. (b) The LLM then maps these prompts to feature- and attention-injection control signals, facilitating the simultaneous portrayal of both texture and non-rigid editing. (c) The animation generator leverages spherical linear interpolation and adaptive instance normalization to generate the intermediate images between keyframes, achieving smooth animation generation.
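The alignment principle behind SIA can be illustrated with a toy, hand-written decomposition (the actual agent is an LLM; the helper below is hypothetical and uses the cat-to-dog example from Fig. 2):

```python
def decompose(subject_a, subject_b, action_a, action_b, scene):
    """Hypothetical SIA-style decomposition: vary one slot at a time so that
    adjacent prompts stay aligned and differ only minimally."""
    template = "A {subject} {action} {scene}."
    return [
        template.format(subject=subject_a, action=action_a, scene=scene),  # P0
        template.format(subject=subject_a, action=action_b, scene=scene),  # P1: change action first
        template.format(subject=subject_b, action=action_b, scene=scene),  # P2: then change subject
    ]
```

For instance, `decompose("cat", "dog", "sitting", "jumping", "on the ground")` yields "A cat sitting on the ground.", "A cat jumping on the ground.", and "A dog jumping on the ground.", matching the $P_0$-$P_2$ prompts in Fig. 2.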
Given the local linearity of the CLIP text embedding space [18], minimizing the gap between adjacent embeddings is essential. A practical method is to use consistent sentence structures across prompts, such as "A cat [action] on the ground" and "A [animal] jumping on the ground" [42]. This ensures that while the prompts are semantically distinct, they share a common categorical root, streamlining the generation process. This generation method successfully mitigates the non-linearity and discontinuity commonly encountered between text embeddings. With the deployment of the Stage Image Text Prompt Agent (SIA), we significantly bolster our model's capacity to generate semantically coherent and high-quality images. The Stable Diffusion Prompt Generator Agent (PGA) converts the broad, high-level concepts from SIA into richly detailed and vividly descriptive prompts specifically crafted for Stable Diffusion. As depicted in Fig. 2 (b), once PGA receives the initial text prompt from SIA, it refines this input into a more detailed prompt. This enhanced prompt not only delineates the subject and action but also enriches the scene with specific elements such as texture, lighting, and artistic style, which instructs Stable Diffusion to produce images of higher fidelity and complexity [22].

3.3 Hybrid Prompt-aware Editor

This section utilizes the aligned textual prompts to obtain keyframe images. During the editing process, $Z^*_{1,T}$ is a direct copy of $Z^*_{0,T}$. For $i \geq 1$, each keyframe $x_i$ undergoes DDIM inversion to produce $Z_{i,T}$, which is then cloned to form $Z^*_{i+1,T}$ for the subsequent keyframe.
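A single deterministic DDIM inversion step, which maps a latent at step $t$ to step $t+1$, can be sketched as follows (a simplified illustration, not the authors' code; `eps` stands in for the frozen U-Net's noise prediction, and `abar_t`, `abar_next` are the cumulative products $\bar\alpha$ at the two steps):

```python
import numpy as np

def ddim_invert_step(x_t, abar_t, abar_next, eps):
    """One deterministic DDIM inversion step (latent at t -> latent at t+1)."""
    # Predict the clean latent implied by the current noise estimate...
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps) / np.sqrt(abar_t)
    # ...then re-noise it to the next (noisier) time step deterministically.
    return np.sqrt(abar_next) * x0_pred + np.sqrt(1.0 - abar_next) * eps
```

Iterating this step from $t = 0$ to $T$ yields the inverted latent $Z_{i,T}$ used to initialize the next keyframe's denoising.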
Despite using aligned prompts for text-guided image editing, we still observe a marked discrepancy in semantic identity between the images, which results in animations that do not transition smoothly. To overcome this challenge, we draw inspiration from previous image editing techniques [6, 38] and propose a feature and attention injection method controlled by the LLM, tailored to query semantically similar content from the previous keyframe according to the changing nature of the corresponding stage. Using DDIM inversion on the prior keyframe, we obtain the initial state $Z_{i,T}$. Prior work [38] has demonstrated that injecting features $f^l_t$ within residual blocks and self-attention projections $q^l_t$, $k^l_t$ significantly boosts text-guided image editing. The encoding in the fourth layer, $f^4_t$, specifically captures the shared semantics necessary for structure retention during generation. Moreover, the self-attention injections are underpinned by the attention scores, which arise from the product of query and key vectors. By injecting the features $f^4_t$ into the fourth layer of the residual blocks and introducing the self-attention elements $q^l_t$ and $k^l_t$ throughout all decoder layers, we achieve texture variations between keyframes.
Figure 3: Overview of Feature and Association Injection.

We refer to this injection strategy as "Feature and Association Injection" (FAI). However, the aforementioned method struggles with non-rigid keyframe modifications. The usual solution, limiting the injection range to reflect the rigid changes described by the prompts, risks losing image identity. To navigate this, especially for non-rigid edits, we avoid injecting into the residual blocks, thereby maintaining the image's structural integrity without it being obscured by local semantics. Our strategy instead uses targeted attention injection. Since the image layout solidifies early in denoising and self-attention queries align semantically [6], the queries can extract content from various objects. After the early denoising steps, we inject keys $k^l_t$ and values $v^l_t$ from the previous keyframe's self-attention block, as shown in Fig. 4. This process first forms the objects' outlines following the text prompts and then enriches the generated structure with detailed content from the source image. Consequently, we achieve semantically coherent images that also support non-rigid transitions. We refer to this injection strategy as "Key-Value Attention Injection" (KVAI).
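A toy sketch of KVAI at a single self-attention layer might look as follows (a hypothetical simplification: queries come from the current keyframe's denoising pass, while keys and values are injected from the previous keyframe):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention_kvai(q_tgt, k_src, v_src):
    """Self-attention per Eq. (3), but with keys/values injected from the
    previous keyframe (KVAI): the new prompt's queries lay out the image,
    while the content is pulled from the source keyframe's K/V."""
    d_k = q_tgt.shape[-1]
    scores = q_tgt @ k_src.T / np.sqrt(d_k)
    return softmax(scores) @ v_src
```

In a real pipeline this substitution would be implemented as a hook on the U-Net decoder's self-attention layers, storing $k^l_t$, $v^l_t$ during the source keyframe's pass and replaying them during the target keyframe's pass.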
Up to this point, the model has acquired the capability to generate diverse keyframes, enabling the production of the expected animations. Recognizing the need for a systematic approach to select the optimal injection strategy at each stage of animation generation, we develop the Injection Control Agent (ICA), as shown in Fig. 2 (a). ICA's primary role is to process the text prompts from the Stage Image Text Prompt Agent (SIA), performing an in-depth analysis of the semantic differences between the text prompts at consecutive key stages. This analysis enables ICA to issue tailored control signals: "0" directs the FAI strategy for stages where texture changes are dominant, and "1" directs the KVAI strategy for stages with non-rigid transformations. By precisely managing the type of attention injection at each stage, ICA ensures that the generated animations are both visually coherent and closely aligned with the textual descriptors.

3.4 Animation Generator

In this section, we generate intermediate images between keyframes to obtain consistent and smooth animations. After generating the text prompts $P_0, P_1, \ldots, P_{n_t}$ corresponding to each key stage in Section 3.2, we obtain the respective text embeddings $e_0, e_1, \ldots, e_{n_t}$. When generating intermediate images, we perform a simple linear interpolation between the text embeddings of two adjacent key stages to obtain the corresponding text embedding $e$.
Figure 4: Overview of Key-Value Attention Injection.

$$e^i_\alpha = (1-\alpha)\, e_i + \alpha\, e_{i+1} \qquad (4)$$

In constructing the animation sequence, the interpolation parameter $\alpha$ is discretized into a series of values that facilitate a smooth transition between frames. This discretization defines a set of equidistant points within the closed interval $[0, 1]$, where the number of points corresponds to the intended number of frames in an animation stage, denoted $n_f$. Thus $\alpha$ takes the values $\alpha_0, \alpha_1, \ldots, \alpha_{n_f-1}$, where $\alpha_0 = 0$ represents the starting frame and $\alpha_{n_f-1} = 1$ the ending frame. The intermediate values of $\alpha$ correspond to proportionally spaced frames within the animation sequence, ensuring linear spacing. This arrangement guarantees that each frame represents a weighted blend of the preceding and subsequent key-stage embeddings, facilitating a smooth and continuous transformation across the animation. To ensure visual continuity in the sequence of intermediate images, we interpolate the latent noise of these images using the latent noise from the adjacent key stages. However, standard linear interpolation may introduce artifacts.
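Eq. (4), together with the equidistant discretization of $\alpha$, can be sketched as follows (illustrative only; `e_i` and `e_next` stand for the adjacent key-stage text embeddings):

```python
import numpy as np

def stage_embeddings(e_i, e_next, n_f):
    """Eq. (4): blend two adjacent key-stage text embeddings over n_f
    equidistant alphas in [0, 1], so that alpha_0 = 0 reproduces e_i and
    alpha_{n_f - 1} = 1 reproduces e_next."""
    return [(1.0 - a) * e_i + a * e_next for a in np.linspace(0.0, 1.0, n_f)]
```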
To address this, we adopt spherical linear interpolation (slerp) [34], which effectively minimizes artifacts and enhances the smoothness of transitions:
$$z^i_{T\alpha} = \frac{\sin((1-\alpha)\varphi)}{\sin \varphi}\, Z_{i,T} + \frac{\sin(\alpha \varphi)}{\sin \varphi}\, Z_{i+1,T}, \qquad (5)$$
where $\varphi = \arccos\Big(\frac{Z_{i,T} \cdot Z_{i+1,T}}{\|Z_{i,T}\|\,\|Z_{i+1,T}\|}\Big)$. To maintain consistency in the color and luminance of both generated and source images, we implement a variant of Adaptive Instance Normalization (AdaIN) [14] to adjust the interpolated latent noise $z^i_{0\alpha}$ before denoising. We calculate and then interpolate the per-channel means ($\mu$) and standard deviations ($\sigma$) of the latent noises:
$$\mu^i_\alpha = (1-\alpha)\,\mu_i + \alpha\,\mu_{i+1} \qquad (6)$$
$$\sigma^i_\alpha = (1-\alpha)\,\sigma_i + \alpha\,\sigma_{i+1} \qquad (7)$$
$$\tilde z^i_{0\alpha} = \sigma^i_\alpha \left( \frac{z^i_{0\alpha} - \mu(z^i_{0\alpha})}{\sigma(z^i_{0\alpha})} \right) + \mu^i_\alpha \qquad (8)$$
Subsequently, the adjusted latent noise $\tilde z^i_{0\alpha}$ supplants the original $z^i_{0\alpha}$ during the denoising steps, thereby improving the brightness and color consistency of the resulting images.
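Under the simplifying assumption of flattened latent tensors and global (rather than strictly per-channel) statistics, Eqs. (5)-(8) can be sketched as:

```python
import numpy as np

def slerp(z0, z1, alpha):
    """Eq. (5): spherical linear interpolation between two latent noises."""
    cos = np.dot(z0.ravel(), z1.ravel()) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    phi = np.arccos(np.clip(cos, -1.0, 1.0))
    if np.isclose(np.sin(phi), 0.0):        # (anti-)parallel latents: fall back to lerp
        return (1.0 - alpha) * z0 + alpha * z1
    return (np.sin((1.0 - alpha) * phi) * z0 + np.sin(alpha * phi) * z1) / np.sin(phi)

def adain_adjust(z, mu_i, sigma_i, mu_next, sigma_next, alpha):
    """Eqs. (6)-(8): renormalize the interpolated noise z to the interpolated
    mean/std of the two endpoint latents (global statistics for simplicity)."""
    mu_a = (1.0 - alpha) * mu_i + alpha * mu_next
    sigma_a = (1.0 - alpha) * sigma_i + alpha * sigma_next
    return sigma_a * (z - z.mean()) / z.std() + mu_a
```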
Figure 5: Qualitative evaluation. Our method produces animations that significantly outperform previous methods in terms of quality, smoothness, and alignment with user input.

Finally, during the denoising process of each intermediate image $x^i_\alpha$, we also perform feature and self-attention injection. When the stage number $n_t$ is not "-1", we implement the standard injection strategy, wherein the injection used while generating $x^i_\alpha$ is obtained from $x_i$. When SIA's feedback on $n_t$ is "-1", it indicates a special request from the user for "single-stage generation", which involves both texture changes and non-rigid transformations within a single animation stage. In such cases, ICA leads the model to execute the Hybrid Attention Injection (HAI) strategy. HAI solves the issue that, with the normal injection strategies, the model cannot produce animations that simultaneously exhibit changes in texture and structure within a single phase; this phenomenon is further discussed in Section 4. The HAI process first edits $x_0$ into $x_1$ using Feature and Association Injection (FAI), and subsequently edits $x_2$ from $x_1$ using KVAI. Following these edits, DDIM inversion is applied to extract the latent representations $Z_{0,T}$ and $Z_{2,T}$, which are then interpolated to construct the intermediate latent representation $z^i_{T\alpha}$.
During the denoising phase, injections are administered according to the interpolation parameter $\alpha$: injections of $\{k^l_t, v^l_t\}$ corresponding to $x_0$ are applied in the initial $(1-\alpha)T$ steps, and those corresponding to $x_2$ in the subsequent $\alpha T$ steps. This method effectively conveys the semantic and structural information of significantly transformed images, ensuring smooth and consistent animations by querying local structures and textures from the input images throughout the denoising process.

4 EXPERIMENTS

We employ the publicly available Stable Diffusion v2.1-base [30] as our diffusion model and GPT-4 8k [25] as the LLM in our experiments. For generating the initial image $I_0$, we utilize two pre-trained models, the real-style model dreamshaper-8 and the anime-style model MeinaMix, to assess our model's capability to produce animations across diverse styles. In creating intermediate images, we apply deterministic DDIM sampling. For keyframe synthesis, to balance efficiency and quality, we perform deterministic DDIM inversion with 100 forward steps followed by deterministic DDIM sampling with 100 backward steps. When implementing Feature and Association Injection (FAI), we inject features and self-attention within the first 25 of the 50 sampling steps, specifically targeting layers 4 to 10 of the U-Net decoder. For Key-Value Attention Injection (KVAI), injections commence after the initial five sampling steps and are applied within layers 6 to 10 of the decoder. The Hybrid Attention Injection (HAI) method follows the same timing and targets the same layers as KVAI. These injection strategies can be customized to align with different input images $I_0$. For sampling, the classifier-free guidance scale is set to 7.5.
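The $\alpha$-dependent HAI schedule described above can be sketched as a simple step-to-source mapping (a hypothetical helper, not the authors' code):

```python
def hai_source(step, total_steps, alpha):
    """Hypothetical HAI scheduler: for an intermediate frame at position alpha,
    take key/value injections from keyframe x0 during the first (1 - alpha) * T
    denoising steps, and from keyframe x2 during the remaining alpha * T steps."""
    return "x0" if step < (1.0 - alpha) * total_steps else "x2"
```

Frames near $x_0$ (small $\alpha$) thus inherit most of their injected structure from $x_0$, while frames near $x_2$ (large $\alpha$) inherit it from $x_2$, blending the two edits across the stage.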
Runtime evaluations are performed on an NVIDIA RTX 4090 GPU.

4.1 Text-conditioned Image-to-Animation Benchmark

Our method enables text-guided image-to-animation transitions, leveraging either image-text or solely textual descriptions through pre-trained text-to-image diffusion models. Due to the lack of benchmarks for such configurations, we propose a new mini dataset, the Text-conditioned Image-to-Animation Benchmark, which consists of 100 sets of textual descriptions, categorized as follows: 20 sets of animal actions and appearance transformations, 20 sets focused on animal appearance and species changes, 20 sets depicting transitions in natural landscapes and objects, 20 sets related to human figures and alterations in painting styles, 10 sets featuring character identity transformations, and 10 sets concerning changes in object colors and materials. Our model uses these textual prompts to generate 100 corresponding animation sequences. This benchmark serves as a preliminary evaluation of our model's performance, and we hope it will facilitate further research in this direction.

4.2 Qualitative Evaluation

We present a visual comparison of our method against prior approaches to underscore its superiority. Although no other tuning-free methods for text-controlled image-to-animation are currently available, we draw a detailed comparison with state-of-the-art baselines suited to text-controlled image morphing. These include: 1) diffusion-based deep interpolation methods such as DDIM [35], Diff.Interp [39], and DiffMorpher [45], all using Stable Diffusion v2.1-base; and 2) text-driven, tuning-free image editing methods such as PnP [38] and MasaCtrl [6].
For the first category of methods, which depend on multiple pre-existing image inputs and cannot generate content directly from text, our experimental procedure includes: i) utilizing LLM control for generating consistent outputs, which involves creating initial images and key-stage prompts through Stable Diffusion prompt generation, as in our method; ii) generating initial images using the same Stable Diffusion checkpoint employed in our experiments; and iii) producing subsequent key-stage images via DDIM inversion. For the second category, which also leverages LLM control, we apply the same text embedding interpolation rules to generate intermediate images, aligning with our approach.

Generation Results. As illustrated in Fig. 5, our method outperforms previous approaches in alignment with user input, transition smoothness, semantic coherence, and preservation of the animation subject's semantic identity. Previous methods often fail to respond accurately to user-specified changes in appearance and motion. They typically struggle to generate the intended motions or introduce noticeable artifacts after motion changes, often resulting in a significant loss of primary subject information in the images. In contrast, our approach consistently generates coherent animations that closely align with the semantic content of the user input, producing visually satisfactory outputs. For additional generated results, we refer readers to the appendix.

Generation Diversity. Furthermore, the extensive prior knowledge and generative capabilities of the LLM enhance our model's ability to produce diverse outputs, as shown in Fig. 6. When users request multiple distinct results, our model meets this demand by generating high-quality, varied animations, significantly broadening its creative potential.
4.3 Quantitative Evaluation

Drawing on established objectives from prior research [6, 42, 45], we quantitatively evaluate the models using the following metrics. (1) Learned Perceptual Image Patch Similarity (LPIPS, ↓) [47]: LPIPS assesses the perceptual deviation within an animation sequence. We compute the total LPIPS (LPIPS$_T$) to quantify the overall perceptual variance throughout the sequence, highlighting the dynamic range of visual changes. Additionally, the maximum LPIPS to the nearest endpoint (LPIPS$_M$) identifies the maximum perceptual variance, providing insight into the most significant changes within the animation. These measurements assess the directness of the animation, i.e., whether the most efficient transition is found. (2) CLIP Score (↑) [11]: the CLIP score quantifies the alignment between images and textual descriptions, serving as a tool for evaluating the coherence and relevance of generated images with respect to their textual prompts. For an intermediate image, we compute its CLIP score as the average similarity with the prompts before and after editing (e.g., $x^0_\alpha$ with $P_0$ and $P_1$, $x^1_\alpha$ with $P_1$ and $P_2$, etc.). (3) Perceptual Path Length (PPL, ↓) [17]: to evaluate smoothness, i.e., that transitions within the generated animation sequence are seamless between any two consecutive images, we compute
$$\mathrm{PPL}_\epsilon = \mathbb{E}_{\alpha \sim U(0,1)}\Big[\tfrac{1}{\epsilon^2}\, \mathrm{LPIPS}\big(x(\alpha), x(\alpha+\epsilon)\big)\Big],$$
where $\epsilon$ is a small constant that we set to $\tfrac{1}{n_f - 1}$. Note that we regard the entire sequence as a single animation process, despite it consisting of multiple stages.
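Treating the whole sequence as one animation, a simplified PPL estimator over consecutive generated frames might look like this (with `lpips_fn` a placeholder for the LPIPS network):

```python
import numpy as np

def ppl(frames, lpips_fn):
    """Simplified PPL over a generated sequence: average the perceptual
    distance between consecutive frames, scaled by 1/eps^2 with
    eps = 1 / (n_f - 1), i.e., the alpha spacing between frames."""
    eps = 1.0 / (len(frames) - 1)
    dists = [lpips_fn(a, b) for a, b in zip(frames[:-1], frames[1:])]
    return float(np.mean(dists)) / eps ** 2
```

Because consecutive frames are exactly $\epsilon$ apart in $\alpha$, averaging over all consecutive pairs approximates the expectation over $\alpha \sim U(0,1)$.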
The quantitative evaluation results of all methods are presented in Table 1. Generation Quality. Our method achieves a leading CLIP Score, demonstrating its semantic alignment with user input. While DDIM may excel in CLIP Score, it often compromises structural coherence in favour of textual alignment, an issue our approach avoids, as demonstrated in Fig. 5. Because PnP cannot effectively perform non-rigid edits, its generated animations often exhibit only appearance changes, thereby achieving higher smoothness; this limitation restricts its capability to handle a diverse range of animation generation tasks. Similarly, MasaCtrl, which struggles with texture transformations, falls short in producing a diverse range of animations. Even when benchmarked against deep interpolation techniques that require fine-tuning, our results consistently exhibit superior smoothness. This performance underscores the effectiveness of our method and affirms its suitability for a wide range of text-conditioned image-to-animation generation scenarios. Figure 6 (textual guidance: \"A photo of an anime-style girl transforms into the painting styles of different artists. 3 groups with different artists.\"): the rich prior knowledge of the LLM grants the model the ability to generate diverse outcomes from the same input text and image. Table 1: Comparison of current methods. Superscript ♣ indicates that the model employs an external network (e.g., ControlNet [46]) to generate intermediate images, and † indicates that the model fine-tunes with training LoRA [13].
\"TE\" stands for texture editing, a process that alters the surface appearance of objects within an image to match a specific texture style while preserving the underlying structure and layout of the scene. \"NRIE\" refers to non-rigid image editing, which alters the shape and structure of objects in images, such as changing facial expressions or body poses. \"AG\", standing for animation generation, refers to producing intermediate images between keyframes.

| Method | TE | NRIE | AG | CLIP Score ↑ | LPIPS_T ↓ | LPIPS_M ↓ | PPL ↓ | Runtime ↓ |
| DDIM [35] | ✓ | | | 27.37 | 3.13 | 0.49 | 36.91 | 32s |
| PnP [10] | ✓ | | | 26.57 | 0.90 | 0.23 | 10.51 | 2min |
| MasaCtrl [6] | | ✓ | | 26.56 | 1.54 | 0.28 | 17.96 | 37s |
| Diff.Interp♣ [39] | | | ✓ | 20.05 | 5.14 | 0.58 | 72.29 | 2min6s |
| DiffMorpher† [45] | | | ✓ | 26.94 | 0.99 | 0.40 | 14.78 | 1min46s |
| Ours | ✓ | ✓ | ✓ | 26.99 | 1.22 | 0.25 | 14.14 | 41s |

Generation Efficiency. In assessing efficiency, our method uniquely blends quality with speed, setting it apart from other deep interpolation techniques. This superior performance primarily stems from operating without fine-tuning. Although it may not lead in every image editing benchmark, our model excels at managing a diverse array of edits, enabling it to tackle a broad spectrum of generative tasks; given this versatility, the efficiency of our method is exceptionally high. 4.4 Ablation Study We conducted an ablation study to evaluate the effectiveness of the proposed components, with experimental results shown in Table 2 and Fig. 2 (in Appendix). The findings demonstrate that DDIM alone cannot accurately restore the structure of the input image. In contrast, our feature and self-attention injections address the loss of texture and structural information during the DDIM generation process, significantly enhancing the quality of the generated animations.
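The self-attention injection idea — letting the target generation pass attend to keys and values taken from a reference pass — can be illustrated with a toy single-head attention in NumPy. This is a conceptual sketch, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    # standard scaled dot-product attention, single head
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def kv_injected_attention(q_target: np.ndarray, kv_source) -> np.ndarray:
    """Key/value injection: the target's queries attend to the keys/values of a
    reference (source) pass, transplanting its appearance into the target."""
    k_src, v_src = kv_source
    return attention(q_target, k_src, v_src)
```

With a single source key, every query collapses onto the source value, which makes the transplanting effect easy to verify.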
However, remnants of structural features from the initial state are still noticeable in the generated animations, including in the intermediate segments. The implementation of Latent Interpolation addresses this issue: while it may slightly elevate the LPIPS_T and PPL metrics, it ensures that subsequent frames more accurately reflect the semantic information of the transformed state. After applying the AdaIN adjustment to the latent noise, the consistency of brightness and color across the image sequence improves. Figure 7: The comparative effects of different injection strategies (standard FAI, standard KVAI, and HAI) given the textual description \"A sitting cat turns into a jumping dog\". Table 2: Ablation study results. Injection, Latent Interp, and AdaIN represent the different components studied in the ablation.

| Method | Injection | Latent Interp | AdaIN | CLIP Score ↑ | LPIPS_T ↓ | LPIPS_M ↓ | PPL ↓ |
| DDIM [35] | | | | 27.37 | 3.13 | 0.49 | 36.91 |
| | ✓ | | | 26.73 | 1.09 | 0.26 | 12.85 |
| | ✓ | ✓ | | 26.81 | 1.22 | 0.26 | 14.03 |
| Ours | ✓ | ✓ | ✓ | 26.99 | 1.22 | 0.25 | 14.14 |

To demonstrate the effectiveness of Hybrid Attention Injection (HAI) in producing single-stage animations that incorporate both texture changes and non-rigid transformations, we conducted qualitative experiments, generating animations with the basic injection strategies (FAI and KVAI) and with HAI; the results are displayed in Fig. 7. When employing only FAI, the images failed to respond to non-rigid changes; using KVAI alone did not produce significant texture modifications. Our proposed HAI strategy successfully handles both texture and non-rigid changes, effectively fulfilling the task of single-stage animation generation." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.14745v1", |
| "title": "TAAT: Think and Act from Arbitrary Texts in Text2Motion", |
| "abstract": "Text2Motion aims to generate human motions from texts. Existing datasets rely\non the assumption that texts include action labels (such as \"walk, bend, and\npick up\"), which is not flexible for practical scenarios. This paper redefines\nthis problem with a more realistic assumption that the texts are arbitrary.\nSpecifically, arbitrary texts include existing action texts composed of action\nlabels (e.g., A person walks and bends to pick up something), and introduce\nscene texts without explicit action labels (e.g., A person notices his wallet\non the ground ahead).\n To bridge the gaps between this realistic setting and existing datasets, we\nexpand the action texts on the HumanML3D dataset to more scene texts, thereby\ncreating a new HumanML3D++ dataset including arbitrary texts. In this\nchallenging dataset, we benchmark existing state-of-the-art methods and propose\na novel two-stage framework to extract action labels from arbitrary texts by\nthe Large Language Model (LLM) and then generate motions from action labels.\nExtensive experiments are conducted under different application scenarios to\nvalidate the effectiveness of the proposed framework on existing and proposed\ndatasets. The results indicate that Text2Motion in this realistic setting is\nvery challenging, fostering new research in this practical direction. Our\ndataset and code will be released.", |
| "authors": "Runqi Wang, Caoyuan Ma, GuoPeng Li, Zheng Wang", |
| "published": "2024-04-23", |
| "updated": "2024-04-23", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Text2Motion aims to generate human motions from texts. Existing datasets rely\non the assumption that texts include action labels (such as \"walk, bend, and\npick up\"), which is not flexible for practical scenarios. This paper redefines\nthis problem with a more realistic assumption that the texts are arbitrary.\nSpecifically, arbitrary texts include existing action texts composed of action\nlabels (e.g., A person walks and bends to pick up something), and introduce\nscene texts without explicit action labels (e.g., A person notices his wallet\non the ground ahead).\n To bridge the gaps between this realistic setting and existing datasets, we\nexpand the action texts on the HumanML3D dataset to more scene texts, thereby\ncreating a new HumanML3D++ dataset including arbitrary texts. In this\nchallenging dataset, we benchmark existing state-of-the-art methods and propose\na novel two-stage framework to extract action labels from arbitrary texts by\nthe Large Language Model (LLM) and then generate motions from action labels.\nExtensive experiments are conducted under different application scenarios to\nvalidate the effectiveness of the proposed framework on existing and proposed\ndatasets. The results indicate that Text2Motion in this realistic setting is\nvery challenging, fostering new research in this practical direction. Our\ndataset and code will be released.", |
| "main_content": "INTRODUCTION Text2Motion [1–3, 5, 9, 12, 14, 25, 31, 32, 36, 37] denotes generating motions from natural language, which has proven useful in reducing labor costs in industries that require motion-capture actors and manual editing, such as movie production and game development. Figure 2: HumanML3D++ Dataset Structure. We expand the action texts in the HumanML3D dataset to multiple scene texts. Taking a set of data as an example, HumanML3D provides 3-5 action texts for each motion; building upon this, we provide two scene texts for each action text. In these entertainment industries, the motion editing of characters is limited to the development stage, and motion patterns are fixed after release. However, in more flexible applications the target needs to interact with users, which brings various unrestricted scenes, such as embodied intelligence [33] and interactive Non-Player Characters (NPCs) in open-world games. Therefore, exploring the generation of potential motions from arbitrary texts is important. However, as shown in Figure 1, existing datasets [10, 13, 23, 26, 27, 30] simply assume that motions come from specific action labels or action texts (i.e., inputs with action labels). We argue this is impractical for flexible applications that need scene inputs (i.e., inputs without action labels). For example, given the event \"A person picks up something\", Action2Motion [4, 6, 10, 16, 20, 22, 24, 35] (the left figure in Figure 1) can only generate motions from specific action labels such as \"walk, bend, and pick\", rather than from a sentence. More flexibly, Text2Motion (the middle figure in Figure 1) generates motions from action texts, such as \"A person walks and bends to pick up something\".
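The one-to-many structure described in the Figure 2 caption (each motion carries 3-5 action texts, and each action text two scene texts) can be sketched as a simple record type. The field names are illustrative, not the released data format:

```python
from dataclasses import dataclass, field

@dataclass
class MotionRecord:
    """One HumanML3D++-style entry: a motion, its action texts,
    and the LLM-written scene texts attached to each action text."""
    motion_id: str
    action_texts: list                      # 3-5 captions inherited from HumanML3D
    scene_texts: dict = field(default_factory=dict)  # action text -> two scene texts

    def add_scene_texts(self, action_text: str, scenes: list) -> None:
        assert action_text in self.action_texts
        self.scene_texts[action_text] = scenes

rec = MotionRecord("000001", ["a person walks and bends to pick up something"])
rec.add_scene_texts(
    rec.action_texts[0],
    ["a person notices his wallet on the ground ahead",
     "a person spots a dropped coin on the path"],
)
```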
Compared to them, it is more practical to generate motions from arbitrary texts (the right figure in Figure 1), such as \"A person notices his wallet on the ground ahead\". In this case, perfect action labels or action texts are not guaranteed, hindering the application of existing methods and datasets. Therefore, a natural question arises: can we generate reliable motions from arbitrary texts? In light of the novelty of this problem, we propose a new dataset to evaluate Text2Motion in a more realistic setting. Briefly, given the action texts of the HumanML3D dataset, the introduced scene texts are generated by an LLM in a one-to-many manner. In total, our dataset includes 44,970 action texts, about 134,910 scene texts, and 14,616 motions (see details in Table 1). The new dataset, called HumanML3D++, gives rise to two fundamental differences between this work and prior research. Beyond Action Texts. Previous methods mainly focus on specific action texts because existing datasets take perfectly aligned action texts and motions as a default. However, HumanML3D++ introduces many scene texts based on the action texts of HumanML3D and enables us to explore the effect of more flexible scene texts in real-life applications. As a result, we need to align multiple arbitrary texts with the same motions, going beyond the limited action texts. Beyond Text2Motion. Previous frameworks generate motions from action texts in a one-stage manner because they have perfectly aligned action texts and motions. However, the scene texts introduced in our HumanML3D++ have vague relationships with motions. Therefore, we split Text2Motion into Text2Action and Action2Motion. In the Text2Action stage, we use the emergent abilities of the LLM to relate scene texts to their corresponding action texts, understanding the inherent meaning of the scene texts and thereby extracting the complete, potential action labels.
In the Action2Motion stage, we use the sequential abilities of the Transformer to ensure coherence from action labels to final motions. As a result, our two-stage framework can extract the action labels from arbitrary texts and generate final motions from the extracted labels, going beyond the limits of Text2Motion. Our main contributions can be summarized as follows: • We construct a new dataset that contains over 80,000 scene text annotations to help infer potential actions from scene texts (texts without action labels), which has not been explored in the past. • We propose a more practical two-stage framework, which extracts semantic information from arbitrary texts with LLMs and then generates motions from the extracted information. • Compared with existing methods, our method better understands scene texts and generates motions that align more closely with them. 2 RELATED WORK 2.1 Human Motion Generation Human motion generation supports diverse multimodal inputs, including text [5, 8, 25, 32, 36, 38], action labels [10, 20, 24], incomplete posture sequences [7, 11, 32], music [15, 17, 18], images [29], and more. Among all conditional tasks, text-to-motion [5, 8, 25, 32, 36, 38] has consistently propelled and dominated the forefront of research, given that linguistic descriptors remain the most user-friendly and convenient means of representation. In the realm of tasks conditioned on natural language inputs, motion generation predominantly relies on deterministic action textual prompts. Our endeavor diverges by emphasizing scene textual inputs, aiming to comprehend natural-language interactions and generate appropriate responsive actions. 2.2 Text-to-motion Generation According to the survey [40], tasks using natural language as a conditional input can be categorized into two main classes: Action2Motion and Text2Motion.
The core objective of the Action2Motion task is to generate human motion sequences corresponding to specific action categories; [5, 6, 10, 21, 22, 24, 32, 35] are typical representatives. PoseGPT [22] employed an autoencoder to map human motion into a latent index sequence in discrete space. ACTOR [24] utilized a Transformer-based architecture for encoding and decoding parameterized SMPL human body model sequences estimated from action recognition datasets. INR [4] introduced a motion-conditioned human motion generation method using variational implicit neural representations. Kinetic-GAN [6] combined the advantages of generative adversarial networks and graph convolutional networks to synthesize a new architecture for human body dynamics. These methods demonstrate certain effectiveness. However, existing Action2Motion methods are limited in that the input action categories are predetermined, so they cannot continuously generate multiple motion sequences, restricting their generative capability. Nevertheless, given the relatively short length of the textual input, these methods are capable of faithfully generating information relevant to the corresponding action category. Building on this, our design leverages the precision of Action2Motion generation. In contrast, Text2Motion tasks focus on generating human motion sequences from natural language descriptions. T2M-GPT [36] utilized a simple CNN-based VQ-VAE to obtain high-quality discrete representations of motion. MotionGPT [39] generated continuous human body motion by treating multimodal signals as special input tokens in a large language model (LLM). MLD [5] introduced the diffusion model into the field of motion generation, diffusing the motion latent space and reducing computational expenses during both the training and inference stages. The use of natural language input aligns better with users' interaction habits. However, when receiving textual inputs containing multiple actions, due to the inherent complexity of textual content, these models often struggle to faithfully generate all actions in sequence. Our work, also relying on natural language input to align with user habits, addresses the poor performance in multi-action motion generation through a precision generator. By leveraging the strengths of both tasks, our model achieves more accurate and flexible motion generation.

Table 1: Dataset comparison. #Sub refers to the number of humans included in the dataset. #Act. Class denotes the number of action classes present in the dataset, representing the variety of actions captured (this metric is not applicable to motion datasets annotated with action texts). Our dataset stands out as the most abundant in annotated text content among existing datasets, particularly due to the incorporation of a significant volume of scene texts.

| Supervision | Name | #Sub. | #Motion | #Text | #Act. Class | Scene |
| Action Sup. | AMASS [23] | 344 | 11,000 | | | |
| Action Sup. | NTU-120RGB+D [19] | 106 | 114,000 | | 120 | |
| Action Sup. | UESTC [13] | 118 | 25,600 | | 40 | |
| Action Sup. | NTU RGB+D [30] | | 56,000 | | 60 | |
| Action Sup. | BABEL [27] | 344 | 66,000 | | 250 | |
| Action Sup. | HumanAct12 [10] | 12 | 1,191 | | 12 | |
| Text Sup. | KIT-ML [26] | 111 | 3,911 | 6,278 | | |
| Text Sup. | HumanML3D [8] | 344 | 14,616 | 44,970 | | |
| Text Sup. | Ours | 344 | 14,616 | 134,910 | | ✓ |

3 DATASET: HUMANML3D++ Motion data is pivotal to the advancement of motion generation tasks. As our task relies on scene input, we primarily focus on datasets commonly used in text-to-motion tasks. KIT Motion-Language (KIT-ML) [26] provides sequence-level annotations for motions, while HumanML3D [8] offers additional textual annotations for some motions in AMASS [23]; it also serves as a focal point in our text-to-motion task.
For datasets mapping action labels to actions, BABEL [27] collects actions from AMASS [23] and provides annotations for actions and behaviors, and ACTOR [24] utilizes two action recognition datasets, HumanAct12 [10] and UESTC [13], employed for action-to-motion tasks. However, existing datasets only encompass action texts; to adapt them to our task, the modification and enhancement of the existing textual data becomes a central concern. As shown in Figure 2, we have enhanced the scene-text input component of the dataset built upon HumanML3D [8], naming the result HumanML3D++. As illustrated in Table 1, ours is the first dataset with scene textual annotations to date. Data composition. As shown in Figure 2, HumanML3D++ is expanded from HumanML3D [8]. Specifically, HumanML3D [8] annotates 3-5 action texts for each motion; we use an LLM to understand the action texts and generate two different scene texts for each action text. We tested many prompts to obtain scene data; here are some examples. Template 1: Here is an example where the action sentence is \"a person takes a few steps forward and then bends down to pick up something.\" and the corresponding scene sentence is \"a person discovers his long lost wallet.\" The causal relationship between the two sentences is very close. I am now giving you some action sentences, hoping that you can complete some scene sentences, which should be the antecedents of the corresponding action sentence actions. The action sentence I am giving you now is <>, I hope you can generate two sentences for each action sentence. Template 2: Here is an example where the action sentence is \"a person takes a few steps forward and then bends down to pick up something,\" and the corresponding scene sentence is \"a person discovers his long lost wallet\". The causal relationship between the two sentences is very close.
I am now giving you some action sentences, hoping that you can complete some scene sentences. When completing scene sentences, please try not to use verbs in the action sentences. The action sentence I am giving you now is <>. Template 3: Here are some events, and I hope you can summarize in one sentence what happened that could have caused such a reaction. For example, the action sentence is \"a person takes a few steps forward and then bends down to pick up something\", and the corresponding scene sentence is \"a person discovers his long lost wallet\". I am now giving you some action sentences, hoping that you can complete some scene sentences. When completing scene sentences, please try not to use verbs in the action sentences. The action sentence I am giving you now is <>. We conducted multiple experiments and evaluated the effectiveness of the generated outcomes; Template 1 most effectively generates scene texts that match the action texts, so we ultimately chose it as the prompt used in our data generation process. Data validation. All the meta versions of the scene texts in HumanML3D++ are generated by the LLM. Despite a good prompt, a certain level of uncertainty remains in the generation process. To validate data reliability, we invited 20 participants to evaluate randomly selected data (15% of the total). Since different motions may be responses to the same scene text, and the same motion may be a response to different scene texts, we set the evaluation criterion that as long as the action text is one of the possible reactions to a given scene text, it is marked as reasonable. The results show that about 94% of the selected data is considered reasonable. We have also cleaned up or manually corrected the abnormal data. 4 METHOD Our objective is to comprehend scene and action textual inputs and generate lifelike human motion responses.
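Template 1 can be instantiated programmatically before being sent to the LLM. This sketch fills in the paper's worked example and one new action sentence; the wording follows Template 1 above, while `build_prompt` itself is a hypothetical helper (the LLM call is omitted):

```python
TEMPLATE_1 = (
    'Here is an example where the action sentence is "{example_action}" and the '
    'corresponding scene sentence is "{example_scene}". The causal relationship '
    "between the two sentences is very close. I am now giving you some action "
    "sentences, hoping that you can complete some scene sentences, which should "
    "be the antecedents of the corresponding action sentence actions. The action "
    "sentence I am giving you now is <{action}>, I hope you can generate two "
    "sentences for each action sentence."
)

def build_prompt(action: str) -> str:
    # fill Template 1 with the paper's worked example and a new action sentence
    return TEMPLATE_1.format(
        example_action="a person takes a few steps forward and then bends down "
                       "to pick up something.",
        example_scene="a person discovers his long lost wallet.",
        action=action,
    )

prompt = build_prompt("a person walks and bends to pick up something")
```

In practice the `<>` slot is filled per action text, and the LLM's two scene sentences are stored against that action text.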
As illustrated in Figure 3, the entire framework comprises two main components. The LLM accepts action and scene inputs and produces corresponding action labels. The generation module uses a VQ-VAE to learn the mapping between motion data and discrete encoding sequences and generates code indices conditioned on the corresponding action descriptions; leveraging the decoder of the motion VQ-VAE, we reconstruct motion from the code indices. In Section 4.1, we present our comprehension module and introduce the new dataset provided for this novel task. In Section 4.2, we outline our universal generation module. Figure 3: Our pipeline overview. Our approach consists of two main parts: a) understanding natural language and decoupling it into a sequence of actions, where we use the LLM to obtain possible action labels from action texts or scene texts; b) generating action sequences corresponding to the obtained action labels. Action X represents the x-th acquired action label; c^x_l represents the l-th discrete representation (pose ID) of the motion generated by the x-th action label; a represents the number of smoothing pose IDs we use. We use the sequential abilities of the Transformer, taking the last few pose IDs of the previous action as part of the input for the next action. 4.1 Think Model 4.1.1 Dataset. Existing datasets contain only action text inputs without scene texts, so we need to expand the scene texts on an existing dataset. We select our foundational dataset based on the following considerations. First, regarding the generation of quality motion, it has been demonstrated that the amount of motion data affects the results. A greater amount of motion data allows for the learning of more poses, consequently leading to improved generation performance.
Therefore, our foundational dataset should contain as many motions as possible. Second, the textual component of the dataset must include action text annotations to reduce the workload of data labeling; this also facilitates supplementing the scene text sections. Based on these criteria, we choose the largest dataset with textual annotations, the HumanML3D dataset, as our foundational dataset. However, the textual component of HumanML3D comprises solely action annotations of motion. To meet the requirements of our task, we adopt a comprehensive approach, combining large models with manual processing. Figure 2 displays the data relationships in our dataset. For each motion in the source dataset, which includes 3-5 action descriptions of the motion, we refine the prompt of the large model to generate two scene descriptions corresponding to each action description. After the model completes the supplementation of scene texts, we manually filter and clean the data; the data is then used to train the model for the specific tasks. We compared several datasets utilized in the current field; according to Table 1, we possess the largest annotated (action and scene) three-dimensional motion dataset available. 4.1.2 LLM. For the situational comprehension module, an LLM is the natural choice for analyzing textual inputs, owing to its ability to model intricate language structures and its strong comprehension. Leveraging the LLM, we generate action representations corresponding to the given scenarios. As illustrated in Figure 3, our think module ingests scene texts as input and yields the corresponding action labels.
Our extraction methodology unfolds in two phases: first, we use the LLM to obtain action texts in response to the provided scene texts; then, we capitalize on the language model's proficiency to distill action labels from the acquired action texts. Notably, retraining or fine-tuning the LLM is impractical for our purposes. First, retraining large models places formidable demands on computational resources and time, rendering it unfeasible in many practical scenarios. Second, directly fine-tuning large models on the paired scene texts and action texts runs into the problem of catastrophic forgetting. By contrast, directly adapting prompts for an existing LLM circumvents these challenges. Using the LLM to generate response action labels for scene texts entails a degree of uncertainty: the action label generated from a scene text may not correspond to the ground-truth (GT) action. We have taken this issue into consideration and implemented measures to address it, elaborated in Section 5.2. 4.2 ACT Model 4.2.1 CodeBook. Incorporating a VQ-VAE [34] into the model framework facilitates the acquisition of discrete representations within generative models. We denote the encoder and decoder components of the autoencoder as E and D, respectively. Consider a human motion sequence X = [x_1, x_2, ..., x_T], with T denoting the total number of frames. The latent feature Z is derived as Z = E(X), where Z = [z_1, z_2, ..., z_{T/l}] and l signifies the temporal downsampling rate of the encoder E.
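The two-phase extraction (scene text → action text → action labels) can be sketched as two chained prompts. Here `llm` is an injected callable and `fake_llm` a canned offline stand-in, so the control flow runs without a real model; the prompt wording is illustrative, not the paper's exact prompts:

```python
def extract_action_labels(scene_text: str, llm) -> list:
    """Two-phase extraction: scene text -> action text -> action labels.

    `llm` is any callable prompt -> str; a real system would plug in an
    instruction-tuned model here."""
    action_text = llm(f"Describe the action a person would take given: {scene_text}")
    labels = llm(f"List the action labels in this sentence, comma separated: {action_text}")
    return [w.strip() for w in labels.split(",") if w.strip()]

def fake_llm(prompt: str) -> str:
    # offline stand-in with canned responses, for illustration only
    if prompt.startswith("Describe"):
        return "A person walks and bends to pick up something"
    return "walk, bend, pick up"

labels = extract_action_labels("A person notices his wallet on the ground ahead", fake_llm)
```

Injecting the model as a callable keeps the extraction logic testable independently of any particular LLM backend.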
Quantization maps each latent feature z_i to the nearest centroid c_k within the codebook C, as delineated by the equation:

ẑ_i = argmin_{c_k ∈ C} ||z_i − c_k||_2   (1)

In the optimization of the VQ-VAE, the standard objective function [34] L_vq encompasses three pivotal components: a reconstruction loss L_re, an embedding loss L_embed, and a commitment loss L_commit:

L_vq = L_re + ||Z − sg[Ẑ]||_2 + β ||sg[Z] − Ẑ||_2   (2)

where the second and third terms are L_embed and L_commit, β is a hyper-parameter governing the impact of the commitment loss, and sg denotes the stop-gradient operator. Let X_re = D(Z) represent the reconstructed motion derived from X. Additionally, denote V(X) = [v_1, v_2, ..., v_{T−1}] as the velocity vector corresponding to X, where each v_i is the difference between consecutive elements of X, i.e., v_i = x_{i+1} − x_i. Hence, the overarching objective guiding our reconstruction process can be articulated as:

L_re = L_smooth1(X, X_re) + α L_smooth1(V(X), V(X_re))   (3)

where α is a hyper-parameter balancing the two losses. A rudimentary implementation of VQ-VAE training encounters a notable challenge known as codebook collapse, as discussed in the literature [28, 34]. To mitigate this issue and enhance codebook utilization, two prominent training methodologies have been devised [28].
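Equations (1)-(2) can be sketched in NumPy. Note that sg[·] only affects gradients, so in this forward-only sketch the embedding and commitment terms share one numeric value (an assumption of the simplification; in an autodiff framework they would detach different tensors):

```python
import numpy as np

def quantize(z: np.ndarray, codebook: np.ndarray):
    """Map each latent z_i to its nearest codebook centroid c_k (Eq. 1)."""
    # pairwise squared distances between latents (T', D) and codes (K, D)
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T', K)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

def vq_losses(z: np.ndarray, z_q: np.ndarray, beta: float = 1.0) -> float:
    """Embedding + commitment terms of Eq. 2; forward values coincide
    because sg[.] is a no-op outside autodiff."""
    embed = float(((z - z_q) ** 2).sum())
    return embed + beta * embed

z = np.array([[0.1, 0.1], [1.9, 1.9]])
codebook = np.array([[0.0, 0.0], [2.0, 2.0]])
z_q, idx = quantize(z, codebook)
```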
The first approach uses an exponential moving average (EMA), and the second is referred to as codebook reset (Code Reset). The EMA method facilitates a smooth evolution of the codebook C over iterations. The Code Reset strategy, on the other hand, identifies inactive codes during training and dynamically reassigns them based on the input data, thereby revitalizing the codebook and optimizing its utility throughout the training regimen. 4.2.2 Generative Transformer. Using the learned motion VQ-VAE, a motion sequence X = [x_1, x_2, ..., x_T] can be converted into a sequence of indices I = [i_1, i_2, ..., i_{T/l}, End], where each i_t denotes an index in the learned codebook; a special \"End\" token is appended to signify the end of the motion sequence. By projecting I back to the corresponding codebook entries, we obtain Ẑ = [ẑ_1, ẑ_2, ..., ẑ_{T/l}], which can then be decoded into a motion sequence X_re using the decoder D. Consequently, text-to-motion generation can be formulated as an autoregressive next-index prediction task: given the previous t−1 indices (i.e., I_{<t}) and the text condition c, our objective is to predict the distribution of the next index, p(i_t | c, I_{<t}), a task well suited to Transformer-based models. The overview of our Transformer model is depicted in Figure 3. Optimization Goal.
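An EMA codebook update can be sketched as follows. This is a minimal version under the usual two-accumulator formulation; real implementations pair it with the code-reset pass described above:

```python
import numpy as np

class EMACodebook:
    """Exponential-moving-average codebook update, used against codebook collapse.
    Each code drifts toward the running mean of the latents assigned to it."""

    def __init__(self, codebook: np.ndarray, decay: float = 0.99, eps: float = 1e-5):
        self.decay, self.eps = decay, eps
        self.codebook = codebook.copy()
        self.cluster_size = np.ones(len(codebook))  # running assignment counts
        self.embed_avg = codebook.copy()            # running latent sums

    def update(self, z: np.ndarray, idx: np.ndarray) -> None:
        K = len(self.codebook)
        one_hot = np.eye(K)[idx]                                  # (N, K) assignments
        self.cluster_size = (self.decay * self.cluster_size
                             + (1 - self.decay) * one_hot.sum(0))
        self.embed_avg = (self.decay * self.embed_avg
                          + (1 - self.decay) * one_hot.T @ z)
        self.codebook = self.embed_avg / (self.cluster_size + self.eps)[:, None]
```

With `decay = 0.99`, a code that keeps receiving the same latent converges to it within a few hundred updates.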
The optimization goal is defined by writing the likelihood of the full sequence as p(S | c) = ∏_{i=1}^{T/l} p(S_i | c, S_{<i}). We directly maximize the log-likelihood of the data distribution:

L_trans = E_{S∼p(S)} [−log p(S | c)]   (4)

4.2.3 Full Motion Generation. In the training phase of the generative module, our input comprises textual labels paired with the corresponding sequences of discrete actions. This design allows the generative module to learn various actions and the transitional actions between two actions, thereby establishing a discrete representation of actions and the mappings between them. Nevertheless, this does not completely faithfully generate all actions; when visualizing, we adopt a new approach to generate all actions:

input = (clip_feature_action_0, null) if action = action_0; (clip_feature_action_i, action_{i−1}[−a:]) otherwise   (5)

Specifically, when the input label is the first in the sequence, we use the corresponding action label along with an empty ID list as input. When the input label is not the first, we use the corresponding label and the last a IDs of the preceding action as input to generate the next indices under the given label condition. For each action label, we initiate the generation process from the text embedding, proceeding in an autoregressive manner until the model predicts the End token, signifying the completion of that action sequence. Upon obtaining the indices for all action labels, we concatenate them; the concatenated sequence is then passed through the VAE decoder, yielding a cohesive and smooth sequence of actions.
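Equation (5)'s stitching rule — condition each action on the last a code indices of the previous one — can be sketched with a stand-in generator in place of the autoregressive Transformer:

```python
def generate_full_motion(action_labels, generate_indices, a=4):
    """Stitch per-label index sequences into one motion (a sketch of Eq. 5):
    each action is generated conditioned on the last `a` code indices of the
    previous action (an empty prefix for the first label)."""
    full, prev = [], []
    for label in action_labels:
        prefix = prev[-a:] if prev else []
        prev = generate_indices(label, prefix)
        full.extend(prev)
    return full  # the concatenated indices then go through the VQ-VAE decoder

calls = []
def toy_generator(label, prefix):
    # records the conditioning prefix; a real system runs the Transformer here
    calls.append(list(prefix))
    return [1, 2, 3]

seq = generate_full_motion(["walk", "bend", "pick"], toy_generator, a=2)
```

The recorded prefixes show the conditioning pattern: empty for the first label, then the tail of the previous action's indices.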
5 EXPERIMENT
In the experiments, we select R-Precision, Frechet Inception Distance (FID), Multimodal Distance (MM-Dist), Diversity, and Multimodality (MModality) as our evaluation metrics. In Section 5.1 we introduce the standard datasets as well as the evaluation metrics. We report the accuracy of Text2Action in Section 5.2 and compare our results to competitive approaches in Sections 5.3-5.5.

5.1 Dataset and evaluation metric
Due to the current lack of standardized datasets suitable for extracting motions from arbitrary texts, we supplement the textual portion of the largest annotated dataset, HumanML3D, to meet our task requirements. Following Section 4.1.1, we reorganize the dataset and conduct multiple experiments.

Implementation details. For the codebook of the VQ-VAE, the size is set to 512 × 512. The downsampling rate l is 4. For the HumanML3D++ dataset, the motion sequences are cropped to T = 64 for training. We use the AdamW optimizer with [β_1, β_2] = [0.9, 0.99], a batch size of 256, and an exponential moving constant λ = 0.99. We train the first 200K iterations with a learning rate of 2 × 10^−4, and another 100K with a learning rate of 1 × 10^−5. β and α in L_vq and L_re are set to 1 and 0.5, respectively. For the GPT, we employ 20 Transformer layers with a dimension of 1,024 and 16 heads. Following Guo et al. [8], the maximum motion length is 196 on HumanML3D++ and HumanML3D, and the minimum length is 40 for HumanML3D++. The maximum length of the code index sequence is T′ = 50. We train an extra End token as a signal to stop index generation.

Runqi Wang, Caoyuan Ma, GuoPeng Li, and Zheng Wang†

Table 2: Experiment results on HumanML3D. The training is conducted on the HumanML3D dataset, and testing is also performed on the HumanML3D dataset.
Compared to them, our TAAT uses the sequential modeling abilities of the Transformer and works well in FID, Diversity, and MModality, showing that our model generates high-quality motion.

Methods | R-Precision Top-1 ↑ | Top-2 ↑ | Top-3 ↑ | FID ↓ | MM-Dist ↓ | Diversity ↑ | MModality ↑
Real motion | 0.511±.003 | 0.703±.003 | 0.797±.002 | 0.002±.000 | 2.974±.008 | 9.503±.065 | -
TM2T [9] | 0.457±.002 | 0.639±.003 | 0.740±.003 | 1.067±.002 | 3.340±.008 | 9.188±.002 | 2.090±.083
MDM [32] | - | - | 0.611±.007 | 0.544±.044 | 5.566±.027 | 9.599±.086 | 2.799±.072
MLD [5] | 0.481±.003 | 0.673±.003 | 0.772±.002 | 0.473±.013 | 3.196±.010 | 9.724±.082 | 2.413±.079
MotionDiffuse [37] | 0.491±.001 | 0.681±.001 | 0.782±.001 | 0.630±.001 | 3.113±.001 | 9.410±.049 | 1.553±.042
T2M-GPT [36] | 0.417±.003 | 0.589±.002 | 0.685±.003 | 0.140±.006 | 3.730±.009 | 9.844±.095 | 3.285±.070
Ours | 0.329±.003 | 0.489±.002 | 0.696±.003 | 0.461±.006 | 5.050±.009 | 10.038±.095 | 2.929±.070

Table 3: Experiment results on model generalization ability. The training is conducted on the HumanML3D dataset, while testing is performed using the HumanML3D++ dataset. We observe a certain degree of decline in metrics across all models when they are subjected to new scene text inputs. Compared to other methods, TAAT exhibits lesser degradation in metrics, proving the enhanced comprehension capability of our method when confronted with new scene text inputs.
Methods | R-Precision Top-1 ↑ | Top-2 ↑ | Top-3 ↑ | FID ↓ | MM-Dist ↓ | Diversity ↑ | MModality ↑
Real motion | 0.397±.003 | 0.568±.003 | 0.665±.003 | 0.006±.000 | 3.945±.000 | 8.435±.069 | -
TM2T [9] | 0.337±.002 | 0.496±.002 | 0.593±.002 | 2.201±.020 | 4.265±.008 | 7.286±.075 | 2.600±.094
MDM [32] | 0.322±.004 | 0.481±.007 | 0.579±.007 | 0.827±.053 | 4.539±.019 | 8.249±.058 | 2.804±.052
MLD [5] | 0.373±.002 | 0.534±.002 | 0.626±.002 | 0.897±.026 | 3.893±.010 | 9.289±.096 | 3.018±.028
MotionDiffuse [37] | 0.366±.000 | 0.546±.000 | 0.637±.000 | 1.514±.000 | 3.965±.000 | 7.907±.000 | 1.813±.000
T2M-GPT [36] | 0.389±.009 | 0.544±.009 | 0.633±.002 | 0.516±.042 | 4.035±.004 | 9.396±.232 | 2.499±.348
Ours | 0.225±.003 | 0.315±.002 | 0.413±.003 | 0.488±.006 | 5.109±.009 | 8.552±.095 | 2.957±.070

Table 4: Experiment results on HumanML3D++. The training is conducted on the HumanML3D++ dataset, and testing is also performed on the HumanML3D++ dataset.
Despite the constraints imposed by the evaluated metrics, our TAAT performs favorably in terms of FID, Diversity, and MModality, demonstrating that our model can generate high-quality and diverse motion.

Methods | R-Precision Top-1 ↑ | Top-2 ↑ | Top-3 ↑ | FID ↓ | MM-Dist ↓ | Diversity ↑ | MModality ↑
Real motion | 0.397±.003 | 0.568±.003 | 0.665±.003 | 0.006±.000 | 3.945±.000 | 8.435±.069 | -
TM2T [9] | 0.337±.000 | 0.508±.000 | 0.616±.000 | 1.394±.000 | 4.229±.000 | 8.181±.000 | 2.701±.000
MDM [32] | 0.314±.006 | 0.482±.008 | 0.588±.009 | 0.435±.029 | 4.340±.026 | 8.634±.057 | 2.901±.055
MLD [5] | 0.165±.002 | 0.281±.002 | 0.368±.003 | 9.408±.060 | 5.564±.013 | 6.962±.063 | 3.086±.130
MotionDiffuse [37] | 0.286±.000 | 0.442±.000 | 0.540±.000 | 2.688±.000 | 4.638±.000 | 7.703±.000 | 3.191±.000
T2M-GPT [36] | 0.371±.005 | 0.543±.004 | 0.645±.005 | 0.316±.015 | 3.994±.034 | 8.627±.080 | 2.620±.067
Ours | 0.235±.003 | 0.358±.002 | 0.427±.003 | 0.448±.006 | 4.712±.009 | 8.950±.095 | 3.046±.070

The Transformer is optimized using AdamW with [β_1, β_2] = [0.5, 0.99] and a batch size of 128. The learning rate is initialized to 1 × 10^−4 for 150K iterations and decayed to 5 × 10^−6 for another 150K iterations. Since our method takes the label group of the action label as input, we follow the instructions in [8] and retrain the Motion & Text Feature Extractors for evaluation on the HumanML3D dataset, where the text part is replaced by the action labels extracted from HumanML3D. In the experiment on motion generation on HumanML3D++, we follow the same guidance [8] and retrain the Motion & Text Feature Extractors for evaluation on the HumanML3D++ dataset.

Metrics. When calculating the metrics, we use a consistent evaluation protocol [36], with action label combinations and corresponding motions as input.
TAAT: Think and Act from Arbitrary Texts in Text2Motion

Figure 4: Visual results on action texts and scene texts. The first row displays the visual results of different models on action texts, while the second row presents the visual results on scene texts. Compared with other models, under action texts our TAAT faithfully generates three actions in sequence when given three actions as input. Under scene texts, TAAT generates reactive actions to the situation (running away), while other models enact the textual content literally (driving).

• R-Precision: Given one motion sequence and 32 text descriptions (1 ground-truth and 31 randomly selected mismatched descriptions), we rank the Euclidean distances between the motion and text embeddings. Top-1, Top-2, and Top-3 accuracy of motion-to-text retrieval are reported.
• Frechet Inception Distance (FID): We calculate the distribution distance between the generated and real motion using FID on the extracted motion features.
• Multimodal Distance (MM-Dist): The average Euclidean distance between each text feature and the motion features generated from that text.
• Diversity: From a set of motions, we randomly sample 300 pairs of motions. We extract motion features and compute the average Euclidean distance of the pairs to measure motion diversity in the set.
• Multimodality (MModality): For one text description, we generate 20 motion sequences forming 10 pairs of motions. We extract motion features and compute the average Euclidean distance of the pairs, and finally report the average over all text descriptions.

5.2 Text2Action Accuracy
In the task we propose, using an LLM to understand and respond to scene texts is the core of generating reasonable motion for arbitrary texts. However, there is uncertainty in generating action texts from scene texts.
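As a side note on the metrics listed above, the Diversity computation can be sketched in a few lines of numpy; this is an illustration only (the actual feature extractor follows [36]), with all names hypothetical:

```python
import numpy as np

def diversity(features, num_pairs=300, seed=0):
    """Diversity sketch: mean Euclidean distance between randomly sampled
    pairs of motion feature vectors.

    features: (N, D) array of extracted motion features, with N >= 2*num_pairs.
    """
    rng = np.random.default_rng(seed)
    # draw 2*num_pairs distinct motions and split them into pairs
    idx = rng.choice(len(features), size=2 * num_pairs, replace=False)
    a = features[idx[:num_pairs]]
    b = features[idx[num_pairs:]]
    return np.linalg.norm(a - b, axis=1).mean()

# toy check on synthetic "features"
feats = np.arange(1200.0).reshape(600, 2)
d = diversity(feats)
```

MModality follows the same pairwise-distance pattern, but over 20 generations per text description rather than over the whole motion set.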
Specifically, one scene text may correspond to multiple reactive actions. Although the results are usually reasonable, directly using a single LLM output does not always correspond to the ground-truth action texts, which is not conducive to the existing evaluation criteria. The hallucination phenomenon of LLMs may also negatively affect our setting. To test these concerns, we generate multiple actions for each scene text. As shown in Figure 6, we evaluate the rationality of the results: we generate ten action texts for each scene text and use an evaluator to compute the similarity between each generated action text and the ground truth, choosing the most similar one. We set the evaluation criteria as follows: if the action text and the ground truth match closely (e.g., both are "kick" actions), we label them "match"; if they are similar but not exactly matched, we label them "similar"; if they do not match at all, we label them "mismatch". We randomly sample 10% of the results and find that 66% of the data are at least similar to the ground truth. For the actions used in the second part of the test, we use a discriminator to select the result closest to the original action texts as the input to the second part.

5.3 Motion Generation on HumanML3D
In Table 2, both model training and testing are performed on the HumanML3D dataset. The results for the other models are taken directly from the respective papers. Compared to the original action-texts-to-motion task, our TAAT demonstrates good performance in Diversity and shows promising results in metrics such as FID and MModality. This also demonstrates the efficacy of TAAT on the original action-texts-to-motion task.
Our TAAT can also generate improved and more diverse motions when presented with action text inputs: TAAT combines the accuracy of generation with the diversity of generated actions.

Figure 5: E1, E2, and E3, respectively, represent the results in Table 2, Table 3, and Table 4. We opt for the FID metric (lower values indicating better performance) to illustrate the variations among different models under various experimental settings. Most models exhibit a decline in metrics when directly subjected to scene inputs without prior training (E2), indicating a lack of generalization capability in the preceding models. Upon retraining the models on the new task (E3), there is a noticeable improvement in metrics for most models, showing that the majority of models also possess a certain learning capability for more complex scene texts. Our method has the smallest change in metrics among the three experiments and maintains a leading level, showing that it is better suited to arbitrary text inputs.

5.4 From HumanML3D To HumanML3D++
Table 3 illustrates our testing of the model's generalization capability. All models are trained on the HumanML3D dataset and tested on the HumanML3D++ dataset to examine whether the relevant models can understand and generate motion from scene texts. We conduct testing using the official pre-trained models provided by each paper. Table 3 shows that preceding models demonstrate a decrease in metrics when directly subjected to scene text inputs without prior training. This indicates a lack of generalization capability in the preceding models, showing that they are not directly applicable to the new task. Despite not being specifically trained on scene texts, our model exhibits a comparatively minor decrease in performance metrics when presented with scene text inputs.
Furthermore, it achieves the best FID and satisfactory Diversity, demonstrating the ability to generate high-quality and diverse human motions.

5.5 Motion Generation on HumanML3D++
Table 4 shows the models' ability to learn scene texts and generate corresponding responsive actions. All models are trained on the HumanML3D++ dataset and tested on the HumanML3D++ dataset, following the guidelines provided in their respective official repositories. It can be observed that our TAAT can learn and understand scene text inputs well and generate corresponding actions, without producing the particularly poor metrics that some models do. Despite the constraints imposed by the evaluated metrics, our TAAT performs favorably in terms of FID, Diversity, and MModality, demonstrating that our model can generate high-quality and diverse motion.

Figure 6: Accuracy of action labels generated by the LLM. It can be observed that the action texts generated in our think stage closely approximate real action texts at a rate of 66%.

6 DISCUSSION
Since LLMs have randomness in producing action texts from scene texts, while the existing methods mainly focus on aligning motion and text, our quantitative results do not show a significantly superior performance. We believe the main cause is that a scene text admits various reasonable motions, and a motion may occur in various scenes; a better evaluation method is needed to judge the rationality of the generated results. Although we already provide 6-10 scene texts for each motion, this is still insufficient for the task." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.14977v1", |
| "title": "Social Media and Artificial Intelligence for Sustainable Cities and Societies: A Water Quality Analysis Use-case", |
| "abstract": "This paper focuses on a very important societal challenge of water quality\nanalysis. Being one of the key factors in the economic and social development\nof society, the provision of water and ensuring its quality has always remained\none of the top priorities of public authorities. To ensure the quality of\nwater, different methods for monitoring and assessing the water networks, such\nas offline and online surveys, are used. However, these surveys have several\nlimitations, such as the limited number of participants and low frequency due\nto the labor involved in conducting such surveys. In this paper, we propose a\nNatural Language Processing (NLP) framework to automatically collect and\nanalyze water-related posts from social media for data-driven decisions. The\nproposed framework is composed of two components, namely (i) text\nclassification, and (ii) topic modeling. For text classification, we propose a\nmerit-fusion-based framework incorporating several Large Language Models (LLMs)\nwhere different weight selection and optimization methods are employed to\nassign weights to the LLMs. In topic modeling, we employed the BERTopic library\nto discover the hidden topic patterns in the water-related tweets. We also\nanalyzed relevant tweets originating from different regions and countries to\nexplore global, regional, and country-specific issues and water-related\nconcerns. We also collected and manually annotated a large-scale dataset, which\nis expected to facilitate future research on the topic.", |
| "authors": "Muhammad Asif Auyb, Muhammad Tayyab Zamir, Imran Khan, Hannia Naseem, Nasir Ahmad, Kashif Ahmad", |
| "published": "2024-04-23", |
| "updated": "2024-04-23", |
| "primary_cat": "cs.SI", |
| "cats": [ |
| "cs.SI", |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "This paper focuses on a very important societal challenge of water quality\nanalysis. Being one of the key factors in the economic and social development\nof society, the provision of water and ensuring its quality has always remained\none of the top priorities of public authorities. To ensure the quality of\nwater, different methods for monitoring and assessing the water networks, such\nas offline and online surveys, are used. However, these surveys have several\nlimitations, such as the limited number of participants and low frequency due\nto the labor involved in conducting such surveys. In this paper, we propose a\nNatural Language Processing (NLP) framework to automatically collect and\nanalyze water-related posts from social media for data-driven decisions. The\nproposed framework is composed of two components, namely (i) text\nclassification, and (ii) topic modeling. For text classification, we propose a\nmerit-fusion-based framework incorporating several Large Language Models (LLMs)\nwhere different weight selection and optimization methods are employed to\nassign weights to the LLMs. In topic modeling, we employed the BERTopic library\nto discover the hidden topic patterns in the water-related tweets. We also\nanalyzed relevant tweets originating from different regions and countries to\nexplore global, regional, and country-specific issues and water-related\nconcerns. We also collected and manually annotated a large-scale dataset, which\nis expected to facilitate future research on the topic.", |
| "main_content": "1. Introduction
Real-time monitoring and observation of resources and infrastructure is a primary task towards a resilient infrastructure and sustainable cities [1]. This allows for taking appropriate recovery actions for the mitigation of risks and damages. Crowdsourcing is one of the effective ways used for real-time monitoring and feedback on infrastructure [2]. One of the key methods widely explored in the literature for crowdsourcing is conducting surveys to obtain citizens' feedback on different services, such as water quality, air quality, roads, infrastructure, and other societal challenges. These surveys can help in obtaining more detailed, contextual, and localized information [3]. Such surveys are conducted either by asking citizens to fill in an online form or a questionnaire. More recently, mobile applications have also been developed for conducting such surveys, where participants are asked to install them and give feedback. However, these online and in-person surveys have several limitations [4]. One of the key limitations of such surveys is their limited scope: they can cover only a limited number of people, as people are often reluctant to install such applications, as we noticed during the COVID-19 pandemic [5]. Moreover, it takes a lot of time to complete a survey, and several human and other resources are involved; thus, the frequency of such surveys is generally very low, as it is costly and unfeasible to conduct them frequently. These limitations of the current crowd-sourcing methods could be overcome by extracting information from social media outlets, such as Twitter and Facebook. Social media outlets have already been proven to be an effective source of communication and information spreading [6, 7].

Email address: kashif.ahmad@mtu.ie (Kashif Ahmad)
Their capabilities to engage large volumes of audiences worldwide make them a preferred platform for discussing and conveying concerns over different domestic and global challenges. The literature already reports their effectiveness across a diversified set of societal, environmental, and technological topics [8, 9]. In this work, we explore the potential of social media as a crowd-sourcing medium of instant feedback on water quality. To this aim, an automatic solution is proposed that is able to collect and analyze citizens' feedback on water quality. The proposed system will not only engage a large number of participants, which is a key factor in meaningful feedback, but it will also operate continuously, collecting people's feedback over time. One of the key advantages of the system is that it collects and analyzes feedback without asking citizens to fill in any online form or survey; rather, it keeps filtering and analyzing relevant social media posts in a privacy-preserving manner.

Preprint submitted to Journal of LaTeX Templates, April 24, 2024. arXiv:2404.14977v1 [cs.SI] 23 Apr 2024

The proposed system is composed of (i) a crawler, which is responsible for collecting social media posts (tweets), (ii) a classification framework employing several NLP algorithms in a merit-based fusion to differentiate between water-related and irrelevant tweets, and (iii) topical analysis to automatically analyze and extract the key water-related issues discussed in the tweets. For the training and evaluation of the text classification framework, we also collected and annotated a large-scale benchmark dataset, which will be made publicly available for further research in the domain.
The key contributions of the work can be summarized as follows:
• We propose an automatic tool to collect, analyze, and extract meaningful information from social media posts as a source of instant feedback on water quality, as a first step towards a sustainable water network.
• We propose a merit-based fusion framework combining several transformer-based NLP algorithms to differentiate between water-related and irrelevant tweets.
• We also collected and annotated a large-scale benchmark dataset containing around 8,000 tweets.
• We also perform topic modeling on the relevant tweets to automatically extract the key water-related issues discussed in them.
• We also analyze the origin of the water-related tweets and provide region- and country-wise distributions of the water-related tweets collected by our system. This analysis shows the growing concern over this important societal challenge.

The rest of the paper is organized as follows. Section 2 provides an overview of the related work. Section 3 discusses the proposed methodology. Section 4 covers the experimental setup, conducted experiments, and experimental results. Finally, Section 5 concludes the paper.

2. Related Work
The literature already reports several interesting crowdsourcing-based solutions, which are mostly based on offline or online surveys for infrastructure monitoring and feedback on different public services [10]. The majority of the recent solutions rely on smartphones and other handheld devices, providing smart applications that allow users to give feedback on infrastructure and services. For instance, Rapousis et al. [11] proposed QoWater, a client-to-server mobile application allowing users to give feedback on water quality. Similarly, Santani et al. [12] proposed CommuniSense, a mobile phone application for crowdsourced monitoring of road conditions in Nairobi.
However, several challenges are associated with such crowdsourcing applications [13]. One of the key limitations of such surveys is their limited scope: they can cover only a limited number of people, as people are generally reluctant to install and use such mobile applications. A prime example of this reluctance was observed during COVID-19, when people showed concerns over such applications in terms of privacy, difficulty of use, and battery consumption [5]. Moreover, it takes a lot of time to complete a survey, and several human and other resources are involved; thus, the frequency of such surveys is generally very low, as it is costly and unfeasible to conduct them frequently. These challenges could be overcome by extracting people's feedback on infrastructure and public services from social media. The literature already provides some hints on the effectiveness of social media for real-time monitoring and instant feedback on different services. For instance, Want et al. [14] explore the potential of social media as a source of feedback on government services by analyzing citizens' opinions in social media text. Water quality analysis is one of the key applications that has recently got the attention of the community, and several interesting frameworks have been introduced to this aim. The majority of the existing works aim at sentiment analysis of social media posts to extract people's opinions on water quality. For instance, Lambert [15] proposed a sentiment analysis framework for analyzing users' feedback on and perception of tap water quality. Similarly, Li et al. [16] performed sentiment analysis on social media posts about recycled water in China. Jiang et al. [17], on the other hand, analyzed the public's opinion on large hydro projects by performing sentiment analysis on relevant social media posts.
To this aim, three different hydro projects in China were considered, and mixed opinions were observed for the projects. More recently, water quality analysis from social media posts has also been introduced in MediaEval 2021 [18]. The task involved the retrieval of relevant multimedia content describing water quality in an Italian region. A couple of interesting solutions, incorporating different types of available information, were proposed in response to the task. For instance, Hanif et al. [19] fine-tuned existing pre-trained deep-learning models, namely VGGNet and BERT, for retrieving relevant visual and textual content, respectively; overall, better results are reported for textual content. Ayub et al. [20] rather focused on textual content, employing three different NN models, including BERT, RoBERTa, and a custom LSTM, both individually and jointly in a naive late-fusion scheme. Despite these initial efforts, several interesting aspects of water quality analysis, and of the automatic analysis of people's feedback on public services and infrastructure in general, remain unexplored. For instance, the majority of the initial efforts are based on sentiment analysis without extracting meaningful information from the content itself. The domain also lacks a large-scale benchmark dataset. To this aim, in this work, we collect and annotate a large-scale benchmark dataset on water quality analysis. We also extend the text classification framework with topic modeling to automatically extract the key water-related issues discussed in social media.

3. Methodology
Figure 1 provides the block diagram of the proposed system. As can be seen, the proposed system is composed of five steps. In the first step, a large number of tweets is collected. In the next step, these tweets are annotated in a crowd-sourcing study. The annotated dataset is then used to train/fine-tune Large Language Models (LLMs) for the classification of tweets into relevant and non-relevant tweets.
In the fourth step, several merit-based fusion techniques are used to combine the classification scores obtained with the individual models. In the final step, topic modeling techniques are used to identify topics in the relevant tweets. In the next subsections, we provide a detailed description of each step.

3.1. Data Collection, Cleaning, and Annotation
For data collection, we developed a crawler able to continuously collect data from different social media outlets. As a proof of concept, in the current implementation, data is collected from Twitter only. To this aim, we used the Python package Tweepy with different relevant keywords, such as waterpollution, water, watercrisis, watersmell, drinkingwater, watercolour, cleanwater, waterquality, plasticpollution, savewater, waterislife, cleantheocean, plasticocean, and endplasticpollution. The list of keywords was prepared in a data-driven manner by picking keywords used in social media posts, blogs, newspapers, etc. We tried to include as many keywords as possible in the list to collect relevant and high-quality tweets. This resulted in a large collection of tweets, which were saved in a CSV file. After data collection, all the collected tweets were manually annotated by multiple volunteers in a crowd-sourcing activity. Before the annotation, the collected data was manually checked to remove less informative tweets. For example, we removed very short tweets without sufficient text or containing tags only. We also removed duplicate entries in the file. During the crowd-sourced activity, we manually analyzed a total of 8,000 tweets, which were annotated as relevant or non-relevant. To ensure the quality of the annotated data, each sample was checked by three different annotators and labeled based on majority vote. The participants of the crowd-sourcing activity are postgraduate students with sufficient knowledge of the domain.

3.2.
Text Classification
For text classification, we employed several LLMs, both individually and jointly in a merit-based fusion scheme, to differentiate between relevant and non-relevant tweets. In the next subsections, we provide a detailed description of the classification and fusion process.

1 https://www.tweepy.org/

[Figure 1: A block diagram of the proposed methodology. The pipeline comprises data collection; data annotation in a crowdsourcing activity and pre-processing; text classification via LLMs (M1, M2, ..., Mn); merit-based late fusion (C = W1·C1 + W2·C2 + ... + Wn·Cn) with region/location extraction from the text; and key-issue extraction via topic modeling (topics, topic frequency, clusters of words by topic).]

3.2.1. Classification Via Individual Models
In this work, we mainly rely on state-of-the-art transformer-based NLP models for the classification of tweets. In total, six different models are used: the original BERT model, RoBERTa, ALBERT, DistilBERT, GPT, and Meta-LLAMA. The selection of these models is motivated by their proven performance in similar tasks, and we believe their evaluation will provide a baseline for future work in the domain. A brief overview of these models is provided below.

• BERT: One of the state-of-the-art NLP algorithms, widely used for a diversified list of NLP applications. Its ability to read/learn in both directions makes it a preferred choice in different text-processing applications. Several implementations of BERT are available; in this work, we used the TensorFlow implementation. The model is composed of 12 layers and attention heads, and 110 million parameters. Our loss function is binary cross-entropy, and the Adaptive Moments (Adam) optimizer is used in the experiments.

• RoBERTa: Another state-of-the-art transformer-based NLP model, which uses self-attention to process and generate contextualized representations of input text.
One of the key advantages of RoBERTa over BERT is its training on a larger dataset and its use of a dynamic masking technique, allowing the model to learn more robust and generalizable representations of words. In this work, we fine-tuned the model on our dataset using the Adam optimizer with a binary cross-entropy loss function.

• ALBERT: A modified version of BERT with lower memory requirements. ALBERT has a reduced number of parameters, mainly due to factorized embedding parameterization and cross-layer parameter sharing. In the first technique, the large vocabulary embedding matrix is decomposed into two small matrices, separating the size of the hidden layers from the size of the vocabulary embedding. Cross-layer parameter sharing, on the other hand, prevents the number of parameters from growing with the depth of the model.

• DistilBERT: Another variant of the BERT model, aimed at applications with lower computational and memory requirements. The concept of knowledge distillation is adopted during pre-training, allowing a significant reduction in parameters without a significant impact on the performance of the model.

• GPT: Generative Pre-trained Transformer (GPT) models represent a family of neural-network-based language prediction models built on the Transformer architecture [21]. These models are pre-trained on a huge volume of diverse text data. GPT is currently available in different versions; the first version of the model was introduced in 2018 by OpenAI [21]. In this work, we used GPT 3.5 turbo. It is composed of 175 billion parameters, which is significantly more than its previous versions and other transformers, such as BERT. We used prompt engineering for the classification of tweets through GPT 3.5.

• Meta-LLAMA: Large Language Model Meta AI (LLAMA) is also a family of pre-trained LLMs.
Similar to GPT, multiple versions of LLaMA are available, with 7B to 70B parameters. In this work, we used LLaMA 2, an improved version of the base LLaMA model. Like the base model, LLaMA 2 is built on the Transformer architecture with several interesting changes and improvements, such as RMSNorm pre-normalization, the SwiGLU activation function, and the AdamW optimizer. The key differences between LLaMA 2 and the original LLaMA include a longer context (4096 tokens compared to 2048) and grouped-query attention instead of multi-query attention. As with GPT-3.5, we used the prompt engineering method for text classification with LLaMA. 3.2.2. Fusion of the Models Our fusion methods are based on a late fusion scheme, where the scores/posterior probabilities of the individual models are combined for the final decision using Equation 1, in which Sm1, Sm2, Sm3, ..., Smn represent the scores/posterior probabilities obtained through the 1st, 2nd, 3rd, and nth model, respectively, while W1, W2, W3, ..., Wn are the corresponding weights assigned to these models.

Sf = W1·Sm1 + W2·Sm2 + W3·Sm3 + ... + Wn·Smn (1)

The weights are assigned to the models on the basis of their performance. To this aim, several weight optimization/selection methods, including PSO, Nelder-Mead, BFGS, and the Powell method, are employed. These methods seek a set of variable values (i.e., W1, W2, W3, ..., Wn in our case) optimizing an objective function under a set of constraints. In this case, the fitness/objective function is the cumulative classification error obtained on a validation set using Equation 2, in which Aacc represents the cumulative accuracy computed on the validation set. In this work, our goal is to find a set of weights to be assigned to the models that minimizes the classification error.
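Equation 1 is a plain weighted sum of per-model scores; a minimal sketch of the fusion step (our own illustration, with hypothetical function and variable names) is:

```python
def late_fusion(model_scores, weights):
    """Weighted late fusion (Equation 1): Sf = W1*Sm1 + W2*Sm2 + ... + Wn*Smn.

    model_scores: per-model posterior probabilities/scores for one sample.
    weights: merit-based weights assigned to the corresponding models.
    """
    if len(model_scores) != len(weights):
        raise ValueError("one weight per model is required")
    return sum(w * s for w, s in zip(weights, model_scores))

# Example: two models with equal weights
fused = late_fusion([0.8, 0.6], [0.5, 0.5])  # 0.7
```

The fused score Sf would then be thresholded to obtain the final relevant/non-relevant decision.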
e = 1 − Aacc (2)

We note that the same fitness function is used by all the weight optimization methods employed in this work. These methods use different mechanisms and have their own pros and cons. A brief overview of each method is provided below. • PSO: Particle Swarm Optimization (PSO) is a heuristic approach that has been widely used in the literature for different tasks. For instance, in several works, PSO has been used to optimize hyper-parameters of ML algorithms, such as the number of layers, batch size, and number of neurons, in LSTMs and CNNs [22, 23]. Similarly, it has also been used for the hyper-parameter optimization of Federated Learning (FL) algorithms [24]. The literature also reports the effectiveness of the optimization technique in late fusion, where the algorithm is used to assign optimal weights to the classifiers [25, 26]. The algorithm solves the optimization problem iteratively, starting from a random set of candidate solutions, where each candidate solution is called a particle. At each iteration, each particle keeps track of its personal best solution and the global best solution in the swarm. Each particle adjusts two quantities, namely (i) its velocity and (ii) its position. The velocity of a particle is adjusted based on its own experience and the information shared by the other particles in the swarm. The position of a particle is adjusted based on its current position, its velocity, and the distances between its current position and its personal and global bests. The process continues until a global optimum is obtained. The key limitations of the method include a slow convergence rate, especially in high-dimensional problems, and entrapment in local minima. Being one of the key optimization algorithms, PSO implementations are available in several libraries; in this work, we used the open-source library pyswarm2.
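The velocity/position update loop just described can be sketched in a few lines of pure Python (a toy illustration with commonly used inertia and acceleration coefficients, not the pyswarm implementation used in the paper; in the fusion setting, the objective f would be the fitness e of Equation 2 evaluated at a candidate weight vector):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=50, seed=0):
    """Minimal PSO sketch: each particle tracks a velocity, a position,
    its personal best, and the swarm-wide global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + pull toward personal/global bests
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize a simple quadratic objective in two dimensions
best_weights, best_cost = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```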
• Nelder-Mead Method: Similar to PSO, the Nelder-Mead method has also been widely explored for different optimization tasks. For instance, Takenaga et al. [27] employed the method for computationally expensive optimization problems. Similarly, Ozaki et al. [28] used the algorithm for the hyper-parameter selection/optimization of a CNN model. The method has also been widely used for the fusion of classification algorithms in different visual and NLP applications [29, 8]. The method optimizes a set of variables leading to a minimum or maximum value of an objective function in a multidimensional space. To this aim, it uses a set of n + 1 test points (solutions), which are arranged as a simplex. The method then evaluates the behavior of the objective function at each test point to propose new test points, which replace the old ones in an iterative manner. In this work, we used the Python open-source library SciPy3 for the implementation of the method. • Limited-memory Broyden-Fletcher-Goldfarb-Shanno Algorithm (BFGS): Similar to PSO and Nelder-Mead, BFGS and its variants have proven very effective in different tasks, such as hyper-parameter optimization of deep learning models and fusion. For instance, Saputro et al. [30] employed the algorithm for parameter estimation in a geographically weighted ordinal logistic regression model. Maria et al. [31] employed the method along with other optimization techniques for the fusion of inducers' scores for media interestingness prediction. 2https://pyswarms.readthedocs.io/en/latest/ 3https://scipy.org/ BFGS, which is a local search optimization algorithm, belongs to the quasi-Newton optimization family and exploits second-order (curvature) information of the objective function. To obtain a set of optimal values, the algorithm relies on the inverse of the Hessian matrix of the multivariate objective function.
To this aim, the algorithm approximates the inverse from gradient evaluations, eliminating the need to compute the inverse explicitly at each step. One of the key limitations of the algorithm is its large memory requirement: maintaining the approximate inverse Hessian becomes impractical when the number of input parameters is large. To overcome this limitation, several variations of the algorithm have been proposed. For instance, Limited-memory BFGS (L-BFGS) [32] is one such variant with lower memory requirements. In this work, although we do not have a large number of inputs, we used the L-BFGS implementation of the method. • Powell Method: The Powell method is another interesting optimization method that has been widely used for similar tasks. For instance, Maria et al. [31] and [8] employed the method for merit-based late fusion of classifiers for media interestingness and water quality analysis, respectively. Similar to PSO, several variations of the algorithm have been proposed in the literature. The algorithm seeks a local minimum of the objective function, which is a real-valued function with multiple inputs and does not need to be differentiable. Starting from an initial point and an initial set of search directions, the method performs successive line minimizations of the objective function along each direction, updates the direction set with the overall displacement, and repeats the process until the improvement falls below a tolerance. 3.3. Regions Extraction In this phase, we define different regions based on the locations associated with the tweets. This allows us to analyze water quality and water-related issues in different regions of the world, as each region may have specific issues.
We note that this step is added to facilitate region-wise topic modeling, where we aim to extract keywords used in water-related tweets from different parts of the world. To this aim, the location addresses associated with each tweet are fed into ChatGPT, which identifies the corresponding countries by mapping the addresses to them. To ensure the quality of the mapping, the identified countries and the associated addresses are meticulously verified. To further enhance the accuracy of the data, we applied filtering techniques to specific locations. For example, in cases where the user's location included the address 'Florida, FL', we replaced it with 'USA'; this replacement was applied wherever the specified keyword was encountered. As a result, we successfully extracted and verified 4707 accurate locations. The countries list is then provided to ChatGPT to expand the geographical scope by mapping the unique countries into regions. 3.4. Topic Modeling The final component of the methodology is based on BERTopic [33], a state-of-the-art topic modeling technique. One of the key advantages of topic modeling is its ability to quickly discover the hidden topical patterns present in the data. These hidden patterns can yield meaningful insights leading to useful data-driven decisions. In this work, we aim to automatically extract the hidden topical patterns in the water-related tweets to identify the key water-related issues and concerns expressed over water quality. The algorithm used in this work extracts topics from tweets in three steps: converting the tweets into embeddings, reducing the dimensionality and clustering, and finally deriving topics from the clusters. The embeddings are obtained by a pre-trained model, namely Sentence-BERT.
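The topic-derivation step of this pipeline scores words with a class-based variant of TF-IDF (c-TF-IDF), where all tweets in a cluster are treated as one long document. A minimal pure-Python sketch of that scoring idea (our own illustration, not the BERTopic implementation) is:

```python
import math
from collections import Counter

def c_tf_idf(clusters):
    """Class-based TF-IDF sketch: concatenate every document in a cluster
    and score each term by its in-cluster frequency, discounted by how
    often the term appears across all clusters.

    clusters: dict mapping cluster id -> list of tokenized documents.
    Returns: dict mapping cluster id -> {term: score}.
    """
    class_tokens = {c: [t for doc in docs for t in doc]
                    for c, docs in clusters.items()}
    avg_len = sum(len(t) for t in class_tokens.values()) / len(class_tokens)
    term_freq_all = Counter()  # term frequency across all clusters
    for toks in class_tokens.values():
        term_freq_all.update(toks)
    scores = {}
    for c, toks in class_tokens.items():
        counts, total = Counter(toks), len(toks)
        scores[c] = {t: (n / total) * math.log(1 + avg_len / term_freq_all[t])
                     for t, n in counts.items()}
    return scores

# Terms concentrated in one cluster score higher than terms spread evenly
scores = c_tf_idf({"A": [["water", "pollution", "pollution"]],
                   "B": [["water", "drought"]]})
```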
The dimensionality reduction and clustering are carried out through Uniform Manifold Approximation and Projection (UMAP) and HDBSCAN (Hierarchical DBSCAN), respectively. Finally, topics are extracted from the clusters using a modified form of TF-IDF (Term Frequency-Inverse Document Frequency), namely c-TF-IDF. The algorithm brings several advantages. For instance, it clusters documents based on both lexical and semantic similarities. Moreover, BERTopic provides a library with several packages allowing more accurate and better visualization of the clusters, topics, and probabilities. It also comes with a few limitations; its main one is the assumption that each document/tweet contains only one topic, even though tweets with multiple topics are possible. We note that we also performed some pre-processing in addition to the data cleaning before topic modeling. For instance, we removed short words, stop words, numbers, and alphanumeric characters, which allows us to remove irrelevant, frequently used words. 4. Experiments and Results 4.1. Dataset Our final dataset, after removing less informative tweets during the manual analysis and annotation, contains a total of 7,930 tweets. Among these, 5,728 tweets are annotated as irrelevant, while the remaining 2,202 tweets are classified as relevant. The dataset has been divided into three subsets, namely (i) training, (ii) test, and (iii) validation, using a ratio of 70%, 20%, and 10%, respectively. The validation set is used for the computation of the classification error in the fitness function of the fusion methods. Table 1 provides some sample relevant and irrelevant tweets from the dataset. 4.2. Experimental Results The objectives of this work are multi-fold. On the one hand, we aim to extract water-related tweets; on the other hand, we want to automatically extract keywords from the relevant tweets.
Since each country/region may well have different water-related issues than others, we are also interested in the keywords/topics of tweets posted from a specific country/region. To achieve these objectives, we perform the following experiments. • Evaluation of the performance of several state-of-the-art LLMs individually. • Fusion of the classification scores obtained through the individual models in a merit-based fusion framework by employing several weight selection/optimization methods. • Topic modeling on all the relevant tweets. This will allow us to highlight key global water-related issues. • Topic modeling on the collection of tweets posted from a specific country/region. This will allow us to highlight the water-related issues specific to a particular region. In the next subsections, we provide a detailed analysis of the results of all the experiments. 4.2.1. Text Classification Results Table 2 provides the results of our first experiment, where we evaluate the performance of several LLMs in the application. We note that for GPT and LLaMA-2 we use the prompt engineering method with a few-shot (5-shot and 10-shot) classification setting without fine-tuning the models. As can be seen, overall, similar results are obtained for BERT, its different variants, and XLNet. However, the lowest results are observed with Meta-LLaMA-2. One potential reason for the model's low performance is the few-shot setting, as the model may have limited ability to generalize from the seen examples. Table 3 reports the results of our fusion experiment, where we combine the classification scores of the best-performing individual models in a merit-based fusion scheme. In this experiment, we considered two experimental settings.
In the first setting, we combined the classification scores obtained with the top 5 performing models, namely BERT, RoBERTa, DistilBERT, ALBERT, and XLNet, while in the second we considered the top 2 models, namely BERT and ALBERT. Overall, there is a slight improvement in the results of the fusion compared to the best-performing individual model. Generally, fusion yields an improvement in the F1 score; the smaller improvement in this case could be due to the complexity of the dataset or to the small variation among the individual models' results. As far as the comparison of the fusion methods is concerned, no significant differences have been observed. However, the performance of all the methods is higher when the top 2 best-performing models are used in the fusion rather than the top 5. In the case of the top 5 models, though the difference is not significant, the slightly lower performance could be due to the low-performing models adversely affecting the fusion methods. Table 1: Sample Tweets from the dataset. Relevant Samples Irrelevant Samples We have been receiving water of the worst quality from past 6 months. I want to bring this situation to your notice and solve this problem ASAP. Water is a basic need. Area : Adarsh Nagar, Bahadurgarh One of the most popular urban beaches in Gran Canaria, Las Canteras is a two-kilometer ribbon of sand caressed by warm and calm water. Drinking contaminated water can transmit diseases and back in 2017 nearly 1.6 million people died from diarrheal diseases. 1/3 of those were children under the age of 5. #climatecrisis #water Wondering about #books about #water sports (canoeing, sailing, yachting, scuba diving, etc.)? Check out call number range The landmark research blames chemical #pollution from plastics, farm fertilisers and pharmaceuticals in the #water.
Previously, it was thought the amount of #plankton had halved since the 1940s, but the #evidence gathered by the Scots suggests 90% has now vanished. Hope people leave water out in their gardens or balcony in any containers for all the beautiful wildlife x #water #wildlife #thirsty #animals x In face of recurring drought, cities seek security in wastewater recycling projects #security #projects #recycling #wastewater #water Removing pollution from water using water shaping tech #sketchup #depollution #watershaping #waterpollution In a larger portion of cases, #carpet #damage is treated efficiently and all the defects are repaired. Professional services take care of all the #Water #Damage #Restoration Sunshine Coast. The privatisation of water and power has been one of the biggest rip-offs of the British public in modern times. Time to jail those profiteering through pollution of our rivers and waterways! #water #corporategreed #utilities The theme this time is \u201dWater from Japanese restaurants\u201d. Is it true that there are many paid shops outside Japan? The popular article has exceeded 650pv Is water free at Japanese restaurants? Life without water is impossible. Save water. Save life. With every little drop, a day less to live on Earth. Your body depends on #water to survive. Every cell, tissue, and organ in your body needs water to work properly and for overall good health. Learn how to ensure you stay hydrated, and why it is important to do so, here in familydoctor Drinking contaminated #water can be harmful to one\u2019s health. #Cholera, #diarrhea, #dysentery, and #typhoid are just a few of the ailments it can induce. We\u2019ve worked with TheMixUK to explain what support is available for those struggling to pay for the increasing price of #Fuel and #Water bills. Take a read of the article here Clean Water is a necessity to daily life. Empower economically disadvantaged small communities to develop and sustain clean water supplies. 
Visit Central Florida Water Ski Sweepstakes

Table 2: Experimental results of the individual LLMs.

LLM | F1-score
BERT | 0.7686
ALBERT | 0.7636
DistilBERT | 0.7491
RoBERTa | 0.7541
XLNet | 0.76
GPT-3.5 (5-shot) | 0.7146
GPT-3.5 (10-shot) | 0.7246
Meta-LLAMA2 (5-shot) | 0.5832
Meta-LLAMA2 (10-shot) | 0.5876

Table 3: Evaluation of the fusion methods (F1-score).

Fusion Method | Top 2 (BERT and ALBERT) | Top 5
Simple Averaging | 0.770 | 0.7630
PSO | 0.772 | 0.770
Nelder-Mead Method | 0.776 | 0.771
Powell Method | 0.7713 | 0.7680
BFGS | 0.77 | 0.7687

4.2.2. Location Extraction and Topic Modeling Analysis In the topic modeling, we conducted two different experiments. In the first, we analyzed and tried to discover hidden topical patterns in the complete collection of water-related tweets in the test set. Figure 2 provides the top 10 topics and the corresponding words extracted from the collection of relevant tweets through BERTopic. As can be seen, most of the topics and associated words are very relevant. The issues highlighted by the algorithm from the tweet collection include sanitation and access to water, plastic pollution in water reservoirs, saving and utilization of rainwater, irrigation and drought issues, environmental factors, filtering of drinking water, heatwaves, chemicals and tap water, etc. In the second experiment, the collected relevant tweets were divided by region, which allowed us to discover topics in tweets relevant to or posted from certain regions. This experiment helps discover people's concerns, both local and global, about the important topic of water-related issues. As a first step, we extracted country names from the addresses associated with relevant tweets using ChatGPT. This resulted in a long list of countries from which water-related tweets were posted. We observed that very few tweets were recorded from certain countries.
For example, our collection of relevant tweets contains a single tweet each from Slovenia, Mozambique, El Salvador, and Grenada. To ensure a sufficient number of tweets per country, we considered only those countries from which at least 70 tweets were posted. We note that there is no scientific reason behind this threshold (i.e., a minimum of 70 tweets per country); we simply wanted to retain a sufficient number of countries in our list while at the same time ensuring a sufficient number of tweets per country for our analysis. Figure 3 provides the country-wise distribution of the relevant tweets in our dataset. A large portion of the tweets originated from the United States and the United Kingdom, which also indicates the interest of people from these countries in this important societal challenge. Figure 4 provides the list of topics extracted from the tweets originating from different countries. Topic 0 to Topic 6 show the groups of topics extracted from the tweet collections for Australia, Canada, India, Pakistan, South Africa, the United States, and the United Kingdom, respectively. Some of the topics and associated words are less relevant than others. For example, most of the words associated with Topic 0, which is extracted from tweets originating from Australia, are not very relevant to water-related issues. On the other hand, Topic 2 to Topic 6 are very relevant and helpful in highlighting the issues. For instance, Topic 2 stresses the careful usage of water in general and rainwater in particular. Topic 3 and Topic 4 are about drinking water in one of the provinces of Pakistan and in South Africa, respectively. Similarly, Topic 5 covers heatwaves, wildlife, and water pollution. Finally, Topic 6 also includes relevant keywords, such as clean water and droughts. We also performed topic modeling on different geographic regions by combining tweets from all the countries in each region.
These regions are formed on the basis of the geographic locations of the countries. To this aim, the list of countries is provided to ChatGPT, resulting in five regions: Asia, Africa, America, Oceania, and Europe. Similar to the country-wise topic modeling, we included only the regions having at least 70 tweets. Figure 5 provides the distribution of relevant tweets from each region. As can be seen in the figure, overall, a higher number of tweets originated from America, Europe, and Asia. Figure 6 provides a summary of the topics extracted from the tweets originating from different regions. Topic 0 to Topic 4 represent topics extracted from tweets originating from Africa, America, Asia, Europe, and Oceania, respectively. The majority of the topics and associated keywords are very relevant to water quality, except for the topic extracted from the Oceania region. The topics are similar to those observed in the country-wise topic modeling, which indicates that the regions mostly have similar types of water-related issues, or at least similar topics/concerns. 5." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.15604v1", |
| "title": "Hybrid LLM/Rule-based Approaches to Business Insights Generation from Structured Data", |
| "abstract": "In the field of business data analysis, the ability to extract actionable\ninsights from vast and varied datasets is essential for informed\ndecision-making and maintaining a competitive edge. Traditional rule-based\nsystems, while reliable, often fall short when faced with the complexity and\ndynamism of modern business data. Conversely, Artificial Intelligence (AI)\nmodels, particularly Large Language Models (LLMs), offer significant potential\nin pattern recognition and predictive analytics but can lack the precision\nnecessary for specific business applications. This paper explores the efficacy\nof hybrid approaches that integrate the robustness of rule-based systems with\nthe adaptive power of LLMs in generating actionable business insights.", |
| "authors": "Aliaksei Vertsel, Mikhail Rumiantsau", |
| "published": "2024-04-24", |
| "updated": "2024-04-24", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "In the field of business data analysis, the ability to extract actionable\ninsights from vast and varied datasets is essential for informed\ndecision-making and maintaining a competitive edge. Traditional rule-based\nsystems, while reliable, often fall short when faced with the complexity and\ndynamism of modern business data. Conversely, Artificial Intelligence (AI)\nmodels, particularly Large Language Models (LLMs), offer significant potential\nin pattern recognition and predictive analytics but can lack the precision\nnecessary for specific business applications. This paper explores the efficacy\nof hybrid approaches that integrate the robustness of rule-based systems with\nthe adaptive power of LLMs in generating actionable business insights.", |
| "main_content": "Introduction As organizations grapple with increasingly complex and diverse data sets, the demand for advanced techniques that can extract valuable insights has grown exponentially. Traditional rule-based systems have often struggled to keep up with the intricacies of modern business data, while stand-alone AI Models, although powerful, may still have limitations in certain scenarios. In response to these challenges, the concept of hybrid approaches has emerged as a compelling solution. By combining the strengths of rule-based systems and AI models, hybrid approaches offer the potential to enhance the process of data extraction and uncover meaningful insights from diverse data sources. In this paper, we explore the use of LLM-powered and rule-based systems to address the complexities of data extraction in the field of business intelligence. \fThe following sections will investigate the details of this hybrid approach, assessing its effectiveness in navigating the complexities of business data and extracting actionable insights. 2. The Hybrid LLM-Powered and Rule-Based Approach The hybrid approach combines the strengths of interpretable AI techniques, such as LIME, rule-based systems, and supervised document classification [1], to create a powerful framework for extracting actionable insights from business data. LLM, or Large Language Model, plays a vital role in the extraction process by integrating with rule-based systems to enhance the understanding and generation of natural language-generated insights. By utilizing LLM's ability to model linguistic characteristics and generate coherent responses, the hybrid approach can uncover personalized and nuanced user interests, needs, and goals from user journeys and user activities on the platform [2]. LLM, as an interpretable AI method, can explain the individual predictions of black-box ML models and provide insight into the underlying decision-making process. 
This can greatly enhance the transparency and trustworthiness of the data extraction process, as stakeholders can easily understand and validate the generated insights. Furthermore, the hybrid LLM-powered and rule-based approach utilizes the logic learning machine technique for supervised data mining. Considerations When implementing a hybrid LLM-powered and rule-based approach for extracting actionable insights from business data, there are several considerations to take into account: 1. Data quality and preprocessing: It is crucial to ensure that the business data used for analysis is of high quality, free from errors, inconsistencies, and biases. 2. Domain knowledge: It is essential to have a deep understanding of the specific business domain and its unique requirements. This knowledge will inform the design of the rule-based system and help identify relevant features and patterns in the data. 3. Scalability and computational resources: The hybrid approach may require significant computational resources, especially when working with large-scale textual datasets. While the hybrid LLM-powered and rule-based approach may seem promising, it is essential to consider the potential drawbacks and limitations of this methodology. One of the primary concerns with relying on large language models for data extraction is the ethical implications associated with using such powerful and complex AI systems. Large language models, especially those trained on large-scale textual datasets, have raised concerns about perpetuating biases and misinformation present in the training data. The potential for these models to generate biased or inaccurate insights based on flawed linguistic characteristics is a significant risk that needs to be carefully addressed. Additionally, the complex nature of LLMs may introduce challenges in understanding and validating the decision-making process, especially when combined with rule-based systems.
Moreover, while LLMs have demonstrated impressive capabilities for natural language understanding and generation, there is a continuous need for extensive computational resources to train, fine-tune, and maintain these models. This reliance on substantial computational infrastructure may pose challenges for businesses with limited resources or computing capabilities, making the practical implementation of this approach a potential barrier. 3. Introduction to Business Insight Generation Data Pipeline In the field of business intelligence, the journey from raw data to actionable insights involves a sophisticated pipeline that meticulously processes, analyzes, and synthesizes data. This pipeline is pivotal for organizations aiming to harness their data for strategic decision-making. While both rule-based and LLM approaches offer significant merits in data processing and analysis, the most effective business insight generation pipelines employ a hybrid strategy. This combines the precision and reliability of rule-based systems with the contextual understanding and linguistic flexibility of LLMs. Such a combination ensures a comprehensive analysis that is not only accurate but also richly informative and readily actionable. 3.1 Data Preprocessing The foundation of any insightful analysis is high-quality data. In the data preprocessing stage, raw data is cleaned, normalized, and transformed to ensure consistency and relevance for subsequent analysis. Both rule-based methods and LLMs play crucial roles here. Rule-based approaches excel in systematically cleaning and structuring data according to predefined standards, while LLMs can offer additional support, particularly in interpreting and correcting unstructured textual data. The synergy of these methods ensures a robust preparation of data, laying the groundwork for insightful extraction. 
Importance: Improves Data Quality: Preprocessing cleans the data by fixing or removing inaccuracies and inconsistencies, significantly improving its quality. Ensures Consistency: Normalization ensures that data from different sources or formats is brought to a common standard, facilitating accurate comparison and analysis. Enhances Model Performance: Quality preprocessing directly impacts the performance of predictive models and analyses by providing them with reliable data. Challenges: Variability of Data Sources: Data can come from diverse sources with different formats and standards, making preprocessing complex. Missing Values: Determining the best way to handle missing data (whether to impute, ignore, or remove it) can significantly affect the analysis outcomes. Scalability: As datasets grow in size, preprocessing steps must scale accordingly, requiring efficient algorithms and processing power. 3.2 Data Preprocessor Structure A data preprocessor is typically structured to sequentially apply a series of steps to clean and normalize the data. These steps might include: Data Cleaning: Removing duplicates, correcting errors, and dealing with missing values. Data Integration: Combining data from different sources into a cohesive dataset. Data Transformation: Converting data into a format or scale suitable for analysis, such as normalizing ranges or encoding categorical variables. Data Reduction: Reducing the dataset size through methods like principal component analysis (PCA) or feature selection to focus on the most informative aspects. 3.3 Rule-Based Approach In a rule-based approach to preprocessing, specific rules and criteria are defined to handle different preprocessing tasks. For instance, rules could dictate that all missing values in a particular column should be replaced with the median value of that column, or that certain outlier values should be capped at a predefined threshold.
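The two example rules just mentioned (median imputation and outlier capping) can be sketched as follows (a minimal pure-Python illustration; the function names are ours):

```python
from statistics import median

def impute_median(values):
    """Rule: replace missing entries (None) with the column median."""
    observed = [v for v in values if v is not None]
    m = median(observed)
    return [m if v is None else v for v in values]

def cap_outliers(values, threshold):
    """Rule: cap values exceeding a predefined threshold."""
    return [min(v, threshold) for v in values]

column = [10, None, 30, 500]
column = impute_median(column)      # [10, 30, 30, 500]
column = cap_outliers(column, 100)  # [10, 30, 30, 100]
```

In a real preprocessor these rules would be applied per column according to the rule set defined for the dataset.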
This approach is highly structured and can be very efficient for datasets with well-understood characteristics and common preprocessing needs. Advantages: Consistency and Control: Offers consistent results and allows for fine-grained control over the preprocessing logic. Efficiency: Can be highly efficient for datasets where the preprocessing needs are well understood and stable over time. Challenges: Flexibility: Adapting to new data sources or changes in the data can require manual updates to the preprocessing rules. Complexity: Developing and maintaining a comprehensive set of preprocessing rules can become complex for large or diverse datasets. 3.4 LLM-Based Approach An LLM-based approach to preprocessing involves leveraging language models to understand and manipulate data. This could involve using an LLM to infer missing values based on the context within the dataset or to identify and correct inconsistencies in textual data. While more experimental and less common than rule-based preprocessing, LLMs offer intriguing possibilities for handling complex and unstructured data. Advantages: Adaptability: Can adapt to new data patterns and inconsistencies without predefined rules. Handling Unstructured Data: Particularly effective for preprocessing textual data, where LLMs can understand and correct nuances in language. Challenges: Resource Intensity: Requires significant computational resources, especially for large datasets. Predictability: The outcomes of LLM-based preprocessing may be less predictable and harder to control than rule-based approaches. Conclusion Data preprocessing and normalization are critical steps that significantly impact the success of subsequent data analysis and modeling efforts. The choice between rule-based and LLM-based approaches depends on the specific characteristics of the data, the available resources, and the desired level of control over the preprocessing process.
While rule-based approaches offer predictability and efficiency for structured datasets, LLM-based approaches provide flexibility and powerful capabilities for dealing with complex and unstructured data.

3.5 Experimental Approach: Data Preprocessor Built by LLM Using Input and Output Dataset Examples

While full preprocessing of the input data stream might seem too resource-intensive, one may consider using an LLM's code generation capabilities. Building a data preprocessor in such a way involves designing a system that can analyze input dataset examples and understand the transformations needed to produce the desired output dataset. This process typically requires a combination of automated analysis and human oversight to ensure accuracy and relevance. Below, I'll outline a high-level approach to creating such a preprocessor, focusing on leveraging an LLM's capabilities to understand and apply data transformations based on examples.

Step 1: Define the Input and Output Dataset Examples

First, clearly define and document examples of your input and output datasets. These examples should illustrate the types of data preprocessing tasks you need to perform, such as handling missing values, normalizing data, transforming text, or categorizing information. The more comprehensive and varied your examples, the better the LLM can learn the desired transformations.

Step 2: Design the Preprocessing Task Framework

Develop a framework that outlines the types of preprocessing tasks your system should be able to handle. This framework could include:

Data Cleaning: Identifying and correcting inaccuracies or inconsistencies.

Data Normalization: Scaling numerical values or standardizing text formats.

Feature Engineering: Deriving new data columns from existing ones based on specific logic.

Data Augmentation: Generating synthetic data or additional features based on the input data.
Step 3: Utilize the LLM for Task Identification With your examples and framework in place, use the LLM to identify the specific preprocessing tasks required for each example. This step involves querying the LLM with pairs of input and output examples and asking it to describe the transformations that occurred. For instance: What preprocessing steps are needed to transform Dataset A (input) into Dataset B (output)? Identify and explain the data normalization techniques applied from Dataset A to B. 5 \fStep 4: Generate Preprocessing Scripts or Commands Once the LLM has identified the necessary tasks, the next step is to generate the actual code or commands that perform these transformations. This can be achieved by querying the LLM with specific preprocessing tasks identified in the previous step and requesting code snippets in your language of choice (e.g., Python, SQL). For example, ask the LLM to generate a Python function that applies the identified normalization technique to a given column of a pandas DataFrame. Step 5: Validate and Refine the Generated Code After generating the initial preprocessing scripts, it's crucial to validate their effectiveness on your datasets. This involves: Testing: Run the generated code on your input datasets and compare the results with your output examples to ensure accuracy. Refinement: If discrepancies are found, refine your queries to the LLM or adjust the generated code manually. This may involve providing the LLM with feedback on what was incorrect or asking for alternative solutions. Step 6: Automate and Iterate As you refine your preprocessing scripts, consider automating the process of querying the LLM and applying the generated transformations to new datasets. This could involve creating a pipeline that takes new data as input, uses the LLM to identify necessary preprocessing steps, generates the corresponding code, and applies it to the data. 
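The generate-then-validate loop described in Steps 3 through 5 can be sketched as follows. The `query_llm` function stands in for a real LLM API call and is a hypothetical stub here, hard-wired to return one fixed snippet so the validation logic can be shown end to end:

```python
def query_llm(prompt):
    # Hypothetical stub: a real implementation would send `prompt` to an LLM
    # API and return the generated code. Here it returns a fixed snippet.
    return ("def transform(rows):\n"
            "    return [{'name': r['name'].strip().title()} for r in rows]")

def build_prompt(input_rows, output_rows):
    # Pair an input/output example, asking for code that maps one to the other.
    return ("Write a Python function transform(rows) that converts\n"
            f"INPUT: {input_rows}\nOUTPUT: {output_rows}")

def generate_and_validate(input_rows, expected_rows):
    """Ask the LLM for code, execute it, and check it reproduces the example."""
    code = query_llm(build_prompt(input_rows, expected_rows))
    namespace = {}
    exec(code, namespace)  # in production, sandbox generated code before running it
    result = namespace["transform"](input_rows)
    return result == expected_rows, result

ok, produced = generate_and_validate(
    [{"name": "  alice  "}, {"name": "BOB"}],
    [{"name": "Alice"}, {"name": "Bob"}],
)
```

When validation fails (`ok` is False), the mismatch between `produced` and the expected output is exactly the feedback Step 5 suggests sending back to the LLM for refinement.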
Step 7: Continuous Learning and Improvement

Data preprocessing needs can evolve over time as new types of data are collected or analysis goals change. Continuously monitor the performance of your preprocessing system, and use new input-output examples to teach the LLM new transformations or refine existing ones. This ensures that your system remains effective and adaptable to changing requirements.

Conclusion

Building a data preprocessor via an LLM based on input and output dataset examples represents a novel approach to automating data preprocessing tasks. While promising, this method requires careful design, testing, and refinement to ensure it meets the specific needs of your data analysis projects. By leveraging the power of LLMs to understand and generate code for data transformations, you can create a flexible and powerful tool to streamline your data preprocessing workflows.

4. Business Insights Extraction

Extracting meaningful business insights from processed data is a complex task that requires discernment and precision. This stage involves identifying patterns, anomalies, and trends that are significant to the business. Employing a hybrid approach, rule-based systems can efficiently sift through large datasets to pinpoint specific information based on established criteria, while LLMs can further analyze these findings to uncover deeper insights and context. This dual strategy allows for a thorough exploration of the data, ensuring that no valuable insight is overlooked.

Examples of Business Insights

In the domain of business intelligence, the insights gleaned from data analysis serve as critical inputs for strategic decision-making. The emergence of sophisticated analytics and machine learning technologies has empowered businesses to extract a wide array of valuable insights from their data repositories. In this section, we will explore various categories of business insights related to business metrics that are pivotal for organizations.
General Anomalous Measurement Shifts

Identifying general anomalous shifts in measurements across the entire dataset is crucial for early detection of issues or opportunities. These anomalies could signal a sudden change in consumer behavior, operational hiccups, or emerging market trends. By monitoring for unexpected deviations from historical patterns, businesses can swiftly respond to mitigate risks or capitalize on new developments.

Specific Dimensions with Anomalous Measurement Shifts

Drilling down from general anomalies, it's important to identify anomalous shifts within specific dimensions of the data. This could involve particular product lines, geographic regions, or customer segments exhibiting unusual patterns. Pinpointing these dimensions enables businesses to address the root causes of anomalies and tailor their strategies to specific aspects of their operations.

Measurement Spikes

Measurement spikes are sudden, sharp increases in specific metrics followed by a fast decrease, which could indicate both positive and negative developments. For instance, a spike in website traffic might result from a successful marketing campaign, while a surge in customer service complaints could highlight issues with a product or service. Recognizing and understanding the context of these spikes is essential for effective management and decision-making.

All-Time High Measurement Values

Achieving all-time high values in certain measurements, such as sales, user engagement, or production efficiency, is a clear indicator of business success. These milestones provide valuable insights into what strategies are working and serve as a benchmark for future performance. Celebrating these achievements can also boost morale and motivate teams to continue striving for excellence.
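Two of the checks above — anomalous shifts and all-time highs — can be sketched with a simple z-score rule over a metric's history. This is a minimal illustration with a made-up series and an illustrative threshold of 3 standard deviations, not a production anomaly detector:

```python
from statistics import mean, stdev

def detect_signals(series, z_threshold=3.0):
    """Flag the latest value of a metric series as an anomalous shift,
    and check whether it is an all-time high versus the history."""
    history, latest = series[:-1], series[-1]
    mu, sigma = mean(history), stdev(history)
    z = (latest - mu) / sigma if sigma > 0 else 0.0
    return {
        "anomalous_shift": abs(z) > z_threshold,
        "all_time_high": latest > max(history),
    }

stable = detect_signals([100, 102, 99, 101, 98, 100, 101])
spike  = detect_signals([100, 102, 99, 101, 98, 100, 180])
```

The first series stays within its usual range, so neither flag fires; the jump to 180 in the second series triggers both the shift flag and the all-time-high flag.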
Specific Dimensions with Top Values per Measurement

Identifying which specific dimensions (e.g., product categories, regions, sales channels) are performing the best in terms of specific measurements can inform resource allocation and strategic focus. For example, if certain products are consistently top sellers, a business might decide to expand those lines or explore similar market opportunities.

Specific Dimensions Comparison per Business Performance

Comparing specific dimensions in relation to overall business performance allows for a nuanced understanding of how different areas of the business contribute to success. This can involve comparing sales performance across regions, customer satisfaction by product line, or marketing ROI by channel. Such comparisons not only highlight areas of strength but also reveal potential improvement opportunities.

Incorporating these types of insights into business strategies enables organizations to navigate complex markets with greater agility and precision. By leveraging data analytics and machine learning, businesses can transform raw data into actionable intelligence, driving growth and competitive advantage in today's data-driven world.

5. An Overview of Various Approaches to Extracting Business Insights

The process of business insights extraction from the preprocessed structured data can be approached through rule-based systems or LLMs, each offering distinct advantages and facing unique challenges. Rule-based approaches are highly effective in structured data environments, offering high precision, resource efficiency, deterministic outcomes, ease of interpretability, and customizability for domain-specific needs. However, they may struggle with scalability, flexibility, complexity in rule creation, overlooking nuanced patterns, and require significant maintenance overhead as data environments evolve.
On the other hand, LLMs provide adaptability to new data patterns, excel in handling unstructured data, and can generate rich, nuanced insights. They also reduce the maintenance overhead associated with rule updates. Yet, LLMs face challenges such as high resource intensity, interpretability issues, less precision in highly structured data, significant training data requirements, risk of bias, and difficulties in performing precise mathematical operations.

The juxtaposition of these approaches highlights a landscape where the integration of rule-based systems and LLMs can offer a comprehensive solution, leveraging the precision and reliability of rule-based methods with the flexibility and depth of LLM-generated insights. This hybrid approach aims to balance the strengths and mitigate the challenges inherent in each method, providing a robust framework for business insights extraction that is adaptable, scalable, and aligned with organizational goals.

5.1 Rule-Based Approaches to Business Insights Extraction

From an engineering perspective, rule-based approaches to business insights extraction operate on a framework of predefined logic and criteria to process and analyze data. This methodology is deeply rooted in the principles of traditional programming and data processing, where every operation is explicitly defined by the developers or data scientists, which requires a solid foundation in programming, data science, and domain-specific knowledge. The effectiveness of these systems depends on the precision with which rules are defined and applied, as well as the system's ability to process and analyze data efficiently and accurately.

Advantages

High Precision in Structured Data: Rule-based systems excel in environments with structured data, where precise conditions and thresholds can be defined for insight extraction, ensuring high accuracy in identifying specific patterns or anomalies.
Resource Efficiency: Compared to LLM approaches, rule-based systems generally require fewer computational resources for processing structured data, making them more cost-effective for certain types of analysis.

Deterministic Outcomes: The deterministic nature of rule-based systems guarantees consistent results, which is crucial for repeatable analysis and tracking changes over time within structured datasets.

Ease of Interpretability: Insights generated through rule-based methods are easier to trace back to their originating logic, offering clear interpretability and the ability to easily validate findings against business logic.

Customizability for Domain-Specific Needs: Rule-based systems can be finely tuned to specific business contexts and domains, allowing for tailored insight extraction that aligns closely with organizational goals and data characteristics.

Challenges

Scalability Limitations: As datasets grow in size and complexity, maintaining and updating the rules can become increasingly challenging, limiting the scalability of rule-based systems.

Flexibility and Adaptability: Rule-based systems can struggle to adapt to new patterns or changes in data structure without manual intervention, potentially missing emerging insights not covered by existing rules.

Complexity in Rule Creation: Developing comprehensive and effective rules requires deep domain expertise and understanding of the data, which can be resource-intensive and time-consuming.

Overlooked Nuances: Rule-based systems might overlook subtler, complex patterns in data that do not fit neatly into predefined criteria, potentially missing valuable insights.

Maintenance Overhead: As business contexts and data environments evolve, rule-based systems require continuous review and updates to rules, creating significant maintenance overhead.
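A minimal sketch of such a rule-based extraction engine: each rule pairs a predicate over a metrics dictionary with an insight message, and matching rules yield atomic insights. The rule names, thresholds, and metric keys below are illustrative assumptions, not part of any specific system:

```python
# Each rule: (name, predicate over a metrics dict, insight message).
RULES = [
    ("revenue_drop",
     lambda m: m["revenue"] < 0.9 * m["revenue_prev"],
     "Revenue fell more than 10% versus the prior period."),
    ("churn_high",
     lambda m: m["churn_rate"] > 0.05,
     "Churn rate exceeded the 5% threshold."),
]

def extract_insights(metrics, rules=RULES):
    """Return the messages of all rules whose predicate matches the metrics."""
    return [msg for _, predicate, msg in rules if predicate(metrics)]

insights = extract_insights(
    {"revenue": 80_000, "revenue_prev": 100_000, "churn_rate": 0.03}
)
```

Because every insight traces back to a named rule and an explicit threshold, this structure directly exhibits the interpretability and determinism discussed above.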
5.2 LLM Approaches to Business Insights Extraction

LLM approaches to business insights extraction represent a shift towards leveraging advanced artificial intelligence to analyze and interpret data. Unlike rule-based systems, LLMs rely on pre-trained models that understand and generate natural language, allowing for a more nuanced and context-aware analysis of data.

Advantages

Adaptability to New Patterns: LLMs can quickly adapt to new data patterns and changes, providing the flexibility to generate insights from evolving datasets without the need for manual rule adjustments.

Handling of Unstructured Data: Beyond structured data, LLMs excel in extracting insights from unstructured data, offering a broader scope of analysis that includes textual analysis and sentiment extraction.

Richness of Insights: LLMs can generate more nuanced and contextually rich insights, capturing complex relationships in the data that might be missed by rule-based systems.

Reduced Maintenance Overhead: Once trained, LLMs can continue to provide insights without the same level of ongoing maintenance and rule updates required by rule-based systems.

Challenges

Resource Intensity: LLMs typically require significant computational resources for training and inference, especially when processing large datasets, which can be costly.

Interpretability Issues: Insights generated by LLMs may not always offer clear interpretability, making it challenging to understand the rationale behind certain analyses.

Precision in Structured Data: For highly structured datasets, LLMs may not always match the precision of rule-based systems, especially in scenarios requiring strict adherence to predefined conditions.

Training Data Requirements: LLMs require large amounts of training data to perform optimally, which can be a limitation in data-scarce environments or when dealing with highly specific business contexts.
Risk of Bias: If not carefully managed, LLMs can perpetuate or amplify biases present in their training data, leading to skewed insights and potential ethical concerns.

Mathematical Operations: LLMs might struggle with performing precise mathematical operations or extracting insights based on complex numerical analysis, a task for which rule-based systems with explicitly defined logic are far better suited and more reliable.

6. Natural Language Narrative Generation

Transforming data-driven insights into natural language narratives makes the analysis accessible and understandable to non-technical decision-makers; in other words, it democratizes the data. LLMs are particularly adept at this task, utilizing their advanced natural language generation capabilities to articulate complex insights in clear, concise language. However, incorporating rule-based logic can enhance this process by structuring the narratives around key business metrics and objectives, ensuring that the generated text aligns with specific analytical goals. The result is a set of narratives that not only convey the insights but do so in a manner that is directly relevant to the business's strategic interests.

6.1 Rule-Based Approaches to Natural Language Narrative Generation

From an engineering perspective, rule-based approaches to natural language narrative generation involve a structured process where explicit rules dictate how data insights are translated into text. This method leverages a combination of data processing techniques and linguistic rules to produce narratives that are both informative and aligned with specific analytical goals.

Advantages

Precision and Relevance: Rule-based systems can generate narratives that precisely match specific business metrics and objectives, ensuring that every piece of generated text is highly relevant and aligned with predefined analytical goals.
Efficiency in Structured Environments: For structured data insights, rule-based narrative generation can be more resource-efficient, as it operates within well-defined parameters without the need for extensive computational power.

Consistency Across Narratives: By adhering to a set of predefined rules, this approach guarantees a high level of consistency in the narrative output, crucial for maintaining a uniform tone and style across all generated reports.

Customization to Business Needs: Rule-based systems allow for extensive customization, enabling the engineering of narratives that cater to the unique needs and preferences of different business stakeholders.

Deterministic Outcomes: The deterministic nature ensures that given the same set of insights, the narrative output will be consistent, providing a reliable basis for decision-making and reporting.

Challenges

Limited Scalability and Flexibility: As datasets grow and evolve, updating and maintaining the rule set for narrative generation can become increasingly complex, limiting the system's scalability and flexibility.

Complexity in Rule Development: Creating and refining the rules for narrative generation requires a deep understanding of both the domain and natural language structures, making the process resource-intensive.

Risk of Overlooking Nuances: This approach may miss subtleties in the data or fail to capture the full context of insights, leading to narratives that lack depth or fail to engage the audience.

Maintenance Overhead: Continuous monitoring and updating of the rule set to align with changing business conditions and data environments create a significant maintenance burden.

Rigid Narrative Structures: Rule-based narrative generation might result in texts that are structurally rigid and lack the fluidity or creativity that can engage readers more effectively.
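The template-driven style of rule-based narrative generation can be sketched as a mapping from insight types to fixed sentence templates. The insight types, field names, and figures below are made up for illustration:

```python
# Each insight type maps to a fixed sentence template, which guarantees the
# consistent tone and structure discussed above.
TEMPLATES = {
    "all_time_high": "{metric} reached an all-time high of {value} in {period}.",
    "spike": "{metric} spiked to {value} in {period}, well above its usual range.",
}

def render_narrative(insights):
    """Render a list of typed insights into a single narrative string."""
    return " ".join(
        TEMPLATES[insight["type"]].format(**insight["fields"])
        for insight in insights
    )

report = render_narrative([
    {"type": "all_time_high",
     "fields": {"metric": "Monthly sales", "value": "$1.2M", "period": "March"}},
])
```

The rigidity discussed under Challenges is visible here: every all-time-high insight will read identically except for the substituted fields, which is exactly what makes the output predictable and auditable.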
6.2 LLM Approaches to Natural Language Narrative Generation

Large Language Models (LLMs) represent a significant advancement in natural language processing (NLP) and generation (NLG), offering a powerful tool for transforming data-driven insights into natural language narratives. From an engineering perspective, deploying LLMs for narrative generation involves several key steps, leveraging the models' ability to understand context, generate coherent text, and adapt to new information.

Advantages

Adaptability to Evolving Data: LLMs can adapt to new patterns and insights from evolving datasets, generating narratives that reflect the most current context without manual adjustments.

Richness and Nuance in Narratives: Leveraging advanced NLP capabilities, LLMs can produce narratives that capture the subtleties and complexities of the insights, providing depth and engaging the audience more effectively.

Efficiency at Scale: LLMs can handle large volumes of data and generate narratives at scale, benefiting from their ability to process and synthesize information quickly.

Reduced Maintenance: Once an LLM is trained or fine-tuned, it requires less ongoing maintenance compared to rule-based systems, as it can automatically adapt to new information patterns.

Creative and Engaging Text Generation: LLMs have the potential to generate narratives that are not only informative but also engaging and creative, enhancing the readability of insight reports.

Challenges

High Resource Requirements: Training and running LLMs, especially for narrative generation from complex insights, require substantial computational resources and associated costs.

Interpretability and Transparency Issues: Understanding how an LLM constructs narratives from data insights can be challenging, raising questions about the process's transparency.
Precision in Structured Data Narratives: In scenarios requiring strict adherence to business metrics, LLMs might not match the precision offered by rule-based systems, potentially affecting the accuracy of the narratives.

Quality and Relevance of Training Data: The effectiveness of LLMs in generating relevant and accurate narratives heavily depends on the quality and domain relevance of the training data.

Risk of Bias and Inaccuracy: There is a potential for LLMs to perpetuate biases present in their training data or generate inaccuracies due to overgeneralization, affecting the credibility of the narratives.

7. Architectures of Hybrid Data Processing Pipelines for Business Insight Generation

To address the challenges arising at various stages of business insight generation, innovative data processing pipeline architectures have been developed, combining the strengths of rule-based systems with the advanced capabilities of LLMs. These hybrid approaches offer a sophisticated solution to the complexities of business intelligence, balancing precision with the nuance of natural language understanding. Various architectures of this kind have been proposed: [6] points to using structured information alongside LLMs in graph-related contexts, and [8] suggests a multi-stage LLM-based algorithm with result reranking; further approaches are mentioned in the References section. All have one thing in common: the LLM is not treated as a static black box that takes in prompts and outputs text, but rather as one element of a complex processing pipeline with rule-based and algorithmic input and output curation.

The first architecture (LLM-Based Insight Generation from Chunked Data) introduces a method where unprocessed data is segmented and fed through an LLM alongside expertly crafted prompts. This "chunking" strategy is designed to circumvent the token input limitations of LLMs, enabling the analysis of extensive datasets while focusing on generating relevant insights.
It exemplifies the adaptability and scalability of combining traditional data processing techniques with the power of LLMs.

Our second architecture (Sequential Data Processing and Insight Generation) outlines a sequential approach that begins with meticulous data preprocessing and moves towards extracting specific data fragments for analysis. These fragments are then enriched with expert prompts that guide the LLM in producing atomic insights, which are synthesized into a final report. This process highlights the precision of targeted analysis and the LLM's ability to generate insights from carefully curated data fragments, illustrating the benefits of a methodical, step-by-step approach to data analysis.

The third architecture (Hybrid Rule-Based and LLM Insight Generation) presents a compelling hybrid model, utilizing a rules-based engine for the initial generation of atomic business insights, followed by the summarization capabilities of an LLM to craft these insights into a cohesive, well-articulated report. This architecture leverages the accuracy and reliability of rule-based analysis with the advanced natural language generation skills of LLMs, offering a balanced solution that combines the best of both worlds.

These hybrid architectures underscore the significant advantages of integrating rule-based systems with LLMs over relying on purely rule-based or LLM methods alone. By doing so, organizations can achieve a higher level of precision and detail in their analysis while also benefiting from the contextual understanding and linguistic sophistication of LLMs. This synergy not only enhances the quality of insights generated but also ensures that these insights are both actionable and accessible to decision-makers.
As we explore these architectures further, it becomes evident that the future of business intelligence and data analysis will increasingly rely on such hybrid approaches, which offer a comprehensive solution to the complex challenge of transforming data into strategic business insights.

7.1 LLM-Based Insight Generation from Chunked Data

This architecture involves directing unprocessed, arbitrary data through a Large Language Model (LLM) to generate natural language insights, utilizing expertly crafted prompts to guide the analysis. Given the LLM's token input limitations — where the amount of data that can be processed in a single prompt is restricted — the proposed solution includes a "chunking" strategy. This approach entails dividing the large dataset into smaller, manageable pieces or "chunks" that are processed sequentially or in parallel by the LLM, with each chunk accompanied by a tailored prompt designed to extract specific insights or information.

Fig 1. LLM-Based Insight Generation from Chunked Data

Data Preparation: The unprocessed dataset is divided into smaller chunks. This division can be based on logical segments of the data, such as temporal splits (e.g., monthly data), categorical divisions (e.g., by product line or region), or simply by breaking the dataset into parts that fit within the LLM's token limits.

Prompt Engineering: For each chunk, an expertly designed prompt is created. These prompts are crafted to direct the LLM's focus towards extracting relevant insights from the specific portion of data it receives, taking into account the context and goals of the analysis.

Sequential or Parallel Processing: Depending on the infrastructure and urgency, the data chunks can be processed either sequentially or in parallel. Parallel processing significantly speeds up the analysis but requires more computational resources.
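The data preparation and prompt engineering steps above can be sketched as follows. The token estimate here is a crude whitespace word count used purely for illustration; a real system would use the target model's tokenizer, and the budget and prompt wording are assumptions:

```python
def estimate_tokens(row):
    # Crude stand-in for a real tokenizer: count whitespace-separated pieces.
    return len(str(row).split())

def chunk_rows(rows, max_tokens):
    """Greedily pack rows into chunks that stay within a token budget."""
    chunks, current, used = [], [], 0
    for row in rows:
        cost = estimate_tokens(row)
        if current and used + cost > max_tokens:
            chunks.append(current)
            current, used = [], 0
        current.append(row)
        used += cost
    if current:
        chunks.append(current)
    return chunks

def build_prompts(chunks, goal):
    # Pair each chunk with a tailored prompt directing the analysis.
    return [f"{goal}\nDATA: {chunk}" for chunk in chunks]

rows = [{"region": "EU", "sales": i} for i in range(10)]
chunks = chunk_rows(rows, max_tokens=12)
prompts = build_prompts(chunks, "Summarize notable sales patterns.")
```

Each resulting prompt can then be sent to the LLM sequentially or in parallel, as the processing step describes.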
Insight Generation: The LLM generates insights for each chunk, which are then compiled and synthesized into a comprehensive analysis.

Advantages

Scalability: By breaking down the dataset, this approach can handle large volumes of data that exceed the LLM's token limits, making it scalable to various data sizes.

Focused Analysis: Chunking allows for tailored prompts that can direct the LLM to generate more relevant and focused insights for each segment of the data.

Parallel Processing Capability: The ability to process data chunks in parallel can significantly reduce the time required for insight generation, making this approach efficient for time-sensitive analyses.

Challenges

Integration of Insights: Combining insights from different chunks into a coherent and comprehensive analysis can be challenging, especially if the chunks are processed independently without consideration of the overall context.

Chunking Strategy: Determining the optimal way to divide the data into chunks requires careful consideration to ensure that important patterns or trends are not overlooked. Poor chunking strategies can lead to fragmented insights that miss the bigger picture.

Resource Intensity: While parallel processing offers speed advantages, it also demands significant computational resources, which may not be feasible for all organizations or scenarios.

Prompt Engineering Complexity: Crafting effective prompts for each chunk requires a deep understanding of both the data and the analysis objectives, making prompt engineering a potentially complex and time-consuming task.

Conclusion

The chunking approach to utilizing LLMs for insight generation from large, unprocessed datasets offers a scalable and flexible solution to the challenge of LLM token limitations. While it presents several advantages in terms of focused analysis and processing efficiency, it also introduces complexities related to insight integration, chunking strategy, and the need for expert prompt engineering.
Addressing these challenges requires careful planning and potentially innovative solutions to ensure that the final insights are both comprehensive and actionable.

7.2 Sequential Data Processing and Insight Generation

This architecture outlines a structured approach to generating business insights from data, starting with preprocessing and culminating in a comprehensive insight report. The process emphasizes the extraction of specific data fragments, enrichment with expertly crafted prompts, and the use of a Large Language Model (LLM) to produce atomic insights, which are then synthesized into a final report. This methodology aims to ensure that the insights are both precise and relevant to the specific questions at hand.

Fig 2: Sequential Data Processing and Insight Generation

Step 1: Data Preprocessing

The initial step involves cleaning, normalizing, and preparing the data for analysis. This includes removing duplicates, handling missing values, normalizing data ranges, and possibly transforming the data to ensure consistency and accuracy. Effective preprocessing lays the groundwork for more accurate and meaningful insights by ensuring the data is in a suitable format for analysis.

Step 2: Extraction of Specific Fragments

Following preprocessing, the next step is to extract specific fragments of the data that are relevant to the questions being asked. This targeted approach ensures that the analysis is focused and efficient, dealing only with data that can potentially yield useful insights. The selection of fragments can be based on certain criteria or conditions that align with the business questions of interest.

Step 3: Enrichment with Expert Prompts

The extracted data fragments are then enriched with prompts created by domain experts. These prompts are designed to guide the LLM in analyzing the data fragments, directing it to focus on generating insights that are relevant to the specific business questions.
The prompts act as a bridge between the raw data and the LLM's capability to generate insightful analysis, ensuring that the model's output is aligned with the business objectives.

Step 4: Generation of Atomic Business Insights

With the prompts in place, the LLM is employed to analyze each enriched data fragment and generate atomic business insights. These insights are "atomic" in the sense that each one addresses a specific aspect or question related to the broader topic of interest. The use of an LLM facilitates the generation of nuanced and contextually relevant insights that might not be readily apparent through traditional analysis methods.

Step 5: Summarization into the Final Insight Report

The final step involves synthesizing the atomic insights into a comprehensive insight report. This synthesis requires careful consideration of how each atomic insight fits into the overall picture, ensuring that the final report provides a coherent and comprehensive analysis of the data in relation to the business questions. The summarization process may also involve prioritizing certain insights, drawing connections between them, and presenting them in a format that is accessible and actionable for decision-makers.

Advantages

Targeted Analysis: By focusing on specific data fragments and questions, this approach ensures that the analysis is highly relevant and efficient.

Expert Guidance: The use of expert-crafted prompts ensures that the LLM's analysis is closely aligned with the business objectives, enhancing the relevance and usefulness of the insights.

Depth of Insight: Leveraging an LLM to generate insights allows for a level of depth and nuance that is difficult to achieve with traditional analysis methods.

Challenges

Complexity of Prompt Engineering: Crafting effective prompts requires a deep understanding of both the domain and the questions at hand, which can be challenging and time-consuming.
Integration of Insights: Synthesizing atomic insights into a cohesive report requires a clear understanding of how each insight contributes to the overall analysis, which can be complex, especially for large datasets or multifaceted questions.

Data Fragmentation: Ensuring that the extraction of specific data fragments does not overlook important context or connections between different parts of the data can be challenging.

Conclusion

Architecture 2 offers a structured and targeted approach to generating business insights, leveraging the strengths of LLMs guided by expert input. While it offers clear advantages in producing deep and relevant insights, it also poses challenges related to the complexity of prompt engineering and the integration of insights. Addressing these challenges effectively is crucial for maximizing the value of the final insight report.

7.3 Hybrid Rule-Based and LLM Insight Generation

This architecture proposes a hybrid approach that combines a rules-based engine for generating atomic business insights with a Large Language Model (LLM) that summarizes these insights into a coherent, well-written final report. This methodology leverages the precision and reliability of rule-based systems for initial insight generation, while capitalizing on the strengths of LLMs in natural language understanding and generation for the creation of the final report.

Fig 3: Hybrid Rule-Based and LLM Insight Generation

Step 1: Data Preprocessing

The process begins with thorough data preprocessing to ensure the quality and consistency of the dataset. This includes cleaning the data (removing duplicates, handling missing values), normalizing it (standardizing formats and scales), and potentially transforming it (encoding categorical variables, generating new features) to prepare it for analysis. Effective preprocessing is critical to ensure that the insights generated in later stages are based on reliable and accurate data.
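The preprocessing described in Step 1 can be sketched in a few lines of plain Python. The record schema (`date`/`clicks`), the mean-fill strategy, and min-max scaling are illustrative assumptions for this sketch, not prescribed by the architecture:

```python
def preprocess(rows):
    """Deduplicate, fill missing values, and min-max normalize one metric.

    `rows` is a list of dicts such as {"date": "2024-01-01", "clicks": 10};
    the field names are hypothetical examples.
    """
    # 1. Remove exact duplicates while preserving order.
    seen, unique = set(), []
    for r in rows:
        key = (r["date"], r.get("clicks"))
        if key not in seen:
            seen.add(key)
            unique.append(r)

    # 2. Handle missing values: fill None with the mean of observed values.
    observed = [r["clicks"] for r in unique if r.get("clicks") is not None]
    mean = sum(observed) / len(observed)
    for r in unique:
        if r.get("clicks") is None:
            r["clicks"] = mean

    # 3. Normalize the metric to the [0, 1] range (min-max scaling).
    lo = min(r["clicks"] for r in unique)
    hi = max(r["clicks"] for r in unique)
    for r in unique:
        r["clicks_norm"] = (r["clicks"] - lo) / (hi - lo) if hi > lo else 0.0
    return unique
```

In practice each transformation would be chosen per metric (for example, forward-fill for time series rather than mean-fill), but the dedupe/fill/normalize sequence is the same.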
Step 2: Rules-Based Engine for Atomic Insight Generation

With the data prepared, a rules-based engine analyzes the dataset to generate atomic business insights. This engine operates on predefined logic and criteria to identify patterns, anomalies, trends, or other relevant findings within the data. The rules are crafted based on domain knowledge and specific business objectives, ensuring that the insights are directly applicable and valuable to the business. This approach benefits from a high level of control and transparency, allowing for consistent and interpretable insight generation.

Step 3: LLM as a Summarizer for the Final Report

The atomic insights generated by the rules-based engine are then passed to an LLM, which acts as a summarizer. The LLM leverages its advanced natural language generation capabilities to synthesize the atomic insights into a well-structured, coherent final report. This report not only summarizes the insights but also contextualizes them, highlighting their significance and potential implications for the business. The LLM's ability to understand and generate natural language ensures that the final report is accessible and engaging for its intended audience.

Advantages

Precision and Reliability: The use of a rules-based engine for initial insight generation ensures that the insights are accurate, consistent, and based on well-defined criteria.

High-Quality Reporting: Leveraging an LLM for report generation capitalizes on the model's strengths in summarization and natural language production, resulting in a final report that is both informative and well-written.

Efficient Process: This hybrid approach allows for an efficient division of labor between the rules-based engine and the LLM, with each component focusing on what it does best.
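Step 2's rules-based engine can be sketched as a table of predicates over precomputed metrics, each mapped to a human-readable message. The metric names, thresholds, and message templates below are illustrative assumptions, not the rules used in the system described here:

```python
# Minimal sketch of a rules-based engine for atomic insights.
# Each rule is (metric name, trigger condition, message template);
# all thresholds and metric names are hypothetical examples.
RULES = [
    ("cost_per_click", lambda v: v > 2.0,
     "Cost per click is high at {v:.2f} USD; review bidding strategy."),
    ("conversion_rate", lambda v: v < 0.01,
     "Conversion rate dropped below 1% ({v:.2%})."),
    ("sessions_wow_change", lambda v: abs(v) > 0.25,
     "Weekly sessions changed by {v:+.0%} week over week."),
]

def generate_atomic_insights(metrics):
    """Apply each predefined rule to the metrics dict and collect one
    human-readable atomic insight per triggered rule."""
    insights = []
    for metric, triggered, template in RULES:
        if metric in metrics and triggered(metrics[metric]):
            insights.append(template.format(v=metrics[metric]))
    return insights
```

Because every insight traces back to an explicit rule, the output is deterministic and auditable, which is exactly the control and transparency the step above highlights.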
Challenges

Complexity of Integration: Ensuring seamless integration between rules-based insight generation and LLM summarization can be complex, requiring careful design so that the insights are accurately and effectively communicated to the LLM.

Rules Maintenance: The rules-based engine requires ongoing maintenance and updating to remain aligned with evolving business objectives and data characteristics.

Summarization Accuracy: While LLMs are generally good at summarization, ensuring that the final report accurately reflects the nuances and importance of the atomic insights requires careful prompt engineering and possibly manual review.

Conclusion

Architecture 3 presents a strategic approach to business insight generation, combining the strengths of rules-based engines and LLMs to produce actionable and well-communicated insights. This hybrid model balances the precision of rule-based analysis with the linguistic prowess of LLMs, making it a powerful tool for businesses looking to derive meaningful insights from their data. Addressing the integration and maintenance challenges inherent in this approach is crucial for its success, ensuring that businesses can leverage their data effectively to inform strategic decision-making.

8. Benchmarking

We benchmarked the efficiency of data analysis, and specifically the extraction of important business events from business-related time-series datasets in the form of readable insights, using three basic approaches for each case: purely rule-based, LLM-only, and a form of hybrid approach. The data used for the benchmarking was collected from 30 corporate Google Analytics 4 and Google Ads accounts via their APIs, covering a time frame of approximately two years. The LLM used for the research was GPT-4, accessed via its native API.
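The hand-off at the heart of the hybrid architecture above, where rules-engine output is passed to the LLM for summarization, largely comes down to prompt assembly. A minimal sketch follows; the prompt wording is an illustrative assumption, not the prompt used in the benchmarked system:

```python
def build_summary_prompt(atomic_insights, period="last week"):
    """Assemble the summarization prompt that hands rules-engine output
    to the LLM. The instruction wording is a hypothetical example."""
    numbered = "\n".join(
        f"{i}. {insight}" for i, insight in enumerate(atomic_insights, 1)
    )
    return (
        f"You are a business analyst. Below are {len(atomic_insights)} "
        f"atomic insights for {period}.\n"
        "Write a short, coherent report that groups related findings, "
        "highlights their business significance, and does not invent "
        "numbers or names that are absent from the insights.\n\n"
        f"Atomic insights:\n{numbered}"
    )
```

The resulting string would then be sent to the LLM's completion endpoint; constraining the model to the listed facts in this way is one lever for the "Summarization Accuracy" challenge noted above.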
8.1 Precision of mathematical operations

LLMs can perform some mathematical calculations and certain kinds of logical reasoning, although the precision of those operations is not absolute, for a number of reasons:

Calculation is not a fundamental feature of LLMs but rather a side effect, with no guaranteed quality.

Certain metrics have specific calculation requirements and cannot be treated in a generic way. For example, the average value for cost-per-click-type metrics cannot be calculated with the standard average formula; a weighted-average scheme must be applied instead.

To minimize the number of LLM-induced errors, a rule-based preprocessing algorithm is added to the analytical pipeline, pre-calculating total and average values for each business metric for the required time periods. This increases the prompt size and therefore reduces the amount of data that can be sent to the LLM, which is a typical trade-off: recall traded for precision.

Processing pipeline type                            Processing efficiency
Rule based                                          100%
LLM                                                 63%
Hybrid (rule-based precalculation + LLM analysis)   87%

8.2 Number of proper name hallucinations

LLMs can sometimes "hallucinate" names or facts, producing outputs that seem plausible but are incorrect or fictional. This is particularly challenging when dealing with data that includes proper names, where accuracy is crucial.

Rule-based: This approach relies solely on predefined rules and does not generate new content, thus minimizing the risk of hallucinating names. It typically results in a very low or even zero count of proper name hallucinations.

LLM: Given its generative nature, an LLM is more prone to hallucinating names than a rule-based system. While it is sophisticated enough to generate highly plausible content, distinguishing between real and generated names without external validation can be challenging.
Hybrid (name hashing + LLM analysis + hash decoding): This approach uses name hashing to anonymize real names before processing with the LLM, followed by hash decoding to restore the names in the final output. This can significantly reduce the number of proper name hallucinations, since the model isn't directly generating or manipulating the proper names.

Processing pipeline type                               Number of errors
Rule based                                             0%
LLM                                                    12%
Hybrid (name hashing + LLM analysis + hash decoding)   3%

8.3 Recall of the important business insights

Recall, in this context, refers to the system's ability to extract and present all relevant business insights from the data, a crucial factor for comprehensive analysis.

Rule-based: This method might not capture all nuances or connections in the data due to its reliance on predefined patterns and thresholds, potentially leading to lower recall.

LLM: LLMs are adept at identifying patterns and insights in large datasets, sometimes allowing for non-trivial pattern recognition and out-of-the-box reasoning, potentially discovering business insights that can't be accessed by rule-based systems. However, the quality of the insights can vary based on the input data and the model's current knowledge.

Hybrid (source-specific data chunking + LLM analysis + LLM summarization of the processed chunks): By chunking the data and using the LLM for both detailed analysis and summarization, this approach aims to combine the best of both worlds. It potentially offers high recall by leveraging the LLM's pattern recognition and generative capabilities while focusing the analysis on manageable portions of the dataset.
Processing pipeline type                                                                        Processing efficiency
Rule based                                                                                      71%
LLM                                                                                             67%
Hybrid (source-specific data chunking + LLM analysis + LLM summarization of processed chunks)   82%

8.4 Overall user satisfaction on weekly/monthly reports

User satisfaction can be influenced by factors such as the accuracy of the information, the relevance and comprehensiveness of the insights provided, and the readability of the reports. This metric is measured as the ratio of likes to dislikes (the larger the number, the higher the overall user satisfaction).

Rule-based: While highly accurate, rule-based reports might lack the narrative quality and comprehensive insights that come from more sophisticated analysis, possibly leading to lower user satisfaction if the reports seem too dry or limited.

LLM: An LLM can generate more engaging and detailed reports with a narrative structure, potentially leading to higher user satisfaction. However, the accuracy and relevance of the insights depend heavily on the input data and the model's training.

Hybrid (rule based + LLM analysis): This approach aims to combine the reliability of rule-based preprocessing with the narrative and analytical strengths of the LLM, more specifically mixing the insight generation capabilities of the rule-based module with the data analysis and text summarization capabilities of the LLM.

Processing pipeline type   Likes-to-dislikes ratio
Rule based                 1.79
LLM                        3.82
Hybrid                     4.60
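The weighted-average requirement for cost-per-click metrics noted in Section 8.1 can be made concrete with a short sketch; the numbers in the usage example are illustrative:

```python
def average_cpc(daily_cost, daily_clicks):
    """Correct average cost-per-click: total cost divided by total clicks,
    i.e. a clicks-weighted average over the period."""
    return sum(daily_cost) / sum(daily_clicks)

def naive_average_cpc(daily_cost, daily_clicks):
    """The naive (incorrect) approach an LLM may default to: the plain
    mean of per-day CPC values, which overweights low-traffic days."""
    per_day = [cost / clicks for cost, clicks in zip(daily_cost, daily_clicks)]
    return sum(per_day) / len(per_day)
```

For example, with costs of $100 and $10 on days with 1000 and 10 clicks, the weighted average is 110/1010, roughly $0.11 per click, while the naive mean of the daily CPCs ($0.10 and $1.00) is $0.55, which is why the pipeline pre-calculates such metrics with rules before the data reaches the LLM.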