| { |
| "url": "http://arxiv.org/abs/2404.16745v1", |
| "title": "Statistical Inference for Covariate-Adjusted and Interpretable Generalized Factor Model with Application to Testing Fairness", |
| "abstract": "In the era of data explosion, statisticians have been developing\ninterpretable and computationally efficient statistical methods to measure\nlatent factors (e.g., skills, abilities, and personalities) using large-scale\nassessment data. In addition to understanding the latent information, the\ncovariate effect on responses controlling for latent factors is also of great\nscientific interest and has wide applications, such as evaluating the fairness\nof educational testing, where the covariate effect reflects whether a test\nquestion is biased toward certain individual characteristics (e.g., gender and\nrace) taking into account their latent abilities. However, the large sample\nsize, substantial covariate dimension, and great test length pose challenges to\ndeveloping efficient methods and drawing valid inferences. Moreover, to\naccommodate the commonly encountered discrete types of responses, nonlinear\nlatent factor models are often assumed, bringing further complexity to the\nproblem. To address these challenges, we consider a covariate-adjusted\ngeneralized factor model and develop novel and interpretable conditions to\naddress the identifiability issue. Based on the identifiability conditions, we\npropose a joint maximum likelihood estimation method and establish estimation\nconsistency and asymptotic normality results for the covariate effects under a\npractical yet challenging asymptotic regime. Furthermore, we derive estimation\nand inference results for latent factors and the factor loadings. We illustrate\nthe finite sample performance of the proposed method through extensive\nnumerical studies and an application to an educational assessment dataset\nobtained from the Programme for International Student Assessment (PISA).", |
| "authors": "Jing Ouyang, Chengyu Cui, Kean Ming Tan, Gongjun Xu", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "stat.ME", |
| "cats": [ |
| "stat.ME" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Latent factors, often referred to as hidden factors, play an increasingly important role in modern statistics to analyze large-scale complex measurement data and find wide-ranging applications across various scientific fields, including educational assessments (Reckase 2009, Hambleton & Swaminathan 2013), macroeconomic forecasting (Stock & Watson 2002, Lam et al. 2011), and biomedical diagnosis (Carvalho et al. 2008, Frichot et al. 2013). For instance, in educational testing and social sciences, latent factors are used to model unobservable traits of respondents, such as skills, personality, and attitudes (von Davier 2008, Reckase 2009); in biology and genomics, latent factors are used to capture underlying genetic factors, gene expression patterns, or hidden biological mechanisms (Carvalho et al. 2008, Frichot et al. 2013). To uncover the latent factors and analyze large-scale complex data, various latent factor models have been developed and extensively investigated in the existing literature (Bai 2003, Bai & Li 2012, Fan et al. 2013, Chen et al. 2023b, Wang 2022). In addition to measuring the latent factors, the observed covariates and the covariate effects conditional on the latent factors hold significant scientific interpretations in many applications (Reboussin et al. 2008, Park et al. 2018). One important application is testing fairness, which has received increasing attention in the fields of education, psychology, and social sciences (Candell & Drasgow 1988, Belzak & Bauer 2020, Chen et al. 2023a). In educational assessments, testing fairness, or measurement invariance, implies that groups from diverse backgrounds have the same probability of endorsing the test items, controlling for individual proficiency levels (Millsap 2012). Testing fairness is not only of scientific interest to psychometricians and statisticians but also attracts widespread public awareness (Toch 1984). 
In the era of rapid technological advancements, international and large-scale educational assessments are becoming increasingly prevalent. One example is the Programme for International Student Assessment (PISA), which is a large-scale international assessment with substantial sample size and test length (OECD 2019). PISA assesses the knowledge and skills of 15-year-old students in mathematics, reading, and science domains (OECD 2019). In PISA 2018, over 600,000 students from 37 OECD (Organisation for Economic Co-operation and Development) countries and 42 partner countries/economies participated in the test (OECD 2019). To assess fairness of the test designs in such large-scale assessments, it is important to develop modern and computationally efficient methodologies for interpreting the effects of observed covariates (e.g., gender and race) on the item responses, controlling for the latent factors. However, the discrete nature of the item responses, the increasing sample size, and the large number of test items in modern educational assessments pose great challenges for the estimation and inference for the covariate effects as well as for the latent factors. For instance, in educational and psychological measurements, such a testing fairness issue (measurement invariance) is typically assessed by differential item functioning (DIF) analysis of item response data that aims to detect the DIF items, where a DIF item has a response distribution that depends on not only the measured latent factors but also respondents\u2019 covariates (such as group membership). Despite the many statistical methods that have been developed for DIF analysis, existing methods often require domain knowledge to pre-specify DIF-free items, namely anchor items, which may be misspecified and lead to biased estimation and inference results (Thissen 1988, Tay et al. 2016). 
To address this limitation, researchers developed item purification methods to iteratively select anchor items through stepwise selection models (Candell & Drasgow 1988, Fidalgo et al. 2000, Kopf et al. 2015). More recently, tree-based methods (Tutz & Berger 2016), regularized estimation methods (Bauer et al. 2020, Belzak & Bauer 2020, Wang et al. 2023), item pair functioning methods (Bechger & Maris 2015), and many other non-anchor-based methods have been proposed. However, these non-anchor-based methods do not provide valid statistical inference guarantees for testing the covariate effects. It remains an open problem to perform statistical inference on the covariate effects and the latent factors in educational assessments. To address this open problem, we study the statistical estimation and inference for a general family of covariate-adjusted nonlinear factor models, which includes the popular factor models for binary, count, continuous, and mixed-type data that commonly occur in educational assessments. The nonlinear model setting poses great challenges for estimation and statistical inference. Despite recent progress in the factor analysis literature, most existing studies focus on estimation and inference under linear factor models (Stock & Watson 2002, Bai & Li 2012, Fan et al. 2013) and covariate-adjusted linear factor models (Leek & Storey 2008, Wang et al. 2017, Gerard & Stephens 2020, Bing et al. 2024). The techniques employed in linear factor model settings are not applicable here due to the nonlinearity inherent in the general models under consideration. Recently, several researchers have also investigated parameter estimation and inference for generalized linear factor models (Chen et al. 2019, Wang 2022, Chen et al. 2023b). However, they either focus only on the overall consistency properties of the estimation or do not incorporate covariates into the models. 
In a concurrent work, motivated by applications in single-cell omics, Du et al. (2023) considered a generalized linear factor model with covariates and studied its inference theory, where the latent factors are used as surrogate variables to control for unmeasured confounding. However, they imposed relatively stringent assumptions on the sparsity of covariate effects and the dimension of covariates, and their theoretical results also rely on data-splitting. Moreover, Du et al. (2023) focused only on statistical inference on the covariate effects, while that on factors and loadings was unexplored, which is often of great interest in educational assessments. Establishing inference results for covariate effects and latent factors simultaneously under nonlinear models remains an open and challenging problem, due to the identifiability issue from the incorporation of covariates and the nonlinearity issue in the considered general models. To overcome these issues, we develop a novel framework for performing statistical inference on all model parameters and latent factors under a general family of covariate-adjusted generalized factor models. Specifically, we propose a set of interpretable and practical identifiability conditions for identifying the model parameters, and further incorporate these conditions into the development of a computationally efficient likelihood-based estimation method. Under these identifiability conditions, we develop new techniques to address the aforementioned theoretical challenges and obtain estimation consistency and asymptotic normality for covariate effects under a practical yet challenging asymptotic regime. Furthermore, building upon these results, we establish estimation consistency and provide valid inference results for factor loadings and latent factors that are often of scientific interest, advancing our theoretical understanding of nonlinear latent factor models. The rest of the paper is organized as follows. 
In Section 2, we introduce the model setup of the covariate-adjusted generalized factor model. Section 3 discusses the associated identifiability issues and further presents the proposed identifiability conditions and estimation method. Section 4 establishes the theoretical properties for not only the covariate effects but also the latent factors and factor loadings. In Section 5, we perform extensive numerical studies to illustrate the performance of the proposed estimation method and the validity of the theoretical results. In Section 6, we analyze an educational testing dataset from the Programme for International Student Assessment (PISA) and identify test items that may lead to potential bias among different test-takers. We conclude with some potential future directions in Section 7. Notation: For any integer N, let [N] = {1, . . . , N}. For any set S, let #S be its cardinality. For any vector r = (r_1, . . . , r_l)⊤, let ∥r∥_0 = #{j : r_j ≠ 0}, ∥r∥_∞ = max_{j=1,...,l} |r_j|, and ∥r∥_q = (∑_{j=1}^{l} |r_j|^q)^{1/q} for q ≥ 1. We define 1_x^{(y)} to be the y-dimensional vector whose x-th entry is 1 and all other entries are 0. For any symmetric matrix M, let λ_min(M) and λ_max(M) be the smallest and largest eigenvalues of M. For any matrix A = (a_{ij})_{n×l}, let ∥A∥_{∞,1} = max_{j=1,...,l} ∑_{i=1}^{n} |a_{ij}| be the maximum absolute column sum, ∥A∥_{1,∞} = max_{i=1,...,n} ∑_{j=1}^{l} |a_{ij}| be the maximum absolute row sum, ∥A∥_max = max_{i,j} |a_{ij}| be the maximum absolute matrix entry, ∥A∥_F = (∑_{i=1}^{n} ∑_{j=1}^{l} |a_{ij}|^2)^{1/2} be the Frobenius norm of A, and ∥A∥ = (λ_max(A⊤A))^{1/2} be the spectral norm of A. Let ∥·∥_{ψ_1} denote the sub-exponential norm. Define A^v = vec(A) ∈ R^{nl} to be the vectorization of a matrix A ∈ R^{n×l}. Finally, we denote by ⊗ the Kronecker product.",
| "main_content": "Consider n independent subjects with q measured responses and p∗ observed covariates. For the ith subject, let Y_i ∈ R^q be a q-dimensional vector of responses corresponding to the q measurement items and X_i^c ∈ R^{p∗} be a p∗-dimensional vector of observed covariates. Moreover, let U_i be a K-dimensional vector of latent factors representing the unobservable traits such as skills and personalities, where we assume K is specified, as in many educational assessments. We assume that the q-dimensional responses Y_i are conditionally independent given X_i^c and U_i. Specifically, we model the jth response for the ith subject, Y_ij, by the following conditional distribution: Y_ij ∼ p_ij(y | w_ij), where w_ij = β_{j0} + γ_j⊤ U_i + β_{jc}⊤ X_i^c. (1) Here β_{j0} ∈ R is the intercept parameter, β_{jc} = (β_{j1}, . . . , β_{jp∗})⊤ ∈ R^{p∗} are the coefficient parameters for the observed covariates, and γ_j = (γ_{j1}, . . . , γ_{jK})⊤ ∈ R^K are the factor loadings. For better presentation, we write β_j = (β_{j0}, β_{jc}⊤)⊤ as an assembled vector of intercept and coefficients and define X_i = (1, (X_i^c)⊤)⊤ with dimension p = p∗ + 1, which gives w_ij = γ_j⊤ U_i + β_j⊤ X_i. Given w_ij, the function p_ij is some specified probability density (mass) function. 
Here, we consider a general and flexible modeling framework by allowing different types of p_ij functions to model diverse response data in wide-ranging applications, such as binary item response data in educational and psychological assessments (Mellenbergh 1994, Reckase 2009) and mixed types of data in educational and macroeconomic applications (Rijmen et al. 2003, Wang 2022); see also Remark 1. A schematic diagram of the proposed model setup is presented in Figure 1. Figure 1: A schematic diagram of the proposed model in (1): the covariates X_i ∈ R^p and the latent factors U_i ∈ R^K point to the responses Y_ij ∈ R, j ∈ [q], through the coefficients β_1, . . . , β_q and the loadings γ_1, . . . , γ_q. The subscript i indicates the ith subject, out of n independent subjects. The response variable Y_ij can be discrete or continuous. Our proposed covariate-adjusted generalized factor model in (1) is motivated by applications in testing fairness. In the context of educational assessment, the subject\u2019s responses to questions depend on latent factors U_i, such as students\u2019 abilities and skills, and are potentially affected by observed covariates X_i^c, such as age, gender, and race, among others (Linda M. Collins 2009). The intercept β_{j0} is often interpreted as the difficulty level of item j and referred to as the difficulty parameter in psychometrics (Hambleton & Swaminathan 2013, Reckase 2009). The capability of item j to further differentiate individuals based on their latent abilities is captured by γ_j = (γ_{j1}, . . . , γ_{jK})⊤, whose entries are also referred to as discrimination parameters (Hambleton & Swaminathan 2013, Reckase 2009). The effects of the observed covariates X_i^c on the subject\u2019s response to the jth question Y_ij, conditioned on the latent abilities U_i, are captured by β_{jc} = (β_{j1}, . . . , β_{jp∗})⊤, which are referred to as DIF effects in psychometrics (Holland & Wainer 2012). 
This setting gives rise to the fairness problem of validating whether the response probabilities to the measurements differ across different genders, races, or countries of origin while holding abilities and skills at the same level. Given the observed data from n independent subjects, we are interested in studying the relationships between Y_i and X_i^c after adjusting for the latent factors U_i in (1). Specifically, our goal is to test the statistical hypothesis H_0: β_{js} = 0 versus H_a: β_{js} ≠ 0 for s ∈ [p∗], where β_{js} is the regression coefficient for the sth covariate and the jth response, after adjusting for the latent factor U_i. In many applications, the latent factors and factor loadings also carry important scientific interpretations, such as students\u2019 abilities and test items\u2019 characteristics. This motivates us to perform statistical inference on the parameters β_{j0}, γ_j, and U_i as well. Remark 1. The proposed model setup (1) is general and flexible, as various functions p_ij could be used to model diverse types of response data in wide-ranging applications. For instance, in educational assessments, the logistic factor model (Reckase 2009) with p_ij(y | w_ij) = exp(w_ij y)/{1 + exp(w_ij)}, y ∈ {0, 1}, and the probit factor model (Birnbaum 1968) with p_ij(y | w_ij) = {Φ(w_ij)}^y {1 − Φ(w_ij)}^{1−y}, y ∈ {0, 1}, where Φ(·) is the cumulative distribution function of the standard normal distribution, are widely used to model binary responses indicating correct or incorrect answers to the test items. Such models are often referred to as item response theory models (Reckase 2009). In economics and finance, linear factor models with p_ij(y | w_ij) ∝ exp{−(y − w_ij)^2/(2σ^2)}, where y ∈ R and σ^2 is the variance parameter, are commonly used to model continuous responses, such as GDP, interest rates, and consumer indices (Bai 2003, Bai & Li 2012, Stock & Watson 2016). 
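As a concrete illustration, responses from the logistic specification of model (1) can be simulated in a few lines of numpy (a minimal sketch; the dimensions, parameter values, and sparsity pattern are arbitrary choices of ours, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, K, p_star = 500, 20, 2, 2             # subjects, items, factors, covariates
U = rng.normal(size=(n, K))                 # latent factors U_i
Xc = rng.normal(size=(n, p_star))           # observed covariates X_i^c
beta0 = rng.uniform(-1, 1, size=q)          # intercepts (difficulty parameters)
Gamma = rng.uniform(0.5, 1.5, size=(q, K))  # loadings (discrimination parameters)
Bc = np.zeros((q, p_star))                  # covariate (DIF) effects, mostly zero
Bc[:3, 0] = 0.8                             # first three items biased on covariate 1

W = beta0 + U @ Gamma.T + Xc @ Bc.T         # w_ij = beta_j0 + gamma_j'U_i + beta_jc'X_i^c
P = 1.0 / (1.0 + np.exp(-W))                # logistic p_ij: P(Y_ij = 1 | w_ij)
Y = rng.binomial(1, P)                      # binary item responses
```

Swapping the logistic link for Φ(·) or a Gaussian density yields the probit and linear variants described in Remark 1.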
Moreover, depending on the observed responses, different types of functions p_ij can be used to model the response from each item j ∈ [q]. Therefore, mixed types of data, which are common in educational measurements (Rijmen et al. 2003) and macroeconomic applications (Wang 2022), can also be analyzed by our proposed model. Remark 2. In addition to testing fairness, the considered model finds wide-ranging applications in the real world. For instance, in genomics, the gene expression status may depend on unmeasured confounders or latent biological factors and also be associated with the variables of interest, including medical treatment, disease status, and gender (Wang et al. 2017, Du et al. 2023). The covariate-adjusted general factor model helps to investigate the effects of the variables of interest on gene expressions, controlling for the latent factors (Du et al. 2023). This setting is also applicable to other scenarios, such as brain imaging, where the activity of a brain region may depend on the measurable spatial distance from neighboring regions and latent structures due to unmodeled factors (Leek & Storey 2008). To analyze large-scale measurement data, we aim to develop a computationally efficient estimation method and to provide inference theory for quantifying uncertainty in the estimation. Motivated by recent work in high-dimensional factor analysis, we treat the latent factors as fixed parameters and apply a joint maximum likelihood method for estimation (Bai 2003, Fan et al. 2013, Chen et al. 2020). Specifically, we let the collection of the item responses from the n independent subjects be Y = (Y_1, . . . , Y_n)⊤ ∈ R^{n×q} and the design matrix of observed covariates be X = (X_1, . . . , X_n)⊤ ∈ R^{n×p}. For the model parameters, the discrimination parameters for all q items are denoted as Γ = (γ_1, . . . , γ_q)⊤ ∈ R^{q×K}, while the intercepts and the covariate effects for all q items are denoted as B = (β_1, . . . 
, β_q)⊤ ∈ R^{q×p}. The latent factors from all n subjects are U = (U_1, . . . , U_n)⊤ ∈ R^{n×K}. Then, the joint log-likelihood function can be written as follows: L(Y | Γ, U, B, X) = (nq)^{−1} ∑_{i=1}^{n} ∑_{j=1}^{q} l_ij(β_{j0} + γ_j⊤ U_i + β_{jc}⊤ X_i^c), (2) where l_ij(w_ij) = log p_ij(Y_ij | w_ij) is the individual log-likelihood function with w_ij = β_{j0} + γ_j⊤ U_i + β_{jc}⊤ X_i^c. We aim to obtain (Γ̂, Û, B̂) by maximizing the joint likelihood function L(Y | Γ, U, B, X). While the estimators can be computed efficiently by maximizing the joint likelihood function through an alternating maximization algorithm (Collins et al. 2002, Chen et al. 2019), challenges emerge for performing statistical inference on the model parameters. • One challenge concerns the model identifiability. Without additional constraints, the covariate effects are not identifiable due to the incorporation of covariates and their potential dependence on latent factors. The latent factors and factor loadings encounter similar identifiability issues as in traditional factor analysis (Bai & Li 2012, Fan et al. 2013). Ensuring that the model is statistically identifiable is the fundamental prerequisite for achieving model reliability and making valid inferences (Allman et al. 2009, Gu & Xu 2020). • Another challenge arises from the nonlinearity of our proposed model. In the existing literature, most studies focus on statistical inference for our proposed setting in the context of linear models (Bai & Li 2012, Fan et al. 2013, Wang et al. 2017). On the other hand, settings with a general log-likelihood function l_ij(w_ij), including covariate-adjusted logistic and probit factor models, are less investigated. Common techniques for linear models are not applicable to the considered general nonlinear model setting. Motivated by these challenges, we propose interpretable and practical identifiability conditions in Section 3.1. 
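For the Bernoulli-logistic choice of p_ij, the objective (2) has a simple closed form; a minimal numpy sketch (the function name and the logistic specialization are ours, for illustration only):

```python
import numpy as np

def joint_loglik(Y, Gamma, U, B, X):
    """Joint log-likelihood (2) for a logistic p_ij, averaged over the n*q
    entries. B stacks (beta_j0, beta_jc'); X carries a leading column of
    ones, so that w_ij = gamma_j' U_i + beta_j' X_i."""
    W = U @ Gamma.T + X @ B.T
    # Bernoulli-logistic log-density: y*w - log(1 + e^w), evaluated stably
    return np.mean(Y * W - np.logaddexp(0.0, W))
```

An alternating maximization scheme would increase this objective over (Γ, B) with U held fixed and then over U, in turn.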
We then incorporate these conditions into the joint-likelihood-based estimation method in Section 3.2. Furthermore, we introduce a novel inference framework for performing statistical inference on β_j, γ_j, and U_i in Section 4. 3 Method 3.1 Model Identifiability Identifiability issues commonly occur in latent variable models (Allman et al. 2009, Bai & Li 2012, Xu 2017). The proposed model in (1) has two major identifiability issues. The first issue is that the proposed model remains unchanged after certain linear transformations of both B and U, causing the covariate effects together with the intercepts, represented by B, and the latent factors, denoted by U, to be unidentifiable. The second issue is that the model is invariant after an invertible transformation of both U and Γ, as in linear factor models (Bai & Li 2012, Fan et al. 2013), causing the latent factors U and factor loadings Γ to be undetermined. Specifically, under the model setup in (1), we define the joint probability distribution of the responses to be P(Y | Γ, U, B, X) = ∏_{i=1}^{n} ∏_{j=1}^{q} p_ij(Y_ij | w_ij). The model parameters are identifiable if and only if for any response Y, there does not exist (Γ, U, B) ≠ (Γ̃, Ũ, B̃) such that P(Y | Γ, U, B, X) = P(Y | Γ̃, Ũ, B̃, X). The first issue, concerning the identifiability of B and U, is that for any (Γ, U, B) and any transformation matrix A, there exist Γ̃ = Γ, Ũ = U + XA⊤, and B̃ = B − ΓA such that P(Y | Γ, U, B, X) = P(Y | Γ̃, Ũ, B̃, X). This identifiability issue leads to the indeterminacy of the covariate effects and latent factors. The second issue is related to the identifiability of U and Γ. For any (Γ̃, Ũ, B̃) and any invertible matrix G, there exist Γ̄ = Γ̃(G⊤)^{−1}, Ū = ŨG, and B̄ = B̃ such that P(Y | Γ̃, Ũ, B̃, X) = P(Y | Γ̄, Ū, B̄, X). 
This causes the latent factors and factor loadings to be unidentifiable. Remark 3. Intuitively, the unidentifiable B̃ = B − ΓA can be interpreted as including both direct and indirect effects of X on the response Y. We take the intercept and covariate effect on the first item (β̃_1) as an example and illustrate it in Figure 2. One part of β̃_1 is the direct effect of X on Y (see the orange line in the left panel), whereas another part of β̃_1 may be explained through the latent factors U, as the latent factors are unobserved and there are potential correlations between the latent factors and the observed covariates. The latter part of β̃_1 can be considered as the indirect effect (see the blue line in the right panel). Figure 2: The direct effects (orange solid line in the left panel) and the indirect effects (blue solid line in the right panel) for item 1. The first identifiability issue is a new challenge introduced by the covariate adjustment in the model, whereas the second issue is common in traditional factor models (Bai & Li 2012, Fan et al. 2013). Considering the two issues together, for any (Γ, U, B), A, and G, there exist transformations Γ̃ = Γ(G⊤)^{−1}, Ũ = (U + XA⊤)G, and B̃ = B − ΓA such that P(Y | Γ, U, B, X) = P(Y | Γ̃, Ũ, B̃, X). In the rest of this subsection, we propose identifiability conditions to address these issues. For notational convenience, throughout the rest of the paper, we define ϕ∗ = (Γ∗, U∗, B∗) as the true parameters. 
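The first invariance is easy to verify numerically: the linear predictors w_ij, and hence the likelihood, are unchanged under Ũ = U + XA⊤ and B̃ = B − ΓA (a quick sanity check with arbitrary random matrices, taking G as the identity):

```python
import numpy as np

rng = np.random.default_rng(2)
n, q, K, p = 100, 15, 2, 3
U = rng.normal(size=(n, K))          # latent factors
X = rng.normal(size=(n, p))          # covariates
Gamma = rng.normal(size=(q, K))      # factor loadings
B = rng.normal(size=(q, p))          # intercepts and covariate effects
A = rng.normal(size=(K, p))          # an arbitrary transformation matrix

W = U @ Gamma.T + X @ B.T            # original linear predictors w_ij
U_t = U + X @ A.T                    # U-tilde = U + X A'
B_t = B - Gamma @ A                  # B-tilde = B - Gamma A
W_t = U_t @ Gamma.T + X @ B_t.T      # transformed predictors

# the X A' Gamma' terms cancel exactly, so the likelihood is unchanged
assert np.allclose(W, W_t)
```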
Identifiability Conditions As described earlier, the correlation between the design matrix of covariates X and the latent factors U∗ results in the identifiability issue of B∗. In the psychometrics literature, the intercept β∗_{j0} is commonly referred to as the difficulty parameter, while β∗_{jc} represents the effects of observed covariates, namely DIF effects, on the response to item j (Reckase 2009, Holland & Wainer 2012). The different scientific interpretations motivate us to develop different identifiability conditions for β∗_{j0} and β∗_{jc}, respectively. Specifically, we propose a centering condition on U∗ to ensure the identifiability of the intercept β∗_{j0} for all items j ∈ [q]. On the other hand, to identify the covariate effects β∗_{jc}, a natural idea is to impose the covariate effects β∗_{jc} for all items j ∈ [q] to be sparse, as shown in many regularized methods and item purification methods (Candell & Drasgow 1988, Fidalgo et al. 2000, Bauer et al. 2020, Belzak & Bauer 2020). In Chen et al. (2023a), an interpretable identifiability condition is proposed for selecting sparse covariate effects, yet this condition is specific to uni-dimensional covariates. Motivated by Chen et al. (2023a), we propose the following minimal ℓ1 condition applicable to general cases where the covariates are multi-dimensional. To better present the identifiability conditions, we write A = (a_0, a_1, . . . , a_{p∗}) ∈ R^{K×p} and define A_c = (a_1, . . . , a_{p∗}) ∈ R^{K×p∗} as the part applied to the covariate effects. Condition 1. (i) ∑_{i=1}^{n} U∗_i = 0_K. (ii) ∑_{j=1}^{q} ∥β∗_{jc}∥_1 < ∑_{j=1}^{q} ∥β∗_{jc} − A_c⊤ γ∗_j∥_1 for any A_c ≠ 0. 
Condition 1(i) assumes the latent abilities U∗ are centered to ensure the identifiability of the intercepts β∗_{j0}, which is commonly assumed in the item response theory literature (Reckase 2009). Condition 1(ii) is motivated by practical applications. For instance, in educational testing, practitioners need to identify and remove biased test items, that is, items with non-zero covariate effects (β∗_{js} ≠ 0). In practice, most of the designed items are unbiased, and therefore, it is reasonable to assume that the majority of items have no covariate effects, that is, the covariate effects β∗_{jc} are sparse (Holland & Wainer 2012, Chen et al. 2023a). Next, we present a sufficient and necessary condition for Condition 1(ii) to hold. Proposition 1. Condition 1(ii) holds if and only if for any v ∈ R^K \ {0_K}, ∑_{j=1}^{q} |v⊤γ∗_j| I(β∗_{js} = 0) > ∑_{j=1}^{q} sign(β∗_{js}) v⊤γ∗_j I(β∗_{js} ≠ 0), ∀ s ∈ [p∗]. (3) Remark 4. Proposition 1 implies that Condition 1(ii) holds when {j : β∗_{js} ≠ 0} is separated into {j : β∗_{js} > 0} and {j : β∗_{js} < 0} in a balanced way. With diversified signs of β∗_{js}, Proposition 1 holds when a considerable proportion of test items have no covariate effect (β∗_{js} = 0). For example, when γ∗_j = m 1_k^{(K)} with m > 0, Condition 1(ii) holds if and only if ∑_{j=1}^{q} |m|{−I(β∗_{js}/m > 0) + I(β∗_{js}/m ≤ 0)} > 0 and ∑_{j=1}^{q} |m|{−I(β∗_{js}/m ≥ 0) + I(β∗_{js}/m < 0)} < 0. With slightly more than q/2 items corresponding to β∗_{js} = 0, Condition 1(ii) holds. Moreover, if #{j : β∗_{js} > 0} and #{j : β∗_{js} < 0} are comparable, then Condition 1(ii) holds even when fewer than q/2 items correspond to β∗_{js} = 0 and more than q/2 items correspond to β∗_{js} ≠ 0. 
Though assuming a “sparse” structure, our assumption here differs from the existing high-dimensional literature. In high-dimensional regression models, the coefficient vector obtained from regressing the dependent variable on high-dimensional covariates is often assumed to be sparse, with the proportion of non-zero coefficients asymptotically approaching zero. In our setting, Condition 1(ii) allows for relatively dense settings where the proportion of items with non-zero covariate effects is some positive constant. To perform simultaneous estimation and inference on Γ∗ and U∗, we consider the following identifiability conditions to address the second identifiability issue. Condition 2. (i) (U∗)⊤U∗ is diagonal. (ii) (Γ∗)⊤Γ∗ is diagonal. (iii) n^{−1}(U∗)⊤U∗ = q^{−1}(Γ∗)⊤Γ∗. Condition 2 is a set of widely used identifiability conditions in the factor analysis literature (Bai 2003, Bai & Li 2012, Wang 2022). For practical and theoretical benefits, we impose Condition 2 to address the identifiability issue related to G. It is worth mentioning that this condition can be replaced by other identifiability conditions. For true parameters satisfying any identifiability condition, we can always find a transformation such that the transformed parameters satisfy our proposed Conditions 1–2, and the proposed estimation method and theoretical results in the subsequent sections still apply, up to such a transformation. 3.2 Joint Maximum Likelihood Estimation In this section, we introduce a joint-likelihood-based estimation method for estimating the covariate effects B, the latent factors U, and the factor loadings Γ simultaneously. Incorporating Conditions 1–2 into the estimation procedure, we obtain maximum joint-likelihood-based estimators for ϕ∗ = (Γ∗, U∗, B∗) that satisfy the proposed identifiability conditions. 
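Inequality (3) of Proposition 1 can be probed numerically by sampling directions v; a small sketch (the function is our own illustration: a sampled check can refute the condition but cannot certify it over all v ∈ R^K):

```python
import numpy as np

def prop1_holds_sampled(Gamma, Bc, n_dirs=2000, seed=0):
    """Probe inequality (3) over randomly sampled directions v in R^K.
    Gamma is q x K (true loadings), Bc is q x p* (true covariate effects).
    Returns False as soon as some sampled v violates (3) for some s."""
    rng = np.random.default_rng(seed)
    q, K = Gamma.shape
    for s in range(Bc.shape[1]):
        zero = Bc[:, s] == 0                      # items with beta*_js = 0
        sgn = np.sign(Bc[:, s])
        for _ in range(n_dirs):
            v = rng.normal(size=K)
            lhs = np.abs(Gamma[zero] @ v).sum()   # sum_j |v'gamma_j| over null items
            rhs = (sgn[~zero] * (Gamma[~zero] @ v)).sum()
            if lhs <= rhs:
                return False
    return True
```

For example, with γ_j ≡ 1 and all covariate effects zero the sampled check passes, whereas nine of ten items sharing the same positive effect makes it fail, matching the sign-balance intuition of Remark 4.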
With Condition 1, we address the identifiability issue related to the transformation matrix A. Specifically, for any parameters ϕ = (Γ, U, B), there exists a matrix A∗ = (a∗_0, A∗_c), with A∗_c = argmin_{A_c ∈ R^{K×p∗}} ∑_{j=1}^{q} ∥β_{jc} − A_c⊤ γ_j∥_1 and a∗_0 = −n^{−1} ∑_{i=1}^{n} (U_i + A∗_c X_i^c), such that the transformed matrices U∗ = U + X(A∗)⊤ and B∗ = B − ΓA∗ satisfy Condition 1. This transformation idea naturally leads to the following estimation methodology for B∗. To estimate B∗ and U∗ that satisfy Condition 1, we first obtain the maximum likelihood estimator ϕ̂ = (Γ̂, Û, B̂) by ϕ̂ = argmin_{ϕ ∈ Ω_ϕ} −L(Y | ϕ, X), (4) where the parameter space Ω_ϕ is given as Ω_ϕ = {ϕ : ∥ϕ∥_max ≤ C} for some large constant C. To solve (4), we employ an alternating minimization algorithm. Specifically, for steps t = 0, 1, . . ., we compute (Γ̂^{(t+1)}, B̂^{(t+1)}) = argmin_{Γ ∈ R^{q×K}, B ∈ R^{q×p}} −L(Y | Γ, Û^{(t)}, B, X) and Û^{(t+1)} = argmin_{U ∈ R^{n×K}} −L(Y | Γ̂^{(t+1)}, U, B̂^{(t+1)}, X), until the quantity max{∥Γ̂^{(t+1)} − Γ̂^{(t)}∥_F, ∥Û^{(t+1)} − Û^{(t)}∥_F, ∥B̂^{(t+1)} − B̂^{(t)}∥_F} falls below some pre-specified tolerance value for convergence. We then estimate A_c by minimizing the ℓ1 norm: Â_c = argmin_{A_c ∈ R^{K×p∗}} ∑_{j=1}^{q} ∥β̂_{jc} − A_c⊤ γ̂_j∥_1. (5) Next, we estimate â_0 = −n^{−1} ∑_{i=1}^{n} (Û_i + Â_c X_i^c) and let Â = (â_0, Â_c). Given the estimators Â, Γ̂, and B̂, we then construct B̂∗ = B̂ − Γ̂Â and Ũ = Û + XÂ⊤ such that Condition 1 holds. Recall that Condition 2 addresses the identifiability issue related to the invertible matrix G. 
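The ℓ1 step (5) separates across the p∗ covariates: each column of A_c solves a least-absolute-deviations regression of the estimated item effects on the estimated loadings. A minimal numpy sketch using iteratively reweighted least squares as a smooth surrogate for the ℓ1 objective (the solver choice is ours; any LAD or linear-programming solver would do):

```python
import numpy as np

def estimate_Ac(Bc_hat, Gamma_hat, n_iter=100, eps=1e-8):
    """Approximately solve (5): argmin_{A_c} sum_j ||beta_jc - A_c' gamma_j||_1.
    Bc_hat is q x p* (estimated covariate effects), Gamma_hat is q x K."""
    q, K = Gamma_hat.shape
    p_star = Bc_hat.shape[1]
    Ac = np.zeros((K, p_star))
    for s in range(p_star):                        # objective separates over s
        a = np.zeros(K)
        b = Bc_hat[:, s]
        for _ in range(n_iter):                    # IRLS for the LAD subproblem
            r = b - Gamma_hat @ a
            w = 1.0 / np.maximum(np.abs(r), eps)   # reweighting approximates |r|
            Gw = Gamma_hat * w[:, None]
            a = np.linalg.solve(Gamma_hat.T @ Gw, Gw.T @ b)
        Ac[:, s] = a
    return Ac
```

With Â_c in hand, â_0 centers the factors and B̂∗ = B̂ − Γ̂Â completes the transformation toward Condition 1.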
Specifically, for any parameters $(\Gamma, U)$, there exists a matrix $G^*$ such that Condition 2 holds for $U^* = (U + X(A^*)^\top)G^*$ and $\Gamma^* = \Gamma (G^*)^{-\top}$. Let $\mathcal{U} = \mathrm{diag}(\varrho_1, \ldots, \varrho_K)$ be the diagonal matrix containing the $K$ eigenvalues of $(nq)^{-1}(\Gamma^\top\Gamma)^{1/2}(U + XA^\top)^\top(U + XA^\top)(\Gamma^\top\Gamma)^{1/2}$, and let $V$ be the matrix of corresponding eigenvectors. We set $G^* = (q^{-1}\Gamma^\top\Gamma)^{1/2} V \mathcal{U}^{-1/4}$. To further estimate $\Gamma^*$ and $U^*$, we need an estimator of the invertible matrix $G^*$. Given the maximum likelihood estimators obtained in (4) and $\hat A$ in (5), we estimate $G^*$ via $\hat G = (q^{-1}\hat\Gamma^\top\hat\Gamma)^{1/2} \hat V \hat{\mathcal{U}}^{-1/4}$, where $\hat{\mathcal{U}}$ and $\hat V$ contain the eigenvalues and eigenvectors of $(nq)^{-1}(\hat\Gamma^\top\hat\Gamma)^{1/2}(\hat U + X\hat A^\top)^\top(\hat U + X\hat A^\top)(\hat\Gamma^\top\hat\Gamma)^{1/2}$, respectively. With $\hat G$ and $\hat A$, we now obtain the following transformed estimators that satisfy Condition 2: $\hat\Gamma^* = \hat\Gamma(\hat G^\top)^{-1}$ and $\hat U^* = (\hat U + X\hat A^\top)\hat G$.

To quantify the uncertainty of the proposed estimators, we will show that they are asymptotically normally distributed. Specifically, in Theorem 2 of Section 4, we establish the asymptotic normality of $\hat\beta_j^*$, which allows us to make inference on the covariate effects $\beta_j^*$. Moreover, as the latent factors $U_i^*$ and factor loadings $\gamma_j^*$ often have important interpretations in the domain sciences, we are also interested in inference on $U_i^*$ and $\gamma_j^*$. In Theorem 2, we also derive the asymptotic distributions of $\hat U_i^*$ and $\hat\gamma_j^*$, providing inference results for these parameters.
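The eigendecomposition construction of $G^*$ above can be sketched numerically as follows. This is our own illustrative helper, not the authors' code; the argument `U` stands for the already-adjusted factor matrix $U + X(A^*)^\top$.

```python
import numpy as np

def condition2_transform(Gamma, U):
    """Rotate (Gamma, U) so that n^{-1} U'U = q^{-1} Gamma'Gamma is diagonal
    (Condition 2), via G = (q^{-1} Gamma'Gamma)^{1/2} V diag(evals)^{-1/4}."""
    n, K = U.shape
    q = Gamma.shape[0]
    # symmetric square root of q^{-1} Gamma' Gamma
    w, P = np.linalg.eigh(Gamma.T @ Gamma / q)
    Sg_half = P @ np.diag(np.sqrt(w)) @ P.T
    # eigendecomposition of (nq)^{-1} (Gamma'Gamma)^{1/2} U'U (Gamma'Gamma)^{1/2}
    M = Sg_half @ (U.T @ U / n) @ Sg_half
    evals, V = np.linalg.eigh(M)
    order = np.argsort(evals)[::-1]        # descending eigenvalues
    evals, V = evals[order], V[:, order]
    G = Sg_half @ V @ np.diag(evals ** -0.25)
    U_star = U @ G                          # transformed factors
    Gamma_star = Gamma @ np.linalg.inv(G).T # transformed loadings
    return Gamma_star, U_star, G
```

A short calculation shows both $n^{-1}(U^*)^\top U^*$ and $q^{-1}(\Gamma^*)^\top\Gamma^*$ equal the same diagonal matrix of square-rooted eigenvalues, which is exactly Condition 2.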
4 Theoretical Results

We propose a novel framework to establish the estimation consistency and asymptotic normality of the proposed joint-likelihood-based estimators $\hat\phi^* = (\hat\Gamma^*, \hat U^*, \hat B^*)$ in Section 3. To establish the theoretical results for $\hat\phi^*$, we impose the following regularity assumptions.

Assumption 1. There exist constants $M > 0$ and $\kappa > 0$ such that:
(i) $\Sigma_u^* = \lim_{n\to\infty} n^{-1}(U^*)^\top U^*$ exists and is positive definite. For $i \in [n]$, $\|U_i^*\|_2 \le M$.
(ii) $\Sigma_\gamma^* = \lim_{q\to\infty} q^{-1}(\Gamma^*)^\top \Gamma^*$ exists and is positive definite. For $j \in [q]$, $\|\gamma_j^*\|_2 \le M$.
(iii) $\Sigma_x = \lim_{n\to\infty} n^{-1}\sum_{i=1}^n X_i X_i^\top$ exists and $1/\kappa^2 \le \lambda_{\min}(\Sigma_x) \le \lambda_{\max}(\Sigma_x) \le \kappa^2$. For $i \in [n]$, $\max_i \|X_i\|_\infty \le M$.
(iv) $\Sigma_{ux}^* = \lim_{n\to\infty} n^{-1}\sum_{i=1}^n U_i^* X_i^\top$ exists and $\|\Sigma_{ux}^*\Sigma_x^{-1}\|_{1,\infty} \le M$. The eigenvalues of $(\Sigma_u^* - \Sigma_{ux}^*\Sigma_x^{-1}(\Sigma_{ux}^*)^\top)\Sigma_\gamma^*$ are distinct.

Assumption 1 is commonly used in the factor analysis literature. In particular, Assumptions 1(i)–(ii) correspond to Assumptions A–B in Bai (2003) under linear factor models, ensuring the compactness of the parameter space for $U^*$ and $\Gamma^*$. Under nonlinear factor models, such compactness conditions are also commonly assumed (Wang 2022, Chen et al. 2023b). Assumption 1(iii) is a standard regularity condition for the nonlinear setting, needed to establish the concentration of the gradient and the estimation error of the model parameters when $p$ diverges.
In addition, Assumption 1(iv) is a crucial identification condition; similar conditions have been imposed in the existing literature, such as Assumption G in Bai (2003) in the context of linear factor models and Assumption 6 in Wang (2022) in the context of nonlinear factor models without covariates.

Assumption 2. For any $i \in [n]$ and $j \in [q]$, assume that $l_{ij}(\cdot)$ is three times differentiable, and denote the first, second, and third order derivatives of $l_{ij}(w_{ij})$ with respect to $w_{ij}$ by $l'_{ij}(w_{ij})$, $l''_{ij}(w_{ij})$, and $l'''_{ij}(w_{ij})$, respectively. There exist $M > 0$ and $\xi \ge 4$ such that $E(|l'_{ij}(w_{ij})|^\xi) \le M$ and $|l'_{ij}(w_{ij})|$ is sub-exponential with $\|l'_{ij}(w_{ij})\|_{\psi_1} \le M$. Furthermore, we assume $E\{l'_{ij}(w^*_{ij})\} = 0$. Within a compact space of $w_{ij}$, we have $b_L \le -l''_{ij}(w_{ij}) \le b_U$ and $|l'''_{ij}(w_{ij})| \le b_U$ for constants $b_U > b_L > 0$.

Assumption 2 imposes smoothness on the log-likelihood function $l_{ij}(w_{ij})$. In particular, it assumes sub-exponential tails and finite fourth moments for the first-order derivative $l'_{ij}(w_{ij})$. For commonly used linear or nonlinear factor models, the assumption is not restrictive and can be satisfied with a large $\xi$. For instance, for the logistic model with $l'_{ij}(w_{ij}) = Y_{ij} - \exp(w_{ij})/\{1 + \exp(w_{ij})\}$, we have $|l'_{ij}(w_{ij})| \le 1$ and $\xi$ can be taken as $\infty$. The boundedness conditions on $l''_{ij}(w_{ij})$ and $l'''_{ij}(w_{ij})$ are necessary to guarantee the convexity of the joint likelihood function. In the special case of linear factor models, $l''_{ij}(w_{ij})$ is a constant and the boundedness conditions naturally hold. For popular nonlinear models such as logistic factor models, probit factor models, and Poisson factor models, the boundedness of $l''_{ij}(w_{ij})$ and $l'''_{ij}(w_{ij})$ can also be easily verified.

Assumption 3.
For $\xi$ specified in Assumption 2 and a sufficiently small $\epsilon > 0$, we assume as $n, q, p \to \infty$,
$$\frac{p}{\sqrt{n \wedge (pq)}}\,(nq)^{\epsilon + 3/\xi} \to 0. \quad (6)$$
Assumption 3 is needed to ensure that the derivative of the likelihood function equals zero at the maximum likelihood estimator with high probability, a key property in the theoretical analysis. In particular, we need the estimation errors of all model parameters to converge to 0 uniformly with high probability. Such uniform convergence results involve a delicate analysis of the convexity of the objective function, for which we technically need Assumption 3. For most popularly used generalized factor models, $\xi$ can be taken as any large value as discussed above, so $(nq)^{\epsilon + 3/\xi}$ is of smaller order than $\sqrt{n \wedge (pq)}$ for small $\epsilon$. Specifically, Assumption 3 implies $p = o(n^{1/2} \wedge q)$ up to a small order term, an asymptotic regime that is reasonable for many educational assessments.

Next, we impose additional assumptions crucial to establishing the theoretical properties of the proposed estimators. One challenge in the theoretical analysis is handling the dependence between the latent factors $U^*$ and the design matrix $X$. To address this challenge, we employ the following transformed $U^0$ that is orthogonal to $X$, which plays an important role in establishing the theoretical results (see the Supplementary Materials for details). In particular, for $i \in [n]$, we let $U_i^0 = (G^\ddagger)^\top(U_i^* - A^\ddagger X_i)$. Here $G^\ddagger = (q^{-1}(\Gamma^*)^\top\Gamma^*)^{1/2} V^* (\mathcal{U}^*)^{-1/4}$ and $A^\ddagger = (U^*)^\top X(X^\top X)^{-1}$, where $\mathcal{U}^* = \mathrm{diag}(\varrho_1^*, \ldots, \varrho_K^*)$ with diagonal elements being the $K$ eigenvalues of $(nq)^{-1}((\Gamma^*)^\top\Gamma^*)^{1/2}(U^*)^\top(I_n - P_x)U^*((\Gamma^*)^\top\Gamma^*)^{1/2}$ with $P_x = X(X^\top X)^{-1}X^\top$, and $V^*$ is the matrix of corresponding eigenvectors.
Under this transformation for $U_i^0$, we further define $\gamma_j^0 = (G^\ddagger)^{-1}\gamma_j^*$ and $\beta_j^0 = \beta_j^* + (A^\ddagger)^\top\gamma_j^*$ for $j \in [q]$, and write $Z_i^0 = ((U_i^0)^\top, X_i^\top)^\top$ and $w_{ij}^0 = (\gamma_j^0)^\top U_i^0 + (\beta_j^0)^\top X_i$. These transformed parameters $\gamma_j^0$, $U_i^0$, and $\beta_j^0$ give the same joint likelihood value as the true parameters $\gamma_j^*$, $U_i^*$, and $\beta_j^*$, which facilitates our theoretical understanding of the joint-likelihood-based estimators.

Assumption 4. (i) For any $j \in [q]$, $-n^{-1}\sum_{i=1}^n l''_{ij}(w_{ij}^0) Z_i^0 (Z_i^0)^\top \overset{p}{\to} \Psi_{jz}^0$ for some positive definite matrix $\Psi_{jz}^0$, and $n^{-1/2}\sum_{i=1}^n l'_{ij}(w_{ij}^0) Z_i^0 \overset{d}{\to} N(0, \Omega_{jz}^0)$. (ii) For any $i \in [n]$, $-q^{-1}\sum_{j=1}^q l''_{ij}(w_{ij}^0) \gamma_j^0 (\gamma_j^0)^\top \overset{p}{\to} \Psi_{i\gamma}^0$ for some positive definite matrix $\Psi_{i\gamma}^0$, and $q^{-1/2}\sum_{j=1}^q l'_{ij}(w_{ij}^0) \gamma_j^0 \overset{d}{\to} N(0, \Omega_{i\gamma}^0)$.

Assumption 4 generalizes Assumptions F(3)–(4) in Bai (2003) from linear models to the nonlinear setting. Specifically, we need Assumption 4(i) to derive the asymptotic distributions of the estimators $\hat\beta_j^*$ and $\hat\gamma_j^*$, and Assumption 4(ii) to establish the asymptotic distribution of $\hat U_i^*$. Note that these assumptions are imposed on the log-likelihood derivatives evaluated at the true transformed parameters $w_{ij}^0$, $Z_i^0$, and $\gamma_j^0$. In general, for popular generalized factor models, such assumptions hold under mild conditions. For example, under linear models, $l'_{ij}(w_{ij})$ is the random error and $l''_{ij}(w_{ij})$ is a constant; then $\Psi_{jz}^0$ and $\Psi_{i\gamma}^0$ naturally exist and are positive definite by Assumption 1.
The limiting distributions of $n^{-1/2}\sum_{i=1}^n l'_{ij}(w_{ij}^0) Z_i^0$ and $q^{-1/2}\sum_{j=1}^q l'_{ij}(w_{ij}^0)\gamma_j^0$ can be derived from the central limit theorem under standard regularity conditions. Under logistic and probit models, $l'_{ij}(w_{ij})$ and $l''_{ij}(w_{ij})$ are both bounded inside a compact parameter space, and similar arguments can be applied to verify Assumption 4.

We present the following assumption to establish the theoretical properties of the transformed matrix $\hat A$ as defined in (5). In particular, we define $A^0 = (G^\ddagger)^\top A^\ddagger$ and write $A^0 = (a_0^0, \ldots, a_{p^*}^0)^\top$. Note that the estimation problem in (5) is related to median regression with measurement errors. To understand the properties of this estimator, following the existing M-estimation literature (He & Shao 1996, 2000), we define $\psi_{js}^0(a) = \gamma_j^0 \,\mathrm{sign}\{\beta_{js}^0 + (\gamma_j^0)^\top(a - a_s^0)\}$ and $\chi_s(a) = \sum_{j=1}^q \psi_{js}^0(a)$ for $j \in [q]$ and $s \in [p^*]$. We further define a perturbed version of $\psi_{js}^0(a)$, denoted $\psi_{js}(a, \delta_{js})$, as follows:
$$\psi_{js}(a, \delta_{js}) = \Big(\gamma_j^0 + \big[\tfrac{\delta_{js}}{\sqrt{n}}\big]_{[1:K]}\Big)\, \mathrm{sign}\Big\{\beta_{js}^0 + \big[\tfrac{\delta_{js}}{\sqrt{n}}\big]_{K+1} - \big(\gamma_j^0 + \big[\tfrac{\delta_{js}}{\sqrt{n}}\big]_{[1:K]}\big)^\top (a - a_s^0)\Big\}, \quad s \in [p^*],$$
where the perturbation
$$\delta_{js} = \begin{pmatrix} I_K & 0 \\ 0 & (1_s^{(p)})^\top \end{pmatrix} \Big( -\sum_{i=1}^n l''_{ij}(w_{ij}^0) Z_i^0 (Z_i^0)^\top \Big)^{-1} \Big( \sqrt{n} \sum_{i=1}^n l'_{ij}(w_{ij}^0) Z_i^0 \Big)$$
is asymptotically normally distributed by Assumption 4. We define $\hat\chi_s(a) = \sum_{j=1}^q E\,\psi_{js}(a, \delta_{js})$.

Assumption 5. For $\chi_s(a)$, we assume that there exists a constant $c > 0$ such that $\min_{a \ne 0} |q^{-1}\chi_s(a)| > c$ holds for all $s \in [p^*]$. Assume there exists $a_{s0}$ for each $s \in [p^*]$ such that $\hat\chi_s(a_{s0}) = 0$ with $p\sqrt{n}\|a_{s0}\| \to 0$.
In a neighbourhood of $a_{s0}$, $\hat\chi_s(a)$ has a nonsingular derivative such that $\{q^{-1}\nabla_a \hat\chi_s(a_{s0})\}^{-1} = O(1)$ and $q^{-1}|\nabla_a \hat\chi_s(a) - \nabla_a \hat\chi_s(a_{s0})| \le k|a - a_{s0}|$. We assume $\iota_{nq,p} := \max\big\{\|a_{s0}\|, \, q^{-1}\sum_{j=1}^q \psi_{js}(a_{s0}, \delta_{js})\big\} = o\big((p\sqrt{n})^{-1}\big)$.

Assumption 5 is crucial in addressing the theoretical difficulties of establishing consistent estimation of $A^0$, a challenging problem related to median regression with weakly dependent measurement errors. In Assumption 5, we treat the minimizer of $|\sum_{j=1}^q \psi_{js}(a, \delta_{js})|$ as an M-estimator and adopt the Bahadur representation results in He & Shao (1996) for the theoretical analysis. For an ideal case where the $\delta_{js}$ are independent and normally distributed with finite variances, which corresponds to the setting of median regression with measurement errors (He & Liang 2000), these assumptions can be easily verified. Assumption 5 goes beyond such an ideal case and covers general settings. In addition to independent Gaussian measurement errors, this condition also accommodates the case where the $\delta_{js}$ are asymptotically normal and weakly dependent with finite variances, as implied by Assumption 4 and the conditional independence of the $Y_{ij}$. We want to emphasize that Assumption 5 allows for both sparse and dense settings of the covariate effects. Consider an example with $K = p = 1$ and $\gamma_j = 1$ for $j \in [q]$. Suppose $\beta_{js}^*$ is zero for all $j \in [q_1]$ and nonzero otherwise. Then this condition is satisfied as long as $\#\{j : \beta_{js}^* > 0\}$ and $\#\{j : \beta_{js}^* < 0\}$ are comparable, even when the sparsity level $q_1$ is small.

Under the proposed assumptions, we next present our main theoretical results.

Theorem 1 (Average Consistency). Suppose the true parameters $\phi^* = (\Gamma^*, U^*, B^*)$ satisfy identifiability Conditions 1–2.
Under Assumptions 1–5, we have
$$q^{-1}\|\hat B^* - B^*\|_F^2 = O_p\Big(\frac{p^2 \log (qp)}{n} + \frac{p \log n}{q}\Big); \quad (7)$$
if we further assume $p^{3/2}(nq)^{\epsilon+3/\xi}(p^{1/2}n^{-1/2} + q^{-1/2}) = o(1)$, then we have
$$n^{-1}\|\hat U^* - U^*\|_F^2 = O_p\Big(\frac{p \log (qp)}{n} + \frac{\log n}{q}\Big); \quad (8)$$
$$q^{-1}\|\hat\Gamma^* - \Gamma^*\|_F^2 = O_p\Big(\frac{p \log (qp)}{n} + \frac{\log n}{q}\Big). \quad (9)$$
Theorem 1 presents the average convergence rates of $\hat\phi^*$. Consider an oracle case with $U^*$ and $\Gamma^*$ known; the estimation of $B^*$ then reduces to an M-estimation problem. For M-estimators under general parametric models, it can be shown that the optimal convergence rate in squared $\ell_2$-norm is $O_p(p/n)$ under $p(\log p)^3/n \to 0$ (He & Shao 2000). In terms of our average convergence rate for $\hat B^*$, the first term in (7), $n^{-1}p^2\log(qp)$, approximately matches the rate $O_p(p/n)$ up to a relatively small order term of $p\log(qp)$. The second term in (7), $q^{-1}p\log n$, is mainly due to the estimation error of the latent factors $U^*$. In educational applications, it is common to assume that the number of subjects $n$ is much larger than the number of items $q$. Under such a practical setting with $n \gg q$ and $p$ relatively small, the term $q^{-1}\log n$ dominates the derived convergence rate of $\hat U^*$ in (8), which matches the optimal convergence rate $O_p(q^{-1})$ for factor models without covariates (Bai & Li 2012, Wang 2022) up to a small order term.

Remark 5. The additional condition $p^{3/2}(nq)^{\epsilon+3/\xi}(p^{1/2}n^{-1/2} + q^{-1/2}) = o(1)$ in Theorem 1 is used to handle the challenges related to the invertible matrix $G$, which affects the theoretical properties of $\hat U^*$ and $\hat\Gamma^*$. It is needed for establishing the estimation consistency of $\hat U^*$ and $\hat\Gamma^*$, but not for that of $\hat B^*$.
With sufficiently large $\xi$ and small $\epsilon$, this assumption is approximately $p = o(n^{1/4} \wedge q^{1/3})$ up to a small order term.

Remark 6. One challenge in establishing the estimation consistency of $\hat\phi^*$ arises from the unrestricted dependence structure between $U^*$ and $X$. If we consider the ideal case where the columns of $U^*$ and $X$ are orthogonal, i.e., $(U^*)^\top X = 0_{K\times p}$, then we can achieve comparable or superior convergence rates under less stringent assumptions. Specifically, with Assumptions 1–3 only, we can obtain the same convergence rates for $\hat U^*$ and $\hat\Gamma^*$ as in (8) and (9), respectively. Moreover, with Assumptions 1–3, the average convergence rate of the consistent estimator of $B^*$ is $O_p(n^{-1}p\log (qp) + q^{-1}\log n)$, which is tighter than (7) by a factor of $p$.

With the estimation consistency results established, we next derive the asymptotic normal distributions of the estimators, which enable us to perform statistical inference on the true parameters.

Theorem 2 (Asymptotic Normality). Suppose the true parameters $\phi^* = (\Gamma^*, U^*, B^*)$ satisfy identifiability Conditions 1–2. Under Assumptions 1–5, we have the following asymptotic distributions. Denote $\zeta_{nq,p}^{-2} = n^{-1}p\log (qp) + q^{-1}\log n$. If $p^{3/2}\sqrt{n}\,(nq)^{3/\xi}\zeta_{nq,p}^{-2} \to 0$, then for any $j \in [q]$ and $a \in \mathbb{R}^p$ with $\|a\|_2 = 1$,
$$\sqrt{n}\, a^\top (\Sigma_{\beta,j}^*)^{-1/2}(\hat\beta_j^* - \beta_j^*) \overset{d}{\to} N(0, 1), \quad (10)$$
where $\Sigma_{\beta,j}^* = (-(A^0)^\top, I_p)(\Psi_{jz}^0)^{-1}\Omega_{jz}^0(\Psi_{jz}^0)^{-1}(-(A^0)^\top, I_p)^\top$; and for any $j \in [q]$,
$$\sqrt{n}\,(\Sigma_{\gamma,j}^*)^{-1/2}(\hat\gamma_j^* - \gamma_j^*) \overset{d}{\to} N(0, I_K), \quad (11)$$
where $\Sigma_{\gamma,j}^* = G^\ddagger(I_K, 0)(\Psi_{jz}^0)^{-1}\Omega_{jz}^0(\Psi_{jz}^0)^{-1}(I_K, 0)^\top(G^\ddagger)^\top$.
Furthermore, for any $i \in [n]$, if $q = O(n)$ and $p^{3/2}\sqrt{q}\,(nq)^{3/\xi}\zeta_{nq,p}^{-2} \to 0$,
$$\sqrt{q}\,(\Sigma_{u,i}^*)^{-1/2}(\hat U_i^* - U_i^*) \overset{d}{\to} N(0, I_K), \quad (12)$$
where $\Sigma_{u,i}^* = (G^\ddagger)^{-\top}(\Psi_{i\gamma}^0)^{-1}\Omega_{i\gamma}^0(\Psi_{i\gamma}^0)^{-1}(G^\ddagger)^{-1}$.

The asymptotic covariance matrices in Theorem 2 can be consistently estimated. Due to space limitations, we defer the construction of the consistent estimators $\hat\Sigma_{\beta,j}^*$, $\hat\Sigma_{\gamma,j}^*$, and $\hat\Sigma_{u,i}^*$ to the Supplementary Materials. Theorem 2 provides the asymptotic distributions for all individual estimators. In particular, with the asymptotic distributions and the consistent estimators $\hat\Sigma_{\beta,j}^*$ of the asymptotic covariance matrices, we can perform hypothesis testing on $\beta_{js}^*$ for $j \in [q]$ and $s \in [p^*]$. We reject the null hypothesis $\beta_{js}^* = 0$ at significance level $\alpha$ if $|\sqrt{n}(\hat\sigma_{\beta,js}^*)^{-1}\hat\beta_{js}^*| > \Phi^{-1}(1 - \alpha/2)$, where $(\hat\sigma_{\beta,js}^*)^2$ is the $(s+1)$-th diagonal entry of $\hat\Sigma_{\beta,j}^*$.

For the asymptotic normality of $\hat\beta_j^*$, the condition $p^{3/2}\sqrt{n}\,(nq)^{3/\xi}(n^{-1}p\log (qp) + q^{-1}\log n) \to 0$, together with Assumption 3, gives $p = o\{n^{1/5} \wedge (q^2/n)^{1/3}\}$ up to a small order term, which further implies $n \ll q^2$; this is consistent with established conditions in the existing factor analysis literature (Bai & Li 2012, Wang 2022). For the asymptotic normality of $\hat U_i^*$, the additional condition $q = O(n)$ is reasonable in educational applications, where the number of items $q$ is much smaller than the number of subjects $n$. In this case, the scaling conditions imply $p = o\{q^{1/3} \wedge (n^2/q)^{1/5}\}$ up to a small order term. Similarly, for the asymptotic normality of $\hat\gamma_j^*$, the proposed conditions give $p = o\{n^{1/5} \wedge (q^2/n)^{1/3}\}$ up to a small order term.
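The rejection rule above is a standard two-sided Wald test. A minimal sketch (our own helper names; the plug-in standard error $\hat\sigma^*_{\beta,js}$ is assumed to be available from the Supplementary Materials construction):

```python
import numpy as np
from scipy.stats import norm

def wald_test_beta(beta_hat, sigma_hat, n, alpha=0.05):
    """Two-sided Wald test of H0: beta*_js = 0 based on Theorem 2:
    reject when |sqrt(n) * beta_hat / sigma_hat| exceeds the
    (1 - alpha/2) standard normal quantile."""
    z = np.sqrt(n) * beta_hat / sigma_hat
    p_value = 2.0 * norm.sf(abs(z))              # two-sided p-value
    reject = abs(z) > norm.ppf(1.0 - alpha / 2.0)
    return reject, p_value
```

For example, with $n = 100$, $\hat\beta^*_{js} = 0.5$, and $\hat\sigma^*_{\beta,js} = 1$, the standardized statistic is $\sqrt{100}\times 0.5 = 5$, well beyond the 1.96 cutoff at $\alpha = 0.05$.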
Remark 7. Similar to the discussion in Remark 6, the challenges arising from the unrestricted dependence between $U^*$ and $X$ also affect the derivation of the asymptotic distributions of the proposed estimators. If we consider the ideal case with $(U^*)^\top X = 0_{K\times p}$, we can establish the asymptotic normality of all individual estimators under Assumptions 1–4 only and weaker scaling conditions. Specifically, when $(U^*)^\top X = 0_{K\times p}$, the scaling condition becomes $p\sqrt{n}\,(nq)^{3/\xi}(n^{-1}p\log (qp) + q^{-1}\log n) \to 0$ for deriving the asymptotic normality of $\hat\beta_j^*$ and $\hat\gamma_j^*$, which is milder than that required for (10) and (11).

5 Simulation Study

In this section, we study the finite-sample performance of the proposed joint-likelihood-based estimator. We focus on the logistic latent factor model in (1) with $p_{ij}(y \mid w_{ij}) = \exp(w_{ij}y)/\{1 + \exp(w_{ij})\}$, where $w_{ij} = (\gamma_j^*)^\top U_i^* + (\beta_j^*)^\top X_i$. The logistic latent factor model is commonly used in the context of educational assessment and is also referred to as the item response theory model (Mellenbergh 1994, Hambleton & Swaminathan 2013). We apply the proposed method to estimate $B^*$ and to perform statistical inference on the null hypothesis $\beta_{js}^* = 0$.

We start by presenting the data generating process. We set the number of subjects $n \in \{300, 500, 1000, 1500, 2000\}$, the number of items $q \in \{100, 300, 500\}$, the covariate dimension $p \in \{5, 10, 30\}$, and the factor dimension $K = 2$. We jointly generate $X_i^c$ and $U_i^*$ from $N(0, \Sigma)$, where $\Sigma_{ij} = \tau^{|i-j|}$ with $\tau \in \{0, 0.2, 0.5, 0.7\}$. In addition, we set the loading matrix $\Gamma^*_{[,k]} = 1_k^{(K)} \otimes v_k$, where $\otimes$ is the Kronecker product and $v_k$ is a $(q/K)$-dimensional vector with each entry generated independently and identically from $\mathrm{Unif}[0.5, 1.5]$.
For the covariate effects $B^*$, we set the intercept terms $\beta_{j0}^* = 0$. For the remaining entries of $B^*$, we consider the following two settings: (1) sparse setting: $\beta_{js}^* = \rho$ for $s = 1, \ldots, p$ and $j = 5s-4, \ldots, 5s$, with all other $\beta_{js}^*$ set to zero; (2) dense setting: $\beta_{js}^* = \rho$ for $s = 1, \ldots, p$ and $j = R_s q/5 + 1, \ldots, (R_s + 1)q/5$ with $R_s = s - 5\lfloor s/5 \rfloor$, with all other $\beta_{js}^*$ set to zero. The signal strength is set to $\rho \in \{0.3, 0.5\}$. Intuitively, in the sparse setting 5 items are biased for each covariate, whereas in the dense setting 20% of the items are biased for each covariate. For better empirical stability, after reaching convergence in the proposed alternating minimization algorithm and transforming the obtained MLEs into estimators satisfying Conditions 1–2, we repeat another round of maximization and transformation. We take the significance level to be 5% and compute the average type I error over all entries with $\beta_{js}^* = 0$ and the average power over all non-zero entries, based on 100 replications. The average hypothesis testing results are presented in Figures 3–6 for $p = 5$ and $p = 30$ across different settings. Additional numerical results for $p = 10$ are presented in the Supplementary Materials.
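The data generating process above can be sketched as follows. This is our own illustrative code (function name and defaults are hypothetical), written with 0-based indices, so item block $j = 5s-4, \ldots, 5s$ becomes `B[5*s : 5*s+5, s]`.

```python
import numpy as np

def simulate_pisa_like(n=300, q=100, p=5, K=2, tau=0.5, rho=0.3,
                       dense=False, seed=0):
    """Simulation design sketch: (X_i, U_i) jointly normal with covariance
    tau^|i-j|, block loading matrix with Unif[0.5, 1.5] entries, and sparse
    (5 items per covariate) or dense (q/5 items per covariate) effects.
    Assumes q is divisible by K and by 5, and 5 * p <= q in the sparse case."""
    rng = np.random.default_rng(seed)
    d = p + K
    Sigma = tau ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
    Z = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    X, U = Z[:, :p], Z[:, p:]
    Gamma = np.zeros((q, K))
    block = q // K
    for k in range(K):                    # Gamma[, k] = e_k (x) v_k
        Gamma[k * block:(k + 1) * block, k] = rng.uniform(0.5, 1.5, size=block)
    B = np.zeros((q, p))
    for s in range(p):
        if dense:
            Rs = s % 5                    # R_s = s - 5 * floor(s / 5)
            B[Rs * q // 5:(Rs + 1) * q // 5, s] = rho
        else:
            B[5 * s:5 * s + 5, s] = rho
    W = U @ Gamma.T + X @ B.T             # logistic responses
    Y = (rng.random((n, q)) < 1.0 / (1.0 + np.exp(-W))).astype(float)
    return Y, X, U, Gamma, B
```

With `q = 100` and `p = 5`, each covariate has 5 nonzero effects in the sparse setting and 20 in the dense setting, matching the 20% proportion described above.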
[Figure panels omitted: power and type I error curves versus $n \in \{300, 500, 1000, 1500, 2000\}$ for $p = 5$, $q \in \{100, 300, 500\}$, and $\rho \in \{0.3, 0.5\}$.]

Figure 3: Powers and type I errors under the sparse setting at $p = 5$. Red circles denote the correlation parameter $\tau = 0$, green triangles $\tau = 0.2$, blue squares $\tau = 0.5$, and purple crosses $\tau = 0.7$.

[Figure panels omitted: power and type I error curves versus $n$ for $p = 30$, $q \in \{100, 300, 500\}$, and $\rho \in \{0.3, 0.5\}$.]

Figure 4: Powers and type I errors under the sparse setting at $p = 30$. Red circles denote the correlation parameter $\tau = 0$, green triangles $\tau = 0.2$, blue squares $\tau = 0.5$, and purple crosses $\tau = 0.7$.
[Figure panels omitted: power and type I error curves versus $n$ for $p = 5$, $q \in \{100, 300, 500\}$, and $\rho \in \{0.3, 0.5\}$.]

Figure 5: Powers and type I errors under the dense setting at $p = 5$. Red circles denote the correlation parameter $\tau = 0$, green triangles $\tau = 0.2$, blue squares $\tau = 0.5$, and purple crosses $\tau = 0.7$.

[Figure panels omitted: power and type I error curves versus $n$ for $p = 30$, $q \in \{100, 300, 500\}$, and $\rho \in \{0.3, 0.5\}$.]

Figure 6: Powers and type I errors under the dense setting at $p = 30$. Red circles denote the correlation parameter $\tau = 0$, green triangles $\tau = 0.2$, blue squares $\tau = 0.5$, and purple crosses $\tau = 0.7$.

From Figures 3–6, we observe that the type I errors are well controlled at the significance level 5%, which is consistent with the asymptotic properties of $\hat B^*$ in Theorem 2.
Moreover, the power increases to one as the sample size $n$ increases across all settings we consider. Comparing the left panels ($\rho = 0.3$) to the right panels ($\rho = 0.5$) in Figures 3–6, we see that the power increases with the signal strength $\rho$. Comparing the plots in Figures 3–4 to the corresponding plots in Figures 5–6, we see that the powers under the sparse setting (Figures 3–4) are generally higher than those under the dense setting (Figures 5–6). Nonetheless, the proposed method is generally stable under both sparse and dense settings. In addition, we observe similar results when we increase the covariate dimension $p$ from $p = 5$ (Figures 3 and 5) to $p = 30$ (Figures 4 and 6); we refer the reader to the Supplementary Materials for additional numerical results for $p = 10$. Moreover, we observe similar results when we increase the test length $q$ from $q = 100$ (top row) to $q = 500$ (bottom row) in Figures 3–6. In terms of the correlation between $X$ and $U^*$, we observe that while the power converges to one as the sample size increases, the power decreases as the correlation $\tau$ increases.

6 Data Application

We apply the proposed method to analyze data from the Programme for International Student Assessment (PISA) 2018. PISA is a worldwide testing program that compares the academic performance of 15-year-old students across many countries (OECD 2019). More than 600,000 students from 79 countries/economies, representing a population of 31 million 15-year-olds, participated in this program. PISA 2018 used a computer-based assessment mode, and the assessment lasted two hours for each student, with test items mainly evaluating students' proficiency in the mathematics, reading, and science domains. A total of 930 minutes of test items were used, and each student took a different combination of the test items.
In addition to the assessment questions, background questionnaires were administered to collect students' information. (The data can be downloaded from https://www.oecd.org/pisa/data/2018database/.)

In this study, we focus on the PISA 2018 data from Taipei. The observed responses are binary, indicating whether a student's response to a test item is correct, and we use the popular item response theory model with the logit link (i.e., the logistic latent factor model; Reckase 2009). Due to the block design of the large-scale assessment, each student was assigned only a subset of the test items; for the Taipei data, 86% of the response matrix is unobserved. Note that this missingness can be considered conditionally independent of the responses given the students' characteristics. The proposed method and inference results naturally accommodate such missing data and can be applied directly. Specifically, to accommodate the incomplete responses, we modify the joint log-likelihood function in (2) into $L_{\mathrm{obs}}(Y \mid \Gamma, U, B, X) = \sum_{i=1}^n \sum_{j \in Q_i} l_{ij}(\gamma_j^\top U_i + \beta_j^\top X_i)$, where $Q_i$ denotes the set of questions for which the responses of student $i$ are observed.

In this study, we include gender and 8 school-stratum variables as covariates ($p^* = 9$). These variables record whether the school is public, whether it is in an urban area, and so on. After data preprocessing, we have $n = 6063$ students and $q = 194$ questions. Following the existing literature (Reckase 2009, Millsap 2012), we take $K = 3$ to interpret the three latent abilities measured by the math, reading, and science questions. We apply the proposed method to estimate the effects of gender and school-stratum variables on students' responses. We obtain the estimates of the gender effect for each PISA question and construct the corresponding 95% confidence intervals, which are presented in Figure 7.
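The restriction of the joint log-likelihood to observed entries can be sketched as follows (our own helper, assuming the logistic model used here; `mask[i, j]` indicates $j \in Q_i$):

```python
import numpy as np

def masked_neg_loglik(Y, mask, U, Gamma, B, X):
    """Observed-data negative log-likelihood L_obs: sum the Bernoulli-logit
    terms only over entries (i, j) with mask[i, j] True (i.e., j in Q_i)."""
    W = U @ Gamma.T + X @ B.T            # linear predictor
    ll = Y * W - np.logaddexp(0.0, W)    # per-entry log-likelihood (always <= 0)
    return -np.sum(ll[mask])
```

Since each per-entry term is non-positive, masking out unobserved entries can only reduce the total negative log-likelihood; the alternating minimization of Section 3.2 applies unchanged with this objective.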
There are 10 questions highlighted in red, as their estimated gender effects are statistically significant after the Bonferroni correction. Among the reading items, there is only one significant item, and its confidence interval lies below zero, indicating that this question is biased towards female test-takers, conditional on the students' latent abilities. Most of the confidence intervals corresponding to the biased items in the math and science sections lie above zero, indicating that these questions are biased towards male test-takers. In social science research, it is documented that female students typically score better than male students on reading tests, while male students often outperform female students on math and science tests (Quinn & Cooc 2015, Balart & Oosterveen 2019). Our results indicate that there may exist potential measurement biases contributing to this observed gender gap in educational testing. The proposed method offers a useful tool to identify such biased test items, thereby contributing to testing fairness by providing practitioners with valuable information for item calibration.

[Figure panels omitted: confidence intervals for the gender effect on each PISA question, grouped by the math, reading, and science sections.]

Figure 7: Confidence intervals for the effect of the gender covariate on each PISA question using the Taipei data. Red intervals correspond to questions with a significant gender bias after the Bonferroni correction. (For illustration purposes, we omit confidence intervals with upper bounds exceeding 6 or lower bounds below -6.)

To further illustrate the estimation results, Table 1 lists the p-values for testing the gender effect for each of the 10 identified significant questions, along with the proportions of female and male test-takers who answered each question correctly.
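The Bonferroni screening used to flag the significant items can be sketched as follows (our own helper name):

```python
def bonferroni_significant(pvals, alpha=0.05):
    """Indices of hypotheses that remain significant after the Bonferroni
    correction: reject H0_j when p_j < alpha / m for m tests."""
    m = len(pvals)
    return [i for i, p in enumerate(pvals) if p < alpha / m]
```

With $q = 194$ gender-effect tests at $\alpha = 0.05$, the per-test threshold is $0.05/194 \approx 2.6 \times 10^{-4}$, which all 10 p-values reported in Table 1 satisfy.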
We can see that the signs of the estimated gender effect by our proposed method align with the disparities in the reported proportions between females and males. For example, the estimated gender effect corresponding to the item "CM496Q01S Cash Withdrawal" is positive with a p-value of 2.77 x 10^-7, implying that this question is statistically significantly biased towards male test-takers. This is consistent with the observation in Table 1 that 58.44% of male students correctly answered this question, which exceeds the proportion of females, 51.29%.

Table 1: Proportion of full credit in females and males for the significant items of PISA 2018 in Taipei. (+) and (-) denote the items with positively and negatively estimated gender effects, respectively.

Item code   Item Title                   Female (%)   Male (%)   p-value
Mathematics
CM496Q01S   Cash Withdrawal              51.29        58.44      2.77x10^-7 (+)
CM800Q01S   Computer Games               96.63        93.61      < 1x10^-8 (-)
Reading
CR466Q06S   Work Right                   91.91        86.02      1.95x10^-5 (-)
Science
CS608Q01S   Ammonoids                    57.68        68.15      4.65x10^-5 (+)
CS643Q01S   Comparing Light Bulbs        68.57        73.41      1.08x10^-5 (+)
CS643Q02S   Comparing Light Bulbs2       63.00        57.50      4.64x10^-4 (-)
CS657Q03S   Invasive Species             46.00        54.36      8.47x10^-5 (+)
CS527Q04S   Extinction of Dinosours3     36.19        50.18      8.13x10^-5 (+)
CS648Q02S   Habitable Zone               41.69        45.19      1.34x10^-4 (+)
CS607Q01S   Birds and Caterpillars       88.14        91.47      1.99x10^-4 (+)

Besides gender effects, we estimate the effects of school strata on the students' responses and present the point and interval estimation results in the left panel of Figure 8. All the detected biased questions are from the math and science sections, with 6 questions showing significant effects of whether the student attends a public school and 5 questions of whether the school is in a rural area.
To further investigate the importance of controlling for the latent ability factors, we compare results from our proposed method with the latent factors to the results from directly regressing responses on covariates without latent factors. From the right panel of Figure 8, we can see that without conditioning on the latent factors, there are excessive items detected for the covariate of whether the school is public or private. On the other hand, there are no biased items detected if we only apply generalized linear regression to estimate the effect of the covariate of whether the school is in a rural area.

Figure 8: Confidence intervals for the effect of the school stratum covariate on each PISA question (panels: Public, Public without latent variables, Rural, Rural without latent variables). Red intervals correspond to confidence intervals for questions with significant school stratum bias after Bonferroni correction.

7 Discussion

In this work, we study the covariate-adjusted generalized factor model that has wide interdisciplinary applications such as educational assessments and psychological measurements. In particular, new identifiability issues arise due to the incorporation of covariates in the model setup. To address these issues and identify the model parameters, we propose novel and interpretable conditions, which are crucial for developing the estimation approach and inference results. With model identifiability guaranteed, we propose a computationally efficient joint-likelihood-based estimation method for the model parameters.
Theoretically, we obtain the estimation consistency and asymptotic normality for not only the covariate effects but also the latent factors and factor loadings. There are several future directions motivated by the proposed method. In this manuscript, we focus on the case in which p grows at a slower rate than the number of subjects n and the number of items q, a common setting in educational assessments. It is interesting to further develop estimation and inference results under the high-dimensional setting in which p is larger than n and q. Moreover, in this manuscript, we assume that the dimension of the latent factors K is fixed and known. One possible generalization is to allow K to grow with n and q. Intuitively, an increasing latent dimension K makes the identifiability and inference issues more challenging due to the increasing degrees of freedom of the transformation matrix. With the theoretical results in this work, another interesting related problem is to further develop simultaneous inference on group-wise covariate coefficients, which we leave for future investigation.", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.13594v1", |
| "title": "Lost in Space: Probing Fine-grained Spatial Understanding in Vision and Language Resamplers", |
| "abstract": "An effective method for combining frozen large language models (LLM) and\nvisual encoders involves a resampler module that creates a `visual prompt'\nwhich is provided to the LLM, along with the textual prompt. While this\napproach has enabled impressive performance across many coarse-grained tasks\nlike image captioning and visual question answering, more fine-grained tasks\nthat require spatial understanding have not been thoroughly examined. In this\npaper, we use \\textit{diagnostic classifiers} to measure the extent to which\nthe visual prompt produced by the resampler encodes spatial information. Our\nresults show that this information is largely absent from the resampler output\nwhen kept frozen during training of the classifiers. However, when the\nresampler and classifier are trained jointly, we observe a significant\nperformance boost. This shows that the compression achieved by the resamplers\ncan in principle encode the requisite spatial information, but that more\nobject-aware objectives are needed at the pretraining stage to facilitate this\ncapability", |
| "authors": "Georgios Pantazopoulos, Alessandro Suglia, Oliver Lemon, Arash Eshghi", |
| "published": "2024-04-21", |
| "updated": "2024-04-21", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Recent approaches for developing Vision and Lan- guage (V&L) models leverage existing vision (Rad- ford et al., 2021; Fang et al., 2023b,a), and lan- guage experts (Touvron et al., 2023a; Zhang et al., 2022; Touvron et al., 2023b) and try to learn a map- ping between them (Alayrac et al., 2022; Li et al., 2023b; Dai et al., 2023; You et al., 2023; Liu et al., 2023c,b). In most cases, the experts are kept frozen while the only learnable component is the mapping between the visual and the language expert. The simplest approach uses a linear projection layer that matches the dimensionality of the visual and textual embeddings before feeding them to the LLM (Liu et al., 2023c,b). A more sophisticated 1Code available here \u2744Resampler Probe The phrase \u2018Strawberries and cream in an fruit/snacks tray\u2019 refers to the top left part of the image. \u2744Text Embeddings TRUE \u2744Resampler Probe Locate the region that is described by: \u2018Strawberries and cream in an fruit/snacks tray\u2019. 0.32 0.25 0.54 0.39 Latent Queries \u2744Vision Encoder Latent Queries \u2744Text Embeddings Figure 1: Explicit (left) and implicit (right) probing for spatial understanding. In the explicit setting, we probe for region localization, while in the implicit setting, the probe is trained to classify whether a description involving an image region is true of the image. method is to use a resampler to compress the visual embeddings into a compact \u2018visual prompt\u2019 that is then fed to the LLM either at the input level along with the text prompt (Li et al., 2023b; Dai et al., 2023) or via cross attention layers (Alayrac et al., 2022; Li et al., 2023a). From a practical standpoint, the resampler may accelerate training and infer- ence as it significantly reduces the sequence length, but also facilitates in-context learning capabilities since additional examples can fit into the context window of the LLM. 
As a result, these approaches have demonstrated impressive performance across multiple 'coarse-grained' tasks such as image captioning and visual question answering. However, fine-grained tasks such as visual grounding and spatial understanding are relatively underexplored. Resamplers are usually pretrained on pairs of image-text data using contrastive learning (Li et al., 2023b; Dai et al., 2023) and/or multimodal masked language modeling (Laurençon et al., 2023; Alayrac et al., 2022), without relying on object-aware objectives. Given the importance of resamplers for the development of V&L models, we ask whether this compression preserves fine-grained spatial information. Do the contrastive and language modeling objectives retain the overall scene structure, or is this information lost due to the absence of object-aware pretraining objectives? To address these questions, we train diagnostic classifiers to probe two different resampler modules for explicit and implicit spatial understanding (see Figure 1). Our results indicate that the multimodal resamplers do not facilitate spatial understanding. Nevertheless, in all settings, jointly fine-tuning the diagnostic classifiers and the resamplers significantly boosts performance, demonstrating that the compression achieved by the resamplers can in principle encode the requisite spatial information, but that more object-aware pretraining objectives are needed to facilitate this.", |
| "main_content": "Resamplers The idea of the resampler is inspired primarily by computer vision, where an attention mechanism is used to compress visual features into learnable queries (often referred to as slots) (Carion et al., 2020; Kamath et al., 2021; Locatello et al., 2020). More recently, resamplers have been applied to more multimodal tasks. Flamingo (Alayrac et al., 2022) and subsequent open-source variants (Lauren\u00e7on et al., 2023; Li et al., 2023a) are based on the Perceiver Resampler (Jaegle et al., 2022), with cross-attention between the latent queries and the visual embeddings followed by a stack of selfattention blocks that operate on the latent queries. In the Q-Former (Li et al., 2023b; Dai et al., 2023), the latent queries are also informed by the input text and, therefore, create a more \u2018linguistically informed\u2019 visual prompt. Probing Probing is a class of methods for interpreting neural models by assessing whether the model representations encode specific kinds of information at different processing stages (Belinkov, 2022). The concept of probing is straightforward; we extract representations from a model that is already trained on some task(s), and use a lightweight diagnostic classifier on top of these representations to solve a probing task that reflects the information that we seek to find. The classifier\u2019s performance is then taken to correlate with the extent to which that information is encoded by the model (Conneau et al., 2018; Hupkes et al., 2018). Many within (multimodal) NLP have thus adopted probing to interpret model behavior (Kajic and Nematzadeh, 2022; Salin et al., 2022; Lindstr\u00f6m et al., 2020). 3 Experiments Is spatial understanding a property of V&L resamplers? We experiment with three different spatial understanding tasks. In RefCOCOg (Mao et al., 2016), the objective is to predict the coordinates of the region that is described by the input phrase. 
Secondly, we use the 'random split' from the VSR dataset (Liu et al., 2023a), where the model has to assess the validity of a caption describing a spatial relationship between two entities. Finally, we introduce the Region Cell Matching (RCM) task, which follows the VSR formulation but is designed to test for a more rudimentary form of spatial understanding regarding the location of one entity in the image. Inspired by CAPTCHAs, an image is divided into a 3x3 grid, and each grid cell is assigned a location description (such as top left or middle). We generate synthetic captions by combining RefCOCOg descriptions with the cell location, as shown in the implicit probing example of Figure 1. To ensure that performance is not influenced by frequency biases, we balanced the distribution of positive and negative examples. Appendix A contains further details about the dataset.

In our experiments, we use the Q-Former from the first pretraining stage of BLIP2 (Li et al., 2023b) and InstructBLIP (Dai et al., 2023). To probe the resamplers, we follow past work (Belinkov, 2022) and use a single linear layer after flattening the embeddings of the query tokens. For RefCOCOg, the linear layer predicts the normalized coordinates of the region that matches the referring expression. We use the bounding box loss from (M)DETR (Carion et al., 2020; Kamath et al., 2021): a weighted sum of the Generalised IoU and L1 losses. Similarly, for VSR and the RCM task, we use a linear layer that predicts the probability that the query matches the image, trained using binary cross-entropy. We tune the learning rate, number of epochs, and loss weights (only for RefCOCOg) using Bayesian hyperparameter optimization (Bergstra et al., 2013) for at least ten iterations. For further implementation details, see Appendix B. In all cases, we evaluate the best model in terms of validation performance. We compare the two resamplers against similarly-sized models that employ patch representations.
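The RCM construction above maps an entity's location to one of nine 3x3 grid-cell phrases. A sketch under the assumption that the phrases follow the top/middle/bottom and left/right pattern shown in Figure 1; the exact phrase inventory is our guess.

```python
def cell_description(cx, cy, width, height):
    """Map a point (e.g., a bounding-box centre) in an image of the
    given size to one of nine 3x3 grid-cell location phrases."""
    col = min(int(3 * cx / width), 2)    # 0, 1, 2 -> left, centre, right
    row = min(int(3 * cy / height), 2)   # 0, 1, 2 -> top, middle, bottom
    rows = ["top", "middle", "bottom"]
    cols = ["left", "", "right"]
    if row == 1 and col == 1:
        return "middle"
    return f"{rows[row]} {cols[col]}".strip()

# synthetic caption in the style of the implicit probe
desc = "Strawberries and cream in a fruit/snacks tray"
loc = cell_description(80, 60, 640, 480)    # box centre in a 640x480 image
print(f"The phrase '{desc}' refers to the {loc} part of the image.")
```

Negative RCM examples can then be generated by pairing a description with a deliberately wrong cell phrase, keeping positives and negatives balanced as described above.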
We avoid comparison against models with object-centric visual encoding because the task of visual grounding is significantly easier in these models, as they need to select the correct candidate bounding box provided by the detector, as opposed to explicit image region prediction. Additionally, we provide results where the linear classifier is jointly trained along with the resampler, as an upper bound for the performance with frozen representations.

Table 1: Linear probing results. ^/t denotes that the resampler is frozen/unfrozen. * result from Liu et al. (2023a).

Model                          RefCOCOg (Val / Test)   VSR random (Val / Test)   RCM (Val / Test)
Random                         50.00                   50.00                     50.00
Human                          -                       95.40                     92.29
MDETR (Kamath et al., 2021)    83.35 / 83.31           -                         -
CLIP* (Radford et al., 2021)   -                       56.0                      -
Unitab (Yang et al., 2022)     84.58 / 84.70           -                         -
ViLT (Kim et al., 2021)        69.14 / 68.93           71.38 / 71.53             83.16 / 83.25
^ Q-Former                     30.39 / 30.26           66.91 / 64.97             70.12 / 69.49
t Q-Former                     71.47 / 71.72           80.86 / 80.50             81.68 / 81.35
^ IBLIP Q-Former               20.00 / 19.92           58.07 / 55.72             64.58 / 63.08
t IBLIP Q-Former               68.89 / 69.34           78.40 / 76.99             83.11 / 80.86

Figure 2: Performance on (a) VSR per intermediate layer (frozen Q-Former vs. frozen IBLIP Q-Former, layers 1-11), and (b) RefCOCOg per MSCOCO super-category (Accuracy@IoU0.5, frozen vs. unfrozen Q-Former).

Table 1 shows the results for the models that we are considering. We observe that both resamplers perform poorly on RefCOCOg when kept frozen and are therefore unable to perform explicit visual grounding. A possible counter-argument could be that predicting raw coordinates within the image is too difficult to solve with a single linear layer.
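The (M)DETR-style bounding box loss used for the RefCOCOg probe, a weighted sum of (1 - Generalised IoU) and L1, can be sketched as follows; the weights here are placeholders, since the actual values are tuned per task.

```python
def iou_and_giou(a, b):
    """IoU and Generalised IoU for valid boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # smallest axis-aligned box enclosing both inputs
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return iou, iou - (c_area - union) / c_area   # GIoU penalises empty space

def bbox_loss(pred, target, w_giou=2.0, w_l1=5.0):
    """Weighted sum of (1 - GIoU) and L1 on normalized coordinates,
    in the style of (M)DETR; the weights are illustrative."""
    _, giou = iou_and_giou(pred, target)
    l1 = sum(abs(p - t) for p, t in zip(pred, target))
    return w_giou * (1.0 - giou) + w_l1 * l1

print(bbox_loss((0.1, 0.1, 0.5, 0.5), (0.1, 0.1, 0.5, 0.5)))  # 0.0 for a perfect match
```

Unlike plain IoU, GIoU stays informative (and differentiable in the tensor version) even when the predicted and target boxes do not overlap, which is why it is preferred for regression probes.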
However, we observe similar trends with VSR and RCM, which test for spatial understanding in an easier binary classification setup. While the resamplers perform better than random baselines in these tasks, there is a significant gap between the performance of the frozen and fine-tuned backbones. We believe this is an outcome of the pretraining objectives of the Q-Former, which do not explicitly facilitate fine-grained object-centric representations. This is in line with previous work, which found that V&L models trained with contrastive objectives act as bag-of-words and do not preserve spatial information (Yuksekgonul et al., 2022). On the other hand, the significant boost achieved by unfreezing the resamplers shows that the compression of the input embeddings is, in principle, able to capture spatial information and, therefore, that the resampler as an architectural choice does not necessarily constitute a bottleneck.

Is spatial information encoded in earlier layers but discarded in deeper layers? We previously observed that resamplers have poor performance in spatial understanding tasks when using representations from the last layer. Next, we examine whether the representations from intermediate layers better encode spatial information. Intuitively, representations from earlier layers could lead to greater probing performance as they are closer to the visual encoder's output. Figure 2a shows the results on VSR after probing representations from intermediate layers.

Table 2: VSR results per model for different categories of spatial relationships. ^/t denotes that the resampler is frozen/unfrozen.

Model              Adjacency   Directional   Orientation   Projective   Proximity   Topological   Unallocated
^ Q-Former         61.94       42.05         56.93         62.87        60.15       74.56         68.42
t Q-Former         68.86       75.00         67.15         78.29        81.95       83.94         72.37
^ IBLIP Q-Former   57.44       38.64         58.39         54.21        40.60       66.14         52.63
t IBLIP Q-Former   62.98       68.18         67.88         74.61        78.95       83.15         77.63
Overall, intermediate layer representations do not provide performance gains. There is a clear upward trend regarding the performance of the Q-Former from BLIP2, whereas for InstructBLIP we observe fluctuations within a small range across layers. A similar trend is observed in the RefCOCOg results, which are included in Appendix C.

Scaling the Probing Classifier. Additionally, we experiment with scaling the probing classifier by introducing non-linearities. In particular, we use 2-layer and 4-layer classifiers with SwiGLU activation functions. We refrain from using more complex classifiers because they may infer features that are not actually used by the underlying model (Hupkes et al., 2018). For training, we used the same setup as in our previous experiments. Table 3 illustrates the results with increasing probe complexity. While we observe a common trend of increasing performance when we make the probe more complex, the accuracy of the non-linear probes does not indicate that the resampler encodes spatial information that can be easily retrieved. Additionally, the performance gap between the simplest and the most complex probe in the case of InstructBLIP indicates that fine-grained spatial understanding is 'built up' within the probe and is not necessarily a property of the resampler component.

Table 3: Probing results by scaling the probing classifier.

Model              #Layers   RefCOCOg   VSR random   RCM
^ Q-Former         1         30.26      64.97        69.49
^ Q-Former         2         32.08      65.15        69.98
^ Q-Former         4         34.49      65.01        70.71
^ IBLIP Q-Former   1         19.92      55.72        63.08
^ IBLIP Q-Former   2         25.01      58.09        68.66
^ IBLIP Q-Former   4         34.49      59.09        69.29

3.1 Discussion

Performance analysis per object category. Figure 2b illustrates the Q-Former's performance on RefCOCOg per MSCOCO (Lin et al., 2014) super-category. We observe that the frozen/unfrozen resamplers behave differently but also have significant variation between object categories.
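A 2-layer probe with a SwiGLU activation, as used above, can be sketched as follows; numpy-only, forward pass only, with illustrative dimensions and initialisation.

```python
import numpy as np

def swish(x):
    return x / (1.0 + np.exp(-x))    # swish(x) = x * sigmoid(x)

class SwiGLUProbe:
    """2-layer probe: a SwiGLU block followed by a linear output layer."""
    def __init__(self, d_in, d_hidden, rng):
        s = 1.0 / np.sqrt(d_in)
        self.W = rng.uniform(-s, s, size=(d_in, d_hidden))   # value branch
        self.V = rng.uniform(-s, s, size=(d_in, d_hidden))   # gate branch
        self.w_out = rng.uniform(-s, s, size=d_hidden)

    def forward(self, x):
        h = (x @ self.W) * swish(x @ self.V)   # SwiGLU: value * swish(gate)
        return h @ self.w_out                  # one logit per example

rng = np.random.default_rng(0)
probe = SwiGLUProbe(d_in=32, d_hidden=64, rng=rng)
x = rng.normal(size=(4, 32))                   # 4 flattened query embeddings
print(probe.forward(x).shape)  # (4,)
```

The gating makes the probe strictly more expressive than the linear one, which is exactly why the text cautions against scaling it further: a powerful probe can compute features the resampler never encoded.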
To further understand the possible reasons for this variation, we computed the Kendall coefficient (Kendall, 1938) between the performance of each super-category and 1) the distribution of training examples, 2) the area of each bounding box, and 3) the distance of the bounding box from the center of the image (Table 5). Interestingly, the main factor that correlates positively with the performance per category is the area of the bounding box. We also observe that the further the bounding box deviates from the center, the more the performance drops. These two observations imply that the Q-Former constructs the visual prompt by 'summarizing' the most central entities within an image, ignoring positional outliers.

Which spatial relationships are difficult to capture? In Table 2, we break down the VSR results according to the spatial relationship type. Both resamplers perform best on topological relations across frozen/unfrozen conditions. Directional relations seem challenging for out-of-the-box resamplers, though this relation can be captured during fine-tuning. Finally, captions describing adjacency or orientation properties are difficult even for fine-tuned resamplers.

Effect of learning objectives. We showed that multimodal resamplers pretrained with contrastive learning and multimodal language modeling objectives do not capture spatial information well. These are undoubtedly important objectives as they enable large-scale pretraining; however, on their own, they are not sufficient for enabling fine-grained spatial understanding. Finally, we observed that BLIP-2's Q-Former consistently outperformed the one from InstructBLIP. However, as shown in Figure 2a, the performance of the two resamplers is comparable for early layers.
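The Kendall coefficient used in this analysis can be computed directly from pair orderings. A sketch with hypothetical per-category accuracies and box areas (not the values behind Table 5):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall rank correlation (tau-a): (concordant - discordant) pairs
    over the total number of pairs; assumes no ties for simplicity."""
    n = len(xs)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# hypothetical per-category accuracy vs. mean normalized bounding-box area
accuracy = [83.1, 78.0, 73.5, 70.5, 64.9]
box_area = [0.41, 0.32, 0.35, 0.22, 0.15]
print(kendall_tau(accuracy, box_area))  # 0.8
```

A value near +1 means the two rankings mostly agree, matching the observation that categories with larger boxes are probed more accurately.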
We hypothesize that during instruction tuning, the InstructBLIP Q-Former may get away with providing even less fine-grained information, since the language modeling loss is already low due to the high-quality LLM, leading to a forgetting effect (McCloskey and Cohen, 1989).

4 Conclusion

In this paper, we explored to what degree multimodal resamplers preserve spatial information. While previous work has demonstrated the effectiveness of resamplers across a variety of V&L tasks, our investigation revealed their limitations when applied to spatial understanding tasks. In particular, we probed two resamplers and showed that grounding natural language descriptions in image regions is not an inherent ability of these modules. Furthermore, probing experiments showed limited spatial understanding in two easier settings, which involved image-text matching with captions referencing either the absolute location of an entity or spatial relationships between two entities. Nevertheless, our results showed that when the resampler is fine-tuned, the compression of the visual encoding induced by the resampler can be effective. We believe that this is due to the lack of an object-aware pretraining objective that would encourage the resamplers to encode spatial information. Future work should build upon our findings and design objectives that incentivize disentangled representations (Bengio et al., 2013).

Limitations

This study centered on exploring some architectural components of current V&L models with regard to their ability to encode spatial information. For the purpose of our study, it is necessary that the visual and textual representations are already fused. Models adopting unimodal resamplers do not facilitate this because 1) the fusion happens only in the successive cross-attention layers of the LLM (Alayrac et al., 2022), or 2) the visual embeddings are concatenated with the text embeddings at the input of the LLM (Bai et al., 2023).
While we could extract representations from intermediate layers of a model like IDEFICS (Laurençon et al., 2023), this would have been an unfair comparison with BLIP-2-style models because the former adds more layers to the original resampler architecture. The other option would be to provide both the visual embeddings and the text embeddings to the probe, but this defeats the purpose of the probing classifier, since it would have to perform the necessary multimodal fusion internally, making any comparisons uninterpretable. Consequently, our study does not encompass the entirety of available models adopting resamplers, and the findings may not be fully representative of the broader V&L model landscape. We also recognize the limitation in our exploration of spatial understanding as an emergent ability in V&L models. The question of whether spatial understanding materializes as a natural consequence of model scale remains unanswered in our study. A more in-depth investigation controlling the pretraining dataset, the size of the models, and the training hyperparameters is required in order to truly understand the capacity of these models to develop fine-grained and disentangled representations that facilitate spatial understanding.

Acknowledgements

We would like to thank the reviewers for their valuable feedback during the ARR process. Additionally, we would like to thank Malvina Nikandrou and Ioannis Konstas for their suggestions with regard to the experimental setup. This work was supported by the Edinburgh International Data Facility (EIDF) and the Data-Driven Innovation Programme at the University of Edinburgh." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.14716v1", |
| "title": "Bayesian Example Selection Improves In-Context Learning for Speech, Text, and Visual Modalities", |
| "abstract": "Large language models (LLMs) can adapt to new tasks through in-context\nlearning (ICL) based on a few examples presented in dialogue history without\nany model parameter update. Despite such convenience, the performance of ICL\nheavily depends on the quality of the in-context examples presented, which\nmakes the in-context example selection approach a critical choice. This paper\nproposes a novel Bayesian in-Context example Selection method (ByCS) for ICL.\nExtending the inference probability conditioned on in-context examples based on\nBayes' theorem, ByCS focuses on the inverse inference conditioned on test\ninput. Following the assumption that accurate inverse inference probability\n(likelihood) will result in accurate inference probability (posterior),\nin-context examples are selected based on their inverse inference results.\nDiverse and extensive cross-tasking and cross-modality experiments are\nperformed with speech, text, and image examples. Experimental results show the\nefficacy and robustness of our ByCS method on various models, tasks and\nmodalities.", |
| "authors": "Siyin Wang, Chao-Han Huck Yang, Ji Wu, Chao Zhang", |
| "published": "2024-04-23", |
| "updated": "2024-04-23", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI", |
| "cs.CV", |
| "cs.SD", |
| "eess.AS" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Large language models (LLMs) (Touvron et al., 2023b; OpenAI, 2023a) have achieved great suc- cess on many text-based natural language process- ing (NLP) tasks. By connecting with extra visual and audio encoders (Sun et al., 2023b; Radford et al., 2023), the resulting multimodal LLMs can also achieve remarkable performance on image- text and audio-text tasks (Li et al., 2023; OpenAI, 2023b; Tang et al., 2023). With the ability of in- context learning (ICL) (Brown et al., 2020), LLMs can adapt to new tasks easily and efficiently in a training-free manner, to generate output following the prompting paradigm based on a few input-label pairs pre-pended to the test input. The existence of ICL ability has also been verified on image-text and audio-text tasks (Tsimpoukelli et al., 2021; Wang et al., 2023c; Hsu et al., 2023; Pan et al., 2023). (i) Random Selected Example(s) (ii) Inverse Inference (iii) Bayesian Selected Example(s) text similarity score-based reranking estimated probabilities datastore (few-shot with k samples) (k samples in-context learning) Figure 1: A brief illustration of the proposed Bayesian in-context example selection includes: (i) first randomly selecting k examples; (ii) examining the examples in the datastore through \u201cinverse inference,\u201d where the test input-label pair serves as the in-context example; and (iii) selecting samples with correct label predictions as good examples (colored in blue), considered to have high mutual information interaction with the test input. Although ICL requires no gradient descent and thus does not suffer from the instability caused by stochastic optimisation compared to other test- time adaptation approaches, care still needs to be taken when selecting the in-context examples since they often lead to distinct ICL performance varia- tions (Zhao et al., 2021; Min et al., 2022; Lu et al., 2022b). 
Prior work on in-context example selection trains an example retrieval module (Rubin et al., 2022; Zhang et al., 2022; Lu et al., 2022a; Wang et al., 2023b), selects close examples in embedding space (Liu et al., 2022; An et al., 2023; Qin et al., 2023), or leverages the feedback of LLMs to score the examples (Su et al., 2022; Nguyen and Wong, 2023; Iter et al., 2023; Mavromatis et al., 2023). While boosting ICL performance, most methods treat in-context examples and the test input separately, overlooking their mutual interactions. This paper proposes ByCS (Bayesian in-Context example Selection), a novel in-context example selection approach focusing on mutual information interactions based on the Bayesian formula. We refer to the inference of the test input conditioned on in-context examples as ICL inference, and to the inference of an in-context example's input based on the test input-label pair as inverse inference. By introducing inverse inference via Bayes' theorem, ByCS leverages the inverse inference result to evaluate the quality of each in-context example. Assuming the contextual information interaction is mutual, an accurate inverse inference is likely to result in an accurate inference. Examples with accurate inverse inference results are selected as optimal examples. Extensive experiments across audio, image, and text modalities are conducted to verify the effectiveness and robustness of ByCS, covering ASR, visual question answering (VQA), and NLP tasks (including topic classification, sentiment analysis, and text-to-SQL). Our main contributions are summarised as follows:

- ByCS, a novel in-context example selection method inspired by Bayes' theorem, is proposed. To improve efficiency, the use of a smaller model for fast inverse inference implementation and a ranking-based pre-selection to reduce the number of in-context examples are also proposed in this paper.
- The method is verified using both 'decoder-only' ICL on NLP tasks and 'encoder-decoder' ICL on ASR and VQA. To the best of our knowledge, this is the first in-context example selection method verified across text, audio, and visual modalities, as shown in Figure 2.", |
| "main_content": "Multimodal ICL. Inspired by the decoder-only ICL in text-based NLP, efforts have been made to extend such few-shot learning ability to other modalities, in particular image and audio. Frozen (Tsimpoukelli et al., 2021) is the first attempt to exploit the ICL ability of a vision-language model (VLM). By using a vision encoder to map the input image to textual tokens in the input embedding space of a frozen text language model, Frozen can handle interleaved image and text input and achieve image-text ICL. Other work improves the ICL ability of VLMs by using adapter blocks (Eichenberg et al., 2022), adding blockwise modality fusion structures (Alayrac et al., 2022), and scaling up the model size (Sun et al., 2023a). In the audio modality, Borsos et al. (2023) proposed AudioLM, a language model based on quantised audio tokens for audio generation tasks, which exhibits ICL ability for audio continuation. Similarly, Wang et al. (2023a) proposed VALL-E, a controllable text-to-speech synthesis system with ICL ability based on audio and text prompts. Wang et al. (2023c) presented the first ICL work for ASR based on paired speech-text examples, which adapted the Whisper (Radford et al., 2023) model to obtain considerable word error rate (WER) reductions on unseen Chinese dialects. Further explorations enabled recent speech-language models to perform ICL on more speech input tasks through warm-up training (Hsu et al., 2023) or speech instruction-tuning (Pan et al., 2023).
[Figure 2: Multimodal ICL, with panels (a) text ICL, (b) ASR ICL, and (c) VQA ICL. Although ICL on different modalities shares the same formula expression, the actual inputs and inference model architectures differ. For ASR ICL on Whisper, the speech is fed into the encoder while the text example labels go into the decoder, which is aware of the speech input through cross-attention with the encoder. For VQA ICL, images are first encoded into the same embedding space as the LM's input, then interleaved images and texts are fed into the decoder LM.]
In-Context Example Selection Methods. Rubin et al. (2022) proposed a scoring LM to retrieve in-context examples using contrastive learning, which can also be trained with reinforcement learning algorithms such as Q-learning (Zhang et al., 2022) and policy gradient (Lu et al., 2022a). Alternatively, examples that are semantically similar to the test input can be selected: Liu et al. (2022) proposed to select the k nearest neighbours (kNN) in the embedding space of the examples, and, when combining with chain-of-thought (Wei et al., 2022), Qin et al. (2023) proposed to select examples in the embedding space of the reasoning path. LLM feedback is often used in in-context example selection. Iter et al. (2023) selected in-context examples with cross-entropy differences of the fine-tuned model, based on the assumption that ICL may act as implicit gradient descent (Dai et al., 2022). Nguyen and Wong (2023) identified highly impactful examples according to a proposed influence score. Although ByCS also uses LLM feedback when evaluating the quality of in-context examples through inverse inference, it leverages the text similarity between the inverse inference results and the corresponding ground-truth labels, with no need for the complete output probability distributions, which are often not available for commercial LLMs. Wang et al. (2023d) selected optimal in-context examples in a Bayesian framework by viewing LLMs as latent variable models and ICL as latent concept learning. In comparison, ByCS directly extends the ICL inference probability using Bayes' theorem.
[Figure 3: The detailed pipeline of the ByCS method. (1) Conduct the first-round inference to estimate the label of the test input, Ŷ = arg max P(Y|Cinput, Clabel, X). (2) Perform inverse inference on each example in the datastore, Ĉlabel = arg max P(Clabel|X, Ŷ, Cinput), where the test input and the estimated label serve as the in-context example; a detailed illustration of inverse inference can be found in Figure 5 in the Appendix. (3) Rank in-context examples by the text similarity Q = Similarity(Clabel, Ĉlabel) between the inverse inference result and the true context label, and select examples with max(Q); examples with high similarity scores are selected due to their high mutual information interaction.]
Xu and Zhang (2024) selected examples with a high discrepancy between the labels and the LLM's outputs when performing question answering. ByCS also selects examples from candidates in a datastore based on the LLM's outputs, but computes the mutual information interactions between the in-context examples and the test input. 3 Methodology As shown in Figure 3, given a test input X and paired in-context examples (Cinput, Clabel), LLMs predict the most probable answer Ŷ by maximising the inference probability P(Y|Cinput, Clabel, X): Ŷ = arg max P(Y|Cinput, Clabel, X), (1) where Cinput and Clabel are the inputs and labels of different data types in different tasks. For text-based NLP tasks, Cinput and Clabel are text questions and their corresponding answers. For ASR, Cinput and Clabel are speech audio and the corresponding text transcriptions. For VQA, Cinput are images together with text questions based on the images, and Clabel are the text answers. The inference probability can be extended using Bayes' theorem: P(Y|Cinput, Clabel, X) = P(Clabel|X, Y, Cinput) P(Y|X, Cinput) / P(Clabel|X, Cinput). (2) The likelihood P(Clabel|X, Y, Cinput) is termed the inverse inference probability, since it can be interpreted as the probability of the context label Clabel when the test input-label pair (X, Y) is inversely treated as the in-context example. ByCS focuses on the inverse inference probability and assumes, for simplification, that the influence of the prior P(Y|X, Cinput) is subordinate. In practice, since the ground-truth label Yref of the test input X is not available, the correct likelihood P(Clabel|X, Yref, Cinput) is approximated by P(Clabel|X, Ŷ, Cinput), where Ŷ is produced by the first-round inference.
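As a check, Eqn. (2) follows from conditioning throughout on (Cinput, X) and factorising the joint, a standard Bayes'-rule step:

```latex
% Bayes' theorem applied to the ICL inference probability, Eqn. (2):
% condition on (C_input, X) throughout, then factorise the joint.
P(Y \mid C_\mathrm{input}, C_\mathrm{label}, X)
  = \frac{P(Y, C_\mathrm{label} \mid C_\mathrm{input}, X)}
         {P(C_\mathrm{label} \mid C_\mathrm{input}, X)}
  = \frac{P(C_\mathrm{label} \mid X, Y, C_\mathrm{input})\,
          P(Y \mid X, C_\mathrm{input})}
         {P(C_\mathrm{label} \mid X, C_\mathrm{input})}
```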
Specifically:
• First, the first-round inference is performed to produce a hypothesized label Ŷ for the test input X. This can be achieved without any in-context examples by Ŷ = arg max P(Y|X). Better performance can be achieved by using in-context examples, Ŷ = arg max P(Y|C̃input, C̃label, X) following Eqn. (1), where (C̃input, C̃label) is a pair of first-round in-context examples selected either randomly or using other example selection methods.
• Next, for every candidate in-context example in the datastore, generate the inverse inference result Ĉlabel based on the approximated inverse inference probability, Ĉlabel = arg max P(Clabel|X, Ŷ, Cinput).
• Last, compute Q = Similarity(Clabel, Ĉlabel), the text similarity between Clabel and Ĉlabel, and use Q as the metric to evaluate the quality of inverse inference. Since a more accurate inverse inference probability often results in higher text similarity, ByCS selects the in-context examples with higher Q. Note that Q is adopted because it does not require access to the LLM's output probability distribution, which is often unavailable for commercial LLMs.
To reduce the computation cost of inverse inference, two methods are used when the number of examples in the datastore is large:
• Conduct inverse inference using a model in the same model family as the inference model but with a smaller size.
• Apply ByCS to a small number (e.g. N) of pre-selected candidate examples. In pre-selection, all examples in the datastore are first ranked, and only the top N examples are kept as pre-selected candidates. The pre-selection is performed using fast ranking-based algorithms such as kNN.
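The three steps above can be sketched as follows. This is a minimal sketch: `infer` and `inverse_infer` are hypothetical stand-ins for the LLM's forward and inverse inference calls, not the authors' actual model interfaces, and the Jaccard coefficient (one of the similarity measures evaluated in Section 5.2.2) stands in for Similarity.

```python
# Minimal sketch of the ByCS selection loop. `infer` and `inverse_infer`
# are hypothetical stand-ins for the LLM's forward and inverse inference.

def jaccard(a: str, b: str) -> float:
    """Jaccard coefficient over word sets: |intersection| / |union|."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def bycs_select(x, candidates, infer, inverse_infer, n_select=1):
    """Rank candidate (c_input, c_label) pairs by inverse-inference quality."""
    y_hat = infer(x)  # step 1: first-round inference gives a hypothesized label
    scored = []
    for c_input, c_label in candidates:
        # step 2: (x, y_hat) acts as the in-context example and the model
        # regenerates the candidate's label
        c_hat = inverse_infer(c_input, x, y_hat)
        # step 3: score by similarity between regenerated and true label
        scored.append((jaccard(c_label, c_hat), (c_input, c_label)))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [pair for _, pair in scored[:n_select]]
```

kNN pre-selection, as described above, simply shrinks `candidates` before this loop is run.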
4 Experimental Setup 4.1 Models Experiments are performed on the audio, text, and image modalities. For audio-text and image-text tasks, ASR and VQA are used to evaluate the ICL ability of encoder-decoder structured models. For text-only NLP tasks, topic classification, sentiment analysis, and text-to-SQL are used to evaluate the ICL performance of decoder-only models. For the NLP tasks, experiments are conducted using GPT-3.5-Turbo and GPT-4 (OpenAI, 2023a). For the ASR task, the open-sourced Whisper model (Radford et al., 2023) is used, a series of speech models released by OpenAI. The Whisper model family uses the vanilla encoder-decoder Transformer (Vaswani et al., 2017) architecture, ranging from 39 million (M) parameters (tiny) to 1.55 billion (B) parameters (large). Specifically, the Whisper small (244M) and Whisper large-v2/-v3 (1.55B) models are used. For the VQA task, experiments are performed on Emu2 (Sun et al., 2023a) and GPT-4V (OpenAI, 2023b). Emu2 is a 37B vision-language model (VLM) which leverages the pretrained EVA-02-CLIP-E-plus (Sun et al., 2023b) and LLAMA-33B (Touvron et al., 2023a), and has ICL ability when taking interleaved inputs of images and texts. For experiments on Emu2, the outputs are generated with a greedy decoding setting for fast evaluation. GPT-4V is a GPT-4 variant that can directly perceive image inputs, showing state-of-the-art image understanding performance. 4.2 Datasets Seven datasets covering NLP, ASR, and VQA are used in this paper. For text-only ICL, four datasets are used across four task categories: the TREC dataset for topic classification (Voorhees and Tice, 2000), the SST2 dataset for sentiment analysis (Socher et al., 2013), the Spider dataset for text-to-SQL (Yu et al., 2018), and the CHiME4 (Vincent et al., 2017) split of the HyPoradise dataset (Chen et al., 2023) for generative language model re-scoring to correct pre-generated ASR transcriptions.
For audio-text ICL, two datasets are used for ASR tasks, namely RASC863 (ChineseLDC.org, 2004) and CORAAL (Gunter et al., 2021). RASC863 is a commonly used Chinese dialect ASR dataset, and its dialectal-word splits for the Chongqing and Guangzhou dialects are used. CORAAL is an English corpus with speech recordings from regional African Americans. For image-text ICL, VQA experiments are conducted on OKVQA (Marino et al., 2019), a dataset that requires methods to draw upon external knowledge to answer the visual questions. 4.3 Baselines On all three modalities, random selection and an improved KATE (Liu et al., 2022) are used as baseline approaches. For random selection, in-context examples are uniformly selected from the example datastore three times and the average results are reported. For KATE (Liu et al., 2022), the k neighbours nearest to the test input in the embedding space, in terms of Euclidean distance, are selected. For ASR ICL, the encoder of Whisper large-v2 acts as the embedding retrieval module on the Chinese dataset, while the encoder of Whisper large-v3 is used on the English dataset. For text ICL, OpenAI text-embedding-ada-002 is used as the embedding retrieval model.
For VQA ICL, KATE is based only on the embedding space of the query image, and EVA02-CLIP-bigE-14-plus (Sun et al., 2023b) serves as the embedding retrieval module. We use the term "KATE+" to refer to this baseline, stressing that it is an improved KATE variant enhanced with stronger embedding retrieval models, which results in better performance. For text ICL, bm25 (Robertson et al., 1995) and LLM-R (Wang et al., 2023b) are also compared as baselines. bm25 is a ranking metric originally designed for search engines to estimate the relevance of documents to a given query based on word-overlap similarity. LLM-R is a recent and performant dense retriever distilled using a reward model trained on LLM feedback.
[Table 1: %WERs on the RASC863 dialectal word dataset and CORAAL with different in-context example selection methods. Columns per corpus: RASC863 Chongqing (k = 1/2/3/4), RASC863 Guangzhou (k = 1/2/3/4), CORAAL <15s (k = 1).
(a) Results with Whisper-large-v2 — random: 67.1/56.1/52.7/51.0, 61.7/38.3/31.2/28.8, 12.4; KATE+: 67.1/54.7/51.3/49.7, 61.3/36.1/26.9/24.8, 12.0; ByCS: 62.4/53.4/50.6/48.6, 49.5/31.9/27.1/26.6, 11.7; oracle ByCS: 62.4/52.4/49.5/47.2, 49.4/30.7/25.8/24.7, 11.7.
(b) Results with Whisper-large-v3 — random: 68.9/60.3/57.0/55.7, 67.1/42.8/38.3/35.2, 11.6; KATE+: 68.1/58.2/54.8/54.1, 67.7/41.3/34.3/31.6, 11.4; ByCS: 63.5/56.3/53.5/51.8, 50.7/36.7/33.0/31.5, 11.3; oracle ByCS: 63.4/55.2/53.0/50.7, 51.3/35.6/31.9/30.7, 11.2.
For RASC863, the example datastore is the RASC863 dialectal word dataset of the corresponding dialect. For CORAAL, the example datastore for ByCS is narrowed down to 10 using the kNN algorithm. For the "oracle ByCS" setting, the ground-truth label Yref is used in the inverse inference.]
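The KATE-style baseline above can be sketched as a nearest-neighbour lookup in embedding space; the embedding model itself (Whisper encoder, text-embedding-ada-002, or EVA-CLIP, depending on the modality) is abstracted away here.

```python
# Minimal sketch of KATE-style example selection: pick the k candidates
# whose embeddings lie nearest to the test input by Euclidean distance.
# The embedding model that produces the vectors is abstracted away.
import numpy as np

def knn_select(test_emb: np.ndarray, example_embs: np.ndarray, k: int) -> list:
    """Return indices of the k nearest examples in embedding space."""
    dists = np.linalg.norm(example_embs - test_emb, axis=1)
    return np.argsort(dists)[:k].tolist()
```

The same routine can also serve as the fast pre-selection step that narrows the ByCS datastore (e.g. to 10 candidates, as done for CORAAL).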
5 Results 5.1 ASR ICL WER results for the ASR tasks are reported in Table 1; for Chinese, WER is calculated over Chinese characters, which is also termed the character error rate. The ByCS method outperforms the KATE+ baseline in most cases, showing the robustness and effectiveness of our method. When the number of in-context examples k is small, ByCS surpasses the KATE+ baseline by a large margin, with a 10.25% relative WER reduction on average when k = 1. This performance advantage shrinks as the number of in-context examples increases, which may be attributed to the fact that ByCS performs the inverse inference of each in-context example individually, applying an independence assumption that ignores the contextual interactions between different in-context examples. The use of Yref in "oracle ByCS" further boosts the performance gain, indicating the upper bound of our method for the same k. 5.2 Ablation study on ASR ICL 5.2.1 Inverse decoding option The influence of different decoding options for inverse inference is studied on the RASC863 dialectal word dataset, with results shown in Table 2. For the setting notation, "noprompt" denotes decoding with the default decoding option, "prompt" means decoding with a specially designed prompt "识别方言" (meaning "recognize dialect speech"), and "LID" denotes decoding with the correct language identity of Chinese ("zh"). The results show that among the three inverse decoding options, "noprompt" obtains the best performance, "prompt" the second best, and "LID" the worst. The WERs of inverse inference are reported in Table 3. The WERs under the "noprompt" setting exceed 100% due to the high insertion error rate.
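The %WER/%CER numbers above are edit-distance based; a minimal sketch follows, using the standard Levenshtein definition rather than the authors' exact scoring script. For Chinese, each character is a token, giving the character error rate, and the rate can exceed 100% when there are many insertions, as noted for Table 3.

```python
# Sketch of WER/CER computation: Levenshtein edit distance normalised by
# reference length (standard definition, not the authors' scoring script).

def edit_distance(ref, hyp) -> int:
    """Levenshtein distance: minimum insertions + deletions + substitutions."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def cer(ref: str, hyp: str) -> float:
    """Character error rate = edit distance / reference length."""
    return edit_distance(ref, hyp) / max(len(ref), 1)
```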
The repeated outputs are not removed when calculating the WERs of inverse inference or the text similarity, which makes the distinction between examples with high and low mutual information interaction more obvious. Although it may seem counter-intuitive that low inverse inference accuracy results in high ByCS selection performance, it is reasonable: inverse inference in ByCS helps to separate good in-context examples from the rest, which is better achieved with worse decoding options, since those options make the model commit more mistakes for worse in-context examples.
[Table 2: %WERs of Whisper large-v2 on the RASC863 dialectal word dataset using ByCS with different inverse decoding options and text similarity measurements; the number of in-context examples is k = 1. Columns: RASC863 Chongqing, RASC863 Guangzhou.
Jaccard coefficient — noprompt: 62.4, 49.5; prompt: 62.9, 50.7; LID: 64.1, 52.3.
BERT wordvecs — noprompt: 62.4, 51.5; prompt: 63.5, 56.8; LID: 64.5, 57.7.]
[Table 3: Inverse inference %WERs of Whisper large-v2 on the RASC863 dialectal word dataset with different inverse decoding options. Columns: RASC863 Chongqing, RASC863 Guangzhou.
noprompt: 91.5, 125.2; prompt: 70.2, 70.1; LID: 54.6, 61.7.]
5.2.2 Text similarity measurement The results of ByCS with different text similarity measurements are also reported in Table 2.
For the setting notation, the "Jaccard coefficient" is a commonly used statistic to gauge similarity, defined as the intersection over the union of two sentences. "BERT wordvecs" measures similarity based on the Euclidean distance in the embedding space of BERT-encoded word vectors; the embedding retrieval module is bert-base-chinese1. ByCS with the Jaccard coefficient as the text similarity has lower WERs, which may be because the training data of the BERT model does not include sufficient dialectal Chinese words and expressions. It also indicates that ByCS can work well even with a simple rule-based text similarity measurement, further verifying its robustness. Due to its performance and simplicity, the Jaccard coefficient is used as the text similarity measurement in later experiments unless explicitly specified.
[Table 4: %WERs on the RASC863 Chongqing dialectal word dataset with ByCS using different inverse inference models. ByCSlargev2/ByCSlargev3 and ByCSsmall use Whisper-large-v2/-v3 and Whisper-small as the inverse inference model, respectively. Columns: k = 1/2/3/4.
(a) Results with Whisper large-v2 — KATE+: 67.1/54.7/51.3/49.7; ByCSlargev2: 62.4/53.4/50.6/48.6; ByCSsmall: 64.2/53.3/50.5/48.7.
(b) Results with Whisper large-v3 — KATE+: 68.1/58.2/54.8/54.1; ByCSlargev3: 63.5/56.3/53.5/51.8; ByCSsmall: 64.4/56.5/54.1/51.7.]
5.2.3 Inverse inference model Inverse inference with different models is also investigated, with the results displayed in Table 4. A smaller model is used for inverse inference to speed up ByCS, since it is expensive to perform inverse inference with the inference model for every candidate example in the datastore. Replacing Whisper-large-v2/v3 with Whisper-small gives roughly a six-fold speed-up2. For the notation, the subscript denotes the inverse inference model.
For example, ByCSsmall is the ByCS method with Whisper-small as the inverse inference model. ByCSsmall achieves results similar to ByCSlargev2 and ByCSlargev3, verifying the effectiveness of using a smaller model from the same family for inverse inference. This is intuitive, since Whisper-small is trained on the same data and with the same settings as the inference models Whisper-large-v2 and Whisper-large-v3; it therefore processes information similarly and can serve as a good alternative when evaluating the quality of the in-context examples.
1 https://huggingface.co/bert-base-chinese
2 https://github.com/openai/whisper
[Table 5: Results of four text ICL tasks on two GPT-family models with different in-context example selection methods; the evaluation metrics are denoted in brackets. The example datastore is narrowed down to a small size using kNN for ByCS. In the "default" setting, the answers are generated directly from the questions without ICL.
(a) Results using GPT-3.5-Turbo — TREC (%Acc ↑, k = 1/2/4): default 63.0; random 63.5/72.7/75.3; KATE+ 78.8/86.4/91.0; bm25 74.6/89.4/89.8; LLM-R 78.0/88.8/90.4; ByCS 81.2/88.0/90.6. SST2 (%Acc ↑, k = 1/2): default 92.92; random 94.96/94.80; KATE+ 95.05/94.69; bm25 95.27/95.40; LLM-R 95.05/94.02; ByCS 95.16/95.04. Spider (%Acc ↑, k = 1): default 67.41; random 67.02; KATE+ 69.44; bm25 67.41; LLM-R 67.82; ByCS 69.63. HyPoradise CHiME-4 (%WER ↓, k = 1/2/5): default 8.0; random 7.5/7.5/7.3; KATE+ 7.7/7.1/6.8; bm25 7.4/7.5/8.1; LLM-R 7.4/6.9/7.0; ByCS 7.1/6.8/6.4.
(b) Results using GPT-4 — TREC: default 75.2; random 81.3/82.5/84.6; KATE+ 88.2/91.6/93.4; bm25 81.8/87.4/91.4; LLM-R 88.2/91.0/93.6; ByCS 88.6/92.4/93.6. SST2: default 95.01; random 96.38/96.11; KATE+ 96.43/95.85; bm25 96.19/96.09; LLM-R 95.74/95.06; ByCS 96.55/96.31. Spider: default 69.63; random 70.66; KATE+ 71.95; bm25 71.47; LLM-R 72.63; ByCS 72.82. HyPoradise CHiME-4: default 11.6; random 6.9/6.8/6.5; KATE+ 7.0/6.3/5.8; bm25 6.8/6.6/6.3; LLM-R 6.8/6.3/5.9; ByCS 6.7/6.3/5.9.]
The smaller size of Whisper-small makes ByCS a more practical method in cost-sensitive scenarios. 5.3 Text ICL Text-only ICL results are shown in Table 5. ByCS outperforms all baselines in most dataset settings, showing not only the effectiveness but also the robustness of ByCS. In particular, ByCS outperforms the best baseline on the generative ASR re-scoring dataset HyPoradise with a considerable 4.7% relative WER reduction using GPT-3.5-Turbo. On the TREC and SST2 datasets, ByCS does not always outperform the baselines. This indicates that, due to its reliance on text similarity, ByCS is more suitable for open-ended long-answer datasets, in which answers are much more diverse and examples with rich information interactions can be better separated. In contrast, multi-choice classification datasets often offer only a few short answers containing little contextual information. As the example in Figure 4 shows, the distribution of the text similarity used to rank the examples is then often sharp, merging the optimal and the suboptimal examples. Furthermore, considering the hypothesized labels of the test inputs for inverse inference, the hypothesized answers in open-ended datasets (long sentences) are often more similar to their corresponding references than those in multi-choice classification datasets (a word, a phrase, or just the index of a choice). It is also observed that different in-context example selection methods perform differently with different models, even on the same dataset: the bm25 method outperforms the KATE+ method with GPT-3.5-Turbo on the SST2 dataset, but not with GPT-4. Compared to KATE+ and bm25, which are model-free in the actual selection step, the performance advantage of ByCS is more consistent, since it takes into account the influence of the model.
The outputs of the inverse inference model are used, which can serve as a good approximation to the inference model, as verified in Section 5.2.3. Note that for ByCS on GPT-4, although the inverse inference procedure is conducted with GPT-3.5-Turbo, the performance of ByCS is still superior. This further verifies that smaller models from the same model family can serve as a good low-cost approximation for the inverse inference model.
[Figure 4: The distribution of text similarity scores on different datasets: (a) SST2, (b) HyPoradise. The text similarity score is the Jaccard coefficient, and the entropy of each distribution is shown at the upper left. The distribution on the multi-choice classification dataset SST2 (blue) is much sharper than that on the open-ended dataset HyPoradise (red).]
5.4 VQA ICL ByCS is tested on VQA ICL and the results are reported in Table 6. ByCS outperforms the KATE+ baseline on the VQA ICL task, demonstrating strong performance across modalities. The performance improvement from ByCS is not as obvious as on the audio and text tasks, since VQA answers are usually short (a word or phrase), lacking sufficient contextual information. ByCS on the VQA dataset thus suffers from the problem of sharp text similarity score distributions, similar to the multi-choice classification datasets.
[Table 6: Results of VQA ICL on the OKVQA dataset with different in-context example selection methods and numbers of examples.
(a) Results with Emu-2 — KATE+: 40.47 (k = 2), 45.11 (k = 4); ByCS: 40.12 (k = 2), 45.14 (k = 4).
(b) Results with GPT-4V — KATE+: 52.54 (k = 2), 54.00 (k = 4); ByCS: 52.86 (k = 2), 54.39 (k = 4).]
For ByCS with GPT-4V, inverse inference results on Emu-2 are used to pre-select the candidate examples, and ByCS still outperforms the KATE+ baseline. The performance may be further improved if GPT-4V is also used for inverse inference.
This demonstrates that ICL may perform similarly across models, not only on speech and text but also on images. 6 Conclusion This paper proposes ByCS, a novel in-context example selection method based on Bayes' theorem, which assumes that contextual information interaction between the test input and in-context examples is mutual, and selects high-quality examples based on the inverse inference results. Experiments are performed across three modalities (speech, text, and images), using six different tasks and seven datasets. The results demonstrate the robustness and effectiveness of ByCS. It is also validated that the inverse inference results can be approximated using a smaller model from the same model family, which considerably reduces the computational cost. Moreover, because it relies on text similarity to rank in-context examples, ByCS is most suitable for open-ended long-answer datasets that contain sufficient contextual information. Future work will extend inverse inference to sequences with multiple in-context examples, in order to model the interactions among the in-context examples. Limitations There are two limitations to this work. First, ByCS follows the simple assumption that the influence of each in-context example is independent and treats each in-context example individually, neglecting the contextual interactions between in-context examples. This approximation may not suit scenarios in which the number of in-context examples is large. Second, ByCS requires sufficient contextual diversity to select optimal examples, since it depends on text similarity to evaluate the inverse inference results; ByCS may therefore suffer a performance penalty when applied to short-answer datasets. Future work includes enhancing ByCS in more scenarios. Ethics Statement This work does not give rise to any ethical risks or issues. All the models and data used in this paper are publicly accessible and used under licenses." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.12558v1", |
| "title": "Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making", |
| "abstract": "As large language models (LLMs) advance to produce human-like arguments in\nsome contexts, the number of settings applicable for human-AI collaboration\nbroadens. Specifically, we focus on subjective decision-making, where a\ndecision is contextual, open to interpretation, and based on one's beliefs and\nvalues. In such cases, having multiple arguments and perspectives might be\nparticularly useful for the decision-maker. Using subtle sexism online as an\nunderstudied application of subjective decision-making, we suggest that LLM\noutput could effectively provide diverse argumentation to enrich subjective\nhuman decision-making. To evaluate the applicability of this case, we conducted\nan interview study (N=20) where participants evaluated the perceived\nauthorship, relevance, convincingness, and trustworthiness of human and\nAI-generated explanation-text, generated in response to instances of subtle\nsexism from the internet. In this workshop paper, we focus on one troubling\ntrend in our results related to opinions and experiences displayed in LLM\nargumentation. We found that participants rated explanations that contained\nthese characteristics as more convincing and trustworthy, particularly so when\nthose opinions and experiences aligned with their own opinions and experiences.\nWe describe our findings, discuss the troubling role that confirmation bias\nplays, and bring attention to the ethical challenges surrounding the AI\ngeneration of human-like experiences.", |
| "authors": "Sharon Ferguson, Paula Akemi Aoyagui, Young-Ho Kim, Anastasia Kuzminykh", |
| "published": "2024-04-19", |
| "updated": "2024-04-19", |
| "primary_cat": "cs.HC", |
| "cats": [ |
| "cs.HC" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Human-AI collaborative decision-making aims for a complementary performance [3, 4, 10, 15, 29], where human and AI partners together achieve a better outcome than they would individually. In ambiguous and open-to-interpretation scenarios where there is no ground truth, Ferguson et al. [11] have explored the use of Large Language Models (LLMs) to support human decision-makers by surfacing various viewpoints [24]. One example of such ambiguous scenarios is the domain of hate speech detection, particularly sexism, where subtle hate speech is more challenging to identify and remove automatically [20]. Benokraitis [13] describes subtle sexism as a less visible form of discrimination that is based on gender and is often undetected, accepted as normal, or even considered benevolent. In fact, research has shown that the assessment of sexism can be highly subjective, depending on an individual's personal values and gender ideologies, and is thus open to interpretation [25]. In a social media context, for example, the same tweet can be considered sexist by one person but not by another [14]. This ambiguity can be risky: while there is a risk of harm if hateful posts spread through the internet [37], studies have also shown that hate speech filtering algorithms can unintentionally harm LGBTQ communities by mistakenly flagging words that would be considered offensive in other contexts [27, 30]. They also significantly over-restrict African American English [9]. Recent research suggests that human-AI collaboration, towards any goal, requires the AI not just to make a recommendation (i.e., to remove or not remove a potentially sexist post from social media) but also to be able to explain the reasoning behind it in a way that is relevant, convincing, and trustworthy for the user [33, 38].
Further, when we specifically consider these subjective cases, Alm [1] argues that we need to move beyond traditional metrics and evaluate user satisfaction, which can be measured in numerous ways. Firstly, relevance is often used as an evaluation criterion for explanations [16, 22, 33], largely based on how useful they are to the human decision-maker and how closely they represent the scenario or decision in question. Secondly, explanations must be convincing to enable collaborative decisions, especially in ambiguous settings [33], and previous work [28] has shown that LLM-produced arguments can be as persuasive as human-authored ones. Thirdly, trustworthiness is pointed to as a key criterion for the acceptance of a recommendation in decision-making tasks, with plenty of work exploring how Explainable AI (XAI) can be leveraged to build [12, 18, 23, 38] and calibrate [7, 39, 41, 42] trust. While truthfulness is undoubtedly also an important metric in objective decision-making, it is a challenging evaluation metric for subjective cases where there is no single ground truth. In summary, in many subjective decision-making scenarios, one's personal values and lived experiences can heavily influence one's assessment of a scenario. Given the importance of values, beliefs, and experiences, it would follow that the presence of beliefs or experiences in AI-generated input influences how a human perceives that input. While we know that current LLMs can generate uniquely human-like attributes, and humans find this troubling [35], little is known about how users perceive representations of personal opinion and experience in explanations in subjective decision-making.
In the context of politics, one study found that sharing personal experiences about harm was more convincing than sharing facts [21], as everyone could agree that harm should be avoided. Other work has shown that LLMs can generate arguments [28] and that they commonly reiterate certain opinions [32]. However, we don't yet know how these findings extend to other contexts, or how humans evaluate them. Thus, we address the following research questions: 1) Are opinions and experiences perceived by humans in AI-generated explanations? And 2) How are these opinions and experiences perceived by users? To answer these research questions, we conducted an empirical study with 20 participants to explore how they evaluate human- and AI-generated text explanations in the context of subtle sexism. We ask participants to imagine that they are in a decision-making context and must evaluate whether the provided scenario is sexist or not using the explanation provided. We ask them to assess who authored the explanation, and how relevant, convincing, or trustworthy it is. We found that personal opinions and personal experiences were identified in both human- and AI-authored explanations, and participants described these as trustworthy. Further, an alignment between the opinions and experiences shown in the explanation and the participant's own beliefs exaggerated this effect, suggesting harmful cognitive biases at play. We hope to bring this scoped finding to the CHI community to start a discussion on how the human-like ability to generate personal beliefs and experiences influences perceptions of trust, and how we can consider this finding in the design of collaborative systems.", |
| "main_content": "We conducted a set of semi-structured interviews with 20 participants to gauge their perception of human- and AI-generated explanations. This study was approved by the university's research ethics board, and all participants provided informed consent. Participants were shown eight scenarios with accompanying explanations of subtle sexism, and asked to imagine themselves as part of collaborative decision-making on whether the scenario constitutes a case of subtle sexism. Our research methodology can be seen in Figure 1. The scenarios and human explanations were collected from online discussion sites such as Reddit1, The Everyday Sexism Project2, and Twitter3. We selected scenarios (descriptions of events) that were paired with an interpretation or explanation of the scenario and why it is or is not sexist. The complete dataset contained 117 scenarios and accompanying explanations, which are representative of the “everyday explanations” that humans use when discussing sexism. To collect the AI-generated explanation text for these scenarios, we used GPT-3 [5], which was the state-of-the-art large language model at the time. We prompted the model using the question-answer feature, asking “Is this scenario sexist: {{scenario}}. Why or why not?” To ensure that a coherent explanation text was generated for each scenario, we prompted the model three times per scenario, resulting in 351 AI-generated explanation texts. To keep the interviews at an appropriate length, we chose eight scenario-explanation pairs to present to the participants: four explanations generated by GPT-3, and four collected from online discussion sites. These eight pairs were chosen from the larger dataset based on the following criteria: the explanation was coherent; there was a balance of argumentative stance (it is sexist vs.
it is not sexist); and the length was appropriate for an interview, with both the scenario and explanation being less than five sentences. These chosen explanations also represented some of the higher-quality human explanations in the dataset. The text-based output from LLMs can be displayed to users in multiple modalities, which we know influence the perception of explanations [31]. As part of the larger study, we manipulated whether users were presented the explanation in text or audio form, though we do not focus on the outcome of this manipulation in this short paper. In the semi-structured interviews, we collected demographic information and asked introductory questions to gauge participants\u2019 familiarity with AI technology. Of the 20 participants, 10 identified as women, nine as men and one as non-binary. Participants averaged 30 years old (min: 20, max: 56) and spanned various roles from student to company executive both within and outside of AI. Most participants said they often use conversational AI, while few used chatbots regularly. The rest of the interview contained eight scenario and explanation pairs, each with the same line of questioning. We started this portion of the study by explaining the collaborative decision-making context, and asking participants to imagine that they had to decide on whether a given scenario was sexist or not, and they had input from another party, who could be a human or an AI, but they were not aware of which one. We also briefly described how the AI-generated texts were produced by stating that it was not a model specifically trained on sexism, but just a general language model. We showed participants the scenario and explanation, and asked whether the explanation was generated by a human or an AI model, and why. 
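The GPT-3 prompting step described above reduces to filling one fixed template per scenario and sampling the model several times. A minimal sketch of that procedure, where `generate` is a hypothetical stand-in for a call to the language-model API (not part of the study's code):

```python
def build_prompt(scenario: str) -> str:
    # Question-answer style template used to prompt GPT-3 in the study
    return f"Is this scenario sexist: {scenario}. Why or why not?"

def collect_explanations(scenarios, generate, samples_per_scenario=3):
    """Prompt the model several times per scenario so that a coherent
    explanation text is obtained for each; returns (scenario, text) pairs."""
    outputs = []
    for scenario in scenarios:
        prompt = build_prompt(scenario)
        for _ in range(samples_per_scenario):
            outputs.append((scenario, generate(prompt)))
    return outputs

# With the study's 117 scenarios and 3 samples each, this yields
# 117 x 3 = 351 candidate explanation texts, matching the reported count.
```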
1: www.reddit.com; 2: www.everydaysexism.com; 3: www.twitter.com
Figure 1: Overview of the research methodology.
We then asked participants to rate and explain their rating for the explanation on three qualities: relevance, convincingness, and trustworthiness, based on the dictionary definitions for these terms. At the end of the interview, if participants were interested, we shared which explanations were human- and which were AI-generated. Literature shows that relevance, convincingness, and trustworthiness can be evaluated objectively and subjectively [6, 16–18, 33, 38]. Thus, we collected both quantitative (scales) and qualitative data for eight explanations across 20 participants. To protect the privacy of the posters whose scenarios and explanations are used in this study, we provide descriptions for the collected scenarios and explanations in Table 1, but do not provide the verbatim text. Verbatim text is provided for AI-generated content. We anonymized and transcribed interviews before following the Braun and Clarke thematic analysis method [8]. Two researchers went through two rounds of initial coding, where three randomly chosen interviews were open-coded individually by both researchers each time. The researchers met after each coding round to consolidate and organize the open codes, resulting in two iterations of the coding scheme before it was finalized. At this time, the researchers re-coded all twenty interviews using the finalized coding scheme. While we also collected quantitative data for the relevance, convincingness, and trustworthiness scales, in this paper we focus on the qualitative results.
3 RESULTS
In this section, we provide evidence for the perception of both personal opinions and experiences within human and AI explanation text.
3.1 Recognizing Opinions and Experiences
Participants recognized the elements of personal opinion and personal experience in explanations, both those authored by humans and those generated by AI. Personal opinion was identified 20 times in our interviews — mostly in reference to both actual and perceived human-authored explanations. Personal opinions were defined as instances where the explanation revealed personal beliefs or points of view, for example: “...So it had a very strong opinion and then it supported the opinion afterwards by again, just kind of distilling down what the scenario was talking about...” [P4] Personal experiences were identified 56 times in the interviews, and were primarily discussed in terms of both actual and perceived human examples. Personal experiences refer to the presence of a personal example or the way something personally affected the explanation author, such as: “This sounds like it comes from a place of having experienced this”. [P5] We found that these explanation elements were identified by participants across scenarios, thus being scenario-independent: opinions and experiences were recognized by participants in reference to seven of eight scenarios. We also found no evidence that individual differences between participants drove this identification: opinions were identified by 11 out of 20 participants, and experiences by 17. Both opinion and experience were often brought up when describing why a participant believed the explanation was written by a human (note that this does not mean the explanation was actually human-generated): “It's coming from a very personal story perspective. So it's one person's perspective which is good.
So I wouldn't say it's a trustworthy news source, but at the same time, I do trust it as someone's own personal opinion” [P12] Participants described how the specific examples contained in explanations had to come from lived experiences, and would be challenging to train an AI to replicate: “...It's actually...coming up with real-life circumstances and scenarios of why this might not be sexist...So I feel like whoever explained this has real-world experience and it's not just something that was trained to say the right thing”. [P5] One participant even went as far as to say that it would be unnerving to think of an AI that could generate text as if from personal experience: “...it was kind of reflecting on their own domestic chore experiences and bringing that into the argument that just instantly made me feel like it was a human cause you don't want to think about an AI like that. It's just a bit unnerving...” [P9]
Table 1: Description of the Scenarios and Explanations used in the interview study. Verbatim scenarios and explanations (written by humans) are not included to protect poster privacy, though AI-generated text is included verbatim. H = human-authored, AI = AI-authored.
1. Scenario: Commentary on the “massively disproportionate” number of women taking STEM subjects in school, and how this may be caused by “society's attitude towards women and these areas”; people make surprised comments when women say they are studying high-level science. (H1) The explanation states that while these comments are not meant to offend, they are often made at critical times when children are choosing a career path, and can thus cause women to stay out of science to avoid being seen as “abnormal”.
2. Scenario: An adult woman is referred to as a girl, by a man, in a workplace setting. (H2) The explanation states that this scenario drives the reader “insane” and that it would not be appropriate to do the same to a man.
3. Scenario: The scenario contains just the word “hysterical.” (H3) The explanation describes how the word is triggering because its Greek root refers to the word for “uterus”, and the word is rarely used to describe a man.
4. Scenario: A family where the father is upset if the daughter does not help to cook or clean, but the brothers are allowed to relax. (H4) The explanation states that while people talk about “men's work”, we should not assign chores based on gender; the author describes enjoying yard work growing up, notes that some men would prefer to be in the kitchen, and says both ways are sexist.
5. Scenario: Many examples of men boiling things down to a woman's “biology”: being emotional due to your period, being able to handle pain due to childbirth, and women getting drunk faster than men. (AI1) “It is sexist. It's a way of undermining women by reducing their capabilities as human beings to nothing more than their physical features. It's a way of saying that women have no control over their behavior. It's insulting. And it's not helping.”
6. Scenario: Two quotes from books containing the lines “courage uncommon in a woman” and “grip that was decidedly unfeminine.” (AI2) “Yes, and I think that's sexist because it is saying that a woman not acting like a woman is admirable.”
7. Scenario: The term “guys” is used when addressing both men and women. (AI3) “No, it isn't sexist. The word 'guys' is not a gendered word. It is not sexist.”
8. Scenario: “Benevolent sexism”, with the examples of women being called “dear” by senior male colleagues or “girls” when hanging out with other female co-workers. (AI4) “I am not sure that it is sexist. It is a term that refers to the way men can try to be inclusive of women or supportive of women but with condescending or patronising undertones. I think it is important to look at how we treat each other and to look at how we might be sexist without meaning it.”
In terms of evaluation of the explanation, participants noted that personal opinions are worth considering: “Trustworthy? Yeah. I mean it is based on someone's opinion and it is based on a different situation that the speaker had provided. So is it reliable? Yes, it's reliable.” [P8]. One participant even shared that personal experiences are only convincing when they are real, meaning they reflect a real event that a human experienced, suggesting that an AI imitation of this would not be convincing: “But I think if I was standing there and I was in a conversation and there was a human woman saying this in rebuttal, and I would say it's a four or a five because it's someone who is speaking from lived experience.” [P15] However, many also noted that sometimes an author's opinion or description of their experience is not enough to make an explanation trustworthy and convincing. For that effect to be achieved, the opinion or experience must be backed by facts, supporting evidence, or sources. “Okay, I don't see any sort of source for these facts that are being stated. I don't see any, yeah so it's just an opinion of AI or a person and I would have to see evidence.” [P2] Some participants argued that a trustworthy explanation would contain both personal components such as opinions and experiences, as well as logic, facts, or statistics: “I think yes [it is trustworthy], because it seems like it doesn't come only from academic experience but also from personal experience firsthand.
So it seems, yeah I think it appeals to our senses to trust one that can handle both [facts and opinions].” [P10] In summary, participants recognize the presence of opinions and experiences displayed in human- and AI-generated explanations, across various scenarios. The presence of these explanation elements made explanations more convincing and trustworthy when used in subjective decision-making, though they should be combined with objective facts and sources.
3.2 Comparing Opinions and Experiences
Interestingly, even though we did not directly inquire about participants' opinions about a scenario or their similar personal experiences, they often offered these as a justification for their answers. In fact, they recognized the author's opinion in the explanation and discussed how this opinion did or did not align with their personal opinion, and how this alignment, or lack thereof, influenced their assessment of the explanation. For example: “...it has a lot of truths to it. You don't really hear someone use the word hysterical to describe another man.” [P7] This comparison with their own opinion was discussed 97 times, in reference to all scenarios, by 19 participants, and overwhelmingly in response to perceived human explanations. Despite the fact that participants often made this comparison in regard to explanations they thought were human-authored, these explanations were often actually AI-authored. This comparison echoes literature [25] that exposed the weight of personal values when considering sexism. Overall, when an opinion displayed in the explanation aligned with the participant's opinion, they were more likely to assess that a human wrote it. When an explanation aligned with, or related to, their personal experience in the world, participants described that this felt “human”: “This feels like a response that I would have, personally.
This is something probably that I would see myself saying. So I would guess that this is a human response.” [P4] “I find it relatable, that explanation to, to my career, to my job and everything. So I feel connected to that explanation or that makes me feel that it was done by a human.” [P11] In terms of comparing experiences displayed in the explanation to their own experiences, this was found 48 times in our interviews, in regard to all scenarios, and brought up by 17 participants. In this case, participants made this comparison in explanations that both were actually and were perceived to be human-authored. While in general we found that opinions and experiences had a more important role in the evaluation of convincingness and trustworthiness than of relevance, there were notable examples where participants mentioned alignment with personal experience in their assessment of relevance. Perhaps they used their personal experience as a middle ground between the scenario and explanation. If the scenario aligned with their personal experience, as did the explanation, the explanation was relevant to the scenario: “I think it's definitely relevant, and even I believe that the word hysterical is typically associated with women and that kind of reinforces this misconception that we as a society have that women are the ones who super emotional and they get carried away and they can act in a crazy manner or be hysterical. So yeah, I think the explanation makes sense to me” [P13] Furthermore, when participants noticed alignment with their personal opinions and experiences within the explanation, they would find the explanation to be more convincing: “I like the reasoning, it's pretty similar... It's just in line with my personal values system” [P3] “I did relate to it. I don't see hysterical referred to, [or] used as a descriptor for men or haven't historically seen that.
And so that's the piece where it was like, oh yeah, that is an observed behavior that I've also noticed. So I feel convinced by that...” [P15] Further, a lack of alignment makes things less convincing: “...if I was a guy and you're trying to convince me based on this explanation, I wouldn't really be convinced because that's what I'm used to hearing the entire time. I'm used to saying that's what I, that's the way everyone around me talks. Yeah. So I wouldn't be convinced by this explanation.” [P15] In terms of trustworthiness, participants described an emotional connection, or an emotional appeal, that was brought about when the explanation aligned with their experience: “And I think because it also speaks to some of my own experiences and the experiences of some of my friends and colleagues growing up, I'm, it just intuitively fosters this connection...[it] speaks about an experience that a lot of people have had growing up and choosing what they wanna do in life and their career paths. So there is some emotional appeal that is going on there” [P9] In addition, a few participants also considered that while they agree with the explanation's stance, someone else with a different opinion might make a different assessment: “Again, same thing. I do agree with what's been said, so I'm like, yeah, I trust that a lot. But if I put [myself] in someone else's shoes, if I did not agree with what's been said would it be trustworthy? I think it's a three. I could react or someone could react back and say, ‘Ah, I have a different opinion’. Of course, go ahead. So based on what I know and what I believe, I trust it a lot...” [P8] And lastly, we found that in some instances, participants said that the lack of alignment with their own opinion made it hard to evaluate the explanation.
In this case, the participant recognized the impact that this lack of alignment had on their evaluation, and was thus unable to provide a rating: “I don't really have enough information to make the decision if it's convincing or trustworthy mainly because I don't agree with the explanation. So I'm not really finding an answer to whether it's trustworthy or convincing.” [P17] In summary, participants tended to automatically compare the opinions and experiences they perceived within the explanation to their own opinions and experiences. Whether or not these opinions and experiences aligned with their own influenced their evaluation of the explanation, making it more convincing and trustworthy, and even making it challenging to evaluate the explanation when there was a lack of alignment.
4 DISCUSSION AND IMPLICATIONS
In this work, we argue that as LLMs' abilities advance, they are becoming more suitable collaboration partners for humans, specifically in the context of subjective decision-making. The role of AI output in these contexts is to present new information and perspectives to the human decision-maker. Explanations in collaborative, subjective decision-making are less likely to be grounded in facts than explanations for objective decisions; it is more common that opinions and personal experiences comprise these new perspectives. As opinions and personal experiences may be considered uniquely human attributes, we were interested in whether and how these attributes were perceived in AI explanations. We found that humans did identify opinions and experiences in explanations for subtle sexism scenarios, and perceived them to be both convincing and trustworthy, making them important for subjective decision-making.
While an argumentative stance (i.e., sexist or not sexist) can be depicted in some non-textual explanation formats, it would be hard to share personal opinions (such as “I don't think women should be confined to traditionally feminine interests”) and personal experiences, which are normally described in a narrative format, in forms other than text. Thus, word-based explanations are perceived to contain opinions and experiences, which are important for subjective decision-making. Personal opinions and experiences were also often attributed to human authorship. As has also been shown in past work in the context of emotions [36], we have demonstrated that these modern LLMs can generate plausibly human text elements; in our case, opinions and experiences. Thus, while these elements may aid in subjective decision-making, we have to consider the ethical implications of participants potentially believing these explanations come from humans. In fact, we provide evidence that the relationship between the explanation author and the explanation evaluation is causal in some cases — some participants described how they value humans' individual experiences, and thus they would trust a human's explanation of their experience, but not an AI-generated replication. This means that if we use LLMs in subjective decision-making collaborative systems, we must be sure to disclose their contributions as AI-generated, even if this might harm the trustworthiness of the explanation. In many contexts, humans distrust AI-generated text [19, 40]; thus, future work can investigate which elements or factors need to be present in AI-generated text to calibrate trust.
Explanations which featured opinions and experiences were also found to be convincing and trustworthy, which aligns with past work in other contexts, where researchers found that personal experiences regarding politics are more convincing than facts [21], and make people seem more rational and worthy of respect [34]. However, opinions alone were often noted as not enough to completely convince participants. This finding suggests one way in which we may need to fine-tune language models if used in this context. Our results also show that personal opinions and experiences, in conjunction with statistics or other forms of evidence, would be most convincing — as was also suggested in past work on political disagreement [21]. Future work can identify how to prompt or adjust the design of language models to provide this balance. Perhaps the more troublesome finding was that participants, unprompted, judged how these opinions and experiences aligned with their own, greatly influencing their overall perception. When an explanation contains an opinion or experience similar to the participant's own, they tend to rate the explanation overall as more trustworthy and convincing. This is known as confirmation bias – defined as “seeking or interpreting evidence in ways that are partial to existing beliefs” [26, p. 175] – which poses the risk of reinforcing the user's existing opinions and negating the intention of providing new perspectives to the decision-making process. Thus, we have to be careful when deciding on the opinions and experiences present in these explanations. Perhaps in these collaborative decision-making settings, we should present participants with multiple LLM outputs, representing multiple opinions and experiences, or even prompt models to describe different perspectives. It has been shown that these models can provide different opinions, although they may provide one more commonly than another [32].
Further, recent work has shown that LLMs can be trained to generate widely accepted outputs that can help people with diverse viewpoints reach consensus [2], suggesting that these models could be used both to generate different perspectives and to resolve them. This would have the benefit of being perceived as convincing and trustworthy, while still providing new perspectives and information that can help participants make their decisions.
5 LIMITATIONS
Because we collected subtle sexism scenarios from naturally occurring internet discussion sites, the variation of scenarios studied is limited. For instance, we found that the large majority of the posts regarding subtle sexism argued for why a scenario was sexist, limiting our ability to assess human explanations arguing why a scenario was not sexist. Further, because we could only fit a small number of scenario-explanation pairs in the interview, we also cannot comment on the generalizability of our findings to subtle sexism in various contexts and other types of hate speech. Another limitation of this work is the small sample size. The majority of our study participants (though not all) came from Western communities, and may have different beliefs and lived experiences from those in other parts of the world. Additionally, because online forums and social media tend to be populated mostly by Western communities, this could influence the alignment that we see between experiences displayed in explanations and the participant's own experiences. Future work should extend this investigation to understand how cultural differences may influence the perception of alignment. Furthermore, LLMs have been shown to reflect societal biases, and can create hate speech themselves. While we filtered out this type of content in our study, future work can measure the biases in these LLM-generated texts, and how biased opinions and experiences influence human perceptions.
Lastly, as part of the larger scope of the study, we asked participants to describe why they believed a text to be human- or AI-authored. This focus prevented us from being able to analyze how knowledge of the explanation source affected perception, which is an important next step in the work.
6 CONCLUSION & FUTURE WORK
In this work, we present an interview study of participants' perceptions of human and AI explanations for subjective decision-making, specifically using the example of identifying subtle sexism. We argue that in subjective cases, multiple perspectives presented in collaboration can be helpful. To make this feasible, we motivate this work with the idea that LLMs can be used to generate these perspectives. We asked participants to evaluate human- and AI-generated explanations as if they were participating in this collaborative process. We found that participants often perceived the explanations, both those authored by humans and by AI, to contain personal opinions and experiences. The presence of these elements typically led participants to view the explanation as convincing and trustworthy, and also to believe that it was written by a human. Further, we show that whether these opinions and experiences are aligned with the participant's own opinions and experiences is even more important for trust, highlighting a troubling tendency to conform to confirmation bias, negating the original intent of collaborative decision-making. Thus, we show that these elements of explanations are particularly important for collaboration in subjective hate-speech detection, and we motivate future work to address how we might best provide multiple, differing opinions and experiences to collaborative human decision-makers, and how we can avoid the ethical challenges that arise when AI generates human-like opinions and experiences." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.15588v1", |
| "title": "Minimal Evidence Group Identification for Claim Verification", |
| "abstract": "Claim verification in real-world settings (e.g. against a large collection of\ncandidate evidences retrieved from the web) typically requires identifying and\naggregating a complete set of evidence pieces that collectively provide full\nsupport to the claim. The problem becomes particularly challenging when there\nexists distinct sets of evidence that could be used to verify the claim from\ndifferent perspectives. In this paper, we formally define and study the problem\nof identifying such minimal evidence groups (MEGs) for claim verification. We\nshow that MEG identification can be reduced from Set Cover problem, based on\nentailment inference of whether a given evidence group provides full/partial\nsupport to a claim. Our proposed approach achieves 18.4% and 34.8% absolute\nimprovements on the WiCE and SciFact datasets over LLM prompting. Finally, we\ndemonstrate the benefits of MEGs in downstream applications such as claim\ngeneration.", |
| "authors": "Xiangci Li, Sihao Chen, Rajvi Kapadia, Jessica Ouyang, Fan Zhang", |
| "published": "2024-04-24", |
| "updated": "2024-04-24", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "The task of claim verification predicts whether a claim is supported by the presented evidence (Thorne et al., 2018; Chen et al., 2023a). A claim verification model is expected to identify the correct evidence pieces (EPs; e.g. evidence sentences or snippets) among tens of retrieved candidate evidence, but a practical challenge lies in that there might exist multiple sets of evidence that verify the claim from different perspectives. (* Work performed while the authors were interning at Google.) Figure 1 shows an example where, given a claim and some retrieved evidence, there exist two different — but both valid — ways of supporting the claim.
Figure 1: The problem of minimal evidence group identification for claim verification: given a claim and a list of candidate evidence pieces, the task is to identify the sets of minimal, non-redundant evidence, where each set provides full support for the claim. (In the figure, the claim states that on October 17, 2018, one year after Downie's death, a previously unreleased studio recording of the song “Wait So Long” was played on K-Rock; among the candidate evidence pieces e1–e4 from a K-ROCK 105.7 article titled “Hear Previously Unreleased Tragically Hip Song ‘Wait So Long’”, both e1 ∪ e2 and e2 ∪ e3 entail the claim.)
While humans can quickly identify mutually redundant EPs, e.g. e1 and e3 in Figure 1, and propose plausible combinations of EPs as evidence groups (EGs, formally defined in Section 3.1), existing claim verification systems (Dagan et al.,
2005; Thorne et al., 2018; Wadden et al., 2020; Schuster et al., 2021; Chen et al., 2023a,b) focus only on the relationship between the claim and individual EPs, without considering the co-supporting relationships among EPs. This becomes problematic because the retrieved EPs might be redundant, or an individual EP may only partially support the claim. An EG with redundant EPs makes it more difficult to explain the reasoning for supporting the claim, while an EG composed of partially supporting EPs may still not fully support the claim, resulting in logical flaws. These problematic outputs not only confuse human verifiers, but also hurt the performance of downstream tasks. In this paper, we introduce the problem of identifying minimal evidence groups (MEGs) from retrieved evidence candidates. Conceptually, an MEG is composed of EPs with the following properties: (1) Sufficiency: each MEG fully supports the veracity of the claim; (2) Non-redundancy: the EPs in an MEG are not redundant with each other; and (3) Minimality: the number of EPs in each MEG is minimal. We formally define the task of MEG identification and show that classic claim verification approaches cannot effectively solve this problem. We propose a simple yet practical approach that decomposes it into support prediction and evidence group merging. Our proposed approach significantly outperforms the baseline of directly prompting a large language model (LLM) by 18.4% and 34.8% absolute precision scores on the WiCE (Kamoi et al., 2023) and SciFact (Wadden et al., 2020) benchmarks. Finally, we demonstrate the benefit of MEGs for saving computation budget in the downstream task of claim generation.", |
| "main_content": "Classic claim verification (Thorne et al., 2018; Chen et al., 2023a) consists of three steps: evidence retrieval, evidence selection, and stance prediction. Evidence retrieval performs coarse-grained filtering of EPs from thousands of candidates. Evidence selection and stance prediction perform fine-grained selection of EPs and predict whether the claim is supported by the selected EPs. MEG identification builds on classic claim verification by restricting evidence selection and stance prediction to predict a minimal group of EPs that fully supports the claim. To address the problem that claim verification systems (Dagan et al., 2005; Wadden et al., 2020; Schuster et al., 2021; Chen et al., 2023b) may predict EPs that only partially support the claim, Laban et al. (2022); Schuster et al. (2022); Kamoi et al. (2023) aggregated individual EPs\u2019 entailment scores into EG scores. However, they did not address the problem of redundancy within an EG; we propose MEG identification to fill this gap. The closest work to ours is SciFact (Wadden et al., 2020), which annotates \u201cminimal evidence sets\u201d for each claim. However, the SciFact shared task does not penalize non-minimal EGs, and consequently models that evaluate on SciFact (Pradeep et al., 2021; Li et al., 2021; Zhang et al., 2021; Wadden et al., 2022) are trained on the union of EGs from different human annotators, which is no longer minimal. Similarly, Thorne et al. (2018); Chen et al. (2023b); Kamoi et al. (2023) collect (possibly redundant) EGs from multiple annotators and use their union as training labels. As a result, existing models prioritize EP recall and are not directly comparable to MEG identification models. 3 Minimal Evidence Groups 3.1 Problem Formulation MEG identification builds on the classic claim verification task (Thorne et al., 2018; Chen et al., 2023a). Formally, claim verification takes a claim c and a list of candidate EPs E = {e1, e2, ...} as input. 
The evidence selection step retrieves all EPs that are relevant to c, and the stance prediction step predicts whether the selected EPs support c.[1] In Figure 1, e1, e2, e3 all support c. A set of fully supporting EPs is called an evidence group (EG). MEG identification requires the EGs to be sufficient, non-redundant, and minimal. We consider a set of EPs S \u2286 E to fully or partially support a claim c if the EPs in S collectively entail all or only part of c, respectively; S does not support c if none of the EPs in S entail c. If S fully supports c, it is an EG; an MEG is a minimal EG such that none of its EPs are redundant in terms of supporting c. In Figure 1, e1 and e3 are redundant; {e1, e2} and {e2, e3} are MEGs that fully support c. 3.2 Task Evaluation We focus on precision-oriented scores (precision and F0.5) to penalize predicting non-minimal EGs because we observe from prior claim verification datasets (Thorne et al., 2018; Wadden et al., 2020; Chen et al., 2023b; Kamoi et al., 2023) that (1) one MEG is sufficient for claim verification in practice; (2) humans are good at finding one plausible MEG but struggle to exhaustively find all valid MEGs; and (3) different annotators propose distinct MEGs. Given a claim c with reference MEGs RG = {G1, G2, ...}, we measure the following metrics: Exact match of MEGs treats each reference MEG atomically and considers a predicted MEG to be correct if it exactly matches a reference MEG. Best soft match of MEGs gives partial credit to the predicted MEGs. We calculate the EP-level scores between the predicted MEG G\u2032 and the most similar reference MEG, chosen by \u02c6G = arg max_{Gi \u2208 RG} F0.5(G\u2032, Gi). 4 MEG Identification Approach The challenge of MEG identification is to find the smallest set of EPs that fully supports the claim. 
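The two evaluation metrics of Section 3.2 can be sketched in Python. This is an illustrative sketch, not the paper's released code: representing MEGs as sets of EP identifiers and all function names are our own choices.

```python
def f_beta(pred, ref, beta=0.5):
    """EP-level F_beta between a predicted and a reference evidence group."""
    pred, ref = set(pred), set(ref)
    tp = len(pred & ref)
    if tp == 0:
        return 0.0
    p, r = tp / len(pred), tp / len(ref)
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

def exact_match(pred_meg, reference_megs):
    """Exact match: a predicted MEG is correct iff it equals some reference MEG."""
    return any(set(pred_meg) == set(g) for g in reference_megs)

def best_soft_match(pred_meg, reference_megs, beta=0.5):
    """Best soft match: score against the most similar reference MEG (argmax of F_0.5)."""
    return max(f_beta(pred_meg, g, beta) for g in reference_megs)
```

For instance, with reference MEGs {e1, e2} and {e2, e3}, the prediction {e1} fails the exact match but receives partial credit against its closest reference, {e1, e2}.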
As discussed in Section 2, classic claim verification models treat the EP as the basic unit; they are neither designed nor trained for groups of evidence. [Footnote 1: We limit our scope to claim support/non-support, ignoring contradictions for simplicity. See Section 5 for discussion.] Our experiments of prompting directly with LLMs also show poor performance (Table 1, \u201cDirect\u201d).[2] As Algorithm 1 shows, we decompose MEG identification into two steps: (1) predicting whether a candidate set of EPs fully supports, partially supports, or does not support the claim and (2) bottom-up merging partially supporting groups in search of a fully supporting group.
Algorithm 1: MEG identification with a support prediction Model (simplified for illustration; see Appendix Section C.2 for details).
Require: c, E = {e1, e2, ..., en}, Model
Require: max_size \u25b7 Max size of EGs to consider
MEG \u2190 [] \u25b7 Proposed MEGs
for size in 1 ... min(|E|, max_size) do
  CS \u2190 makeCombinations(c, E, size) \u25b7 List of notRedundant combinations of partially supporting EPs
  for S in CS do
    label \u2190 Model(c, S)
    if label is fully support then MEG.append(S) end if
  end for
  if len(MEG) > 0 then break end if
end for
Output MEG
The support prediction Model can be implemented by any reasonable approach, such as prompting LLMs or fine-tuning models like T5 (Raffel et al., 2020). When merging groups, we increment the overall group size by 1 at each step. Note that if we only evaluate the base case with size=1, this is equivalent to classic claim verification (Thorne et al., 2018; Wadden et al., 2020; Schuster et al., 2021; Kamoi et al., 2023). Based on the definition of MEG (Section 3.1), we derive three principles to prune the problem space for a tractable solution: (1) any superset of an MEG fully supports the claim c; (2) any nonempty subset of an MEG partially supports c; and (3) if a set of EPs S fully supports or does not support c, then S is not a strict subset of any MEG. 
Therefore, we stop merging sets that are predicted as fully supporting or not supporting to maintain the non-redundancy and minimality of the candidate EP sets. In addition, when choosing a pair of sets to merge, we prune the candidate merge partners for each set using a redundancy checker notRedundant (implemented as a zero-shot LLM prompt; see Appendix C.2). Finally, upon finding a fully supporting set, we stop merging and return all fully supporting sets of the current size. [Footnote 2: The explicit verification of combinations of EPs reduces from Set Cover and is NP-hard (see proof in Appendix A).] 5 Intrinsic Evaluation 5.1 Experimental Settings 5.1.1 Datasets We perform filtering to convert classic claim verification datasets to align with our MEG identification task. Both of the datasets listed below annotate EGs with multiple annotators. We assume that every human-annotated EG fully supports its claim, every subset of an EG partially supports its claim, and all non-labeled sentences do not support the claim. In addition, we assume each reference EG to be an MEG proposed by a different annotator. SciFact (Wadden et al., 2020) is a biomedical fact-checking dataset and is the only existing dataset whose annotation instructions match the sufficiency, non-redundancy, and minimality requirements of MEGs. We remove all examples whose claims contradict the evidence, resulting in 268 samples from the development set. We use the non-contradictory EGs as-is. To distinguish it from the original SciFact dataset and task,[3] we call this modified dataset SciFact-MEG. WiCE (Kamoi et al., 2023) distinguishes EGs that fully or partially support claims from Wikipedia. We use the subclaim-level partition of the dataset and only use samples labeled as fully supporting, resulting in 528 samples from the test set. We call this modified dataset WiCE-MEG. 
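The bottom-up search of Algorithm 1 (Section 4), together with its pruning principles, can be sketched in Python. This is a simplified sketch: the support prediction Model and the notRedundant checker (LLM prompts in the paper) are passed in as plain callables, and the label constants and function names are illustrative.

```python
from itertools import combinations

FULL, PARTIAL, NONE = "fully_support", "partially_support", "not_support"

def find_megs(claim, eps, model, not_redundant, max_size=3):
    """Bottom-up search for minimal evidence groups (sketch of Algorithm 1).

    model(claim, group) returns FULL, PARTIAL, or NONE;
    not_redundant(group) prunes combinations with mutually redundant EPs.
    """
    pool = list(eps)  # EPs still worth combining
    megs = []
    for size in range(1, min(len(eps), max_size) + 1):
        candidates = [set(c) for c in combinations(pool, size) if not_redundant(c)]
        next_pool = set()
        for group in candidates:
            label = model(claim, group)
            if label == FULL:
                megs.append(group)       # minimal: no smaller size succeeded
            elif label == PARTIAL:
                next_pool.update(group)  # only partially supporting EPs keep growing
            # NONE: pruned -- such a set cannot be a strict subset of any MEG
        if megs:
            break  # stop at the smallest size with fully supporting groups
        pool = sorted(next_pool)
    return megs
```

With max_size=1 this collapses to classic claim verification over individual EPs, matching the base case discussed in Section 4.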
5.1.2 Implementation For both the support prediction Model and the notRedundant checker, we prompt PaLM-2L (Anil et al., 2023) with few-shot demonstrations and greedy decoding (see Appendix B). We follow Wan et al. (2023) to select the LLM\u2019s most confident examples for few-shot demonstrations. To prioritize precision, we take the top-1 predicted MEG, ranked according to the LLM\u2019s predicted fully supporting score, if multiple MEGs are predicted. 5.1.3 Baseline Approaches Direct prediction. We zero-shot prompt PaLM-2L (Anil et al., 2023) to predict the MEG via EP indices, given a claim and a list of candidate EPs (Appendix Table 6). Classic claim verification. To simulate classic claim verification without considering groups of EPs (Thorne et al., 2018; Wadden et al., 2020; Schuster et al., 2021; Kamoi et al., 2023), we use our proposed approach but early stop after computing size=1. If we find any fully supporting EP, the output MEG will be the same as our proposed approach. Otherwise, we concatenate all partially supporting EPs as a single EG. Classic claim verification with less redundancy (Classic+LR). Given the output from \u201cclassic claim verification\u201d above, we additionally remove EPs that cause redundancy, as predicted by the pair-wise notRedundant checker.[4] [Footnote 3: As discussed in Section 2, while the SciFact dataset annotates EGs that meet the requirements of MEGs, the task does not evaluate redundancy or minimality, only sufficiency.]
Table 1: Top-1 minimal evidence group identification performance. Examples with failed outputs are excluded for the baseline approach.
| Dataset | Approach | Exact Match Prec. | Best Soft Match Prec. | Best Soft Match Recall | Best Soft Match F0.5 |
|---|---|---|---|---|---|
| WiCE-MEG | Direct | 0.456 | 0.176 | 0.522 | 0.203 |
| WiCE-MEG | Classic | 0.568 | 0.338 | 0.554 | 0.367 |
| WiCE-MEG | Classic+LR | 0.570 | 0.425 | 0.526 | 0.442 |
| WiCE-MEG | Ours | 0.640 | 0.809 | 0.423 | 0.684 |
| SciFact-MEG | Direct | 0.243 | 0.235 | 0.652 | 0.269 |
| SciFact-MEG | Classic | 0.479 | 0.468 | 0.478 | 0.470 |
| SciFact-MEG | Classic+LR | 0.479 | 0.491 | 0.476 | 0.488 |
| SciFact-MEG | Ours | 0.591 | 0.612 | 0.352 | 0.533 |
5.2 Experimental Results Table 1 shows the top-1 MEG identification performance using the metrics introduced in Section 3.2. For both datasets, our approach significantly outperforms all baselines on precision and F0.5 scores. The baselines underperform our approach because their predicted MEGs contain too many EPs, especially the \u201cDirect\u201d LLM prompting baseline. Decomposing MEG identification into many individual entailment problems (\u201cClassic\u201d) greatly improves the precision score. Further removing pair-wise redundancy (\u201cClassic+LR\u201d) slightly improves performance, showing the impact of redundancy. Finally, although requiring significantly more computation, our bottom-up MEG identification approach performs the best because every combination of EPs is explicitly verified. [Footnote 4: We simply remove redundant combinations when size=2.] 6 Extrinsic Evaluation The non-redundancy of MEGs not only makes the evidence more human-readable, but also improves the performance of downstream applications. Inspired by Chen et al. (2023c), we use WiCE-MEG to highlight the MEG\u2019s minimality and sufficiency properties using claim generation as an example downstream task, with a computation budget measured in the number of words or sentences.
Table 2: Budgeted retrieval-augmented generation performance (ROUGE F1).
| Input Evidence | # Words | # Sents | R-1 | R-2 | R-L |
|---|---|---|---|---|---|
| Direct | 172.4 | 6.81 | 0.299 | 0.127 | 0.263 |
| First-k Direct | 34.1 | 1.15 | 0.282 | 0.114 | 0.250 |
| Classic | 85.0 | 3.20 | 0.282 | 0.120 | 0.250 |
| Classic+LR | 69.2 | 2.45 | 0.281 | 0.120 | 0.250 |
| Our MEGs | 39.5 | 1.29 | 0.289 | 0.121 | 0.254 |
| Gold MEGs | 37.0 | 1.31 | 0.294 | 0.126 | 0.259 |
| Gold UEGs | 71.7 | 2.78 | 0.302 | 0.128 | 0.267 |
| First-k gold UEGs | 33.0 | 1.31 | 0.264 | 0.101 | 0.232 |
6.1 Experimental Settings Since EGs fully entail their claims, they contain the information to reconstruct the claim. We compare the following input settings for the task of claim reconstruction using PaLM-2L (Anil et al., 2023): MEGs. 
We use the top-1 MEG obtained with our proposed approach, each baseline in Table 1, and the human-annotated reference EG with the smallest number of EPs for each claim. Union of EGs (UEGs). We take the union of reference EGs (from different annotators) for a claim. First-k. To simulate a low computation budget setting, we follow Chen et al. (2023c) in taking the first k EPs, where k is the size of the top-1 MEG. 6.2 Experimental Results Table 2 shows that both our predicted and gold MEG settings perform comparably to settings with much higher computation budgets, while significantly outperforming the most constrained \u201cfirst-k\u201d settings. These results indicate that (1) our proposed approach for MEG identification is effective; (2) MEGs contain complete information for the claim generation task; (3) MEGs are much more compact than EGs from other approaches, with more than 45% fewer words, allowing them to be used in low-computation scenarios. 7 Conclusion We have addressed the challenging scenario in claim verification where a model is expected to identify a minimal group of evidence pieces (EPs) among a relatively large amount of candidate evidence, and there might exist different sets of evidence that verify the claim from different perspectives. We formally define and study the problem of such minimal evidence group (MEG) identification and show that it can be reduced from a Set Cover-like problem. Our proposed approach achieves significant improvements over direct LLM prompting. Finally, we demonstrate the benefit of MEGs over classic claim verification approaches in downstream applications such as claim generation. Limitations Ignoring contradictions. In this work, we only consider supporting/non-supporting evidence for simplicity, and leave contradicting evidence for future work. Our proposed approach avoids the edge case of the full/partial entailment problem brought by contradiction. 
Nonetheless, we claim that contradiction can be regarded as the opposite of support, where our proposed concepts and approaches still apply with minor fixes. Reliability of human annotations. As we point out in Section 1, there is no gold-standard annotated dataset designed for this task, and it is practically difficult to enforce and verify the sufficiency, non-redundancy, and minimality requirements of MEGs in the existing annotations. In practice, unless explicitly stated, it is unknown whether the annotated EGs are simply relevant to or fully support the claim. Although human annotators are good at proposing salient EGs, annotators usually do not exhaustively find all possible EGs. Moreover, human annotators tend to over-select EPs for better contextualization, which breaks the non-redundancy and minimality requirements. As a result, we argue that the human annotations should only be treated as a reference, instead of an absolute gold standard. Therefore, the measured performance in Table 1 should be regarded as each approach\u2019s agreement with the human annotators, and does not necessarily measure MEG correctness. Definition of partial support. It is challenging to precisely define partial support. Even Kamoi et al. (2023), who proposed this label, did not clearly define it. Our proposed approaches do not rely on the precise definition of partial support but simply regard it as the intermediate label between not support and fully support, because the precise definition may vary case-by-case in different datasets that the support prediction Model is trained on. Because of this ambiguity, partial support is the most challenging label for LLMs to predict (Appendix B) and hurts the performance of MEG identification. Running time. 
Due to the NP-hardness (Appendix A) of the MEG identification problem, we trade off running time for higher performance; thus, the worst-case running time of the proposed solution is too long to be practically useful in a production system. Our proposed approach is currently more suitable for dataset creation, which requires a robust solution without strict running time requirements. We leave more efficient approaches for future work. Ethical Statements Similar to all prior claim verification works (Dagan et al., 2005; Thorne et al., 2018; Wadden et al., 2020; Schuster et al., 2021; Chen et al., 2023a,b), we stress that the MEG identification problem and the MEGs predicted by our proposed approach only consider the relative entailment relationship between the evidence and the claim. In other words, the MEG identification problem and our proposed approach do not guarantee the absolute correctness of the claim or the EPs or EGs themselves. Any future application must be cautious in distinguishing between retrieving evidence that supports the claim, correct or not, and verifying the absolute factual correctness of the claim." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.15406v1", |
| "title": "Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs", |
| "abstract": "Multimodal LLMs are the natural evolution of LLMs, and enlarge their\ncapabilities so as to work beyond the pure textual modality. As research is\nbeing carried out to design novel architectures and vision-and-language\nadapters, in this paper we concentrate on endowing such models with the\ncapability of answering questions that require external knowledge. Our\napproach, termed Wiki-LLaVA, aims at integrating an external knowledge source\nof multimodal documents, which is accessed through a hierarchical retrieval\npipeline. Relevant passages, using this approach, are retrieved from the\nexternal knowledge source and employed as additional context for the LLM,\naugmenting the effectiveness and precision of generated dialogues. We conduct\nextensive experiments on datasets tailored for visual question answering with\nexternal data and demonstrate the appropriateness of our approach.", |
| "authors": "Davide Caffagni, Federico Cocchi, Nicholas Moratelli, Sara Sarto, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara", |
| "published": "2024-04-23", |
| "updated": "2024-04-23", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV", |
| "cs.AI", |
| "cs.CL", |
| "cs.MM" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "LLM Fairness", |
| "gt": "Recently, Large Language Models (LLMs) have demonstrated impressive performance in zero-shot textual tasks. Specifically, recent literature has devised models capable of tackling diverse tasks, as instructed by the user [6, 30, 41]. In this context, the classical approach is that of fine-tuning a model on varied tasks that are described through natural language [7, 34], thus empowering the model to assimilate externally provided instructions and facilitating robust generalization across multiple domains. Following these advancements, the computer vision community has started to investigate the extension of such models to vision-and-language contexts, thus generating Multimodal Large Language Models (MLLMs). On this line, the fusion of visual features into LLM backbones through vision-to-language adapters [1, 21, 23, 48] has induced notable performance improvements, enabling extensive generalization to vision-and-language tasks requiring elaborate visual descriptions. [\u2217Equal contribution.] [Figure 1. Comparison between a standard multimodal LLM and Wiki-LLaVA. Our model integrates knowledge retrieved from an external knowledge base of documents through a hierarchical retrieval pipeline. As a result, it provides more precise answers when tasked with questions that require external knowledge.] 
In this context, MLLMs excel by simply including a small module (i.e., an adapter) that aligns visual features with textual ones. However, despite these models being built upon LLMs trained on large-scale data, they exhibit notable limitations when confronted with highly specific user queries or when a certain degree of compositional reasoning is required to formulate the response. Moreover, certain knowledge proves challenging to encode within the parameters of an MLLM, due to the scarcity of long-tail information in the training data. In response to this challenge, different benchmarks have recently been introduced for evaluating the capabilities of MLLMs to tackle queries related to external data, such as InfoSeek [5] and Encyclopedic-VQA [28]. [arXiv:2404.15406v1 [cs.CV] 23 Apr 2024] While different works [8, 20, 21, 32] have been tested on these benchmarks, underscoring the significance of this area, none of them has developed architectures specifically designed for tackling external knowledge. Driven by these considerations, in this paper we propose the first MLLM augmented with a retrieval module, thus shifting the focus towards teaching the model to leverage diverse information in its responses and learning to discern the relative importance of each. In particular, our model retrieves appropriate information from an external knowledge base of documents and employs a hierarchical retrieval approach to identify relevant passages. This additional knowledge is then fed to an MLLM, without changing its structure but improving its answering capabilities. To the best of our knowledge, our work represents the first MLLM to harness the retrieval capability of external sources. We assess the quality of the proposed approach by conducting extensive experiments and comparisons with respect to recent MLLMs [8, 21, 24] and by showcasing the effectiveness of our design choices. 
Experimental results demonstrate the advantage of retrieving from external sources and the appropriateness of our model design. Overall, we conceive our work as a first step in the direction of retrieval-augmented MLLMs, which could foster future works in the same area.", |
| "main_content": "Multimodal LLMs. LLMs have significantly reshaped the landscape of AI research and applications, spearheaded by notable examples like OpenAI\u2019s ChatGPT and GPT-4. These models leverage alignment techniques such as instruction tuning [30] and reinforcement learning from human feedback [39] and achieve remarkable capabilities in language understanding and reasoning. Open-source LLMs like Flan-T5 [7], Vicuna [6], LLaMA [41], and Alpaca [40] have further accelerated the advancement within the research community. This surge in the development of LLMs subsequently led to the emergence of MLLMs [3], which can combine the understanding of visual inputs with natural language generation. Early attempts at building MLLMs such as VisualGPT [4] and Frozen [42] used pre-trained language models to enhance vision-and-language models specifically for tasks like image captioning and visual question answering. This initial investigation paved the way for subsequent research in this domain, with the introduction of solutions such as Flamingo [1] or BLIP-2 [21], which allowed the integration of image features into LLMs through trainable cross-attention layers directly within the LLM or through Q-Former blocks that combine image and textual features via learnable queries, respectively. Building upon these advancements, subsequent models like FROMAGe [19], Kosmos-1 [14], and MiniGPT-4 [48] have been introduced to further refine the interplay between visual and language modalities within the LLM architecture. Concurrently, the LLaVA family of models [23\u201325] introduced the usage of instruction tuning in the multimodal domain, by training on a curated dataset collected with GPT-4. This strategy is now among the most promising recipes for building MLLMs. Retrieval-augmented language models. 
In recent years, retrieval augmentation has been applied to language models by expanding their input space with relevant text passages extracted from external sources [10] or retrieved directly from the web [29]. These techniques have demonstrated large improvements in knowledge-intensive tasks and significant savings in terms of model size. Traditionally, the integration of external knowledge into textual generation has been confined to the initial stages. Different solutions [17] proposed to adaptively retrieve passages for generation on top of a proprietary LLM. Some works [10], instead, focused on capturing knowledge in a more modular and interpretable way, by augmenting the language model pre-training with a latent knowledge retriever. This allows the model to retrieve and attend to documents taken from a large corpus such as Wikipedia. While much attention has been directed towards textual augmentation, similar research efforts have recently been devoted to vision-and-language tasks [2, 13, 31, 37]. Following this direction, the work presented in [13] proposed a retrieval-augmented visual-language model that encodes world knowledge into a large-scale memory. Other approaches [35, 36] also apply retrieval to specific downstream tasks such as image captioning. Differently from all the aforementioned approaches, our work is the first to apply retrieval augmentation to MLLMs. We do this by applying a hierarchical retrieval strategy on top of a knowledge base made of multimodal documents. Knowledge-based visual question answering. Recently, the emergence of new benchmarks like Encyclopedic-VQA [28] and InfoSeek [5] has raised the difficulty of standard knowledge-based VQA [16, 27, 38] with questions that require intensive knowledge about specific entities, such that even LLM-based models perform poorly without retrieving information from external sources. 
Often, contrastive image-text encoders are employed to retrieve the target entity given the query image [44, 46]. Then, the entity name is used as a key to access an external knowledge base, which is typically composed of several text passages that encompass the correct answer. In this work, we design a hierarchical retrieval scheme based on CLIP [33] and the Contriever model [15] to extract relevant passages, and we feed them to an MLLM to help the answer generation. [Figure 2. Overview of the architecture of Wiki-LLaVA, which augments a multimodal LLM with external knowledge through a hierarchical retrieval pipeline.] 3. Proposed Method Our goal is to equip Multimodal LLMs (MLLMs) with the ability to answer complex and specific questions that cannot be addressed solely through the image content and pre-trained knowledge. To achieve this, we propose Wiki-LLaVA, which integrates external knowledge derived from an external memory into the LLaVA model, without significantly altering its design. Instead, we augment the capabilities of the model by incorporating retrieval information as additional input context. Overall, Wiki-LLaVA comprises three components, as shown in Fig. 2: a visual encoder, which is employed to provide the MLLM with visual context and as a query to retrieve from an external knowledge base; the knowledge base itself (e.g., Wikipedia); and a hierarchical retrieval module which retrieves relevant documents and passages from the external knowledge base, to be employed as additional context for the MLLM. 3.1. 
Knowledge-based Augmentation Multimodal integration and autoregressive generation. An MLLM usually takes as input a multimodal query, comprising both image and text, and generates a textual output in an autoregressive manner. Formally, the architecture is trained to model a probability distribution p(wt | I, w0, w1, ..., wt\u22121, \u03b8), where \u03b8 denotes the parameters of the model, I represents an input image, and w0, ..., wt\u22121 denotes the textual prompt. The textual prompt usually includes a pre-defined system-level prompt and a question related to the input image, given by the user. Clearly, a standard MLLM can only rely on the user prompt, the input image, and the knowledge stored in its internal parameters (i.e., \u03b8) to accommodate requests, thus limiting its ability to answer questions that rely on external knowledge. In the rest of the paper, we employ LLaVA [24] as our reference MLLM. LLaVA exploits the capabilities of a pre-trained LLM (i.e., Vicuna [6]) and a pre-trained visual model (i.e., a CLIP-based visual encoder [33]), which are interconnected through an MLP adapter, in charge of converting CLIP features to dense input tokens. For an input image I, therefore, LLaVA utilizes a pre-trained CLIP visual encoder Ev, extracts a dense grid of visual features Zv = Ev(I), which is then projected via a learnable MLP to produce a sequence of dense embedding tokens v0, v1, ..., vN. Finally, these are prepended to the system prompt, and the full sequence of visual and textual tokens is then given as input to the LLM component of the model. Augmentation with external knowledge. To augment the MLLM with external knowledge, we enrich the input context by injecting relevant textual data from an external memory composed of documents. Formally, the distribution of the MLLM is conditioned on additional textual retrieval-knowledge tokens, leading to p(wt | v0, v1, ..., vN, w0, w1, ..., wt\u22121, e0, e1, ..., e\u03c4), (1) where v0, ..., vN are the visual tokens, w0, ..., wt\u22121 is the system and user prompt, and e0, ..., e\u03c4 represent the added tokens retrieved from the external memory. Differently from the standard formulation of MLLMs, by enriching the input context we allow the model to generate more specific answers by exploiting tokens retrieved from the memory. Hierarchical retrieval from an external memory. The external memory comprises a collection of (document, image, text-title) triplets taken from documents, denoted as D = {(di, ti)i}. Within this memory, we conduct a hierarchical two-step search to retrieve appropriate information. Initially, we locate the most pertinent document, followed by identifying the relevant passage inside a particular document, which is subsequently exploited as additional input context in the MLLM. In the first stage, given an input query image I we perform an approximate k-nearest neighbor search into the external memory, using document titles as retrievable keys. The similarity between the query image and the text titles is modeled as the inner product between their respective embeddings, which are computed through the visual and textual CLIP encoders (i.e., Ev and Et), as follows: sim(I, ti) = Ev(I) \u00b7 Et(ti)^T. (2) Then, the knowledge retriever returns the top-k documents associated with the most relevant items retrieved using the aforementioned procedure. Retrieving document passages. In the second step, we analyze each of the retrieved documents to identify the most relevant passages corresponding to the user\u2019s question. Each document is defined as a sequence of chunks, denoted as di = [ci0, ..., ciT], and, given the input question, we retrieve the chunks with the highest similarity to the question. 
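The first retrieval stage (Eq. 2) is an inner-product nearest-neighbor search between the CLIP image embedding of the query and the CLIP text embeddings of the document titles. A minimal NumPy sketch, with toy vectors standing in for the encoder outputs Ev(I) and Et(ti):

```python
import numpy as np

def retrieve_documents(query_image_emb, title_embs, k=3):
    """Score every document title against the query image by inner product
    (sim(I, t_i) = E_v(I) . E_t(t_i)^T) and return the top-k document indices."""
    scores = title_embs @ query_image_emb
    return np.argsort(-scores)[:k]

# Toy example: 4 document titles with 2-D embeddings (illustrative only).
titles = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]])
query = np.array([0.9, 0.1])
top_docs = retrieve_documents(query, titles, k=2)  # indices of the 2 best-matching titles
```

In practice the paper performs this as an approximate k-nearest-neighbor search; the exhaustive inner product above is only the small-scale equivalent.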
We employ the Contriever architecture [15] to embed each chunk of the selected document, along with the query (i.e., the question provided by the user), and compute the similarity as an inner product between embeddings. By retrieving the n most appropriate passages inside each of the retrieved documents, overall we obtain k \u00b7 n passages. Context enrichment. Once we find the most relevant chunks, we employ their raw contents as an additional input to the MLLM. Specifically, the final prompt that we employ includes the image tokens, the retrieved raw chunks, the system-level prompt, and the user question. Formally, considering three retrieved passages, the final prompt is defined as follows: \u201c<IMAGE>\nGiven the following context:\n<R1>\n<R2>\n<R3>\n<QUESTION>\nGive a short answer. ASSISTANT:\u201d (3) 3.2. Training While the aforementioned approach could work in a zero-shot fashion, using the original weights \u03b8 of the pre-trained MLLM, we also investigate the case of fine-tuning the model to augment its capabilities of exploiting retrieved passages. In particular, in this case, the model is trained on pairs of questions and ground-truth answers requiring external knowledge. As this would potentially reduce the capabilities of the MLLM on tasks not requiring external knowledge (i.e., all the other tasks on which the model has been originally trained), we apply a data mixing approach in which ground-truth pairs requiring external knowledge are mixed with ground-truth pairs not requiring external knowledge in the same mini-batch. 4. Experiments In this section, we first introduce the experimental settings, describing the datasets employed, the evaluation protocol, and the implementation and training details used to perform the experiments. 
Then, we present our experimental results, analyzing the effectiveness of CLIP fine-tuning and evaluating how it is possible to incorporate retrieved knowledge in an MLLM. Finally, limitations of the proposed approach and possible future works are reported.

4.1. Datasets

Encyclopedic-VQA [28]. The dataset contains around 221k question-answer pairs associated with 16.7k different fine-grained entities, with up to 5 images representing the same entity. Overall, there are more than 1M triplets composed of an image, a question, and the corresponding answer. Fine-grained entities and related images are extracted from iNaturalist 2021 [43] and the Google Landmarks Dataset V2 [45] and are associated with the corresponding Wikipedia article. Questions are divided into four categories, namely single-hop, automatically generated, multi-answer, and two-hop. In particular, single-hop questions have been manually annotated, and a single Wikipedia article is needed to answer them. Automatically generated questions are similar to single-hop questions but have been generated by automatic models. Multi-answer questions, instead, can be answered with a list of terms, but always refer to a single fine-grained entity. Finally, two-hop questions require two retrieval steps to be answered. The dataset also comes with a knowledge base composed of 2M Wikipedia articles, suitable for answering the dataset questions. Dataset triplets are divided into training, validation, and test splits, respectively composed of 1M, 13.6k, and 5.8k samples. In our experiments, we employ the training split to fine-tune the LLaVA model and report results on the test set. During testing, we filter out two-hop questions, resulting in 4,750 test triplets.

InfoSeek [5]. The dataset contains 1.3M image-question-answer triplets corresponding to around 11k different entities (i.e., Wikipedia articles).
The vast majority of questions have been obtained with an almost entirely automatic procedure, by filling human-authored templates with knowledge triples from Wikidata. In this case, images are derived from the OVEN dataset [12]. Triplets are divided into training, validation, and test sets, with around 934k, 73k, and 348k samples respectively. At the time of submission, the ground-truth answers for the test set were not available; therefore, we report our results on the validation split. Both the validation and test sets contain questions related to new entities not included in the training split, as well as questions not seen during training. Along with the image-question-answer triplets, a knowledge base composed of 6M Wikipedia entities is provided. In our experiments, we consider a randomly extracted subset of 100k entities, in which we guarantee the presence of the 11k entities associated with the dataset questions.

4.2. Implementation Details

LLaVA fine-tuning. We employ two distinct fine-tuning approaches, each being exclusively applied to one of the datasets. To maintain the performance of the LLaVA model on well-established MLLM datasets, we supplement the fine-tuning data with samples from the LLaVA-Instruct dataset [24]. Specifically, given its size of 158k, we double the probability of having examples from this dataset in each mini-batch. To reduce the number of trainable parameters, we train using low-rank adapters [11] with a total batch size of 512 samples.

Retrieval. Textual documents sourced from Wikipedia are embedded using the Contriever architecture [15], segmenting the text into chunks of 600 characters each. Furthermore, for efficiency, we employ a single visual encoder.
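The 600-character chunking can be sketched as a fixed-size split (whether chunk boundaries respect word or sentence limits is not specified, so this exact policy is an assumption):

```python
def chunk_document(text, chunk_size=600):
    # Split a Wikipedia document into fixed-size character chunks,
    # to be embedded individually with Contriever.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

chunks = chunk_document("x" * 1450)
print(len(chunks), len(chunks[-1]))  # → 3 250
```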
Specifically, following the LLaVA architecture [24], we employ the CLIP ViT-L/14@336 backbone both to embed the images given as input to the MLLM and to extract query visual features in the first hierarchical retrieval step, facilitating the integration of the external memory component. To perform entity retrieval, we employ approximate kNN search rather than exact kNN search, as it significantly improves the computational speed of the entire pipeline. To this end, we employ the Faiss library [18] and a graph-based HNSW index with 32 links per vertex.

4.3. Evaluation Protocol

We evaluate our models in two settings: without and with an external knowledge base. The former means that we ask the model to directly answer a visual question, relying solely on the competencies learned during pre-training and/or fine-tuning. In the latter setting, instead, we leverage the proposed hierarchical retrieval method to search for additional information in the external knowledge base. In practice, this is represented by two dumps of Wikipedia comprising 2M and 100k pages, respectively for Encyclopedic-VQA and InfoSeek. Concerning the evaluation metrics, we report the accuracy over the Encyclopedic-VQA test split and the InfoSeek validation split, following the official evaluation scripts provided along with the datasets.

Dataset           KB    R@1   R@10  R@20  R@50
Encyclopedic-VQA  2M    3.3   9.9   13.2  17.5
InfoSeek          100k  36.9  66.1  71.9  78.4

Table 1. Entity retrieval results on the Encyclopedic-VQA test set and InfoSeek validation set. To comply with the visual encoder employed in LLaVA, all results are obtained using CLIP ViT-L/14@336.

4.4. Experimental Results

Analyzing CLIP performance. We start by evaluating entity retrieval results using CLIP.
In this setting, we consider images from the Encyclopedic-VQA test set and the InfoSeek validation set and measure the ability of CLIP to find the correct entity within the knowledge base of each respective dataset (i.e., composed of 2M entries for Encyclopedic-VQA and 100k entries for InfoSeek). As previously mentioned, we perform retrieval using images as queries and Wikipedia titles as retrievable items. Results are reported in Table 1 in terms of recall@k (R@k) with k = 1, 10, 20, 50, which measures the percentage of times the correct entity is found among the top-k retrieved elements. Notably, correctly retrieving the Wikipedia entity associated with the input image strongly depends on the size of the employed knowledge base. In fact, when using 100k items, as in the case of InfoSeek, the correct entity is retrieved as the first item 36.9% of the time and among the top-10 66.1% of the time. Instead, when using a significantly larger knowledge base, as in the case of Encyclopedic-VQA with its 2M items, retrieval results are significantly lower, with 3.3% and 9.9% respectively in terms of R@1 and R@10.

Results on Encyclopedic-VQA and InfoSeek. We then report visual question answering results in Table 2. We include the performance of zero-shot models like BLIP-2 [21], InstructBLIP [8], and the LLaVA-1.5 baseline [23], which are not fine-tuned on the considered datasets and do not leverage the external knowledge base. Moreover, we consider the accuracy of LLaVA-1.5 when fine-tuned on the training sets of Encyclopedic-VQA and InfoSeek, but not augmented with retrieved context. The results of our approach (i.e., Wiki-LLaVA) are reported both in the standard setting, in which CLIP is used to retrieve the most relevant entity from the knowledge base, and in its oracle version, which employs the entity corresponding to the input image-question pair.
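The R@k figures in Table 1 follow the standard definition: per query, a hit indicator over the top-k retrieved entities, averaged over the test set (helper names here are our own):

```python
def recall_at_k(all_rankings, all_gts, ks=(1, 10, 20, 50)):
    # all_rankings[i] is the ranked list of retrieved entity ids for
    # query i; all_gts[i] is its ground-truth entity id. R@k is the
    # percentage of queries whose ground truth is in the top-k results.
    n = len(all_gts)
    return {
        k: 100.0 * sum(gt in ranked[:k]
                       for ranked, gt in zip(all_rankings, all_gts)) / n
        for k in ks
    }

# Two toy queries: the first is a hit at rank 1, the second at rank 2.
scores = recall_at_k([["a", "b"], ["b", "c"]], ["a", "c"], ks=(1, 2))
print(scores)  # → {1: 50.0, 2: 100.0}
```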
For both cases, we consider a different number n of retrieved textual chunks, all corresponding to the top-1 (or ground-truth) entity. When employing CLIP, we also vary the number k of retrieved entities (i.e., k = 1, 2, 3), using n = 1 when k is greater than 1. This choice is dictated by the maximum context length that Vicuna takes as input, which is set to 2,048 tokens.

                                               Enc-VQA            InfoSeek
Model                LLM        KB   k   n     Single-Hop  All    Unseen-Q  Unseen-E  All

Zero-shot Models
BLIP-2 [21]          Flan-T5XL  ✗    -   -     12.6        12.4   12.7      12.3      12.5
InstructBLIP [8]     Flan-T5XL  ✗    -   -     11.9        12.0   8.9       7.4       8.1
LLaVA-1.5 [23]       Vicuna-7B  ✗    -   -     16.3        16.9   9.6       9.4       9.5

Fine-tuned Models
LLaVA-1.5 [23]       Vicuna-7B  ✗    -   -     23.3        28.5   19.4      16.7      17.9
Wiki-LLaVA           Vicuna-7B  ✓    1   1     21.8        26.4   26.6      24.6      25.5
Wiki-LLaVA           Vicuna-7B  ✓    1   2     19.9        23.2   29.1      26.3      27.6
Wiki-LLaVA           Vicuna-7B  ✓    1   3     17.7        20.3   30.1      27.8      28.9
Wiki-LLaVA           Vicuna-7B  ✓    2   1     21.3        25.4   27.8      24.6      26.1
Wiki-LLaVA           Vicuna-7B  ✓    3   1     20.5        24.3   27.4      24.5      25.3
Wiki-LLaVA (oracle)  Vicuna-7B  ✓    1   1     34.7        37.2   41.1      41.1      41.1
Wiki-LLaVA (oracle)  Vicuna-7B  ✓    1   2     39.2        40.2   49.1      46.5      47.8
Wiki-LLaVA (oracle)  Vicuna-7B  ✓    1   3     38.5        38.6   52.7      50.3      51.5

Table 2. Accuracy results on the Encyclopedic-VQA test set and InfoSeek validation set. The first group of Wiki-LLaVA rows employs the CLIP model to perform entity retrieval, while rows marked "(oracle)" use ground-truth entities. k denotes the number of retrieved entities, and n the number of textual chunks retrieved for each entity, which are given to the MLLM as additional context.

As can be seen, zero-shot MLLMs face difficulties in correctly answering the given questions, as these models can only rely on the knowledge embedded inside the LLM. When instead using an external knowledge base, accuracy significantly increases, especially on the InfoSeek dataset with 100k retrievable items.
The limited performance of the CLIP model in retrieving the correct entity on larger knowledge bases, instead, leads to a slight degradation of accuracy scores. This is due to the noisy textual passages provided to the MLLM as additional external context, which, being related to a different entity, often do not contain informative content. Overall, retrieving passages from different entities does not always help increase the results. Instead, using more than one textual chunk as additional context for the MLLM generally improves the final accuracy on the InfoSeek validation set, with an overall improvement of 2.1 and 3.4 accuracy points with n = 2 and n = 3, respectively. Furthermore, it is worth noting that employing oracle entities significantly boosts the final accuracy. In particular, oracle entities lead to an improvement of 13.8% on Encyclopedic-VQA and 22.6% on InfoSeek, comparing the best-performing configuration with CLIP-based entity retrieval (i.e., k = 1 and n = 1 for Encyclopedic-VQA, and k = 1 and n = 3 for InfoSeek) with the best-performing oracle-based version (i.e., k = 1 and n = 2 for Encyclopedic-VQA, and k = 1 and n = 3 for InfoSeek). These results confirm the effectiveness of directly employing retrieved passages to augment a pre-trained MLLM and further highlight the importance of a good entity retrieval model to limit the possibility of feeding the MLLM with irrelevant content.

                      Enc-VQA            InfoSeek
Fine-tuning           Single-Hop  All    Unseen-Q  Unseen-E  All
✗                     16.3        16.9   9.6       9.4       9.5
✓                     23.4        29.0   17.1      15.0      16.0
✓ + LLaVA-Instruct    23.3        28.5   19.4      16.7      17.9

Table 3. Performance analysis when using the LLaVA-Instruct dataset during fine-tuning. All results are obtained without external knowledge retrieval.

Some qualitative results on sample image-question pairs from Encyclopedic-VQA (first row) and InfoSeek (second row) are reported in Fig. 3, comparing the answers given by Wiki-LLaVA with those of the original LLaVA-1.5 model.
For completeness, we also report some failure cases (third row) in which both models are not able to correctly answer the given question.

Evaluating the importance of the fine-tuning datasets. As described in Sec. 3.2 and Sec. 4.2, the MLLM fine-tuning is done with a mixture of data containing image-question-answer triplets from the Encyclopedic-VQA or InfoSeek training set and visual instruction tuning data from LLaVA-Instruct [24], which was originally used to fine-tune the LLaVA model. In Table 3, we evaluate the effect of mixing fine-tuning data for the knowledge-based VQA task. In this setting, we only report the results of the fine-tuned models without external knowledge retrieval. Notably, using visual instruction tuning data can help regularize the fine-tuning phase on the InfoSeek dataset, leading to an overall improvement of 1.9 accuracy points compared to the model fine-tuned only on image-question-answer triplets from the training set of the dataset. On Encyclopedic-VQA, instead, training with instruction tuning data does not lead to a performance improvement, although without degrading the original results.

Figure 3. Qualitative results on sample image-question pairs from Encyclopedic-VQA (first row) and InfoSeek (second row), comparing the proposed approach with the original LLaVA-1.5 model. Some failure cases are shown in the third row with the corresponding ground-truth.

Preservation of LLaVA performance. Finally, we analyze the impact of LLaVA fine-tuning on knowledge-based VQA datasets when evaluating the model on common MLLM evaluation benchmarks [3]. In particular, we include results on MME [9], which contains image-question pairs covering 14 different tasks grouped into two macro-categories (i.e., cognition and perception); MMMU [47], which is composed of multiple-choice and open-ended questions, possibly interleaved with one or more images, extracted from diverse university textbooks and online courses; MMBench (MMB) [26], which includes multiple-choice questions across 20 different domains; and POPE [22], which is focused on evaluating object hallucinations and comprises binary classification entries, each related to an image. More details about the evaluation metrics and number of samples can be found in the original paper of each dataset. Results are shown in Table 4, comparing the original LLaVA model with the two versions fine-tuned on Encyclopedic-VQA and InfoSeek, with and without the use of visual instruction tuning data.

                              MME             MMMU  MMB   POPE
Fine-tuning                   Cogn   Perc     Acc   Acc   Acc   F1
-                             355.7  1513.3   35.1  71.6  86.9  85.8
Enc-VQA                       200.7  802.8    36.6  67.7  72.9  63.4
Enc-VQA + LLaVA-Instruct      290.0  1170.1   36.6  70.4  87.2  86.6
InfoSeek                      296.8  1377.2   35.2  71.7  82.0  79.6
InfoSeek + LLaVA-Instruct     341.3  1438.9   35.6  71.1  85.8  84.2

Table 4. Performance preservation analysis with respect to the original LLaVA-1.5 model (first row) on diverse benchmarks for MLLM evaluation.
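The mixing strategy evaluated above (LLaVA-Instruct samples drawn with doubled probability in each mini-batch, per Sec. 4.2) can be sketched as a weighted sampler; the paper does not specify the exact sampler, so this is one plausible implementation:

```python
import random

def mixed_batch(kb_pairs, instruct_pairs, batch_size=8,
                instruct_weight=2.0, seed=0):
    # Draw each mini-batch element from the union of knowledge-based
    # VQA pairs and LLaVA-Instruct pairs, giving every LLaVA-Instruct
    # sample twice the per-sample probability of a KB sample.
    rng = random.Random(seed)
    population = kb_pairs + instruct_pairs
    weights = [1.0] * len(kb_pairs) + [instruct_weight] * len(instruct_pairs)
    return rng.choices(population, weights=weights, k=batch_size)

batch = mixed_batch(list(range(100)), list(range(100, 200)))
print(len(batch))  # → 8
```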
Overall, employing samples from the LLaVA-Instruct dataset better preserves the results of the original model, only partially degrading the performance on the considered benchmarks. While the most significant deterioration occurs on the MME dataset, in the other settings the original performance is better preserved, also leading to a slight improvement on the MMMU and POPE benchmarks compared to the LLaVA-1.5 results.

4.5. Limitations and Future Works

While our work provides an initial step towards MLLMs that can properly exploit external multimodal data, it is worth mentioning that significant research is needed in two directions. The first is defining proper embedding spaces in which documents can be retrieved from questions and input images, so as to improve the performance of the higher level of our hierarchical retrieval. The second is modeling an efficient and sustainable paradigm to select from one or more documents. Here, the challenge is to increase the capability of the MLLM to distinguish the appropriateness of retrieved items. This point might also require novel architectural designs, which might go beyond the pure inclusion of retrieved items in the context. Regardless of its current limitations, our research testifies to the potential of adding multimodal external knowledge to an MLLM and inherits all the advantages of retrieval-augmented approaches, such as the adaptability to different domains and the loosely-coupled relationship between pre-trained information and retrievable data.

5. Conclusion

We have presented Wiki-LLaVA, an architecture for augmenting an existing MLLM with external knowledge. Our proposal leverages an external knowledge source of documents to improve the effectiveness of an MLLM when tasked with questions and dialogues. In particular, we devise a hierarchical architecture for retrieving documents and eliciting selected parts to be included in the MLLM input context.
Extensive experiments demonstrate the effectiveness of the proposed solution and its capability to maintain the proficiency of the MLLM across different tasks.

Acknowledgments

We acknowledge the CINECA award under the ISCRA initiative, for the availability of high-performance computing resources and support. This work has been conducted under two research grants, one co-funded by Leonardo S.p.A. and the other co-funded by Altilia s.r.l., and supported by the PNRR-M4C2 project "FAIR Future Artificial Intelligence Research", funded by the European Commission, and by the PNRR project "Italian Strengthening of Esfri RI Resilience" (ITSERR), funded by the European Union NextGenerationEU (CUP B53C22001770006).