| { |
| "url": "http://arxiv.org/abs/2404.16277v1", |
| "title": "Causally Inspired Regularization Enables Domain General Representations", |
| "abstract": "Given a causal graph representing the data-generating process shared across\ndifferent domains/distributions, enforcing sufficient graph-implied conditional\nindependencies can identify domain-general (non-spurious) feature\nrepresentations. For the standard input-output predictive setting, we\ncategorize the set of graphs considered in the literature into two distinct\ngroups: (i) those in which the empirical risk minimizer across training domains\ngives domain-general representations and (ii) those where it does not. For the\nlatter case (ii), we propose a novel framework with regularizations, which we\ndemonstrate are sufficient for identifying domain-general feature\nrepresentations without a priori knowledge (or proxies) of the spurious\nfeatures. Empirically, our proposed method is effective for both (semi)\nsynthetic and real-world data, outperforming other state-of-the-art methods in\naverage and worst-domain transfer accuracy.", |
| "authors": "Olawale Salaudeen, Sanmi Koyejo", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "stat.ML" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Causally Inspired Regularization Enables Domain General Representations", |
| "main_content": "Introduction A key feature of machine learning is its capacity to generalize across new domains. When these domains present di\ufb00erent data distributions, the algorithm must leverage shared structural concepts to achieve outof-distribution (OOD) or out-of-domain generalization. This capability is vital in numerous important realworld machine learning applications. For example, in safety-critical settings such as autonomous driving, a lack of resilience to unfamiliar distributions could lead to human casualties. Likewise, in the healthcare sector, where ethical considerations are critical, an inability to adjust to shifts in data distribution can result in unfair biases, manifesting as inconsistent performance across di\ufb00erent demographic groups. An in\ufb02uential approach to domain generalization is Invariant Causal Prediction (ICP; [Peters et al., 2016]). ICP posits that although some aspects of data distributions (like spurious or non-causal mechanisms [Pearl, 2010]) may change across domains, certain causal mechanisms remain constant. ICP suggests focusing on these invariant mechanisms for prediction. However, the estimation method for these invariant mechanisms suggested by [Peters et al., 2016] struggles with scalability in high-dimensional feature spaces. To overcome this, Arjovsky et al. [2019] introduced Invariant Risk Minimization (IRM), designed to identify these invariant mechanisms by minimizing an objective. However, requires strong assumptions for identifying the desired domain-general solutions [Ahuja et al., 2021, Rosenfeld et al., 2022]; for instance, observing a number of domains proportional to the spurious features\u2019 dimensions is necessary, posing a signi\ufb01cant challenge in these high-dimensional settings. Subsequent variants of IRM have been developed with improved capabilities for identifying domaingeneral solutions [Ahuja et al., 2020, Krueger et al., 2021, Robey et al., 2021, Wang et al., 2022, Ahuja et al., 2021]. Additionally, regularizers for Distributionally Robust Optimization with subgroup shift have been proposed (GroupDRO) [Sagawa et al., 2019]. However, despite their solid theoretical motivation, empirical evidence suggests that these methods may not consistently deliver domain-general solutions in practice Gulrajani and Lopez-Paz [2020], Kaur et al. [2022], Rosenfeld et al. [2022]. \u2217Contact: oes2@illinois.edu 1 \fKaur et al. [2022] demonstrated that regularizing directly for conditional independencies implied by the generative process can give domain-general solutions, including conditional independencies beyond those considered by IRM. However, their experimental approach involves regularization terms that require direct observation of spurious features, a condition not always feasible in real-world applications. Our proposed methodology also leverages regularizers inspired by the conditional independencies indicated by causal graphs but, crucially, it does so without necessitating prior knowledge (or proxies) of the spurious features. 1.1 Contributions In this work, \u2022 we outline su\ufb03cient properties to uniquely identify domain-general predictors for a general set of generative processes that include domain-correlated spurious features, \u2022 we propose regularizers to implement these constraints without independent observations of the spurious features, and \u2022 \ufb01nally, we show that the proposed framework outperforms the state-of-the-art on semi-synthetic and real-world data. 
The code for our proposed method is provided at https://github.com/olawalesalaudeen/tcri.

Notation: Capital letters denote bounded random variables, and corresponding lowercase letters denote their values. Unless otherwise stated, we represent latent domain-general features as Zdg ∈ 𝒵dg ≡ R^m and spurious latent features as Zspu ∈ 𝒵spu ≡ R^o. Let X ∈ 𝒳 ≡ R^d be the observed feature space, which is the output space of an invertible function Γ : 𝒵dg × 𝒵spu ↦ 𝒳, and let Y ∈ 𝒴 ≡ {0, 1, ..., K−1} be the observed label space for a K-class classification task. We then define feature extractors aimed at identifying the latent features, Φdg : 𝒳 ↦ R^m and Φspu : 𝒳 ↦ R^o, so that Φ : 𝒳 ↦ R^{m+o} (that is, Φ(x) = [Φdg(x); Φspu(x)] for all x ∈ 𝒳). We define e as a discrete random variable denoting domains and E = {P^e(Zdg, Zspu, X, Y) : e = 1, 2, ...} to be the set of possible domains. Etr ⊂ E is the set of observed domains available during training.

2 Related Work

The source of distribution shift can be isolated to components of the joint distribution. One special case of distribution shift is covariate shift [Shimodaira, 2000, Zadrozny, 2004, Huang et al., 2006, Gretton et al., 2009, Sugiyama et al., 2007, Bickel et al., 2009, Chen et al., 2016, Schneider et al., 2020], where only the covariate distribution P(X) changes across domains. Ben-David et al. [2009] give upper bounds on target error based on the H-divergence between the source and target covariate distributions, which motivates domain alignment methods like Domain Adversarial Neural Networks [Ganin et al., 2016] and others [Long et al., 2015, Blanchard et al., 2017]. Others have followed up on this work with other notions of covariate distance for domain adaptation, such as maximum mean discrepancy (MMD) [Long et al., 2016], Wasserstein distance [Courty et al., 2017], etc. However, Kpotufe and Martinet [2018] show that these divergence metrics fail to capture many important properties of transferability, such as asymmetry and non-overlapping support. Furthermore, Zhao et al. [2019] show that even with the alignment of covariates, large distances between label distributions can inhibit transfer; they propose a label conditional importance weighting adjustment to address this limitation. Other works have also proposed conditional covariate alignment [des Combes et al., 2020, Li et al., 2018c,b].

Another form of distribution shift is label shift, where only the label distribution changes across domains. Lipton et al. [2018] propose a method to address this scenario. Schrouff et al. [2022] illustrate that many real-world problems exhibit more complex 'compound' shifts than just covariate or label shifts alone.

One can leverage domain adaptation to address distribution shifts; however, these methods are contingent on having access to unlabeled or partially labeled samples from the target domain during training. When such samples are available, more sophisticated domain adaptation strategies aim to leverage and adapt spurious feature information to enhance performance [Liu et al., 2021, Zhang et al., 2021, Kirichenko et al., 2022]. However, domain generalization, as a problem, does not assume access to such samples [Muandet et al., 2013]. To address the domain generalization problem, Invariant Causal Predictors (ICP) leverage shared causal structure to learn domain-general predictors [Peters et al., 2016].
Previous works, enumerated in the introduction (Section 1), have proposed various algorithms to identify domain-general predictors. Arjovsky et al. [2019] proposed Invariant Risk Minimization (IRM), whose variants are likewise motivated by domain invariance:

$$\min_{w, \Phi} \; \frac{1}{|\mathcal{E}_{tr}|} \sum_{e \in \mathcal{E}_{tr}} R^e(w \circ \Phi) \quad \text{s.t.} \quad w \in \underset{\tilde{w}}{\arg\min} \; R^e(\tilde{w} \circ \Phi), \; \forall e \in \mathcal{E}_{tr},$$

where $R^e(w \circ \Phi) = \mathbb{E}\left[\ell(y, w \cdot \Phi(x))\right]$, with loss function ℓ, feature extractor Φ, and linear predictor w. This objective aims to learn a representation Φ such that a predictor w that minimizes empirical risk on average across all domains also minimizes the within-domain empirical risk for every domain. However, Rosenfeld et al. [2020], Ahuja et al. [2020] showed that this objective requires unreasonable constraints on the number of observed domains at training time, e.g., observing distinct domains on the order of the rank of the spurious features. Follow-up works have attempted to improve on these limitations with stronger constraints on the problem – enumerated in the introduction section. Our method falls under domain generalization; however, unlike the domain-general solutions previously discussed, our proposed solution leverages conditions other than domain invariance directly, which we show may be better suited to learning domain-general representations.

3 Causality and Domain Generalization

We often represent causal relationships with a causal graph. A causal graph is a directed acyclic graph (DAG), G = (V, E), with nodes V representing random variables and directed edges E representing causal relationships, i.e., parents are causes and children are effects. A structural equation model (SEM) provides a mathematical representation of the causal relationships in its corresponding DAG. Each variable Y ∈ V is given by Y = f_Y(X) + ε_Y, where X denotes the parents of Y in G, f_Y is a deterministic function, and ε_Y is an error capturing exogenous influences on Y. The main property we need here is that f_Y is invariant to interventions on V \ {Y} and is consequently invariant to changes in P(V) induced by these interventions. Interventions refer to changes to f_Z, Z ∈ V \ {Y}.

In this work, we focus on domain-general predictors that are linear functions of features with domain-general mechanisms, denoted g_dg := w ◦ Φdg, where w is a linear predictor and Φdg identifies features with domain-general mechanisms. We use domain-general rather than domain-invariant since domain-invariance is strongly tied to the property Y ⊥⊥ e | Zdg [Arjovsky et al., 2019]. As shown in the subsequent sections, this work leverages other properties of appropriate causal graphs to obtain domain-general features. This distinction is crucial given the challenges associated with learning domain-general features through domain-invariance methods [Rosenfeld et al., 2020].

Given the presence of a distribution shift, it is essential to identify some common structure across domains that can be utilized for out-of-distribution (OOD) generalization. For example, Shimodaira [2000] assumes P(Y|X) is shared across all domains in the covariate shift problem. In this work, we consider a setting where each domain is composed of observed features and labels, X ∈ 𝒳, Y ∈ 𝒴, where X is given by an invertible function Γ of two latent random variables: domain-general Zdg ∈ 𝒵dg and spurious Zspu ∈ 𝒵spu.
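For reference, the bi-level IRM objective quoted above is, in practice, typically optimized through the IRMv1 relaxation of Arjovsky et al. [2019], which penalizes the gradient of each domain's risk with respect to a frozen scalar classifier. A minimal sketch under that reading (function and variable names are ours, not the paper's):

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Squared gradient of the domain risk w.r.t. a frozen scalar classifier
    w = 1.0 (the IRMv1 relaxation of the bi-level IRM objective)."""
    w = torch.tensor(1.0, requires_grad=True, device=logits.device)
    risk = F.cross_entropy(logits * w, y)
    (grad,) = torch.autograd.grad(risk, [w], create_graph=True)
    return grad.pow(2).sum()

def irm_objective(model, env_batches, lam: float = 1.0) -> torch.Tensor:
    """Average risk across domains plus the averaged IRMv1 penalty."""
    risks, penalties = [], []
    for x, y in env_batches:  # one (x, y) batch per training domain e
        logits = model(x)
        risks.append(F.cross_entropy(logits, y))
        penalties.append(irmv1_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```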
By construction, the conditional expectation of the label Y given the domain-general features Zdg is the same across domains, i.e.,

$$\mathbb{E}_{e_i}\left[Y \mid Z_{dg} = z_{dg}\right] = \mathbb{E}_{e_j}\left[Y \mid Z_{dg} = z_{dg}\right] \quad (1)$$

for all $z_{dg} \in \mathcal{Z}_{dg}$ and all $e_i \neq e_j \in \mathcal{E}$. Conversely, this robustness to e does not necessarily extend to the spurious features Zspu; in other words, Zspu may assume values that could lead a predictor relying on it to experience arbitrarily high error rates. Then, a sound strategy for learning a domain-general predictor – one that is robust to distribution shifts – is to identify the latent domain-general Zdg from the observed features X.

[Figure 1: Partial Ancestral Graph (over e, Zdg, Zspu, Y, X) representing all non-trivial and valid generative processes (DAGs); dashed edges indicate that an edge may or may not exist.]

The approach we take to do this is motivated by the Reichenbach Common Cause Principle, which claims that if two events are correlated, there is either a causal connection between the correlated events that is responsible for the correlation, or there is a third event, a so-called (Reichenbachian) common cause, which brings about the correlation [Hitchcock and Rédei, 2021, Rédei, 2002]. This principle allows us to posit the class of generative processes or causal mechanisms that give rise to the correlated observed features and labels, where the observed features are a function of domain-general and spurious features. We represent these generative processes as causal graphs. Importantly, the mapping from a node's causal parents to itself is preserved in all distributions generated by the causal graph (Equation 1), and distributions can vary arbitrarily so long as they preserve the conditional independencies implied by the DAG (Markov property [Pearl, 2010]). We now enumerate the DAGs that give observed features with spurious correlations with the label.

Valid DAGs. We consider generative processes where both latent features, Zspu and Zdg, and the observed X are correlated with Y, and the observed X is a function of only Zdg and Zspu (Figure 1). Given this setup, there is an enumerable set of valid generative processes. Such processes are (i) without cycles, (ii) feature complete – including edges from Zdg and Zspu to X, i.e., Zdg → X ← Zspu, and (iii) such that the observed features mediate domain influence, i.e., there is no direct domain influence on the label (no edge e → Y). We discuss this enumeration in detail in Appendix B. The result of our analysis is a representative set of DAGs that describe valid generative processes – these DAGs come from orienting the partial ancestral graph (PAG) in Figure 1. We compare the conditional independencies implied by the DAGs defined by Figure 1, as illustrated in Figure 2, resulting in three canonical DAGs in the literature (see Appendix B for further discussion). Other DAGs that induce spurious correlations are outside the scope of this work.

[Figure 2: Generative Processes. Graphical models depicting the structure of possible data-generating processes – shaded nodes indicate observed variables. Panels: (a) Causal [Arjovsky et al., 2019]; (b) Anticausal [Rosenfeld et al., 2020]; (c) Fully Informative Causal [Ahuja et al., 2021]. X represents the observed features, Y represents observed targets, and e represents domain influences (domain indexes in practice).
There is an explicit separation of domain-general Zdg and domain-specific Zspu features; they are combined to generate the observed X. Dashed edges indicate the possibility of an edge.]

Conditional independencies implied by the identified DAGs (Figure 2).

Table 1: Generative Processes and Sufficient Conditions for Domain-Generality

    Graphs in Figure 2                 (a)   (b)   (c)
    Zdg ⊥⊥ Zspu | {Y, e}                ✓     ✓     ✗
    Identifying Zdg is necessary        ✓     ✓     ✗

Fig. 2a: Zdg ⊥⊥ Zspu | {Y, e}; Y ⊥⊥ e | Zdg. This causal graphical model implies that the mapping from Zdg to its causal child Y is preserved and, consequently, Equation 1 holds [Pearl, 2010, Peters et al., 2016]. As an example, consider the task of predicting the spread of a disease. Features may include causes (vaccination rate and public health policies) and effects (coughing). e is the time of year; the distribution of coughing changes depending on the season.

Fig. 2b: Zdg ⊥⊥ Zspu | {Y, e}; Zdg ⊥⊥ Zspu | Y; Y ⊥⊥ e | Zdg; Zdg ⊥⊥ e. This causal graphical model does not directly imply that Zdg → Y is preserved across domains. However, in this work, it represents the setting where the inverse of the causal direction is preserved (inverse: Zdg → Y), and thus Equation 1 holds. A context where this setting is relevant is healthcare, where medical conditions (Y) cause symptoms (Zdg), but the prediction task is often predicting conditions from symptoms; this mapping Zdg → Y, opposite to the causal direction, is preserved across distributions. Again, we may consider e as the time of year; the distribution of coughing changes depending on the season.

Fig. 2c: Y ⊥⊥ e | Zdg; Zdg ⊥⊥ e. Similar to Figure 2a, this causal graphical model implies that the mapping from Zdg to its causal child Y is preserved, so Equation 1 holds [Pearl, 2010, Peters et al., 2016]. This setting is especially interesting because it represents a Fully Informative Invariant Features setting, that is, Zspu ⊥⊥ Y | Zdg [Ahuja et al., 2021]. Said differently, Zspu does not induce a backdoor path from e to Y that Zdg does not block. As an example, consider the task of predicting hospital readmission rates. Features may include the severity of illness, which is a direct cause of readmission rates, and also the length of stay, which is also caused by the severity of illness. However, length of stay may not be a cause of readmission; the correlation between the two would be a result of the confounding effect of a common cause, illness severity. e is an indicator for distinct hospitals.

We call the condition Y ⊥⊥ e | Zdg the domain invariance property. This condition is common to all the DAGs in Figure 2. We call the condition Zdg ⊥⊥ Zspu | {Y, e} the target conditioned representation independence (TCRI) property. This condition is common to the DAGs in Figures 2a and 2b. In the settings considered in this work, the TCRI property is equivalently Zdg ⊥⊥ Zspu | Y ∀e ∈ E, since e simply indexes the set of empirical distributions available at training.

Domain generalization with conditional independencies. Kaur et al. [2022] showed that sufficiently regularizing for the correct conditional independencies described by the appropriate DAGs can give domain-general solutions, i.e., identifies Zdg.
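As a quick numerical illustration of the TCRI property under the causal graph of Fig. 2a – a linear-Gaussian toy of our own, not the paper's experiment – Zdg and Zspu are marginally correlated through Y, but become (partially) uncorrelated once Y is regressed out:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z_dg = rng.normal(0.0, 1.0, n)            # domain-general latent
y = z_dg + rng.normal(0.0, 0.25, n)       # Y <- Zdg + noise
z_spu = y + rng.normal(0.0, 0.1, n)       # Zspu <- Y + noise

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    design = np.column_stack([np.ones_like(c), c])
    res_a = a - design @ np.linalg.lstsq(design, a, rcond=None)[0]
    res_b = b - design @ np.linalg.lstsq(design, b, rcond=None)[0]
    return np.corrcoef(res_a, res_b)[0, 1]

print(np.corrcoef(z_dg, z_spu)[0, 1])     # large: marginally dependent
print(partial_corr(z_dg, z_spu, y))       # near 0: Zdg indep. of Zspu given Y
```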
However, in practice, one does not (even partially) observe the latent features independently, so one cannot regularize for these conditional independencies directly. Other works have also highlighted the need to consider generative processes when designing algorithms robust to distribution shifts [Veitch et al., 2021, Makar et al., 2022]. However, previous work has largely focused on regularizing for the domain invariance property, ignoring the conditional independence property Zdg ⊥⊥ Zspu | Y, e.

Sufficiency of ERM under Fully Informative Invariant Features. Despite the known challenges of learning domain-general features from the domain-invariance property in practice, this approach persists, likely because it is the only property shared across all the DAGs. We alleviate this constraint by observing that the graph in Fig. 2c falls under what Ahuja et al. [2021] refer to as the fully informative invariant features setting, meaning that Zspu is redundant, carrying only information about Y that is already in Zdg. Ahuja et al. [2021] show that the empirical risk minimizer is domain-general for bounded features.

Easy vs. hard DAGs imply the generality of TCRI. Consequently, we categorize the generative processes into easy and hard cases (Table 1): (i) easy, meaning that minimizing average risk gives domain-general solutions, i.e., ERM is sufficient (Fig. 2c), and (ii) hard, meaning that one needs to identify Zdg to obtain domain-general solutions (Figs. 2a-2b). We show empirically that regularizing for Zdg ⊥⊥ Zspu | Y ∀e ∈ E also gives a domain-general solution in the easy case. The generality of TCRI follows from its sufficiency for identifying the domain-general Zdg in the hard cases while still giving domain-general solutions empirically in the easy case.

4 Proposed Learning Framework

We have now clarified that the hard DAGs (i.e., those not solved by ERM) share the TCRI property. The challenge is that Zdg and Zspu are not independently observed; otherwise, one could regularize directly. Existing work such as Kaur et al. [2022] empirically studies semi-synthetic datasets where Zspu is (partially) observed and directly learns Zdg by regularizing so that Φ(X) ⊥⊥ Zspu | Y, e for feature extractor Φ. To our knowledge, we are the first to leverage the TCRI property without requiring observation of Zspu. Next, we set up our approach with some key assumptions. The first is that the observed distributions are Markov to an appropriate DAG.

Assumption 4.1.
All distributions, sources and targets, are generated by one of the following structural causal models (SCMs):

causal:
$$\mathrm{SCM}^{(e)} := \begin{cases} Z^{(e)}_{dg} \sim P^{(e)}_{Z_{dg}}, \\ Y^{(e)} \leftarrow \langle w^{*}_{dg}, Z^{(e)}_{dg} \rangle + \eta_Y, \\ Z^{(e)}_{spu} \leftarrow \langle w^{*}_{spu}, Y \rangle + \eta^{(e)}_{Z_{spu}}, \\ X \leftarrow \Gamma(Z_{dg}, Z_{spu}), \end{cases} \quad (2)$$

anticausal:
$$\mathrm{SCM}^{(e)} := \begin{cases} Y^{(e)} \sim P_Y, \\ Z^{(e)}_{dg} \leftarrow \langle \tilde{w}_{dg}, Y \rangle + \eta^{(e)}_{Z_{dg}}, \\ Z^{(e)}_{spu} \leftarrow \langle w^{*}_{spu}, Y \rangle + \eta^{(e)}_{Z_{spu}}, \\ X \leftarrow \Gamma(Z_{dg}, Z_{spu}), \end{cases} \quad (3)$$

FIIF:
$$\mathrm{SCM}^{(e)} := \begin{cases} Z^{(e)}_{dg} \sim P^{(e)}_{Z_{dg}}, \\ Y^{(e)} \leftarrow \langle w^{*}_{dg}, Z^{(e)}_{dg} \rangle + \eta_Y, \\ Z^{(e)}_{spu} \leftarrow \langle w^{*}_{spu}, Z_{dg} \rangle + \eta^{(e)}_{Z_{spu}}, \\ X \leftarrow \Gamma(Z_{dg}, Z_{spu}), \end{cases} \quad (4)$$

where $P_{Z_{dg}}$ is the causal covariate distribution, the w's are linear generative mechanisms, the η's are exogenous independent noise variables, and Γ : 𝒵dg × 𝒵spu → 𝒳 is an invertible function. It follows from having causal mechanisms that we can learn a predictor $w^{*}_{dg}$ for Zdg that is domain-general (Equations 2-4) – $w^{*}_{dg}$ inverts the mapping $\tilde{w}_{dg}$ in the anticausal case. These structural causal models (Equations 2-4) correspond to the causal graphs in Figures 2a-2c, respectively.

Assumption 4.2 (Structural). Causal graphs and their distributions are Markov and faithful [Pearl, 2010].

Given Assumption 4.2, we aim to leverage the TCRI property (Zdg ⊥⊥ Zspu | Y ∀e ∈ Etr) to learn the latent Zdg without observing Zspu directly. We do this by learning two feature extractors that, together, recover Zdg and Zspu and satisfy TCRI (Figure 3). We formally define these properties as follows.

Definition 4.3 (Total Information Criterion (TIC)). Φ = Φdg ⊕ Φspu satisfies TIC with respect to random variables X, Y, e if, for Φ(X^e) = [Φdg(X^e); Φspu(X^e)], there exists a linear operator T s.t. T(Φ(X^e)) = [Z^e_dg; Z^e_spu] ∀e ∈ Etr.

[Figure 3: Modeling approach. During training, both representations, Φdg and Φspu, generate domain-general and domain-specific predictions, respectively. However, only the domain-invariant representations/predictions are used during testing – indicated by the solid red arrows.]

In other words, a feature extractor that satisfies the total information criterion recovers the complete latent feature sets Zdg, Zspu. This allows us to define the proposed implementation of the TCRI property non-trivially – the conditional independence of subsets of the latents may not have the same implications for domain generalization. We note that X ⊥⊥ Y | Zdg, Zspu, so X has no information about Y that is not in Zdg, Zspu.

Definition 4.4 (Target Conditioned Representation Independence). Φ = Φdg ⊕ Φspu satisfies TCRI with respect to random variables X, Y, e if Φdg(X) ⊥⊥ Φspu(X) | Y ∀e ∈ E.

Proposition 4.5. Assume that Φdg(X) and Φspu(X) are correlated with Y. Given Assumptions 4.1-4.2 and a representation Φ = Φdg ⊕ Φspu that satisfies TIC, Φdg(X) = Zdg ⇐⇒ Φ satisfies TCRI (see Appendix C for proof).
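Figure 3's two-branch architecture can be sketched minimally as follows; this is a hedged PyTorch rendering under our reading of the figure (module shapes, layer choices, and names are illustrative, not the paper's exact implementation):

```python
import torch
import torch.nn as nn

class TCRIModel(nn.Module):
    """Sketch of Figure 3: feature extractors Φdg and Φspu, a shared
    domain-general head θc on Φdg, and one domain-specific head θe per
    training domain acting on the concatenation Φ = [Φdg; Φspu]."""
    def __init__(self, d_in, m, o, n_classes, n_domains):
        super().__init__()
        self.phi_dg = nn.Sequential(nn.Linear(d_in, m), nn.ReLU())
        self.phi_spu = nn.Sequential(nn.Linear(d_in, o), nn.ReLU())
        self.theta_c = nn.Linear(m, n_classes)  # the only head used at test time
        self.theta_e = nn.ModuleList(
            [nn.Linear(m + o, n_classes) for _ in range(n_domains)])

    def forward(self, x, e=None):
        z_dg, z_spu = self.phi_dg(x), self.phi_spu(x)
        y_c = self.theta_c(z_dg)
        y_e = (self.theta_e[e](torch.cat([z_dg, z_spu], dim=-1))
               if e is not None else None)
        return z_dg, z_spu, y_c, y_e
```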
Proposition 4.5 shows that TCRI is necessary and sufficient to identify Zdg from a set of training domains. We note that we can verify whether Φdg(X) and Φspu(X) are correlated with Y by checking whether the learned predictors perform better than chance. Next, we describe our proposed algorithm implementing the conditions needed to learn such a feature map. Figure 3 illustrates the learning framework.

Learning Objective: The first term in our proposed objective is $L_{\Phi_{dg}} = R^e(\theta_c \circ \Phi_{dg})$, where Φdg : 𝒳 ↦ R^m is a feature extractor, θc : R^m ↦ 𝒴 is a linear predictor, and $R^e(\theta_c \circ \Phi_{dg}) = \mathbb{E}\left[\ell(y, \theta_c \cdot \Phi_{dg}(x))\right]$ is the empirical risk achieved by the feature extractor and predictor pair on samples from domain e. Φdg and θc are designed to capture the domain-general portion of the framework.

Next, to implement the total information criterion, we use another feature extractor Φspu : 𝒳 ↦ R^o, designed to capture the domain-specific information in X that is not captured by Φdg. Together, we have Φ = Φdg ⊕ Φspu, where Φ has domain-specific predictors θe : R^{m+o} ↦ 𝒴 for each training domain, allowing the feature extractor to utilize domain-specific information to learn distinct optimal domain-specific (non-general) predictors: $L_{\Phi} = R^e(\theta_e \circ \Phi)$. $L_{\Phi}$ aims to ensure that Φdg and Φspu capture all of the information about Y in X – the total information criterion. Since we do not know o and m, we select them to be the same size in our experiments; o and m could be treated as hyperparameters, though we do not treat them as such.

Finally, we implement the TCRI property (Definition 4.4). We denote by $L_{TCRI}$ a conditional independence penalty for Φdg and Φspu. We utilize the Hilbert-Schmidt Independence Criterion (HSIC) [Gretton et al., 2007] as $L_{TCRI}$; however, in principle, any conditional independence penalty can be used in its place. HSIC:

$$L_{TCRI}(\Phi_{dg}, \Phi_{spu}) = \frac{1}{2} \sum_{k \in \{0,1\}} \widehat{\mathrm{HSIC}}\left(\Phi_{dg}(X), \Phi_{spu}(X)\right)^{y=k} = \frac{1}{2} \sum_{k \in \{0,1\}} \frac{1}{n_k^2} \, \mathrm{tr}\left(K_{\Phi_{dg}} H_{n_k} K_{\Phi_{spu}} H_{n_k}\right)^{y=k},$$

where k indicates which class the examples in the estimate correspond to, C is the number of classes, $K_{\Phi_{dg}} \in \mathbb{R}^{n_k \times n_k}$ and $K_{\Phi_{spu}} \in \mathbb{R}^{n_k \times n_k}$ are Gram matrices with $K^{i,j}_{\Phi_{dg}} = \kappa(\Phi_{dg}(X)_i, \Phi_{dg}(X)_j)$ and $K^{i,j}_{\Phi_{spu}} = \omega(\Phi_{spu}(X)_i, \Phi_{spu}(X)_j)$, the kernels κ, ω are radial basis functions, $H_{n_k} = I_{n_k} - \frac{1}{n_k}\mathbf{1}\mathbf{1}^{\top}$ is a centering matrix, $I_{n_k}$ is the $n_k \times n_k$ identity matrix, $\mathbf{1}_{n_k}$ is the $n_k$-dimensional all-ones vector, and ⊤ denotes the transpose. We condition on the label by taking only the examples of each label, computing the empirical HSIC, and then averaging.

Taken together, the full objective to be minimized is

$$L = \frac{1}{|\mathcal{E}_{tr}|} \sum_{e \in \mathcal{E}_{tr}} \left[ R^e(\theta_c \circ \Phi_{dg}) + R^e(\theta_e \circ \Phi) + \beta L_{TCRI}(\Phi_{dg}, \Phi_{spu}) \right],$$

where β > 0 is a hyperparameter and |Etr| is the number of training domains. Figure 3 shows the full framework. We note that when β = 0, this loss reduces to ERM. Note that while we minimize this objective with respect to Φ, θc, θ1, ..., θ|Etr|, only the domain-general representation and its predictor, θc ◦ Φdg, are used for inference.
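Continuing the TCRIModel sketch above, a hedged rendering of the class-conditional HSIC penalty and the combined objective (biased HSIC estimator with RBF kernels; the fixed bandwidth and the count guard are our illustrative choices, not the paper's exact settings):

```python
import torch
import torch.nn.functional as F

def rbf_gram(z, sigma=1.0):
    # K_ij = exp(-||z_i - z_j||^2 / (2 sigma^2))
    return torch.exp(-torch.cdist(z, z).pow(2) / (2 * sigma ** 2))

def hsic(a, b):
    """Biased empirical HSIC: (1/n^2) tr(K_a H K_b H)."""
    n = a.shape[0]
    h = torch.eye(n, device=a.device) - torch.full((n, n), 1.0 / n, device=a.device)
    return torch.trace(rbf_gram(a) @ h @ rbf_gram(b) @ h) / n ** 2

def l_tcri(z_dg, z_spu, y, classes=(0, 1)):
    """Condition on the label: average the per-class HSIC estimates."""
    vals = [hsic(z_dg[y == k], z_spu[y == k])
            for k in classes if (y == k).sum() > 1]
    return torch.stack(vals).mean()

def tcri_loss(model, batches, beta):
    """L = (1/|Etr|) Σ_e [ R_e(θc∘Φdg) + R_e(θe∘Φ) + β L_TCRI(Φdg, Φspu) ]."""
    terms = []
    for e, (x, y) in enumerate(batches):  # one batch per training domain
        z_dg, z_spu, y_c, y_e = model(x, e)
        terms.append(F.cross_entropy(y_c, y)
                     + F.cross_entropy(y_e, y)
                     + beta * l_tcri(z_dg, z_spu, y))
    return torch.stack(terms).mean()
```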
5 Experiments

We begin by evaluating with simulated data, i.e., with known ground-truth mechanisms; we use Equation 5 to generate our simulated data, with domain parameter $\sigma_{e_i}$; code is provided in the supplemental materials.

$$\mathrm{SCM}(e_i) := \begin{cases} Z^{(e_i)}_{dg} \sim \mathcal{N}(0, \sigma^2_{e_i}), \\ Y^{(e_i)} = Z^{(e_i)}_{dg} + \mathcal{N}(0, \sigma^2_y), \\ Z^{(e_i)}_{spu} = Y^{(e_i)} + \mathcal{N}(0, \sigma^2_{e_i}). \end{cases} \quad (5)$$

Table 2: Continuous simulated results – feature extractor with a dummy predictor θc = 1, i.e., ŷ = x · Φdg · w, where x ∈ R^{N×2}, Φdg, Φspu ∈ R^{2×1}, w ∈ R. Oracle indicates the coefficients achieved by regressing y on zc directly.

    Algorithm   (Φdg)0 (i.e., Zdg weight)   (Φdg)1 (i.e., Zspu weight)
    ERM         0.29                        0.71
    IRM         0.28                        0.71
    TCRI        1.01                        0.06
    Oracle      1.04                        0.00

We observe 2 domains with parameters σ_{e=0} = 0.1, σ_{e=1} = 0.2, with σ_y = 0.25, 5000 samples, and linear feature extractors and predictors. We use partial covariance as our conditional independence penalty L_TCRI. Table 2 shows the learned value of Φdg, where 'Oracle' indicates the true coefficients obtained by regressing Y on the domain-general Zdg directly. The ideal Φdg recovers Zdg and puts zero weight on Zspu. Now, we evaluate the efficacy of our proposed objective on non-simulated datasets.

5.1 Semi-synthetic and Real-World Datasets

Algorithms: We compare our method to baselines corresponding to DAG properties: Empirical Risk Minimization (ERM [Vapnik, 1991]), Invariant Risk Minimization (IRM [Arjovsky et al., 2019]), Variance Risk Extrapolation (V-REx [Krueger et al., 2021]), Group Distributionally Robust Optimization (GroupDRO [Sagawa et al., 2019]), and Information Bottleneck methods (IB_ERM/IB_IRM [Ahuja et al., 2021]). Additional baseline methods are provided in Appendix A. We evaluate our proposed method on the semi-synthetic ColoredMNIST [Arjovsky et al., 2019] and the real-world Terra Incognita dataset [Beery et al., 2018]. Given observed domains Etr = {e : 1, 2, ..., Etr}, we train on Etr \ ei and evaluate the model on the unseen domain ei, for each ei ∈ Etr.

ColoredMNIST: The ColoredMNIST dataset [Arjovsky et al., 2019] is composed of 7000 (2 × 28 × 28, 1) images of a hand-written digit and binary-label pairs. There are three domains with different correlations between image color and label, i.e., the image color is spuriously related to the label by assigning a color to each of the two classes (0: digits 0-4, 1: digits 5-9). The color is then flipped with probabilities {0.1, 0.2, 0.9} to create three domains, making the color-label relationship domain-specific because it changes across domains. There is also label flip noise of 0.25, so we expect that the best accuracy a domain-general model can achieve is 75%, while a non-domain-general model can achieve higher. In this dataset, Zdg corresponds to the original image, Zspu the color, e the label-color correlation, Y the image label, and X the observed colored image. This DAG follows the generative process of Figure 2a [Arjovsky et al., 2019].

Spurious PACS: Variables. X: images; Y: non-urban (elephant, giraffe, horse) vs. urban (dog, guitar, house, person). Domains: {{cartoon, art painting}, {art painting, cartoon}, {photo}} [Li et al., 2017]. The photo domain is the same as in the original dataset.
In the {cartoon, art painting} domain, urban examples are selected from the original cartoon domain, while non-urban examples are selected from the original art painting domain. In the {art painting, cartoon} domain, urban examples are selected from the original art painting domain, while non-urban examples are selected from the original cartoon domain. This sampling encourages the model to use spurious correlations (domain-related information) to predict the labels; however, since these relationships are flipped between the domains {cartoon, art painting} and {art painting, cartoon}, these predictions will be wrong when generalized to other domains.

Terra Incognita: The Terra Incognita dataset contains subsets of the Caltech Camera Traps dataset [Beery et al., 2018] defined by Gulrajani and Lopez-Paz [2020]. There are four domains representing different locations {L100, L38, L43, L46} of cameras in the American Southwest. There are 9 species of wild animals {bird, bobcat, cat, coyote, dog, opossum, rabbit, raccoon, squirrel} and an 'empty' (no-animal) class to be predicted. Like Ahuja et al. [2021], we classify this dataset as following the generative process in Figure 2c, the Fully Informative Invariant Features (FIIF) setting. Additional details on model architecture, training, and hyperparameters are provided in Appendix 5.

Model Selection. The standard approach to model selection is hold-out validation-set accuracy on the training domains. We find that model selection across hyperparameters using this held-out training-domain validation accuracy often returns non-domain-general models in the 'hard' cases. One advantage of our model is that we can do model selection based on the TCRI condition (conditional independence between the two representations) on held-out training-domain validation examples to mitigate this challenge. In the easy case, we expect the empirical risk minimizer to be domain-general, so selecting the best-performing training-domain model is sound – we additionally do this for all baselines (see Appendix A.1 for further discussion). We find that, empirically, this heuristic works in the examples we study in this work. Nevertheless, model selection under distribution shift remains a significant bottleneck for domain generalization.

5.2 Results and Discussion

Table 3: E \ etest → etest (model selection on held-out source-domain validation sets). The 'average' column reports the mean generalization accuracy over all domains, each treated as etest in turn; the 'worst-case' column reports the worst generalization accuracy.
    Algorithm    ColoredMNIST (average / worst-case)   Spurious PACS (average / worst-case)   Terra Incognita (average / worst-case)
    ERM          51.6 ± 0.1 / 10.0 ± 0.1               57.2 ± 0.7 / 31.2 ± 1.3                44.2 ± 1.8 / 35.1 ± 2.8
    IRM          51.7 ± 0.1 / 9.9 ± 0.1                54.7 ± 0.8 / 30.3 ± 0.3                38.9 ± 3.7 / 32.6 ± 4.7
    GroupDRO     52.0 ± 0.1 / 9.9 ± 0.1                58.5 ± 0.4 / 37.7 ± 0.7                47.8 ± 0.9 / 39.9 ± 0.7
    VREx         51.7 ± 0.2 / 10.2 ± 0.0               58.8 ± 0.4 / 37.5 ± 1.1                45.1 ± 0.4 / 38.1 ± 1.3
    IB_ERM       51.5 ± 0.2 / 10.0 ± 0.1               56.3 ± 1.1 / 35.5 ± 0.4                46.0 ± 1.4 / 39.3 ± 1.1
    IB_IRM       51.7 ± 0.0 / 9.9 ± 0.0                55.9 ± 1.2 / 33.8 ± 2.2                37.0 ± 2.8 / 29.6 ± 4.1
    TCRI_HSIC    59.6 ± 1.8 / 45.1 ± 6.7               63.4 ± 0.2 / 62.3 ± 0.2                49.2 ± 0.3 / 40.4 ± 1.6

Table 4: Total Information Criterion: Domain-General (DG) and Domain-Specific (DS) accuracies. The DG classifier is shared across all training domains, and the DS classifiers are trained on each training domain. Columns indicate the domain from which the held-out examples are sampled; each row indicates the test domain and which predictor (DG, or the DS predictor of a training domain) is used. {+90%, +80%, -90%} indicate the domains with {0.1, 0.2, 0.9} digit label and color correlation, respectively.

    Test Domain   Predictor     +90% ex.   +80% ex.   -90% ex.
    +90%          DG            68.7       69.0       68.5
    +90%          DS (+80%)     90.1       79.9       10.4
    +90%          DS (-90%)     9.8        20.1       89.9
    +80%          DG            63.1       62.4       64.4
    +80%          DS (+90%)     76.3       70.0       24.5
    +80%          DS (-90%)     24.3       30.4       76.3
    -90%          DG            65.6       63.4       44.1
    -90%          DS (+90%)     75.3       69.2       29.3
    -90%          DS (+80%)     75.3       69.5       26.0

Table 5: TIC ablation for ColoredMNIST.

    Algorithm              average       worst-case
    TCRI_HSIC (No TIC)     51.8 ± 5.9    27.7 ± 8.9
    TCRI_HSIC              59.6 ± 1.8    45.1 ± 6.7

Worst-domain Accuracy. A critical implication of domain generality is stability – robustness in worst-domain performance up to domain difficulty. While average accuracy across domains provides some insight into an algorithm's ability to generalize to new domains, the average hides the variance of performance across domains. Average improvement can increase while the worst-domain accuracy stays the same or decreases, leading to incorrect conclusions about domain generalization. Additionally, in real-world challenges such as algorithmic fairness, where worst-group performance is considered, some metrics of fairness are analogous to achieving domain generalization [Creager et al., 2021].

Results. TCRI achieves the highest average and worst-case accuracy across all baselines (Table 3). We find that no method recovers the exact domain-general model's accuracy of 75%. However, TCRI achieves over a 7% increase in both average and worst-case accuracy. Appendix A.2 shows transfer accuracies with cross-validation on held-out test-domain examples (oracle), and TCRI again outperforms all baselines, achieving an average accuracy of 70.0% ± 0.4% and a worst-case accuracy of 65.7% ± 1.5%, showing that regularizing for TCRI gives very close to optimal domain-general solutions. Similarly, for the Spurious-PACS dataset, we observe that TCRI outperforms the baselines. TCRI achieves the highest average accuracy of 63.4% ± 0.2 and worst-case accuracy of 62.3% ± 0.1, with the next best, VREx, achieving 58.8 ± 1.0 and 33.8 ± 0.0, respectively.
Additionally, for the Terra Incognita dataset, TCRI achieves the highest average and worst-case accuracies of 49.2% ± 0.3% and 40.4% ± 1.6%, with the next best, GroupDRO, achieving 47.8 ± 0.9 and 39.9 ± 0.7, respectively. Appendix A.2 shows transfer accuracies with cross-validation on held-out target-domain examples (oracle), where we observe that TCRI also obtains the highest average and worst-case accuracy for Spurious-PACS and Terra Incognita. Overall, regularizing for TCRI gives the most domain-general solutions compared to our baselines, achieving the highest worst-case accuracy on all benchmarks. Additionally, TCRI achieves the highest average accuracy on ColoredMNIST and Spurious-PACS and the second highest on Terra Incognita, where we expect the empirical risk minimizer to be domain-general. Additional results are provided in Appendix A.

The Effect of the Total Information Criterion. Without the TIC loss term, our proposed method is less effective. Table 5 shows that for ColoredMNIST, the hardest 'hard' case we encounter, removing the TIC criterion performs worse in average and worst-case accuracy, dropping over 8% and 18%, respectively.

Separation of Domain-General and Domain-Specific Features. In the case of ColoredMNIST, we can reason about the extent of feature disentanglement from the accuracies achieved by the domain-general and domain-specific predictors. Table 4 shows how much each component of Φ, Φdg and Φspu, behaves as expected. For each domain, we observe that the domain-specific predictors' accuracies follow the same trend as the color-label correlation, indicating that they capture the color-label relationship. The domain-general predictor, however, does not follow such a trend, indicating that it is not using color as the predictor. For example, when evaluating the domain-specific predictors from the +90% test-domain experiment (row +90%) on held-out examples from the +80% training domain, we find that the +80% domain-specific predictor achieves an accuracy of 79.9% – exactly what one would expect from a predictor that uses a color correlation with the same direction '+'. Conversely, the -90% predictor achieves an accuracy of 20.1%, exactly what one would expect from a predictor that uses a color correlation with the opposite direction '-'. The -90% domain has the opposite label-color pairing, so a color-based classifier will give the opposite label in any '+' domain. Another advantage of this method, exemplified by Table 4, is that if one believes a particular target domain is close to one of the training domains, one can opt to use that domain's domain-specific predictor and leverage spurious information to improve performance.

On Benchmarking Domain Generalization. Previous work on benchmarking domain generalization showed that across standard benchmarks, the domain-unaware empirical risk minimizer outperforms or achieves equivalent performance to state-of-the-art domain generalization methods [Gulrajani and Lopez-Paz, 2020]. Additionally, Rosenfeld et al. [2022] give results showing weak conditions that define regimes where the empirical risk minimizer across domains is optimal in both average and worst-case accuracy.
Consequently, to accurately evaluate our work and baselines, we focus on settings where it is clear that (i) the empirical risk minimizer fails, (ii) spurious features, as we have defined them, do not generalize across the observed domains, and (iii) there is room for improvement via better domain-general predictions. We discuss this point further in Appendix A.1.

Oracle Transfer Accuracies. While model selection is an integral part of the machine learning development cycle, it remains a non-trivial challenge when there is distribution shift. While we have proposed a selection process tailored to our method that can be generalized to other methods with an assumed causal graph, we acknowledge that model selection under distribution shift is still an important open problem. Consequently, we disentangle this challenge from the learning problem and evaluate an algorithm's capacity to give domain-general solutions independently of model selection. We report experimental results using held-out test-set examples for model selection in Appendix A, Table 6. We find that our method, TCRI_HSIC, also outperforms baselines in this setting.

6 Conclusion and Future Work

We reduce the gap in learning domain-general predictors by leveraging conditional independence properties implied by generative processes to identify domain-general mechanisms. We do this without independent observations of the domain-general and spurious mechanisms, and we show that our framework outperforms other state-of-the-art domain-generalization algorithms on real-world datasets in average and worst-case accuracy across domains. Future work includes further improvements to the framework to fully recover the strict set of domain-general mechanisms, as well as model selection strategies that preserve desired domain-general properties.

Acknowledgements OS was partially supported by the UIUC Beckman Institute Graduate Research Fellowship, NSF-NRT 1735252. This work is partially supported by NSF III 2046795, IIS 1909577, CCF 1934986, NIH 1R01MH116226-01A, NIFA award 2020-67021-32799, the Alfred P. Sloan Foundation, and Google Inc.", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.01532v1", |
| "title": "Set-Aligning Framework for Auto-Regressive Event Temporal Graph Generation", |
| "abstract": "Event temporal graphs have been shown as convenient and effective\nrepresentations of complex temporal relations between events in text. Recent\nstudies, which employ pre-trained language models to auto-regressively generate\nlinearised graphs for constructing event temporal graphs, have shown promising\nresults. However, these methods have often led to suboptimal graph generation\nas the linearised graphs exhibit set characteristics which are instead treated\nsequentially by language models. This discrepancy stems from the conventional\ntext generation objectives, leading to erroneous penalisation of correct\npredictions caused by the misalignment of elements in target sequences. To\naddress these challenges, we reframe the task as a conditional set generation\nproblem, proposing a Set-aligning Framework tailored for the effective\nutilisation of Large Language Models (LLMs). The framework incorporates data\naugmentations and set-property regularisations designed to alleviate text\ngeneration loss penalties associated with the linearised graph edge sequences,\nthus encouraging the generation of more relation edges. Experimental results\nshow that our framework surpasses existing baselines for event temporal graph\ngeneration. Furthermore, under zero-shot settings, the structural knowledge\nintroduced through our framework notably improves model generalisation,\nparticularly when the training examples available are limited.", |
| "authors": "Xingwei Tan, Yuxiang Zhou, Gabriele Pergola, Yulan He", |
| "published": "2024-04-01", |
| "updated": "2024-04-01", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.IR" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Set-Aligning Framework for Auto-Regressive Event Temporal Graph Generation", |
| "main_content": "Introduction Understanding the temporal relation between events mentioned in long documents is crucial to modelling complex text with articulated narratives. One of the widely adopted benchmarks for event temporal relation understanding is the SemEval 2013 TempEval-3 (UzZaman et al., 2013), requiring end-to-end generation of event temporal graphs 1For access to experimental code and data, please refer to: https://github.com/Xingwei-Warwick/Set-Aligning-EventTemporal-Graph-Generation directly from raw text. An event temporal graph is a natural representation of temporal information, with the nodes representing events and the edges the temporal relationships between them, such as \u201cbefore\u201d, \u201cafter\u201d, or \u201csimultaneous\u201d. Existing studies typically approach the problem of constructing event temporal graphs through a two-step pipeline, with the first step focusing on detecting events in text, and the second step on classifying the temporal relations between them (McDowell et al., 2017; Ning et al., 2018b). However, such pipeline-based approaches suffer from wellknown limitations, including (i) the need for finegrained annotations at each step; and (ii) the potential for error propagation throughout the pipeline. In the first step, the event extractor aims at locating as many event triggers as possible in the given documents, leading to the inclusion of numerous trivial events that often lack relevance to the narrative and have no relation with other events. As a result, the next step for temporal relational extraction becomes burdened with many noisy events, significantly impacting the overall accuracy and efficiency of the models. To address these limitations, Madaan and Yang (2021) introduced a reformulation of the task by generating event temporal graphs directly through conditional text generation. This approach allows for the use of pre-trained language models and, more importantly, overcomes the typical limitations associated with the pipeline architecture. While this method involved fine-tuning a text generation model, such as GPT-2, for the generation of linearised event temporal graphs as sequences, it fails to consider an important aspect. Specifically, it does not account for the fact that the target sequence (i.e. the list of event temporal relations) is order-invariant, and should therefore be treated as a set rather than as an ordered sequence. For example, the following two sequences represent the same temporal graph: arXiv:2404.01532v1 [cs.CL] 1 Apr 2024 \fS1: [(Cuomo leaving his office, before, speak to reporters), \u00b7 \u00b7 \u00b7 (Cuomo leaving, before, met with representatives)] S2: [(Cuomo leaving, before, met with representatives), \u00b7 \u00b7 \u00b7 (Cuomo leaving his office, before, speak to reporters)] In this scenario, the conventional text generation loss will (mistakenly) yield a high value because most of the tokens in the corresponding positions do not match, even though the event relations are the same. This issue has a detrimental effect on the model performance for several reasons. First, it discourages the language model from generating additional edges. Generating more edges implies a greater number of potential permutations in the edge sets, making it less likely to match the target. Secondly, if the initially generated edge in the sequence differs in token count from the one in the target, it causes all subsequent edges to misalign with the target, even if they are identical, leading to a high loss value. 
In this work, we propose a Set-Aligning Framework (SAF) that enables efficient employment of LLMs for auto-regressive event temporal graph generation. SAF incorporates a group of novel regularisations, named Set Property Regularisations (SPR), along with augmented data, which aim at tackling the problems associated with the use of the LM loss in contextualised graph generation by mitigating its order-sensitive penalisation of target sequences. For example, S1 and S2 above are different serialisations of the same edge set. Even if S1 has the same order as the target edge sequence and thus has a lower LM loss than S2, both of them will incur the same SPR penalty. Therefore, the relative difference of their loss values becomes smaller, which avoids overfitting the model towards the specific edge order of S1. Moreover, if the model explores generating one more edge after S2 and the edge is correct, the SPR value will decrease while the LM loss will probably increase.

Using the proposed SAF, we fine-tune language models from the T5 (Raffel et al., 2020) family with weak supervision. Additionally, we introduce the first human-annotated dataset for contextualised event temporal graph generation built on the New York Times corpus (https://doi.org/10.35111/77ba-9x74), which we combine with existing event relation extraction datasets to evaluate the effectiveness of the SAF framework. Experiments on the newly annotated New York Times corpus show that SAF significantly increases the number of generated edges, resulting in improved recall. Furthermore, we assess the performance of our approach on existing sentence-level event temporal relation extraction datasets, namely MATRES (Ning et al., 2018a) and TB-Dense (Cassidy et al., 2014), under zero-shot settings, and we find that the structural knowledge introduced through the proposed SAF has an even greater impact on model generalisation when the training examples available are limited.

Our contributions are three-fold:
• We introduce a model-agnostic framework, called SAF, for event temporal graph generation. SAF incorporates novel set-aligning regularisations, data augmentation, and weak supervision techniques.
• We offer a human-annotated test set and a weakly-supervised dataset specifically designed for document-level event temporal graph generation.
• Our extensive experimental results in various settings demonstrate the effectiveness of our proposed model. Our thorough analysis shows that our SAF framework encourages language models to generate at least 24% more edges than previous graph generation approaches across various datasets.

2 Related Work

2.1 Event Temporal Graph

The task of event temporal graph extraction serves as an important benchmark for evaluating end-to-end systems that take raw text as input and output TimeML annotations (i.e., temporal relations) (UzZaman et al., 2013). Early attempts at the task include CAEVO (McDowell et al., 2017) and Cogcomptime (Ning et al., 2018b), which relied on a combination of statistical and rule-based methods. In recent years, more efforts have been put into developing specialised sub-systems with neural network-based approaches (Ning et al., 2019; Han et al., 2019a; Tan et al., 2021a). The emergence of large language models has paved the way for end-to-end learning, treating temporal graph generation as conditional text generation (Madaan and Yang, 2021).
To tackle the set misalignment issue, which remained unexplored in Madaan and Yang (2021), we propose a framework based on a group of novel regularisations, aiming at enhancing auto-regressive event temporal graph generation. It is worth noting that there is another related and more widely recognised task called temporal relation extraction, which aims at classifying the type of temporal links between pre-extracted events (Wang et al., 2020; Wen and Ji, 2021; Tan et al., 2023). While Han et al. (2019b) proposed a joint extraction model for events and event temporal relations, they rely on event extraction supervision signals, which our work does not need.

2.2 Graph Generation with Language Models

Generating graphs with language models has been explored in many areas. For example, Bosselut et al. (2019) fine-tune GPT on the ATOMIC commonsense knowledge graph (Sap et al., 2019). Melnyk et al. (2022) proposed a multi-stage system for knowledge graph generation based on T5. However, these studies do not generate an entire graph in a single generation pass. In contrast, Madaan et al. (2021) generated inference graphs using a combination of a graph generator and a graph corrector for queries in defeasible reasoning. Zaratiana et al. (2023) generate entities and entity relations with an autoregressive LM, but they did not consider the set property of the target. Different from them, we focus on the set property of the generated sequence, which is particularly important in the setting where both the input document and the output sequence are considerably longer.

2.3 Conditional Set Generation

Text generation models are primarily designed for generating text with strict linear orders, making them suboptimal for generating sets. This limitation has been acknowledged in recent NLP research, where efforts have been made to adapt seq2seq frameworks for tasks like multi-label classification and keyword generation (Qin et al., 2019; Ye et al., 2021). Vinyals et al. (2016) studied the general challenge of using sets as either input or target output for text generation models. They found that in both cases, the order of elements in the set has a significant impact on convergence and final perplexity. This implies that there may exist an optimal order for the input or output set sequence, and they proposed allowing the model to search for this order during training. Instead of resorting to exhaustive search, Madaan et al. (2022) proposed a data augmentation method to enforce order-invariance and prepend the set's cardinality to the target sequence to ensure the correct cardinality. While previous research has tackled multi-label prediction and keyphrase generation, our work delves into the unique challenges presented by event temporal graph generation, which involves long sequences and partially ordered properties.

In a more general sense, the object detection task in computer vision also involves set prediction (Chen et al., 2022). Carion et al. (2020) use parallel decoding to generate the elements of a set based on object queries. Tan et al. (2021b) adopted a similar approach in named entity recognition with a non-autoregressive decoder. Different from entities in images, the set elements (event relations) in our task are not concrete spatial objects or text spans but instead vary in length and are scattered across each document. This makes object queries and non-autoregressive decoding inapplicable in our settings.
3 Set-Aligning Framework

Madaan and Yang (2021) first explored the possibility of end-to-end event temporal graph generation using neural language modelling. Since then, however, this task has remained under-explored, with numerous unresolved issues. To elaborate: the first concern is that Madaan and Yang (2021) framed graph generation as a conventional sequence generation problem, whereas it is fundamentally a set generation problem. Secondly, the dataset they built primarily consists of small-sized graphs, failing to challenge the model in terms of document-level understanding. Lastly, their investigation mainly centred on GPT-2, while the landscape of LLMs has evolved with the emergence of models featuring distinct structures (e.g., encoder-decoder) and new paradigms (e.g., in-context learning) in recent years. In this study, we address these three aspects to enhance the understanding of sequence-to-sequence temporal graph generation.

Although our proposed framework is designed to be model-independent, several factors have led us to choose Flan-T5 as the base model for our experiments: (i) based on our preliminary experiments, Flan-T5-base hits the sweet spot in terms of performance vs. resource consumption, allowing us to test more variants; (ii) its encoder-decoder structure is well suited to document-level graph generation, due to its efficiency in processing comprehensive information in lengthy documents.

3.1 Event Temporal Graph Modelling as Edge Set Generation

An event temporal graph is a directed graph with no isolated vertex. Each edge in the graph describes a temporal relation between two events, and self-loops are not permitted. Following Madaan and Yang (2021), we represent these graphs by linearising them into strings using the DOT graph description language (Gansner, 2006) (example shown in Figure 1).

[Figure 1: Set-Aligning framework (SAF). Components shown: an input document with marked events, the LLM, edge embeddings, gold and generated edge sets obtained by parsing the DOT output, the matching regularisation (R_Hausdorff, via Hausdorff distance), the cardinality regularisation (R_card), and the duplication regularisation (R_dupl), combined with the LM loss alongside augmented data.]

Given that event temporal graphs do not have isolated vertices, the sequence essentially represents the edge set of the graph. We model the probability of generating a string y, a linearised representation of the event temporal graph G, conditioned on a document X = (x1, x2, ..., xn) using a language model:

$$p_{LM}(y \mid X) = \prod_{t=1}^{T} p(y_t \mid X, y_{<t}), \quad (1)$$

where y is a string formatted in DOT notation.
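A minimal sketch of the DOT linearisation and the rule-based inverse parsing described above; the edge syntax follows the paper's Figure 1 example, while the regex and helper names are ours:

```python
import re

def graph_to_dot(edges):
    """Linearise an edge set into the DOT-format target string."""
    body = " ".join(f'"{h}" -> "{t}" [l = {r}];' for h, r, t in edges)
    return "digraph G { " + body + " }"

EDGE_RE = re.compile(r'"([^"]+)"\s*->\s*"([^"]+)"\s*\[l\s*=\s*(\w+)\]')

def dot_to_edges(dot: str):
    """Rule-based parser: recover (head, relation, tail) triplets."""
    return [(h, r, t) for h, t, r in EDGE_RE.findall(dot)]

dot = graph_to_dot([("Gov. Cuomo leaving his office", "before",
                     "To speak to reporters")])
assert dot_to_edges(dot) == [("Gov. Cuomo leaving his office", "before",
                              "To speak to reporters")]
```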
3.2 Data Augmentation

The target sequences of event temporal graph generation are essentially sets rather than strictly ordered text sequences. Therefore, a conventional text generation loss can inadvertently penalise the token order and force the arrangement of elements to match the order in the target sequence, which is not necessarily the optimal order. This enforced order may lead to sub-optimal performance (Vinyals et al., 2016). A potential solution is to introduce random permutations of set elements as augmented training examples, which has already been shown effective in tasks like multi-label classification and keyphrase generation (Madaan et al., 2022). Specifically, in the context of event temporal graph generation, the elements correspond to the edges in the target string. The substrings representing the edges are randomly shuffled, while the rest of the string remains unchanged. Prepending the set cardinality of the ground-truth edge set to the generation target may also help constrain the generation model and avoid over-generation (Madaan et al., 2022). However, such attempts in our preliminary experiments led to an approximate 4% drop in edge F1 score, despite a significant reduction in the number of generated edges. Thus, we decided not to incorporate the cardinality into the final framework.

3.3 Set Property Regularisations (SPR)

Simply adding augmented data to train models does not address the fundamental issue of set alignment. Several challenges arise in this approach. First of all, it is unrealistic to add all permutations, especially when dealing with long documents containing numerous event relations, as the training data would grow at a rate proportional to the factorial of the cardinality of the target set. More importantly, with each augmented example, the loss function would still penalise the unobserved permutations of the set. This would make the training unstable. The core challenge lies in finding an effective way to compare the linearised target graph with the linearised generated graph, without relying on a strict token-by-token comparison as in conventional text generation. To tackle this issue, we propose introducing modifications to the generation objective. As the linearised graph essentially represents the edge set of the graph, we can simplify the graph comparison problem into a set comparison problem. Our approach involves several components. Firstly, we add a set cardinality regularisation to encourage the model to generate an adequate number of temporal relation edges. Then, we introduce a duplication regularisation to penalise any repetition of elements in the edge set. Lastly, we design a set matching regularisation that assesses the semantic similarity between elements in the target edge set and those in the generated edge set. Collectively, the above regularisations are referred to as Set Property Regularisations (SPR). They are integrated with the token-level cross-entropy loss through a weighted average. To compute the set property regularisations, a graph string first needs to be sampled from the language model given a training input. Then, this sequence is parsed into a list of edges E, where each edge e is a triplet consisting of a head event, a relation type, and a tail event (h, r, t). The parsing is done with a rule-based parser which turns the graph text string into a structured data representation. Once the edges are loaded into a structured list, the number of edges and of duplicated edges can be counted.
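A minimal sketch of the two string-level operations just described: rule-based parsing of a DOT string into (h, r, t) triplets, and the edge-substring shuffling used for data augmentation. The regular expression and function names are our own illustration, assuming the edge syntax shown in Figure 1, and may differ from the released parser.

```python
import random
import re

# Matches edges of the form: "head" -> "tail" [l = relation];
EDGE_RE = re.compile(r'"([^"]+)"\s*->\s*"([^"]+)"\s*\[l\s*=\s*([^\]]+)\]\s*;?')

def parse_dot_edges(graph_string):
    """Parse a DOT graph string into a list of (head, relation, tail) triplets."""
    return [(h, r.strip(), t) for h, t, r in EDGE_RE.findall(graph_string)]

def permute_edges(graph_string, rng=random):
    """Shuffle the edge substrings while keeping the DOT wrapper unchanged."""
    edges = [m.group(0).strip().rstrip(";") for m in EDGE_RE.finditer(graph_string)]
    rng.shuffle(edges)
    return "digraph G { " + " ".join(e + ";" for e in edges) + " }"

edges = parse_dot_edges('digraph G { "e1" -> "e2" [l = before]; "e1" -> "e2" [l = before]; }')
n_duplicates = len(edges) - len(set(edges))  # feeds the duplication regularisation below
```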
Let $\bar{E}$ denote the set of all the unique edges in E. The values of the set cardinality regularisation and the duplication regularisation can be computed as follows:

$\bar{E} = \{e \mid e \in E\}$  (2)

$R_{\text{dupl}} = \frac{|E| - |\bar{E}|}{|E|}$  (3)

$R_{\text{card}} = \frac{\text{abs}(|E'| - |E|)}{|E'|}$  (4)

where abs(·) denotes the absolute value and E' denotes the ground-truth edge set. To compute the set matching regularisation, we assess the similarity between the generated set and the target set by comparing the semantic similarity of the edges across the two sets. We take the last-layer decoder representations of the respective tokens as the semantic representations of the events and the relation type. Then, we concatenate these representations to form the semantic representation of each edge:

$z_h = H_{[h_1, h_2, \ldots, h_m]}$  (5)

$z_r = H_{[r_1, r_2, \ldots, r_s]}$  (6)

$z_t = H_{[t_1, t_2, \ldots, t_n]}$  (7)

$\bar{e} = \left[\,\text{pool}(z_h);\ \text{pool}(z_r);\ \text{pool}(z_t)\,\right]$  (8)

where H is the last-layer hidden states of the decoder; $[h_1, \ldots, h_m]$, $[r_1, \ldots, r_s]$, and $[t_1, \ldots, t_n]$ are the indices of the head event, relation type, and tail event, respectively; $z_h$, $z_r$, $z_t$ denote the semantic representations of the head event, relation type, and tail event, respectively; pool(·) is the average pooling function; and $\bar{e}$ denotes the semantic representation of the edge. We now possess two sets of embeddings: one comprising the edge embeddings extracted from the target graph, and the other containing the edge embeddings derived from the generated graph. Essentially, they can be considered as two sets of points in the representation space. Thus, we can measure the similarity of the two graphs by measuring the distance between the two point sets (manifolds) in the representation space. The Hausdorff distance, originally defined to measure the separation between two subsets of a metric space, has recently found applications in machine learning for measuring the distance between two sets of embeddings (Schutze et al., 2012; Wang et al., 2023). We compute the average Hausdorff distance as the measure:

$d_H(E', E) = \frac{1}{|E'|} \sum_{\bar{e}' \in E'} \min_{\bar{e} \in E} d_{\cos}(\bar{e}', \bar{e}) + \frac{1}{|E|} \sum_{\bar{e} \in E} \min_{\bar{e}' \in E'} d_{\cos}(\bar{e}', \bar{e})$  (9)

where the distance between an edge pair is computed by the cosine distance $d_{\cos}(\cdot)$. When the model generates the set elements in a different order than the target sequence, the token-level cross-entropy loss will be high. If the model generates more correct elements as suffixes of the sequence in a wrong order, the loss value will probably increase further. However, the SPR will have a lower value and thus counteract the discouragement of generating more elements caused by the token-level cross-entropy loss.
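Putting the three terms together, the following sketch computes R_card, R_dupl, and the average-Hausdorff matching term for one sampled graph. The tensor shapes, the guards against empty sets, and the function names are our assumptions; the released implementation may differ in detail.

```python
import torch
import torch.nn.functional as F

def spr_terms(gen_edges, gold_size, gen_emb, gold_emb):
    """Set-property regularisation terms for one example.

    gen_edges: list of (h, r, t) triplets parsed from the sampled graph string
    gold_size: |E'|, the number of edges in the ground-truth set
    gen_emb:  (|E|, d) tensor of pooled edge embeddings of the generated edges
    gold_emb: (|E'|, d) tensor of pooled edge embeddings of the target edges
    """
    n_gen = max(len(gen_edges), 1)
    r_dupl = (len(gen_edges) - len(set(gen_edges))) / n_gen       # Eq. (3)
    r_card = abs(gold_size - len(gen_edges)) / max(gold_size, 1)  # Eq. (4)

    # Average Hausdorff distance with cosine distance, Eq. (9):
    # d_cos(e', e) = 1 - cosine similarity of the two edge embeddings.
    dist = 1.0 - F.normalize(gold_emb, dim=-1) @ F.normalize(gen_emb, dim=-1).T
    r_match = dist.min(dim=1).values.mean() + dist.min(dim=0).values.mean()
    return r_card, r_dupl, r_match
```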
3.4 Fine-tuning with Set Property Regularisations

Unlike the set prediction methods based on parallel decoding (Carion et al., 2020; Tan et al., 2021b), SPR cannot be directly used as the main objective in auto-regressive generation, for two primary reasons. First, obtaining the SPR requires sampling from the decoder, which would significantly reduce the training speed. Second, without the language modelling objective, the language model would struggle to generate sequences in the DOT format accurately, because learning the token dependencies of such a format requires that objective. Consequently, the sequence parser would fail to recognize any valid edges within the sequence, resulting in high SPR values and hindering the training. To avoid these problems, we introduce the SPR after a certain number of fine-tuning iterations. Once the model has acquired a basic proficiency in generating correct DOT sequences, the SPR can function as intended. SPR can prevent the language model from overfitting to the order of the target set shown in the training samples. We explored alternative approaches to incorporating SPR, but they yielded inferior performance compared to the method eventually included in our framework. We discuss those alternative methods in Appendix A.

4 Experiment

4.1 NYT Temporal Event Graph Dataset

Table 1: The statistics of the NYT temporal event graph dataset. Node degree represents the average number of relations each event has.

                    NYT-train    NYT-test    NYT-human
Total documents        18,263       1,000           22
Total events          846,022      47,251          661
Node degree              2.52        2.54         2.34
Total relations     1,066,264      60,056          528
  before              578,216      32,729          465
  after               412,704      23,200            0
  includes              7,922         450           12
  is_included          41,964       2,332            0
  simultaneous         25,458       1,345           51

There are several event temporal relation extraction datasets with pairwise event relation annotations, such as MATRES and TBD. It is theoretically possible to convert these annotations into document-level event temporal graphs. However, our preliminary experiments have shown that even merging all of these datasets (resulting in 4,684 training documents) is not sufficient to fine-tune a large language model to acceptable performance. To address this limitation, we opted to build a significantly larger dataset from a selection of data from the New York Times (NYT) corpus using a weak supervision approach, drawing inspiration from the work of Madaan and Yang (2021). Nevertheless, we introduced additional steps in the data selection process, not taken in Madaan and Yang (2021), to ensure that the selected documents contain high-quality event temporal graphs. Firstly, we performed topic modelling using Latent Dirichlet Allocation (LDA) on the MATRES and TBD datasets to extract a set of topics. Then, we identified general descriptors that are semantically similar to these topics (e.g., politics, diplomacy, sports). This selection process was crucial because, following training with noisy labels, our intention was to evaluate the model's performance on these datasets under zero-shot settings. We further analysed the most noteworthy events under these descriptors to ensure they were narrative-oriented, because articles that weave stories tend to contain a wealth of event temporal relations. To identify the most significant events, we employed a metric similar to TF·IDF, which can be described as "event frequency × inverse-descriptor frequency":

$\text{ef·idf} = \frac{f_{e,d}}{\sum_{e' \in d} f_{e',d}} \cdot \log \frac{|D|}{|\{d \in D : e \in d\}|}$  (10)

where e is an event and d is a descriptor; $f_{e,d}$ is the number of times event e occurs in the documents with descriptor d; $\sum_{e' \in d} f_{e',d}$ is the total number of event occurrences under descriptor d; |D| is the total number of descriptors in the corpus; and $|\{d \in D : e \in d\}|$ is the number of descriptors in which event e appears. The descriptors that were selected and the number of documents in each are listed in Appendix D.1.
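A small sketch of the ef·idf score of Equation (10); the toy data structure (a mapping from descriptors to extracted event mentions) stands in for the NYT corpus.

```python
import math
from collections import Counter

def ef_idf_scores(events_by_descriptor):
    """Compute ef.idf for every (event, descriptor) pair, following Eq. (10)."""
    counts = {d: Counter(evs) for d, evs in events_by_descriptor.items()}
    n_desc = len(counts)
    scores = {}
    for d, ctr in counts.items():
        total = sum(ctr.values())  # total event occurrences under descriptor d
        for e, f in ctr.items():
            df = sum(1 for c in counts.values() if e in c)  # descriptors containing e
            scores[(e, d)] = (f / total) * math.log(n_desc / df)
    return scores

scores = ef_idf_scores({
    "politics": ["elect", "vote", "elect", "meet"],
    "sports":   ["win", "race", "meet"],
})
```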
After choosing the documents, we acquire the event temporal graphs by running an off-the-shelf event and temporal relation extraction tool called CAEVO (McDowell et al., 2017). CAEVO is more scalable than Cogcomptime (Ning et al., 2018b), making it suitable for building a large-scale dataset. Then, each temporal graph is represented in DOT format, and every event verb is prefixed and suffixed with its noun phrase and object, respectively. Note that we did not break the documents into short segments as Madaan and Yang (2021) did. Instead, we keep the data strictly at the document level, which is a more challenging setting because the model needs to analyse the entire document and generate a much larger graph. In the dataset we built, a target graph has about 46 nodes and 58 edges on average, whereas in Madaan and Yang (2021) a document-level event temporal graph has 4 nodes and 5 edges on average. Moreover, their events have 1.54 relations on average, while events in our data have 2.52 relations on average, showing that the graphs in our dataset are much more complex. In practice, these complex documents are the ones that require analysis, and a model developed on simpler inputs cannot handle them directly.

4.2 Human-annotated Test Data

Aside from testing with the CAEVO-created data, we recruited human annotators to annotate a test split of the NYT data. We performed a preprocessing step on the relation types by merging the reciprocal relations, such as transforming after into before and is_included into includes by swapping the head and tail events. For example, "I had dinner after I had lunch" is equivalent to "I had lunch before I had dinner". This processing not only streamlined the annotation process but also enhanced the model performance (see the experimental results in Appendix C). We recruited crowd workers from the Prolific platform (prolific.com), a research-focused platform providing verified human workers. We recruited 24 participants in total (including pilot testing runs). To make sure the participants could understand and annotate the articles efficiently, we only recruited native English speakers with an education level higher than a high school diploma/A-levels. We put 4 documents, randomly sampled from the same descriptor set as the training and testing splits of the selected NYT corpus, into each unit task. There is a shared document across all the tasks to compute the inter-annotator agreement (IAA). To minimize discrepancy, we asked 2 participants to first identify the event triggers in each unit task. We then merged the event annotations from the participants by taking the union of the spans (if spans overlapped, we took the longer span). Then, we asked another participant to annotate the event temporal relations based on the identified events. We also included the outputs from the CAEVO model to serve as examples, but we explicitly asked the participants to correct these annotations by adding, removing, or changing CAEVO's annotations. In the end, we collected 22 documents as the human-annotated test set. For event identification, we compute IoU (Intersection over Union) as a measure of agreement between the annotators; averaged across 7 tasks, the IoU between the event spans is 0.8986. For the relation annotations, we compute the average Cohen's κ of every participant pair on the relation annotation task (on the shared document); the average Cohen's κ is 0.7465.
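As a sketch of the span-level agreement computation, the following treats each annotator's event spans as sets of character offsets; the exact matching rule used in the paper may differ.

```python
def span_iou(spans_a, spans_b):
    """IoU between two annotators' event spans, given as (start, end) offsets."""
    cover = lambda spans: {i for start, end in spans for i in range(start, end)}
    a, b = cover(spans_a), cover(spans_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Two annotators marking almost the same triggers in a document:
print(span_iou([(10, 17), (42, 48)], [(10, 18), (42, 48)]))  # ~0.93
```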
Details of the instructions and interfaces are in Appendix D.1. The statistics of the constructed datasets are shown in Table 1. The distributions of relation types are highly imbalanced, with a majority falling into either the before or after categories. We also evaluated the trained models on the MATRES test set (comprising 20 documents) and the TBD test set (consisting of 9 documents), both of which are based on human annotations and processed into DOT using the methods previously described.

4.3 Model Setting

We employed Flan-T5-base as the backbone model for contextualised graph generation. We first trained a Flan-T5-base model following the same setup as in Madaan and Yang (2021) as the baseline. SAF (w/o DA) is our proposed framework without the augmentations of edge order but with Set Property Regularisations (SPR). SAF (w/o SPR) is the framework with the augmentations but without SPR. As our SAF framework with SPR requires additional training steps and the augmentations enlarge the training set, we keep the number of training steps balanced across the compared methods, to exclude the influence of seeing different amounts of training data. The model is trained for 10 epochs, with each document being augmented through 4 random permutations, followed by a further 3 epochs of training during which the SPR are adopted without permutations. We use a learning rate of 2e-5 with a weight decay of 0.01, and a batch size of 5 before SPR and of 3 during SPR, because additional memory is required for sampling. We use the AdamW optimizer (Loshchilov and Hutter, 2019). We use beam search (Graves, 2012) with a beam size of 5 and a maximum length of 2048 to sample results. Experiments are conducted on a GPU node of an HPC cluster using 4 Nvidia A100 80G GPUs. The models are trained with 3 random seeds (ChatGPT was tested 3 times) and the reported metrics are averages over them. Training with the augmented data for 10 epochs requires approximately 19 hours; training with SPR for 3 epochs takes about 27 hours; training a vanilla Flan-T5-base for the same number of training steps demands approximately 20 hours.

4.4 Evaluation Metrics

Following previous research (Madaan and Yang, 2021), we evaluate the results using precision, recall, and F1 score for both node set and edge set predictions. The primary metric is the edge F1, because the quality of the node generation is also reflected in it.
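The edge-based metrics can be sketched as set-level precision/recall/F1 over (head, relation, tail) triplets; the node-based metrics are analogous over event strings. Whether the official evaluation normalises event strings before matching is an assumption we leave open.

```python
def set_prf(gold, pred):
    """Set-level precision, recall, and F1 between gold and predicted elements."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

# Edge F1: compare parsed triplets; node F1: compare the sets of event strings,
# e.g. (reusing the parser sketched in Section 3.3):
# p, r, f1 = set_prf(gold_edges, parse_dot_edges(generated_string))
```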
4.5 Results

Table 2: Edge-based metrics on the NYT datasets.

                      NYT-test                 NYT-human
                P^E     R^E     F1^E     P^E     R^E     F1^E
Flan-T5-base   51.27   32.43   39.73    22.61   25.88   24.14
SAF (w/o DA)   50.28   34.82   41.15    25.80   32.13   28.62
SAF (w/o SPR)  51.88   36.64   42.95    27.08   34.91   30.50
SAF            50.97   39.96   44.80    25.92   40.21   31.52

Table 3: Node-based metrics on the NYT datasets.

                      NYT-test                 NYT-human
                P^N     R^N     F1^N     P^N     R^N     F1^N
Flan-T5-base   75.52   58.24   65.76    53.36   47.66   50.35
SAF (w/o DA)   75.34   60.64   67.20    54.86   50.43   52.55
SAF (w/o SPR)  75.43   62.36   68.27    54.14   51.59   52.84
SAF            75.47   65.16   69.95    53.63   54.51   54.06

As shown in Tables 2 and 3, SAF (w/o SPR) consistently outperforms Flan-T5-base in terms of F1 scores on the NYT-test and NYT-human datasets, suggesting the benefits of introducing permutated training examples. For example, SAF (w/o SPR) improves upon Flan-T5-base by about 3% on NYT-test and 6% on NYT-human in terms of edge F1. SAF (w/o DA) achieves an improvement of approximately 1.5% on NYT-test and 4.5% on NYT-human in terms of edge F1, demonstrating the effectiveness of SPR alone. Furthermore, our SAF model yields the best performance when both SPR and augmentation are incorporated. We also observe that models utilizing SAF have much higher edge recall, while their edge precision scores are either similar or occasionally even lower than those of the other models. This suggests that the performance improvement primarily comes from the generation of more edges. This observation is reinforced by Figure 2, where models trained with SAF generate 24%-48% more edges compared to the conventional text generation framework on these datasets. These additional edges play a pivotal role in the improvement of the edge F1, since precision stays nearly the same.

[Figure 2: The comparison of generated edges between SAF and vanilla Flan-T5-base. The y-axis is normalised by dividing by the number of edges generated by Flan-T5-base on the respective datasets.]

It is worth mentioning that the NYT-human dataset has a different label distribution compared to the NYT dataset used for training, whose events and event temporal relations were produced by CAEVO. Notably, the frequency of simultaneous is significantly higher, accounting for 9.66%, in contrast to the 2.39% observed in the training set (see Appendix D.1 for more comprehensive analyses). Based on our observations, it appears that human annotators tend to apply a more lenient criterion for the simultaneous label, whereas CAEVO enforces a stricter definition of this label.

Table 4: Experimental results on human-annotated MATRES and TBD under the zero-shot setting.

                   MATRES-test               TBD-test
                P^E     R^E     F1^E     P^E     R^E     F1^E
ChatGPT        10.58    6.56    8.09    25.92    5.94    9.66
Flan-T5-base   13.06    7.16    9.25    23.26    4.59    7.67
SAF            18.05   14.31   15.96    37.53   11.04   17.05

Similar trends are observed in Table 4, which was obtained through evaluation on MATRES and TBD. We used the models trained on the NYT training set and tested them on these datasets under zero-shot settings. ChatGPT denotes our best attempt to generate event temporal graphs with the gpt-3.5-turbo model through the OpenAI API. We used a two-hop prompting procedure: (i) ask ChatGPT to generate events from the documents; (ii) ask ChatGPT to generate an event temporal graph based on the generated events and the documents (a sketch follows below). The results show that ChatGPT is outperformed by the fine-tuned models, which is in line with recent papers exploring ChatGPT's ability in event understanding (Li et al., 2023; Chan et al., 2023; Gao et al., 2023). Upon examining the responses of ChatGPT, it appears that it conceptualises events as a broader, higher-level notion, which diverges from the definition commonly used by the information extraction community in event processing. In our task, each predicate can signify an event, but ChatGPT tends to approach event identification more like a summarisation task, where it summarises text chunks in a document. This likely explains why ChatGPT identified only approximately half of the events present in the target graphs, resulting in low recall. These observations confirm that event temporal graph generation cannot be solved solely with prompt engineering on ChatGPT. It is noteworthy that ChatGPT consistently produces graphs in the correct DOT format on both the MATRES-test and TBD-test datasets, indicating that formatting issues are not the primary factor in ChatGPT's underperformance.
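A sketch of the two-hop prompting described above; the prompt wording here is illustrative (the exact prompts are in Appendix E), and the snippet assumes the `openai` Python client with an API key in the environment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chat(prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def two_hop_graph(document):
    # Hop 1: extract events; Hop 2: build the temporal graph in DOT format.
    events = chat(f"List the events mentioned in the document below.\n\n{document}")
    return chat(
        "Generate an event temporal graph in DOT format, using the events "
        f"identified below.\n\nDocument:\n{document}\n\nEvents:\n{events}"
    )
```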
The details regarding the inputs, outputs, and parameter settings for ChatGPT are presented in Appendix E.

4.6 Error Analysis

A major error type we found is that the model often fails to deduce temporal relationships that involve inference. This is due to the reliance on weak supervision signals provided by CAEVO, which primarily relies on syntactic rules. Consequently, this problem leads to a lower edge F1 on the human-annotated test set, as human annotators provided many temporal relations that were inferred through commonsense reasoning. For example, the model does not perceive a clear temporal sequence in the sentence: "<person A> won the gold medal in women's 1,500m. <person B> won the silver and <person C> won the bronze." However, human annotators can readily identify an obvious temporal order among "<person A> won", "<person B> won", and "<person C> won", as it aligns with the common knowledge that in a race, the first person to cross the finish line wins the gold, followed by the silver and the bronze winners.

5 Conclusion

This study proposes a framework for fine-tuning language models to generate event temporal graphs directly from raw documents in an end-to-end manner. The proposed framework includes a data augmentation method and set property regularisations to mitigate the problems caused by the conventional generation loss, promoting the generation of more edges by language models and, consequently, leading to improved performance. Extensive experiments show the effectiveness of our proposed model on multiple widely used datasets with real-world articles. Thorough analysis demonstrates that our framework encourages language models to generate more edges for constructing event temporal graphs in various settings.

Limitations

Due to the presence of noisy labels used in fine-tuning, a major limitation of the proposed method is the inclusion of many imaginary events, trivial events, and negative expressions of events. For example, CAEVO identified phrases like "<someone> did not fire" as an event. While "fire" serves as a predicate and the notion of "did not fire" can hold narrative significance, it may not be entirely suitable within the context of event temporal graphs, because it does not describe the occurrence of an action or a change of state, but rather the absence of an event. Similarly, some articles describe multiple potential future developments, such as "he might buy product A". Including such expressions as events might introduce confusion into the event temporal graph, as these represent possibilities rather than actual occurrences. This problem mainly arises from the behaviour of the CAEVO method, which primarily focuses on identifying fine-grained predicates as events. The resolution of this problem lies in obtaining better-quality supervision signals which focus on salient events (i.e., events which are mentioned frequently and are important to the narrative).

Ethics Statement

The proposed method analyses the text provided and extracts relevant information from it. The algorithm cannot acquire information beyond the boundary of the given text. Thus, any associated risks stem solely from the data itself. This research only utilised publicly available data. As long as the data input to the model is collected according to the relevant data policies and guidelines, the proposed method does not introduce further risks.
Acknowledgements

This work was supported in part by the UK Engineering and Physical Sciences Research Council through a Turing AI Fellowship (grant no. EP/V020579/1, EP/V020579/2). Computing facilities were provided by the Scientific Computing Research Technology Platform of the University of Warwick and by the CREATE HPC platform of King's College London (KCL-CREATE, 2024)." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.02827v1", |
| "title": "PowerGraph: A power grid benchmark dataset for graph neural networks", |
| "abstract": "Public Graph Neural Networks (GNN) benchmark datasets facilitate the use of\nGNN and enhance GNN applicability to diverse disciplines. The community\ncurrently lacks public datasets of electrical power grids for GNN applications.\nIndeed, GNNs can potentially capture complex power grid phenomena over\nalternative machine learning techniques. Power grids are complex engineered\nnetworks that are naturally amenable to graph representations. Therefore, GNN\nhave the potential for capturing the behavior of power grids over alternative\nmachine learning techniques. To this aim, we develop a graph dataset for\ncascading failure events, which are the major cause of blackouts in electric\npower grids. Historical blackout datasets are scarce and incomplete. The\nassessment of vulnerability and the identification of critical components are\nusually conducted via computationally expensive offline simulations of\ncascading failures. Instead, we propose using machine learning models for the\nonline detection of cascading failures leveraging the knowledge of the system\nstate at the onset of the cascade. We develop PowerGraph, a graph dataset\nmodeling cascading failures in power grids, designed for two purposes, namely,\ni) training GNN models for different graph-level tasks including multi-class\nclassification, binary classification, and regression, and ii) explaining GNN\nmodels. The dataset generated via a physics-based cascading failure model\nensures the generality of the operating and environmental conditions by\nspanning diverse failure scenarios. In addition, we foster the use of the\ndataset to benchmark GNN explainability methods by assigning ground-truth\nedge-level explanations. PowerGraph helps the development of better GNN models\nfor graph-level tasks and explainability, critical in many domains ranging from\nchemistry to biology, where the systems and processes can be described as\ngraphs.", |
| "authors": "Anna Varbella, Kenza Amara, Blazhe Gjorgiev, Giovanni Sansavini", |
| "published": "2024-02-05", |
| "updated": "2024-02-05", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "cs.SY", |
| "eess.SY" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "PowerGraph: A power grid benchmark dataset for graph neural networks", |
| "main_content": "Introduction The lack of public Graph Neural Network (GNN) datasets for power grid applications has motivated the development of a new graph dataset. Power grid stability is crucial to modern society, and, therefore, power grids are designed to be robust under failures of different nature. Under particular conditions, however, the failure of critical components can trigger cascading outages. In the worst case, cascading failures spread into the full blackout of the power grid [6, 26]. The complete understanding of complex events as cascading failures is therefore of uttermost importance. Such events are rare and historical data is scarce, therefore, we must rely on simulating cascading failures via computer models. The established traditional approach for cascading failure analysis is a quasi-steady state model, such as the OPA model [12], the Manchester model [47], and the Cascades model [22]. These models assess how the power grid responds after an outage is introduced in the grid. In fact, they simulate the complex behavior of the systemic responses and how a chain of successive failures (cascade) propagates in the grid. Since such tools are computationally intensive, they cannot be used by power grid operators for online detection of cascading failure nor for probabilistic risk 2 \fanalysis employing sequential Monte Carlo. The shortage of historical blackout data and the high computational cost of current methods to simulate cascading failures in power grids highlight the need for machine learning models that can detect cascading failures in almost real-time. Power grid operators, specifically transmission system operators (TSO), will greatly benefit from an online tool able to estimate the potential of cascading failures under given operating conditions of the power grid. The research community has presented new methods that employ machine learning algorithms for the online prediction of cascading failures. The proposed methods often do not generalize for diverse sets of failures [1, 4]. They are trained with datasets created with cascading failure models that often rely on the direct current (DC) power flow approximation [38], less accurate than the alternate-current (AC) power flow. In addition to these limitations, the authors are not aware of publicly available datasets on the subject. Within the realm of machine learning algorithms, GNN are convenient and powerful machine learning algorithms to model power grid phenomena, since graphs allow an intuitive representation of power grids. In [37], the authors introduce how GNN have been employed for various applications in the field of power systems. Our paper focuses on fault scenario application, but we plan to extend it to power flow calculation in the future. On this topic, the authors of [59] provide a review of GNN for power flow models in the distribution systems. The work in [54] shows that a GNN outperforms a feed-forward neural network in predicting cascading failures in power grids. To produce a large and complete dataset, we use Cascades [22], an alternate-current (AC) physics-based cascading failure model. The model simulates the evolution of the triggering failures yielding the final demand not served (DNS) to the customers. We produce a power grid GNN dataset comprising a large set of diverse power grid states. The power grid state represents the pre-outage operating condition, which is linked to the initial triggering outage (one or more failed elements), referred to as the outage list. 
Each power grid state is represented as a graph, to which we assign a graph-level label according to the results of the physics-based model. The dataset is generated to suit different graph-level tasks, including multi-class classification, binary classification, and regression. The presented graph property prediction dataset fills a gap according to the OGB taxonomy for graph datasets [30, 29]. Graph datasets are classified according to their task, domain, and scale. The task is at the node, link, or graph level; the scale is small, medium, or large; and the domain is nature, society, or information. Our dataset comprises a collection of power grid datasets, which are designed for graph-level tasks, and their size ranges from small to medium [21]. Moreover, all the datasets in PowerGraph have the same number of features per node and, therefore, they can be utilized as one combined dataset to train GNN models. Table 1 reports the total number of graphs per power grid, the number of buses and branches in the grid, the number of loading conditions, and the number of outage lists simulated. The dataset fits the society domain, where no public GNN graph property prediction datasets are available [30]; see Appendix A.1.

Table 1: Parameters of the AC physics-based cascading failure model for the selected four test power grids. A bus is defined as a node where one or several lines are connected, and it may also include loads and generators in a power system. Transmission lines and transformers are defined as branches.

Test system   # Bus   # Branch   # Loading conditions   # Outage lists   # Graphs N
IEEE24           24         38                    300               43        12900
UK               29         99                    300              132        39600
IEEE39           39         46                    300               55        16500
IEEE118         118        186                    300              250        75000

Other relevant GNN datasets for graph property prediction are the TU collection [44] and the MoleculeNET [58] dataset. Their application is natural science, particularly molecular graphs, i.e., molecules are represented as graphs to predict certain chemical properties. Publicly available power grid datasets such as the Electricity Grid Simulated (EGS) datasets [15], PSML [64], and the SimBench dataset [43] are not targeted at machine learning on graphs. In addition, both EGS and PSML provide data for very small power grids, with 4 and 13 nodes respectively. SimBench, instead, focuses only on power system analysis in the German distribution and transmission grids, and the dataset is not designed for machine learning on graphs. In [46], the authors present new datasets on the dynamic stability of synthetic power grids. They found that their GNN models, which primarily emphasize node regression, can predict highly non-linear targets from topological information. On the other hand, PowerGraph, which uses graph-level tasks, does not address dynamic stability and relies on established real-world-based power grid models to predict the development of cascading failures. Overall, the dataset we provide fills a gap in the domain of GNN datasets for graph-level tasks [30] and is the only publicly available GNN dataset for power grids. Besides benchmarking GNN models, the dataset is intended to be used for explainability methods. Therefore, we assign ground-truth edge explanations using the insights provided by the physics-based cascading failure model. As explanations, we consider the branches that have failed after the initial trigger, i.e., the cascading stage.
In the field of explainability for GNNs, there is, to the best of our knowledge, no existing real-world dataset with reliable ground-truth explanations [2]. There have been recent attempts to create a synthetic graph data generator producing a variety of benchmark datasets that mimic real-world data and are accompanied by ground-truth explanations [2], as well as to provide atom-wise and bond-wise feature attributions for chemical datasets [28, 32]. However, none of these attempts provides real-world data with empirical explanations. Here, we propose a real-world dataset for GNN graph-level tasks that has clear ground-truth explanations obtained from physics-based simulations. This work provides a large-scale graph dataset to enable the prediction of cascading failures in electric power grids. The PowerGraph dataset comprises the IEEE24 [17], IEEE39 [18], IEEE118 [16], and UK [45] transmission systems. These test power systems have been specifically selected because they represent real-world-based power grids, encompassing a diverse range of scales, topologies, and operational characteristics. Moreover, they offer comprehensive data with all the necessary information required for conducting cascading failure analysis. With PowerGraph, we make GNNs more accessible for critical infrastructures such as power grids and facilitate the online detection of cascading failures. Our contributions are the following:

• We provide a data-driven method for the online detection of severe cascading failure events in power grids.
• We make the dataset public in a viable format (PyTorch Geometric), allowing the GNN community to test architectures for graph-level applications.
• The dataset includes several graph-level tasks: binary classification, multi-class classification, and regression.
• We provide explanatory edge masks, allowing the improvement of GNN explainability methods for graph-level applications.

The rest of the paper is organized as follows: Section 2 describes the physics-based model used to simulate cascading failure scenarios; Section 3 outlines the structure of the graph datasets; Section 4 reports the benchmark experiments on the different datasets; Section 5 describes the method used to benchmark explainability methods; and Section 6 concludes the article with a final discussion.

2 Physics-based model of cascading failures

We employ the established Cascades model [22, 24] for cascading failure simulations to produce the GNN datasets. Indeed, its application to the Western Electricity Coordinating Council (WECC) power grid demonstrates that Cascades can generate a distribution of blackouts that is consistent with historical blackout data [35]. Cascades is a steady-state model whose objective is to simulate the power grid response to unplanned failures in the grid. For that purpose, the model simulates the power system's automatic and manual responses to such failures. Initially, all components are in service and there are no overloads in the grid. The system is in steady-state operation, with the demand supplied by the available generators, which produce power according to AC optimal power flow (OPF) conditions [10]. The simulation begins with the introduction of single or multiple initial failures.
Then, Cascades simulates the post-outage evolution of the power grid, i.e., it identifies islands, performs frequency control, under-frequency load shedding, under-voltage load shedding, and AC power flows, checks for overloads, and disconnects overloaded components. The model returns two main results: the demand not served (DNS) in MW and the number of branches tripped after the initial triggering failure. The simulation is performed for a set of power demands sampled from a yearly load curve. For each season of the year, an equal number of loading conditions is randomly sampled. We use a Monte-Carlo simulation to probabilistically generate outages of transmission branches (lines and transformers). We define the number of loading conditions and the size of the outage list. Therefore, we are able to simulate a large number of scenarios and thus create large datasets. Each scenario generated is a power grid state and, therefore, becomes an instance of the dataset. For each combination of loading condition and element in the outage list, we simulate the cascading failure, identify the terminal state of the power grid, quantify the demand not served, and list the tripped elements. Figure 1 shows the structure of the Cascades model [23].

[Figure 1: Workflow of the Cascades [23] model, used to simulate cascading failures in power grids.]

Separate runs of Cascades are performed for the different test power grids, namely IEEE24, IEEE39, UK, and IEEE118.

3 PowerGraph benchmark for graph-level predictions and explainability

The PowerGraph dataset is obtained by processing the results of the Cascades model. Because we work with graph-level tasks, the dataset is a collection of N attributed graphs G = {G_1, G_2, ..., G_N}. Each input graph reflects a unique pre-outage operating condition of the system and one set of single/multiple outages. Therefore, the total number of graphs N per power grid equals $n_{\text{load cond}} \times n_{\text{outage lists}}$. Finally, each graph is assigned an output label corresponding to the chosen task. An attributed graph is defined as $G = (\mathcal{V}, \mathcal{E}, V, E)$, where $\mathcal{V}$ is the set of nodes (buses) and $\mathcal{E}$ is the set of edges (branches), $V \in \mathbb{R}^{|\mathcal{V}| \times t}$ is the node feature matrix, with $|\mathcal{V}|$ nodes and t features per node, and $E \in \mathbb{R}^{|\mathcal{E}| \times s}$ is the edge feature matrix, with $|\mathcal{E}|$ edges and s features per edge. Finally, the graph connectivity information is encoded in COO format [20]. We assign three bus-level features and four branch-level features. Each feature quantity is normalized using mean normalization. The input features are:

Bus:
• Net active power at bus i, $P_{i,\text{net}} = P_{i,\text{gen}} - P_{i,\text{load}}$, $P \in \mathbb{R}^{n_{\text{bus}} \times 1}$, where $P_{i,\text{gen}}$ and $P_{i,\text{load}}$ are the active generation and load, respectively.
• Net apparent power at bus i, $S_{i,\text{net}} = S_{i,\text{gen}} - S_{i,\text{load}}$, $S \in \mathbb{R}^{n_{\text{bus}} \times 1}$, where $S_{i,\text{gen}}$ and $S_{i,\text{load}}$ are the apparent generation and load, respectively.
• Voltage magnitude at bus i, $V_i \in \mathbb{R}^{n_{\text{bus}} \times 1}$, where $n_{\text{bus}}$ is the number of buses in the power grid.

Branch:
• Active power flow $P_{i,j}$
• Reactive power flow $Q_{i,j}$
• Line reactance $X_{i,j}$
• Line rating $lr_{i,j}$
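As a sketch, the bus- and branch-level quantities can be assembled into the node and edge feature matrices as follows; the exact variant of mean normalization used in the released dataset is an assumption here, as are the function names.

```python
import torch

def mean_normalise(v, eps=1e-8):
    # Mean normalisation, assumed here as (x - mean) / (max - min).
    return (v - v.mean()) / (v.max() - v.min() + eps)

def build_features(p_net, s_net, v_mag, p_flow, q_flow, x_react, rating):
    """Stack the three bus features and the four branch features of one instance.

    All inputs are 1-D tensors over the n_bus buses / in-service branches.
    """
    x = torch.stack([mean_normalise(f) for f in (p_net, s_net, v_mag)], dim=1)  # (n_bus, 3)
    edge_attr = torch.stack(
        [mean_normalise(f) for f in (p_flow, q_flow, x_react, rating)], dim=1   # (n_branch, 4)
    )
    return x, edge_attr
```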
Figure 2 displays an instance of the PowerGraph dataset. Each graph represents a state of the power grid associated with a loading condition and an outage (single or multiple failures). Since each outage is associated with disconnected branches, we remove the respective branches from the adjacency matrix and from the edge feature matrix. Therefore, each instance of the dataset is a graph with a different topology. The total number of instances is reported in Table 1. For each initial power grid state, we have knowledge of the post-outage evolution of the system, i.e., the demand not served (DNS) and the number of tripped lines. We label a case as a cascading failure whenever branches trip after the initial outage. With these two results, we can assign an output label to each graph for the different tasks:

Binary classification. We assign each instance to one of two classes:
• DNS = 0: the initial state results in a stable state, label 0
• DNS > 0: the initial state results in an unstable state, label 1

Multi-class classification. We assign each instance to one of four classes:
• DNS > 0, cascading failure of components besides the first trigger: Category A
• DNS > 0, no cascading failure of components besides the first trigger: Category B
• DNS = 0, cascading failure of components besides the first trigger: Category C
• DNS = 0, no cascading failure of components besides the first trigger: Category D

Regression. We assign each instance the DNS in MW.

The choice among binary classification, multi-class classification, and regression depends on the intended use of the GNN model trained with the PowerGraph dataset. The binary classification model serves as an early warning system, i.e., it detects initial states of the power grid that are critical. The multi-class classification model allows us to distinguish different scenarios. Indeed, a transmission system operator could benefit from knowing when a cascading failure does not necessarily cause demand not served and vice versa. Finally, with the regression model, we can directly access the final demand not served associated with particular pre-outage states of the system. In this case, the GNN model becomes a surrogate of the physics-based model, useful both as an early warning system and for performing security evaluations at low computational cost.

Table 2: Multi-class classification categories. c.f. stands for cascading failure and describes a state resulting in the cascading failure of components. DNS denotes demand not served.

        Category A   Category B   Category C   Category D
DNS     > 0 MW       > 0 MW       = 0 MW       = 0 MW
c.f.    ✓            ×            ✓            ×

Table 3: Results of the categorization in percentage.

Power grid   Category A   Category B   Category C   Category D
IEEE39           2.18%        3.48%        1.46%       92.88%
IEEE118          0.07%        5.84%        2.01%       92.08%
IEEE24          33.90%        4.88%        0.16%       61.06%
UK               4.06%        0%           8.02%       87.92%

Explainability mask. We assign ground-truth explanations as follows: when a system state undergoes a cascading failure, the cascading edges are considered to be the explanations for the observed demand not served. Therefore, for the Category A instances, we record the branches that fail during the development of the cascading event. We set the explainability mask as a Boolean vector $M \in \mathbb{R}^{|\mathcal{E}| \times 1}$, whose elements are equal to 1 for the edges belonging to the cascading stage and 0 otherwise (see Figure 2).

[Figure 2: Structure of one instance of the GNN dataset for an exemplary power grid. The same structure is kept for all the power grids in PowerGraph: IEEE24, IEEE39, UK, and IEEE118. The initial outage is highlighted in red; the corresponding line is removed both from the graph connectivity matrix and from the edge feature matrix. The cascading edges are highlighted with a dotted line and encoded in the Boolean vector M (0 if the edge has not tripped during the cascade development, 1 otherwise).]
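The labelling logic can be sketched directly from the two Cascades outputs. The mapping of categories A-D to integer class indices is our assumption, as is the PyTorch Geometric field layout shown in the trailing comment.

```python
import torch
from torch_geometric.data import Data

def label_instance(dns_mw, tripped_mask):
    """Derive the graph-level targets from the Cascades outputs.

    dns_mw: demand not served in MW; tripped_mask: Boolean vector M over branches.
    """
    cascade = bool(tripped_mask.any())
    y_binary = int(dns_mw > 0)             # 0: stable state, 1: unstable state
    if dns_mw > 0:
        y_multi = 0 if cascade else 1      # Category A / Category B
    else:
        y_multi = 2 if cascade else 3      # Category C / Category D
    return y_binary, y_multi, float(dns_mw)

# One instance as a PyTorch Geometric graph (field names are illustrative):
# data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr,
#             y=torch.tensor([dns_mw]), edge_mask=tripped_mask)
```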
4 Benchmarking graph classification and regression models

In this section, we outline the method used to benchmark classification and regression models.

Experimental setting and evaluation metrics. For each power grid dataset, we utilize baseline GNN architectures that are common in the graph xAI community. Specifically, we use GCNConv [34], GATConv [55], and GINEConv [31] to demonstrate that the PowerGraph datasets can be used to benchmark GNNs and the methods used to explain them. Furthermore, we experimented with the state-of-the-art graph transformer convolutional layer [52], since such layers are the backbone of the most recent graph Transformer models: GraphGPS [49], Transformer-M [41], and TokenGT [33]. We resort to all of the aforementioned models because they account for the edge features, which are highly relevant in the case of power grids. We tune the number of message-passing layers (MPL) ∈ {1, 2, 3} and the hidden dimensionality ∈ {8, 16, 32}. The Adam optimizer is used with an initial learning rate of 10^-3. Each model is trained for 200 epochs, with the learning rate adjusted during training by a scheduler, which automatically reduces it if a metric has stopped improving. We split train/validation/test as 80/10/10% for all datasets and choose a batch size of 128. We present three graph-level models, namely binary classification, multi-class classification, and regression. For the classification models, we consider balanced accuracy [11] as the reference evaluation metric, since a strong class imbalance is observed (see Table 3) and balanced accuracy has been designed for classification tasks with such imbalance. It prioritizes all the classes equally, in contrast to the F1 or F2 score, and it gives interpretable results for multi-class classification, in contrast to ROC-AUC [50]. For the regression models, we use the mean squared error (MSE) as the metric.

Observations. We report the best model performance for each power grid and MPL type in Tables 4, 5, and 6. For each MPL type, we only show the set of hyper-parameters yielding the best performance. The GNN architecture comprises 1) a number of MPLs, each followed by a PReLU [27] activation function, 2) a global pooling operator to obtain a graph-level embedding from the node embeddings, and 3) one fully connected layer. For the classification models, we do not observe relevant differences among the mean, max, and sum global pooling operators; the classification results are obtained with max global pooling. The regression results are obtained by concatenating max and sum global poolings.
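The benchmarked architecture can be sketched as follows, with the graph transformer convolution as the MPL. The hyper-parameters follow the tuned ranges above, while the class name and the single-head configuration are our own choices, not the released code.

```python
import torch
from torch import nn
from torch_geometric.nn import TransformerConv, global_max_pool

class PowerGraphClassifier(nn.Module):
    """Stacked message-passing layers with PReLU, max pooling, and one FC layer."""

    def __init__(self, in_dim=3, edge_dim=4, hidden=32, n_layers=3, n_classes=4):
        super().__init__()
        dims = [in_dim] + [hidden] * n_layers
        self.convs = nn.ModuleList(
            TransformerConv(d_in, d_out, edge_dim=edge_dim)
            for d_in, d_out in zip(dims[:-1], dims[1:])
        )
        self.acts = nn.ModuleList(nn.PReLU() for _ in range(n_layers))
        self.head = nn.Linear(hidden, n_classes)  # fully connected output layer

    def forward(self, x, edge_index, edge_attr, batch):
        for conv, act in zip(self.convs, self.acts):
            x = act(conv(x, edge_index, edge_attr))
        return self.head(global_max_pool(x, batch))  # graph-level logits
```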
Discussion. Most GNN models achieve high performance on the power grids of PowerGraph. We compare GCN, GAT, GINe, and Transformer. Of all the MPLs considered, only GCN does not take edge features into account; as a result, its performance is low in most cases. Transformer achieves the state of the art on all power grids for the binary and multi-class models. For the regression model, Transformer and GINe are the best-performing models. Overall, the binary and multi-class classification models exhibit excellent results. However, the regression model, which is important for providing a prediction of the demand not served, does not achieve the desired level of performance.

Table 4: Binary classification model results on the test set, averaged over five random seeds. Balanced accuracy is used as the reference metric.

Power grid   MPL type      # MPL   Hidden dim   Test Accuracy       Test Balanced Accuracy
IEEE24       GCN           2       32           0.8667 ± 0.0049     0.8769 ± 0.0056
             GINe          3       32           0.9798 ± 0.0046     0.9800 ± 0.0035
             GAT           3       32           0.9008 ± 0.0052     0.9067 ± 0.0034
             Transformer   3       16           0.9907 ± 0.0040     0.9910 ± 0.0037
IEEE39       GCN           3       32           0.9733 ± 0.0012     0.8113 ± 0.0011
             GINe          2       32           0.9939 ± 0.0020     0.9550 ± 0.0041
             GAT           3       32           0.9697 ± 0.0023     0.7865 ± 0.0061
             Transformer   3       16           0.9952 ± 0.0015     0.961 ± 0.016
UK           GCN           3       32           0.9657 ± 0.0027     0.7176 ± 0.0023
             GINe          2       32           0.9975 ± 0.0018     0.9820 ± 0.0010
             GAT           3       8            0.9889 ± 0.0005     0.9175 ± 0.0012
             Transformer   3       16           0.9960 ± 0.0016     0.9820 ± 0.0045
IEEE118      GCN           3       32           0.9917 ± 0.0015     0.9364 ± 0.0032
             GINe          3       8            0.9992 ± 0.0046     0.9921 ± 0.0035
             GAT           3       32           0.9880 ± 0.0012     0.9427 ± 0.0005
             Transformer   3       32           0.9992 ± 0.0005     0.9947 ± 0.0041

While the classification models show consistent performance across the various power grids, the regression models demonstrate lower MSE values for the larger power grids. This observation can be attributed to the fact that larger power grids offer a greater diversity of scenarios, thus making it increasingly more difficult for a GNN model to identify and learn cascading failure patterns. Nevertheless, a regression model offers the most informative and comprehensive results, since it predicts the exact magnitude of the demand not served given a component failure and operating conditions. However, our results show that the regression models trained on the PowerGraph datasets do not provide the expected performance. Therefore, further advancements and innovations in GNN architectures are needed to achieve more robust and accurate regression results. Finally, we test the capability of the GNN models to generalize to systems not seen in training, i.e., the inductive property of GNNs [56]. We report the results in Appendix A.6. Models trained using the above approach, although representing real systems, are built with synthetic data from a cascading failure model. To render these models applicable to real-world systems, further work is necessary.
First, the cascading failure model that generates the data needs to be validated and calibrated on the system of interest. Second, the GNN model should be further trained using real-world cascading failure events from the system of interest.

Table 5: Multi-class classification model results on the test set, averaged over five random seeds. Balanced accuracy is used as the reference metric.

Power grid   MPL type      # MPL   Hidden dim   Test Accuracy       Test Balanced Accuracy
IEEE24       GCN           2       32           0.8465 ± 0.0023     0.6846 ± 0.0009
             GINe          2       32           0.9798 ± 0.0019     0.9426 ± 0.0028
             GAT           3       32           0.9054 ± 0.0020     0.8375 ± 0.0009
             Transformer   3       32           0.9829 ± 0.0012     0.9894 ± 0.0016
IEEE39       GCN           2       8            0.9242 ± 0.0019     0.4071 ± 0.0012
             GINe          3       16           0.9939 ± 0.0015     0.9693 ± 0.0019
             GAT           2       16           0.9497 ± 0.0022     0.5577 ± 0.0027
             Transformer   3       32           0.9550 ± 0.0009     0.9742 ± 0.0016
UK           GCN           3       32           0.9068 ± 0.0023     0.4615 ± 0.0038
             GINe          2       32           0.9798 ± 0.0020     0.9347 ± 0.0017
             GAT           3       8            0.9563 ± 0.0009     0.7452 ± 0.0014
             Transformer   3       8            0.9912 ± 0.0009     0.9798 ± 0.0013
IEEE118      GCN           3       8            0.9771 ± 0.0010     0.8303 ± 0.0016
             GINe          3       32           0.9968 ± 0.0018     0.9586 ± 0.0010
             GAT           3       16           0.9677 ± 0.0010     0.7392 ± 0.0011
             Transformer   3       8            0.9992 ± 0.0013     0.9833 ± 0.0006

Table 6: Regression model results on the test set, averaged over five random seeds. The MSE is used as the reference metric.

Power grid   MPL type      # MPL   Hidden dim   MSE loss
IEEE24       GCN           1       32           2.80E-03 ± 5.69E-04
             GINe          3       16           2.90E-03 ± 2.88E-04
             GAT           2       16           2.90E-01 ± 5.00E-04
             Transformer   3       8            2.70E-03 ± 3.16E-04
IEEE39       GCN           2       32           5.61E-04 ± 5.04E-05
             GINe          3       32           5.04E-04 ± 5.04E-05
             GAT           3       32           5.62E-04 ± 4.66E-05
             Transformer   3       32           5.47E-04 ± 8.50E-05
UK           GCN           3       32           7.07E-03 ± 6.45E-04
             GINe          2       32           7.65E-03 ± 6.17E-04
             GAT           3       32           7.60E-03 ± 6.12E-04
             Transformer   3       16           7.00E-03 ± 5.10E-04
IEEE118      GCN           2       32           4.00E-06 ± 2.94E-07
             GINe          2       32           3.00E-06 ± 3.51E-07
             GAT           2       8            4.00E-06 ± 3.70E-07
             Transformer   2       8            5.00E-06 ± 6.55E-07

5 Benchmarking explanations on the graph classification models

In this section, we outline the method used to benchmark explainability methods. We focus on explaining the power grids of Category A of the multi-class classification model; this choice is explained in Appendix A.2.

Experimental setting and datasets. For each dataset, we take the trained Transformer with 3 layers and 32 hidden units described in Section 4. To benchmark explainability methods, we do not necessarily need the best GNN model: an appropriate filtering on the nature of the predictions (correct or mix) and on the focus of the explanation (phenomenon or model focus) [5] can circumvent a lower test accuracy. We adopt the same training parameters. We evaluate the post-hoc explainability methods Saliency [8], Integrated Gradient [53], Occlusion [19], GradCAM [51], GNNExplainer [60] with and without a node feature mask, PGExplainer [40], PGMExplainer [57], SubgraphX [63], and GraphCFE [42]. In Appendix A.3, we report more experimental details on the GNN performance and the explainability methods. The PowerGraph benchmark with explanations is used to test and compare existing explainability methods. The role of the explainers is to identify the edges that are necessary for the graphs to be classified as Category A [5]. The resulting edges are then evaluated on how well they match the explanation masks, which represent the cascading edges. We compare the results obtained on the PowerGraph datasets with scores computed for the synthetic dataset BA-2Motifs [40].
This dataset contains 800 Barabási base graphs. Half of the graphs are attached with a "house" motif (label 0) and the other half with a five-node cycle motif (label 1). The ground-truth explanations in this graph classification task are the motifs attached to the base graph (house or five-node cycle). The BA-2Motifs dataset is commonly used to compare the performance of explainability methods [2, 3, 36, 39, 62] because its ground-truth explanations enable a simple interpretation for human-based evaluation. The comparison of PowerGraph to the BA-2Motifs dataset allows us to verify whether our results align with state-of-the-art research on the explainability of GNNs.

Human-based evaluation. To evaluate the generated explanations, we use the balanced accuracy metric. It compares the generated edge mask to the ground-truth cascading edges and takes into account the class imbalance, i.e., that the cascading edges are a small fraction of the total edges. It measures how convincing the explanations are to humans. More details about this metric are given in Appendix A.4. We report the performance of 11 explainability methods on finding the ground-truth explanations. All results are averaged over five random seeds. Accuracy scores are computed for the datasets in PowerGraph and for the synthetic dataset BA-2Motifs.
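A sketch of this metric: the explainer's soft edge mask is binarised by keeping the top-k edges, with k equal to the size of the ground-truth cascading set, and then compared to the ground truth with balanced accuracy. The exact definition is in Appendix A.4; the function below is our illustration.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def top_k_balanced_accuracy(edge_importance, gt_mask):
    """Balanced accuracy of a soft explanation mask against the cascading edges.

    edge_importance: (n_edges,) importance scores produced by an explainer
    gt_mask: (n_edges,) Boolean ground-truth mask of cascading edges
    """
    k = int(gt_mask.sum())
    pred = np.zeros(len(gt_mask), dtype=int)
    pred[np.argsort(edge_importance)[-k:]] = 1  # keep the k most important edges
    return balanced_accuracy_score(gt_mask.astype(int), pred)
```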
[Figure 4: Faithfulness on the PowerGraph datasets and the BA-2Motifs dataset, measured with the fid+acc metric as defined in Equation 2 in Appendix A.5, plotted against the explanation size (top K edges, 5 to 20). We conducted experiments on five random seeds; confidence intervals calculated from the standard deviation are shown alongside each data point.]

Figure 4 also shows that these four methods have, on average, the highest fidelity+ on all datasets. Therefore, we conclude that they are the most appropriate methods to generate accurate and necessary explanations. Our observations on faithfulness are also consistent with previous results on the GraphFramEx benchmark [5], which has already shown the superiority of gradient-based methods and Occlusion in returning necessary explanations, i.e., the model predictions change when those explanatory entities are removed from the graph. However, in Figure 3 and Figure 4, no method globally outperforms the others for all datasets. For balanced accuracy, GradCAM and Occlusion are the best for IEEE24; Saliency for IEEE39; GradCAM for UK; and Integrated Gradient, Occlusion, GradCAM, and SubgraphX for BA-2Motifs. On fidelity, GradCAM and Occlusion are the best for IEEE24; Saliency and Integrated Gradient for IEEE39; GradCAM for UK; and Integrated Gradient for BA-2Motifs. The choice of the optimal xAI method thus depends on the dataset, which is again consistent with the conclusions in [5]. Concerning the IEEE118 dataset, none of the methods is able to generate good explanations: the maximum top balanced accuracy is 0.55, and the maximum fidelity+ score, reached by GNNExplainer on edges and node features, is only 0.6. This performance is likely due to the complexity of the IEEE118 system. Being the largest power grid, with 186 branches (see Table 1), the system contains complex interdependencies between the elements of the power grid during a cascading failure. As a consequence, node- and edge-level features play a bigger role in explaining the GNN predictions. Therefore, we believe that an accurate model explanation will be obtained only with methods that provide node- and link-level feature masks as well as edge masks. In addition, those methods could play a role in understanding the relevance of the input features to the GNN prediction, allowing us to discard noisy features.

6 Conclusions

To strengthen the use of GNNs in the field of power systems, we present PowerGraph, a dataset for graph-level tasks and model explainability. The dataset is suited to test graph classification and regression models. The main focus of PowerGraph is the analysis of cascading failures in power grids. Furthermore, experts often require interpretability of the results; therefore, we benchmark the dataset for a variety of GNN and explainability models. The GNN models show excellent performance on our new benchmark, in particular for graph classification, while graph regression models should be further developed. Finally, PowerGraph is the first real-world dataset with ground-truth explanations for graph-level tasks in the field of explainable AI. It allows us to evaluate both the accuracy and faithfulness of explainability methods in a real-world scenario.
PowerGraph provides consistent outcomes that align with previous research findings and reinforce the conclusion that there is no universally superior explainability method. In future work, we aim to extend PowerGraph with new datasets [9] and include additional power grid analyses, including solutions to the power flow, the optimal power flow, and the unit commitment." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2403.18348v1", |
| "title": "Sequential Recommendation with Latent Relations based on Large Language Model", |
| "abstract": "Sequential recommender systems predict items that may interest users by\nmodeling their preferences based on historical interactions. Traditional\nsequential recommendation methods rely on capturing implicit collaborative\nfiltering signals among items. Recent relation-aware sequential recommendation\nmodels have achieved promising performance by explicitly incorporating item\nrelations into the modeling of user historical sequences, where most relations\nare extracted from knowledge graphs. However, existing methods rely on manually\npredefined relations and suffer the sparsity issue, limiting the generalization\nability in diverse scenarios with varied item relations. In this paper, we\npropose a novel relation-aware sequential recommendation framework with Latent\nRelation Discovery (LRD). Different from previous relation-aware models that\nrely on predefined rules, we propose to leverage the Large Language Model (LLM)\nto provide new types of relations and connections between items. The motivation\nis that LLM contains abundant world knowledge, which can be adopted to mine\nlatent relations of items for recommendation. Specifically, inspired by that\nhumans can describe relations between items using natural language, LRD\nharnesses the LLM that has demonstrated human-like knowledge to obtain language\nknowledge representations of items. These representations are fed into a latent\nrelation discovery module based on the discrete state variational autoencoder\n(DVAE). Then the self-supervised relation discovery tasks and recommendation\ntasks are jointly optimized. Experimental results on multiple public datasets\ndemonstrate our proposed latent relations discovery method can be incorporated\nwith existing relation-aware sequential recommendation models and significantly\nimprove the performance. Further analysis experiments indicate the\neffectiveness and reliability of the discovered latent relations.", |
| "authors": "Shenghao Yang, Weizhi Ma, Peijie Sun, Qingyao Ai, Yiqun Liu, Mingchen Cai, Min Zhang", |
| "published": "2024-03-27", |
| "updated": "2024-03-27", |
| "primary_cat": "cs.IR", |
| "cats": [ |
| "cs.IR" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Sequential Recommendation with Latent Relations based on Large Language Model", |
| "main_content": "INTRODUCTION Sequential recommendation is a promising researched topic in the community of recommender systems, aiming to predict the next item the user prefers based on his/her interaction history [42]. Various methods have been proposed to address the sequential recommendation task. Early studies focused on estimating the transition relations between items based on Markov chain assumptions [32]. In recent years, with the advancement of deep learning, various deep neural networks, such as Recurrent Neural Networks (RNNs) [13, 15, 19], Convolutional Neural Networks (CNNs) [36], and Transformers [12, 17, 23, 35], have been incorporated to better model user preferences reflected in their sequential historical interactions. Although existing methods have achieved remarkable performance, they usually rely on item-based collaborative filtering algorithms [34] to calculate the implicit collaborative similarity between items while overlooking the explicit relations between items, which are prevalent and significantly influence user decisions in the real arXiv:2403.18348v1 [cs.IR] 27 Mar 2024 \fSIGIR 2024, July 14-18, 2024, Washington D.C., USA Yang, et al. recommendation scenario. Recently, some relation-aware sequential recommendation methods have been proposed [38, 39, 45] to explicitly consider item relations during modeling user preferences and significantly improve the performance of the sequential recommendation. However, current approaches still face some challenges that limit the application of these models. Specifically, for most existing relation-aware methods, item relation data is typically stored in a knowledge graph, which may suffer from sparsity issue of two aspects. Firstly, the relation sparsity on the edge set. The relational item modeling of existing methods is performed based on manually predefined relations. The attributebased relations (e.g., \u201cshare category\u201d) and co-occurrence-based relations (e.g., \u201calso buy\u201d) are usually used as these relations are relatively straightforward to define and can be derived from user interaction data and item metadata. Nevertheless, the relations between items are diverse in the real world, and manually defined relations are sparse compared to all latent relations. Relying on a restricted set of predefined relations limits the model\u2019s capacity to generalize effectively across diverse recommendation scenarios. Secondly, the item sparsity on the vertex set. It is caused by the inherent data sparsity issue of recommender systems [33] and particularly affects the data collection of the co-occurrence-based relation since it requires substantial interaction data to collect item pairs that conform to the relation definition. To alleviate the above issue, we investigate to discover latent relations between items that contribute to the recommendation. In this paper, we propose a language knowledge-based Latent Relation Discovery (LRD) method for the sequential recommendation. The motivation behind this approach is inspired by the fact that humans usually describe relations between items in natural language based on their knowledge. Observing the rich world knowledge and semantic representation capabilities exhibited by Large Language Models (LLMs) [3, 27, 37], we propose to leverage the abilities of LLMs to discover latent item relations. Specifically, we design a self-supervised learning framework to facilitate the process of discovering latent relations. 
We first leverage an LLM to obtain language knowledge representations of items. Subsequently, a relation extraction model is adopted to predict the latent relation between two items. Then we incorporate an item reconstruction model to reconstruct the representation of one item based on the representation of the predicted relation and the other item. Through this self-supervised learning process, the objective of reconstructing the original items forces the relation extraction model to predict relations with sufficient accuracy and generality. Furthermore, we incorporate LRD into existing relation-aware sequential recommendation frameworks and perform joint optimization. The merits of our proposed framework are threefold. First, LRD does not rely on manually defined relations and can autonomously discover latent relations between items; this enhances the model's ability to capture diverse preferences reflected in user interaction history and improves recommendation performance. Second, the optimization objective of the relation-aware sequential recommendation task serves as a supervised signal to guide the relation discovery process, leading to the discovery of relations more beneficial to the recommendation. Last but not least, analyzing the item relations predicted by LRD contributes to better interpretability of relation-aware sequential recommendation models.

Table 1: Notations.

| Notation | Description |
|---|---|
| $U$ | The set of users |
| $V$ | The set of items |
| $T$ | The set of triplets |
| $R$ | The set of relations |
| $R_{def}$ | The set of predefined relations |
| $R_{latent}$ | The set of latent relations |
| $S_u$ | The interaction sequence of user $u$ |
| $\mathbf{u} \in \mathbb{R}^d$ | The ID embedding of user $u$ |
| $\mathbf{m}_{u,v} \in \mathbb{R}^d$ | The relation-aware user sequence representation of $u$ |
| $\mathbf{v} \in \mathbb{R}^d$ | The ID embedding of item $v$ |
| $\mathbf{e} \in \mathbb{R}^{d_L}$ | The LLM-based embedding of item $v$ |
| $\mathbf{r} \in \mathbb{R}^d$ | The embedding of relation $r$ |

We perform experiments on multiple public datasets to evaluate our proposed LRD approach. Leveraging latent relations derived from language knowledge-based item representations, the relation-aware sequential recommendation model captures more comprehensive user sequence representations, significantly improving the performance of sequential recommendation. Experimental results demonstrate that, compared to state-of-the-art (SOTA) relation-aware sequential recommendation models, the model enhanced by LRD achieves significantly better performance. Further analysis experiments reveal that the LRD module is indeed capable of discovering reasonable relations between items. The main contributions of our work are summarized as follows:
• To the best of our knowledge, we are the first to propose discovering latent relations based on an LLM for relation-aware sequential recommender systems.
• We propose an LLM-based latent relation discovery framework, i.e., LRD, which harnesses language knowledge to discover latent relations; it is a self-supervised learning method and is flexible enough to work with existing relation-aware sequential recommenders through joint learning.
• Experimental results on multiple public datasets demonstrate that LRD significantly improves the performance of existing relation-aware sequential recommendation models by effectively discovering reliable relations between items.
2 PROBLEM STATEMENT

Firstly, the notations used in this paper are defined in Table 1 with descriptions. Let $U$ and $V$ denote the sets of users and items, respectively. For each user $u \in U$, its chronologically-ordered interaction history is represented as $S_u = \{v_1, v_2, v_3, ..., v_{N_u}\}$. For an item $v_i \in V$, there may exist another related item $v_{-i}$ with relation $r$, denoted as a triplet $(v_i, v_{-i}, r)$. $R$ denotes the set of relations, which is further divided into the predefined relations set $R_{def}$ and the latent relations set $R_{latent}$. All relational item triplets $T$ associated with predefined relations can be stored in a knowledge graph $G$, where the vertex set comprises all relational item pairs, and the edge set consists of all the predefined relations.

[Figure 1: Overall framework of relation-aware sequential recommendation with LRD. There are two main components of LRD: a relation extraction model to estimate the latent relation based on the language knowledge representation of two items obtained by LLM, and an item reconstruction model to reconstruct the item based on the estimated relation and another item. The predefined relations from the knowledge graph and latent relations from LRD are both used to contribute to the relational item modeling in the recommendation.]

The objective of the sequential recommendation task is to provide a ranked list of items for the user $u$ at the next interaction, considering their interaction history $S_u$. The relation-aware sequential recommendation further considers the relation between each historic item $v_i \in S_u$ and the target item $v_j$. The existing methods only consider the relations in $R_{def}$, while our method LRD further incorporates a latent relations set $R_{latent}$.

3 METHOD

3.1 Framework Overview

The framework of our proposed relation-aware sequential recommendation based on latent relation discovery is illustrated in Figure 1. The key component of the framework is the latent relation discovery module, which is designed as a self-supervised learning process inspired by the Discrete-state Variational Autoencoder (DVAE) [25]. The latent relation discovery model comprises two submodules: 1) a relation extraction module that utilizes an LLM to obtain language knowledge representations of items and predicts the latent relation between two items based on their language knowledge representations; 2) an item reconstruction module that reconstructs the representation of one item based on the representation of the predicted latent relation and the other item.
Here, we incorporate the latent relation discovery module into the relation-aware sequential recommender. Specifically, we use the predicted latent relations to extend the predefined item relation embeddings to better construct the user preference model. At the same time, we use the objectives of the recommendation tasks to guide the discovery of more useful relations. Next, we introduce the overall design of the latent relation discovery approach in Section 3.2, followed by the pipeline of the relation-aware sequential recommendation model based on latent relation discovery in Section 3.3.

3.2 Latent Relation Discovery (LRD)

3.2.1 Optimization Objective. Our objective is to predict the latent relation between two items. Since latent relations are those not covered by manually crafted relation datasets, we cannot train the model with supervised learning. Instead, we adopt a self-supervised learning method inspired by DVAE. Following [25], we assume that all relations follow a uniform distribution $p_u(r)$, and the optimization objective is formalized as the following pseudo-likelihood:

$$\mathcal{L}(\theta) = \log \sum_{r \in R} p(v_i, v_{-i} \mid r, \theta)\, p_u(r) \approx \sum_{i=1}^{2} \log \sum_{r \in R} p(v_i \mid v_{-i}, r, \theta)\, p_u(r), \quad (1)$$

where $v_i$ and $v_{-i}$ denote a pair of items, $i \in \{1, 2\}$, $v_{-i}$ denotes $\{v_1, v_2\} \setminus \{v_i\}$, and $p(v_i \mid v_{-i}, r, \theta)$ denotes the probability of one item given another item and a relation, with $\theta$ denoting the parameter set. The pseudo-likelihood $\mathcal{L}(\theta)$ can be lower-bounded based on Jensen's inequality through a variational posterior $q(r \mid v_i, v_{-i}, \psi)$:

$$\mathcal{L}(\theta) > \mathcal{L}(\theta, \psi) = \sum_{i=1}^{2} \sum_{r \in R} q(r \mid v_i, v_{-i}, \psi) \log p(v_i \mid v_{-i}, r, \theta) + \alpha H[q(r \mid v_i, v_{-i}, \psi)], \quad (2)$$

where $q(r \mid v_i, v_{-i}, \psi)$ is the relation extraction model used to predict the relation between a pair of items and $p(v_i \mid v_{-i}, r, \theta)$ is the item reconstruction model that reconstructs the representation of the item given the predicted relation and another item; $\psi$ and $\theta$ denote the parameters of the two models, respectively. $H$ is an entropy term used to regularize the probabilities predicted by the relation extraction model, ensuring more uniform predictions, and $\alpha$ is a hyper-parameter balancing the regularization strength. Intuitively, maximizing the probability of reconstructing the original item forces $q(r \mid v_i, v_{-i}, \psi)$ to provide sufficiently accurate and highly generalizable relations.
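For completeness, a bound of this form can be obtained by rewriting the sum in Equation (1) as an expectation under the variational posterior $q$ and applying Jensen's inequality; the following is a sketch of our reading of that step, which the paper does not spell out:

```latex
\log \sum_{r \in R} p(v_i \mid v_{-i}, r, \theta)\, p_u(r)
  = \log \sum_{r \in R} q(r \mid v_i, v_{-i}, \psi)\,
      \frac{p(v_i \mid v_{-i}, r, \theta)\, p_u(r)}{q(r \mid v_i, v_{-i}, \psi)}
  \ge \sum_{r \in R} q(r \mid v_i, v_{-i}, \psi) \log p(v_i \mid v_{-i}, r, \theta)
      + H\big[q(r \mid v_i, v_{-i}, \psi)\big] - \log |R|.
```

Since $p_u(r) = 1/|R|$ is uniform, $\sum_r q \log p_u(r) = -\log |R|$ is a constant, and $-\sum_r q \log q$ is the entropy $H[q]$; summing over $i \in \{1, 2\}$ and generalizing the fixed entropy weight to a tunable $\alpha$ recovers Equation (2).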
Next, we further present the design of the relation extraction model $q(r \mid v_i, v_{-i}, \psi)$ and the item reconstruction model $p(v_i \mid v_{-i}, r, \theta)$ separately.

3.2.2 Relation Extraction. In the relation extraction model, we aim to predict the latent relation between two given items. Typically, well-defined relations between items can be conveniently annotated and collected; for example, attribute-based relations only require the collection of items with the same attribute. Nevertheless, in real-world scenarios, relations between items are more complex and diverse, making them challenging to define manually. Consequently, relying solely on predefined relations has limitations, as it cannot adequately model user preferences reflected in the interaction history. Inspired by the ability of humans to describe relations between two items in natural language based on their knowledge, we investigate discovering latent relations between items from the perspective of language knowledge. Considering that LLMs have exhibited human-like world knowledge and effective semantic representations, we leverage an LLM to extract language knowledge representations of items and feed them into the relation extraction model. Specifically, given an item $v = \{w_1, w_2, w_3, ..., w_{N_v}\}$, where $w_i$ denotes each token of the item's text, we feed the token sequence into the LLM to obtain the language knowledge representation of the item, as shown in Equation (3):

$$\mathbf{e} = W_1(\mathrm{LLM}([w_1, w_2, w_3, ..., w_{N_v}])) + \mathbf{b}_1, \quad (3)$$

where $\mathrm{LLM}(\cdot)$ denotes a specific pooling strategy on the last hidden state of the LLM to obtain the output item representation; different LLMs may use different pooling strategies, such as CLS-pooling, mean-pooling, etc. [18]. $W_1 \in \mathbb{R}^{d_L \times d}$ and $\mathbf{b}_1 \in \mathbb{R}^d$ denote the weight and bias of a projection layer, respectively, which is used to reduce the dimensionality of the LLM's output to match the input dimensions of the recommendation model. With the enriched world knowledge of the LLM, we obtain item representations that potentially embed important information for the discovery of item relations not covered in the manually predefined relation set, which we refer to as language knowledge representations.
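As an illustration of Equation (3), the following sketch projects a frozen, pooled LLM text embedding down to the recommender's embedding size. The class name is hypothetical, and the 1536-dimensional input only reflects the output size of text-embedding-ada-002, the embedding model mentioned later in Section 4.1.4; the pooling inside the LLM is abstracted away.

```python
import torch
import torch.nn as nn

class ItemLanguageEncoder(nn.Module):
    """Sketch of Equation (3): map a pooled LLM embedding (dimension d_L)
    to the recommender's embedding dimension d via a linear projection."""
    def __init__(self, d_llm: int, d: int):
        super().__init__()
        self.proj = nn.Linear(d_llm, d)  # plays the role of W_1 and b_1

    def forward(self, llm_embedding: torch.Tensor) -> torch.Tensor:
        # e = W_1 * LLM([w_1, ..., w_{N_v}]) + b_1
        return self.proj(llm_embedding)

# Two items with hypothetical 1536-dim LLM embeddings -> (2, 64) representations.
enc = ItemLanguageEncoder(d_llm=1536, d=64)
e = enc(torch.randn(2, 1536))
```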
Next, the relation extraction model $q(r \mid v_i, v_{-i}, \psi)$ in Equation (2) predicts the relation between two given items over the relation set $R$ based on their language knowledge representations, i.e., $\mathbf{e}_i \in \mathbb{R}^d$ and $\mathbf{e}_{-i} \in \mathbb{R}^d$. In practice, we can adopt any classifier that allows gradient backpropagation. Without loss of generality, we use a lightweight linear classifier:

$$q(r \mid v_i, v_{-i}, \psi) = \mathrm{Softmax}(W_2 [\mathbf{e}_i; \mathbf{e}_{-i}] + \mathbf{b}_2), \quad (4)$$

where $W_2 \in \mathbb{R}^{2d \times |R|}$ and $\mathbf{b}_2 \in \mathbb{R}^{|R|}$ are the weight and bias of the linear classifier, respectively, and $[\,;\,]$ denotes the concatenation operation.

3.2.3 Relational Item Reconstruction. Through the relation extraction model, we estimate the latent relation between two items $v_i$ and $v_{-i}$. Given the estimated relation and one of the items, the item reconstruction model aims to reconstruct the other item. The specific definition is as follows:

$$p(v_i \mid v_{-i}, r, \theta) = \frac{\exp(\phi(v_i, v_{-i}, r))}{\sum_{v_i' \in V} \exp(\phi(v_i', v_{-i}, r))}, \quad (5)$$

where $\phi(v_i, v_{-i}, r)$ is a scoring function for two items and a relation, and any triplet scoring function can be used. Without loss of generality, we use DistMult [46] as the scoring function:

$$\phi(v_i, v_{-i}, r) = \mathbf{v}_i^T \mathrm{diag}(\mathbf{r})\, \mathbf{v}_{-i}, \quad (6)$$

where $\mathrm{diag}(\mathbf{r})$ is a diagonal matrix with the relation embedding $\mathbf{r}$ as its diagonal elements. Note that we use the ID representations of items $\mathbf{v}_i$ for reconstruction instead of the language knowledge representations $\mathbf{e}_i$. This design is to align the representation space of the predicted relation $r$ with the relation-aware recommendation model. Since Equation (5) involves calculations across the entire set of items $V$, resulting in high computational complexity, we approximate $\log p(v_i \mid v_{-i}, r, \theta)$ using negative sampling as follows:

$$\log p(v_i \mid v_{-i}, r, \theta) = \log \sigma(\phi(v_i, v_{-i}, r)) + \log \sigma(-\phi(v_i^-, v_{-i}, r)), \quad (7)$$

where $\sigma$ is the sigmoid activation function, and $v_i^-$ is a randomly sampled negative item. Ultimately, the optimization objective in Equation (2) becomes:

$$\mathcal{L}(\theta, \psi) = \sum_{i=1}^{2} \sum_{r \in R} q(r \mid v_i, v_{-i}, \psi) \left[ \log \sigma(\phi(v_i, v_{-i}, r)) + \log \sigma(-\phi(v_i^-, v_{-i}, r)) \right] + \alpha H[q(r \mid v_i, v_{-i}, \psi)]. \quad (8)$$
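The pieces above fit together in a few lines. The following sketch implements the linear relation classifier (Equation (4)), the DistMult score (Equation (6)), and the negative-sampling objective (Equations (7)-(8)); shapes, names, and the single-negative setup are our assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentRelationDiscovery(nn.Module):
    """Sketch of Eqs. (4)-(8): linear relation classifier q over LLM-based
    representations, DistMult score phi, and the negative-sampling objective."""
    def __init__(self, d: int, num_relations: int, alpha: float = 0.1):
        super().__init__()
        self.classifier = nn.Linear(2 * d, num_relations)  # W_2, b_2 in Eq. (4)
        self.rel_emb = nn.Embedding(num_relations, d)      # relation embeddings r
        self.alpha = alpha                                 # entropy weight

    def phi_all(self, v_a, v_b):
        # DistMult score v_a^T diag(r) v_b (Eq. (6)) for every relation: (B, |R|)
        return (v_a.unsqueeze(1) * self.rel_emb.weight * v_b.unsqueeze(1)).sum(-1)

    def forward(self, e_i, e_neg_i, v_i, v_neg_i, v_sampled):
        # Eq. (4): relation distribution from language knowledge representations
        q = F.softmax(self.classifier(torch.cat([e_i, e_neg_i], dim=-1)), dim=-1)
        # Eq. (7): negative-sampling estimate of log p(v_i | v_-i, r), per relation
        log_p = F.logsigmoid(self.phi_all(v_i, v_neg_i)) \
              + F.logsigmoid(-self.phi_all(v_sampled, v_neg_i))
        # Eq. (8): expected reconstruction under q plus entropy regularizer H[q];
        # this is an objective to maximize (negate it for gradient descent).
        entropy = -(q * q.clamp_min(1e-9).log()).sum(-1)
        return (q * log_p).sum(-1).mean() + self.alpha * entropy.mean()
```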
3.3 LRD-based Sequential Recommendation

In this section, we present how to apply the latent relations extracted by LRD in a relation-aware sequential recommender. We first introduce the pipeline of sequential recommendation explicitly considering item relations in Section 3.3.1, followed by the joint optimization framework of latent relation discovery and relation-aware sequential recommendation in Section 3.3.2.

3.3.1 Relation-aware Sequential Recommendation. Given a user's interaction history $S_u = \{v_1, v_2, v_3, ..., v_{N_u}\}$ and a target item $v_j$, the user's preference can be reflected in the history of interacted items. We further explicitly consider the relations between each historical item and the target item for sufficient user preference modeling. Formally, the preference score of user $u$ for target item $v_j$ is defined as:

$$y_{u,j} = (\mathbf{u} + \mathbf{m}_{u,j})\, \mathbf{v}_j^T + b_j, \quad (9)$$

where $\mathbf{u} \in \mathbb{R}^d$ and $\mathbf{v}_j \in \mathbb{R}^d$ are the representations of the user and the target item, $\mathbf{m}_{u,j} \in \mathbb{R}^d$ is the user's historical sequence representation considering the relations between historical items and the target item, and $b_j$ is the bias term. Calculating $\mathbf{m}_{u,j}$ is a crucial step in modeling relation-aware user preferences. Specifically, it is the aggregation of multiple user sequence representations considering different relation types, defined as:

$$\mathbf{m}_{u,j} = \mathrm{AGG}([\mathbf{s}_{uj,r_1}; \mathbf{s}_{uj,r_2}; ...; \mathbf{s}_{uj,r_{|R|}}]), \quad (10)$$

where AGG is an aggregation function that can adopt various aggregation methods such as mean-pooling, max-pooling, and attention-pooling [38, 45], and $R$ is the relation set, including predefined relations and latent relations discovered by LRD. $\mathbf{s}_{uj,r}$ is the historical sequence representation of user $u$ given a relation $r$ and target item $v_j$, defined as:

$$\mathbf{s}_{uj,r} = \sum_{v_i \in S_u} \omega(v_i, v_j, r)\, \mathbf{v}_i, \quad (11)$$

where $\omega(v_i, v_j, r)$ is the relation intensity between historical item $v_i$ and target item $v_j$ under relation $r$. It is a normalized weight across all relations, defined as:

$$\omega(v_i, v_j, r) = \frac{\exp(\phi(v_i, v_j, r))}{\sum_{v_i' \in V \setminus S_u} \exp(\phi(v_i', v_j, r))}, \quad (12)$$

where $\phi(v_i, v_j, r)$ is the triplet scoring function given two items and a relation. The scoring function used here is consistent with the one used in the item reconstruction model in Section 3.2.3, i.e., Equation (5). This alignment facilitates joint optimization in subsequent steps.
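A compact sketch of Equations (10)-(12) under mean-pooling AGG follows. For simplicity, the softmax here normalizes over the history items only, whereas Equation (12) normalizes over $V \setminus S_u$, so this is an approximation of the paper's weighting; all names are illustrative.

```python
import torch

def relation_aware_seq_repr(hist_v, target_v, rel_emb):
    """Sketch of Eqs. (10)-(12): per-relation weighted sums of historical
    item embeddings, weighted by normalized DistMult scores against the
    target item. hist_v: (N, d), target_v: (d,), rel_emb: (|R|, d)."""
    # phi(v_i, v_j, r) for every history item and relation: (N, |R|)
    scores = torch.einsum('nd,rd,d->nr', hist_v, rel_emb, target_v)
    weights = torch.softmax(scores, dim=0)          # normalize over items, per relation
    s = torch.einsum('nr,nd->rd', weights, hist_v)  # s_{uj,r} for each relation: (|R|, d)
    return s.mean(0)                                # mean-pooling AGG of Eq. (10)
```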
3.3.2 Joint Learning. For the relation-aware sequential recommendation task, we adopt the BPR pairwise loss [31] to define its optimization objective:

$$\mathcal{L}_{rec} = -\sum_{u \in U} \sum_{j=2}^{N_u} \log \sigma(y_{u,j} - y_{u,j^-}). \quad (13)$$

To leverage the discovered latent relations for the recommendation task and simultaneously let user interaction data guide the relation discovery process, we jointly optimize the objectives of the latent relation discovery task in Equation (8) and the recommendation task in Equation (13). For this purpose, we make small modifications to Equation (8) and explicitly represent item pairs as historical items and target items, as shown below:

$$\mathcal{L}_{lrd} = -\sum_{u \in U} \sum_{j=2}^{N_u} \sum_{i=1}^{j-1} \sum_{r \in R} q(r \mid v_i, v_j, \psi) \left[ \log \sigma(\phi(v_i, v_j, r)) + \log \sigma(-\phi(v_i^-, v_j, r)) \right] + \alpha H[q(r \mid v_i, v_j, \psi)]. \quad (14)$$

Additionally, to ensure the model's capability to model predefined relations, we organize the set of triplets $T$ with predefined relations into a knowledge graph and adopt a widely used knowledge graph embedding method to optimize the representations of items and relations in the knowledge graph, with the objective:

$$\mathcal{L}_{kge} = -\sum_{(v_i, v_{-i}, r) \in T} \log \sigma(\phi(v_i, v_{-i}, r) - \phi(v_i^-, v_{-i}^-, r)). \quad (15)$$

The knowledge graph embedding task is also included in the joint optimization framework. Ultimately, the joint optimization objective is defined as:

$$\mathcal{L} = \mathcal{L}_{rec} + \gamma \mathcal{L}_{kge} + \lambda \mathcal{L}_{lrd}, \quad (16)$$

where $\gamma$ and $\lambda$ are the coefficients of the knowledge graph embedding task and the latent relation discovery task, respectively.
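Putting the three objectives together, Equation (16) is a simple weighted sum; a minimal sketch, where the coefficient values are hypothetical points from the grid later tuned in Section 4.1.4:

```python
def joint_loss(loss_rec, loss_kge, loss_lrd, gamma=1.0, lam=0.1):
    """Eq. (16): L = L_rec + gamma * L_kge + lambda * L_lrd.
    gamma and lam are hypothetical values from the tuning grid {0.1, 1, 5, 10}."""
    return loss_rec + gamma * loss_kge + lam * loss_lrd
```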
4 EXPERIMENTS

In this section, we first introduce the experimental setting and then present experimental results and analyses to answer the following research questions:
• RQ1: How effective are LRD-enhanced relation-aware sequential recommendation models?
• RQ2: Does each component of the LRD-enhanced relation-aware sequential recommender contribute to the recommendation performance?
• RQ3: Does LRD discover reliable and vital relations between items?

Table 2: Statistics of the datasets after preprocessing.

| | MovieLens | Office | Electronics |
|---|---|---|---|
| User-Item Interactions | | | |
| #user | 943 | 4,905 | 192,403 |
| #item | 1,349 | 2,420 | 63,001 |
| #inter. | 99,287 | 53,258 | 1,682,498 |
| density | 7.805% | 0.448% | 0.014% |
| Item Relations | | | |
| #relation | 2 | 4 | 4 |
| #triplets | 886K | 778K | 2,148M |

4.1 Experimental Settings

4.1.1 Datasets. We conduct experiments on three publicly available datasets from diverse domains to validate the model's capability to discover latent relations in various recommendation scenarios.
• MovieLens (https://grouplens.org/datasets/movielens/100k/): This dataset is widely used for movie recommendation and consists of user ratings and attribute information for movies. We utilize the MovieLens-100k version for our experiments. Two predefined relations, namely "release year" and "genre", are extracted from the dataset. To discover latent relations between items, in addition to the movie title, release year, and genre available in the dataset, we crawl movie information from IMDB (https://www.imdb.com/), including director, actors, and a brief synopsis, based on movie titles and release years.
• Amazon Office Products and Electronics [11, 26]: These two datasets are subsets of the Amazon e-commerce dataset, representing two distinct domains. They contain user ratings, reviews, and rich item metadata. We utilize two attributes (i.e., category and brand) and co-occurrence information (i.e., "also buy" and "also view") as predefined item relations. The texts of the item title, category, and brand are used to discover latent relations between items.
For all datasets, following previous work [38], we filter out users and items with fewer than 5 interactions. The statistical information for each dataset after preprocessing is presented in Table 2.

4.1.2 Baselines. We selected multiple sequential recommendation models as baselines for our experiments:
• Caser [36] adopts convolutional filters to capture sequential patterns in embedded user sequences.
• GRU4Rec [13] utilizes Gated Recurrent Units (GRU) to model the user representation by capturing patterns in the historical interaction sequence.

Table 3: Overall performance of different models. The best performances are denoted in bold fonts. "H@K" is short for "HR@K" and "N@K" is short for "NDCG@K". The first four metric columns are on MovieLens, the next four on Office, and the last four on Electronics. The subscript "LRD" denotes the model enhanced by LRD. "Improv." means the relative improvement of the LRD-based model over the corresponding vanilla model. The superscripts † and ‡ indicate p ≤ 0.05 and p ≤ 0.01 for the paired t-test of the LRD-based model vs. the vanilla model.

| Model | H@5 | H@10 | N@5 | N@10 | H@5 | H@10 | N@5 | N@10 | H@5 | H@10 | N@5 | N@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Caser | 0.5217 | 0.6872 | 0.3571 | 0.4107 | 0.3095 | 0.4762 | 0.1993 | 0.2530 | 0.4620 | 0.5865 | 0.3435 | 0.3838 |
| GRU4Rec | 0.5101 | 0.6723 | 0.3451 | 0.3976 | 0.3295 | 0.4856 | 0.2164 | 0.2670 | 0.4699 | 0.5994 | 0.3487 | 0.3906 |
| SASRec | 0.5186 | 0.6829 | 0.3712 | 0.4242 | 0.4027 | 0.5439 | 0.2751 | 0.3210 | 0.4805 | 0.6083 | 0.3587 | 0.4000 |
| TiSASRec | 0.5313 | 0.6882 | 0.3812 | 0.4322 | 0.4014 | 0.5433 | 0.2745 | 0.3209 | 0.5114 | 0.6329 | 0.3860 | 0.4253 |
| RCF | 0.5101 | 0.6660 | 0.3635 | 0.4137 | 0.4145 | 0.5696 | 0.2911 | 0.3413 | 0.5790 | 0.7004 | 0.4475 | 0.4868 |
| RCF_LRD | 0.5398‡ | 0.6882‡ | 0.3886‡ | 0.4365‡ | 0.4381‡ | 0.5761‡ | 0.3127‡ | 0.3573‡ | 0.5828† | 0.7035† | 0.4510† | 0.4901† |
| Improv. | +5.82% | +3.33% | +6.91% | +5.51% | +5.69% | +1.14% | +7.42% | +4.69% | +0.66% | +0.44% | +0.78% | +0.68% |
| KDA | 0.5748 | 0.7381 | 0.4182 | 0.4711 | 0.4453 | 0.6145 | 0.3127 | 0.3676 | 0.6008 | 0.7194 | 0.4665 | 0.5049 |
| KDA_LRD | 0.6066‡ | 0.7434‡ | 0.4420‡ | 0.4867‡ | 0.4826‡ | 0.6302‡ | 0.3403‡ | 0.3881‡ | 0.6111‡ | 0.7295‡ | 0.4760‡ | 0.5143‡ |
| Improv. | +5.53% | +0.72% | +5.69% | +3.31% | +8.38% | +2.55% | +8.83% | +5.58% | +1.71% | +1.40% | +2.04% | +1.86% |
• SASRec [17] incorporates self-attention mechanisms to aggregate the historical item representations into the user representation.
• TiSASRec [20] further considers time intervals between historical interactions, building on SASRec.
• RCF [45] adopts a two-level attention network to integrate item relations into the modeling of the user sequence representation.
• KDA [38] incorporates a Fourier-based temporal evolution module to capture the dynamic changes of item relations over time.
Among these methods, Caser, GRU4Rec, SASRec, and TiSASRec are item-based collaborative filtering models, while RCF and KDA are models that explicitly incorporate item relations. Our proposed LRD method can discover latent item relations beneficial for recommendation to enhance existing relation-aware sequential recommendation models. Therefore, we compare the performance of LRD-enhanced RCF and KDA, i.e., RCF_LRD and KDA_LRD, with the baselines above.

4.1.3 Evaluation Metrics. We use two evaluation metrics, HR@K and nDCG@K, to evaluate the performance of the models, where K is set to 5 and 10. We adopt the leave-one-out method to construct the dataset. Specifically, for a user's interaction history sequence, we use the last item for testing, the second-to-last item for validation, and the remaining items for training. When predicting the next item, following previous work [38], we rank the ground-truth next item against 99 randomly sampled negative items. We report the average metrics over five runs with different random seeds.

4.1.4 Implementation Details. We implement LRD and the baseline models with the ReChorus library (https://github.com/THUwangcy/ReChorus/tree/master). We harness a widely used LLM, i.e., GPT-3 (specifically a version tailored for text embedding, i.e., text-embedding-ada-002), to obtain language knowledge representations of items. The max length of the history sequence is set to 20. For all models, we use the Adam optimizer and carefully search for hyperparameters, with a batch size of 256 and an embedding dimension of 64. Early stopping is adopted if the nDCG@5 does not improve for 10 epochs. We tune the learning rate in {1e-2, 1e-3, 1e-4} and the L2-regularization coefficients in {1e-4, 1e-5, 1e-6, 0}. The coefficients of the knowledge graph embedding task and the latent relation discovery task are tuned within {0.1, 1, 5, 10}. The number of latent relations to discover is tuned in {5, 6, 7, 8, 9, 10}. The regularization coefficient of the relation extraction model of LRD is set to 0.1. The code of our implementation is available at: https://github.com/ysh-1998/LRD.

4.2 Performance Comparison (RQ1)

We compare two LRD-enhanced relation-aware sequential recommendation models, namely RCF_LRD and KDA_LRD, with the baseline methods. The overall experimental results are presented in Table 3. For the baseline methods, several observations can be made. Firstly, traditional sequential recommendation models rely on implicit preferences in the user interaction history to predict the next item, and thus heavily depend on rich interaction data. On the relatively dense MovieLens dataset, the performance gap between traditional methods and relation-aware methods is not significant. However, on the highly sparse Amazon datasets, traditional methods perform significantly worse than relation-aware methods.
Secondly, by incorporating item relations into sequential recommendation, RCF and KDA achieve significantly better performance than non-relation-aware sequential recommendation models. This indicates that explicitly modeling relations between historical items and the target item contributes to user sequence representation modeling by capturing user preferences more fully. Thirdly, TiSASRec and KDA achieve significant performance improvements by incorporating modules that model temporal information on top of SASRec and RCF, respectively. This demonstrates that considering temporal factors in modeling user interaction sequences can effectively enhance model performance.

[Figure 2: Ablation study on the variant models (w/o KGE, w/o LRD, w/o LLM vs. RCF_LRD and KDA_LRD), reporting NDCG@5 and HR@5 on MovieLens, Office, and Electronics.]

RCF_LRD and KDA_LRD achieve significant performance improvements over the vanilla models and outperform almost all non-relation-aware methods. We attribute these improvements to the fact that previous relation-aware sequential recommendation methods relied on predefined relations, limiting the model's ability to capture diverse item relations. Especially on datasets with fewer predefined relations, such as MovieLens, the performance of RCF is not significantly better than that of traditional methods. However, leveraging our proposed LRD method effectively improves the capability of relation-aware sequential recommendation models by capturing latent item relations. Moreover, the performance improvement of KDA_LRD suggests that the latent relations discovered by LRD also exhibit temporal evolution characteristics.

4.3 Ablation Study (RQ2)

In this section, we investigate the contributions of each component of our proposed LRD-based relation-aware sequential recommendation model to the final recommendation performance. For this purpose, we design two variant models: (1) w/o LLM, where item ID representations replace the language knowledge representations obtained through the LLM, to investigate whether language knowledge is crucial in the relation discovery process; (2) w/o KGE, which removes the knowledge graph embedding task, meaning the model cannot optimize the representations of predefined relations utilizing the supervised signals in the knowledge graph. The vanilla model without LRD can also be seen as a variant model, i.e., w/o LRD. Figure 2 presents the comparative results of the variant models, revealing several findings. Firstly, removing the LLM for obtaining item language knowledge representations leads to a noticeable performance decline.
This indicates that ID representations lacking rich semantic information have limited effectiveness in the process of discovering latent relations, emphasizing the crucial role of the LLM's rich world knowledge. Secondly, the variant model without the KGE task shows a significant performance decrease, highlighting the necessity of maintaining the modeling of predefined relations while utilizing latent relations.

4.4 Latent Relation Analyses (RQ3)

To further validate the effectiveness and reliability of the latent item relations discovered by LRD, we perform additional analysis experiments.

4.4.1 Relation Embeddings. To investigate the distribution of the learned relation embeddings, we select the model with the best performance on the Office dataset, i.e., KDA_LRD, and calculate the pair-wise cosine similarity of the relation embeddings. This model has 12 relations, including predefined relations (1-4) and latent relations (5-12). The similarity matrix is shown in Figure 3.

[Figure 3: The pair-wise cosine similarity of the relation embeddings in KDA_LRD on the Office dataset. The labels on the horizontal and vertical axes denote the relation IDs, where 1-4 are predefined relations and 5-12 are latent relations learned by LRD.]

We find that there is a notable difference between the embeddings of predefined relations and latent relations. This is reasonable, as the learning of predefined relation embeddings primarily relies on supervised signals from the knowledge graph, while the learning of latent relation embeddings depends on the latent relation discovery task. It is worth noting that the relation extraction model of LRD also predicts predefined relations for the reconstruction of relational items. This indicates that the joint optimization of the latent relation discovery task and the knowledge graph embedding task helps the model effectively distinguish predefined relations from latent relations during the learning process.
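The similarity matrix in Figure 3 is straightforward to reproduce from trained relation embeddings; a minimal sketch, where the (12, 64) shape is an illustrative assumption matching the 12 relations and a 64-dimensional embedding:

```python
import torch
import torch.nn.functional as F

rel_emb = torch.randn(12, 64)          # 4 predefined + 8 latent relation embeddings
normed = F.normalize(rel_emb, dim=-1)  # unit-normalize each embedding
sim = normed @ normed.T                # (12, 12) pair-wise cosine similarity matrix
```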
4.4.2 Latent Relational Items. To provide a clearer illustration of the learned item relations, we present representative item pairs of several relation types in Figure 4. These pairs are obtained by sorting the model's scores for item pairs under these relations, i.e., the selected item pairs are samples with higher scores under each relation. To facilitate readability, we have simplified the item descriptions.

[Figure 4: Descriptions of representative item pairs belonging to relations #1, #5, and #8 of KDA_LRD on the Office dataset, where relation #1 is "also buy" and relations #5 and #8 are learned latent relations. Example pairs include "Desktop Staplers"-"Staples" and "Inkjet Printer"-"Inkjet Printer Ink" for relation #1, "Inkjet Printers"-"Expanding File Jackets & Pockets" for relation #5, and "Transparent Tape"-"Cart, Chrome Shelving" for relation #8.]

Several observations can be made from Figure 4. Firstly, items in relation #1 generally exhibit complementary functionalities and are frequently purchased together. This indicates that the model effectively learns the predefined relations from the knowledge graph, i.e., the "also buy" relation. Secondly, the model groups item pairs with common characteristics into the same latent relation type. These relations are more complex than predefined relations, possibly involving multi-hop connections, and demonstrate characteristics tailored to specific scenarios and tasks. Relation #5 reflects relations between items in document processing, editing, and storage scenarios. Items like "Inkjet Printers" and "Expanding File Jackets & Pockets", though not directly related, can establish a two-hop relation through "printing paper", completing a document processing procedure. Relation #8 focuses on tasks related to transportation. Items like "Transparent Tape" and "Cart, Chrome Shelving" are related to the task of packing and transporting goods. These examples further demonstrate the reliability of the latent relations discovered by LRD and confirm the conclusion drawn in Section 4.4.1 that latent relations differ significantly from predefined relations. Leveraging these more complex and diverse relations, the LRD-based relation-aware sequential recommendation model effectively captures more intricate user preferences, resulting in improved recommendation performance.

4.4.3 Case Study. To further demonstrate how the model leverages latent relations to achieve improved recommendation performance, we present a case in Figure 5.

[Figure 5: The interaction history of user A3N4VTNFPMTHEF on the Office dataset (history items "Wireless All-in-One Printer", "Removable Labels", and "Pen-Grip Fine Point Pen"; target item "Hanging Folder without Tab"). KDA_LRD ranks the target at position 4 by leveraging the learned latent relations between historical items and the target item, i.e., relations #5 and #12, outperforming KDA (rank 28), which solely utilizes the predefined relation "also buy".]

In this case, we record the triplet scores between each historical item and the target item for all relations. The relation with the highest score is considered the predicted relation between the items. As shown in Figure 5, only one predefined relation exists between the historical items and the target item, i.e., "also buy". Relying solely on this predefined relation, KDA fails to rank the target item at a high position, while the LRD-based model
KDA_LRD identifies latent relations #5 and #12 between items 1 and 3 and the target item, respectively. Considering the findings from Sections 4.4.1 and 4.4.2, latent relations #5 and #12 exhibit similar embeddings and both reflect item relations in the document processing scenario. Specifically, in this case, the printout from the "Wireless All-in-One Printer" and the document written with the "Pen-Grip Fine Point Pen" can both be stored in the "Hanging Folder". Benefiting from the latent relations discovered by LRD between the historical items and the target item, KDA_LRD ranks the target item in the 4th position, significantly outperforming the vanilla model KDA.

4.5 Hyper-parameter Sensitivity

In this section, we perform a sensitivity analysis on two crucial hyperparameters of LRD: the number of latent relations the model aims to discover, i.e., Num_latent, and the coefficient of the latent relation discovery task in the joint optimization, i.e., $\lambda$. We aim to analyze the effect of hyperparameter selection on model performance.

[Figure 6: nDCG@5 comparison w.r.t. the number of latent relations (Num_latent) and the coefficient of the latent relation discovery task ($\lambda$) for RCF_LRD and KDA_LRD, with RCF and KDA as references, on MovieLens and Office.]

Firstly, we examine the sensitivity of the model performance to different values of Num_latent. Note that to isolate the impact of the two hyperparameters, we present the average performance under the same Num_latent value across all $\lambda$ values. Figure 6 illustrates the model performance under various Num_latent values, and several observations can be made. Firstly, the model exhibits significant performance differences under different Num_latent values, indicating that both learning an insufficient number of latent relations and learning redundant or useless relations impair the model performance. Secondly, the overall trends of performance changes with Num_latent are generally consistent across different models on the same dataset, while the optimal Num_latent value varies across datasets. Specifically, the optimal value for MovieLens is 6, while for the Office dataset it is 5. This aligns with the assumption that there are differences in item relations across diverse recommendation scenarios. Next, we analyze the hyperparameter $\lambda$. The performance shown in Figure 6 represents the average under the same $\lambda$ value across all Num_latent values. We observe a stable improvement in model performance until the optimal value is reached, which is generally consistent for the two models on the same dataset. This demonstrates the effectiveness of the latent relation discovery task.

5 RELATED WORK

5.1 Sequential Recommendation

In the literature on recommender systems, sequential recommendation is a widely researched task, aiming to recommend items that interest users based on their historical interactions [42].
Earlier works adopt Markov chains to model item transition relations in the interaction history of users [10, 32]. In recent years, with the development of deep learning methods, various deep neural networks have been proposed to capture user preferences in historical interaction sequences, including Recurrent Neural Networks (e.g., GRU [13], LSTM [43], and HRNN [29]), Convolutional Neural Networks [36, 47], attention-based networks [17, 19, 35], and Graph Neural Networks [5, 44]. The core idea of these methods is to capture item-based collaborative filtering information in historical interactions, overlooking explicit relations between items, which are crucial for understanding user behavior and extracting user preferences more effectively. The approach proposed in this paper can effectively uncover relations among items, thus enabling more effective modeling of user preferences.

5.2 Relation-aware Recommendation

In contrast to traditional recommendation methods that consider only item-based collaborative filtering similarity, relation-aware recommendation methods explicitly incorporate relations between items into the recommendation model. One line of methods constructs a knowledge graph containing relations between items and relations between items and attributes [1, 4, 24, 40, 41, 48]; these methods enhance item representations through knowledge graph embedding tasks. CKE [48] constructs a knowledge graph containing items and attributes and optimizes the graph embedding task and the recommendation task jointly. CFKG [1] further incorporates users into the knowledge graph: it defines a special "purchase" relation as a proxy for the interactions between users and items to transform the recommendation task into a knowledge graph embedding task. Another research direction focuses on sequential recommendation scenarios and explicitly models relations between historical items and the target item [38, 39, 45]. RCF [45] proposes a two-level attention framework to compute the user's attention to relation types and the relation intensity between historical items and the target item separately. Furthermore, considering the evolution of item relations over time, KDA [38] proposes a Fourier-based temporal evolution module to incorporate time information into the modeling of relational items. Nevertheless, existing methods rely on manually predefined relations and suffer from relation sparsity and item sparsity issues. In this paper, we propose to uncover latent relations among items, enabling the model to adapt to diverse and complex recommendation scenarios.

5.3 Large Language Models for Recommendation

Recently, the emergence of Large Language Models (LLMs) [3, 27, 28, 37, 50] has revolutionized the field of natural language processing. The performance of various natural language processing tasks has significantly improved with the reasoning abilities and world knowledge of LLMs [16, 51]. The application of LLMs to recommendation tasks has been widely researched along two main directions [7]. One line of work leverages LLMs' reasoning capabilities and knowledge to solve various recommendation tasks. Liu et al. [21] utilize ChatGPT [27] to perform five recommendation tasks in zero-shot and few-shot settings, achieving impressive results on explanation generation tasks while performing poorly on the direct recommendation and sequential recommendation tasks. P5 [9] transforms various recommendation tasks into a natural language format and feeds them into T5 [30].
The model is then fine-tuned with the language modeling objective and exhibits performance improvements on five recommendation tasks. Another line of research investigates combining recommendation models with LLMs. ChatRec [8] proposes to first recall candidate items from a vast item pool using traditional recommendation models and then adopts an LLM to re-rank the candidates, demonstrating the potential of LLMs as rerankers. TALLRec [2] constructs instruction-tuning samples based on user rating data and fine-tunes LLaMA [37] with parameter-efficient tuning, i.e., LoRA [14], achieving promising performance in few-shot recommendation scenarios. InstructRec [49] organizes diverse instruction samples, including user interaction data and comments, to perform instruction tuning on FLAN-T5 [6]; the trained model contributes to the recommendation model as a reranker. ONCE [22] utilizes ChatGPT as a data augmenter to acquire knowledge-enhanced representations of users and news, improving the performance of news recommendation. However, existing recommendation methods with LLMs have significant limitations in terms of performance and effectiveness. In this paper, we leverage the rich world knowledge of the LLM to acquire knowledge representations of items, which are used to discover item relations that contribute to recommendations. We fully utilize the knowledge of the LLM while ensuring the efficiency of the overall framework.

6 CONCLUSION

In this paper, we propose a novel method for discovering latent item relations based on a Large Language Model (LLM), namely LRD. Leveraging the rich world knowledge of the LLM and a self-supervised learning approach, LRD effectively extracts latent item relations. We jointly optimize LRD with existing relation-aware sequential recommender systems. On the one hand, the latent relations discovered by LRD provide more sophisticated item associations, contributing to sufficient modeling of intricate user preferences; on the other hand, the supervision signals from user interactions effectively guide the relation discovery process. Experimental results on multiple public datasets demonstrate that LRD significantly improves the performance of existing relation-aware sequential recommendation methods, and further analyses demonstrate the reliability of the discovered latent relations. Note that the LLM in our method was not meticulously selected in the current implementation; exploring the performance of more advanced LLMs within our method is left as future work." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.07718v1", |
| "title": "Local Centrality Minimization with Quality Guarantees", |
| "abstract": "Centrality measures, quantifying the importance of vertices or edges, play a\nfundamental role in network analysis. To date, triggered by some positive\napproximability results, a large body of work has been devoted to studying\ncentrality maximization, where the goal is to maximize the centrality score of\na target vertex by manipulating the structure of a given network. On the other\nhand, due to the lack of such results, only very little attention has been paid\nto centrality minimization, despite its practical usefulness.\n In this study, we introduce a novel optimization model for local centrality\nminimization, where the manipulation is allowed only around the target vertex.\nWe prove the NP-hardness of our model and that the most intuitive greedy\nalgorithm has a quite limited performance in terms of approximation ratio. Then\nwe design two effective approximation algorithms: The first algorithm is a\nhighly-scalable algorithm that has an approximation ratio unachievable by the\ngreedy algorithm, while the second algorithm is a bicriteria approximation\nalgorithm that solves a continuous relaxation based on the Lov\\'asz extension,\nusing a projected subgradient method. To the best of our knowledge, ours are\nthe first polynomial-time algorithms with provable approximation guarantees for\ncentrality minimization. Experiments using a variety of real-world networks\ndemonstrate the effectiveness of our proposed algorithms: Our first algorithm\nis applicable to million-scale graphs and obtains much better solutions than\nthose of scalable baselines, while our second algorithm is rather strong\nagainst adversarial instances.", |
| "authors": "Atsushi Miyauchi, Lorenzo Severini, Francesco Bonchi", |
| "published": "2024-02-12", |
| "updated": "2024-02-12", |
| "primary_cat": "cs.SI", |
| "cats": [ |
| "cs.SI", |
| "cs.DS" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Local Centrality Minimization with Quality Guarantees", |
| "main_content": "INTRODUCTION Among the many analytical tools that social network analysis [21] borrowed from graph theory, centrality measures play a fundamental role in a wide variety of analyses [12]. Centrality, which quantifies the importance of vertices or edges only based on the graph structure, has found many applications, including, e.g., identification of important users or connections in social networks, community detection [34], anomaly detection [28], to name a few. Local centrality minimization is the problem of removing a few existing edges around a target vertex, so as to minimize its centrality score. A direct application can be found in the context of reducing the visibility, or influence, of a targeted harmful user in a social network, without explicitly blocking the user account. In this scenario, to minimize the impact to the other decent users, it is quite reasonable to manipulate only the ties around the targeted user. The edges to be removed, identified by local centrality minimization, are the most important edges for the centrality (i.e., visibility or influence) of the target vertex. In this regard, another direct application is to keep satisfying influential users so that they are engaged in the platform. The degree of satisfaction of such influencers often depends on how actually influential they are in the platform, i.e., how much their content is consumed by the network. Local centrality minimization can be used for revealing the most important connections between the influencers and their followers, which are key in contributing to their visibility. While a large body of work has been devoted to studying centrality maximization [12], much less attention has been paid to centrality minimization (see Section 2 for a brief literature survey). Generally speaking, the goal of centrality maximization is to maximize the centrality score of a target vertex by adding a limited number of edges to the network. In many reasonable optimization models, the objective function becomes monotone and submodular [14], and thus a simple greedy algorithm admits a (1 \u22121/e)-approximation [33]. This positive result makes the basis of various studies on centrality maximization [4, 7, 10, 11, 18, 29]. The lack of positive approximability results has instead limited the attention on the centrality minimization problem, especially in its local variant. Waniek et al. [41] introduced several optimization models for local centrality minimization under some specific objectives and constraints, investigated the computational complexity of their models, and devised some algorithms. Later, Waniek et al. [42] investigated local centrality minimization from a game-theoretic point of view. However, the work by Waniek et al. [41, 42] proposed only (exponential-time) exact algorithms and heuristics. In this paper, we study the local centrality minimization problem, adopting the most well-established centrality measure called the harmonic centrality [12], which quantifies the importance of vertices based on the level of reachability from the other vertices. The harmonic centrality is known as an effective alternative to the closeness centrality [6], which was employed in Waniek et al. [41], in the sense that unlike the closeness centrality, it is well-defined even in the case where a graph is not strongly connected. 
Boldi and Vigna [6] showed that, among all the known centrality measures, only the harmonic centrality satisfies all the desirable axioms, namely the size axiom, the density axiom, and the score monotonicity axiom. Recently, Murai and Yoshida [32] demonstrated, both theoretically and empirically, that among well-known centrality measures the harmonic centrality is the most stable (and thus reliable) against the uncertainty of a given graph.

1.1 Paper contributions and roadmap

In this paper, we introduce a novel optimization model for local centrality minimization, where the harmonic centrality is employed as the objective function. Specifically, in our model, given a directed graph $G = (V, A)$, a target vertex $v \in V$, and a budget $b \in \mathbb{Z}_{>0}$, we aim to find a set of incoming edges of $v$ of size (no greater than) $b$ whose removal minimizes the harmonic centrality score of $v$, denoted by $h_G(v)$ (to be defined in Section 3).

For our optimization model, we first analyze the computational complexity. Specifically, we show that our model is NP-hard even on a very limited graph class (i.e., acyclic graphs), by constructing a polynomial-time reduction from the minimum $k$-union problem. Furthermore, we prove that the most intuitive greedy algorithm, which iteratively removes an incoming edge of $v$ that maximally decreases the objective value, cannot achieve an approximation ratio of $o(|V|)$, while any reasonable algorithm has an approximation ratio of $O(|V|)$. This negative result motivates the design of algorithms that exploit the characteristics of our model. We design two polynomial-time approximation algorithms.

The first algorithm is a highly-scalable algorithm with an approximation ratio of $\sqrt{2 h_G(v)}$. We stress that, as $\sqrt{2 h_G(v)} = O(\sqrt{|V|}) = o(|V|)$, this approximation ratio is unachievable by the above greedy algorithm. Our algorithm first sorts the incoming neighbors of the target vertex $v$ in decreasing order of their harmonic centrality scores on a slightly modified graph, and then removes the $b$ incoming edges originating from the top-$b$ vertices in the sorted list. To prove the approximation ratio, we scrutinize the relationship between the harmonic centrality scores of the target vertex and its incoming neighbors. In the end, we also prove the tightness of our analysis of the approximation ratio.

The second algorithm is a polynomial-time algorithm with a bicriteria approximation ratio of $(\frac{1}{\alpha}, (\frac{1}{1-\alpha}, \epsilon))$ for any $\alpha \in (0, 1)$ and $\epsilon > 0$. That is, the algorithm finds a subset of incoming edges of the target vertex $v$ of size at most $b/\alpha$ whose objective value is at most $\frac{1}{1-\alpha}$ times the original optimal value plus $\epsilon$. Therefore, the algorithm approximates the original optimal value while violating the budget constraint to a bounded extent. To design the algorithm, we first introduce a continuous relaxation of our model. To this end, we use the well-known extension of set functions called the Lovász extension [26].
An important fact is that the objective function of our model is submodular, which guarantees that its Lovász extension is (not necessarily differentiable but) convex. Therefore, we can solve the relaxation (with an arbitrarily small error) using a projected subgradient method [3]. Once we get a fractional solution, we apply a simple probabilistic procedure to obtain a subset of incoming edges of the target vertex $v$. Finally, our experiments on a variety of real-world networks show that our first algorithm is applicable to million-scale graphs and obtains much better solutions than those of scalable baselines, while our second algorithm is strong against adversarial instances. In summary, our contributions are as follows:

• We study the local harmonic centrality minimization problem: we prove that it is NP-hard even on acyclic graphs, that its objective function is submodular, and that the most intuitive greedy algorithm cannot achieve an $o(|V|)$-approximation (Section 3).
• We devise a highly-scalable algorithm with an approximation ratio of $\sqrt{2 h_G(v)}$, which is unachievable by the greedy algorithm (Section 4).
• We then devise a bicriteria approximation algorithm that solves a continuous relaxation based on the Lovász extension, using a projected subgradient method (Sections 5 and 6).

To the best of our knowledge, ours are the first polynomial-time algorithms with provable approximation guarantees for centrality minimization.

2 RELATED WORK

In this section, we review the literature on centrality minimization and maximization, submodular minimization, and other relevant applications in social network analysis.

Centrality minimization. The most related to the present paper is the work on centrality minimization by Waniek et al. [41, 42], whose goal is to provide a methodology that contributes to hiding individuals in social networks from centrality-based network analysis algorithms. More specifically, Waniek et al. [41] introduced the following optimization model: given a directed graph $G = (V, A)$, a target vertex $v \in V$, a budget $b \in \mathbb{Z}_{>0}$, and a set $M$ of possible single-edge modifications, we are asked to select at most $b$ actions in $M$ so as to minimize the centrality score of $v$ while satisfying some constraint on the influence of some vertices in the graph. As centrality measures, they considered the degree centrality, the closeness centrality, and the betweenness centrality. They showed that, except for the degree centrality case, the above model is NP-hard even when the constraint on the influence of vertices is ignored, and devised simple heuristics. Waniek et al. [42] investigated local centrality minimization from a game-theoretic point of view. As a tool to analyze their game, they studied the following optimization model: given a directed graph $G = (V, A)$, a target vertex $v \in V$, a hiding parameter $\delta$, and a set $M$ as above, we are asked to find a minimal subset of $M$ guaranteeing that there are at least $\delta$ vertices having a centrality score greater than that of $v$. As centrality measures, they again considered the above three. They showed that the model is 2-approximable for the degree centrality but is inapproximable within any logarithmic factor for the other two.
Note that the above approximation is only with respect to the size of the output, rather than the ranking (or the value) of the centrality score of $v$. Veremyev et al. [38] studied a global centrality minimization problem: given an undirected graph $G = (V, E)$ with a cost $c_e \in \mathbb{R}_{\geq 0}$ for each $e \in E$, a target vertex subset $S \subseteq V$, and a budget $b \in \mathbb{Z}_{>0}$, we are asked to find $F \subseteq E$ whose removal minimizes the centrality score of the target vertex subset $S$ subject to the budget constraint $\sum_{e \in F} c_e \leq b$. The centrality score of a vertex subset is defined as a generalization of the centrality score of a vertex. As a centrality measure, they considered a quite general, distance-based one, which includes the harmonic centrality as a special case. They proved that the above model is NP-hard for any centrality measure of this kind, and, as a by-product of the analysis, they also mentioned the NP-hardness of its local variant, which coincides with (the undirected-graph counterpart of) our proposed model. On the positive side, they presented an exact algorithm based on mathematical programming and greedy heuristics. In the present paper, by focusing on the harmonic centrality, we prove that our model is NP-hard even on a very limited graph class (i.e., acyclic graphs). Very recently, Liu et al. [25] addressed another global centrality minimization problem, where the objective function is a centrality measure called the information centrality and the connectivity of the resulting graph is guaranteed.

Centrality maximization. Centrality maximization has been studied more actively in the literature (e.g., [4, 7, 10, 11, 18, 29]); the most related to ours is the work of Crescenzi et al. [10]. They introduced the harmonic centrality maximization problem, where, given a directed graph $G = (V, A)$, a target vertex $v \in V$, and a budget $b \in \mathbb{Z}_{>0}$, we are asked to insert at most $b$ incoming edges of $v$ so as to maximize the harmonic centrality score of $v$. Our proposed optimization model can be seen as a minimization counterpart of their problem. They proved that the problem is APX-hard, but devised a polynomial-time $(1 - 1/e)$-approximation algorithm based on the submodularity of the objective function. Finally, we note that there is another class of problems also called centrality maximization, where the goal is to find $S \subseteq V$ with the maximum group centrality score (e.g., [1, 2, 4, 8, 18, 24, 27, 31, 35, 45]), which is less relevant to the present paper.

Submodular minimization. Submodular minimization is one of the most well-studied problem classes in combinatorial optimization. Among the literature, the most related work is due to Svitkina and Fleischer [37]. They stated that a polynomial-time $(\frac{1}{\alpha}, \frac{1}{1-\alpha})$-bicriteria approximation algorithm for submodular minimization with a cardinality upper bound (and thus for our proposed model) is possible for any $\alpha \in (0, 1)$, using techniques in Hayrapetyan et al. [17].
However, Hayrapetyan et al. [17] addressed a different problem, called the minimum-size bounded-capacity cut, in which it is the function in the constraint, rather than the objective function, that is submodular; this is not the case in submodular minimization with a cardinality upper bound. Therefore, the above statement is not trivial, and even our proposed model should be handled in a formal way. The Lovász extension has also been actively used for developing novel network analysis algorithms (e.g., [22, 23, 36]).

Applications. Reducing the visibility or influence of target users in social networks has been studied in the context of influence minimization [20, 40, 43]. All existing studies are based on some influence diffusion model, such as the independent cascade model [15, 16] or the linear threshold model [19]. Unlike those, our model does not assume any influence diffusion model, but is based purely on the network structure. Very recently, Fabbri et al. [13] and Coupette et al. [9] addressed the problem of reducing the exposure to harmful content in social media networks. On the other hand, identifying the users and/or connections that play a key role in user engagement in social networks has also attracted much attention. Bhawalkar et al. [5] initiated this line of study from an optimization perspective: they invented a model that aims to find a group of users whose permanent use of the service guarantees as much user engagement as possible, and designed polynomial-time algorithms for some cases. Later, Zhang et al. [44] and Zhu et al. [46] introduced variants of the above model and devised intuitive heuristics.

3 PROBLEM FORMULATION AND CHARACTERIZATION

In this section, we mathematically formulate our problem (Problem 1) and prove its NP-hardness (Theorem 1) and the submodularity of the objective function (Theorem 2). Finally, we show the quite limited performance of the greedy algorithm (Theorem 3).

Let $G = (V, A)$ be a directed graph (or digraph for short). Throughout the paper, we assume that digraphs are simple, that is, there exist neither self-loops nor multiple edges. For $F \subseteq A$, we define $G \setminus F$ as the subgraph of $G$ obtained by removing all edges in $F$ from $G$, i.e., $G \setminus F = (V, A \setminus F)$. For $v \in V$, we denote by $\rho(v)$ the set of incoming edges of $v$, i.e., $\rho(v) = \{(u, v) \in A \mid u \in V\}$. For $v \in V$, let $h_G(v)$ be the harmonic centrality score of $v$ on a digraph $G$, i.e.,

$$h_G(v) = \sum_{u \in V \setminus \{v\}} \frac{1}{d_G(u, v)},$$

where $d_G(u, v)$ is the (shortest-path) distance from $u \in V$ to $v \in V$ on $G$, and, by convention, $d_G(u, v) = \infty$ when $v$ is not reachable from $u$. Note that, contrary to other centrality measures, the harmonic centrality is well-defined even when a digraph is not strongly connected (assuming, by convention, that $1/\infty = 0$). Intuitively, the harmonic centrality quantifies the importance of a given vertex $v$ based on the level of reachability from the other vertices.
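For concreteness, the following minimal Python sketch (our illustration, not the implementation evaluated in Section 7) computes $h_G(v)$ with a single breadth-first search over the reversed digraph; the adjacency-list representation and vertex labeling are assumptions of the sketch.

```python
from collections import deque

def harmonic_centrality(n, edges, v):
    """h_G(v) = sum over u != v of 1 / d_G(u, v), with 1/inf = 0.

    n: number of vertices, labeled 0..n-1; edges: iterable of directed
    (u, w) pairs. A BFS over the reversed edges, started at v, yields
    the distance d_G(u, v) for every u that can reach v.
    """
    rev = [[] for _ in range(n)]
    for u, w in edges:
        rev[w].append(u)              # reversed adjacency: in-neighbors of w
    dist = {v: 0}
    queue = deque([v])
    while queue:
        w = queue.popleft()
        for u in rev[w]:
            if u not in dist:
                dist[u] = dist[w] + 1
                queue.append(u)
    # vertices that cannot reach v are absent from dist and contribute 0
    return sum(1.0 / d for u, d in dist.items() if u != v)

# toy digraph 0 -> 1 -> 2 and 0 -> 2: h_G(2) = 1/1 + 1/1 = 2.0
print(harmonic_centrality(3, [(0, 1), (1, 2), (0, 2)], 2))
```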
The problem we tackle in this paper is formalized as follows:

Problem 1 (Local harmonic centrality minimization). Given a digraph $G = (V, A)$, a target vertex $v \in V$, and a budget $b \in \mathbb{Z}_{>0}$, we are asked to find $F \subseteq \rho(v)$ with $|F| \leq b$ whose removal minimizes the harmonic centrality of $v \in V$, i.e., $f_{(G,v)}(F) := h_{G \setminus F}(v)$.

By constructing a polynomial-time reduction from the NP-hard optimization problem called the minimum $k$-union, we can prove the following; the proof can be found in Appendix A.1.

Theorem 1. Problem 1 is NP-hard even on acyclic graphs.

We next show that the objective function $f_{(G,v)}$ of Problem 1 is submodular, which helps us design our bicriteria approximation algorithm in Section 5. Let $S$ be a finite set. A set function $f: 2^S \to \mathbb{R}$ is said to be submodular if for any $X, Y \subseteq S$ it holds that $f(X) + f(Y) \geq f(X \cup Y) + f(X \cap Y)$. We prove the following in Appendix A.2:

Theorem 2. For any $G = (V, A)$ and $v \in V$, the objective function $f_{(G,v)}$ of Problem 1 is submodular.

Finally, we prove that the most intuitive greedy algorithm does not have any non-trivial approximation ratio. Specifically, we consider the algorithm that iteratively removes an incoming edge of the target vertex $v$ that maximally decreases the harmonic centrality score of $v$, until it exhausts the budget. For reference, the pseudocode is given in Algorithm 4 in Appendix A.3. This algorithm runs in $O(b\,|\rho(v)|\,(|V| + |A|))$ time. Note that, unlike in many submodular maximization algorithms, the lazy evaluation technique [30] cannot be used to obtain a practically efficient implementation. The proof of the following is available in Appendix A.4.

Theorem 3. The greedy algorithm has no approximation ratio of $o(|V|)$ for Problem 1, while any algorithm that outputs $F \subseteq \rho(v)$ with $|F| = b$ has an approximation ratio of $O(|V|)$.
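For reference, here is a minimal sketch of this greedy baseline (our illustrative reconstruction of Algorithm 4, reusing the harmonic_centrality helper above):

```python
def greedy_remove(n, edges, v, b):
    """Greedy baseline: for b rounds, delete the incoming edge of v whose
    removal most decreases the harmonic centrality of v in the reduced
    graph (O(b |rho(v)| (|V| + |A|)) time)."""
    remaining = list(edges)
    removed = []
    for _ in range(b):
        incoming = [e for e in remaining if e[1] == v]
        if not incoming:
            break
        # evaluate f(F + {e}) for every remaining incoming edge e of v
        best = min(
            incoming,
            key=lambda e: harmonic_centrality(
                n, [x for x in remaining if x != e], v),
        )
        remaining.remove(best)
        removed.append(best)
    return removed
```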
4 SCALABLE APPROXIMATION ALGORITHM

In this section, we present a highly-scalable $\sqrt{2 h_G(v)}$-approximation algorithm for Problem 1.

4.1 Algorithm

Let $N_{\mathrm{in}}(v)$ be the set of incoming neighbors of $v$, i.e., $N_{\mathrm{in}}(v) = \{w \in V \mid (w, v) \in \rho(v)\}$. The intuition behind our algorithm is quite simple: as long as there exists a vertex $w \in N_{\mathrm{in}}(v)$ with a large harmonic centrality score, so does the target vertex $v$. This means that it is urgent to remove incoming edges of $v$ that come from vertices having large harmonic centrality scores. Note that our algorithm and analysis consider the harmonic centrality scores on $G \setminus \rho(v)$ rather than on $G$; this is essential for obtaining our approximation ratio. Specifically, our algorithm first sorts the elements of $N_{\mathrm{in}}(v)$ as $(w_1, \dots, w_{|\rho(v)|})$ so that $h_{G \setminus \rho(v)}(w_1) \geq \cdots \geq h_{G \setminus \rho(v)}(w_{|\rho(v)|})$ and simply returns $\{(w_1, v), \dots, (w_b, v)\}$. For reference, the entire procedure is described in Algorithm 1:

Algorithm 1: $\sqrt{2 h_G(v)}$-approximation algorithm for Problem 1
  Input: $G = (V, A)$, $v \in V$, and $b \in \mathbb{Z}_{>0}$
  Output: $F \subseteq \rho(v)$ with $|F| \leq b$
  1. Sort the elements of $N_{\mathrm{in}}(v)$ as $(w_1, \dots, w_{|\rho(v)|})$ so that $h_{G \setminus \rho(v)}(w_1) \geq \cdots \geq h_{G \setminus \rho(v)}(w_{|\rho(v)|})$;
  2. return $\{(w_1, v), \dots, (w_b, v)\}$.

The algorithm is highly scalable. Indeed, the time complexity of Algorithm 1 is dominated by computing the harmonic centrality scores of the vertices in $N_{\mathrm{in}}(v)$, which takes only $O(|\rho(v)|\,(|V| + |A|))$ time. Therefore, the algorithm is asymptotically $b$ times faster than the greedy algorithm (Algorithm 4).
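A minimal sketch of Algorithm 1, again reusing the harmonic_centrality helper (our illustration; the scores on $G \setminus \rho(v)$ are computed exactly as in line 1 of the pseudocode):

```python
def scalable_approx(n, edges, v, b):
    """Algorithm 1 (sketch): score each in-neighbor w of v by its harmonic
    centrality on the graph with rho(v) removed, then cut the b incoming
    edges of v whose sources score highest."""
    rho_v = [e for e in edges if e[1] == v]        # incoming edges of v
    stripped = [e for e in edges if e[1] != v]     # the graph G \ rho(v)
    score = {w: harmonic_centrality(n, stripped, w) for (w, _) in rho_v}
    rho_v.sort(key=lambda e: score[e[0]], reverse=True)
    return rho_v[:b]
```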
4.2 Analysis

From now on, we analyze the approximation ratio of Algorithm 1. The following lemma demonstrates that the optimal value can be lower bounded using the maximum harmonic centrality score over the remaining incoming neighbors of $v$ in the resulting graph:

Lemma 1. Let $F^*$ be an optimal solution to Problem 1 and $N^*$ the vertex subset corresponding to $F^*$, i.e., $N^* = \{w \in V \mid (w, v) \in F^*\}$. Then it holds that

$$f_{(G,v)}(F^*) \geq \frac{1}{2} \left( \max_{w \in N_{\mathrm{in}}(v) \setminus N^*} h_{G \setminus \rho(v)}(w) + 1 \right).$$

Proof. For any $w \in N_{\mathrm{in}}(v) \setminus N^*$, we have

$$f_{(G,v)}(F^*) = h_{G \setminus F^*}(v) = \sum_{u \in V \setminus \{v\}} \frac{1}{d_{G \setminus F^*}(u, v)} \geq \sum_{u \in V \setminus \{v\}} \frac{1}{d_{G \setminus F^*}(u, w) + 1} = 1 + \sum_{u \in V \setminus \{w, v\}} \frac{1}{d_{G \setminus F^*}(u, w) + 1}$$
$$\geq 1 + \frac{1}{2} \sum_{u \in V \setminus \{w, v\}} \frac{1}{d_{G \setminus F^*}(u, w)} \geq \frac{1}{2} + \frac{1}{2} \sum_{u \in V \setminus \{w\}} \frac{1}{d_{G \setminus F^*}(u, w)} = \frac{1}{2} \left( h_{G \setminus F^*}(w) + 1 \right) \geq \frac{1}{2} \left( h_{G \setminus \rho(v)}(w) + 1 \right),$$

where the first inequality follows from the triangle inequality of the distance $d_{G \setminus F^*}$, the third equality follows from $d_{G \setminus F^*}(w, w) = 0$, and the second inequality follows from the fact that adding 1 to the denominator makes it at most twice the original. The arbitrariness of the choice of $w \in N_{\mathrm{in}}(v) \setminus N^*$ yields the statement. □

On the other hand, the next lemma upper bounds the objective value of the output of Algorithm 1 using the harmonic centrality scores of the remaining incoming neighbors of $v$ in the resulting graph:

Lemma 2. Let $F_{\mathrm{ALG}}$ be the output of Algorithm 1 and $N_{\mathrm{ALG}}$ the vertex subset corresponding to $F_{\mathrm{ALG}}$, i.e., $N_{\mathrm{ALG}} = \{w \in V \mid (w, v) \in F_{\mathrm{ALG}}\}$. Then we have

$$f_{(G,v)}(F_{\mathrm{ALG}}) \leq (|\rho(v)| - b) + \sum_{w \in N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}}} h_{G \setminus \rho(v)}(w).$$

Proof. On the digraph $G \setminus F_{\mathrm{ALG}}$, any $u \in V \setminus \{v\}$ satisfies either (i) there exists no (shortest) path from $u$ to $v$, or (ii) there exists $w(u) \in N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}}$ that is contained in a shortest path from $u$ to $v$, i.e., $d_{G \setminus F_{\mathrm{ALG}}}(u, v) = d_{G \setminus F_{\mathrm{ALG}}}(u, w(u)) + 1$. Let $V' \subseteq V \setminus \{v\}$ be the subset of vertices satisfying condition (ii). Then we have

$$f_{(G,v)}(F_{\mathrm{ALG}}) = h_{G \setminus F_{\mathrm{ALG}}}(v) = \sum_{u \in V \setminus \{v\}} \frac{1}{d_{G \setminus F_{\mathrm{ALG}}}(u, v)} = \sum_{u \in V'} \frac{1}{d_{G \setminus F_{\mathrm{ALG}}}(u, w(u)) + 1}. \qquad (1)$$
We see that the shortest path corresponding to $d_{G \setminus F_{\mathrm{ALG}}}(u, w(u))$ does not contain $v$ (and thus does not use any edge in $\rho(v) \setminus F_{\mathrm{ALG}}$): otherwise there would exist $w'(u) \in N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}}$ satisfying $d_{G \setminus F_{\mathrm{ALG}}}(u, w'(u)) < d_{G \setminus F_{\mathrm{ALG}}}(u, w(u))$, which contradicts the fact that $w(u)$ is contained in a shortest path from $u$ to $v$ on $G \setminus F_{\mathrm{ALG}}$. Hence, we have $d_{G \setminus F_{\mathrm{ALG}}}(u, w(u)) = d_{G \setminus \rho(v)}(u, w(u))$. Combining this with equality (1), we can conclude the proof:

$$f_{(G,v)}(F_{\mathrm{ALG}}) = \sum_{u \in V'} \frac{1}{d_{G \setminus \rho(v)}(u, w(u)) + 1} = (|\rho(v)| - b) + \sum_{u \in V' \setminus (N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}})} \frac{1}{d_{G \setminus \rho(v)}(u, w(u)) + 1}$$
$$\leq (|\rho(v)| - b) + \sum_{u \in V' \setminus (N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}})} \frac{1}{d_{G \setminus \rho(v)}(u, w(u))} \leq (|\rho(v)| - b) + \sum_{w \in N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}}} h_{G \setminus \rho(v)}(w),$$

where the second equality extracts the $|\rho(v)| - b$ vertices $u \in N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}}$ themselves (for which $w(u) = u$ and the summand equals 1), and the last inequality holds because every term $\frac{1}{d_{G \setminus \rho(v)}(u, w(u))}$ in the left-hand summation appears as a term of $h_{G \setminus \rho(v)}(w)$ for the appropriate $w = w(u)$ on the right-hand side. □

We are now ready to prove our main theorem:

Theorem 4. Algorithm 1 is a $2(|\rho(v)| - b)$-approximation algorithm for Problem 1.

Proof. Here we use the notation that appeared in Lemmas 1 and 2. By the behavior of Algorithm 1 (it removes the edges coming from the $b$ highest-scoring incoming neighbors), we have

$$\max_{w \in N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}}} h_{G \setminus \rho(v)}(w) \leq \max_{w \in N_{\mathrm{in}}(v) \setminus N^*} h_{G \setminus \rho(v)}(w).$$
Using Lemmas 1 and 2 together with this inequality, we have

$$f_{(G,v)}(F_{\mathrm{ALG}}) \leq (|\rho(v)| - b) + \sum_{w \in N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}}} h_{G \setminus \rho(v)}(w) \leq (|\rho(v)| - b) \left( 1 + \max_{w \in N_{\mathrm{in}}(v) \setminus N_{\mathrm{ALG}}} h_{G \setminus \rho(v)}(w) \right)$$
$$\leq (|\rho(v)| - b) \left( 1 + \max_{w \in N_{\mathrm{in}}(v) \setminus N^*} h_{G \setminus \rho(v)}(w) \right) \leq (|\rho(v)| - b) \left( 1 + 2 f_{(G,v)}(F^*) - 1 \right) = 2 (|\rho(v)| - b)\, f_{(G,v)}(F^*),$$

which completes the proof. □

Based on the theorem, we obtain the desired approximation ratio:

Corollary 1. Algorithm 1 is a $\sqrt{2 h_G(v)}$-approximation algorithm for Problem 1.

Proof. For any instance with $b = |\rho(v)|$, Algorithm 1 outputs the trivial optimal solution (i.e., $\rho(v)$). Therefore, in what follows, we focus on instances with $b < |\rho(v)|$. Obviously, the output of any algorithm for Problem 1 has an objective value of at most $h_G(v)$. On the other hand, the optimal value is at least $|\rho(v)| - b$, because in the resulting digraph there are still $|\rho(v)| - b$ incoming neighbors of $v$, each of which contributes exactly 1 to the objective value. Therefore, any algorithm (including Algorithm 1) for Problem 1 has an approximation ratio of $\frac{h_G(v)}{|\rho(v)| - b}$. Combining this with Theorem 4, the approximation ratio of Algorithm 1 improves to $\min\left\{ 2(|\rho(v)| - b),\ \frac{h_G(v)}{|\rho(v)| - b} \right\} \leq \sqrt{2 h_G(v)}$, since the minimum of the two bounds is largest when they coincide, i.e., when $|\rho(v)| - b = \sqrt{h_G(v)/2}$. □

We remark that, as $\sqrt{2 h_G(v)} \leq \sqrt{2(|V| - 1)} = O(\sqrt{|V|}) = o(|V|)$, this approximation ratio is unachievable by the greedy algorithm. Finally, we conclude this section by showing that our analysis of the approximation ratio is tight up to a constant factor; the proof is available in Appendix B.1.

Theorem 5. Algorithm 1 has no approximation ratio of $o\left(\sqrt{h_G(v)}\right)$.

5 BICRITERIA APPROXIMATION ALGORITHM

In this section, we present a polynomial-time $(\frac{1}{\alpha}, (\frac{1}{1-\alpha}, \epsilon))$-bicriteria approximation algorithm ($\alpha \in (0, 1)$ and $\epsilon > 0$) for Problem 1. Our algorithm first solves a continuous relaxation of the problem and then applies a simple probabilistic procedure to the fractional solution to obtain the output.

5.1 Continuous relaxation

To obtain a continuous relaxation of Problem 1, we consider the well-known extension of set functions called the Lovász extension [26].
For our objective function $f_{(G,v)}$, the Lovász extension $\hat{f}_{(G,v)}: [0,1]^{\rho(v)} \to \mathbb{R}$ is defined in the following way. Let $\rho(v) = \{e_1, \dots, e_{|\rho(v)|}\}$. For $x \in [0,1]^{\rho(v)}$, we relabel the elements of $\rho(v)$ so that $x_{e_1} \geq x_{e_2} \geq \cdots \geq x_{e_{|\rho(v)|}}$, and construct a sequence of subsets $\emptyset = X_0 \subset X_1 \subset \cdots \subset X_{|\rho(v)|} = \rho(v)$, where $X_i = \{e_1, \dots, e_i\}$ for $i = 1, \dots, |\rho(v)|$. Based on these, we define the value of $\hat{f}_{(G,v)}(x)$ as follows:

$$\hat{f}_{(G,v)}(x) = (1 - x_{e_1})\, f_{(G,v)}(\emptyset) + \sum_{i=1}^{|\rho(v)|-1} (x_{e_i} - x_{e_{i+1}})\, f_{(G,v)}(X_i) + x_{e_{|\rho(v)|}}\, f_{(G,v)}(\rho(v)).$$

Observe that for any $F \subseteq \rho(v)$ it holds that $\hat{f}_{(G,v)}(\mathbf{1}_F) = f_{(G,v)}(F)$, where $\mathbf{1}_F$ is the indicator vector of $F$, taking the value 1 on $e \in F$ and 0 otherwise. Therefore, $\hat{f}_{(G,v)}$ is indeed an extension of $f_{(G,v)}$. The Lovász extension can be defined for any (not necessarily submodular) set function; it is always continuous but not necessarily differentiable. An important fact is that the Lovász extension is convex if and only if the original set function is submodular [26]. Therefore, by Theorem 2, the Lovász extension $\hat{f}_{(G,v)}$ of $f_{(G,v)}$ is convex. Using $\hat{f}_{(G,v)}$, we introduce our continuous relaxation as follows:

Relaxation: minimize $\hat{f}_{(G,v)}(x)$ subject to $\|x\|_1 \leq b$ and $x \in [0,1]^{\rho(v)}$.

For convenience, we denote by $C$ the feasible region of the problem, i.e., $C := \{x \in \mathbb{R}^{\rho(v)} : \|x\|_1 \leq b \text{ and } x \in [0,1]^{\rho(v)}\}$. From the above, we see that Relaxation is a non-smooth convex programming problem. We will present an algorithm for Relaxation and its convergence result in Section 6. In the remainder of this section, we assume that for $\epsilon' = (1-\alpha)\epsilon$ we can compute, in polynomial time, an $\epsilon'$-additive approximate solution for Relaxation, i.e., a feasible solution whose objective value is at most the optimal value plus $\epsilon'$.
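The definition translates directly into code. The following minimal sketch (ours) evaluates the Lovász extension of an arbitrary set function at a point of the unit cube; the sanity check uses a modular set function, for which the extension is linear:

```python
def lovasz_extension(f, ground, x):
    """Evaluate the Lovász extension of f: 2^ground -> R at x in
    [0,1]^ground, following the sorted-levels formula above."""
    elems = sorted(ground, key=lambda e: x[e], reverse=True)
    value = (1.0 - x[elems[0]]) * f(frozenset())
    prefix = set()
    for i, e in enumerate(elems):
        prefix.add(e)                          # prefix is now X_{i+1}
        nxt = x[elems[i + 1]] if i + 1 < len(elems) else 0.0
        value += (x[e] - nxt) * f(frozenset(prefix))
    return value

# sanity check: for the modular f(S) = |S|, the extension is the sum of x
x = {'a': 0.5, 'b': 0.2, 'c': 0.9}
print(lovasz_extension(len, ['a', 'b', 'c'], x))   # 1.6 = 0.5 + 0.2 + 0.9
```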
5.2 Algorithm

Let $x^* \in [0,1]^{\rho(v)}$ be an $\epsilon'$-additive approximate solution for Relaxation, where $\epsilon' = (1-\alpha)\epsilon$. Then our algorithm picks $p \in [\alpha, 1]$ uniformly at random and simply returns $\{e \in \rho(v) \mid x^*_e \geq p\}$. For reference, the entire procedure is summarized in Algorithm 2:

Algorithm 2: $(\frac{1}{\alpha}, (\frac{1}{1-\alpha}, \epsilon))$-bicriteria approximation algorithm for Problem 1
  Input: $G = (V, A)$, $v \in V$, and $b \in \mathbb{Z}_{>0}$
  Output: $F \subseteq \rho(v)$
  1. $\epsilon' \leftarrow (1-\alpha)\epsilon$;
  2. Solve Relaxation (using Algorithm 3 in Section 6) and obtain an $\epsilon'$-additive approximate solution $x^* \in [0,1]^{\rho(v)}$;
  3. Pick $p \in [\alpha, 1]$ uniformly at random;
  4. return $\{e \in \rho(v) \mid x^*_e \geq p\}$.
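The rounding step is a one-liner in practice. A minimal sketch (ours), assuming x_star maps each incoming edge to its fractional value:

```python
import random

def threshold_round(x_star, alpha):
    """Steps 3-4 of Algorithm 2: draw the threshold p uniformly from
    [alpha, 1] and keep every edge whose fractional value reaches p."""
    p = random.uniform(alpha, 1.0)
    return {e for e, value in x_star.items() if value >= p}
```

The expectations in Theorem 6 below are taken over this random choice of $p$.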
5.3 Analysis

The following theorem gives the bicriteria approximation ratio of Algorithm 2:

Theorem 6. For any $\alpha \in (0,1)$ and $\epsilon > 0$, Algorithm 2 is a polynomial-time randomized $(\frac{1}{\alpha}, (\frac{1}{1-\alpha}, \epsilon))$-bicriteria approximation algorithm, in expectation, for Problem 1.

Proof. Let $F \subseteq \rho(v)$ be the output of Algorithm 2. The approximation ratio with respect to the size of $F$ can be evaluated as follows:

$$\mathbb{E}[|F|] = \frac{1}{\alpha} \cdot \mathbb{E}\left[\sum_{e \in F} \alpha\right] \leq \frac{1}{\alpha} \cdot \mathbb{E}\left[\sum_{e \in F} x^*_e\right] \leq \frac{1}{\alpha} \sum_{e \in \rho(v)} x^*_e \leq \frac{b}{\alpha},$$

where the first inequality follows from $\alpha \leq p \leq x^*_e$ for any $e \in F$, the second inequality follows from the nonnegativity of $x^*$, and the third inequality follows from the first constraint in Relaxation. Next we analyze the approximation ratio with respect to the quality of $F$. Let $F^*$ be an optimal solution to Problem 1. As Relaxation is indeed a relaxation of Problem 1 and $x^*$ is its $\epsilon'$-additive approximate solution, we have $\hat{f}_{(G,v)}(x^*) \leq f_{(G,v)}(F^*) + \epsilon'$. For convenience, we define $x^*_{e_0} = 1$ for an imaginary element $e_0$. Let $\ell$ be the maximum number that satisfies $x^*_{e_\ell} \geq \alpha$. Then we have

$$\mathbb{E}[f_{(G,v)}(F)] = \frac{\sum_{i=0}^{\ell-1} (x^*_{e_i} - x^*_{e_{i+1}})\, f_{(G,v)}(X_i) + (x^*_{e_\ell} - \alpha)\, f_{(G,v)}(X_\ell)}{1 - \alpha} \leq \frac{\sum_{i=0}^{|\rho(v)|-1} (x^*_{e_i} - x^*_{e_{i+1}})\, f_{(G,v)}(X_i) + x^*_{e_{|\rho(v)|}}\, f_{(G,v)}(\rho(v))}{1 - \alpha} = \frac{\hat{f}_{(G,v)}(x^*)}{1 - \alpha} \leq \frac{f_{(G,v)}(F^*) + \epsilon'}{1 - \alpha} = \frac{f_{(G,v)}(F^*)}{1 - \alpha} + \epsilon,$$

where the first equality follows from the random choice of $p$ and the first inequality follows from the monotonicity of the entries of $x^*$ and the nonnegativity of $f_{(G,v)}$. Therefore, we have the theorem. □

6 SOLVING RELAXATION

In this section, we present our algorithm for solving Relaxation.

6.1 Algorithm

Specifically, we design a projected subgradient method for Relaxation. The algorithm is an iterative method, where each iteration consists of two parts: the subgradient computation and the projection computation. The pseudocode is given in Algorithm 3; all the details are given below.

Algorithm 3: Projected subgradient method for Relaxation
  Input: $x_0 \in C$ and some stopping condition
  Output: $x \in C$
  1. $t \leftarrow 0$;
  2. while the stopping condition is not satisfied do
  3.   Pick a stepsize $\eta_t > 0$ and a subgradient $\hat{f}'_{(G,v)}(x_t)$ of $\hat{f}_{(G,v)}$ at $x_t$;
  4.   $x_{t+1} \leftarrow \mathrm{proj}_C\left(x_t - \eta_t \cdot \hat{f}'_{(G,v)}(x_t)\right)$ and $t \leftarrow t + 1$;
  5. return $x_t$;

The sequence generated by the algorithm is $\{x_t\}_{t \geq 0}$, with function values $\{\hat{f}_{(G,v)}(x_t)\}_{t \geq 0}$. As the sequence of function values is not necessarily monotone, we are also interested in the sequence of best-achieved function values at or before the $\ell$-th iteration, defined as $\hat{f}^{(\ell)}_{\mathrm{best}} = \min_{t=0,1,\dots,\ell} \hat{f}_{(G,v)}(x_t)$.

Subgradient computation.
From the definition of $\hat{f}_{(G,v)}$, a subgradient $\hat{f}'_{(G,v)}$ at $x_t \in C$ is given by

$$\hat{f}'_{(G,v)}(x_t) = \sum_{i=1}^{|\rho(v)|} \left( f_{(G,v)}(X_i) - f_{(G,v)}(X_{i-1}) \right) u_{e_i}, \qquad (2)$$

where $u_{e_i}$ is the $|\rho(v)|$-dimensional vector that takes 1 in the element corresponding to $e_i$ and 0 elsewhere. To compute the subgradient $\hat{f}'_{(G,v)}(x_t)$, we need to sort the entries of $x_t$ and compute $f_{(G,v)}(X_i)$ for all $i = 0, 1, \dots, |\rho(v)|$, which takes $O(|\rho(v)|\,(|V| + |A|))$ time.

Projection computation. For a given $x \in \mathbb{R}^{\rho(v)}$, it is not trivial to compute the projection of $x$ onto $C$, because $C$ is the intersection of the two sets $\{x \in \mathbb{R}^{\rho(v)} \mid \|x\|_1 \leq b\}$ and $\mathrm{Box}[0,1] := \{x \in \mathbb{R}^{\rho(v)} \mid x \in [0,1]^{\rho(v)}\}$. Let $\mathrm{proj}_{\mathrm{Box}[0,1]}(x)$ be the projection of $x$ onto $\mathrm{Box}[0,1]$. By Lemma 6.26 in Beck [3], we obtain $\mathrm{proj}_{\mathrm{Box}[0,1]}(x) = (\min\{\max\{x_e, 0\}, 1\})_{e \in \rho(v)}$. Using this projection, we can give the projection of $x$ onto $C$ as follows:

Fact 1 (a special case of Example 6.32 in Beck [3]). Let $\mathrm{proj}_C(x)$ be the projection of $x \in \mathbb{R}^{\rho(v)}$ onto $C$. Then we have $\mathrm{proj}_C(x) = \mathrm{proj}_{\mathrm{Box}[0,1]}(x)$ if $\|\mathrm{proj}_{\mathrm{Box}[0,1]}(x)\|_1 \leq b$, and $\mathrm{proj}_C(x) = \mathrm{proj}_{\mathrm{Box}[0,1]}(x - \lambda^* \mathbf{1})$ otherwise, where $\lambda^*$ is any positive root of the nonincreasing function $\varphi(\lambda) = \|\mathrm{proj}_{\mathrm{Box}[0,1]}(x - \lambda \mathbf{1})\|_1 - b$.

In practice, we can compute the value of $\lambda^*$ for $x := x_t - \eta_t \cdot \hat{f}'_{(G,v)}(x_t)$ using binary search. Assume that the stepsize $\eta_t$ is no greater than 1 for every iteration $t = 0, 1, \dots$, which is indeed the case for our choice (specified later). As initial lower and upper bounds on $\lambda^*$, we can use 0 and $\max_{e \in \rho(v)} x_e$, respectively. From the fact that $x_t$ is always contained in $C$ and the definition of the subgradient (2), we see that $\max_{e \in \rho(v)} x_e \leq 1 + f_{(G,v)}(\emptyset) \leq |V|$. Therefore, the binary search finds $\lambda^*$ in $O(|\rho(v)| \log(|V|/\delta))$ time with an additive error of $\delta > 0$. Note that no polynomial-time algorithm can recognize an additive error of $o(2^{-|V|^c})$ for a constant $c$, due to its bit complexity. Hence, if we set $\delta = O(2^{-|V|^c})$, we can assume that the projection is exact while the time complexity remains polynomial.
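Fact 1, combined with the binary search just described, gives a compact projection routine. A minimal sketch (ours), with a plain floating-point tolerance standing in for the bit-complexity argument above:

```python
def project_box(x):
    """Projection onto Box[0,1] (Lemma 6.26 in Beck [3]): clip entrywise."""
    return {e: min(max(value, 0.0), 1.0) for e, value in x.items()}

def project_C(x, b, tol=1e-9):
    """Projection onto C = {x in [0,1]^E : ||x||_1 <= b} via Fact 1.

    If the box projection already fits the budget, it is the answer;
    otherwise binary-search a root lambda* of the nonincreasing
    phi(lambda) = ||proj_box(x - lambda * 1)||_1 - b.
    """
    y = project_box(x)
    if sum(y.values()) <= b:
        return y
    lo, hi = 0.0, max(x.values())      # initial bounds on lambda*
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        shifted = project_box({e: value - lam for e, value in x.items()})
        if sum(shifted.values()) > b:  # phi(lam) > 0: shift further
            lo = lam
        else:
            hi = lam
    return project_box({e: value - hi for e, value in x.items()})
```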
6.2 Convergence result

Let $L_{\hat{f}_{(G,v)}} = \hat{f}_{(G,v)}(\mathbf{0})\ (= f_{(G,v)}(\emptyset))$. Based on the convergence result of the projected subgradient method in Beck [3], reviewed in Appendix C.1, we present the convergence result of Algorithm 3:

Theorem 7. Let $\Theta$ be an upper bound on the half-squared diameter of $C$, i.e., $\Theta \geq \max_{x,y \in C} \frac{1}{2}\|x - y\|^2$. Determine the stepsize $\eta_t$ ($t = 0, 1, \dots$) as $\eta_t = \frac{\sqrt{2\Theta}}{L_{\hat{f}_{(G,v)}} \sqrt{t+1}}$. Let $\hat{f}^*$ be the optimal value of Relaxation. Then for all $t \geq 2$, it holds that

$$\hat{f}^{(t)}_{\mathrm{best}} - \hat{f}^* \leq \frac{2(1 + \log 3)\, L_{\hat{f}_{(G,v)}} \sqrt{2\Theta}}{\sqrt{t + 2}}.$$

The proof is in Appendix C.2. By this theorem and the above discussion of the time complexity, the following is straightforward:

Corollary 2. Let $\epsilon' > 0$. Set the stopping condition of Algorithm 3 as $t \geq \left( \frac{2(1 + \log 3)\, L_{\hat{f}_{(G,v)}} \sqrt{2\Theta}}{\epsilon'} \right)^2 - 2$. Then Algorithm 3 outputs, in polynomial time, an $\epsilon'$-additive approximate solution for Relaxation.
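Putting the pieces together, the following is a minimal sketch (ours) of Algorithm 3 with the Theorem-7 stepsize, reusing lovasz_extension and project_C from the sketches above; the bounds $\Theta = |\rho(v)|/2$ and $L_{\hat{f}} = f(\emptyset)$ follow the discussion in this section, and a fixed iteration count stands in for the stopping condition of Corollary 2.

```python
import math

def lovasz_subgradient(f, ground, x):
    """Subgradient (2): marginal gains of f along the chain obtained by
    sorting the entries of x in decreasing order."""
    elems = sorted(ground, key=lambda e: x[e], reverse=True)
    g, prefix, prev = {}, set(), f(frozenset())
    for e in elems:
        prefix.add(e)
        cur = f(frozenset(prefix))
        g[e] = cur - prev              # f(X_i) - f(X_{i-1})
        prev = cur
    return g

def projected_subgradient(f, ground, b, iters=1000):
    """Algorithm 3 (sketch): x_{t+1} = proj_C(x_t - eta_t * subgradient),
    with eta_t = sqrt(2 * Theta) / (L * sqrt(t + 1)); returns the best
    iterate, matching the f_best sequence tracked in the analysis."""
    theta = len(ground) / 2.0          # half-squared diameter of the box
    L = max(f(frozenset()), 1e-12)     # L = f(empty set)
    x = {e: 0.0 for e in ground}       # x_0 = 0 lies in C
    best_x, best_val = dict(x), lovasz_extension(f, ground, x)
    for t in range(iters):
        g = lovasz_subgradient(f, ground, x)
        eta = math.sqrt(2.0 * theta) / (L * math.sqrt(t + 1))
        x = project_C({e: x[e] - eta * g[e] for e in ground}, b)
        val = lovasz_extension(f, ground, x)
        if val < best_val:
            best_x, best_val = dict(x), val
    return best_x, best_val
```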
7 EXPERIMENTAL EVALUATION

In this section, we evaluate the performance of our proposed algorithms (i.e., Algorithms 1 and 2) using a variety of real-world networks.

7.1 Setup

Instances. Table 1 lists the real-world digraphs on which our experiments were conducted. All graphs were collected from the webpage of The KONECT Project (http://konect.cc/). Note that self-loops and multiple edges were removed so that the graphs are simple. For each graph, we randomly chose 20 vertices as target vertices among those with in-degree at least 100. The last column of Table 1 gives the statistics of the in-degrees of the target vertices, i.e., the maximum, average, and minimum in-degrees.

Table 1: Real-world graphs used in our experiments (last column: maximum, average, and minimum in-degree of targets).
  moreno-blogs: |V| = 1,224, |A| = 19,022, (337, 158.80, 101)
  dimacs10-polblogs: |V| = 1,224, |A| = 33,430, (274, 143.45, 104)
  librec-ciaodvd-trust: |V| = 4,658, |A| = 40,133, (361, 152.50, 100)
  munmun-twitter-social: |V| = 465,017, |A| = 834,797, (174, 123.65, 100)
  citeseer: |V| = 384,054, |A| = 1,744,590, (495, 190.70, 106)
  youtube-links: |V| = 1,138,494, |A| = 4,942,297, (1,311, 273.40, 100)
  higgs-twitter-social: |V| = 456,626, |A| = 14,855,819, (1,049, 280.20, 111)
  soc-pokec-relationships: |V| = 1,632,803, |A| = 30,622,564, (316, 157.35, 101)

For each graph and each target vertex $v$, we vary the budget $b$ in $\{\lfloor \frac{1}{4}|\rho(v)| \rfloor, \lfloor \frac{1}{2}|\rho(v)| \rfloor, \lfloor \frac{3}{4}|\rho(v)| \rfloor\}$.

Baselines. We employ the following baseline methods:

• Empty: just outputs the empty set, thus giving an upper bound on the objective value (i.e., $h_G(v)$) of any feasible solution.
• Random: randomly chooses $b$ edges from $\rho(v)$. For each instance, this algorithm is run 100 times and the average objective value is reported.
• Degree: sorts the elements of $N_{\mathrm{in}}(v)$ as $(w_1, \dots, w_{|\rho(v)|})$ so that $|\rho(w_1)| \geq \cdots \geq |\rho(w_{|\rho(v)|})|$ and returns $\{(w_1, v), \dots, (w_b, v)\}$.
• Greedy: executes the greedy algorithm (Algorithm 4).

Machine specs and code. All experiments were conducted on a Mac mini with an Apple M1 chip and 16 GB RAM. All code was written in Python 3.9 and is available online at https://github.com/atsushi-miyauchi/Local-Centrality-Minimization.

7.2 Performance of algorithms

Here we evaluate the performance of our algorithms. To this end, we run them together with the baselines for all graphs, target vertices, and budgets. For each graph and each budget, if the algorithm tested does not terminate within one hour for the target vertex with the largest in-degree, we do not run it for the other target vertices. Algorithm 3 (inside Algorithm 2) is run with the stopping condition $t \geq 1000$ for scalability and the initial solution $x_0 = \mathbf{0}$. The quality of solutions of the algorithms except for Algorithm 2 is illustrated in Figure 1. Due to space limitations, only the results for the budget $b = \lfloor \frac{1}{2}|\rho(v)| \rfloor$ are presented here; although the trend is similar, the results for the other budget settings $b = \lfloor \frac{1}{4}|\rho(v)| \rfloor, \lfloor \frac{3}{4}|\rho(v)| \rfloor$ are given in Appendix D.1. As the solutions of Algorithm 2 may violate the budget constraint, it would be unfair to compare them with the others in the same plots; they are evaluated later. In the plots of Figure 1, once we fix a value on the vertical axis, we can read off the cumulative number of solutions (i.e., targets) that attain a harmonic centrality score of at most that value; hence, an algorithm drawing a lower line performs better. As can be seen, Algorithm 1 outperforms the baselines. Indeed, Algorithm 1 is applicable to all graphs tested, thanks to its high scalability, and the quality of its solutions is much better than that of the scalable baselines, i.e., Random and Degree. Most interestingly, for the graph citeseer, Algorithm 1 succeeds in reducing the harmonic centrality scores of all target vertices to values relatively close to 0, whereas Degree and Greedy fail to reach centrality scores below 9,000 for some target vertices. Note that this result does not contradict the fact that even an optimal solution has a harmonic centrality score of no less than $|\rho(v)| - b$: the minimum objective value attained by Algorithm 1 is 54, rather than 0.
For small graphs, Greedy performs slightly better than Algorithm 1; however, as Greedy is quite time-consuming, it is not applicable to youtube-links and the larger graphs. A detailed report of the computation time is given in Table 2, where the average computation time over all target vertices is reported. Note that the results for Random and Degree are omitted because Random is obviously quite fast and Degree records 0.00 seconds for all graphs and budgets. We remark that the computation time of Greedy grows roughly proportionally to the budget $b$, whereas that of Algorithm 1 remains almost the same for all settings of $b$.

[Figure 1: Quality of solutions of the algorithms (except for Algorithm 2) with $b = \lfloor |\rho(v)|/2 \rfloor$. Eight panels (moreno-blogs, dimacs10-polblogs, librec-ciaodvd-trust, munmun-twitter-social, citeseer, youtube-links, higgs-twitter-social, soc-pokec-relationships) plot the objective $f(G,v)(F)$ against the number of observations for Empty, Random, Degree, Greedy, and Algorithm 1 (Ours); Greedy is absent for the four largest graphs.]
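Algorithm 4 is not reproduced in this excerpt, so the following is only a plausible sketch of the Greedy baseline under its natural reading: in each of $b$ rounds, delete the remaining in-edge of $v$ whose removal decreases the objective most. Each round re-evaluates the objective once per candidate edge, which is consistent with the observation in Table 2 that Greedy's running time grows roughly linearly in $b$.

```python
def greedy_baseline(in_nbrs, v, b, objective):
    """Hypothetical sketch of the Greedy baseline (Algorithm 4). The callable
    objective(removed) returns f(G, v)(removed), e.g. the harmonic-centrality
    sketch above evaluated with the edge set `removed` deleted."""
    removed = set()
    candidates = {(w, v) for w in in_nbrs.get(v, ())}
    for _ in range(min(b, len(candidates))):
        # Pick the edge whose removal yields the smallest objective value.
        edge = min(candidates - removed,
                   key=lambda e: objective(removed | {e}))
        removed.add(edge)
    return removed
```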
Table 2: Computation time (seconds) of the algorithms tested, averaged over all target vertices; "—" indicates that the algorithm was not run on that instance (see the one-hour rule above).

| Name | $b$ | Greedy | Algorithm 1 | Algorithm 2 |
| --- | --- | --- | --- | --- |
| moreno-blogs | $\lfloor \lvert\rho(v)\rvert/4 \rfloor$ | 4.53 | 0.10 | 326.55 |
| moreno-blogs | $\lfloor \lvert\rho(v)\rvert/2 \rfloor$ | 7.79 | 0.10 | 333.48 |
| moreno-blogs | $\lfloor 3\lvert\rho(v)\rvert/4 \rfloor$ | 9.75 | 0.10 | 332.48 |
| dimacs10-polblogs | $\lfloor \lvert\rho(v)\rvert/4 \rfloor$ | 4.40 | 0.12 | 393.72 |
| dimacs10-polblogs | $\lfloor \lvert\rho(v)\rvert/2 \rfloor$ | 7.59 | 0.13 | 416.21 |
| dimacs10-polblogs | $\lfloor 3\lvert\rho(v)\rvert/4 \rfloor$ | 9.55 | 0.13 | 404.52 |
| librec-ciaodvd-trust | $\lfloor \lvert\rho(v)\rvert/4 \rfloor$ | 6.17 | 0.15 | 482.83 |
| librec-ciaodvd-trust | $\lfloor \lvert\rho(v)\rvert/2 \rfloor$ | 10.59 | 0.15 | 502.63 |
| librec-ciaodvd-trust | $\lfloor 3\lvert\rho(v)\rvert/4 \rfloor$ | 13.33 | 0.15 | 489.92 |
| munmun-twitter-social | $\lfloor \lvert\rho(v)\rvert/4 \rfloor$ | 4.03 | 0.14 | 440.70 |
| munmun-twitter-social | $\lfloor \lvert\rho(v)\rvert/2 \rfloor$ | 6.90 | 0.14 | 444.37 |
| munmun-twitter-social | $\lfloor 3\lvert\rho(v)\rvert/4 \rfloor$ | 8.68 | 0.14 | 446.68 |
| citeseer | $\lfloor \lvert\rho(v)\rvert/4 \rfloor$ | 1,405.08 | 4.59 | — |
| citeseer | $\lfloor \lvert\rho(v)\rvert/2 \rfloor$ | 1,671.77 | 4.55 | — |
| citeseer | $\lfloor 3\lvert\rho(v)\rvert/4 \rfloor$ | 1,662.66 | 4.55 | — |
| youtube-links | $\lfloor \lvert\rho(v)\rvert/4 \rfloor$ | — | 130.56 | — |
| youtube-links | $\lfloor \lvert\rho(v)\rvert/2 \rfloor$ | — | 130.54 | — |
| youtube-links | $\lfloor 3\lvert\rho(v)\rvert/4 \rfloor$ | — | 132.19 | — |
| higgs-twitter-social | $\lfloor \lvert\rho(v)\rvert/4 \rfloor$ | — | 188.84 | — |
| higgs-twitter-social | $\lfloor \lvert\rho(v)\rvert/2 \rfloor$ | — | 187.92 | — |
| higgs-twitter-social | $\lfloor 3\lvert\rho(v)\rvert/4 \rfloor$ | — | 187.68 | — |
| soc-pokec-relationships | $\lfloor \lvert\rho(v)\rvert/4 \rfloor$ | — | 402.24 | — |
| soc-pokec-relationships | $\lfloor \lvert\rho(v)\rvert/2 \rfloor$ | — | 392.63 | — |
| soc-pokec-relationships | $\lfloor 3\lvert\rho(v)\rvert/4 \rfloor$ | — | 512.05 | — |

Finally, the quality of the solutions of Algorithm 2 (with α = 1/3, 1/2) for $b = \lfloor |\rho(v)|/4 \rfloor$, averaged over the target vertices, is reported in Table 3, where both the objective value and the size of each solution are relative to those of the corresponding solution of Algorithm 1. As the rounding procedure of the algorithm is randomized, we run it 100 times and report the average value. As can be seen, Algorithm 2 returns very small solutions that do not even exhaust the budget. This result does not contradict the guarantee given in Theorem 6: the objective value is not much worse than that of Algorithm 1, and Theorem 6 only upper-bounds the (expected) size of the solutions.

Table 3: Quality of solutions of Algorithm 2 with α = 1/3, 1/2, relative to those of Algorithm 1, for $b = \lfloor |\rho(v)|/4 \rfloor$.

| Name | Obj. val. (α = 1/3) | Size (α = 1/3) | Obj. val. (α = 1/2) | Size (α = 1/2) |
| --- | --- | --- | --- | --- |
| moreno-blogs | 1.12 | 0.14 | 1.13 | 0.12 |
| dimacs10-polblogs | 1.09 | 0.10 | 1.09 | 0.07 |
| librec-ciaodvd-trust | 1.09 | 0.06 | 1.09 | 0.03 |
| munmun-twitter-social | 1.15 | 0.08 | 1.15 | 0.08 |

To further examine the performance of Algorithm 2, we run it on the instance that appeared in the proof of Theorem 5, where we set $k = 50$ (and $b = 50$). Note that it is theoretically guaranteed that Algorithm 1 outputs a poor solution for this instance. In contrast, Algorithm 2 with α = 3/4 obtains the optimal solution in all 100 rounding trials. Hence, the algorithm is rather robust against adversarial instances, corroborating the theoretical approximation guarantee of Algorithm 2.

8 CONCLUSION

In this study, we have introduced a novel optimization model for local centrality minimization and designed two effective approximation algorithms. To our knowledge, ours are the first polynomial-time algorithms with provable approximation guarantees for centrality minimization.
Experiments using a variety of real-world networks demonstrate the effectiveness of our proposed algorithms. Our work opens up several interesting problems. Can we design a polynomial-time algorithm with an approximation ratio better than that of Algorithm 1, or a bicriteria approximation ratio better than that of Algorithm 2? Another interesting direction is to study Problem 1 in a more general setting. For example, it would be valuable to consider a target vertex subset (rather than a single target vertex) and aim to minimize its group harmonic centrality score, as in the literature on global centrality minimization [38].

ACKNOWLEDGMENTS

Lorenzo Severini was supported for this research by Project ECS 0000024 Rome Technopole CUP B83C22002820006, “PNRR Missione 4 Componente 2 Investimento 1.5”, funded by European Commission NextGenerationEU." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2401.10632v2", |
| "title": "Interventional Fairness on Partially Known Causal Graphs: A Constrained Optimization Approach", |
| "abstract": "Fair machine learning aims to prevent discrimination against individuals or\nsub-populations based on sensitive attributes such as gender and race. In\nrecent years, causal inference methods have been increasingly used in fair\nmachine learning to measure unfairness by causal effects. However, current\nmethods assume that the true causal graph is given, which is often not true in\nreal-world applications. To address this limitation, this paper proposes a\nframework for achieving causal fairness based on the notion of interventions\nwhen the true causal graph is partially known. The proposed approach involves\nmodeling fair prediction using a Partially Directed Acyclic Graph (PDAG),\nspecifically, a class of causal DAGs that can be learned from observational\ndata combined with domain knowledge. The PDAG is used to measure causal\nfairness, and a constrained optimization problem is formulated to balance\nbetween fairness and accuracy. Results on both simulated and real-world\ndatasets demonstrate the effectiveness of this method.", |
| "authors": "Aoqi Zuo, Yiqing Li, Susan Wei, Mingming Gong", |
| "published": "2024-01-19", |
| "updated": "2024-03-08", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Interventional Fairness on Partially Known Causal Graphs: A Constrained Optimization Approach", |
| "main_content": "INTRODUCTION Machine learning algorithms have demonstrated remarkable success in automating decision-making processes across a wide range of domains (e.g., hiring decisions (Hoffman et al., 2018), recidivism predictions (Dieterich et al., 2016; Brennan et al., 2009), and finance (Sweeney, 2013; Khandani et al., 2010)), providing valuable insights and predictions. However, it has become increasingly evident that these algorithms are not immune to biases in training data, potentially perpetuating discrimination against individual or sub-population group with respect to sensitive attributes, e.g., gender and race. For example, bias against female was found in a recruiting tool built by one of Amazon\u2019s AI team to review job applicants\u2019 resume in a period of time (Kodiyan, 2019). To achieve fair machine learning, various methods have been proposed with respect to different fairness measures. These methods can be broadly classified into two groups. The first group focuses on developing statistical fairness measures, which typically indicate the statistical discrepancy between individuals or sub-populations, e.g., statistical parity (Dwork et al., 2012), equalized odds (Hardt et al., 2016), and predictive parity (Chouldechova, 2017). The second group is grounded in the causal inference framework (Pearl et al., 2000), which emphasizes understanding the causal relationships between the sensitive attribute and decision outcomes and treats the presence of causal effect of the sensitive attribute on the decision as discrimination (Zhang et al., 2017; Kilbertus et al., 2017; Kusner et al., 2017; Zhang & Bareinboim, 2018b;a; Nabi & Shpitser, 2018; Wu et al., 2019b; Khademi et al., 2019; Chiappa, 2019; Russell et al., 2017; Zhang et al., 2018; Zhang & Wu, 2017; Kusner et al., 2019; Salimi et al., 2019; Wu et al., 2018; Galhotra et al., 2022; Zuo et al., 2022). Causality-based fairness notions are defined within the framework of Pearl\u2019s ladder of causation, which encompasses interventions and counterfactuals. Build on the highest rung and also the most fine-grained type of inference in the ladder, counterfactual fairness (Kusner et al., 2017; Chiappa, 1 arXiv:2401.10632v2 [cs.LG] 8 Mar 2024 \fPublished as a conference paper at ICLR 2024 2019; Russell et al., 2017; Wu et al., 2019a) requires the full knowledge of structural causal model or the computation of counterfactuals in the sense of Pearl et al. (2000), thus posing extra challenges compared to the one based on interventions. As the most basic and general notion of causal fairness that is testable with observational data, the interventional fairness assesses the unfairness as the causal effect of the sensitive attribute on the outcome on paths defined by specific attributes. In this paper, we aim to achieve interventional fairness. The majority of existing methods for ensuring causal fairness assume the presence of a causal directed acyclic graph (DAG) (Pearl, 2009), which encodes causal relationships between variables. However, in many real-world scenarios, it is very difficult to fully specify the causal DAG due to a limited understanding of the system under study. One potential approach is to use causal discovery methods to deduce the causal DAG from observational data (Spirtes & Glymour, 1991; Colombo et al., 2014; Spirtes et al., 2000a; Chickering, 2002b; Shimizu et al., 2006; Hoyer et al., 2008; Zhang & Hyv\u00a8 arinen, 2009; Peters et al., 2014; Peters & B\u00a8 uhlmann, 2014). 
However, determining the true underlying causal graph solely from observational data is challenging without strong assumptions about the data-generating process, such as linearity (Shimizu et al., 2006) or additive noise (Hoyer et al., 2008; Peters et al., 2014). In general, causal discovery methods may produce a Markov equivalence class of DAGs that capture the same conditional independencies in the data, represented by a completely partially directed acyclic graph (CPDAG) (Spirtes et al., 2000b; Chickering, 2002b; Meek, 1995; Andersson et al., 1997; Chickering, 2002a). Although incorporating additional background knowledge allows more causal directions to be discerned, resulting in a maximally partially directed acyclic graph (MPDAG) (Meek, 1995), it is still unlikely that a unique causal DAG can be obtained. Following this line of inquiry, a natural question arises: Can we learn interventional fairness when we have only partial knowledge of the causal graph, represented by an MPDAG? (As a CPDAG is a special case of an MPDAG without background knowledge, we deal with MPDAGs in general.)

Inspired by Zuo et al. (2022), one straightforward way to achieve fair predictions under interventional fairness is to identify the definite non-descendants of the sensitive attribute in an MPDAG (see Section 3.1). While this approach guarantees interventional fairness, disregarding the descendants results in a notable decline in performance. Thus, we propose a constrained optimisation problem that balances the inherent competition between accuracy and fairness (see Section 3.2). In our approach, to measure interventional fairness in MPDAGs, we model the prediction as the effect of all the observational variables. Interestingly, this modeling technique gives rise to another causal MPDAG, which allows us to discuss the identification of the interventional fairness criterion formally. In this paper, we assume the absence of selection bias and latent confounders, since causal discovery algorithms themselves face difficulties in such challenging scenarios. These assumptions align with much recent related work (Zhang et al., 2017; Chiappa, 2019; Chikahara et al., 2021; Wu et al., 2019a). However, we relax the assumption of a fully directed causal DAG. Based on these considerations, our main contributions toward achieving interventional fairness on MPDAGs are as follows:
• We propose a modeling technique for the predictor, which gives rise to a causal MPDAG on which causal inference can be performed formally;
• We analyze the identification condition for the interventional fairness measure on the MPDAG;
• We develop a framework for achieving interventional fairness on partially known causal graphs, specifically MPDAGs, as a constrained optimization problem.

2 BACKGROUND

2.1 STRUCTURAL CAUSAL MODEL AND CAUSAL GRAPH

The structural causal model (SCM) (Pearl et al., 2000) provides a framework for representing causal relations among variables. It comprises a triple (U, V, F), wherein V represents observable endogenous variables and U represents unobserved exogenous variables that cannot be caused by any variable in V. The set F consists of functions $f_1, \dots, f_n$, each associated with a variable $V_i \in V$ and describing how $V_i$ depends on its direct causes: $V_i = f_i(pa_i, U_i)$. Here, $pa_i$ denotes the observed direct causes of $V_i$ and $U_i$ is the set of unobserved direct causes of $V_i$. The exogenous $U_i$'s are required to be mutually independent.
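As an illustration of the triple (U, V, F), here is a minimal sketch of a three-variable SCM. The graph A → X1 → X2, the coefficients, and the variable names are our own illustrative assumptions; the `do` argument anticipates the interventions reviewed in Section 2.2 by replacing a structural equation with a constant.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n, do=None):
    """Sample n rows from a toy linear SCM with mutually independent
    exogenous noise, illustrating V_i = f_i(pa_i, U_i). `do` maps a variable
    name to a fixed value, implementing do(V = v)."""
    out = {}
    def gen(name, f):
        out[name] = (np.full(n, do[name], dtype=float)
                     if do and name in do else f())
    gen("A",  lambda: rng.binomial(1, 0.5, n).astype(float))
    gen("X1", lambda: 0.8 * out["A"] + rng.normal(0.0, 1.0, n))
    gen("X2", lambda: 1.2 * out["X1"] + rng.normal(0.0, 1.0, n))
    return out

obs = sample_scm(10_000)                  # observational data
intv = sample_scm(10_000, do={"A": 1.0})  # data from the mutilated SCM
```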
The equations in F induce a causal graph D that represents the relationships between variables, typically in the form of a directed acyclic graph (DAG), in which the direct causes of $V_i$ correspond to its parent set.

DAGs, PDAGs and CPDAGs. A directed acyclic graph (DAG) is characterized by directed edges and the absence of directed cycles. When some edges are undirected, the graph is referred to as a partially directed graph (PDAG). The structure of a DAG captures a collection of conditional independence relationships through the concept of d-separation (Pearl, 1988). When multiple DAGs encode the same conditional independence relationships, they are considered Markov equivalent. The Markov equivalence class of a DAG D can be uniquely represented by a completed partially directed acyclic graph (CPDAG) C, often denoted as [C].

MPDAGs. A maximally oriented PDAG (MPDAG) (Meek, 1995) can be obtained by applying Meek's rules (Meek, 1995) to a CPDAG with a background knowledge constraint. To construct an MPDAG G from a given CPDAG C and background knowledge B, Algorithm 1 in Perkovic et al. (2017) can be utilized, in which the background knowledge B is assumed to be direct causal information of the form X → Y, indicating that X is a direct cause of Y. (Other forms of background knowledge, such as tier orderings, specific model restrictions, or data obtained from previous experiments, can also induce MPDAGs (Scheines et al., 1998; Hoyer et al., 2012; Hauser & Bühlmann, 2012; Eigenmann et al., 2017; Wang et al., 2017; Rothenhäusler et al., 2018).) The subset of Markov equivalent DAGs that align with the background knowledge B can be uniquely represented by an MPDAG G, denoted as [G]. Both a DAG and a CPDAG can be viewed as special cases of an MPDAG, in which the background knowledge is fully known and fully unknown, respectively.

Density. A density f over V is consistent with a DAG D = (V, E) if it can be factorized as $f(v) = \prod_{V_i \in V} f(v_i \mid pa(v_i, D))$ (Pearl, 2009; Perkovic et al., 2017). A density f over V is consistent with an MPDAG G = (V, E) if f is consistent with some DAG in [G].

2.2 CAUSAL INFERENCE AND INTERVENTIONAL FAIRNESS

The intervention do(X = x), or the shorthand do(x), forces X to take the value x. In an SCM, this means substituting the structural equation $X = f_X(pa_X, U_X)$ with X = x. The causal effect of a set of treatments X on a set of responses Y can be understood as the post-interventional density of Y when intervening on X via the do operator, denoted as $f(Y = y \mid do(X = x))$ or $P(y_x)$ (Pearl, 1995; 2009). In the context of an MPDAG G, such a causal effect is identifiable if it can be uniquely computed. For a formal definition and a brief review of the causal effect identification problem on MPDAGs, please refer to Appendix H. Built on the do operator, the interventional fairness criterion (Salimi et al., 2019) is defined in terms of admissible attributes. An admissible attribute is one through which a causal path from the sensitive attribute to the outcome is still considered fair. For example, in a construction company hiring scenario, the attribute strength is considered an admissible attribute, as it is a valid criterion for assessing job candidates: despite being causally affected by the attribute gender, the causal path 'gender → strength → hiring decision' should not be regarded as discriminatory. Formally, let A, Y and X represent the sensitive attributes, the outcome of interest, and the other observable attributes, respectively. The prediction of Y is denoted by Ŷ.
We say the prediction Ŷ is interventionally fair with respect to the sensitive attributes A if, for any given set of admissible attributes $X_{ad} \subseteq X$, it satisfies the following condition:

Definition 2.1 (Interventional fairness; Salimi et al., 2019). We say the prediction Ŷ is interventionally fair with respect to the sensitive attributes A if the following holds for any $X_{ad} = x_{ad}$:
$$P(\hat{Y} = y \mid do(A = a), do(X_{ad} = x_{ad})) = P(\hat{Y} = y \mid do(A = a'), do(X_{ad} = x_{ad})),$$
for all possible values of y and any values that A and $X_{ad}$ can take.

3 PROBLEM FORMULATION

In this section, we focus on the challenge of attaining interventional fairness in PDAGs, in particular MPDAGs, which can be learnt from observational data using causal discovery algorithms (Spirtes & Glymour, 1991; Colombo et al., 2014; Chickering, 2002a). We begin by presenting a basic implication of the definition of interventional fairness as a baseline method for achieving fairness. To overcome its limitations, we then formulate a constrained optimisation problem.

3.1 FAIRNESS UNDER MPDAGS AS A GRAPHICAL PROBLEM

A simple but important implication for achieving interventional fairness is the following:

Lemma 3.1. Let G be the causal graph of the given model (U, V, F). Then Ŷ will be interventionally fair if it is a function of the admissible set $X_{ad}$ and the non-descendants of A.

Zuo et al. (2022) propose a graphical criterion and algorithms for identifying the ancestral relationship between two vertices in an MPDAG, which we review in Appendix D.1. These findings form the basis for learning (exactly) interventionally fair predictions as indicated by Lemma 3.1. However, we argue that by including descendants, the prediction accuracy can potentially be improved. We therefore propose a constrained optimisation problem that balances the inherent competition between accuracy and fairness.

3.2 ϵ-APPROXIMATE INTERVENTIONAL FAIRNESS

We first establish an approximation to interventional fairness to address the problem of learning a fair prediction on an MPDAG.

Definition 3.1 (ϵ-Approximate Interventional Fairness). A predictor Ŷ is ϵ-approximate interventionally fair with respect to the sensitive attribute A if, for any value of the admissible set $X_{ad} = x_{ad}$, we have
$$\left| P(\hat{Y} = y \mid do(A = a), do(X_{ad} = x_{ad})) - P(\hat{Y} = y \mid do(A = a'), do(X_{ad} = x_{ad})) \right| \leq \epsilon$$
for any $a' \neq a$.
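In finite samples, the condition of Definition 3.1 (here with an empty admissible set) can be probed by comparing the distribution of Ŷ across interventional samples. In the sketch below, `predict` and `sample_do` are assumed callables: the latter presumes that interventional samples are available, e.g., from a known SCM or via the identification formula developed in Section 4.

```python
import numpy as np

def max_interventional_gap(predict, sample_do, a_values, n=20_000, bins=20):
    """Finite-sample surrogate for the epsilon of Definition 3.1: the largest
    absolute difference between binned distributions of Y_hat under do(A = a)
    across the different values a of the sensitive attribute."""
    preds = [np.asarray(predict(sample_do(a, n)), dtype=float)
             for a in a_values]
    lo = min(p.min() for p in preds)
    hi = max(p.max() for p in preds)
    hists = [np.histogram(p, bins=bins, range=(lo, hi))[0] / n for p in preds]
    return max(np.abs(h1 - h2).max()
               for i, h1 in enumerate(hists) for h2 in hists[i + 1:])
```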
Objective. Our objective is to train a model $h_\theta$, with parameters $\theta$, mapping from a subset of the observable variables to Y, so as to accurately predict Y while simultaneously achieving ϵ-approximate interventional fairness. To accomplish this, we minimize the loss function $\ell(\hat{y}, y)$ under the fairness constraint. Given a dataset with n observations $\{A^{(i)}, X^{(i)}, Y^{(i)}\}$ for $i = 1, 2, \dots, n$, where $X_{ad}^{(i)} \subset X^{(i)}$ represents the admissible set, and a specific intervention on $X_{ad} = x_{ad}$, the objective can be formulated as follows:
$$\min_\theta \frac{1}{n} \sum_{i=1}^{n} \ell(\hat{y}_i, y_i) + \lambda \left| P(\hat{Y}_{A \leftarrow a, X_{ad} \leftarrow x_{ad}}) - P(\hat{Y}_{A \leftarrow a', X_{ad} \leftarrow x_{ad}}) \right|, \quad (1)$$
where $P(\hat{Y}_{A \leftarrow a, X_{ad} \leftarrow x_{ad}})$ and $P(\hat{Y}_{A \leftarrow a', X_{ad} \leftarrow x_{ad}})$ represent the post-interventional distributions of Ŷ under $do(A = a, X_{ad} = x_{ad})$ and $do(A = a', X_{ad} = x_{ad})$, respectively. The parameter λ balances the trade-off between accuracy and fairness. In this paper, we focus on the modelling and identification of the objective function for a specific intervention on $X_{ad}$. If we consider multiple values of $X_{ad}$, the fairness term in Equation (1) can be replaced with the average of $| P(\hat{Y}_{A \leftarrow a, X_{ad} \leftarrow x_{ad}}) - P(\hat{Y}_{A \leftarrow a', X_{ad} \leftarrow x_{ad}}) |$ over the different interventions on $X_{ad}$. This formulation raises two key challenges: 1) how to design the model $h_\theta$ (for Ŷ) over an MPDAG; and 2) when and how $P(\hat{Y} = y \mid do(A = a, X_{ad} = x_{ad}))$ is identifiable from the observational densities within our modeling framework.

4 INTERVENTIONAL FAIRNESS UNDER MPDAGS

As mentioned in Section 1, the underlying causal DAG D is usually unknown for real-world datasets. At most, we can obtain a refined Markov equivalence class of DAGs, represented by an MPDAG G, from the observational data (X, A) and additional background knowledge. In this section, we model the predictor as a function of all the other observable variables, regardless of whether they are (possible) descendants or non-descendants of the sensitive attribute. We then show that this modeling technique leads to another MPDAG over (X, A, Ŷ), which facilitates causal inference.

4.1 MODELING AND VERIFICATION

Modeling. We illustrate our modeling technique with Figure 1. Let D = (V, E) in Figure 1a be the underlying causal DAG over the observational variables X and A, where V = X ∪ A, and let f be the consistent observational density over V, factorized as $f(v) = \prod_{V_i \in V} f(v_i \mid pa(v_i, D))$. We model the fair predictor Ŷ as a function $h_\theta(x, a)$ of x and a with parameters θ. Under Pearl's SCM framework, our modeling technique implies:
1. an underlying causal DAG D* over X, A and Ŷ, which, compared with the DAG D, includes additional edges from V to Ŷ for every V ∈ V, as depicted in Figure 1b;
2. that the density f over V ∪ Ŷ, denoted as $f(v, \hat{y}) = f(\hat{y} \mid v) f(v)$ with $\hat{y} = h_\theta(x, a)$, is consistent with D*;
3. that the density f(v) is consistent with any MPDAG G such that D ∈ [G], and the density $f(v, \hat{y})$ is consistent with any MPDAG G* such that D* ∈ [G*]. Figures 1c and 1d are two examples of G and G*, respectively.

[Figure 1: (a) an underlying causal DAG D with three variables X[1], X[2] and X[3] in X; (b) the causal DAG D* induced by modeling Ŷ; (c) an example MPDAG G such that D ∈ [G]; (d) an example MPDAG G* such that D* ∈ [G*].]

Next, we introduce Definition 4.1 to formalize this modeling strategy.

Definition 4.1 (Augmented-G with Ŷ).
For a partially directed graph G = (V, E), let G* augment G by (i) adding an additional vertex Ŷ and (ii) adding the edge V → Ŷ for each node V ∈ V in G. The resulting graph is denoted as G* = (V*, E*), where V* = V ∪ {Ŷ} and E* = E ∪ {V → Ŷ | V ∈ V}. We call G* the augmented-G with Ŷ.

Although directly learning an MPDAG from X, A and Ŷ is not feasible due to the unobservability of Ŷ, Theorem 4.1 implies that once we obtain the MPDAG G from the observational data (X, A) with background knowledge B, the augmented-G with Ŷ, G*, is exactly an MPDAG such that D* ∈ [G*]. Therefore, the density $f(v, \hat{y})$ is consistent with G*.

Theorem 4.1. For a DAG D = (V, E), let G be an MPDAG consistent with the background knowledge B such that D ∈ [G]. Let D* be the augmented-D with Ŷ and G* the augmented-G with Ŷ. Then the graph G* is an MPDAG consistent with the background knowledge B ∪ {V → Ŷ | V ∈ V} such that D* ∈ [G*].

Theorem 4.1 is verified by exploiting the properties of Meek's rules; for details, please refer to Appendix B.2. Theorem 4.1 may be of independent interest, as it establishes a general modeling method applicable to any problem: we can directly model Ŷ on any MPDAG G and, remarkably, the resulting augmented-G with Ŷ remains an MPDAG consistent with the given background knowledge. This augmented graph enables various causal inference tasks.

4.2 IDENTIFICATION OF FAIRNESS CRITERIA

We denote the augmented MPDAG G = (V, E) with Ŷ, where V = X ∪ A, by G*. In this section, we discuss the identification of the causal quantity $P(\hat{Y} = y \mid do(A = a), do(X_{ad} = x_{ad}))$ in Equation (1) on G*: when and how we can uniquely express this causal quantity in terms of the observable densities f(v) over G and $f(\hat{y} \mid v)$ over G*. We start with the general identification problem for $P(\hat{Y} = y \mid do(S = s))$, where S ⊆ V. Perkovic (2020, Theorem 3.6) establishes the condition and formula for the identification of the causal effect $P(\hat{Y} = y \mid do(S = s))$ in an MPDAG G* with density $f(v, \hat{y})$. Here, under our modeling strategy for Ŷ, we present this graphical condition over the MPDAG G and express the identification formula in terms of f(v) in G and $f(\hat{y} \mid v)$ in G* in Proposition 4.1. We first introduce the notion of partial causal ordering as a preliminary.

Partial causal ordering (Perkovic, 2020). Let G = (V, E) be an MPDAG. Since G may include undirected edges (−), it is generally not possible to establish a causal ordering of a node set V′ ⊆ V in G. Instead, a partial causal ordering, <, of V′ in G is defined as a total causal ordering of pairwise disjoint node sets $V_1, \dots, V_k$, $k \geq 1$, $\cup_{i=1}^{k} V_i = V'$, that satisfies the following condition: if $V_i < V_j$ and there is an edge between $V_i \in \mathbf{V}_i$ and $V_j \in \mathbf{V}_j$ in G, then $V_i \rightarrow V_j$ is in G. Perkovic (2020) develops an algorithm for decomposing a set of nodes into a partial causal ordering (PCO) in an MPDAG G. We provide it as Algorithm 2 in Appendix A; it acts as an important component of Proposition 4.1. We denote the parents of a node W in a graph G by pa(W, G).

Proposition 4.1. Let S be a node set in an MPDAG G = (V, E) and let V′ = V\S.
Furthermore, let $(V_1, \dots, V_k)$ be the output of PCO(V, G), and denote the augmented-G with Ŷ by G*. Then for any density f(v) consistent with G and the conditional density $f(\hat{y} \mid v)$ entailed by G*, we have:
1. $f(v, \hat{y}) = f(\hat{y} \mid v) \prod_{V_i \subseteq V} f(v_i \mid pa(v_i, G))$;
2. if and only if there is no pair of nodes V ∈ V′ and S ∈ S such that S − V is in G, $f(v' \mid do(s))$ and $f(\hat{y} \mid do(s))$ are identifiable, and
$$f(v' \mid do(s)) = \prod_{V_i \subseteq V'} f(v_i \mid pa(v_i, G)) \quad (2)$$
$$f(\hat{y} \mid do(s)) = \int f(\hat{y} \mid v) f(v' \mid do(s)) \, dv' = \int f(\hat{y} \mid v) \prod_{V_i \subseteq V'} f(v_i \mid pa(v_i, G)) \, dv' \quad (3)$$
for values $pa(v_i, G)$ of $Pa(v_i, G)$ that are in agreement with s.

The proof of Proposition 4.1 is based on Perkovic (2020, Theorem 3.6) and Theorem 4.1; the detailed proof is provided in Appendix B.3.

Example. We provide a simple example to illustrate Proposition 4.1. Consider the MPDAGs G and G* in Figures 1c and 1d, where G* is the augmented-G with Ŷ. The partial causal ordering of X ∪ A in G is {{A}, {X}}. f(x, a) is a density consistent with G, and G* entails a conditional density $f(\hat{y} \mid x, a)$. Then, by Proposition 4.1, we have $f(x, a, \hat{y}) = f(\hat{y} \mid x, a) f(x \mid a) f(a)$ in G*. Since there is no pair of nodes A and X ∈ X such that A − X is in G, we have $f(x \mid do(a)) = f(x \mid a)$ and $f(\hat{y} \mid do(a)) = \int f(\hat{y} \mid x, a) f(x \mid a) \, dx$ in G*. For more examples, please refer to Appendix C.

Dealing with non-identification. In cases where the causal effect of S on Ŷ is not identifiable, we can list the MPDAGs corresponding to all valid combinations of orientations of the edges V − S, where V ∈ V′ and S ∈ S. The causal effect in each such MPDAG can then be identified as a unique functional of the observable density. To address our constrained optimisation problem, we can replace the unfairness term in Equation (1) with the average unfairness over the different MPDAGs. An experimental analysis of unidentifiable cases is provided in Appendix G.11.

Identification of $P(\hat{Y} = y \mid do(A = a), do(X_{ad} = x_{ad}))$ in the fairness context. Under the modeling technique $\hat{y} = h_\theta(x, a)$, Proposition 4.1 implies that the causal effect of $A \cup X_{ad}$ on Ŷ is identifiable for a given set of admissible attributes $X_{ad} \subset X$ if and only if there is no pair of nodes $X \in A \cup X_{ad}$ and $V \in V \backslash (A \cup X_{ad})$ such that X − V is in G. In that case, $f(\hat{y} \mid do(a), do(x_{ad})) = \int f(\hat{y} \mid v) \prod_{V_i \subseteq V'} f(v_i \mid pa(v_i, G)) \, dv'$ for values $pa(v_i, G)$ of $Pa(v_i, G)$ that are in agreement with a and $x_{ad}$, where $V' = V \backslash (A \cup X_{ad})$ and $(V_1, \dots, V_m) = PCO(V', G)$.

4.3 SOLVING THE OPTIMISATION PROBLEM

Addressing the optimisation problem stated in Equation (1) requires measuring the discrepancy between the distributions $P(\hat{Y} = y \mid do(A = a, X_{ad} = x_{ad}))$ and $P(\hat{Y} = y \mid do(A = a', X_{ad} = x_{ad}))$. Here, we employ the Maximum Mean Discrepancy (MMD) (Gretton et al., 2007), but other measures can also be utilized. On the other hand, as is evident from the identification formula in Equation (3), we need to estimate the stable conditional densities $f(v_i \mid pa(v_i, G))$ and design the model $\hat{y} = h(x, a)$ to approximate $f(\hat{y} \mid v)$.
However, computing the MMD of two distributions entailed by a model h is intractable. As a result, we resort to Monte Carlo sampling to approximate the integrals involved in the identification formula. For convenience, we estimate $f(v_i \mid pa(v_i, G))$ using a conditional multivariate normal distribution, but other conditional density estimation approaches can also be employed. We then generate the interventional data for v under each intervention $do(A = a, X_{ad} = x_{ad})$. Finally, a neural net can be trained for h(x, a). For more on the MMD formulation and its application in our context, please refer to Appendix E.

4.4 APPLICABILITY TO OTHER CAUSAL FAIRNESS NOTIONS

Causal fairness notions are defined on different types of causal effects: interventional fairness on interventional effects, path-specific fairness on nested counterfactual queries, and counterfactual fairness on counterfactual effects. Their identifiability depends on the identifiability of these causal effects. Our proposed method is directly applicable to other interventional-based fairness notions, but not to path-specific causal fairness or counterfactual fairness, due to the ongoing challenge of (nested) counterfactual identification over MPDAGs. For more details on the related fairness notions and applicability, please refer to Appendix F.

5 EXPERIMENT

In this section, we illustrate our approach on both synthetic and real-world datasets. We measure prediction performance using root mean squared error (RMSE) or accuracy and assess interventional unfairness by MMD. Here, we focus on the scenario where the admissible variable set is empty. Additional experiments involving non-empty admissible variable sets can be found in Appendix G.6.

Baselines. We consider three baselines: 1) the Full model makes predictions using all attributes, including the sensitive attributes; 2) the Unaware model uses all attributes except the sensitive attributes; and 3) IFair is the model mentioned in Section 3.1, which makes predictions using all definite non-descendants of the sensitive attribute in an MPDAG. (Proposition 4.1 and Theorem D.2 indicate that when the total effect of the singleton A on Ŷ is identifiable in the augmented MPDAG G*, the ancestral relationship between A and any other attribute is definite.) Our proposed method is ϵ-IFair, which uses all attributes and implements the constrained optimization problem of Section 3.2.

5.1 SYNTHETIC DATA

We first randomly generate DAGs with d nodes and s directed edges from the Erdős–Rényi (ER) graphical model, where d is chosen from {5, 10, 20, 30} and the corresponding s is {8, 20, 40, 60}. For each setting, we generate 10 graphs. For each DAG D′, we take the last node in a topological ordering as the outcome variable and randomly select one node from the graph as the sensitive attribute. The sensitive attribute can have two or three values, drawn from a Binomial([0,1]) or Multinomial([0,1,2]) distribution, respectively. The weight $\beta_{ij}$ of each directed edge $X_i \rightarrow X_j$ in the generated DAG is drawn from a Uniform($[-1, -0.1] \cup [0.1, 1]$) distribution. The synthetic data are generated according to the following linear structural equation:
$$X_i = \sum_{X_j \in pa(X_i)} \beta_{ij} X_j + \epsilon_i, \quad (4)$$
where the $\epsilon_i$ are independent N(0, 1). We then generate a sample of size 1000 for each DAG as the observational data. The proportions of training, validation, and test data are split as 8:1:1.
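A minimal sketch of the generator in Equation (4): nodes are processed in topological order, each a weighted sum of its parents plus standard Gaussian noise, with edge weights drawn from Uniform([−1, −0.1] ∪ [0.1, 1]) as described above. The three-node chain below is our own toy example, and the binomial/multinomial sensitive attribute is omitted for brevity.

```python
import numpy as np

def sample_linear_scm(parents, weights, n, rng):
    """Draw n samples from the linear SCM of Equation (4):
    X_i = sum_{X_j in pa(X_i)} beta_ij * X_j + eps_i, with eps_i ~ N(0, 1).
    parents[i] lists the parent indices of node i (nodes are assumed to be
    topologically ordered); weights[(j, i)] holds the weight of X_j -> X_i."""
    d = len(parents)
    X = np.zeros((n, d))
    for i in range(d):  # parents are always filled in before their children
        X[:, i] = rng.normal(0.0, 1.0, n)
        for j in parents[i]:
            X[:, i] += weights[(j, i)] * X[:, j]
    return X

rng = np.random.default_rng(0)
parents = {0: [], 1: [0], 2: [1]}  # toy chain X0 -> X1 -> X2
weights = {e: rng.uniform(0.1, 1.0) * rng.choice([-1.0, 1.0])
           for e in [(0, 1), (1, 2)]}
data = sample_linear_scm(parents, weights, n=1000, rng=rng)
```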
We consider the DAG D over all of the variables, excluding the outcome. As the simulated DAG is known, the CPDAG can be obtained from the true DAG without running causal discovery algorithms. (With an ample sample size, existing causal discovery algorithms have demonstrated high accuracy in recovering the CPDAG from simulated data (Glymour et al., 2019).) Once we obtain the CPDAG C, where D ∈ [C], we randomly generate direct causal information S → T as background knowledge from the edges for which S → T is in the DAG D while S − T is in the CPDAG C. Combined with the background knowledge necessary to identify fairness, we obtain the corresponding MPDAG G. We show a randomly generated DAG D and the corresponding CPDAG C and MPDAG G as an example in Figure 10 in Appendix G.1. To measure unfairness, we generate 1000 interventional data points $X_{A \leftarrow a}$ under different interventions on A according to the identification formula. To approximate each term $f(v_i \mid pa(v_i, G))$ in the formula, we fit a conditional multivariate Gaussian distribution using the observational data. The generation of $v_i$ is based on the fitted density and follows the partial causal ordering. The proportions of training and validation data for the interventional data are split as 8:2. In addition, we generate 1000 interventional data points from the ground-truth SCM under different interventions on the sensitive attribute for testing. We then fit all models with the data. The λ in our optimisation problem takes values in {0, 0.5, 5, 20, 60, 100}. For additional experiments on nonlinear structural equations and varying amounts of background knowledge, please refer to Appendix G.7 and Appendix G.8, respectively. We also analyze the robustness of the model when the graph is learnt by causal discovery algorithms in Appendix G.9. The robustness of the model to the fitting of the conditional densities is analyzed in Appendix G.10.

Results. For each graph setting, we report the average unfairness and RMSE achieved over the 10 causal graphs in the corresponding trade-off plots in Figure 2. As expected, both the Full and Unaware methods exhibit lower RMSE but higher unfairness. The IFair method achieves nearly zero unfairness, but at the cost of significantly reduced prediction performance. In contrast, our ϵ-IFair approach allows a trade-off between unfairness and RMSE by varying λ. For certain values of λ, we can even simultaneously achieve unfairness as low as that of the IFair model and RMSE as low as that of the Full model. Exemplary density plots of the predictions under two interventional datasets are shown in Figure 3. The degree of overlap of the distributions indicates the fairness of the predictions with respect to the sensitive attribute: the more the distributions overlap, the fairer the predictions are considered to be. Further discussion of the accuracy-fairness trade-off is included in Appendix I.

[Figure 2: Accuracy-fairness trade-off (RMSE vs. unfairness) of Full, ϵ-IFair, IFair, and Unaware for the four graph settings: (a) 5 nodes/8 edges, (b) 10 nodes/20 edges, (c) 20 nodes/40 edges, (d) 30 nodes/60 edges.]
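The unfairness values plotted in Figure 2 are MMD estimates between predictions under different interventions on A. Below is a minimal sketch of a (biased, V-statistic) estimator of MMD² with an RBF kernel; the median-distance bandwidth heuristic is our assumption, since the paper defers its MMD formulation to Appendix E.

```python
import numpy as np

def mmd_rbf(x, y, bandwidth=None):
    """Biased estimate of MMD^2 between one-dimensional samples x and y,
    e.g. predictions under do(A = a) and do(A = a')."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    z = np.vstack([x, y])
    d2 = (z - z.T) ** 2                    # pairwise squared distances
    if bandwidth is None:                  # median heuristic (assumption)
        bandwidth = np.sqrt(np.median(d2[d2 > 0]) / 2.0)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))
    n = len(x)
    kxx, kyy, kxy = k[:n, :n], k[n:, n:], k[:n, n:]
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()
```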
[Figure 3: Density plots of the predicted $\hat{Y}_{A \leftarrow a}$ and $\hat{Y}_{A \leftarrow a'}$ in synthetic data, shown in panels (a) and (b) for the Full, Unaware, IFair, and ϵ-IFair (λ = 5 and λ = 60) models.]

5.2 REAL DATA

5.2.1 THE UCI STUDENT DATASET

Our first experiment with real-world data is based on the UCI Student Performance Data Set (Cortez & Silva, 2008), which contains information about students' performance in Mathematics. The dataset consists of records for 395 students, with 32 school-related features. In this dataset, we consider the attribute sex as the sensitive attribute. We create the target attribute Grade as the average of the grades in three tests. Our experiments are carried out on the MPDAG G in Figure 11c, which, due to space limits, is provided in Appendix G.2. For details on graph learning, interventional data generation and model training, please refer to Appendix G.3. We measure interventional fairness and accuracy using an approach similar to that described in Section 5.1. The trade-off plot is shown in Figure 4a. The Full and Unaware models exhibit higher unfairness. The IFair model achieves interventional fairness at the cost of increased RMSE. In contrast, our ϵ-IFair model strikes a balance between unfairness and RMSE through the tuning of λ. Since the test set contains only 19 interventional data points for do(sex = female) and 20 for do(sex = male), the unfairness measure may not be highly accurate. Therefore, we also provide the trade-off plot on the training set in Figure 4b, where the unfairness approaches zero for the IFair model and for the ϵ-IFair model with stricter penalties on unfairness. Furthermore, the distributions of the predictions on the two interventional datasets follow a trend similar to the synthetic data, as depicted in Figure 5.

[Figure 4: Accuracy-fairness trade-off on the Student data: (a) test set; (b) training set.]

[Figure 5: Density plots of the predicted $\hat{Y}_{A \leftarrow a}$ and $\hat{Y}_{A \leftarrow a'}$ in the Student data for the Full, Unaware, IFair, and ϵ-IFair (λ = 250) models.]

5.2.2 CREDIT RISK DATASET

The task of credit risk assessment involves predicting the likelihood of a borrower defaulting on a loan.
For our experiment, we utilize the Credit Risk Dataset, which contains 11 features related to the repayment capability of 32,581 borrowers. In this dataset, we consider the attribute Age as the sensitive attribute, and the target variable is Loan status, a binary variable indicating default (1) or no default (0). We focus on two specific ages, 23 and 30, representing the first and third quartiles, respectively. Our experiments are based on the MPDAG G in Figure 13a, which, due to space limits, is provided in Appendix G.4. For details on graph learning, interventional data generation and model training, please refer to Appendix G.5.

[Figure 6: Results on the Credit Risk dataset: (a) accuracy-fairness trade-off (1 − accuracy vs. unfairness) for Full, ϵ-IFair, and Unaware; (b) histograms of the predictions under do(Age = 23) and do(Age = 30), where the λ for the two ϵ-IFair models is 10 and 150, respectively.]

Given that the target variable Loan status is binary, we measure unfairness using the absolute difference between the means of the predictions under the interventions Age = 23 and Age = 30. Prediction performance is evaluated by accuracy. Since there is no non-descendant of Age, there is no IFair model for this dataset. The trade-off plot is presented in Figure 6a, and the distributions of the predictions for the two interventional datasets are depicted in Figure 6b. Both follow trends similar to those observed previously.

6 CONCLUSION

This paper presents a framework for achieving interventional fairness on partially known causal graphs, specifically MPDAGs. By leveraging the notion of interventions and modeling fair predictions as the effect of all observational variables, the proposed approach addresses the limitations of existing methods that assume a fully known causal DAG. Through the analysis of identification criteria and the formulation of a constrained optimization problem, the framework provides a principled approach to achieving interventional fairness while maximizing data utility. Experimental results on simulated and real-world datasets demonstrate the effectiveness of the proposed framework. A limitation of this work is that it assumes no selection bias or latent confounders; future work can focus on extending the proposed framework to address these challenges.

7 ACKNOWLEDGEMENT

AZ was supported by a Melbourne Research Scholarship from the University of Melbourne. This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne; this Facility was established with the assistance of LIEF Grant LE170100200. SW was supported by ARC DE200101253. MG was supported by ARC DE210101624." |
| } |
| ] |
| } |