| { |
| "url": "http://arxiv.org/abs/2404.16540v1", |
| "title": "Approximation Algorithm of Minimum All-Ones Problem for Arbitrary Graphs", |
| "abstract": "Let $G=(V, E)$ be a graph and let each vertex of $G$ has a lamp and a button.\nEach button can be of $\\sigma^+$-type or $\\sigma$-type.\n Assume that initially some lamps are on and others are off. The button on\nvertex $x$ is of $\\sigma^+$-type ($\\sigma$-type, respectively) if pressing the\nbutton changes the lamp states on $x$ and on its neighbors in $G$ (the lamp\nstates on the neighbors of $x$ only, respectively). Assume that there is a set\n$X\\subseteq V$ such that pressing buttons on vertices of $X$ lights all lamps\non vertices of $G$. In particular, it is known to hold when initially all lamps\nare off and all buttons are of $\\sigma^+$-type.\n Finding such a set $X$ of the smallest size is NP-hard even if initially all\nlamps are off and all buttons are of $\\sigma^+$-type. Using a linear algebraic\napproach we design a polynomial-time approximation algorithm for the problem\nsuch that for the set $X$ constructed by the algorithm, we have $|X|\\le\n\\min\\{r,(|V|+{\\rm opt})/2\\},$ where $r$ is the rank of a (modified) adjacent\nmatrix of $G$ and ${\\rm opt}$ is the size of an optimal solution to the\nproblem.\n To the best of our knowledge, this is the first polynomial-time approximation\nalgorithm for the problem with a nontrivial approximation guarantee.", |
| "authors": "Chen Wang, Chao Wang, Gregory Z. Gutin, Xiaoyan Zhang", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.DS", |
| "cats": [ |
| "cs.DS", |
| "cs.DM" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "The all-ones problem is a fundamental problem in applied mathematics, first proposed by Sutner in 1988 [17]. This problem has applications in linear cellular automata, as discussed in [18] and the references therein. To illustrate the problem, consider an n \u00d7 n grid with each area having a light lamp and a switch, and every lamp is initially off. Turning the switch on in some area lights the lamp in the area and the lamps in neighboring areas. Is there a set X of areas such that turning the switches on in X will turn on all the lamps? This problem can be extended to all graphs and we will call it the all-ones problem. Sutner [18] proved that a solution X exists for every graph. Later, several simple proofs of this result were given or rediscovered [3, 5, 7, 10, 13]. Many variants of the all-ones problem have been introduced and studied [1, 2, 6, 7, 11, 12, 19] over years. There are two important generalizations of the all-ones problem: (i) the initial state of lamps and switches can be arbitrary, \u2217Corresponding author. Email addresses: 2120220677@mail.nankai.edu.cn (Chen Wang), wangchao@nankai.edu.cn (Chao Wang), gutin@cs.rhul.ac.uk (Gregory Z. Gutin), xiaoyanice@aliyun.com (Xiaoyan Zhang) 1 arXiv:2404.16540v1 [cs.DS] 25 Apr 2024 Chen Wang et al. / Theoretical computer science 00 (2024) 1\u20138 2 i.e., some are on and the others are off, and (ii) every switch can be either of \u03c3+-type which changes the states of the lamp on its vertex and the lamps on the neighbors of its vertex or \u03c3-type which changes the states of the lamps on the neighbors of its vertex only. As a result of these two generalizations, the generalized all-ones problem may not have a solution X which lights all lamps. This generalized problem is studied in this paper. Under the condition that such a solution X exists for the generalized all-ones problem, it is natural to ask for X of minimum size. Unfortunately, this minimization problem is NP-hard even for all-ones problem [16]; we will call the minimization all-ones problem the min all-ones problem. Galvin and Lu both proved that the min all-ones problem of trees can be solved in linear time [9, 14]. Building on this, Chen proposed an algorithm for solving the min generalized all- ones problem on trees, with linear complexity [4]. Manuel et al. provided solutions for some of the widely studied architectures, such as binomial trees, butterfly, and benes networks [15]. Fleischer and Yu provided a detailed survey of the generalized all-ones problem [8]. More recently, Zhang extended the all-ones problem to the all-colors problem, in which each lamp had other states besides being on and off, and obtained additional findings on the all-colors problem [20]. Although significant research has been conducted on the all-ones problem on special graphs, such as trees, re- sulting in efficient algorithms, no polynomial-time approximation algorithms have been designed for the min all-ones problem on general graphs. Trees and cyclic graphs only represent a fraction of general graphs. In practical engi- neering scenarios, complex graphs are more common. In this paper, we design a polynomial-time approximation algorithm for the min generalized all-ones problem. If the problem has a solution, our algorithm outputs a solution X such that |X| \u2264min{r, (|V| + opt)/2}, where the rank of a (modified) adjacent matrix of G and opt is the size of an optimal solution to the problem. Apart from the introduction, this paper contains three sections. 
In Section 2, we introduce our approximation algorithm in detail. Section 3 shows the theoretical analysis and performance evaluation of this algorithm. Section 4 summarizes all the work of this paper and discusses future work.", |
| "main_content": "2.1. Linear algebraic formulation of min generalized all-ones problem It is not hard to see that the min generalized all-ones problem can be described as the following linear integer program over F2. For an arbitrary graph G = (V, E) with V = {v1, . . . , vn} we can get its modified adjacency matrix A = (aij)n\u00d7n such that for all i \ufffdj, aij = 1 if vivj \u2208E and ai j = 0 otherwise, and for all i \u2208{1, 2, . . . , n}, aii = 1 (aii = 0, respectively) if the switch on vi is of \u03c3+-type (of \u03c3-type, respectively). Combined with the initial state B = (b1, b2, \u00b7 \u00b7 \u00b7 , bn), where bi = 0 if the lamp on vertex vi is initially on and bi = 1 if the lamp is initially off, we can construct a system of linear equations AU = B over F2. The solution to this problem is the minimum of \ufffdU = \ufffdn i=1 ui. Suppose the rank of A is r and the corank is m so that m + r = n. If aii = 1 for all i \u2208{1, 2, \u00b7 \u00b7 \u00b7 , n}, the system of equations AU = B must have a solution, but if aii = 0, the system may not necessarily have a solution. However, as long as the system has at least one solution \u03b3 = (\u03b31, \u03b32, \u00b7 \u00b7 \u00b7 , \u03b3n)T, we can find all solutions of the system using the following system combining \u03b3 with the fundamental solution set \u03b7 = (\u03b71, \u03b72, \u00b7 \u00b7 \u00b7 , \u03b7m) within time O(n3). Here xi is the coefficient of the column vector \u03b7i = (\u03b71i, . . . , \u03b7ni)T. \u03b7X + \u03b3 = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 ate to \u03b711 \u03b712 \u00b7 \u00b7 \u00b7 \u03b71m \u03b721 \u03b722 \u00b7 \u00b7 \u00b7 \u03b72m \u03b731 \u03b732 \u00b7 \u00b7 \u00b7 \u03b73m . . . . . . ... . . . \u03b7n1 \u03b7n2 \u00b7 \u00b7 \u00b7 \u03b7nm \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe o e \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 m x1 x2 . . . xm \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe im + \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b3 \u03b3 \u03b3 \u03b3 ze her \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b31 \u03b32 \u03b33 . . . \u03b3n \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe \ufffdU fo (1) \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u00b7 \u00b7 \u00b7 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe The problem is how to find the appropriate column vector X to minimize \ufffdU, under the condition that X has a total of 2m values. This problem was proven to be an NP-complete [16]. 
Therefore, the next subsection provides an approximation algorithm running in polynomial time.

2.2. Approximation algorithm Firstly, finding the matrix (η_1, η_2, ..., η_m) and the special solution γ takes polynomial time (not exceeding O(n^3)), which makes this step inexpensive relative to the NP-complete problem being solved. Secondly, it is challenging to identify alternative methods capable of directly computing the optimal solution without obtaining all the solutions; even if such a solution were obtained, verification is often infeasible. When η and γ are known, we need to find the X that minimizes $\sum U$:

$$\eta X+\gamma=\begin{pmatrix}\eta_{11}&\eta_{12}&\cdots&\eta_{1m}\\\eta_{21}&\eta_{22}&\cdots&\eta_{2m}\\\vdots&\vdots&\ddots&\vdots\\\eta_{n1}&\eta_{n2}&\cdots&\eta_{nm}\end{pmatrix}\begin{pmatrix}x_1\\x_2\\\vdots\\x_m\end{pmatrix}+\begin{pmatrix}\gamma_1\\\gamma_2\\\vdots\\\gamma_n\end{pmatrix}=\begin{pmatrix}\delta_1\\\delta_2\\\vdots\\\delta_n\end{pmatrix}+\begin{pmatrix}\gamma_1\\\gamma_2\\\vdots\\\gamma_n\end{pmatrix}=\begin{pmatrix}u_1\\u_2\\\vdots\\u_n\end{pmatrix}\qquad(2)$$

Proposition 2.1. Row exchanges of the matrix η do not change $\sum U$.

Proof. Multiply both sides of Equation 2 by a matrix P, as in Equation 3, where P is a product of elementary matrices performing row exchanges. This operation merely reorders the elements of the vector U and hence does not change $\sum U$:

$$P(\eta X+\gamma)=P(\delta+\gamma)=PU\qquad(3)$$

Proposition 2.2. Column transformations of the matrix η do not change $\sum U$.

Proof. Let $Q_{m\times m}$ be a full-rank matrix and let X = QZ; then

$$\eta X+\gamma=\eta QZ+\gamma=(\eta Q)Z+\gamma=\epsilon Z+\gamma=\delta+\gamma=U\qquad(4)$$

Q is the transition matrix between X and Z, and Q is full rank. Hence, whenever some X makes $\sum U$ smallest, we can find the corresponding Z = Q^{-1}X, so that the obtained U is the same.
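The two propositions can be checked numerically; below is a tiny self-contained sketch (ours, for illustration) that enumerates all 2^m coefficient vectors and verifies that the multiset of achievable values of $\sum U$ is unchanged under a row exchange (applied to both η and γ) and under a full-rank column transformation Q.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, m = 6, 3
eta = rng.integers(0, 2, size=(n, m))
gamma = rng.integers(0, 2, size=n)

def all_sums(basis, g):
    # sum(U) for U = basis @ x + g over all 2^m coefficient vectors x
    return sorted(int(((basis @ np.array(x) + g) % 2).sum())
                  for x in product([0, 1], repeat=m))

P = np.eye(n, dtype=int)[rng.permutation(n)]      # row-exchange matrix
Q = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])   # full rank over F_2

assert all_sums(P @ eta, P @ gamma) == all_sums(eta, gamma)   # Prop. 2.1
assert all_sums(eta @ Q % 2, gamma) == all_sums(eta, gamma)   # Prop. 2.2
```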
\u03b7X + \u03b3 = \u03b7QZ + \u03b3 = (\u03b7Q)Z + \u03b3 = \u03f5Z + \u03b3 = \u03b4 + \u03b3 = U (4) Q is the transition matrix between X and Z, and Q is full rank. When we find that X makes P U the smallest, we can definitely find the corresponding Z, so that the obtained U is the same. We can transform the \u03b7 column into an echelon form using row exchanges and column transformations, as shown in the following equation, with a complexity of O(m2n). The question mark indicates that the value of the number is uncertain, which may be 0 or 1. We can divide the matrix into m + 1 parts based on the echelon and assume the last line of the i-th part is line ki (i = 0, 1, \u00b7 \u00b7 \u00b7 , m) for the rank of matrix \u03b7 is always m. Part 0 is the most special, with all 3 Chen Wang et al. / Theoretical computer science 00 (2024) 1\u20138 4 elements in each row being 0. To ensure that Equation 4 holds, there should be (u1, u2, \u00b7 \u00b7 \u00b7 , uk0) = (\u03b31, \u03b32, \u00b7 \u00b7 \u00b7 , \u03b3k0). \u03b7Q = \u03f5 = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 0 0 0 0 \u00b7 \u00b7 \u00b7 0 . . . . . . . . . . . . ... . . . 1 0 0 0 \u00b7 \u00b7 \u00b7 0 1 0 0 0 \u00b7 \u00b7 \u00b7 0 . . . . . . . . . . . . ... . . . \u03f5(k1+1)1 1 0 0 \u00b7 \u00b7 \u00b7 0 \u03f5(k1+2))1 1 0 0 \u00b7 \u00b7 \u00b7 0 . . . . . . . . . . . . ... . . . \u03f5(k2+1)1 \u03f5(k2+1)2 1 0 \u00b7 \u00b7 \u00b7 0 \u03f5(k2+2)1 \u03f5(k2+2)2 1 0 \u00b7 \u00b7 \u00b7 0 . . . . . . . . . . . . ... . . . \u03f5km1 \u03f5km2 \u03f5km3 \u03f5km4 \u00b7 \u00b7 \u00b7 1 \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe (5) In the following m parts, we will use greedy algorithms to solve for the Z value on the Echelon of each part. Part 1 of the linear Equation 5 is shown in Equation 6. (\u03b3k0+1, \u03b3k0+2, \u00b7 \u00b7 \u00b7 , \u03b3k1) is known and (\u03b4k0+1, \u03b4k0+2, \u00b7 \u00b7 \u00b7 , \u03b4k1) is unknown. It is important to ensure that \u03b4i is as similar to \u03b3i as possible. At this moment z1 only has two possible values: 0 and 1. Therefore, the idea of a greedy algorithm is adopted here. If there are more 0\u2019s than 1\u2019s in the range from \u03b3k0+1 to \u03b3k1, then z1 is set to 0. If there are more 1\u2019s than 0\u2019s, then z1 is set to 1. Therefore, we can directly obtain the value of x1 by solving it here, while ensuring that Pk1 i=k0+1 ui \u2264(k1 \u2212k0)/2. \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 z1 = \u03b4k0+1 z1 = \u03b4k0+2 z1 = \u03b4k0+3 . . . 
z1 = \u03b4k1 Compare to \u03b3k0+1 \u03b3k0+2 \u03b3k0+3 . . . \u03b3k1 (6) The value of z2 can be calculated through z1. Part 2 of Equation 5 can be written as shown in Equation 7. (\u03b3k1+1, \u03b3k1+2, \u00b7 \u00b7 \u00b7 , \u03b3k2) is known, and (\u03b4k1+1, \u03b4k1+2, \u00b7 \u00b7 \u00b7 , \u03b4k2) needs to satisfy the Equation 5 and be as similar to (\u03b3k1+1, \u03b3k1+2, \u00b7 \u00b7 \u00b7 , \u03b3k2) as possible. The variables in Equation 7 are z1 and z2, and z1 has been solved before through a greedy algorithm, so the unknown variable is only z2. Since \u03f5i1z1 are constants, we can move them from the left side of the equation to the right side, and these two equation systems are obviously equivalent. Then, we need to ensure that \u03f5(k1+i)1z1 +\u03b4k1+i is as similar to \u03b3k1+i as possible. It can be seen that another transformation can be carried out, which is equivalent to making \u03b4k1+i as similar to \u03f5(k1+i)1z1 +\u03b3k1+i as possible. In this way, we have separated the variables: the left side of the equation is the variable z2, the right side of the equation is the variable \u03b4k1+i(\u03b4k1+i = z2), and the column of \u03f5(k1+i)1z1 + \u03b3k1+i are constants. At this point, we find that part 2 of Equation 5 has been transformed to be very similar to part 1. Therefore, if there are more 0\u2019s than 1\u2019s in the range from \u03f5(k1+i)1z1 + \u03b3k1+1 to \u03f5(k2)1z1 + \u03b3k2, then z2 is set to 0. If there are more 1\u2019s than 0\u2019s, then z2 is set to 1. Therefore, the value of z2 can be solved here and Pk2 i=k1+1 ui \u2264(k2 \u2212k1)/2 is ensured. After obtaining the value of z2, the value of \u03f5i1z1 + \u03f5i2z2 can be calculated, and the value of z3 can be calculated again. Following this pattern, the values of Z = (z1, z2, \u00b7 \u00b7 \u00b7 , zm) can be obtained. Then \u03f5Z + \u03b3 = U, we obtain U. The complete algorithm is shown in Algorithm 1. 4 Chen Wang et al. / Theoretical computer science 00 (2024) 1\u20138 5 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03f5(k1+1)1z1 + z2 = \u03b4k1+1 \u03f5(k1+2)1z1 + z2 = \u03b4k1+2 \u03f5(k1+3)1z1 + z2 = \u03b4k1+3 . . . \u03f5(k2)1z1 + z2 = \u03b4k2 Compare to \u03b3k1+1 \u03b3k1+1 \u03b3k1+1 . . . \u03b3k2 \u21d3 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 z2 = \u03b4k1+1 + \u03f5(k1+1)1z1 z2 = \u03b4k1+2 + \u03f5(k1+2)1z1 z2 = \u03b4k1+3 + \u03f5(k1+3)1z1 . . . z2 = \u03b4k2 + \u03f5(k2)1z1 Compare to \u03b3k1+1 \u03b3k1+1 \u03b3k1+1 . . . \u03b3k2 \u21d3 \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 z2 = \u03b4k1+1 z2 = \u03b4k1+2 z2 = \u03b4k1+3 . . . z2 = \u03b4k2 Compare to \u03b3k1+1 + \u03f5(k1+1)1z1 \u03b3k1+2 + \u03f5(k1+2)1z1 \u03b3k1+3 + \u03f5(k1+3)1z1 . . . \u03b3k2 + \u03f5(k2)1z1 (7) 3. Algorithm performance evaluation In this section, we present the complexity of Algorithm 1 and analyze its approximation guarantees. Proposition 3.1. Algorithm 1 has a complexity of O(n3), and if the fundamental solution set \u03b7 for the equation AU = B has been obtained and is in column echelon form, then the complexity will reduce to O(mn). 
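As an illustration of steps 10-24, here is a minimal Python sketch of the greedy choice of Z (ours, not the authors' code). It assumes ϵ is already in column echelon form and that the part boundaries K = [k_0, k_1, ..., k_m] are given as 0-indexed offsets; the part-0 rows of ϵ are all zero, so their entries of U automatically equal γ.

```python
import numpy as np

def greedy_Z(eps, gamma, K):
    # Greedy choice of Z for eps @ Z + gamma = U over F_2. Rows
    # K[i-1]..K[i]-1 (0-indexed, half-open) form part i, each with pivot 1
    # in column i-1 of the column-echelon matrix eps.
    eps = np.asarray(eps, dtype=np.int64) % 2
    gamma = np.asarray(gamma, dtype=np.int64) % 2
    n, m = eps.shape
    Z = np.zeros(m, dtype=np.int64)
    for i in range(1, m + 1):
        lo, hi = K[i - 1], K[i]
        # contribution of the already-fixed z_1..z_{i-1} on the rows of part i
        fixed = eps[lo:hi, : i - 1] @ Z[: i - 1] % 2
        # with z_i = 0 we get u_j = fixed_j XOR gamma_j on these rows;
        # flip z_i to 1 when that choice mismatches gamma on more than half
        mismatches_if_zero = int(np.sum(fixed ^ gamma[lo:hi]))
        Z[i - 1] = 1 if mismatches_if_zero > (hi - lo) / 2 else 0
    U = (eps @ Z + gamma) % 2        # epsilon Z + gamma = U
    return Z, U
```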
3. Algorithm performance evaluation In this section, we present the complexity of Algorithm 1 and analyze its approximation guarantees.

Proposition 3.1. Algorithm 1 has complexity O(n^3); if the fundamental solution set η of the system AU = B has already been obtained and is in column echelon form, the complexity reduces to O(mn).

Proof. In Algorithm 1, step 1 solves a system of linear equations, which has complexity O(n^3). Step 8 transforms the matrix η into column echelon form, which has complexity O(m^2 n), where m ≤ n. Steps 2 to 7 take O(1). Step 9 computes the locations of the pivots in the column echelon matrix ϵ, which takes O(mn). Steps 10 to 24 form a nested loop with three layers; however, each element of the matrix ϵ is accessed only once, so their total complexity is O(mn).

Proposition 3.2. If a given instance I of the min generalized all-ones problem has a solution, the value sol of the solution obtained by Algorithm 1 satisfies sol ≤ r, where r is the rank of the matrix A.

Proof. In Equations 6 and 7, if δ_i = γ_i, then the resulting u_i is 0. In each part, the greedy choice makes at least half of the u_i equal to 0, so each part has at least one u_i equal to 0. Furthermore, the rank of η is m = n − r because η is the fundamental solution set of the system AU = B. Therefore, at least m of the u_i are 0, so $\sum U \le n - m = r$.

Proposition 3.3. If a given instance I of the min generalized all-ones problem has a solution, the value sol of the solution obtained by Algorithm 1 satisfies sol ≤ (n + opt)/2, where opt is the value of an optimal solution of I.

Proof. In Subsection 2.2, we partitioned the matrix η into m + 1 parts and proved that, for each part j ∈ {1, ..., m}, $\sum_{i=k_{j-1}+1}^{k_j} u_i \le (k_j - k_{j-1})/2$. Only part 0 remains to be discussed. Part 0 is special in that it contains no variables; its rows differ only in the value of γ. Let g_0 be the number of 0's and g_1 the number of 1's among the entries of γ in part 0. A 0 indicates that the switch at that vertex must not be pressed, for otherwise the conditions of the all-ones problem cannot be satisfied; similarly, a 1 indicates that the switch must be pressed. Hence we have

$$\mathrm{sol}\le g_1+(n-g_1-g_0)/2=(n+g_1-g_0)/2\qquad(8)$$

Now bring in the parameter opt. Clearly g_1 ≤ opt ≤ sol, because the g_1 switches of part 0 must be pressed in any solution, so

$$g_1\le\mathrm{opt}\le\mathrm{sol}\le(n+g_1-g_0)/2\qquad(9)$$

Bounding sol by replacing g_1 with opt and g_0 with 0 yields

$$\mathrm{sol}\le(n+\mathrm{opt})/2\qquad(10)$$
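For completeness, an end-to-end driver mirroring Algorithm 1 might look as follows. This is an illustrative sketch that reuses the earlier solve_gf2 and greedy_Z sketches and assumes two hypothetical helpers, matrix_echelon and calculate_part, standing in for steps 8-9.

```python
import numpy as np

def min_all_ones(A, B):
    # Illustrative driver for Algorithm 1; not the authors' code.
    gamma, eta = solve_gf2(A, B)
    if gamma is None:
        return None                   # steps 2-4: no solution at all
    if len(eta) == 0:
        return gamma                  # steps 5-7: m == 0, unique solution
    eta = np.stack(eta, axis=1)       # n x m matrix of basis vectors
    P, eps, _Q = matrix_echelon(eta)  # hypothetical helper (step 8):
                                      # eps = P eta Q in column echelon form
    K = calculate_part(eps)           # hypothetical helper (step 9)
    gamma_p = P @ gamma % 2           # permute gamma consistently with eps
    _, U_p = greedy_Z(eps, gamma_p, K)
    return P.T @ U_p % 2              # step 25: undo the row permutation
```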
4. Conclusion and future work This article presents an approximation algorithm for the min generalized all-ones problem on arbitrary graphs, making it possible to process the problem in batches. The algorithm has complexity O(n^3); if the equation system AU = B has already been solved and the solution is in column echelon form, the complexity reduces to O(n(n − r)), which is the lowest complexity for general graphs. The value sol of the solution obtained by the algorithm satisfies sol ≤ (n + opt)/2 and sol ≤ r; as illustrated in Figure 1, this guarantees that the obtained solution always lies in the better half of the range of possible values. [Figure 1: The range of possible values for sol.] In future work, two questions remain to be solved. One is whether there is a polynomial-time algorithm for the min generalized all-ones problem which always finds a solution of size at most c · opt for some constant c. The other is whether we can obtain such an algorithm for the minimum all-colors problem.",
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2404.08155v1", |
| "title": "Graph Integrated Language Transformers for Next Action Prediction in Complex Phone Calls", |
| "abstract": "Current Conversational AI systems employ different machine learning\npipelines, as well as external knowledge sources and business logic to predict\nthe next action. Maintaining various components in dialogue managers' pipeline\nadds complexity in expansion and updates, increases processing time, and causes\nadditive noise through the pipeline that can lead to incorrect next action\nprediction. This paper investigates graph integration into language\ntransformers to improve understanding the relationships between humans'\nutterances, previous, and next actions without the dependency on external\nsources or components. Experimental analyses on real calls indicate that the\nproposed Graph Integrated Language Transformer models can achieve higher\nperformance compared to other production level conversational AI systems in\ndriving interactive calls with human users in real-world settings.", |
| "authors": "Amin Hosseiny Marani, Ulie Schnaithmann, Youngseo Son, Akil Iyer, Manas Paldhe, Arushi Raghuvanshi", |
| "published": "2024-04-11", |
| "updated": "2024-04-11", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Building and maintaining complex production qual- ity conversational systems has been an ongoing challenge in industry. One approach to solve com- plex conversational tasks such as outbound call automation, is to use a dialogue manager (Paek and Pieraccini, 2008; Teixeira et al., 2021) to encode business logic. Conversational systems which use dialogue managers have multiple com- ponents which consist of Natural Language Under- standing (NLU) (Bocklisch et al., 2017), dialogue state tracking (Mannekote, 2023), next action pre- diction (Mannekote, 2023), and response genera- tion (Weston et al., 2022; He et al., 2018). Figure 1 describes the process of call automation systems with the aforementioned components. Handling next action prediction is one of the critical tasks dialogue managers take care of, as it affects the response generation directly (David, 2017). Next action prediction is the process of ana- lyzing human utterance, current and previous state Figure 1: A schematic visualization of dialogue man- agers\u2019 components which utilize an NLU pipeline of models to extract intents and fill slots from human utter- ances, and predict the next action based on the current and previous state. Finally, the system generates an ut- terance to respond to the human users (e.g., using LLMs or predefined templates). of the conversation (i.e., dialogue state tracking) and deciding which action to take, which in many industry settings is returning a specific response template. Figure 2 demonstrates an example of a dialogue manager based conversation automa- tion as a visual navigation assistant for multiple dialogue turns. Recently, there has been significant progress in the field of Generative AI and Large Language Models (LLMs) for end-to-end conversational sys- tems which alleviate the need for manually engi- neered dialogue managers (Mannekote, 2023; Snell et al., 2022). However, they sometimes have issues with hallucinations and can underperform in do- main specific, targeted conversations such as those that require knowledge graph retrieval (Dziri et al., 2021; Ji et al., 2023). In most industry settings, templates are used with action prediction to generate the response. By predicting an action, we are determining which re- sponse template(s) to return to the user (Mannekote, 2023; Qiu et al., 2022; Urbanek et al., 2019). Ac- tion prediction using response templates instead of arXiv:2404.08155v1 [cs.CL] 11 Apr 2024 Figure 2: An example of Visual Navigation Assistant as a dialogue manager. At each time-step t, the dialogue manager extracts the entities such as slots and intents from human utterance ut (i.e., green rectangles on the left side) and predicts the next action at (i.e., red rectan- gles). Using the predicted action (i.e., blue rectangles) the dialogue manager generates a system response (i.e., green rectangles on the right side). language generation helps prevent hallucinations, adds necessary guardrails for some industry set- tings, and keeps latency low. To solve the next action prediction problem, different NLP methods from traditional symbolic AI techniques such as Knowledge Graph mod- els (He et al., 2017; de Vries et al., 2018), to more modern transformer based techniques (e.g., Zhou et al., 2023) have been introduced; however, two main challenges still persist. 
1) A majority of prior work depends on Slot-Filling (SF) and Intent-Classification (IC) techniques to extract dependencies and relies on external sources (i.e., knowledge- or rule-based approaches) to find the relationship between the extracted information and actions (Mannekote, 2023; David, 2017). Instability in detecting SF and IC causes incorrect next action prediction. 2) Many conversational systems handle grounding poorly (David, 2017; Weston et al., 2018; Sutskever et al., 2014); this is when users' responses differ from expected inputs (e.g., referring to a previous point in the conversation, moving backwards to change a previous response, or no action-related slots being detected). For example, in Figure 2, the human user sets a new ground by mentioning the elevator instead of their location. This information may be slightly different from what a next action prediction model expects and can respond to. Lack of grounding in a conversation, and more specifically in a model, may result in misunderstanding (David, 2017) and can damage the conversation. This paper introduces an approach to predict the next action without any dependency on information extraction (i.e., SF and IC) or external resources (the proposed model is trained using external resources but does not need any external resources after training), such as ontology-based (e.g., Altinok, 2018) or knowledge-base (e.g., Vizcarra and Jokinen, 2022) approaches. The proposed models, Graph Integrated Language Transformers, learn co-occurrences of actions and human utterances through a graph component (i.e., a Graph Neural Network or a graph embedding layer) and combine it with language transformers to add language understanding in production settings. The model is trained on conversations that followed a Standard Operating Procedure (SOP; a document which defines a set of guideline instructions for diverse situations during the conversations) without the need for explicit encoding, and can be trained on any similar dataset that has inherent action-to-action relationships. The list below summarizes the contributions of this paper. • Integrating graph information and combining it with language transformers to remove the dependency on NLU pipelines. • Adding a graph component (i.e., a history of action co-occurrence) to language transformers to predict the next action as one atomic task, while also overcoming the token limit by removing the need to keep prior dialogue history. • Evaluating the proposed next action prediction model in a production setting against a system that relies on an NLU pipeline with an explicitly defined dialogue manager (the DM system), in Appendix A. To examine the performance and robustness of the proposed models in real-world settings with noisy input, the evaluation is done in a production setting and goes beyond classification metrics: it includes industry-critical factors such as the human experience of using the conversational system and considers real-time constraints such as the latency of output generation.",
| "main_content": "Next action prediction approaches can be categorized in three chief groups. First, structured1The proposed model is trained using external resources but does not need any external resources after training. 2SOP is a document which defines a set of guideline instructions for diverse situations during the conversations. based approaches that consider sequential relationships between previous actions, other actions, and their requirements. These approaches assume that the current state (i.e., the previous action) is known (Henderson, 2015). On the one hand, local structure-based approaches such as Question & Answer systems (Reshmi and Balakrishnan, 2016) consider local adjacency of the actions, utterance features, and next potential actions. On the other hand, global structured-based approaches define problem space using dialoguegrammars or finite-state networks (David, 2017; Wollny et al., 2021). However, none of structuredbased approaches provide the ability to train a model and they require expert to design them (Henderson, 2015). The second group of next action prediction approaches are principle-based. These techniques choose next actions based on the filled information rather than sequential order between actions, thus behaving both locally and globally (David, 2017). Slot-filling (SF) and Intent-classification (IC) based techniques (i.e., joined or separate components) are common principle based approaches (Louvan and Magnini, 2020). Recently, neural models including RNNs and Language Transformers which act solely on input are receiving more attention for SF-IC based techniques (Goo et al., 2018; Chen et al., 2019; Zhang and Wang, 2022). These methods are mainly using dialogue history alongside additional information such as schema of the task (e.g., \u201chotel booking\u201d or \u201cscheduling a doctor\u2019s appointment\u201d) e.g., using embedding layers with or without attention layers fused with a language transformer (e.g., Mosig et al., 2020; Mehri and Eskenazi, 2021; Zhang et al., 2021). However, most of these language transformer based techniques were only evaluated on datasets with low number of actions, 10 or less (Mosig et al., 2020; Rastogi et al., 2020), or perform poorly on larger number of actions (i.e., 30 actions) for one top output selection (Chen et al., 2021). 3 Methodology This section discusses the problem definition of the next action prediction task (i.e., Section 3.1), and introduces the proposed models (i.e., Section 3.2). 3.1 Problem Definition A next action prediction model chooses an action at given Uk:t and Zk:t\u22121 at time t in which U is the set of all utterances from time k (i.e., k \u22650) to time t, and Z is the set of all previously predicted acts. Equation 1 formulates the process of next action prediction. In this equation, f denotes any function (e.g., machine learning model or a probabilistic matching technique) that can map thereof inputs to the next action. at = f([Uk:t, Zk:t\u22121]) s.t. 0 \u2264k \u2264t \u22121 (1) Different techniques approach next action prediction differently. Some techniques rely on feature extraction from utterances (i.e., Uk:t) using NLU techniques (e.g., intents or slots in NLU pipeline of Figure 1); in those cases Ut in Equation 1 becomes utterance and all those extracted features at time t. However, this paper proposes a method that relies only on the very last human utterance and previous actions in Section 3.2. 
3.2 Graph Integrated Language Transformers This paper proposes a graph-integrated approach to employ the rich information of graph-like structures discussed in Section 2 (e.g., SOPs, graphs, or rule knowledge bases) and combine it with language transformers. Two techniques are proposed in this section, each combining language transformers with 1) Graph Neural Networks (GNN) to explicitly encode the graph of actions and other features (GNN-LT), or 2) a graph embedding layer to learn co-occurrences of action history, the Graph-aware Language Transformer (GaLT). Both models additionally use language transformers such as BERT (Devlin et al., 2018), DistilBERT (Sanh et al., 2019), or RoBERTa (Liu et al., 2019) to add language understanding (Devlin et al., 2018) to next action prediction. The GNN-LT model is fed past actions as nodes and features of the nodes' connections as edges (i.e., the order of the connections, slots, and the embedding of the utterance) using a Graph Attention Network (Yun et al., 2019); thus, GNN-LT explicitly integrates the graph knowledge, including the order of the actions and their connections. GaLT employs a graph embedding layer that encodes past actions as node labels directly, without the past action names or utterances; it therefore implicitly adds the ability to learn the co-occurring utterances and actions without the need to explicitly enforce graph constraints (i.e., actions as nodes, filled slots or other features as edges). Additionally, GaLT has fewer training parameters (e.g., 66M DistilBERT + 1M fusion and fully connected layer = 67M in total) in comparison to GNN-LT (e.g., 66M DistilBERT + 12M Graphormer small (Yun et al., 2019) + 1M fusion and fully connected layer = 79M in total); therefore, GaLT requires less training time and performs much faster in inference. The language transformer is fed the human utterance alongside the history of actions to implicitly learn the co-occurrence between human responses and follow-up actions taken by the system. Additionally, the language transformer is pre-trained on a much larger dataset of full dialogue turns to learn the context of the utterances and their co-occurring actions. As the dialogue history is removed from the graph-integrated language transformer training process, the model is incentivised to focus on action co-occurrences and sequences as graph nodes rather than the dialogue history surrounding them. Keeping only actions as the history of the dialogues (both in the language transformer and in the graph component) removes the dependency on the NLU pipeline (discussed in Sections 1 and 2) and the need to keep the dialogue turns' utterances, thus improving prediction speed and satisfying the language transformer token limit, e.g., 512 for DistilBERT (Sanh et al., 2019; Devlin et al., 2018). Due to the simplicity of the model, real-time inference requirements are still met. Figure 3 shows a schematic of the proposed models. A fusion layer combines the language transformer and graph component features using Equations 2-4. First, Equation 2 computes the mean of the hidden features from the language transformer, and Equation 3 computes the features of the graph component. Here, W and b are trainable parameters, O is the output of a layer, and l and g denote the language transformer and the graph component. The fused features are then fed into a fully connected layer to predict the next action. Equation 4 fuses the hidden features of both layers and produces a probability distribution via the Softmax activation; the next action is picked from the list of all actions according to the computed Softmax output. While there is a variety of fusion techniques (e.g., concatenation, dot-product techniques, or summation techniques), Equation 4 uses ⊗, since GaLT and GNN-LT reach their highest performance via pairwise dot-product fusion.

$H_l = \mathrm{GELU}(W_l\,\mathrm{mean}(O_l) + b_l)$  (2)
$H_g = \mathrm{GELU}(W_g O_g + b_g)$  (3)
$H_f = \mathrm{Softmax}(W_f(H_l \otimes H_g) + b_f)$  (4)
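As an illustration of Equations 2-4 above, here is a minimal PyTorch sketch of the fusion head (our reading, not the authors' released code; the class name, dimensions, and the elementwise-product interpretation of ⊗ are assumptions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionHead(nn.Module):
    # Fuse transformer features O_l with graph features O_g (Eqs. 2-4).
    def __init__(self, d_lang, d_graph, d_hidden, n_actions):
        super().__init__()
        self.W_l = nn.Linear(d_lang, d_hidden)     # Eq. 2
        self.W_g = nn.Linear(d_graph, d_hidden)    # Eq. 3
        self.W_f = nn.Linear(d_hidden, n_actions)  # Eq. 4

    def forward(self, O_l, O_g):
        # O_l: (batch, seq_len, d_lang) token features from the transformer;
        # O_g: (batch, d_graph) pooled features from the graph component.
        H_l = F.gelu(self.W_l(O_l.mean(dim=1)))    # Eq. 2: mean over tokens
        H_g = F.gelu(self.W_g(O_g))                # Eq. 3
        fused = H_l * H_g   # one reading of the pairwise dot-product fusion
        return F.softmax(self.W_f(fused), dim=-1)  # Eq. 4

# e.g., FusionHead(768, 128, 256, 80)(torch.randn(4, 32, 768),
#                                     torch.randn(4, 128))  ->  (4, 80) probs
```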
4 Experimental Setup and Results This section describes the process of collecting data for training the models, compares the trained models on classification metrics (i.e., F1), and evaluates the proposed models as well as the DM system (the current production system handling the call automation at the time is called the DM system throughout this paper), explained in detail in Appendix A, using a human-centered approach. 4.1 Data, Configurations, and Training To integrate the graph information into the GNN-LT and GaLT models, this work utilizes conversational data which follows a Standard Operating Procedure (SOP). These conversations were guided by a human expert or by the DM system, which employs a human-defined SOP. The SOP is a graph-like structure with actions as nodes and their connections to next actions based on filled slots, which has been carefully translated into dialogue manager logic; Appendix B discusses the SOP in more detail. However, the proposed Graph Integrated Language Transformers were not trained on the SOP explicitly: GaLT and GNN-LT were trained on the data that human experts and the DM system collected and generated from the SOP. To evaluate the proposed models, dialogue turns of phone calls between human-AI and human-human were collected from June to August 2023. The next action for each human dialogue turn was decided and labeled by the DM system with human-in-the-loop supervision. Human domain experts intervened in calls that might fail; the intervention varied from correcting the collected data (e.g., spelling mistakes) to driving the calls in severe cases. To generate a reliable dataset, a team of human experts classified each conversation as successful or unsuccessful at the call level, rather than labeling and reviewing each dialogue turn, due to financial reasons and limited human resources. For the same reason, all dialogue turns of a call were added to the dataset if the call was considered successful (i.e., if the model managed to prompt the human user to give all the required information), and dropped otherwise. [Figure 3: The architecture of the GaLT model (left) and GNN-LT (right). GaLT is fed the action history as a graph embedding, and GNN-LT is fed actions as nodes and utterance features as edges; each model is then fused with a language transformer. L1 denotes the number of layers in the GNN and L2 the number of layers of the language transformer.] That resulted in ∼1M records, each including one human utterance and one system response. In addition to selecting successful calls, a pre-processing step (described in Appendix C) was devised to remove undesirable dialogue turns, calls, or actions, e.g., deprecated actions and the remainder of such calls, to avoid incorrect connections between actions. This process led to ∼600K remaining dialogue turns.
Despite filtering out ∼400k dialogue turns, the language transformers were initially pre-trained on all dialogue turns (i.e., ∼1M) using Masked Language Modeling (MLM) (Devlin et al., 2018) and then fine-tuned on the ∼600K selected dataset for the next action prediction task. The dataset was randomly split 80%-10%-10% into training, validation, and test sets. Appendix D summarizes the details of the dataset, and Appendices E and F list the system configurations and the proposed models' hyper-parameters for training and testing. 4.2 Classification Performance Comparison This section evaluates the proposed models and other techniques using an offline classification evaluation. The process evaluates each technique's performance at the turn level: predicting the next action given a human user's utterance and the previous actions or dialogue history. To measure performance, the F1 score was computed for each model on the test set described in Section 4.1. Table 1 compares the proposed models with other techniques. The dataset, described in Appendix D, consists of 80 next actions (i.e., classes) of imbalanced frequency; thus F1-Macro was calculated alongside F1-Weighted. The results suggest that stand-alone models (i.e., language transformers or GNNs) and prompt-based large language models are not able to predict the next action with high performance (i.e., lower F1-Macro). (This paper also evaluated a prompting-only approach using Llama2 (https://ai.meta.com/llama) on the same task and dataset; however, the results are not reported due to poor performance in comparison with the other models. The prompt used to generate outputs, as well as the results, are discussed in Appendix G.) Moreover, the table shows that adding the graph embedding of actions in GaLT improves F1 for next action prediction more than combining complex GNN models does. GaLT can also reach its high performance with as little as 60K dialogue turns (i.e., 10% of the data size), as described in Appendix H. 4.3 Human-Centered Evaluation This section evaluates the best performing model, GaLT, against the DM system using a human-centered approach, since the desired outcome of a call can be achieved through various paths and need not be strictly tied to one correct next action (i.e., what was done in Section 4.2). Put precisely, more than one next action can be considered a correct prediction given the recent actions and the current utterance. To compare GaLT with the DM system, human assessors acted the "role" of the agent receiving outbound calls; they were familiar with the call structure and the expected outcome of the calls. Two different evaluation approaches were designed: objective product-level and subjective human rating. Additionally, to test the generalizability and robustness of the compared models, three call difficulty levels were defined: easy, medium, and hard (i.e., Table 7). As the call difficulty level increases, human utterances and the provided information get more complex (e.g., mumbling or updating a piece of information).
The experimental setup and metrics are described in more detail in Appendix I.

Table 1: Summary of the offline classification evaluation across different techniques with respect to F1. Four categories of models are listed: language transformers (e.g., BERT) with dialogue history or detected filled slots, language transformers with the last utterance and a recent history of actions (e.g., the 5 or 10 last actions), GNN models, and graph-integrated language transformers (GNN- or graph-embedding-based). Underlined values show the best performance for each metric (column).

Model | F1-Weighted | F1-Macro
BERT w/ dialogue history (Mosig et al., 2020) | 0.58 | 0.38
BERT w/ SF (Zhang et al., 2021) | 0.79 | 0.44
BERT w/ action history | 0.80 | 0.63
DistilBERT w/ action history | 0.82 | 0.69
RoBERTa w/ action history | 0.78 | 0.60
GNN (Yun et al., 2019) | 0.72 | 0.52
(sub)*GNN (Yun et al., 2019) | 0.72 | 0.51
GNN-LT (DistilBERT) | 0.84 | 0.72
(sub)*GNN-LT (DistilBERT) | 0.84 | 0.72
GaLT | 0.84 | 0.75
*sub-GNN models are fed only recent actions (e.g., the last 5 or 10).

Production Level Metrics. Table 2 shows that the proposed model outperforms the DM system on both the field number (i.e., how much information the call collected) and the panel number (the panel number indicates the progress a model has made toward finishing a call; panels 0, 1, 2, 3, 4, and E2E indicate 0%, 20%, 40%, 60%, 80%, and 100% progress of a call, respectively). T-test statistics suggest that the comparisons are significant for the medium level as well as for all levels combined (i.e., the '.' and '*' symbols for each pair in Table 2). In addition to the panel number, finishing a call successfully (e.g., collecting all information, or without the human user hanging up) is another important metric (i.e., the E2E metric); GaLT improved E2E, the number of successfully finished calls, by +31.92% (Appendix J shows an extensive comparison).

Table 2: Comparing the DM system and the proposed model on product-level metrics, the number of fields and panels, across difficulty levels. T-test results are shown as stars ('*') or dots ('.'): . p<0.1, * p<0.05, ** p<0.01, *** p<0.001.

Difficulty | #Fields mean (std): DM system | #Fields mean (std): Proposed | #Panels mean (std): DM system | #Panels mean (std): Proposed
Easy | 23.1 (6.59) | 25.35 (6.19) | 3.85 (0.65) | 4.0 (0.0)
Medium | 18.36 (9.23). | 23.3 (4.45). | 3.05 (1.39)** | 4.0 (0.0)**
Hard | 18.25 (5.49) | 21.44 (5.98) | 3.63 (0.99) | 3.66 (0.94)
Total | 20.36 (7.97)* | 23.79 (5.70)* | 3.48 (1.13)* | 3.93 (0.42)*

Subjective Human Evaluation. Additionally, human agents (i.e., the human users who interacted with the models) and reviewers rated each call after finishing it, as described in Appendix I, on a 5-point Likert scale. The GaLT model received a higher average rating of 2.91 (std = 1.15) in comparison to an average rating of 2.78 (std = 1.42) for the DM system. Comparing the numbers of positive and negative ratings for each model shows that both models received almost the same number of positive ratings, but the DM system received a higher number of negative ratings; in other words, the human assessors rated the proposed model as more robust. A deeper investigation across difficulty levels is discussed in Appendix K.
5 Conclusion This paper proposes the Graph Integrated Language Transformers technique to improve next action prediction performance, resolving the dependency on slot-filling and intent-classification techniques and the grounding issue (Mannekote, 2023). The analyses indicate that keeping the action history, with the order of the actions, in a graph embedding layer and combining it with language transformers generates higher-quality outputs than more complex models that include the connection details of actions (i.e., GNNs including the connection details through edges). The proposed models improve next action prediction with respect to F1 as well as product-level metrics and human-centered evaluation. They improve robustness in next action prediction (e.g., fewer unexpected results or getting stuck in a loop) in comparison to other techniques and handle complex tasks better than the DM system in long, noisy phone calls. Additionally, the proposed models can reach a high performance level with as little as 60K dialogue turns. We hope future research can employ a similar method combined with generative AI models to extract the information from human utterances as well as to generate custom responses, automating calls without dependency on other components. Limitations Although the proposed models can reach high performance with as little as 60K dialogue turns, they need re-training or fine-tuning for any new application in a new domain, or even with the slightest changes, e.g., adding or removing even one action. Moreover, similar to other neural models, graph-integrated language transformers lack interpretability and may show instability (e.g., predicting an action that has no relationship to the dialogue history). In addition to these limitations, the evaluation could benefit from further investigation: this paper recruited human agents and employees who were familiar with the DM system, which can lead to a biased assessment and is perhaps the source of the inconsistency between the subjective human ratings and the product-level metrics. Finally, there are next steps to further evaluate graph injection with additional third-party GenAI prompt-based models; the ability to use certain third-party systems was limited at the time of evaluation due to the requirement for this healthcare dataset to stay HIPAA compliant."
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.11741v1", |
| "title": "To Store or Not to Store: a graph theoretical approach for Dataset Versioning", |
| "abstract": "In this work, we study the cost efficient data versioning problem, where the\ngoal is to optimize the storage and reconstruction (retrieval) costs of data\nversions, given a graph of datasets as nodes and edges capturing edit/delta\ninformation. One central variant we study is MinSum Retrieval (MSR) where the\ngoal is to minimize the total retrieval costs, while keeping the storage costs\nbounded. This problem (along with its variants) was introduced by Bhattacherjee\net al. [VLDB'15]. While such problems are frequently encountered in\ncollaborative tools (e.g., version control systems and data analysis\npipelines), to the best of our knowledge, no existing research studies the\ntheoretical aspects of these problems.\n We establish that the currently best-known heuristic, LMG, can perform\narbitrarily badly in a simple worst case. Moreover, we show that it is hard to\nget $o(n)$-approximation for MSR on general graphs even if we relax the storage\nconstraints by an $O(\\log n)$ factor. Similar hardness results are shown for\nother variants. Meanwhile, we propose poly-time approximation schemes for\ntree-like graphs, motivated by the fact that the graphs arising in practice\nfrom typical edit operations are often not arbitrary. As version graphs\ntypically have low treewidth, we further develop new algorithms for bounded\ntreewidth graphs.\n Furthermore, we propose two new heuristics and evaluate them empirically.\nFirst, we extend LMG by considering more potential ``moves'', to propose a new\nheuristic LMG-All. LMG-All consistently outperforms LMG while having comparable\nrun time on a wide variety of datasets, i.e., version graphs. Secondly, we\napply our tree algorithms on the minimum-storage arborescence of an instance,\nyielding algorithms that are qualitatively better than all previous heuristics\nfor MSR, as well as for another variant BoundedMin Retrieval (BMR).", |
| "authors": "Anxin Guo, Jingwei Li, Pattara Sukprasert, Samir Khuller, Amol Deshpande, Koyel Mukherjee", |
| "published": "2024-02-18", |
| "updated": "2024-02-18", |
| "primary_cat": "cs.DS", |
| "cats": [ |
| "cs.DS", |
| "cs.CC", |
| "cs.DB", |
| "cs.DC" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "The management and storage of data versions has become increasingly important. As an example, the increasing usage of online collaboration tools allows many collaborators to edit an original dataset simultaneously, producing multiple versions of datasets to be stored daily. Large number of dataset versions also occur often in industry data lakes [71] where huge tabular datasets like product catalogs might require a few records (or rows) to be modified periodically, resulting in a new version for each such modification. Furthermore, in Deep Learning pipelines, multiple versions are generated from the same original data for training and insight generation. At the scale of terabytes or even petabytes, storing and managing all the versions is extremely costly in the aforementioned situations [69]. Therefore, it is no surprise that data version control is emerging as a hot area in the industry [1, 2, 3, 4, 5, 6], and even popular cloud solution providers like Databricks are now capturing data lineage information, which helps in effective data version management [73]. In a pioneering paper, Bhattacherjee et al. [15] proposed a model capturing the trade-off between storage cost and retrieval (recreation) cost. The problems they studied can be defined as follows. Given dataset versions and a subset of the \u201cdeltas\u201d between them, find a compact representation that minimizes the overall storage as well as the retrieval costs of the versions. This involves a decision for each version: either we materialize it (i.e., store it explicitly) or we store a \u201cdelta\u201d and rely on edit operations to retrieve the version from another materialized version if necessary. The downside of the latter is that, to retrieve a version that was not materialized, we have to incur a computational overhead that we call retrieval cost. Figure 1, taken from Bhattacherjee et al.[15], illustrates the central point through different storage options. (i) shows the input graph, with annotated storage and retrieval costs . If the storage size is not a concern, we should store all versions as in (ii). From (iii) to (iv), it is clear that, by materializing v3, we shorten the retrieval costs of v3 and v5. This retrieval/storage trade-off leads to combinatorial problems of minimizing one type of cost, given a constraint on the other. Moreover, as an objective function, the retrieval cost can be measured by either the maximum or total (or equivalently average) retrieval cost of files. This yields four different optimization problems (Problems 3-6 in Table 1). While the first two problems in the table are easy, the other four turn out to be NP-hard and hard to approximate, as we will soon discuss. Problem Name Storage Retrieval Minimum Spanning Tree min R(v) < \u221e, \u2200v Shortest Path Tree < \u221e min {maxv R(v)} MinSum Retrieval (MSR) \u2264S min{P v R(v)} MinMax Retrieval (MMR) \u2264S min {maxv R(v)} BoundedSum Retrieval (BSR) min P v R(v) \u2264R BoundedMax Retrieval (BMR) min maxv R(v) \u2264R Table 1 Problems 1-6. Here, R(v) is the retrieval cost of version v, while R, S are the retrieval and storage constraints, respectively. A. Guo, J. Li, P. Sukprasert, S. Khuller, A. Deshpande, and K. 
[Figure 1: (i) A version graph over 5 datasets, where an annotation ⟨a, b⟩ indicates a storage cost of a and a retrieval cost of b; (ii, iii, iv) three possible storage graphs. The figure is taken from [15].]

There are some follow-up works on this model [28, 46, 85]. However, those either formulate new problems for different use cases [28, 46, 67] or implement a system incorporating the feature to store specific versions and deltas [46, 74, 82]. We discuss this in more detail in Section 1.2.

1.1 Our Contributions We provide the first set of approximation algorithms and inapproximability results for the aforementioned optimization problems under various conditions. Our theoretical results also give rise to practical algorithms which perform very well on real-world data.

Table 2: Hardness results. The assumptions in the first row are from previous work [15] and simplify the problem; we note that our algorithms function even without these assumptions. The Ω(n) bound holds even if we relax S by O(log n).

Problem | Graph type | Assumptions | Inapproximability
MSR | arborescence | triangle inequality, r = s on edges | 1
MSR | undirected | — | 1 + 1/e − ε
MSR | general | — | Ω(n)
MMR | undirected | — | 2 − ε
MMR | general | — | log* n − ω(1)
BSR | arborescence | — | 1
BSR | undirected | — | (1/2 − ε) log n
BMR | undirected | — | (1 − ε) log n

Table 3: Algorithms summary.

Graphs | Problems | Algorithm | Approx.
General digraph | MSR | LMG-All | heuristic
Bounded treewidth | MSR & MMR | DP-BTW | 1 + ε
Bounded treewidth | BSR & BMR | DP-BTW | (1, 1 + ε)
Bidirectional tree | MMR & BMR | DP-BMR | exact

MMR and BMR. In Section 3 we prove that it is hard to approximate MMR within a log* n factor (log* n is the iterated logarithm, defined as the number of times we iteratively take the logarithm before the result is at most 1) and BMR within a log n factor on general inputs. Meanwhile, in Section 4 we give a polynomial-time dynamic programming (DP) algorithm for the two problems on bidirectional trees, i.e., digraphs whose underlying undirected graph (the undirected graph formed by disregarding the orientations of the edges) is a tree. These inputs capture the cases where new versions are generated via edit operations. We also briefly describe an FPTAS (defined below) for MMR, analogous to the main result for MSR in Section 5.

MSR and BSR. In Section 3 we prove that it is hard to design an (O(n), O(log n))-bicriteria approximation for MSR or an O(log n)-approximation for BSR; an (α, β)-bicriteria approximation refers to an algorithm that potentially exceeds the constraint by a factor of α in order to achieve a β-approximation of the objective (see Section 2 for an example). It is also NP-hard to solve the two problems exactly on trees. On the other hand, we again use DP to design a fully polynomial-time approximation scheme (FPTAS) for MSR on bounded treewidth graphs. These inputs capture many practical settings: bidirectional trees have width 1, series-parallel graphs have width 2, and the GitHub repositories we use in Section 7 all have low treewidth (datasharing, styleguide, and leetcode have treewidth 2, 3, and 6, respectively).

New Heuristics. We improve LMG into a more general LMG-All algorithm for solving MSR. LMG-All outperforms LMG in all our experiments and runs faster than LMG on sparse graphs. Inspired by our algorithms on trees, we also propose two DP heuristics for MSR and BMR. Both algorithms perform extremely well even when the input graph is not tree-like. Moreover, there are known procedures for parallelizing general DP algorithms [79], so our new heuristics are potentially more practical than previous ones, which are all sequential.
1.2 Related Works

1.2.1 Theory

There has been little theoretical analysis of the exact problems we study. The optimization problems were first formalized in Bhattacherjee et al. [15], which also compared the effectiveness of several proposed heuristics on both real-world and synthetic data. Zhang et al. [85] followed up by considering a new objective that is a weighted sum of the objectives in MSR and MMR. They also modified the heuristics to fit this objective.

There are similar concepts in the literature, including the Light Approximate Shortest-path Tree (LAST) [55] and the Shallow-Light Tree (SLT) [44, 45, 54, 59, 68, 72]. However, this line of work focuses mainly on undirected graphs, and the algorithms do not generalize to the directed case. Of the two, SLT is the more closely related to MMR and BMR: the goal is to find a tree that is light (minimum weight) and shallow (bounded depth). To our knowledge, there are only two works that give approximation algorithms for directed shallow-light trees. Chimani and Spoerhase [25] give a bicriteria $(1+\epsilon, n^\epsilon)$-approximation algorithm that runs in polynomial time. Recently, Ghuge and Nagarajan [41] gave an $O(\frac{\log n}{\log\log n})$-approximation algorithm for submodular tree orienteering that runs in quasi-polynomial time. Their algorithm can be adapted into an $O(\frac{\log^2 n}{\log\log n})$-approximation for BMR. For MSR, their algorithm gives an $\big(O(\frac{\log^2 n}{\log\log n}), O(\frac{\log^2 n}{\log\log n})\big)$-approximation. The idea is to run their algorithm for many rounds, where the objective of each round is to cover as many nodes as possible.

1.2.2 Systems

To implement a system captured by our problems, components spanning multiple lines of work are required. For example, to obtain a graph structure, one has to keep track of the history of changes. This is related to the topic of data provenance [21, 77]. Given a graph structure, the question of modeling "deltas" is also of interest. There is a line of work dedicated to studying how to implement diff algorithms in different contexts [22, 47, 65, 80, 83]. In the more flexible case, one may think of creating deltas without access to the change history. However, computing all possible deltas is too wasteful, so it is necessary to use other approaches to identify similar versions/datasets. This line of work is known as dataset discovery or dataset similarity [18, 19, 35, 50, 71].

Several follow-up works of Bhattacherjee et al. [15] have implemented systems with a feature that saves only selected versions to reduce redundancy. There are works focusing on version control for relational databases [14, 20, 23, 46, 66, 74, 75, 82] and works focusing on graph snapshots [56, 67, 84]. However, since their focus was on designing full-fledged systems, the algorithms they proposed are rather simple heuristics with no theoretical guarantees.
1.2.3 Use Cases

In a version control system such as git, our problem is similar to what the git pack command aims to do.⁸ The original heuristic for git pack, as described in an IRC log, is to sort objects in a particular order and only create deltas between objects in the same window.⁹ Bhattacherjee et al. [15] showed that git's heuristic does not work well compared to other methods. SVN, on the other hand, only stores the most recent version and the deltas to past versions [70]. Other existing data version management systems include [2, 3, 4, 5, 6], which offer git-like capabilities suited to different use cases, such as data science pipelines in enterprise settings, machine-learning workflows, data lake storage, graph visualization, etc.

Footnote 8: https://www.git-scm.com/docs/git-pack-objects
Footnote 9: https://github.com/git/git/blob/master/Documentation/technical/pack-heuristics.txt

Though not directly related to our work, there has recently been a lot of work exploring algorithmic and systems optimizations for reducing the storage and maintenance costs of data. For example, Mukherjee et al. [69] propose optimal multi-tiering, compression, and data partitioning, along with predicting access patterns for the same. Other works exploit multi-tiering to optimize performance, e.g., [24, 29, 30, 60], and/or costs, e.g., [24, 32, 57, 63, 64, 76]. Storage and data placement in a workload-aware manner, e.g., [7, 8, 24], and in a device-aware manner, e.g., [61, 62, 81], have also been explored. [29] combine compression and multi-tiering to optimize latency.
| "main_content": "In this section, the definition of the problems, notations, simplifications, and assumptions will be formally introduced. 2.1 Problem Setting In the problems we study, we are given a directed version graph G = (V, E), where vertices represent versions and edges capture the deltas between versions. Every edge e = (u, v) is associated with two weights: storage cost se and retrieval cost re.10 The cost of storing e is se, and it takes re time to retrieve v once we retrieved u. Every vertex v is associated with only the storage cost, sv, of storing (materializing) the version. Since there is usually a smallest unit of cost in the real world, we will assume sv, se, re \u2208N for all v \u2208V, e \u2208E. To retrieve a version v from a materialized version u, there must be some path P = {(ui\u22121, ui)}n i=1 with u0 = u, un = v, such that all edges along this path are stored. In such cases, we say that v is retrieved from materialized u with retrieval cost R(v) = \ufffdn i=1 r(ui\u22121,ui). In the rest of the paper, we say v is \u201cretrieved from u\u201d if u is in the path to retrieve v, and v is \u201cretrieved from materialized u\u201d if in addition u is materialized. The general optimization goal is to select vertices M \u2286V and edges F \u2286E of small size (w.r.t. storage cost s), such that for each v \u2208V \\ M, there is a short path (w.r.t retrieval cost r) from a materialized vertex to v. Formally, we want to minimize (a) total storage cost \ufffd v\u2208M sv + \ufffd e\u2208F se, and (b) total (resp. maximum) retrieval cost \ufffd v\u2208V R(v) (resp. maxv\u2208V R(v)). Since the storage and retrieval objectives are negatively correlated, a natural problem is to constrain one objective and minimize the other. With this in mind, four different problems from a mate \ufffd v\u2208M sv + \ufffd e V R(v)). ce the storag train one obj erialized vertex to v. Formally, we want to minimize (a \ufffd e\u2208F se, and (b) total (resp. maximum) retrieval cost \ufffd v ge and retrieval objectives are negatively correlated, a natu jective and minimize the other. With this in mind, four dif cost r) from a materialized vertex to v. Formally, we want to minimize (a) total storage cost \ufffd v\u2208M sv + \ufffd e\u2208F se, and (b) total (resp. maximum) retrieval cost \ufffd v\u2208V R(v) (resp. maxv\u2208V R(v)). Since the storage and retrieval objectives are negatively correlated, a natural problem is to constrain one objective and minimize the other. With this in mind, four different problems are formulated, as described by Problems 3-6 in Table 1. These problems are originally defined in Bhattacherjee et al. [15], although we use different names for brevity. Since the first two problems are well studied, we do not discuss them further. 2.2 Further Definitions We hereby formalize several simplifications and complications, to capture more realistic aspects of the problem. Most of the proposed variants are natural and considered by Bhattacherjee et al. [15]. Triangle inequality: It is natural to assume that both weights satisfy triangle inequality, i.e., ru,v \u2264ru,w + rw,v, since we can always implement the delta ru,v by implementing first ru,w and then rw,v. In fact, a more general triangle inequality should hold when we consider the materialization costs sv, as it\u2019s often true that su + su,v \u2265sv for all pairs of u, v \u2208V . All hardness results in this paper hold under the generalized triangle inequality. 
Directedness: It is possible that for two versions $u$ and $v$, $r_{u,v} \ne r_{v,u}$. In the real world, deletion is also significantly faster and easier to store than addition of content. Therefore, Bhattacherjee et al. [15] considered both directed and undirected cases; we argue that it is usually more natural to model the problems as directed graphs, and we focus on that case. Note that in the most general directed setting, it is possible that we are given the delta $(u, v)$ but not $(v, u)$. (For our purposes, this is equivalent to having a worse-than-trivial delta, with $s_{v,u} \ge s_u$.) Directed and undirected cases are considered separately in our hardness results, and all our algorithms apply in the more general directed case.

Single weight function: This is the special case where the storage cost function $s_e$ and the retrieval cost function $r_e$ are identical up to some scaling. This occurs in the real world, for example, when we use simple diff to produce deltas. We note that the random compression construction in our experiments (Section 7) is designed to simulate two distinct weight functions. All our hardness results hold for single weight functions, and all our approximation algorithms work even when the two weight functions are very different.

Arborescence and trees: An arborescence, or a directed spanning tree, is a connected digraph where all vertices except a designated root have in-degree 1, and the root has in-degree 0. If each version is a modification on top of another version, then the "natural" deltas automatically form an arboreal input instance.¹¹ For practical reasons, we also consider bidirectional tree instances, meaning that both $(u, v)$ and $(v, u)$ are available deltas with possibly different weights. Empirical evidence shows that having deltas in both directions can greatly improve the quality of the optimal solution.¹²

Footnote 11: This does not hold true for version control systems because of the merge operation.

Bounded treewidth: At a high level, treewidth measures how similar a graph is to a tree [13]. As one notable class of graphs with bounded treewidth, series-parallel graphs highly resemble the version graphs we derive from real-world repositories. Therefore, graphs with bounded treewidth are a natural consideration with high practical utility. We give precise definitions of this special case in Section 5.3.

We note that once we have an algorithm for MSR (resp. MMR), we can turn it into an algorithm for BSR (resp. BMR) by binary-searching over the possible values of the constraint. Due to the somewhat exchangeable nature of the storage and retrieval constraints in these problems, it is worth considering $(\alpha, \beta)$-bicriteria approximations, where we relax the constraint by a factor of $\alpha$ in order to achieve a $\beta$-approximation. For example, an algorithm $A$ is an $(\alpha, \beta)$-bicriteria approximation for MSR if it outputs a solution with storage cost at most $\alpha \cdot S$ and retrieval cost at most $\beta \cdot \mathrm{OPT}$, where $\mathrm{OPT}$ is the retrieval cost of an optimal solution.

3 Hardness Results

We hereby prove the main hardness results for these problems. For completeness, we define the notion of approximation algorithms, as used in this paper, in Appendix A. We also include
in Appendix B a list of well-studied optimization problems that are used in this section for reduction purposes.

Footnote 12: Although not presented in this paper, we noticed that the minimum arborescences on all our experimental datasets tend to have much worse optimal costs than the minimum bidirectional trees.

3.1 Heuristics can be Arbitrarily Bad

First, we consider the approximation factor of the best heuristic for MSR in Bhattacherjee et al. [15], Local Move Greedy (LMG). The gist of this algorithm is to start with the arborescence that minimizes the storage cost, and iteratively materialize the version that most efficiently reduces retrieval cost per unit of storage. In other words, in each step, a version with maximum $\rho$ is materialized, where
$$\rho = \frac{\text{reduction in total retrieval cost}}{\text{increase in storage cost}}.$$
We provide the pseudocode for LMG in Algorithm 1.

Algorithm 1: Local Move Greedy (LMG)
Input: version graph G, storage constraint S
    /* Construct the extended version graph with an auxiliary root vaux. */
    V ← V ∪ {vaux}
    for v ∈ V \ {vaux}:
        E ← E ∪ {(vaux, v)};  r(vaux,v) ← 0;  s(vaux,v) ← sv
    let Gaux = (V, E)
    /* The main algorithm. */
    T ← minimum arborescence of Gaux rooted at vaux w.r.t. weight function s
    let S(T) be the total storage cost of T
    let R(v) be the retrieval cost of v in T
    let P(v) be the parent of v in T
    U ← V
    while S(T) < S:
        (ρmax, vmax) ← (0, ∅)
        for v ∈ U with S(T) + sv − s(P(v),v) ≤ S:
            T′ ← T \ {(P(v), v)} ∪ {(vaux, v)}
            Δ ← Σv (R(v) − RT′(v))
            if Δ/(sv − s(P(v),v)) > ρmax:
                ρmax ← Δ/(sv − s(P(v),v));  vmax ← v
        T ← T \ {(P(vmax), vmax)} ∪ {(vaux, vmax)}
        U ← U \ {vmax}
        if U = ∅: return T
    return T

▶ Theorem 1. LMG has an arbitrarily bad approximation factor for MinSum Retrieval, even under the following assumptions: (i) G is a directed path; (ii) there is a single weight function; and (iii) the triangle inequality holds.

[Figure 2: An adversarial example for LMG: a path A → B → C with node storage costs a, b, c, and edge costs (1 − b/c)b on (A, B) and (1 − b/c)c on (B, C).]

Proof. Consider the following chain of three nodes; the storage costs of the nodes and the storage/retrieval costs of the edges are labeled in Figure 2. Let $a$ be large and $\epsilon = b/c$ be arbitrarily small. To save space, we do not show vaux but only the nodes of the version graph. It is easy to check that the triangle inequality holds on this graph.

In the first step of LMG, the minimum-storage solution of the graph is $\{A, (A, B), (B, C)\}$ with storage cost $a + (1-\epsilon)b + (1-\epsilon)c$. Next, in the greedy step, two options are available:
(1) Choosing B and deleting (A, B): $\rho_1 = \frac{2(1-\epsilon)b}{\epsilon b} = \frac{2}{\epsilon} - 2$.
(2) Choosing C and deleting (B, C): $\rho_2 = \frac{(1-\epsilon)b + (1-\epsilon)c}{\epsilon c} = \frac{(1-\epsilon)b}{b} + \frac{1-\epsilon}{\epsilon} = \frac{1}{\epsilon} - \epsilon < \frac{2}{\epsilon} - 2$.
With any storage constraint in the range $\big[a + (1-\epsilon)b + c,\; a + b + c\big)$, LMG will choose Option (1), which gives a total retrieval cost of $(1-\epsilon)c$. Note that with $S < a + b + c$, LMG is not able to pick Option (2) after taking Option (1). However, by choosing Option (2), which is also feasible, the total retrieval cost would be $(1-\epsilon)b$. The proof is finished by observing that $c/b$ can be arbitrarily large. ◀
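A quick numeric check of the adversarial instance may be helpful (a sketch with illustrative values, not from the paper):

```python
# Adversarial instance from Figure 2 with concrete numbers.
a, b, c = 10**6, 1.0, 100.0
eps = b / c                                   # eps = b/c = 0.01

# Greedy ratios for the two options in the proof of Theorem 1.
rho1 = 2 * (1 - eps) * b / (eps * b)          # = 2/eps - 2 = 198.0
rho2 = ((1 - eps) * b + (1 - eps) * c) / (eps * c)   # = 1/eps - eps = 99.99

assert rho1 > rho2                            # LMG greedily takes Option (1)

# Resulting total retrieval costs of the two choices:
print((1 - eps) * c)   # Option (1): (1-eps)*c = 99.0  (what LMG gets)
print((1 - eps) * b)   # Option (2): (1-eps)*b = 0.99  (what was achievable)
# The ratio is c/b = 100 here and unbounded as c/b grows.
```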
3.2 Hardness Results on General Graphs

Here, we show various hardness-of-approximation results on general input graphs. We first focus on MSR and MMR, where the constraint is on the storage cost and the objective is the retrieval cost. We then shift our attention to BMR and BSR, in which the constraint is on the retrieval cost and the objective is to minimize the storage cost.

3.2.1 Hardness for MSR and MMR

▶ Theorem 2. On version graphs with n nodes, even assuming a single weight function and the triangle inequality, there is no:
(i) $(\alpha, \beta)$-approximation for MinSum Retrieval if $\beta \le \frac{1}{2}(1-\epsilon)\big(\ln n - \ln\alpha - O(1)\big)$; in particular, for some constant c, there is no $(c \cdot n)$-approximation without relaxing the storage constraint by some $\Omega(\log n)$ factor, unless $NP \subseteq DTIME(n^{O(\log\log n)})$;
(ii) $(1 + \frac{1}{e} - \epsilon)$-approximation for MinSum Retrieval on undirected graphs for any $\epsilon > 0$, unless $NP \subseteq DTIME(n^{O(\log\log n)})$;
(iii) $(\log^* n - \omega(1))$-approximation for MinMax Retrieval, unless $NP \subseteq DTIME(n^{O(\log\log n)})$;
(iv) $(2 - \epsilon)$-approximation for MinMax Retrieval on undirected graphs for any $\epsilon > 0$, unless $NP = P$.

Proof. MSR. There is an approximation-preserving (AP) reduction¹³ from (Asymmetric) k-median to MSR. Let $s_{u,v} = r_{u,v} = d_{u,v}$, the distance from u to v in an (asymmetric) k-median instance. By setting the size of each version v to some large N and the storage constraint to $S = kN + n$, we restrict the instance to materialize at most k nodes and retrieve all other nodes through deltas. For large enough N, an $(\alpha, \beta)$-approximation for MSR provides an $(\alpha, \beta)$-approximation for (Asymmetric) k-median, simply by outputting the materialized nodes. The desired results follow from the known hardness of asymmetric [9] and symmetric (see Appendix B) k-median.

MMR. A similar AP reduction exists from (Asymmetric) k-center to MMR. Again, we set all materialization costs to N and $s_{u,v} = r_{u,v} = d_{u,v}$, and the desired result follows from the hardness of asymmetric [26] and symmetric [42] k-center. ◀

3.2.2 Hardness for BSR and BMR

▶ Theorem 3. On both directed and undirected version graphs with n nodes, even assuming a single weight function and the triangle inequality, there is no:
(i) $(c_1 \ln n)$-approximation for BoundedSum Retrieval for any $c_1 < 0.5$;
(ii) $(c_2 \ln n)$-approximation for BoundedMax Retrieval for any $c_2 < 1$;
unless $NP = P$.

To prove this theorem, we present a reduction to these two problems from Set Cover. We then show structural properties of the reduction in Lemmas 4 and 5, and finally give the proof at the end of this section.

Reduction. Given a set cover instance with sets $A_1, \ldots, A_m$ and elements $o_1, \ldots, o_n$, we construct the following version graph:
1. Build versions $a_i$ corresponding to $A_i$, and $b_j$ corresponding to $o_j$. All versions have size N for some large $N \in \mathbb{N}$.
2. For all $i, j \in [m]$, $i \ne j$, create a symmetric delta $(a_i, a_j)$ of weight 1. For each $o_j \in A_i$, create a symmetric delta $(a_i, b_j)$ of weight 1.

▶ Lemma 4 (BMR's structure). Assume we are given an approximate solution to BMR on the above instance under max retrieval constraint R = 1. In polynomial time, we can produce another feasible solution with equal or smaller total storage cost such that only the set versions are materialized, i.e., all $\{b_j\}_{j=1}^{n}$ are retrieved via deltas.

Proof of Lemma 4.
Suppose some algorithm produces a solution that materializes $b_j$.

Footnote 13: In particular, a strict reduction. See, e.g., Crescenzi's note [27] for more detail.

Case 1: If there exists $a_i$ that needs to be retrieved through $b_j$ (i.e., $o_j \in A_i$), then we can replace the materialization of $b_j$ with that of $a_i$ and replace edges of the form $(b_j, a_k)$ with $(a_i, a_k)$. It is straightforward to see that neither the storage cost nor the retrieval cost increases in this process.

Case 2: If no other node depends on $b_j$, we can pick any $a_i$ such that $(a_i, b_j)$ exists (again, $o_j \in A_i$). If $a_i$ is already materialized in the original solution, then we can store $(a_i, b_j)$ instead of materializing $b_j$, which decreases the storage cost.

Case 3: If no $a_i$ adjacent to $b_j$ is materialized in the original solution, then some delta $(a_{i'}, a_i)$ has to be stored with $a_{i'}$ materialized to satisfy the R = 1 constraint. We can hence materialize $a_i$, delete the delta $(a_{i'}, a_i)$, and again replace the materialization of $b_j$ with the delta $(a_i, b_j)$ without increasing the storage. Figure 3 illustrates this case. ◀

[Figure 3: Case 3 in the proof of Lemma 4. The improved solution is on the right.]

▶ Lemma 5 (BSR's structure). Assume we are given an approximate solution to BSR on the above version graph under total retrieval constraint $R = m - m_{OPT} + n$, where $m_{OPT}$ is the size of the optimal set cover. In polynomial time, we can produce another feasible solution to BSR with equal or lower total storage cost, such that only the set versions are materialized, i.e., all $\{b_j\}_{j=1}^{n}$ are retrieved via deltas.

Proof of Lemma 5. We refer to the same three cases as in Lemma 4. Similarly, suppose we have a solution where some $b_j$ is materialized.

Case 1: If some $a_i$ is retrieved through $b_j$, we apply the same modification as in Lemma 4: replace the materialization of $b_j$ with that of $a_i$, and replace edges of the form $(b_j, a_k)$ with $(a_i, a_k)$. Neither the storage nor the retrieval cost increases in this case. From now on, we assume WLOG that no deltas $(b_j, a_i)$ are chosen.

Case 2: If no $a_i$ is retrieved through $b_j$, and some $a_i$ adjacent to $b_j$ is materialized, then the method of Lemma 4 needs to be modified slightly in order to remove the materialization of $b_j$. If we simply retrieve $b_j$ via the delta $(a_i, b_j)$, we lower the storage cost by $N - 1$ but increase the total retrieval cost by 1. This renders the solution infeasible if the total retrieval constraint is already tight. To tackle this, we analyze the properties of solutions with total retrieval cost exactly R. Observe that every solution must materialize at least $m_{OPT}$ nodes at all times, so a configuration exhausting the constraint R must have some version w with retrieval cost at least 2. If this w is a set version, we can loosen the retrieval constraint by storing a delta of cost 1 from some materialized set version instead. If w is an element version, then we can materialize its parent version (a set covering it), which increases the storage cost by $N - 1$ and decreases the total retrieval cost by at least 2. In either case, by performing the above action if necessary, we can resolve Case 2 and obtain an approximate solution that is no worse than before.

Case 3: This is the case where each $a_i$ adjacent to $b_j$ neither is retrieved through $b_j$ nor is materialized. Fix an $a_i$; then some delta $(a_{i'}, a_i)$ has to be stored to retrieve $a_i$; WLOG we can assume that $a_{i'}$ is materialized.
We can thus materialize $a_i$, delete the delta $(a_{i'}, a_i)$, and again replace the materialization of $b_j$ with the delta $(a_i, b_j)$, with no increase in either cost. ◀

Equipped with Lemma 4 and Lemma 5, we are now ready to prove Theorem 3.

Proof of Theorem 3. Assuming $m = O(n)$ in the set cover instance, we present an AP reduction from Set Cover to both BMR and BSR.

BMR. To produce a set cover solution, we take an improved approximate solution for BMR, and output the family of sets whose corresponding versions are materialized. Since none of the $b_j$'s is stored, they have to be retrieved from some $a_i$; moreover, under the constraint R = 1, each must be a 1-hop neighbor of some materialized $a_i$, meaning the materialized $a_i$'s cover all of the elements in the set cover instance. Finally, we prove that the approximation factor is preserved: for large N, the improved solution has objective value $\approx N \cdot |\{i : a_i \text{ materialized}\}|$. If $n = O(m)$, then an $\alpha(|V|)$-approximation for BMR provides an $(\alpha(n) + O(1))$-approximation for set cover. Hence we cannot have $\alpha(|V|) = c \ln n$ for $c < 1$ unless $NP \subseteq DTIME(n^{O(\log\log n)})$ [33].

[Figure 4: The BSR case in the proof of Theorem 3. The solution on the right has one version (b2) with retrieval cost 2, hence it must materialize an additional version am to satisfy the total retrieval constraint.]

BSR. Assume for the moment that we know $m_{OPT}$; then we can set the total retrieval constraint to $R = m - m_{OPT} + n$ and work with an improved approximate solution. This choice of R is made so that an optimal solution must materialize the set versions corresponding to a minimum set cover, while all other nodes are retrieved via a single hop.

By Lemma 5, we may assume every element version is retrieved from a (not necessarily materialized) set version that covers it. If $m = O(n)$, an $\alpha(|V|)$-approximation for BSR materializes $m_{ALG} \le (\alpha(n) + O(1)) \cdot m_{OPT}$ nodes. Note that, by materializing additional nodes, we allow a set B of $b_j$'s to have retrieval cost at least 2. Let H denote the set of "hopped sets" $A_i$, which are not materialized yet are necessary to retrieve some $b_j$ through the delta $(a_i, b_j)$. By analyzing the total retrieval cost, we can bound |H| by
$$|H| \le |B| \le m_{ALG} - m_{OPT}.$$
Specifically, each additional $b_j \in B$ increases the retrieval cost by at least 1 compared to the optimal configuration, yet each of the $m_{ALG} - m_{OPT}$ additionally materialized set versions decreases the total retrieval cost by only 1.

It follows that the family of sets $\mathcal{S} = \{A_i : a_i \text{ materialized}\} \cup H$ is a $\big(2\alpha(n) + O(1)\big)$-approximate solution for the corresponding Set Cover instance. $\mathcal{S}$ is feasible because each $b_j$ is retrieved through some $(a_i, b_j)$ with $A_i \in \mathcal{S}$; on the other hand, both sets on the right-hand side have size at most $(\alpha(n) + O(1)) \cdot m_{OPT}$, hence the approximation factor holds. Thus, any $\alpha(|V|) = c \ln n$ with $c < 0.5$ would yield a Set Cover approximation factor of $2c \cdot \ln n$.

We finish the proof by noting that, without knowing $m_{OPT}$ in advance, we can run the above procedure for each possible guess of the value of $m_{OPT}$, obtaining a feasible set cover in each iteration. The desired approximation factor is preserved by outputting the minimum set cover solution over all guesses. ◀

3.3 Hardness on Arborescences

We show that MSR and BSR are NP-hard on arborescence instances.
This essentially shows that our FPTAS for MSR in Section 5.1 is the best we can do in polynomial time.

▶ Theorem 6. On arborescence inputs, MinSum Retrieval and BoundedSum Retrieval are NP-hard even when we assume a single weight function and the triangle inequality.

In order to prove the theorem above, we rely on the following reduction, which connects the two problems.

▶ Lemma 7. If there exists a poly-time algorithm A that solves BoundedSum Retrieval (resp. BoundedMax Retrieval) on some set of input instances, then there exists a poly-time algorithm solving MinSum Retrieval (resp. MinMax Retrieval) on the same set of input instances.

Proof. Suppose we want to solve an MSR (resp. MMR) instance with storage constraint S. We can use A as a subroutine and conduct a binary search for the minimum retrieval constraint $R^*$ under which BSR (resp. BMR) has optimal objective at most S. Then $R^*$ is the optimal objective value for the problem at hand. To see that the binary search takes poly(n) steps, note that the search space for the target retrieval constraint is bounded by $n^2 r_{max}$ for BSR and $n \cdot r_{max}$ for BMR, where $r_{max} = \max_{e \in E} r_e$. ◀

Now we show the proof of Theorem 6.

Proof of Theorem 6. By Lemma 7, it suffices to show the NP-hardness of MSR on these inputs. Consider an instance of the Subset Sum problem with values $a_1, \ldots, a_n$ and target T. This problem can be reduced to MSR on an n-ary arborescence of depth one. Let the root version be $v_0$ and its children $v_1, \ldots, v_n$. The materialization cost of $v_i$ is set to $a_i + 1$ for $i \in [n]$, while that of $v_0$ is some N large enough that the generalized triangle inequality holds. For each $i \in [n]$, we set both the retrieval and storage costs of the edge $(v_0, v_i)$ to 1. Consider MSR on this graph with storage constraint $S = N + n + T$. From an optimal solution, we can construct the set $A = \{i \in [n] : v_i \text{ materialized}\}$, an optimal solution for the above Subset Sum instance. ◀

4 Exact Algorithm for MMR and BMR on Bidirectional Trees

As discussed in Section 1, we can use an algorithm for BMR to solve MMR via binary search. Hence, it suffices to focus on BMR, namely, the setting where we are given a maximum retrieval constraint R and want to minimize the storage cost.

Algorithm 2: DP-BMR
Input: tree T and the max retrieval constraint R
    orient T arbitrarily; sort V in reverse topological order
    DP[v][u] ← ∞ for all v, u ∈ V
    for v ∈ V:
        for u ∈ V such that R(u, v) ≤ R:
            if u = v:
                DP[v][u] ← sv
            else:
                DP[v][u] ← s(p^u_v, v), where p^u_v is the node preceding v on the path from u to v
            for each child w of v:
                if w is on the path from u to v:
                    DP[v][u] ← DP[v][u] + DP[w][u]
                else:
                    DP[v][u] ← DP[v][u] + min{OPT[w], DP[w][u]}
        OPT[v] ← min{DP[v][w] : w ∈ V(T[v])}
    return OPT[vroot]

Let T = (V, E) be a bidirectional tree, i.e., a digraph with two directed edges $(u, v), (v, u) \in E$ corresponding to each edge $\{u, v\} \in E_0$ of some underlying undirected tree $(V, E_0)$. Let R be the maximum retrieval cost constraint. We pick any vertex $v_0$ as the root and orient the tree so that $v_0$ has no parent, while all other nodes have exactly one parent. For each $v \in V$, let T[v] denote the subtree of T rooted at v. If v is retrieved from materialized u, we use $p^u_v$ to denote the parent of v on the unique u-v path used to retrieve v. We write $p^v_v = v$.
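Before the algorithm description, here is a minimal Python sketch of this tree representation and of the path retrieval cost $R(u, v)$ used below (the class and method names are illustrative, not from the paper):

```python
class BidirectionalTree:
    """Rooted bidirectional tree: both directed edges (u, v) and (v, u)
    exist for every tree edge, with possibly different retrieval costs."""
    def __init__(self, root, parent, r):
        self.root = root
        self.parent = parent   # parent[v] for every v != root
        self.r = r             # r[(u, v)]: retrieval cost of directed edge (u, v)

    def path(self, u, v):
        """Vertices on the unique u-v path, found via the lowest common
        ancestor of u and v in the rooted orientation."""
        up, depth = [u], {u: 0}
        while up[-1] != self.root:
            up.append(self.parent[up[-1]])
            depth[up[-1]] = len(up) - 1
        vp = [v]
        while vp[-1] not in depth:         # climb until we hit u's root path
            vp.append(self.parent[vp[-1]])
        lca = vp[-1]
        return up[:depth[lca] + 1] + vp[:-1][::-1]

    def retrieval_cost(self, u, v):
        """R(u, v): total retrieval cost along the unique u -> v path."""
        p = self.path(u, v)
        return sum(self.r[(p[i], p[i + 1])] for i in range(len(p) - 1))
```

Note that $p^u_v$ is simply the second-to-last vertex of path(u, v) when $u \ne v$.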
We now describe a dynamic programming (DP) algorithm, DP-BMR, that solves BMR exactly on T.

DP variables. For $u, v \in V$, let DP[v][u] be the minimum storage cost of a partial solution on T[v] which satisfies the following: all descendants of v are retrieved from some node in T[v], while v itself is retrieved from some materialized version u, which is potentially outside the subtree T[v]. See Figure 5 for an illustration. Importantly, when calculating the storage cost for DP[v][u], if u is not a part of T[v], the incident edge $(p^u_v, v)$ is included in the calculation, while the other edges on the u-v path, as well as the cost of materializing u, are not.

[Figure 5: The 3 cases of DP-BMR, where (a) u = v, (b) u ∈ V(T[v]), and (c) u ∉ V(T[v]). The blue nodes and edges are stored in the partial solution.]

Base case. We iterate from the leaves up. Let R(u, v) denote the retrieval cost of the u-v path. For a leaf v, we set DP[v][v] = $s_v$, and DP[v][u] = $s_{(p^u_v, v)}$ for all $u \ne v$ with $R(u, v) \le R$. Here, $p^u_v$ is simply the parent of v in the tree structure. All choices of u, v with R(u, v) > R are infeasible, and we therefore set DP[v][u] = ∞ in these cases.

Recurrence. For convenience, we define the helper variable OPT[v] to be the minimum storage cost on the subproblem T[v] such that v is either materialized or retrieved from one of its materialized descendants.¹⁴ Formally, OPT[v] = min{DP[v][w] : w ∈ V(T[v])}. For the recurrence on DP[v][u] with R(u, v) ≤ R, there are three possible cases for the relationship between v and u (see Figure 5). In each case, we outline what we add to DP[v][u].

(1) If u = v, we materialize v. Each child w of v can be either materialized, retrieved from one of its materialized descendants, or retrieved from the materialized v. Note that the storage cost on T[w] is exactly min{OPT[w], DP[w][v]}, which we add to the total value of DP[v][v].

(2) If u ∈ V(T[v]) \ {v}, we store the edge $(p^u_v, v)$. Note that $p^u_v$ is a child of v and hence is also retrieved from the materialized u, so we must add DP[$p^u_v$][u]. We then add min{OPT[w], DP[w][u]} for every other child w of v.

(3) If u ∉ V(T[v]), we store the edge $(p^u_v, v)$, where $p^u_v$ is now the parent of v in the tree structure. We then add min{OPT[w], DP[w][u]} for every child, as before.

Output. We output OPT[vroot], the storage cost of the optimal solution. To output the configuration achieving this optimum, we can use the standard procedure of storing the configuration in each DP variable.

Footnote 14: In other words, the case where v is retrieved from u outside of T[v], i.e., case 3 in Figure 5, is not considered in this helper variable.

▶ Theorem 8. BoundedMax Retrieval is solvable on bidirectional tree instances in $O(n^2)$ time.

Proof. The time complexity follows from the observation that each calculation of DP[v][u] in the recurrence takes $O(\deg(v))$ time, and $\sum_u \sum_v \deg(v) = \sum_u O(n) = O(n^2)$.

The optimality of the DP can be shown inductively from the leaves up. For each leaf v, the optimal storage costs of the trivial subproblems are indeed DP[v][v] = $s_v$ and DP[v][u] = $s_{(p^u_v, v)}$ for all $u \ne v$ with $R(u, v) \le R$. Inductively, suppose node v has children $w_1, \ldots, w_k$ on which the DP values are correctly calculated.
To calculate the optimal storage cost DP[v][u] where $R(u, v) \le R$, we consider DP[v][u] as the sum of the following three items:
(1) The storage cost $s_{(p^u_v, v)}$, or $s_v$ if u = v. We add this cost directly since it is not part of the DP value of any child of v.
(2) The value min{DP[$w_i$][u], OPT[$w_i$]} for each child $w_i$ with $u \notin V(T[w_i])$. This is because the minimum storage cost on the subproblem $T[w_i]$ is exactly min{DP[$w_i$][u], OPT[$w_i$]} when v is not retrieved from any descendant of $w_i$.
(3) DP[$w_i$][u] for the child $w_i$ whose subtree $T[w_i]$ contains u. This is because, for v to be retrieved from u, $w_i$ must also be retrieved from u; thus we add DP[$w_i$][u] for this particular $w_i$.
We also note that the partial solution on $T[w_i]$ is completely independent of the partial solution on $T[w_j]$ for all $i \ne j$. This allows us to directly sum the individual optimal costs. The resulting DP[v][u] is also feasible: retrieving v from materialized u is feasible since $R(u, v) \le R$, and any infeasible solution on $T[w_i]$ is excluded by its infinite DP value. ◀

We note that by binary-searching over the constraint value S, this algorithm also solves MinMax Retrieval on trees.

5 FPTAS for MSR via Dynamic Programming

In this section we study MinSum Retrieval and present a fully polynomial-time approximation scheme (FPTAS) on digraphs whose underlying undirected graph has bounded treewidth. Similar techniques can be applied to MMR, but we focus on MSR due to space constraints. We start by describing a dynamic programming (DP) algorithm on trees in Section 5.1. In Section 5.2, we define all the notation necessary for the final subsection. Finally, in Section 5.3, we show how to extend our DP to bounded-treewidth graphs.

5.1 Warm-up: Bidirectional Trees

As a warm-up to the more general algorithm, we present an FPTAS for bidirectional tree instances of MSR via DP. This algorithm also inspired a practical heuristic, DP-MSR, presented in Section 6.2. Again, we assume the tree has a designated root vroot and a parent-child hierarchy. We further assume that the tree is binary, via the standard trick of vertex splitting and adding edges of zero weight where necessary. (See Appendix C for details.)

[Figure 6: An illustration of the DP variables in Section 5.1.]

DP variables. We define DP[v][k][γ][ρ] to be the minimum storage cost for the subproblem with constraints v, k, γ, ρ such that (with examples illustrated in Figure 6):
1. Subproblem root $v \in V$ is a vertex of the tree; in each iteration, we consider the subtree rooted at v.
2. Dependency number $k \in \mathbb{N}$ is the number of versions retrieved from v (including v itself) in the subproblem solution. This is useful when calculating the extra retrieval cost incurred by retrieving v from its parent.
3. Root retrieval $\gamma \in \mathbb{N}$ is the cost of retrieving the subtree root v, if it is retrieved from a materialized descendant. This is useful when calculating the extra retrieval cost incurred by retrieving the parent of v from v. Note that the root retrieval cost will be discretized, as specified later.
4. Total retrieval $\rho \in \mathbb{N}$ is the total retrieval cost of the subsolution. Like γ, ρ will also be discretized.
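As a minimal sketch (illustrative names, not from the paper), the DP table can be held as a map from these four indices to the best storage cost found so far, with the leaf base case from below:

```python
from collections import defaultdict

INF = float("inf")

def make_table():
    """DP[(v, k, gamma, rho)] = minimum storage cost of a partial solution
    on T[v] with dependency number k, root retrieval gamma, and total
    (discretized) retrieval cost rho; infinity marks infeasible states."""
    return defaultdict(lambda: INF)

def init_leaf(DP, v, s_v):
    # A leaf must be materialized in its own subproblem:
    # dependency number 1 (itself), root retrieval 0, total retrieval 0.
    DP[(v, 1, 0, 0)] = s_v[v]
```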
[Figure 7: Eight types of connections on a binary tree (node v with children c1, c2; Cases 1-8). A node is colored if it is either materialized or retrieved from a node outside the chart; otherwise, an uncolored node is retrieved from another node as illustrated by the arrows.]

Discretizing retrieval costs. Let $r_{max} = \max_{e \in E} r_e$. The possible total retrieval cost ρ lies in the range $\{0, 1, \ldots, n^2 r_{max}\}$. To make the DP tractable, we partition this range and define the approximated retrieval cost $r'_{u,v}$ for edge $(u, v) \in E$ as
$$r'_{u,v} = \left\lceil \frac{r_{u,v}}{l} \right\rceil, \quad \text{where } l = \frac{n^2 r_{max}}{t(\epsilon)}, \quad t(\epsilon) = \frac{n^4}{\epsilon},$$
and $t(\epsilon)$ is the number of "ticks" into which we partition the retrieval range $[0, n^2 r_{max}]$. The choice of $t(\epsilon)$ is justified in the proof of Theorem 10. We work with $r'$ in the rest of this subsection; however, by an abuse of notation, we still write r for the discretized retrieval costs for ease of presentation.

Base case. For each leaf v, we let DP[v][1][0][0] = $s_v$.

Recurrence step. In each iteration at node v, we consider every target configuration DP[v][k][γ][ρ] under each possible connection type, as illustrated in Figure 7. For each configuration, we go over all corresponding compatible partial solutions on T[c1] and T[c2]. The recurrence relations for all cases are given in Appendix C. Here, we select representative cases and explain the details of the calculation.

5.1.1 Dealing with Dependency

When we decide to retrieve any child from v, as in case 4 of Figure 7, the children c1, c2 along with all their dependencies become dependencies of v. The minimum storage cost in case 4 (given v, k, γ = 0, ρ) is:
$$S_4 = s_v + s_{v,c_1} + s_{v,c_2} - s_{c_1} - s_{c_2} \qquad (1)$$
$$+ \min_{\substack{\rho_1+\rho_2=\rho \\ k_1+k_2=k-1}} \Big\{ DP[c_1][k_1][0][\rho_1 - k_1 r_{v,c_1}] \qquad (2a)$$
$$+\; DP[c_2][k_2][0][\rho_2 - k_2 r_{v,c_2}] \Big\} \qquad (2b)$$
In Equation (2a), v is required to have dependency number k and root retrieval 0. For each $k_1 + k_2 = k - 1$, we go through the subproblems where c1 has dependency number $k_1$ and c2 has dependency number $k_2$. Also in Equation (2a), the choice of $\rho_1, \rho_2$ determines how we allocate the retrieval budget ρ between c1 and c2. Specifically, in Equations (2a) and (2b), the total retrieval cost allocated to the subproblem on T[c1] is $\rho_1 - k_1 \cdot r_{v,c_1}$, since an extra $k_1 \cdot r_{v,c_1}$ cost is incurred by the edge $(v, c_1)$, which is used $k_1$ times by all versions depending on c1. The same applies to the subproblem on T[c2].

Next, we highlight the idea of "invisible" dependencies: in case 4 on T[v], the diffs $(v, c_1)$ and $(v, c_2)$ were not available in any previous recurrence, since v has just been introduced. Therefore, the compatible solutions for the subproblems on T[c1] and T[c2] have to materialize the nodes c1 and c2 to ensure they can be retrieved. This explains the $-s_{c_1} - s_{c_2}$ terms in Equation (1), since these costs are no longer incurred. When generalizing the DP to graphs of bounded treewidth, restriction of a global solution similarly does not always yield a feasible partial solution, due to the existence of dependencies invisible to the subproblems. We will resolve them using similar ideas.
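A minimal sketch of this case-4 combination step, assuming the DP table from the earlier sketch (the function name case4_cost is illustrative):

```python
INF = float("inf")

def case4_cost(DP, v, c1, c2, k, rho, s_v, s_e, r):
    """Case 4 of Figure 7: v is materialized and both children are retrieved
    from v. Enumerate the splits k1 + k2 = k - 1 of the dependency number and
    rho1 + rho2 = rho of the (discretized) retrieval budget. The children had
    to be materialized in their own subproblems, so s_v[c1], s_v[c2] are
    credited back, as in Equation (1)."""
    fixed = s_v[v] + s_e[(v, c1)] + s_e[(v, c2)] - s_v[c1] - s_v[c2]
    best = INF
    for k1 in range(k):
        k2 = k - 1 - k1
        for rho1 in range(rho + 1):
            rho2 = rho - rho1
            b1 = rho1 - k1 * r[(v, c1)]   # budget left for T[c1], cf. (2a)
            b2 = rho2 - k2 * r[(v, c2)]   # budget left for T[c2], cf. (2b)
            if b1 < 0 or b2 < 0:
                continue                  # infeasible split
            best = min(best, fixed + DP[(c1, k1, 0, b1)] + DP[(c2, k2, 0, b2)])
    return best
```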
5.1.2 Dealing with Retrieval

In contrast with dependencies, this refers to the case where v is retrieved from one of its children. We take case 5 as an example: given v, k = 0, γ, ρ,
$$S_5 = s_{c_1,v} + \min_{\rho_1 \le \rho} \Big\{ \min_{k_1} DP[c_1][k_1][\gamma - r_{c_1,v}][\rho_1 - \gamma] + \min_{k_2, \gamma'} DP[c_2][k_2][\gamma'][\rho - \rho_1] \Big\}.$$
We allocate the retrieval cost similarly to case 4. We care less about the dependency numbers here, over which we simply take the minimum. The root retrieval for c1 now has to be $\gamma - r_{c_1,v}$, since v is retrieved from c1. Note, importantly, that we now count the retrieval cost of v in $\rho_1$, so the retrieval budget for T[c1] is $\rho_1 - \gamma$. Similarly, we take the minimum over all other unused parameters to get the best storage cost for case 5.

5.1.3 Combining the Ideas

We take case 8 as an example where both retrieval and dependencies are involved. In case 8, v is retrieved from child c1 (retrieval), and child c2 is retrieved from v (dependency). Given v, k, γ, ρ, we claim that:
$$S_8 = s_{c_1,v} + s_{v,c_2} - s_{c_2} + \min_{\rho_1+\rho_2=\rho} \Big\{ \min_{k'} DP[c_1][k'][\gamma - r_{c_1,v}][\rho_1 - \gamma] + DP[c_2][k-1][0][\rho_2 - (k-1)(r_{v,c_2} + \gamma)] \Big\}.$$
Note that the c1 side is identical to that of case 5. When combining the dependency and retrieval cases, there is a slight adjustment on the dependency side: since v might now also depend on nodes further down the c1 side, the total extra retrieval cost created by adding the edge $(v, c_2)$ becomes $(k-1)(r_{v,c_2} + \gamma)$ instead of $(k-1)\, r_{v,c_2}$.

Output. Finally, with storage constraint S and tree root vroot, we output the configuration attaining the minimum ρ such that
$$\exists\, k \le n,\ \gamma \in \mathbb{N} \quad \text{s.t.} \quad DP[v_{root}][k][\gamma][\rho] \le S.$$

We now formally state and prove the FPTAS result.

▶ Lemma 9. The DP algorithm outputs a configuration with total retrieval cost at most $OPT + \epsilon\, r_{max}$ in poly(n, 1/ϵ) time.

Proof. By setting $t(\epsilon) = \frac{n^4}{\epsilon}$, we have $l = \frac{n^2 r_{max}}{t(\epsilon)} = \frac{\epsilon\, r_{max}}{n^2}$. Note that we can recover an approximation of the original retrieval costs by multiplying each $r'_e$ by l; this creates an estimation error of at most l on each edge. Note further that at most $n^2$ edge retrievals are counted in the optimal solution, so if $\rho^*$ is the minimum discretized total retrieval cost, we have
$$\text{total retrieval of output} \;\le\; l\rho^* \;\le\; OPT + n^2 l \;\le\; OPT + \epsilon\, r_{max}. \qquad ◀$$

Now we prove the main theorem of this subsection.

▶ Theorem 10. For every $\epsilon > 0$, there is a $(1+\epsilon)$-approximation algorithm for MinSum Retrieval on bidirectional trees that runs in poly(n, 1/ϵ) time.

Proof. Given the parameter ϵ, we use the DP algorithm as a black box and iterate the following for up to n rounds:
(1) Run the DP for the given ϵ on the current graph. Record the output.
(2) Let (u, v) be the edge with the largest retrieval cost. Set $r_{(u,v)} = 0$ and $s_{(u,v)} = s_v$. If the new graph is infeasible for the given storage constraint, or if all edges have already been modified, exit the loop.
At the end, we output the best of all recorded outputs. This improves the previous bound when $r_{max} > OPT$: at some point we eventually have $r_{max} \le OPT$, which means the output configuration, mapped back to the original input, is a feasible $(1+\epsilon)$-approximation. ◀
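To make the rounding step of Lemma 9 concrete, here is a minimal sketch (the function name is illustrative):

```python
import math

def discretize_retrieval(r, n, eps):
    """Round edge retrieval costs up to a grid with t(eps) = n^4/eps ticks
    over [0, n^2 * r_max]: the grid length is l = n^2*r_max/t(eps)
    = eps*r_max/n^2, and r' = ceil(r / l). Since rounding adds at most l
    per counted edge and at most n^2 edge retrievals are counted, the total
    error is at most n^2 * l = eps * r_max, as in Lemma 9."""
    r_max = max(r.values())
    l = eps * r_max / n**2
    return {e: math.ceil(w / l) for e, w in r.items()}, l
```

Multiplying a discretized cost by the returned grid length l recovers an overestimate of the original cost within l.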
5.2 Treewidth-related Definitions

We now consider a more general class of version graphs: any G = (V, E) whose underlying undirected graph¹⁵ $G_0$ has treewidth bounded by some constant k.

Footnote 15: As before, this means that $(u, v), (v, u) \in E$ for each undirected edge $\{u, v\} \in E_0$ of $G_0$.

▶ Definition 11 (Tree Decomposition [13]). A tree decomposition of an undirected graph $G_0 = (V_0, E_0)$ is a tree $T = (V_T, E_T)$ where each $z \in V_T$ is associated with a subset ("bag") $S_z$ of $V_0$. The bags must satisfy the following conditions:
(i) $\bigcup_{z \in V_T} S_z = V_0$;
(ii) for each $v \in V_0$, the bags containing v induce a connected subtree of T;
(iii) for each $(u, v) \in E_0$, there exists $z \in V_T$ such that $S_z$ contains both u and v.
The width of a tree decomposition $T = (V_T, E_T)$ is $\max_{z \in V_T} |S_z| - 1$. The treewidth of $G_0$ is the minimum width over all tree decompositions of $G_0$.

It follows that undirected forests have treewidth 1. We further note that there is also a notion of directed treewidth [51], but it is not suitable for our purpose. We assume WLOG a special kind of decomposition:

▶ Definition 12 (Nice Tree Decomposition [17]). A nice tree decomposition is a tree decomposition with a designated root, where each node z is of one of the following types:
1. A leaf, which has no children and whose bag has size 1;
2. A forget node, which has one child c, with $S_z \subset S_c$ and $|S_c| = |S_z| + 1$;
3. An introduce node, which has one child c, with $S_z \supset S_c$ and $|S_c| + 1 = |S_z|$;
4. A join, which has two children $c_1, c_2$, with $S_z = S_{c_1} = S_{c_2}$.

Given a bound k on the treewidth, there are multiple algorithms for computing a tree decomposition of width k [11, 16, 37], or an approximation of k [12, 34, 36, 58]. For our case, the algorithm by Bodlaender [11] can be used to compute a tree decomposition in time $2^{O(k^3)} \cdot O(n)$, which is linear if the treewidth k is constant. Given a tree decomposition, we can in $O(|V_0|)$ time find a nice tree decomposition of the same width with $O(k|V_0|)$ nodes [17].

5.3 Generalized Dynamic Programming

Here we outline the DP for MSR on graphs whose underlying undirected graph $G_0$ has treewidth at most k.

5.3.1 DP States

As in the warm-up, we perform the DP bottom-up on each $z \in V_T$ of the nice tree decomposition T. Before proceeding, let us define some additional notation. For any bag $z \in V_T$, let T[z] be the induced subtree of T rooted at z. We define $V[z] = \bigcup_{z' \in V(T[z])} S_{z'}$, the set of vertices in the bags of T[z], including $S_z$. Following that, G[z] is the subgraph of G induced by the vertices V[z].

We now define the DP states. At a high level, each state describes a set of partial solutions on G[z], the subgraph induced by V[z]; when building a complete solution on G from the partial solutions, the state variables should give us all the information we need. Each DP state on $z \in V_T$ consists of a tuple of functions $T_z = (Par_z, Dep_z, Ret_z, Anc_z)$ and a non-negative integer $\rho_z$:
1. Parent function $Par_z : S_z \to V[z]$, describing the partial solution on G[z] restricted to $S_z$. If $Par_z(v) \ne v$, then v is retrieved through the edge $(Par_z(v), v)$; if $Par_z(v) = v$, then v is materialized.
2. Dependency function $Dep_z : S_z \to [n]$. As with the dependency parameter in the warm-up, $Dep_z(v)$ counts the number of nodes in V[z] retrieved through v.
3. Retrieval cost function $Ret_z : S_z \to \{0, \ldots, n\, r_{max}\}$.
As with the root retrieval parameter in the warm-up, $Ret_z(v)$ denotes the retrieval cost of version v in the partial solution on G[z].
4. Ancestor function $Anc_z : S_z \to 2^{S_z}$. If $u \in Anc_z(v)$, then u is retrieved in order to retrieve v in this partial solution, i.e., v depends on u. We need this extra information to avoid directed cycles.
5. $\rho_z$, the total retrieval cost of the subproblem according to the partial solution. As with its counterpart in the warm-up, all retrieval costs are discretized by the same technique, which makes the approximation an FPTAS.

A feasible state on $z \in V_T$ is a pair $(T_z, \rho_z)$ which correctly describes some partial solution on G[z] whose retrieval cost is exactly $\rho_z$. Each state is further associated with a storage value $\sigma(T_z, \rho_z) \in \mathbb{Z}_+$, indicating the minimum storage needed to achieve the state $(T_z, \rho_z)$ on G[z]. We are now ready to describe how to compute the states.

5.3.2 Recurrence on Leaves

For each leaf $z \in V_T$, the only feasible partial solution is to materialize the only vertex v in the leaf bag. We can easily calculate its state and storage cost.

5.3.3 Recurrence on Forget Nodes

This is also easy: for a forget node z with child c, we have G[z] = G[c], and hence the states on z are simply the restrictions of the states on c.

5.3.4 Recurrence on Introduce Nodes

At an introduce node z with child c, we have $S_z = S_c \cup \{v_0\}$ for some "introduced" vertex $v_0$. Each feasible state $(T_z, \rho_z)$ on z must correspond to some state $(T_c, \rho_c)$ on c, which we can calculate as follows.

We first initialize $T_c$ to be the respective functions of $T_z$ restricted to $S_c$; for instance, $Par_c = Par_z|_{S_c}$, the restriction of $Par_z$ to the domain $S_c$.

If $v_0$ is retrieved through $u \in S_c$ according to $T_z$ (i.e., $Par_z(v_0) = u$), then we remove the dependencies related to $v_0$ and the retrieval cost incurred on the edge $(u, v_0)$. Specifically: (1) decrease the value of $Dep_c$ by 1 on all vertices in $Anc_z(u)$; (2) decrease $\rho_c$ by $Dep_z(v_0) \cdot Ret_z(v_0)$; (3) remove $Anc_z(u)$ from the ancestor sets of all descendants of $v_0$.

If $v_0$ has some child w according to $T_z$ (namely, $Par_z(w) = v_0$), then we reverse the uprooting process from the warm-up, so that the vertex w, which was not a root in $T_z$, is now a root in $T_c$. Specifically: (1) let $Par_c(w) = w$; (2) remove $v_0$ from the ancestor sets of w and all its descendants; (3) decrease the retrieval cost function of w and its descendants by $Ret_z(w)$; (4) decrease $\rho_c$ by $Ret_z(w) \cdot Dep_z(w)$. Since $v_0$ could have multiple children, this last procedure is potentially repeated multiple times.

Algorithm 3: Compatibility
Input: Sz, Tz, Ta, Tb
    /* External-Retrieval returns the "true restrictions" of the Par, Anc, and Ret functions. */
    T′a, T′b ← External-Retrieval(Sz, Tz)
    if T′a disagrees with Ta, or T′b disagrees with Tb, on the functions Par, Anc, or Ret:
        return False
    /* For each v ∈ Sz, External-Dependency returns the dependencies of v outside Sz. */
    ExtDepz ← External-Dependency(Sz, Tz)
    ExtDepa ← External-Dependency(Sz, Ta)
    ExtDepb ← External-Dependency(Sz, Tb)
    if ExtDepz ≠ ExtDepa + ExtDepb:
        return False
    return True

5.3.5 Recurrence on Joins

Suppose we are at a join z with children a, b, where $S_z = S_a = S_b$.
At a high level, for each state $(T_z, \rho_z)$ on G[z], we want to find all pairs of states $(T_a, \rho_a)$ and $(T_b, \rho_b)$ such that the partial solutions they describe combine into a partial solution on G[z], as described by $(T_z, \rho_z)$.

Compatibility. The algorithm Compatibility (Algorithm 3) decides whether $T_a, T_b$ are indeed what $T_z$ looks like when restricted to G[a] and G[b], respectively. If the algorithm returns true, we proceed to calculate the correct value of $\rho_a + \rho_b$ based on this particular restriction.

[Figure 8: Illustration of compatibility. Panels (b) and (c) show a pair of configurations on Ta and Tb compatible with the configuration on Tz in panel (a). The configurations of the yellow and green nodes are analyzed in External-Retrieval and External-Dependency, respectively.]

Resolving external retrieval. Compatibility first deals with the vertices that are retrieved from outside $S_z$. For example, each $v \in S_z$ retrieved from $V[a] \setminus S_z$, like the yellow node in panel (c) of Figure 8, is instead materialized from $T_b$'s perspective. To check whether $T_a$ and $T_b$ resolve all such cases correctly, we define the subroutine External-Retrieval (Algorithm 4), which loops through $S_z$ in topological order and calculates the correct Par, Ret, and Anc functions for both $T_a$ and $T_b$.

Resolving external dependency. The next step in Compatibility is to check whether the functions $Dep_a, Dep_b$ are compatible with $Dep_z$. Specifically, nodes in $S_z$ can have external dependencies in $V[a] \setminus S_z$ and $V[b] \setminus S_z$; an example is the green nodes in Figure 8 and Figure 9. Precisely, $ExtDep_a(v)$ is the number of descendants that v has outside $S_z$ for whom v is the closest ancestor within $S_z$, according to $T_a$. As an example, note that only four red nodes are counted towards $ExtDep_a(A)$ in Figure 9. The functions $ExtDep_b$ and $ExtDep_z$ are defined similarly according to $T_b$ and $T_z$. For $(T_a, T_b)$ to be compatible with $T_z$, we need $ExtDep_a(v) + ExtDep_b(v) = ExtDep_z(v)$ for all $v \in S_z$. To check this, we call External-Dependency (Algorithm 5) on $T_z, T_a, T_b$ as a subroutine of Compatibility. We note that this is similar to distributing the dependency number k between the two children in case 4 of Figure 7.

Calculating ρ. Given that $(T_a, T_b)$ is compatible with $T_z$, we want to compute the objective $\sigma(T_z, \rho_z)$ via a recurrence involving $\sigma(T_a, \rho_a) + \sigma(T_b, \rho_b)$ for suitable $\rho_a$ and $\rho_b$. However, we cannot simply take $\rho_a + \rho_b = \rho_z$, due to the complicated procedure of combining $T_a$ and $T_b$ into $T_z$. We thus implement Distribute-Retrieval (Algorithm 6) to calculate $\rho_\Delta$ such that $\rho_a + \rho_b = \rho_z - \rho_\Delta$, and then iterate over all such $\rho_a$ and $\rho_b$.
Algorithm 4: External-Retrieval
Input: Sz, Tz
    let T′a = T′b = Tz
    sort Sz in topological order according to Ancz
    for v ∈ Sz:
        /* Remove external ancestors from a iteratively. */
        if Parz(v) ∈ V[a] \ Sz:
            Par′b(v) ← v
            for w ∈ Sz with w ≠ v and v ∈ Anc′b(w):
                Ret′b(w) ← Ret′b(w) − Ret′b(v)
                Anc′b(w) ← Anc′b(w) \ Anc′b(v)
            Ret′b(v) ← 0;  Anc′b(v) ← ∅
        /* Remove external ancestors from b iteratively. */
        if Parz(v) ∈ V[b] \ Sz:
            Par′a(v) ← v
            for w ∈ Sz with w ≠ v and v ∈ Anc′a(w):
                Ret′a(w) ← Ret′a(w) − Ret′a(v)
                Anc′a(w) ← Anc′a(w) \ Anc′a(v)
            Ret′a(v) ← 0;  Anc′a(v) ← ∅
    return T′a, T′b

Algorithm 5: External-Dependency
Input: S, T
    sort S in topological order according to Anc
    for v ∈ S:
        let ExtDep(v) = Dep(v) − Σ_{w ∈ S : Par(w) = v} Dep(w)
    for v ∈ S:
        if Par(v) ∉ S:
            for u ∈ Anc(v) with u ≠ v:
                ExtDep(u) ← ExtDep(u) − ExtDep(v)
    return ExtDep

[Figure 9: Illustration of external dependency. Green nodes A and B both have non-zero external dependency, as labeled in the figure.]

Algorithm 6: Distribute-Retrieval
Input: Sz, Tz, ρz, Sa, Sb, Ta, Tb
    /* We want ρz = ρa + ρb + ρΔ. */
    ρΔ ← 0
    for v ∈ Sz such that Parz(v) ≠ v:
        /* The number of times r(Parz(v),v) is counted towards ρz, minus the number of times it is counted towards ρa and ρb: */
        Count ← Depz(v)
        if Para(v) = Parz(v): Count ← Count − Depa(v)
        if Parb(v) = Parz(v): Count ← Count − Depb(v)
        if Parz(v) ∈ Sz:
            /* The edge r(Parz(v),v) is over/undercounted: */
            ρΔ ← ρΔ + Count · r(Parz(v),v)
        else:
            /* The entire Retz(v) is over/undercounted: */
            ρΔ ← ρΔ + Count · Retz(v)
    return ρΔ

Recurrence relation. Finally, we have everything needed for the recurrence relation. For each feasible $(T_z, \rho_z)$, we take
$$\sigma(T_z, \rho_z) = \min\big\{ \sigma(T_a, \rho_a) + \sigma(T_b, \rho_b) - \text{uproot} - \text{overcount} \big\},$$
where the minimum is taken over all $(T_a, T_b)$ compatible with $T_z$ and all $\rho_a + \rho_b = \rho_z - \rho_\Delta$, and where
$$\text{uproot} = \sum_{v \in U_a} \big(s_v - s_{Par_z(v),v}\big) + \sum_{v \in U_b} \big(s_v - s_{Par_z(v),v}\big), \qquad \text{overcount} = \sum_{v \in S_a \cap S_b} s_{Par_z(v),v}.$$
If k is constant, then the recurrence takes poly(n) time. This is because there are poly(n) possible choices of T and ρ on a, b, z, and it takes poly(n) steps to check the compatibility of $(T_a, T_b)$ with $T_z$ and to compute $\rho_\Delta$.

Output. The minimum retrieval cost of a global solution is simply $\min\{\rho_z : \exists T_z,\ \sigma(T_z, \rho_z) \le S\}$ over all feasible $(T_z, \rho_z)$, where z is the designated root of the nice tree decomposition. We conclude this section with the following theorem.

▶ Theorem 13. For a constant $k \ge 1$, on the set of graphs whose underlying undirected graph has treewidth at most k, MinSum Retrieval admits an FPTAS.

To see that the algorithm above is an FPTAS for MSR, the proof is almost identical to the proof of Theorem 10 (Section 5.1.3) once we note that the number of partial solutions on each z is poly(n).

An FPTAS for MMR arises from a similar procedure. When the objective becomes the maximum retrieval cost, we can use $\rho_z$ to represent the maximum retrieval cost in the partial solution. We then modify $Dep_z(v)$ to represent the highest retrieval cost among all nodes that depend on v. The recurrence relation is changed accordingly.
One can note that, like before, the new tuple Tz contains all the information we need for a subsolution on G[z]. The same algorithms extend naturally to (1, 1 + ϵ) bi-criteria approximation algorithms for BSR and BMR, as the objective and constraint are reversed.

6 Heuristics on MSR and BMR

In this section, we propose three new heuristics that are inspired by empirical observations and theoretical results.

6.1 LMG-All: Improvement over LMG

We propose an improved version of LMG (Algorithm 1), which we name LMG-All (see Algorithm 7 for pseudocode). LMG-All enlarges the scope of the search on each greedy step: instead of searching only for the most efficient version to materialize, we explore the payoff of modifying any single edge:
1. Find a configuration that minimizes the total storage cost.
2. Let Par(v) be the current parent of v on its retrieval path. In addition to Vactive, define the edge set Eactive to contain the edges (u, v) that, if (u, v) were to replace (Par(v), v) in the current configuration, (a) would not cause the configuration to exceed the storage constraint S and (b) would not form a cycle. If Vactive = Eactive = ∅, output the current configuration.
3. Calculate the cost and benefit of each v ∈ Vactive and e ∈ Eactive. Materialize or store the most cost-effective node or edge. Go to step 2 and repeat.
While LMG-All considers more edges than LMG, it is not obvious that LMG-All always provides a better solution, due to its greedy nature. (A Python sketch of the greedy loop follows the pseudocode below.)

Algorithm 7 LMG-All
  Input: version graph G, storage constraint S
  Gaux ← extended version graph with an auxiliary root (see Algorithm 1 for the construction of Gaux)
  T ← minimum arborescence of Gaux rooted at vaux with respect to the weight function s
  let R(T) and S(T) be the total retrieval and storage cost of T, and let P(v) be the parent of v in T
  while S(T) < S do
    (ρmax, (umax, vmax)) ← (0, ∅)
    for each e = (u, v) ∈ E where u is not a descendant of v in T do
      Te ← T \ {(P(v), v)} ∪ {e}
      if R(Te) > R(T) then continue
      if S(Te) ≤ S(T) then ρe ← ∞ else ρe ← (R(T) − R(Te)) / (se − s_{P(v),v})
      if ρe > ρmax then (ρmax, (umax, vmax)) ← (ρe, e)
    if ρmax = 0 then return T
    T ← T \ {(P(vmax), vmax)} ∪ {(umax, vmax)}
  return T
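The following Python sketch mirrors the greedy loop of Algorithm 7; the set-based tree representation, the helper `is_descendant`, and the cost callables are our assumptions, not the paper's implementation.

```python
def is_descendant(u, v, parent):
    """True iff u lies in the subtree below v (walk up the parent pointers)."""
    while u is not None:
        if u == v:
            return True
        u = parent.get(u)
    return False

def lmg_all(T, edges, parent, s, R, S_cost, S_max):
    """One possible rendering of the LMG-All greedy loop (Algorithm 7).

    T      : current set of tree edges, as (u, v) pairs.
    edges  : candidate edges of the version graph.
    parent : dict mapping each node to its current parent in T.
    s      : dict of storage costs per edge.
    R, S_cost : callables returning total retrieval / storage cost of an edge set.
    """
    while S_cost(T) < S_max:
        best_rho, best_edge = 0.0, None
        for (u, v) in edges:
            if is_descendant(u, v, parent):
                continue  # replacing (parent[v], v) by (u, v) would form a cycle
            Te = (T - {(parent[v], v)}) | {(u, v)}
            if R(Te) > R(T):
                continue  # a swap must not hurt retrieval
            if S_cost(Te) <= S_cost(T):
                rho = float("inf")  # improvement at no extra storage
            else:
                rho = (R(T) - R(Te)) / (s[(u, v)] - s[(parent[v], v)])
            if rho > best_rho:
                best_rho, best_edge = rho, (u, v)
        if best_edge is None:
            return T  # no profitable swap remains
        u, v = best_edge
        T = (T - {(parent[v], v)}) | {(u, v)}
        parent[v] = u
    return T
```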
6.2 DP on Extracted Bidirectional Trees

We propose DP heuristics for both MSR and BMR, inspired by the algorithms in Sections 4 and 5. To ensure a reasonable running time, we extract bidirectional trees from input graphs [Footnote 16: Recall this means a digraph whose underlying undirected graph is a tree, as in Section 5.] and run the DP for treewidth 1 on the extracted graph, with the steps below:
(1) Fix a node vroot as the root. Calculate a minimum spanning arborescence A of the graph G rooted at vroot, using the sum of retrieval and storage costs as the weight.
(2) Generate a bidirectional tree G′ from A; namely, we have (u, v), (v, u) ∈ E(G′) for each edge (u, v) ∈ E(A).
(3) Run the proposed DP for MSR and BMR on directed trees (see Section 5.1 and Section 4) with input G′.
In addition, we also implement the following modifications for MSR to further speed up the algorithm:
1. The total storage cost (instead of retrieval) is discretized and used as the DP variable index, since it has a smaller range than the retrieval cost.
2. Geometric discretization is used instead of linear discretization, reducing the number of discretized “ticks.”
3. A pruning step is added, in which the DP discards all subproblem solutions whose storage cost exceeds some threshold. This reduces redundant computations.
All three original features are necessary in the proofs of our theoretical results, but in practice the modified implementation shows comparable results while significantly improving the running time.

7 Experiments for MSR and BMR

In this section, we discuss the experimental setup and results for empirical validation of the algorithms' performance, compared to the previous best-performing heuristics: LMG for MSR and MP for BMR. [Footnote 17: Our code can be found at https://anonymous.4open.science/r/Graph-Versioning-7343/README.md.] In all figures, the vertical axis (objective and run time) is presented in logarithmic scale. Run time is measured in milliseconds.

7.1 Datasets and Construction of Graphs

As in Bhattacherjee et al. [15], we conduct experiments on real-world GitHub repositories of varying sizes as datasets. We construct version graphs as follows. Each commit corresponds to a node with its storage cost equal to its size in bytes. Between each pair of parent and child commits, we construct bidirectional edges. The storage and retrieval costs of the edges are calculated, in bytes, based on the actions (such as addition, deletion, and modification of files) required to change one version to the other in the direction of the edge. We use simple diffs to calculate the deltas; hence the storage and retrieval costs are proportional to each other. Graphs generated this way are called “natural graphs” in the rest of the section.

Table 4: Natural and ER graphs overview.
  Dataset             #nodes   #edges   avg. cost sv   avg. cost se
  datasharing         29       74       7672           395
  styleguide          493      1250     1.4 × 10^6     8659
  996.ICU             3189     9210     1.5 × 10^7     337038
  freeCodeCamp        31270    71534    2.5 × 10^7     14800
  LeetCodeAnimation   246      628      1.7 × 10^8     1.2 × 10^7
  LeetCode (0.05)     246      3032     1.7 × 10^8     1.0 × 10^8
  LeetCode (0.2)      246      11932    1.7 × 10^8     1.0 × 10^8
  LeetCode (1)        246      60270    1.7 × 10^8     1.0 × 10^8

In addition, we also aim to test (1) cases where the retrieval and storage costs of an edge can greatly differ from each other, and (2) the effect of tree-like graph shapes on the performance of the algorithms. Therefore, we also conduct experiments on graphs modified in the following two ways (a sketch of the first follows at the end of this subsection):
Random compression. We simulate compression of data by scaling the storage cost with a random factor between 0.3 and 1, and increasing the retrieval cost by 20% (to simulate decompression). The resulting storage and retrieval costs are potentially very different.
ER construction. Instead of naturally constructing edges between each pair of parent and child commits, we construct the edges as in an Erdős-Rényi random graph: between each pair (u, v) of versions, with probability p both deltas (u, v) and (v, u) are constructed, and with probability 1 − p neither is constructed. The resulting graphs are much less tree-like. [Footnote 18: ER graphs have treewidth Θ(n) with high probability if the number of edges per vertex is greater than a small constant [38].] We construct ER graphs from the repository LeetCode because it has a moderate size and is the least tree-like. [Footnote 19: On LeetCode, the average unnatural delta is 10 times more costly than a natural delta. This ratio is around 100 for other repositories.]
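As referenced above, a minimal sketch of the random-compression transform, assuming edge costs stored as a dict (the representation is ours):

```python
import random

def randomly_compress(edge_costs, seed=None):
    """Sketch of the 'random compression' modification described above:
    each delta's storage cost is scaled by a random factor in [0.3, 1]
    (simulating compression) and its retrieval cost grows by 20%
    (simulating decompression). `edge_costs` maps (u, v) -> (storage, retrieval).
    """
    rng = random.Random(seed)
    return {
        e: (storage * rng.uniform(0.3, 1.0), retrieval * 1.2)
        for e, (storage, retrieval) in edge_costs.items()
    }
```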
7.2 Results in MSR

Figure 10, Figure 11, and Figure 12 demonstrate the performance of the three MSR algorithms on natural graphs, compressed natural graphs, and compressed ER graphs. The running times for the algorithms are shown in Figure 11 and Figure 12. Since the run times for most non-ER graphs exhibit similar trends, many are omitted here due to space constraints. Also note that, since DP-MSR generates all data points in a single run, its running time is shown as a horizontal line over the full range of storage constraints. We run DP-MSR with ϵ = 0.05 on most graphs, except ϵ = 0.1 for freeCodeCamp (for the feasibility of the run time). The pruning value for DP variables is set to twice the minimum storage for uncompressed graphs, and ten times the minimum storage for randomly compressed graphs.

[Figure 10: Performance of MSR algorithms (LMG, LMG-All, DP-MSR, OPT) on natural graphs: retrieval vs. storage for datasharing, styleguide, 996.ICU, and freeCodeCamp. OPT is obtained by solving an integer linear program (ILP, see Appendix D) using Gurobi [43]; the ILP takes too long to finish on all graphs except datasharing.]

Performance analysis. On most graphs, DP-MSR outperforms LMG-All, which in turn outperforms LMG. This is especially clear on natural version graphs, where DP-MSR solutions are nearly 1000 times better than LMG solutions on 996.ICU in Figure 10. On datasharing, DP-MSR almost perfectly matches the optimal solution (calculated via the ILP in Appendix D) for all constraint ranges. On naturally constructed graphs (Figure 10), LMG-All often has performance comparable with LMG when the storage constraint is low. This is possibly because both algorithms can only iterate a few times when the storage constraint is almost tight. DP-MSR, on the other hand, performs much better on natural graphs even for low storage constraints. On graphs with random compression (Figure 11), the dominance of DP in performance over the other two algorithms becomes less significant. This is anticipated, because DP only runs on a subgraph of the input graph. Intuitively, most of the information is already contained in a minimum spanning tree when storage and retrieval costs are proportional; otherwise, the dropped edges may be useful (they could have large retrieval but small storage cost, and vice versa). Finally, LMG's performance relative to our new algorithms is much worse on ER graphs (Figure 12). This may be because LMG cannot look at non-auxiliary edges once the minimum arborescence is initialized, and hence loses most of the information brought by the extra edges.

Run time analysis. For all natural graphs, we observe that LMG-All uses no more time than LMG (as shown in Figure 11). Moreover, LMG-All is significantly quicker than LMG on large natural graphs, which was unexpected considering that the two algorithms have almost identical structures in implementation. Possibly, this could be due to LMG making bigger, more expensive changes on each iteration (materializing a node with many dependencies, for instance) as compared to LMG-All. As expected, though, LMG-All takes much more time than the other two algorithms on denser ER graphs (Figure 12), due to the large number of edges.
DP-MSR is often slower than LMG, except when run on the natural construction of large graphs (Figure 11). However, unlike LMG and LMG-All, the DP algorithm returns a whole spectrum of solutions at once, so it is difficult to make a direct comparison. We also note that the run time of DP heavily depends on the choice of ϵ and the storage pruning bound. Hence, the user can trade off run time against solution quality by parameterizing the algorithm with coarser configurations.

[Figure 11: Performance and run time of MSR algorithms (LMG, LMG-All, DP-MSR, OPT) on compressed graphs: retrieval and run time vs. storage for datasharing, styleguide, and 996.ICU.]

7.3 Results in BMR

Compared to the MSR algorithms, the performance and run time of our BMR algorithms are much more predictable and stable. Surprisingly, they exhibit similar trends across the different graph constructions mentioned earlier in this section, including the non-tree-like ER graphs. Due to space limitations, we only present the results on natural graphs, as shown in Figure 13, to illustrate their performance and run time.

Performance analysis. For every graph we tested, DP-BMR outperforms MP on most of the retrieval constraint range. As the retrieval constraint increases, the gap between the MP and DP-BMR solutions also increases. We also observe that DP-BMR performs worse than MP when the retrieval constraint is at zero. This is because the bidirectional tree has fewer edges than the original graph. (Recall that the same behavior occurred for DP-MSR on compressed graphs.) We also note that, unlike MP, the objective value of the DP-BMR solution decreases monotonically with respect to the retrieval constraint. This is again expected, since these are optimal solutions of the problem on the bidirectional tree.

[Figure 12: Performance and run time of MSR algorithms (LMG, LMG-All, DP-MSR) on compressed ER graphs: retrieval and run time vs. storage for LeetCode (original), LeetCode (0.05), LeetCode (0.2), and LeetCode (complete).]

Run time analysis. For all graphs, the run times of DP-BMR and MP are comparable within a constant factor. This holds across the varying graph shapes and construction methods in all our experiments; representative data is exhibited in Figure 13. Unlike LMG and LMG-All, their run times do not change much with varying constraint values.

Overall evaluation. For MSR, we recommend always using one of LMG-All and DP-MSR in place of LMG for practical use. On sparse graphs, LMG-All dominates LMG both in performance and run time.
DP-MSR can also provide a frontier of better solutions in a reasonable amount of time, regardless of the input. For BMR, DP-BMR usually outperforms MP, except when the retrieval constraint is close to zero. Therefore, we recommend using DP in most situations.

[Figure 13: Performance and run time of BMR algorithms (MP, DP-BMR) on natural version graphs: storage and run time vs. retrieval constraint for styleguide and freeCodeCamp.]

8 Conclusion

In this paper, we developed fully polynomial-time approximation schemes for graphs with bounded treewidth, which often captures the typical manner in which edit operations are applied to versions. For practical use, we extracted the ideas behind this approach, as well as the previous LMG approach, and developed heuristics that significantly improved both performance and run time in experiments, while potentially allowing for parallelization." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.07540v1", |
| "title": "PKG API: A Tool for Personal Knowledge Graph Management", |
| "abstract": "Personal knowledge graphs (PKGs) offer individuals a way to store and\nconsolidate their fragmented personal data in a central place, improving\nservice personalization while maintaining full user control. Despite their\npotential, practical PKG implementations with user-friendly interfaces remain\nscarce. This work addresses this gap by proposing a complete solution to\nrepresent, manage, and interface with PKGs. Our approach includes (1) a\nuser-facing PKG Client, enabling end-users to administer their personal data\neasily via natural language statements, and (2) a service-oriented PKG API. To\ntackle the complexity of representing these statements within a PKG, we present\nan RDF-based PKG vocabulary that supports this, along with properties for\naccess rights and provenance.", |
| "authors": "Nolwenn Bernard, Ivica Kostric, Weronika \u0141ajewska, Krisztian Balog, Petra Galu\u0161\u010d\u00e1kov\u00e1, Vinay Setty, Martin G. Skj\u00e6veland", |
| "published": "2024-02-12", |
| "updated": "2024-02-12", |
| "primary_cat": "cs.HC", |
| "cats": [ |
| "cs.HC", |
| "cs.AI", |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "A personal knowledge graph (PKG) is \u201ca resource of structured information about entities related to an individual, their attributes, and the relations between them\u201d [1]. A PKG offers the possibility to centrally store all information related to its owner such as per- sonal relationship, preferences on food, and calendar data [23]. This enables the delivery of highly personalized services while main- taining the owner\u2019s full control over their data. In today\u2019s digital world, where personal data is often fragmented across multiple ac- counts with different service providers, a PKG provides a solution for consolidating information. Crucially, one of the most essential features of a PKG is that the individual is put in control of their data, allowing owners to determine what data is stored and what services have access to it [23]. Despite the clear potential of PKGs WWW \u201924, May 13\u201317, 2024, Singapore, Singapore \u00a9 2024 Association for Computing Machinery. This is the author\u2019s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of Proceedings of the ACM Web Conference 2024 (WWW \u201924), https://doi.org/10.1145/ nnnnnnn.nnnnnnn. and the growing research interest around them [2], efforts have so far remained mostly on the conceptual level. Practical implementa- tions, especially those that directly interface with users, are lacking. This paper aims to address that gap. Similar to the concept of PKGs, Solid (Social Linked Data) [20] is an existing initiative that aims to put individuals in control of their own data. Solid allows users to store personal data in decentral- ized \u201cPods\u201d (Personal Online Data Stores), giving them fine-grained control over which apps can access which portions of their data. However, Pods introduce a level of complexity that may pose chal- lenges for ordinary web users. Managing data within Pods requires a learning curve, and users accustomed to the simplicity of tradi- tional services might find this transition difficult. Solid interfaces and applications have particularly been criticized for not being user-friendly, and compatibility issues between Pod providers and Solid apps lead to inconsistent user experiences.1 In this work, we propose a user-friendly solution to managing PKGs, consisting of a web-based PKG Client and a service-oriented PKG API. To dramatically lower the barrier for end users, we let them administer and interact with their PKG via natural language statements, enabled by recent advances in Large Language Mod- els (LLMs). For example, a user might simply state a preference \u201cI dislike all movies with the actor Tom Cruise\u201d to be recorded in their PKG. While this example is simplistic, we demonstrate that representing it in a PKG can actually become complex due to en- tanglements between different entities and relationships between them, such as all movies and Tom Cruise. In order to tackle this challenge, we develop a PKG vocabulary on top of RDF to represent such statements both in natural language and as structured data. Furthermore, our vocabulary defines a set of properties, such as access rights and provenance, to enrich the statements. In summary, the main contributions of this work are: (1) A PKG vocabulary based on RDF reification, leveraging existing vocabularies, to represent statements in a PKG. 
(2) A PKG API that implements both user-facing and service-oriented functionalities to access and manage a PKG. It includes a novel NL2PKG feature, enabling the translation of natural language statements to API calls. (3) A web-based PKG Client to browse, expand, and manage a PKG, prioritizing simplicity, intuitive design, and visualization features for easy user understanding and control. Our complete solution along with a video demonstration may be found at https://github.com/iai-group/pkg-api. [Footnote 1: See, e.g., discussions at https://www.reddit.com/r/solid/] [Figure 1: Overview of the PKG tooling developed in this work.]", |
| "main_content": "45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 PKG API: A Tool for Personal Knowledge Graph Management Nolwenn Bernard Ivica Kostric Weronika \u0141ajewska Krisztian Balog Petra Galu\u0161\u010d\u00e1kov\u00e1 Vinay Setty Martin G. Skj\u00e6veland University of Stavanger Stavanger, Norway {nolwenn.m.bernard,ivica.kostric,weronika.lajewska,krisztian.balog,petra.galuscakova,vinay.j.setty,martin.g.skjeveland}@uis.no ABSTRACT Personal knowledge graphs (PKGs) offer individuals a way to store and consolidate their fragmented personal data in a central place, improving service personalization while maintaining full user control. Despite their potential, practical PKG implementations with user-friendly interfaces remain scarce. This work addresses this gap by proposing a complete solution to represent, manage, and interface with PKGs. Our approach includes (1) a user-facing PKG Client, enabling end-users to administer their personal data easily via natural language statements, and (2) a service-oriented PKG API. To tackle the complexity of representing these statements within a PKG, we present an RDF-based PKG vocabulary that supports this, along with properties for access rights and provenance. CCS CONCEPTS \u2022 Information systems \u2192Web services; Web applications; Data management systems; Personalization; \u2022 Human-centered computing \u2192Natural language interfaces. KEYWORDS Personal Knowledge Graphs, Personal Data Management, Knowledge Representation, Semantic Technologies ACM Reference Format: . 2024. PKG API: A Tool for Personal Knowledge Graph Management. In Proceedings of Proceedings of the ACM Web Conference 2024 (WWW \u201924). ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn While there are several definitions of PKGs [1, 5], they all agree that PKGs can be seen as a specialized approach to a personal information management (PIM) system [9, 16]. PIM systems provide storage and access to personal data, with a focus on personal data control for third-party services. Several tools have been proposed, including Solid [20], MyData [12], and OpenPDS/SA [7], and are available as either free or commercial services. Compared with general PIM tools, a PKG stores data in the form of a knowledge graph (KG) [8], where information about entities and their relations is directly modeled as a graph structure. This structure, grounded in a pre-defined ontology, inherently supports operations such as summarization of personal information [10, 19] and cross-domain recommendation [24]. Our proposed PKG API focuses on enabling user-friendly interactions via natural language (NL). This requires a way to automatic translate NL statements to a structured query language that can directly interact with the PKG. Methods for such NL-tostructured-language translation traditionally focus on conversion to SQL [30, 32], with interactive approaches [26] and LLMs [13, 22] being the state of the art. A similar pattern is followed in the case of conversion from natural language to SPARQL, which can be used for querying knowledge graphs. While earlier methods were based, for example, on rules [17] and machine translation [31], recent studies start to explore LLMs [25, 29]. LLMs have also been utilized to translate NL queries to different API calls [18, 21]. 
3 OVERVIEW AND ARCHITECTURE This work aims to provide a simple and user-friendly way to manage a PKG. It comprises a web interface, i.e., the PKG Client, and the PKG API. The PKG API serves as a middleman between the PKG and both the PKG Client and external service providers. It has two entry points: one for natural language statements and another for HTTP requests. See Fig. 1 for an overview. A statement is a fundamental unit of information in the PKG, containing at minimum its text content. For example, “I dislike all movies with the actor Tom Cruise” is a statement. Statements can be further enriched with properties defined in the PKG vocabulary (Section 4), following the Subject-Predicate-Object (SPO) model. In this example, we can extract “I” as the subject, “dislike” as the predicate, and “all movies with the actor Tom Cruise” as the object. Note that our example is a particular type of statement, one that expresses a preference. Preference statements are especially valuable for service providers seeking to personalize user experiences. Therefore, our PKG vocabulary (detailed in Section 4) explicitly supports the representation of preferences, which are derived from statements via a derivation relationship. [Figure 2: Screenshot of the home screen after submission of a natural language statement.] The PKG Client is a web interface connecting users to their PKG. It is designed to be intuitive and user-friendly, aiming to make PKG administration accessible to a broad range of users. The home screen features a form with a text area for users to input natural language statements, as well as an area to display the outcomes of these statements (Fig. 2). Additional screens within the interface provide forms for specific tasks like adding statements manually to the PKG and visualizing the PKG; these features are primarily designed for advanced users with knowledge of semantic web technologies. When a natural language statement is received by the PKG API, whether from the PKG Client or external services, it is processed by the NL2PKG component (Section 5), which performs two main steps. First, a natural language understanding step identifies the action to execute (e.g., adding a new statement), extracts properties (subject, predicate, object), and infers whether a preference is expressed (e.g., identifying a negative preference towards “Tom Cruise”). At this stage, all the properties and the preference are represented as text. Next, an entity linking step attempts to resolve the properties and preference to IRIs (e.g., “Tom Cruise” to <http://dbpedia.org/resource/Tom_Cruise>). Note that these steps may be performed asynchronously on existing statements in case the natural language understanding and/or entity linking components are updated. Once these steps are completed, the PKG Connector generates a SPARQL query with the annotated statement and preference, and sends it to the PKG. Figure 3 illustrates these steps. In the case where the user or service provider decides to interact with the PKG using HTTP requests, the PKG API directly triggers the PKG Connector to create and send the corresponding SPARQL query to the PKG. 4 PKG VOCABULARY The PKG vocabulary is used for expressing all statements to be kept in a PKG.
It is specified as a set of SHACL shapes [11] over existing RDF vocabularies such as RDF [4], SKOS [15], PAV [6], and the Weighted Interests Vocabulary [3], including custom vocabulary terms necessary for expressing access rights to stored statements. The main design idea behind the vocabulary is to provide a simple data model with which one can represent all kinds of incoming statements, and to allow for incremental post-processing of statements to increase their quality and precision. The core modeling pattern is standard RDF reification [4]: a statement is represented by an instance of rdf:Statement, where the original statement text is represented by a literal annotation on the statement. The extracted subject, predicate, and object are connected to the rdf:Statement using rdf:subject, rdf:predicate, and rdf:object, respectively, and can either be represented directly as an IRI or, in the case that an appropriate IRI is not found, as an instance of skos:Concept with the extracted text as a literal annotation. Instances of rdf:Statement and skos:Concept are further analyzed and can, if a match is found, be related to other known resources from the PKG or from external KGs using, e.g., the SKOS properties skos:related, skos:broader, or skos:narrower. The analysis may also amend statements by asserting a preference the statement's subject has towards the object. Additional semantic descriptions may be added to statements and concepts at the discretion and capabilities of the analysis tools; e.g., a concept like “All movies with the actor Tom Cruise” could be expressed as being a subclass of or equivalent to a, possibly complex, constructed OWL [28] class expression. However, this is outside the scope of our current implementation. Every rdf:Statement is assumed to be annotated with provenance information following the PAV ontology. Finally, the PKG vocabulary enables straightforward access control at the rdf:Statement level by explicitly stating which services have read and write access to the statement using the properties pkg:readAccessRights and pkg:writeAccessRights. The bottom block of Fig. 3 demonstrates the use of the PKG vocabulary. It is available at its namespace IRI http://w3id.org/pkg/, including SHACL shape definitions, documentation, and examples. 5 NATURAL LANGUAGE TO PKG To facilitate user-friendly interactions with PKGs, we present a two-stage NL2PKG approach that translates natural language statements to API calls that perform operations on the PKG, such as storing stated preferences or retrieving previous statements. [Figure 3: Life of a statement from NL to PKG.] In the first stage, we leverage LLMs to classify user intent, extract an SPO triple, and identify whether a preference was expressed in the NL statement. Intents specify the desired action on the PKG: (a) ADD inserts a statement, (b) GET retrieves matching statements, (c) DELETE removes a statement, and (d) UNKNOWN handles unrecognized statements. Preferences are represented as +1 (positive) or −1 (negative) and are relevant for ADD intents in statements expressing user likes or dislikes. For example, “Bob likes Oppenheimer” translates to an ADD intent, inserting the triple ⟨Bob, Likes, Oppenheimer⟩ into the PKG with a preference of +1.
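To illustrate the reification pattern and access-rights property described above, here is a small rdflib sketch for the “Bob likes Oppenheimer” example; apart from pkg:readAccessRights and the standard RDF/SKOS terms, the property choices and IRIs are our assumptions.

```python
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, SKOS

PKG = Namespace("http://w3id.org/pkg/")            # vocabulary IRI from the paper
EX = Namespace("http://example.org/")              # hypothetical user namespace
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()
st = BNode()

# Standard RDF reification: the statement node carries the original text
# as a literal annotation plus subject/predicate/object links.
g.add((st, RDF.type, RDF.Statement))
g.add((st, RDFS.label, Literal("Bob likes Oppenheimer")))  # annotation property is our choice
g.add((st, RDF.subject, EX["Bob"]))                 # resolved by a PKG-local entity linker
g.add((st, RDF.object, DBR["Oppenheimer_(film)"]))  # hypothetical linking result

# No appropriate IRI for "likes", so fall back to a skos:Concept.
pred = BNode()
g.add((pred, RDF.type, SKOS.Concept))
g.add((pred, RDFS.label, Literal("likes")))
g.add((st, RDF.predicate, pred))

# Statement-level access control, using a property named in the text.
g.add((st, PKG.readAccessRights, EX["movie-recommender"]))

print(g.serialize(format="turtle"))
# The derived +1 preference would be attached to the statement via the
# vocabulary's derivation relationship, which we do not model here.
```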
Specifically, we employ pre-trained LLMs with few-shot chain-of-thought reasoning prompts. Separate prompts are used for intent classification, SPO-triple extraction, and preference identification. [Footnote 2: The prompts used can be found at https://github.com/iai-group/pkg-api/tree/main/data/llm_prompts/cot] In the second stage, we employ an entity linker to resolve the SPO elements, which are initially extracted in their surface-form representations. They are transformed into normalized entities and relations congruent with external KGs, such as DBpedia. Note that the subject element most commonly belongs to the user's private circle (e.g., “I” and “my mom”); thus, we argue that it should be resolved using an entity linker specific to the PKG. 6 IMPLEMENTATION Our solution contains two main components: (1) the PKG API, served as a RESTful API with a backend server based on Flask, and (2) a user interface, the PKG Client, implemented as a React application. Central to the PKG API's functionality is the NL2PKG module. In our demo, we use the Ollama framework to deploy and experiment with Llama2-7b and Mistral-7b as LLMs, with the latter being the default option based on a set of preliminary experiments. For entity linking, we offer both REL [27], as our default, and DBpedia Spotlight [14] as an alternative. The code is designed to be modular to allow for easy experimentation with different LLM-based annotators and entity linkers in the future. Natural language statements processed by the NL2PKG module are further handled by the PKG Connector, which is responsible for the creation and execution of SPARQL queries against the PKG. The PKG Connector uses a dedicated Python package, RDFLib, for generating and executing SPARQL queries in RDF format. 7 CONCLUSION Personal knowledge graphs hold the potential to be useful tools for organizing and providing personal information. As the volume of digital data continues to grow, alongside the number of services that can utilize it, the need for user-centric management tools becomes ever more pressing. Recognizing that existing tools are often too complex to be used by non-expert users, we focused on developing a robust internal data representation for PKGs, paired with an API and a user-friendly PKG Client. A key novelty of our approach is enabling users to interact with their PKG directly through natural language statements. Our open-source demo showcases the viability of this concept with a particular focus on understanding and representing user preferences. This work represents a major step forward in the practical realization of PKGs, opening avenues for research into both intuitive user-centric interaction methods and broader applications. ACKNOWLEDGMENTS This research was partially supported by the Norwegian Research Center for AI Innovation, NorwAI (Research Council of Norway, project number 309834)." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2403.06936v1", |
| "title": "Counterfactual Reasoning with Knowledge Graph Embeddings", |
| "abstract": "Knowledge graph embeddings (KGEs) were originally developed to infer true but\nmissing facts in incomplete knowledge repositories. In this paper, we link\nknowledge graph completion and counterfactual reasoning via our new task CFKGR.\nWe model the original world state as a knowledge graph, hypothetical scenarios\nas edges added to the graph, and plausible changes to the graph as inferences\nfrom logical rules. We create corresponding benchmark datasets, which contain\ndiverse hypothetical scenarios with plausible changes to the original knowledge\ngraph and facts that should be retained. We develop COULDD, a general method\nfor adapting existing knowledge graph embeddings given a hypothetical premise,\nand evaluate it on our benchmark. Our results indicate that KGEs learn patterns\nin the graph without explicit training. We further observe that KGEs adapted\nwith COULDD solidly detect plausible counterfactual changes to the graph that\nfollow these patterns. An evaluation on human-annotated data reveals that KGEs\nadapted with COULDD are mostly unable to recognize changes to the graph that do\nnot follow learned inference rules. In contrast, ChatGPT mostly outperforms\nKGEs in detecting plausible changes to the graph but has poor knowledge\nretention. In summary, CFKGR connects two previously distinct areas, namely KG\ncompletion and counterfactual reasoning.", |
| "authors": "Lena Zellinger, Andreas Stephan, Benjamin Roth", |
| "published": "2024-03-11", |
| "updated": "2024-03-11", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "cs.AI", |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Reasoning about hypothetical situations (counter- factual reasoning) and anticipating the effects of a change in the current state of the world is central to human cognition (Rafetseder and Perner, 2014; Van Hoeck et al., 2015), and has been identified as a key concept in game theory (Aumann, 1995; Halpern, 1999) and agent-based systems (Icard et al., 2018; Parvaneh et al., 2020). It has even been argued that the capacity to reason about alternative configurations of the world could be a pre-requisite Figure 1: A hypothetical scenario and its implications, expressed in the language of knowledge graph triples to the existence of free will and a sense of agency (McCarthy, 2000; Kulakova et al., 2017). Recently, there has been an increased interest in evaluating and improving counterfactual reasoning of AI sys- tems, in particular, large language models (LLMs) (Qin et al., 2019; Frohberg and Binder, 2022; Li et al., 2023). Knowledge graphs (KGs) express rich informa- tion about the world as an explicit collection of triples, such as (Paris, capital, France), and knowl- edge graph embeddings (KGEs) effectively infer true but missing facts from incomplete knowledge repositories (Hogan et al., 2021; Ji et al., 2021). Yet, to the best of our knowledge, KGEs have not been explored for counterfactual reasoning. In this work, we link counterfactual reasoning to knowledge graph completion (KGC) via our new task CFKGR1 (CounterFactual KG Reasoning) which requires models to classify the validity of facts given a hypothetical scenario. CFKGR de- scribes the original world state as a KG and hy- pothetical scenarios as edges that are added to the graph. The hypothetical scenario leads to the emer- gence of new facts in the KG while leaving (most) already existing ones intact. Figure 1 illustrates a hypothetical scenario in which Paris is the capital of Japan. To perform well on CFKGR, models must be capable of detecting plausible additions 1The data and code are available at https://github. com/LenaZellinger/counterfactual_KGR. arXiv:2403.06936v1 [cs.LG] 11 Mar 2024 to the graph, e.g., (Paris, continent, Asia), while maintaining knowledge of unaffected facts, e.g., (Elvis Presley, occupation, musician). We create the first benchmark datasets for CFKGR, which are based on the CoDEx KGC benchmark (Safavi and Koutra, 2020) and provide diverse hypothet- ical scenarios with corresponding plausible addi- tions to the KG derived from inference rules (that were mined from the KG (Lajus et al., 2020)). We validate our data-generating process and underly- ing assumptions via thorough human annotation. Lastly, we introduce COULDD (COUnterfactual Reasoning with KnowLedge Graph EmbeDDings), a method which updates existing KGEs based on counterfactual information. COULDD follows a standard KGE training scheme using the hypotheti- cal scenario and negative sampling. Training stops once the hypothetical scenario is classified as valid. In our experiments, COULDD is initialized with five different KGE methods. We observe that it can detect plausible counterfactual changes to the graph that follow prominent inference patterns in the KG while maintaining performance on unaf- fected triples. We repeat the same experiments with ChatGPT, i.e., gpt-3.5-turbo, provided with similar prompts to the human annotators. ChatGPT per- forms better at detecting plausible additions to the graph than most KGE-based methods but exhibits poor knowledge retention. 
Qualitative analysis of answers provided by ChatGPT shows that it largely failed to understand the task on retained facts, as it tried to infer them from the provided information. Evaluating on human-annotated data leads to a drop in overall performance for KGEs and ChatGPT alike. To summarize, our main contributions are as follows: • We propose CFKGR, a challenging task for counterfactual reasoning on KGs, and create corresponding, partially human-verified, datasets, which we make publicly available. • We introduce COULDD, a general method for adapting existing KGE methods to make inferences given hypothetical scenarios, and show that it improves reasoning on counterfactual graphs over pre-trained embeddings. • We compare counterfactual reasoning with KGEs to ChatGPT and show that ChatGPT outperforms KGEs in detecting plausible counterfactual inferences but struggles to recall unrelated knowledge, unlike COULDD.", |
| "main_content": "We introduce Counterfactual KG Reasoning (CFKGR) a novel task to assess the ability of machine learning systems to reason in hypothetical scenarios. CFKGR describes the originally observed world state as a knowledge graph and introduces hypothetical scenarios by adding previously unseen facts to the graph. To perform well on CFKGR, models need to (1) identify plausible changes to the original world state induced by the hypothetical scenario and (2) understand which facts are unaffected by the hypothetical scenario. 2.1 Definition of Counterfactual Graphs Formally, CFKGR defines the original world state via a knowledge graph G = {E, R, F}, where E and R denote the sets of entities and relations represented in the knowledge graph. The fact set F represents our knowledge about the world as triples (h, r, t) \u2208F \u2282E \u00d7 R \u00d7 E. The fact set is usually split into disjoint subsets Ftrain, Fvalid and Ftest. We denote a hypothetical scenario by a triple \u03c4 c := (h, r, t) / \u2208F. The counterfactual graph, in which \u03c4 c holds, is then characterized by the fact set Fc := F \\ F\u2212\u222aF+, where F+ denotes the facts that emerge given the hypothetical scenario, and F\u2212denotes facts that contradict the scenario and cannot hold any longer. We say \u03c4 c changes a triple \u03c4 if either \u03c4 \u2208F+ or \u03c4 \u2208F\u2212. In the following, we formulate the assumptions underlying our task. Closed-world assumption. We adopt the standard closed-world assumption (Reiter, 1978), which states that facts that are not part of the KG, i.e., \u03c4 / \u2208F, are false. Thus, each \u03c4 / \u2208F is a possible hypothetical scenario in our setup. Logic-world assumption. We assume that plausible changes to the graph largely follow some regularity and can hence be modeled via (potentially very complex) logical rules. While available rule sets have limited coverage and precision, we can leverage them to model a subset of plausible changes to a KG. By employing the logic-world assumption, we can represent an approximation of Fc via a set of rules and the original fact set. 2.2 Evaluation We formulate CFKGR as a binary classification task in which the goal is to predict whether a given triple is present in the counterfactual graph or not. Triples \u03c4 \u2208Fc receive label 1, while all other Elvis Presley Denmark Danish official language citizen of speaks Europe continent musician occupation Japan Asia continent Korean speaks Walt Disney speaks educated at Brazil continent continent spouse Instance Notation Original KG CF KG Counterfactual \u03c4 c \u03c4 c / \u2208F \u03c4 c \u2208Fc Inference \u03c4 i \u03c4 i / \u2208F \u03c4 i \u2208Fc Unchanged (near) \u03c4 n \u03c4 n \u2208F \u03c4 n \u2208Fc Unchanged (far) \u03c4 f \u03c4 f \u2208F \u03c4 f \u2208Fc Corruptions \u03c4h\u2032, \u03c4t\u2032, \u03c4r\u2032 \u03c4h\u2032, \u03c4t\u2032, \u03c4r\u2032 / \u2208F \u03c4h\u2032, \u03c4t\u2032, \u03c4r\u2032 / \u2208Fc Figure 2: Overview over the different types of facts, given the hypothetical scenario that Elvis Presley is a citizen of Denmark. The green edge (Elvis Presley, speaks, Danish) emerges from adding the blue edge (Elvis Presley, citizen of, Denmark) to the knowledge graph. Purple and orange edges are present in the original KG and unaffected by the scenario. Grey edges are neither present in the original nor the counterfactual knowledge graph. triples are labeled 0. 
Since scoring all possible triples is infeasible, we consider a smaller set of carefully chosen test cases. Given a counterfactual τc ∉ F and a rule, we define: (1) a counterfactual inference τi that follows from the rule and allows us to measure whether the model can correctly predict changes to the graph given τc, (2) retained facts, which are unaffected by the hypothetical scenario and should still be classified as valid in the counterfactual graph, and (3) random head, tail, and relation corruptions of inferences and retained facts, which ensure that the model does not predict unsolicited triples as valid additions. We denote the corruptions for a triple τ by τh′, τt′, and τr′. For (2), we distinguish between near facts τn, which are in the one-hop neighborhood of τc, and far facts τf, sampled from its complement. Note that they are sampled from the entire fact set F to measure knowledge retention. Figure 2 illustrates a counterfactual scenario and its associated test cases. We use the following metrics to evaluate the performance on our benchmark; the concrete formulas can be found in Appendix A. We compute (1) the F1-score over all test cases in the dataset, to measure the overall predictive performance on counterfactual graphs, (2) the accuracy on changed facts, i.e., triples that have a different label before and after the hypothetical scenario is introduced, and (3) the F1-score on unchanged facts, i.e., triples that have the same label before and after the hypothetical scenario is introduced. 3 CFKGR: Dataset Creation For our dataset construction, we leverage rules found by rule mining systems, which capture prominent patterns in KGs. Automatically mined rules are naturally compatible with the content of the KG and are known to be a useful tool for KGC (e.g., Meilicke et al., 2019; Sadeghian et al., 2019a). Since there is no trivial way to reliably generate F−, we only consider the additions F+. Concretely, we define F+ via mined composition rules of the form (X, r1, Y) ∧ (Y, r2, Z) → (X, r3, Z) (1), where r1, r2, r3 ∈ R. We refer to (X, r1, Y) ∧ (Y, r2, Z) as the rule body and (X, r3, Z) as the inference. The triples (X, r1, Y) and (Y, r2, Z) are called the first and second body atoms, respectively. Replacing X, Y, and Z by concrete entities x, y, z ∈ E creates an instantiation of the rule. In the following, we will use the short-hand notation (r1, r2, r3) to denote a rule as described in (1). We choose composition rules since they are well studied in standard KG completion benchmarks (Safavi and Koutra, 2020) and inferential benchmarks (Cao et al., 2021; Liu et al., 2023). Moreover, composition rules as given in (1) infer local changes. This is desirable, since most relevant changes induced by a hypothetical scenario will likely occur in its close neighborhood. We consider understanding the implications induced by composition rules a first step towards more general and complex hypothetical reasoning. [Figure 3: Creation of a hypothetical scenario. Example rule: (X, country, Y) ∧ (Y, part of, Z) → (X, continent, Z), instantiated with Moscow, Russia, Canada, and North America.] 3.1 Data Generating Process In the following, we give a high-level overview of our data generating process; a simplified sketch follows below. We focus on creating hypothetical scenarios for the first body atom of a given rule. Appendix C provides a detailed description and the full algorithm.
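A simplified Python sketch of this generation step (ours; it omits the entity-suitability and near/far-fact sampling described next):

```python
import random

def make_scenario(train_triples, all_triples, rule, rng=random):
    """For one composition rule (r1, r2, r3): rewire an existing edge
    e1 = (x, r1, y) onto the head ybar of an existing edge e2 = (ybar, r2, z)
    so that the rule body fires, and read off the inference (x, r3, z).
    """
    r1, r2, r3 = rule
    e1s = [(h, t) for (h, r, t) in train_triples if r == r1]
    e2s = [(h, t) for (h, r, t) in train_triples if r == r2]
    rng.shuffle(e1s)
    rng.shuffle(e2s)
    for (x, y) in e1s:
        for (ybar, z) in e2s:
            tau_c = (x, r1, ybar)  # hypothetical scenario
            tau_i = (x, r3, z)     # counterfactual inference
            # Both must be absent from the original KG (closed world).
            if ybar != y and tau_c not in all_triples and tau_i not in all_triples:
                return tau_c, tau_i
    return None
```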
Given a knowledge graph and a rule set, we generate several hypothetical scenarios for each rule by altering a fact in the KG such that it triggers the rule, as illustrated in Figure 3. Concretely, for each rule (r1, r2, r3), we search for existing edges e1 := (x, r1, y) ∈ Ftrain and e2 := (ȳ, r2, z) ∈ Ftrain, ensuring that the resulting hypothetical scenario τc := (x, r1, ȳ) and inference τi := (x, r3, z) are not in the original KG. Sampling e1 and e2 without any constraints can result in nonsensical scenarios and inferences. Hence, we ensure that the entities in τc and τi are suitable for the given relation by restricting them to entities that occur with said relation in the original KG. Once suitable τc and τi are found, we randomly sample two near facts τn from the one-hop neighborhood of τc and one far fact τf from its complement. Note that we sample τn and τf from the full fact set F, instead of only Ftest, as their primary purpose is to measure knowledge retention as opposed to inference capabilities. When creating head and tail corruptions of a given fact, we restrict the sample space, since random corruptions, which tend to result in nonsensical triples, have previously been shown to be easily detectable for KGE methods (Safavi and Koutra, 2020). For head (tail) corruptions, we require that the replacements are also heads (tails) for the relation in the original graph. [Footnote 2: In rare cases where these constraints only allow for creating triples already present in the KG or inferred by our rule set, we default to the full entity set.] For relation corruptions, we do not employ additional constraints. 3.2 CFKGR-CoDEx Based on the procedure described in Section 3.1, we create the first benchmark datasets for CFKGR, based on the CoDEx knowledge graph completion benchmark (Safavi and Koutra, 2020). We choose CoDEx since it covers diverse content, uses easily interpretable relations, and contains rich auxiliary information, such as entity types. CoDEx provides three knowledge graphs of varying sizes (S, M, and L), collected from Wikidata (Vrandečić and Krötzsch, 2014), and corresponding composition rules obtained by the rule-mining system Amie3 (Lajus et al., 2020). CoDEx-S and CoDEx-M additionally contain verified negative triples. An overview of the resources provided by CoDEx can be found in Appendix B. We use the available Amie3 patterns for each CoDEx dataset as our rule set and create at most 25 unique counterfactual triples per body atom for each rule. We subsequently split them into a validation and a test set, ensuring that there are no overlapping rules or counterfactuals between validation and test. [Footnote 3: For M, there are rules which can produce the same counterfactual inference pairs (using a different context); there are 14 such duplicates in the test set. Still, there is no overlap in counterfactuals between validation and test.] Table 1 provides statistics about the created datasets.

Table 1: CFKGR dataset overview (“Rules”: number of rules used to create the dataset; “Facts”: total number of test cases).
  Dataset          Valid rules / facts   Test rules / facts
  CFKGR-CoDEx-S    5 / 3600              12 / 8848
  CFKGR-CoDEx-M    5 / 3936              26 / 19584
  CFKGR-CoDEx-L    5 / 4000              39 / 30064

In the following section, we will explore how well the resulting test cases align with human counterfactual reasoning. 3.3 Human Annotation We validate our data generating process via human annotation. For each of the 31 rules in CFKGR-M, we verify 10 test instances (5 per atom). [Footnote 4: Except for one rule, which only produced one unique counterfactual according to our conditions for the second atom.]
We annotate τi, τf, τn1, τn2, and τir′, and omit the remaining corruptions, as their construction relies on the commonly used closed-world assumption (Reiter, 1978). This results in 1530 annotated instances, which were labeled by four to six annotators as either likely (1), unlikely (0), or unsure/too little information (−1), given verbalizations of the hypothetical scenario and the context triggering the respective inference rule. We observe a Krippendorff's alpha (Hayes and Krippendorff, 2007) of 0.653, computed using the simpledorff library, which indicates substantial agreement (Landis and Koch, 1977). The annotation guidelines can be found in Appendix D. Table 2 summarizes the annotation results.

Table 2: Annotation results (“# Labeled”: number of annotated examples per category; “Expected”: the label assigned by our automatic process; “As expected”: percentage of samples for which the expected label coincides with the majority vote).
  Category         # Labeled   Expected   As expected   Majority votes 0 / 1 / −1 / Tied
  Inference        306         1          58.2%         60 / 178 / 27 / 41
  Far fact         306         1          99.7%         0 / 305 / 0 / 1
  Near fact        612         1          95.6%         16 / 585 / 2 / 9
  Relation corr.   306         0          86.9%         266 / 20 / 3 / 17

Inferences seem to be the most difficult category to annotate, as they show the highest number of ties and “unsure/too little information” labels. Moreover, we observe the highest number of deviations from our expected label for this test case. This indicates that rules mined for factual knowledge graph completion cannot always be used for human-like counterfactual reasoning. On relation corruptions, we observe a noticeable number of inferences that are not implied by our rules but are still considered valid by humans, or are at least debatable. Possible explanations are the limited coverage of the rule set or unintuitive verbalizations of the relations. For near and far facts, we obtain a label distribution that largely agrees with our assumptions. 4 Counterfactual Reasoning with Knowledge Graph Embeddings KGE models find low-dimensional vector representations for entities and relations while preserving the information contained in the KG. To judge the plausibility of a given triple, KGE models use a scoring function φ(h, r, t) : E × R × E → R. A triple is typically classified as valid if it satisfies φ(h, r, t) ≥ μr for a relation-specific threshold μr ∈ R. To extend KGEs to our task, we propose COULDD (COUnterfactual Reasoning With KnowLedge Graph EmbeDDings), a general method for adapting existing knowledge graph embeddings with respect to a given hypothetical scenario.
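Before turning to COULDD, the thresholded classification rule above can be written as a small helper (a sketch; the names are ours):

```python
def classify(triple, phi, mu):
    """A triple (h, r, t) is predicted valid iff its KGE score reaches the
    relation-specific threshold; `phi` (scoring function) and `mu`
    (per-relation thresholds) come from the pre-trained model."""
    h, r, t = triple
    return phi(h, r, t) >= mu[r]
```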
Algorithm 1: COULDD training and prediction. The short-hand notation φθ(Tτc) denotes scoring all test cases associated with τc, and Lθ denotes the cross-entropy loss.
  Data: G = {E, R, F}, CFKGR data D, original embeddings θ0, number of iterations E, number of additional samples N, learning rate α, thresholds μ1, μ2, ..., μ|R|
  Result: CFKGR predictions
  ŷ ← {}
  foreach (τc, Tτc) ∈ D do
    θ ← θ0
    for e ∈ {1, ..., E} do
      S ← sample N triples from Ftrain
      B ← {τc} ∪ S
      θ ← Optimizer(Lθ(B), α)
      if φθ(τc) ≥ μr then break
    ŷ ← ŷ ∪ {φθ(Tτc)}
  return ŷ

COULDD is initialized from existing embeddings trained on the original KG. For each hypothetical scenario, these embeddings are updated and subsequently evaluated on the corresponding test cases. COULDD's update scheme only minimally changes standard KGE training: in each iteration, the existing embeddings are fine-tuned on a batch consisting of the counterfactual triple τc and N additional randomly sampled edges from the training graph. Negative training examples are generated by randomly corrupting the head and tail entities of each triple in the batch. The embeddings are updated using the standard cross-entropy loss. Once the counterfactual triple τc exceeds the classification threshold, training is stopped in order to avoid an excessive perturbation of the pre-trained embeddings. [Footnote 5: Note that there is no traditional validation set for the individual updates on which we could perform early stopping.] Importantly, COULDD only requires access to the counterfactual triple τc and the original fact set F and does not require additional task-specific training data or information about the rules used to generate CFKGR datasets. [Footnote 6: We only use the test cases in the validation set for hyperparameter tuning.] As a result, COULDD can also be applied in rule-free evaluation setups.

Table 3: Test performance of pre-trained embeddings and COULDD on CFKGR. For COULDD, we report the mean ± standard deviation across 5 runs. For all scores, higher is better. (In the original, bold marks the better of each pre-trained KGE and its COULDD counterpart, and the best results per dataset are underlined.)
CFKGR-CoDEx-S (F1 / Changed / Unchanged):
  RESCAL 60.82 / 27.12 / 63.28; COULDD-RESCAL 61.68 ± 0.14 / 32.48 ± 0.73 / 63.48 ± 0.16
  TransE 58.94 / 23.15 / 61.87; COULDD-TransE 60.49 ± 0.12 / 26.8 ± 0.81 / 63.16 ± 0.09
  ComplEx 62.45 / 29.11 / 64.90; COULDD-ComplEx 67.76 ± 0.3 / 37.94 ± 0.67 / 69.95 ± 0.29
  ConvE 61.04 / 16.64 / 65.39; COULDD-ConvE 61.51 ± 0.11 / 16.96 ± 0.72 / 65.92 ± 0.12
  TuckER 64.25 / 15.01 / 69.40; COULDD-TuckER 66.03 ± 0.13 / 35.99 ± 1.0 / 68.09 ± 0.19
  gpt-3.5-turbo 47.83 / 68.90 / 40.22
CFKGR-CoDEx-M (F1 / Changed / Unchanged):
  RESCAL 63.05 / 21.57 / 66.92; COULDD-RESCAL 63.85 ± 0.08 / 26.23 ± 0.16 / 67.16 ± 0.07
  TransE 53.61 / 23.61 / 55.83; COULDD-TransE 53.91 ± 0.05 / 26.06 ± 0.25 / 55.79 ± 0.06
  ComplEx 65.69 / 11.60 / 71.83; COULDD-ComplEx 66.78 ± 0.06 / 34.67 ± 0.23 / 69.21 ± 0.07
  ConvE 56.83 / 13.15 / 61.37; COULDD-ConvE 52.69 ± 0.16 / 17.04 ± 0.16 / 56.09 ± 0.16
  TuckER 65.21 / 13.15 / 70.98; COULDD-TuckER 66.09 ± 0.17 / 43.69 ± 0.38 / 66.95 ± 0.17
  gpt-3.5-turbo 46.72 / 52.12 / 42.25
CFKGR-CoDEx-L (F1 / Changed / Unchanged):
  RESCAL 53.84 / 71.47 / 49.64; COULDD-RESCAL 53.94 ± 0.02 / 84.56 ± 0.35 / 48.18 ± 0.06
  TransE 49.23 / 66.31 / 45.37; COULDD-TransE 52.6 ± 0.06 / 76.56 ± 0.25 / 47.77 ± 0.04
  ComplEx 58.44 / 65.51 / 55.26; COULDD-ComplEx 59.44 ± 0.02 / 82.95 ± 0.26 / 54.25 ± 0.02
  ConvE 55.56 / 61.84 / 52.58; COULDD-ConvE 60.6 ± 0.17 / 45.53 ± 0.61 / 60.29 ± 0.14
  TuckER 52.87 / 76.74 / 48.05; COULDD-TuckER 53.53 ± 0.04 / 88.47 ± 0.34 / 47.49 ± 0.02
  gpt-3.5-turbo 45.80 / 52.10 / 40.95
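For concreteness, a minimal Python rendering of the COULDD loop in Algorithm 1, assuming a generic KGE model interface (the method names are ours, not LibKGE's):

```python
import random

def couldd(model, tau_c, test_cases, train_triples, mu,
           epochs=20, n_extra=32):
    """A minimal sketch of the COULDD loop (Algorithm 1) for one scenario.

    `model` is assumed to expose clone(), score(h, r, t), and
    fit_batch(batch) -- one cross-entropy gradient step with randomly
    corrupted heads/tails as negatives.
    """
    m = model.clone()                 # start from the pre-trained embeddings
    r = tau_c[1]
    for _ in range(epochs):
        batch = [tau_c] + random.sample(train_triples, n_extra)
        m.fit_batch(batch)
        if m.score(*tau_c) >= mu[r]:  # scenario accepted: stop early to
            break                     # avoid over-perturbing the embeddings
    return {t: m.score(*t) >= mu[t[1]] for t in test_cases}
```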
5 Experiments

In the following, we conduct two types of experiments: First, we evaluate pre-trained KGEs, COULDD, and ChatGPT on our CFKGR datasets with expected labels to assess whether the methods can apply inference rules found by a rule mining system in hypothetical scenarios. In our second set of experiments, we evaluate on human-labeled data to check whether the methods also capture human reasoning, which does not necessarily align with mined inference rules (see Section 3.3).

5.1 General Setup

We use the five pre-trained CoDEx link-prediction models as initializations for COULDD. [Footnote: The config files for the models are available at https://github.com/tsafavi/codex] Further details about the KGE methods are in Appendix E. For COULDD, we tune the learning rate (α) and the number of additional samples per batch (N) on the respective CFKGR validation set, based on the best overall F1-score, and set the maximum number of update steps (E) to 20. We carry over the remaining hyperparameters from the pre-trained CoDEx models (Safavi and Koutra, 2020). Further details regarding the hyperparameters are in Appendix F.2. Optimization is performed using Adam (Kingma and Ba, 2014) or Adagrad (Duchi et al., 2011), depending on the original model configuration. The general classification setup and relation-specific decision thresholds are equivalent to the original CoDEx paper (Safavi and Koutra, 2020) to ensure comparability. [Footnote: We added a minor correction to the CoDEx threshold tuning that ensures proper application of the global threshold for unobserved relations.] Note that this entails scoring all triples in the tail direction. Since no negatives are provided for CoDEx-L, we generate one random tail corruption per validation triple for threshold tuning (akin to experiments in Safavi and Koutra (2020)). During training, we sample 100 negative examples per triple (50 head and 50 tail corruptions), as this was effective in previous work (Trouillon et al., 2016; Kotnis and Nastase, 2017). We implement our experiments by adapting LibKGE (Broscheit et al., 2020) to support our proposed COULDD training strategy. We perform hyperparameter optimization using Optuna (Akiba et al., 2019). For experiments with ChatGPT, i.e., gpt-3.5-turbo, we use the OpenAI API and temperature 0. The used prompts and an example of input and output can be found in Appendix F.3.
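The relation-specific thresholds μ_r described above can be obtained by a simple sweep over validation scores. The sketch below illustrates one generic way to do this under the assumption that positive and (sampled) negative validation scores are available per relation; it is not the exact CoDEx/LibKGE routine.

```python
import numpy as np

def tune_threshold(pos_scores, neg_scores):
    """Pick the decision threshold mu_r that maximizes validation accuracy
    for one relation: triples with score >= mu_r are classified as valid."""
    scores = np.concatenate([pos_scores, neg_scores])
    labels = np.concatenate([np.ones(len(pos_scores)), np.zeros(len(neg_scores))])
    best_mu, best_acc = -np.inf, -1.0
    for mu in np.unique(scores):               # every observed score is a candidate cut
        acc = ((scores >= mu) == labels).mean()
        if acc > best_acc:
            best_mu, best_acc = mu, acc
    return best_mu
```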
5.2 Results

Table 3 contains the results. A detailed evaluation per test type can be found in Appendix G.

| Method | S: F1 | S: Changed | S: Unchanged | M: F1 | M: Changed | M: Unchanged | L: F1 | L: Changed | L: Unchanged |
|---|---|---|---|---|---|---|---|---|---|
| RESCAL | 60.82 | 27.12 | 63.28 | 63.05 | 21.57 | 66.92 | 53.84 | 71.47 | 49.64 |
| COULDD-RESCAL | 61.68±0.14 | 32.48±0.73 | 63.48±0.16 | 63.85±0.08 | 26.23±0.16 | 67.16±0.07 | 53.94±0.02 | 84.56±0.35 | 48.18±0.06 |
| TransE | 58.94 | 23.15 | 61.87 | 53.61 | 23.61 | 55.83 | 49.23 | 66.31 | 45.37 |
| COULDD-TransE | 60.49±0.12 | 26.8±0.81 | 63.16±0.09 | 53.91±0.05 | 26.06±0.25 | 55.79±0.06 | 52.6±0.06 | 76.56±0.25 | 47.77±0.04 |
| ComplEx | 62.45 | 29.11 | 64.90 | 65.69 | 11.60 | 71.83 | 58.44 | 65.51 | 55.26 |
| COULDD-ComplEx | 67.76±0.3 | 37.94±0.67 | 69.95±0.29 | 66.78±0.06 | 34.67±0.23 | 69.21±0.07 | 59.44±0.02 | 82.95±0.26 | 54.25±0.02 |
| ConvE | 61.04 | 16.64 | 65.39 | 56.83 | 13.15 | 61.37 | 55.56 | 61.84 | 52.58 |
| COULDD-ConvE | 61.51±0.11 | 16.96±0.72 | 65.92±0.12 | 52.69±0.16 | 17.04±0.16 | 56.09±0.16 | 60.6±0.17 | 45.53±0.61 | 60.29±0.14 |
| TuckER | 64.25 | 15.01 | 69.40 | 65.21 | 13.15 | 70.98 | 52.87 | 76.74 | 48.05 |
| COULDD-TuckER | 66.03±0.13 | 35.99±1.0 | 68.09±0.19 | 66.09±0.17 | 43.69±0.38 | 66.95±0.17 | 53.53±0.04 | 88.47±0.34 | 47.49±0.02 |
| gpt-3.5-turbo | 47.83 | 68.90 | 40.22 | 46.72 | 52.12 | 42.25 | 45.80 | 52.10 | 40.95 |

Table 3: Test performance of pre-trained embeddings and COULDD on CFKGR; S/M/L denote CFKGR-CoDEx-S/M/L. For COULDD, we report the mean and standard deviation across 5 runs. Bold entries denote the best performance between pre-trained KGEs and their counterpart trained with COULDD. The best results on the dataset are underlined. For all scores, higher is better.

First, we observe that the KGE performances on CFKGR-CoDEx-L differ noticeably from CFKGR-CoDEx-S and CFKGR-CoDEx-M. This is likely due to lower threshold quality resulting from the absence of hard negative triples for CoDEx-L. COULDD achieves the best results in terms of overall F1-score on all datasets. In particular, COULDD noticeably improves the performance on changed facts over the pre-trained embeddings, except for ConvE. Importantly, we do not observe a case where applying COULDD leads to a noticeable loss of knowledge acquired during pre-training. In terms of overall F1-score, COULDD-ComplEx achieves the best results averaged across the three datasets. On changed facts, COULDD-TuckER is the best-performing KGE method, likely because TuckER is well-suited for modeling compositional relations (Safavi and Koutra, 2020). ChatGPT achieves the best scores on changed facts on two out of three datasets. However, it generally does not perform well on unchanged facts. Possible reasons are that it misses relevant background knowledge present in the KG or does not understand the task on these instances. In summary, we observe that COULDD consistently improves performance over the pre-trained embeddings, overall and on changed facts in particular, and does not strongly degrade performance on unchanged facts. This indicates that COULDD, to an extent, can be used to infer plausible counterfactual changes to the graph when they follow prominent patterns in the KG.

5.3 Case Study on CoDEx-M

To better understand the results shown in Table 3, we conduct a case study on CoDEx-M, for which we have a human-annotated CFKGR subset. In particular, we want to assess how well the pre-trained CoDEx models perform factual reasoning with composition rules and how an evaluation on human-assigned labels affects our results. The main results are presented in Table 4. Table 12 in the appendix presents a confusion matrix per test type for COULDD and ChatGPT.

| Method | F1 (E) | F1 (H) | Changed (E) | Changed (H) | Unchanged (E) | Unchanged (H) | Filtered: Overall | Filtered: Rule-wise |
|---|---|---|---|---|---|---|---|---|
| RESCAL | 89.30 | 87.61 | 21.55 | 13.64 | 97.20 | 96.17 | 92.74 | 84.72 |
| COULDD-RESCAL | 89.03±0.24 | 87.12±0.24 | 25.08±0.75 | 16.25±0.58 | 96.48±0.20 | 95.31±0.21 | − | − |
| TransE | 81.21 | 79.85 | 21.55 | 16.48 | 88.55 | 87.73 | 91.29 | 80.26 |
| COULDD-TransE | 80.64±0.07 | 79.44±0.10 | 23.43±0.27 | 19.2±0.43 | 87.65±0.11 | 86.94±0.12 | − | − |
| ComplEx | 89.01 | 87.53 | 9.94 | 2.84 | 98.40 | 97.51 | 96.01 | 77.79 |
| COULDD-ComplEx | 92.05±0.11 | 90.43±0.16 | 37.35±1.08 | 29.89±1.37 | 98.29±0.1 | 97.27±0.1 | − | − |
| ConvE | 83.96 | 82.56 | 14.92 | 9.09 | 92.46 | 91.62 | 89.29 | 79.70 |
| COULDD-ConvE | 78.39±0.56 | 77.15±0.72 | 16.69±1.13 | 12.39±0.91 | 86.17±0.62 | 85.43±0.71 | − | − |
| TuckER | 89.31 | 88.08 | 13.81 | 7.95 | 98.26 | 97.50 | 96.37 | 90.33 |
| COULDD-TuckER | 92.83±0.12 | 90.92±0.12 | 43.43±0.90 | 34.55±0.91 | 98.41±0.11 | 97.21±0.12 | − | − |
| gpt-3.5-turbo | 63.96 | 63.36 | 53.04 | 53.98 | 62.75 | 62.34 | − | − |

Table 4: Case study on CFKGR-CoDEx-M* with expected (E) and human-assigned (H) labels, and performance on the filtered CoDEx-M test set (last two columns). "Overall" describes the accuracy across all inferences. "Rule-wise" gives the average accuracy per rule. Bold entries denote the best performance between pre-trained KGEs and their counterpart trained with COULDD. The best results on the dataset are underlined. For all scores, higher is better.

5.3.1 Inference Rules in Factual Contexts

Achieving good performance on changed triples in Table 3 requires (1) a logical adaptation to the hypothetical scenario and (2) the application of the composition rules that generated the test inferences. We attempt to disentangle these factors by investigating whether the CoDEx models captured the regularities expressed in the Amie3 rules during pre-training and can apply them in factual scenarios (a sketch of this rule-application step is given below).
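The following sketch shows how a single mined composition rule r1(x, y) ∧ r2(y, z) ⇒ r3(x, z) can be applied to a set of (head, relation, tail) triples to enumerate implied facts; the triple representation is an assumption made for illustration.

```python
from collections import defaultdict

def apply_composition_rule(triples, r1, r2, r3):
    """Enumerate triples (x, r3, z) implied by the composition rule
    r1(x, y) AND r2(y, z) => r3(x, z) over (head, rel, tail) triples."""
    by_head = defaultdict(list)              # index r2 facts by their head y
    for h, r, t in triples:
        if r == r2:
            by_head[h].append(t)
    inferred = set()
    for h, r, t in triples:
        if r == r1:                          # r1(x, y): h = x, t = y
            for z in by_head[t]:             # r2(y, z)
                inferred.add((h, r3, z))     # => r3(x, z)
    return inferred
```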
Setup. We filter the original CoDEx-M test set for triples that can be inferred by applying the mined Amie3 rules to the training set. We only keep triples that stem from rules that cover at least five triples in the test set, to obtain sensible estimates of the rule-wise performances. This results in a filtered test set of 551 instances inferred from 10 rules.

Results. We notice that the performance on the filtered CoDEx-M test set is consistently high for all pre-trained KGE methods. This indicates that they learned the mined inference patterns during training, and that the lower performances observed in Table 3 are likely due to insufficient adaptation to the hypothetical scenario. Appendix H.2 gives a full overview of rule-wise performances.

5.3.2 Evaluation with Manual Labels

Setup. We create a subset of the CFKGR-CoDEx-M test set, denoted by CFKGR-CoDEx-M*, which solely consists of instances that were manually labeled as described in Section 3.3. We filter out test cases for which fewer than three annotators agreed or for which the majority vote label was "unsure/too little information". This results in a dataset composed of 256 hypothetical scenarios and a total of 1159 test cases. The complete label distribution per test case can be found in Appendix H.1.

Results. First, we note that the overall F1-score and performance on unchanged facts greatly differ from the numbers observed for the unfiltered set in Table 3. This is due to the omission of most corruptions, as they were not manually labeled. We observe a consistent performance drop for KGE-based methods when evaluating on human-assigned labels instead of expected labels for all metrics. Judging from the confusion matrix in Table 12, COULDD cannot reliably identify false inferences that follow from patterns in the KG but are invalid according to the annotators. Moreover, it classifies most outdated facts, which are no longer valid given the hypothetical scenario, as positive. However, the number of outdated facts (14) is too small to draw any substantial conclusions from this observation. For ChatGPT, we observe a slightly reduced overall performance when evaluating with human-assigned labels. However, ChatGPT's score improves on changed facts for human-assigned labels. A closer look at the confusion matrix reveals that ChatGPT performs better at detecting outdated facts and false inferences than KGEs. However, as observed before, ChatGPT tends to misclassify facts that should be retained. A qualitative inspection reveals that ChatGPT largely misunderstands the task on such triples: instead of answering whether they STILL hold given the hypothetical scenario, it oftentimes tries to infer them. Table 5 gives an example.

Table 5: Example prompt and corresponding answer produced by ChatGPT on CFKGR-CoDEx-M*.
Hypothetical: Joe Frazier (human) died in Moscow (city with millions of inhabitants, largest city, capital)
Context: Joe Frazier (human) was cast in Encino Man (film)
Statement: Is it then still plausible that E-40 (human) has the occupation actor (profession, artist), given that this is the case in the real world?
Answer by ChatGPT: unlikely
Explanation by ChatGPT: The hypothetical scenario states that Joe Frazier died in Moscow, which is a city with millions of inhabitants. This fact is not the case in the real world. The context mentions that Joe Frazier was cast in the film Encino Man. Since the statement is 'Is it then STILL plausible that E-40 has the occupation actor?', we need to consider if the hypothetical scenario makes it plausible for E-40 to have the occupation of an actor. However, the context does not provide any specific information about E-40 or his occupation. Therefore, we cannot determine the plausibility of E-40 being an actor based on the given information. Hence, the statement is unlikely.
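For reference, a minimal sketch of how such a judgment could be requested from gpt-3.5-turbo via the OpenAI Python SDK of that period follows. The prompt wording here is illustrative only; the actual system prompt and templates are given in Appendix F.3.

```python
import openai  # pre-1.0 SDK style, matching the gpt-3.5-turbo era

def classify_statement(hypothetical, context, statement):
    """Ask gpt-3.5-turbo (temperature 0, i.e. deterministic decoding) whether a
    statement is plausible in a hypothetical scenario."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Judge whether the statement is likely or unlikely "
                        "given the hypothetical scenario and context."},
            {"role": "user",
             "content": f"Hypothetical: {hypothetical}\n"
                        f"Context: {context}\n"
                        f"Statement: {statement}"},
        ],
    )
    return response["choices"][0]["message"]["content"]
```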
6 Related Work

Inferential KGC Benchmarks. Rule-based inferential benchmarks for KGC (Liu et al., 2023; Cao et al., 2021) assess a method's ability to learn implicit rule patterns and use them to predict inferences in the test set based on evidence in the training set. Cao et al. (2021) create an inferential test set for CoDEx-M based on a rule set mined by AnyBurl (Meilicke et al., 2019), akin to our experiments in Section 5.3.1, and also find that pre-trained KGEs have strong inferential reasoning capabilities.

Counterfactual Graph Learning. Leveraging counterfactuals in graph learning is an emerging field of research (Guo et al., 2023). Counterfactuals have recently been utilized to ensure the fairness of graph-based systems with respect to sensitive node attributes (Agarwal et al., 2021; Ma et al., 2022; Zhang et al., 2021), improve interpretability by generating counterfactual explanations for predictions (Lucic, 2022; Numeroso and Bacciu, 2021; Prado-Romero et al., 2022; Xu et al., 2022), and enhance link prediction performance on the graph as-is (Chang et al., 2023; Lu et al., 2023; Shi et al., 2022; Wang et al., 2021; Zhao et al., 2022). Our work does not fall into any of the above categories and instead focuses on making predictions in a counterfactual graph.

CF Reasoning Benchmarks for LLMs. Several datasets and evaluation schemes have been proposed for assessing the counterfactual reasoning capabilities of LLMs. Qin et al. (2019) introduce the task of counterfactual story rewriting, in which LLMs have to minimally revise a given story with respect to a counterfactual event. The CRASS benchmark challenges LLMs to select a valid consequence given a questionized counterfactual conditional in a multiple-choice setting (Frohberg and Binder, 2022). Li et al. (2023) present LLMs with a hypothetical premise and two possible completions for a corresponding statement, one of which is valid in the real world while the other holds in the hypothetical scenario. In contrast, CFKGR poses a binary classification task, in which the model has to decide whether a presented statement is plausible in the given hypothetical scenario or not. Further, our benchmark is based on the knowledge contained in a KG and thus considers specific, real-world entities.

7 Discussion

Comparison with Human CF Reasoning. Our labeling efforts and experiments show that counterfactual reasoning on KGs is a challenging task. Both KGEs and ChatGPT leave much room for improvement on CFKGR. Moreover, as indicated by our annotation results (Table 2), even humans find it difficult to judge the plausibility of KG-based counterfactual statements, especially when they involve unfamiliar situations. For instance, "If Meg White was a member of Girls Aloud, would Jack White be part of Girls Aloud?" is a question that most humans likely do not ask themselves. Nevertheless, automatic systems can be presented with and evaluated on a wide range of possible scenarios, even if those are implausible or hard to imagine for humans.

Advantage of KG-based Benchmarks. KGs are a powerful tool for defining hypothetical scenarios and their consequences.
The rich world knowledge stored in KGs makes it possible to create interesting case-specific inferences. In the example question above, would the judgement change if we replaced "Girls Aloud" with a band that is not a girl group? This aspect is largely missing from current counterfactual reasoning benchmarks for LLMs (Frohberg and Binder, 2022; Li et al., 2023), as they mostly handle generic entities.

8 Conclusion

This work introduces the novel task CFKGR, which requires models to reason on a counterfactual KG. By utilizing the world knowledge stored in KGs, we create datasets consisting of diverse hypothetical scenarios and their implications, as defined by inference rules. Further, we propose COULDD, a general method for counterfactual reasoning on KGs, and evaluate its effectiveness on automatically generated and human-annotated data. We extend our experiments to ChatGPT and find that it generally outperforms COULDD at making counterfactual inferences. However, ChatGPT largely does not recognize which facts are invariant to the hypothetical scenario. Both COULDD and ChatGPT leave much headroom on the task, highlighting the difficulty of CFKGR.

9 Limitations

The type of rules that we examine is arguably limited. We consider understanding the implications induced by composition rules as a first step towards more general and complex hypothetical reasoning. Moreover, while the set of outdated facts F− is a key component for defining the counterfactual KG, there is no trivial way to generate them reliably without appropriate rules or extensive human verification. Most rules defined for KGs are Horn rules (e.g., Lajus et al., 2020; Meilicke et al., 2019; Sadeghian et al., 2019b) and do not express negation in the head atom. Hence, we focus on the additions F+ in this work. Furthermore, this work does not consider the confidences of the mined Amie3 rules but assumes that they could all be valid inference rules for hypothetical reasoning. As indicated by our human annotation results, this is likely not true in practice.

Verbalizing KG triples in a way that is intuitive to humans is a difficult task. We tried our best to find suitable verbalizations for the relations in the CoDEx KG by consulting the corresponding Wikidata definitions as well as ParaRel (Elazar et al., 2021). In our verbalizations, each entity is presented with up to three of its associated entity types in order to facilitate reasoning with lesser-known entities. [Footnote: Whenever more than three entity types were available, we randomly sampled three of them to enhance readability.] Nevertheless, unintuitive verbalizations and missing context from the KG (with respect to how relations are used) might have influenced our annotation results and ChatGPT experiments. Moreover, KGs can contain erroneous or outdated facts, and automatically constructed CFKGR examples might rely on these facts. It is possible that such instances impacted the performance of ChatGPT on our benchmark. Lastly, the poor performance of ChatGPT on unchanged facts could partially be caused by the system prompt used in our experiments, which can be found in Appendix F.3. We designed the prompt based on the instructions provided to the human annotators. Nevertheless, it is likely that the prompt could be adjusted to improve the results of ChatGPT on unchanged facts. Appendix I further details some frequent errors we noticed in ChatGPT's responses.

10 Ethics Statement

We relied on well-established and publicly available resources to build our datasets and method.
We use the CoDEx knowledge graph and LibKGE, which are both published under the MIT license. The config files for the pre-trained CoDEx models used in our experiments are available in the CoDEx GitHub repository. The counterfactual situations included in our datasets are randomly generated and purely hypothetical. They do not convey any implications about the real-world entities referenced in them. Nevertheless, the created instances could be biased towards certain entities due to biases in the original KGs and our employed sampling strategy, detailed in Appendix C. We recruited annotators on a voluntary basis. We do not publish any information that could be used to identify the labelers, and our data does not contain any personal information regarding the annotators.

Acknowledgements

This research has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) RO 5127/2-1 and the Vienna Science and Technology Fund (WWTF) [10.47379/VRG19008] "Knowledge-infused Deep Learning for Natural Language Processing". We thank the European High Performance Computing initiative for providing the computational resources that enabled this work (EHPC-DEV-2022D10-051)." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2404.00209v1", |
| "title": "EventGround: Narrative Reasoning by Grounding to Eventuality-centric Knowledge Graphs", |
| "abstract": "Narrative reasoning relies on the understanding of eventualities in story\ncontexts, which requires a wealth of background world knowledge. To help\nmachines leverage such knowledge, existing solutions can be categorized into\ntwo groups. Some focus on implicitly modeling eventuality knowledge by\npretraining language models (LMs) with eventuality-aware objectives. However,\nthis approach breaks down knowledge structures and lacks interpretability.\nOthers explicitly collect world knowledge of eventualities into structured\neventuality-centric knowledge graphs (KGs). However, existing research on\nleveraging these knowledge sources for free-texts is limited. In this work, we\npropose an initial comprehensive framework called EventGround, which aims to\ntackle the problem of grounding free-texts to eventuality-centric KGs for\ncontextualized narrative reasoning. We identify two critical problems in this\ndirection: the event representation and sparsity problems. We provide simple\nyet effective parsing and partial information extraction methods to tackle\nthese problems. Experimental results demonstrate that our approach consistently\noutperforms baseline models when combined with graph neural network (GNN) or\nlarge language model (LLM) based graph reasoning models. Our framework,\nincorporating grounded knowledge, achieves state-of-the-art performance while\nproviding interpretable evidence.", |
| "authors": "Cheng Jiayang, Lin Qiu, Chunkit Chan, Xin Liu, Yangqiu Song, Zheng Zhang", |
| "published": "2024-03-30", |
| "updated": "2024-03-30", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Knowledge AND Graph", |
| "gt": "Narrative reasoning, such as predicting story endings and reasoning with scripts, is a funda- mental task in natural language understanding (Mostafazadeh et al., 2016; Li et al., 2018; Mori et al., 2020). Reasoning with narratives depends on the understanding of eventualities12. Consider the following story: \u201cTom was tired and wanted to have fun. He bought a movie ticket for Harry Potter.\u201d It can be broken down into multiple sub-sentences: (E1) Tom was tired. (E2) Tom wanted to have fun. (E3) He bought a movie ticket for Harry Potter. where each of them can be regarded as an event with a verb and one to several arguments. These events, which are considered as basic semantic units in various NLP research (Zhang et al., 2020; Yu et al., 2020; Zhong et al., 2022; Zhang et al., 2022), convey the majority of the meaning within their respective contexts. For human beings, the comprehension of these semantic units is found to heavily rely on our back- ground world knowledge beyond contexts (Day 1We use the linguistic term \u201ceventuality,\u201d which in- cludes events, states, and activities (Mourelatos, 1978; Bach, 1986). For simplicity, we use the terms \u201cevent\u201d and \u201ceventuality\u201d interchangeably. 2Jiayang completed this work while interning at Ama- zon AWS AI Lab. Tom was tired and wanted to have fun. He bought a movie ticket for Harry Potter. World Knowledge / Eventuality-centric KGs E1: Tom was tired. E2: Tom wanted to have fun. E3: (Tom) bought a movie ticket for Harry Potter PersonX buys a movie ticket PersonX wants to have fun PersonX is tired. PersonX watches a movie Watching a movie is fun PersonX arrives at the theater in time \u2026 \ud83e\udd16 \ud83e\udd14 Abstract Thinking Event Grounding Natural Language Figure 1: Given a piece of story, our goal is to ground it to eventuality-centric KGs to retrieve con- textualized background world knowledge for better narrative understanding. et al., 1998). For instance, given E1 and E2, we may infer that Tom might have just finished his work. Since we know watching movies is a lot of fun, we find it reasonable that Tom chose to do so (from E2 to E3). We can also reason from E3 that Tom would have to arrive at the theater before the movie started. To model such world knowledge on machines, most existing work fall into two paradigms. One is to implicitly model event knowledge by pretraining LMs with event-aware objectives (Yu et al., 2020; Zhou et al., 2021, 2022b,a). This paradigm, how- ever, sacrifices transparency and explanability of reasoning in its philosophy of design. In compar- arXiv:2404.00209v1 [cs.CL] 30 Mar 2024 ison, another paradigm focuses on modeling the explicit symbolic event knowledge, usually in the form of eventuality-centric knowledge graphs (KGs, such as ASER (Zhang et al., 2022) and ATOMIC (Sap et al., 2019)). In this direction, how to lever- age the symbolic event knowledge in these KGs for reasoning remains under-explored. The hand- ful research here only work on a restricted format (subject-verb-object) of texts and could not gener- alize to free-texts (Li et al., 2018; Lv et al., 2020; Lee and Goldwasser, 2019; Lee et al., 2020). In this paper, we make a step forward to examine the problem of grounding3 free-texts to eventuality- centric KGs. This problem is non-trivial due to the distinct characteristics of events, including: 1. Difficulty in representing events. First, events appear entangled in texts. 
They tend to share arguments with other events in the same context (e.g., E1 and E2). Second, when separated from the context, events lose co-reference information at the argument level. For instance, it is hard to discern whether the pronoun "he" in event E3 refers to "Tom" in E1 and E2 or not. 2. Sparsity of events. Events are sparse in natural language. For instance, by adding or removing details, one could paraphrase E3 into infinite events describing the same scenario, such as "he purchased a ticket online for the latest Harry Potter" or "he booked a ticket". Given the incomplete nature of eventuality-centric KGs, matching arbitrary events to KGs has a rather high failure rate. To tackle the above problems, we propose the very first framework to explicitly ground free-texts to eventuality-centric KGs. For the event representation problem, we equip semantic parsing based event extraction with an event normalization module, which separates events from contexts while preserving co-reference information. Motivated by humans' abstract thinking process, we propose a partial information extraction approach to tackle the sparsity problem. This approach conceptualizes events into multiple partial events by omitting argument details. Interestingly, we empirically demonstrate that these solutions significantly alleviate the sparsity problem. Further, we ground the partial events to KGs to get joint reasoning subgraphs. Subsequently, we employ two common graph reasoning models to leverage this knowledge: in addition to a model based on graph neural networks (GNN), we also utilize a model based on a large language model (LLM). Experimental results on three narrative reasoning tasks show that our framework consistently outperforms current state-of-the-art models. Lastly, we provide a qualitative study to showcase how our approach can provide interpretable evidence for model predictions. To summarize, the paper's contributions are: [Footnote: The code and data are available at https://github.com/HKUST-KnowComp/EventGround.] 1. We develop an initial formulation for the problem of grounding free-texts to eventuality-centric KGs. 2. We propose EventGround, a systematic approach, to solve the event representation and sparsity problems, and perform narrative reasoning based on the grounded information. 3. Experimental results show that our approach outperforms strong baselines and achieves new state-of-the-art performance on three datasets, while providing human-interpretable evidence.", |
| "main_content": "Reasoning on narratives is a fundamental task (Mostafazadeh et al., 2016; Li et al., 2018; Mori et al., 2020; Jiayang et al., 2023) and has attracted much interest in the NLP community. The most crucial problem in narrative reasoning is modeling the relationship between events, which often requires background world knowledge (Day et al., 1998; Mostafazadeh et al., 2016). Many large scale knowledge graphs (KGs) such as ATOMIC (Sap et al., 2019), ConceptNet (Speer et al., 2017), ASER (Zhang et al., 2020, 2022) and GLUCOSE (Mostafazadeh et al., 2020) have been constructed in recent years. Current solutions on leveraging the knowledge in these resources can be coarsely categorized into the following two groups. An overview of the two paradigms is presented in Figure 6. The knowledge model paradigm leverages external KGs by pretraining LMs with carefully designed objectives. Most existing knowledge enhanced LMs focused on using entity-centric KGs (Zhang et al., 2019; Peters et al., 2019; F\u00e9vry et al., 2020; Verga et al., 2020; Xiong et al., 2020; Sun et al., 2019b, 2021; Joshi et al., 2020). As for using external event knowledge, the knowledge model paradigm focus on finetuning language models on event-aware KGs, such as event-pair relation modeling (Bosselut et al., 2019; West et al., 2021; Zhou et al., 2021), whole event recovering/masking (Zhou et al., 2022b; Yu et al., 2020), and correlation-based event ranking (Zhou et al., 2022a). 4The code and data are available at https://github. com/HKUST-KnowComp/EventGround. [P0] work hard to get good grades. [P0] be a student in medical school Context: Caroline was a student in medical school. Caroline worked hard to get good grades. [P0] work hard [P0] be a student [P0] perform well [P0] get good grades [P0] work hard [P0] be a student [P0] be a medical student Match failed [P0] be knowledgeable Linked subgraph Candidates Extraction & Normalization Partial Information Extraction Event linking & subgraph retrieval She did very well. Reasoning She messed up. ! Graph reasoning model \u274c \u2705 Steps of the system Figure 2: An overview of EventGround. The retrieval-and-integration paradigm, in contrast, explicitly retrieves triples or subgraphs from external KGs. Recent work on reasoning with external KB and texts have explored grounding entities to KGs, such as (Sun et al., 2018, 2019a; Xiong et al., 2019; Min et al., 2019; Lee et al., 2021), and (Lin et al., 2019; Feng et al., 2020; Yasunaga et al., 2021) in open-domain QA, commonsense QA, and narrative reasoning. However, most of them ground to entity-centric KGs (e.g. the entity part of ConceptNet (Speer et al., 2017)), which have little or no event knowledge. Although some (Lv et al., 2020; Lee and Goldwasser, 2019; Lee et al., 2020; Li et al., 2018) on script reasoning have investigated the usage of events, their methods are restricted to the \u201csubject-verb-object\u201d-like structured texts in the MCNC task, and have difficulty extending to general free-texts. In comparison, we tackle the more difficult problem of grounding events in free-texts to eventuality-centric KGs. The wide adoption of AI critically needs explainability (Hoffman et al., 2018). Thus, despite the appeal of a simpler pipeline (aided by the availability of large LMs), this work extends the retrievaland-integration paradigm for grounding free-texts to eventuality-centric KGs for narrative reasoning. 
As opposed to event grounding, the similar term "event linking" has been used in the literature, where it either focuses on cross-document event co-reference (Nothman et al., 2012; Krause et al., 2016) or on event co-reference to Wikipedia pages (Yu et al., 2021). Moreover, their "event" refers to specific happenings such as "World War II" rather than the more general eventualities in this work.

3. EventGround: Grounding free-texts to eventuality-centric knowledge graphs

In this section, we present our proposed framework, EventGround. An overview is presented in Figure 2. To tackle the event representation problem, we equip semantic parsing based event extraction (§ 3.1.1) with an event normalization module (§ 3.1.2) to separate events from contexts while preserving their arguments' co-reference information. We solve the sparsity problem with a partial information extraction approach (§ 3.1.3). We empirically prove that these solutions largely alleviate the sparsity problem in § 4.5. At the end of this section, we discuss grounding the partial events to KGs to obtain joint reasoning subgraphs in § 3.2, and present both the GNN-based and LLM-based reasoning models in § 3.3.

3.1. Obtaining events

The proposed event acquisition pipeline includes event extraction (§ 3.1.1), normalization (§ 3.1.2) and partial information extraction (§ 3.1.3).

3.1.1. Event extraction

As shown in the previous example, events do not naturally exist in free texts. Instead, an event may share arguments with (e.g., E1 and E2) or contain another event. Therefore, a special extraction step is needed to separate events from their contexts. In this work, we consider semantic parsing based methods to extract events from their contexts. For each piece of text s = [s_1, s_2, ..., s_n] with n sentences, we conduct semantic role labeling (SRL) on the text to extract a series of verb-centric events P = {p_1, p_2, ..., p_m}, where each event p_i = (verb_i, A_i) has a trigger verb_i and a set of arguments A_i. Each argument a_j^i ∈ A_i has a semantic role role(a_j^i) ∈ {ARG0, ARG1, ..., ARGM}. [Footnote: The annotation follows the PropBank (Palmer et al., 2005) annotation guideline, where the numbered arguments in general correspond to the roles: ARG0 - agent; ARG1 - patient; ARG2 - instrument, benefactive, attribute; ARG3 - starting point, benefactive, attribute; ARG4 - ending point; ARGM - modifier.] In addition, we define the operator text(p_i) to obtain the text of p_i.

3.1.2. Event normalization

It is noteworthy that the extracted events suffer from the loss of co-reference information. For instance, here are three events extracted from a text: (1) The general had some wine at a party. (2) He felt sleepy. (3) He said goodbye to them. [Footnote: For simplicity, we do not explicitly show verbs and arguments of the events. All the words in events are lemmatized in our pipeline, which is not shown in the examples.] Here "the general" and "he" refer to the same person, while "them" refers to another group of people. A system would not be aware of this co-reference relationship without contexts. This makes it difficult to reason on the extracted events. Motivated by previous work (Sap et al., 2019; Fang et al., 2021) on constructing commonsense KGs, we replace tokens referring to people with special tokens (e.g., "[P0]," "[P0's]," "[P1]," where different numbers refer to different people). [Footnote: Specifically, the spans of personal words are detected by syntactic parsing and animacy classification. We then employ the co-reference information between these spans to normalize all spans that refer to persons.] For instance, "the general" and "he" are replaced by "[P0]," and "them" is replaced by "[P1]." Through this normalization process, the co-reference information is preserved: (1) [P0] had some wine at a party. (2) [P0] felt sleepy. (3) [P0] said goodbye to [P1]. In addition, the normalization helps reduce event sparsity by removing details in the personal words. For instance, "the general felt sleepy," "Joe felt sleepy," and "he felt sleepy" will all be normalized to "[P0] felt sleepy." This increases their probability of being successfully grounded to KGs. A sketch of the extraction and normalization steps follows.
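The sketch below pairs AllenNLP's SRL predictor (§ 3.1.1) with a toy person-span normalizer (§ 3.1.2). The model archive URL and the person_spans mapping are assumptions: the full pipeline uses CoreNLP parsing, animacy classification, and coreference instead of a precomputed mention-to-cluster dictionary.

```python
from allennlp.predictors.predictor import Predictor  # requires allennlp-models

# Pre-trained BERT-based SRL model; the exact archive URL may have changed.
SRL_URL = ("https://storage.googleapis.com/allennlp-public-models/"
           "structured-prediction-srl-bert.2020.12.15.tar.gz")
predictor = Predictor.from_path(SRL_URL)

def extract_events(sentence):
    """Turn one sentence into verb-centric events p = (verb, {role: span})
    by grouping the BIO tags emitted for each predicate frame."""
    out = predictor.predict(sentence=sentence)
    events = []
    for frame in out["verbs"]:
        args = {}
        for word, tag in zip(out["words"], frame["tags"]):
            if tag == "O":
                continue
            role = tag.split("-", 1)[1]          # "B-ARG0" -> "ARG0", "B-ARGM-TMP" -> "ARGM-TMP"
            args.setdefault(role, []).append(word)
        events.append((frame["verb"], {r: " ".join(w) for r, w in args.items()}))
    return events

def normalize(event, person_spans):
    """Toy normalization: map coreferent person mentions to [P0], [P1], ...
    `person_spans` is a hypothetical mention -> cluster-id mapping."""
    verb, args = event
    norm = {role: " ".join(f"[P{person_spans[w]}]" if w in person_spans else w
                           for w in span.split())
            for role, span in args.items()}
    return (verb, norm)
```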
3.1.3. Partial information extraction

The normalized events retain rich contextual details from the original texts, which are important for downstream reasoning processes. However, the sparsity of events can pose challenges in event grounding, especially since most knowledge graphs (KGs) are far from complete (Min et al., 2013; Xiong et al., 2019). For example, a KG is more likely to include a general event like "a person is drinking" than "the general is drinking Sauvignon Blanc on the balcony," because the former is more general and likely to occur frequently. Humans strongly depend on conceptual abstraction to identify similarities among seemingly different concepts and events, which enables generalizations to unfamiliar situations (Murphy, 2004). For instance, we can learn that there is a common abstraction between "buy a ticket for 'Avengers'" and "buy a ticket for 'Harry Potter'," and that this commonality, "buy a ticket," relates to other events such as "arrive at the theater in time". With this concept in mind, we use a partial information extraction (PIE) phase to obtain partial events as a method of controllable abstraction. Partial information extraction is based on the importance of event arguments in semantic role labeling (Palmer et al., 2005). For instance, ARG0 and ARG1 have the highest importance, as they usually specify the subject and objects. In contrast, the modifier argument ARGM expresses the least information, as it usually defines additional constraints of the predicate, such as when and where the event happens. Specifically, we propose to drop the event arguments in descending order of their importance. For an event p = (verb, A) with |A| = k, we iteratively drop its arguments a_j ∈ A, such that the roles of the dropped arguments follow the order: (1) ARGM [Footnote: We do not drop the negation (e.g., not, n't, never) and modal (e.g., will, may, can) modifier arguments, since they are crucial building blocks in discourse, as revealed in linguistics studies (Jordan, 1998).], (2) ARG2, ARG3, ARG4, (3) ARG1, and (4) ARG0. Partial information extraction on the event set P results in a new set of partial events P_abs = {p̂_1, p̂_2, ..., p̂_m}. Each element p̂_i = [p_i^0, p_i^1, ...] is a sequence of partial events corresponding to event p_i ∈ P (with p_i^0 = p_i). Below is an example of p̂ (see the code sketch after this example for the drop procedure):

p0: ARG0: [P0] | V: evacuated | ARG2: to a relative's house | ARGM: last night
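A minimal sketch of the argument-dropping procedure, assuming the (verb, {role: span}) event representation from the earlier sketch:

```python
# Drop argument groups from least to most important, per § 3.1.3:
# (1) ARGM modifiers, (2) ARG2/3/4, (3) ARG1, (4) ARG0.
DROP_ORDER = [
    lambda role: role.startswith("ARGM"),  # NB: the full pipeline keeps negation
                                           # and modal modifiers (see the footnote)
    lambda role: role in {"ARG2", "ARG3", "ARG4"},
    lambda role: role == "ARG1",
    lambda role: role == "ARG0",
]

def partial_events(event):
    """Return the sequence [p0, p1, ...] of increasingly abstract partial events,
    where p0 is the full event and each level drops one more argument group."""
    verb, args = event                 # e.g. ("evacuated", {"ARG0": "[P0]", ...})
    levels = [(verb, dict(args))]
    current = dict(args)
    for drop in DROP_ORDER:
        reduced = {r: s for r, s in current.items() if not drop(r)}
        if reduced != current:         # emit a new level only if something was dropped
            levels.append((verb, reduced))
            current = reduced
    return levels
```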
p1: ARG0: [P0] | V: evacuated | ARG2: to a relative's house
p2: ARG0: [P0] | V: evacuated
p3: V: evacuated

Each time an argument is dropped, the abstraction level of the partial event increases. Meanwhile, partial events at a higher abstraction level (e.g. p2, p3) are more likely to have been recorded in KGs, which alleviates the sparsity problem. In § 4.5, we empirically show that partial information extraction improves the model performance by drastically increasing the hit rate of event grounding.

3.2. Grounding to eventuality-centric KG

In this section, we discuss the event grounding approach. In § 3.2.1, we describe how to map events to eventuality-centric KGs to get the anchor events that have the closest semantic meaning. In § 3.2.2, we describe how to retrieve grounded subgraphs based on the anchor events.

3.2.1. Event matching

Suppose we have an eventuality-centric KG G = (V, E), where V and E are the node set and the edge set, respectively. Each node v_i ∈ V is an event with a text attribute text(v_i). Then, for each event p ∈ P_abs, our goal is to find the node v ∈ V (which we term the "anchor event") that is the most similar to p:

v = argmin_{v ∈ V} d(p, v),   (1)

where d(·, ·) denotes the distance between events. To define the similarity, previous work has explored token-level similarity by computing the cosine distance of TF-IDF or BM25 vectors (Lv et al., 2020). However, this method overlooks the semantics of events and constantly fails by mapping to events with high inverse-document-frequency terms (e.g. "[P0's] lung gets punched" is matched with "[P0] has lung cancer"). Therefore, we turn to semantic similarity to match events. Specifically, we encode events p and v with sentence transformers (Reimers et al., 2019) [Footnote: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2] and compute d(p, v) as the L2 distance:

d(p, v) = || SBERT(text(p)) − SBERT(text(v)) ||_2.   (2)

In practice, not every event can be successfully matched with the correct ones. We empirically set a threshold l over d(p, v) to filter out the failed matches. [Footnote: We sample 100 matching results and empirically set l = 0.65, which filters out most failed cases.] As a result, the partial events in P_abs are matched to their anchor events in G, which we denote by C = {ĉ_1, ĉ_2, ..., ĉ_m}, where each ĉ_i is a sequence of anchor events matched from p̂_i.
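A minimal sketch of the matching step with sentence-transformers and Faiss, assuming pre-verbalized event and node texts:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def build_index(kg_node_texts):
    """Index all KG eventuality texts for nearest-neighbour search (Eqs. 1-2)."""
    emb = encoder.encode(kg_node_texts).astype("float32")
    index = faiss.IndexFlatL2(emb.shape[1])     # exact L2 search
    index.add(emb)
    return index

def match(partial_event_texts, index, l=0.65):
    """Match each partial event to its closest anchor node, filtered by threshold l.
    Note: IndexFlatL2 returns *squared* L2 distances, hence the sqrt."""
    q = encoder.encode(partial_event_texts).astype("float32")
    dist2, ids = index.search(q, 1)
    return [int(i) if np.sqrt(d2) <= l else None    # None marks a failed match
            for d2, i in zip(dist2[:, 0], ids[:, 0])]
```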
3.2.2. Joint subgraph construction

Knowledge subgraph retrieval. Based on the anchor events from the matching results in § 3.2.1, we aim to retrieve a subgraph G_sub = (V_sub, E_sub) from G. Ideally, G_sub should contain the background world knowledge related to the reasoning while covering a minimal number of additional eventualities. Finding such a subgraph essentially amounts to solving an NP-complete Steiner tree problem (Garey and Johnson, 1977; Lin et al., 2019), which is intractable. As a workaround, we search for the shortest path within γ hops between each event pair in {(v_a, v_b) : v_a ∈ ĉ_i, v_b ∈ ĉ_j; ĉ_i, ĉ_j ∈ C}. For any path obtained, the nodes and edges along the path are added to G_sub.

Joint subgraph construction. Based on G_sub, we construct a joint knowledge-enhanced subgraph G_joint = (V_joint, E_joint) for reasoning. Specifically, G_joint includes all the nodes and edges in G_sub. In addition, we add the context events in P as nodes to G_joint, where their grounding relations to anchor events in C, as well as the context relations (between the previous and latter events in the order that they appear in context), are added as edges.

3.3. Graph reasoning models

The retrieved subgraphs are then used for reasoning with either a GNN-based or an LLM-based reasoning model.

GNN-based reasoning model. We first encode the text s and each node v ∈ V_joint using the language model representation:

v = f_LM(text(v)),   s = f_LM(s).   (3)

Then, we employ a GNN module to perform reasoning on the joint subgraph G_joint. We choose relational graph convolutional networks (RGCN) (Schlichtkrull et al., 2018) so that the relational information in G_joint can be well modeled. Specifically, for each layer l in an L-layer GNN, the representation h_i^(l) of node i ∈ V_joint is updated by

h_i^(l+1) = σ( Σ_{r ∈ R} Σ_{j ∈ N_r(i)} (1 / |N_r(i)|) W_r · h_j^(l) ),   (4)

where R is the set of edge types in E_joint, N_r(i) denotes the neighborhood of node i under relation r, and σ(·) is a non-linear activation. Then, we obtain the vector representation of G_joint by pooling the hidden node embeddings from the last layer:

g = Pooling({h_i^(L) : i ∈ V_joint}).   (5)

The final prediction comes from

p(s) ∝ MLP(s + g),   (6)

where MLP is a multi-layer perceptron module that predicts the probability of the output.

LLM-based reasoning model. We also explored fusing the eventuality knowledge subgraph G_joint into LLMs. Since LLMs only receive sequence inputs, we sequentialize the subgraphs in a format similar to (Madaan and Yang, 2021; Sakaguchi et al., 2021). Using a transformation function t(·), a subgraph G_joint is transformed into a piece of text s_{G_joint} = t(G_joint), which is then fed into the LLM as part of the prompt. We discuss variations of t(·) and other details in § 4.3.
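For illustration, here is a dense-adjacency PyTorch sketch of the relational update in Eq. (4). It assumes small graphs with a per-relation adjacency tensor; the paper's implementation uses the Deep Graph Library instead.

```python
import torch
import torch.nn as nn

class SimpleRGCNLayer(nn.Module):
    """One relational graph convolution layer implementing Eq. (4):
    h_i^(l+1) = sigma( sum_r sum_{j in N_r(i)} 1/|N_r(i)| W_r h_j^(l) )."""
    def __init__(self, in_dim, out_dim, num_rels):
        super().__init__()
        self.weights = nn.Parameter(torch.empty(num_rels, in_dim, out_dim))
        nn.init.xavier_uniform_(self.weights)

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (num_rels, N, N) binary adjacency,
        # adj[r, i, j] = 1 iff j is in N_r(i).
        deg = adj.sum(dim=2, keepdim=True).clamp(min=1)        # |N_r(i)|
        msg = torch.einsum("rij,jd->rid", adj / deg, h)        # mean over N_r(i)
        out = torch.einsum("rid,rdo->io", msg, self.weights)   # sum_r W_r * msg
        return torch.relu(out)                                 # sigma = ReLU here
```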
4. Experiments

4.1. Datasets

We conduct experiments on three downstream narrative reasoning tasks. The statistics are presented in Table 1.

| Name | Train | Valid | Test |
|---|---|---|---|
| SCT-v1.0 | 1,771 | 100 | 1,871 |
| SCT-v1.5 | 1,471 | 100 | 1,571 |
| MCNC | 140,331 | 10,000 | 10,000 |

Table 1: Statistics of datasets.

• Story Cloze Test v1.0 (SCT-v1.0) was proposed by Mostafazadeh et al. (2016) to evaluate the understanding of relations between events. Given four consecutive sentences, the task is to predict the correct ending from two possible choices.
• Story Cloze Test v1.5 (SCT-v1.5): Sharma et al. (2018) later introduced a new version to correct the artifacts in the previous release. For both versions, we follow the common practice (Li et al., 2019; Yu et al., 2020) of randomly selecting 100 samples for validation and using the rest for training.
• Multiple Choice Narrative Chain (MCNC) (Granroth-Wilding and Clark, 2016; Li et al., 2018) is a 5-way multiple choice task that requires a system to predict the ending event given its previous context event sequence.

4.2. Eventuality-centric knowledge graphs

There are eventuality-centric KGs such as ATOMIC (Sap et al., 2019), GLUCOSE (Mostafazadeh et al., 2020) and ASER (Zhang et al., 2020, 2022). In this paper, we conduct experiments on ASER. The nodes in ASER are eventualities, and the edges between them are the discourse relations (e.g. "Precedence", "Contrast" and "Reason") defined in the Penn Discourse Tree Bank (Prasad et al., 2008). To enable grounding normalized events to KGs, we normalize and aggregate eventualities in the ASER-core-100 version [Footnote: We obtain the core-100 version by filtering out nodes with frequency lower than 100 from ASER-core: https://hkust-knowcomp.github.io/ASER/] by detecting and replacing the personal words with the aforementioned special tokens. The resulting normalized ASER graph contains 193k nodes and 6.6m edges.

4.3. Experimental Setup

We implement the event extractor with AllenNLP SRL tools. [Footnote: https://github.com/allenai/allennlp] To normalize the events, the syntactic parser, animacy classifier, and co-reference tools are from Stanford CoreNLP. [Footnote: https://stanfordnlp.github.io/CoreNLP/] In our implementation of the event matching module, due to the large scale of |V|, we employ Faiss (Johnson et al., 2019) to accelerate the similarity search. When retrieving subgraphs, we set the shortest path length limit γ to 3, meaning that there are at most 2 intermediate nodes between any two anchor nodes along a path. We implement the GNN-based reasoning model with the Deep Graph Library (Wang et al., 2019) and Huggingface Transformers (Wolf et al., 2020). For finetuning the supervised models, we conduct grid search over model hyper-parameters. The number of convolutional layers L is searched within {2, 3, 4}, and the hidden size of the convolutional layers within {64, 128, 256, 512}. For relational convolutional layers, the number of bases is searched within {−1, 10, 30}. We use the Adam (Kingma and Ba, 2015) optimizer with a cosine learning rate schedule to optimize the models. The learning rate is set to 1e−5 for all the "base" models, and 5e−6 for all the "large" models. All the experiments are run on 4 NVIDIA Tesla-V100 GPUs.

For the LLM-based reasoning model, we adopt ChatGPT (OpenAI, 2022). [Footnote: The evaluation was performed in September 2023.] We consider three implementations for the graph sequentialization function t(·): (1, DOT) using the DOT language to represent graphs (Gansner et al., 1993; Madaan and Yang, 2021; Sakaguchi et al., 2021); (2, Node & Edge) instead of using node indexing as in DOT, we try directly inputting all the nodes and edges (e.g., "[P0] buy a boat --> [P0's] nearby marina have a race; [P2] prepare --> [P2] go to sleep; ..."); (3, Node) only the nodes are fed into ChatGPT (e.g., "[P0] buy a boat; [P0's] nearby marina have a race ..."). The prompt template is: "Event knowledge on narrative choice A: {t(G_joint,A)} \n Event knowledge on narrative choice B: {t(G_joint,B)} \n Question:{} \n Answer:". As a baseline, we also test ChatGPT without the additional knowledge (denoted by "ChatGPT-Vanilla"). For SCT-v1.0, we report results on its test set (500 sampled instances). Since the test set of SCT-v1.5 was no longer publicly available at the time we ran this experiment [Footnote: https://competitions.codalab.org/competitions/15333], we report the results on its validation set. We do not report the performance on MCNC because the lengths of most instances in this set exceed the maximum input length.
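A minimal sketch of the "Node & Edge" sequentialization and the prompt assembly; the graph.edges attribute (a list of (head_text, relation, tail_text) tuples) is an assumed data structure, not the framework's actual interface.

```python
def serialize_node_edge(graph):
    """'Node & Edge' variant of t(G): one 'head --> tail' clause per edge,
    joined by semicolons, as in the example in § 4.3."""
    return "; ".join(f"{h} --> {t}" for h, _rel, t in graph.edges)

def build_prompt(question, g_a, g_b, t=serialize_node_edge):
    """Assemble the ChatGPT prompt from the template in § 4.3."""
    return (f"Event knowledge on narrative choice A: {t(g_a)}\n"
            f"Event knowledge on narrative choice B: {t(g_b)}\n"
            f"Question:{question}\nAnswer:")
```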
4.4. Main results

The main results on the three datasets are presented in Tables 2 and 3. Per-task performance comparisons are presented in Appendix A. As shown in Table 2, when coupled with a GNN-based reasoning model, our proposed framework achieves consistent performance gains over different backbone models. Moreover, compared with existing knowledge-enhanced models, we achieve state-of-the-art performance on the three narrative reasoning tasks. The knowledge also benefits our LLM-based reasoning model (Table 3), especially when the subgraphs are transformed using the "Node & Edge" setting.

| Method | Size | SCT-v1.0 | SCT-v1.5 | MCNC |
|---|---|---|---|---|
| (Lv et al., 2020) | 125M | – | – | 58.66 |
| (Zhou et al., 2021) | 469M | – | – | 63.62 |
| CoCoLM (Yu et al., 2020) | 355M | 97.70 | – | – |
| TransBERT (Li et al., 2019) | 355M | 91.80 | 90.30 | – |
| EventBERT (Zhou et al., 2022a) | 355M | – | 91.33 | 63.50 |
| ClarET (Zhou et al., 2022b) | 400M | – | 91.18 | 64.61 |
| RoBERTa-base (Liu et al., 2019) | 125M | 92.75±0.24 | 87.14±0.39 | 61.28±0.14 |
| RoBERTa-large (Liu et al., 2019) | 355M | 96.74±0.08 | 92.34±0.06 | 63.01±0.12 |
| DeBERTa-large (He et al., 2021) | 354M | 98.13±0.34 | 94.67±0.25 | 65.67±0.13 |
| EventGround-RoBERTa-base | 126M | 93.30±0.11 | 87.65±0.13 | 62.11±0.07 |
| EventGround-RoBERTa-large | 358M | 97.10±0.13 | 92.86±0.05 | 63.96±0.15 |
| EventGround-DeBERTa-large | 358M | 98.29±0.16 | 95.01±0.32 | 66.05±0.12 |

Table 2: Main results on the benchmarks. Numbers are the mean and standard deviation of accuracy (%) over three runs. Underlined results are the previous state-of-the-art performance.

| Model | SCT-v1.0 | SCT-v1.5 |
|---|---|---|
| Random | 50.00 | 50.00 |
| ChatGPT-Vanilla | 77.80 | 77.00 |
| ChatGPT-DOT | 67.80 | 69.00 |
| ChatGPT-Node | 72.00 | 78.00 |
| ChatGPT-Node & Edge | 79.60 | 78.00 |

Table 3: ChatGPT evaluation results (accuracy %). We report the model performance when (1) ChatGPT-Vanilla: no knowledge is provided; (2) ChatGPT-DOT, ChatGPT-Node, and ChatGPT-Node & Edge: the knowledge subgraphs are transformed into sequences as part of the inputs.

4.5. Ablation study

We conduct ablation studies to investigate the contribution of each component in our framework.

| | EventGround-RB | EventGround-BB |
|---|---|---|
| w/o know. | 92.75±0.24 | 83.63±1.16 |
| w/o extract. | 91.86±0.21 | 83.74±0.38 |
| w/o norm. | 92.43±0.46 | 83.98±0.87 |
| w/o PIE | 92.81±0.32 | 83.88±1.40 |
| ARGM | 93.17±0.25 | 84.79±1.37 |
| ARG2,3,4 | 93.03±0.49 | 84.53±0.60 |
| ARG1 | 93.30±0.11 | 85.78±0.74 |

Table 4: Effect of event extraction, normalization and partial information extraction (PIE). The mean and standard deviation of accuracies on SCT-v1.0 are reported, where "RB" and "BB" refer to the RoBERTa-base and BERT-base versions.

4.5.1. Effect of event extraction, normalization, and partial information extraction

As shown in Table 4, we ablate the event extraction ("w/o extract."), the event normalization ("w/o norm.") and the partial information extraction ("w/o PIE" and "ARGX") respectively. Specifically, when ablating the event extraction module, we instead use the whole sentence for event grounding. When ablating the event normalization part, we skip the normalization step and use the raw events for grounding. For partial information extraction, we drop event arguments in the order described in § 3.1.3, where the highest level ("ARG1") contains all the partial events of the previous levels. The baseline ("w/o know.") shows the results of vanilla language models, which do not leverage any external knowledge. We have several observations. First, the event extraction and normalization steps are necessary: when they are removed, the performance relative to the baseline does not improve, or even drops. Second, the partial information extraction step is crucial.
By only taking the first level of partial events (removing modifier arguments), we already see a considerable performance gain. The model reaches its best performance after dropping ARG1. In § 3, we discussed the sparsity of events. Here, we conduct both automatic and human evaluations to examine how our method contributes to the alleviation of sparsity.

• Automatic Evaluation (Figure 3). We analyze two automatic measures: (1) the average L2 distance d̄ in event matching (§ 3.2.1), and (2) the percentage of events considered a successful match, i.e. with L2 distance below l = 0.65 (hit rate).
• Human Evaluation (Table 5, Figure 4). We evaluate the matching results by human annotation. Three domain experts are asked to annotate whether event matching is successful for 50 stories (~500 events) randomly sampled from the validation set of SCT-v1.0. The Fleiss' Kappa value is 0.7414. We obtain ground-truth labels by majority vote, and present the accuracy of different event matching methods in Table 5. To investigate the effect of the threshold l used in § 3.2.1, we visualize F1 scores under different threshold values in Figure 4.

[Figure 3: A comparison of the event grounding performance under different settings. The bar plot (with y-axis on the left) shows the percentage hit rate of event matching. The lines show the average L2 distance d̄. We do not conduct normalization for "w/o extract.".]

| | w/o norm. | w/ norm. |
|---|---|---|
| w/o extract. | 4.7 | – |
| w/o PIE | 7.5 | 37.5 |
| ARGM | 10.0 | 56.2 |
| ARG2,3,4 | 14.6 | 73.4 |
| ARG1 | 9.9 | 86.6 |

Table 5: Human evaluation of the accuracy of event matching (%).

We can observe that: 1) directly matching sentences to KGs (w/o extract.) has rather low performance, which necessitates the event extraction stage; 2) the event normalization step drastically improves the matching performance, and removing the normalization step can decrease the accuracy by up to 76.7%; 3) in general, the matching performance gradually increases as the abstraction level increases; 4) the Pearson's r between the automatic and human evaluation results is 0.8977, indicating that thresholding on the L2 distance is a reasonable way to automatically filter out poorly matched events. Moreover, from Figure 4, we learn that event extraction, normalization, and partial information extraction improve not only the performance but also the robustness of event matching. Notably, our main model (w/ norm., ARG1) has a much higher success rate than the other models, and it is meanwhile insensitive to the tuning of the threshold l.

[Figure 4: The F1-score-to-threshold curves. They reflect the event matching performance under different thresholds l.]

| Model | Type | w/o know. | w/ know. |
|---|---|---|---|
| BERT | base | 83.63±1.16 | 85.78±0.74 |
| BERT | large | 88.85±0.23 | 90.49±0.41 |
| RoBERTa | base | 92.75±0.24 | 93.30±0.11 |
| RoBERTa | large | 96.74±0.08 | 97.10±0.13 |
| DeBERTa | base | 96.03±0.17 | 96.38±0.14 |
| DeBERTa | large | 98.13±0.24 | 98.29±0.16 |

Table 6: Effect of different text encoders. Three backbone language models, BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), and DeBERTa (He et al., 2021), are tested on SCT-v1.0.

| n-hidden | conv. | L = 2 | L = 3 |
|---|---|---|---|
| 128 | RGCN | 93.30±0.11 | 92.97±0.17 |
| 128 | GIN | 92.93±0.37 | 92.57±0.24 |
| 128 | GCN | 92.95±0.10 | 93.16±0.22 |
| 256 | RGCN | 93.14±0.20 | 93.12±0.17 |
| 256 | GIN | 93.05±0.42 | 92.41±0.31 |
| 256 | GCN | 92.94±0.13 | 92.86±0.21 |

Table 7: Effect of different GNN settings on SCT-v1.0.
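The automatic matching metrics from § 4.5.1 reduce to simple aggregates over the per-event matching distances; a minimal sketch, assuming the distances come from the matching step above:

```python
import numpy as np

def grounding_stats(distances, l=0.65):
    """Mean L2 distance d-bar over all attempted matches and the hit rate,
    i.e. the share of events with d <= l (the two measures in Figure 3)."""
    d = np.asarray(distances, dtype=float)
    return {"avg_l2": d.mean(), "hit_rate": (d <= l).mean()}

# e.g. grounding_stats([0.31, 0.58, 0.91]) -> {'avg_l2': 0.6, 'hit_rate': 0.666...}
```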
4.5.2. Effect of model structure

We test the GNN-based reasoning model with different backbone text encoders (Table 6). Compared with the baselines ("w/o know."), our framework consistently improves performance across the different versions of LMs. We also investigate the effect of different GNN configurations in Table 7. Apart from the relational convolutional layers (RGCN (Schlichtkrull et al., 2018)), we additionally test GIN (Xu et al., 2018) and GCN (Kipf and Welling, 2016), which do not model the edge type information. We can observe that RGCN outperforms GIN and GCN under the same settings. This indicates that the discourse relation knowledge in ASER is beneficial for narrative reasoning. We evaluate the LLM-based reasoning model under different graph sequentialization settings (Table 3). It is noteworthy that ChatGPT faces difficulties in understanding the knowledge represented in the DOT language, resulting in a performance drop of approximately 10%. One possible reason for this is that the model was not trained to comprehend such structured representations. Additionally, providing only node information to the model does not yield significant benefits. The model demonstrates improved performance when using the "Node & Edge" representation of graphs.

4.6. Case study

A running example is presented in Figure 5. The top three nodes that our model focuses on are "[P0] study," "[P0] pass the test," and "[P0] believe." They are highly related to the correct candidate ending 1. Also note that the path ("[P0] study," Reason, "it go well," Conjunction, "[P0] pass the test") could be explained as the causal story: someone studies hard, so it (the learning, or the exam) goes well, and he/she passes the test.

[Figure 5: An example from SCT-v1.0. Context: (s1) Caroline was a student in medical school. (s2) Caroline worked very hard to get good grades. (s3) One day Caroline failed a test by one point. (s4) Caroline was very frustrated but she continued to study hard. Candidate endings: 0. But she gave up. 1. Later, she passed the test. (correct) The figure shows the joint subgraph of context events and grounded ASER events with grounding, context, and discourse-relation edges; the top-10 node attention weights are shown in a bar plot, with the top-3 nodes highlighted.]

5. Conclusion

We point out two critical problems in grounding free-texts to eventuality-centric KGs, namely the event representation and event sparsity problems. We propose a simple yet effective approach, EventGround, to address these problems and to leverage the retrieved graph knowledge for narrative reasoning. Empirical results demonstrate its consistent performance improvement. Further investigation reveals that the normalization and partial information extraction components drastically improve the grounding performance by alleviating event sparsity.
4.6. Case study

A running example is presented in Figure 5. The top three nodes that our model focuses on are "[P0] study," "[P0] pass the test," and "[P0] believe," which are highly related to the correct candidate ending 1. Note also that the path ("[P0] study," Reason, "it go well," Conjunction, "[P0] pass the test") can be read as a causal story: someone studies hard, so it (the learning, or the exam) goes well, and he/she passes the test.

Figure 5: An example from SCT-v1.0. Context: s1: Caroline was a student in medical school. s2: Caroline worked very hard to get good grades. s3: One day Caroline failed a test by one point. s4: Caroline was very frustrated but she continued to study hard. Candidate endings: 0. But she gave up. (incorrect) 1. Later, she passed the test. (correct) The top-10 node attention weights are shown in the bar plot; the top-3 nodes are bolded and underlined.

5. Conclusion

We point out two critical problems in grounding free-texts to eventuality-centric KGs, namely the event representation and the event sparsity problems. We propose a simple yet effective approach, EventGround, to address these problems and to leverage the retrieved graph knowledge for narrative reasoning. Empirical results demonstrate its consistent performance improvement. Further investigation reveals that the normalization and partial information extraction components drastically improve the grounding performance by alleviating event sparsity.

Limitations

In event normalization, we normalize only person mentions, since these are the spans that most commonly require normalization; the normalization of other types of information is not considered, and we leave it for future work. When grounding to event-centric KGs, we retrieve the knowledge subgraph by finding shortest paths between the grounded events, because solving the Steiner tree problem exactly has high computational complexity. Other retrieval methods (e.g., reinforcement learning based ones) could also be considered.
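As an illustration of this retrieval strategy, here is a minimal sketch of the shortest-path surrogate for Steiner tree retrieval, assuming the KG is held as a networkx graph over events; the function name and setup are illustrative rather than the paper's implementation.

```python
# Sketch: approximate the Steiner tree over grounded anchor events by taking
# the union of pairwise shortest paths (exact Steiner trees are NP-hard,
# cf. Garey and Johnson, 1977).
import itertools
import networkx as nx

def retrieve_subgraph(kg: nx.Graph, anchors: list) -> nx.Graph:
    nodes = set(anchors)
    for u, v in itertools.combinations(anchors, 2):
        try:
            nodes.update(nx.shortest_path(kg, u, v))  # nodes on one shortest u-v path
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue  # anchors in different components (or unmatched): skip the pair
    return kg.subgraph(nodes).copy()
```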
Acknowledgements

The authors of this paper were supported by the NSFC Fund (U20B2053) from the NSFC of China, the RIF (R6020-19 and R6021-20) and the GRF (16211520 and 16205322) from the RGC of Hong Kong. We also thank the support from the UGC Research Matching Grants (RMGS20EG01D, RMGS20CR11, RMGS20CR12, RMGS20EG19, RMGS20EG21, RMGS23CR05, RMGS23EG08).

References

Emmon Bach. 1986. The algebra of events. Linguistics and Philosophy, pages 5–16.

Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 4762–4779. Association for Computational Linguistics.

Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR, abs/2303.12712.

Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL-08: HLT, pages 789–797.

Chunkit Chan and Tsz Ho Chan. 2023. Discourse-aware prompt for argument impact classification. In Proceedings of the 15th International Conference on Machine Learning and Computing, ICMLC 2023, Zhuhai, China, February 17-20, 2023, pages 165–171. ACM.

Chunkit Chan, Jiayang Cheng, Weiqi Wang, Yuxin Jiang, Tianqing Fang, Xin Liu, and Yangqiu Song. 2023a. ChatGPT evaluation on sentence level relations: A focus on temporal, causal, and discourse relations. CoRR, abs/2304.14827.

Chunkit Chan, Xin Liu, Tsz Ho Chan, Jiayang Cheng, Yangqiu Song, Ginny Y. Wong, and Simon See. 2023b. Self-consistent narrative prompts on abductive natural language inference. CoRR, abs/2309.08303.

Chunkit Chan, Xin Liu, Jiayang Cheng, Zihan Li, Yangqiu Song, Ginny Y. Wong, and Simon See. 2023c. DiscoPrompt: Path prediction prompt tuning for implicit discourse relation recognition. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 35–57. Association for Computational Linguistics.

Snigdha Chaturvedi, Haoruo Peng, and Dan Roth. 2017. Story comprehension for predicting what happens next. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1603–1614.

Yi Chen, Jiayang Cheng, Haiyun Jiang, Lemao Liu, Haisong Zhang, Shuming Shi, and Ruifeng Xu. 2022. Learning from sibling mentions with scalable graph inference in fine-grained entity typing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2076–2087.

Jiayang Cheng, Haiyun Jiang, Deqing Yang, and Yanghua Xiao. 2021. A question-answering based framework for relation extraction validation. arXiv preprint arXiv:2104.02934.

Li Cui, Deqing Yang, Jiayang Cheng, and Yanghua Xiao. 2021a. Incorporating syntactic information into relation representations for enhanced relation extraction. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 416–428. Springer.

Li Cui, Deqing Yang, Jiaxin Yu, Chengwei Hu, Jiayang Cheng, Jingjie Yi, and Yanghua Xiao. 2021b. Refining sample embeddings with relation prototypes to enhance continual relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 232–243.

Richard R Day, Julian Bamford, Willy A Renandya, George M Jacobs, and Vivienne Wai-Sze Yu. 1998. Extensive reading in the second language classroom. RELC Journal, 29(2):187–191.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Xiao Ding, Kuo Liao, Ting Liu, Zhongyang Li, and Junwen Duan. 2019. Event representation learning enhanced with external commonsense knowledge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4894–4903.

Tianqing Fang, Hongming Zhang, Weiqi Wang, Yangqiu Song, and Bin He. 2021. DISCOS: Bridging the gap between discourse knowledge and commonsense knowledge. In Proceedings of the Web Conference 2021, pages 2648–2659.

Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multi-hop relational reasoning for knowledge-aware question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1295–1309.

Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. 2020. Entities as experts: Sparse memory access with entity supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4937–4951. Association for Computational Linguistics.

Emden R Gansner, Eleftherios Koutsofios, Stephen C North, and K-P Vo. 1993. A technique for drawing directed graphs. IEEE Transactions on Software Engineering, 19(3):214–230.

Michael R Garey and David S. Johnson. 1977. The rectilinear Steiner tree problem is NP-complete. SIAM Journal on Applied Mathematics, 32(4):826–834.

Mark Granroth-Wilding and Stephen Clark. 2016. What happens next? Event prediction using a compositional neural network model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30.

Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. arXiv preprint arXiv:2111.09543.

Robert R Hoffman, Shane T Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.

Yuxin Jiang, Chunkit Chan, Mingyang Chen, and Wei Wang. 2023. Lion: Adversarial distillation of closed-source large language model. CoRR, abs/2305.12870.

Cheng Jiayang, Lin Qiu, Tsz Chan, Tianqing Fang, Weiqi Wang, Chunkit Chan, Dongyu Ru, Qipeng Guo, Hongming Zhang, Yangqiu Song, et al. 2023.
StoryAnalogy: Deriving story-level analogies from large language models to unlock analogical understanding. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11518–11537.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547.

Michael P Jordan. 1998. The power of negation in English: Text, context and relevance. Journal of Pragmatics, 29(6):705–752.

Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Trans. Assoc. Comput. Linguistics, 8:64–77.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.

Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.

Jan Kocon, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydlo, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, Anna Kocon, Bartlomiej Koptyra, Wiktoria Mieleszczenko-Kowszewicz, Piotr Milkowski, Marcin Oleksy, Maciej Piasecki, Lukasz Radlinski, Konrad Wojtasik, Stanislaw Wozniak, and Przemyslaw Kazienko. 2023. ChatGPT: Jack of all trades, master of none. CoRR, abs/2302.10724.

Sebastian Krause, Feiyu Xu, Hans Uszkoreit, and Dirk Weissenborn. 2016. Event linking with sentential features from convolutional neural networks. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 239–249.

I-Ta Lee and Dan Goldwasser. 2019. Multi-relational script learning for discourse relations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4214–4226.

I-Ta Lee, Maria Leonor Pacheco, and Dan Goldwasser. 2020. Weakly-supervised modeling of contextualized event embedding for discourse relations. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4962–4972.

I-Ta Lee, Maria Leonor Pacheco, and Dan Goldwasser. 2021. Modeling human mental states with an entity-based narrative graph. arXiv preprint arXiv:2104.07079.

Haoran Li, Yulin Chen, Jinglong Luo, Yan Kang, Xiaojin Zhang, Qi Hu, Chunkit Chan, and Yangqiu Song. 2023a. Privacy in large language models: Attacks, defenses and future directions. CoRR, abs/2310.10383.

Haoran Li, Dadi Guo, Donghao Li, Wei Fan, Qi Hu, Xin Liu, Chunkit Chan, Duanyi Yao, and Yangqiu Song. 2023b. P-Bench: A multi-level privacy evaluation benchmark for language models. CoRR, abs/2311.04044.

Zhongyang Li, Xiao Ding, and Ting Liu. 2018. Constructing narrative event evolutionary graph for script event prediction. arXiv preprint arXiv:1805.05081.

Zhongyang Li, Xiao Ding, and Ting Liu. 2019. Story ending prediction by transferable BERT. arXiv preprint arXiv:1905.07504.

Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. KagNet: Knowledge-aware graph networks for commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2829–2839.

Xin Liu, Jiayang Cheng, Yangqiu Song, and Xin Jiang. 2022. Boosting graph structure learning with dummy nodes. In International Conference on Machine Learning, pages 13704–13716. PMLR.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Shangwen Lv, Fuqing Zhu, and Songlin Hu. 2020. Integrating external event knowledge for script learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 306–315.

Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Linyang Li, Qi Zhang, and Xuanjing Huang. 2022. Template-free prompt tuning for few-shot NER. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 5721–5732. Association for Computational Linguistics.

Aman Madaan and Yiming Yang. 2021. Neural language modeling for contextualized temporal graph generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 864–881, Online. Association for Computational Linguistics.

Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 777–782.

Sewon Min, Danqi Chen, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2019. Knowledge guided text retrieval and reading for open domain question answering. arXiv preprint arXiv:1911.03868.

Yusuke Mori, Hiroaki Yamane, Yusuke Mukuta, and Tatsuya Harada. 2020. Finding and generating a missing part for story completion. In Proceedings of the 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 156–166.

Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849.

Nasrin Mostafazadeh, Aditya Kalyanpur, Lori Moon, David Buchanan, Lauren Berkowitz, Or Biran, and Jennifer Chu-Carroll. 2020. GLUCOSE: Generalized and contextualized story explanations. arXiv preprint arXiv:2009.07758.

Alexander PD Mourelatos. 1978. Events, processes, and states. Linguistics and Philosophy, 2:415–434.

Gregory Murphy. 2004. The Big Book of Concepts. MIT Press.

Joel Nothman, Matthew Honnibal, Ben Hachey, and James R Curran. 2012. Event linking: Grounding event reference in a news archive. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 228–232.

OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.

OpenAI. 2022. ChatGPT: Optimizing language models for dialogue. OpenAI.

Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106.

Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 43–54. Association for Computational Linguistics.

Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08).

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, pages 671–688. Association for Computational Linguistics.

Joshua Robinson and David Wingate. 2023. Leveraging large language models for multiple choice question answering. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Niket Tandon, Peter Clark, and Yejin Choi. 2021. proScript: Partially ordered scripts generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2138–2149, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027–3035.

Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593–607. Springer.

Rishi Sharma, James Allen, Omid Bakhshandeh, and Nasrin Mostafazadeh. 2018. Tackling the story ending biases in the story cloze test. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 752–757.

Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence.

Siddarth Srinivasan, Richa Arora, and Mark Riedl. 2018. A simple and effective approach to the story cloze test. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 92–96.

Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019a. PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2380–2390.

Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231–4242.
Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, Dianhai Yu, Hao Tian, Hua Wu, and Haifeng Wang. 2021. ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. CoRR, abs/2107.02137.

Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019b. ERNIE: Enhanced representation through knowledge integration. CoRR, abs/1904.09223.

Pat Verga, Haitian Sun, Livio Baldini Soares, and William W. Cohen. 2020. Facts as experts: Adaptable and interpretable neural memory over symbolic knowledge. CoRR, abs/2007.00849.

Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng Jiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, et al. 2023. Survey on factuality in large language models: Knowledge, retrieval and domain-specificity. arXiv preprint arXiv:2310.07521.

Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. 2019. Deep Graph Library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315.

Peter West, Chandra Bhagavatula, Jack Hessel, Jena D. Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2021. Symbolic knowledge distillation: From general language models to commonsense models. CoRR, abs/2110.07178.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.

Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. 2020. Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2019. Improving question answering over incomplete KBs with knowledge-aware reader. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4258–4264.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826.

Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: Reasoning with language models and knowledge graphs for question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 535–546.

Changlong Yu, Hongming Zhang, Yangqiu Song, and Wilfred Ng. 2020. CoCoLM: Complex commonsense enhanced language model. arXiv preprint arXiv:2012.15643.

Xiaodong Yu, Wenpeng Yin, Nitish Gupta, and Dan Roth. 2021. Event linking: Grounding event mentions to Wikipedia. arXiv preprint arXiv:2112.07888.
Hongming Zhang, Xin Liu, Haojie Pan, Haowen Ke, Jiefu Ou, Tianqing Fang, and Yangqiu Song. 2022. ASER: Towards large-scale commonsense knowledge acquisition via higher-order selectional preference over eventualities. Artificial Intelligence, page 103740.

Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song, and Cane Wing-Ki Leung. 2020. ASER: A large-scale eventuality knowledge graph. In Proceedings of The Web Conference 2020, pages 201–211.

Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 1441–1451. Association for Computational Linguistics.

Ming Zhong, Yang Liu, Suyu Ge, Yuning Mao, Yizhu Jiao, Xingxing Zhang, Yichong Xu, Chenguang Zhu, Michael Zeng, and Jiawei Han. 2022. Unsupervised summarization with customized granularities. arXiv preprint arXiv:2201.12502.

Yucheng Zhou, Xiubo Geng, Tao Shen, Guodong Long, and Daxin Jiang. 2022a. EventBERT: A pre-trained model for event correlation reasoning. In Proceedings of the ACM Web Conference 2022, pages 850–859.

Yucheng Zhou, Xiubo Geng, Tao Shen, Jian Pei, Wenqiang Zhang, and Daxin Jiang. 2021. Modeling event-pair relations in external knowledge graphs for script reasoning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4586–4596.

Yucheng Zhou, Tao Shen, Xiubo Geng, Guodong Long, and Daxin Jiang. 2022b. ClarET: Pre-training a correlation-aware context-to-event transformer for event-centric generation and classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2559–2575.

A. Detailed experimental results

We present a detailed performance comparison for SCT-v1.0 and SCT-v1.5 (Table 8), as well as for MCNC (Table 9). Performance of the significant baselines on the corresponding tasks is included; single-column baseline results are reported on SCT-v1.0.

Table 8: Results on SCT v1.0 and v1.5. Numbers are the mean and standard deviation of accuracy (%) over three runs.

Method                        SCT-v1.0     SCT-v1.5
Random                        50.00        50.00
(Chaturvedi et al., 2017)     77.60        -
(Mostafazadeh et al., 2016)   58.50        -
(Srinivasan et al., 2018)     76.50        -
(Yu et al., 2020)             97.70        -
(Zhou et al., 2022a)          91.33        -
(Zhou et al., 2022b)          91.18        -
(Li et al., 2019)             91.80        90.30
RoBERTa-base                  92.75±0.24   87.14±0.39
RoBERTa-large                 96.74±0.08   92.34±0.06
DeBERTa-large                 98.13±0.34   94.67±0.25
EventGround-RB                93.30±0.11   87.65±0.13
EventGround-RL                97.10±0.13   92.86±0.05
EventGround-DL                98.29±0.16   95.01±0.32

Table 9: Results on MCNC. Numbers are the mean and standard deviation of accuracy (%) over three runs.

Method                               MCNC
Random                               20.00
(Chambers and Jurafsky, 2008)        30.52
(Granroth-Wilding and Clark, 2016)   49.57
(Li et al., 2018)                    52.45
(Ding et al., 2019)                  56.03
(Lv et al., 2020)                    58.66
(Zhou et al., 2021)                  63.62
(Zhou et al., 2022a)                 63.50
(Lee et al., 2020)                   63.59
(Lee and Goldwasser, 2019)           63.67
(Zhou et al., 2022b)                 64.61
RoBERTa-base                         61.28±0.14
RoBERTa-large                        63.01±0.12
DeBERTa-large                        65.67±0.13
EventGround-RB                       62.11±0.07
EventGround-RL                       63.96±0.15
EventGround-DL                       66.05±0.12

B. Results and statistics of event extraction and grounding

Table 11 shows the detailed statistics of the event grounding and subgraph retrieval stage.
It is clear that our proposed event extraction, normalization, and multi-level extraction methods help alleviate the event sparsity to a large extent. This is reflected not only in the hit rate and mean L2 distance at the event grounding stage, but also in the statistics of the retrieved graphs.

Table 11: Results and statistics of event grounding and subgraph retrieval. Values in parentheses are the statistics for the "w/o norm." experiments (shown in gray in the original table).

Method         hit rate (%)    mean L2 distance d̄   |Vsub|   |Esub|   |Vjoint|   |Ejoint|
w/o extract.   1.43            0.9566                0.1235   0.1951   5.12       8.35
w/o PIE        88.28 (12.50)   0.3853 (0.8351)       13.37    36.33    21.60      67.17
ARGM           93.22 (21.43)   0.2819 (0.7801)       22.34    74.12    30.53      109.64
ARG2,3,4       94.38 (45.44)   0.1818 (0.6477)       28.03    93.94    36.20      134.09
ARG1           97.12 (41.97)   0.1150 (0.6968)       63.27    281.32   71.41      330.73

Table 10 shows the performance comparison between semantic-similarity-based matching (which we use) and token-level similarity matching. It is clear from the table that token-level similarity matching, such as tf-idf, fails to perform as well as semantic matching. Note that the information extraction here is fundamentally different from the entity-centric line of work (Cui et al., 2021b,a; Chen et al., 2022), as our setting involves decomposition and semantic similarity computations over text snippets.

Table 10: Performance comparison between the baseline, token-level similarity based event matching, and semantic similarity based event matching.

Method                            RoBERTa      BERT
Baseline (w/o know.)              92.75±0.24   83.63±1.16
Token-level similarity (tf-idf)   92.84±0.27   84.27±0.73
Semantic similarity (SBERT)       93.30±0.11   85.78±0.74
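As a companion to Table 10, here is a minimal sketch of the token-level baseline, i.e., tf-idf matching by cosine similarity. It uses standard scikit-learn components; the setup is illustrative rather than the paper's exact configuration.

```python
# Sketch of the token-level matching baseline from Table 10: each extracted
# event is matched to the KG event with the highest tf-idf cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_match(events, kg_events):
    vec = TfidfVectorizer().fit(events + kg_events)  # fit a shared vocabulary
    sims = cosine_similarity(vec.transform(events), vec.transform(kg_events))
    return [kg_events[i] for i in sims.argmax(axis=1)]  # nearest KG event per event
```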
C. Supplementary case studies

Apart from the case study provided in Section 4.6, we provide another two examples in Figures 10 and 11.

D. Annotation details

We show the annotation interface presented to the expert annotators in Figure 12. Users are prompted to compare an event and its matched anchor, and then to give an evaluation of the matching quality (Successful = 1 or Not = 0). Since the annotation requires domain-specific knowledge, we recruited three student researchers within our area who volunteered to help us conduct the evaluation. The payment to annotators is higher than the local minimum wage.

E. Obtaining ChatGPT performance

In addition to GNNs (Kipf and Welling, 2016; Xu et al., 2018; Schlichtkrull et al., 2018; Liu et al., 2022), we also evaluated large language models as graph reasoning modules. Recently, large language models (e.g., ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023)) have shown promising performance on various tasks, and have raised concerns and discussions on topics such as factuality and privacy (Wang et al., 2023; Bubeck et al., 2023; Kocon et al., 2023; Chan et al., 2023a; Jiang et al., 2023; Li et al., 2023a,b). In this paper, we test ChatGPT[16] on narrative reasoning tasks with additional grounded knowledge. The zero-shot performance of large language models, which relies on the sophisticated design of templates, has shown variance across various tasks (Ma et al., 2022; Chan et al., 2023b,c; Chan and Chan, 2023). To obtain replicable and representative results, we follow Robinson and Wingate (2023) and Cheng et al. (2021) to formulate the task as a multiple-choice question answering problem. The ChatGPT template is displayed in Figure 13.

[16] The evaluation was performed in September 2023 by calling the ChatGPT (gpt-3.5-turbo) API.

Figure 6: Overview of the knowledge model paradigm (left) and the retrieval-and-integration paradigm (right). The knowledge model paradigm pretrains LMs with specially designed objectives, and then further finetunes them to adapt to downstream tasks for prediction. The retrieval-and-integration paradigm retrieves relevant subgraphs of the story context and then makes predictions according to the retrieved subgraphs.

Figure 7: The precision-to-threshold curves.

Figure 8: The recall-to-threshold curves.

Figure 9: The precision-recall curve.

Table 12: The performance of ChatGPT on the SCT-v1.0 test set (500 sampled instances) and the SCT-v1.5 validation set. The submission upload for the SCT-v1.5 leaderboard (https://competitions.codalab.org/competitions/15333) is no longer available; therefore, we test ChatGPT performance on the validation set.

Model                      SCT-v1.0 (%)   SCT-v1.5 (%)
Random                     50.00          50.00
ChatGPT (prompt only)      77.80          77.00
ChatGPT w/ proScript DOT   67.80          69.00
ChatGPT w/ node            72.00          78.00
ChatGPT w/ node & edge     79.60          78.00

Figure 10: Supplementary case 1. Context: s1: Ava needed to go shopping with her two-year old. s2: But she couldn't find his shoes even after looking everywhere! s3: She decided she had no choice but to buy him new shoes. s4: She carried him into the store in order to select a new pair. Candidate endings: 0. Ava was a neglectful mother. (incorrect) 1. Ava took good care of her son. (correct)

Figure 11: Supplementary case 2. Context: s1: The children were inside playing when they heard music. s2: They ran to their mother and begged for change. s3: She handed them a couple of dollars. s4: They took off running outside. Candidate endings: 0. The children threw the money in the street. (incorrect) 1. The children excitedly bought ice cream cones. (correct) In this case, we omit some KG nodes since the original graph is very dense.

Figure 12: Annotation interface in the command line.

Templates (Figure 13):

ChatGPT (Prompt):
Question: Which choice of narrative is more reasonable? Only answer "A" or "B" only without any other words or explanations.
A. Danny bought a boat. His nearby marina was having a race. He decided to enter. Danny and his best friend manned the boat. Danny decided to go to sleep.
B. Danny bought a boat. His nearby marina was having a race. He decided to enter. Danny and his best friend manned the boat. They prepared for the start of the race.
Answer:

ChatGPT (proScript DOT):
Event knowledge on narrative choice A: 0: '[P0] buy a boat'; … 12: '[P0] go'
Event knowledge edges for narrative choice A: 0-->1; … 12-->6;
Event knowledge on narrative choice B: 0: '[P0] buy a boat'; … 12: '[P2] prepare'
Event knowledge edges for narrative choice B: 0-->1; … 12-->5;
Question: Which choice of narrative is more reasonable based on the event knowledge, knowledge edge and the choices? Only answer "A" or "B" only without any other words or explanations. All [P0], [P1], etc. are the people mentioned in the passage.
A. Danny bought a boat. His nearby marina was having a race. He decided to enter. Danny and his best friend manned the boat. Danny decided to go to sleep.
B. Danny bought a boat. His nearby marina was having a race. He decided to enter. Danny and his best friend manned the boat. They prepared for the start of the race.
Answer:

ChatGPT (Node):
Event knowledge on narrative choice A: [P0] buy a boat. … [P0] go
Event knowledge on narrative choice B: [P0] buy a boat. … [P2] prepare
Question: Which choice of narrative is more reasonable based on the event knowledge and the choices? Only answer "A" or "B" only without any other words or explanations. All [P0], [P1], etc. are the people mentioned in the passage.
A. Danny bought a boat. His nearby marina was having a race. He decided to enter. Danny and his best friend manned the boat. Danny decided to go to sleep.
B. Danny bought a boat. His nearby marina was having a race. He decided to enter. Danny and his best friend manned the boat.
They prepared for the start of the race.
Answer:

ChatGPT (Node & Edge):
Event knowledge on narrative choice A: [P0] buy a boat-->[P0's] nearby marina have a race; … [P0] go-->[P0] go to sleep;
Event knowledge on narrative choice B: [P0] buy a boat-->[P0's] nearby marina have a race; … [P2] prepare-->[P2] prepare for the start of the race;
Question: Which choice of narrative is more reasonable based on the event knowledge and the choices? Only answer "A" or "B" only without any other words or explanations. All [P0], [P1], etc. are the people mentioned in the passage.
A. Danny bought a boat. His nearby marina was having a race. He decided to enter. Danny and his best friend manned the boat. Danny decided to go to sleep.
B. Danny bought a boat. His nearby marina was having a race. He decided to enter. Danny and his best friend manned the boat. They prepared for the start of the race.
Answer:

Figure 13: The ChatGPT templates.
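To show how a Figure-13-style query could be issued programmatically, here is a minimal sketch. It assumes the pre-1.0 openai Python SDK (the ChatCompletion interface current at the September 2023 evaluation date); the function name and prompt assembly are illustrative rather than the paper's actual evaluation harness.

```python
# Sketch: issue a Node & Edge multiple-choice query to gpt-3.5-turbo and parse
# the single-letter answer. Uses the openai SDK (< 1.0) ChatCompletion interface.
import openai

def ask_choice(knowledge_a, knowledge_b, narrative_a, narrative_b):
    prompt = (
        f"Event knowledge on narrative choice A: {knowledge_a}\n"
        f"Event knowledge on narrative choice B: {knowledge_b}\n\n"
        "Question: Which choice of narrative is more reasonable based on the "
        'event knowledge and the choices? Only answer "A" or "B" only without '
        "any other words or explanations.\n"
        f"A. {narrative_a}\nB. {narrative_b}\nAnswer:"
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic decoding for replicable scoring
    )
    return resp["choices"][0]["message"]["content"].strip()  # expected: "A" or "B"
```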