diff --git "a/0dFST4oBgHgl3EQfVTiq/content/tmp_files/load_file.txt" "b/0dFST4oBgHgl3EQfVTiq/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/0dFST4oBgHgl3EQfVTiq/content/tmp_files/load_file.txt" @@ -0,0 +1,680 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf,len=679 +page_content='Computer Algebra in R Bridges a Gap Between Mathematics and Data in the Teaching of Statistics and Data Science Mikkel Meyer Andersen and Søren Højsgaard February 1, 2023 Mikkel Meyer Andersen Department of Mathematical Sciences, Aalborg University, Denmark Skjernvej 4A 9220 Aalborg Ø, Denmark ORCiD: 0000-0002-0234-0266 mikl@ math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' aau.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' dk Søren Højsgaard Department of Mathematical Sciences, Aalborg University, Denmark Skjernvej 4A 9220 Aalborg Ø, Denmark ORCiD: 0000-0002-3269-9552 sorenh@ math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' aau.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' dk Abstract The capability of R to do symbolic mathematics is enhanced by the caracas package.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' This package uses the Python computer algebra library SymPy as a back-end but caracas is tightly integrated in the R environment, thereby enabling the R user with symbolic mathematics within R.' 
We demonstrate how mathematics and statistics can benefit from bridging computer algebra and data via R. This is done through a number of examples, and we propose some topics for small student projects. The caracas package integrates well with, e.g., Rmarkdown, and as such the creation of scientific reports and teaching material is supported.

Introduction

The caracas package [Andersen and Højsgaard, 2021] and the Ryacas package [Andersen and Højsgaard, 2019] enhance the capability of R [R Core Team, 2023] to handle symbolic mathematics. In this paper we illustrate the use of the caracas package in connection with teaching mathematics and statistics. Focus is on (1) treating statistical models symbolically, (2) bridging the gap between symbolic mathematics and numerical computations, and (3) preparing teaching material in a reproducible framework (provided by, e.g.
rmarkdown [Allaire et al., 2021, Xie et al., 2018, 2020]). The caracas package is available from CRAN [R Core Team, 2023]. The open-source development version of caracas is available at https://github.com/r-cas/caracas, and readers are recommended to study the online documentation at https://r-cas.github.io/caracas/. The caracas package provides an interface from R to the Python package SymPy [Meurer et al., 2017].
This means that SymPy is “running under the hood” of R via the reticulate package [Ushey et al., 2020]. The SymPy package is mature and robust with many users and developers. Neither caracas nor Ryacas is as powerful as some of the larger commercial computer algebra systems (CAS). The virtue of caracas and Ryacas lies elsewhere: (1) Mathematical tools like equation solving, summation, limits, symbolic linear algebra, output in TeX format, etc. are directly available from within R. (2) The packages enable working with the same language and in the same environment as the user does for statistical analyses. (3) Symbolic mathematics can easily be combined with data, which is helpful in, e.g.,
numerical optimization. (4) The packages are open source and therefore support, e.g., education, also for people with limited economic means, thus contributing to the United Nations sustainable development goals [United Nations General Assembly, 2015].

arXiv:2301.13777v1 [stat.AP] 31 Jan 2023

The paper is organized in the following sections: The section Mathematics and documents containing mathematics briefly introduces the caracas package and its syntax, including how caracas can be used in connection with preparing texts, e.g. teaching material. More details are provided in the section Important technical aspects.
Several vignettes illustrating caracas are provided, and they are also available online; see https://r-cas.github.io/caracas/. The section Statistics examples is the main section of the paper; here we present a sample of statistical models where we believe that a symbolic treatment is a valuable supplement to a numerical one in connection with teaching. The section Possible topics to study contains suggestions for hands-on activities for students. Lastly, the section Discussion and future work contains a discussion of the paper.

Mathematics and documents containing mathematics

We start by introducing the caracas syntax on familiar topics within calculus and linear algebra.
Calculus

First we define a caracas symbol x (more details will follow in the section Important technical aspects) and subsequently a caracas polynomial p in x (p becomes a symbol because x is):

R> library(caracas)
R> def_sym(x) ## Declares 'x' as a symbol
R> p <- 1 - x^2 + x^3 + x^4/4 - 3 * x^5 / 5 + x^6 / 6
R> p
#> [c]: x^6/6 - 3*x^5/5 + x^4/4 + x^3 - x^2 + 1

The gradient of p is:

R> grad <- der(p, x) ## 'der' is shorthand for derivative
R> grad
#> [c]: x^5 - 3*x^4 + x^3 + 3*x^2 - 2*x

Stationary points of p can be found by finding roots of the gradient. In this simple case we can factor the gradient:

R> factor_(grad)
#> [c]: x*(x - 2)*(x - 1)^2*(x + 1)

The factorization shows that the stationary points are −1, 0, 1 and 2.
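As a sanity check outside the CAS, the factorization and the stationary points can be verified numerically. A minimal sketch in plain Python (chosen because SymPy, the caracas back-end, lives in Python; the helper names are ours, not the paper's):

```python
# Plain-Python check (no CAS) that the factored gradient matches the
# expanded gradient and vanishes at the stationary points -1, 0, 1, 2.
def grad(x):
    # d/dx of p(x) = x^6/6 - 3*x^5/5 + x^4/4 + x^3 - x^2 + 1
    return x**5 - 3*x**4 + x**3 + 3*x**2 - 2*x

def grad_factored(x):
    return x * (x - 2) * (x - 1)**2 * (x + 1)

stationary = [-1, 0, 1, 2]
assert all(grad(x) == 0 for x in stationary)
assert all(abs(grad(x) - grad_factored(x)) < 1e-9
           for x in [-2.5, -0.5, 0.5, 1.5, 3.0])
```

Agreement at a handful of points is of course not a proof, but it catches sign and coefficient slips quickly.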
To investigate whether the stationary points are local minima, local maxima or saddle points, we compute the Hessian (here simply the second derivative) and evaluate it at the stationary points:

R> hess <- der2(p, x)
R> hess
#> [c]: 5*x^4 - 12*x^3 + 3*x^2 + 6*x - 2
R> hess_ <- as_func(hess)
R> hess_
#> function (x)
#> {
#>     5 * x^4 - 12 * x^3 + 3 * x^2 + 6 * x - 2
#> }
R> stationary_points <- c(-1, 0, 1, 2)
R> hess_(stationary_points)
#> [1] 12 -2 0 6

Alternatively, we can create an R expression and evaluate it:

R> eval(as_expr(hess), list(x = stationary_points))
#> [1] 12 -2 0 6

The sign of the Hessian at these points gives that x = −1 and x = 2 are local minima, x = 0 is a local
maximum and x = 1 is a saddle point. In general we can find the stationary points symbolically and evaluate the Hessian as follows (output omitted):

R> sol <- solve_sys(lhs = grad, vars = x) ## finds roots by default
R> subs(hess, sol[[1]]) ## the first solution
R> lapply(sol, function(s) subs(hess, s)) ## iterate over all solutions

Linear algebra

Next, we create a symbolic matrix and find its inverse:

R> M <- as_sym(toeplitz(c("a", "b", 0))) ## as_sym() converts an R object to a caracas symbol
R> Minv <- inv(M) %>% simplify()

Default printing of M is (Minv is shown in the next section):

R> M
#> [c]: [a b 0]
#>      [b a b]
#>      [0 b a]

A vector is a one-column matrix, but it is printed as its transpose to save space:

R> v <- vector_sym(3,
+     "v")
R> v
#> [c]: [v1 v2 v3]^T

Matrix products are computed using the %*% operator:

R> M %*% v
#> [c]: [a*v1 + b*v2  a*v2 + b*v1 + b*v3  a*v3 + b*v2]^T

Preparing mathematical documents

The packages Sweave [Leisch, 2002] and Rmarkdown [Allaire et al., 2021] provide integration of LaTeX and other text formatting systems into R, helping to produce text documents with R content. In a similar vein, caracas provides an integration of computer algebra into R; in addition, caracas also facilitates the creation of documents with mathematical content without, e.g., typing tedious LaTeX instructions. A LaTeX rendering of the caracas symbol p is obtained by typing

$$p(x) = `r tex(p)`$$

which results in the following when the document is compiled:

$$p(x) = \frac{x^6}{6} - \frac{3 x^5}{5} + \frac{x^4}{4} + x^3 - x^2 + 1$$

Typing $$M^{-1} = `r tex(Minv)`$$ produces the result:

$$M^{-1} = \begin{pmatrix}
\frac{a^2 - b^2}{a (a^2 - 2 b^2)} & -\frac{b}{a^2 - 2 b^2} & \frac{b^2}{a (a^2 - 2 b^2)} \\
-\frac{b}{a^2 - 2 b^2} & \frac{a}{a^2 - 2 b^2} & -\frac{b}{a^2 - 2 b^2} \\
\frac{b^2}{a (a^2 - 2 b^2)} & -\frac{b}{a^2 - 2 b^2} & \frac{a^2 - b^2}{a (a^2 - 2 b^2)}
\end{pmatrix}.$$
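A symbolic inverse like the one above is easy to get wrong when transcribed by hand, so a quick numeric spot-check is reassuring. A plain-Python sketch for the sample values a = 3, b = 1 (any values with a(a^2 − 2b^2) different from 0 would do; the check is ours, not the paper's):

```python
# Verify M %*% Minv = I numerically for a = 3, b = 1, using the
# closed-form inverse displayed above (entries written as adj(M)/det(M)).
a, b = 3.0, 1.0
M = [[a, b, 0.0], [b, a, b], [0.0, b, a]]
d = a * (a**2 - 2*b**2)            # det(M) = a^3 - 2*a*b^2
Minv = [[(a**2 - b**2)/d, -a*b/d,  b**2/d],
        [-a*b/d,           a**2/d, -a*b/d],
        [b**2/d,          -a*b/d,  (a**2 - b**2)/d]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

ident = matmul(M, Minv)
assert all(abs(ident[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(3) for j in range(3))
```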
The determinant of M is det(M) = a^3 − 2 a b^2, and this can be factored out of the matrix by dividing each entry by the determinant and multiplying the new matrix by the determinant, which simplifies the appearance of the matrix:

R> Minv_fact <- as_factor_list(1 / factor_(det(M)), simplify(Minv * det(M)))

Typing $$M^{-1} = `r tex(Minv_fact)`$$ produces this:

$$M^{-1} = \frac{1}{a (a^2 - 2 b^2)} \begin{pmatrix} a^2 - b^2 & -a b & b^2 \\ -a b & a^2 & -a b \\ b^2 & -a b & a^2 - b^2 \end{pmatrix}.$$

Finally we illustrate the creation of additional mathematical expressions:

R> def_sym(x, n)
R> y <- (1 + x/n)^n
R> lim(y, n, Inf)
#> [c]: exp(x)

Typing $$y = `r tex(y)`$$ etc. gives

$$y = \left(1 + \frac{x}{n}\right)^n, \qquad \lim_{n \to \infty} y = \exp(x).$$

We can also prepare unevaluated expressions using the doit argument. This helps making reproducible documents where changes in the code appear automatically in the generated formulas. This is done as follows:

R> l <- lim(y, n, Inf, doit = FALSE)
R> l
#> [c]: lim n->oo (1 + x/n)^n
R> doit(l)
#> [c]: exp(x)

Typing $$`r tex(l)` = `r tex(doit(l))`$$ gives

$$\lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n = e^x.$$
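The limit computed symbolically above can also be illustrated numerically, which is a nice classroom exercise in its own right. A small Python sketch (our own check, not part of the paper's code):

```python
import math

# Numeric illustration of (1 + x/n)^n -> exp(x) as n grows, for x = 1.5.
x = 1.5
approx = [(1 + x/n)**n for n in (10, 1000, 100000)]
errors = [abs(v - math.exp(x)) for v in approx]

assert errors[0] > errors[1] > errors[2]   # the error shrinks with n
assert errors[-1] < 1e-4                   # and is already small at n = 1e5
```

The leading error term is roughly exp(x) x^2 / (2n), so halving the error requires doubling n, which students can observe directly by extending the tuple of n values.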
Several functions have the doit argument, e.g. lim(), int() and sum_().

Important technical aspects

A caracas symbol is a list with a pyobj slot and the class caracas_symbol. The pyobj is an object in Python (often a SymPy object). As such, a symbol (in R) provides a handle to a Python object. In the design of caracas we have tried to make this distinction something the user should not be concerned with, but it is worthwhile being aware of it. The sections Calculus and Linear algebra illustrate that caracas symbols can be created with def_sym() and as_sym(). Both declare the symbol in R and in Python.
A symbol can also be defined in terms of other symbols: define symbols s1 and s2, and define the symbol s3_ in terms of s1 and s2:

R> def_sym(s1, s2) ## Note: 's1' and 's2' exist in both R and Python
R> s1$pyobj
#> s1
R> s3_ <- s1 * s2 ## Note: 's3_' is a symbol in R; no corresponding object in Python
R> s3_$pyobj
#> s1*s2

The underscore in s3_ indicates that this expression is defined in terms of other symbols. This convention is used throughout the paper. Next, express s1 and s2 in terms of symbols u and v (which are created on the fly):

R> s4_ <- subs(s3_, c("s1", "s2"), c("u+v", "u-v"))
R> s4_
#> [c]: (u - v)*(u + v)

Statistics examples

In this section we examine larger statistical examples and demonstrate how caracas can help improve the understanding of the models.

Linear models

A matrix algebra approach to, e.g., linear models is very clear and concise.
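The substitution result s4_ = (u + v)(u − v) is just the difference of squares, and connecting the symbolic result to ordinary numeric evaluation is a useful reflex to build. A one-function check in plain Python (our own, not the paper's):

```python
# Numeric spot-check of the substitution result above:
# with s1 = u + v and s2 = u - v, s1*s2 equals u^2 - v^2.
def diff_of_squares_gap(u, v):
    return abs((u + v) * (u - v) - (u**2 - v**2))

assert all(diff_of_squares_gap(u, v) < 1e-12
           for u, v in [(1.0, 2.0), (-3.0, 0.5), (0.25, 4.0)])
```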
On the other hand, it can also be argued that matrix algebra obscures what is being computed. Numerical examples are useful for some aspects of the computations but not for others. In this respect symbolic computations can be enlightening. Consider a two-way analysis of variance (ANOVA) with one observation per group, see Table 1.

Table 1: Two-by-two layout of data.

y11  y12
y21  y22

R> nr <- 2
R> nc <- 2
R> y <- matrix_sym(nr, nc, "y")
R> dim(y) <- c(nr*nc, 1)
R> y
#> [c]: [y11 y21 y12 y22]^T
R> dat <- expand.grid(r = factor(1:nr), s = factor(1:nc))
R> X <- model.matrix(~ r + s, data = dat) |> as_sym()
R> b <- vector_sym(ncol(X), "b")
R> mu <- X %*% b

For the specific model we have random variables y = (yij). All yij are assumed independent and yij ∼ N(µij, v).
The corresponding mean vector µ has the form given below:

$$y = \begin{pmatrix} y_{11} \\ y_{21} \\ y_{12} \\ y_{22} \end{pmatrix}, \quad
X = \begin{pmatrix} 1 & . & . \\ 1 & 1 & . \\ 1 & . & 1 \\ 1 & 1 & 1 \end{pmatrix}, \quad
b = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}, \quad
\mu = X b = \begin{pmatrix} b_1 \\ b_1 + b_2 \\ b_1 + b_3 \\ b_1 + b_2 + b_3 \end{pmatrix}.$$

Above and elsewhere, dots represent zero. The least squares estimate of b is the vector ˆb that minimizes ||y − Xb||^2, which leads to the normal equations (X⊤X)b = X⊤y to be solved. If X has full rank, the unique solution to the normal equations is ˆb = (X⊤X)^{−1}X⊤y. Hence the estimated mean vector is ˆµ = Xˆb = X(X⊤X)^{−1}X⊤y.
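To see the mean structure µ = Xb concretely, here is a small numeric sketch in Python, with made-up values standing in for the symbols b1, b2, b3 (the design matrix is the treatment-coded model.matrix(~ r + s) displayed above):

```python
# Hypothetical numeric coefficients in place of the symbols b1, b2, b3.
b1, b2, b3 = 10.0, 2.0, 5.0

# model.matrix(~ r + s) for two 2-level factors, treatment coding;
# rows ordered as (r1,s1), (r2,s1), (r1,s2), (r2,s2).
X = [[1, 0, 0],
     [1, 1, 0],
     [1, 0, 1],
     [1, 1, 1]]

mu = [row[0]*b1 + row[1]*b2 + row[2]*b3 for row in X]
assert mu == [b1, b1 + b2, b1 + b3, b1 + b2 + b3]
```

Reading off the rows of X this way makes it clear why b2 is a row effect and b3 a column effect, both relative to the baseline cell with mean b1.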
Symbolic computations are not needed for quantities involving only the model matrix X, but when it comes to computations involving y, a symbolic treatment of y is useful:

R> XtX <- t(X) %*% X
R> XtXinv <- inv(XtX)
R> Xty <- t(X) %*% y
R> b_hat <- XtXinv %*% Xty

$$X^\top y = \begin{pmatrix} y_{11} + y_{12} + y_{21} + y_{22} \\ y_{21} + y_{22} \\ y_{12} + y_{22} \end{pmatrix}, \qquad
\hat{b} = \frac{1}{4} \begin{pmatrix} 3 y_{11} + y_{12} + y_{21} - y_{22} \\ -2 y_{11} - 2 y_{12} + 2 y_{21} + 2 y_{22} \\ -2 y_{11} + 2 y_{12} - 2 y_{21} + 2 y_{22} \end{pmatrix} \quad (1)$$

Hence X⊤y (a sufficient reduction of data if the variance is known) consists of the sum of all observations, the sum of observations in the second row and the sum of observations in the second column. For ˆb, the second component is, apart from a scaling, the sum of the second row minus the sum of the first row. Likewise, the third component is the sum of the second column minus the sum of the first column. It is hard to give an interpretation of the first component of ˆb.
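The closed form (1) can be verified numerically by solving the normal equations directly. A plain-Python sketch with arbitrary data values; the hand-computed (X⊤X)^{-1} below is our own working, not taken from the paper:

```python
# Check equation (1): solve (X'X) b = X'y and compare with the
# closed-form expressions, for arbitrary data y11, y21, y12, y22.
y11, y21, y12, y22 = 1.0, 2.0, 3.0, 4.0
y = [y11, y21, y12, y22]
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1]]

XtX = [[sum(X[k][i] * X[k][j] for k in range(4)) for j in range(3)]
       for i in range(3)]                       # = [[4,2,2],[2,2,1],[2,1,2]]
Xty = [sum(X[k][i] * y[k] for k in range(4)) for i in range(3)]

# (X'X)^{-1} worked out by hand for this design: (1/4)*[[3,-2,-2],[-2,4,0],[-2,0,4]]
XtXinv = [[0.75, -0.5, -0.5], [-0.5, 1.0, 0.0], [-0.5, 0.0, 1.0]]
b_hat = [sum(XtXinv[i][j] * Xty[j] for j in range(3)) for i in range(3)]

expected = [(3*y11 + y12 + y21 - y22) / 4,
            (-2*y11 - 2*y12 + 2*y21 + 2*y22) / 4,
            (-2*y11 + 2*y12 - 2*y21 + 2*y22) / 4]
assert all(abs(u - v) < 1e-12 for u, v in zip(b_hat, expected))
```

Changing the four data values and re-running the check is a quick way for students to convince themselves that (1) holds in general, not just for one data set.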
Logistic regression

In the following we go through the details of a logistic regression model; see, e.g., McCullagh and Nelder [1989] for a classical description of logistic regression. Observables are binomially distributed, yi ∼ bin(pi, ni). The probability pi is connected to a q-vector of covariates xi = (xi1, ..., xiq) and a q-vector of regression coefficients b = (b1, ..., bq) as follows: the term si = xi · b is denoted the linear predictor.
The probability pi can be linked to si in different ways, but the most commonly employed is via the logit link function, logit(pi) = log(pi/(1 − pi)), so here logit(pi) = si. As an example, consider the budworm data from the doBy package [Højsgaard and Halekoh, 2023]. The data show the number of tobacco budworm moths (Heliothis virescens) killed. Batches of 20 moths of each sex were exposed for three days to the pyrethroid, and the number in each batch that were dead or knocked down was recorded:

R> data(budworm, package = "doBy")
R> bud <- subset(budworm, sex == "male")
R> bud
#>    sex dose ndead ntotal
#> 1 male    1     1     20
#> 2 male    2     4     20
#> 3 male    4     9     20
#> 4 male    8    13     20
#> 5 male   16    18     20
#> 6 male   32    20     20

Below we focus only on male budworms, and the mortality is illustrated in Figure 1 (produced with ggplot2 [Wickham, 2016]). On the y-axis we have the empirical logits, i.e.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' log((ndead + 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content='5)/(ntotal − ndead + 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content='5)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' The figure suggests that logit grows linearly with log dose.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' 6 −2 0 2 4 0 10 20 30 dose Empirical logits −2 0 2 4 0 1 2 3 4 5 log2(dose) Empirical logits Figure 1: Insecticide mortality of the moth tobacco budworm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' Each component of the likelihood The log-likelihood is log L = � i yi log(pi)+(ni−yi) log(1−pi) = � i log Li, say.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' With log(pi/(1−pi)) = si we have pi = 1/(1 + exp(−si)) and d dsi pi = exp(−si) (1+exp(−si))2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' With si = xi · b, we have d dbsi = xi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFST4oBgHgl3EQfVTiq/content/2301.13777v1.pdf'} +page_content=' Consider the contribution to the total log-likelihood from the ith observation which is li = yi log(pi)+ (ni − yi) log(1 − pi).' 
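The empirical logits underlying Figure 1 can be recomputed directly. Here is a minimal Python sketch of our own (the paper uses R), based on the male budworm counts listed above:

```python
import math

ndead  = [1, 4, 9, 13, 18, 20]
ntotal = [20, 20, 20, 20, 20, 20]

# empirical logits with the 0.5 continuity correction from the text
emp_logit = [math.log((d + 0.5) / (n - d + 0.5)) for d, n in zip(ndead, ntotal)]

# mortality rises with dose, so the empirical logits increase monotonically
assert all(u < w for u, w in zip(emp_logit, emp_logit[1:]))
```

Since the doses double from 1 to 32, the roughly constant increments in these logits are what Figure 1 shows as linearity in log2(dose).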
Since we are focusing on one observation only, we shall ignore the subscript i in this section. First notice that with s = log(p/(1 − p)) we can find p as:

R> def_sym(s, p)
R> sol_ <- solve_sys(lhs = log(p / (1 - p)), rhs = s, vars = p)
R> sol_[[1]]$p
#> [c]:   exp(s)
#>      ----------
#>      exp(s) + 1

Next, find the likelihood as a function of p, as a function of s and as a function of b. The underscore in logLb_ and elsewhere indicates that this expression is defined in terms of other symbols (this is in contrast to the free variables, e.g. y, p, and n):

R> def_sym(y, n, p, x, s, b)
R> logLp_ <- y * log(p) + (n - y) * log(1 - p)
R> p_ <- exp(s) / (exp(s) + 1)
R> logLs_ <- subs(logLp_, p, p_)
R> s_ <- sum(x * b)
R> logLb_ <- subs(logLs_, s, s_)
R> logLb_
#> [c]:       /  exp(b*x)  \               /      exp(b*x)  \
#>      y*log|------------|  + (n - y)*log|1 - ------------|
#>           \exp(b*x) + 1/               \    exp(b*x) + 1/

The log-likelihood can be maximized using e.g. Newton-Raphson (see e.g. Nocedal and Wright [2006]) and in this connection we need the score function, S, and the Hessian, H:

R> Sb_ <- score(logLb_, b) |> simplify()
R> Hb_ <- hessian(logLb_, b) |> simplify()
R> Sb_
#> [c]: [x*(y - (n - y)*exp(b*x))]
#>      [------------------------]
#>      [      exp(b*x) + 1      ]
R> Hb_
#> [c]: [          2              ]
#>      [      -n*x *exp(b*x)     ]
#>      [---------------------------]
#>      [exp(2*b*x) + 2*exp(b*x) + 1]

Since x and b are vectors, the term b*x above should be read as the inner product x · b (or as x⊤b in matrix notation). Also, since x is a vector, the term x^2 above should be read as the outer product x ⊗ x (or as xx⊤ in matrix notation).
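The simplified symbolic score can be checked against a finite-difference derivative of the log-likelihood. Below is a plain-Python sketch of our own (not the paper's caracas code), using scalar x and b and the single-observation formulas above:

```python
import math

def logL(b, x, y, n):
    # per-observation log-likelihood with p = exp(b*x) / (exp(b*x) + 1)
    s = b * x
    p = math.exp(s) / (math.exp(s) + 1)
    return y * math.log(p) + (n - y) * math.log(1 - p)

def score(b, x, y, n):
    # the simplified symbolic score: x*(y - (n - y)*exp(b*x)) / (exp(b*x) + 1)
    e = math.exp(b * x)
    return x * (y - (n - y) * e) / (e + 1)

x, y, n, b = 1.0, 9.0, 20.0, 0.3
h = 1e-6
numeric = (logL(b + h, x, y, n) - logL(b - h, x, y, n)) / (2 * h)
assert abs(numeric - score(b, x, y, n)) < 1e-6
```

The agreement between the analytic and numerical derivative confirms the symbolic simplification.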
More insight into the structure is obtained by letting b and x be 2-vectors (to save space, the Hessian matrix is omitted in the following):

R> b <- vector_sym(2, "b")
R> x <- vector_sym(2, "x")
R> s_ <- sum(x * b)
R> logLb_ <- subs(logLs_, s, s_)
R> Sb_ <- score(logLb_, b) |> simplify()

logLb_ = y log( e^(b1x1+b2x2) / (e^(b1x1+b2x2) + 1) ) + (n − y) log( 1 − e^(b1x1+b2x2) / (e^(b1x1+b2x2) + 1) ),   (2)

Sb_ = [ x1(−n e^(b1x1+b2x2) + y e^(b1x1+b2x2) + y) / (e^(b1x1+b2x2) + 1)
        x2(−n e^(b1x1+b2x2) + y e^(b1x1+b2x2) + y) / (e^(b1x1+b2x2) + 1) ].   (3)

Next, insert data, e.g. x1 = 1, x2 = 2, y = 9, n = 20, to obtain a function of the regression parameters only. Note the naming convention: the expression with data inserted is named with a period as postfix, e.g. Sb., in contrast to the expression Sb_, which depends on other symbols:

R> nms <- c("x1", "x2", "y", "n")
R> vls <- c(1, 2, 9, 20)
R> logLb. <- subs(logLb_, nms, vls)
R> Sb. <- subs(Sb_, nms, vls)

The total score for the entire dataset can be obtained as follows:

R> Sb_list <- lapply(seq_len(nrow(bud)), function(r){
+     vls <- c(1, log2(bud$dose[r]), bud$ndead[r], bud$ntotal[r])
+     subs(Sb_, nms, vls)
+ })
R> Sb_total <- Reduce(`+`, Sb_list)

This score can be used as part of an iterative algorithm for solving the score equations. If one wants to use Newton-Raphson, the total Hessian matrix must also be created along lines similar to those above.
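The iterative algorithm alluded to above can be sketched in a few lines. The following is our own plain-Python Newton-Raphson for the male budworm data (covariates 1 and log2(dose)), not the paper's caracas-based code; the analytic score and Hessian follow the single-observation formulas derived earlier:

```python
import math

dose   = [1, 2, 4, 8, 16, 32]
ndead  = [1, 4, 9, 13, 18, 20]
ntotal = [20] * 6
X = [(1.0, math.log2(d)) for d in dose]

def score_hess(b):
    # total score (2-vector) and Hessian (2x2) of the binomial log-likelihood
    S = [0.0, 0.0]
    H = [[0.0, 0.0], [0.0, 0.0]]
    for (x1, x2), y, n in zip(X, ndead, ntotal):
        s = b[0] * x1 + b[1] * x2
        p = 1 / (1 + math.exp(-s))
        w = n * p * (1 - p)
        for i, xi in enumerate((x1, x2)):
            S[i] += xi * (y - n * p)
            for j, xj in enumerate((x1, x2)):
                H[i][j] -= w * xi * xj
    return S, H

b = [0.0, 0.0]
for _ in range(25):
    S, H = score_hess(b)
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    # Newton step: b <- b - H^{-1} S (2x2 solve written out)
    b[0] -= (H[1][1] * S[0] - H[0][1] * S[1]) / det
    b[1] -= (-H[1][0] * S[0] + H[0][0] * S[1]) / det

S, _ = score_hess(b)
assert max(abs(v) for v in S) < 1e-8   # score equations solved
assert b[1] > 0                        # mortality increases with log dose
```

This is the same Newton iteration that the symbolic Sb_total and total Hessian would drive, only with the expressions hard-coded numerically.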
It is straightforward to implement a Newton-Raphson algorithm based on these quantities; one must only note the distinction between the two expressions below (and it is the latter one would use in an iterative algorithm):

R> subs(Sb_total, b, c(1, 2))
R> subs(Sb_total, b, c(1, 2)) |> as_expr()

An alternative is to construct the total log-likelihood for the entire dataset as a caracas object, convert this object to an R function and maximize this function using one of R's optimization methods:

R> logLb_list <- lapply(seq_len(nrow(bud)), function(r){
+     vls <- c(1, log2(bud$dose[r]), bud$ndead[r], bud$ntotal[r])
+     subs(logLb_, nms, vls)
+ })
R> logLb_total <- Reduce(`+`, logLb_list)
R> logLb_total_func <- as_func(logLb_total, vec_arg = TRUE)

The total likelihood symbolically

We conclude this section by illustrating that the log-likelihood for the entire dataset can be constructed in a few steps (output is omitted to save space):

R> X. <- as_sym(cbind(1, log2(bud$dose)))
R> n. <- as_sym(bud$ntotal)
R> y. <- as_sym(bud$ndead)
R> N <- nrow(X.)
R> q <- ncol(X.)
R> X <- matrix_sym(N, q, "x")
R> n <- vector_sym(N, "n")
R> y <- vector_sym(N, "y")
R> p <- vector_sym(N, "p")
R> s <- vector_sym(N, "s")
R> b <- vector_sym(q, "b")

X  = [x11 x12; x21 x22; x31 x32; x41 x42; x51 x52; x61 x62],
X. = [1 0; 1 1; 1 2; 1 3; 1 4; 1 5],
n. = (20, 20, 20, 20, 20, 20)⊤,
n  = (n1, n2, n3, n4, n5, n6)⊤,
y. = (1, 4, 9, 13, 18, 20)⊤.

The symbolic computations are as follows:

R> ## log-likelihood as function of p
R> logLp <- sum(y * log(p) + (n - y) * log(1 - p))
R> ## log-likelihood as function of s
R> p_ <- exp(s) / (exp(s) + 1)
R> logLs <- subs(logLp, p, p_)
R> ## linear predictor as function of regression coefficients:
R> s_ <- X %*% b
R> ## log-likelihood as function of regression coefficients:
R> logLb <- subs(logLs, s, s_)

Next, numerical values can be inserted:

R> logLb <- subs(logLb, cbind(n, y, X), cbind(n., y., X.))

An alternative would have been to define logLp above in terms of n. and y. and, similarly, s_ in terms of X.; doing so, the last step where numerical values are inserted could have been avoided.
From here, one may proceed by computing the score function and the Hessian matrix and solve the score equation, using e.g. Newton-Raphson. Alternatively, one might create an R function based on the log-likelihood and maximize this function using one of R's optimization methods (see the example in the previous section):

R> logLb_func <- as_func(logLb, vec_arg = TRUE)
R> optim(c(0, 0), logLb_func, control = list(fnscale = -1), hessian = TRUE)

Maximum likelihood under constraints

In this section we illustrate constrained optimization using Lagrange multipliers. This is demonstrated for the independence model for a two-way contingency table. Consider a 2 × 2 contingency table with cell counts yij and cell probabilities pij for i = 1, 2 and j = 1, 2, where i refers to row and j to column as illustrated in Table 1. Under multinomial sampling, the log-likelihood is l = log L = Σij yij log(pij).
Under the assumption of independence between rows and columns, the cell probabilities have the form (see e.g. Højsgaard et al. [2012], p. 32) pij = u · ri · sj. To make the parameters (u, ri, sj) identifiable, constraints must be imposed. One possibility is to require that r1 = s1 = 1. The task is then to estimate u, r2, s2 by maximizing the log-likelihood under the constraint that Σij pij = 1. This can be achieved using a Lagrange multiplier, where we instead solve the unconstrained optimization problem maxp Lag(p), where

Lag(p) = −l(p) + λ g(p)   (4)

under the constraint that

g(p) = Σij pij − 1 = 0,   (5)

where λ is a Lagrange multiplier.
In SymPy, lambda is a reserved symbol; hence the underscore as postfix below:

R> y_ <- c("y_11", "y_21", "y_12", "y_22")
R> y <- as_sym(y_)
R> def_sym(u, r2, s2, lambda_)
R> p <- as_sym(c("u", "u*r2", "u*s2", "u*r2*s2"))
R> logL <- sum(y * log(p))
R> Lag <- -logL + lambda_ * (sum(p) - 1)
R> vars <- list(u, r2, s2, lambda_)
R> gLag <- der(Lag, vars)
R> sol <- solve_sys(gLag, vars)
R> print(sol, method = "ascii")
#> Solution 1:
#>   lambda_ = y_11 + y_12 + y_21 + y_22
#>   r2      = (y_21 + y_22)/(y_11 + y_12)
#>   s2      = (y_12 + y_22)/(y_11 + y_21)
#>   u       = (y_11 + y_12)*(y_11 + y_21)/(y_11 + y_12 + y_21 + y_22)^2
R> sol <- sol[[1]]

There is only one critical point. The fitted cell probabilities p̂ij are:

R> p11 <- sol$u
R> p21 <- sol$u * sol$r2
R> p12 <- sol$u * sol$s2
R> p22 <- sol$u * sol$r2 * sol$s2
R> p.hat <- matrix_(c(p11, p21, p12, p22), nrow = 2)

p̂ = 1/(y11 + y12 + y21 + y22)² ×
    [ (y11 + y12)(y11 + y21)   (y11 + y12)(y12 + y22)
      (y11 + y21)(y21 + y22)   (y12 + y22)(y21 + y22) ]

To verify that the maximum likelihood estimate has been found, we compute the Hessian matrix, which is negative definite (the Hessian matrix is diagonal, so the eigenvalues are the diagonal entries, and these are all negative); output omitted:

R> H <- hessian(logL, list(u, r2, s2)) |> simplify()

An AR(1) model

Symbolic computations

In this section we study the autoregressive model of order 1 (an AR(1) model),
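The closed-form estimates above are the familiar products of the table margins. The following is a quick numerical check in Python (our own sketch, with arbitrary example counts, not part of the paper's R code):

```python
y = {(1, 1): 10, (1, 2): 20, (2, 1): 30, (2, 2): 40}  # example counts
N = sum(y.values())

row = {i: y[(i, 1)] + y[(i, 2)] for i in (1, 2)}
col = {j: y[(1, j)] + y[(2, j)] for j in (1, 2)}

# fitted cell probabilities under independence: p_ij = row_i * col_j / N^2
p_hat = {(i, j): row[i] * col[j] / N**2 for i in (1, 2) for j in (1, 2)}

# they satisfy the constraint g(p) = sum_ij p_ij - 1 = 0
assert abs(sum(p_hat.values()) - 1) < 1e-12

# and agree with the solved parameters u, r2, s2 from the text
u  = row[1] * col[1] / N**2
r2 = row[2] / row[1]
s2 = col[2] / col[1]
assert abs(p_hat[(2, 2)] - u * r2 * s2) < 1e-12
```

The check confirms both the constraint (5) and the factorization pij = u · ri · sj at the critical point.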
see e.g. Shumway and Stoffer [2016], p. 75 ff. for details: Consider random variables x1, x2, …, xn following a stationary zero-mean AR(1) process:

xi = a xi−1 + ei;  i = 2, …, n,   (6)

where ei ∼ N(0, v) and all ei's are independent. Note that v denotes the variance.
The marginal distribution of x1 is also assumed normal, and for the process to be stationary we must have that the variance is Var(x1) = v/(1 − a²). Hence we can write x1 = (1/√(1 − a²)) e1. For simplicity of exposition, we set n = 4. All terms e1, …, e4 are independent and N(0, v) distributed. Let e = (e1, …, e4) and x = (x1, …, x4). Hence e ∼ N(0, vI). Isolating error terms in (6) gives

e = (e1, e2, e3, e4)⊤ = L x, where L = [ √(1−a²)   0    0    0
                                           −a      1    0    0
                                            0     −a    1    0
                                            0      0   −a    1 ].
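The matrix L encodes the AR(1) recursion. A small Python sketch of our own (the paper works in R) builds L for n = 4 with a concrete value of a and checks that e = Lx reproduces the error terms of equation (6):

```python
import math

a, n = 0.5, 4

# L as displayed above: sqrt(1 - a^2) in the (1,1) entry, -a on the subdiagonal
L = [[0.0] * n for _ in range(n)]
L[0][0] = math.sqrt(1 - a * a)
for i in range(1, n):
    L[i][i - 1] = -a
    L[i][i] = 1.0

x = [0.3, -1.2, 0.7, 2.0]          # an arbitrary state vector
e = [sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]

# rows 2..n recover e_i = x_i - a*x_{i-1}, i.e. the recursion x_i = a*x_{i-1} + e_i
for i in range(1, n):
    assert abs(e[i] - (x[i] - a * x[i - 1])) < 1e-12
# the first row rescales x_1 so that Var(e_1) = v under stationarity
assert abs(e[0] - math.sqrt(1 - a * a) * x[0]) < 1e-12
```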
Since Var(e) = vI we have Var(e) = vI = L Var(x) L⊤, so the covariance matrix of x is V = Var(x) = v L⁻¹(L⁻¹)⊤, while the concentration matrix (the inverse covariance matrix) is K = v⁻¹ L⊤L:

R> n <- 4
R> L <- diff_mat(n, "-a")
R> def_sym(a)
R> L[1, 1] <- sqrt(1-a^2)
R> def_sym(v)
R> Linv <- inv(L)
R> K <- crossprod_(L) / v
R> V <- tcrossprod_(Linv) * v

    L⁻¹ = [ 1/√(1−a²)     .    .   .
            a/√(1−a²)     1    .   .
            a²/√(1−a²)    a    1   .
            a³/√(1−a²)    a²   a   1 ],                                        (7)

    K = (1/v) [  1    −a     .     .
                −a   a²+1   −a     .
                 .    −a   a²+1   −a
                 .     .    −a     1 ],                                        (8)

    V = v [ 1/(1−a²)    a/(1−a²)        a²/(1−a²)          a³/(1−a²)
            a/(1−a²)    a²/(1−a²)+1     a³/(1−a²)+a        a⁴/(1−a²)+a²
            a²/(1−a²)   a³/(1−a²)+a     a⁴/(1−a²)+a²+1     a⁵/(1−a²)+a³+a
            a³/(1−a²)   a⁴/(1−a²)+a²    a⁵/(1−a²)+a³+a     a⁶/(1−a²)+a⁴+a²+1 ]. (9)

The zeros in the concentration matrix K imply conditional independence restrictions: if the ijth element of a concentration matrix is zero, then xi and xj are conditionally independent given all other variables; see e.g. Højsgaard et al. [2012], p. 84 for details. Next, we take the step from symbolic computations to numerical evaluations. The joint distribution of x is a multivariate normal distribution, x ∼ N(0, K⁻¹).
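As a numerical cross-check of (7)–(9), the matrices can also be built directly in base R for fixed parameter values (here the arbitrary choices a = 0.5 and v = 2); V and K should be inverses of each other, and the 1,3 entry of K is exactly zero:

```r
a <- 0.5; v <- 2; n <- 4
L <- diag(n)
L[cbind(2:n, 1:(n - 1))] <- -a   # sub-diagonal entries are -a
L[1, 1] <- sqrt(1 - a^2)
K <- crossprod(L) / v            # K = L'L / v
V <- v * tcrossprod(solve(L))    # V = v L^{-1} (L^{-1})'
max(abs(V %*% K - diag(n)))      # numerically zero: V = K^{-1}
K[1, 3]                          # exactly zero: x1, x3 independent given x2, x4
```

This mirrors the symbolic computation above and makes the conditional independence pattern concrete for a specific parameter value.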
Let W = xx⊤ denote the matrix of (cross) products. The log-likelihood is therefore (ignoring additive constants)

    log L = (n/2)(log det(K) − x⊤Kx) = (n/2)(log det(K) − tr(KW)),

where we note that tr(KW) is the sum of the elementwise products of K and W since both matrices are symmetric. Ignoring the constant n/2, this can be written symbolically to obtain the expression in this particular case:

R> x <- vector_sym(n, "x")
R> logL <- log(det(K)) - sum(K * (x %*% t(x))) %>% simplify()

    log L = log(−a²/v⁴ + 1/v⁴) − (−2a x1x2 − 2a x2x3 − 2a x3x4 + x1² + x2²(a²+1) + x3²(a²+1) + x4²)/v.

Numerical evaluation

Next we illustrate how to bridge the gap from symbolic computations to numerical computations based on a dataset. For a specific data vector we get:

R> xt <- c(0.1, -0.9, 0.4, 0.0)
R> logL. <- subs(logL, x, xt)

    log L = log(−a²/v⁴ + 1/v⁴) − (0.97a² + 0.9a + 0.98)/v.

We can use R for numerical maximization of the likelihood, and constraints on the parameter values can be imposed e.g. in the optim() function:

R> logL_wrap <- as_func(logL., vec_arg = TRUE)
R> eps <- 0.01
R> par <- optim(c(a=0, v=1), logL_wrap,
+    lower=c(-(1-eps), eps), upper=c((1-eps), 10),
+    method="L-BFGS-B", control=list(fnscale=-1))$par
R> par
#>      a      v
#> -0.376  0.195
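As a further illustration of the symbolic-to-numeric bridge, the substituted log-likelihood can be transcribed by hand into an ordinary R function and maximized without caracas. This is a sketch that hard-codes the expression shown above; it should reproduce the fitted values:

```r
# Concentrated log-likelihood for the data vector xt, written out by hand
logL_num <- function(p) {
  a <- p[1]; v <- p[2]
  log(-a^2 / v^4 + 1 / v^4) - (0.97 * a^2 + 0.9 * a + 0.98) / v
}
eps <- 0.01
fit <- optim(c(a = 0, v = 1), logL_num,
             lower = c(-(1 - eps), eps), upper = c(1 - eps, 10),
             method = "L-BFGS-B", control = list(fnscale = -1))
round(fit$par, 3)   # a = -0.376, v = 0.195, as above
```

Setting the score equations to zero confirms the result analytically: ∂logL/∂v = 0 gives v = (0.97a² + 0.9a + 0.98)/4, which at a = −0.376 yields v ≈ 0.195.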
The same model can be fitted, e.g. using R's arima() function, as follows (output omitted):

R> arima(xt, order = c(1, 0, 0), include.mean = FALSE, method = "ML")

It is less trivial to do the optimization in caracas by solving the score equations. There are some possibilities for putting assumptions on variables in caracas (see the "Reference" vignette), but it is not possible to restrict the parameter a to only take values in (−1, 1).

Variance of the average of correlated data

Consider random variables x1, . . . , xn where Var(xi) = v and Cov(xi, xj) = vr for i ≠ j, where 0 ≤ |r| ≤ 1.
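The quantity studied in this section, the variance of the average x̄ = Σi xi/n, can be previewed numerically before the symbolic treatment. In the base-R sketch below (with the arbitrary values v = 1 and r = 0.3, not taken from the paper), Var(x̄) decreases toward vr rather than toward zero as n grows:

```r
v <- 1; r <- 0.3
var_avg <- function(n) {
  V <- v * ((1 - r) * diag(n) + r)   # v on diagonal, v*r off diagonal
  sum(V) / n^2                       # (1/n^2) * 1' V 1
}
sapply(c(5, 50, 500), var_avg)       # 0.44, 0.314, 0.3014 -> toward v*r = 0.3
```

The symbolic derivation that follows explains this behaviour in general.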
For n = 3, the covariance matrix of (x1, x2, x3) is therefore

    V = vR = v [ 1  r  r
                 r  1  r
                 r  r  1 ].                                                    (10)

Let x̄ = Σi xi/n denote the average. Suppose interest is in the variance of the average, Var(x̄), as n goes to infinity. One approach is as follows: let 1 denote an n-vector of 1's and let V be an n × n matrix with v on the diagonal and vr outside the diagonal. Then Var(x̄) = (1/n²) 1⊤V1. The answer lies in studying the limiting behaviour of this expression as n → ∞. First, we must calculate the variance of the sum x· = Σi xi, which is Var(x·) = Σi Var(xi) + 2 Σ_{ij: i<j}