... etr(−(1/2)Ω) ∫_{O(n)} 0F1(B^{-1}M'A^{-2}MB^{-1}, XX'), which is known to possess the desired form. (2) follows from Table 3.

4.2. Behrens-Fisher problem

In the two-sample discriminant analysis problem, we analyse the power when an observation X_2 may belong to one of two matrix normal populations, π_1 ~ N_{n,p}(M_1; A, B) or π_0 ~ N_{n,p}(M_0, I, B), with unequal covariance matrices A ≠ I. The classification rules must be constructed to assign x_2 to the correct population. The central distribution of the ratio statistic has been derived by many authors, e.g., Chikuse (1981) and Srivastava and Khatri (1979) when A = I, and Läuter et al. (1998) when B = I. This leaves us to determine the non-central distribution of the ratio of two quadratic forms S = S_1 S_0^{-1}, where S_1 = X_1'X_1 and S_0 = X_0'X_0, with X_1 ~ π_1 and X_0 ~ π_0, assuming independence.

The S_0 follows a non-central Wishart distribution (1.1). As for the non-central distribution of S_1, the hypergeometric term directly added to the central distribution in Theorem 1.1, according to Theorem 2.4, is (1). To simplify the problem, let us assume rank(M_1) = rank(M_0) = 1, so that M_1 = 1_n μ_1' and M_0 = 1_n μ_0'. Let us consider these three hypotheses:

• H_0: μ_1 = μ_0 and A = I;
• H_1: μ_1 = μ_0 with unknown equal covariance;
• H_2: μ_1 = μ_0 with unequal covariance.

4.2.1. Central distributions with unequal covariances A ≠ I

In order to determine the central distribution of S = S_1 S_0^{-1} when S_1 is distributed according to (3.16), independent of S_0 ~ W_p(n; B), one should observe that the mean vector is zero. A direct computation yields the joint distribution of S_1 and S_0,

    p(S_1, S_0) = (2^{np} Γ_p(n/2)^2 |A|^{p/2} |B|^n)^{-1} etr(−(1/2)B^{-1}S_1) |S_1|^{(n−p−1)/2}
        × 0F0(I − A^{-1}, (1/2)B^{-1}S_1) etr(−(1/2)B^{-1}S_0) |S_0|^{(n−p−1)/2}.   (4.1)

The transformation (S_1, S_0) ↦ (S_1 S_0^{-1}, S_0) has the Jacobian |S_0|^{−(p+1)/2}.
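The two-population setup above can be illustrated numerically. The following Python sketch (our own illustration, not the paper's code) draws X ~ N_{n,p}(M, A, B) via X = M + A^{1/2} Z B^{1/2} with Z having i.i.d. standard normal entries, forms S_1 = X_1'X_1 and S_0 = X_0'X_0, and computes the latent roots of |S_1 − l S_0| = 0:

```python
import numpy as np

rng = np.random.default_rng(0)

def rmatnorm(M, A, B, rng):
    """Draw X ~ N_{n,p}(M, A, B): X = M + L_A Z L_B', where A = L_A L_A', B = L_B L_B'."""
    n, p = M.shape
    L_A = np.linalg.cholesky(A)
    L_B = np.linalg.cholesky(B)
    Z = rng.standard_normal((n, p))
    return M + L_A @ Z @ L_B.T

n, p = 30, 2
A = 3.0 * np.eye(n)                      # unequal covariance: A != I
B = np.eye(p)
M1 = np.outer(np.ones(n), [0.5, 0.0])    # rank-one mean M1 = 1_n mu1'
M0 = np.zeros((n, p))

X1 = rmatnorm(M1, A, B, rng)             # X1 ~ pi_1
X0 = rmatnorm(M0, np.eye(n), B, rng)     # X0 ~ pi_0
S1, S0 = X1.T @ X1, X0.T @ X0

# latent roots l of |S1 - l S0| = 0, via S0 = L0 L0' and a symmetric eigenproblem
L0 = np.linalg.cholesky(S0)
Li = np.linalg.inv(L0)
roots = np.linalg.eigvalsh(Li @ S1 @ Li.T)
```

Since S_1 and S_0 are positive definite with probability one, the latent roots are real and positive.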
In this situation, the joint distribution of S and S_0 derived from (4.1) is

    p(S, S_0) = (2^{np} Γ_p(n/2)^2 |A|^{p/2} |B|^n)^{-1} etr(−(1/2)B^{-1}(S + I)S_0)
        × |S|^{(n−p−1)/2} |S_0|^{(n−p+1)/2} 0F0(I − A^{-1}, (1/2)B^{-1}SS_0).   (4.2)

By Lemma 3 and these properties of the hypergeometric function,

    etr(X) = 0F0(X),
    ∫_{O(n,p)} 0F0(AHBH') dH = 0F0(A, B),
    ∫_{S>0} etr(−AS) |S|^{a−(n+1)/2} 0F0(C, BS) dS = Γ_n(a) |A|^{−a} 1F0(a; C, BA^{-1}),   (4.3)

integrating with respect to S_0 in (4.2), we have the distribution of S,

    p(S) = (4^{np} B_p(n/2, n/2) |A|^{p/2} |B|^{3n/2})^{-1} |S|^{(n−p−1)/2} |I + S|^{−(n+p+1)/2}
        × 1F0(n; I − A^{-1}, S(I + S)^{-1}).   (4.4)

When A = I, this reduces to (1.4) by symmetry.

4.2.2. Non-central means with equal unknown covariance

The distribution of S is already known due to the works of Hotelling (1931) and Constantine (1966).

4.2.3. Non-central latent roots with unequal means

The procedure of integrating out S_0 in the joint non-central distribution of S and S_0 also applies here. The central latent roots, based on the result in 4.2.1 and Lemma 4, have density

    π^{p²/2} (4^{np} B_p(n/2, n/2) Γ_p(p/2) |A|^{p/2} |B|^{3n/2})^{-1} ∏_{i=1}^p l_i^{(n−p−1)/2} ∏_{i<j} (l_i − l_j)^2
        × 1F0(n; I − A^{-1}, (I + L^{-1})^{-1}).   (4.5)

In order to obtain the non-central latent roots, note that 1F1(a; a; X, Y) = 0F0(X, Y) and

    1F1(a; b; Z) = (1/B_n(a, b)) ∫_{0<X<I} etr(−XZ) |X|^{a−(n+1)/2} |I − X|^{b−(n+1)/2} dX,   (4.6)

where X and Z are both m × n and ℜ(a), ℜ(b) > (n − 1)/2. Thus, the distribution of the latent roots of |S_1 − lS_0| = 0 depends on the latent roots of |M'A^{-2}M − ψB²| = 0 and is

    π^{p²/2} (4^{np} B_p(n/2, n/2) Γ_p(p/2) |A|^{p/2} |B|^{3n/2})^{-1} ∏_{i=1}^p l_i^{(n−p−1)/2} ∏_{i<j} (l_i − l_j)^2
        × etr(−(1/2)Ω) 1F1(n; p; Ω, (1/4)Ψ(I + L^{-1})^{-1}).   (4.7)

Based on the non-central distributions, we can compare the power under H_0 to H_2. These results are shown in Appendix D.

Acknowledgement

The author would like to thank Prof. Kai-Tai Fang for his encouragement
https://arxiv.org/abs/2505.00470v1
and discussion during the preparation of this paper. His classic book and many fruitful ideas once led me into the field of multivariate statistics. Although the generalised product moment distribution of some matrix variates T1-T3 is derived, the classification of matrix populations is still far from complete. Here, this article only tries to throw a pebble in this way.

Appendix A. Examples of T1-T3 Covariance

We are going to determine whether an np × np real symmetric positive definite matrix is of the form A ⊗ B for some n × n and p × p real symmetric positive definite matrices A and B.

Example 1. The following square matrix of order 4,

    [1 0.5 0.2 0.1; 0.5 1 0.5 0.2; 0.2 0.5 1 0.5; 0.1 0.2 0.5 1],   (A.1)

is not a Kronecker product of two 2 × 2 matrices. Therefore, it sounds absurd to presume such a tensor form, since it cannot always describe all situations behind the hidden structure of the data. This initiates our study of weak assumptions for matrices. Although not all square matrices of divisible order np are of the form A ⊗ B for some n × n matrix A and p × p matrix B, it is still possible for a square matrix to have a reduced form as a sum of Kronecker products. The following three examples show how this can be done.

Example 2 (Partial diagonalization T1).

    [1 0.5 0.2 0.1; 0.5 1 0.5 0.2; 0.2 0.5 1 0.5; 0.1 0.2 0.5 1]
      = [1 0.2; 0.2 1] ⊗ [1 0; 0 0] + [0.5 0.1; 0.5 0.5] ⊗ [0 1; 0 0]
      + [0.5 0.5; 0.1 0.5] ⊗ [0 0; 1 0] + [1 0.2; 0.2 1] ⊗ [0 0; 0 1]
      = A11 ⊗ E11 + A12 ⊗ E12 + A21 ⊗ E21 + A22 ⊗ E22,

where the 2 × 2 real symmetric rank-one matrices E11, E12 and E21 are pairwise commutative. (E21 is the transpose of E12.)

Example 3 (Totally orthogonal diagonalization T2).

    [1 0 0.2 0; 0 1 0 0.4; 0.2 0 1 0; 0 0.4 0 1]
      = (6/5)[0.5 0.5; 0.5 0.5] ⊗ [1 0; 0 0] + (4/5)[0.5 −0.5; −0.5 0.5] ⊗ [1 0; 0 0]
      + (3/5)[0.5 −0.5; −0.5 0.5] ⊗ [0 0; 0 1] + (7/5)[0.5 0.5; 0.5 0.5] ⊗ [0 0; 0 1]
      = γ11 A11 ⊗ E11 + γ12 A11 ⊗ E22 + γ21 A22 ⊗ E11 + γ22 A22 ⊗ E22,

where the 2 × 2 real symmetric rank-one matrices A11 and A22 are commutative.
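These examples can be checked numerically. The Python sketch below (ours; the rank-one rearrangement test is the standard Van Loan-Pitsianis trick, not a method from this paper) verifies that (A.1) is not a Kronecker product of two 2 × 2 matrices and confirms the block decomposition of Example 2:

```python
import numpy as np

def rearrange(M, n, p):
    """Van Loan-Pitsianis rearrangement: M = A (x) B, with A n-by-n and B p-by-p,
    if and only if the rearranged matrix (one row per p-by-p block) has rank one."""
    return np.array([M[i*p:(i+1)*p, j*p:(j+1)*p].reshape(-1)
                     for i in range(n) for j in range(n)])

def unit(i, j):
    """2-by-2 unit matrix E_ij with a single 1 in position (i, j)."""
    E = np.zeros((2, 2))
    E[i, j] = 1.0
    return E

M = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.5, 1.0, 0.5, 0.2],
              [0.2, 0.5, 1.0, 0.5],
              [0.1, 0.2, 0.5, 1.0]])               # matrix (A.1)

rank = np.linalg.matrix_rank(rearrange(M, 2, 2))   # rank > 1: not a Kronecker product

# Example 2: M = A11 (x) E11 + A12 (x) E12 + A21 (x) E21 + A22 (x) E22, A21 = A12'
A11 = np.array([[1.0, 0.2], [0.2, 1.0]])
A12 = np.array([[0.5, 0.1], [0.5, 0.5]])
recon = (np.kron(A11, unit(0, 0)) + np.kron(A12, unit(0, 1))
         + np.kron(A12.T, unit(1, 0)) + np.kron(A11, unit(1, 1)))
```

Here `rank` exceeds one, certifying that no pair of 2 × 2 factors reproduces (A.1), while `recon` reproduces M exactly from the four-term sum.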
Since we know that X ~ N_{n,p}(M, A, B) ⇔ vec(X) ~ N_{np}(vec(M), A ⊗ B), T2 is not necessarily a covariance for a vector elliptically contoured distribution.

Example 4 (Partially orthogonal diagonalization T1½).

    [1 0 0.2 0 0 0; 0 1 0 0 0 0.2; 0.2 0 1 0 0 0; 0 0 0 1 0 0.2; 0 0 0 0 1 0; 0 0.2 0 0.2 0 1]
      = [1 0.2 0; 0.2 1 0; 0 0 1] ⊗ [1 0; 0 0] + [1 0 0.2; 0 1 0.2; 0.2 0.2 1] ⊗ [0 0; 0 1]
      = A11 ⊗ E11 + A22 ⊗ E22,

where the 2 × 2 real symmetric matrices E11 and E22 are commutative.

Appendix B. Central Results for T1, T1½, and T2

The proofs of these results are similar to that of Theorem 1.

Theorem 3. Suppose X is an n × p real matrix according to the matrix normal population T1'.

1. The probability density distribution of the (1/2)p(p +
1) variables in the p × p real symmetric matrix S = X'X = (s_ij), i ≤ j, is

    |Σ_1|^{1/2} etr(−QS) |S|^{(n−p−1)/2} / (2^{np/2} Γ_p((n−1)/2)) ∏_{i,j=1}^p 0F0(I − (1/2)q_ij^{-1}A_ij, q_ij s_ij),

where Q = (q_ij) is an arbitrary p × p matrix with positive entries, and this holds for all p × p real symmetric positive definite matrices S; elsewhere zero.

2. The moment generating function for S = X'X is

    |Σ_1|^{1/2} ∏_{i,j=1}^p 1F0(n/2; U_ij, W^{-1}),

where U_ij = I − A_ij and W = I − R.

3. The joint distribution of the latent roots l_1, l_2, . . . , l_p of S = X'X is

    (|Σ_1|^{1/2} / (Γ_p(n/2) Γ_p(p/2))) (π^{p²/2} / 2^{np/2}) ∏_{i<j} (l_i − l_j) ∏_{i=1}^p l_i^{(n−p−1)/2} 0F0(A_ii, l_i),

where l_1 > l_2 > ··· > l_p > 0; elsewhere zero.

Theorem 4. Suppose X is an n × p real matrix according to the matrix normal population T1½'.

1. The probability density distribution of the (1/2)p(p + 1) variables in the p × p real symmetric matrix S = X'X = (s_ij), i ≤ j, is

    |Σ_{1½}|^{1/2} etr(−QS) |S|^{(n−p−1)/2} / (2^{np/2} Γ_p((n−1)/2)) ∏_{i=1}^p 0F0(I − (1/2)q_i^{-1}A_i, q_i s_ii),

where Q = diag(q_i) has p positive diagonals, and this holds for all p × p real symmetric positive definite matrices S; elsewhere zero.

2. The moment generating function for S = X'X is

    |Σ_{1½}|^{1/2} ∏_{i=1}^p 1F0(n/2; U_i, W^{-1}),

where U_i = I − A_i and W = I − R.

3. The joint distribution of the latent roots l_1, l_2, . . . , l_p of S = X'X is

    (|Σ_{1½}|^{1/2} / (Γ_p(n/2) Γ_p(p/2))) (π^{p²/2} / 2^{np/2}) ∏_{i<j} (l_i − l_j) ∏_{i=1}^p l_i^{(n−p−1)/2} 0F0(A_ii, l_i),

where l_1 > l_2 > ··· > l_p > 0; elsewhere zero.

Theorem 5. Suppose X is an n × p real matrix according to the matrix normal population T2'.

1. The probability density distribution of the (1/2)p(p + 1) variables in the p × p real symmetric matrix S = X'X = (s_ij), i ≤ j, is

    |Σ_2|^{1/2} exp(−Σ_{i=1}^n Σ_{j=1}^p q_ij s_jj) |S|^{(n−p−1)/2} / (2^{np/2} Γ_p((n−1)/2)) ∏_{i=1}^n ∏_{j=1}^p 0F0(I − (1/2)q_ij^{-1}γ_ij A_ij, q_ij s_jj),

where the q_ij are arbitrary positive constants, and this holds for all p × p real symmetric positive definite matrices S; elsewhere zero.

2. The moment generating function for S = X'X is

    |Σ_2|^{1/2} ∏_{i=1}^n ∏_{j=1}^p 1F0(n/2; U_ij, W^{-1}),

where U_ij = I − γ_ij A_ij and W = I − R.

3. The joint distribution of the latent roots l_1, l_2, . . .
, l_p of S = X'X is

    (|Σ_{1½}|^{1/2} / (Γ_p(n/2) Γ_p(p/2))) (π^{p²/2} / 2^{np/2}) ∏_{i<j} (l_i − l_j) ∏_{i=1}^n ∏_{j=1}^p l_j^{(n−p−1)/2} 0F0(γ_ij A_ij, l_j),

where l_1 > l_2 > ··· > l_p > 0; elsewhere zero.

Appendix C. Non-central Results for T1, T1½, and T2

The proofs of the theorems in this section are similar to that of Theorem 2.

Theorem 6. Suppose X is an n × p real matrix according to the matrix normal distribution T1, and M is an arbitrary real n × p matrix. Let Y = X + M and S = Y'Y.

1. The probability density distribution for S is the central distribution in Theorem 3.1 multiplied by

    etr(−(1/2)Ω) ∏_{i,j=1}^p 0F1(n/2; (1/4)Ψ_ij S),

where Ω = Σ_{i,j=1}^p M'A_ij M B_ij and Ψ_ij = B_ij M'A_ij² M B_ij.

2. The moment generating function is the central function in Theorem 3.2 multiplied by

    etr(−(1/2)Ω) ∏_{i,j=1}^p etr((1/4)Ψ_ij W^{-1}),

where W is defined in Theorem 3.2.

3. The distribution of the latent roots l_1, l_2, . . . , l_p of S is the central distribution in Theorem 3.3 multiplied by

    etr(−(1/2)Ω) ∏_{i,j=1}^p 0F1(n/2; (1/4)Ψ_ij, L),

where L = diag(l_i), l_1 > l_2 > ··· > l_p; elsewhere zero.

Theorem 7. Suppose X is an n × p real matrix according to the matrix normal distribution T1½, and M is an arbitrary real n × p matrix. Let Y = X + M and S = Y'Y.

1. The probability density distribution for S is the central distribution in Theorem 4.1 multiplied by

    etr(−(1/2)Ω) ∏_{i=1}^p 0F1(n/2; (1/4)Ψ_i S),

where Ω = Σ_{i=1}^p M'A_i M B_i and Ψ_i = B_i M'A_i² M B_i.

2. The moment generating function
is the central function in Theorem 4.2 multiplied by

    etr(−(1/2)Ω) ∏_{i=1}^p etr((1/4)Ψ_i W^{-1}),

where W is defined in Theorem 4.2.

3. The distribution of the latent roots l_1, l_2, . . . , l_p of S is the central distribution in Theorem 4.3 multiplied by

    etr(−(1/2)Ω) ∏_{i=1}^p 0F1(n/2; (1/4)Ψ_i, L),

where L = diag(l_i), l_1 > l_2 > ··· > l_p; elsewhere zero.

Theorem 8. Suppose X is an n × p real matrix according to the matrix normal distribution T2, and M is an arbitrary real n × p matrix. Let Y = X + M and S = Y'Y.

1. The probability density distribution for S is the central distribution in Theorem 5.1 multiplied by

    etr(−(1/2)Ω) ∏_{i=1}^n ∏_{j=1}^p 0F1(n/2; (1/4)Ψ_ij S),

where Ω = Σ_{i=1}^n Σ_{j=1}^p γ_ij M'A_i M B_j and Ψ_ij = γ_ij² B_j M'A_i² M B_j.

2. The moment generating function is the central function in Theorem 5.2 multiplied by

    etr(−(1/2)Ω) ∏_{i=1}^n ∏_{j=1}^p etr((1/4)Ψ_ij W^{-1}),

where W is defined in Theorem 5.2.

3. The distribution of the latent roots l_1, l_2, . . . , l_p of S is the central distribution in Theorem 5.3 multiplied by

    etr(−(1/2)Ω) ∏_{i=1}^n ∏_{j=1}^p 0F1(n/2; (1/4)Ψ_ij, L),

where L = diag(l_i), l_1 > l_2 > ··· > l_p; elsewhere zero.

Appendix D. Power Analysis of the Two-Sample Discriminant Analysis

Table D.4: Exact power analysis for (a) µ_1 = µ_0 = 0 (A = γI, B = I), n = 30, p = 2.

    Test Statistic               γ=1    γ=2    γ=3    γ=4    γ=5    γ=6
    Hotelling's T² = tr(S)       0.051  0.847  0.982  1.000  1.000  1.000
    Wilks' Λ = det(S)            0.049  0.821  0.963  0.993  0.998  1.000
    Max Eigenvalue λ_max(S)      0.048  0.805  0.951  0.990  0.997  0.999

Table D.5: Exact power analysis for (b) µ_1 = µ_0 = µ (γ = 1), n = 30, p = 2.

    Test Statistic    µ=0    µ=0.5  µ=1.0  µ=1.5  µ=2.0  µ=2.5
    Hotelling's T²    0.050  0.328  0.873  0.972  0.998  1.000
    Wilks' Λ          0.048  0.302  0.842  0.961  0.994  0.999
    Max Eigenvalue    0.047  0.284  0.821  0.953  0.988  0.998

Table D.6: Asymptotic power analysis using linear approximation (Anderson-Bahadur method).

(c) µ_1 = 0.1, µ_0 = 0 (A = γI, B = I).
    Test Statistic                     γ=1    γ=2    γ=3    γ=4    γ=5    γ=6
    Likelihood Ratio −2 ln(L_1/L_0)    0.056  0.043  0.039  0.034  0.031  0.029

(c) µ_1 = 0.3, µ_0 = 0 (A = γI, B = I).

    Test Statistic                     γ=1    γ=2    γ=3    γ=4    γ=5    γ=6
    Likelihood Ratio −2 ln(L_1/L_0)    0.999  0.993  0.983  0.972  0.956  0.941

(c) µ_1 = 0.8, µ_0 = 0 (A = γI, B = I).

    Test Statistic                     γ=1    γ=2    γ=3    γ=4    γ=5    γ=6
    Likelihood Ratio −2 ln(L_1/L_0)    1.000  1.000  1.000  1.000  1.000  1.000

Note: N = 30, n = 2, 1000 Monte Carlo simulations.

References

T. W. Anderson. The non-central Wishart distribution and certain problems of multivariate statistics. Ann. Math. Stat., 17:409-431, 1946.
T. W. Anderson. An introduction to multivariate statistical analysis, volume 2. Wiley, New York, 1958.
C. Bingham. An identity involving partitional generalized binomial coefficients. J. Multivar. Anal., 4:210-223, 1974.
Y. Chikuse. Distributions of some matrix variates and latent roots in multivariate Behrens-Fisher discriminant analysis. Ann. Stat., 9(2):401-407, 1981.
A. G. Constantine. Some non-central distribution problems in multivariate analysis. Ann. Math. Stat., 34(4):1270-1285, 1963.
A. G. Constantine. The distribution of Hotelling's generalized T₀². Ann. Math. Statist., 37:215-225, 1966.
A. P. Dawid. Spherical matrix distributions and a multivariate model. J. R. Stat.
Soc. B, 39(2):254-261, 1977.
R. L. Dykstra. Establishing the positive definiteness of the sample covariance matrix. Ann. Math. Stat., 41(6):2153-2154, 1970.
K.-T. Fang and Y.-T. Zhang. Generalized multivariate analysis. Science Press, 1990.
A. K. Gupta and D. G. Kabe. A multiple integral involving zonal polynomials. Appl. Math. Lett., 17(6):671-675, 2004.
H. Hotelling. The generalization of Student's ratio. Ann. Math. Stat., 2:360-387, 1931.
P.-L. Hsu. A new proof of the joint product moment distribution. Math. Proc. Phil. Soc., 35(2):336-338, 1939.
L.-K. Hua. Harmonic analysis of functions of several complex variables in the classical domains. Volume 6. American Mathematical Society, 1958.
A. T. James. The non-central Wishart distribution. Proc. Roy. Soc. Lond. A, 229:364-366, 1955.
D. R. Jensen and I. J. Good. Invariant distributions associated with matrix laws under structural symmetry. J. R. Stat. Soc. B, 43(3):327-332, 1981.
C. G. Khatri. On certain distribution problems based on positive definite quadratic functions in normal vectors. Ann. Math. Stat., 37(2):468-479, 1966.
J. Läuter, E. Glimm, and S. Kropf. Multivariate tests based on left-spherically distributed linear scores. Ann. Stat., 26(5):1972-1988, 1998.
R. J. Muirhead. Aspects of multivariate statistical theory. John Wiley & Sons, 1982.
M. S. Srivastava and C. G. Khatri. An introduction to multivariate statistics. Elsevier Science Ltd, 1979.
Y. Tumura. The distributions of latent roots and vectors. TRU, 1965.
S. S. Wilks. Certain generalizations in the analysis of variance. Biometrika, 24(3/4):471-494, 1932.
J. Wishart. The generalised product moment distribution in samples from a normal multivariate population. Biometrika, 20:32-52, 1928.

Figure D.2: Power analysis for n = 30, p = 2, α = 0.05 based on Hotelling's T² (top-left), Wilks' Λ (top-right), maximum root (bottom-left) and likelihood ratio (bottom-right).
The distribution of the first three statistics is exact; the last is an approximate χ² under 3000 Monte Carlo simulations.

Figure D.3: Power analysis for n = 70, p = 3, α = 0.05. Similar to above.

Figure D.4: Power analysis for n = 200, p = 5, α = 0.05. Similar to above.
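Power entries like those in Tables D.4-D.5 can be approximated by direct simulation. The Python sketch below (our illustration; the paper's exact powers come from the derived distributions) calibrates each statistic's critical value under H_0 at α = 0.05 and estimates the power under A = γI:

```python
import numpy as np

rng = np.random.default_rng(1)

def ratio_stat(gamma, n=30, p=2):
    """Draw S = S1 S0^{-1} with X1 ~ N_{n,p}(0, gamma*I, I) and X0 ~ N_{n,p}(0, I, I)."""
    X1 = np.sqrt(gamma) * rng.standard_normal((n, p))
    X0 = rng.standard_normal((n, p))
    return (X1.T @ X1) @ np.linalg.inv(X0.T @ X0)

def power(stat, gamma, alpha=0.05, reps=2000):
    """Monte Carlo power: calibrate the critical value under H0 (gamma = 1),
    then estimate the rejection rate under the alternative."""
    null = np.sort([stat(ratio_stat(1.0)) for _ in range(reps)])
    crit = null[int(np.ceil((1 - alpha) * reps)) - 1]   # (1 - alpha) quantile under H0
    return np.mean([stat(ratio_stat(gamma)) > crit for _ in range(reps)])

p_tr = power(np.trace, 4.0)          # Hotelling-type statistic tr(S)
p_det = power(np.linalg.det, 4.0)    # Wilks-type statistic det(S)
```

At γ = 4 both estimates should land near the values 1.000 and 0.993 reported in Table D.4, up to Monte Carlo noise.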
EW D-optimal Designs for Experiments with Mixed Factors

Siting Lin¹, Yifei Huang², and Jie Yang¹
¹University of Illinois at Chicago and ²Astellas Pharma, Inc.

May 23, 2025

Abstract

We characterize EW D-optimal designs as robust designs against unknown parameter values for experiments under a general parametric model with discrete and continuous factors. When a pilot study is available, we recommend sample-based EW D-optimal designs for subsequent experiments. Otherwise, we recommend EW D-optimal designs under a prior distribution for model parameters. We propose an EW ForLion algorithm for finding EW D-optimal designs with mixed factors, and justify that the designs found by our algorithm are EW D-optimal. To facilitate potential users in practice, we also develop a rounding algorithm that converts an approximate design with mixed factors to exact designs with prespecified grid points and number of experimental units. By applying our algorithms to real experiments under multinomial logistic models or generalized linear models, we show that our designs are highly efficient with respect to locally D-optimal designs and more robust against parameter value misspecifications.

Key words and phrases: EW ForLion algorithm, Exact design, Mixed factors, Generalized linear model, Multinomial logistic model, Robust experimental design

1. Introduction

In this paper, we consider robust designs against unknown parameter values for experiments with discrete and continuous factors. A motivating example is the paper feeder experiment described in Joseph and Wu (2004). The goal was to ensure precise feeding of one sheet of paper each time to a copier. Two common failure modes are frequently observed, namely misfeed, when the feeder fails to feed any sheet of paper, and multifeed, when the feeder picks up more than one sheet of paper at a time.
The experiment involves eight discrete control factors (see Table 2 in Joseph and Wu (2004) and also Section S4 in the Supplementary Material) and one continuous factor, the stack force. The responses fall into three mutually exclusive categories, namely misfeed (typically with low stack force), normal, and multifeed (typically with high stack force). The results of the original experiment were listed in Table 3 of Joseph and Wu (2004). The original design was conducted via a control array modified from OA(18, 2¹×3⁷) (see Table 5 in Joseph and Wu (2004)) for the eight discrete factors and an essentially uniform allocation on a predetermined set of discrete levels of the continuous factor. If a subsequent experiment will be conducted to study the factor effects, how can we do better?

In the original analysis by Joseph and Wu (2004), two generalized linear models (McCullagh and Nelder, 1989; Dobson and Barnett, 2018) with probit link were used to model misfeed and multifeed separately. However, since misfeed and multifeed do not occur simultaneously and are both possible outcomes of the experiment, it would be more appropriate to adopt one multinomial model rather than two separate binomial models, so that the factor effects can be estimated more precisely due to the combined data and also be comparable across different types of failures. In this study, we
https://arxiv.org/abs/2505.00629v2
adopt the multinomial logistic models (Glonek and McCullagh, 1995; Zocchi and Atkinson, 1999; Bu et al., 2020) for analyzing the paper feeder experiment, which include baseline-category (also known as multiclass logistic), cumulative, adjacent-categories, and continuation-ratio logit models.

For experiments with discrete factors only, many numerical algorithms have been proposed for finding optimal designs, including Fedorov-Wynn (Fedorov, 1972; Fedorov and Hackl, 1997), multiplicative (Titterington, 1976, 1978; Silvey et al., 1978), cocktail (Yu, 2010), and lift-one (Yang and Mandal, 2015; Yang et al., 2017; Bu et al., 2020) algorithms, as well as classical optimization techniques (see Huang et al. (2024) for a review). According to Yang et al. (2016) and Huang et al. (2024), the lift-one algorithm outperforms commonly used optimization algorithms and obtains D-optimal designs with fewer distinct design points. For experiments with continuous factors only, Ai et al. (2023) incorporated the Fedorov-Wynn algorithm with the lift-one algorithm for continuation-ratio link models, while Yang et al. (2013) and Harman et al. (2020) discretized the continuous factors into grid points and treated the experiments as ones with discrete factors. For experiments with mixed factors, particle swarm optimization (PSO) algorithms have been applied for D-optimal designs under generalized linear models with binary responses (Lukemire et al., 2019) and cumulative logit models (Lukemire et al., 2022), which, however, cannot guarantee that the optimal solution will ever be found (Kennedy and Eberhart, 1995; Poli et al., 2007). Huang et al. (2024) proposed a ForLion algorithm, which added a merging step to Ai et al. (2023)'s algorithm and significantly reduced the number of design points. In practice, this often leads to reduced experimental time and cost (Huang et al., 2024).
Unlike PSO-type algorithms, the ForLion algorithm is guaranteed to find D-optimal designs based on the general equivalence theorem (Kiefer, 1974; Pukelsheim, 1993; Atkinson et al., 2007; Stufken and Yang, 2012; Fedorov and Leonov, 2014). The ForLion algorithm has been successfully applied to both generalized linear models and multinomial logistic models with mixed factors (Huang et al., 2024). However, it relies on pre-assumed parameter values and produces designs known as locally optimal (Chernoff, 1953), which may not be robust when the parameter values are misspecified.

To address the limitation due to local optimality, Bayesian D-optimal designs have been proposed (Chaloner and Verdinelli, 1995), which maximize E(log|F|) with respect to a prior distribution on the unknown parameters, where F is the Fisher information matrix associated with the design. However, a major drawback of Bayesian D-optimal designs is their computational intensity, even with the D-criterion that maximizes the determinant of the Fisher information matrix. As a promising alternative, Expected Weighted (EW) D-optimal designs have been proposed for generalized linear models (Yang et al., 2016), cumulative link models (Yang et al., 2017), and multinomial logistic models (Bu et al., 2020) with discrete factors, which maximize log|E(F)| or, equivalently, |E(F)|. Compared with Bayesian D-optimal designs, EW D-optimal designs are computationally much faster and often highly efficient in terms of the Bayesian criterion (Yang et al., 2016, 2017; Bu et al., 2020). Along this line, Tong et al. (2014) provided analytical solutions for certain cases of EW D-optimal designs
for generalized linear models, and Bu et al. (2020) incorporated the lift-one and exchange algorithms with the EW criterion.

In this paper, we adopt the EW D-criterion and adapt the ForLion algorithm for finding robust designs with mixed factors. The original EW D-optimal designs were proposed for experiments with discrete factors or discretized continuous factors. To solve our problems, we formulate in Section 2 the EW D-optimal criterion and characterize EW D-optimal designs for general parametric statistical models with mixed factors. In Section 3, we develop algorithms for finding two types of EW D-optimal designs with mixed factors, namely sample-based EW D-optimal designs for experiments with a pilot study, and integral-based EW D-optimal designs for experiments with a prior distribution for unknown parameters. We further formulate EW D-optimal designs for multinomial logistic models and generalized linear models, and propose an algorithm for converting approximate designs with mixed factors to exact designs with grid points, which is much faster than finding optimal designs restricted to the same set of grid points. In Section 4, we construct the proposed robust designs for two real experiments, namely the paper feeder experiment (Joseph and Wu, 2004) and an experiment on minimizing surface defects (Wu, 2008). In Section 5, we conclude our findings. The algorithms are implemented in R version 4.4.2, and the simulations are run on a Windows 11 laptop with 32GB of RAM and a 13th Gen Intel Core i7-13700HX processor.

2. EW D-optimal Designs with Mixed Factors

Following Huang et al. (2024), we first consider experiments with a design region X = ∏_{j=1}^d I_j ⊂ R^d, where I_j is either a closed finite interval, for the first k factors, or a finite set of distinct numerical levels, for k+1 ≤ j ≤ d. In both cases, we denote a_j = min I_j > −∞ and b_j = max I_j < ∞. If k = 0, all factors are discrete or qualitative; if k = d, all factors are continuous.
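A mixed design region X = ∏_{j=1}^d I_j can be represented directly in code. A minimal Python sketch (the class name and structure are ours, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class DesignRegion:
    """X = I_1 x ... x I_d: the first k factors range over closed intervals [a_j, b_j],
    the remaining d - k factors over finite sets of numerical levels."""
    intervals: list   # [(a_j, b_j) for j = 1, ..., k]
    levels: list      # [finite level set for j = k+1, ..., d]

    def contains(self, x):
        k = len(self.intervals)
        in_cont = all(a <= v <= b for v, (a, b) in zip(x[:k], self.intervals))
        in_disc = all(v in L for v, L in zip(x[k:], self.levels))
        return len(x) == k + len(self.levels) and in_cont and in_disc

# e.g. one continuous factor on [0, 160] (a stack-force-like range) plus two
# discrete factors with illustrative coded levels
X = DesignRegion(intervals=[(0.0, 160.0)], levels=[{-1, 0, 1}, {-1, 1}])
```

Membership checks such as `X.contains((80.0, 0, 1))` then mirror the definition of X as a product of intervals and finite level sets.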
Note that X here is compact and built as a product of sets, which is common for typical applications. For some applications, for example when the levels of some discrete factors are not numerical (see Example 1 below), or when the level combinations of discrete factors under consideration are restricted to a proper subset of all possible level combinations (see Example 2 below), we also need to consider design regions X = ∏_{j=1}^k I_j × D, where D ⊆ ∏_{j=k+1}^d I_j. Apparently, ∏_{j=1}^d I_j is a special case of ∏_{j=1}^k I_j × D with D = ∏_{j=k+1}^d I_j.

Example 1. A polysilicon deposition process for circuit manufacturing described by Phadke (1989) involved one qualitative control factor, the cleaning method (None for no cleaning, CM₂ for cleaning inside the reactor, and CM₃ for cleaning outside the reactor), and five other factors that are continuous in nature (see Table 4.1 in Phadke (1989)). Since the levels of cleaning method are not numerical, it would be more appropriate to convert it to two (binary) indicator variables (1_{CM₂}, 1_{CM₃}), with (0,0), (1,0), (0,1) standing for None, CM₂, and CM₃, respectively. In this case, the corresponding design region is X = ∏_{j=1}^5 I_j × {(0,0), (1,0), (0,1)}, where I_1, . . . , I_5 may be defined as in Table S1 of the Supplementary Material of Huang et
al. (2024). Note that the cleaning method was simplified into a binary factor (CM₂ or CM₃) in Lukemire et al. (2022) and Huang et al. (2024). □

Example 2. In the paper feeder experiment described by Joseph and Wu (2004), besides one continuous factor, there are eight discrete or qualitative control factors, including two binary ones and six three-level ones (see Section S4 in the Supplementary Material). Instead of considering all 2²×3⁶ = 2,916 level combinations, the experimenters adopted 18 of them, modified from an orthogonal array OA(18, 2¹×3⁷) (see Table 5 in Joseph and Wu (2004)). In this case, for illustration purposes, we may consider a restricted design region X = [0, 160] × D, where [0, 160] is the range of the continuous factor, and D = {(x_{i2}, . . . , x_{i9}) | i = 1, . . . , 18} consists of the original 18 level combinations of the eight discrete factors. □

Given an experimental setting x_i = (x_{i1}, . . . , x_{id})^T ∈ X, suppose there are n_i ≥ 0 experimental units assigned to this experimental setting. Their responses (which could be vectors) are assumed to be i.i.d. from a parametric model M(x_i, θ), where the parameter vector θ ∈ Θ ⊆ R^p with parameter space Θ. Suppose the responses are independent across different experimental settings. Under regularity conditions (see, e.g., Sections 2.5 and 2.6 in Lehmann and Casella (1998)), the corresponding Fisher information matrix can be written as Σ_{i=1}^m n_i F(x_i, θ), with respect to the corresponding design {(x_i, n_i), i = 1, . . . , m}, known as an exact design, where F(x_i, θ) is a p × p matrix, known as the Fisher information at x_i. In design theory, ξ = {(x_i, w_i), i = 1, . . . , m} with w_i = n_i/n ≥ 0 and n = Σ_{i=1}^m n_i, known as an approximate design, is often considered first (Kiefer, 1974; Pukelsheim, 1993; Atkinson et al., 2007). The collection of all feasible designs is denoted by Ξ = {{(x_i, w_i), i = 1, . . . , m} | m ≥ 1, x_i ∈ X, w_i ≥ 0, Σ_{i=1}^m w_i = 1}.

2.1 EW D-optimal designs

Following Bu et al. (2020) and Huang et al.
(2024), we adopt the D-criterion for optimal designs. When θ is given or assumed to be known, a ξ ∈ Ξ that maximizes f(ξ) = |F(ξ, θ)| is known as a locally D-optimal design (Chernoff, 1953), where F(ξ, θ) = Σ_{i=1}^m w_i F(x_i, θ) is the Fisher information matrix associated with ξ. The ForLion algorithm was proposed by Huang et al. (2024) for finding locally D-optimal designs with mixed factors. When θ is unknown but a prior distribution or probability measure Q(·) on Θ is available instead, we adopt EW D-optimality (Atkinson et al., 2007; Yang et al., 2016, 2017; Bu et al., 2020; Huang et al., 2025) and look for ξ ∈ Ξ maximizing

    f_EW(ξ) = |E{F(ξ, Θ)}| = |Σ_{i=1}^m w_i E{F(x_i, Θ)}|,   (2.1)

called an integral-based EW D-optimal approximate design, where

    E{F(x_i, Θ)} = ∫_Θ F(x_i, θ) Q(dθ)   (2.2)

is a p × p matrix after entry-wise expectation with respect to Q(·) on Θ. By replacing θ in F(ξ, θ) with Θ, we indicate that the expectation E{F(ξ, Θ)} is taken with respect to a random parameter vector in Θ.

When θ is unknown but a dataset from a prior or pilot study is available, following Bu et al. (2020) and Huang et al. (2025), we bootstrap the original dataset to obtain
a set of bootstrapped datasets, and their corresponding parameter vectors {θ̂_1, . . . , θ̂_B} obtained by fitting the parametric model on each of the bootstrapped datasets. In other words, we may replace the prior distribution Q(·) in (2.2) with the empirical distribution of {θ̂_1, . . . , θ̂_B}. That is, we may look for ξ ∈ Ξ maximizing

    f_SEW(ξ) = |Ê{F(ξ, Θ)}| = |Σ_{i=1}^m w_i Ê{F(x_i, Θ)}|,   (2.3)

called a sample-based EW D-optimal approximate design, where

    Ê{F(x_i, Θ)} = (1/B) Σ_{j=1}^B F(x_i, θ̂_j)   (2.4)

is a bootstrapped estimate of E{F(x_i, Θ)} based on {θ̂_1, . . . , θ̂_B}. According to Bu et al. (2020), the EW D-optimal design obtained via bootstrapped samples is a good approximation to the Bayesian D-optimal design obtained in a similar way.

For multinomial logistic models, such as the cumulative logit model used for the trauma clinical trial (see Example 5.2 in Bu et al. (2020)), the feasible parameter space Θ = {θ = (β_11, β_12, β_21, β_22, β_31, β_32, β_41, β_42)^T | β_11 + β_12 x < β_21 + β_22 x < β_31 + β_32 x < β_41 + β_42 x, for all x ∈ X} is not rectangular, which makes the integral in (2.2) difficult to calculate. In this case, even with a prior distribution instead of a dataset, we may also draw a random sample {θ̂_1, . . . , θ̂_B} from the prior distribution and adopt the sample-based EW D-optimality. According to Section S5 in the Supplementary Material, in terms of relative efficiency, the obtained EW D-optimal designs are fairly stable against the random samples.

2.2 Characteristics of EW D-optimal designs

Since the design region X typically contains infinitely many design points, which is different in nature from the experiments considered by Yang et al. (2016, 2017) and Bu et al. (2020), both the theoretical properties and the numerical algorithms for EW D-optimal designs here are far more difficult.

In this section, to characterize EW D-optimal designs, we fix either a prior distribution Q(·) on Θ for integral-based EW D-optimality, or a set of bootstrapped or sampled parameter vectors {θ̂_1, . . .
, θ̂_B} for sample-based EW D-optimality, so that either E{F(x_i, Θ)} is defined as in (2.2), or Ê{F(x_i, Θ)} is defined as in (2.4). Similarly to Fedorov and Leonov (2014) and Huang et al. (2024), we list the assumptions needed for this paper.

(A1) The design region X ⊂ R^d is compact.

(A2) For each θ ∈ Θ, the Fisher information F(x, θ) is element-wise continuous with respect to all continuous factors of x ∈ X.

(A3) For each θ ∈ Θ, the Fisher information F(x, θ) = (F_st(x, θ))_{s,t=1,...,p} is element-wise continuous with respect to all continuous factors of x ∈ X, and there exists a nonnegative and integrable function K(θ) on Θ, such that ∫_Θ K(θ) Q(dθ) < ∞ and |F_st(x, θ)| ≤ K(θ) for all s, t ∈ {1, . . . , p}, x ∈ X, and θ ∈ Θ.

In this paper, X is either ∏_{j=1}^d I_j or ∏_{j=1}^k I_j × D. Both of them are bounded and closed in R^d, and thus Assumption (A1) is satisfied. As for Assumptions (A2) and (A3):

(i) if k = 0, that is, all factors are discrete, then both Assumption (A2) and the first half (i.e., the continuity statement) of Assumption (A3) are automatically satisfied;

(ii) when k ≥ 1, that is, there is at least one continuous factor, (A3) always implies (A2).

Following Fedorov
and Leonov (2014), to characterize the sample-based and integral-based EW D-optimal designs, we extend the collection Ξ of designs (each design consists of only a finite number of design points) to Ξ(X), which consists of all probability measures on X. In other words, a design ξ ∈ Ξ(X) is a probability measure on X, and for each θ ∈ Θ,

    F(ξ, θ) = ∫_X F(x, θ) ξ(dx).

Then, for the sample-based EW criterion and ξ ∈ Ξ(X), we denote

    F_SEW(ξ) = Ê{F(ξ, Θ)} = (1/B) Σ_{j=1}^B F(ξ, θ̂_j) = ∫_X Ê{F(x, Θ)} ξ(dx).   (2.5)

For the integral-based EW criterion, under Assumption (A3) and the Fubini theorem (see, for example, Theorem 5.9.2 in Resnick (2003)), we denote

    F_EW(ξ) = E{F(ξ, Θ)} = ∫_Θ F(ξ, θ) Q(dθ)
            = ∫_Θ ∫_X F(x, θ) ξ(dx) Q(dθ)
            = ∫_X ∫_Θ F(x, θ) Q(dθ) ξ(dx)   (by the Fubini theorem)
            = ∫_X E{F(x, Θ)} ξ(dx).   (2.6)

Furthermore, we denote the collections of Fisher information matrices F_SEW(X) = {F_SEW(ξ) | ξ ∈ Ξ(X)} and F_EW(X) = {F_EW(ξ) | ξ ∈ Ξ(X)}. As a summary of the arguments in Section S2, we have the following lemma.

Lemma 1. Under Assumptions (A1) and (A2), F_SEW(X) is convex and compact; while under Assumptions (A1) and (A3), F_EW(X) is convex and compact.

With the aid of Lemma 1 and Carathéodory's theorem (see, for example, Theorem 2.1.1 in Fedorov (1972)), we obtain the following theorem, whose proof, as well as the proofs of the other theorems, is relegated to Section S3 in the Supplementary Material.

Theorem 1. If Assumptions (A1) and (A2) hold, then for any p × p matrix F ∈ F_SEW(X), there exists a design ξ ∈ Ξ ⊂ Ξ(X) with no more than m = p(p+1)/2 + 1 points such that F_SEW(ξ) = F. If F is a boundary point of the convex set F_SEW(X), then we need no more than p(p+1)/2 support points for ξ. If Assumptions (A1) and (A3) are satisfied, then the same conclusions hold for F_EW(X) and F_EW(ξ) as well.

To characterize D-optimality for EW designs, we define the objective function Ψ(F) = −log|F| for F ∈ F_SEW(X) or F_EW(X). Then f_SEW(ξ) = exp{−Ψ(Ê{F(ξ, Θ)})} and f_EW(ξ) = exp{−Ψ(E{F(ξ, Θ)})}.
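For a concrete instance of these quantities, the Python sketch below (our illustration, using a simple logistic GLM whose Fisher information at x is μ(1−μ)xx^T; the θ̂_j here are stand-ins for bootstrap draws, not data from the paper) computes Ê{F(x, Θ)}, the criterion f_SEW(ξ) = |Ê{F(ξ, Θ)}|, and the sensitivity d(x, ξ). By construction, Σ_i w_i d(x_i, ξ) = tr(I_p) = p, which is a useful sanity check:

```python
import numpy as np

rng = np.random.default_rng(2)

def fisher(x, theta):
    """Fisher information at x for a logistic GLM: mu(1 - mu) x x', mu = logistic(x' theta)."""
    mu = 1.0 / (1.0 + np.exp(-x @ theta))
    return mu * (1.0 - mu) * np.outer(x, x)

# stand-in "bootstrap" draws theta_hat_1, ..., theta_hat_B
thetas = [np.array([0.5, 1.0]) + 0.2 * rng.standard_normal(2) for _ in range(50)]

xs = [np.array([1.0, t]) for t in (-2.0, 0.0, 2.0)]   # support points (intercept, factor)
w = np.array([1/3, 1/3, 1/3])                         # weights of the design xi

Ehat = [np.mean([fisher(x, th) for th in thetas], axis=0) for x in xs]  # Ehat{F(x, Theta)}
F = sum(wi * Ei for wi, Ei in zip(w, Ehat))                             # Ehat{F(xi, Theta)}
f_sew = np.linalg.det(F)                                                # f_SEW(xi)
d = np.array([np.trace(np.linalg.solve(F, Ei)) for Ei in Ehat])         # d(x_i, xi)
```

Theorem 2(iii) says ξ is EW D-optimal exactly when d(x, ξ) ≤ p holds over all of X, not just at the support points, so in practice the maximization of d(x, ξ) over X is the certifying step.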
Note that minimizing Ψ( F) for F∈ F SEW(X) or FEW(X) is equivalent to maximizing fSEW(ξ) orfEW(ξ) for ξ∈Ξ, respectively, due to Theorem 1. For D-optimality, according to Section 2.4.2 in Fedorov and Leonov (2014), Ψ al- ways satisfies their Assumptions (B1), (B2), and (B4). In our notations, we let Ξ(q) denote {ξ∈Ξ| |ˆE{F(ξ,Θ)}| ≥ q}for sample-based EW D-optimality, or {ξ∈Ξ| |E{F(ξ,Θ)}| ≥ q}for integral-based EW D-optimality. We still need the following assumption: (B3) There exists a q >0, such that Ξ(q) is non-empty. Theorem 2. For sample-based EW D-optimality under Assumptions (A1), (A2), and (B3), or integral-based EW D-optimality under Assumptions (A1), (A3), and (B3), we must have (i) there exists an optimal design ξ∗that contains no more than p(p+1)/2de- sign points; (ii) the set of optimal designs is convex; and (iii) a design ξis EW D-optimal if and only if max x∈Xd(x,ξ)≤p, where d(x,ξ) = tr([ E{F(ξ,Θ)}]−1E{F(x,Θ)})for integral-based EW D-optimality, or tr([ˆE{F(ξ,Θ)}]−1ˆE{F(x,Θ)})for sample-based EW D-optimality. 3. Algorithms for EW D-optimal Designs 3.1 EW lift-one algorithm for discrete factors only When all factors are discrete, the design region Xcontains only a finite number of distinct experimental settings. In this case, we may denote the corresponding design settings as x1, . . . ,xm. The EW D-optimal design problem in this case is
to find w∗= (w∗ 1, . . . , w∗ m)T∈S={(w1, . . . , w m)T∈Rm|wi≥0, i= 1, . . . , m ;Pm i=1wi= 1}, which maximizes fEW(w) =|Pm i=1wiE{F(xi,Θ)}|orfSEW(w) =|Pm i=1wiˆE{F(xi,Θ)}|. The lift-one algorithm (see Algorithm 3 in the Supplementary Material of Huang et al. (2025)) can be used for finding EW D-optimal designs, with f(w) replaced by fEW(w) orfSEW(w), called EW lift-one algorithm . 7 To speed up the lift-one algorithm, Yang and Mandal (2015) provided analytic so- lutions in their Lemma 4.2 for generalized linear models. Bu et al. (2020) mentioned but with no details that the corresponding optimization problems for multinomial logis- tic models (MLM) also have analytic solutions when the number of response categories J≤5. In Section S1 of the Supplementary Material, we provide all the relevant formulae for the lift-one algorithm under an MLM with J≤5. 3.2 EW ForLion algorithm and rounding algorithm When at least one factor is continuous, we follow the ForLion algorithm proposed by Huang et al. (2024) for locally D-optimal designs, to find EW D-optimal designs, called theEW ForLion algorithm (see Algorithm 1). When there is no confusion, we denote (i)F(ξ) for E{F(ξ,Θ)}under integral-based EW D-optimality or ˆE{F(ξ,Θ)}under sample-based EW D-optimality, for each ξ∈Ξ; and(ii)FxforE{F(x,Θ)}under integral-based EW D-optimality or ˆE{F(x,Θ)}under sample-based EW D-optimality, for each x∈ X. As a result, d(x,ξ) in Theorem 2 can be simplified to d(x,ξ) = tr( F(ξ)−1Fx). Algorithm 1: EW ForLion Algorithm Step 1: Specify the merging threshold δ >0 (e.g., 10−2) and the converging threshold ϵ >0 (e.g., 10−6), and establish an initial design ξ0={(x(0) i, w(0) i), i= 1, . . . , m 0} ∈Ξ, such that, ∥x(0) i−x(0) j∥ ≥δfor all i̸=j, and|F(ξ0)|>0. Step 2: Merging close design points: Given ξt={(x(t) i, w(t) i), i= 1, . . . , m t} ∈Ξ obtained at the tth iteration, detect if there are any two points x(t) iandx(t) jfor which ∥x(t) i−x(t) j∥< δ. 
For such a pair, tentatively merge them into their midpoint ( x(t) i+ x(t) j)/2 with combined weight w(t) i+w(t) j, and denote the resulting design as ξmer. If |F(ξmer)|>0, replace the two design points with their merged one and reduce mtby 1; otherwise, do not merge them. Stop if all such pairs have been examined. Step 3: Given ξt∈Ξafter the merging step, apply the lift-one algorithm (see Al- gorithm 3 in the Supplementary Material of Huang et al. (2025) and Remark 1 in Huang et al. (2024)) with the convergence threshold ϵ, and obtain an EW D-optimal allocation ( w∗ 1, . . . , w∗ mt)Tfor the design points {x(t) 1, . . . ,x(t) mt}. Replace w(t) iwith w∗ i, respectively. Step 4: Deleting step: Update ξtby discarding any x(t) iwith w(t) i= 0. Step 5: Adding new point: Given ξt∈Ξ, find x∗∈ X that maximizes d(x,ξt). More specifically, if all factors are continuous, x∗can be obtained by the “L-BFGS-B” quasi-Newton method (Byrd et al., 1995) directly. If the first kfactors are continuous with 1 ≤k≤d−1, we first find x∗ (1)= argmaxx(1)∈Qk j=1[aj,bj]d (xT (1),xT
(2))T,ξt for each x(2)∈ D ⊆Qd j=k+1Ij, and then choose x∗ (2)that yields the largest d(((x∗ (1))T,xT (2))T,ξt). Note that x∗ (1)depends on x(2). Then x∗= ((x∗ (1))T,(x∗ (2))T)T. 8 Step 6: Ifd(x∗,ξt)≤p, proceed to Step 7 . Otherwise, obtain ξt+1by adding ( x∗,0) toξtand increasing mtby 1, and return to Step 2. Step 7: Output ξtas an EW D-optimal design. The EW ForLion algorithm is essentially applying the original ForLion algorithm proposed by Huang et al. (2024) to F(ξ) under an EW D-optimality. It should be noted that (i)Step 2 in Algorithm 1 is different from the merging step in the original ForLion algorithm by addressing a possible issue when the merging step leads to a degenerated F(ξmer), which is possible in practice; and (ii)in Step 5, a general subset Dis used instead ofQd j=k+1Ij, which is also different from the original ForLion algorithm (see Examples 1 and 2 for motivations). According to Step 6 in Algorithm 1, ξtreported by the EW ForLion algorithm satisfies max x∈Xd(x,ξt)≤p. As a direct conclusion of Theorem 2, we have the following corollary. Corollary 1. For sample-based EW D-optimality under Assumptions (A1), (A2), and (B3), or integral-based EW D-optimality under Assumptions (A1), (A3), and (B3), the design obtained by Algorithm 1 must be EW D-optimal. Similarly to the ForLion algorithm (Huang et al., 2024), we may relax in practice the stopping rule d(x∗,ξt)≤pin Step 6 to d(x∗,ξt)≤p+ϵ. Following the arguments in Remark 2 in Huang et al. (2024), as a direct conclusion of Theorem 2, Algorithm 1 is guaranteed to stop in finite steps. Since the design found by Algorithm 1 is an approximate design with continuous design points, to facilitate users in practice, we propose a rounding algorithm (see Algo- rithm 2) to convert an approximate design to an exact design with a user-specified set of grid points. Different from the rounding algorithms proposed in the literature (see, for example, Algorithm 2 in Huang et al. 
(2025)), Algorithm 2 here involves rounding both the weights and the design points. Algorithm 2: Rounding Algorithm Step 0 : Input: An approximate design ξ={(xi, wi), i= 1, . . . , m 0} ∈Ξwith all wi> 0 and |F(ξ)|>0, a pre-specified merging threshold δr(e.g., δr= 0.1, typically larger than δin Algorithm 1), grid levels (or pace lengths) L1, . . . , L kfor the kcontinuous factors (e.g., L1=···=Lk= 0.25, typically larger than δr), respectively, and the total number nof experimental units. Step 1: Merging step: For each pair of xiandxj, define their distance dij=∥xi−xj∥ if their levels of discrete factors are identical; and ∞otherwise. If dij< δr, tentatively merge xiandxjinto a single design point ( wixi+wjxj)/(wi+wj) with combined weight wi+wj, and denote the resulting design as ξmer. If|F(ξmer)|>0, replace ξ withξmer; otherwise, do not merge. Stop if all such pairs have been examined. Step 2: Rounding design points to grid levels: Round the continuous components of each design point to their nearest multiples of the corresponding Lj,j= 1, . . . , k . Step 3: Allocating experimental units: First set ni=⌊nwi⌋, the largest integer no
more than nwi, and then allocate any remaining units to design points with nwi> n i, in the order of increasing F(ξ) the most (see Algorithm 2 in Huang et al. (2025)). 9 Step 4: Deleting step: Discard any xifor which ni= 0. Step 5: Output: Exact design {(xi, ni), i= 1, . . . , m }with ni>0 andPm i=1ni=n. 3.3 Calculating E{F(x,Θ)}orˆE{F(x,Θ)} As illustrated by Yang et al. (2016, 2017) and Bu et al. (2020), one major advantage of EW D-optimality against Bayesian D-optimality is that it optimizes |E{F(x,Θ)}|or |ˆE{F(x,Θ)}|, instead of E{log|F(x,Θ)|}. To implement the EW ForLion algorithm (Algorithm 1) for EW D-optimal designs, in this section, we show by two examples that instead of calculating Fx=E{F(x,Θ)}orˆE{F(x,Θ)}directly, we may focus on key components of the Fisher information matrix to make the computation more efficiently. Example 3. Multinomial logistic models (MLM): A general MLM (Glonek and McCullagh, 1995; Zocchi and Atkinson, 1999; Bu et al., 2020) for categorical responses with Jcategories can be defined by CTlog(Lπi) =ηi=Xiθ,i= 1, . . . , m , with (2 J− 1)×Jconstant matrices CandL, category probabilities πi= (πi1, . . . , π iJ)T,J×p model matrices Xi, and model parameters θ∈Θ⊆Rp. According to Theorem 2 of Huang et al. (2024), to calculate the Fisher information Fxatx∈ X, it is enough to replace ux st, orux st(θ) as it is a function of θ, with E{ux st(Θ)}=R Θux st(θ)Q(dθ) or ˆE{ux st(Θ)}=B−1PB j=1ux st(ˆθj), for s, t= 1, . . . , J . The formulae for calculating ux st(θ) can be found in Appendix A of Huang et al. (2024). Then F(x,θ) =XT xUx(θ)Xx according to Corollary 3.1 in Bu et al. (2020), where Xx= hT 1(x)0T··· 0ThT c(x) 0ThT 2(x)......... .........0ThT c(x) 0T··· 0ThT J−1(x)hT c(x) 0T··· ··· 0T0T J×p(3.7) with predetermined predictor functions hj= (hj1, . . . , h jpj)T,j= 1, . . . , J −1,hc= (h1, . . . , h pc)T, and Ux(θ) = ( ux st(θ))s,t=1,...,J. Then E{F(x,Θ)}=XT xE{Ux(Θ)}Xx and ˆE{F(x,Θ)}=XT xˆE{Ux(Θ)}Xx. □ Theorem 3. 
For MLMs under Assumptions (A1) and (B3), suppose all the predictor functions h1, . . . ,hJ−1andhcdefined in (3.7) are continuous with respect to all continu- ous factors of x∈ X, and the parameter space Θ⊆Rpis bounded. Then there exists an EW D-optimal design ξthat contains no more than p(p+ 1)/2support points, and the design obtained by Algorithm 1 must be EW D-optimal. The boundedness of Θin Theorem 3 is a practical requirement. For typical appli- cations, the boundedness of a working parameter space is often needed for numerical searches for parameter estimates or theoretical derivations for desired properties (Fergu- son, 1996). Example 4. Generalized linear models (GLM): A generalized linear model (Mc- Cullagh and Nelder, 1989; Dobson and Barnett, 2018) can be defined by E(Yi) =µiand 10 ηi=g(µi) =XT iθ, where gis a given link function, Xi=h(xi) = ( h1(xi), . . . , h p(xi))T, i= 1, . . . , m , and θ∈Θ⊆Rp(see, for example, Section 4 in Huang et al. (2024)). Then F(x,θ) =ν{h(x)Tθ}·h(x)h(x)T, where ν={(g−1)′}2/swith
s(ηi) = Var( Yi) (see Table 5 in the Supplementary Material of Huang et al. (2025) for examples of ν). To cal- culate Fx=E{F(x,Θ)}orˆE{F(x,Θ)}under GLMs, it is enough to replace ν{h(x)Tθ} with E[ν{h(x)TΘ}] =R Θν{h(x)Tθ}Q(dθ) or ˆE[ν{h(x)TΘ}] =B−1PB j=1ν{h(x)Tˆθj}. That is, E{F(x,Θ)}=E ν{h(x)TΘ} ·h(x)h(x)T, and ˆE{F(x,Θ)}=ˆE ν{h(x)TΘ} · h(x)h(x)T. To calculate F(ξ,θ) under GLMs, instead ofPm i=1wiF(xi,θ), we recommend its matrix form F(ξ,θ) =XT ξWξ(θ)Xξ, where Xξ= (h(x1), . . . , h(xm))T∈Rm×p, and Wξ(θ) = diag {w1ν{h(x1)Tθ}, . . . , w mν{h(xm)Tθ}}. Then E{F(ξ,θ)}=XT ξE{Wξ(Θ)} Xξand ˆE{F(ξ,θ)}=XT ξˆE{Wξ(Θ)}Xξ. The matrix form is much more efficient when programming in R. □ Theorem 4. For GLMs under Assumptions (A1) and (B3), suppose all the predictor functions h= (h1, . . . , h p)Tare continuous with respect to all continuous factors of x∈ X, andΘis bounded. Then there exists an EW D-optimal design ξthat contains no more than p(p+ 1)/2support points, and the design obtained by Algorithm 1 must be EW D-optimal. 3.4 First-order derivative of sensitivity function To implement the EW ForLion algorithm (Algorithm 1) for EW D-optimal designs, if there is at least one continuous factor, we need to calculate the first-order derivative of the sensitivity function d(x,ξ) in Theorem 2. Following the simplified notation in Section 3.2, d(x,ξ) = tr( F(ξ)−1Fx). Similarly to Theorem 3 in Huang et al. (2024), we obtain the following theorem for multinomial logistic models (MLM) under EW D-optimality. Theorem 5. For an MLM model under Assumptions (A1) and (B3), consider ξ∈Ξ satisfying |F(ξ)|>0. If we denote the p×pmatrix F(ξ)−1= (Est)s,t=1,...,Jwith submatrix Est∈Rps×pt, and pJ=pc,hx j=hj(x),hx c=hc(x),ux st=E{ux st(Θ)}orˆE{ux st(Θ)}for simplicity, then d(x,ξ) =J−1X t=1ux tt(hx t)TEtthx t+J−1X s=1J−1X t=1ux st(hx c)TEJJhx c + 2J−2X s=1J−1X t=s+1ux st(hx t)TEsthx s+ 2J−1X s=1J−1X t=1ux st(hx c)TEsJhx s. Example 5. 
Multinomial logistic models (MLM): To calculate the first-order derivative of d(x,ξ) under MLMs, we follow Appendix C in Huang et al. (2024) after (i) F(ξ) is replaced with E{F(ξ,Θ)}orˆE{F(ξ,Θ)};(ii)Fxis replaced with E{F(x,Θ)}or ˆE{F(x,Θ)};(iii)Ux= (ux st)s,t=1,...,Jwith ux streplaced by E{ux st(Θ)}orˆE{ux st(Θ)}; and (iv)when k≥1, for i= 1, . . . , k ,∂Ux ∂xi= (∂ux st ∂xi)s,t=1,...,Jwith∂ux st ∂xireplaced by∂E{ux st(Θ)} ∂xi= 11 ∂ ∂xiR Θux st(θ)Q(dθ) or ˆE{∂ux st ∂xi(Θ)}=1 BPB j=1∂ux st ∂xi(ˆθj) . Note that∂ux st ∂xi(ˆθj) denotes∂ux st ∂xi atθ=ˆθj, whose formulae can be found in Appendix C of Huang et al. (2024). □ Example 6. Generalized linear models (GLM): Under GLMs, the sensitivity func- tiond(x,ξ) =E[ν{h(x)TΘ}]·h(x)T[E{F(ξ,Θ)}]−1h(x) for fEW(ξ) or ˆE[ν{h(x)TΘ}]· h(x)Th ˆE{F(ξ,Θ)}i−1 h(x) for fSEW(ξ). Following Section S.5 of the Supplementary Material of Huang et al. (2024), we denote AE= [XT ξE{Wξ(Θ)}Xξ]−1andAˆE= [XT ξˆE{Wξ(Θ)}Xξ]−1. The first-order derivative of d(x,ξ) with respect to continuous factors x(1)= (x1, . . . , x k)Tis∂d(x,ξ) ∂x(1)=h(x)TAEh(x)·∂E[ν{h(x)TΘ}] ∂x(1)+ 2·E[ν{h(x)TΘ}]· {∂h(x) ∂xT (1)}TAEh(x) for fEW(ξ), or ∂d(x,ξ) ∂x(1)=h(x)TAˆEh(x)·( ∂h(x) ∂xT (1))T ·1 BBX j=1ν′{h(x)Tˆθj}ˆθj +2 B·BX j=1ν{h(x)Tˆθj} ·( ∂h(x) ∂xT (1))T AˆEh(x) forfSEW(ξ). □ Similar to Theorem 4 in Huang et al. (2024), we obtain the following result for GLM under EW D-optimality. Theorem 6. For a GLM model under Assumptions (A1) and (B3), consider ξ∈Ξ satisfying
|XT ξWξXξ|>0, where Wξis either E{Wξ(Θ)}for integral-based EW D- optimality or ˆE{Wξ(Θ)}for sample-based EW D-optimality (see Example 4). Then ξ is EW D-optimal if and only if max x∈XE[ν{h(x)TΘ}]·h(x)T[XT ξWξXξ]−1h(x)≤pfor integral-based D-optimality or max x∈XˆE[ν{h(x)TΘ}]·h(x)T[XT ξWξXξ]−1h(x)≤pfor sample-based EW D-optimality. 4. Applications to Real Experiments In this section, we use the paper feeder experiment (see Section 1 and Example 2) and a minimizing surface defects experiment (see also Example 1), both under MLMs, to illustrate the advantages gained by adopting our algorithms and the EW D-optimal designs. For examples under GLMs, please see Section S6 of the Supplementary Material. 4.1 Paper feeder experiment In this section, we consider the motivating example mentioned in Section 1 and Exam- ple 2, the paper feeder experiment. There are eight discrete control factors and one continuous factor, the stack force, under our consideration (see Section S4 for more de- tails). As mentioned in Section 1, instead of using two separate GLMs to model the two types of failures, misfeed and multifeed , we consider multinomial logistic models 12 (MLM) to model all three possible outcomes, namely misfeed ,normal , and multifeed , at the same time. Using Akaike information criterion (AIC, Akaike (1973)) and Bayesian information criterion (BIC, Hastie et al. (2009)), we adopt a cumulative logit model with npo as the most appropriate model for this experiment (see Table S.1 in Section S4 of the Supplementary Material). For this experiment, we compare ten different designs (see Tables 1 and 2): (1) “Orig- inal allocation”: the original design that collected 1,785 observations roughly uniformly at 183 distinct experimental settings; (2) “EW Bu-appro”: an approximate design ob- tained by applying Bu et al. (2020)’s EW lift-one algorithm on the original 183 distinct settings; (3) “EW Bu-exact”: an exact design obtained by applying Bu et al. 
(2020)’s EW exchange algorithm on the 183 distinct settings with n= 1,785; (4) “EW Bu-grid2.5” & (5) “EW Bu-grid2.5 exact”: approximate and exact designs by applying Bu et al. (2020)’s algorithms after discretizing the range [0 ,160] of stack force into grid points with equal space 2.5; (6) “EW ForLion”: the proposed EW D-optimal approximate de- sign by applying Algorithm 1 to the continuous factor, stack force, ranging in [0 ,160]; (7)∼(10) exact designs obtained by applying Algorithm 2 with different grid levels or pace lengths ( L= 0.1,0.5,1,2.5) and n= 1,785. As mentioned in Example 2, for il- lustration purpose, all designs follow the same 18 runs modified from OA(18,21×37) for the eight discrete factors, while their design points can be different in terms of the levels of the continuous factor. Since the feasible space of a cumulative logit model is not rectangular (see Example 5.2 in Bu et al. (2020) or Example 8 in Huang et al. (2025)), we bootstrap the original dataset to collect B= 100 (due to computational intensity) samples and the corresponding estimated model parameters, denoted by ˆθ1,ˆθ2, . . . , ˆθ100. All the EW designs are under the sample-based EW D-optimality with respect to the 100 parameter vectors. The constructed designs are presented in Tables S.3
and S.4 of the Supplementary Material. In Table 1, we list the number of design points m, the computational time in seconds for obtaining the designs, the objective function values, and the relative efficiencies compared with the recommended EW ForLion design, that is, (|Ê{F(ξ, Θ)}|/|Ê{F(ξ_ForLion, Θ)}|)^{1/p} with p = 32. Compared with our EW ForLion design, the original design on 183 settings attains only 63.03% relative efficiency. Applying Bu et al. (2020)'s algorithms to the same 183 settings, the relative efficiencies of EW Bu-appro and EW Bu-exact, 88.16% and 88.15%, are not satisfactory either. When we apply Bu et al. (2020)'s algorithms on grid points with length 2.5, it takes much longer to find the EW Bu-grid2.5 and EW Bu-grid2.5 exact designs, which also need more settings (55 and 44). The EW ForLion design and its exact designs at different grid levels require the fewest experimental settings, less computational time, and higher relative efficiencies. To assess the robustness of these designs, we first use the original ForLion algorithm (Huang et al., 2024) to find the corresponding locally D-optimal design ξ*_j for each θ̂_j. Then the robustness of a design ξ ∈ Ξ can be evaluated by the 100 relative efficiencies (|F(ξ, θ̂_j)|/|F(ξ*_j, θ̂_j)|)^{1/p}, j = 1, ..., 100. Table 2 (see also Figure S.2 in the Supplementary Material) presents the five-number summary of the 100 relative efficiencies. Overall, the EW ForLion design is the most robust one against parameter misspecifications.
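The relative-efficiency measure used throughout this comparison, (|F(ξ, θ̂_j)|/|F(ξ*_j, θ̂_j)|)^{1/p}, together with the five-number summary reported in Table 2, can be sketched as follows. The determinant values below are synthetic stand-ins for illustration, not the experiment's actual values.

```python
import numpy as np

def relative_efficiency(det_design, det_optimal, p):
    """D-efficiency of a design relative to the locally optimal one:
    (|F(xi, theta_j)| / |F(xi*_j, theta_j)|)^(1/p)."""
    return (det_design / det_optimal) ** (1.0 / p)

def five_number_summary(effs):
    """Min, Q1, median, Q3, max, as reported in Table 2."""
    return np.percentile(effs, [0, 25, 50, 75, 100])

# Hypothetical determinants for 100 sampled parameter vectors, p = 32
rng = np.random.default_rng(1)
det_opt = rng.uniform(1e30, 1e34, size=100)           # locally optimal designs
det_xi = det_opt * rng.uniform(0.1, 1.0, size=100) ** 32  # a less efficient design
effs = relative_efficiency(det_xi, det_opt, p=32)
print(five_number_summary(effs))
```

Raising the determinant ratio to the power 1/p puts the efficiency on a per-parameter scale, so a value of 0.90 roughly means the design extracts 90% as much information per parameter as the locally optimal design.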
For practical uses, we recommend the grid-0.1 exact design (EW ForLion exact grid0.1), which has nearly the same robustness as the EW D-optimal approximate design and the smallest number of distinct experimental settings (i.e., 38).

Table 1: Robust designs for the paper feeder experiment

Designs                    m    Time (s)  |Ê{F(ξ,Θ)}|  Relative Efficiency
Original allocation        183  -         1.342e+28    63.03%
EW Bu-appro                38   116s      6.162e+32    88.16%
EW Bu-exact                38   1095s     6.159e+32    88.15%
EW Bu-grid2.5              55   15201s    2.078e+34    98.40%
EW Bu-grid2.5 exact        44   69907s    2.078e+34    98.40%
EW ForLion                 38   1853s     3.482e+34    100.00%
EW ForLion exact grid0.1   38   0.28s     2.122e+34    98.46%
EW ForLion exact grid0.5   38   0.24s     2.065e+34    98.38%
EW ForLion exact grid1     38   0.23s     1.579e+34    97.56%
EW ForLion exact grid2.5   38   0.22s     1.326e+34    97.03%

Note: Times for the EW ForLion exact designs are for applying Algorithm 2 to the EW ForLion (approximate) design.

Table 2: Summary of 100 relative efficiencies against locally D-optimal designs for the paper feeder experiment

Robust design              Min     Q1      Median  Q3      Max
Original allocation        0.5075  0.6005  0.6115  0.6308  0.6587
EW Bu-appro                0.6813  0.8133  0.8343  0.8488  0.8741
EW Bu-exact                0.6813  0.8134  0.8346  0.8489  0.8742
EW Bu-grid2.5              0.7592  0.9183  0.9327  0.9417  0.9513
EW Bu-grid2.5 exact        0.7594  0.9184  0.9328  0.9417  0.9515
EW ForLion                 0.8363  0.9277  0.9364  0.9445  0.9542
EW ForLion exact grid0.1   0.8028  0.9276  0.9361  0.9443  0.9541
EW ForLion exact grid0.5   0.7569  0.9269  0.9363  0.9447  0.9534
EW ForLion exact grid1     0.7502  0.9163  0.9315  0.9389  0.9489
EW ForLion exact grid2.5   0.7548  0.9098  0.9253  0.9348  0.9439

4.2 Minimizing surface defects

Wu (2008) presented a study that investigated optimal parameter settings in a polysilicon deposition process for circuit manufacturing (see also Example 1). This experiment
involves responses of five categories, five continuous factors, and one discrete factor (see Table S1 in the Supplementary Material of Huang et al. (2024) or Table 6 in Lukemire et al. (2022)). Lukemire et al. (2022) employed a PSO algorithm and derived a locally D-optimal approximate design with 14 support points under a cumulative logit model with po, while Huang et al. (2024) applied their ForLion algorithm and obtained a locally D-optimal approximate design with 17 design points. In practice, an experimenter may not know the precise values of some or all parameters but often has varying degrees of insight into their actual values. To construct a robust design against parameter misspecifications, we sample B= 1000 parameter vectors from the prior distributions listed in Table S1 of the Supplementary Material of Huang et al. 14 (2024) for the cumulative logit model with po as follows: logπi1+···+πij πi,j+1+···+πiJ =θj−β1xi1−β2xi2−β3xi3−β4xi4−β5xi5−β6xi6 with i= 1, . . . , m ,j= 1,2,3,4, and J= 5, where πijis the probability that the response associated with xi= (xi1, . . . , x i6)Tfalls into the jth category. We use Algorithm 1 to construct a sample-based EW D-optimal approximate design (see the left side of Table 3). Assuming that the total number of experimental units is n= 1000, we apply our Algorithm 2 with L1=···=L5= 1 for continuous control variables and obtain an exact design, as shown on the right side of Table 3. Both our approximate and exact designs have 17 support points. To compare EW ForLion D-optimal designs with Bu et al. (2020)’s EW D-optimal designs on grid points, we discretize each of the five continuous control factors into 2 or 4 evenly distributed grid points, which leads to 2 ×25= 64 or 2 ×45= 2,048 possible design points, respectively. Discretizations with higher number of grid points are skipped due to computational intensity. Then Bu et al. 
(2020)'s lift-one algorithm is applied to the discretized design region, and the corresponding EW D-optimal approximate designs are constructed (EW Bu grid2 and EW Bu grid4 in Figure 1).

Table 3: EW D-optimal approximate (left) and exact (right) designs by EW ForLion and rounding algorithms for the minimizing surface defects experiment

Approximate design (EW ForLion):
Point  Cleaning  Deposition  Deposition  Nitrogen  Silane  Settling   wi
       Method    Temp.       Pressure    Flow      Flow    Time
 1     -1        -25.000      200.000    -150        0      0       0.0475
 2      1         25.000        0           0        0      0       0.0986
 3      1         25.000      200.000    -150        0     16       0.0380
 4     -1        -25.000        0           0        0     16       0.1206
 5      1          0            0           0        0     16       0.0984
 6     -1         25.000     -104.910    -150     -100     16       0.0426
 7     -1         14.870       -1.299       0        0      0       0.1017
 8      1        -13.168        2.802       0        0     16       0.0399
 9     -1         25.000      111.805       0     -100     16       0.0533
10     -1         25.000      -72.107       0     -100     16       0.0432
11     -1        -25.000     -170.690    -150        0      0       0.0401
12      1         25.000     -158.631    -150        0     16       0.0470
13      1        -25.000     -200.000    -150     -100      0       0.0215
14      1        -25.000      -20.500       0     -100      0       0.0843
15      1        -25.000      200.000    -150     -100      0       0.0415
16      1         -2.925        0.017       0        0      0       0.0294
17     -1        -10.568       -1.093       0        0      0       0.0524

Exact design (rounding algorithm, n = 1000):
Point  Cleaning  Deposition  Deposition  Nitrogen  Silane  Settling   ni
       Method    Temp.       Pressure    Flow      Flow    Time
 1     -1        -25          200        -150        0      0         48
 2      1         25            0           0        0      0         99
 3      1         25          200        -150        0     16         38
 4     -1        -25            0           0        0     16        121
 5      1          0            0           0        0     16         98
 6     -1         25         -105        -150     -100     16         43
 7     -1         15           -1           0        0      0        102
 8      1        -13            3           0        0     16         40
 9     -1         25          112           0     -100     16         53
10     -1         25          -72           0     -100     16         43
11     -1        -25         -171        -150        0      0         40
12      1         25         -159        -150        0     16         47
13      1        -25         -200        -150     -100      0         21
14      1        -25          -20           0     -100      0         84
15      1        -25          200        -150     -100      0         42
16      1         -3            0           0        0      0         29
17     -1        -11           -1           0        0      0         52

To compare the robustness of different designs against parameter misspecifications, we sample 10,000 parameter vectors from the prior distribution provided in Table S1 of Huang et al. (2024). For each sampled parameter vector θ_j, we calculate the relative efficiency of a design ξ under comparison with respect to our EW ForLion approximate design ξ_EWForLion, i.e., (|F(ξ, θ_j)|/|F(ξ_EWForLion, θ_j)|)^{1/p} with p = 10 in this case. Figure 1 shows histograms and boxplots of the 10,000 relative efficiencies for Bu et al. (2020)'s EW D-optimal designs on the grid-2 and grid-4 design regions, the EW ForLion exact design obtained by applying our Algorithm 2 to the EW ForLion approximate design, and the locally D-optimal designs obtained by Huang et al. (2024)'s original ForLion algorithm and Lukemire et al. (2022)'s PSO. It is evident that the EW ForLion exact design performs very similarly to the EW ForLion approximate design. In most cases, the original ForLion and PSO designs yield relative efficiencies less than 1. The EW Bu grid4 design performs clearly better than EW Bu grid2, but not as well as the EW ForLion designs. Overall, we recommend our EW ForLion approximate and exact designs.

Figure 1: Relative efficiency comparison among designs for 10,000 sampled parameter sets in the minimizing surface defects experiment

5.
Conclusion

In this paper, we characterize EW D-optimal designs for fairly general parametric models with mixed factors and derive detailed formulae for multinomial logistic models and generalized linear models. We develop an EW ForLion algorithm for finding EW D-optimal designs with mixed or continuous factors, and a rounding algorithm to convert approximate designs into exact designs. Depending on the availability of a pilot study or a prior distribution on the unknown parameters, we recommend either sample-based or integral-based EW D-optimal designs. Naturally, the obtained EW D-optimal design depends on the chosen set of parameter vectors or prior distribution. By applying our algorithms to real experiments, we show that EW D-optimal designs obtained under a reasonable parameter vector set or prior distribution are robust against parameter misspecifications. By merging close design settings, our designs may also significantly reduce experimental cost and time through minimizing the number of distinct design settings.
Supplementary Materials The Supplementary Material includes several sections: S1 provides analytic solutions for lift-one algorithm with MLMs; S2 discusses assumptions for design region and parametric models; S3 provides proofs of main theorems; S4 displays model selection and design comparison for paper feeder experiment; S5 discusses robustness of sample-based EW 16 designs; S6 provides two examples for GLMs and integral-based EW D-optimality. References Abramowitz, M. and I. A. Stegun (1970). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (9th ed.). Dover Publications. Ai, M., Z. Ye, and J. Yu (2023). Locally d-optimal designs for hierarchical response experiments. Statistica Sinica 33 , 381–399. Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B. Petrov and F. Csaki (Eds.), Proceedings of the 2nd International Symposium on Information Theory , pp. 267–281. Akademiai Kiado, Budapest. Atkinson, A. C., A. N. Donev, and R. D. Tobias (2007). Optimum Experimental Designs, with SAS . Oxford, United Kingdom: Oxford University Press. Billingsley, P. (1999). Convergence of Probability Measures (2 ed.). John Wiley & Sons. Bu, X., D. Majumdar, and J. Yang (2020). D-optimal designs for multinomial logistic models. Annals of Statistics 48 (2), 983–1000. Byrd, R. H., P. Lu, J. Nocedal, and C. Zhu (1995). A limited memory algorithm for bound constrained optimization. SIAM Journal on scientific computing 16 (5), 1190–1208. Chaloner, K. and I. Verdinelli (1995). Bayesian experimental design: a review. Statistical Science 10 , 273–304. Chernoff, H. (1953). Locally optimal designs for estimating parameters. Annals of Math- ematical Statistics 24 , 586–602. Dobson, A. and A. Barnett (2018). An Introduction to Generalized Linear Models (4 ed.). Chapman & Hall/CRC. Fedorov, V. (1972). Theory of Optimal Experiments . Academic Press. Fedorov, V. and S. Leonov (2014). 
Optimal Design for Nonlinear Response Models . Chapman & Hall/CRC. Fedorov, V. V. and P. Hackl (1997). Model-Oriented Design of Experiments . Springer Science & Business Media. Ferguson, T. (1996). A Course in Large Sample Theory . Chapman & Hall. Glonek, G. and P. McCullagh (1995). Multivariate logistic models. Journal of the Royal Statistical Society, Series B 57 , 533–546. Harman, R., L. Filov´ a, and P. Richt´ arik (2020). A randomized exchange algorithm for computing optimal approximate designs of experiments. Journal of the American Statistical Association 115 (529), 348–361. 17 Hastie, T., R. Tibshirani, and J. Friedman (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2 ed.). Springer. Huang, Y., K. Li, A. Mandal, and J. Yang (2024). Forlion: A new algorithm for d-optimal designs under general parametric statistical models with mixed factors. Statistics and Computing 34 , 157. Huang, Y., L. Tong, and J. Yang (2025). Constrained d-optimal design for paid research study. Statistica Sinica . To appear, available at https://www3.stat.sinica.edu. tw/ss_newpaper/SS-2022-0414_na.pdf . Joseph, V. and C. Wu (2004). Failure amplification method: an information maximization approach to categorical response optimization (with discussions). Technometrics 46 , 1–31. Kennedy, J. and R. Eberhart (1995). Particle swarm optimization. In Proceedings of ICNN’95-International Conference on Neural Networks , Volume 4, pp. 1942–1948. IEEE. Kiefer, J. (1974). General equivalence theory for optimum designs (approximate theory).
Annals of Statistics 2, 849–879.

Lehmann, E. L. and G. Casella (1998). Theory of Point Estimation (2 ed.). Springer, New York.

Lukemire, J., A. Mandal, and W. K. Wong (2019). d-QPSO: A quantum-behaved particle swarm technique for finding D-optimal designs with discrete and continuous factors and a binary response. Technometrics 61(1), 77–87.

Lukemire, J., A. Mandal, and W. K. Wong (2022). Optimal experimental designs for ordinal models with mixed factors for industrial and healthcare applications. Journal of Quality Technology 54, 184–196.

McCullagh, P. and J. Nelder (1989). Generalized Linear Models (2 ed.). Chapman and Hall/CRC.

Phadke, M. (1989). Quality Engineering Using Robust Design. Prentice-Hall, Englewood Cliffs.

Poli, R., J. Kennedy, and T. Blackwell (2007). Particle swarm optimization: An overview. Swarm Intelligence 1, 33–57.

Pukelsheim, F. (1993). Optimal Design of Experiments. John Wiley & Sons.

Resnick, S. (2003). A Probability Path. Springer Science & Business Media.

Silvey, S., D. Titterington, and B. Torsney (1978). An algorithm for optimal designs on a finite design space. Communications in Statistics - Theory and Methods 14, 1379–1389.

Stufken, J. and M. Yang (2012). Optimal designs for generalized linear models, Chapter 4, pp. 137–164. Wiley.

Titterington, D. (1976). Algorithms for computing D-optimal design on finite design spaces. In Proc. of the 1976 Conf. on Information Science and Systems, Volume 3, pp. 213–216. Johns Hopkins University.

Titterington, D. (1978). Estimation of correlation coefficients by ellipsoidal trimming. Journal of the Royal Statistical Society, Series C (Applied Statistics) 27, 227–234.

Tong, L., H. Volkmer, and J. Yang (2014). Analytic solutions for D-optimal factorial designs under generalized linear models. Electronic Journal of Statistics 8, 1322–1344.

Wu, C. and M. Hamada (2009). Experiments: Planning, Analysis, and Optimization (2 ed.). Wiley.

Wu, F.-C. (2008). Simultaneous optimization of robust design with quantitative and ordinal data. International Journal of Industrial Engineering: Theory, Applications and Practice 5, 231–238.

Yang, J. and A. Mandal (2015). D-optimal factorial designs under generalized linear models. Communications in Statistics - Simulation and Computation 44, 2264–2277.

Yang, J., A. Mandal, and D. Majumdar (2016). Optimal designs for 2^k factorial experiments with binary response. Statistica Sinica 26, 385–411.

Yang, J., L. Tong, and A. Mandal (2017). D-optimal designs with ordered categorical data. Statistica Sinica 27, 1879–1902.

Yang, M., S. Biedermann, and E. Tang (2013). On optimal designs for nonlinear models: a general and efficient algorithm. Journal of the American Statistical Association 108, 1411–1420.

Yu, Y. (2010). Monotonic convergence of a general algorithm for computing optimal designs. Annals of Statistics 38, 1593–1606.

Zocchi, S. and A. Atkinson (1999). Optimum experimental designs for multinomial logistic models. Biometrics 55, 437–444.

EW D-optimal Designs for Experiments with Mixed Factors

Siting Lin^1, Yifei Huang^2, and Jie Yang^1
^1 University of Illinois at Chicago and ^2 Astellas Pharma, Inc.

Supplementary Material

S1 Analytic Solutions for Multinomial Logistic Models
S2 Assumptions and Relevant Results
S3 Proofs of Main Theorems
S4 Model Selection and Design Comparison for Paper Feeder Experiment
S5 Robustness of Sample-based EW Designs
S6 More Examples

S1. Analytic Solutions for Multinomial Logistic Models

According to Theorem
S.9 in the Supplementary Material of Bu et al. (2020), for multinomial logistic models, given a design ξ = {(x_i, w_i) | i = 1, ..., m} ∈ Ξ, for i ∈ {1, ..., m} and 0 < z < 1, we have

  f_i(z) = (1-z)^{p-J+1} Σ_{j=0}^{J-1} b_j z^j (1-z)^{J-1-j},

  f_i'(z) = (1-z)^{p-J} Σ_{j=1}^{J-1} b_j (j - pz) z^{j-1} (1-z)^{J-1-j} - p b_0 (1-z)^{p-1},

where b_0 = f_i(0), (b_{J-1}, b_{J-2}, ..., b_1)^T = B_{J-1}^{-1} c, B_{J-1} = (s^{t-1})_{s,t=1,2,...,J-1}, and c = (c_1, c_2, ..., c_{J-1})^T with c_j = (j+1)^p j^{J-1-p} f_i(1/(j+1)) - j^{J-1} f_i(0), j = 1, ..., J-1.

For typical applications, p ≥ J ≥ 3. Since f_i(z) is an order-p polynomial in z with f_i(1) = 0, its maximum on [0, 1] is attained either at z = 0 or at an interior point z ∈ (0, 1) satisfying f_i'(z) = 0, that is,

  Σ_{j=1}^{J-1} j b_j z^{j-1} (1-z)^{J-j-1} = p Σ_{j=0}^{J-1} b_j z^j (1-z)^{J-j-1},  0 < z < 1.  (S1.1)

In Bu et al. (2020), it was mentioned that (S1.1) has analytic solutions for J ≤ 5, but no formula was provided. In this section, we provide explicit solutions to facilitate the programming for multinomial logistic models. Note that we only need the real-valued solutions located in (0, 1) for our purposes. According to the analytic solutions to quadratic, cubic, and quartic equations (see, for example, Sections 3.8.1–3.8.3 in Abramowitz and Stegun (1970)), we have the following results:

Lemma 2. When J = 3, (S1.1) can be rewritten as A_2 z^2 + A_1 z + A_0 = 0 with A_2 = p(b_0 - b_1 + b_2), A_1 = b_1(1+p) - 2(b_0 p + b_2), and A_0 = b_0 p - b_1.

(i) If A_2 = A_1 = A_0 = 0, then (S1.1) has infinitely many solutions in (0, 1) and f_i(z) ≡ f_i(0) ≥ 0. In this case, we may choose z = 0 to maximize f_i(z).
(ii) If A_2 = A_1 = 0 but A_0 ≠ 0, then we must have A_0 > 0. In this case, (S1.1) has no solution, while z = 0 uniquely maximizes f_i(z) with z ∈ [0, 1].
(iii) If A_2 = 0 but A_1 ≠ 0, then (S1.1) has a unique solution -A_0/A_1 ∈ R.
(iv) If A_2 ≠ 0 and A_1^2 - 4A_2A_0 > 0, then (S1.1) has two real solutions (-A_1 ± (A_1^2 - 4A_2A_0)^{1/2}) / (2A_2).
(v) If A_2 ≠ 0 and A_1^2 - 4A_2A_0 = 0, then (S1.1) has only one real solution -A_1/(2A_2).
(vi) If A_2 ≠ 0 and A_1^2 - 4A_2A_0 < 0, then (S1.1) has no real solution.
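As a quick illustration, the case analysis of Lemma 2 can be turned into a root finder for (S1.1) when J = 3. This is only a sketch under our own naming (the function `quadratic_roots_in_01` and the tolerance `tol` are not from the paper); cases (i)–(ii) are collapsed into "no interior candidate", since z = 0 is then the maximizer:

```python
import math

def quadratic_roots_in_01(A2, A1, A0, tol=1e-12):
    """Real roots of A2*z**2 + A1*z + A0 = 0 lying in (0, 1),
    following the case analysis of Lemma 2 (J = 3)."""
    if abs(A2) < tol:
        if abs(A1) < tol:
            # cases (i)/(ii): no interior candidate; z = 0 maximizes f_i(z)
            return []
        # case (iii): unique root of the linear equation
        return [z for z in (-A0 / A1,) if 0.0 < z < 1.0]
    disc = A1 * A1 - 4.0 * A2 * A0
    if disc < 0.0:
        return []                    # case (vi): no real root
    if disc == 0.0:
        roots = [-A1 / (2.0 * A2)]   # case (v): double root
    else:
        # case (iv): two distinct real roots
        roots = [(-A1 + s * math.sqrt(disc)) / (2.0 * A2) for s in (1.0, -1.0)]
    return [z for z in roots if 0.0 < z < 1.0]
```

For instance, A_2 = 1, A_1 = -1, A_0 = 0.21 yields the interior candidates 0.3 and 0.7, which would then be compared with z = 0 by evaluating f_i.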
Proof of Lemma 2. When J = 3, (S1.1) can be rewritten as A_2 z^2 + A_1 z + A_0 = 0 with A_2 = p(b_0 - b_1 + b_2), A_1 = b_1(1+p) - 2(b_0 p + b_2), A_0 = b_0 p - b_1, and f_i'(z) = -(1-z)^{p-J} (A_2 z^2 + A_1 z + A_0). We only need to verify case (ii). Actually, in this case, if A_0 < 0, then f_i'(z) = -(1-z)^{p-J} A_0 > 0 for all z ∈ (0, 1), while f_i(0) ≥ 0 = f_i(1) leads to a contradiction. □

As a direct conclusion of Section 3.8.2 in Abramowitz and Stegun (1970), we obtain the following lemma for J = 4:

Lemma 3. When J = 4, equation (S1.1) is equivalent to A_3 z^3 + A_2 z^2 + A_1 z + A_0 = 0 with A_0 = b_0 p - b_1, A_1 = -3b_0 p + b_1(2+p) - 2b_2, A_2 = 3b_0 p - b_1(1+2p) + b_2(2+p) - 3b_3, and A_3 = p(-b_0 + b_1 - b_2 + b_3). The cases with A_3 = 0 have been listed in Lemma 2. If A_3 ≠ 0, (S1.1) is equivalent to z^3 + a_2 z^2 + a_1 z + a_0 = 0 with a_i = A_i/A_3, i = 0, 1, 2. We let

  q = a_1/3 - a_2^2/9,  r = (a_1 a_2 - 3a_0)/6 - a_2^3/27,
  s_1 = [r + (q^3 + r^2)^{1/2}]^{1/3},  s_2 = [r - (q^3 + r^2)^{1/2}]^{1/3}.

(i) If q^3 + r^2 > 0, then (S1.1) has only one real solution s_1 + s_2 - a_2/3.
(ii) If q^3 + r^2 = 0, then (S1.1) has two real solutions z_1 = 2r^{1/3} - a_2/3 and z_2 = -r^{1/3} - a_2/3.
(iii) If q^3 + r^2 < 0, then (S1.1) has three real solutions

  z_1 = s_1 + s_2 - a_2/3,  z_2, z_3 = -(s_1 + s_2)/2 - a_2/3 ± (i√3/2)(s_1 - s_2),

with i = √-1, known as the imaginary unit.

For J = 5, we follow the arguments for equation (12) in Tong et al.
(2014) and obtain the following results:

Lemma 4. When J = 5, equation (S1.1) is equivalent to A_4 z^4 + A_3 z^3 + A_2 z^2 + A_1 z + A_0 = 0 with A_0 = b_0 p - b_1, A_1 = -4b_0 p + b_1(3+p) - 2b_2, A_2 = 6b_0 p - 3b_1(1+p) + b_2(4+p) - 3b_3, A_3 = -4b_0 p + b_1(1+3p) - 2b_2(1+p) + b_3(3+p) - 4b_4, and A_4 = p(b_0 - b_1 + b_2 - b_3 + b_4). The cases with A_4 = 0 have been listed in Lemmas 2 and 3. If A_4 ≠ 0, (S1.1) is equivalent to z^4 + a_3 z^3 + a_2 z^2 + a_1 z + a_0 = 0 with a_i = A_i/A_4, i = 0, 1, 2, 3. Then there are four solutions to (S1.1), calculated as complex numbers:

  z_1, z_2 = -a_3/4 - √A*/2 ± √B*/2,  z_3, z_4 = -a_3/4 + √A*/2 ± √C*/2,

with

  A* = -2a_2/3 + a_3^2/4 + G*/(3·2^{1/3}),
  B* = -4a_2/3 + a_3^2/2 - G*/(3·2^{1/3}) - (-8a_1 + 4a_2 a_3 - a_3^3)/(4√A*),
  C* = -4a_2/3 + a_3^2/2 - G*/(3·2^{1/3}) + (-8a_1 + 4a_2 a_3 - a_3^3)/(4√A*),
  D* = [F* + (F*^2 - 4E*^3)^{1/2}]^{1/3},
  E* = 12a_0 + a_2^2 - 3a_1 a_3,
  F* = 27a_1^2 - 72a_0 a_2 + 2a_2^3 - 9a_1 a_2 a_3 + 27a_0 a_3^2,
  G* = D* + 2^{2/3} E*/D*.

S2. Assumptions and Relevant Results

Following Fedorov and Leonov (2014) and Huang et al. (2024), in this section we discuss the assumptions needed for this paper in detail. Note that our assumptions have been adjusted for mixed factors. Recall that a design point or experimental setting x = (x_1, ..., x_d)^T ∈ X ⊆ R^d. Among the d ≥ 1 factors, the first k factors are continuous and the last d-k factors are discrete. Note that 0 ≤ k ≤ d, and k = 0 implies no continuous factor. When k ≥ 1, we denote x^{(1)} = (x_1, ..., x_k)^T as the vector of continuous factors.

Lemma 5. Suppose k ≥ 1 and the parametric model M(x, θ) with x ∈ X and θ ∈ Θ satisfies Assumption (A2). Given {θ̂_1, ..., θ̂_B} ⊆ Θ, Ê{F(x, Θ)} = B^{-1} Σ_{j=1}^B F(x, θ̂_j) is element-wise continuous with respect to all continuous factors of x ∈ X.

The proof of Lemma 5 is straightforward. It covers sample-based EW designs. For integral-based EW designs, we need the following lemma:

Lemma 6. Suppose k ≥ 1 and the parametric model M(x, θ) with x ∈ X and θ ∈ Θ satisfies Assumption (A3). Then E{F(x, θ)} = ∫_Θ F(x, θ) Q(dθ) is element-wise continuous with respect to all continuous factors of x ∈ X.

Proof of Lemma 6. Given any x_0 = (x_{01}, ..., x_{0d})^T ∈ X, we let {x_n = (x_{n1}, ...
, x_{nd})^T ∈ X | n ≥ 1} be a sequence of design points such that lim_{n→∞} x_n = x_0. If k < d, then we must have (x_{n,k+1}, ..., x_{nd}) ≡ (x_{0,k+1}, ..., x_{0d}) for large enough n. For any s, t ∈ {1, ..., p}, since F_{st}(x, θ) is continuous with respect to x^{(1)} = (x_1, ..., x_k)^T, we must have lim_{n→∞} F_{st}(x_n, θ) = F_{st}(x_0, θ) for each θ ∈ Θ. Since (A3) is satisfied, according to the Dominated Convergence Theorem (DCT; see, for example, Theorem 5.3.3 in Resnick (2003)), we must have

  lim_{n→∞} E{F_{st}(x_n, Θ)} = lim_{n→∞} ∫_Θ F_{st}(x_n, θ) Q(dθ) = ∫_Θ F_{st}(x_0, θ) Q(dθ)  (by DCT)  = E{F_{st}(x_0, Θ)}

for each x_0 ∈ X, as long as lim_{n→∞} x_n = x_0. That is, E{F(x, Θ)} is element-wise continuous with respect to x^{(1)}. □

Since the Fisher information matrices F(x, θ̂_j) and F(x, θ) are symmetric and nonnegative definite for all x ∈ X and θ̂_j, θ ∈ Θ, and both integration and convex combination preserve symmetry and nonnegative definiteness, F(ξ, θ), F(ξ, θ̂_j), F_SEW(ξ), and F_EW(ξ) are symmetric and nonnegative definite as well.

For any θ ∈ Θ, F(ξ, θ) is linear in ξ ∈ Ξ(X). Actually, for any two designs ξ_1, ξ_2 ∈ Ξ(X) and any λ ∈ [0, 1], it can be verified that

  F(λξ_1 + (1-λ)ξ_2, θ) = λF(ξ_1, θ) + (1-λ)F(ξ_2, θ),
which further implies

  F_SEW(λξ_1 + (1-λ)ξ_2) = λF_SEW(ξ_1) + (1-λ)F_SEW(ξ_2),
  F_EW(λξ_1 + (1-λ)ξ_2) = λF_EW(ξ_1) + (1-λ)F_EW(ξ_2).

That is, both F_SEW(X) and F_EW(X) are convex.

Since X is compact under Assumption (A1), it can be verified that Ξ(X) is also compact under the topology of weak convergence; that is, ξ_n converges weakly to ξ_0 as n goes to ∞ if and only if lim_{n→∞} ∫_X f(x) ξ_n(dx) = ∫_X f(x) ξ_0(dx) for all bounded continuous functions f on X (Billingsley, 1999; Fedorov and Leonov, 2014). Further, with Assumption (A2), Ê{F(x, Θ)} is element-wise continuous with respect to all continuous factors of x ∈ X due to Lemma 5, and must be bounded due to the compactness of X; then F_SEW as a map from Ξ(X) to R^{p×p} is also bounded and continuous, and thus its image F_SEW(X) is compact. Similarly, with Assumption (A3) and Lemma 6, F_EW is a bounded continuous map from Ξ(X) to R^{p×p}, and F_EW(X) is compact as well. As a summary of the above arguments, we obtain Lemma 1.

Recall that we denote F(ξ) = Ê{F(ξ, Θ)} for sample-based EW D-optimality and E{F(ξ, Θ)} for integral-based EW D-optimality; F_x = Ê{F(x, Θ)} for sample-based EW D-optimality and E{F(x, Θ)} for integral-based EW D-optimality. Then the same set of assumptions listed in Section 2.4.2 of Fedorov and Leonov (2014) can be stated in our notation as follows:

(B1) Ψ(F) is a convex function for F ∈ F_SEW(X) or F_EW(X).
(B2) Ψ(F) is a monotonically non-increasing function for F ∈ F_SEW(X) or F_EW(X).
(B3) There exists a q > 0 such that Ξ(q) is non-empty.
(B4) For any ξ ∈ Ξ(q), ξ̄ ∈ Ξ, and α ∈ (0, 1),

  Ψ[(1-α)F(ξ) + αF(ξ̄)] = Ψ[F(ξ)] + α ∫_X ψ(x, ξ) ξ̄(dx) + o(α)

as α → 0, where ψ(x, ξ) = p - tr[F(ξ)^{-1} F_x].

According to Section 2.4.2 in Fedorov and Leonov (2014), for D-optimality, Assumptions (B1), (B2), and (B4) are always satisfied. Further discussions have been provided in Section 2.2.

S3. Proofs of Main Theorems

Proof of Theorem 1.
Under Assumptions (A1) and (A2), following the proof of the lemma in Section 1.26 of Pukelsheim (1993), it can be verified that {F_SEW(ξ) | ξ ∈ Ξ} ⊂ R^{p×p}, as the convex hull of the set {Ê{F(x, Θ)} | x ∈ X} with dimension p(p+1)/2 due to their symmetry, is convex and compact. Due to Lemma 1 and Carathéodory's Theorem (see, for example, Theorem 2.1.1 in Fedorov (1972)), it can be further verified that for any matrix F ∈ F_SEW(X) with ξ ∈ Ξ(X), there exists a ξ_0 ∈ Ξ with at most p(p+1)/2 + 1 design points such that F_SEW(ξ_0) = F. When F ∈ F_SEW(X) is a boundary point (that is, any open set of F_SEW(X) containing F contains matrices both inside and outside F_SEW(X)), the number of design points in ξ_0 can be at most p(p+1)/2. Similarly, we can obtain the same conclusions for {F_EW(ξ) | ξ ∈ Ξ}, {E{F(x, Θ)} | x ∈ X}, and F_EW(X) under Assumptions (A1) and (A3). □

Proof of Theorem 2. For sample-based EW D-optimality, under Assumptions (A1), (A2), and (B3), due to Lemma 1, F_SEW(X) is convex and compact. Due to Assumption (B3), there exists a sample-based EW D-optimal design ξ* ∈ Ξ(X). Due to the monotonicity of Ψ (see Assumption (B2) in Section S2), Ê{F(ξ*, Θ)} must be a boundary point of F_SEW(X). According to Theorem 1, there exists a sample-based EW D-optimal design with no more than p(p+1)/2 support points. Similarly, for integral-based
EW D-optimality, under Assumptions (A1), (A3), and (B3), there must exist an EW D-optimal design with no more than p(p+1)/2 support points. According to Section 2.4.2 in Fedorov and Leonov (2014), Assumptions (B1), (B2), and (B4) are always satisfied for D-optimality. The remaining parts of Theorem 2 are direct conclusions of Theorem 2.2 in Fedorov and Leonov (2014). □

Proof of Theorem 3. Suppose all the predictor functions h_1, ..., h_{J-1} and h_c defined in (3.7) are continuous with respect to all continuous factors of x ∈ X. Then there exists an M_x > 0, due to the compactness of X, such that ‖h_j(x)‖ ≤ M_x and ‖h_c(x)‖ ≤ M_x for all j = 1, ..., J-1 and x ∈ X. According to Appendix A of Huang et al. (2024), if we further have a bounded Θ, then there exists an M_η > 0 such that ‖η_x‖ = ‖X_x θ‖ ≤ M_η for all θ ∈ Θ and x ∈ X. Then there exist 0 < δ_x < Δ_x < 1 such that δ_x ≤ π_j^x(θ) ≤ Δ_x for all j = 1, ..., J, θ ∈ Θ, and x ∈ X. Then there exists an M_u > 0 such that |u_{st}^x(θ)| ≤ M_u for all s, t = 1, ..., J, θ ∈ Θ, and x ∈ X. According to Theorem 2 in Huang et al. (2024), there exists an M_F > 0 such that ‖F_{st}^x(θ)‖ = ‖F_{st}(x, θ)‖ ≤ M_F for all s, t = 1, ..., J, θ ∈ Θ, and x ∈ X. As a direct conclusion, the Fisher information of an MLM satisfies Assumption (A3). The remaining statements are direct conclusions of Theorem 2 and Corollary 1. □

Proof of Theorem 4. Suppose all the predictor functions h = (h_1, ..., h_p)^T are continuous with respect to all continuous factors of x ∈ X, and Θ is bounded. Since the number of level combinations of the discrete factors is finite, there exist M_x > 0 and M_η > 0 such that ‖h(x)h(x)^T‖ ≤ M_x for all x ∈ X, and |h(x)^T θ| ≤ M_η for all x ∈ X and θ ∈ Θ. Since ν is continuous for commonly used GLMs (see, for example, Table 5 in the Supplementary Material of Huang et al. (2025)), there is an M_F > 0 such that |F_{st}(x, θ)| ≤ M_F for all s, t = 1, ..., p, θ ∈ Θ, and x ∈ X. As a direct conclusion, the Fisher information of a commonly used GLM satisfies Assumption (A3).
The remaining statements are direct conclusions of Theorem 2 and Corollary 1. □

Proof of Theorem 5. According to Theorem 2, design ξ is EW D-optimal if and only if max_{x∈X} d(x, ξ) ≤ p. By applying Theorem 2 and its proof (see their Lemma 1) of Huang et al. (2024), we obtain

  tr(E_{st} u_{st}^x h_s^x (h_t^x)^T) = u_{st}^x tr((h_t^x)^T E_{st} h_s^x) = u_{st}^x (h_t^x)^T E_{st} h_s^x.

Note that u_{st}^x here is either E{u_{st}^x(Θ)} or Ê{u_{st}^x(Θ)}, which differs from Huang et al. (2024). □

Proof of Theorem 6. According to Theorem 2, a necessary and sufficient condition for a design ξ to be EW D-optimal is max_{x∈X} d(x, ξ) ≤ p, which is equivalent to max_{x∈X} tr(F(ξ)^{-1} F_x) ≤ p. Then the conclusions are obtained due to F(ξ) = X_ξ^T W_ξ X_ξ. □

S4. Model Selection and Design Comparison for Paper Feeder Experiment

In this section, we provide more details about the model selection process and the designs obtained for
the paper feeder experiment, which was carried out at Fuji-Xerox by Y. Norio and O. Akira (Joseph and Wu, 2004). In this experiment, there are eight discrete control factors, namely feed belt material (x_1, Type A or Type B), speed (x_2, 288 mm/s, 240 mm/s, or 192 mm/s), drop height (x_3, 3 mm, 2 mm, or 1 mm), center roll (x_4, Absent or Present), belt width (x_5, 10 mm, 20 mm, or 30 mm), tray guidance angle (x_6, 0, 14, or 28), tip angle (x_7, 0, 3.5, or 7), and turf (x_8, None, 1 sheet, or 2 sheets); one noise factor (stack quantity, High or Low; see also Table 2 in Joseph and Wu (2004)); and one continuous control factor, stack force, taking values in [0, 160] according to the original experiment. The noise factor is skipped in this analysis since it is not under the control of users and is not significant either (Joseph and Wu, 2004). Following Joseph and Wu (2004), we adopt the 18 level settings of the eight discrete control factors obtained by coding level 3 as level 1 in column x_4 of Table 5 in Joseph and Wu (2004).

S4.1 Model selection for paper feeder experiment

For the paper feeder experiment, the summarized response variables include the number of misfeeds y_{i1}, the number of precise feedings of the paper y_{i2}, and the number of multifeeds y_{i3} at the experimental setting x_i. It is important to notice that the two types of failures, misfeed and multifeed, cannot occur simultaneously. Therefore, a multinomial logistic model is more appropriate than two separate generalized linear models for analyzing this experiment. For the three-level factors, namely x_2, x_3, x_5, x_6, x_7, and x_8, which are all quantitative factors, we follow Wu and Hamada (2009) and Joseph and Wu (2004) to convert the two degrees of freedom for each factor into linear and quadratic components with contrasts (-1, 0, 1) and (1, -2, 1), respectively. Following Joseph and Wu (2004), we code each of the two-level factors x_1 and x_4, which are qualitative factors, as (-1, 1).
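The coding convention described above can be sketched as follows. The helper names and the 0/1/2 level indexing are our own illustrative choices, not part of the original analysis:

```python
# Level indices 0, 1, 2 for a three-level factor; 0, 1 for a two-level factor.
LINEAR = {0: -1.0, 1: 0.0, 2: 1.0}     # linear contrast (-1, 0, 1)
QUADRATIC = {0: 1.0, 1: -2.0, 2: 1.0}  # quadratic contrast (1, -2, 1)

def code_three_level(level):
    """Return the (linear, quadratic) coding of a three-level factor."""
    return LINEAR[level], QUADRATIC[level]

def code_two_level(level):
    """Code a two-level factor {0, 1} as (-1, 1)."""
    return 2.0 * level - 1.0
```

Each three-level quantitative factor thus contributes two columns (linear and quadratic) to the model matrix, while each two-level qualitative factor contributes a single ±1 column.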
As for the continuous factor stack force M, we transform it to log(M+1), which is slightly different from log M in Joseph and Wu (2004), to deal with a few cases with M = 0.

Table S.1: Model comparison for paper feeder experiment
(po = proportional odds, npo = non-proportional odds)

        Cumulative       Continuation     Adjacent         Baseline
        po       npo     po       npo     po       npo     po       npo
AIC     1011.06  937.87  1037.60  970.59  1027.72  962.99  1661.38  962.99
BIC     1104.34  1113.46 1130.88  1146.18 1121.00  1138.58 1754.66  1138.58

We first use the AIC and BIC criteria to choose the most appropriate multinomial logistic model from eight different candidates (Bu et al., 2020). According to Table S.1, we adopt the cumulative logit model with non-proportional odds (npo), which achieves the smallest AIC and BIC values in Table S.1 and is described as follows:

  log(π_{i1} / (π_{i2} + π_{i3})) = β_{11} + β_{12} log(M_i + 1) + β_{13} x_{i1} + β_{14} x_{i2l} + β_{15} x_{i2q} + β_{16} x_{i3l} + β_{17} x_{i3q} + β_{18} x_{i4} + β_{19} x_{i5l} + β_{110} x_{i5q} + β_{111} x_{i6l} + β_{112} x_{i6q} + β_{113} x_{i7l} + β_{114} x_{i7q} + β_{115} x_{i8l} + β_{116} x_{i8q},

  log((π_{i1} + π_{i2}) / π_{i3}) = β_{21} + β_{22} log(M_i + 1) + β_{23} x_{i1} + β_{24} x_{i2l} + β_{25} x_{i2q} + β_{26} x_{i3l} + β_{27} x_{i3q} + β_{28} x_{i4} + β_{29} x_{i5l} + β_{210} x_{i5q} + β_{211} x_{i6l} + β_{212} x_{i6q} + β_{213} x_{i7l} + β_{214} x_{i7q} + β_{215} x_{i8l} + β_{216} x_{i8q},

where i = 1, ..., m with m = 183 for the original experimental design.

S4.2 Locally D-optimal designs for paper feeder experiment

Using the data listed in Table 3 of
Joseph and Wu (2004), we fit the chosen model, a cumulative logit model with npo and p = 32 parameters. The fitted parameter values are

  θ̂ = (β̂_{11}, β̂_{12}, β̂_{13}, β̂_{14}, β̂_{15}, β̂_{16}, β̂_{17}, β̂_{18}, β̂_{19}, β̂_{110}, β̂_{111}, β̂_{112}, β̂_{113}, β̂_{114}, β̂_{115}, β̂_{116}, β̂_{21}, β̂_{22}, β̂_{23}, β̂_{24}, β̂_{25}, β̂_{26}, β̂_{27}, β̂_{28}, β̂_{29}, β̂_{210}, β̂_{211}, β̂_{212}, β̂_{213}, β̂_{214}, β̂_{215}, β̂_{216})^T
    = (7.995, -3.268, -1.275, 1.531, 0.044, -0.156, -0.141, 0.534,
       -0.261, 0.418, -1.749, -0.084, -0.207, 0.759, 0.782, 0.356,
       10.928, -2.461, -0.409, 0.711, 0.080, -0.144, -0.120, 0.196,
       0.019, -0.023, -0.931, -0.012, -0.133, 0.153, 0.128, 0.125)^T.

Assuming θ̂ to be the true values of the model parameters, we first apply the ForLion algorithm proposed by Huang et al. (2024) and obtain a locally D-optimal approximate design, which consists of 41 design points and is listed as "ForLion" in both Table S.2 and Table S.4. To convert this approximate design with a continuous factor into an exact design, we apply our rounding algorithm (Algorithm 2) to it with merging threshold δ_r = 3.2, grid level L = 2.5, and the number of experimental units n = 1,785 (the same sample size as in the original experiment). The obtained exact design is listed as "ForLion exact grid2.5" in Tables S.2 and S.4.

For comparison purposes, we also apply the lift-one and exchange algorithms of Bu et al. (2020) to two different sets of design points: one set is the same as in the original experiment, and the other is based on the grid points of stack force with pace length 2.5 (that is, {0, 2.5, 5, ..., 157.5, 160}) for each of the 18 experimental settings (runs) of the eight discrete factors. The designs obtained are presented correspondingly as "Bu-appro" (lift-one algorithm on original design points) and "Bu-exact" (exchange algorithm on original design points) in Tables S.2 and S.3, and "Bu-grid2.5" (lift-one algorithm on grid-2.5 points) and "Bu-grid2.5 exact" (exchange algorithm on grid-2.5 points) in Tables S.2 and S.4.
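The comparisons below report relative D-efficiencies of the form (|F(ξ, θ̂)| / |F(ξ_ForLion, θ̂)|)^{1/p}. A minimal sketch of that computation (the function name is our own; log-determinants are used because the determinants involved are on the order of 1e28–1e34):

```python
import numpy as np

def relative_efficiency(F_design, F_reference, p=None):
    """Relative D-efficiency (|F_design| / |F_reference|)**(1/p).

    slogdet avoids overflow for very large determinants (here ~1e34).
    """
    F_design = np.asarray(F_design, dtype=float)
    F_reference = np.asarray(F_reference, dtype=float)
    if p is None:
        p = F_design.shape[0]  # number of model parameters
    s1, logdet1 = np.linalg.slogdet(F_design)
    s2, logdet2 = np.linalg.slogdet(F_reference)
    if s1 <= 0 or s2 <= 0:
        raise ValueError("information matrices must be positive definite")
    return float(np.exp((logdet1 - logdet2) / p))
```

For example, two 2×2 information matrices I and 2I give relative efficiency (1/4)^{1/2} = 0.5.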
According to the summarized information of those designs listed in Table S.2, the ForLion design and its exact designs obtained by Algorithm 2 with grid levels L = 0.1, 0.5, 1, and 2.5, respectively, achieve the highest relative efficiencies with respect to the ForLion design itself, as well as the fewest design points. On the contrary, the lift-one and exchange algorithms of Bu et al. (2020) are much more time-consuming even with grid level 2.5, and the original design and Bu's designs on the original set of design points (Bu-appro and Bu-exact) are not satisfactory in terms of relative efficiency. The ForLion exact designs (with grid-0.1, grid-0.5, grid-1, and grid-2.5) in Table S.2 also show the convenience of adopting our rounding algorithm (Algorithm 2): once an optimal approximate design is obtained, the exact design with a user-specified grid level can be obtained instantly with high relative efficiency.

To show how we choose the merging threshold δ_r = 3.2 for this experiment, Figure S.1 shows how the relative efficiency (left panel) and the number of design points (right panel) change along with the relative distance (i.e., the merging threshold δ_r), where the grid levels 0.1, 0.5, 1, and 2.5, and relative distances between 2 and 6 with an increment of 0.1, are used for illustration. Notably, the next change in the number of design points after δ_r = 3.2 does not occur until δ_r = 31.29, which is too large. As a result, we recommend the merging threshold δ_r = 3.2 for this experiment. The choice
of grid level typically depends on the control accuracy of the factors and the options that the experimenter may have. As illustrated in Table S.2, the experimenter may choose the smallest possible grid level when applicable.

Table S.2: Locally D-optimal designs for paper feeder experiment

Designs                  m    Time (s)  |F(ξ,θ)|   Relative Efficiency
Original allocation      183  -         1.164e+28  62.792%
Bu-appro                 41   62.58     5.930e+32  88.106%
Bu-exact                 41   1598      5.928e+32  88.105%
Bu-grid2.5 appro         50   81393     2.397e+34  98.903%
Bu-grid2.5 exact         48   85443     2.396e+34  98.902%
ForLion                  41   22099     3.411e+34  100.000%
ForLion exact grid-0.1   36   0.01      3.412e+34  100.001%
ForLion exact grid-0.5   36   0.01      3.374e+34  99.965%
ForLion exact grid-1     36   0.01      2.556e+34  99.102%
ForLion exact grid-2.5   36   0.01      2.245e+34  98.701%

Note: m is the number of design points; Relative Efficiency is (|F(ξ, θ̂)| / |F(ξ_ForLion, θ̂)|)^{1/p} with p = 32.

S4.3 More graphs and tables for paper feeder experiment

In this section, we provide (i) Figure S.2 for boxplots of relative efficiencies of different robust designs with respect to locally D-optimal designs, which is a graphical display of Table 2; (ii) Table S.3 for listing designs constructed on the set of design points of the original experiment, including Original (design), Bu-exact by Bu et al. (2020)'s exchange algorithm, Bu-appro by Bu et al. (2020)'s lift-one algorithm, EW Bu-exact by Bu et al. (2020)'s EW exchange algorithm, and EW Bu-appro by Bu et al. (2020)'s EW lift-one algorithm; and (iii) Table S.4 for listing designs constructed on grid points of [0, 160] for stack force.
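The grid-rounding step behind the "exact grid-L" designs can be sketched as below. This is an illustrative fragment under our own naming, not the paper's Algorithm 2, which additionally merges design points within the merging threshold δ_r and repairs the counts so that they sum exactly to n:

```python
def snap_to_grid(value, grid_level, lo=0.0, hi=160.0):
    """Round a continuous factor value (e.g. stack force) to the nearest
    point of the grid {lo, lo + L, lo + 2L, ..., hi} with L = grid_level."""
    k = round((value - lo) / grid_level)
    return float(min(max(lo + k * grid_level, lo), hi))

def round_allocation(weights, n):
    """Naive apportionment of n experimental units to design weights.
    (Algorithm 2 additionally adjusts the counts to sum exactly to n.)"""
    return [int(round(n * w)) for w in weights]
```

For instance, the run-1 ForLion stack-force value 21.839 snaps to 22.5 on the grid-2.5 grid, and the weights (0.0292, 0.0270) with n = 1,785 give counts (52, 48), matching the "ForLion exact grid2.5" entries of Table S.4.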
Figure S.1: Relative efficiency and number of design points of exact designs against relative distance (or merging threshold) for paper feeder experiment

Figure S.2: Boxplots of relative efficiencies of designs in Table 2 with respect to locally D-optimal designs under 100 different sets of parameter values for paper feeder experiment

Table S.3: Designs constructed on the original set of design points for paper feeder experiment
[For each of the 18 runs, the table lists the allocations over the stack-force levels of the original experiment (0 through 160) for five designs: Original, Bu-exact, and EW Bu-exact in counts of experimental units; Bu-appro and EW Bu-appro in weights.]

Table S.4: Designs on [0, 160] of stack force for paper feeder experiment
[For each of the 18 runs, the table lists the stack-force values together with the corresponding weights w_i (approximate designs) or counts n_i (exact designs) for eight designs: Bu-grid2.5, Bu-grid2.5 exact, ForLion, ForLion exact grid2.5, EW Bu-grid2.5, EW Bu-grid2.5 exact, EW ForLion, and EW ForLion exact grid2.5; each design uses at most four stack-force values per run.]

S5.
Robustness of Sample-based EW Designs

In this section, we use the minimizing surface defects experiment (see Section 4.2) to illustrate the robustness of sample-based EW D-optimal designs against different sets of parameter vectors. For illustration purposes, we simulate five random sets of parameter vectors from the prior distributions listed in Table S1 of Huang et al. (2024), denoted by \(\Theta^{(r)} = \{\theta^{(r)}_1, \dots, \theta^{(r)}_B\}\), with \(B = 1000\) and \(r \in \{1,2,3,4,5\}\). For each \(r = 1, \dots, 5\), we obtain the corresponding sample-based EW D-optimal design \(\xi^{(r)}\) for the minimizing surface defects experiment, which maximizes \(f_{SEW}(\xi) = |\hat{E}\{F(\xi,\Theta^{(r)})\}| = |B^{-1}\sum_{j=1}^{B} F(\xi,\theta^{(r)}_j)|\). The numbers of design points are 15, 13, 21, 19, and 17, respectively. For \(l\),
\(r \in \{1,2,3,4,5\}\), we calculate the relative efficiencies of \(\xi^{(l)}\) with respect to \(\xi^{(r)}\) under \(\Theta^{(r)}\), that is,
\[
\left( \frac{|\hat{E}\{F(\xi^{(l)},\Theta^{(r)})\}|}{|\hat{E}\{F(\xi^{(r)},\Theta^{(r)})\}|} \right)^{1/p}
\]
with \(p = 10\). The relative efficiencies are shown in the following matrix

        l=1      l=2      l=3      l=4      l=5
r=1   1.00000  0.99179  0.99586  0.99284  0.98470
r=2   0.98258  1.00000  0.98508  0.98134  0.98117
r=3   0.99078  0.98748  1.00000  0.98165  0.98632
r=4   0.97629  0.98376  0.97910  1.00000  0.98501
r=5   0.97947  0.98850  0.98804  0.98842  1.00000

with the minimum relative efficiency 0.97629. By applying the rounding algorithm (Algorithm 2) with merging threshold L = 1 and the number of experimental units n = 1000, we obtain the corresponding exact designs, whose numbers of design points are still 15, 13, 21, 19, and 17, respectively. Their relative efficiencies are

        l=1      l=2      l=3      l=4      l=5
r=1   1.00000  0.99144  0.99593  0.99299  0.98448
r=2   0.98287  1.00000  0.98535  0.98174  0.98128
r=3   0.99080  0.98716  1.00000  0.98184  0.98614
r=4   0.97634  0.98359  0.97916  1.00000  0.98488
r=5   0.97959  0.98836  0.98797  0.98846  1.00000

with the minimum relative efficiency 0.97634. In terms of relative efficiencies, the sample-based EW D-optimal designs for this experiment are fairly robust against the different sets of parameter vectors.

S6. More Examples

In this section, we provide two more examples, including an electrostatic discharge experiment example and a three-continuous-factor artificial example, both under generalized linear models.

S6.1 Electrostatic discharge experiment

An electrostatic discharge (ESD) experiment was considered by Lukemire et al. (2019) and Huang et al. (2024), which involves a binary response, four discrete factors, namely LotA, LotB, ESD, and Pulse, all taking values in \(\{-1,1\}\), and one continuous factor Voltage in \([25,45]\). The list of factors, the corresponding parameters, the nominal values adopted by Lukemire et al. (2019), and the prior distributions adopted by Huang et al.
(2024) are summarized in Table S.5. Lukemire et al. (2019) employed a d-QPSO algorithm and derived a locally D-optimal approximate design with 13 support points for a logistic model \(Y \sim \mathrm{Bernoulli}(\mu)\) with \(\mathrm{logit}(\mu) = \beta_0 + \beta_1\mathrm{LotA} + \beta_2\mathrm{LotB} + \beta_3\mathrm{ESD} + \beta_4\mathrm{Pulse} + \beta_5\mathrm{Voltage} + \beta_{34}(\mathrm{ESD}\times\mathrm{Pulse})\). Huang et al. (2024) used their ForLion algorithm and found a slightly better locally D-optimal design in terms of relative efficiency, which consists of 14 design points. Both designs were constructed under the assumption that the nominal values listed in Table S.5 are the true values. In practice, however, an experimenter may not possess precise parameter values before the experiment. As an illustration, we adopt the prior distributions listed in Table S.5 (i.e., independent uniform distributions) on the model parameters. By applying our EW ForLion algorithm to the same model with the specified prior distributions, we obtain the integral-based EW D-optimal approximate design, presented on the left side of Table S.6, which consists of 18 design points. Subsequently, we utilize our rounding algorithm with merging threshold L = 0.1 and the number
of experimental units \(n = 100\) (as an illustration) to obtain an exact design, presented on the right side of Table S.6, which consists of 17 design points (the 18th design point of the approximate design vanishes due to its low weight).

Table S.5: Factors and parameters for electrostatic discharge experiment

Factor Type | Factor (Parameter)  | Levels/Range | Prior Distribution | Nominal Value
            | Intercept (β0)      |              | U(−8,−7)           | −7.5
Discrete    | Lot A (β1)          | {−1,1}       | U(1,2)             | 1.5
Discrete    | Lot B (β2)          | {−1,1}       | U(−0.3,−0.1)       | −0.2
Discrete    | ESD (β3)            | {−1,1}       | U(−0.3,0)          | −0.15
Discrete    | Pulse (β4)          | {−1,1}       | U(0.1,0.4)         | 0.25
Continuous  | Voltage (β5)        | [25,45]      | U(0.25,0.45)       | 0.35
Discrete    | ESD × Pulse (β34)   | {−1,1}       | U(0.35,0.45)       | 0.4

Table S.6: EW D-optimal approximate design (left) and exact design (right, n = 100, L = 0.1) for electrostatic discharge experiment

point | Lot A | Lot B | ESD | Pulse | Voltage | p_i(%) || Voltage | n_i
1   | −1 | −1 | −1 | −1 | 25     | 8.48 || 25   | 8
2   | −1 | −1 | −1 |  1 | 25     | 8.75 || 25   | 9
3   | −1 | −1 |  1 | −1 | 25     | 4.10 || 25   | 4
4   | −1 | −1 |  1 |  1 | 25     | 8.56 || 25   | 9
5   | −1 |  1 | −1 | −1 | 25     | 6.90 || 25   | 7
6   | −1 |  1 | −1 |  1 | 25     | 5.15 || 25   | 5
7   | −1 |  1 |  1 | −1 | 25     | 9.01 || 25   | 9
8   | −1 |  1 |  1 |  1 | 25     | 8.45 || 25   | 8
9   |  1 | −1 |  1 | −1 | 25     | 7.43 || 25   | 7
10  |  1 |  1 | −1 | −1 | 25     | 3.56 || 25   | 4
11  |  1 |  1 | −1 |  1 | 25     | 6.21 || 25   | 6
12  |  1 |  1 |  1 | −1 | 25     | 4.43 || 25   | 4
13  |  1 |  1 |  1 |  1 | 25     | 0.90 || 25   | 1
14  | −1 |  1 |  1 | −1 | 38.948 | 7.94 || 38.9 | 8
15  | −1 |  1 | −1 | −1 | 34.023 | 1.57 || 34.0 | 2
16  | −1 |  1 | −1 |  1 | 35.405 | 3.80 || 35.4 | 4
17  | −1 | −1 |  1 | −1 | 37.196 | 4.55 || 37.2 | 5
18  | −1 |  1 |  1 |  1 | 33.088 | 0.22 ||      |

For comparison, we randomly generate \(N = 10{,}000\) parameter vectors \(\{\theta_1, \dots, \theta_N\}\) from the prior distributions listed in Table S.5. For each \(j \in \{1, \dots, N\}\) and each \(\xi\) under comparison, we compute \(|F(\xi,\theta_j)|^{1/p} = |X_\xi^T W_\xi(\theta_j) X_\xi|^{1/p}\) with \(p = 7\) in this case.
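The quantity \(|X_\xi^T W_\xi(\theta) X_\xi|^{1/p}\) used in this comparison can be computed directly from a design's model matrix and weights. The following is a minimal sketch for a logistic model; the two-point design, its weights, and the parameter vector are hypothetical illustrations, not taken from Table S.6.

```python
import numpy as np

def info_criterion(X, beta, weights):
    """Compute |X^T W X|^(1/p) for a logistic model, where
    W = diag(w_i * mu_i * (1 - mu_i)) and w_i are design weights."""
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))
    W = weights * mu * (1.0 - mu)          # GLM weights for the logit link
    F = X.T @ (W[:, None] * X)             # Fisher information matrix F(xi, theta)
    p = X.shape[1]
    return np.linalg.det(F) ** (1.0 / p)

# Hypothetical 2-point design for a one-factor logistic model (intercept + slope)
X = np.array([[1.0, -1.0], [1.0, 1.0]])    # model matrix rows (1, x_i)
w = np.array([0.5, 0.5])                   # equal design weights
beta = np.array([0.0, 1.0])                # hypothetical parameter vector
val = info_criterion(X, beta, w)
```

For the EW versions of the criterion, one would average the information matrices \(F(\xi,\theta_j)\) over the sampled parameter vectors before taking the determinant.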
In Figure S.3, we present the resulting values as frequency polygons for our EW ForLion design, the locally D-optimal design ("ForLion") obtained by Huang et al. (2024), and the d-QPSO design ("PSO") obtained by Lukemire et al. (2019). Compared with the ForLion or PSO design, the frequency polygon for the EW ForLion design shows a slightly higher density at low objective function values, but also a noticeably increased frequency
in the higher-value range. Indeed, Table S.7 shows that the overall mean and median objective function values for the EW ForLion designs (the approximate and exact designs listed in Table S.6) are higher than those of the ForLion and PSO designs.

Figure S.3: Frequency polygons of objective function values based on 10,000 simulated parameter vectors for electrostatic discharge experiment

Table S.7: Mean and median objective function values based on 10,000 simulated parameter vectors for electrostatic discharge experiment

Design           | Mean |X^T W X|^{1/7} | Median |X^T W X|^{1/7}
EW ForLion       | 0.14677 | 0.16422
EW ForLion exact | 0.14676 | 0.16410
ForLion          | 0.14253 | 0.14990
PSO              | 0.14187 | 0.14878

S6.2 A three-continuous-factor example under a GLM

This example refers to Example 4.7 of Stufken and Yang (2012) on a logistic regression model with three continuous factors, represented as \(\mathrm{logit}(\mu_i) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3}\), where the factors were originally defined as \(x_{i1} \in [-2,2]\), \(x_{i2} \in [-1,1]\), and \(x_{i3} \in (-\infty,\infty)\). In this section, for illustration purposes, we reconsider this example by letting \(x_{i3} \in [-3,3]\) and assuming independent priors \(\beta_0 \sim N(1,\sigma^2)\), \(\beta_1 \sim N(-0.5,\sigma^2)\), \(\beta_2 \sim N(0.5,\sigma^2)\), and \(\beta_3 \sim N(1,\sigma^2)\), with \(\sigma = 1\). Using our EW ForLion algorithm, we obtain an EW D-optimal approximate design, which costs 783 seconds. We further apply our rounding algorithm to the approximate design with \(L = 0.0001\) and \(n = 100\), and obtain an exact design in just 0.45 second. Remarkably, both designs consist of 7 design points, as detailed in Table S.8, which are different from the 8-point design recommended by Stufken and Yang (2012) with unconstrained \(x_{i3}\). We compare our designs ("EW ForLion" and "EW ForLion exact-0.0001") with the EW D-optimal designs obtained by applying Bu et al. (2020)'s EW lift-one and exchange algorithms to grid points of the continuous factors, discretizing them into 2, 4, 6, 8, and 10 evenly distributed grid points within the corresponding design regions.
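The grid-based comparison discretizes each continuous factor into evenly spaced levels and takes all combinations as the candidate set. A minimal sketch, using the factor ranges of this example (the helper name is ours):

```python
import numpy as np
from itertools import product

def grid_candidates(ranges, n_grid):
    """Discretize each continuous factor into n_grid evenly spaced
    levels and return the full factorial candidate set."""
    levels = [np.linspace(lo, hi, n_grid) for lo, hi in ranges]
    return np.array(list(product(*levels)))

# Factor ranges x1 in [-2,2], x2 in [-1,1], x3 in [-3,3], 4 levels each
cands = grid_candidates([(-2, 2), (-1, 1), (-3, 3)], n_grid=4)
```

The candidate set grows as `n_grid` to the power of the number of factors, which is why the grid-based algorithms become computationally intensive for fine grids.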
When the continuous variables are discretized into 10 evenly distributed grid points, Bu et al. (2020)'s EW lift-one and exchange algorithms no longer work due to computational intensity. The summary information of these designs is presented in Table S.9, from which we can see that our EW ForLion algorithm produces a design that is more robust against parameter misspecification and also requires fewer design points.

Table S.8: EW D-optimal approximate design (left) and exact design (right, n = 100) for three-continuous-factor example

point | x1 | x2 | x3 | p_i(%) || x1 | x2 | x3 | n_i
1 | −2      | −1       | −3       | 7.231  || −2     | −1      | −3      | 7
2 |  2      | −1       | −3       | 20.785 ||  2     | −1      | −3      | 21
3 | −2      |  1       | −1.8     | 19.491 || −2     |  1      | −1.8    | 19
4 |  2      |  1       |  3       | 2.718  ||  2     |  1      |  3      | 3
5 |  2      |  1       | −0.3321  | 18.870 ||  2     |  1      | −0.3321 | 19
6 | −2      | −1       | −0.08665 | 10.951 || −2     | −1      | −0.0867 | 11
7 | 0.94673 | −0.99688 |  2.99323 | 19.954 || 0.9467 | −0.9969 |  2.9932 | 20

Table S.9: Robust designs for three-continuous-factor example

Design          | m  | Time (s) | |E{F(ξ,Θ)}| | Relative Efficiency
EW Grid-2       | 7  | 0.06  | 0.0009563 | 90.711%
EW Grid-2 exact | 7  | 0.11  | 0.0009559 | 90.700%
EW Grid-4       | 8  | 4.33  | 0.0012320 | 96.641%
EW Grid-4 exact | 10 | 5.03  | 0.0012314 | 96.629%
EW Grid-6       | 8  | 13.87 | 0.0013278 | 98.469%
EW Grid-6 exact | 8  | 19.20 | 0.0013271 | 98.455%
EW Grid-8       | 8  | 37.58 | 0.0013562 | 98.989%
EW Grid-8 exact | 9  |
GLOBAL ACTIVITY SCORES∗

RUILONG YUE† AND GIRAY ÖKTEN‡

Abstract. We introduce a new global sensitivity measure called the global activity score. The new measure is obtained from the global active subspace method, similar to the way the activity score measure is obtained from the active subspace method. We present theoretical results on the relationship between Sobol' sensitivity indices and global activity scores. We present numerical examples where we compare the results of the global sensitivity analysis of some models using Sobol' sensitivity indices, derivative-based sensitivity measures, activity scores, and global activity scores. The numerical results reveal the scenarios when the global activity score has advantages over derivative-based sensitivity measures and activity scores, and when the three measures give similar results.

Key words. Global sensitivity analysis, global active subspace method, Sobol' sensitivity indices

AMS subject classifications. 65C05, 65C20

1. Introduction. Global sensitivity analysis quantifies how uncertainty in the output of a model can be allocated to uncertainties in the model input. The information we gain from this analysis allows us to identify the important input parameters of a model, which allows the modeler, among other things, to reduce the complexity of the model by freezing the variables that are not deemed important. Applications of global sensitivity analysis are widespread, from natural sciences to engineering, and social sciences to mathematical sciences. Several methods for global sensitivity analysis have been introduced in the literature, including Sobol' sensitivity indices, derivative-based global sensitivity measures, and activity scores. In this paper we introduce a new sensitivity measure called the global activity score.
We investigate the relationship between Sobol' indices and global activity scores theoretically, and provide numerical results to demonstrate the advantages of global activity scores over others. The paper is organized as follows. In Section 2 we discuss Sobol' sensitivity indices (Sobol' [20]), derivative-based global sensitivity measures (Sobol' and Kucherenko [19]), and activity scores (Constantine et al. [4]). In Section 3 we introduce our new sensitivity measure, global activity scores. We present numerical results in Section 4 where the aforementioned global sensitivity methods are applied to four examples. We conclude in Section 5.

2. Global Sensitivity Measures. In this section we review some popular types of sensitivity measures, in particular, the variance-based and derivative-based measures. For a comprehensive survey of sensitivity measures, see Iooss and Lemaître [9].

2.1. Sobol' Sensitivity Indices. Consider a \(d\)-dimensional input vector \(\boldsymbol{x} = (x_1, \dots, x_d)\), a square-integrable function \(f(\boldsymbol{x})\) defined on \((0,1)^d\), and let the index set be defined as \(D = \{1, 2, \dots, d\}\). The ANOVA decomposition of \(f(\boldsymbol{x})\) is
\[
f(\boldsymbol{x}) = \sum_{u \subseteq D} f_u(\boldsymbol{x}_u),
\]
where \(f_u(\boldsymbol{x}_u)\) is the component function that depends only on \(\boldsymbol{x}_u\). For the empty set, we put \(f_\emptyset = \int f(\boldsymbol{x})\,d\boldsymbol{x}\).

∗Submitted to the editors DATE.
†Department of Mathematics, Florida State University, Tallahassee, FL (ryue@math.fsu.edu).
‡Corresponding author. Department of Mathematics, Florida State University, Tallahassee, FL (okten@math.fsu.edu).

arXiv:2505.00711v2 [math.ST] 19 May 2025

If we assume \(\boldsymbol{x}\) has the uniform distribution on \((0,1)^d\), we can write
\[
E[f(\boldsymbol{x})] = \int_{(0,1)^d} f(\boldsymbol{x})\,d\boldsymbol{x},
\qquad
\operatorname{Var}(f(\boldsymbol{x})) = \sigma^2 = \int_{(0,1)^d} f^2(\boldsymbol{x})\,d\boldsymbol{x} - E[f(\boldsymbol{x})]^2 .
\]
As a consequence of the orthogonality of the ANOVA decomposition, the variance of \(f\) can be written as
\[
\sigma^2 = \sum_{u \subseteq D} \sigma_u^2 ,
\]
where \(\sigma_u^2\) is the variance of the component function \(f_u\):
\[
\sigma_u^2 = \int_{(0,1)^d} f_u^2(\boldsymbol{x})\,d\boldsymbol{x} - \left( \int_{(0,1)^d} f_u(\boldsymbol{x})\,d\boldsymbol{x} \right)^2 = \int_{(0,1)^d} f_u^2(\boldsymbol{x})\,d\boldsymbol{x} .
\]
The Sobol' sensitivity indices for the subset \(u\) are defined as
\[
S_u = \frac{1}{\sigma^2} \sum_{v \subseteq u} \sigma_v^2 = \frac{\tau_u}{\sigma^2}
\qquad \text{and} \qquad
\overline{S}_u = \frac{1}{\sigma^2} \sum_{v \cap u \neq \emptyset} \sigma_v^2 = \frac{\overline{\tau}_u}{\sigma^2},
\]
where \(S_u\) is called the lower Sobol' sensitivity index (or, the main effect) and \(\overline{S}_u\) is called the upper Sobol' sensitivity index (or, the total effect). If \(S_u\) is close to 1, then the parameters \(\boldsymbol{x}_u\) are viewed as important to the model output. In practice the case when \(u\) is a singleton \(\{i\}\) is often considered. If \(S_{\{i\}}\) is close to 0, we consider \(x_i\) to be an unimportant variable, and freeze it to its mean value to simplify the model \(f\). Upper Sobol' indices can be written as (Sobol' [20])
\[
\overline{S}_i = \frac{1}{2\sigma^2} \int \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr)^2 \, d\boldsymbol{z}\, dv_i .
\tag{2.1}
\]
Eqn. (2.1) can be generalized to random vectors \(\boldsymbol{x} = (x_1, \dots, x_d)\) with distribution function \(\boldsymbol{F}(\boldsymbol{x}) = F_1(x_1) \cdots F_d(x_d)\) (Kucherenko et al. [12]) as
\[
\overline{S}_i = \frac{1}{2\sigma^2} \int \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, dF_i(v_i) .
\tag{2.2}
\]

2.2. Derivative-based sensitivity measures. Computationally more efficient sensitivity measures can be obtained if the partial derivatives of \(f\) exist. For example, Campolongo et al. [2] introduced a global sensitivity measure based on \(\int_{(0,1)^d} \bigl| \partial f(\boldsymbol{x})/\partial x_i \bigr| \, d\boldsymbol{x}\), and Sobol' & Kucherenko [19] introduced a global sensitivity measure based on \(\int_{(0,1)^d} \bigl( \partial f(\boldsymbol{x})/\partial x_i \bigr)^2 \, d\boldsymbol{x}\). Here we discuss the second measure further. The derivative-based global sensitivity measure (DGSM) of Sobol' & Kucherenko [19] is given by
\[
v_i = \int_{(0,1)^d} \left( \frac{\partial f(\boldsymbol{x})}{\partial x_i} \right)^2 d\boldsymbol{x} = E\!\left[ \left( \frac{\partial f(\boldsymbol{x})}{\partial x_i} \right)^2 \right].
\tag{2.3}
\]
Sobol' [18] showed that if \(f(\boldsymbol{x})\) is a linear function in each of its components \(x_i\), then
\[
S_i = \frac{1}{12} \frac{v_i}{\sigma^2}.
\tag{2.4}
\]
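As an illustration of Eqn. (2.3), the DGSM can be estimated by Monte Carlo with forward finite differences (the sampling scheme, increment, and test function here are our illustrative choices):

```python
import numpy as np

def dgsm(f, d, N=10_000, h=1e-3, rng=None):
    """Monte Carlo estimate of v_i = E[(df/dx_i)^2] over (0,1)^d,
    with forward finite differences in place of exact derivatives."""
    rng = np.random.default_rng(rng)
    x = rng.random((N, d))
    fx = f(x)
    v = np.empty(d)
    for i in range(d):
        xh = x.copy()
        xh[:, i] += h                      # perturb only coordinate i
        v[i] = np.mean(((f(xh) - fx) / h) ** 2)
    return v

# Sanity check on f(x) = 3*x1 + x2, whose DGSMs are 9 and 1
v = dgsm(lambda x: 3 * x[:, 0] + x[:, 1], d=2, N=2000, rng=0)
```

For this linear example, \(\sigma^2 = 9/12 + 1/12 = 10/12\), so Eqn. (2.4) gives \(S_1 = v_1/(12\sigma^2) = 0.9\) and \(S_2 = 0.1\), consistent with the variance shares of the two inputs.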
In general, Sobol' & Kucherenko [19] established the following inequality:
\[
\overline{S}_i \le \frac{1}{\pi^2} \frac{v_i}{\sigma^2}.
\tag{2.5}
\]
DGSM can be generalized to nonuniform distributions, where the random vector \(\boldsymbol{x} = (x_1, \dots, x_d)\) has independent components with marginal distributions \(F_1, \dots, F_d\), and each \(F_i\) has a corresponding density function \(f_i\). Sobol' & Kucherenko [19] and Kucherenko & Iooss [11] obtained several results in this setting. In particular, the following generalization of inequality (2.5) was obtained in [11]:
\[
\overline{S}_i \le D(F_i) \frac{v_i}{\sigma^2},
\tag{2.6}
\]
where
\[
D(F_i) = 4 \sup_{x \in \mathbb{R}} \left( \frac{\min(F_i(x),\, 1 - F_i(x))}{f_i(x)} \right)^2 .
\]

2.3. Activity scores. The active subspace (AS) method (Constantine et al. [4]) finds the important directions of a function using the gradient of the function, and uses the important directions to reduce the domain of the function to a subspace. The activity score, introduced by Constantine & Diaz [3], is a global sensitivity measure obtained from this information. Consider a square-integrable function \(f(\boldsymbol{x})\) defined on \((0,1)^d\), with finite partial derivatives that are square-integrable. Let \(p(\boldsymbol{x})\) be the uniform probability density function on \((0,1)^d\) and consider the gradient of \(f\):
\[
\nabla f(\boldsymbol{x}) = \left( \frac{\partial f(\boldsymbol{x})}{\partial x_1}, \frac{\partial f(\boldsymbol{x})}{\partial x_2}, \dots, \frac{\partial f(\boldsymbol{x})}{\partial x_d} \right)^T .
\]
Define the matrix \(C_{as}\) as
\[
C_{as} = E[\nabla f(\boldsymbol{x}) \nabla f(\boldsymbol{x})^T],
\tag{2.7}
\]
where the expectation is computed using the density function \(p\). Let the eigenvalue decomposition of \(C_{as}\) be \(C_{as} = W \Lambda_{as} W^T\), where
\(W = [w_1, \dots, w_d]\) is the \(d \times d\) orthogonal matrix of eigenvectors, and \(\Lambda_{as} = \operatorname{diag}(\lambda_1, \dots, \lambda_d)\) with \(\lambda_1 \ge \dots \ge \lambda_d \ge 0\) is the diagonal matrix of eigenvalues in descending order. The matrix \(C_{as}\) can be approximated using the Monte Carlo method as
\[
C_{as} \approx \hat{C}_{as} = \frac{1}{N} \sum_{j=1}^{N} \bigl( \nabla_{\boldsymbol{x}} f(\boldsymbol{x}^{(j)}) \bigr) \bigl( \nabla_{\boldsymbol{x}} f(\boldsymbol{x}^{(j)}) \bigr)^T ,
\tag{2.8}
\]
where \(\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(N)}\) is a random sample of size \(N\) from the density \(p\). If there exists an integer \(m\) such that the eigenvalues \(\lambda_{m+1}, \dots, \lambda_d\) are sufficiently small, then the active subspace method approximates \(f(\boldsymbol{x})\) with a lower dimensional function \(g(W_1^T \boldsymbol{x})\), where \(W_1\) is the \(d \times m\) matrix containing the first \(m\) eigenvectors. The dimension of \(g\) is \(m\), whereas the dimension of \(f\) is \(d\). The activity score for the \(i\)th parameter is defined as
\[
\alpha_i(m) = \sum_{j=1}^{m} \lambda_j w_{ij}^2 ,
\tag{2.9}
\]
where \(w_j = [w_{1j}, \dots, w_{dj}]^T\) is the \(j\)th eigenvector, \(m \le d\), and \(i = 1, \dots, d\). Constantine & Diaz [3] showed that the activity scores are bounded by the DGSM:
\[
\alpha_i(m) \le v_i, \qquad i = 1, \dots, d,
\]
and the inequality becomes an equality if \(m = d\). The following inequality between the Sobol' sensitivity index and activity scores was also established in [3]:
\[
\overline{S}_i \le \frac{1}{\pi^2} \frac{\alpha_i(m) + \lambda_{m+1}}{\sigma^2}, \qquad i = 1, \dots, d, \quad m = 1, \dots, d-1 .
\]
Duan & Ökten [5] note that this result can be generalized to non-uniform distributions over \(\mathbb{R}^d\) using Eqn. (2.6) as
\[
\overline{S}_i \le D(F_i) \frac{\alpha_i(m) + \lambda_{m+1}}{\sigma^2},
\]
where \(F_i\) and \(f_i\), \(i = 1, \dots, d\), are the marginal distributions and density functions for the components of the random input vector \(\boldsymbol{x}\).

3. Global Activity Scores. In this section we introduce a new global sensitivity measure: global activity scores. This measure is based on the global active subspace (GAS) method introduced by Yue & Ökten [22]. We first give a brief description of the GAS method. Consider a square-integrable real-valued function \(f(\boldsymbol{z})\) with domain \(\Omega \subset \mathbb{R}^d\) and finite second-order partial derivatives.
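A minimal sketch of the estimator (2.8) and the activity scores (2.9), assuming gradient samples are already available (here produced by a toy function with constant gradient, so the result can be checked by hand):

```python
import numpy as np

def activity_scores(grads, m):
    """Activity scores alpha_i(m) from sampled gradients (Eqn. 2.9).
    grads: (N, d) array of gradient samples of f."""
    C_hat = grads.T @ grads / grads.shape[0]   # Monte Carlo estimate of C_as
    lam, W = np.linalg.eigh(C_hat)             # eigh returns ascending order
    lam, W = lam[::-1], W[:, ::-1]             # reorder to descending
    return (W[:, :m] ** 2) @ lam[:m]           # sum_j lam_j * w_ij^2

# For linear f(x) = a^T x the gradient is the constant vector a,
# so C_as = a a^T and alpha_i(1) = a_i^2
a = np.array([1.0, 2.0, 3.0])
grads = np.tile(a, (100, 1))
alpha = activity_scores(grads, m=1)
```

With `m = d` the scores reduce to the diagonal of \(\hat{C}_{as}\), i.e., the finite-sample analogue of \(\alpha_i(d) = v_i\).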
Suppose that \(\Omega\) is equipped with a probability measure with a cumulative distribution function of the form \(\boldsymbol{F}(\boldsymbol{z}) = F_1(z_1) \cdots F_d(z_d)\), where the \(F_i\) are marginal distribution functions. Define a vector function \(D_{\boldsymbol{z}} f : \Omega \times \Omega \to \mathbb{R}^d\) as follows:
\[
D_{\boldsymbol{z}} f(\boldsymbol{v}, \boldsymbol{z}) = [D_{\boldsymbol{z},1} f(v_1, \boldsymbol{z}), \dots, D_{\boldsymbol{z},d} f(v_d, \boldsymbol{z})]^T ,
\]
where
\[
D_{\boldsymbol{z},i} f(v_i, \boldsymbol{z}) = \bigl( f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) - f(\boldsymbol{z}) \bigr) / (v_i - z_i).
\tag{3.1}
\]
Here \(v_{\{i\}}\) corresponds to the \(i\)th input of the vector \(\boldsymbol{v}\), and \(\boldsymbol{z}_{-\{i\}}\) is the vector of inputs corresponding to those indices in the complement of \(\{i\}\). In this way, \((v_{\{i\}}, \boldsymbol{z}_{-\{i\}})\) is the vector that changes the \(i\)th input of \(\boldsymbol{z}\) to \(v_i\), while keeping the other inputs of \(\boldsymbol{z}\) the same. Define the \(d \times d\) matrix \(C_{gas}\) by
\[
C_{gas} = E\bigl[ E[ (D_{\boldsymbol{z}} f)(D_{\boldsymbol{z}} f)^T \mid \boldsymbol{z} ] \bigr],
\tag{3.2}
\]
where the inner conditional expectation fixes \(\boldsymbol{z}\) and averages over the components of \(\boldsymbol{v} = [v_1, \dots, v_d]^T\), and the outside expectation averages with respect to \(\boldsymbol{z}\). The existence of these expectations is discussed in Yue & Ökten [22]. Consider the eigenvalue decomposition of \(C_{gas}\):
\[
C_{gas} = U \Lambda_{gas} U^T ,
\tag{3.3}
\]
where \(U = [u_1, \dots, u_d]\) is the \(d \times d\) orthogonal matrix of eigenvectors, and
\(\Lambda_{gas} = \operatorname{diag}(\lambda_1, \dots, \lambda_d)\) with \(\lambda_1 \ge \dots \ge \lambda_d \ge 0\) is the diagonal matrix of eigenvalues in descending order. Similar to the active subspace method, we seek an integer \(m < d\) such that the eigenvalues \(\lambda_{m+1}, \dots, \lambda_d\) are sufficiently small, and then approximate \(f(\boldsymbol{z})\) with a lower dimensional function \(g(U_1^T \boldsymbol{z})\), where \(U_1\) is the \(d \times m\) matrix containing the first \(m\) eigenvectors. The estimation of \(C_{gas}\) can be done using the Monte Carlo method:
\[
C_{gas} \approx \hat{C}_{gas} = \frac{1}{M_1 M_2} \sum_{i=1}^{M_1} \sum_{j=1}^{M_2} \bigl( D_{\boldsymbol{z}^{(i)}} f(\boldsymbol{v}^{(i,j)}, \boldsymbol{z}^{(i)}) \bigr) \bigl( D_{\boldsymbol{z}^{(i)}} f(\boldsymbol{v}^{(i,j)}, \boldsymbol{z}^{(i)}) \bigr)^T ,
\tag{3.4}
\]
where the \(\boldsymbol{z}^{(i)}\)'s and \(\boldsymbol{v}^{(i,j)}\)'s are sampled from their corresponding distributions. More details are discussed in Yue & Ökten [22]. The main difference between the active subspace method and GAS is that the former uses gradient information, whereas the latter uses average rates of change in function values, to determine the important directions of the function. The error analysis of GAS is given by Yue & Ökten [22], who conclude, through numerical examples, that GAS is much more robust than the active subspace method in the presence of noise and discontinuities, and that the methods give similar results in the absence of these factors. The global activity scores are defined based on the eigenvalue decomposition of \(C_{gas}\).

Definition 3.1. The global activity score for the \(i\)th parameter, \(1 \le i \le d\), is
\[
\gamma_i(m) = \sum_{j=1}^{m} \lambda_j u_{ij}^2 ,
\tag{3.5}
\]
where \(\lambda_1, \dots, \lambda_d\) are the eigenvalues from \(\Lambda_{gas}\), \(\boldsymbol{u}_j = [u_{1j}, \dots, u_{dj}]^T\) is the \(j\)th eigenvector from \(U\), and \(m \le d\).

The next theorem presents an inequality between the upper Sobol' sensitivity index \(\overline{S}_i\) and \(\gamma_i(d)\).

Theorem 3.2. If \(\Omega = (0,1)^d\), equipped with the uniform probability measure, then
\[
\overline{S}_i \le \frac{1}{2} \frac{\gamma_i(d)}{\sigma^2}, \qquad i = 1, \dots, d .
\tag{3.6}
\]

Proof. Consider the upper Sobol' index \(\overline{S}_i\):
\[
\overline{S}_i = \frac{1}{2\sigma^2} \int \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr)^2 \, d\boldsymbol{z}\, dv_i .
\]
Since \((v_i - z_i)^2 \le 1\) for any \(v_i, z_i\) in \((0,1)\), we have
\[
\overline{S}_i \le \frac{1}{2\sigma^2} \int \Bigl( \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr) / (z_i - v_i) \Bigr)^2 \, d\boldsymbol{z}\, dv_i .
\]
Let \(\overline{\boldsymbol{S}} = [\overline{S}_1, \dots, \overline{S}_d]^T\). We have
\[
\overline{\boldsymbol{S}} \le \frac{1}{2\sigma^2} \operatorname{diag}(C_{gas}) = \frac{1}{2\sigma^2} \operatorname{diag}(U \Lambda_{gas} U^T),
\]
where \(\operatorname{diag}(A)\) is the vector obtained from the diagonal elements of the matrix \(A\). Since the \(i\)th value of \(\operatorname{diag}(U \Lambda_{gas} U^T)\) is \(\gamma_i(d)\) by the definition of \(\gamma_i(d)\), the above inequality implies \(\overline{S}_i \le \frac{1}{2\sigma^2} \gamma_i(d)\) for \(i = 1, \dots, d\).

The following result links any \(\gamma_i(m)\), \(1 \le m \le d - 1\), to the upper Sobol' index.

Corollary 3.3. If \(\Omega = (0,1)^d\), equipped with the uniform probability measure, then
\[
\overline{S}_i \le \frac{1}{2} \frac{\gamma_i(m) + \lambda_{m+1}}{\sigma^2},
\tag{3.7}
\]
where \(i = 1, \dots, d\), and \(\lambda_{m+1}\) is the \((m+1)\)th eigenvalue of the matrix \(C_{gas}\).

Proof. The proof follows from Theorem 3.2, the orthogonality of \(U\), and the following inequality:
\[
\gamma_i(d) = \gamma_i(m) + \sum_{j=m+1}^{d} \lambda_j u_{ij}^2 \le \gamma_i(m) + \lambda_{m+1} \sum_{j=m+1}^{d} u_{ij}^2 .
\]

Theorem 3.2 and Corollary 3.3 can be generalized to unbounded domains and nonuniform measures. Let \(\Omega = \mathbb{R}^d\), equipped with a probability distribution function of the form \(\boldsymbol{F}(\boldsymbol{z}) = F_1(z_1) \cdots F_d(z_d)\).

Theorem 3.4. Let \(f(\boldsymbol{z})\) be a bounded function on \(\Omega\). For any \(0 < \epsilon < 1\),
\[
\overline{S}_i \le \frac{(b' - a')^2}{2} \frac{\gamma_i(d) + \kappa}{\sigma^2},
\tag{3.8}
\]
where \(a', b' \in \mathbb{R}\) are such that \(\int_{(a',b')^d} d\boldsymbol{F}(\boldsymbol{z}) = 1 - \epsilon\), \(i = 1, \dots, d\), and \(\kappa\) is a positive constant given by
\[
\kappa = \frac{2\epsilon - \epsilon^2}{(b' - a')^2} \sup_{\boldsymbol{z}, \boldsymbol{v} \in \Omega} \bigl( f(\boldsymbol{z}) - f(\boldsymbol{v}) \bigr)^2 .
\tag{3.9}
\]

Proof. From the
definition of generalized upper Sobol' indices (see Kucherenko et al. [12]), we have
\[
\overline{S}_i = \frac{1}{2\sigma^2} \int \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, dF_i(v_i)
= \frac{1}{2\sigma^2} \int_{\Omega \times \Omega} \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, d\boldsymbol{F}(\boldsymbol{v}).
\]
Let \(\Omega' = (a', b')^d\), and write \(\Omega \times \Omega\) as the union of \(\Omega' \times \Omega'\) and the complement \((\Omega' \times \Omega')^c\). Since \((z_i - v_i)^2 \le (b' - a')^2\) for any \(z_i, v_i \in (a', b')\), we have
\[
\int_{\Omega' \times \Omega'} \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, d\boldsymbol{F}(\boldsymbol{v})
\le (b' - a')^2 \int_{\Omega' \times \Omega'} \Bigl( \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr) / (z_i - v_i) \Bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, d\boldsymbol{F}(\boldsymbol{v}).
\]
The integral over the complement of \(\Omega' \times \Omega'\) can be bounded by
\[
\int_{(\Omega' \times \Omega')^c} \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, d\boldsymbol{F}(\boldsymbol{v}) \le (b' - a')^2 \kappa .
\]
Using the bounds for the two integrals and dividing by \(2\sigma^2\), we obtain
\[
\overline{S}_i \le \frac{(b' - a')^2}{2\sigma^2} \int_{\Omega \times \Omega} \Bigl( \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr) / (z_i - v_i) \Bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, d\boldsymbol{F}(\boldsymbol{v}) + \frac{(b' - a')^2}{2\sigma^2} \kappa
\tag{3.10}
\]
\[
= \frac{(b' - a')^2}{2\sigma^2} \int \Bigl( \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr) / (z_i - v_i) \Bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, dF_i(v_i) + \frac{(b' - a')^2}{2\sigma^2} \kappa .
\]
Let \(\overline{\boldsymbol{S}} = [\overline{S}_1, \dots, \overline{S}_d]^T\). Inequality (3.10) can be written as
\[
\overline{\boldsymbol{S}} \le \frac{(b' - a')^2}{2\sigma^2} \bigl( \operatorname{diag}(C_{gas}) + \kappa \boldsymbol{1}_d \bigr) = \frac{(b' - a')^2}{2\sigma^2} \bigl( \operatorname{diag}(U \Lambda_{gas} U^T) + \kappa \boldsymbol{1}_d \bigr).
\]
Since the \(i\)th value of \(\operatorname{diag}(U \Lambda_{gas} U^T)\) is \(\gamma_i(d)\), the proof is complete.

Remark 3.5. If the domain of \(f\) is a finite rectangle \((a, b)^d\), then the integral over the unbounded domain in the proof of Theorem 3.4 vanishes, and we obtain the bound
\[
\overline{S}_i \le \frac{(b - a)^2}{2} \frac{\gamma_i(d)}{\sigma^2}.
\]
If we put \(a = 0\), \(b = 1\) in the above inequality, we obtain the bound in Theorem 3.2.

For any \(m\), \(1 \le m \le d - 1\), we have the following inequality for \(\gamma_i(m)\) and \(\overline{S}_i\).

Corollary 3.6. With the same assumptions as in Theorem 3.4, we have
\[
\overline{S}_i \le \frac{(b' - a')^2}{2} \frac{\gamma_i(m) + \lambda_{m+1} + \kappa}{\sigma^2},
\tag{3.11}
\]
where \(i = 1, \dots, d\) and \(\lambda_{m+1}\) is the \((m+1)\)th eigenvalue of the matrix \(C_{gas}\).

Proof. The proof follows from Theorem 3.4, as in the proof of Corollary 3.3.

Remark 3.7. If the domain of \(f\) is a finite rectangle \((a, b)^d\), then the bound in Corollary 3.6 simplifies to
\[
\overline{S}_i \le \frac{(b - a)^2}{2} \frac{\gamma_i(m) + \lambda_{m+1}}{\sigma^2}.
\]
If we put \(a = 0\), \(b = 1\) in the above inequality, we obtain the bound in Corollary 3.3.
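Putting together the divided differences (3.1), the Monte Carlo estimator (3.4) with \(M_2 = 1\), and Definition 3.1, global activity scores can be sketched as follows (this is our own minimal sketch, not the authors' Algorithm 2.1; the sampler interface is an assumption):

```python
import numpy as np

def global_activity_scores(f, sampler, d, M1=1000, m=None, rng=None):
    """Global activity scores gamma_i(m) via the estimator (3.4)
    with M2 = 1, using the divided differences (3.1)."""
    rng = np.random.default_rng(rng)
    z = sampler(rng, M1)            # (M1, d) sample of z
    v = sampler(rng, M1)            # (M1, d) independent sample of v
    fz = f(z)
    D = np.empty((M1, d))
    for i in range(d):
        zi = z.copy()
        zi[:, i] = v[:, i]          # replace the i-th input of z by v_i
        D[:, i] = (f(zi) - fz) / (v[:, i] - z[:, i])
    C_hat = D.T @ D / M1            # Monte Carlo estimate of C_gas
    lam, U = np.linalg.eigh(C_hat)
    lam, U = lam[::-1], U[:, ::-1]  # descending eigenvalues
    m = d if m is None else m
    return (U[:, :m] ** 2) @ lam[:m]

# For linear f(z) = a^T z every divided difference equals a_i exactly,
# so gamma_i(d) = a_i^2 regardless of the sample
uniform = lambda rng, n: rng.random((n, 3))
gamma = global_activity_scores(lambda x: x @ np.array([1.0, 2.0, 3.0]),
                               uniform, d=3, rng=1)
```

Note that, unlike the gradient-based sketch for (2.8), no derivatives of \(f\) are required; only function evaluations at paired points enter the estimator.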
The next theorem presents a much simpler relationship between upper Sobol' indices and global activity scores for quadratic functions.

Theorem 3.8. If \(\Omega = \mathbb{R}^d\), equipped with the multivariate standard normal probability distribution \(\boldsymbol{F} = (F_1, \dots, F_d)\), and \(f(\boldsymbol{z}) = \frac{1}{2} \boldsymbol{z}^T A \boldsymbol{z} + \boldsymbol{b}^T \boldsymbol{z}\), where \(A\) is a symmetric \(d \times d\) matrix, then
\[
\overline{S}_i = \frac{\gamma_i(d)}{\sigma^2}, \qquad i = 1, \dots, d .
\tag{3.12}
\]

Proof. We have \(\boldsymbol{z} = (z_1, \dots, z_d)^T\) and \(\boldsymbol{b} = (b_1, \dots, b_d)\). Then
\[
2\sigma^2 \overline{S}_i = \int \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, dF_i(v_i)
= \int \left( \frac{\partial f(\boldsymbol{x}_i)}{\partial z_i} \right)^2 (z_i - v_i)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, dF_i(v_i)
= E\!\left[ \left( \frac{\partial f(\boldsymbol{x}_i)}{\partial z_i} \right)^2 (z_i - v_i)^2 \right],
\]
where \(\boldsymbol{x}_i = \bigl( z_1, \dots, z_{i-1}, \frac{z_i + v_i}{2}, z_{i+1}, \dots, z_d \bigr)^T\). Using the independence of \(z_i - v_i\), \(z_i + v_i\), and \(z_j\) with \(j \neq i\), we write the above expectation as
\[
2\sigma^2 \overline{S}_i = E\!\left[ \left( \frac{\partial f(\boldsymbol{x}_i)}{\partial z_i} \right)^2 \right] \times E[(z_i - v_i)^2] = 2 E\!\left[ \left( \frac{\partial f(\boldsymbol{x}_i)}{\partial z_i} \right)^2 \right].
\]
Finally, we observe
\[
\gamma_i(d) = \int \Bigl( \bigl( f(\boldsymbol{z}) - f(v_{\{i\}}, \boldsymbol{z}_{-\{i\}}) \bigr) / (z_i - v_i) \Bigr)^2 \, d\boldsymbol{F}(\boldsymbol{z})\, dF_i(v_i)
= \int \left( \frac{\partial f(\boldsymbol{x}_i)}{\partial z_i} \right)^2 d\boldsymbol{F}(\boldsymbol{z})\, dF_i(v_i) = \sigma^2 \overline{S}_i .
\]

Remark 3.9. For the activity scores, the relationship between the upper Sobol' index and the activity score for quadratic functions is an inequality. Indeed, under the same hypothesis as Theorem 3.8, Liu and Owen [13] show that
\[
\overline{S}_i \le \frac{\alpha_i(d)}{\sigma^2}, \qquad i = 1, \dots, d .
\]

4. Numerical Results. In this section we compare the four sensitivity measures, Sobol' indices, DGSMs, activity scores, and global activity scores, numerically, when they are used
to conduct the global sensitivity analysis of four examples. To compute Sobol' indices, we use a Monte Carlo sample size of \(N = 10{,}000\); we use the algorithm in Sobol' [20] to estimate upper Sobol' indices, and the correlation 2 algorithm in Owen [16] for lower Sobol' indices. To compute the DGSM (Eqn. (2.3)), we use a Monte Carlo sample size of \(N = 10{,}000\) and estimate derivatives using forward finite differences with an increment of \(h = 0.001\). Likewise, to compute the activity score \(\alpha_i(m)\) (Eqn. (2.9)), we use \(N = 10{,}000\) in Eqn. (2.8) and let \(h = 0.001\) in the forward finite differences. To compute the global activity score \(\gamma_i(m)\) (Eqn. (3.5)), we let \(M_1 = 10{,}000\) and \(M_2 = 1\) in Eqn. (3.4) (so that the total number of Monte Carlo samples is \(N = M_1 \times M_2 = 10{,}000\)), and use Algorithm 2.1 of Yue and Ökten [22]. The computer codes for some of the examples are available at: https://github.com/RuilongYue/global-activity-scores.

An important step in the calculation of activity and global activity scores is determining the dimension \(m\) of the active subspace and global active subspace that approximates the original input domain. Consider the eigenvalues \(\lambda_1 \ge \dots \ge \lambda_d \ge 0\) of \(\Lambda_{as}\) and \(\Lambda_{gas}\). Define the \(m\)th normalized cumulative sum of eigenvalues as \(\sum_{k=1}^{m} \hat{\lambda}_k / \sum_{k=1}^{d} \hat{\lambda}_k\), where \(\hat{\lambda}_k\) is the \(k\)th estimated eigenvalue. In the following numerical results, for each method, we pick \(m\) as the smallest integer for which the normalized cumulative sum of eigenvalues for the corresponding method is greater than 90%.

4.1. Example 1: A test function with noise. Consider the following function
\[
f(\boldsymbol{z}) = \sum_{i=1}^{10} i z_i + 10 (z_1 z_2 - z_9 z_{10}) + k \epsilon, \qquad \epsilon \sim N(0,1),
\tag{4.1}
\]
where \(\boldsymbol{z} = (z_1, z_2, \dots, z_{10})\). The function has a linear component with inputs in increasing order of importance, and a component with second-order interactions. The inputs are independent and have the uniform distribution on \((-0.5, 0.5)\).
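The 90% cumulative-eigenvalue rule used to choose \(m\) can be sketched as:

```python
import numpy as np

def choose_m(eigvals, threshold=0.90):
    """Smallest m whose normalized cumulative eigenvalue sum
    exceeds the threshold (the 90% rule described above)."""
    lam = np.sort(np.asarray(eigvals))[::-1]   # descending order
    frac = np.cumsum(lam) / lam.sum()
    return int(np.argmax(frac > threshold)) + 1

# Illustrative spectrum: cumulative fractions 0.50, 0.85, 0.94, ...
m = choose_m([5.0, 3.5, 0.9, 0.4, 0.2])
```

The same rule is applied to the estimated eigenvalues of \(\hat{C}_{as}\) and \(\hat{C}_{gas}\), so AS and GAS may select different values of \(m\) for the same function, as seen in the noise experiments below.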
We assume the function is evaluated with some noise modeled by the term \(k\epsilon\), where \(k\) is a positive constant and \(\epsilon\) is a standard normal random variable. Akin to the global sensitivity analysis of stochastic codes (see Fort et al. [6], Hart et al. [7], and Nanty et al. [14]), we want to investigate the accuracy of various global sensitivity measures as the level of noise, \(k\), increases. In the numerical results that follow, we consider three cases: no noise (\(k = 0\)), low noise (\(k = 0.01\)), and high noise (\(k = 1\)).

4.1.1. No noise (\(k = 0\)). Fig. 1 plots the normalized cumulative sum of eigenvalues of \(C_{as}\) and \(C_{gas}\), and the components of the eigenvector that corresponds to the largest eigenvalue for each method. The methods give virtually identical results.

Fig. 1: Normalized cumulative sum of eigenvalues (left) and the first eigenvector (right) for the active subspace (AS) and global active subspace (GAS) methods when k = 0

Fig. 2 plots Sobol' lower and upper indices, normalized DGSMs, normalized activity scores, and normalized global activity scores. Normalization is done by dividing each score by the sum of the corresponding scores. We
do not normalize Sobol' indices, for we want to see the unscaled differences between lower and upper Sobol' indices. For the global activity scores plot, we include the results for \(m = 1\) and \(m = 10\). For the activity scores, we only show the results for \(m = 1\), since \(m = 10\) for activity scores corresponds to DGSM. The upper Sobol' indices and DGSM follow the same pattern. When \(m = 1\), the activity scores and global activity scores are virtually identical, and they devalue the importance of the first two parameters compared to Sobol' indices and DGSMs, leading to a discrepancy in the ranking of the three least important inputs. When \(m = 10\), however, all the methods give the same ranking for the importance of the inputs.

Fig. 2: Sensitivity indices when k = 0 (panels: (a) Sobol' indices, (b) normalized DGSMs, (c) normalized activity scores, (d) normalized global activity scores)

4.1.2. Low noise (\(k = 0.01\)). Fig. 3 plots the normalized cumulative sum of eigenvalues for the AS and GAS methods in the presence of low noise. We observe that the methods have significantly different eigenvalues.

Fig. 3: Normalized cumulative sum of eigenvalues (left) and the first eigenvector (right) for the active subspace (AS) and global active subspace (GAS) methods when k = 0.01

Fig. 4 plots the sensitivity indices. Based on the cumulative normalized eigenvalue criterion discussed earlier, we pick \(m = 8\) for the AS method, and \(m = 1\) for the GAS method. With the introduction of noise, we observe that DGSM and activity scores do not look similar to Sobol' indices (or global activity scores) anymore. Even though DGSM still ranks most of the inputs as the upper Sobol' indices do, the relative differences between the importance of the inputs are very different from those of the Sobol' indices. The activity scores with \(m = 8\) rank several inputs differently than the Sobol' upper indices.
The global activity scores, on the other hand, do not seem to be adversely affected by noise. When \(m = 10\), the global activity scores and the upper Sobol' indices are virtually indistinguishable. When \(m = 1\), the global activity scores rank the first three inputs (which are the least important) differently compared to the upper Sobol' indices, and interestingly follow the same pattern as the lower Sobol' indices.

Fig. 4: Sensitivity indices when k = 0.01 (panels: (a) Sobol' indices, (b) normalized DGSMs, (c) normalized activity scores, (d) normalized global activity scores)

4.1.3. High noise (\(k = 1\)). Fig. 5 plots the eigenvalues and the first eigenvector, and Fig. 6 plots the sensitivity indices for the case of high noise. Based on the eigenvalues, we pick \(m = 8\) for the AS method, and \(m = 5\) for the GAS method. DGSM and activity scores completely fail in identifying the important variables. All the inputs have about the same importance according to DGSM, and the most important input becomes the least important according to the activity scores. The global activity score with \(m = 5\) ranks the input parameters the same way as the upper Sobol' indices, except
for the ranking of inputs 2 and 4. If we increase m to 10, then the global activity scores and the upper Sobol’ indices follow the same pattern and give the same conclusions. In Fig. 7 we investigate the convergence of upper Sobol’ indices and global activity scores as a function of the sample size, N, for the case of high noise. When estimating global activity scores, we use M1 = N and M2 = 1. Rather remarkably, the global activity scores give the correct ranking of the input variables when the sample size is as small as 10. Upper Sobol’ indices reach the correct ranking when the sample size is 1000.

Fig. 5: Normalized cumulative sum of eigenvalues (left) and the first eigenvector (right) for the active subspace (AS) and global active subspace (GAS) methods when k = 1.

Fig. 6: Sensitivity indices when k = 1. Panels: (a) Sobol’ indices, (b) normalized DGSMs, (c) normalized activity scores, (d) normalized global activity scores.

Fig. 7: Convergence of upper Sobol’ indices (left) and global activity scores (right) when k = 1.

4.2. Example 2: A test function with discontinuity. Hoyt and Owen [8] discuss the mean dimension of some discontinuous ridge functions. We consider the following example from [8]:

(4.2)    f(z) = 1{θᵀz > 0},  z ∈ R^d,

where z = (z_1, ..., z_d) has the d-dimensional standard normal distribution, and 1{θᵀz > 0} equals 1 if θᵀz > 0 and zero otherwise. Each component of θ is generated randomly from N(0, 1). Then θ is normalized to obtain a vector of unit length. We set θ = (0.143, 0.708, 0.491, −0.297, −0.124, 0.248, 0.058, −0.032, 0.131, −0.227)ᵀ and d = 10 in the following numerical results, and compute the global sensitivity indices for f.

Figure 8 plots the eigenvalues of AS and GAS (left), and the first eigenvectors for the methods as well as θ (right). Based on the eigenvalues, we pick m = 2 for AS, and m = 6 for the GAS method.
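The test function (4.2) and a Monte Carlo evaluation of it can be sketched as follows (an illustration, not the authors' code). Since θᵀz ∼ N(0, ‖θ‖²), the function equals 1 with probability 1/2:

```python
import numpy as np

theta = np.array([0.143, 0.708, 0.491, -0.297, -0.124, 0.248,
                  0.058, -0.032, 0.131, -0.227])    # unit vector from the paper
# the printed entries are rounded, so the norm is only approximately 1
assert abs(np.linalg.norm(theta) - 1.0) < 1e-2

def f(z):
    """Discontinuous ridge function 1{theta^T z > 0} of Hoyt and Owen."""
    return (z @ theta > 0).astype(float)

rng = np.random.default_rng(1)
Z = rng.standard_normal((100_000, 10))              # z ~ N(0, I_10)
# theta^T z is a centered normal, so f equals 1 with probability 1/2
print(f(Z).mean())                                  # ≈ 0.5
```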
We observe that the first eigenvector for the GAS method is identical to θ.

Fig. 8: Normalized cumulative sum of eigenvalues and the first eigenvector for the active subspace (AS) and global active subspace (GAS) methods for the discontinuous function.

Fig. 9 plots the sensitivity indices. Upper Sobol’ indices and global activity scores (for m = 6 and m = 10) give the same rankings, and the relative differences between the importance of the inputs are very similar. DGSM and activity scores (m = 2) are virtually identical, and they tell a different story: both find the most important input to be input 9, followed by input 10, and then inputs 3, 4, 6, 7, which have about the same importance. According to upper Sobol’ indices, however, the most important input is input 9, followed by input 4, and then input 10. Inputs 3, 4, 6, 7 do not have equal importance according to the upper Sobol’ indices: out of ten inputs, input 4 has the second highest importance, input 3 has the sixth highest importance,
and inputs 6 and 7 have similar importance at rank four.

Fig. 9: Sensitivity indices for the discontinuous function. Panels: (a) Sobol’ indices, (b) normalized DGSMs, (c) normalized activity scores, (d) normalized global activity scores.

Fig. 10 plots the upper Sobol’ indices and global activity scores as a function of the sample size, N, for the discontinuous function. The global activity scores give the correct ranking of the input variables when the sample size is 10. Upper Sobol’ indices give the correct ranking when the sample size is 10,000.

Fig. 10: Convergence of upper Sobol’ indices (left) and global activity scores (right) for the discontinuous function.

4.3. Example 3: IEEE 14-bus power system. In this example we consider the global sensitivity analysis of the IEEE 14-bus power system [1]. A description of the problem is provided in Yue et al. [21], who used Sobol’ sensitivity indices, DGSMs, and activity scores to analyze the model. Here we apply the global activity score method to the power system problem and compare it with the other sensitivity measures. The output of interest in the sensitivity analysis is the steady-state solution for the voltage at busbar 9. The inputs are the active power at each node i. We set up the differential equations as in Yue et al. [21], and then find the output by computing the load flow solution with the Newton–Raphson method (computational details can be found in Section 6.10 of Saadat [17]). We assume the i-th input (i = 1, ..., 14) follows the distribution N(P_i, (0.5 P_i)^2), which is one of the uncertainty scenarios considered in Ni et al. [15]. The values of P_i are presented in Table 1.

Table 1: Means of normal distributions for the power system

Busbar i    1      2      3      4      5      6      7
P_i         0      21.7   94.2   47.8   7.6    11.2   0

Busbar i    8      9      10     11     12     13     14
P_i         0      29.5   9      3.5    6.1    13.5   14.9

Figure 11 plots the normalized cumulative sum of eigenvalues and the first eigenvector for the AS and GAS methods.
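The input model of Table 1 can be sketched as follows (only the sampling step; the load flow solver itself is not reproduced here):

```python
import numpy as np

# Means P_i from Table 1; the i-th active power input is modeled as
# N(P_i, (0.5 P_i)^2), the uncertainty scenario of Ni et al. [15].
# A busbar with P_i = 0 has zero variance and stays fixed at 0.
P = np.array([0, 21.7, 94.2, 47.8, 7.6, 11.2, 0,
              0, 29.5, 9, 3.5, 6.1, 13.5, 14.9])

rng = np.random.default_rng(0)
N = 1000
X = rng.normal(loc=P, scale=0.5 * P, size=(N, 14))   # one row per sample of the 14 inputs

assert X.shape == (N, 14)
assert np.all(X[:, 0] == 0)   # degenerate inputs remain at their mean
```

Each row of X would then be fed to the load flow computation to obtain one sample of the voltage at busbar 9.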
We observe that the first eigenvalue of GAS is much larger than that of AS, and the corresponding eigenvectors are quite different. The figure suggests choosing m = 5 for the AS method and m = 6 for the GAS method. Based on Figure 12 we make the following observations. The DGSM and activity score methods suggest very different important variables than Sobol’ indices. The normalized DGSMs and activity scores for nodes 4, 6, and 14 are about zero (the largest value is 3 × 10^{-5}). However, these nodes have significant importance according to Sobol’ indices. For example, node 4 has the second largest upper Sobol’ index, with a value of 0.22. The upper Sobol’ indices for nodes 6 and 14 are 0.09 and 0.082. The global activity scores, on the other hand, are very similar to the upper Sobol’ indices. According to upper Sobol’ indices, the six most important inputs, in decreasing order, are 3, 4, 9, 13, 6, 14, and according to global
activity scores (m = 6), the most important inputs are 3, 9, 4, 14, 13, 6.

Fig. 11: Normalized cumulative sum of eigenvalues and the first eigenvector for the active subspace (AS) and global active subspace (GAS) methods for the power system.

Fig. 12: Sensitivity indices for the power system. Panels: (a) Sobol’ indices, (b) normalized DGSMs, (c) normalized activity scores, (d) normalized global activity scores.

Figure 13 plots activity scores α_i(m) and global activity scores γ_i(m), m = 1, ..., 14, for the six most important variables for each method. The graph illustrates the sensitivity of the scores with respect to the dimension m of the approximating subspace. Note that the activity scores indicate input 9 is the most important variable for all m, whereas the global activity scores indicate input 3 is the most important one (agreeing with Sobol’ indices) for all m.

Fig. 13: Sensitivity indices for the power system as a function of m. Panels: (a) activity scores, (b) global activity scores.

Fig. 14 plots the upper Sobol’ indices and global activity scores as a function of the sample size, N, for the power system. Both methods require a sample size of 10,000 to converge to the correct rankings.

Fig. 14: Convergence of upper Sobol’ indices (left) and global activity scores (right) for the power system.

4.4. Example 4: A counterexample from Sobol’ & Kucherenko [10]. Sobol’ & Kucherenko [10] present a function where the ranking of important variables by DGSM may result in false conclusions and give a different ranking than the upper Sobol’ indices. The function is given by

f(x) = Σ_{i=1}^{4} c_i (x_i − 1/2) + c_{12} (x_1 − 1/2)(x_2 − 1/2)^5,

where x = (x_1, x_2, x_3, x_4), c_i = 1 for 1 ≤ i ≤ 4, and c_{12} = 50. The upper Sobol’ index for x_1 and x_2 is 0.289, and the upper Sobol’ index for x_3 and x_4 is 0.237. We want to see how the global activity scores compare with upper Sobol’ and DGSM for this example.
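A quick Monte Carlo computation of the DGSMs ν_i = E[(∂f/∂x_i)²] for this function (a sketch with analytic gradients, not the authors' code) illustrates why DGSM can mislead here: the strongly nonlinear interaction term inflates ν_2 relative to ν_1, although x_1 and x_2 have equal upper Sobol’ indices:

```python
import numpy as np

# f(x) = sum_i c_i (x_i - 1/2) + c12 (x_1 - 1/2)(x_2 - 1/2)^5,
# with c_i = 1, c12 = 50, and inputs uniform on [0, 1]^4.
c, c12 = 1.0, 50.0

def grad_f(x):
    """Analytic gradient of f; x has shape (n, 4)."""
    u = x - 0.5
    g = np.full_like(x, c)
    g[:, 0] += c12 * u[:, 1] ** 5             # df/dx1 = 1 + 50 u2^5
    g[:, 1] += 5 * c12 * u[:, 0] * u[:, 1] ** 4   # df/dx2 = 1 + 250 u1 u2^4
    return g

rng = np.random.default_rng(0)
X = rng.random((500_000, 4))
nu = (grad_f(X) ** 2).mean(axis=0)            # DGSMs: nu_i = E[(df/dx_i)^2]

print(nu.round(3))   # analytically nu ≈ (1.222, 3.260, 1, 1)
# DGSM ranks x2 above x1, although their upper Sobol' indices are equal (0.289)
assert nu[1] > nu[0] > nu[2]
```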
Fig. 15 plots the sensitivity indices for the function. Normalized DGSMs and activity scores (m = 2) give the same results, and they incorrectly find the first input variable less important than the second. Sobol’ & Kucherenko explain this discrepancy by the strong nonlinearity of the term c_{12}(x_1 − 1/2)(x_2 − 1/2)^5. The normalized global activity scores of the input variables are much closer to the Sobol’ sensitivity indices, even though the global activity score of the first variable is slightly larger than that of the second.

Fig. 15: Sensitivity indices for the example from Sobol’ & Kucherenko. Panels: (a) Sobol’ indices, (b) normalized DGSMs, (c) normalized activity scores, (d) normalized global activity scores.

5. Conclusions. Through the numerical results we have observed that global activity scores align with the upper Sobol’ indices much better than DGSM or activity scores in the presence of noise (Example 1), discontinuity (Example 2), large variance (Example 3), and strong nonlinearity (Example 4). In the absence of such factors, we have empirically observed that the global activity
scores are similar to DGSM and activity scores (we report some of these results, namely those for the no-noise case, in the paper). These observations, not surprisingly, are consistent with the conclusions of Yue and Ökten [22], where the global active subspace method was introduced and used as a tool to construct surrogate models. It was observed that the presence of noise and discontinuity was detrimental to the accuracy of the active subspace method, whereas the global active subspace method was much more robust. We also observed that the global activity scores can give the correct ranking of the input variables with very small sample sizes (Examples 1 and 2). This is a significant potential advantage of the method over Sobol’ indices in computationally expensive problems.

Acknowledgement and disclosure statements follow in the published version.

REFERENCES

[1] IEEE 14-Bus System, tech. report, Illinois Center for a Smarter Electric Grid, https://icseg.iti.illinois.edu/ieee-14-bus-system/.
[2] F. Campolongo, J. Cariboni, and A. Saltelli, An effective screening design for sensitivity analysis of large models, Environmental Modelling & Software, 22 (2007), pp. 1509–1518.
[3] P. G. Constantine and P. Diaz, Global sensitivity metrics from active subspaces, Reliability Engineering & System Safety, 162 (2017), pp. 1–13.
[4] P. G. Constantine, E. Dow, and Q. Wang, Active subspace methods in theory and practice: applications to kriging surfaces, SIAM Journal on Scientific Computing, 36 (2014), pp. A1500–A1524.
[5] H. Duan and G. Ökten, Derivative-based Shapley value for global sensitivity analysis and machine learning explainability, arXiv preprint arXiv:2303.15183, (2023).
[6] J.-C. Fort, T. Klein, and A. Lagnoux, Global sensitivity analysis and Wasserstein spaces, SIAM/ASA Journal on Uncertainty Quantification, 9 (2021), pp. 880–921.
[7] J. L. Hart, A. Alexanderian, and P. A.
Gremaud, Efficient computation of Sobol’ indices for stochastic models, SIAM Journal on Scientific Computing, 39 (2017), pp. A1514–A1530.
[8] C. R. Hoyt and A. B. Owen, Mean dimension of ridge functions, SIAM Journal on Numerical Analysis, 58 (2020), pp. 1195–1216.
[9] B. Iooss and P. Lemaître, A review on global sensitivity analysis methods, Uncertainty Management in Simulation-Optimization of Complex Systems: Algorithms and Applications, (2015), pp. 101–122.
[10] S. Kucherenko et al., A new derivative based importance criterion for groups of variables and its link with the global sensitivity indices, Computer Physics Communications, 181 (2010), pp. 1212–1217.
[11] S. Kucherenko and B. Iooss, Derivative-based global sensitivity measures, in Handbook of Uncertainty Quantification, 2017, pp. 1241–1263.
[12] S. Kucherenko, S. Tarantola, and P. Annoni, Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012), pp. 937–946.
[13] S. Liu and A. B. Owen, Preintegration via active subspace, SIAM Journal on Numerical Analysis, 61 (2023), pp. 495–514.
[14] S. Nanty, C. Helbert, A. Marrel, N. Pérot, and C. Prieur, Sampling, metamodeling, and sensitivity analysis of numerical simulators with functional stochastic inputs, SIAM/ASA Journal on Uncertainty Quantification, 4 (2016), pp. 636–659.
[15] F. Ni, M. Nijhuis, P. H. Nguyen, and J. F. Cobben, Variance-based global sensitivity analysis for power systems, IEEE Transactions on Power Systems, 33 (2017), pp. 1670–1682.
[16] A. B. Owen, Better estimation of
small Sobol’ sensitivity indices, ACM Transactions on Modeling and Computer Simulation (TOMACS), 23 (2013), pp. 1–17.
[17] H. Saadat et al., Power system analysis, vol. 2, McGraw-Hill, 1999.
[18] I. Sobol’, On derivative-based global sensitivity criteria, Mathematical Models and Computer Simulations, 3 (2011), pp. 419–423.
[19] I. Sobol’ and S. Kucherenko, Derivative based global sensitivity measures, Procedia Social and Behavioral Sciences, 2 (2010), pp. 7745–7746.
[20] I. M. Sobol’, Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Mathematics and Computers in Simulation, 55 (2001), pp. 271–280.
[21] R. Yue, H. Duan, G. Ökten, and B. Uzunoğlu, A comparison of global sensitivity methods for power systems, in 2022 6th International Conference on System Reliability and Safety (ICSRS), IEEE, 2022, pp. 130–137.
[22] R. Yue and G. Ökten, The global active subspace method, arXiv preprint arXiv:2304.14142, (2023).
Asymmetric Penalties Underlie Proper Loss Functions in Probabilistic Forecasting

Erez Buchweitz, João Vitor Romano, Ryan J. Tibshirani

Abstract

Accurately forecasting the probability distribution of phenomena of interest is a classic and ever more widespread goal in statistics and decision theory. In comparison to point forecasts, probabilistic forecasts aim to provide a more complete and informative characterization of the target variable. This endeavor is only fruitful, however, if a forecast is “close” to the distribution it attempts to predict. The role of a loss function—also known as a scoring rule—is to make this precise by providing a quantitative measure of proximity between a forecast distribution and target random variable. Numerous loss functions have been proposed in the literature, with a strong focus on proper losses, that is, losses whose expectations are minimized when the forecast distribution is the same as the target. In this paper, we show that a broad class of proper loss functions penalize asymmetrically, in the sense that underestimating a given parameter of the target distribution can incur larger loss than overestimating it, or vice versa. Our theory covers many popular losses, such as the logarithmic, continuous ranked probability, quadratic, and spherical losses, as well as the energy and threshold-weighted generalizations of continuous ranked probability loss. To complement our theory, we present experiments with real epidemiological, meteorological, and retail forecast data sets. Further, as an implication of the loss asymmetries revealed by our work, we show that hedging is possible under a setting of distribution shift.

1 Introduction

In probabilistic forecasting, the goal is to predict the distribution of a target variable, rather than a particular parameter of that distribution, such as its mean or a single quantile, which is termed point forecasting.
The pursuit of probabilistic forecasts has been promoted on philosophical grounds, as an adequate form of expressing the inherent uncertainty in predicting the target variable (Dawid, 1984; Gneiting and Raftery, 2007). From a practical viewpoint, a probabilistic forecast provides more detailed information on the possible realizations of the target, which is generally considered useful for decision making (Jordan et al., 2011; Jolliffe and Stephenson, 2012; Cramer et al., 2022b). Accordingly, in various scientific disciplines, probabilistic forecasts are widespread and growing in use (Gneiting and Katzfuss, 2014), with some examples being the prediction of floods (Pappenberger et al., 2011), earthquakes (Schorlemmer et al., 2018), epidemics (Cramer et al., 2022b), energy usage (Hong et al., 2016), population growth (Raftery and Ševčíková, 2023), and inflation (Galbraith and van Norden, 2012).

Loss functions measure the discrepancy between a prediction and an observed target. In probabilistic forecasting, each prediction is an entire probability distribution, whereas the target is a random variable. Hence, a loss function assigns a numeric value to the discrepancy between a real value (in case the target variable is real) and a probability distribution from which it has purportedly originated. The study of loss functions—commonly called scoring rules in probabilistic forecasting—originated in weather forecasting (Brier, 1950; Winkler and Murphy, 1968; Matheson and Winkler, 1976), intertwined with the development of subjective probability (Good, 1952; de Finetti, 1975;
Savage, 1971), and has largely revolved around propriety. A loss function is deemed proper if the least loss, in expectation over all possible realizations of the target, is incurred when the forecast equals the distribution of the target (Gneiting and Raftery, 2007). It is commonly held that proper losses encourage “honest forecasting” (Gneiting and Raftery, 2007; Parry et al., 2012); formally, this is guaranteed only in the special (rare) case that the forecaster knows with certainty what the distribution of the target is, as then they can do no better than to forecast it. In any case, this forms the basis of the wide appeal, and subsequently the wide prevalence, of proper losses in both theory and practice.

arXiv:2505.00937v1 [math.ST] 2 May 2025

A practitioner looking to fit or evaluate probabilistic forecasts must choose which loss function to optimize. For instance, it is common to fit a forecasting model by minimizing the average loss over a training set (Rasp and Lerch, 2018), or to choose from among a collection of forecasts by minimizing the average loss on a test set (Cramer et al., 2022b). Dawid (2007) lays out a method for constructing proper losses in which propriety is tied to Bayes (optimal) actions in a given decision-making task. However, it remains that practitioners largely prefer to choose one out of a plethora of losses that have been proposed in the literature for general purposes, or a combination thereof (as exhibited in the domain-focused references on forecasting given previously). In the absence of a definitive theory as to which loss is preferable in a given situation, some authors have recommended choosing one whose well-known properties appear to be aligned with the use case (Winkler et al., 1996; Gneiting and Raftery, 2007).
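Since propriety is central to what follows, here is a minimal numerical sketch (not from the paper) of the defining property for logarithmic loss with a Bernoulli target: the expected loss, as a function of the forecast probability q, is minimized at the target probability p.

```python
import numpy as np

# Target: Bernoulli(p). Forecast: Bernoulli(q). Logarithmic loss l(q, y) = -log q(y).
# Propriety: the expected loss over Y ~ Bernoulli(p) is minimized at q = p.
p = 0.3
q = np.linspace(0.01, 0.99, 99)
expected_log_loss = -(p * np.log(q) + (1 - p) * np.log(1 - q))

q_star = q[np.argmin(expected_log_loss)]
print(q_star)   # ≈ 0.3: the target distribution itself minimizes the expected loss
```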
The study of the properties of each particular loss function has therefore become a major theme in probabilistic forecasting, and we contribute to this literature by characterizing asymmetries: which forecasting errors are awarded lesser or greater loss by particular loss functions. By definition, any proper loss function will favor the true distribution of the target by awarding it the minimal loss, on average over all possible realizations of the target. But in practice, forecasts are rarely equal to the true distribution of the target, due to model misspecification, distribution shift, and other reasons. Which forecast will then be awarded the least expected loss depends on how a particular loss function penalizes different kinds of errors. Figure 1 displays the expected loss when the forecast and target distribution are both normal, for two of the most commonly used loss functions in practice. Given a choice between two forecasts where the forecast variance is either double or half the target variance, logarithmic loss will favor doubling the variance, choosing a flat, less informative forecast, while continuous ranked probability (CRP) loss will favor halving the variance, providing for a sharp, overconfident forecast. (In both cases the mean is correctly specified. For definitions of these and other loss functions, see Section 2.) The same findings can be recreated on real data from the Covid-19
Forecast Hub (Cramer et al., 2022a), as shown in Figure 2. (For details on the experimental setup, see Section 4.) We highlight another effect appearing in the left-most plot in Figure 2: given a choice between two forecasts, either shifted upward by one unit or shifted downward by the same amount, logarithmic loss will favor the downshifted forecast. We attribute this asymmetry effect particularly to right-skewed forecasts, which Covid-19 mortality distributions typically are. Finally, notice in Figure 2 that the target mean and variance do not minimize logarithmic loss. This is because the forecast location-scale family is misspecified, meaning that it does not equal the target location-scale family. A setting of misspecification in which minimizers of loss deviate from the target is studied analytically in Section 5, and further research is warranted. The effects described above may lead, among other things, to:

1. Chosen forecasts having a systematic tendency to be overly flat, sharp, or shifted, depending on the loss function used to choose them;
2. Forecasters being incentivized to report overly flat, sharp, or shifted forecasts, again depending on the loss used to evaluate them.

The first consequence above is demonstrated in Figures 3 and 4, which depict results on real data from the Covid-19 Forecast Hub and the M5 Uncertainty Competition (Makridakis et al., 2022), respectively. In both cases, forecasters with high standardized variance (meaning that they produced relatively flat forecasts) were generally ranked at least as high under logarithmic loss (meaning that they received relatively low loss) as under CRP loss. Conversely, CRP loss generally ranked forecasters with low standardized variance at least as high as logarithmic loss did. Practitioners ought to be aware of the asymmetries described in this paper pertaining to these and other commonly used loss functions. The choice of loss function can affect the ranking of forecasts in predictable ways.
The second of the consequences above, detailing the incentives imposed upon forecasters by different loss functions, is discussed in Section 5.

1.1 Summary of main results

We now summarize the main results of this paper, on asymmetries inherent to logarithmic, CRP, quadratic, spherical, energy, and Dawid–Sebastiani (DS) losses (Section 2 gives their definitions). We denote by ℓ(F, G) the expected loss when F is the forecast and G is the target distribution. Our first main result (Section 3.1) pertains to scale families.

Theorem 1 (Scale family). Let {G_σ : σ > 0} be a scale family. Fix G = G_1 and σ > 1. The following holds.

• For CRP and energy losses, ℓ(G_σ, G) > ℓ(G_{1/σ}, G).
• For quadratic and DS losses, ℓ(G_σ, G) < ℓ(G_{1/σ}, G).
• For spherical loss, ℓ(G_σ, G) = ℓ(G_{1/σ}, G).

Figure 1: Expected logarithmic and CRP losses for a fixed standard normal target and normal forecasts with varying location µ and scale σ. A lighter color represents a lower loss, with the minimum achieved at the star, where the forecast distribution is also a standard normal. When the location is correctly specified, logarithmic loss penalizes underestimating the scale more than overestimating it, whereas the opposite is true for CRP. When the scale is correctly specified, both losses penalize symmetrically on the location.
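The scale asymmetry stated in Theorem 1 for CRP loss, and the opposite behavior of logarithmic loss visible in Figure 1, can be checked numerically in the normal case. The sketch below is an illustration under the arbitrary choice σ = 2 (not code from the paper); it uses the standard closed-form CRPS of a normal forecast and the closed-form expected log loss under a standard normal target.

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast at observation y."""
    z = (y - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)
    return sigma * (z * (2.0 * Phi - 1.0) + 2.0 * phi - 1.0 / sqrt(pi))

def expected_log_loss_normal(s):
    """Expected log loss of a N(0, s^2) forecast under target Y ~ N(0, 1)."""
    return log(s) + 0.5 * log(2 * pi) + 1.0 / (2 * s * s)

rng = np.random.default_rng(0)
y = rng.standard_normal(50_000)               # targets Y ~ G = N(0, 1)

sigma = 2.0
crps_flat = np.mean([crps_normal(0.0, sigma, yi) for yi in y])
crps_sharp = np.mean([crps_normal(0.0, 1 / sigma, yi) for yi in y])

# CRP loss favors the sharp forecast G_{1/sigma}; log loss favors the flat G_sigma
assert crps_flat > crps_sharp
assert expected_log_loss_normal(sigma) < expected_log_loss_normal(1 / sigma)
```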
Figure 2: Average logarithmic and CRP losses over targets and forecasts from the Covid-19 Forecast Hub (for details see Section 4). The qualitative assessment for CRP loss is exactly the same as in the normal case; for logarithmic loss there is an additional location asymmetry that is due to the forecasts being right-skewed.

Figure 3: Ranking of top forecasters in the Covid-19 Forecast Hub via logarithmic and CRP losses. Each line connects the same forecaster over the two ranked lists, and orange (cyan) lines identify the forecasters with lowest (highest) standardized variance.
Figure 4: Ranking of top forecasters in the M5 Uncertainty Competition via logarithmic and CRP losses. Lines again connect the same forecasters over the ranked lists, and orange (cyan) lines identify the forecasters with lowest (highest) standardized variance.

Supposing the target variable Y follows a distribution G, we define G_σ as the distribution of σY. For CRP and energy losses, the expected loss is larger when forecasting G_σ versus G_{1/σ} for any value of σ > 1. Recalling G = G_1, these loss functions thus favor underestimating the scale parameter compared to overestimating it by the same amount, on a logarithmic scale. In other words, facing a choice between a flat, less informative forecast G_σ and a sharp, overconfident forecast G_{1/σ}, CRP and energy losses will always favor the latter, regardless of the particular scale family. Conversely, quadratic and DS losses will always favor the former: the flat, less informative forecast in a scale family. Spherical loss will favor neither; however, this should not be taken at face value, as will become clearer later. For threshold-weighted CRP loss, the asymmetry depends on the weight function (see Section 6.3). We find Theorem 1 in agreement with the empirical findings in Figure 1. Importantly, Figure 2 presents empirical evidence that a similar phenomenon can occur when the forecast and target distribution are not of the same family. Logarithmic loss is notably absent from Theorem 1. For this loss, the direction of asymmetry may differ between scale families. Our second main result (Section 3.3) characterizes the asymmetry inherent to logarithmic loss with respect to exponential families. Theorem
2 (Exponential family). Let {G_θ : θ > 0} be a minimal exponential family. Fix G = G_1, θ > 1, and denote by ℓ the logarithmic loss. The following holds.

• If the family is a normal, exponential, Laplace, Weibull, or (generalized) gamma scale family, or a log-normal log-scale family, then ℓ(G_θ, G) < ℓ(G_{1/θ}, G).
• If the family is an inverse gamma scale family, then ℓ(G_θ, G) > ℓ(G_{1/θ}, G).
• If the family is a (generalized) gamma, Pareto, inverse Gaussian, or beta shape family, or a Poisson rate family, then ℓ(G_θ, G) > ℓ(G_{1/θ}, G).

In most (but not all) of the examples of scale families that we were able to find, logarithmic loss favors overestimating the scale rather than underestimating it by the same amount, on a logarithmic scale as before. This is in agreement with Figure 1 (which examines the normal case) and Figure 2 (the misspecified case, where F and G are not of the same family). The converse is true for exponential families with so-called shape parameters, typically right-skewed, in which logarithmic loss favors underestimating the shape. We believe the empirical results pertaining to asymmetry in right-skewed location families, which are favored underestimated in Figure 2, may be understood in this light. We complement the above results on scale and exponential families with a result on location families.

Theorem 3 (Location family). Let {G_µ : µ ∈ R} be a location family. Fix G = G_0. The following holds.

• For CRP, quadratic, spherical, energy, and DS losses, ℓ(G_µ, G) = ℓ(G_{−µ}, G).
• For logarithmic loss, ℓ(G_µ, G) = ℓ(G_{−µ}, G), provided G is symmetric.

Supposing the target variable Y follows a distribution G, we define G_µ as the distribution of Y + µ. CRP, quadratic, spherical, energy, and DS losses favor neither underestimating nor overestimating the location, in any location family. Logarithmic loss behaves the same way, provided G is symmetric. For threshold-weighted CRP loss, the asymmetry depends on the weight function (see Section 6.3).
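Two of the bullets in Theorem 2 admit one-line closed-form checks (an illustration with the arbitrary choice θ = 2, not from the paper). For an exponential scale family with target Exp(1), the expected log loss of the forecast with scale t is log t + 1/t; for a Poisson rate family with target Poisson(1), it is t − log t plus a constant not depending on t.

```python
from math import log

theta = 2.0

# Exponential scale family: forecast density (1/t) exp(-y/t), target Exp(1).
# Expected log loss: log t + E[Y]/t = log t + 1/t, since E[Y] = 1.
def exp_scale_loss(t):
    return log(t) + 1.0 / t

assert exp_scale_loss(theta) < exp_scale_loss(1 / theta)   # overestimating the scale is favored

# Poisson rate family: forecast Poisson(t), target Poisson(1).
# Expected log loss: t - E[Y] log t + E[log Y!] = t - log t + const.
def poisson_rate_loss(t):
    return t - log(t)

assert poisson_rate_loss(theta) > poisson_rate_loss(1 / theta)   # underestimating the rate is favored
```

The inequalities match the first and third bullets of the theorem, respectively.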
These results are in line with Figure 1, and Figure 2 again provides empirical evidence that the same can be true when the forecast and target distribution are not of the same family (recall, the asymmetry for logarithmic loss with respect to location can be interpreted via Theorem 2). The results in Theorems 1–3 still hold when we replace each ℓ(F, G) by the divergence d(F, G) = ℓ(F, G) − ℓ(G, G). Common proper losses induce well-known divergences, such as the Kullback–Leibler divergence (which is induced by logarithmic loss), the Cramér distance (CRP loss), the energy distance (energy loss), the L2 distance between densities (quadratic loss), and so on. These divergences are widely used in probability, statistics, and many application areas. Therefore, a characterization of asymmetries, as we give in this paper, may be of interest outside of probabilistic forecasting.

1.2 Structure of this paper

The rest of this paper is structured as follows. In Section 2, we formally introduce the setting of probabilistic forecasting and the loss functions that are discussed in this paper. In Section 3, we derive our main theoretical results regarding asymmetric penalties. In Section 4, we present empirical analyses of three real data sets, as well as synthetic data. In Section 5, we leverage our results to shed light on hedging proper loss functions under
distribution shift. In Section 6, we conclude with a discussion of our findings, some assorted additional results, and ideas for future work.

1.3 Related literature

References to the broader literature on proper losses will be given in the next section; here we restrict our attention to literature more narrowly adjacent to the focus of our paper. We are of course not the first to consider the operating characteristics of proper losses, and how this might influence the choice of which loss to use. This has been studied in probabilistic classification by Buja et al. (2005), and in the context of eliciting forecasts from experts by Carvalho (2016); Merkle and Steyvers (2013); Bickel (2007); Machete (2013). Wheatcroft (2021) articulated well the need for studying which forecasting errors different losses favor, and left it for future work. In the context of point forecasts, the same has been emphasized by Ehm et al. (2016) and Patton (2020). The work of Machete (2013) adopts a broadly similar approach to ours, though in a different setting, and reaches different conclusions. Recently, Resin et al. (2024) obtained a decomposition of the expected CRP loss into shift and dispersion components. Our work may be viewed as complementing theirs, toward an understanding of the behavior of CRP loss under differences in location and scale between the forecast and target distribution.

2 Preliminaries

We now introduce the formal setting which we study in this paper. Suppose that Y is a random variable taking values in an outcome space 𝒴, according to some probability distribution G. We call Y the target variable and G the target distribution. Letting ℱ denote a set of probability distributions on 𝒴, to measure how close a forecast F ∈ ℱ is to the target distribution, we define a loss function ℓ : ℱ × 𝒴 → [−∞, ∞], such that ℓ(F, y) is the loss incurred for a forecast F of the observation y.
The forecast F may be given in the form of a measure, density, cumulative distribution function, quantiles, or otherwise; different loss functions are defined in terms of different forms of inputs, in this regard. To keep the setting general and clear, we use the general word distribution to describe the forecast F and target G, so as not to prefer any particular form. We assume that low expected loss indicates a good forecast, the expected loss defined as

ℓ(F, G) = E ℓ(F, Y), for Y ∼ G.

(It will be clear from the context whether ℓ denotes the loss or expected loss.) A loss is said to be proper if

ℓ(F, G) ≥ ℓ(G, G), for any F, G.

In other words, if the target distribution itself minimizes the expected loss. A proper loss is said to be strictly proper if strict inequality holds in the above whenever F ≠ G. Any proper loss induces a divergence

d(F, G) = ℓ(F, G) − ℓ(G, G),

which is nonnegative, and vanishes when the forecast F equals the target distribution G. We emphasize that in these definitions, and in our study in general, the forecast F is not treated as random, but as a fixed probability distribution.

2.1 Loss functions

We now introduce the loss functions studied in this paper. We refer the
reader to Gneiting and Raftery (2007) for a more comprehensive review of loss functions in probabilistic forecasting.

Logarithmic loss. When the outcome space 𝒴 is a convex subset of the finite-dimensional Euclidean space R^d, and F, G are absolutely continuous with respect to the Lebesgue measure, we let f, g denote their respective densities. One of the earliest examples of proper losses, logarithmic loss (Good, 1952) is defined as the negative log-likelihood:

ℓ(F, y) = −log f(y).

Log loss is strictly proper if restricted to the family of densities with finite Shannon entropy (i.e., ℓ(F, F) < ∞), and it induces the well-known Kullback–Leibler (KL) divergence (Kullback and Leibler, 1951):

d(F, G) = ∫_𝒴 g(y) log( g(y)/f(y) ) dy = E log( g(Y)/f(Y) ) = KL(g ∥ f),

recalling that Y is a random variable with distribution G. Logarithmic loss may be defined on a general outcome space 𝒴, given that F, G are both absolutely continuous with respect to a common measure on 𝒴, but the results in this paper pertain to Euclidean outcome spaces, as defined above. Log loss is commonly used in epidemiology (Reich et al., 2019) and in eliciting probabilities from experts (Carvalho, 2016).

Quadratic loss. In the same setting as logarithmic loss, quadratic loss (Brier, 1950) is affine in the likelihood:

ℓ(F, y) = −2 f(y) + ∫_𝒴 f(x)² dx,

assuming the density f is square integrable. The integral term constitutes a penalty for low entropy. Quadratic loss is strictly proper. It is worth noting that if the integral penalty is omitted, then the loss becomes improper. The induced divergence is the squared L² distance between the densities:

d(F, G) = ∫_𝒴 (f(y) − g(y))² dy.

Quadratic loss is sometimes used in the setting of forecast aggregation (Hora and Kardeş, 2015). More recently, it has been used to evaluate estimated distributions of network traffic features (Dietmüller et al., 2024).

Spherical loss.
In the same setting as quadratic loss, spherical loss (Roby, 1965) is now linear in the likelihood:

ℓ(F, y) = −f(y) ( ∫_𝒴 f(x)² dx )^{−1/2},

assuming the density is square integrable. Similar to quadratic loss, the integral term here constitutes a penalty for low entropy. Spherical loss is strictly proper, and the induced divergence is

d(F, G) = ( ∫_𝒴 g(y)² dy )^{1/2} − ( ∫_𝒴 f(y)² dy )^{−1/2} ∫_𝒴 f(y) g(y) dy.

Spherical loss is a special case of pseudospherical loss (Good, 1971). This has recently gained popularity in machine learning, with Yu et al. (2021) deriving an efficient approach for fitting energy-based models via minimization of pseudospherical loss functions. Lee and Lee (2022) proposed using a pseudospherical loss in knowledge distillation, whose goal is to transfer knowledge from a larger to a smaller model.

Continuous ranked probability (CRP) loss. Unlike logarithmic, quadratic, and spherical losses, we now no longer require the existence of densities. In the case that the outcome space 𝒴 is a convex subset of the real line R, we identify F with a cumulative distribution function. Continuous ranked probability (CRP) loss (Matheson and Winkler, 1976) is defined by

ℓ(F, y) = ∫_𝒴 (F(x) − I{y ≤ x})² dx,

where I{y ≤ x} is equal to 1 if y ≤ x and 0 otherwise. This has the alternative representation (Baringhaus and Franz, 2004; Székely and Rizzo, 2005):

ℓ(F, y) = E|X − X′| ( E|X − y| / E|X − X′| − 1/2 ),

where X, X′ ∼ F are independent. The
multiplier E|X − X′| outside of the parentheses serves as a penalty for high entropy (in contrast to quadratic and spherical losses), as the term inside the parentheses is scale invariant. CRP loss is strictly proper if restricted to the family of distributions with finite first moment. It induces the Cramér divergence (Cramér, 1928):

d(F, G) = ∫_𝒴 (F(y) − G(y))² dy.

These properties of CRP loss—that it does not require a forecast to have a density, and has a representation in terms of expectations—make it quite popular. In particular, CRP loss is widely used in atmospheric sciences (Gneiting et al., 2005; Scheuerer and Möller, 2015; Rasp and Lerch, 2018; Kochkov et al., 2024; Clement and Zaoui, 2024), and features in isotonic distributional regression (Henzi et al., 2021). In addition, CRP loss has another alternative representation as an integrated loss in terms of the quantile function F^{−1} (Laio and Tamea, 2007). This provides a close connection to the (weighted) interval score, which has recently become popular in epidemic forecasting (Bracher et al., 2021).

Threshold-weighted CRP loss. A threshold-weighted version of CRP loss (Matheson and Winkler, 1976; Gneiting and Ranjan, 2011) generalizes CRP loss by introducing a nonnegative integrable weight function w : 𝒴 → [0, ∞), via

ℓ(F, y) = ∫_𝒴 w(x) (F(x) − I{y ≤ x})² dx.

This has the alternative representation in terms of expectations:

ℓ(F, y) = E|v(X) − v(X′)| ( E|v(X) − v(y)| / E|v(X) − v(X′)| − 1/2 ),

where X, X′ ∼ F are independent, and where v is any antiderivative of w (i.e., v′ = w). Threshold-weighted CRP loss is strictly proper if restricted to the family of distributions with finite first moment, and restricted to a strictly positive weight function. It induces the weighted Cramér divergence:

d(F, G) = ∫_𝒴 w(y) (F(y) − G(y))² dy.

Threshold-weighted CRP loss is used in the forecasting of high-impact events, for example, in meteorological sciences (Allen et al., 2023a; Taillardat et al., 2023).
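The equivalence between the integral and expectation forms of CRP loss can be checked numerically. The sketch below (our own illustration, assuming a standard normal forecast, for which CRP loss has the known closed form y(2Φ(y) − 1) + 2φ(y) − 1/√π) estimates ℓ(F, y) = E|X − y| − (1/2) E|X − X′| by Monte Carlo.

```python
import math
import random

def crps_normal(y):
    # Closed-form CRP loss of a standard normal forecast at observation y.
    phi = math.exp(-y * y / 2) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1 + math.erf(y / math.sqrt(2)))          # standard normal cdf
    return y * (2 * Phi - 1) + 2 * phi - 1 / math.sqrt(math.pi)

def crps_monte_carlo(y, n=200_000, seed=0):
    # Expectation form: E|X - y| - 0.5 E|X - X'|, with X, X' ~ F independent.
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(n)]
    xs2 = [rng.gauss(0, 1) for _ in range(n)]
    term1 = sum(abs(x - y) for x in xs) / n
    term2 = sum(abs(a - b) for a, b in zip(xs, xs2)) / n
    return term1 - 0.5 * term2

estimate, exact = crps_monte_carlo(0.7), crps_normal(0.7)
```

This sample-average form is precisely what makes CRP loss convenient for ensemble forecasts, as discussed in Section 2.2.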
We refer to the case where the weight function is a power, w(y) = y^α for α ∈ R, as a power-weighted CRP loss.

Energy loss. Generalizing CRP loss in a different direction is the energy loss (Gneiting and Raftery, 2007), which extends the expectation formulation of CRP loss to a multidimensional outcome space 𝒴 ⊆ R^d, using a parameter β ∈ (0, 2), via

ℓ(F, y) = E∥X − X′∥₂^β ( E∥X − y∥₂^β / E∥X − X′∥₂^β − 1/2 ),

where X, X′ ∼ F are independent, and ∥·∥₂ denotes the Euclidean norm. Note that CRP loss is given by the special case where d = β = 1. As with CRP loss, the multiplier outside of the parentheses may be seen as a penalty for high entropy. Energy loss is strictly proper if restricted to the family of distributions with finite β-moment. The induced divergence is the generalized energy distance:

d(F, G) = E∥X − Y∥₂^β − (1/2) ( E∥X − X′∥₂^β + E∥Y − Y′∥₂^β ),

which is related to energy statistics (Székely and Rizzo, 2013). Here, X, X′, Y, Y′ are independent random variables having distributions F, F, G, G, respectively. As a generalization of CRP loss, energy loss is employed in many of the same application areas, and also admits weighted versions that are used to evaluate, for example, forecasts of extreme meteorological events (Allen et al., 2023b).

Dawid–Sebastiani (DS) loss. Introduced in Dawid and Sebastiani (1999), originally for non-forecasting purposes, the Dawid–Sebastiani (DS) loss is given for a real-valued outcome
space 𝒴 ⊆ R by

ℓ(F, y) = log Var(X) + (y − EX)² / Var(X),

where X ∼ F, assumed to have finite variance. It is, up to additive and multiplicative constants, the logarithmic loss when the input is normally distributed with the same mean and variance as the forecast F. Being dependent only on the forecast mean and variance, it is proper but not strictly proper. The induced divergence is

d(F, G) = log( Var(X) / Var(Y) ) + ( Var(Y) − Var(X) + (EY − EX)² ) / Var(X),

where recall Y ∼ G. This identifies, up to a multiplicative constant, with the divergence induced by logarithmic loss when both the forecast and target distributions are normal, with mean and variance equal to those of F and G, respectively. DS loss (or its multivariate extension) has been used as an approximation to logarithmic and energy losses, in meteorology (Feldmann et al., 2015; Scheuerer and Hamill, 2015), epidemiology (Meyer and Held, 2014) and economics (Lerch et al., 2017).

2.2 More background and commentary

A loss ℓ is said to be local if ℓ(F, y) depends only on the observed target y, and the density evaluated at the target, f(y). Up to affine transformations, logarithmic loss is the only proper loss that is local, for outcome spaces with more than two elements (Bernardo, 1979; Parry et al., 2012). Some authors argue that locality is a desirable property on philosophical grounds—a local loss depends only on target values that were realized. On the other hand, proper losses that naturally derive from decision making tasks are often non-local (Dawid, 2007). In some situations, it can be demanding or even intractable to compute a function of the entire distribution. Indeed, Shao et al. (2024) state that in language modeling the logarithmic loss is almost exclusively used, due in part to its desirable characteristics and in part to its tractability given the notoriously large sample space. The log loss can be infinite if the forecast fails to attribute positive density to the observed outcome.
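Since for a normal density f matched to the mean and variance of F one has ℓ_DS(F, y) = −2 log f(y) − log 2π, one can check that the DS divergence is exactly twice the KL divergence between the matched normals. A short verification of this (our own computation, using the two divergence formulas):

```python
import math

def ds_divergence(mu_f, var_f, mu_g, var_g):
    # d(F, G) = log(Var X / Var Y) + (Var Y - Var X + (EY - EX)^2) / Var X.
    return math.log(var_f / var_g) + (var_g - var_f + (mu_g - mu_f)**2) / var_f

def kl_normal(mu_g, var_g, mu_f, var_f):
    # KL(g || f) for univariate normals: target g = N(mu_g, var_g),
    # forecast f = N(mu_f, var_f).
    return 0.5 * (math.log(var_f / var_g) + (var_g + (mu_g - mu_f)**2) / var_f - 1)

ds = ds_divergence(1.0, 4.0, 0.0, 1.0)
kl = kl_normal(0.0, 1.0, 1.0, 4.0)
```

The factor of two is the multiplicative constant relating DS loss to log loss under the normal approximation.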
Proponents of this property argue that it elicits careful assessment of all possible events, no matter how unlikely they are (Benedetti, 2010), while opponents point to the difficulty and potential arbitrariness in assigning very small probabilities simply to avoid an infinite loss (Selten, 1998).

In weather and climate applications, it is common to form a probabilistic ensemble forecast out of point predictions from numerical simulation models with different initial conditions (Leutbecher and Palmer, 2008). CRP loss is widespread in those fields, with extensive use that can be partially explained by the convenience it provides: when CRP loss is represented in its alternative expectation-based form, the forecast F can be readily replaced by sample averages over the members of an ensemble, or over the output of a Markov chain Monte Carlo procedure (Gneiting et al., 2007; Allen et al., 2023a). Compared to the logarithmic loss, a further reason for using CRP loss is that it retains propriety when straightforward and natural weighting schemes are applied to emphasize certain events (Matheson and Winkler, 1976; Gneiting and Ranjan, 2011), whereas the logarithmic loss does not (Amisano and Giacomini, 2007), and weighting schemes that maintain propriety for log loss can be less intuitive.

3 Main results

We now turn to proving
the main results, dividing our presentation into scale families (Section 3.1), location families (Section 3.2), and exponential families (Section 3.3).

3.1 Scale families

For a scale family {G_σ : σ > 0}, in order to study the asymmetry inherent to a loss ℓ (whether ℓ favors overestimating or underestimating the scale σ), we impose two conditions on the associated divergence d. First, we assume d is symmetric, which means it is invariant to the order of its arguments, that is,

d(F, G) = d(G, F), for all F, G.

It is clear from their definitions in Section 2 that the divergences induced by quadratic, CRP, threshold-weighted CRP, and energy losses are symmetric, whereas those induced by spherical, DS, and logarithmic losses are not. Spherical and DS losses deviate from symmetry in ways that Theorem 1—proved in this subsection—is able to accommodate, while logarithmic loss does not, and is left for later treatment. The second requirement is that there exists a function h : (0, ∞) → (0, ∞) such that

d(F_σ, G_τ) = h(τ) d(F_{σ/τ}, G_1), for all σ, τ > 0,

and for all scale families {F_σ : σ > 0}, {G_σ : σ > 0}. This says that the forecast and target distributions can be jointly rescaled by the same factor, at the cost of a multiplicative element applied to the divergence. We call divergences which satisfy this condition rescalable, and we call h the scaling function. Note that h must always satisfy h(1) = 1. Interestingly, the only possible continuous scaling functions h are real powers: h(σ) = σ^γ, for some γ ∈ R. Indeed, if we equate rescaling by στ to rescaling by σ and then τ, we obtain the identity h(στ) = h(σ) h(τ), for σ, τ > 0. This is often called Pexider's equation, and subject to h(1) = 1, the only continuous functions which satisfy this equation are real powers (Small, 2007). The following lemma summarizes the rescalability properties of the loss functions discussed in this paper.

Lemma 1. The following holds.

• CRP loss induces a rescalable divergence, with h(σ) = σ.
• Energy loss induces a rescalable divergence, with h(σ) = σ^β.

• Quadratic loss induces a rescalable divergence, with h(σ) = 1/σ.

• Logarithmic and DS losses induce rescalable divergences, with h(σ) = 1.

• Power-weighted CRP loss (w(y) = y^α) induces a rescalable divergence, with h(σ) = σ^{α+1}.

The proof in each case is by a change of variables, and is omitted. From the perspective of Theorem 1, what shall matter is whether the scaling h is an increasing, decreasing or constant function. To be clear, we say that h is increasing if h(x) < h(y) for all 0 < x < y, and decreasing if −h is increasing. The divergences induced by CRP, energy, and power-weighted CRP (with α > −1) losses have an increasing scaling function, and the ones induced by quadratic and power-weighted CRP (with α < −1) losses have a decreasing scaling function. The divergences induced by DS, logarithmic and power-weighted CRP (when α = −1) losses have constant scaling, so we may call them scale invariant. A key step for the proof of Theorem 1 is given in the next lemma, which characterizes whether a loss function favors underestimating the scale, overestimating it, or neither, based on symmetry and rescalability of the induced divergence.
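Rescalability with h(σ) = σ for the Cramér divergence can be verified numerically. The sketch below (our own check, assuming centered normal scale families and plain trapezoidal integration) confirms that jointly rescaling forecast and target by c = 3 multiplies the divergence by 3.

```python
import math

def Phi(x):
    # Standard normal CDF.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def cramer(s_f, s_g, lo=-40.0, hi=40.0, n=40001):
    # Cramér divergence ∫ (F(x) - G(x))^2 dx for centered normal CDFs
    # with scales s_f and s_g, via the trapezoidal rule.
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * (Phi(x / s_f) - Phi(x / s_g))**2
    return total * h

base = cramer(2.0, 1.0)     # d(F_2, G_1)
scaled = cramer(6.0, 3.0)   # both scales multiplied by c = 3
ratio = scaled / base       # rescalable with h(σ) = σ ⇒ ratio ≈ 3
```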
Lemma 2. Let {G_σ : σ > 0} be a scale family, ℓ a loss function, and d the induced divergence. If d is symmetric and rescalable with scaling function h, then the following holds for G = G_1 and all σ > 1.

• If h is increasing, then ℓ(G_σ, G) > ℓ(G_{1/σ}, G).

• If h is decreasing, then ℓ(G_σ, G) < ℓ(G_{1/σ}, G).

• If h is constant, then ℓ(G_σ, G) = ℓ(G_{1/σ}, G).

Proof. Using symmetry then rescalability, we compute:

d(G_σ, G) = d(G, G_σ) = h(σ) d(G_{1/σ}, G).

Suppose that the scaling h is increasing, so that h(σ) > h(1) = 1. It follows that d(G_σ, G) > d(G_{1/σ}, G), from which we conclude

ℓ(G_σ, G) − ℓ(G_{1/σ}, G) = d(G_σ, G) − d(G_{1/σ}, G) > 0.

The cases where the scaling h is decreasing or constant follow similarly.

The proof of Theorem 1 is now just a matter of putting together Lemmas 1 and 2, with some additional work along the same lines to accommodate spherical and DS losses.

Proof of Theorem 1. CRP and energy losses are symmetric and by Lemma 1 they are rescalable with increasing scaling. Hence Lemma 2 implies ℓ(G_σ, G) > ℓ(G_{1/σ}, G), as required. Meanwhile, quadratic loss is symmetric and by Lemma 1 it is rescalable with decreasing scaling, so by Lemma 2 we conclude that ℓ(G_σ, G) < ℓ(G_{1/σ}, G), as required.

Suppose now that ℓ is the spherical loss. The induced divergence d is neither symmetric nor rescalable. However, when the forecast and target distribution are members of the same scale family, we have the following two properties. First, a condition similar to rescalability holds, with decreasing scaling:

d(G_σ, G_τ) = (1/√τ) d(G_{σ/τ}, G).

This may be shown using a change of variables. Second, a property related to symmetry holds:

d(G_σ, G_τ) = √(σ/τ) d(G_τ, G_σ).

Interestingly, the effects of the two multiplicative factors above cancel each other, resulting in d(G_σ, G) = √σ d(G, G_σ) = d(G_{1/σ}, G), and thus ℓ(G_σ, G) = ℓ(G_{1/σ}, G), as required.

Finally, for ℓ being DS loss, and d the associated divergence, we may check by direct differentiation that

d(G_{1/σ}, G) − d(G_σ, G) = σ² − σ^{−2} − 4 log σ > 0.

Consequently, ℓ(G_σ, G) < ℓ(G_{1/σ}, G), as required.
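The final inequality for DS loss can also be confirmed directly from the divergence formula of Section 2.1, with equal means and unit target variance, without any closed-form expansion (a quick numerical check of our own):

```python
import math

def ds_div(var_f, var_g=1.0):
    # DS divergence with equal means: log(Var X / Var Y) + (Var Y - Var X) / Var X.
    return math.log(var_f / var_g) + (var_g - var_f) / var_f

# For σ > 1, DS loss penalizes underestimating the scale more:
# d(G_{1/σ}, G) > d(G_σ, G), hence ℓ(G_σ, G) < ℓ(G_{1/σ}, G).
gaps = [ds_div(1 / s**2) - ds_div(s**2) for s in (1.1, 2.0, 5.0)]
```

At σ = 2 the gap equals σ² − σ⁻² − 4 log σ = 4 − 1/4 − 4 log 2 ≈ 0.977.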
This completes the proof.

3.2 Location families

The argument for a location family {G_µ : µ ∈ R} is broadly similar to that given in the previous subsection for a scale family. We assume that the divergence d induced by the given loss ℓ is symmetric as well as translation invariant:

d(F_µ, G_ν) = d(F_{µ−ν}, G_0), for all µ, ν ∈ R,

and for all location families {F_µ : µ ∈ R}, {G_µ : µ ∈ R}. It is clear from their definitions in Section 2 that the divergences induced by quadratic, CRP, energy, spherical, DS, and logarithmic losses are translation invariant. The next lemma is similar to Lemma 2 and is key to the proof of Theorem 3.

Lemma 3. Let {G_µ : µ ∈ R} be a location family, ℓ a loss function, and d the induced divergence. If d is symmetric and translation invariant, then ℓ(G_µ, G) = ℓ(G_{−µ}, G) for G = G_0 and all µ ∈ R.

Proof. Using symmetry then translation invariance,

d(G_µ, G) = d(G, G_µ) = d(G_{−µ}, G).

We conclude ℓ(G_µ, G) − ℓ(G_{−µ}, G) = d(G_µ, G) − d(G_{−µ}, G) = 0.

We now prove Theorem 3.

Proof of Theorem 3. For quadratic, CRP, and energy losses, the associated divergences are symmetric and translation invariant, so application of Lemma 3 leads to the desired
results. The divergences induced by spherical and DS losses are translation invariant, and furthermore they are symmetric when restricted to the case where the forecast and target distribution both belong to the same location family. Therefore the same argument as in Lemma 3 gives the desired result for spherical and DS losses.

For logarithmic loss, let g be the Lebesgue density of G. Recall g is itself assumed to be symmetric, and by translation invariance we may assume, without loss of generality, that g is symmetric around zero, i.e., g(y) = g(−y). By a change of variables, followed by symmetry,

d(G_µ, G) = ∫ g(y) log( g(y) / g(y − µ) ) dy = ∫ g(−y) log( g(−y) / g(−y − µ) ) dy = ∫ g(y) log( g(y) / g(y + µ) ) dy = d(G_{−µ}, G).

We conclude that ℓ(G_µ, G) = ℓ(G_{−µ}, G), as required.

3.3 Exponential families

For logarithmic loss, the arguments given in Section 3.1, which are based on symmetry of the divergence, do not apply. Indeed, the KL divergence, which is the divergence associated with log loss and is not symmetric in its arguments, can behave in subtle ways for scale families. For the Cauchy scale family, log loss is actually symmetric in its penalty for σ versus 1/σ (this follows from Nielsen, 2019), which shows that this loss does not always favor overestimating a scale parameter, despite what the empirical results in Figures 1 and 2 might seem to suggest at face value.

The approach we take in this subsection is to characterize the asymmetries inherent to logarithmic loss within minimal single-parameter exponential families. Let {p_η : η > 0} be a minimal exponential family of densities, where

p_η(x) = h(x) e^{η T(x) − A(η)},

and the sufficient statistic T is not almost everywhere constant. Below we establish conditions for the logarithmic loss to favor forecasting p_{θη} over p_{η/θ}, the opposite, or neither, for any θ > 1, when the target distribution is p_η.
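As a concrete warm-up, consider the exponential scale family with rate η, for which A(η) = −log η and the KL divergence between members is available in closed form. The sketch below (our own illustration) shows that log loss penalizes overestimating the natural parameter by a factor θ more than underestimating it by the same factor.

```python
import math

def kl_exp(rate_f, rate_g=1.0):
    # Divergence of a forecast with rate rate_f against a target with rate
    # rate_g, for exponential densities p_η(x) = η e^{-ηx}: KL(p_g || p_f).
    return math.log(rate_g / rate_f) + rate_f / rate_g - 1

# Here u^3 A''(u) = u is increasing, so log loss should satisfy
# ℓ(p_θ, p) > ℓ(p_{1/θ}, p) for θ > 1, i.e. all differences below are positive.
diffs = [kl_exp(t) - kl_exp(1 / t) for t in (1.5, 3.0, 10.0)]
```

In closed form the difference is θ − 1/θ − 2 log θ, which is positive for every θ > 1.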
These conditions are described in terms of the second derivative of the log-partition function A(η); note that the second derivative always exists and is nonnegative (Keener, 2010). In particular, logarithmic loss will favor underestimating the natural parameter if the acceleration A″(η) dominates a rate 1/η³, in a particular sense to be made precise. On the other hand, logarithmic loss will favor overestimating the natural parameter for exponential families where the acceleration A″(η) is dominated by the rate 1/η³. When the natural parameter space is the entire real line (instead of the positive reals), we derive similar conditions defined in terms of A″(η) versus A″(−η). Our results are summarized in the following.

Lemma 4. Let {p_η : η ∈ Ω} be a minimal single-parameter exponential family, with log-partition function A(η). Denote by ℓ the logarithmic loss. If Ω = (0, ∞), then the following holds for all η ∈ Ω and all θ > 1.

1. If u³A″(u) is increasing in u, then ℓ(p_{θη}, p_η) > ℓ(p_{η/θ}, p_η).

2. If u³A″(u) is decreasing in u, then ℓ(p_{θη}, p_η) < ℓ(p_{η/θ}, p_η).

3. If u³A″(u) is constant in u, then ℓ(p_{θη}, p_η) = ℓ(p_{η/θ}, p_η).

If Ω = R, then the following holds for all η ∈ Ω and all θ > 0.

1. If A″(u) − A″(−u) is increasing in u, then ℓ(p_{η+θ}, p_η) > ℓ(p_{η−θ}, p_η).

2. If A″(u) − A″(−u) is decreasing in u, then ℓ(p_{η+θ}, p_η) < ℓ(p_{η−θ}, p_η).

3. If A″(u) − A″(−u) is constant in u, then ℓ(p_{η+θ}, p_η) = ℓ(p_{η−θ}, p_η).

Proof. Suppose Ω = (0, ∞). Using scale invariance, we may
assume without loss of generality that η = 1. Denoting p = p_1, note that the difference in divergence can be written as

ℓ(p_θ, p) − ℓ(p_{1/θ}, p) = d(p_θ, p) − d(p_{1/θ}, p) = E log( p_{1/θ}(Y) / p_θ(Y) ) = A(θ) − A(1/θ) − (θ − 1/θ) E T(Y),

where recall Y is a random variable with density p. A standard exponential family identity states that E T(Y) = A′(1), hence we may write

ℓ(p_θ, p) − ℓ(p_{1/θ}, p) = d_A(θ, 1) − d_A(1/θ, 1),

where d_A(y, x) = A(y) − A(x) − A′(x)(y − x) is the Bregman divergence associated with the log-partition function A. Assume u³A″(u) is increasing. Applying the fundamental theorem of calculus twice, we bound one of the Bregman divergences from below,

d_A(θ, 1) = ∫₁^θ ∫₁^v A″(u) du dv = ∫₁^θ ∫₁^v ( u³A″(u) ) u^{−3} du dv > A″(1) ∫₁^θ ∫₁^v u^{−3} du dv.

Applying the same argument again, we bound the other Bregman divergence from above,

d_A(1/θ, 1) = ∫_{1/θ}^1 ∫_v^1 A″(u) du dv = ∫_{1/θ}^1 ∫_v^1 ( u³A″(u) ) u^{−3} du dv < A″(1) ∫_{1/θ}^1 ∫_v^1 u^{−3} du dv.

Now, the statement ℓ(p_θ, p) > ℓ(p_{1/θ}, p) will have been established as soon as we notice that the two integrals coincide:

∫₁^θ ∫₁^v u^{−3} du dv = (θ − 1)² / (2θ) = ∫_{1/θ}^1 ∫_v^1 u^{−3} du dv.

The statements for u³A″(u) decreasing and constant, as well as the statements for the case that Ω = R, are proven in a similar way, and we omit the details.

We now turn to proving Theorem 2. Before we do that, let us define the exponential families covered by the theorem.

• The generalized gamma scale family has density g_σ(x) = Γ(k/γ)^{−1} γ σ^{−k} x^{k−1} e^{−(x/σ)^γ}, for x, σ, γ, k > 0, and where Γ is the gamma function. Here the natural parameter is η = σ^{−γ} ∈ (0, ∞) and the log-partition function is A(η) = −(k/γ) log η. Special cases are the gamma scale family (γ = 1), the exponential scale family (γ = 1 and k = 1) and the Weibull scale family (k = γ). If the generalized gamma scale family is extended by symmetry to x ∈ R, we obtain the Laplace scale family (γ = 1 and k = 1) and the normal scale family (γ = 2 and k = 1).

• The log-normal log-scale family has density g_σ(x) = (xσ√(2π))^{−1} e^{−(log x − µ)²/(2σ²)}, where x, σ > 0 and µ ∈ R.
The natural parameter is η = σ^{−2} ∈ (0, ∞) and the log-partition function is A(η) = −(1/2) log η.

• The inverse gamma scale family has density g_σ(x) = Γ(k)^{−1} σ^k x^{−k−1} e^{−σ/x}, for x, σ, k > 0. Here the natural parameter is η = σ ∈ (0, ∞) and the log-partition function is A(η) = −k log η.

• The generalized gamma shape family has density g_k(x) = Γ(k/γ)^{−1} γ σ^{−k} x^{k−1} e^{−(x/σ)^γ}, for x, σ, γ, k > 0. The natural parameter is η = k ∈ (0, ∞) and the log-partition function is A(η) = (log σ) η + log Γ(η/γ). A special case is the gamma shape family (γ = 1).

• The Pareto shape family has density g_k(x) = k m^k / x^{k+1}, for x ≥ m and k, m > 0. Here the natural parameter is η = k ∈ (0, ∞) and the log-partition function is A(η) = −log η − (log m) η.

• The inverse Gaussian shape family has density g_k(x) = √( k/(2πx³) ) e^{−k(x−µ)²/(2µ²x)}, for x, k, µ > 0. The natural parameter is η = k ∈ (0, ∞) and the log-partition function is A(η) = −(1/2) log η.

• The beta shape family has density g_k(x) = x^{k−1} (1−x)^{β−1} Γ(k+β)/(Γ(k)Γ(β)), for x ∈ [0, 1] and k, β > 0. The natural parameter is η = k ∈ (0, ∞) and the log-partition function is A(η) = −log( Γ(η+β)/Γ(η) ).

• The Poisson rate family has density g_λ(k) = λ^k e^{−λ}/k!, for k ≥ 0 integer and λ > 0. Here the natural parameter is η = log λ ∈ R and the log-partition function is A(η) = e^η.

Proof of Theorem 2. For families with a log-partition function of the form A(η) = −a log η + bη, where a
> 0 and b ∈ R, the acceleration is A″(η) = a/η², which means that u³A″(u) is increasing in u. Hence, by Lemma 4, we have ℓ(p_{θη}, p_η) > ℓ(p_{η/θ}, p_η) for all η > 0 and all θ > 1. Consequently, for the generalized gamma scale family, as the scale parameter σ is inversely related to the natural parameter η = 1/σ^γ, we have, writing g_{θ^{1/γ}} for the density of G_θ,

ℓ(G_{1/θ}, G) = ℓ(g_{θ^{−1/γ}}, g) = ℓ(p_θ, p) > ℓ(p_{1/θ}, p) = ℓ(g_{θ^{1/γ}}, g) = ℓ(G_θ, G),

for all θ > 1. As special cases, we obtain the required results for the gamma, exponential and Weibull scale families. By symmetrization, the same applies to the Laplace and normal scale families as well, and analogous arguments hold for the log-normal log-scale family, the inverse gamma scale family, and the Pareto and inverse Gaussian shape families.

For the generalized gamma shape family, the acceleration is A″(η) = γ^{−2} ψ′(η/γ), where ψ is the digamma function. It suffices to show that f(x) = x³ψ′(x) is an increasing function of x. From the series representations (Arfken et al., 2011)

ψ′(x) = Σ_{n=0}^∞ 1/(x+n)² and ψ″(x) = −Σ_{n=0}^∞ 2/(x+n)³,

valid for all x > 0, we differentiate:

f′(x) = 3x² ψ′(x) + x³ ψ″(x) = x² Σ_{n=0}^∞ (x + 3n)/(x+n)³ > 0.

Therefore f is increasing, consequently u³A″(u) is increasing in u, and by Lemma 4 we obtain the desired result. As a special case we obtain the desired result for the gamma shape family.

For the beta shape family, the acceleration is A″(η) = ψ′(η) − ψ′(η+β), and it suffices to show that f(x) = x³( ψ′(x) − ψ′(x+β) ) is an increasing function of x. We use the series representation to differentiate:

f′(x) = ( 3x² ψ′(x) + x³ ψ″(x) ) − ( 3x² ψ′(x+β) + x³ ψ″(x+β) ) = x² Σ_{n=0}^∞ [ (x + 3n)/(x+n)³ − (x + 3(β+n))/(x+β+n)³ ].

As a ↦ (x + 3a)/(x+a)³ is decreasing, the derivative f′ is positive, thus f is increasing. Consequently, u³A″(u) is increasing in u, and by Lemma 4 we obtain the desired result.

Finally, for the Poisson rate family, the acceleration is A″(η) = e^η, with Ω = R. Since A″(u) − A″(−u) = e^u − e^{−u} is increasing in u, by Lemma 4 we obtain the desired result, completing the proof.

Remark 1.
Using the same approach, we may obtain similar results for further exponential families, and we omit the details. Examples of such families to which this method is applicable: the normal location family, the log-normal log-location family, the inverse Gaussian mean family, and the binomial family with success probability as parameter.

4 Empirical results

We now discuss in greater detail the experiments presented in the introduction, and report further results. We begin by highlighting the implications of the asymmetries underlying proper loss functions on real data, over three domains. A comprehensive synthetic experiment across five proper loss functions finishes this section, which corroborates many of our previous findings and motivates directions for future work. All of our results are reproducible from the code available at https://github.com/jv-rv/loss-asymmetries/.

4.1 Covid-19 mortality

The Covid-19 Forecast Hub (Cramer et al., 2022a) has been used by the U.S. Centers for Disease Control and Prevention (CDC) to communicate information leading to policy decisions, including as to the allocation of ventilators, vaccines, and medical staff across geographic locations over the Covid-19 pandemic. We restrict our attention to death forecasts, as in Cramer et al. (2022b), and we analyze weekly predictions from dozens of forecasters of the number of Covid-19 deaths
one through four weeks ahead, for each U.S. state, over the period from April 2020 to March 2023. All forecasts in the Hub are stored in quantile format, and so we converted each forecast from a discrete set of quantiles to a density or CDF before computing log or CRP loss, respectively. Details are given in Appendix A.1.

To compute the heatmap in Figure 2, we applied transformations to the forecasts and the target data, as follows. First, we estimated and applied nonparametric transformations to standardize the target distribution, giving it approximately mean zero and variance one at every time point and for every location; details are given in Appendix A.2. Second, we centered and scaled each input forecast distribution in the Covid-19 Hub, giving it mean µ and variance σ, and then we evaluated the loss (logarithmic or CRP) between the transformed forecast distribution and the standardized target value. We averaged these loss values over all dates, all locations, and all forecasters; this was done over a fine grid of µ, σ values in order to create the heatmap (each pair µ, σ corresponds to a particular pixel value). Lastly, the color scale for the losses in Figures 1 and 2 (as well as Figure 6) was set nonlinearly in order to better emphasize the sublevel sets.

Figure 3 reports the standardized ranking of ten forecasters who participated in the Covid-19 Forecast Hub and received the top scores in Figure 2 of Cramer et al. (2022b). The standardized ranking is computed in the same manner as in that paper, as follows. For each state, date and number of weeks ahead, we ranked the forecasters according to loss, then divided by the number of forecasters which submitted a forecast for that state-date-weeks-ahead combination. This is the standardized ranking value, between zero and one, which we then averaged for each forecaster across states, dates, and weeks ahead. Each column of Figure 3 displays this for a different choice of loss—log loss (left column) and CRP loss (right).
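The standardized ranking described above can be sketched as follows (a minimal illustration of our own, with hypothetical forecaster names, task labels, and loss values; ties are ignored for simplicity):

```python
def standardized_ranks(losses_by_task):
    # losses_by_task: {task: {forecaster: loss}}. Within each task, rank
    # forecasters by loss (rank 1 = lowest loss), divide by the number of
    # submitting forecasters, then average over tasks per forecaster.
    sums, counts = {}, {}
    for scores in losses_by_task.values():
        ordered = sorted(scores, key=scores.get)
        m = len(ordered)
        for rank, name in enumerate(ordered, start=1):
            sums[name] = sums.get(name, 0.0) + rank / m
            counts[name] = counts.get(name, 0) + 1
    return {name: sums[name] / counts[name] for name in sums}

ranks = standardized_ranks({
    "CA-2020-06-1wk": {"A": 1.2, "B": 0.7, "C": 2.0},  # hypothetical losses
    "CA-2020-06-2wk": {"A": 0.4, "B": 0.9},            # C did not submit
})
```

Note the denominator adapts per task, so forecasters are not penalized for tasks they did not enter.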
To further annotate, we identified the four forecasters with the highest standardized ranking according to forecast variance (instead of loss) and the four lowest. For the four with the highest standardized variance, logarithmic loss ranked each one at least as high as CRP loss did, with an average margin of 2 places. For the four with the lowest standardized variance, CRP loss ranked each one at least as high as log loss did, with an average margin of 1.75 places.

4.2 Retail sales

The Makridakis Competitions are well-known in the applied forecasting community, with the initial edition beginning over 40 years ago (Makridakis et al., 1982). The fifth edition introduced an uncertainty track called the M5 Uncertainty Competition (Makridakis et al., 2022), in which teams were tasked with forecasting nine quantiles of future retail sales for a myriad of products, aggregation levels, and horizons. Out of almost 900 competing teams, the 50 with the best average performance, according to a weighted and scaled version of the quantile loss, had their forecasts published. Demand is notoriously intermittent, especially at the least aggregated level, which posed
a steep challenge to forecasters. Many of the best-performing teams used gradient boosting (Friedman, 2001) to predict quantiles. As we did with the forecasts in the Covid-19 Forecast Hub, we converted these quantile forecasts into a density or CDF before computing log or CRP loss, respectively, following the same strategy as that outlined in Appendix A.1. Each forecaster was then ranked among all forecasts issued for each prediction task, and the rankings were standardized to be between zero and one. Averaging the standardized ranks coming from logarithmic and CRP losses, as we did in the Covid-19 experiment, yielded Figure 4 in the introduction. As we can see, log loss clearly prefers the forecasters that have higher standardized forecast variance (from left to right, there is clear downward movement in the rankings of the highest-variance forecasters), while CRP loss clearly prefers the opposite (from left to right, clear upward movement in the rankings of the lowest-variance forecasters).

4.3 Temperature extremes

The Coupled Model Intercomparison Project (CMIP) is a significant project collecting the results of climate simulation models, introduced nearly 30 years ago by Meehl et al. (1997). Here we consider the maximum temperatures over the boreal summer simulated by the 30 models available from its sixth edition, CMIP6 (Eyring et al., 2016). We source the target data from HadEX3 (Dunn et al., 2020), a data set of monthly extreme temperatures (and other climate data) on a spatial grid, based on observations recorded from thousands of meteorological stations across the world. Using historical runs of each model from 1950 to 2014, we first calculated the predicted monthly maximum temperature from the available daily data. We then translated each model's simulated predictions to HadEX3's spatial grid via bilinear interpolation routines from the Climate Data Operators collection (Schulzweida, 2023).
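Bilinear interpolation onto a target grid point amounts to the following standard formula (a self-contained sketch of our own, not the Climate Data Operators implementation, with illustrative temperature values):

```python
def bilinear(x, y, x0, x1, y0, y1, f00, f10, f01, f11):
    # Interpolate f at (x, y) from its values at the four corners of the
    # grid cell [x0, x1] x [y0, y1]; fij is the value at corner (xi, yj).
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

# Temperature at a target grid point lying inside a model grid cell,
# with made-up corner values in degrees Celsius:
t = bilinear(0.25, 0.5, 0.0, 1.0, 0.0, 1.0, 20.0, 24.0, 22.0, 26.0)
```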
After this step, each model has 195 simulated maximum temperatures at 3055 spatial locations (grid points). Adopting a similar approach to previous work in the literature (Thorarinsdottir et al., 2020), we applied kernel density estimation (with a Gaussian kernel and Scott's rule for automatic bandwidth selection; Scott, 1992) in order to form a forecast distribution at each location. The same was done with the target data, in order to form a target distribution at each location. (We ensure densities at each location have the same support by fitting exponential tails to the extreme samples.)

Figure 5: Average divergence for 30 models in CMIP6, in cases where they are initially overdispersed (left panel) or underdispersed (right panel). For each forecast distribution F_{aθ} and target distribution G_θ, we compare divergences d(F_{aθ}, G_θ) and d(F_{θ/a}, G_θ) to see whether the given divergence prefers overdispersion to underdispersion. The average Cramér divergence is improved for all 30 models when initially overdispersed forecasts are made underdispersed, and the average KL divergence is improved for all 30 models when underdispersed forecasts are made overdispersed.

After computing these forecast and target densities, we then performed the
following evaluation scheme for each model. At each spatial location, we first shifted the forecast density so that it matches the mean of the target density; this was done to focus on differences in scale. We then estimated the standard deviations of the forecast and target densities, and denote these distributions by F_{aθ} and G_θ, respectively. Note that if a > 1 then the forecast distribution is overdispersed compared to the target at the given spatial location, and if a < 1 then it is underdispersed. Lastly, subsetting to spatial locations with a > 1, we computed the average divergence d(F_{aθ}, G_θ), as well as the average divergence d(F_{θ/a}, G_θ). The former represents the divergence of the (original) overdispersed forecast, while the latter represents the divergence of the underdispersed counterpart (where the deviation in scale has been inverted). This is displayed in the left panel of Figure 5, where one point represents one model and one divergence: either KL (associated with log loss) or Cramér divergence (associated with CRP loss). The right panel of Figure 5 displays the result of a complementary calculation: subsetting to spatial locations with a < 1, we compare the average divergence d(F_{aθ}, G_θ) and the average divergence d(F_{θ/a}, G_θ). We can clearly see that all 30 models from CMIP6 have their overdispersed forecasts improve in Cramér divergence when deflated and all their underdispersed forecasts improve in KL divergence when inflated.

4.4 Synthetic data

The asymmetric Laplace distribution has density

f(x) = (p(1 − p)/σ) exp( −((x − µ)/σ)(p − I{x ≤ µ}) ),

for x ∈ R, a location parameter µ ∈ R, a scale parameter σ > 0, and skew parameter p ∈ (0, 1) (Koenker and Machado, 1999; Yu and Zhang, 2005). This reduces to the standard Laplace distribution for p = 1/2; it is skewed to the right for p < 1/2, and skewed to the left for p > 1/2.
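The density above can be implemented directly. A minimal sketch, with a Riemann-sum sanity check that it integrates to one:

```python
import math

# Minimal implementation of the asymmetric Laplace density defined above
# (Koenker-Machado parametrization); p = 1/2 recovers the standard Laplace.
def asym_laplace_pdf(x, mu=0.0, sigma=1.0, p=0.5):
    ind = 1.0 if x <= mu else 0.0          # the indicator I{x <= mu}
    return p * (1 - p) / sigma * math.exp(-(x - mu) / sigma * (p - ind))

# The density should integrate to one; check numerically for p = 0.2
# (right-skewed case) on a wide grid.
step = 0.001
total = sum(asym_laplace_pdf(-50 + step * i, p=0.2) * step for i in range(100_000))
```

At p = 0.2 the right tail decays at rate p/σ = 0.2 and the left tail at rate (1 − p)/σ = 0.8, which is the right-skewness claimed above.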
Often used in Bayesian quantile regression (Yu and Moyeed, 2001), it has more recently been employed in probabilistic forecasting, for wind power forecasts (Wang et al., 2022a,b). The purpose of the experiment in this subsection is twofold: it depicts behaviors one would expect given our theoretical results, and it also displays interesting phenomena outside of the scope of our theory, pointing towards possible future work. We computed the CRP, logarithmic, quadratic, spherical, and Dawid–Sebastiani loss between a zero-mean unit-variance target asymmetric Laplace distribution, and forecast distributions in the same family but with varying location and scale. Figure 6 displays the results, with each row showing a different skew parameter, and each column a different loss. The top row corresponds to p = 0.2 (right-skewed), the middle row to p = 0.5 (symmetric), and the bottom row to p = 0.8 (left-skewed). As expected, losses penalize symmetrically in the location µ when the scale is correctly specified, except for logarithmic loss: this loss is symmetric in µ when the distribution itself is symmetric (middle row), but it prefers upshifted or downshifted forecasts when the distribution is right- or left-skewed (top or bottom rows), respectively.

Figure 6: Expected losses for a zero-mean unit-variance asymmetric Laplace target, and forecasts of the same family with varying location µ and scale σ. Distributions are right skewed
(top panel; p = 0.2), symmetric (middle; p = 1/2), or left skewed (bottom; p = 0.8). A lighter color represents a lower loss, with the minimum achieved at the star.

When the location is correctly specified, the penalties on the scale σ all agree with what is suggested by our theory. CRP prefers underdispersed forecasts, log loss prefers overdispersed forecasts in the case without skewness (middle row, in which case we are in an exponential family), quadratic and DS losses prefer overdispersed forecasts, and spherical loss is symmetric in log σ. Interestingly, the asymmetric penalty of log loss in the symmetric Laplace case extends to the right- and left-skewed cases (top and bottom rows), despite the fact that the asymmetric Laplace family is not an exponential family. Moreover, considering the behavior of losses in µ and σ jointly leads to some interesting observations. For example, spherical loss, despite being symmetric on both axes, can become asymmetric in the scale parameter for a misspecified location. As another example, CRP seems particularly affected by skew in the underlying distribution, with its sublevel sets exhibiting a strong tilt in the asymmetric Laplace cases.

5 Hedging proper losses under distribution shift

It is commonly held that proper loss functions encourage honest forecasting, in the sense that if a forecaster believes the target distribution is G, then they can minimize their expected loss by forecasting G. This statement can be understood in the framework of subjective probability: if G is the subjective probability of the forecaster, then G is the conditional distribution of the target given the information available to the forecaster. Propriety guarantees that no other forecast F will incur lesser (conditional expected) loss,

ℓ(F, G) ≥ ℓ(G, G).
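The propriety inequality can be illustrated numerically for CRP loss in an exponential scale family. The closed-form expected CRPS used below is a standard computation and an assumption of this sketch:

```python
# Numerical illustration of propriety for CRP loss with exponential
# distributions: the expected loss E_{Y ~ Exp(scale tau)} CRPS(Exp(scale
# sigma), Y) has the closed form tau - 3*sigma/2 + 2*sigma^2/(sigma + tau)
# (a standard computation, assumed here), and is minimized over the
# forecast scale sigma exactly at the true scale tau.
def expected_crps(sigma, tau):
    return tau - 1.5 * sigma + 2 * sigma**2 / (sigma + tau)

tau = 1.7                                      # true (target) scale
grid = [0.01 * k for k in range(1, 1000)]      # candidate forecast scales
best = min(grid, key=lambda s: expected_crps(s, tau))
```

The grid minimizer lands at (numerically near) σ = τ, in line with the display above with F ranging over the scale family.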
This subjective probability formulation is articulated explicitly in earlier seminal papers (as in Brier, 1950; Winkler and Murphy, 1968), but not as often in modern ones (as in Gneiting and Raftery, 2007; Parry et al., 2012). In practice, forecasts are rarely subjective probabilities. This can be attributed to a multitude of issues, such as misspecification of the data generating process. If G is not the forecaster's subjective probability, then no formal guarantee appears to follow from the definition of propriety that would merit the assertion that the forecaster should forecast their opinion G. Consider a forecaster with the firm opinion that the target distribution is G. The forecaster may ask: what if I am wrong? If error is conceivable, the forecaster might then ask: is it better to err in one form over another? If the answer to the latter question is affirmative, the forecaster might be incentivized to deviate from their honest opinion to protect against the possibility of erring in a particularly unfavorable way. Such behavior has been termed hedging. A key motivation for using proper losses has historically been the supposition that they provide no opportunity for hedging (Brier, 1950; Murphy and Epstein, 1967; Winkler and Murphy, 1968). We are motivated to scrutinize this supposition in situations where forecasting error is possible (or even unavoidable) and losses exhibit asymmetric penalties on forecast errors. In particular, we study hedging in a setting of distribution shift, in which the
parameter in a scale or exponential family shifts in testing relative to the training population. In this case, if the shift is log-symmetric around the training parameter value, then the asymmetry in loss penalties can be used to describe the direction toward which the forecaster ought to deviate from their honest opinion.

5.1 Problem setting

We suppose that the forecaster has ideal knowledge of the training population, say, by having access to infinitely many independent and identically distributed observations, so they are able to conclude with certainty that the distribution of the target in training is G_{σ_train}. In testing, however, suppose that there is distribution shift in the scale parameter, where the law of the target variable Y in testing is defined as follows: we first draw a random scale parameter σ_test > 0, then draw Y ∼ G_{σ_test}. To express our ignorance of the nature of the distribution shift relative to the training population, we assume that σ_test is log-symmetric around σ_train, that is, log σ_test has a symmetric distribution around log σ_train. A forecaster not aware of distribution shift in their data might naively forecast G_{σ_train}. The question then arises whether the forecaster can do better if they suspect distribution shift is present (as would be common in many, if not most, real applications of forecasting). If a loss function ℓ is proper, then the minimum expected loss over all possible realizations of the test scale σ_test is of course obtained by E G_{σ_test}, the unconditional law of Y. That is,

E ℓ(F, G_{σ_test}) ≥ E ℓ(E G_{σ_test}, G_{σ_test}), for any F.

However, this requires the forecaster to have perfect knowledge of the distribution of σ_test, whereas our setting assumes no such knowledge. The forecaster, equipped only with knowledge of log-symmetry of the test scale distribution, might seek a better forecast than G_{σ_train} within the scale family {G_σ : σ > 0}, and the question we consider here is whether there is a fixed forecast G_{σ*} with lesser expected loss.
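This question can be made concrete with a small computation for CRP loss and exponential targets. The closed-form expected CRPS used below, τ − 3σ/2 + 2σ²/(σ + τ) for a forecast of scale σ against a target of scale τ, is a standard computation and an assumption of this sketch:

```python
# Sketch of the hedging question for CRP loss with exponential targets and
# sigma_train = 1: sigma_test is uniform on {a, 1/a} (log-symmetric), and we
# scan for a fixed forecast scale beating sigma = 1. The closed-form expected
# CRPS below is an assumption (a standard computation for exponentials).
def expected_crps(sigma, tau):
    return tau - 1.5 * sigma + 2 * sigma**2 / (sigma + tau)

a = 2.0

def risk(sigma):
    # expected loss over the two equally likely test scales
    return 0.5 * expected_crps(sigma, a) + 0.5 * expected_crps(sigma, 1 / a)

grid = [0.8 + 0.001 * k for k in range(800)]
sigma_star = min(grid, key=risk)
```

The grid minimizer is about 1.14 > 1, so under CRP loss the hedged forecast inflates the scale, consistent with Example 1 in Section 5.2.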
We show that the answer is generally yes. Before going on to state and prove general results about hedging for the various loss functions, we present an informal discussion that may serve as an intuitive guide relating hedging to asymmetric penalties. The setting of distribution shift we analyze here differs from the previous setting of asymmetric penalties studied in Section 3, in that the roles of forecast and target distributions are, in a certain sense, reversed. In Section 3, recall, the target G = G_1 was fixed and we considered multiple forecasts G_a and G_{1/a} dispersed log-symmetrically around the target. Instead, we now think of a given forecast G as fixed, and we consider multiple targets dispersed log-symmetrically around the forecast. If σ_test is distributed uniformly on {a, 1/a} with a > 1 (assuming without loss of generality that σ_train = 1), then

2 E d(G, G_{σ_test}) = d(G, G_a) + d(G, G_{1/a}).

If the divergence is scale invariant, such as the one induced by logarithmic loss, then d(G, G_a) = d(G_{1/a}, G) and d(G, G_{1/a}) = d(G_a, G). Meaning, penalties are reversed, and based on Theorem 2 the realization G_a of larger target scale will typically lead to larger divergence. If instead the divergence is symmetric, such as the one induced by CRP or quadratic loss, then d(G, G_
a) = d(G_a, G) and d(G, G_{1/a}) = d(G_{1/a}, G). Lemma 2 tells us that G_a will lead to larger divergence if d is rescalable with increasing scaling function, as in CRP loss, or will lead to smaller divergence if d is rescalable with decreasing scaling function, as in quadratic loss. In summary, for symmetric divergences the two settings of asymmetric penalties and of distribution shift are interchangeable, whereas for scale invariant divergences they are the reverse of each other. It may appear at this point that a forecaster would hedge in a direction that draws toward the least favorable realization of the test scale parameter in order to mitigate it, and this is indeed the case for logarithmic, CRP, and quadratic losses in certain cases. However, in general, the precise arguments needed for these (and other) losses are more subtle, and we give the details across the next two subsections.

5.2 Scale families

The next theorem characterizes the direction of hedging for losses that induce a symmetric rescalable divergence, and the direction is shown to depend on the particular scale family. (We make the assumption that the training scale parameter is set to be σ_train = 1 here, which we again emphasize comes at no loss of generality.)

Theorem 4. Let {G_σ : σ > 0} be a scale family, ℓ a loss function, and d the induced divergence, where d is symmetric and rescalable with a scaling function h. Let σ_test > 0 be a random variable, which is not almost surely constant, such that σ_test and 1/σ_test have the same distribution and E ℓ(G_σ, G_{σ_test}) is finite for all σ > 0. Fix G = G_1 and define f(σ) = d(G_σ, G). Assume that f and h are differentiable and that conditions for exchanging differentiation and taking expectation with respect to σ_test apply (e.g., a sufficient condition is that σ_test has compactly supported density and f′ is continuous). Then the following holds.

1. If (h − 1)/f is increasing on [1, ∞), then there exists σ* < 1 such that E ℓ(G_{σ*}, G_{σ_test}) < E ℓ(G, G_{σ_test}).

2.
If (h − 1)/f is decreasing on [1, ∞), then there exists σ* > 1 such that E ℓ(G_{σ*}, G_{σ_test}) < E ℓ(G, G_{σ_test}).

Proof. Define g_a(σ) = d(G_σ, G_a) + d(G_σ, G_{1/a}). We will first show that g_a′(1) has the same sign as the derivative of (h − 1)/f at a. Indeed, using rescalability (twice), we can rewrite g_a(σ) as

g_a(σ) = h(a) d(G_{σ/a}, G) + (1/h(a)) d(G_{aσ}, G) = h(a) f(σ/a) + (1/h(a)) f(aσ).

Since f is differentiable, so is g_a, and the derivative of g_a at 1 is

g_a′(1) = (h(a)/a) f′(1/a) + (a/h(a)) f′(a).

Now we relate f′(1/a) to f′(a) in order to plug into the equation for g_a′(1) above. By symmetry and rescalability, note that f(σ) = h(σ) f(1/σ), and differentiating this we get

f′(σ) = h′(σ) f(1/σ) − (h(σ)/σ²) f′(1/σ) = (h′(σ)/h(σ)) f(σ) − (h(σ)/σ²) f′(1/σ).

Rearranging to isolate f′(1/σ), and plugging into the previous equation for g_a′(1), we arrive at

g_a′(1) = (a/h(a)) (h′(a) f(a) − (h(a) − 1) f′(a)) = (a f(a)²/h(a)) ((h − 1)/f)′(a).

The condition that (h − 1)/f is increasing on [1, ∞) is thus sufficient to ensure a positive derivative g_a′(1) > 0 for all a > 1. Recalling that σ_test is a random variable such that σ_test and 1/σ_test have a common distribution, denoted H, define

g(σ) = 2 E d(G_σ, G_{σ_test}) = E[d(G_σ, G_{σ_test}) + d(G_σ, G_{1/σ_test})] = ∫ g_a(σ) dH(a).

If differentiation under the integral sign is permitted, then g′(σ) = ∫ g_a′(σ) dH(a). The condition that (h − 1)/f is increasing on [1, ∞) is thus also sufficient to ensure g′(1) > 0, entailing the existence of σ* < 1 for which

E ℓ(G_{σ*}, G_{σ_test}) − E ℓ(G, G_{σ_test}) = (1/2)(g(σ*) − g(1)) < 0.

The case where (h − 1)/f is decreasing is proven
similarly.

Example 1 (Exponential distribution). When G_σ is the exponential distribution with scale σ (see Section 3.3), we can infer the following using Theorem 4. Under CRP loss, hedging is carried out by inflating the scale, making the forecast flatter and less informative, while under quadratic loss it is carried out by deflating the scale, making it sharper and overconfident. To see this, we note that a simple calculation yields for CRP loss that

d(G_σ, G) = (1 + σ)/2 − 2σ/(1 + σ).

Recalling h(σ) = σ, it may be shown that d(G_σ, G)/(h(σ) − 1) is increasing. Hence by Theorem 4, there is a forecast G_{σ*} flatter than G (i.e., σ* > 1) which attains lower expected CRP loss. For quadratic loss, a simple calculation yields

d(G_σ, G) = (σ − 1)²/(2σ(σ + 1)).

Recalling h(σ) = 1/σ, it may be shown that (h(σ) − 1)/d(G_σ, G) is increasing. Thus by Theorem 4, there is a forecast G_{σ*} sharper than G (i.e., σ* < 1) which attains lower expected quadratic loss.

5.3 Exponential families

For logarithmic loss, the divergence it induces is not symmetric, but we can characterize the direction of hedging with a specialized argument for exponential families.

Theorem 5. Let {p_η : η > 0} be a minimal exponential family of densities, where p_η(x) = h(x) e^{η T(x) − A(η)}. Denote by ℓ the logarithmic loss, and let η_test > 0 be a random variable, where we assume E T(Y) exists and is finite, with Y a random variable whose conditional distribution given η_test is p_{η_test}. Then η* = (A′)^{−1}(E A′(η_test)) is well-defined and minimizes the expected loss E ℓ(p_η, p_{η_test}) over all η > 0.

Proof. We first show that η* is well-defined. Recall that, for a minimal exponential family, the log-partition function A is continuously differentiable and strictly convex (Wainwright and Jordan, 2008), and A′ acts as a bijection between its domain and image (Rockafellar, 1970), which must then be an open interval. Furthermore, by a standard identity for exponential families,

E T(Y) = E[E[T(Y) | η_test]] = E A′(η_test).
By assumption, this value is finite, and thus the expectation E A′(η_test) must lie within the image of A′, which implies the existence and uniqueness of η* for which A′(η*) = E A′(η_test). It remains to show that η* minimizes the expected logarithmic loss, by completing the Bregman divergence. We can compute the expected loss for any η > 0 by

E ℓ(p_η, p_{η_test}) = −E[E[log p_η(Y) | η_test]] = A(η) − η E T(Y) + c = A(η) − η A′(η*) + c,

where c is a constant. The difference in expected loss between η and η* is therefore

E ℓ(p_η, p_{η_test}) − E ℓ(p_{η*}, p_{η_test}) = A(η) − A(η*) − (η − η*) A′(η*).

This is the Bregman divergence of the strictly convex function A, tangent to A at η*, which is nonnegative and vanishes if and only if η = η*.

No assumption was required so far on the relation between η_train and η_test, the natural parameters in training and testing. If we assume η_test is logarithmically symmetric around η_train, then η_train = e^{E log(η_test)}. Compare this with the optimal forecast from the theorem: η* = (A′)^{−1}(E A′(η_test)). We see that whether η* < η_train or η* > η_train holds will depend on the log-partition function A. We conclude this subsection with an example where this occurs.

Example 2 (Generalized gamma scale family). When p_η is the generalized gamma density (see Section 3.3), with scale σ inversely proportional to η, we can infer the following from Theorem 5. Under logarithmic loss, hedging is carried out by inflating the scale relative to the training population. To see this, recall that for the generalized gamma family η = 1/σ^γ with γ > 0, and the
log-partition function has derivative A′(η) = −(k/γ)/η. The natural parameter η* for which the forecast p_{η*} attains global minimum expected logarithmic loss is

1/η* = E[1/η_test].

Now compare the optimal scale σ* = (η*)^{−1/γ} with the training scale σ_train = (η_train)^{−1/γ}: using Jensen's inequality,

(σ*)^{−γ} = η* = (E[η_test^{−1}])^{−1} < e^{E log(η_test)} = η_train = (σ_train)^{−γ},

which shows that σ* > σ_train. From special cases of the generalized gamma scale family, we may derive results for the exponential, Laplace, normal, gamma, and Weibull scale families.

6 Discussion

In this work, we studied asymmetries in the penalization of a broad set of proper loss functions, including logarithmic, continuous ranked probability, threshold-weighted CRP, quadratic, spherical, energy, and Dawid–Sebastiani losses. To recap some highlights: by establishing general results in exponential families for logarithmic loss, we showed this loss typically penalizes overestimating scale parameters less severely than underestimating them by the same amount on a logarithmic scale. Moreover, by introducing the notion of symmetric rescalable divergences, we showed that in scale families CRP loss favors sharp forecasts (underestimating the scale), whereas quadratic loss favors flat forecasts (overestimating the scale). These results are clearly visible in practice: through experiments, we confirmed the effects anticipated by the theory on data from Covid-19 mortality, temperature, and retail forecasts. Finally, under a setting with distribution shift, we showed that hedging of certain proper loss functions is possible, which can be understood as an implication of their inherent asymmetry. We close with some additional related comments and discussion.
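The hedging direction of Example 2 can be checked numerically in its simplest case, γ = k = 1, i.e., the exponential family with rate η. This sketch assumes η_test is uniform on {2, 1/2}, which is log-symmetric around η_train = 1:

```python
import math

# Numerical check of Example 2 with gamma = k = 1 (exponential with rate
# eta). With eta_test uniform on {t, 1/t}, log-symmetric around eta_train = 1,
# the expected log loss of the forecast p_eta against a target with rate tau
# is -log(eta) + eta/tau, so averaging over tau in {t, 1/t} gives
# -log(eta) + eta * E[1/eta_test], minimized at eta* = 1/E[1/eta_test].
t = 2.0
mean_inv = 0.5 * (1 / t + t)                 # E[1/eta_test] = 1.25

def expected_log_loss(eta):
    return -math.log(eta) + eta * mean_inv

grid = [0.001 * k for k in range(1, 3000)]
eta_star = min(grid, key=expected_log_loss)
```

The minimizer is η* = 1/1.25 = 0.8, so σ* = 1/η* = 1.25 > 1 = σ_train: the hedged log-loss forecast inflates the scale, as Example 2 asserts.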
6.1 Confounding effects of aggregation across different scales

In practice, a loss is often averaged over non-identically distributed target observations, for example, to evaluate the average performance of a forecaster across different dates, geographic locations, or generally over different tasks. The problem of confounding effects stemming from the differences between the tasks is well recognized, with skill losses a popular yet imperfect remedy (Gneiting and Raftery, 2007). Here we note that the confounding effects of scale can depend on the loss function being used, and the asymmetries therein. We demonstrate via two examples that losses which induce symmetric rescalable divergences with increasing scaling functions (e.g., CRP and energy) place more weight on observations with large scale versus those with small scale, and losses with decreasing scaling function (e.g., quadratic) behave in the opposite way. On the other hand, losses with constant scaling function (e.g., logarithmic and DS) are indifferent to the scale of the target.

Example 3 (Different specializations). Consider two tasks with different scale, and two forecasters, Glenn and Bob. Glenn is relatively better at forecasting one task, and Bob equally better at the other. We show that which forecaster is awarded the least expected loss depends on the loss function being used. Concretely, in one task with the target distribution being G, suppose that Glenn forecasts F and Bob H, and Glenn achieves lower expected loss, ℓ(F, G) < ℓ(H, G). However, in another task with target distribution G_σ, suppose that in a reversal Bob now forecasts F_σ and Glenn H_σ. Here, the distributions F_σ, G_σ, H_σ should be read as members of the respective scale families, where we assume σ > 1 to signify increased scale. If the divergence induced by
ℓ is symmetric and rescalable with constant scaling function (e.g., logarithmic and DS), it is indifferent to the scale of the second task and assigns equal expected loss to Glenn and Bob in total over both tasks,

ℓ(F, G) + ℓ(H_σ, G_σ) = ℓ(H, G) + ℓ(F_σ, G_σ).

However, when the divergence has increasing scaling function (e.g., CRP and energy), more emphasis is placed on the second upscaled task, and consequently Bob wins:

ℓ(F, G) + ℓ(H_σ, G_σ) > ℓ(H, G) + ℓ(F_σ, G_σ).

Lastly, if the divergence has decreasing scaling function (e.g., quadratic, and also spherical when F, G, H belong to the same scale family), lesser weight is put on the second task due to its increased scale, and Glenn wins:

ℓ(F, G) + ℓ(H_σ, G_σ) < ℓ(H, G) + ℓ(F_σ, G_σ).

We have reached three different conclusions by using different loss functions, with all else being equal.

Example 4 (Missing forecasts). In this example, Glenn and Bob make the same forecasts, but Bob is missing forecasts for some target observations. (Missingness is common in some domains, e.g., in epidemiological forecasting; Cramer et al. (2022b) report that only 28 out of 71 forecasters of Covid-19 mortality submitted full forecasts for at least 60% of participating weeks in their analysis.) Concretely, in the first task, suppose that both Glenn and Bob forecast F, when the target distribution is G. In the second task, suppose Glenn forecasts F_σ, and the target distribution is G_σ with σ > 1; however, Bob makes no forecast. Which forecaster is awarded the least expected loss, now averaged over the observed forecasts, depends on the loss function being used. If the divergence induced by ℓ is symmetric and rescalable with constant scaling function (e.g., logarithmic and DS), then the expected loss at the second task equals that at the first task,

(ℓ(F, G) + ℓ(F_σ, G_σ))/2 = ℓ(F, G),

and neither Glenn nor Bob wins.
If, however, the divergence has increasing scaling function (e.g., CRP and energy), then the loss at the second task is greater, leading to Bob winning:

(ℓ(F, G) + ℓ(F_σ, G_σ))/2 > ℓ(F, G).

Conversely, if the divergence has decreasing scaling function (e.g., quadratic, and also spherical when F, G belong to the same scale family), then the loss at the second task is lesser, leading to Glenn winning:

(ℓ(F, G) + ℓ(F_σ, G_σ))/2 < ℓ(F, G).

Once again, we see that three different conclusions have been reached by using different loss functions.

6.2 A closer look at logarithmic loss in exponential families

In Section 3.3, we considered densities p_η(x) = h(x) e^{η T(x) − A(η)} in the exponential family {p_η : η > 0} and proved, for θ > 1 and ℓ being the logarithmic loss, that ℓ(p_{θη}, p_η) − ℓ(p_{η/θ}, p_η) is positive, negative, or zero when η³ A″(η) is respectively increasing, decreasing, or constant. Instead of setting a parametrization a priori (i.e., comparing θη to η/θ), one may instead ask about the sign of ℓ(p_{η1}, p_η) − ℓ(p_{η2}, p_η) for varying η1, with η2 and η fixed. Figure 7 visualizes this quantity for the normal scale family, and Theorem 6 provides a precise characterization for a wide class of distributions, which includes the normal scale family, log-normal log-scale family, exponential scale family, Weibull scale family, Laplace scale family, gamma scale family, and others.
Figure 7: The function ℓ(p_{η1}, p_η) − ℓ(p_{η2}, p_η) for normal densities with distinct fixed values of σ2 = 1/η2 (solid; σ2 = 0.6 in the left panel and σ2 = 3.0 in the right), the roots given by Theorem 6 in terms of the Lambert function (dashed), and the multiplicative inverse of the larger of the two roots (dotted).

Theorem 6. For an exponential family {p_η : η > 0} with log-partition function of the form A(η) = c1 log η + c2, for constants c1, c2 ∈ R, the roots of ℓ(p_{η1}, p_η) − ℓ(p_{η2}, p_η), as a function of η1, occur at

−η W(−(η2/η) exp(−η2/η)),

where W : [−1/e, ∞) → R² is the (two-branch) Lambert function, which satisfies the implicit equation x = W(x) exp(W(x)).

Proof. Representing the difference of logarithmic losses by the equivalent difference of KL divergences yields

ℓ(p_{η1}, p_η) − ℓ(p_{η2}, p_η) = d(p_{η1}, p_η) − d(p_{η2}, p_η) = E log(p_{η2}(Y)/p_{η1}(Y)) = A(η1) − A(η2) − (η1 − η2) E T(Y),

where Y is a random variable with density p_η. From the exponential family identity E_η T(Y) = A′(η), we have

ℓ(p_{η1}, p_η) − ℓ(p_{η2}, p_η) = A(η1) − A(η2) − (η1 − η2) A′(η) = c1 log η1 − c1 log η2 − (c1/η)(η1 − η2).

Setting the above display to zero gives

log(η1/η2) = (η1 − η2)/η.

After rearranging and taking exponents, we have

exp(η1) exp(−η log η1) = exp(η2) exp(−η log η2).

Exponentiating by −1/η and then multiplying by −1/η gives

−(η1/η) exp(−η1/η) = −(η2/η) exp(−η2/η),

from which the Lambert function is immediately recognized, yielding

−η1/η = W(−(η2/η) exp(−η2/η)).

The result follows by multiplying both sides by −η.

6.3 Threshold-weighted continuous ranked probability loss

In the introduction and throughout the paper, we alluded to results available for threshold-weighted CRP loss. Here we present the details, for weights of the form w(y) = y^α with α ∈ R, subject to w remaining nonnegative (power-weighted CRP loss). This loss has asymmetries in location and scale families governed by the exponent α.
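How the exponent α governs scale symmetry can be probed numerically. The sketch below integrates the weighted squared CDF difference w(y)(G_σ(y) − G(y))² for exponential CDFs; the choice of exponential scale family and of σ = 2 are illustrative assumptions of this example:

```python
import math

# Probing the alpha-governed asymmetry of power-weighted CRP loss:
# numerically integrate w(y) * (G_sigma(y) - G(y))^2 with w(y) = y**alpha,
# for exponential CDFs G_sigma(y) = 1 - exp(-y/sigma) (illustrative choice).
def divergence(sigma, alpha, h=1e-3, ymax=60.0):
    total, y = 0.0, h
    while y < ymax:
        diff = math.exp(-y) - math.exp(-y / sigma)
        total += y**alpha * diff * diff * h
        y += h
    return total

s = 2.0
d_m1, d_m1_inv = divergence(s, -1.0), divergence(1 / s, -1.0)  # alpha = -1
d_0, d_0_inv = divergence(s, 0.0), divergence(1 / s, 0.0)      # alpha = 0
```

At α = −1 the two values agree (symmetric under σ versus 1/σ), while at α = 0 (plain CRP loss) the inflated scale incurs the strictly larger divergence, and hence the larger loss against the same target.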
Indeed, as a direct consequence of Lemma 1, for {G_σ : σ > 0} a scale family, σ > 0, and ℓ a power-weighted CRP loss, we have:

• ℓ(G_σ, G) = ℓ(G_{1/σ}, G) if α = −1;
• ℓ(G_σ, G) > ℓ(G_{1/σ}, G) if α > −1;
• ℓ(G_σ, G) < ℓ(G_{1/σ}, G) if α < −1.

Moreover, for {G_µ : µ ∈ R} a location family, by a similar argument, we have:

• ℓ(G_µ, G) = ℓ(G_{−µ}, G) if α = 0 or µ = 0;
• ℓ(G_µ, G) > ℓ(G_{−µ}, G) if sign(α) = sign(µ) ≠ 0;
• ℓ(G_µ, G) < ℓ(G_{−µ}, G) otherwise.

We conclude that power-weighted CRP loss has an inherent trade-off: it can penalize symmetrically in location families at the expense of asymmetry in scale families (α = 0), or it can penalize symmetrically in scale families at the expense of asymmetry in location families (α = −1). There is no power weight function that guarantees symmetric penalties in both families simultaneously. A natural question now is whether there exists a weight function, not necessarily a power function, that wholly symmetrizes CRP loss in this context. Our next result answers this in the negative, apart from the trivial zero function.

Proposition 1. Let {G_σ : σ > 0} and {G_µ : µ ∈ R} be scale and location families, respectively. For the threshold-weighted CRP loss ℓ,

ℓ(G_σ, G_1) = ℓ(G_{1/σ}, G_1) and ℓ(G_µ, G_0) = ℓ(G_{−µ}, G_0), for all σ > 0, µ ∈ R ⟺ w(y) = 0, for all y ∈ 𝒴.

In other words, there does not exist a nonzero weight function such that symmetry is achieved for a location and scale family simultaneously.

Proof. The "if" direction is obvious. For the "only if" direction, observe that ℓ(G_σ, G) = ℓ(G_{1/σ}, G) implies

∫_𝒴 σ w(yσ)(G(y) − G(yσ))² dy = ∫_𝒴 w(y)(G(y) − G(yσ))² dy ⟹ σ w(yσ) = w(y),
whereas ℓ(G_µ, G) = ℓ(G_{−µ}, G) implies

∫_𝒴 w(y + µ)(G(y) − G(y + µ))² dy = ∫_𝒴 w(y)(G(y) − G(y + µ))² dy ⟹ w(y + µ) = w(y).

The only weight function that satisfies both conditions for all σ > 0 and µ ∈ R is the zero function: w(y) = 0 for all y ∈ 𝒴.

Therefore, the trade-off between ensuring symmetric penalties either in location or in scale families is not a peculiarity of power-weighted CRP loss, but extends to the more general threshold-weighted version. In this particular sense, CRP loss is unsymmetrizable. Exploring whether quantile-weighted CRP loss behaves similarly remains an avenue for future work.

Acknowledgements

We thank members of the Delphi research group, as well as Tilmann Gneiting, Evan Ray, and Nicholas Reich for useful discussions. We also thank the participants of the Berkeley Statistics probabilistic forecasting reading group. This work was supported by Centers for Disease Control and Prevention (CDC) grant number 75D30123C15907.

References

Sam Allen, Jonas Bhend, Olivia Martius, and Johanna Ziegel. Weighted verification tools to evaluate univariate and multivariate probabilistic forecasts for high-impact weather events. Weather and Forecasting, 38(3):499–516, 2023a.
Sam Allen, David Ginsbourger, and Johanna Ziegel. Evaluating forecasts for high-impact events using transformed kernel scores. SIAM/ASA Journal on Uncertainty Quantification, 11(3):906–940, 2023b.
Gianni Amisano and Raffaella Giacomini. Comparing density forecasts via weighted likelihood ratio tests. Journal of Business & Economic Statistics, 25(2):177–190, 2007.
George B. Arfken, Hans J. Weber, and Frank E. Harris. Mathematical Methods for Physicists: A Comprehensive Guide. Academic Press, 2011.
Ludwig Baringhaus and Carsten Franz. On a new multivariate two-sample test. Journal of Multivariate Analysis, 88(1):190–206, 2004.
Riccardo Benedetti. Scoring rules for forecast verification. Monthly Weather Review, 138(1):203–211, 2010.
José M. Bernardo. Expected information as expected utility.
Annals of Statistics, 7(3):686–690, 1979.
J. Eric Bickel. Some comparisons among quadratic, spherical, and logarithmic scoring rules. Decision Analysis, 4(2):49–65, 2007.
Johannes Bracher, Evan L. Ray, Tilmann Gneiting, and Nicholas G. Reich. Evaluating epidemic forecasts in an interval format. PLOS Computational Biology, 17(2):1–15, 2021.
Glenn W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950.
Andreas Buja, Werner Stuetzle, and Yi Shen. Loss functions for binary class probability estimation and classification: Structure and applications. Technical report, The Wharton School, University of Pennsylvania, 2005.
Arthur Carvalho. An overview of applications of proper scoring rules. Decision Analysis, 13(4):223–242, 2016.
Dombry Clement and Ahmed Zaoui. Distributional regression: CRPS-error bounds for model fitting, model selection and convex aggregation. In Advances in Neural Information Processing Systems, 2024.
Estee Y. Cramer, Yuxin Huang, Yijin Wang, Evan L. Ray, Matthew Cornell, Johannes Bracher, Andrea Brennen, Alvaro J. Castro Rivadeneira, Aaron Gerding, Katie H. House, et al. The United States COVID-19 Forecast Hub dataset. Scientific Data, 9, 2022a.
Estee Y. Cramer, Evan L. Ray, Velma K. Lopez, Johannes Bracher, Andrea Brennen, Alvaro J. Castro Rivadeneira, Aaron Gerding, Tilmann Gneiting, Katie H. House, Yuxin Huang, et al. Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States. Proceedings of the National Academy of Sciences, 119(15):e2113561119, 2022b.
Harald Cramér. On the composition of elementary errors. Scandinavian Actuarial Journal, 1928(1):141–180, 1928.
A.
https://arxiv.org/abs/2505.00937v1
Philip Dawid. Present position and potential developments: Some personal views: Statistical theory: The prequential approach. Journal of the Royal Statistical Society: Series A, 147(2):278–290, 1984.

A. Philip Dawid. The geometry of proper scoring rules. Annals of the Institute of Statistical Mathematics, 59:77–93, 2007.

A. Philip Dawid and Paola Sebastiani. Coherent dispersion criteria for optimal experimental design. Annals of Statistics, pages 65–81, 1999.

Bruno de Finetti. Theory of Probability. John Wiley & Sons, 1975.

Alexander Dietmüller, Albert Gran Alcoz, and Laurent Vanbever. FitNets: An adaptive framework to learn accurate traffic distributions. arXiv: 2405.10931, 2024.

Robert J. H. Dunn, Lisa V. Alexander, Markus G. Donat, Xuebin Zhang, Margot Bador, Nicholas Herold, et al. Development of an updated global land in situ-based data set of temperature and precipitation extremes: HadEX3. Journal of Geophysical Research: Atmospheres, 125(16):e2019JD032263, 2020.

Werner Ehm, Tilmann Gneiting, Alexander Jordan, and Fabian Krüger. Of quantiles and expectiles: Consistent scoring functions, Choquet representations and forecast rankings. Journal of the Royal Statistical Society: Series B, 78(3):505–562, 2016.

Veronika Eyring, Sandrine Bony, Gerald A. Meehl, Catherine A. Senior, Bjorn Stevens, Ronald J. Stouffer, and Karl E. Taylor. Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geoscientific Model Development, 9(5):1937–1958, 2016.

Kira Feldmann, Michael Scheuerer, and Thordis L. Thorarinsdottir. Spatial postprocessing of ensemble forecasts for temperature using nonhomogeneous Gaussian regression. Monthly Weather Review, 143(3):955–971, 2015.

Jerome Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1190–1232, 2001.

John W. Galbraith and Simon van Norden.
Assessing gross domestic product and inflation probability forecasts derived from Bank of England fan charts. Journal of the Royal Statistical Society: Series A, 175(3):713–727, 2012.

Tilmann Gneiting and Matthias Katzfuss. Probabilistic forecasting. Annual Review of Statistics and Its Application, 1:125–151, 2014.

Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.

Tilmann Gneiting and Roopesh Ranjan. Comparing density forecasts using threshold- and quantile-weighted scoring rules. Journal of Business & Economic Statistics, 29(3):411–422, 2011.

Tilmann Gneiting, Adrian E. Raftery, Anton H. Westveld, and Tom Goldman. Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Monthly Weather Review, 133(5):1098–1118, 2005.

Tilmann Gneiting, Fadoua Balabdaoui, and Adrian E. Raftery. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society: Series B, 69(2):243–268, 2007.

I. J. Good. Rational decisions. Journal of the Royal Statistical Society: Series B, 14(1):107–114, 1952.

I. J. Good. Comment on “Measuring information and uncertainty,” by R. J. Buehler. In Foundations of Statistical Inference, pages 337–339. Holt, Rinehart and Winston of Canada, 1971.

Alexander Henzi, Johanna F. Ziegel, and Tilmann Gneiting. Isotonic distributional regression. Journal of the Royal Statistical Society: Series B, 83(5):963–993, 2021.

Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli, and Rob J. Hyndman. Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond. International Journal of Forecasting, 32(3):896–913, 2016.

Stephen C. Hora and Erim Kardeş.
Calibration, sharpness and the weighting of experts in a linear opinion pool. Annals of Operations Research, 229(1):429–450, 2015.

Ian T. Jolliffe and David B. Stephenson. Forecast Verification: A Practitioner’s Guide in Atmospheric Science. John Wiley & Sons, 2012.

Thomas H. Jordan, Yun-Tai Chen, Paolo Gasparini, Raul Madariaga, Ian Main, Warner Marzocchi, Gerassimos Papadopoulos, Gennady Sobolev, Koshun Yamaoka, and Jochen Zschau. Operational earthquake forecasting: State of knowledge and guidelines for utilization. Annals of Geophysics, 54(4):315–391, 2011.

Robert W. Keener. Theoretical Statistics: Topics for a Core Course. Springer, 2010.

Seung-Jean Kim, Kwangmoo Koh, Stephen Boyd, and Dimitry Gorinevsky. ℓ1 trend filtering. SIAM Review, 51(2):339–360, 2009.

Dmitrii Kochkov, Janni Yuval, Ian Langmore, Peter Norgaard, Jamie Smith, Griffin Mooers, Milan Klöwer, James Lottes, Stephan Rasp, Peter Düben, et al. Neural general circulation models for weather and climate. Nature, 632(8027):1060–1066, 2024.

Roger Koenker and José A. F. Machado. Goodness of fit and related inference processes for quantile regression. Journal of the American Statistical Association, 94(448):1296–1310, 1999.

Solomon Kullback and Richard A. Leibler. On information and sufficiency. Annals of Mathematical Statistics, 22(1):79–86, 1951.

Francesco Laio and Stefania Tamea. Verification tools for probabilistic forecasts of continuous hydrological variables. Hydrology and Earth System Sciences, 11:1267–1277, 2007.

Kyungmin Lee and Hyeongkeun Lee. Pseudo-spherical knowledge distillation. In Proceedings of the International Joint Conference on Artificial Intelligence, 2022.

Sebastian Lerch, Thordis L. Thorarinsdottir, Francesco Ravazzolo, and Tilmann Gneiting. Forecaster’s dilemma: Extreme events and forecast evaluation. Statistical Science, 32(1):106–127, 2017.

Martin Leutbecher and Tim N. Palmer. Ensemble forecasting. Journal of Computational Physics, 227(7):3515–3539, 2008.
Reason L. Machete. Contrasting probabilistic scoring rules. Journal of Statistical Planning and Inference, 143(10):1781–1790, 2013.

Spyros Makridakis, Allan Andersen, Robert Carbone, Robert Fildes, Michèle Hibon, Rudolf Lewandowski, Joseph Newton, Emanuel Parzen, and Robert L. Winkler. The accuracy of extrapolation (time series) methods: Results of a forecasting competition. Journal of Forecasting, 1(2):111–153, 1982.

Spyros Makridakis, Evangelos Spiliotis, Vassilios Assimakopoulos, Zhi Chen, Anil Gaba, Ilia Tsetlin, and Robert L. Winkler. The M5 uncertainty competition: Results, findings and conclusions. International Journal of Forecasting, 38(4):1365–1385, 2022.

James E. Matheson and Robert L. Winkler. Scoring rules for continuous probability distributions. Management Science, 22(10):1087–1096, 1976.

Gerald A. Meehl, George J. Boer, Curt Covey, Mojib Latif, and Ronald J. Stouffer. Intercomparison makes for a better climate model. Eos, Transactions American Geophysical Union, 78(41):445–451, 1997.

Edgar C. Merkle and Mark Steyvers. Choosing a strictly proper scoring rule. Decision Analysis, 10(4):292–304, 2013.

Sebastian Meyer and Leonhard Held. Power-law models for infectious disease spread. Annals of Applied Statistics, 8(3):1612–1639, 2014.

Allan H. Murphy and Edward S. Epstein. A note on probability forecasts and “hedging”. Journal of Applied Meteorology, 6(6):1002–1004, 1967.

Frank Nielsen. On the Kullback-Leibler divergence between location-scale densities. arXiv: 1904.10428, 2019.

Florian Pappenberger, Jutta Thielen, and Mauro Del Medico. The impact of weather forecast improvements on large scale hydrology: Analysing a decade of forecasts of the European Flood Alert System. Hydrological Processes, 25(7):1091–1113, 2011.

Matthew Parry, A. Philip Dawid, and Steffen
Lauritzen. Proper local scoring rules. Annals of Statistics, 40(1):561–592, 2012.

Andrew J. Patton. Comparing possibly misspecified forecasts. Journal of Business & Economic Statistics, 38(4):796–809, 2020.

Adrian E. Raftery and Hana Ševčíková. Probabilistic population forecasting: Short to very long-term. International Journal of Forecasting, 39(1):73–97, 2023.

Stephan Rasp and Sebastian Lerch. Neural networks for postprocessing ensemble weather forecasts. Monthly Weather Review, 146(11):3885–3900, 2018.

Nicholas G. Reich, Logan C. Brooks, Spencer J. Fox, Sasikiran Kandula, Craig J. McGowan, Evan Moore, Dave Osthus, Evan L. Ray, Abhinav Tushar, Teresa K. Yamana, et al. A collaborative multiyear, multimodel assessment of seasonal influenza forecasting in the United States. Proceedings of the National Academy of Sciences, 116(8):3146–3154, 2019.

Johannes Resin, Daniel Wolffram, Johannes Bracher, and Timo Dimitriadis. Shift-dispersion decompositions of Wasserstein and Cramér distances. arXiv: 2408.09770, 2024.

Thornton B. Roby. Belief states and the uses of evidence. Behavioral Science, 10(3):255–270, 1965.

R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, 1970.

Leonard J. Savage. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association, 66(336):783–801, 1971.

Michael Scheuerer and Thomas M. Hamill. Variogram-based proper scoring rules for probabilistic forecasts of multivariate quantities. Monthly Weather Review, 143(4):1321–1334, 2015.

Michael Scheuerer and David Möller. Probabilistic wind speed forecasting on a grid based on ensemble model output statistics. Annals of Applied Statistics, 9(3):1328–1349, 2015.

Danijel Schorlemmer, Maximilian J. Werner, Warner Marzocchi, Thomas H. Jordan, Yosihiko Ogata, David D. Jackson, Sum Mak, David A. Rhoades, Matthew C. Gerstenberger, Naoshi Hirata, et al. The collaboratory for the study of earthquake predictability: Achievements and priorities.
Seismological Research Letters, 89(4):1305–1313, 2018.

Uwe Schulzweida. CDO user guide, October 2023. URL https://doi.org/10.5281/zenodo.10020800.

David W. Scott. Multivariate Density Estimation: Theory, Practice, and Visualization. Wiley, 1992.

Reinhard Selten. Axiomatic characterization of the quadratic scoring rule. Experimental Economics, 1:43–61, 1998.

Chenze Shao, Fandong Meng, Yijin Liu, and Jie Zhou. Language generation with strictly proper scoring rules. In Proceedings of the International Conference on Machine Learning, 2024.

Christopher G. Small. Functional Equations and How to Solve Them. Springer, 2007.

Gábor J. Székely and Maria L. Rizzo. A new test for multivariate normality. Journal of Multivariate Analysis, 93(1):58–80, 2005.

Gábor J. Székely and Maria L. Rizzo. Energy statistics: A class of statistics based on distances. Journal of Statistical Planning and Inference, 143(8):1249–1272, 2013.

Maxime Taillardat, Anne-Laure Fougères, Philippe Naveau, and Raphaël de Fondeville. Evaluating probabilistic forecasts of extremes using continuous ranked probability score distributions. International Journal of Forecasting, 39(3):1448–1459, 2023.

Thordis L. Thorarinsdottir, Jana Sillmann, Marion Haugen, Nadine Gissibl, and Marit Sandstad. Evaluation of CMIP5 and CMIP6 simulations of historical surface air temperature extremes using proper evaluation methods. Environmental Research Letters, 15(12):124041, December 2020.

Ryan J. Tibshirani. Adaptive piecewise polynomial estimation via trend filtering. Annals of Statistics, 42(1):285–323, 2014.

Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.

Yun Wang, Tuo Chen, Runmin Zou, Dongran Song, Fan Zhang, and Lingjun Zhang. Ensemble
probabilistic wind power forecasting with multi-scale features. Renewable Energy, 201:734–751, 2022a.

Yun Wang, Houhua Xu, Runmin Zou, Lingjun Zhang, and Fan Zhang. A deep asymmetric Laplace neural network for deterministic and probabilistic wind power forecasting. Renewable Energy, 196:497–517, 2022b.

Edward Wheatcroft. Evaluating probabilistic forecasts of football matches: The case against the ranked probability score. Journal of Quantitative Analysis in Sports, 17(4):273–287, 2021.

Robert L. Winkler and Allan H. Murphy. “Good” probability assessors. Journal of Applied Meteorology, 7:751–758, 1968.

Robert L. Winkler, Javier Munoz, José L. Cervera, José M. Bernardo, Gail Blattenberger, Joseph B. Kadane, Dennis V. Lindley, Allan H. Murphy, Robert M. Oliver, and David Ríos-Insua. Scoring rules and the evaluation of probabilities. Test, 5:1–60, 1996.

Keming Yu and Rana A. Moyeed. Bayesian quantile regression. Statistics & Probability Letters, 54(4):437–447, 2001.

Keming Yu and Jin Zhang. A three-parameter asymmetric Laplace distribution and its extension. Communications in Statistics: Theory and Methods, 34(9-10):1867–1879, 2005.

Lantao Yu, Jiaming Song, Yang Song, and Stefano Ermon. Pseudo-spherical contrastive divergence. In Advances in Neural Information Processing Systems, 2021.

A More details on COVID-19 mortality experiments

A.1 Converting Hub forecasts

Forecasts in the Hub appear in terms of their quantiles [F] = {F^{-1}(τ) : τ ∈ T}, where F is the forecasted cumulative distribution function and T is a discrete set of probability levels, containing evenly-spaced values from 0.05 to 0.95 in increments of 0.05, as well as 0.01, 0.025, 0.975, and 0.99. Thus a forecast is in fact not a probability distribution but an equivalence class [F] over probability distributions, comprising all distributions that agree on every quantile in T.
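As an illustration of the equivalence class (this is a sketch, not the Hub's specification), the code below builds one simple representative of [F]: a piecewise-linear CDF through the reported quantiles, clamped outside the reported range, since tail behavior is not identified by T. The function name and example numbers are hypothetical; the representative actually used in this appendix instead has a piecewise-linear density with exponential tails.

```python
import numpy as np

def representative_cdf(levels, quantiles):
    """Piecewise-linear CDF through the reported (level, quantile) pairs.

    This is one member of the equivalence class [F]; any distribution
    agreeing with F^{-1} on every level in T is equivalent.
    """
    levels = np.asarray(levels, dtype=float)
    quantiles = np.asarray(quantiles, dtype=float)

    def F(y):
        # Linear interpolation between knots; np.interp clamps to the
        # smallest/largest reported level outside the quantile range.
        return np.interp(y, quantiles, levels)

    return F

# Hypothetical forecast: quantiles at a few probability levels in T.
T = [0.025, 0.25, 0.5, 0.75, 0.975]
q = [10.0, 40.0, 55.0, 70.0, 120.0]
F = representative_cdf(T, q)
print(F(55.0))  # recovers the reported level at a knot: 0.5
```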
In order to compute the loss for a given forecast and observed outcome, we choose a particular representative from the equivalence class [F], described as follows. First, we set the representative to have a density which is piecewise linear between min[F] and max[F], with knots at the elements of [F]. Second, we set the representative to have lower and upper tails (below min[F] and above max[F], respectively) of exponential distributions with quantiles matching [F] at the bottom two and top two quantiles in T, respectively. If F is the representative of [F] described above, then we define the loss of the forecast to be ℓ([F], y) = ℓ(F, y). Likewise, the forecast variance was defined as the variance of a random variable with distribution F. Finally, we discarded forecasts from the Hub with atoms (quantiles which were equal at adjacent probability levels), or forecasts with quantile crossings (quantiles which were out of order at adjacent levels).

A.2 Standardizing target values

For each location, we estimated the smoothed mean function of the target distribution across time using trend filtering of cubic order (Kim et al., 2009; Tibshirani, 2014) on the observed death counts. Trend filtering is a general-purpose nonparametric smoother that acts similarly to a locally-adaptive regression spline; it is formulated as the solution to a penalized least squares problem, and we used a cross-validation scheme with the one-standard-error rule to choose the regularization parameter. We then estimated the smoothed variance of the target distribution across dates by applying trend filtering of constant order to the
Rerandomization for covariate balance mitigates p-hacking in regression adjustment

Xin Lu, Peng Ding*

Abstract

Rerandomization enforces covariate balance across treatment groups in the design stage of experiments. Despite its intuitive appeal, its theoretical justification remains unsatisfying because its benefits of improving efficiency for estimating the average treatment effect diminish if we use regression adjustment in the analysis stage. To strengthen the theory of rerandomization, we show that it mitigates false discoveries resulting from p-hacking, the practice of strategically selecting covariates to get more significant p-values. Moreover, we show that rerandomization with a sufficiently stringent threshold can resolve p-hacking. As a byproduct, our theory offers guidance for choosing the threshold in rerandomization in practice.

Keywords: experiment design; covariate adjustment; completely randomized experiment

*Xin Lu, Department of Statistics and Data Science, Beijing, China (E-mail: lux20@mails.tsinghua.edu.cn). Peng Ding, Department of Statistics, University of California, Berkeley, CA 94720 (E-mail: pengdingpku@berkeley.edu). Peng Ding is partially supported by National Science Foundation grant #1945136.

arXiv:2505.01137v1 [math.ST] 2 May 2025

1 Introduction

Randomized experiments are the gold standard for estimating the average treatment effect (ATE). They balance observed and unobserved covariates on average. Chance covariate imbalances are common in realized treatment allocations, which may complicate the interpretation of the estimated ATE. Rerandomization, termed by Cox (1982) and Morgan & Rubin (2012), enforces covariate balance in the design stage by rejecting allocations with covariate imbalances, while regression adjustment addresses covariate imbalances in the analysis stage (Fisher 1935, Freedman 2008, Lin 2013).
Bruhn & McKenzie (2009) conducted a survey of leading experimental researchers in development economics, and suggested that rerandomization is commonly used yet often poorly documented. Gerber & Green (2012) recommended rerandomization as a way to approximate blocking. Although rerandomization is intuitive and has been widely implemented in practice, its necessity has not been fully justified. Since Morgan & Rubin (2012), most papers on rerandomization have focused on the aspect of efficiency for estimating the ATE. That is, rerandomization decreases the variances of the estimators of the ATE if the covariates are predictive of the outcomes (Branson et al. 2016, Morgan & Rubin 2012, Li et al. 2018, Wang & Li 2022, Wang et al. 2023, Li & Ding 2020). However, those papers demonstrated that when we use the same set of covariates in rerandomization and regression adjustment, rerandomization cannot further improve efficiency compared with regression adjustment alone. Given regression adjustment, the additional benefits of rerandomization remain unclear if the criterion is efficiency for estimating the ATE.

We depart from the current literature on efficiency and offer a new perspective. In particular, we demonstrate that rerandomization can mitigate p-hacking caused by strategically selecting covariates in regression adjustment. The term p-hacking, coined by Simmons et al. (2011), refers to the practices that a researcher might use to generate more significant p-values (Brodeur et al. 2020). It has become a pervasive and profound issue, undermining the credibility and reproducibility of scientific discoveries (John et al. 2012, Ioannidis 2005). Among the practices of p-hacking, strategically selecting covariates is especially noteworthy, as it is
https://arxiv.org/abs/2505.01137v1
common for researchers to explore various covariate combinations in the analysis stage (Simmons et al. 2011).

Our contributions are twofold. First, we study two rerandomization schemes: rerandomization with Mahalanobis distance (ReM) (Cox 1982, Morgan & Rubin 2012) and rerandomization based on the p-values of marginal t-tests (ReP) (Gerber & Green 2012, Zhao & Ding 2024). We demonstrate that rerandomization can lower the type I error rate due to p-hacking, and as rerandomization enforces nearly perfect balancing of the covariates, it can control the type I error rate. The intuition is that when we balance the covariates by rerandomization, the p-values from regression adjustments with different covariate combinations tend to be coherent with each other; see Zhao & Ding (2024) for more discussion about rerandomization and coherence. Therefore, p-hacking will not distort the p-values too much under rerandomization. Second, our theory gives recommendations for the rerandomization thresholds to control the type I error rates.

Notation: Let P_∞ denote the probability measure with respect to the asymptotic distribution. For two finite-population arrays {a_i}_{i=1}^n and {b_i}_{i=1}^n, define the finite-population mean \bar{a} = n^{-1} \sum_{i=1}^n a_i and covariance S_{ab} = (n−1)^{-1} \sum_{i=1}^n (a_i − \bar{a})(b_i − \bar{b})^⊤. Let Φ(·) be the cumulative distribution function of the standard normal distribution, and let z_{1−α} be the upper αth quantile of the standard normal distribution. Let χ^2_K be the χ^2 distribution with K degrees of freedom, and let χ^2_{K,1−α} be the upper αth quantile of χ^2_K. For a set of tuples {(u_i, V_{i1}, ..., V_{iL}) : u_i ∈ ℝ, V_{il} ∈ ℝ^{K_l}, i = 1, ..., n, l = 1, ..., L}, let lm(u_i ∼ V_{i1} + ··· + V_{iL}) be the ordinary least squares (OLS) fit of u_i on (V_{i1}, ..., V_{iL}).

2 Design-based framework for hacked p-values

Consider a completely randomized experiment (CRE) with n_z units, z = 0, 1, assigned to treatment group z, and n_1 + n_0 = n.
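The CRE just described can be sketched in a few lines: under the design-based framework, the only source of randomness is which n_1 of the n units receive treatment. A minimal sketch (function name hypothetical):

```python
import numpy as np

def complete_randomization(n, n1, rng):
    """Draw a CRE assignment: exactly n1 of n units get Z_i = 1."""
    z = np.zeros(n, dtype=int)
    z[rng.choice(n, size=n1, replace=False)] = 1
    return z

rng = np.random.default_rng(0)
n, n1 = 100, 50
Z = complete_randomization(n, n1, rng)
print(Z.sum())  # 50: the treated-group size is fixed by design
```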
Let Y_i(z) be the potential outcome of unit i under treatment z, and let Z_i ∈ {0, 1} be the treatment indicator of unit i. The observed outcome is Y_i = Y_i(1) Z_i + Y_i(0)(1 − Z_i). Let τ_i = Y_i(1) − Y_i(0) be the individual-level treatment effect. For unit i, we observe K covariates, x_i = (x_{i1}, ..., x_{iK}). We are interested in estimating the ATE:

\bar{τ} = n^{-1} \sum_{i=1}^n τ_i.

We focus on the design-based inference framework, which relies solely on the randomness of treatment assignment, conditional on the potential outcomes and covariates (Neyman 1990, Freedman 2008, Imbens & Rubin 2015). Consequently, conclusions drawn from this framework do not depend on the underlying stochastic process of the potential outcomes and covariates; see Li & Ding (2017) for a review.

Fisher (1935) considered testing the sharp null hypothesis:

H_{0f}: Y_i(1) = Y_i(0), i = 1, ..., n.

Although we observe only one of the two potential outcomes for each unit, the sharp null hypothesis enables us to impute all the missing potential outcomes and implement the Fisher randomization test. Specifically, for any test statistic, we can compute its finite-sample exact p-value by randomly permuting the treatment indicators to simulate the randomization distribution.

Neyman (1990) considered the weak null hypothesis:

H_{0n}: \bar{τ} = 0.

The common choices of test statistics for the weak null hypothesis are the studentized ATE
estimators. For example, the coefficient estimator of Z_i from the following OLS regression is an unbiased estimator of \bar{τ} (Freedman 2008, Theorem 1):

lm(Y_i ∼ 1 + Z_i), (1)

which we denote by \hat{τ} = n_1^{-1} \sum_{i: Z_i=1} Y_i − n_0^{-1} \sum_{i: Z_i=0} Y_i. The corresponding t-statistic using the associated Eicker–Huber–White (EHW) standard error is asymptotically valid for testing H_{0n}.

With covariates x_i, regression adjustment is often used in the analysis stage to reduce the variance of the ATE estimator, thereby improving the power of the t-test (Fisher 1935, Freedman 2008, Lin 2013). Assume that the covariates are centered such that \bar{x} = 0 and that the covariance matrix S_{xx} is nonsingular, to exclude collinearity. Lin (2013) proposed the interacted regression for regression adjustment, which adds the covariates and the treatment–covariate interactions to the OLS regression:

lm(Y_i ∼ 1 + Z_i + x_i + Z_i x_i). (2)

It guarantees an asymptotic variance reduction for the ATE estimator; i.e., its coefficient estimator of Z_i has smaller asymptotic variance than \hat{τ}. Let \hat{τ}_L, \hat{se}_L, and T_L = \hat{τ}_L / \hat{se}_L be the ATE estimator, the EHW standard error, and the t-statistic obtained from Lin's interacted regression. Lin (2013) showed that the p-value of the two-sided t-test, p_L = 2(1 − Φ(|T_L|)), is valid for testing the weak null hypothesis H_{0n} even when the linear model in (2) is misspecified.

However, p_L can be susceptible to p-hacking caused by strategically selecting covariates, driven by the pressure on researchers to publish statistically significant results. Even if the researchers have no intention of fraud, it is common for them to explore various covariate combinations and report only favorable p-values (Simmons et al. 2011). We now formally define the hacked p-value caused by strategically selecting covariates. Let [K] = {1, 2, ..., K}. Given 𝒦 ⊆ [K], let x_{i𝒦} = (x_{ik}, k ∈ 𝒦) be the subset of covariates in 𝒦.
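To make the covariate-selection search concrete, the following simulation sketch fits the interacted regression (2) with HC0 (EHW) standard errors for each covariate subset and reports the most favorable p-value, on simulated data with no treatment effect. This is illustrative code, not the paper's implementation; all names are hypothetical.

```python
import itertools

import numpy as np
from scipy import stats

def ehw_t_pvalue(y, X, coef_idx):
    """OLS with Eicker-Huber-White (HC0) standard errors; returns the
    two-sided normal-reference p-value for the coefficient at coef_idx."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (X * resid[:, None] ** 2)   # X' diag(resid^2) X
    cov = XtX_inv @ meat @ XtX_inv           # HC0 sandwich covariance
    t = beta[coef_idx] / np.sqrt(cov[coef_idx, coef_idx])
    return 2 * (1 - stats.norm.cdf(abs(t)))

def hacked_p_value(y, Z, x):
    """Minimum, over all covariate subsets, of the p-value from the
    interacted regression lm(Y ~ 1 + Z + x_K + Z * x_K)."""
    n, K = x.shape
    xc = x - x.mean(axis=0)                  # center the covariates
    pvals = []
    for r in range(K + 1):
        for S in itertools.combinations(range(K), r):
            xS = xc[:, list(S)]              # S = () recovers regression (1)
            X = np.column_stack([np.ones(n), Z, xS, Z[:, None] * xS])
            pvals.append(ehw_t_pvalue(y, X, coef_idx=1))
    return min(pvals)

rng = np.random.default_rng(1)
n, K = 200, 4
x = rng.normal(size=(n, K))
Z = np.zeros(n, dtype=int)
Z[rng.choice(n, n // 2, replace=False)] = 1
y = x @ np.ones(K) + rng.normal(size=n)      # no treatment effect (sharp null)
p_hacked = hacked_p_value(y, Z, x)
print(p_hacked)
```

By construction the reported value can only be smaller than the p-value of any single pre-specified regression, which is exactly the inflation studied below.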
Let \hat{τ}_{L,𝒦}, \hat{se}_{L,𝒦}, T_{L,𝒦}, and p_{L,𝒦} be the ATE estimator, the EHW standard error, the t-statistic, and the two-sided p-value obtained through (2) with x_i replaced by x_{i𝒦}. (1) corresponds to 𝒦 = ∅ and (2) corresponds to 𝒦 = [K]. The researcher might search through all possible covariate combinations 𝒦 in [K] to report the most significant p-value. Define the hacked p-value for testing H_{0n} as the minimal p-value over all possible covariate combinations:

p^h_L = min_{𝒦 ⊆ [K]} p_{L,𝒦}.

Let r_z = n_z / n, z = 0, 1, be the proportion of treatment group z. Let e_i(z), i = 1, ..., n, be the residuals of lm(Y_i(z) ∼ 1 + x_i), and let R^2_x be the R^2 of lm(r_1^{-1} Y_i(1) + r_0^{-1} Y_i(0) ∼ 1 + x_i); both are finite-population quantities. We study the asymptotic properties of p^h_L under the following standard condition for asymptotic analysis (Li & Ding 2017).

Condition 1. As n → ∞, for z = 0, 1: (i) r_z has a positive limit; (ii) S_{Y(z)Y(z)}, S_{xx}, S_{xY(z)}, and S_{ττ} have finite limits, the limit of S_{r_1^{-1}e(1)+r_0^{-1}e(0), r_1^{-1}e(1)+r_0^{-1}e(0)} is positive, and the limit of S_{xx} is nonsingular; and (iii) max_{1≤i≤n} |Y_i(z) − \bar{Y}(z)|^2 = o(n) and max_{1≤i≤n} ‖x_i‖^2_∞ = o(n).

Proposition 1. Under H_{0f}, we have P_∞(p^h_L ≤ α) ≥ α, with equality if and only if R^2_x = 0.

Proposition 1 demonstrates that under H_{0f}, the type I error rate of the hacked p-value is greater than the significance level unless R^2_x = 0. Under H_{0n}, the problem is worse: since H_{0f} implies H_{0n}, we have
sup_{H_{0n}} P_∞(p^h_L ≤ α) ≥ sup_{H_{0f}} P_∞(p^h_L ≤ α). In the next section, we show that rerandomization can mitigate the type I error rate inflation due to p-hacking.

3 Rerandomization mitigates p-hacking

In the design stage, covariate imbalances may occur, which complicates the interpretation of the ATE estimators. Rerandomization has been used in the design stage to avoid covariate imbalances. It draws a desired assignment by rejective sampling until the assignment satisfies a given requirement on a covariate balance measure (Cox 1982, Morgan & Rubin 2012). Most of the existing papers on rerandomization focused on its benefits for improving efficiency, namely its ability to reduce the asymptotic variance of the unadjusted ATE estimators; see, for example, Li et al. (2018) and Wang et al. (2023). However, the benefits diminish if we use regression adjustment in the analysis stage (Li & Ding 2020). Zhao & Ding (2024) provided a different angle, demonstrating that rerandomization improves coherence between regression-adjusted estimators.

In this section, we quantify the benefit of rerandomization under a different statistical framework. In particular, we demonstrate that rerandomization can mitigate p-hacking caused by strategically selecting covariates in regression adjustment, and can even resolve it entirely when the threshold is stringent enough. To illustrate the concept, we focus on two rerandomization schemes, ReM and ReP; the theory extends to other rerandomization schemes.

3.1 ReM mitigates p-hacking

The difference in covariate means \hat{τ}_x = n_1^{-1} \sum_{i: Z_i=1} x_i − n_0^{-1} \sum_{i: Z_i=0} x_i provides an intuitive measure of imbalance. Rerandomization based on the Mahalanobis distance accepts an assignment if and only if the Mahalanobis distance of \hat{τ}_x is below a given threshold a (Morgan & Rubin 2012):

A_rem(a) = { \hat{τ}_x^⊤ cov^{-1}(\hat{τ}_x) \hat{τ}_x ≤ a }.

Theorem 1. Under Condition 1 and H_{0n}, we have, for any a > 0 and α ∈ (0,1): (i) P_∞(p^h_L ≤ α | A_rem(a)) − P_∞(p^h_L ≤ α) ≤ 0; (ii) lim_{a→0} P_∞(p^h_L ≤ α | A_rem(a)) ≤ α.
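ReM itself is straightforward to implement as rejective sampling. A minimal sketch (names hypothetical), using the finite-population covariance cov(\hat{τ}_x) = (1/n_1 + 1/n_0) S_{xx} under the CRE:

```python
import numpy as np

def rem_assignment(x, n1, a, rng):
    """Rejective sampling (ReM): redraw the CRE assignment until the
    Mahalanobis distance of the covariate-mean difference is at most a."""
    n, _ = x.shape
    n0 = n - n1
    # cov(tau_hat_x) = (1/n1 + 1/n0) * S_xx under complete randomization,
    # with S_xx the finite-population covariate covariance (ddof = 1).
    Sxx = np.cov(x, rowvar=False)
    cov_inv = np.linalg.inv((1.0 / n1 + 1.0 / n0) * Sxx)
    while True:
        z = np.zeros(n, dtype=int)
        z[rng.choice(n, size=n1, replace=False)] = 1
        tau_x = x[z == 1].mean(axis=0) - x[z == 0].mean(axis=0)
        if tau_x @ cov_inv @ tau_x <= a:
            return z

rng = np.random.default_rng(2)
x = rng.normal(size=(200, 3))
Z = rem_assignment(x, n1=100, a=0.5, rng=rng)  # smaller a => tighter balance
```

With K = 3 covariates and a = 0.5, the acceptance probability is roughly P(χ²₃ ≤ 0.5) ≈ 8%, so the loop terminates quickly; stringent thresholds trade computing time for balance.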
Theorem 1 is our key result. Theorem 1 (i) states that ReM mitigates p-hacking, because the asymptotic type I error rate of the hacked p-value under ReM never exceeds that under the CRE. Theorem 1 (ii) states that when the threshold a of ReM tends to 0, ReM resolves p-hacking, controlling the type I error rate.

We explain the intuition behind the results of Theorem 1. As we will demonstrate in the Supplementary Material,

\hat{τ}_{L,𝒦} ≈ \hat{τ} − β_{L,𝒦}^⊤ \hat{τ}_𝒦, (3)

where \hat{τ}_𝒦 is the subvector of \hat{τ}_x in 𝒦 and β_{L,𝒦} is the vector of regression coefficients of x_{i𝒦} in lm(r_1^{-1} Y_i(1) + r_0^{-1} Y_i(0) ∼ 1 + x_{i𝒦}). Rerandomization ensures that \hat{τ}_x is small, so the \hat{τ}_{L,𝒦}, 𝒦 ⊆ [K], are close to each other (Zhao & Ding 2024). Therefore, under rerandomization, p-hacking will not change the p-values too much.

3.2 Guidance on selecting the rerandomization threshold

Theorem 1 motivates a natural question: assume that in the design stage we do not observe (Y_i(1), Y_i(0))_{i=1}^n, we only observe x_i, and we wish to control the type I error rate due to p-hacking in the future. How should we specify the threshold a of ReM in the design stage? We address this question in this section.

Let R^2_𝒦 be the R^2 of lm(r_1^{-1} Y_i(1) + r_0^{-1} Y_i(0) ∼ 1 + x_{i𝒦}). By definition, R^2_x = R^2_{[K]}. Let [−k] = [K] \ {k}. For k ∈ [K], define

Δ_k = R^2_x − R^2_{[−k]}

as the difference in R^2 of covariate sets [K]
and [−k]. By definition, Δ_k / (1 − R^2_{[−k]}) equals the partial correlation of the full model lm(r_1^{-1} Y_i(1) + r_0^{-1} Y_i(0) ∼ 1 + x_i) versus the reduced model lm(r_1^{-1} Y_i(1) + r_0^{-1} Y_i(0) ∼ 1 + x_{i[−k]}). Hence, Δ_k can be interpreted as the scaled partial R^2 of covariate k. Our subsequent discussion depends on R^2_x and Δ = min_{k∈[K]} Δ_k.

Assume that, in addition, we have prior knowledge about Δ and R^2_x in the design stage. Define M(R^2_x, Δ) as the set of possible potential outcomes (Y_i(1), Y_i(0))_{i=1}^n given Δ and R^2_x. For a meaningful discussion, we only consider (R^2_x, Δ) ∈ (0,1) × (0,1) such that M(R^2_x, Δ) ≠ ∅; we denote the set of such (R^2_x, Δ) by B ⊆ (0,1) × (0,1). Theorem 2 gives a necessary and sufficient condition to universally control the type I error rate for all (Y_i(1), Y_i(0))_{i=1}^n ∈ M(R^2_x, Δ).

Theorem 2. Under Condition 1 and H_{0n}, for any (R^2_x, Δ) ∈ B and α ∈ (0,1), we have

max_{(Y_i(1),Y_i(0))_{i=1}^n ∈ M(R^2_x,Δ)} P_∞(p^h_L ≤ α | A_rem(a)) ≤ α

if and only if a ≤ \bar{a}_rem(α, R^2_x, Δ), where

\bar{a}_rem(α, R^2_x, Δ) = z^2_{1−α/2} Δ / ( √(1 − R^2_x) + √(1 − R^2_x + Δ) )^2. (4)

By Theorem 2, \bar{a}_rem(α, R^2_x, Δ) is the minimal requirement to resolve p-hacking given (α, R^2_x, Δ). It is intuitive that the threshold \bar{a}_rem should depend on R^2_x, since the magnitude of β_{L,𝒦} in (3) is influenced by R^2_x. While p^h_L is not invariant to non-degenerate linear transformations of x_i, R^2_x is. Therefore, besides R^2_x, we also need to know Δ to resolve p-hacking. We can show that \bar{a}_rem is an increasing function of Δ and R^2_x, and a decreasing function of α. A small value of \bar{a}_rem indicates that resolving p-hacking is challenging. The implications are threefold. First, it is easier to resolve p-hacking for a test requiring a lower significance level α. Second, it is easier to resolve p-hacking when the covariates are predictive of the potential outcomes. Third, it is challenging to resolve p-hacking when Δ is small, such as when there are redundant covariates or highly correlated covariates. Given R^2_x, Δ can be arbitrarily close to 0 if there are redundant covariates.
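Equation (4) is straightforward to evaluate numerically. The sketch below (function name hypothetical) reproduces the worked example discussed next, where Condition 2 gives Δ = R^2_x / K:

```python
import numpy as np
from scipy import stats

def a_bar_rem(alpha, R2, delta):
    """Threshold (4): the largest ReM threshold a that still resolves
    p-hacking for a given (alpha, R^2_x, Delta)."""
    z = stats.norm.ppf(1 - alpha / 2)
    return z**2 * delta / (np.sqrt(1 - R2) + np.sqrt(1 - R2 + delta)) ** 2

# Worked example from the text: K = 5 orthogonal, equally important
# covariates, so Delta = R^2_x / K.
alpha, R2, K = 0.05, 0.6, 5
a = a_bar_rem(alpha, R2, R2 / K)
print(round(a, 3))               # 0.252
print(stats.chi2.cdf(a, df=K))   # acceptance probability ~ 0.0015 (0.15%)
```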
Consequently, \bar{a}_rem can also be arbitrarily close to 0. Therefore, when we only know R^2_x, we need additional assumptions to obtain a nontrivial threshold \bar{a}_rem ≠ 0. We consider a set of orthogonal and equally important covariates, summarized by the following condition.

Condition 2. S_{xx} is a diagonal matrix and R^2_k = R^2_1 for k ∈ [K].

Under Condition 2, we have Δ = R^2_x / K. By Theorem 2, we require a ≤ \bar{a}_rem(α, R^2_x, R^2_x/K) to resolve p-hacking, which requires only specification of R^2_x and α. We can check that \bar{a}_rem(α, R^2_x, R^2_x/K) is an increasing function of R^2_x and a decreasing function of α, suggesting that ReM resolves p-hacking more easily if α is small and R^2_x is large. Consider K = 5: if we specify α = 0.05 and R^2_x = 0.6, we require a ≤ 0.252 ≈ χ^2_{5, 0.15%}. Namely, we require an asymptotic acceptance probability of 0.15%, which is less stringent than the acceptance probability of 0.1% recommended by Li et al. (2018). By the finite-population central limit theorem (Li & Ding 2017), the asymptotic
acceptance probability of A_rem(a) is P_∞(A_rem(a)) = P(χ^2_K ≤ a). Figure 1 plots the asymptotic acceptance probability of A_rem(a) with a = \bar{a}_rem(α, R^2_x, R^2_x/K) under α ∈ {0.05, 0.1}, R^2_x ∈ [0.1, 1], and K ∈ {1, 2, 3, 4, 5, 6}. The asymptotic acceptance probability decreases dramatically as R^2_x decreases. For example, when K = 5 and α = 0.05, as R^2_x decreases from 0.7 to 0.1, the asymptotic acceptance probability decreases from about 4 × 10^{-3} to an extremely stringent 3 × 10^{-6}. So resolving p-hacking is challenging when the covariates are not predictive of the potential outcomes.

When we know neither R^2_x nor Δ, for any threshold a ≠ 0 of ReM there always exists type I error inflation for the worst case of (Y_i(1), Y_i(0))_{i=1}^n. However, to choose a threshold a ≠ 0 of ReM, it is useful to obtain an upper bound on the type I error rate to assess the potential inflation under this threshold.

Theorem 3. Under Condition 1 and H_{0n}, we have

P_∞(p^h_L ≤ α | A_rem(a)) ≤ P( (ε^2 + V)^{1/2} ≥ z_{1−α/2} | V ≤ a ),

where ε and V are independent standard Gaussian and χ^2_K random variables, respectively.

The bound in Theorem 3 depends on neither R^2_x nor Δ. It is also independent of the covariance structure of the covariates, owing to the invariance of ReM under non-degenerate linear transformations of x. As a approaches 0, the right-hand side of the inequality tends to P(|ε| ≥ z_{1−α/2}) = α, echoing Theorem 1 (ii). When we know neither R^2_x nor Δ, we can determine a threshold with acceptable inflation of the type I error rate based on Theorem 3. When K = 5, α = 0.05, and a = χ^2_{5, 0.01}, the type I error bound obtained by Theorem 3 is approximately 0.063, reflecting an inflation of 0.013 over the significance level of 0.05. Given the above bound, we can specify a test which rejects H_{0n} if p^h_L ≤ α − γ so that the type I error is less than α, where γ is the solution of

P( (ε^2 + V)^{1/2} ≥ z_{1−α/2+γ/2} | V ≤ a ) = α.

Continuing the example with K = 5, α = 0.05, and a = χ^2_{5, 0.01}, the corresponding γ ≈ 0.01.
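The bound in Theorem 3 is easy to check by Monte Carlo. The sketch below samples V | V ≤ a by the inverse-CDF trick and evaluates the bound for the example above (K = 5, α = 0.05, a equal to the lower 1% quantile of χ²_5); names and sample size are my own choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
K, alpha = 5, 0.05
a = stats.chi2.ppf(0.01, df=K)   # acceptance probability 1%

# V ~ chi^2_K conditional on V <= a: apply the chi^2 inverse CDF to
# Uniform(0, 0.01) draws. Then estimate
# P((eps^2 + V)^{1/2} >= z_{1-alpha/2} | V <= a).
m = 1_000_000
V = stats.chi2.ppf(rng.uniform(0, 0.01, size=m), df=K)
eps = rng.normal(size=m)
z = stats.norm.ppf(1 - alpha / 2)
bound = np.mean(np.sqrt(eps**2 + V) >= z)
print(bound)  # close to the ~0.063 reported in the text
```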
Figure 1: Required acceptance probability of ReM to resolve p-hacking under different $\alpha$, $R^2_x$ and $K$ when Condition 2 holds. The acceptance probability is plotted under the $\log_{10}$ transformation.

3.3 ReP mitigates p-hacking

Another popular way of checking balance is to perform a two-sample t-test for each covariate and report all the associated p-values (Zhao & Ding 2024, Bruhn & McKenzie 2009, Gerber & Green 2012). Let $p_{t,k}$ be the p-value of the two-sided two-sample t-test using the EHW standard error based on data $\{x_{ik}, Z_i\}_{i=1}^n$, $k \in [K]$. To balance the covariates, an intuitive and possibly widely used method is to perform rejective sampling until every p-value passes the balance check: $\min_{k \in [K]} p_{t,k} \geq \alpha_t$ for some prespecified threshold $\alpha_t$. Denote this rerandomization scheme by $A_{\mathrm{rep}}(\alpha_t) = \{\min_{k \in [K]} p_{t,k} \geq \alpha_t\}$. For a matrix $V \in \mathbb{R}^{S \times S}$, let $\sigma(V) = \mathrm{diag}(V_{ss}^{1/2})_{s=1}^S$ and $D(V) = \sigma(V)^{-1} V \sigma(V)^{-1}$. Theorem 4 parallels Theorem 1.

Theorem 4. Under Condition 1 and $H_{0n}$, for any $\alpha_t \in (0,1)$ and $\alpha \in (0,1)$, we have
(i) $P_\infty(p^h_L \leq \alpha \mid A_{\mathrm{rep}}(\alpha_t)) - P_\infty(p^h_L \leq \alpha) \leq 0$, and
(ii) $\lim_{\alpha_t \to 1} P_\infty(p^h_L \leq \alpha \mid A_{\mathrm{rep}}(\alpha_t)) \leq \alpha$.

Theorem 4
(i) states that ReP mitigates p-hacking. Theorem 4 (ii) states that as $\alpha_t \to 1$, ReP resolves p-hacking. Define $\|V\|_{\infty,2} = \max_{\|u\|_\infty = 1} \|V u\|_2$. ReP restricts the $\ell_\infty$ norm of $\hat\tau_x$, while ReM restricts its $\ell_2$ norm. Since $\hat\tau_x$ in $A_{\mathrm{rem}}(\bar a_{\mathrm{rem}})$ is balanced enough to resolve p-hacking, we can use $A_{\mathrm{rep}}(\alpha_t)$ to resolve p-hacking if $A_{\mathrm{rep}}(\alpha_t) \subseteq A_{\mathrm{rem}}(\bar a_{\mathrm{rem}})$. Corollary 1, which serves both as a corollary of Theorem 2 and as a parallel result to it, gives such a threshold for ReP depending on $(\alpha, R^2_x, \Delta)$.

Corollary 1. Under Condition 1 and $H_{0n}$, for any $(R^2_x, \Delta) \in \mathcal{B}$ and $\alpha \in (0,1)$, if $\alpha_t$ is large enough to satisfy
$$z_{1-\alpha_t/2} \leq \|D(S_{xx})^{-1/2}\|_{\infty,2}^{-1} \left\{ \bar a_{\mathrm{rem}}(\alpha, R^2_x, \Delta) \right\}^{1/2}, \quad (5)$$
we have
$$\max_{(Y_i(1), Y_i(0))_{i=1}^n \in \mathcal{M}(R^2_x, \Delta)} P_\infty(p^h_L \leq \alpha \mid A_{\mathrm{rep}}(\alpha_t)) \leq \alpha.$$

(5) is motivated by the condition that $A_{\mathrm{rep}}(\alpha_t) \subseteq A_{\mathrm{rem}}(\bar a_{\mathrm{rem}})$. By Corollary 1, similar to the discussion of Theorem 2, the threshold is more stringent with a larger $\alpha$, a smaller $\Delta$ and a smaller $R^2_x$; therefore, our discussion for ReM under Theorem 2 also applies to the setting of ReP. Corollary 1 is a sufficient condition to resolve p-hacking, which can be overly stringent. With Condition 2, we can obtain a less stringent threshold. Since $\Delta = R^2_x/K$ under Condition 2, the set $\mathcal{M}(R^2_x, \Delta)$ is determined solely by $R^2_x$; hence, we denote it simply by $\mathcal{M}(R^2_x)$.

Theorem 5. Let $g(\alpha) = z_{1-\alpha/2}$ for $\alpha \in (0,1)$. Under Conditions 1 and 2 and $H_{0n}$, for $R^2_x \in (0,1)$, we have
$$\max_{(Y_i(1), Y_i(0))_{i=1}^n \in \mathcal{M}(R^2_x)} P_\infty(p^h_L \leq \alpha \mid A_{\mathrm{rep}}(\alpha_t)) \leq \alpha$$
if and only if $\alpha_t$ is large enough to satisfy $\alpha_t \geq \alpha_t(R^2_x, \alpha)$, where
$$\alpha_t(R^2_x, \alpha) = g^{-1}\!\left(c_{\mathrm{rep}}(R^2_x)\, g(\alpha)\right), \qquad c_{\mathrm{rep}}(R^2_x) = \frac{(R^2_x/K)^{1/2}}{1 + (1 - R^2_x)^{1/2}}.$$

We can use $\alpha_t(R^2_x, \alpha)$ as the threshold of ReP if Condition 2 holds. It is easy to see that $c_{\mathrm{rep}}(R^2_x) \leq 1$, with equality if and only if $R^2_x = 1$ and $K = 1$. Since $g(\cdot)$ is a decreasing function, we have $\alpha_t(R^2_x, \alpha) = g^{-1}(c_{\mathrm{rep}}(R^2_x) g(\alpha)) \geq g^{-1}(g(\alpha)) = \alpha$. Therefore, the common practice of setting $\alpha_t = \alpha$ is generally insufficient to resolve p-hacking. Moreover, $\alpha_t(R^2_x, \alpha)$ is more stringent for a smaller $c_{\mathrm{rep}}(R^2_x)$.
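Since the Theorem 5 threshold is available in closed form, it is straightforward to evaluate; the sketch below is a minimal stdlib illustration (function names are ours, not from the paper), using $g(\alpha) = z_{1-\alpha/2}$ and $g^{-1}(z) = 2\{1 - \Phi(z)\}$.

```python
import math
from statistics import NormalDist

N = NormalDist()

def g(alpha):
    """g(alpha) = z_{1 - alpha/2}, the upper alpha/2 standard normal quantile."""
    return N.inv_cdf(1 - alpha / 2)

def g_inv(z):
    """Inverse of g: the two-sided tail probability of z."""
    return 2 * (1 - N.cdf(z))

def rep_threshold(R2x, alpha, K):
    """alpha_t(R2x, alpha) = g^{-1}(c_rep(R2x) g(alpha)) from Theorem 5."""
    c_rep = math.sqrt(R2x / K) / (1 + math.sqrt(1 - R2x))
    return g_inv(c_rep * g(alpha))

alpha_t = rep_threshold(R2x=0.8, alpha=0.05, K=5)
print(round(alpha_t, 3))             # about 0.59, far above the naive alpha_t = 0.05
print(round((1 - alpha_t) ** 5, 3))  # implied acceptance probability, near 0.01
```

With $R^2_x = 1$ and $K = 1$ the function returns $\alpha$ itself, consistent with $c_{\mathrm{rep}} = 1$ in that case.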
Since $c_{\mathrm{rep}}(R^2_x)$ is an increasing function of $R^2_x$, the threshold is more stringent with a smaller $R^2_x$, similar to ReM. When we know neither $R^2_x$ nor $\Delta$, to estimate the type I error rate inflation under ReP we have the following result parallel to Theorem 3.

Theorem 6. Under Condition 1 and $H_{0n}$, we have
$$P_\infty(p^h_L \leq \alpha \mid A_{\mathrm{rep}}(\alpha_t)) \leq P\left\{ (\varepsilon^2 + \|\xi\|_2^2)^{1/2} \geq z_{1-\alpha/2} \;\middle|\; \|D(S_{xx})^{1/2} \xi\|_\infty \leq z_{1-\alpha_t/2} \right\},$$
where $\varepsilon \in \mathbb{R}$ and $\xi \in \mathbb{R}^K$ are independent standard Gaussian random variables.

Similar to Theorem 3, the bound in Theorem 6 does not depend on $R^2_x$ or $\Delta$. However, it depends on the covariance structure of the covariates through $D(S_{xx})$, because ReP is not invariant to linear transformations of $x_i$. As $\alpha_t \to 1$, the bound tends to $P(|\varepsilon| \geq z_{1-\alpha/2}) = \alpha$. We can apply this bound by specifying $D(S_{xx})$, $\alpha$ and $\alpha_t$. For example, with $K = 5$, $D(S_{xx}) = 0.2 I_K + 0.8\,\mathbf{1}_K \mathbf{1}_K^\top$, $\alpha = 0.05$, and $\alpha_t$ equal to the 1% upper quantile of the asymptotic distribution of $\min_{k \in [K]} p_{t,k}$, Theorem 6 yields a bound of about 0.076, reflecting an inflation of 0.026.
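The Theorem 6 bound can likewise be estimated by Monte Carlo. For the equicorrelated correlation matrix $D = (1-\rho)I_K + \rho \mathbf{1}\mathbf{1}^\top$ used in the example, $D^{1/2}\xi$ can be drawn without an explicit matrix square root, since $D$ has eigenvalue $1-\rho$ on the mean-centered subspace and $1+(K-1)\rho$ along $\mathbf{1}$. The threshold $\alpha_t = 0.4$ below is an arbitrary illustration of ours, not the quantile used in the paper's worked example.

```python
import math
import random
from statistics import NormalDist

random.seed(1)
N = NormalDist()

K, rho, alpha, alpha_t = 5, 0.8, 0.05, 0.4   # alpha_t chosen only for illustration
z_alpha = N.inv_cdf(1 - alpha / 2)
z_t = N.inv_cdf(1 - alpha_t / 2)

s_lo = math.sqrt(1 - rho)                    # sqrt eigenvalue, centered directions
s_hi = math.sqrt(1 + (K - 1) * rho)          # sqrt eigenvalue along the ones vector

kept = hits = 0
for _ in range(200_000):
    eps = random.gauss(0.0, 1.0)
    xi = [random.gauss(0.0, 1.0) for _ in range(K)]
    m = sum(xi) / K
    y = [s_lo * (x - m) + s_hi * m for x in xi]   # y = D^{1/2} xi ~ N(0, D)
    if max(abs(v) for v in y) <= z_t:             # ReP acceptance event
        kept += 1
        if math.sqrt(eps * eps + sum(x * x for x in xi)) >= z_alpha:
            hits += 1

print(round(hits / kept, 3))   # estimated Theorem 6 bound for this alpha_t
```

As $\alpha_t \to 1$ the acceptance event becomes very restrictive and the estimated bound should approach $\alpha$.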
Similar to the discussion of Theorem 3, we can specify a conservative test that controls the type I error rate based on the error bound in Theorem 6.

4 Simulation study

We conduct a simulation study to evaluate p-hacking with and without rerandomization. The study considers three designs: CRE, ReM and ReP. For $\{a_i\}_{i=1}^n$, we define $\mathrm{Scale}(a_i) = a_i / S_{aa}^{1/2}$. The sharp null hypothesis is the most challenging case for p-hacking under the weak null hypothesis, because the p-values of t-tests under the sharp null hypothesis are generally smaller than in other cases: Zhao & Ding (2021) showed that $P_\infty(p_{L,\mathcal{K}} \leq \alpha) \leq \alpha$ for all $\mathcal{K} \subseteq [K]$ under the weak null hypothesis, and that $P_\infty(p_{L,\mathcal{K}} \leq \alpha) = \alpha$ holds for all $\mathcal{K} \subseteq [K]$ only under the sharp null hypothesis. Therefore, we generate the potential outcomes satisfying the sharp null hypothesis with $n = 1000$:
$$Y_i(1) = Y_i(0) = R_x \cdot \mathrm{Scale}(x_i^\top \beta) + (1 - R^2_x)^{1/2}\, \mathrm{Scale}(\varepsilon_i),$$
where $\varepsilon_i$, $i = 1, \ldots, n$, are i.i.d. draws from the standard normal distribution, and the $x_i$ are i.i.d. draws from $N(\mathbf{0}_5, (1-\rho) I_5 + \rho \mathbf{1}_{5 \times 5})$. We conduct ReM and ReP with an asymptotic acceptance probability of 0.01. Once generated, $\{Y_i(1), Y_i(0), x_i\}_{i=1}^n$ are fixed in the simulation. We set $r_1 = 0.5$, draw random assignments under CRE, ReM and ReP, and compute the hacked p-values for each realization of the treatment assignment. Each design is repeated $B = 10^5$ times to approximate the distribution of the hacked p-values. We consider combinations of $\rho \in \{0, 0.8\}$, $R^2_x \in \{0.1, 0.2, \ldots, 0.9\}$ and $\beta \in \{\mathbf{1}_5, (1, 1, 0.3, 0.3, 0.3)\}$; $\rho$ and $\beta$ represent the correlation structure of the covariates and the variability in covariate importance, respectively. In particular, $\rho = 0$ and $\beta = \mathbf{1}_5$ corresponds to the setting where Condition 2 holds. We focus on the empirical type I error rate of the hacked p-values at level $\alpha = 0.05$: $\sum_{j=1}^B I(p^h_{L,(j)} \leq \alpha)/B$, where $p^h_{L,(j)}$ is the hacked p-value under the $j$th treatment assignment. Figure 2 and Table 1 show the empirical type I error rates of the different designs under different $R^2_x$, $\rho$ and $\beta$.
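The data-generating process above can be sketched as follows; this is a simplified stdlib illustration of ours with $\rho = 0$ and $\beta = \mathbf{1}_5$. Since both scaled components have unit finite-population variance and are nearly uncorrelated, $S_{YY}$ should come out close to 1.

```python
import math
import random

random.seed(2)

n, K, R2x = 1000, 5, 0.6
beta = [1.0] * K

def scale(a):
    """Scale(a_i) = a_i / S_aa^{1/2}, with S_aa the finite-population variance."""
    mean = sum(a) / len(a)
    s_aa = sum((v - mean) ** 2 for v in a) / (len(a) - 1)
    return [v / math.sqrt(s_aa) for v in a]

x = [[random.gauss(0.0, 1.0) for _ in range(K)] for _ in range(n)]   # rho = 0 case
signal = scale([sum(b * v for b, v in zip(beta, xi)) for xi in x])
noise = scale([random.gauss(0.0, 1.0) for _ in range(n)])

# Sharp null: Y_i(1) = Y_i(0) = R_x * Scale(x_i' beta) + (1 - R_x^2)^{1/2} * Scale(eps_i)
Y = [math.sqrt(R2x) * s + math.sqrt(1 - R2x) * e for s, e in zip(signal, noise)]

m = sum(Y) / n
S_YY = sum((v - m) ** 2 for v in Y) / (n - 1)
print(round(S_YY, 2))   # close to 1
```

Once generated, these potential outcomes would be held fixed while treatment assignments are redrawn.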
The empirical type I error rates under ReP and ReM consistently remain lower than those under the CRE. By Figure 2, when $\rho = 0$, ReP and ReM perform similarly, and both resolve p-hacking for large $R^2_x$. By Table 1, when $\rho = 0$, it is easier to resolve p-hacking for $\beta = \mathbf{1}_5$ than for $\beta = (1, 1, 0.3, 0.3, 0.3)$: for $\beta = \mathbf{1}_5$, ReM and ReP both resolve p-hacking if $R^2_x \geq 0.6$, while for $\beta = (1, 1, 0.3, 0.3, 0.3)$ we need $R^2_x \geq 0.9$. When $\rho = 0.8$, neither ReM nor ReP resolves p-hacking, and the inflation of the empirical type I error rates remains substantial. These results align with Theorem 2, demonstrating that correlated covariates and varying covariate importance present greater challenges in resolving p-hacking due to a small $\Delta$. By Table 1, when $\beta = \mathbf{1}_5$ and $\rho = 0$, i.e., Condition 2 holds, the performance of rerandomization is better in finite samples than in asymptotic theory. Although $\chi^2_{5,\,0.01} \approx \bar a_{\mathrm{rem}}(0.05, 0.8, 0.8/5)$ and $0.01 \approx \{1 - \alpha_t(0.8, 0.05)\}^5$ suggest that an acceptance probability of 0.01 resolves p-hacking if $R^2_x \geq 0.8$ for both ReP and ReM, Table 1 suggests that this acceptance probability can resolve p-hacking if $R^2_x \geq 0.6$. By Theorem 2, a larger $R^2_x$ is more favorable for resolving
p-hacking under ReP and ReM. However, as shown in Figure 2, it is also the setting where p-hacking is more problematic under the CRE. This is because a higher value of $R^2_x$ is associated with larger $\ell_2$ norms of $\beta_{L,\mathcal{K}}$, $\mathcal{K} \subseteq [K]$, in (3). Large $\beta_{L,\mathcal{K}}$ lead to greater divergence among the $\hat\tau_{L,\mathcal{K}}$ under the CRE, thereby exacerbating the severity of p-hacking. This suggests the importance of balancing informative covariates by rerandomization. We present the theoretical bounds on the type I error rates obtained from Theorems 3 and 6 in Figure 2 and Table 1. By Figure 2, the empirical type I error rates of ReM and ReP consistently fall below their respective bounds, validating Theorems 3 and 6. These bounds are close to the highest empirical type I error rates, thus providing useful guidance for selecting rerandomization thresholds. Notably, by Figure 2, ReP exhibits higher empirical type I error rates than ReM when $\rho = 0.8$, as well as a higher theoretical bound, suggesting that ReM may be better than ReP at mitigating p-hacking when the covariates are correlated.

Figure 2: The plot of $R^2_x$ versus the empirical type I error rate computed from $10^5$ treatment assignments for different $\rho$ and $\beta$ under different designs. $\beta = \mathbf{1}_5$ is labelled "Equal" and $\beta = (1, 1, 0.3, 0.3, 0.3)$ is labelled "Unequal", respectively. Black dashed lines signify the significance level $\alpha = 0.05$. The type I error bounds under the different rerandomization schemes, obtained through Theorems 3 and 6, are shown as dashed lines in the colors of the corresponding designs.
β       ρ    Design      R²=0.1       R²=0.2       R²=0.3       R²=0.4       R²=0.5       R²=0.6       R²=0.7       R²=0.8       R²=0.9
Equal   0.00 CRE         9.85 (0.09)  12.37 (0.10) 14.90 (0.11) 17.35 (0.12) 19.95 (0.13) 22.81 (0.13) 26.28 (0.14) 30.16 (0.15) 34.99 (0.15)
             ReM (6.35)  5.66 (0.07)  5.61 (0.07)  5.49 (0.07)  5.36 (0.07)  5.25 (0.07)  5.17 (0.07)  5.13 (0.07)  5.13 (0.07)  5.13 (0.07)
             ReP (6.69)  5.77 (0.07)  5.73 (0.07)  5.60 (0.07)  5.42 (0.07)  5.26 (0.07)  5.14 (0.07)  5.13 (0.07)  5.13 (0.07)  5.13 (0.07)
        0.80 CRE         7.92 (0.09)  9.41 (0.09)  10.93 (0.10) 12.48 (0.10) 14.24 (0.11) 16.35 (0.12) 19.19 (0.12) 23.31 (0.13) 30.84 (0.15)
             ReM (6.35)  5.59 (0.07)  5.69 (0.07)  5.76 (0.07)  5.81 (0.07)  5.86 (0.07)  5.89 (0.07)  5.88 (0.07)  5.80 (0.07)  5.57 (0.07)
             ReP (7.59)  5.92 (0.07)  6.11 (0.08)  6.29 (0.08)  6.44 (0.08)  6.59 (0.08)  6.74 (0.08)  6.86 (0.08)  6.92 (0.08)  6.76 (0.08)
Unequal 0.00 CRE         9.02 (0.09)  10.94 (0.10) 12.66 (0.11) 14.51 (0.11) 16.35 (0.12) 18.40 (0.12) 20.77 (0.13) 23.72 (0.13) 27.71 (0.14)
             ReM (6.35)  5.56 (0.07)  5.53 (0.07)  5.49 (0.07)  5.45 (0.07)  5.43 (0.07)  5.39 (0.07)  5.33 (0.07)  5.25 (0.07)  5.15 (0.07)
             ReP (6.69)  5.66 (0.07)  5.63 (0.07)  5.56 (0.07)  5.52 (0.07)  5.50 (0.07)  5.47 (0.07)  5.39 (0.07)  5.27 (0.07)  5.13 (0.07)
        0.80 CRE         7.88 (0.09)  9.38 (0.09)  10.81 (0.10) 12.42 (0.10) 14.08 (0.11) 16.18 (0.12) 18.86 (0.12) 22.77 (0.13) 29.52 (0.14)
             ReM (6.35)  5.58 (0.07)  5.70 (0.07)  5.76 (0.07)  5.81 (0.07)  5.83 (0.07)  5.83 (0.07)  5.79 (0.07)  5.72 (0.07)  5.59 (0.07)
             ReP (7.59)  5.91 (0.07)  6.15 (0.08)  6.31 (0.08)  6.45 (0.08)  6.55 (0.08)  6.66 (0.08)  6.73 (0.08)  6.73 (0.08)  6.55 (0.08)

Table 1: The table of empirical type I error rates. Values are multiplied by 100. $\beta = \mathbf{1}_5$ is labelled "Equal" and $\beta = (1, 1, 0.3, 0.3, 0.3)$ is labelled "Unequal", respectively. The numbers in parentheses after the design names represent the type I error bounds for the corresponding designs. The numbers in parentheses after each rate denote the standard errors of the empirical type I error rates.

5 Discussion

Our theory shows that rerandomization, such as ReM and ReP, can mitigate p-hacking caused by strategically selecting covariates in regression adjustment for estimating the average treatment effect. Additionally, we present theorems that offer guidance on selecting rerandomization thresholds to resolve p-hacking, depending on whether we know $R^2_x$ and $\Delta$.

Our theory demonstrates that without prior knowledge of $R^2_x$, or when the rerandomization threshold is not stringent enough given $R^2_x$, rerandomization may not resolve p-hacking. To address this limitation, we can adjust the inference procedure. One approach is to specify a conservative test using the error bounds in Theorems 3 and 6. Another approach is to reinterpret the hacked p-value as a test statistic rather than a valid p-value. Then, we can use the Fisher randomization test to compute a valid p-value for this test statistic under the sharp null hypothesis (Lee & Rubin 2015). However, this randomization test might not be valid under the weak null hypothesis. To ensure validity under both the sharp and the weak null hypothesis, we need an additional prepivoting procedure (Cohen & Fogarty 2021), which requires understanding the sampling distribution of the hacked p-value. This is a nontrivial theoretical task, and we leave it to future research.

Stratified randomized experiments are frequently used in the design stage to balance discrete covariates such as gender and age.
Rerandomization can be paired with stratified randomized experiments to further balance the continuous covariates. Wang et al. (2023) proved the benefits of rerandomization for improving the efficiency of ATE estimators in stratified randomized experiments. These benefits also diminish if we use regression adjustment in the analysis stage. We show that rerandomization also mitigates p-hacking in stratified randomized experiments. The results are similar to those under the CRE except that, in stratified randomized experiments, we usually add more regressors, such as stratum-treatment-covariate interactions, in the OLS regression. Consequently, we require a more stringent threshold to resolve p-hacking. We relegate the corresponding results to the Supplementary Material.

References

Angrist, J., Oreopoulos, P. & Williams, T. (2014), ‘When opportunity knocks, who answers? New evidence on college achievement awards’, Journal of Human Resources 49, 572–610.

Branson, Z., Dasgupta, T. & Rubin, D. B. (2016), ‘Improving covariate balance in 2^K factorial designs via rerandomization with an application to a New York City Department of Education High School Study’, The Annals of Applied Statistics 10, 1958–1976.

Brodeur, A., Cook, N. & Heyes, A. (2020), ‘Methods matter: P-hacking and publication bias in causal analysis in economics’, American Economic Review 110, 3634–3660.

Bruhn, M. & McKenzie, D. (2009),
‘In pursuit of balance: Randomization in practice in development field experiments’, American Economic Journal: Applied Economics 1, 200–232.

Cohen, P. L. & Fogarty, C. B. (2021), ‘Gaussian prepivoting for finite population causal inference’, Journal of the Royal Statistical Society Series B: Statistical Methodology 84, 295–320.

Cox, D. R. (1982), ‘Randomization and concomitant variables in the design of experiments’, Statistics and Probability: Essays in Honor of C. R. Rao, pp. 197–202.

Ding, P. (2021), ‘The Frisch–Waugh–Lovell theorem for standard errors’, Statistics & Probability Letters 168, 108945.

Ding, P. (2024), ‘Linear model and extensions’, arXiv preprint arXiv:2401.00649.

Duflo, E., Glennerster, R. & Kremer, M. (2007), ‘Using randomization in development economics research: A toolkit’, Handbook of Development Economics 4, 3895–3962.

Firpo, S., Foguel, M. N. & Jales, H. (2020), ‘Balancing tests in stratified randomized controlled trials: A cautionary note’, Economics Letters 186, 108771.

Fisher, R. A. (1935), The Design of Experiments, 1st edn, Oliver and Boyd, Edinburgh.

Freedman, D. A. (2008), ‘On regression adjustments to experimental data’, Advances in Applied Mathematics 40, 180–193.

Gerber, A. S. & Green, D. P. (2012), Field Experiments: Design, Analysis, and Interpretation, New York: W. W. Norton.

Imbens, G. W. & Rubin, D. B. (2015), Causal Inference in Statistics, Social, and Biomedical Sciences: An Introduction, Cambridge University Press.

Ioannidis, J. P. (2005), ‘Why most published research findings are false’, PLoS Medicine 2, e124.

John, L. K., Loewenstein, G. & Prelec, D. (2012), ‘Measuring the prevalence of questionable research practices with incentives for truth telling’, Psychological Science 23, 524–532.

Lee, J. J. & Rubin, D. B. (2015), ‘Valid randomization-based p-values for partially post hoc subgroup analyses’, Statistics in Medicine 34, 3214–3222.

Li, X. & Ding, P.
(2017), ‘General forms of finite population central limit theorems with applications to causal inference’, Journal of the American Statistical Association 112, 1759–1769.

Li, X. & Ding, P. (2020), ‘Rerandomization and regression adjustment’, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 82, 241–268.

Li, X., Ding, P. & Rubin, D. B. (2018), ‘Asymptotic theory of rerandomization in treatment–control experiments’, Proceedings of the National Academy of Sciences 115, 9157–9162.

Lin, W. (2013), ‘Agnostic notes on regression adjustments to experimental data: Reexamining Freedman's critique’, The Annals of Applied Statistics 7, 295–318.

Liu, H. & Yang, Y. (2020), ‘Regression-adjusted average treatment effect estimates in stratified randomized experiments’, Biometrika 107, 935–948.

Miratrix, L. W., Sekhon, J. S. & Yu, B. (2013), ‘Adjusting treatment effect estimates by post-stratification in randomized experiments’, Journal of the Royal Statistical Society Series B: Statistical Methodology 75, 369–396.

Morgan, K. L. & Rubin, D. B. (2012), ‘Rerandomization to improve covariate balance in experiments’, Annals of Statistics 40, 1263–1282.

Neyman, J. (1990), ‘On the application of probability theory to agricultural experiments’, Statistical Science 5, 465–472.

Royen, T. (2014), ‘A simple proof of the Gaussian correlation conjecture extended to multivariate gamma distributions’, Far East Journal of Theoretical Statistics 48, 139–145.

Simmons, J. P., Nelson, L. D. & Simonsohn, U. (2011), ‘False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant’, Psychological Science 22, 1359–1366.

Wang, X., Wang, T. & Liu, H. (2023), ‘Rerandomization in stratified randomized experiments’, Journal of the American Statistical Association 118, 1295–1304.

Wang, Y. & Li, X. (2022), ‘Rerandomization with
diminishing covariate imbalance and diverging number of covariates’, The Annals of Statistics 50, 3439–3465.

Zhao, A. & Ding, P. (2021), ‘Covariate-adjusted Fisher randomization tests for the average treatment effect’, Journal of Econometrics 225(2), 278–294.

Zhao, A. & Ding, P. (2024), ‘No star is good news: A unified look at rerandomization based on p-values from covariate balance tests’, Journal of Econometrics 241, 105724.

SUPPLEMENTARY MATERIAL

Section A provides additional theoretical results under stratified randomized experiments (SRE). Section B provides additional notation and lemmas for the Supplementary Material. Section C provides lemmas that are useful for the proofs, including joint asymptotic normality for the ATE estimators and difference in means of covariates, asymptotic limits of the regression coefficients and EHW standard errors, and asymptotic limits of hacked p-values under SRE with $H$ strata. The results under the CRE correspond to SRE with $H = 1$. Section D provides the proofs of the theoretical results under the CRE (Proposition 1, Theorems 1–6, and Corollary 1). Section E provides the proofs of the theoretical results under the SRE (Theorems S1–S3).

A Extension to SRE

A.1 Hacked p-values under SRE

We now extend the results to the SRE. Consider $n$ units in $H$ strata. Let $n_h$ be the size of stratum $h$, $h = 1, \ldots, H$. An SRE conducts independent CREs across strata. In stratum $h$, $n_{hz}$ units are assigned to treatment group $z$, $z = 0, 1$, with $n_{h1} + n_{h0} = n_h$ and $r_{hz} = n_{hz}/n_h$. Let $S_h$ be the set of units in stratum $h$, and let $\pi_h = n_h/n$ be the proportion of units in stratum $h$. Let $\bar\tau_h = n_h^{-1} \sum_{i \in S_h} \tau_i$ be the ATE of stratum $h$. We are interested in the ATE for the SRE,
$$\bar\tau = \sum_{h=1}^H \pi_h \bar\tau_h.$$
The goal is to test the weak null hypothesis for the SRE: $H_{0n}: \bar\tau = 0$. An unbiased estimator of $\bar\tau$ is $\hat\tau = \sum_{h=1}^H \pi_h \hat\tau_h$ with $\hat\tau_h = n_{h1}^{-1} \sum_{i: i \in S_h, Z_i = 1} Y_i - n_{h0}^{-1} \sum_{i: i \in S_h, Z_i = 0} Y_i$. Let $h = 1$ be the reference level and $S_i = (I(i \in S_h) - \pi_h)_{h=2}^H$ be the vector of centered strata indicators, where $I(\cdot)$ is the indicator function.
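The stratified estimator $\hat\tau = \sum_h \pi_h \hat\tau_h$ is simply a stratum-size-weighted average of stratum-specific differences in means; the toy sketch below (numbers invented for illustration) computes it directly.

```python
def strat_dim(strata):
    """Stratified difference-in-means: tau_hat = sum_h pi_h * tau_hat_h.
    Each stratum is a pair (treated outcomes, control outcomes)."""
    n = sum(len(t) + len(c) for t, c in strata)
    tau = 0.0
    for t, c in strata:
        n_h = len(t) + len(c)
        tau_h = sum(t) / len(t) - sum(c) / len(c)   # stratum-specific DIM
        tau += (n_h / n) * tau_h                    # weight by pi_h = n_h / n
    return tau

# Two equal-sized strata with stratum-specific effects 1.0 and 3.0:
strata = [([2.0, 4.0], [1.0, 3.0]),   # tau_1 = 3 - 2 = 1
          ([6.0, 8.0], [3.0, 5.0])]   # tau_2 = 7 - 4 = 3
print(strat_dim(strata))              # 0.5 * 1 + 0.5 * 3 = 2.0
```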
The $\hat\tau$ is equal to the coefficient estimator of $Z_i$ in the regression of $Y_i$ on the treatment indicator, the centered strata indicators, and their interaction terms (Lin 2013, Miratrix et al. 2013, Liu & Yang 2020):
$$\mathrm{lm}(Y_i \sim 1 + Z_i + S_i + Z_i S_i). \quad (6)$$
We can use the corresponding studentized two-sided p-value of the coefficient estimator of $Z_i$ to test $H_{0n}$. For two finite-population arrays $\{a_i\}_{i=1}^n$, $\{b_i\}_{i=1}^n$, define $\bar a_h = n_h^{-1} \sum_{i \in S_h} a_i$ as the stratum-specific mean and $S_{h,ab} = (n_h - 1)^{-1} \sum_{i \in S_h} (a_i - \bar a_h)(b_i - \bar b_h)^\top$ as the stratum-specific covariance, $h = 1, \ldots, H$. Let $[H] = \{1, \ldots, H\}$. Assume the covariates are centered within each stratum such that $\bar x_h = \mathbf{0}$ for $h = 1, \ldots, H$, and that $S_{h,xx}$, $h \in [H]$, are nonsingular. Let $\otimes$ be the Kronecker product. When there are covariates, Lin's regression further adds the covariates and their interactions with $Z_i$, $S_i$, $Z_i S_i$:
$$\mathrm{lm}(Y_i \sim 1 + Z_i + S_i + Z_i S_i + x_i + Z_i x_i + S_i \otimes x_i + Z_i (S_i \otimes x_i)). \quad (7)$$
Let $\hat\tau_{L,\mathcal{K}}$, $\hat{\mathrm{se}}_{L,\mathcal{K}}$, $T_{L,\mathcal{K}} = \hat\tau_{L,\mathcal{K}}/\hat{\mathrm{se}}_{L,\mathcal{K}}$ and $p_{L,\mathcal{K}} = 2\{1 - \Phi(|T_{L,\mathcal{K}}|)\}$ be the ATE estimator, the EHW standard error, the t-statistic and the two-sided p-value obtained through (7), replacing $x_i$ with $x_{i\mathcal{K}}$. Define the hacked p-value as $p^h_L = \min_{\mathcal{K} \subseteq [K]} p_{L,\mathcal{K}}$. Another popular ATE estimator is obtained through the fixed-effects regression model:
$$\mathrm{lm}(Y_i \sim 1 + Z_i + S_i + x_i). \quad (8)$$
(8) excludes all interaction
terms in the regression (7) to avoid overparametrization (Imbens & Rubin 2015, Duflo et al. 2007, Angrist et al. 2014). When $x_i = \emptyset$, we denote the estimated coefficient of $Z_i$ in this fixed-effects regression by $\hat\tau_\omega$. Ding (2021) has shown that $\hat\tau_\omega = \sum_{h=1}^H \omega_h \hat\tau_h$, where $\omega_h = \pi_h r_{h1} r_{h0} / \sum_{h'=1}^H \pi_{h'} r_{h'1} r_{h'0}$. Therefore $\hat\tau_\omega$ is an unbiased estimator of
$$\bar\tau_\omega = \sum_{h=1}^H \omega_h \bar\tau_h.$$
Hence, we use (8) to test $H_{0\omega}: \bar\tau_\omega = 0$. Let $\hat\tau_{\mathrm{fe},\mathcal{K}}$, $\hat{\mathrm{se}}_{\mathrm{fe},\mathcal{K}}$, $T_{\mathrm{fe},\mathcal{K}} = \hat\tau_{\mathrm{fe},\mathcal{K}}/\hat{\mathrm{se}}_{\mathrm{fe},\mathcal{K}}$ and $p_{\mathrm{fe},\mathcal{K}} = 2\{1 - \Phi(|T_{\mathrm{fe},\mathcal{K}}|)\}$ be the ATE estimator, the EHW standard error, the t-statistic and the two-sided p-value obtained through (8), replacing $x_i$ with $x_{i\mathcal{K}}$. Define the hacked p-value under the fixed-effects regression as $p^h_{\mathrm{fe}} = \min_{\mathcal{K} \subseteq [K]} p_{\mathrm{fe},\mathcal{K}}$.

A.2 Rerandomization under SRE

Since (7) is equivalent to performing Lin's regression separately within each stratum and then combining the adjusted estimators from each stratum by the stratum proportions $\pi_h$, a natural way to extend $A_{\mathrm{rem}}(a)$ and $A_{\mathrm{rep}}(\alpha_t)$ is to perform them separately within each stratum, termed stratum-specific ReM (SS-ReM) and stratum-specific ReP (SS-ReP). These two rerandomization schemes are studied in Wang et al. (2023) and Zhao & Ding (2024) and are shown to reduce the variance of the unadjusted estimator. Let $V_{h,xx} = (r_{h1} r_{h0})^{-1} S_{h,xx}$ be the covariance matrix of $n_h^{1/2} \hat\tau_{h,x}$, where $\hat\tau_{h,x} = n_{h1}^{-1} \sum_{i: i \in S_h, Z_i = 1} x_i - n_{h0}^{-1} \sum_{i: i \in S_h, Z_i = 0} x_i$. By the finite-population CLT, the asymptotic distributions of $n_h \hat\tau_{h,x}^\top V_{h,xx}^{-1} \hat\tau_{h,x}$, $h \in [H]$, follow the $\chi^2_K$ distribution. We define the SS-ReM as
$$A_{\text{ss-rem}}(a) = \{ n_h \hat\tau_{h,x}^\top V_{h,xx}^{-1} \hat\tau_{h,x} \leq a,\ h \in [H] \}.$$
Let $p_{t,hk}$ be the p-value of the two-sample t-test in stratum $h$ for covariate $x_{ik}$. We define the SS-ReP as
$$A_{\text{ss-rep}}(\alpha_t) = \{ p_{t,hk} \geq \alpha_t,\ h \in [H],\ k \in [K] \}.$$
Published articles that use (8) to estimate the ATE commonly present balance tables showing the two-sided p-values obtained from t-tests for the coefficient estimators of $Z_i$ in the fixed-effects regression:
$$\mathrm{lm}(x_{ik} \sim 1 + Z_i + S_i), \quad k \in [K]. \quad (9)$$
A large p-value from (9) suggests that the covariate is well balanced.
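The fixed-effects weights $\omega_h = \pi_h r_{h1} r_{h0} / \sum_{h'} \pi_{h'} r_{h'1} r_{h'0}$ reduce to $\pi_h$ when the propensities are constant across strata; a small check with invented numbers:

```python
def fe_weights(pi, r1):
    """omega_h = pi_h r_h1 r_h0 / sum_h' pi_h' r_h'1 r_h'0 (Ding 2021)."""
    raw = [p * r * (1 - r) for p, r in zip(pi, r1)]
    total = sum(raw)
    return [w / total for w in raw]

pi = [0.5, 0.5]

# Equal propensities: omega reduces to pi, so tau_omega coincides with tau_bar.
print(fe_weights(pi, [0.5, 0.5]))   # [0.5, 0.5]

# Unequal propensities: strata with r_h1 near 1/2 are up-weighted.
print(fe_weights(pi, [0.5, 0.9]))   # roughly [0.735, 0.265]
```

This makes explicit why $\hat\tau_\omega$ estimates $\bar\tau_\omega$ rather than $\bar\tau$ whenever the $r_{h1}$ differ across strata.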
A commonly used rerandomization scheme is to rerandomize until the p-value for each covariate passes the balance check (Firpo et al. 2020). We refer to this rerandomization scheme as FE-ReP where FE represents fixed effects regression adjustment. Define px fe,kas the two-sided p-value of the tstatistic for the kth covariate. Define FE-ReP as Afe-rep(αt) =n min k∈[K]px fe,k≥αto . 26 A.3 Hacked p-values under rerandomization in SRE We consider the type I error rate of ph Landph feunder rerandomization. Let σ2 h,adj= Vh,ττ−Vh,τxV−1 h,xxVh,xτwithV⊤ h,xτ=Vh,τx=nhcov(ˆτh,ˆτh,x),Vh,ττ=nhvar(ˆτh) and Vh,xx=nhcov(ˆτh,x). Let ˜ se2 febe the asymptotic limit of ˆ se2 fewhich we will define in the Section C. Condition S1 extends Condition 1. Condition S1. His fixed. For h= 1, . . . , H ,z= 0,1(i)πhandrh1have limits in (0,1); (ii) Sh,Y(z)Y(z),Sh,xx,Sh,xY(z),Sh,ττhave finite limits, σ2 h,adjhas a positive limit, and the limit of Sh,xxis positive-definite; (iii) max h∈[H]max i∈Sh|Yi(z)−¯Yh(z)|2=o(n), max h∈[H]max i∈Sh∥xi∥2 ∞=o(n); (iv) n˜ se2 fehas a positive limit. Theorem S1. Under SRE, assume Condition S1 holds. We have (i) Theorem 1 holds with Arem(a)replaced by Ass-rem(a). (ii) Theorem 4 holds with Arep(αt)replaced by Ass-rep(αt); Theorem 4 holds with H0n,ph L,Arep(αt)replaced by H0ω,ph fe,Afe-rep(αt), respectively. Parallel to Theorem 1, Theorem S1 demonstrates that the aforementioned rerandomiza- tion schemes mitigate p-hacking under SRE. The asymptotic distributions of these hacked
p-values are relegated to Section C. As a→0 and αt→1, SS-ReM and SS-ReP resolve p-hacking for testing H0nwhile FE-ReP resolves p-hacking for testing H0ω. To establish Theorem S1, we need to obtain asymptotic limits of the t-statistics obtained from Lin’s regression and the fixed effects regression. Zhao & Ding (2024) has shown the consistency of the estimator of the fixed effects regression. However, there is no current result for its asymptotic normality under the design-based inference framework. We fill this gap in Section C. We study how to resolve p-hacking by rerandomization. Define the stratum-specific R2 x for stratum hasR2 h,x=Vh,τxV−1 h,xxVh,xτ/Vh,ττ. Define R2 x=PH h=1R2 h,xπhVh,ττ/{PH h=1πhVh,ττ} as the R2of ˆτon (ˆτh,x)h∈[H].R2 xmeasures the extent to which ˆ τcan be explained by (ˆτh,x)h∈[H]. When H= 1, R2 xis equal to the R2of lm( r−1 1Yi(1) + r−1 0Yi(0)∼1 +xi). Define Mss(R2 x) as the set of all possible potential outcomes given R2 x. We now obtain rerandomization thresholds for SS-ReM and SS-ReP to universally control the type I error rates for ( Yi(1), Yi(0))n i=1∈Mss(R2 x). 27 Theorem S2. Recall g(α)defined in Theorem 5. Under SRE, assume Condition S1 holds. If Condition 2 holds for every stratum, then we have, under H0n, for R2 x∈(0,1), (i) max (Yi(1),Yi(0))n i=1∈Mss(R2x)P∞(ph L≤α| Ass-rem(a))≤α, if and only if a≤¯arem(α, R2 x, R2 x/K) H; (ii) max (Yi(1),Yi(0))n i=1∈Mss(R2x)P∞(ph L≤α| Ass-rep(αt))≤α, if and only if αt≥g−1crep(R2 x) H1/2g(α) . Theorem S2 shows the required thresholds for SS-ReM and SS-ReP to resolve p-hacking of Lin’s regression given R2 x. Compared to the thresholds under CRE in Theorem 2 and 5, the thresholds required for Lin’s regression under SRE are more stringent; see the factor Hin the expressions. It is more challenging to resolve p-hacking of the fixed effects regression by rerandomiza- tion. Let ˆτω,x=PH h=1ωhˆτh,x. 
LetVω,xx=ncov(ˆτω,x),Vω,τx=V⊤ ω,xτ=ncov(ˆτω,x,ˆτω), Vω,ττ=nvar(ˆτω).Define R2 ω,x=Vω,τxV−1 ω,xxVω,xτ/Vω,ττas the R2of ˆτωonˆτω,x. Let ˆ τfe be the coefficient estimator of Ziby (8). We will show in Section C that ˆτfe≈ˆτω−β⊤ fe,xˆτω,x,βfe,x=HX h=1πhSh,xx−1nHX h=1πh rh1Sh,xY(1)+rh0Sh,xY(0)o . Define the optimal projection coefficient, βω,x= cov( ˆτω,x)−1cov(ˆτω,x,ˆτω). Therefore, we can decompose ˆτfe≈(ˆτω−β⊤ ω,xˆτω,x) + (βω,x−βfe,x)⊤ˆτω,x. The first term and the second term are asymptotically independent. Rerandomization only affects the second term. The variance of the first term is proportional to (1 −R2 ω,x) while the variance of the second term depends on βω,x−βfe,x. However, since βω,x−βfe,xis typically unknown before the experiment, we lack the information to assess how rerandomization might influence ˆ τfe, let alone its effect on the hacked p-value. Therefore, it is challenging to obtain a rerandomization threshold to resolve the p-hacking of the fixed effects regression by rerandomization. Given the limitation of FE-ReM, a more feasible approach is to conduct the Fisher randomization test using the hacked p-value statistic in the analysis stage. Theorem S3 gives the type I error bound, parallel to Theorem 3 and Theorem 6. 28 Theorem S3. Under SRE, assume Condition S1 holds. We have, under H0n (i)P∞(ph L≤α| Ass-rem(a))≤Pn ε2+HX h=1∥ξh∥2 21/2≥z1−α/2 max h∈[H]∥ξh∥2 2≤ao ; (ii)P∞(ph L≤α| Ass-rep(αt))≤Pn ε2+HX h=1∥ξh∥2 21/2≥z1−α/2 max h∈[H]∥D(Vh,xx)1/2ξh∥∞≤z1−αt/2o ; and under H0ω, (iii)P∞(ph fe≤α|
Afe-rep(αt))≤P (ε2+∥ξ0∥2 2)1/2≥z1−α/2 ∥D(Vω,xx)1/2ξ0∥∞≤z1−αt/2 ; where ε∈Randξh∈RK,h= 0,1, . . . , H are independent standard Gaussian random variables. Similar to Theorem 3 and Theorem 6, Theorem S3 holds for all ( Yi(1), Yi(0))n i=1∈Rn×2. Asa→0 and αt→1, the right-hand sides of the inequalities all tend to P(|ε|> z1−α/2) =α, thus echoing Theorem S1. We can use Theorem S3 to obtain a type I error bound for a given rerandomization threshold, thus providing some risk control. B Additional notation and lemmas In the CRE, for observed quantities {ai}n i=1,{bi}n i=1and treatment assignments {Zi}n i=1, let¯az=n−1 zP i:Zi=zai,sz,ab= (nz−1)−1P i:Zi=z(ai−¯az)(bi−¯bz)⊤. In SRE, for observed quantities {ai}n i=1,{bi}n i=1and treatment assignments {Zi}n i=1, let¯ahz=n−1 hzP i:Zi=z,i∈Shai, shz,ab= (nhz−1)−1P i:Zi=z,i∈Sh(ai−¯ahz)(bi−¯bhz)⊤. To simplify the notation, when xk andxKappear as the subscript, we denote them simply by kandK. For example, we write sz,YxKandsz,Y x kassz,YKandsz,Y k, respectively. Let Vh,KKandVh,Kτbe the submatrices of Vh,xxandVh,xτcorresponding to K, respectively. Let Vω,KKandVω,Kτ be the submatrices of Vω,xxandVω,xτcorresponding to K, respectively. Fork∈[K],h∈[H], let ˆ sx2 h,L,kbe the EHW standard error of the coefficient estimator ofZiobtained from lm( xik∼1 +Zi),i∈ S h. Let ˆ sx2 fe,kbe the EHW standard error of Zi obtained from lm( xik∼1+Zi+Si). Here, we use ˆ sx instead of ˆ se to distinguish them from the EHW standard errors of Ziobtained from the regression lm( Yi∼1 +Zi+xik+Zixik), 29 i∈ S hand lm( Yi∼1 +Zi+Si+xik), respectively. Lemma S6 gives the expression of ˆ sx2 h,L,kand ˆ sx2 fe,k. Let ˆτLand ˆ se Lbe the coefficient estimator and the EHW standard error of Ziobtained through Lin’s regression with the full covariate vector, respectively. Let ˆ τh,Land ˆ se h,Lbe the coefficient estimator and the EHW standard error of Ziobtained through lm( Yi∼ 1 +Zi+xi+Zixi),i∈ S h. 
Let ˆ τfeand ˆ se febe the coefficient estimator and the EHW standard error of Ziobtained through the fixed effects regression (8) with the full covariate vector, respectively. Letτe,i=τi−(βh(i)(1)−βh(i)(0))⊤xi. Define the asymptotic limits of ˆ se2 h,L, ˆ se2 Land ˆ se2 feas follows: ˜ se2 h,L=n−1 hσ2 h,adj+n−1 hSh,τeτe,˜ se2 L=HX h=1π2 h˜ se2 h,L, ˜ se2 fe= cov(ˆ τω−β⊤ fe,xˆτω,x) +HX h=1ω2 h(r−1 h1r−1 h0−3) nπh(¯τh−¯τω)2+HX h=1ω2 h nπhSh,ττ. Let [n] ={1, . . . , n }. Let0S1×S2∈RS1×S2and1S1×S2∈RS1×S2be the matrices of all 0’s and 1’s, respectively. Let 0S∈RSand1S∈RSbe vectors of all 0’s and 1’s, respectively. When there is no ambiguity, we may omit their subscripts. For Al∈RSl×Sl,l∈[L], let diag( Al)l∈[L]and diag( Al)L l=1be the block diagonal matrix of Al. Let diag( A1,A2) be the block diagonal matrix of A1andA2. For row vectors Vl,l∈[L], let ( Vl)l∈[L]= (V1, . . . ,VL) be the aggregated row vector of Vl. For column vectors Ul,l∈[L], let (Ul)l∈[L]= (U⊤ 1, . . . ,U⊤ L)⊤be the aggregated column vector of Ul. For scalars al,l∈[L], let (al)l∈[L]be the column vector of al. For a matrix A∈Rnr×ncand 1 ≤a≤b≤nr,1≤ c≤d≤nc, letAa:b,c:dbe the submatrix of Afrom ath to bth rows and from cth to dth columns. For a vector v∈Rsand 1 ≤a≤b≤s, letva:bbe the subvector of vfrom ath tobth entries. We will use Frisch–Waugh– Lovell (FWL) theorems (Ding 2021)
for both the regression coefficients and standard errors. We will use the invariance of OLS to compute ATE estimators and EHW standard errors. Lemma S1 (Invariance of OLS) .For OLS regression of Y∈RnonX1∈Rn×J, let ˆβ1 be the coefficient estimator, ˆei,(1)be the regression residual for unit i,i∈[n],V(1)be the 30 EHW covariance matrix: V(1)= (X⊤ 1X1)−1(X⊤ 1diag(ˆ e2 i,(1))n i=1X1)(X⊤ 1X1)−1. LetX2=X1Pwhere P∈RJ×Jis an invertible matrix. Define, analogously, ˆβ2,ˆei,(2) andV(2)as the coefficient estimator, regression residual and EHW covariance matrix of OLS regression of YonX2. We have: (i) ˆβ1=Pˆβ2;(ii)fori∈[n],ˆei,(1)= ˆei,(2); (iii) V(1)=PV (2)P⊤. Proof of Lemma S1. The proof is by simple linear algebra; see Ding (2024). C Asymptotic limits of hacked p-values C.1 Joint asymptotic normality for the ATE estimators and dif- ference in means of covariates Let us consider (fixed) d-dimensional potential outcomes Ri(z) = (Ri,1(z), . . . , R i,d(z))⊤, i= 1, . . . , n andz= 0,1. For example, Ri(z)∈ {Yi(z),xi,(Yi(z),x⊤ i)⊤}. Define the vector- form average treatment effect: ¯τR=n−1Pn i=1τR,i, where τR,i=Ri(1)−Ri(0), and its stratified difference-in-means estimator: ˆτR=hX h=1πhˆτh,R,ˆτh,R=n−1 h1X i:i∈Sh,Zi=1Ri(1)−n−1 h0X i:i∈Sh,Zi=0Ri(0). Lemma S2, from the Proposition 1 of Wang et al. (2023), gives the expression of cov { n1/2(ˆτR−¯τR)}. Lemma S2. We have, under SRE, cov{n1/2(ˆτR−¯τR)}=ΣR,where ΣR=HX h=1πhSh,R(1)R(1) rh1+Sh,R(0)R(0) rh0−Sh,τRτR . Applying Lemma S2 with Ri(z) = (Yi(z),x⊤ i)⊤fori∈ S h, we have nhcov ˆτh−¯τh ˆτh,x = Vh,ττVh,τx Vh,xτVh,xx , 31 where Vh,ττ=Sh,Y(1)Y(1) rh1+Sh,Y(0)Y(0) rh0−Sh,ττ,Vh,τx=Sh,Y(1)x rh1+Sh,Y(0)x rh0,Vh,xx=Sh,xx rh1rh0. Leth(i) be the stratum which ibelongs to. Applying Lemma S2 with Ri(z) = ωh(i)/πh(i)(Yi(z),x⊤ i)⊤, we have ncov ˆτω−¯τω ˆτω,x = Vω,ττ Vω,τx Vω,xτVω,xx , where Vω,ττ=HX h=1ω2 h πhVh,ττ,Vω,τx=HX h=1ω2 h πhVh,τx,Vω,xx=HX h=1ω2 h πhVh,xx. We require Condition S2 from Wang et al. (2023) to establish the CLT. Condition S2. 
As $n\to\infty$: (i) $r_{h1}$, $h=1,\ldots,H$, have limits in $(0,1)$; (ii) for $z=0,1$,
\[
\max_{h\in[H]}\max_{i\in\mathcal S_h}\|R_i(z)-\bar R_h(z)\|_\infty^2=o(n);
\]
(iii) the following three matrices have finite limits:
\[
\sum_{h=1}^H\pi_h\frac{S_{h,R(1)R(1)}}{r_{h1}},\qquad
\sum_{h=1}^H\pi_h\frac{S_{h,R(0)R(0)}}{r_{h0}},\qquad
\sum_{h=1}^H\pi_h S_{h,\tau_R\tau_R},
\]
and the limit of $\Sigma_R$ is (strictly) positive definite.

We use the same notation to denote the limits of the finite-population quantities when there is no ambiguity. We require Lemma S3, from Theorem 1 of Wang et al. (2023), to establish the CLT for $n^{1/2}(\hat\tau_R-\bar\tau_R)$.

Lemma S3. Under Condition S2 and SRE, $n^{1/2}(\hat\tau_R-\bar\tau_R)\xrightarrow{d}\mathcal N(0,\Sigma_R)$.

Lemma S4. We have
\[
V_{\omega,\tau\tau}-V_{\omega,\tau x}V_{\omega,xx}^{-1}V_{\omega,x\tau}\ge\sum_{h=1}^H\frac{\omega_h^2}{\pi_h}\sigma^2_{h,\mathrm{adj}}.
\]
Proof of Lemma S4. Using the Cauchy–Schwarz inequality $(b^\top b)(d^\top d)\ge(b^\top d)^2$ with
\[
b=\Big(\frac{\omega_1}{\pi_1^{1/2}}V_{1,xx}^{1/2}V_{\omega,xx}^{-1}V_{\omega,x\tau},\ \ldots,\ \frac{\omega_H}{\pi_H^{1/2}}V_{H,xx}^{1/2}V_{\omega,xx}^{-1}V_{\omega,x\tau}\Big)^\top,\qquad
d=\Big(\frac{\omega_1}{\pi_1^{1/2}}V_{1,xx}^{-1/2}V_{1,x\tau},\ \ldots,\ \frac{\omega_H}{\pi_H^{1/2}}V_{H,xx}^{-1/2}V_{H,x\tau}\Big)^\top,
\]
we have
\[
\sum_{h=1}^H\frac{\omega_h^2}{\pi_h}V_{h,\tau x}V_{h,xx}^{-1}V_{h,x\tau}\ge V_{\omega,\tau x}V_{\omega,xx}^{-1}V_{\omega,x\tau}.
\]
Recall that $\sigma^2_{h,\mathrm{adj}}=V_{h,\tau\tau}-V_{h,\tau x}V_{h,xx}^{-1}V_{h,x\tau}$. Therefore,
\[
V_{\omega,\tau\tau}-V_{\omega,\tau x}V_{\omega,xx}^{-1}V_{\omega,x\tau}
\ge\sum_{h=1}^H\frac{\omega_h^2}{\pi_h}\big(V_{h,\tau\tau}-V_{h,\tau x}V_{h,xx}^{-1}V_{h,x\tau}\big)
=\sum_{h=1}^H\frac{\omega_h^2}{\pi_h}\sigma^2_{h,\mathrm{adj}}.
\]
Let $(\varepsilon_{h,\tau},\varepsilon_{h,x}^\top)^\top$, $h=1,\ldots,H$, be independent random vectors with
\[
\begin{pmatrix}\varepsilon_{h,\tau}\\ \varepsilon_{h,x}\end{pmatrix}
\sim\mathcal N\left(0,\begin{pmatrix}V_{h,\tau\tau}&V_{h,\tau x}\\ V_{h,x\tau}&V_{h,xx}\end{pmatrix}\right),
\]
and let $(\varepsilon_{\omega,\tau},\varepsilon_{\omega,x})$ be a random vector with
\[
\begin{pmatrix}\varepsilon_{\omega,\tau}\\ \varepsilon_{\omega,x}\end{pmatrix}
\sim\mathcal N\left(0,\begin{pmatrix}V_{\omega,\tau\tau}&V_{\omega,\tau x}\\ V_{\omega,x\tau}&V_{\omega,xx}\end{pmatrix}\right).
\]
Lemma S5. Under Condition S1 (i)–(iii) and SRE, we have (i) $\big(n_1^{1/2}(\hat\tau_1-\bar\tau_1,\hat\tau_{1,x}^\top),\ \ldots,\ n_H^{1/2}$
https://arxiv.org/abs/2505.01137v1
$(\hat\tau_H-\bar\tau_H,\hat\tau_{H,x}^\top)\big)^\top\xrightarrow{d}\big((\varepsilon_{1,\tau},\varepsilon_{1,x}^\top),\ldots,(\varepsilon_{H,\tau},\varepsilon_{H,x}^\top)\big)^\top$ and (ii) $n^{1/2}(\hat\tau_\omega-\bar\tau_\omega,\hat\tau_{\omega,x}^\top)^\top\xrightarrow{d}(\varepsilon_{\omega,\tau},\varepsilon_{\omega,x}^\top)$.

Proof of Lemma S5. For part (i), applying Lemma S3 with $R_i(z)=(Y_i(z),x_i^\top)^\top$ for $i\in\mathcal S_h$, we have, under Condition S1 (i)–(iii), $n_h^{1/2}(\hat\tau_h-\bar\tau_h,\hat\tau_{h,x}^\top)^\top\xrightarrow{d}(\varepsilon_{h,\tau},\varepsilon_{h,x}^\top)^\top$. Because the $n_h^{1/2}(\hat\tau_h-\bar\tau_h,\hat\tau_{h,x}^\top)^\top$, $h=1,\ldots,H$, are independent, the conclusion follows. For part (ii), under Condition S1 (ii), Lemma S4 implies that $V_{\omega,\tau\tau}-V_{\omega,\tau x}V_{\omega,xx}^{-1}V_{\omega,x\tau}$ has a positive limit. Applying Lemma S3 with $R_i(z)=\omega_{h(i)}/\pi_{h(i)}\,(Y_i(z),x_i^\top)^\top$, the conclusion follows.

C.2 Asymptotic limits of the regression coefficients and EHW standard errors

Let $\tilde S_i=(I(i\in\mathcal S_1),\ldots,I(i\in\mathcal S_H))$.

Lemma S6. We have
\[
\hat{sx}^2_{h,L,k}=\frac{n_{h1}-1}{n_{h1}^2}s_{h1,kk}+\frac{n_{h0}-1}{n_{h0}^2}s_{h0,kk};\qquad
\hat{sx}^2_{fe,k}=\sum_{h=1}^H\omega_h^2\Big\{\hat{sx}^2_{h,L,k}+\frac{r_{h1}^{-1}r_{h0}^{-1}-3}{n_h}(\hat\tau_{h,k}-\hat\tau_k)^2\Big\}.
\]
Proof of Lemma S6. Consider the CRE. Let $\tilde Z_i=Z_i-r_1$ and $\hat e_i=Z_i(x_{ik}-\bar x_{k,1})+(1-Z_i)(x_{ik}-\bar x_{k,0})$. Using FWL, we have
\[
\hat{sx}^2_{L,k}=\frac{\sum_{i=1}^n\tilde Z_i^2\hat e_i^2}{(\sum_{i=1}^n\tilde Z_i^2)^2}
=\frac{r_0^2\sum_{i:Z_i=1}(x_{ik}-\bar x_{k,1})^2+r_1^2\sum_{i:Z_i=0}(x_{ik}-\bar x_{k,0})^2}{(n_0r_1^2+n_1r_0^2)^2}
=\frac{n_1-1}{n_1^2}s_{1,kk}+\frac{n_0-1}{n_0^2}s_{0,kk}.
\]
Thus, for stratum $h$, we have $\hat{sx}^2_{h,L,k}=\frac{n_{h1}-1}{n_{h1}^2}s_{h1,kk}+\frac{n_{h0}-1}{n_{h0}^2}s_{h0,kk}$. The equality for $\hat{sx}^2_{fe,k}$ follows from Theorem 6 of Ding (2021) with $(Y_i(1),Y_i(0))\equiv(x_{ik},x_{ik})$.

Lemma S7 is from Lemma A16 of Li et al. (2018).

Lemma S7. Assume Condition S1 (i)–(iii) holds. Under SRE, we have, for $z\in\{0,1\}$ and $h\in[H]$,
\[
s_{hz,YY}-S_{h,Y(z)Y(z)}=o_P(1),\qquad s_{hz,xx}-S_{h,xx}=o_P(1),\qquad s_{hz,xY}-S_{h,xY(z)}=o_P(1).
\]
Lemma S8. We have
\[
\hat{se}^2_L=\sum_{h=1}^H\pi_h^2\,\hat{se}^2_{h,L},\qquad \hat\tau_L=\sum_{h=1}^H\pi_h\hat\tau_{h,L}.
\]
Moreover, we have $\hat\tau_L=\hat\tau-\sum_{h=1}^H\pi_h\hat\beta_{h,L,x}^\top\hat\tau_{h,x}$, where $\hat\beta_{h,L,x}=r_{h0}\hat\beta_h(1)+r_{h1}\hat\beta_h(0)$ and $\hat\beta_h(z)=s_{hz,xx}^{-1}s_{hz,xY}$.

Proof of Lemma S8. The conclusion follows from linear algebra and Lemma S1.

Lemma S9. $\hat\tau_{fe}=\hat\tau_\omega-\hat\beta_{fe,x}^\top\hat\tau_{\omega,x}$, where
\[
\hat\beta_{fe,x}=\Big[n^{-1}\sum_{h=1}^H\big\{(n_{h1}-1)s_{h1,xx}+(n_{h0}-1)s_{h0,xx}+n_hr_{h1}r_{h0}(\hat\tau_{\omega,x}-\hat\tau_{h,x})(\hat\tau_{\omega,x}-\hat\tau_{h,x})^\top\big\}\Big]^{-1}
\]
\[
\cdot\Big[n^{-1}\sum_{h=1}^H\big\{(n_{h1}-1)s_{h1,xY}+(n_{h0}-1)s_{h0,xY}+n_hr_{h1}r_{h0}(\hat\tau_{\omega,x}-\hat\tau_{h,x})(\hat\tau_\omega-\hat\tau_h)\big\}\Big].
\]
Proof of Lemma S9.
By Lemma S1, $\hat\tau_{fe}$ is also equal to the coefficient estimator of $Z_i$ in
\[
\mathrm{lm}(Y_i\sim Z_i+\tilde S_i+x_i). \tag{10}
\]
Let $\hat e_i$ be the residual of unit $i$ obtained from $\mathrm{lm}(Y_i\sim Z_i+\tilde S_i)$. By the proof of (Ding 2021, Theorem 6), the residual sum of squares is
\[
\sum_{i=1}^n\hat e_i^2=\sum_{h=1}^H\Big\{\sum_{i:Z_i=1,i\in\mathcal S_h}(Y_i-\bar Y_{h1})^2+\sum_{i:Z_i=0,i\in\mathcal S_h}(Y_i-\bar Y_{h0})^2\Big\}+\sum_{h=1}^Hn_hr_{h0}r_{h1}(\hat\tau_\omega-\hat\tau_h)^2.
\]
Let $\hat\beta_{fe,x}$ be the coefficient of $x_i$ in (10). Replacing $Y_i(z)$ by $Y_i(z)-x_i^\top\beta$, we view $\sum_{i=1}^n\hat e_i^2$ as a function of $\beta$; $\hat\beta_{fe,x}$ minimizes
\[
\sum_{i=1}^n\hat e_i^2=\sum_{h=1}^H\Big\{\sum_{i:Z_i=1,i\in\mathcal S_h}(Y_i-x_i^\top\beta-\bar Y_{h1}+\bar x_{h1}^\top\beta)^2+\sum_{i:Z_i=0,i\in\mathcal S_h}(Y_i-x_i^\top\beta-\bar Y_{h0}+\bar x_{h0}^\top\beta)^2\Big\}
\]
\[
+\sum_{h=1}^Hn_hr_{h0}r_{h1}\big\{\hat\tau_\omega-\hat\tau_h-\beta^\top(\hat\tau_{\omega,x}-\hat\tau_{h,x})\big\}^2.
\]
Therefore, the expression of $\hat\beta_{fe,x}$ follows. Substituting $Y_i(z)$ with $Y_i(z)-\hat\beta_{fe,x}^\top x_i$ in the expression of $\hat\tau_\omega$, we have $\hat\tau_{fe}=\hat\tau_\omega-\hat\beta_{fe,x}^\top\hat\tau_{\omega,x}$.

Lemma S10. Under Condition S1 and under SRE, we have
\[
\hat\beta_h(z)-\beta_h(z)=o_P(1),\quad \beta_h(z)=S_{h,xx}^{-1}S_{h,xY(z)};\qquad
\hat\beta_{h,L,x}-\beta_{h,L,x}=o_P(1),\quad \beta_{h,L,x}=V_{h,xx}^{-1}V_{h,x\tau};
\]
\[
\hat\beta_{fe,x}-\beta_{fe,x}=o_P(1),\quad
\beta_{fe,x}=\Big(\sum_{h=1}^H\pi_hS_{h,xx}\Big)^{-1}\Big\{\sum_{h=1}^H\pi_h\big(r_{h1}S_{h,xY(1)}+r_{h0}S_{h,xY(0)}\big)\Big\}.
\]
Proof of Lemma S10. The first two lines follow from the definitions of $\hat\beta_h(z)$ and $\hat\beta_{h,L,x}$, together with Lemma S7. It remains to prove the last line. By Lemma S5, we have
\[
\hat\tau_{\omega,x}-\hat\tau_{h,x}=O_P(n_h^{-1/2})=O_P(n^{-1/2}),\qquad
\hat\tau_\omega-\hat\tau_h-(\bar\tau_\omega-\bar\tau_h)=O_P(n^{-1/2}).
\]
Condition S1 (iv) implies that $\bar\tau_\omega-\bar\tau_h=O(1)$. Therefore, we have
\[
\sum_h\pi_hr_{h1}r_{h0}(\hat\tau_{\omega,x}-\hat\tau_{h,x})(\hat\tau_{\omega,x}-\hat\tau_{h,x})^\top=O_P(n^{-1}),
\]
\[
\sum_h\pi_hr_{h1}r_{h0}(\hat\tau_{\omega,x}-\hat\tau_{h,x})(\hat\tau_\omega-\hat\tau_h)=O_P(n^{-1/2})O_P(1+n^{-1/2})=O_P(n^{-1/2}).
\]
Combining this with Lemma S7, we complete the proof.

Lemma S11 shows the asymptotic limits of the EHW standard errors.

Lemma S11. Assume Condition S1 holds. Under SRE, we have
\[
n\hat{se}^2_{fe}-n\tilde{se}^2_{fe}=o_P(1);\qquad n_h\hat{se}^2_{h,L}-n_h\tilde{se}^2_{h,L}=o_P(1);
\]
\[
n\hat{sx}^2_{fe,k}-V_{\omega,kk}=o_P(1),\qquad n_h\hat{sx}^2_{h,L,k}-V_{h,kk}=o_P(1),\qquad k\in[K],\ h\in[H].
\]
Proof of Lemma S11. We first prove $n\hat{se}^2_{fe}-n\tilde{se}^2_{fe}=o_P(1)$. The proof proceeds in three steps. In Step 1, we decompose $n\hat{se}^2_{fe}$ into six terms; in Step 2, we derive the closed-form expressions of the related matrices; in Step 3, we derive the asymptotic limit of each term.

Step 1: We decompose $n\hat{se}^2_{fe}$ into six terms. Let $V_i=(Z_i,\tilde S_i)$, let $\hat e_i$ be the residual of unit $i$ from $\mathrm{lm}(Y_i\sim Z_i+\tilde S_i+x_i)$, and let
\[
G=\begin{pmatrix}G_{11}&G_{12}\\ G_{21}&G_{22}\end{pmatrix},\qquad
H=\begin{pmatrix}H_{11}&H_{12}\\ H_{21}&H_{22}\end{pmatrix},\qquad
G^{-1}=\begin{pmatrix}\Lambda_{11}&\Lambda_{12}\\ \Lambda_{21}&\Lambda_{22}\end{pmatrix},
\]
where
\[
G_{11}=n^{-1}\sum_{i=1}^nV_iV_i^\top,\qquad G_{12}=G_{21}^\top=n^{-1}\sum_{i=1}^nV_ix_i^\top,\qquad G_{22}=n^{-1}\sum_{i=1}^nx_ix_i^\top;
\]
\[
H_{11}=n^{-1}\sum_{i=1}^n\hat e_i^2V_iV_i^\top,\qquad H_{12}=H_{21}^\top=n^{-1}\sum_{i=1}^n\hat e_i^2V_ix_i^\top,\qquad H_{22}=n^{-1}\sum_{i=1}^n\hat e_i^2x_ix_i^\top.
\]
By the definition of $n\hat{se}^2_{fe}$, we have
\[
n\hat{se}^2_{fe}=e_1^\top\begin{pmatrix}\Lambda_{11}&\Lambda_{12}\end{pmatrix}H\begin{pmatrix}\Lambda_{11}\\ \Lambda_{21}\end{pmatrix}e_1.
\]
By the formula for the inverse of a $2\times2$ block matrix,
\[
\Lambda_{11}=G_{11}^{-1}+G_{11}^{-1}G_{12}(G_{22}-G_{21}G_{11}^{-1}G_{12})^{-1}G_{21}G_{11}^{-1},\qquad
\Lambda_{21}^\top=\Lambda_{12}=-G_{11}^{-1}G_{12}(G_{22}-G_{21}G_{11}^{-1}G_{12})^{-1}.
\]
Let $\Sigma=G_{22}-G_{21}G_{11}^{-1}G_{12}$ and $U=G_{11}^{-1}G_{12}$. We have
\[
n\hat{se}^2_{fe}=e_1^\top\begin{pmatrix}G_{11}^{-1}+U\Sigma^{-1}U^\top&-U\Sigma^{-1}\end{pmatrix}H\begin{pmatrix}G_{11}^{-1}+U\Sigma^{-1}U^\top\\ -\Sigma^{-1}U^\top\end{pmatrix}e_1=\sum_{q=1}^6T_q,
\]
where
\[
T_1=e_1^\top G_{11}^{-1}H_{11}G_{11}^{-1}e_1,\qquad
T_2=2e_1^\top G_{11}^{-1}H_{11}U\Sigma^{-1}U^\top e_1,\qquad
T_3=e_1^\top U\Sigma^{-1}U^\top H_{11}U\Sigma^{-1}U^\top e_1,
\]
\[
T_4=-2e_1^\top G_{11}^{-1}H_{12}\Sigma^{-1}U^\top e_1,\qquad
T_5=-2e_1^\top U\Sigma^{-1}U^\top H_{12}\Sigma^{-1}U^\top e_1,\qquad
T_6=e_1^\top U\Sigma^{-1}H_{22}\Sigma^{-1}U^\top e_1.
\]
Step 2: We derive the closed-form expressions of $e_1^\top G_{11}^{-1}$, $\Sigma$, $U$ and $H$. Let $e_1\in\mathbb R^{H+1}$ be the vector with 1 in the first entry and 0 elsewhere. Let $L=G_{21}e_1$. By simple calculation, we have
\[
L=\sum_{h=1}^H\pi_hr_{h1}\bar x_{h1},\qquad G_{21}=Le_1^\top.
\]
Therefore,
\[
\Sigma=n^{-1}\sum_{i=1}^nx_ix_i^\top-L(G_{11}^{-1})_{1,1}L^\top,\qquad U=G_{11}^{-1}e_1L^\top.
\]
Let $c=\sum_{h=1}^Hn^{-1}n_{h1}=\sum_{h=1}^H\pi_hr_{h1}$, so that
\[
G_{11}=\begin{pmatrix}c&\pi_1r_{11}&\cdots&\pi_Hr_{H1}\\ \pi_1r_{11}&\pi_1&&\\ \vdots&&\ddots&\\ \pi_Hr_{H1}&&&\pi_H\end{pmatrix}.
\]
Next, we prove
\[
e_1^\top G_{11}^{-1}=\Big(\sum_{h=1}^H\pi_hr_{h0}r_{h1}\Big)^{-1}(1,-r_{11},\ldots,-r_{H1}).
\]
By linear algebra, we have
\[
(G_{11}^{-1})_{1,1}=\det(G_{11})^{-1}\prod_{h=1}^H\pi_h=\Big(\sum_{h=1}^H\pi_hr_{h0}r_{h1}\Big)^{-1},
\]
where for the first equality we apply the adjugate formula $G_{11}^{-1}=\operatorname{adj}(G_{11})/\det(G_{11})$, and for the second we apply
\[
\det\begin{pmatrix}A&B\\ C&D\end{pmatrix}=\det(D)\det(A-BD^{-1}C).
\]
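The $2\times2$ block-inverse identities used above (with $\Sigma$ the Schur complement of $G_{11}$ in $G$) can be checked numerically. Below is a minimal NumPy sketch; the dimensions and variable names are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric positive-definite matrix G, partitioned into blocks
p, q = 3, 2  # p plays the role of dim(Z, strata); q the role of dim(x)
A = rng.standard_normal((p + q, p + q))
G = A @ A.T + (p + q) * np.eye(p + q)
G11, G12 = G[:p, :p], G[:p, p:]
G21, G22 = G[p:, :p], G[p:, p:]

# Schur complement of G11, and U = G11^{-1} G12
S = G22 - G21 @ np.linalg.inv(G11) @ G12
U = np.linalg.inv(G11) @ G12

# Block-inverse formulas for the (1,1) and (1,2) blocks of G^{-1}
Lam11 = np.linalg.inv(G11) + U @ np.linalg.inv(S) @ U.T
Lam12 = -U @ np.linalg.inv(S)

Ginv = np.linalg.inv(G)
assert np.allclose(Ginv[:p, :p], Lam11)
assert np.allclose(Ginv[:p, p:], Lam12)
```

For symmetric $G$, $G_{21}=G_{12}^\top$, so $U\Sigma^{-1}U^\top$ coincides with $G_{11}^{-1}G_{12}\Sigma^{-1}G_{21}G_{11}^{-1}$, which is exactly the correction term in $\Lambda_{11}$.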
Using the formula for the inverse of a $2\times2$ block matrix, we have
\[
(G_{11}^{-1})_{2:(H+1),1}=-\big\{c\operatorname{diag}(\pi_h)_{h\in[H]}-(\pi_hr_{h1})_{h\in[H]}(\pi_hr_{h1})_{h\in[H]}^\top\big\}^{-1}(\pi_hr_{h1})_{h\in[H]},
\]
where, by the Sherman–Morrison formula,
\[
\big\{c\operatorname{diag}(\pi_h)_{h\in[H]}-(\pi_hr_{h1})_{h\in[H]}(\pi_hr_{h1})_{h\in[H]}^\top\big\}^{-1}
=c^{-1}\operatorname{diag}(\pi_h^{-1})_{h\in[H]}+\frac{(c^{-1}r_{h1})_{h\in[H]}(c^{-1}r_{h1})_{h\in[H]}^\top}{1-\sum_{h=1}^H(c\pi_h)^{-1}(\pi_hr_{h1})^2}
\]
\[
=c^{-1}\operatorname{diag}(\pi_h^{-1})_{h\in[H]}+\frac{(c^{-1}r_{h1})_{h\in[H]}(c^{-1}r_{h1})_{h\in[H]}^\top}{c^{-1}\sum_{h=1}^H\pi_hr_{h0}r_{h1}}.
\]
Thus,
\[
(G_{11}^{-1})_{2:(H+1),1}=-\Big(\sum_{h=1}^H\pi_hr_{h0}r_{h1}\Big)^{-1}(r_{h1})_{h\in[H]},
\]
and therefore
\[
e_1^\top G_{11}^{-1}=\Big((G_{11}^{-1})_{1,1},\ (G_{11}^{-1})_{2:(H+1),1}^\top\Big)=\Big(\sum_{h=1}^H\pi_hr_{h0}r_{h1}\Big)^{-1}(1,-r_{11},\ldots,-r_{H1}).
\]
By definition, we have
\[
H_{11}=\begin{pmatrix}
n^{-1}\sum_{i:Z_i=1}\hat e_i^2 & n^{-1}\sum_{i\in\mathcal S_1,Z_i=1}\hat e_i^2 & \cdots & n^{-1}\sum_{i\in\mathcal S_H,Z_i=1}\hat e_i^2\\
n^{-1}\sum_{i\in\mathcal S_1,Z_i=1}\hat e_i^2 & n^{-1}\sum_{i\in\mathcal S_1}\hat e_i^2 & &\\
\vdots & &\ddots&\\
n^{-1}\sum_{i\in\mathcal S_H,Z_i=1}\hat e_i^2 & & & n^{-1}\sum_{i\in\mathcal S_H}\hat e_i^2
\end{pmatrix},
\]
\[
H_{21}=\Big(n^{-1}\sum_{i:Z_i=1}\hat e_i^2x_i,\ n^{-1}\sum_{i\in\mathcal S_1}\hat e_i^2x_i,\ \ldots,\ n^{-1}\sum_{i\in\mathcal S_H}\hat e_i^2x_i\Big),\qquad
H_{22}=n^{-1}\sum_{i=1}^n\hat e_i^2x_ix_i^\top.
\]
Step 3: We deal with the $T_q$, $1\le q\le6$, one by one. We will prove $T_1=n\tilde{se}^2_{fe}+o_P(1)$ and $T_q=o_P(1)$ for $2\le q\le6$. First, we see that
\[
T_1=\Big(\sum_{h=1}^H\pi_hr_{h0}r_{h1}\Big)^{-2}n^{-1}\Big\{\sum_{i:Z_i=1}\hat e_i^2-2\sum_{h=1}^H\sum_{i\in\mathcal S_h,Z_i=1}r_{h1}\hat e_i^2+\sum_{h=1}^H\sum_{i\in\mathcal S_h}r_{h1}^2\hat e_i^2\Big\}
\]
\[
=\Big(\sum_{h=1}^H\pi_hr_{h0}r_{h1}\Big)^{-2}n^{-1}\Big\{\sum_{h=1}^H\sum_{i\in\mathcal S_h,Z_i=1}r_{h0}^2\hat e_i^2+\sum_{h=1}^H\sum_{i\in\mathcal S_h,Z_i=0}r_{h1}^2\hat e_i^2\Big\}.
\]
By the proof of Theorem 6 in Ding (2021), the residual for unit $i\in\mathcal S_h$, obtained from $\mathrm{lm}(Y_i\sim Z_i+\tilde S_i)$, equals $Y_i-\bar Y_{h1}-r_{h0}(\hat\tau_\omega-\hat\tau_h)$ if $Z_i=1$, and $Y_i-\bar Y_{h0}+r_{h1}(\hat\tau_\omega-\hat\tau_h)$ if $Z$
$_i=0$. Replacing $Y_i$ with $Y_i-x_i^\top\hat\beta_{fe,x}$ in the above expression, we have, for $i\in\mathcal S_h$,
\[
\hat e_i=\begin{cases}
Y_i-\bar Y_{h1}-(x_i-\bar x_{h1})^\top\hat\beta_{fe,x}-r_{h0}(\hat\tau_\omega-\hat\tau_h)+r_{h0}(\hat\tau_{\omega,x}-\hat\tau_{h,x})^\top\hat\beta_{fe,x},& Z_i=1;\\
Y_i-\bar Y_{h0}-(x_i-\bar x_{h0})^\top\hat\beta_{fe,x}+r_{h1}(\hat\tau_\omega-\hat\tau_h)-r_{h1}(\hat\tau_{\omega,x}-\hat\tau_{h,x})^\top\hat\beta_{fe,x},& Z_i=0.
\end{cases}
\]
Define $\hat\epsilon_i$, for $i\in\mathcal S_h$, by
\[
\hat\epsilon_i=\begin{cases}
Y_i-\bar Y_{h1}-(x_i-\bar x_{h1})^\top\hat\beta_{fe,x},& Z_i=1;\\
Y_i-\bar Y_{h0}-(x_i-\bar x_{h0})^\top\hat\beta_{fe,x},& Z_i=0.
\end{cases}
\]
Using $\sum_{i\in\mathcal S_h,Z_i=z}(\hat\epsilon_i+a_{hz})^2=\sum_{i\in\mathcal S_h,Z_i=z}\hat\epsilon_i^2+n_{hz}a_{hz}^2$ for $h\in[H]$, $z\in\{0,1\}$, with $a_{h1}=-r_{h0}(\hat\tau_\omega-\hat\tau_h)+r_{h0}(\hat\tau_{\omega,x}-\hat\tau_{h,x})^\top\hat\beta_{fe,x}$ and $a_{h0}=r_{h1}(\hat\tau_\omega-\hat\tau_h)-r_{h1}(\hat\tau_{\omega,x}-\hat\tau_{h,x})^\top\hat\beta_{fe,x}$, we have
\[
T_1=\Big(\sum_h\pi_hr_{h0}r_{h1}\Big)^{-2}n^{-1}\Big\{\sum_h\sum_{i\in\mathcal S_h,Z_i=1}r_{h0}^2\hat\epsilon_i^2+\sum_h\sum_{i\in\mathcal S_h,Z_i=0}r_{h1}^2\hat\epsilon_i^2\Big\}
\]
\[
+\Big(\sum_h\pi_hr_{h0}r_{h1}\Big)^{-2}\Big[\sum_h\pi_hr_{h1}r_{h0}(r_{h0}^3+r_{h1}^3)\big\{\hat\tau_\omega-\hat\tau_h-\hat\beta_{fe,x}^\top(\hat\tau_{\omega,x}-\hat\tau_{h,x})\big\}^2\Big]=:T_{11}+T_{12}.
\]
Using Lemmas S7 and S10, we have, under Condition S1,
\[
\frac{1}{n_{hz}-1}\sum_{i\in\mathcal S_h,Z_i=z}\hat\epsilon_i^2
=s_{hz,YY}-2s_{hz,Yx}\hat\beta_{fe,x}+\hat\beta_{fe,x}^\top s_{hz,xx}\hat\beta_{fe,x}
\]
\[
=S_{h,Y(z)Y(z)}-2S_{h,Y(z)x}\beta_{fe,x}+\beta_{fe,x}^\top S_{h,xx}\beta_{fe,x}+o_P(1)
=S_{h,Y(z)-x^\top\beta_{fe,x},\,Y(z)-x^\top\beta_{fe,x}}+o_P(1).
\]
Applying Lemma S2 to stratum $h$ with $R_i(z)=Y_i(z)-\beta_{fe,x}^\top x_i$, we have
\[
n_h\operatorname{cov}(\hat\tau_h-\beta_{fe,x}^\top\hat\tau_{h,x})=\sum_{z\in\{0,1\}}r_{hz}^{-1}S_{h,Y(z)-x^\top\beta_{fe,x},\,Y(z)-x^\top\beta_{fe,x}}-S_{h,\tau\tau}.
\]
Therefore, we have
\[
T_{11}=\Big(\sum_h\pi_hr_{h0}r_{h1}\Big)^{-2}\Big\{\sum_h\pi_hr_{h1}^2r_{h0}^2\sum_{z\in\{0,1\}}r_{hz}^{-1}S_{h,Y(z)-x^\top\beta_{fe,x},\,Y(z)-x^\top\beta_{fe,x}}\Big\}+o_P(1)
\]
\[
=\sum_h\frac{\omega_h^2}{\pi_h}\sum_{z\in\{0,1\}}r_{hz}^{-1}S_{h,Y(z)-x^\top\beta_{fe,x},\,Y(z)-x^\top\beta_{fe,x}}+o_P(1)
=\sum_h\frac{\omega_h^2}{\pi_h}\big\{n_h\operatorname{cov}(\hat\tau_h-\beta_{fe,x}^\top\hat\tau_{h,x})+S_{h,\tau\tau}\big\}+o_P(1)
\]
\[
=n\operatorname{cov}(\hat\tau_\omega-\beta_{fe,x}^\top\hat\tau_{\omega,x})+\sum_h\frac{\omega_h^2}{\pi_h}S_{h,\tau\tau}+o_P(1).
\]
For $T_{12}$, we have $\{\hat\tau_\omega-\hat\tau_h-\hat\beta_{fe,x}^\top(\hat\tau_{\omega,x}-\hat\tau_{h,x})\}^2=(\bar\tau_\omega-\bar\tau_h)^2+o_P(1)$. Therefore,
\[
T_{12}=\Big(\sum_h\pi_hr_{h0}r_{h1}\Big)^{-2}\sum_h\pi_hr_{h1}r_{h0}(r_{h0}^3+r_{h1}^3)(\bar\tau_\omega-\bar\tau_h)^2+o_P(1)
=\sum_h\frac{\omega_h^2(r_{h1}^{-1}r_{h0}^{-1}-3)}{\pi_h}(\bar\tau_h-\bar\tau_\omega)^2+o_P(1),
\]
where for the last equality we use $(r_{h1}r_{h0})^{-1}(r_{h1}^3+r_{h0}^3)=r_{h1}^{-1}r_{h0}^{-1}-3$. Combining the formulas of $T_{11}$ and $T_{12}$, we have $T_1=T_{11}+T_{12}=n\tilde{se}^2_{fe}+o_P(1)$.

We now prove that $T_q$, $q=2,\ldots,6$, are $o_P(1)$. Using $L=O_P(n^{-1/2})$, $(G_{11}^{-1})_{1,1}=O(1)$ and $U=G_{11}^{-1}e_1L^\top$, we have
\[
e_1^\top G_{11}^{-1}H_{11}U=e_1^\top G_{11}^{-1}H_{11}G_{11}^{-1}e_1L^\top=T_1L^\top=O_P(n^{-1/2});\tag{11}
\]
\[
U^\top H_{11}U=T_1LL^\top=O_P(n^{-1}),\qquad U^\top e_1=L(G_{11}^{-1})_{1,1}=O_P(n^{-1/2}).\tag{12}
\]
On the other hand, we have
\[
e_1^\top G_{11}^{-1}H_{12}=\Big(\sum_h\pi_hr_{h0}r_{h1}\Big)^{-1}n^{-1}\Big(\sum_{i:Z_i=1}\hat e_i^2x_i^\top-\sum_h r_{h1}\sum_{i\in\mathcal S_h}\hat e_i^2x_i^\top\Big).
\]
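The closed-form residuals from Ding (2021, Theorem 6) invoked in this step can be checked on a toy stratified data set. The sketch below (all data and variable names are illustrative, not from the paper) regresses $Y$ on treatment and stratum dummies, then compares the OLS residuals with the closed form, where the weighted estimator puts weight $n_hr_{h1}r_{h0}$ on the stratum difference-in-means $\hat\tau_h$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stratified data: stratum sizes and treated counts (illustrative)
sizes, treated = [8, 12, 10], [3, 6, 4]
strata = np.repeat(np.arange(3), sizes)
Z = np.concatenate([np.r_[np.ones(m1), np.zeros(m - m1)]
                    for m, m1 in zip(sizes, treated)])
Y = rng.standard_normal(strata.size) + 0.5 * Z + strata

# OLS of Y on treatment plus stratum dummies (dummies span the intercept)
D = (strata[:, None] == np.arange(3)[None, :]).astype(float)
X = np.column_stack([Z, D])
resid = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]

# Stratum difference-in-means, weighted by n_h * r_h1 * r_h0
tau_h = np.array([Y[(strata == h) & (Z == 1)].mean()
                  - Y[(strata == h) & (Z == 0)].mean() for h in range(3)])
w = np.array([m1 * (m - m1) / m for m, m1 in zip(sizes, treated)])
tau_fe = (w * tau_h).sum() / w.sum()

# Compare with the closed-form residuals, stratum by stratum
for h, (m, m1) in enumerate(zip(sizes, treated)):
    r1, r0 = m1 / m, 1 - m1 / m
    yb1 = Y[(strata == h) & (Z == 1)].mean()
    yb0 = Y[(strata == h) & (Z == 0)].mean()
    gap = tau_fe - tau_h[h]
    assert np.allclose(resid[(strata == h) & (Z == 1)],
                       Y[(strata == h) & (Z == 1)] - yb1 - r0 * gap)
    assert np.allclose(resid[(strata == h) & (Z == 0)],
                       Y[(strata == h) & (Z == 0)] - yb0 + r1 * gap)
```

The same check applies after covariate adjustment, since by FWL the residuals of regression (10) equal those of the unadjusted regression run on $Y_i-x_i^\top\hat\beta_{fe,x}$.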
Using $\|x_i\|_\infty^2=o(n)$ from Condition S1 (iii), we have
\[
\|e_1^\top G_{11}^{-1}H_{12}\|_\infty=O\Big(n^{-1}\sum_h\sum_{i\in\mathcal S_h}\hat e_i^2\Big)O\big(\max_{i\in[n]}\|x_i\|_\infty\big)=O(T_1)\,o(n^{1/2})=o_P(n^{1/2})\tag{13}
\]
and
\[
U^\top H_{12}=L\cdot e_1^\top G_{11}^{-1}H_{12}=O_P(n^{-1/2})\,o_P(n^{1/2})=o_P(1).\tag{14}
\]
We see that $n^{-1}\sum_{i=1}^nx_ix_i^\top=\sum_h\pi_h\frac{n_h-1}{n_h}S_{h,xx}$ and, by Condition S1 (ii), the limit of $n^{-1}\sum_{i=1}^nx_ix_i^\top$ is positive definite. Combining this with $L(G_{11}^{-1})_{1,1}L^\top=O_P(n^{-1})$, we have
\[
\Sigma^{-1}=O_P(1).\tag{15}
\]
Finally, we have
\[
\|H_{22}\|_\infty=O\big(\max_{i\in[n]}\|x_i\|_\infty^2\big)O\Big(n^{-1}\sum_{i:Z_i=1}\hat e_i^2\Big)=O\big(\max_{i\in[n]}\|x_i\|_\infty^2\big)O(T_1)=o(n)O_P(1)=o_P(n).\tag{16}
\]
Putting together (11)–(16), we have
\[
T_2=e_1^\top G_{11}^{-1}H_{11}U\cdot\Sigma^{-1}\cdot U^\top e_1=O_P(n^{-1/2}\cdot1\cdot n^{-1/2})=o_P(1),
\]
\[
T_3=e_1^\top U\cdot\Sigma^{-1}\cdot U^\top H_{11}U\cdot\Sigma^{-1}\cdot U^\top e_1=O_P(n^{-1/2}\cdot1\cdot n^{-1}\cdot1\cdot n^{-1/2})=o_P(1),
\]
\[
T_4=e_1^\top G_{11}^{-1}H_{12}\cdot\Sigma^{-1}\cdot U^\top e_1=o_P(n^{1/2}\cdot1\cdot n^{-1/2})=o_P(1),
\]
\[
T_5=e_1^\top U\cdot\Sigma^{-1}\cdot U^\top H_{12}\cdot\Sigma^{-1}\cdot U^\top e_1=o_P(n^{-1/2}\cdot1\cdot1\cdot1\cdot n^{-1/2})=o_P(1),
\]
\[
T_6=e_1^\top U\cdot\Sigma^{-1}\cdot H_{22}\cdot\Sigma^{-1}\cdot U^\top e_1=o_P(n^{-1/2}\cdot1\cdot n\cdot1\cdot n^{-1/2})=o_P(1).
\]
Thus
\[
n\hat{se}^2_{fe}=T_1+o_P(1)=n\tilde{se}^2_{fe}+o_P(1).\tag{17}
\]
Now we prove the rest of the lemma. By (Li & Ding 2020, Theorem 8), under Condition S1, we have
\[
n_h\hat{se}^2_{h,L}-n_h\tilde{se}^2_{h,L}=o_P(1).\tag{18}
\]
Applying (17) and (18) with $Y_i(z)\equiv x_{ik}$ and $x_i\equiv\varnothing$, we have $n\hat{sx}^2_{fe,k}-V_{\omega,kk}=o_P(1)$ and $n_h\hat{sx}^2_{h,L,k}-V_{h,kk}=o_P(1)$.

C.3 Asymptotic limits of the hacked p-values

We review the notation of rerandomization with a general covariate balance criterion (ReG) from Li et al. (2018) and Zhao & Ding (2024).

Definition S1. Let $\phi(B,C)$ be a binary covariate balance indicator function, where $(B,C)$ are two statistics computed from the data. A ReG accepts a randomization if $\phi(B,C)=1$.

Condition S3. The binary indicator function $\phi(\cdot,\cdot)$ satisfies: (i) $\phi(\cdot,\cdot)$ is almost surely continuous; (ii) for $u\sim\mathcal N(0_J,V_0)$, we have $P\{\phi(u,V_0)=1\}>0$ for all $V_0>0$, and $\operatorname{cov}\{u\mid\phi(u,V_0)=1\}$ is a continuous function of $V_0$.

Lemma S12 is from Zhao & Ding (2024, Lemma S7).

Lemma S12 (Weak convergence under ReG). Let $(\phi_n)_{n=1}^\infty$ be a sequence of binary indicator functions satisfying Condition S3 that converges to $\phi$. For a sequence of random elements $(A_n,B_n,C_n)_{n=1}^\infty$ that satisfies $(A_n,B_n,C_n)\xrightarrow{d}(A,B,C)$ as $n\to\infty$, we have
\[
(A_n,B_n)\mid\{\phi_n(B_n,C_n)=1\}\ \xrightarrow{d}\ (A,B)\mid\{\phi(B,C)=1\}.
\]
Define $\hat\tau_{h,L,\mathcal K}$, $\hat\beta_{h,L,\mathcal K}$, $\beta_{h,L,\mathcal K}$, $\hat{se}^2_{h,L,\mathcal K}$, $\tilde{se}^2_{h,L,\mathcal K}$, $\hat\tau_{L,\mathcal K}$, $\hat{se}^2_{L,\mathcal K}$, $\tilde{se}^2_{L,\mathcal K}$, $\hat\tau_{fe,\mathcal K}$, $\hat\beta_{fe,\mathcal K}$, $\beta_{fe,\mathcal K}$, $\hat{se}^2_{fe,\mathcal K}$, $\tilde{se}^2_{fe,\mathcal K}$ as the analogues of $\hat\tau_{h,L}$, $\hat\beta_{h,L,x}$, $\beta_{h,L,x}$, $\hat{se}^2_{h,L}$, $\tilde{se}^2_{h,L}$, $\hat\tau_L$, $\hat{se}^2_L$, $\tilde{se}^2_L$, $\hat\tau_{fe}$, $\hat\beta_{fe,x}$, $\beta_{fe,x}$, $\hat{se}^2_{fe}$, $\tilde{se}^2_{fe}$ with $x_i$ replaced by $x_{i\mathcal K}$.

Lemma S13. Under Condition S1 and SRE, we have
\[
\Big(n_h^{1/2}(\hat\tau_h-\bar\tau_h),\ n_h^{1/2}\hat\tau_{h,x},\ (\hat\beta_{h,L,\mathcal K},n_h\hat{se}^2_{h,L,\mathcal K})_{\mathcal K\subseteq[K]},\ (n_h\hat{sx}^2_{h,L,k})_{k\in[K]},\ V_{h,xx}\Big)_{h\in[H]}
\]
\[
\xrightarrow{d}\Big(\varepsilon_{h,\tau},\ \varepsilon_{h,x},\ (\beta_{h,L,\mathcal K},n_h\tilde{se}^2_{h,L,\mathcal K})_{\mathcal K\subseteq[K]},\ (V_{h,kk})_{k\in[K]},\ V_{h,xx}\Big)_{h\in[H]};
\]
\[
\Big(n^{1/2}(\hat\tau_\omega-\bar\tau_\omega),\ n^{1/2}\hat\tau_{\omega,x},\ (\hat\beta_{fe,\mathcal K},n\hat{se}^2_{fe,\mathcal K})_{\mathcal K\subseteq[K]},\ (n\hat{sx}^2_{fe,k})_{k\in[K]}\Big)
\xrightarrow{d}\Big(\varepsilon_{\omega,\tau},\ \varepsilon_{\omega,x},\ (\beta_{fe,\mathcal K},n\tilde{se}^2_{fe,\mathcal K})_{\mathcal K\subseteq[K]},\ (V_{\omega,kk})_{k\in[K]}\Big).
\]
Proof of Lemma S13. Applying Lemmas S10 and S11 with $x_i\equiv x_{i\mathcal K}$ for $\mathcal K\subseteq[K]$, we have
\[
(\hat\beta_{h,L,\mathcal K},n_h\hat{se}^2_{h,L,\mathcal K})_{\mathcal K\subseteq[K]}\xrightarrow{d}(\beta_{h,L,\mathcal K},n_h\tilde{se}^2_{h,L,\mathcal K})_{\mathcal K\subseteq[K]};\qquad
(\hat\beta_{fe,\mathcal K},n\hat{se}^2_{fe,\mathcal K})_{\mathcal K\subseteq[K]}\xrightarrow{d}(\beta_{fe,\mathcal K},n\tilde{se}^2_{fe,\mathcal K})_{\mathcal K\subseteq[K]}.
\]
Combining this with Lemma S5, we complete the proof.

Let
\[
N_{L,\mathcal K}=\frac{\sum_{h=1}^H\pi_h^{1/2}(\varepsilon_{h,\tau}-\beta_{h,L,\mathcal K}^\top\varepsilon_{h,\mathcal K})}{n^{1/2}\tilde{se}_{L,\mathcal K}},\qquad
N_{fe,\mathcal K}=\frac{\varepsilon_{\omega,\tau}-\beta_{fe,\mathcal K}^\top\varepsilon_{\omega,\mathcal K}}{n^{1/2}\tilde{se}_{fe,\mathcal K}},\tag{19}
\]
where $\varepsilon_{h,\mathcal K}$ and $\varepsilon_{\omega,\mathcal K}$ are the subvectors of $\varepsilon_{h,x}$ and $\varepsilon_{\omega,x}$ corresponding to $\mathcal K$, respectively. Let $\hat\tau_{h,\mathcal K}$ and $\hat\tau_{\omega,\mathcal K}$ be the subvectors of $\hat\tau_{h,x}$ and $\hat\tau_{\omega,x}$ corresponding to $\mathcal K$, respectively.

Lemma S14. Assume Condition S1 holds. Under SRE, we have (i) under $H_{0n}$, $(\hat\tau_{L,\mathcal K}/\hat{se}_{L,\mathcal K})_{\mathcal K\subseteq[K]}\xrightarrow{d}(N_{L,\mathcal K})_{\mathcal K\subseteq[K]}$; (ii) under $H_{0\omega}$, $(\hat\tau_{fe,\mathcal K}/\hat{se}_{fe,\mathcal K})_{\mathcal K\subseteq[K]}\xrightarrow{d}(N_{fe,\mathcal K})_{\mathcal K\subseteq[K]}$.

Proof of Lemma S14. By Lemmas S8 and S9, we have
\[
\frac{\hat\tau_{L,\mathcal K}}{\hat{se}_{L,\mathcal K}}=\frac{n^{1/2}\sum_{h=1}^H\pi_h(\hat\tau_h-\hat\beta_{h,L,\mathcal K}^\top\hat\tau_{h,\mathcal K})}{n^{1/2}\hat{se}_{L,\mathcal K}}=\frac{\sum_{h=1}^H\pi_h^{1/2}n_h^{1/2}(\hat\tau_h-\hat\beta_{h,L,\mathcal K}^\top\hat\tau_{h,\mathcal K})}{n^{1/2}\hat{se}_{L,\mathcal K}},\qquad
\frac{\hat\tau_{fe,\mathcal K}}{\hat{se}_{fe,\mathcal K}}=\frac{n^{1/2}(\hat\tau_\omega-\hat\beta_{fe,\mathcal K}^\top\hat\tau_{\omega,\mathcal K})}{n^{1/2}\hat{se}_{fe,\mathcal K}}.
\]
The conclusion follows from Lemmas S5 and S11.

Lemma S15. Recall $N_{L,\mathcal K}$ in (19). For all $\mathcal K\subseteq[K]$, $\operatorname{var}(N_{L,\mathcal K})\le1$; when $H_{0f}$ holds, we have $\operatorname{var}(N_{L,\mathcal K})=1$.

Proof of Lemma S15. We only prove the case $\mathcal K=[K]$; the other cases are similar. By definition,
\[
\operatorname{var}(N_{L,[K]})=\frac{\sum_{h=1}^H\pi_h\operatorname{var}(\varepsilon_{h,\tau}-\beta_{h,L,x}^\top\varepsilon_{h,x})}{n\tilde{se}^2_L}
=\frac{\sum_{h=1}^H\pi_h(V_{h,\tau\tau}-V_{h,\tau x}V_{h,xx}^{-1}V_{h,x\tau})}{n\tilde{se}^2_L}
=\frac{\sum_{h=1}^H\pi_h\sigma^2_{h,\mathrm{adj}}}{\sum_{h=1}^H\pi_h(\sigma^2_{h,\mathrm{adj}}+S_{h,\tau_e\tau_e})}\le1.
\]
When $H_{0f}$ holds, i.e., $Y_i(1)=Y_i(0)$ for $i\in[n]$, we have $S_{h,\tau_e\tau_e}=0$.
Therefore, $\operatorname{var}(N_{L,[K]})=1$.

Lemma S16. Under Condition S1 and SRE, we have: under $H_{0n}$,
(i) $P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rep}}(\alpha_t))=P\big(\max_{\mathcal K\subseteq[K]}|N_{L,\mathcal K}|\ge z_{1-\alpha/2}\ \big|\ \max_{h\in[H]}\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\le z_{1-\alpha_t/2}\big)$;
(ii) $P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rem}}(a))=P\big(\max_{\mathcal K\subseteq[K]}|N_{L,\mathcal K}|\ge z_{1-\alpha/2}\ \big|\ \max_{h\in[H]}\varepsilon_{h,x}^\top V_{h,xx}^{-1}\varepsilon_{h,x}\le a\big)$;
(iii) $P_\infty(p^{\mathrm h}_L\le\alpha)=P\big(\max_{\mathcal K\subseteq[K]}|N_{L,\mathcal K}|\ge z_{1-\alpha/2}\big)$;
(iv) $P_\infty(\mathcal A_{\text{ss-rep}}(\alpha_t))=P\big(\max_{h\in[H]}\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\le z_{1-\alpha_t/2}\big)$;
(v) $P_\infty(\mathcal A_{\text{ss-rem}}(a))=P\big(\max_{h\in[H]}\varepsilon_{h,x}^\top V_{h,xx}^{-1}\varepsilon_{h,x}\le a\big)$;
under $H_{0\omega}$,
(vi) $P_\infty(p^{\mathrm h}_{fe}\le\alpha\mid\mathcal A_{\text{fe-rep}}(\alpha_t))=P\big(\max_{\mathcal K\subseteq[K]}|N_{fe,\mathcal K}|\ge z_{1-\alpha/2}\ \big|\ \|\sigma(V_{\omega,xx})^{-1}\varepsilon_{\omega,x}\|_\infty\le z_{1-\alpha_t/2}\big)$;
(vii) $P_\infty(p^{\mathrm h}_{fe}\le\alpha)=P\big(\max_{\mathcal K\subseteq[K]}|N_{fe,\mathcal K}|\ge z_{1-\alpha/2}\big)$;
(viii) $P_\infty(\mathcal A_{\text{fe-rep}}(\alpha_t))=P\big(\|\sigma(V_{\omega,xx})^{-1}\varepsilon_{\omega,x}\|_\infty\le z_{1-\alpha_t/2}\big)$.

Proof of Lemma S16. By definition,
\[
p_{L,\mathcal K}=2\Big\{1-\Phi\Big(\Big|\frac{\hat\tau_{L,\mathcal K}}{\hat{se}_{L,\mathcal K}}\Big|\Big)\Big\},\quad
p_{fe,\mathcal K}=2\Big\{1-\Phi\Big(\Big|\frac{\hat\tau_{fe,\mathcal K}}{\hat{se}_{fe,\mathcal K}}\Big|\Big)\Big\},\quad
p_{t,hk}=2\Big\{1-\Phi\Big(\Big|\frac{\hat\tau_{h,k}}{\hat{sx}_{h,L,k}}\Big|\Big)\Big\},\quad
p^x_{fe,k}=2\Big\{1-\Phi\Big(\Big|\frac{\hat\tau_{\omega,k}}{\hat{sx}_{fe,k}}\Big|\Big)\Big\}.
\]
We have
\[
\mathcal A_{\text{ss-rep}}(\alpha_t)=\Big\{\max_{h\in[H]}\max_{k\in[K]}\big|\hat\tau_{h,k}/\hat{sx}_{h,L,k}\big|\le z_{1-\alpha_t/2}\Big\},\qquad
\mathcal A_{\text{fe-rep}}(\alpha_t)=\Big\{\max_{k\in[K]}\big|\hat\tau_{\omega,k}/\hat{sx}_{fe,k}\big|\le z_{1-\alpha_t/2}\Big\},
\]
\[
\{p^{\mathrm h}_L\le\alpha\}=\Big\{\max_{\mathcal K\subseteq[K]}\frac{\big|\sum_{h=1}^H\pi_h(\hat\tau_h-\hat\beta_{h,L,\mathcal K}^\top\hat\tau_{h,\mathcal K})\big|}{(\sum_{h=1}^H\pi_h^2\hat{se}^2_{h,L,\mathcal K})^{1/2}}\ge z_{1-\alpha/2}\Big\},\qquad
\{p^{\mathrm h}_{fe}\le\alpha\}=\Big\{\max_{\mathcal K\subseteq[K]}\Big|\frac{\hat\tau_{fe,\mathcal K}}{\hat{se}_{fe,\mathcal K}}\Big|\ge z_{1-\alpha/2}\Big\}.
\]
Parts (iii)–(v) and (vii)–(viii) follow from Lemma S13. Applying Lemma S12 with $(B_n,C_n)=((B_{n,h})_{h\in[H]},\operatorname{diag}(C_{n,h})_{h\in[H]})$ and $\phi(B_n,C_n)=I(\max_{h\in[H]}\|\sigma(C_{n,h})^{-1}B_{n,h}\|_\infty\le z_{1-\alpha_t/2})$, where $B_{n,h}=n_h^{1/2}\hat\tau_{h,x}$ and $C_{n,h}=\operatorname{diag}(n_h\hat{sx}^2_{h,L,k})_{k\in[K]}$, and by Lemma S13, we have
\[
\Big(n_h^{1/2}(\hat\tau_h-\bar\tau_h),\ n_h^{1/2}\hat\tau_{h,x},\ (\hat\beta_{h,L,\mathcal K},n_h\hat{se}^2_{h,L,\mathcal K})_{\mathcal K\subseteq[K]}\Big)_{h\in[H]}\ \Big|\ \mathcal A_{\text{ss-rep}}(\alpha_t)
\]
\[
\xrightarrow{d}\Big(\varepsilon_{h,\tau},\ \varepsilon_{h,x},\ (\beta_{h,L,\mathcal K},n_h\tilde{se}^2_{h,L,\mathcal K})_{\mathcal K\subseteq[K]}\Big)_{h\in[H]}\ \Big|\ \max_{h\in[H]}\|\sigma(V_{h,xx})^{-1}\varepsilon_{h,x}\|_\infty\le z_{1-\alpha_t/2}.
\]
Part (i) follows. By Lemma S12 with $(B_n,C_n)=((B_{n,h})_{h\in[H]},\operatorname{diag}(C_{n,h})_{h\in[H]})$ and $\phi(B_n,C_n)=I(\max_{h\in[H]}B_{n,h}^\top C_{n,h}^{-1}B_{n,h}\le a)$, where $B_{n,h}=n_h^{1/2}\hat\tau_{h,x}$ and $C_{n,h}=V_{h,xx}$, and
Lemma S13, we have
\[
\Big(n_h^{1/2}(\hat\tau_h-\bar\tau_h),\ n_h^{1/2}\hat\tau_{h,x},\ (\hat\beta_{h,L,\mathcal K},n_h\hat{se}^2_{h,L,\mathcal K})_{\mathcal K\subseteq[K]}\Big)_{h\in[H]}\ \Big|\ \mathcal A_{\text{ss-rem}}(a)
\xrightarrow{d}\Big(\varepsilon_{h,\tau},\ \varepsilon_{h,x},\ (\beta_{h,L,\mathcal K},n_h\tilde{se}^2_{h,L,\mathcal K})_{\mathcal K\subseteq[K]}\Big)_{h\in[H]}\ \Big|\ \max_{h\in[H]}\varepsilon_{h,x}^\top V_{h,xx}^{-1}\varepsilon_{h,x}\le a.
\]
Part (ii) follows. By Lemma S12 with $(B_n,C_n)=(n^{1/2}\hat\tau_{\omega,x},\operatorname{diag}(n\hat{sx}^2_{fe,k})_{k\in[K]})$ and $\phi(B_n,C_n)=I(\|\sigma(C_n)^{-1}B_n\|_\infty\le z_{1-\alpha_t/2})$, and Lemma S13, we have
\[
\Big(n^{1/2}(\hat\tau_\omega-\bar\tau_\omega),\ n^{1/2}\hat\tau_{\omega,x},\ (\hat\beta_{fe,\mathcal K},n\hat{se}^2_{fe,\mathcal K})_{\mathcal K\subseteq[K]}\Big)\ \Big|\ \mathcal A_{\text{fe-rep}}(\alpha_t)
\xrightarrow{d}\Big(\varepsilon_{\omega,\tau},\ \varepsilon_{\omega,x},\ (\beta_{fe,\mathcal K},n\tilde{se}^2_{fe,\mathcal K})_{\mathcal K\subseteq[K]}\Big)\ \Big|\ \|\sigma(V_{\omega,xx})^{-1}\varepsilon_{\omega,x}\|_\infty\le z_{1-\alpha_t/2}.
\]
Part (vi) follows.

D Proofs of theoretical results under CRE

Let $V_{x\tau}^\top=V_{\tau x}=n\operatorname{cov}(\hat\tau,\hat\tau_x)$, $V_{\tau\tau}=n\operatorname{var}(\hat\tau)$ and $V_{xx}=n\operatorname{cov}(\hat\tau_x)$. Let $V_{\mathcal K\mathcal K}$ and $V_{\mathcal K\tau}$ be the submatrices of $V_{xx}$ and $V_{x\tau}$ corresponding to $\mathcal K$, respectively. Let $\beta_{L,x}=V_{xx}^{-1}V_{x\tau}$ and $\beta_{L,\mathcal K}=V_{\mathcal K\mathcal K}^{-1}V_{\mathcal K\tau}$. Define the random vector $(\varepsilon_\tau,\varepsilon_x^\top)$:
\[
\begin{pmatrix}\varepsilon_\tau\\ \varepsilon_x\end{pmatrix}\sim\mathcal N\left(0,\begin{pmatrix}V_{\tau\tau}&V_{\tau x}\\ V_{x\tau}&V_{xx}\end{pmatrix}\right).
\]
Let $\varepsilon_{\mathcal K}$ be the subvector of $\varepsilon_x$ corresponding to $\mathcal K$. A CRE is an SRE with $H=1$, and $N_{L,\mathcal K}$ defined in (19) reduces to
\[
N_{L,\mathcal K}=\frac{\varepsilon_\tau-\beta_{L,\mathcal K}^\top\varepsilon_{\mathcal K}}{n^{1/2}\tilde{se}_{L,\mathcal K}}.
\]

D.1 Proof of Proposition 1

Proof of Proposition 1. By Lemma S15, under $H_{0f}$, the $N_{L,\mathcal K}$, $\mathcal K\subseteq[K]$, satisfy $\operatorname{var}(N_{L,\mathcal K})=1$, so they are standard Gaussian random variables. Therefore,
\[
P_\infty\big(\min_{\mathcal K\subseteq[K]}p_{L,\mathcal K}\le\alpha\big)=P\big(\max_{\mathcal K\subseteq[K]}|N_{L,\mathcal K}|\ge z_{1-\alpha/2}\big)\ge P\big(|N_{L,\varnothing}|\ge z_{1-\alpha/2}\big)=\alpha.
\]
The above inequality holds with equality if and only if $\operatorname{cov}(N_{L,\mathcal K},N_{L,\varnothing})=1$ for all $\mathcal K\subseteq[K]$, i.e., $R^2_x=0$.

D.2 Proof of Theorem 1 and Theorem 4

It follows from Theorem S1 with $H=1$.

D.3 Proof of Theorem 2

Proof of Theorem 2. The "if" part: applying Lemma S14 with $H=1$, we have $(\hat\tau_{L,\mathcal K}/\hat{se}_{L,\mathcal K})_{\mathcal K\subseteq[K]}\xrightarrow{d}(N_{L,\mathcal K})_{\mathcal K\subseteq[K]}$. The "if" part then follows from Lemma S19 with $H=1$.

The "only if" part: we prove that if $a>\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta)$ for $(R^2_x,\Delta)\in\mathcal B$, there exists $(Y_i(1),Y_i(0))_{i=1}^n\in\mathcal M(R^2_x,\Delta)$ such that $P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\mathrm{rem}}(a))>\alpha$. Since $(R^2_x,\Delta)\in\mathcal B$, there exists $(\breve Y_i(1),\breve Y_i(0))_{i=1}^n\in\mathcal M(R^2_x,\Delta)$. We construct $(Y_i(1),Y_i(0))_{i=1}^n\in\mathcal M(R^2_x,\Delta)$ as follows: $Y_i(1)=Y_i(0)=r_1\breve Y_i(0)+r_0\breve Y_i(1)$ for $i\in[n]$.
Since $\tau_i=0$ under this construction of $(Y_i(1),Y_i(0))_{i=1}^n$, we can see, by the definition of $\tilde{se}_{L,\mathcal K}$, that $n\tilde{se}^2_{L,\mathcal K}=\operatorname{var}(\varepsilon_\tau-\beta_{L,\mathcal K}^\top\varepsilon_{\mathcal K})$ and therefore $N_{L,\mathcal K}\sim\mathcal N(0,1)$. Let $\bar R^2_x=1-R^2_x$, $\xi=V_{xx}^{-1/2}\varepsilon_x$, $\varepsilon_0=N_{L,[K]}$, and $\tilde R^2_{\mathcal K}=R^2_x-R^2_{\mathcal K}$. Let $\mathcal E=\{\max_{\mathcal K\subseteq[K]}|N_{L,\mathcal K}|\ge z_{1-\alpha/2}\}$ and $\mathcal A^\infty_{\mathrm{rem}}(a)=\{\varepsilon_x^\top V_{xx}^{-1}\varepsilon_x\le a\}$. We see that
\[
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\mathrm{rem}}(a))=P(\mathcal E\mid\mathcal A^\infty_{\mathrm{rem}}(a))=P(\mathcal E_1\mid\mathcal A^\infty_{\mathrm{rem}}(a))+P(\mathcal E_2\mid\mathcal A^\infty_{\mathrm{rem}}(a)),
\]
where
\[
\mathcal E_1=\{|N_{L,[K]}|\ge z_{1-\alpha/2}\}=\{|\varepsilon_0|\ge z_{1-\alpha/2}\},\qquad
\mathcal E_2=\{\max_{\mathcal K\subsetneq[K]}|N_{L,\mathcal K}|\ge z_{1-\alpha/2},\ |N_{L,[K]}|<z_{1-\alpha/2}\}.
\]
For $\mathcal E_1$, because $\varepsilon_x$ and $\varepsilon_0$ are independent, we have $P(\mathcal E_1\mid\mathcal A^\infty_{\mathrm{rem}}(a))=P(\mathcal E_1)=P(|\varepsilon_0|\ge z_{1-\alpha/2})=\alpha$. It remains to show that $P(\mathcal E_2\cap\mathcal A^\infty_{\mathrm{rem}}(a))>0$. The event $\mathcal E_2\cap\mathcal A^\infty_{\mathrm{rem}}(a)$ can be seen as a measurable set of $(\varepsilon_0,\xi)$. Since a nonempty open set has positive measure, it suffices to show that the following open subset $\mathcal E_3\subseteq\mathcal E_2\cap\mathcal A^\infty_{\mathrm{rem}}(a)$ is nonempty:
\[
\mathcal E_3=\big\{|N_{L,[K]}|<z_{1-\alpha/2},\ \max_{\mathcal K\subsetneq[K]}|N_{L,\mathcal K}|>z_{1-\alpha/2}\big\}\cap\big\{\|\xi\|_2^2<a\big\}.
\]
Define $\tilde\beta_{L,\mathcal K}\in\mathbb R^K$ with $(\tilde\beta_{L,\mathcal K})_{\mathcal K}=\beta_{L,\mathcal K}$ and 0 at the other entries. We have $\operatorname{var}(\varepsilon_\tau-\beta_{L,\mathcal K}^\top\varepsilon_{\mathcal K})=(1-R^2_{\mathcal K})V_{\tau\tau}=(\bar R^2_x+\tilde R^2_{\mathcal K})V_{\tau\tau}$ and
\[
|N_{L,\mathcal K}|=\frac{\big|V_{\tau\tau}^{1/2}\bar R_x\varepsilon_0+\varepsilon_x^\top(\beta_{L,x}-\tilde\beta_{L,\mathcal K})\big|}{(\bar R^2_x+\tilde R^2_{\mathcal K})^{1/2}V_{\tau\tau}^{1/2}}
=\frac{\bar R_x|\varepsilon_0|+s_{\mathcal K}\|\xi\|_2\tilde R_{\mathcal K}}{(\bar R^2_x+\tilde R^2_{\mathcal K})^{1/2}},
\]
where $s_{\mathcal K}=\operatorname{sign}(\varepsilon_0)\,\varepsilon_x^\top(\beta_{L,x}-\tilde\beta_{L,\mathcal K})/(V_{\tau\tau}^{1/2}\|\xi\|_2\tilde R_{\mathcal K})$. Let $k_0=\arg\min_{k\in[K]}\{(\bar R^2_x+\Delta_k)^{1/2}z_{1-\alpha/2}-\bar R_x|\varepsilon_0|\}/\Delta_k^{1/2}$ and $\psi(|\varepsilon_0|)=\min_{k\in[K]}\{(\bar R^2_x+\Delta_k)^{1/2}z_{1-\alpha/2}-\bar R_x|\varepsilon_0|\}/$
$\Delta_k^{1/2}$. Noticing that $\Delta_{k_0}=\tilde R^2_{[-k_0]}$, we have, under $\{s_{[-k_0]}=1\}$,
\[
\max_{\mathcal K\subsetneq[K]}|N_{L,\mathcal K}|\ge|N_{L,[-k_0]}|=\frac{\bar R_x|\varepsilon_0|+\|\xi\|_2\Delta_{k_0}^{1/2}}{(\bar R^2_x+\Delta_{k_0})^{1/2}}.
\]
Therefore,
\[
\mathcal E_3\cap\{s_{[-k_0]}=1\}\supseteq\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ s_{[-k_0]}=1,\ \frac{\bar R_x|\varepsilon_0|+\|\xi\|_2\Delta_{k_0}^{1/2}}{(\bar R^2_x+\Delta_{k_0})^{1/2}}>z_{1-\alpha/2},\ \|\xi\|_2^2<a\Big\}
\supseteq\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ s_{[-k_0]}=1,\ \|\xi\|_2>\psi(|\varepsilon_0|),\ \|\xi\|_2^2<a\Big\}.
\]
We see that
\[
\psi(z_{1-\alpha/2})=\min_{k\in[K]}\frac{\{(\bar R^2_x+\Delta_k)^{1/2}-\bar R_x\}z_{1-\alpha/2}}{\Delta_k^{1/2}}=\{\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta)\}^{1/2}<a^{1/2}.
\]
Since $\psi(|\varepsilon_0|)$ is a continuous function of $|\varepsilon_0|$, there exists $0<t<z_{1-\alpha/2}$ such that $\psi(t)<a^{1/2}$. Therefore,
\[
\mathcal E_3\cap\{s_{[-k_0]}=1\}\supseteq\big\{|\varepsilon_0|<z_{1-\alpha/2},\ s_{[-k_0]}=1,\ \|\xi\|_2>\psi(|\varepsilon_0|),\ \|\xi\|_2^2<a\big\}\supseteq\big\{|\varepsilon_0|=t,\ s_{[-k_0]}=1,\ a^{1/2}>\|\xi\|_2>\psi(t)\big\}.
\]
We see that
\[
s_{\mathcal K}=\frac{\operatorname{sign}(\varepsilon_0)\,\varepsilon_x^\top(\beta_{L,x}-\tilde\beta_{L,\mathcal K})}{\|\xi\|_2\tilde R_{\mathcal K}V_{\tau\tau}^{1/2}}=\frac{\operatorname{sign}(\varepsilon_0)\,\xi^\top V_{xx}^{1/2}(\beta_{L,x}-\tilde\beta_{L,\mathcal K})}{\|\xi\|_2\cdot\|V_{xx}^{1/2}(\beta_{L,x}-\tilde\beta_{L,\mathcal K})\|_2},
\]
which implies that $\{s_{[-k_0]}=1\}=\{\xi/\|\xi\|_2=\nu(\varepsilon_0)\}$, where
\[
\nu(\varepsilon_0)=\frac{\operatorname{sign}(\varepsilon_0)\,V_{xx}^{1/2}(\beta_{L,x}-\tilde\beta_{L,[-k_0]})}{\|V_{xx}^{1/2}(\beta_{L,x}-\tilde\beta_{L,[-k_0]})\|_2}.
\]
It follows that
\[
\mathcal E_3\cap\{s_{[-k_0]}=1\}\supseteq\big\{\varepsilon_0=t,\ \xi/\|\xi\|_2=\nu(t),\ a^{1/2}>\|\xi\|_2>\psi(t)\big\}.
\]
Therefore, $\mathcal E_3$ is nonempty. We complete the proof.

D.4 Proof of Corollary 1

Let $\mathcal A^\infty_{\mathrm{rep}}(\alpha_t)=\{\|\sigma(V_{xx})^{-1}\varepsilon_x\|_\infty\le z_{1-\alpha_t/2}\}$ denote the asymptotic limit of $\mathcal A_{\mathrm{rep}}(\alpha_t)$, and let $\bar R^2_x=1-R^2_x$. By (20) in Lemma S19, we have
\[
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\mathrm{rep}}(\alpha_t))-\alpha\le P(\mathcal E_2\mid\mathcal A^\infty_{\mathrm{rep}}(\alpha_t)),
\]
where $\mathcal E_2=\{|\varepsilon_0|<z_{1-\alpha/2},\ \|V_{xx}^{-1/2}\varepsilon_x\|_2\ge\{\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta)\}^{1/2}\}$ and $\varepsilon_0$ is a standard Gaussian random variable independent of $\varepsilon_x$. Let $\xi=D(V_{xx})^{-1/2}\sigma(V_{xx})^{-1}\varepsilon_x$. We have $\|\xi\|_2=\|V_{xx}^{-1/2}\varepsilon_x\|_2$ and
\[
\mathcal A^\infty_{\mathrm{rep}}(\alpha_t)=\{\|\sigma(V_{xx})^{-1}\varepsilon_x\|_\infty\le z_{1-\alpha_t/2}\}=\{\|D(V_{xx})^{1/2}\xi\|_\infty\le z_{1-\alpha_t/2}\}.
\]
Since $\|\xi\|_2\le\|D(V_{xx})^{-1/2}\|_{\infty,2}\|D(V_{xx})^{1/2}\xi\|_\infty$, if $z_{1-\alpha_t/2}\le\|D(V_{xx})^{-1/2}\|_{\infty,2}^{-1}\{\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta)\}^{1/2}$, then $\|D(V_{xx})^{1/2}\xi\|_\infty\le z_{1-\alpha_t/2}$ implies $\|\xi\|_2\le\{\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta)\}^{1/2}$. Therefore $\mathcal A^\infty_{\mathrm{rep}}(\alpha_t)\subseteq\mathcal A^\infty_{\mathrm{rem}}(\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta))$. Since $P\big(\mathcal A^\infty_{\mathrm{rem}}(\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta))\cap\mathcal E_2\big)=0$,
\[
P\big(\mathcal A^\infty_{\mathrm{rep}}(\alpha_t)\cap\mathcal E_2\big)\le P\big(\mathcal A^\infty_{\mathrm{rem}}(\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta))\cap\mathcal E_2\big)=0.
\]
This proves the desired result.

D.5 Proof of Theorem 3

It follows from Theorem S3 (i) with $H=1$.

D.6 Proof of Theorem 5

It follows from Theorem S2 (ii) with $H=1$.

D.7 Proof of Theorem 6

It follows from Theorem S3 (ii) with $H=1$.
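The mechanism behind the results of this section can be illustrated by simulation: under the null the subset statistics are jointly Gaussian with unit variances, and reporting the most significant of several subset p-values rejects more often than the nominal level. The sketch below is a hedged stand-in, not the paper's construction — the four equicorrelated statistics with correlation 0.6 merely mimic the imperfectly correlated $N_{L,\mathcal K}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Four unit-variance, equicorrelated Gaussian statistics: illustrative
# stand-ins for the subset statistics N_{L,K} under the sharp null
m, rho, nsim = 4, 0.6, 200_000
cov = np.full((m, m), rho) + (1 - rho) * np.eye(m)
N = rng.multivariate_normal(np.zeros(m), cov, size=nsim)

z = 1.96  # approximately z_{0.975}
single = np.mean(np.abs(N[:, 0]) >= z)        # one pre-specified test: level ~ 0.05
hacked = np.mean(np.abs(N).max(axis=1) >= z)  # report the most significant subset

print(round(single, 3), round(hacked, 3))
```

The hacked rejection rate exceeds the single-test rate whenever the statistics are not perfectly correlated, consistent with the strict inequality in Proposition 1 when $R^2_x>0$.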
E Proofs of theoretical results under SRE

E.1 Proof of Theorem S1

Definition S2. For two symmetric random vectors $A$ and $B$ in $\mathbb R^m$, we say $A$ is more peaked than $B$, denoted by $A\succeq B$, if $P(A\in\mathcal C)\ge P(B\in\mathcal C)$ for all symmetric convex sets $\mathcal C$ in $\mathbb R^m$.

Lemma S17 is the Gaussian correlation inequality (Royen 2014).

Lemma S17. Let $P_N$ be a probability measure on $\mathbb R^m$, $m>1$, given by a Gaussian random variable with mean 0 and a non-singular covariance matrix. We have
\[
P_N(\mathcal C_1\cap\mathcal C_2)\ge P_N(\mathcal C_1)P_N(\mathcal C_2)
\]
for all symmetric convex sets $\mathcal C_1,\mathcal C_2\subseteq\mathbb R^m$.

Define $\tilde\beta_{h,L,\mathcal K}\in\mathbb R^K$ such that $(\tilde\beta_{h,L,\mathcal K})_{\mathcal K}=\beta_{h,L,\mathcal K}$ and all other entries are 0.

Lemma S18. Consider SRE and assume Condition S1 holds. (i) Under $H_{0n}$, we have
\[
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rep}}(\alpha_t))\le P_\infty(p^{\mathrm h}_L\le\alpha),\qquad
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rem}}(a))\le P_\infty(p^{\mathrm h}_L\le\alpha);
\]
and (ii) under $H_{0\omega}$, we have $P_\infty(p^{\mathrm h}_{fe}\le\alpha\mid\mathcal A_{\text{fe-rep}}(\alpha_t))\le P_\infty(p^{\mathrm h}_{fe}\le\alpha)$.

Proof of Lemma S18. By Lemma S16, we see that
\[
P_\infty(p^{\mathrm h}_L>\alpha)=P\Big(\bigcap_{\mathcal K\subseteq[K]}\Big\{-z_{1-\alpha/2}<\frac{\sum_{h=1}^H\pi_h^{1/2}(\varepsilon_{h,\tau}-\beta_{h,L,\mathcal K}^\top\varepsilon_{h,\mathcal K})}{n^{1/2}\tilde{se}_{L,\mathcal K}}<z_{1-\alpha/2}\Big\}\Big)
=P\big((\varepsilon_{h,\tau},\varepsilon_{h,x}^\top)_{h\in[H]}\in\cap_{\mathcal K\subseteq[K]}\mathcal C_{\mathcal K}\big),
\]
where
\[
\mathcal C_{\mathcal K}=\big\{x\in\mathbb R^{H(K+1)}\ \big|\ -z_{1-\alpha/2}<(n^{1/2}\tilde{se}_{L,\mathcal K})^{-1}(\pi_h^{1/2},-\pi_h^{1/2}\tilde\beta_{h,L,\mathcal K}^\top)_{h\in[H]}\,x<z_{1-\alpha/2}\big\}.
\]
Since the $\mathcal C_{\mathcal K}$, $\mathcal K\subseteq[K]$, are symmetric convex sets, so is $\cap_{\mathcal K\subseteq[K]}\mathcal C_{\mathcal K}$. On the other hand, we have, for $\star\in\{\text{ss-rep},\text{ss-rem}\}$,
\[
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_\star)=P\big((\varepsilon_{h,\tau},\varepsilon_{h,x}^\top)_{h\in[H]}\in\cap_{\mathcal K\subseteq[K]}\mathcal C_{\mathcal K}\ \big|\ (\varepsilon_{h,\tau},\varepsilon_{h,x}^\top)_{h\in[H]}\in\mathcal C_\star\big),
\]
where
\[
\mathcal C_{\text{ss-rep}}=\big\{x=(x_h)_{h\in[H]}\in\mathbb R^{H(K+1)}\ \big|\ x_h\in\mathbb R^{K+1},\ \|\sigma(V_{h,xx})^{-1}x_{h,2:(K+1)}\|_\infty\le z_{1-\alpha_t/2},\ h\in[H]\big\};
\]
\[
\mathcal C_{\text{ss-rem}}=\big\{x=(x_h)_{h\in[H]}\in\mathbb R^{H(K+1)}\ \big|\ x_h\in\mathbb R^{K+1},\ x_{h,2:(K+1)}^\top V_{h,xx}^{-1}x_{h,2:(K+1)}\le a,\ h\in[H]\big\}.
\]
$\mathcal C_{\text{ss-rem}}$ and $\mathcal C_{\text{ss-rep}}$ are both symmetric convex sets. It follows that, by Lemma S17, $P_\infty(p^{\mathrm h}_L>\alpha\mid\mathcal A_{\text{ss-rep}}(\alpha_t))=P\big((\varepsilon_{h,\tau},\varepsilon^\top$
${}_{h,x})_{h\in[H]}\in\cap_{\mathcal K\subseteq[K]}\mathcal C_{\mathcal K}\cap\mathcal C_{\text{ss-rep}}\big)\big/P\big((\varepsilon_{h,\tau},\varepsilon_{h,x}^\top)_{h\in[H]}\in\mathcal C_{\text{ss-rep}}\big)\ge P\big((\varepsilon_{h,\tau},\varepsilon_{h,x}^\top)_{h\in[H]}\in\cap_{\mathcal K\subseteq[K]}\mathcal C_{\mathcal K}\big)=P_\infty(p^{\mathrm h}_L>\alpha)$. As a consequence, we have $P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rep}}(\alpha_t))\le P_\infty(p^{\mathrm h}_L\le\alpha)$. Similarly, we can prove that
\[
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rem}}(a))\le P_\infty(p^{\mathrm h}_L\le\alpha);\qquad
P_\infty(p^{\mathrm h}_{fe}\le\alpha\mid\mathcal A_{\text{fe-rep}}(\alpha_t))\le P_\infty(p^{\mathrm h}_{fe}\le\alpha).
\]
Proof of Theorem S1. By Lemma S18, we have, under $H_{0n}$,
\[
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rep}}(\alpha_t))\le P_\infty(p^{\mathrm h}_L\le\alpha);\qquad
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rem}}(a))\le P_\infty(p^{\mathrm h}_L\le\alpha),
\]
and, under $H_{0\omega}$, $P_\infty(p^{\mathrm h}_{fe}\le\alpha\mid\mathcal A_{\text{fe-rep}}(\alpha_t))\le P_\infty(p^{\mathrm h}_{fe}\le\alpha)$. The remaining results follow from Theorem S3, by letting $a\to0$ and $\alpha_t\to1$.

E.2 Proof of Theorem S2

(i) Let $V_{\tau\tau}=\sum_{h=1}^H\pi_hV_{h,\tau\tau}$, $R^2_{h,\mathcal K}=V_{h,\tau\mathcal K}V_{h,\mathcal K\mathcal K}^{-1}V_{h,\mathcal K\tau}/V_{h,\tau\tau}$, $R^2_{\mathcal K}=\sum_{h=1}^H\pi_hR^2_{h,\mathcal K}V_{h,\tau\tau}/V_{\tau\tau}$, $\tilde R^2_{\mathcal K}=R^2_x-R^2_{\mathcal K}$ and $\bar R^2_x=1-R^2_x$. To prove Theorem S2 (i), we first prove a lemma.

Lemma S19. Consider SRE and assume Condition S1 holds. Under $H_{0n}$, if $a\le H^{-1}\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta)$, then we have $P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rem}}(a))\le\alpha$.

Proof of Lemma S19. Let $\beta_{h,\mathcal K}(z)=S_{h,\mathcal K\mathcal K}^{-1}S_{h,\mathcal K Y(z)}$, $z=0,1$, and let $\tau_{\mathcal K,i}=Y_i(1)-Y_i(0)-x_i^\top\{\beta_{h(i),\mathcal K}(1)-\beta_{h(i),\mathcal K}(0)\}$. Note that $N_{L,\mathcal K}=(n^{1/2}\tilde{se}_{L,\mathcal K})^{-1}\sum_{h=1}^H\pi_h^{1/2}(\varepsilon_{h,\tau}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K})$ and, by definition,
\[
n\tilde{se}^2_{L,\mathcal K}=\sum_{h=1}^H\pi_h\big\{\operatorname{var}(\varepsilon_{h,\tau}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K})+S_{h,\tau_{\mathcal K}\tau_{\mathcal K}}\big\}
\ge\sum_{h=1}^H\pi_h\operatorname{var}(\varepsilon_{h,\tau}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K})
=\sum_{h=1}^H\pi_h(1-R^2_{h,\mathcal K})V_{h,\tau\tau}=(\bar R^2_x+\tilde R^2_{\mathcal K})V_{\tau\tau},
\]
where the inequality holds with equality when $Y_i(1)=Y_i(0)$ for $i=1,\ldots,n$. Define
\[
\tilde N_{L,\mathcal K}=N_{L,\mathcal K}\,\frac{n^{1/2}\tilde{se}_{L,\mathcal K}}{(\bar R^2_x+\tilde R^2_{\mathcal K})^{1/2}V_{\tau\tau}^{1/2}},\qquad\mathcal K\subseteq[K].
\]
The $\tilde N_{L,\mathcal K}$ are standard Gaussian random variables and $|\tilde N_{L,\mathcal K}|\ge|N_{L,\mathcal K}|$. The remaining proof contains two steps.

Step 1: We prove that
\[
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rem}}(a))-\alpha\le P\big(\max_{\mathcal K\subsetneq[K]}|\tilde N_{L,\mathcal K}|\ge z_{1-\alpha/2},\ |\tilde N_{L,[K]}|<z_{1-\alpha/2}\ \big|\ \mathcal A^\infty_{\text{ss-rem}}(a)\big).
\]
Since $\varepsilon_{h,\tau}-\varepsilon_{h,x}^\top\beta_{h,L,x}$ is independent of $\varepsilon_{h,x}$, we have
\[
\operatorname{var}(\varepsilon_{h,\tau}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K})-\operatorname{var}(\varepsilon_{h,\tau}-\varepsilon_{h,x}^\top\beta_{h,L,x})=\operatorname{var}(\varepsilon_{h,x}^\top\beta_{h,L,x}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K}).
\]
By the definitions of $R^2_{h,\mathcal K}$ and $\beta_{h,L,\mathcal K}$, we have $\operatorname{var}(\varepsilon_{h,\tau}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K})=V_{h,\tau\tau}(1-R^2_{h,\mathcal K})$ and $\operatorname{var}(\varepsilon_{h,\tau}-\varepsilon_{h,x}^\top\beta_{h,L,x})=V_{h,\tau\tau}(1-R^2_{h,x})$, which yields
\[
\operatorname{var}(\varepsilon_{h,x}^\top\beta_{h,L,x}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K})=V_{h,\tau\tau}(R^2_{h,x}-R^2_{h,\mathcal K}).
\]
Let $\tilde R^2_{h,\mathcal K}=R^2_{h,x}-R^2_{h,\mathcal K}$ and $\bar R^2_{h,x}=1-R^2_{h,x}$. Let $\mathcal A^\infty_{\text{ss-rem}}(a)=\cap_{h=1}^H\{\varepsilon_{h,x}^\top V_{h,xx}^{-1}\varepsilon_{h,x}\le a\}$.
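Lemma S17, applied above to the symmetric convex sets $\cap_{\mathcal K}\mathcal C_{\mathcal K}$ and $\mathcal C_\star$, can also be illustrated by simulation. A minimal sketch with an illustrative bivariate Gaussian and symmetric convex slabs (none of these numbers come from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Bivariate centered Gaussian; C1, C2 are the symmetric convex slabs
# |x| <= 1 and |y| <= 1 (illustrative choices)
nsim, rho = 400_000, 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=nsim)

in1 = np.abs(X[:, 0]) <= 1.0
in2 = np.abs(X[:, 1]) <= 1.0
joint = np.mean(in1 & in2)          # estimate of P(C1 and C2)
prod = np.mean(in1) * np.mean(in2)  # estimate of P(C1) * P(C2)

print(round(joint, 3), round(prod, 3))
```

Here the joint probability strictly exceeds the product because the coordinates are positively correlated; Royen's inequality guarantees the weak inequality for every centered Gaussian measure and every pair of symmetric convex sets.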
By Lemma S16 and $|N_{L,\mathcal K}|\le|\tilde N_{L,\mathcal K}|$, we have
\[
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rem}}(a))=P\big(\max_{\mathcal K\subseteq[K]}|N_{L,\mathcal K}|\ge z_{1-\alpha/2}\ \big|\ \mathcal A^\infty_{\text{ss-rem}}(a)\big)\le P(\mathcal E\mid\mathcal A^\infty_{\text{ss-rem}}(a)),
\]
where $\mathcal E=\{\max_{\mathcal K\subseteq[K]}|\tilde N_{L,\mathcal K}|\ge z_{1-\alpha/2}\}$. We see that
\[
\mathcal E=\{|\tilde N_{L,[K]}|\ge z_{1-\alpha/2}\}\cup\{\max_{\mathcal K\subsetneq[K]}|\tilde N_{L,\mathcal K}|\ge z_{1-\alpha/2},\ |\tilde N_{L,[K]}|<z_{1-\alpha/2}\}=:\mathcal E_1\cup\mathcal E_2.
\]
Therefore, $P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rem}}(a))\le P(\mathcal E_1\mid\mathcal A^\infty_{\text{ss-rem}}(a))+P(\mathcal E_2\mid\mathcal A^\infty_{\text{ss-rem}}(a))$. Since $\tilde N_{L,[K]}$ is independent of $\varepsilon_{h,x}$, $h\in[H]$, we have $P(\mathcal E_1\mid\mathcal A^\infty_{\text{ss-rem}}(a))=P(\mathcal E_1)=\alpha$, which yields
\[
P_\infty(p^{\mathrm h}_L\le\alpha\mid\mathcal A_{\text{ss-rem}}(a))-\alpha\le P(\mathcal E_2\mid\mathcal A^\infty_{\text{ss-rem}}(a)).
\]
Step 2: We prove that if $a\le H^{-1}\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta)$, then
\[
P\big(\max_{\mathcal K\subsetneq[K]}|\tilde N_{L,\mathcal K}|\ge z_{1-\alpha/2},\ |\tilde N_{L,[K]}|<z_{1-\alpha/2}\ \big|\ \mathcal A^\infty_{\text{ss-rem}}(a)\big)=0.
\]
Let $\xi_h=V_{h,xx}^{-1/2}\varepsilon_{h,x}$ and $\varepsilon_0=\tilde N_{L,[K]}$; both are standard Gaussian. We express $\mathcal A^\infty_{\text{ss-rem}}(a)=\{\max_{h\in[H]}\|\xi_h\|_2^2\le a\}$. In the remainder of the proof, we will show that
\[
\mathcal E_2\subset\Big\{|\varepsilon_0|<z_{1-\alpha/2},\ \max_{h\in[H]}\|\xi_h\|_2^2\ge\frac{\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta)}{H}\Big\}.\tag{20}
\]
By (20), it follows that if $a\le\bar a_{\mathrm{rem}}(\alpha,R^2_x,\Delta)/H$, then $P(\mathcal E_2\mid\mathcal A^\infty_{\text{ss-rem}}(a))=0$.

Define $\tilde\beta_{h,L,\mathcal K}\in\mathbb R^K$ such that $(\tilde\beta_{h,L,\mathcal K})_{\mathcal K}=\beta_{h,L,\mathcal K}$ and all other entries are 0. Therefore,
\[
\varepsilon_{h,x}^\top\beta_{h,L,x}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K}=\varepsilon_{h,x}^\top(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})=\xi_h^\top V_{h,xx}^{1/2}(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K}),
\]
with $\|V_{h,xx}^{1/2}(\beta_{h,L,x}-\tilde\beta_{h,L,\mathcal K})\|_2^2=\operatorname{var}(\varepsilon_{h,x}^\top\beta_{h,L,x}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K})=V_{h,\tau\tau}\tilde R^2_{h,\mathcal K}$. Therefore, $|\varepsilon_{h,x}^\top\beta_{h,L,x}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K}|\le\|\xi_h\|_2\tilde R_{h,\mathcal K}V_{h,\tau\tau}^{1/2}$ and
\[
|\tilde N_{L,\mathcal K}|=\frac{\big|\bar R_xV_{\tau\tau}^{1/2}\varepsilon_0+\sum_{h=1}^H\pi_h^{1/2}(\varepsilon_{h,x}^\top\beta_{h,L,x}-\varepsilon_{h,\mathcal K}^\top\beta_{h,L,\mathcal K})\big|}{(\bar R^2_x+\tilde R^2_{\mathcal K})^{1/2}V_{\tau\tau}^{1/2}}
\le\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\sum_{h=1}^H\pi_h^{1/2}\|\xi_h\|_2\tilde R_{h,\mathcal K}V_{h,\tau\tau}^{1/2}}{(\bar R^2_x+\tilde R^2_{\mathcal K})^{1/2}V_{\tau\tau}^{1/2}}.
\]
By the Cauchy–Schwarz inequality,
\[
\sum_{h=1}^H\pi_h^{1/2}\tilde R_{h,\mathcal K}V_{h,\tau\tau}^{1/2}\le\Big(\sum_{h=1}^H\pi_h\tilde R^2_{h,\mathcal K}V_{h,\tau\tau}\Big)^{1/2}\sqrt H=V_{\tau\tau}^{1/2}\tilde R_{\mathcal K}\sqrt H.
\]
Therefore, we have
\[
|\tilde N_{L,\mathcal K}|\le\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\max_{h\in[H]}\|\xi_h\|_2\sum_{h=1}^H\pi_h^{1/2}\tilde R_{h,\mathcal K}V_{h,\tau\tau}^{1/2}}{(V_{\tau\tau}\bar R^2_x+V_{\tau\tau}\tilde R^2_{\mathcal K})^{1/2}}
\le\frac{\bar R_xV_{\tau\tau}^{1/2}|\varepsilon_0|+\max_{h\in[H]}\|\xi_h\|_2V_{\tau\tau}^{1/2}\tilde R_{\mathcal K}\sqrt H}{(V_{\tau\tau}\bar R^2_x+V_{\tau\tau}\tilde R^2_{\mathcal K})^{1/2}}
\]
$=\bar R_x|\varepsilon_0|+$