(1/2)Ω) ∫_{O(n)} 0F1(B^{-1}M'A^{-2}MB^{-1}, XX'), which is known to possess the desired form. (2) follows from Table 3.

4.2. Behrens–Fisher problem

In the two-sample discriminant analysis problem, we analyse the power when an observation X2 may belong to one of two matrix normal populations π1 ∼ N_{n,p}(M1; A, B) or π0 ∼ N_{n,p}(M...
https://arxiv.org/abs/2505.00470v1
and discussion during the preparation of this paper. His classic book and many fruitful ideas once led me into the field of multivariate statistics. Although the generalised product moment distribution of some matrix variates T1-T3 is derived, the classification of matrix populations is still far from complete. Here, t...
1) variables in the p×p real symmetric matrix S = X'X = (s_ij), i ≤ j, is

|Σ_1|^{1/2} etr(−QS) |S|^{(n−p−1)/2} / (2^{np/2} Γ_p((n−1)/2)) · ∏_{i,j=1}^p 0F0(I − (1/2) q_ij^{-1} A_ij, q_ij s_ij),

where Q = (q_ij) is an arbitrary p×p matrix with positive entries, and this holds for all p×p real symmetric positive definite matrices S; elsewhere zero.

2. The moment generating fu...
is the central function in Theorem 4.2 multiplied by etr(−(1/2)Ω) ∏_{i=1}^p etr((1/4)Ψ_i W^{-1}), where W is defined in Theorem 4.2.

3. The joint density of the latent roots l_1, l_2, …, l_p of S is the central distribution in Theorem 4.2 multiplied by etr(−(1/2)Ω) ∏_{i=1}^p 0F1(n/2; (1/4)Ψ_i, L), where L = diag(l_i), l_1 > l_2 > ··· > l_p; elsewhere zero.

Theorem 8. Suppose...
Soc. B, 39(2):254–261, 1977.
R. L. Dykstra. Establishing the positive definiteness of the sample covariance matrix. Ann. Math. Stat., 41(6):2153–2154, 1970.
K.-T. Fang and Y.-T. Zhang. Generalized Multivariate Analysis. Science Press, 1990.
A. K. Gupta and D. G. Kabe. A multiple integral involving zonal polynomi...
EW D-optimal Designs for Experiments with Mixed Factors

Siting Lin(1), Yifei Huang(2), and Jie Yang(1)
(1) University of Illinois at Chicago and (2) Astellas Pharma, Inc.

May 23, 2025

Abstract

We characterize EW D-optimal designs as robust designs against unknown parameter values for experiments under a general parametric model w...
https://arxiv.org/abs/2505.00629v2
adopt the multinomial logistic models (Glonek and McCullagh, 1995; Zocchi and Atkinson, 1999; Bu et al., 2020) for analyzing the paper feeder experiment, which include baseline-category (also known as multiclass logistic), cumulative, adjacent-categories, and continuation-ratio logit models. For experiments with discr...
for generalized linear models, and Bu et al. (2020) incorporated the lift-one and exchange algorithms with the EW criterion. In this paper, we adopt the EW D-criterion and adapt the ForLion algorithm for finding robust designs with mixed factors. The original EW D-optimal designs were proposed for experiments with discrete...
al. (2024). Note that the cleaning method was simplified into a binary factor (CM2 or CM3) in Lukemire et al. (2022) and Huang et al. (2024). □

Example 2. In the paper feeder experiment described by Joseph and Wu (2004), besides one continuous factor, there are eight discrete or qualitative control factors, including ...
a set of bootstrapped datasets, and their corresponding parameter vectors {θ̂_1, …, θ̂_B} obtained by fitting the parametric model on each of the bootstrapped datasets. In other words, we may replace the prior distribution Q(·) in (2.2) with the empirical distribution of {θ̂_1, …, θ̂_B}. That is, we may look for ξ...
and Leonov (2014), to characterize the sample-based and integral-based EW D-optimal designs, we extend the collection Ξ of designs (each design consists of only a finite number of design points) to Ξ(X), which consists of all probability measures on X. In other words, a design ξ ∈ Ξ(X) is also a probability measure on X,...
to find w* = (w*_1, …, w*_m)^T ∈ S = {(w_1, …, w_m)^T ∈ R^m | w_i ≥ 0, i = 1, …, m; Σ_{i=1}^m w_i = 1}, which maximizes f_EW(w) = |Σ_{i=1}^m w_i E{F(x_i, Θ)}| or f_SEW(w) = |Σ_{i=1}^m w_i Ê{F(x_i, Θ)}|. The lift-one algorithm (see Algorithm 3 in the Supplementary Material of Huang et al. (2025)) can be used for finding EW D-optimal designs, with f...
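The weight optimization above can be sketched numerically. The following is a simplified stand-in for the lift-one algorithm of Huang et al. (2025): instead of the paper's closed-form one-dimensional updates, it grid-searches each coordinate's lift-one step; `F_list` plays the role of the matrices E{F(x_i, Θ)} and all names are illustrative.

```python
import numpy as np

def lift_one_grid(F_list, iters=20, grid=None):
    """Maximize f_EW(w) = |sum_i w_i F_i| over the probability simplex by
    cycling lift-one steps: move one weight w_i to z, rescale the others
    proportionally, and keep the best z found on a grid (a simplification
    of the closed-form updates in the paper)."""
    grid = np.linspace(0.0, 0.99, 100) if grid is None else grid
    m = len(F_list)
    w = np.full(m, 1.0 / m)

    def logdet(weights):
        M = sum(wi * Fi for wi, Fi in zip(weights, F_list))
        sign, ld = np.linalg.slogdet(M)
        return ld if sign > 0 else -np.inf

    for _ in range(iters):
        for i in range(m):
            best_w, best_v = w, logdet(w)
            for z in grid:
                w_new = w * (1.0 - z) / (1.0 - w[i])  # rescale the others
                w_new[i] = z
                v = logdet(w_new)
                if v > best_v:
                    best_w, best_v = w_new, v
            w = best_w
    return w
```

For a two-point linear-model example with F_i = h(x_i)h(x_i)^T, h(x) = (1, x)^T, x ∈ {−1, 1}, this sketch keeps the balanced D-optimal weights (1/2, 1/2).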
(2))^T, ξ_t) for each x_(2) ∈ D ⊆ ∏_{j=k+1}^d I_j, and then choose x*_(2) that yields the largest d(((x*_(1))^T, x_(2)^T)^T, ξ_t). Note that x*_(1) depends on x_(2). Then x* = ((x*_(1))^T, (x*_(2))^T)^T.

Step 6: If d(x*, ξ_t) ≤ p, proceed to Step 7. Otherwise, obtain ξ_{t+1} by adding (x*, 0) to ξ_t and increasing m_t by 1, and return to Step 2.

Step 7: ...
more than n·w_i, and then allocate any remaining units to the design points with n·w_i > n_i, in the order of increasing F(ξ) the most (see Algorithm 2 in Huang et al. (2025)).

Step 4: Deleting step: Discard any x_i for which n_i = 0.

Step 5: Output: Exact design {(x_i, n_i), i = 1, …, m} with n_i > 0 and Σ_{i=1}^m n_i = n.

3.3 Calculatin...
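The allocation steps above can be sketched with a simple rounding rule. This is not the exact Algorithm 2 of Huang et al. (2025), just a standard largest-fractional-part allocation under the same constraints:

```python
import numpy as np

def round_design(w, n):
    """Round approximate weights w (summing to 1) to integer allocations
    n_i with sum n_i = n: give each point floor(n*w_i) units first, then
    assign leftover units to the largest fractional parts."""
    w = np.asarray(w, dtype=float)
    ni = np.floor(n * w).astype(int)
    leftover = n - ni.sum()
    for i in np.argsort(-(n * w - ni))[:leftover]:
        ni[i] += 1
    return ni
```

The deleting step then simply drops points with n_i = 0.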
s(η_i) = Var(Y_i) (see Table 5 in the Supplementary Material of Huang et al. (2025) for examples of ν). To calculate F_x = E{F(x, Θ)} or Ê{F(x, Θ)} under GLMs, it is enough to replace ν{h(x)^T θ} with E[ν{h(x)^T Θ}] = ∫_Θ ν{h(x)^T θ} Q(dθ) or Ê[ν{h(x)^T Θ}] = B^{-1} Σ_{j=1}^B ν{h(x)^T θ̂_j}. That is, E{F(x, Θ)} = E[ν{h(x)^T Θ}] · h(x)h(x)^T, and Ê{F(x...
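The sample-based replacement described above is direct to implement. A minimal sketch, with a logistic-model ν supplied as an example (function names are illustrative):

```python
import numpy as np

def ehat_info(h_x, theta_samples, nu):
    """Sample-based expected information at design point x under a GLM:
    Ehat{F(x, Theta)} = B^{-1} sum_j nu(h(x)^T theta_j) * h(x) h(x)^T."""
    h = np.asarray(h_x, dtype=float)
    etas = np.asarray(theta_samples) @ h
    return np.mean(nu(etas)) * np.outer(h, h)

def nu_logistic(eta):
    """nu for the logistic model: the Bernoulli response variance."""
    p = 1.0 / (1.0 + np.exp(-eta))
    return p * (1.0 - p)
```

Averaging ν over the B fitted parameter vectors, rather than plugging in a single estimate, is exactly what distinguishes the EW information matrix from the locally optimal one.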
|X_ξ^T W_ξ X_ξ| > 0, where W_ξ is either E{W_ξ(Θ)} for integral-based EW D-optimality or Ê{W_ξ(Θ)} for sample-based EW D-optimality (see Example 4). Then ξ is EW D-optimal if and only if max_{x∈X} E[ν{h(x)^T Θ}] · h(x)^T [X_ξ^T W_ξ X_ξ]^{-1} h(x) ≤ p for integral-based EW D-optimality, or max_{x∈X} Ê[ν{h(x)^T Θ}] · h(x)^T [X_ξ^T W_ξ X_ξ]^{-1} h(x) ≤ p for sample-based EW D-...
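An equivalence-theorem condition of this kind is easy to verify numerically. A sketch under simplifying assumptions (the averaged ν is passed in as a plain function `nu_bar`, standing for E[ν{h(x)^T Θ}]; names are illustrative):

```python
import numpy as np

def max_directional(design, weights, candidates, h, nu_bar):
    """Return max_x nu_bar(x) * h(x)^T M^{-1} h(x) over the candidate set,
    where M = sum_i w_i nu_bar(x_i) h(x_i) h(x_i)^T.  EW D-optimality of
    the design corresponds to this maximum being <= p (dim of h)."""
    M = sum(w * nu_bar(x) * np.outer(h(x), h(x))
            for w, x in zip(weights, design))
    Minv = np.linalg.inv(M)
    return max(nu_bar(x) * h(x) @ Minv @ h(x) for x in candidates)
```

For the ordinary linear model (ν ≡ 1) with h(x) = (1, x)^T on X = [−1, 1], equal weights at ±1 attain the bound max d(x, ξ) = p = 2, confirming D-optimality of that design.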
and S.4 of the Supplementary Material. In Table 1, we list the number of design points m, the computational time in seconds for obtaining the designs, the objective function values, and the relative efficiencies compared with the recommended EW ForLion design, that is, (|Ê{F(ξ, Θ)}| / |Ê{F(ξ_ForLion, Θ)}|)^{1/p} with p = ...
involves responses of five categories, five continuous factors, and one discrete factor (see Table S1 in the Supplementary Material of Huang et al. (2024) or Table 6 in Lukemire et al. (2022)). Lukemire et al. (2022) employed a PSO algorithm and derived a locally D-optimal approximate design with 14 support points unde...
[Flattened table rows: design points 8–13 with their factor levels, approximate weights, and exact allocations; the column structure is not recoverable from the extraction.]
Supplementary Materials

The Supplementary Material includes several sections: S1 provides analytic solutions for the lift-one algorithm with MLMs; S2 discusses assumptions on the design region and parametric models; S3 provides proofs of the main theorems; S4 displays model selection and design comparison for the paper feeder experim...
Annals of Statistics 2, 849–879.
Lehmann, E. L. and G. Casella (1998). Theory of Point Estimation (2nd ed.). Springer, New York.
Lukemire, J., A. Mandal, and W. K. Wong (2019). d-QPSO: A quantum-behaved particle swarm technique for finding D-optimal designs with discrete and continuous factors and a binary response. Tec...
S.9 in the Supplementary Material of Bu et al. (2020), for multinomial logistic models, given a design ξ = {(x_i, w_i) | i = 1, …, m} ∈ Ξ, for i ∈ {1, …, m} and 0 < z < 1, we have

f_i(z) = (1 − z)^{p−J+1} Σ_{j=0}^{J−1} b_j z^j (1 − z)^{J−1−j},
f'_i(z) = (1 − z)^{p−J} Σ_{j=1}^{J−1} b_j (j − pz) z^{j−1} (1 − z)^{J−1−j} − p b_0 (1 − z)^{p−1},

where b_0 = f_i(0), (b_{J−1}, b_{J−2}, ...
(2014)) and obtain the following results:

Lemma 4. When J = 5, equation (S1.1) is equivalent to A_4 z^4 + A_3 z^3 + A_2 z^2 + A_1 z + A_0 = 0 with A_0 = b_0 p − b_1, A_1 = −4b_0 p + b_1(3 + p) − 2b_2, A_2 = 6b_0 p − 3b_1(1 + p) + b_2(4 + p) − 3b_3, A_3 = −4b_0 p + b_1(1 + 3p) − 2b_2(1 + p) + b_3(3 + p) − 4b_4, and A_4 = p(b_0 − b_1 + b_2 − b_3 + b_4). The cases with A_4 = 0 have been listed in Lemmas 3 & 2. If A_4 ≠ 0,...
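Lemma 4 reduces the lift-one step for J = 5 to a quartic, which can be solved numerically. A sketch with the coefficients copied from the lemma and roots found via `numpy.roots` (function name and test values are illustrative):

```python
import numpy as np

def lemma4_roots(b, p):
    """Solve A4 z^4 + A3 z^3 + A2 z^2 + A1 z + A0 = 0 with the
    coefficients of Lemma 4 (J = 5); return real roots in (0, 1)."""
    b0, b1, b2, b3, b4 = b
    coefs = [
        p * (b0 - b1 + b2 - b3 + b4),                                 # A4
        -4*b0*p + b1*(1 + 3*p) - 2*b2*(1 + p) + b3*(3 + p) - 4*b4,    # A3
        6*b0*p - 3*b1*(1 + p) + b2*(4 + p) - 3*b3,                    # A2
        -4*b0*p + b1*(3 + p) - 2*b2,                                  # A1
        b0*p - b1,                                                    # A0
    ]
    roots = np.roots(coefs)
    return [float(r.real) for r in roots
            if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0]
```

Any returned root can be checked by substituting back into the quartic; the candidate maximizing f_i(z) among the real roots in (0, 1) and the boundary gives the lift-one update.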
which further implies

F_SEW(λξ_1 + (1 − λ)ξ_2) = λ F_SEW(ξ_1) + (1 − λ) F_SEW(ξ_2),
F_EW(λξ_1 + (1 − λ)ξ_2) = λ F_EW(ξ_1) + (1 − λ) F_EW(ξ_2).

That is, both F_SEW(X) and F_EW(X) are convex. Since X is compact under Assumption (A1), it can be verified that Ξ(X) is also compact under the topology of weak convergence, that is, ξ_n converges weakly to ξ_0...
EW D-optimality, under Assumptions (A1), (A3), and (B3), there must exist an EW D-optimal design with no more than p(p + 1)/2 support points. According to Section 2.4.2 in Fedorov and Leonov (2014), Assumptions (B1), (B2), and (B4) are always satisfied for D-optimality. The remaining parts of Theorem 2 are direct conclusions...
the paper feeder experiment, which was carried out at Fuji-Xerox by Y. Norio and O. Akira (Joseph and Wu, 2004). In this experiment, there are eight discrete control factors, namely feed belt material (x1, Type A or Type B), speed (x2, 288 mm/s, 240 mm/s, or 192 mm/s), drop height (x3, 3 mm, 2 mm, or 1 mm), center roll...
Joseph and Wu (2004), we fit the chosen model, a cumulative logit model with npo and p = 32 parameters. The fitted parameter values are θ̂ = (β̂_{11}, β̂_{12}, …, β̂_{1,16}, β̂_{21}, β̂_{22}, …, β̂_{2,16}, ...
of grid level typically depends on the control accuracy of the factors and the options that the experimenter may have. As illustrated in Table S.2, the experimenter may choose the smallest possible grid level when applicable.

Table S.2: Locally D-optimal designs for the paper feeder experiment (columns: Designs, m, Time (s), |F(ξ, θ)|, Relat...
[Flattened rows of the design comparison table: per-run weights and unit allocations for the Original, Bu-exact, Bu-appro, EW Bu-exact, and EW Bu-appro designs (runs 4–14, continued on the next page); the column structure is not recoverable from the extraction.]
[Flattened rows of the design comparison table: support points and weights (approximate designs) or unit allocations (exact designs) for the Bu-grid2.5, ForLion, EW Bu-grid2.5, and EW ForLion designs and their exact counterparts, runs 8–15; the column structure is not recoverable from the extraction.]
r ∈ {1, 2, 3, 4, 5}, we calculate the relative efficiencies of ξ^(l) with respect to ξ^(r) under Θ^(r), that is, (|Ê{F(ξ^(l), Θ^(r))}| / |Ê{F(ξ^(r), Θ^(r))}|)^{1/p} with p = 10. The relative efficiencies are shown in the following matrix:

        l=1      l=2      l=3      l=4      l=5
r=1   1.00000  0.99179  0.99586  0.99284  0.98470
r=2   0.98258  1.00000  0.9...
of experimental units n = 100 (as an illustration) to obtain an exact design, and present it on the right side of Table S.6, which consists of 17 design points (the 18th design point of the approximate design vanishes due to its low weight).

Table S.5: Factors and parameters for the electrostatic discharge experiment Fa...
in the higher-value range. Indeed, Table S.7 clearly shows that the overall mean and median objective function values for the EW ForLion designs (approximate and exact designs listed in Table S.6) are higher than those for the ForLion and PSO designs.

S6.2 A three-continuous-factor example under a GLM

This example refers to Exa...
GLOBAL ACTIVITY SCORES

Ruilong Yue and Giray Ökten

Abstract. We introduce a new global sensitivity measure called the global activity score. The new measure is obtained from the global active subspace method, similar to the way the activity score measure is obtained from the active subspace method. We present theoretic...
https://arxiv.org/abs/2505.00711v2
E[f(x)] = ∫_{(0,1)^d} f(x) dx, and Var(f(x)) = σ² = ∫_{(0,1)^d} f²(x) dx − E[f(x)]². As a consequence of the orthogonality of the ANOVA decomposition, the variance of f can be written as σ² = Σ_{u⊆D} σ²_u, where σ²_u is the variance of the component function f_u:

σ²_u = ∫_{(0,1)^d} f²_u(x) dx − (∫_{(0,1)^d} f_u(x) dx)² = ∫_{(0,1)^d} f²_u(x) dx.

The ...
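For an additive function the decomposition above has only first-order components, so the identity σ² = Σ_u σ²_u can be checked directly by Monte Carlo. A toy example (the function is chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((200_000, 2))        # uniform draws on (0,1)^2
f = x[:, 0] + 2.0 * x[:, 1]         # additive test function

# Only the first-order ANOVA components are nonzero here, so
# sigma^2 = sigma_{1}^2 + sigma_{2}^2 = Var(x1) + Var(2*x2) = 1/12 + 4/12.
sigma2_mc = f.var()
sigma2_exact = 1.0 / 12.0 + 4.0 / 12.0
```

Interactions would add further σ²_u terms for |u| > 1, but the identity itself is unchanged.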
…, w_d] is the d×d orthogonal matrix of eigenvectors, and Λ_as = diag(λ_1, …, λ_d) with λ_1 ≥ … ≥ λ_d ≥ 0 is the diagonal matrix of eigenvalues in descending order. The matrix C_as can be approximated using the Monte Carlo method as

C_as ≈ Ĉ_as = (1/N) Σ_{j=1}^N (∇_x f(x^(j)))(∇_x f(x^(j)))^T,   (2.8)

where x^(1), …, x^(N) are a ...
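Equation (2.8) and the subsequent eigendecomposition are straightforward to sketch. The interface below is illustrative (sampled gradients are passed in as an array), and the returned quantity is the activity-score-type combination Σ_{j≤m} λ_j w_{ij}²:

```python
import numpy as np

def scores_from_grads(grads, m):
    """Estimate C = (1/N) sum_j g_j g_j^T from an N x d array of sampled
    gradients, eigendecompose C with descending eigenvalues, and return
    the scores alpha_i(m) = sum_{j<=m} lambda_j w_{ij}^2 for each input i."""
    G = np.asarray(grads, dtype=float)
    C = G.T @ G / G.shape[0]
    lam, W = np.linalg.eigh(C)        # ascending order from eigh
    lam, W = lam[::-1], W[:, ::-1]    # flip to descending order
    return (W[:, :m] ** 2) @ lam[:m]
```

For f(x) = 3x_1 the gradient is constant, C has a single nonzero eigenvalue 9, and the score of input 1 is 9 while the other scores vanish.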
Λ_gas = diag(λ_1, …, λ_d) with λ_1 ≥ … ≥ λ_d ≥ 0 is the diagonal matrix of eigenvalues in descending order. Similar to the active subspace method, we seek an integer m < d such that the eigenvalues λ_{m+1}, …, λ_d are sufficiently small, and then approximate f(z) with a lower-dimensional function g(U_1^T z), where U_1...
definition of generalized upper Sobol' indices (see Kucherenko et al. [12]), we have

S̄_i = (1/(2σ²)) ∫ (f(z) − f(v_{i}, z_{−i}))² dF(z) dF_i(v_i) = (1/(2σ²)) ∫_{Ω×Ω} (f(z) − f(v_{i}, z_{−i}))² dF(z) dF(v).

Let Ω′ = (a′, b′)^d, and write Ω × Ω as the union of Ω′ × Ω′ and the complement (Ω′ × Ω′)^c. Since (z_i − v_i)² ≤ (b′ − a′)² for any z_i, ...
to conduct the global sensitivity analysis of four examples. To compute Sobol' indices, we use a Monte Carlo sample size of N = 10,000 and use the algorithm in Sobol' [20] to estimate upper Sobol' indices, and the Correlation 2 algorithm in Owen [16] for lower Sobol' indices. To compute the DGSM (Eqn. (2.3)), we use a ...
do not normalize Sobol’ indices, for we want to see the unscaled differences between lower and upper Sobol’ indices. For the global activity scores plot, we include the results from m= 1 and m= 10. For the activity scores, we only show the results for m= 1, since m= 10 for activity scores corresponds to DGSM. The upper...
for the ranking of inputs 2 and 4. If we increase m to 10, then global activity scores and the upper Sobol' indices follow the same pattern and give the same conclusions. In Fig. 7 we investigate the convergence of upper Sobol' indices and global activity scores as a function of sample size, N, for the case of high nois...
and inputs 6 and 7 have similar importance at rank four.

Fig. 9: Sensitivity indices for the discontinuous function: (a) Sobol' indices, (b) normalized DGSMs, (c) normalized activity scores, (d) normalized global activity scores.

Fig. 10 plots the upper Sobol' indices and global activity scores as a function of sample size,...
activity scores (m = 6), the most important inputs are 3, 9, 4, 14, 13, 6. Figure 13 plots activity scores α_i(m) and global activity scores γ_i(m), m = 1, …, 14, for the six most important variables for each method. The graph illustrates the sensitivity of the scores with respect to the dimension m of the approximatin...
scores are similar to DGSM and activity scores (we report some of these results, namely the one for no-noise, in the paper). These observations, not surprisingly, are consistent with the conclusions of Yue and ¨Okten [22] where the global active subspace method was introduced and used as a tool to construct surrogate m...
small Sobol' sensitivity indices, ACM Transactions on Modeling and Computer Simulation (TOMACS), 23 (2013), pp. 1–17.
[17] H. Saadat et al., Power System Analysis, vol. 2, McGraw-Hill, 1999.
[18] I. Sobol', On derivative-based global sensitivity criteria, Mathematical Models and Computer Simulations, 3 (2011), pp. 4...
Asymmetric Penalties Underlie Proper Loss Functions in Probabilistic Forecasting

Erez Buchweitz, João Vitor Romano, Ryan J. Tibshirani

Abstract

Accurately forecasting the probability distribution of phenomena of interest is a classic and ever more widespread goal in statistics and decision theory. In comparison to point ...
https://arxiv.org/abs/2505.00937v1
Savage, 1971), and has largely revolved around propriety. A loss function is deemed proper if the least loss, in expectation over all possible realizations of the target, is incurred when the forecast equals the distribution of the target (Gneiting and Raftery, 2007). It is commonly held that proper losses encourage “h...
Forecast Hub (Cramer et al., 2022a), as shown in Figure 2. (For details on the experimental setup, see Section 4.) We highlight another effect appearing in the left-most plot in Figure 2: given a choice between two forecasts, either shifted upward by one unit, or shifted downward by the same amount, logarithmic loss wi...
Figure 2: Average logarithmic and CRP losses over targets and forecasts from the Covid-19 Forecast Hub (for details see Section 4; panels plot each loss against log σ and µ). The qualitative assessment for CRP loss is exactly the same as in the normal case; for logarithmic loss there is an additional location asymmetry that is due to the fo...
2 (Exponential family). Let {G_θ : θ > 0} be a minimal exponential family. Fix G = G_1, θ > 1, and denote by ℓ the logarithmic loss. The following holds.

• If the family is a normal, exponential, Laplace, Weibull, or (generalized) gamma scale family, or a log-normal log-scale family, then ℓ(G_θ, G) < ℓ(G_{1/θ}, G).
• If the family is an i...
distribution shift. In Section 6, we conclude with a discussion of our findings, some assorted additional results, and ideas for future work. 1.3 Related literature References to the broader literature on proper losses will be given in the next section, and here we restrict our attention to literature more narrowly adj...
reader to Gneiting and Raftery (2007) for a more comprehensive review of loss functions in probabilistic forecasting.

Logarithmic loss. When the outcome space Y is a convex subset of the finite-dimensional Euclidean space R^d, and F, G are absolutely continuous with respect to the Lebesgue measure, we let f, g denote thei...
multiplier E|X − X′| outside of the parentheses serves as a penalty for high entropy (in contrast to quadratic and spherical losses), as the term inside the parentheses is scale invariant. CRP loss is strictly proper if restricted to the family of distributions with finite first moment. It induces the Cramér divergence (C...
space Y ⊆ R by

ℓ(F, y) = log Var(X) + (y − EX)² / Var(X),

where X ∼ F, assumed to have finite variance. It is, up to additive and multiplicative constants, the logarithmic loss when the input is normally distributed with the same mean and variance as the forecast F. Being dependent only on the forecast mean and variance, it is...
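The loss displayed above (the Dawid–Sebastiani loss) is a one-liner to compute:

```python
import math

def ds_loss(mean, var, y):
    """Dawid-Sebastiani loss: log Var(X) + (y - EX)^2 / Var(X).
    Depends on the forecast F only through its mean and variance."""
    return math.log(var) + (y - mean) ** 2 / var
```

For a forecast with mean 0 and variance 1 evaluated at y = 0 the loss is log 1 + 0 = 0; overstating the variance inflates the first term, while understating it inflates the second, which is the source of the scale asymmetry studied in the paper.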
the main results, dividing our presentation into scale families (Section 3.1), location families (Section 3.2), and exponential families (Section 3.3).

3.1 Scale families

For a scale family {G_σ : σ > 0}, in order to study asymmetry inherent to a loss ℓ (whether ℓ favors overestimating or underestimating the scale σ), we impos...
Lemma 2. Let {G_σ : σ > 0} be a scale family, ℓ a loss function, and d the induced divergence. If d is symmetric and rescalable with scaling function h, then the following holds for G = G_1 and all σ > 1.

• If h is increasing, then ℓ(G_σ, G) > ℓ(G_{1/σ}, G).
• If h is decreasing, then ℓ(G_σ, G) < ℓ(G_{1/σ}, G).
• If h is constant, then ℓ(G_σ, G) = ℓ(G_{1/...
results. The divergences induced by spherical and DS losses are translation invariant, and furthermore they are symmetric when restricted to the case where the forecast and target distribution both belong to the same location family. Therefore the same argument as in Lemma 3 gives the desired result for spherical and D...
assume without loss of generality that η = 1. Denoting p = p_1, note that the difference in divergence can be written as

ℓ(p_θ, p) − ℓ(p_{1/θ}, p) = d(p_θ, p) − d(p_{1/θ}, p) = E log [p_{1/θ}(Y) / p_θ(Y)] = A(θ) − A(1/θ) − (θ − 1/θ) E T(Y),

where recall Y is a random variable with density p. A standard exponential family identity states that E T(Y) = A′(1), h...
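The displayed identity can be checked numerically for a concrete family. Taking the exponential distributions p_η(y) = η e^{−ηy} (natural parameter η, A(η) = −log η, T(y) = −y, so A′(1) = −1), a Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0
y = rng.exponential(1.0, size=200_000)   # Y ~ p_1

def logpdf(eta, y):
    """Log density of eta * exp(-eta * y)."""
    return np.log(eta) - eta * y

# Monte Carlo estimate of E log[p_{1/theta}(Y) / p_theta(Y)] ...
mc = np.mean(logpdf(1.0 / theta, y) - logpdf(theta, y))
# ... versus the closed form A(theta) - A(1/theta) - (theta - 1/theta) A'(1)
closed = (-np.log(theta)) - (-np.log(1.0 / theta)) - (theta - 1.0 / theta) * (-1.0)
```

The difference is positive for θ > 1, i.e. ℓ(p_θ, p) > ℓ(p_{1/θ}, p): the forecast with the larger natural parameter (smaller scale) incurs the larger log loss, consistent with the scale-family statement quoted earlier.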
> 0 and b ∈ R, the acceleration is A″(η) = a/η², which means that u³A″(u) is increasing in u. Hence, by Lemma 4, we have ℓ(p_{θη}, p_η) > ℓ(p_{η/θ}, p_η) for all η > 0 and all θ > 1. Consequently, for the generalized gamma scale family, as the scale parameter σ is inversely proportional to the natural parameter η = 1/σ^γ, we have, writing g...
one through four weeks ahead, for each U.S. state, over the period from April 2020 to March 2023. All forecasts in the Hub are stored in quantile format, and so we converted each forecast from a discrete set of quantiles to a density or CDF before computing log or CRP loss, respectively. Details are given in Appendix A.1...
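A minimal sketch of such a quantile-to-CDF conversion is piecewise-linear interpolation of the quantile function; the paper's exact recipe is in its Appendix A.1 and may differ (e.g., in tail handling):

```python
import numpy as np

def quantiles_to_cdf(levels, values):
    """Build a piecewise-linear CDF from a forecast stored as quantiles:
    levels are increasing probabilities, values the matching quantiles.
    Below/above the stored range the CDF is clamped to 0/1."""
    levels = np.asarray(levels, dtype=float)
    values = np.asarray(values, dtype=float)

    def cdf(y):
        return np.interp(y, values, levels, left=0.0, right=1.0)

    return cdf
```

Differencing such a CDF on a grid gives a density for log loss, while CRP loss can be computed from the CDF directly.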
a steep challenge to forecasters. Many of the best-performing teams used gradient boosting (Friedman, 2001) to predict quantiles. As we did with the forecasts in the Covid-19 Forecast Hub, we converted these quantile forecasts into a density or CDF before computing log or CRP loss, respectively, following the same stra...
following evaluation scheme for each model. At each spatial location, we first shifted the forecast density so that it matches the mean of the target density—this was done to focus on differences in scale. We then estimated the standard deviations of the forecast and target densities, and denote these distributions by ...
(top panel; p = 0.2), symmetric (middle; p = 1/2), or left skewed (bottom; p = 0.8). A lighter color represents a lower loss, with the minimum achieved at the star. When the location is correctly specified, the penalties on the scale σ all agree with what is suggested by our theory. CRP prefers underdispersed forecasts, log lo...
parameter in a scale or exponential family shifts in testing relative to the training population. In this case, if the shift is log-symmetric around the training parameter value, then the asymmetry in loss penalties can be used to describe the direction toward which the forecaster ought to deviate from their honest opi...
a) = d(G_a, G) and d(G, G_{1/a}) = d(G_{1/a}, G). Lemma 2 tells us that G_a will lead to larger divergence if d is rescalable with increasing scaling function, as in CRP loss, or will lead to smaller divergence if d is rescalable with decreasing scaling function, as in quadratic loss. In summary, for symmetric divergences the two setti...
similarly. Example 1 (Exponential distribution) .When Gσis the exponential distribution with scale σ(see Section 3.3), we can infer the following using Theorem 4. Under CRP loss, hedging is carried out by inflating the scale, making it flatter and less informative, while under quadratic loss it is carried out by deflat...
log-partition function has derivative A′(η) = −(k/γ)/η. The natural parameter η* for which the forecast p_{η*} attains the global minimum expected logarithmic loss satisfies 1/η* = E[1/η_test]. Now compare the optimal scale σ* = (η*)^{−1/γ} with the training scale σ_train = (η_train)^{−1/γ}: using Jensen's inequality, (σ*)^{−γ} = η* = (E[η_test^{−1}])^{−1} < e^{E lo...
ℓ is symmetric and rescalable with constant scaling function (e.g., logarithmic and DS), it is indifferent to the scale of the second task and assigns equal expected loss to Glenn and Bob in total over both tasks:

ℓ(F, G) + ℓ(H_σ, G_σ) = ℓ(H, G) + ℓ(F_σ, G_σ).

However, when the divergence has increasing scaling function (e.g.,...
Figure 7: The function ℓ(p_{η_1}, p_η) − ℓ(p_{η_2}, p_η) for normal densities with distinct fixed values of σ_2 = 1/η_2 (solid; σ_2 = 0.6 and σ_2 = 3.0 shown), plotted against σ_1 = 1/η_1, together with the roots given by Theorem 6 in terms of the Lambert function (dashed) and the multiplicative inverse of the largest of...
whereas ℓ(G_µ, G) = ℓ(G_{−µ}, G) implies

∫_Y w(y + µ)(G(y) − G(y + µ))² dy = ∫_Y w(y)(G(y) − G(y + µ))² dy  ⟹  w(y + µ) = w(y).

The only weight function that satisfies both conditions for all σ > 0 and µ ∈ R is the zero function: w(y) = 0 for all y ∈ Y. Therefore, the trade-off between ensuring symmetric penalties either in location or in scale families...
Philip Dawid. Present position and potential developments: Some personal views: statistical theory: the prequential approach. Journal of the Royal Statistical Society: Series A, 147(2):278–290, 1984.
A. Philip Dawid. The geometry of proper scoring rules. Annals of the Institute of Statistical Mathematics, 59:77–93, 200...
Calibration, sharpness and the weighting of experts in a linear opinion pool. Annals of Operations Research, 229(1):429–450, 2015.
Ian T. Jolliffe and David B. Stephenson. Forecast Verification: A Practitioner's Guide in Atmospheric Science. John Wiley & Sons, 2012.
Thomas H. Jordan, Yun-Tai Chen, Paolo Gasparini,...
Lauritzen. Proper local scoring rules. Annals of Statistics, 40(1):561–592, 2012.
Andrew J. Patton. Comparing possibly misspecified forecasts. Journal of Business & Economic Statistics, 38(4):796–809, 2020.
Adrian E. Raftery and Hana Ševčíková. Probabilistic population forecasting: Short to very long-term. Interna...
probabilistic wind power forecasting with multi-scale features. Renewable Energy, 201:734–751, 2022a.
Yun Wang, Houhua Xu, Runmin Zou, Lingjun Zhang, and Fan Zhang. A deep asymmetric Laplace neural network for deterministic and probabilistic wind power forecasting. Renewable Energy, 196:497–517, 2022b.
Edward Wheatcr...
Rerandomization for covariate balance mitigates p-hacking in regression adjustment

Xin Lu, Peng Ding

Abstract

Rerandomization enforces covariate balance across treatment groups in the design stage of experiments. Despite its intuitive appeal, its theoretical justification remains unsatisfying because its benefits of i...
https://arxiv.org/abs/2505.01137v1
common for researchers to explore various covariate combinations in the analysis stage (Simmons et al. 2011). Our contributions are twofold. First, we study two rerandomization schemes: rerandomization with Mahalanobis distance (ReM) (Cox 1982, Morgan & Rubin 2012) and rerandomization based on the p-values of margina...
estimators. For example, the coefficient estimator of Z_i from the following OLS regression is an unbiased estimator of τ̄ (Freedman 2008, Theorem 1):

lm(Y_i ∼ 1 + Z_i),   (1)

which we denote by τ̂ = n_1^{-1} Σ_{i: Z_i=1} Y_i − n_0^{-1} Σ_{i: Z_i=0} Y_i. The corresponding t-statistic using the associated Eicker–Huber–White (EHW) standard error is asym...
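Regression (1) and its EHW standard error can be sketched directly; the sketch below uses the HC0 sandwich form (X'X)^{-1} X' diag(e_i²) X (X'X)^{-1} and is an illustration, not the paper's code:

```python
import numpy as np

def ate_ols_ehw(y, z):
    """OLS of Y on (1, Z): the coefficient of Z equals the difference in
    means tau_hat; its EHW (HC0) standard error is the sandwich estimate."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), np.asarray(z, dtype=float)])
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ beta                          # residuals
    bread = np.linalg.inv(X.T @ X)
    V = bread @ (X.T @ (X * (e ** 2)[:, None])) @ bread
    return beta[1], np.sqrt(V[1, 1])
```

On toy data y = (1, 2, 3, 4) with z = (0, 0, 1, 1), the coefficient is the difference in group means, 3.5 − 1.5 = 2.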
sup_{H_0n} P∞(p^h_L ≤ α) ≥ sup_{H_0} P̃∞(p^h_L ≤ α). In the next section, we show that rerandomization can mitigate the type I error rate inflation due to p-hacking.

3 Rerandomization mitigates p-hacking

In the design stage, covariate imbalances may occur, which complicates the interpretation of the ATE estimators. Rerandomization has ...
and [−k]. By definition, Δ_k/(1 − R²_{[−k]}) equals the partial correlation of the full model lm(r_1^{-1} Y_i(1) + r_0^{-1} Y_i(0) ∼ 1 + x_i) versus the reduced model lm(r_1^{-1} Y_i(1) + r_0^{-1} Y_i(0) ∼ 1 + x_{i,[−k]}). Hence, Δ_k can be interpreted as the scaled partial R² of covariate k. Our subsequent discussion depends on R²_x and Δ = min_{k∈[K]} Δ_k...
acceptance probability of A_rem(a) is P∞(A_rem(a)) = P(χ²_K ≤ a). Figure 1 plots the asymptotic acceptance probability of A_rem(a) with a = ā_rem(α, R²_x, R²_x/K) under α ∈ {0.05, 0.1}, R²_x ∈ [0.1, 1], and K ∈ {1, 2, 3, 4, 5, 6}. The asymptotic acceptance probability decreases dramatically as R²_x decreases. For example, when K = 5 an...
(i) states that ReP mitigates p-hacking. Theorem 4 (ii) states that when α_t → 1, ReP resolves p-hacking. Define ∥V∥_{∞,2} = max_{∥u∥_∞=1} ∥Vu∥_2. ReP restricts the l_∞ norm of τ̂_x while ReM restricts the l_2 norm of τ̂_x. Since τ̂_x in A_rem(ā_rem) is balanced enough to resolve p-hacking, we can use A_rep(α_t) to resolve p-hacking if A_rep(α...
Similar to the discussion of Theorem 3, we can specify a conservative test that controls the type I error rate based on the error bound in Theorem 6.

4 Simulation study

We conduct a simulation study to evaluate p-hacking with and without rerandomization. The study considers three designs: CRE, ReM, and ReP. For {a_i}_{i=1}^n, w...
p-hacking under ReP and ReM. However, as shown in Figure 2, it is also the setting where p-hacking is more problematic under the CRE. This is because a higher value of R²_x is associated with larger l_2 norms of β_{L,K}, K ⊆ [K], in (3). Large β_{L,K} lead to greater divergence among τ̂_{L,K} under the CRE, thereby exacerbating the...
(0.08), 6.31 (0.08), 6.45 (0.08), 6.55 (0.08), 6.66 (0.08), 6.73 (0.08), 6.73 (0.08), 6.55 (0.08) [partial row of Table 1]

Table 1: Empirical type I error rates. Values are multiplied by 100. β = 1_5 is labelled "Equal" and β = (1, 1, 0.3, 0.3, 0.3) is labelled "Unequal", respectively. The numbers in parentheses in the second column represe...
'In pursuit of balance: Randomization in practice in development field experiments', American Economic Journal: Applied Economics 1, 200–232.
Cohen, P. L. & Fogarty, C. B. (2021), 'Gaussian prepivoting for finite population causal inference', Journal of the Royal Statistical Society Series B: Statistical Methodology 84, ...
diminishing covariate imbalance and diverging number of covariates', The Annals of Statistics 50, 3439–3465.
Zhao, A. & Ding, P. (2021), 'Covariate-adjusted Fisher randomization tests for the average treatment effect', Journal of Econometrics 225(2), 278–294.
Zhao, A. & Ding, P. (2024), 'No star is good news: A unified lo...
terms in the regression (7) to avoid overparametrization (Imbens & Rubin 2015, Duflo et al. 2007, Angrist et al. 2014). When x_i = ∅, we denote the estimated coefficient of Z_i in this fixed effects regression by τ̂_ω. Ding (2021) has shown that τ̂_ω = Σ_{h=1}^H ω_h τ̂_h, where ω_h = π_h r_{h1} r_{h0} / Σ_{h′=1}^H π_{h′} r_{h′1} r_{h′0}. Therefore τ̂_ω is an unb...
p-values are relegated to Section C. As a → 0 and α_t → 1, SS-ReM and SS-ReP resolve p-hacking for testing H_{0n}, while FE-ReP resolves p-hacking for testing H_{0ω}. To establish Theorem S1, we need to obtain the asymptotic limits of the t-statistics obtained from Lin's regression and the fixed effects regression. Zhao & Ding (2024) h...
A_fe-rep(α_t)) ≤ P((ε² + ∥ξ_0∥²_2)^{1/2} ≥ z_{1−α/2}, ∥D(V_{ω,xx})^{1/2} ξ_0∥_∞ ≤ z_{1−α_t/2}), where ε ∈ R and ξ_h ∈ R^K, h = 0, 1, …, H, are independent standard Gaussian random variables. Similar to Theorem 3 and Theorem 6, Theorem S3 holds for all (Y_i(1), Y_i(0))_{i=1}^n ∈ R^{n×2}. As a → 0 and α_t → 1, the right-hand sides of the inequalities all tend to P(|ε| > z_{1...
for both the regression coefficients and standard errors. We will use the invariance of OLS to compute ATE estimators and EHW standard errors.

Lemma S1 (Invariance of OLS). For OLS regression of Y ∈ R^n on X_1 ∈ R^{n×J}, let β̂_1 be the coefficient estimator, ê_{i,(1)} be the regression residual for unit i, i ∈ [n], V_{(1)} be the EHW cov...
n_H^{1/2}(τ̂_H − τ̄_H, τ̂_{H,x}^T))^T →d ((ε_{1,τ}, ε_{1,x}^T), …, (ε_{H,τ}, ε_{H,x}^T))^T and (ii) n^{1/2}(τ̂_ω − τ̄_ω, τ̂_{ω,x}^T)^T →d (ε_{ω,τ}, ε_{ω,x}^T)^T.

Proof of Lemma S5. For Lemma (i), applying Lemma S3 with R_i(z) = (Y_i(z), x_i^T)^T for i ∈ S_h, we have, under Condition S1 (i)–(iii), n_h^{1/2}(τ̂_h − τ̄_h, τ̂_{h,x}^T)^T →d (ε_{h,τ}, ε_{h,x}^T)^T. Because for h = 1, …, H, n_h^{1/2}(...
n ŝe²_fe − n s̃e²_fe = o_P(1); n_h ŝe²_{h,L} − n_h s̃e²_{h,L} = o_P(1); n ŝx²_{fe,k} − V_{ω,kk} = o_P(1); n_h ŝx²_{h,L,k} − V_{h,kk} = o_P(1), k ∈ [K], h ∈ [H].

Proof of Lemma S11. We first prove n ŝe²_fe − n s̃e²_fe = o_P(1). The proof proceeds in three steps. In Step 1, we decompose n ŝe²_fe into six terms; in Step 2, we derive the closed-form expression of re...
i = 0. Replacing Y_i with Y_i − x_i^T β̂_{fe,x} in the above expression, we have, for i ∈ S_h,

ê_i = Y_i − Ȳ_{h1} − (x_i − x̄_{h1})^T β̂_{fe,x} − r_{h0}(τ̂_ω − τ̂_h) + r_{h0}(τ̂_{ω,x} − τ̂_{h,x})^T β̂_{fe,x}, if Z_i = 1;
ê_i = Y_i − Ȳ_{h0} − (x_i − x̄_{h0})^T β̂_{fe,x} + r_{h1}(τ̂_ω − τ̂_h) − r_{h1}(τ̂_{ω,x} − τ̂_{h,x})^T β̂_{fe,x}, if Z_i = 0.

Define ϵ̂_i, for i ∈ S_h:

ϵ̂_i = Y_i − Ȳ_{h1} − (x_i − x̄_{h1})^T β̂_{fe,x}, if Z_i = 1;
ϵ̂_i = Y_i − Ȳ_{h0} − (x_i − x̄_{h0})^T β̂_{fe,x}, if Z_i = 0. ...
(A_n, B_n) | {φ_n(B_n, C_n) = 1} →d (A, B) | {φ(B, C) = 1}. Define τ̂_{h,L,K}, β̂_{h,L,K}, β_{h,L,K}, ŝe²_{h,L,K}, s̃e²_{h,L,K}, τ̂_{L,K}, ŝe²_{L,K}, s̃e²_{L,K}, τ̂_{fe,K}, β̂_{fe,K}, β_{fe,K}, ŝe²_{fe,K}, s̃e²_{fe,K} as the analogues of τ̂_{h,L}, β̂_{h,L,x}, β_{h,L,x}, ŝe²_{h,L}, s̃e²_{h,L}, τ̂_L, ŝe²_L, s̃e²_L, τ̂_fe, β̂_{fe,x}, β_{fe,x}, ŝe²_fe, s̃e²_fe, with x_{iK} replaced by...
Lemma S13, we have

(n_h^{1/2}(τ̂_h − τ̄_h), n_h^{1/2} τ̂_{h,x}, (β̂_{h,L,K}, n_h ŝe²_{h,L,K})_{K⊆[K]})_{h∈[H]} | A_ss-rem(a) →d (ε_{h,τ}, ε_{h,x}, (β_{h,L,K}, n_h s̃e²_{h,L,K})_{K⊆[K]})_{h∈[H]} | {max_{h∈[H]} ε_{h,x}^T V_{h,xx}^{-1} ε_{h,x} ≤ a}.

Lemma (ii) follows. By Lemma S12 with (B_n, C_n) = (n^{1/2} τ̂_{ω,x}, diag(n ŝx²_{fe,k})_{k∈[K]}) and φ(B_n, C_n) = I(∥σ(C_n)^{-1} B_n∥_∞ ≤ z_{1−α_t/2}), and Lemma ...
Δ_k^{1/2}. Noticing that Δ_{k0} = R̃²_{[−k0]}, we have, under {s_{[−k0]} = 1},

max_{K⊊[K]} |N_{L,K}| ≥ |N_{L,[−k0]}| = (R̄_x|ε_0| + ∥ξ∥_2 Δ_{k0}^{1/2}) / (R̄²_x + Δ_{k0})^{1/2}.

Therefore,

E_3 ∩ {s_{[−k0]} = 1} ⊇ {|ε_0| < z_{1−α/2}, s_{[−k0]} = 1, (R̄_x|ε_0| + ∥ξ∥_2 Δ_{k0}^{1/2})/(R̄²_x + Δ_{k0})^{1/2} > z_{1−α/2}, ∥ξ∥²_2 < a} ⊇ {|ε_0| < z_{1−α/2}, s_{[−k0]} = 1, ∥ξ∥_2 > ψ(|ε_0|), ∥ξ∥²_2 < a}.

We see that ψ(z_{1−α/2}) ...
_{h,x})_{h∈[H]} ∈ ∩_{K⊆[K]} C_K ∩ C_ss-rep) / P((ε_{h,τ}, ε_{h,x}^T)_{h∈[H]} ∈ C_ss-rep) ≥ P((ε_{h,τ}, ε_{h,x}^T)_{h∈[H]} ∈ ∩_{K⊆[K]} C_K) = P∞(p^h_L > α). As a consequence, we have P∞(p^h_L ≤ α | A_ss-rep(α_t)) ≤ P∞(p^h_L ≤ α). Similarly, we can prove that P∞(p^h_L ≤ α | A_ss-rem(a)) ≤ P∞(p^h_L ≤ α); P∞(p^h_fe ≤ α | A_fe-rep(α_t)) ≤ P∞(p^h_fe ≤ α).

Proof of Theorem S1. By Lemma S18, we have, ...