observations $y_{R+n}$ for $n = 1,\dots,T-R$ used to evaluate $C_n(Q,P) = \prod_{m=1}^{n} R_m(Q,P)$ and implement SSRE using boundary values $k_l$ and $k_u$. If SSRE terminates for $n \le T-R$ then they will use the method selected to produce their forecast. If the event $C^{Q\times P}_{T-R}$ occurs, however, an adjustment to the decision process must be applied, assuming that further observations at time points $T+\tau$, $\tau > 0$, are not available at time $T$. A reasonable rule to follow is to divide the event $C^{Q\times P}_n$ for $n = T-R$ into the union of $T^Q_n = C^{Q\times P}_n \cap E^Q_n$ and $T^P_n = C^{Q\times P}_n \cap E^P_n$, where $E^Q_n = \{k_l < C_n(Q,P) < 1\}$ and $E^P_n = \{1 < C_n(Q,P) < k_u\}$, and select method Q if $T^Q_n$ occurs and select method P if $T^P_n$ occurs. This adjustment is equivalent to truncating SSRE at the last data point and replacing $C^{Q\times P}_n$ with the union of $\{\bigcap_{i=1}^{n-1} E^{Q\times P}_i\} \cap E^Q_n$ and $\{\bigcap_{i=1}^{n-1} E^{Q\times P}_i\} \cap E^P_n$. Clearly $T^Q_n \cap T^P_n = \emptyset$, and the event $T^Q_n \cup T^P_n$ only occurs if the SSRE process fails to terminate on or before $n = T-R$. It is not surprising to find, via an argument that parallels the development preceding Proposition 2, that the adjustment brought about by the necessity to invoke the truncation decision rule can result in appreciable increases in error rates.

Practitioners may therefore wish to avoid having to use the truncation rule. One way of doing this is to divide the sample of $T$ values of $Y$ in such a way that the implementation of SSRE has a high chance of terminating for $n \le T-R$. From the corollary to Proposition 1 we know that the moments of $N$, the termination point of the SSRE, exist, and from Markov's inequality we have that $\Pr(N < n) > 1 - E_P[N]/n$. By assigning a value to $E_P[N]$ this bound can be used to determine the value of $n$ that will give the minimum sample size required to achieve a prescribed probability. Thereby a guide to $\Pr(N > n)$ can be established and a suitable choice for $R$, the length of the training data, made.

4. Relationship to e-variables, and generalized e-variables.
For SSRE, $H_Q$ and $H_P$ do not play the role of null and alternative hypotheses as in conventional, i.e., Neyman-Pearson type, significance tests. Neither method Q nor method P takes precedence over the other, suggesting that the objective of SSRE is to balance the strength of evidence in favour of one method against the strength of evidence in favour of the other. An assignment in which $\Pr(T^Q_n \mid H_P) = \Pr(T^P_n \mid H_Q)$ is equivalent to employing what is termed in Poskitt and Sengarapillai (2010) a 'balanced test'. Our aim therefore is to determine boundary values $k_l$ and $k_u$ that yield error probabilities that reproduce desired error rates whilst equating method Q error rates with those of method P.

4.1. SSRE and Generalized Universal e-values. To build our SSRE balanced testing approach, we begin by noting that Lemma 1 gives an interesting result on the behavior of the MGF of the statistic $\Delta_n(Q,P) = \sum_{m=1}^{n} D_m(Q,P)$. Rewriting $E_P[C_n(Q,P)^h]$ as $E_P[\exp(h \log(C_n(Q,P)))]$, it follows by Lemma 1 that $h_P$ can be represented as $h_P := -\omega_P \cdot \mathrm{sign}\{E_P[\Delta_n(Q,P)]\}$, where $\omega_P \in [0,\infty)$. The above representation suggests that the
https://arxiv.org/abs/2505.09090v1
strength of evidence for, or against, $H_Q$ can be gauged by the magnitude of $E_P[\exp\{-\omega \cdot \mathrm{sign}\{E_P[\Delta_n(Q,P)]\}\,\Delta_n(Q,P)\}]$, for some unknown value of $\omega \in [0,\infty)$. A formal statement that clarifies the nature of this relationship is given in the following lemma.

LEMMA 2. Under the conditions of Lemma 1, and for any $\omega \in [0, |h_P|]$,
$$E_P[\exp\{\omega \Delta_n(Q,P)\}] \le 1 \ \text{(under } H_Q\text{)}, \qquad E_P[\exp\{\omega \Delta_n(P,Q)\}] \le 1 \ \text{(under } H_P\text{)}.$$

PROOF. Under $H_Q$, $E_P[\Delta_n(Q,P)] < 0$ for all $n$, so that by Lemma 1 we have that $h_P = \omega_P > 0$. Furthermore, from the convexity properties of the MGF established as part of the proof of Lemma 1 we have that under $H_Q$
$$E_P[\exp\{-\omega \cdot \mathrm{sign}\{E_P[\Delta_n(Q,P)]\}\,\Delta_n(Q,P)\}] = E_P[\exp\{\omega \Delta_n(Q,P)\}] \le 1,$$
for all $\omega \in [0, h_P]$. Conversely, $h_P = -\omega_P < 0$ under $H_P$ and the same argument, in reverse, implies that $E_P[\exp\{\omega \Delta_n(P,Q)\}] \le 1$ for all $\omega \in [0, |h_P|]$ under $H_P$.

While the construction of the SSRE may seem obtuse, Lemma 2 indicates that if $h_P$ were known, $C^{(\omega)}_n(Q,P) := \exp\{\omega \cdot \Delta_n(Q,P)\}$ would be an e-variable for all $\omega \in [0, |h_P|]$, and subsequently could be used to test $H_Q$/$H_P$. Given a probability space $(\mathcal{Y}, \mathcal{B}_n, P)_{n=1}^{T}$, an e-variable $E_n(\cdot): \mathcal{Y} \to [0,\infty]$ for a hypothesis $H$ is an extended random variable that satisfies, for all $Q \in H$, $E_{Y\sim Q}[E_n] \le 1$, $n \le T$. An e-process $(M_n)_{n=1}^{T}$, where $T$ can be finite or infinite, is a non-negative stochastic process adapted to the filtration $\mathcal{B}_n$ and such that for any stopping time $\ell$ and any $Q \in H$, $E_{Y\sim Q}[M_\ell] \le 1$; i.e., $M_t$ is an e-variable for $H$ for all $t \le \ell$.

Although it is possible to use $\{C^{(\omega)}_n(Q,P)\}_{n\ge 1}$ in several ways, we believe the most direct way is to use the fact that $C^{(\omega)}_n(Q,P)$ can be interpreted as a generalized universal e-value (GUe-value); see Dey, Martin and Williams (2024a) and Dey, Martin and Williams (2024b). In particular, the key realization of Lemma 2 is that it demonstrates an immediate connection between $C^{(\omega)}_n(Q,P)$ and e-values: Lemma 2 implies that, for any $\omega \in [0,|h_P|]$, $C^{(\omega)}_n(Q,P)$ is an e-process under $H_Q$, while $C^{(\omega)}_n(P,Q)$ is an e-process under $H_P$.
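The e-variable property of $C^{(\omega)}_n(Q,P) = \exp\{\omega \Delta_n(Q,P)\}$ can be illustrated numerically. The sketch below is not the paper's procedure but a simplified Gaussian toy: the score differences $D_m(Q,P)$ are drawn i.i.d. $N(\mu, 1)$ with $\mu < 0$ (method Q more accurate, as under $H_Q$), in which case the analogue of $|h_P|$ is $2|\mu|$; the function names are ours.

```python
import math
import random

def e_process_path(omega, deltas):
    """Running GUe-values C_n^(omega) = exp(omega * sum_{m<=n} D_m)."""
    total, path = 0.0, []
    for d in deltas:
        total += d
        path.append(math.exp(omega * total))
    return path

def mean_e_value(omega, n=50, reps=20000, seed=1):
    """Monte Carlo estimate of E[C_n^(omega)] when D_m ~ N(mu, 1), mu < 0.
    For this toy DGP the e-variable bound E[.] <= 1 holds whenever
    omega <= 2*|mu| (the analogue of |h_P| in the text)."""
    rng = random.Random(seed)
    mu = -0.05
    acc = 0.0
    for _ in range(reps):
        deltas = [rng.gauss(mu, 1.0) for _ in range(n)]
        acc += e_process_path(omega, deltas)[-1]
    return acc / reps
```

With $\mu = -0.05$ the valid range is $\omega \le 0.1$; choosing $\omega = 0.05$ gives a Monte Carlo mean below one, while a choice of $\omega$ beyond $2|\mu|$ would break the bound, mirroring the requirement $\omega \le |h_P|$ in Lemma 2.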
Thus, if $h_P$ were known, we could build tests that deliver reliable control of error probabilities under $H_Q$ and $H_P$. The critical property of e-values from the perspective of SSRE is that they can be used to control the overall risk of making an incorrect decision. Using the above definitions, and Ville's inequality, we can show that values of $C^{(\omega)}_n(Q,P)$, respectively $C^{(\omega)}_n(P,Q)$, that bound the size of tail probabilities can be straightforwardly determined.

LEMMA 3. For any $\pi \in (0,1)$ and $\omega \in [0,|h_P|]$, $\Pr\big[\sup_{n=1,\dots,T} C^{(\omega)}_n(Q,P) \ge 1/\pi\big] \le \pi$ under $H_Q$, and under $H_P$, $\Pr\big[\sup_{n=1,\dots,T} C^{(\omega)}_n(P,Q) \ge 1/\pi\big] \le \pi$.

Lemma 3 suggests that SSRE schemes formulated by setting $k_l$ and $k_u$ equal to preconceived thresholds will deliver error rates that are suitably controlled regardless of the stopping rule. Thus, setting $k_l = \{\beta/(1-\beta)\}^{1/\omega}$ and $k_u = \{(1-\beta)/\beta\}^{1/\omega}$, where $0 < \beta < 1/2$, results in error rates $\beta_Q$ and $\beta_P$ that are both simultaneously less than or equal to $\beta/(1-\beta)$. Such a choice for the boundary constants is likely to produce a conservative procedure, as the only guiding principle in this choice is the forecaster's wish to control overall risk, and since the bounds in Lemma 3 are not sharp. Note that the e-value inequalities in Lemma 3 are here controlling what in a conventional Neyman-Pearson framework would be regarded as type II
errors - $k_l$ controls the probability of terminating by selecting method Q when $H_P$ holds, and $k_u$ controls the probability of terminating by selecting method P when $H_Q$ holds.

4.2. Implementation and False Discovery. Consider again the scenario introduced in Section 3.4, where a sample of $T+\tau$ values of $Y$, $y_1,\dots,y_{T+\tau}$, is partitioned into the first $R \ll T$ 'training data' values and out-of-sample 'test data' observations $y_{R+n}$ for $n = 1,\dots,N = T-R$ used to evaluate $C_n(Q,P)$ and implement SSRE using boundary values $k_l$ and $k_u$. In the event the SSRE process fails to terminate on or before $n = N$, a truncation decision rule was suggested that divides the event $C^{Q\times P}_N$ into the union of $E^Q_N = \{k_l < C_N(Q,P) < 1\}$ and $E^P_N = \{1 < C_N(Q,P) < k_u\}$, with method Q selected if $E^Q_N$ occurs and method P selected if $E^P_N$ occurs. Unfortunately, invoking the truncation decision rule can result in significant increases in error rates, and, more importantly, it ignores information about the relative merits of method Q and method P contained in $C^{Q\times P}_n$ for $n = 1,\dots,N-1$.

The critical finding in Section 4.1 is that, due to the behavior of the function $C_n(Q,P)$ elucidated in Lemma 1, by Lemma 2 we have that $\{C^{(\omega)}_n(Q,P)\}_{n\ge 1}$ is an e-process for $\omega \in [0,|h_P|]$, and thus can be used to control the size of tail probabilities as in Lemma 3. However, even though $\{C^{(\omega)}_n(Q,P)\}_{n\ge 1}$ is an e-process, if we were to test $H_Q$ (or $H_P$) many times, the resulting procedure would not control the false discovery rate (FDR), defined as the expected proportion of falsely rejected hypotheses, i.e., the probability of selecting method P when $H_Q$ holds (respectively selecting method Q when $H_P$ holds) in a sequence of tests. Nevertheless, Wang and Ramdas (2022) have constructed a testing algorithm that controls the FDR and only requires that we have access to a collection of e-values $e_1,\dots,e_N$, corresponding to $N$ tests of the hypothesis $H_Q$, $H^n_Q$, $n = 1,\dots,N$, where, with an obvious abuse of notation, $H^n_Q$, $n \le N$, refers to the satisfaction of $H_Q$ at the point $n \ge 1$.
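The boundary constants implied by Lemma 3 can be sketched directly: with $\pi = \beta/(1-\beta)$, the threshold $C^{(\omega)}_n \ge 1/\pi$ translates, since $C^{(\omega)}_n = C_n(Q,P)^{\omega}$, into thresholds on $C_n(Q,P)$ itself. The helper name below is ours.

```python
def ssre_boundaries(beta, omega):
    """Boundary constants from Lemma 3: k_l = (beta/(1-beta))**(1/omega),
    k_u = ((1-beta)/beta)**(1/omega), for 0 < beta < 1/2 and omega > 0."""
    if not (0.0 < beta < 0.5) or omega <= 0.0:
        raise ValueError("need 0 < beta < 1/2 and omega > 0")
    k_l = (beta / (1.0 - beta)) ** (1.0 / omega)
    k_u = ((1.0 - beta) / beta) ** (1.0 / omega)
    return k_l, k_u
```

For example, $\beta = 0.10$ with $\omega = 1$ gives $(k_l, k_u) = (1/9, 9)$, while halving $\omega$ to $1/2$ widens the continuation region to $(1/81, 81)$, showing how smaller learning rates make the scheme more conservative.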
The key idea of Wang and Ramdas (2022) is to create a new sequence of e-values through scaling changes:
• Let $e_{(1)} \le e_{(2)} \le \cdots \le e_{(N)}$ denote the ordered set of e-variables.
• Transform these ordered e-variables into new e-variables $e^\star_{(1)},\dots,e^\star_{(N)}$ according to $e^\star_{(j)} = (j/N)\,e_{(j)}$, $j = 1,\dots,N$.

Wang and Ramdas (2022) prove that the e-process $\{e^\star_{(n)}: 1 \le n \le N\}$ controls the FDR under arbitrary dependence between the original e-values. Hence, since we know that the $C^{(\omega)}_n(Q,P)$ are e-values for $\omega \in [0,|h_P|]$, then so are the ordered and scaled e-variables $e^\star_{(j)}(Q,P) = C^{(\omega)}_{(j)}(Q,P)\,(j/N)$, $1 \le j \le N$, where each $C^{(\omega)}_{(j)}(Q,P)$ is based on $n_j \le n$ observations. Thus, if our goal is to assess the validity of a collection of null hypotheses $H^1_Q,\dots,H^N_Q$, where $N$ is some stopping time, this can be achieved using any of the strategies proposed in Vovk and Wang (2021) for combining e-values. Therefore, we follow the suggestion of Dey, Martin and Williams (2024), and propose to use the average e-variable
$$E^{(\omega)}_N := \frac{1}{N}\sum_{j=1}^{N} \frac{j}{N} \cdot C^{(\omega)}_{(j)}(Q,P)$$
to establish the validity of the collection of hypotheses. This then delivers the first useful result,
which shows that the error of falsely rejecting a true $H^n_Q$ among the $H^1_Q,\dots,H^N_Q$ can be controlled.

THEOREM 1. If all of $H^{(1)}_Q,\dots,H^{(N)}_Q$ are true, then, for any $\pi \in (0,1)$ and $N \ge 2$,
$$\Pr_P\big\{E^{(\omega)}_N \ge \pi^{-1}\big\} \le \pi.$$

PROOF. From Markov's inequality,
$$\Pr_P\big\{E^{(\omega)}_N \ge \pi^{-1}\big\} \le E_P[E^{(\omega)}_N]\,\pi = \pi \frac{1}{N}\sum_{j=1}^{N}\frac{j}{N}\,E_P\big[C^{(\omega)}_{(j)}(Q,P)\big] \le \pi \frac{N(N+1)}{2N^2} \le \pi,$$
where the second inequality comes from the fact that, for each $\omega \in [0,|h_P|]$, by Lemma 2, $E_P[C^{(\omega)}_{(j)}(Q,P)] \le 1$, and the third from the fact that $2N^2 > N(N+1)$ for all $N \ge 2$.

Alternatively, if at least one of the collection of hypotheses is invalid, we can show that $E^{(\omega)}_N$ will be larger than $1/\pi$ with probability converging to one. To state such a result, we first require the following intermediate result, the proof of which is given in the appendix.

LEMMA 4. Suppose that $|n^{-1}\Delta_n(Q,P) - E_P[n^{-1}\Delta_n(Q,P)]| = o_p(1)$. If $\lim_n E_P[n^{-1}\Delta_n(Q,P)] = \vartheta > 0$, then as $n \to \infty$,
$$\Pr\big\{C^{(\omega)}_n(Q,P) \ge \pi^{-1}\big\} \ge 1 - o(1), \quad \forall\,\pi \in (0,1),\ \omega \in (0,|h_P|].$$

To obtain our next result, recall that each $C^{(\omega)}_{(j)}(Q,P)$ is calculated using $n_j$ observations. In what follows, we assume that $n_0 \le n_j$ for each $n_j$, $j \le N$, and that $n_0 \to \infty$ as $T \to \infty$.

THEOREM 2. Suppose that $|n_0^{-1}\Delta_{n_0}(Q,P) - E_P[n_0^{-1}\Delta_{n_0}(Q,P)]| = o_p(1)$ as $T \to \infty$, and that $H^j_Q$ is false for some $j \le N$. Then for each $\omega \in (0,|h_P|)$, $E^{(\omega)}_N$ satisfies
$$\Pr\big\{E^{(\omega)}_N \ge 1/\pi\big\} \ge 1 - o(1).$$

PROOF. For any $j \le N$,
$$E^{(\omega)}_N \ge \frac{j}{N^2}\,C^{(\omega)}_{(j)}(Q,P) = \frac{j}{N^2}\,C^{(\omega)}_{n_j}(Q,P).$$
Consequently,
$$\Pr\big\{E^{(\omega)}_N \ge 1/\pi\big\} \ge \Pr\left(\frac{j}{N^2}\,C^{(\omega)}_{n_j}(Q,P) \ge 1/\pi\right) = \Pr\left\{C^{(\omega)}_{n_j}(Q,P) \ge \left(\frac{j\pi}{N^2}\right)^{-1}\right\}.$$
Note that $j/N^2 \in (0,1)$ by construction, and hence $j\pi/N^2 \in (0,1)$. Thus, if $H^j_Q$ is false, by Lemma 4 we have
$$\Pr\big\{E^{(\omega)}_N \ge 1/\pi\big\} \ge \Pr\left\{C^{(\omega)}_{n_j}(Q,P) \ge \left(\frac{j\pi}{N^2}\right)^{-1}\right\} \ge 1 - o(1).$$

Theorems 1 and 2 indicate that SSRE schemes can be supplemented by monitoring the average e-variables $E^{(\omega)}_N$ along with $C_n(Q,P)$, for $n = 1,\dots,N$, using the same $k_l$ and $k_u$ critical values. Information contained in $C^{Q\times P}_n$, $n = 1,\dots,N$, concerning the relative merits of method Q and method P will thereby be exploited and incorporated into the sequential decision making process.
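The ordering-and-scaling construction of Wang and Ramdas (2022), and the average e-variable built from it, can be sketched in a few lines; the function names are ours.

```python
def wang_ramdas_scaled(e_values):
    """Order the e-values and scale the j-th smallest by j/N, as in the
    e-value FDR construction of Wang and Ramdas (2022) used in the text."""
    ordered = sorted(e_values)
    n = len(ordered)
    return [(j / n) * e for j, e in enumerate(ordered, start=1)]

def average_e_variable(e_values):
    """Average e-variable E_N = (1/N) * sum_j (j/N) * e_(j)."""
    scaled = wang_ramdas_scaled(e_values)
    return sum(scaled) / len(scaled)
```

By Theorem 1, the collection $H^1_Q,\dots,H^N_Q$ is rejected at level $\pi$ when `average_e_variable(...)` reaches $1/\pi$; note that when every e-value equals one, the average is $(N+1)/(2N) \le 1$, matching the bound used in the proof.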
By setting $k_l$ and $k_u$ equal to values that will deliver preassigned terminal error rates, as in Section 4.1, suitable control of the FDR will also be achieved whilst maintaining the desired error rates, regardless of the stopping rule.

5. Empirical Implementation.

5.1. Selection of $\omega$. Implementation of the SSRE procedure based on GUe-variables requires selecting a value of $\omega$, with the validity of our theoretical results requiring that this choice of $\omega$ be smaller than the unknown $|h_P|$. While such a constraint might appear to be a hindrance to implementation, this issue is not as difficult as it first seems. In particular, we know from the proof of Lemma 1 that under $H_Q$ the function $m(\omega) = E\exp\{-\omega \cdot \mathrm{sign}\{E_P\Delta_n(Q,P)\}\,\Delta_n(Q,P)\}$ takes a value of unity at $\omega = 0$ and at $\omega_P > 0$. Further, $m(\omega)$ is no greater than unity for all $\omega \in [0,\omega_P]$. Hence, by Rolle's theorem we know that $m(\omega) < 1$ for all $\omega \in (0,\omega_P)$. Thus, a possible strategy is not to select a specific value of $\omega$, but to choose several $\omega \in (0,1]$ and compare the behavior of the statistics
$$\exp\{-\omega \cdot \mathrm{sign}[\Delta_R(Q,P)]\,\Delta_R(Q,P)\}$$
across these different choices, where $R$ represents the length of the in-sample period.
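This diagnostic can be sketched as follows; the helper name and the default grid of candidate $\omega$ values are ours, and the score differences are assumed to be supplied by the user's in-sample evaluation.

```python
import math

def omega_diagnostic(score_diffs, omegas=(0.25, 0.5, 1.0)):
    """For each candidate omega, compute the in-sample statistic
    exp(-omega * sign(Delta_R) * Delta_R), where Delta_R is the cumulative
    sum of the in-sample score differences D_m(Q, P). Broadly similar
    values across the omegas are taken as evidence that omega <= omega_P."""
    delta_r = sum(score_diffs)
    sign = 1.0 if delta_r > 0 else (-1.0 if delta_r < 0 else 0.0)
    return {w: math.exp(-w * sign * delta_r) for w in omegas}
```

Since $\exp\{-\omega\,\mathrm{sign}(\Delta_R)\Delta_R\} = \exp\{-\omega|\Delta_R|\}$, each statistic lies in $(0,1]$; what matters for the diagnostic is how quickly it decays as $\omega$ grows.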
If all the statistics behave similarly, this is meaningful evidence that we have not inadvertently chosen $\omega > \omega_P$. The behavior of the statistics can be plotted across the in-sample set and the resulting figures compared to determine whether their behavior is similar. Assuming that the values of $\omega$ chosen deliver statistics with similar behavior, each specific value of $\omega$ chosen can be used to construct a different $E^{(\omega)}_N$, and the average version of these can be used in place of a single test statistic $E^{(\omega)}_N$: if $E^{(\omega_1)}_N$ and $E^{(\omega_2)}_N$ are e-processes, then so is their average. Thus, given $S$ values $\omega_1,\dots,\omega_S$ associated with $S$ similar statistics, and a window across which to implement the SSRE scheme, including $N \le T-R$ distinct periods, the SSRE scheme can be based on a sequence of statistics
$$\left\{E^{\mathrm{Avg}}_{N-j} := \frac{1}{S}\sum_{s=1}^{S} E^{(\omega_s)}_{N-j} : j = K+T-R,\dots,N+K+T-R\right\},$$
where $K \ge 0$ is chosen by the researcher. For example, in many cases the researcher may only wish to test differences in forecast accuracy over the final 100 pseudo out-of-sample observations. In such cases, we take $K$ such that $N = 100 = T-R-K$, i.e., $K = T-R-100$. A test can then be implemented by comparing each subsequent value of $E^{\mathrm{Avg}}_{N-j}$ with an assigned boundary value, and rejecting the assumed null hypothesis once the boundary is breached. Hence, in what follows we use the average e-variable across three different choices of $\omega$ in our experiments: $\omega \in \{1/4, 1/2, 1\}$. The error rates of such a SSRE testing procedure can be controlled by setting the boundary values $k_l$ and $k_u$ appropriately.
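Putting the pieces together, the averaged statistic over several candidate $\omega$ values can be sketched as below; this is a simplified self-contained version (function names ours) in which each $E^{(\omega)}_N$ is built from the same sequence of score differences.

```python
import math

def gue_values(score_diffs, omega):
    """Running GUe-values C_n^(omega) = exp(omega * cumulative score diff)."""
    total, out = 0.0, []
    for d in score_diffs:
        total += d
        out.append(math.exp(omega * total))
    return out

def e_avg(score_diffs, omegas=(0.25, 0.5, 1.0)):
    """Average over candidate omegas of the averaged e-variable
    E_N^(omega) = (1/N) sum_j (j/N) C_(j)^(omega), using the ordered
    GUe-values; an average of e-variables is again an e-variable."""
    n = len(score_diffs)
    per_omega = []
    for w in omegas:
        ordered = sorted(gue_values(score_diffs, w))
        per_omega.append(sum((j / n) * c for j, c in enumerate(ordered, 1)) / n)
    return sum(per_omega) / len(omegas)
```

When the score differences are identically zero every GUe-value equals one, so `e_avg` collapses to $(N+1)/(2N)$ for any choice of $\omega$ grid, a handy sanity check.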
To understand how this can be accomplished, let us simplify the setting by assuming we wish to establish whether a particular model, say model Q, is preferable; of course, nothing prohibits us from entertaining either model as being preferable, but for the purpose of selecting $k_l$ and $k_u$ it is simpler to consider evidence for or against one specific model rather than having to consider values of $k_l$ and $k_u$ that are appropriate for both models. Such an approach implicitly favors model Q as a benchmark and then seeks to find evidence against the benchmark, i.e., evidence in favour of model P.

Viewing model Q as our benchmark, we can ask how our theoretical results shed light on what appropriate values of $k_l$ and $k_u$ might be. Since the suggested testing approach is based on e-variables, we know that if $H_Q$ holds then, on average, $C_n(Q,P)$ will be less than unity. We can therefore restructure the SSRE scheme by considering a value of $k_l = 1$ as suggesting that the benchmark model Q is favored over model P, and then selecting a value of $k_u > 1$ to ensure we can control the error of favouring model P when in fact benchmark model Q is preferable. This leads us to consider a slightly modified version of the SSRE scheme based on the preference and indifference regions illustrated in Figure 2.

Fig 2: Regions of favorability for $H_Q$ and $H_P$ in terms of the modified SSRE boundary constants $k_l$ and $k_u$, with $k_l = 1$: values of $C_n(Q,P)$ below $k_l = 1$ favour $H_Q$, values above $k_u$ favour $H_P$, and the interval $(k_l, k_u)$ is the SSRE indifference region.

Given this form of the SSRE, we can leverage our theoretical results to understand how
to select $k_u$ to control the probability of falsely favouring P when the benchmark Q holds. To this end, define the event
$$T^P_{N-j} := \big\{E^{\mathrm{Avg}}_{N-j} \ge k_u\big\}, \quad k_u > 1, \quad j = 0,1,\dots,K,$$
which represents termination of SSRE and selection of method P at time $N-j \le N$. We can control the behavior of $\Pr(T^P_N \mid H^1_Q,\dots,H^N_Q)$ - the probability of selecting method P when $H_Q$ holds and the benchmark method is more accurate - by using Theorem 1. In particular, fix some $\beta \in (0,1/2)$ and define $\beta_Q = \beta/(1-\beta)$ to be the error we are willing to tolerate in falsely selecting P when in fact Q is more accurate; i.e., $\beta_Q = \beta/(1-\beta) = \Pr(T^P_N \mid H^1_Q,\dots,H^N_Q)$. Then, by Theorem 1, we know that $\beta_Q \le 1/k_u$, and we can control this error by setting $k_u = 1/\beta_Q = (1-\beta)/\beta$. Furthermore, by Theorem 2 we have that for each $\omega \in (0,|h_P|)$ the statistic $E^{(\omega)}_N$ satisfies $\Pr(E^{(\omega)}_N \ge 1/\pi) \ge 1 - o(1)$ if $H^j_Q$ is false for some $j \le N$, and thus the probability $\Pr(T^P_N \mid H^1_P,\dots,H^N_P) \ge 1 - o(1)$ for any $k_u = (1-\beta)/\beta$, $\beta \in (0,1/2)$. Hence we can conclude that when in fact $H_P$ holds, and method P is more accurate than the benchmark, the probability that we fail to select method P converges to zero.

Now observe that the roles of model Q and model P can easily be reversed, since the previous constructions and derivations are symmetric in these arguments. Interchanging Q and P and setting method P as the benchmark, a value $k_l$ can be determined such that the probability of selecting method Q when $H_P$ holds, $\Pr(T^Q_N \mid H^1_P,\dots,H^N_P)$, is controlled, and the probability of selecting method Q when the benchmark does not hold, $\Pr(T^Q_N \mid H^1_Q,\dots,H^N_Q)$, converges to one. In this way a SSRE scheme with boundary constants $k_l$ and $k_u$ that is guaranteed to have desirable statistical properties can be constructed.

5.2. Examples. We now examine the empirical accuracy of our proposed SSRE approach across several common examples in the forecasting literature. In each example, we compare the benchmark model, denoted by Q, against a single alternative, denoted by P.
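The one-sided decision rule just described can be sketched as follows; this is a minimal sketch, assuming the boundary convention $k_u = (1-\beta)/\beta$ derived above (the empirical section's exact constant may be computed under a slightly different convention), with a function name of our choosing.

```python
def modified_ssre(e_avg_sequence, beta=0.10):
    """One-sided SSRE against benchmark Q: select P the first time the
    running average e-variable crosses k_u = (1 - beta)/beta; otherwise
    retain the benchmark Q at the end of the monitoring window."""
    k_u = (1.0 - beta) / beta
    for t, e in enumerate(e_avg_sequence, start=1):
        if e >= k_u:
            return "P", t            # boundary breached: evidence against Q
    return "Q", len(e_avg_sequence)  # boundary never breached: retain Q
```

With $\beta = 0.10$ the threshold is $k_u = 9$, and Theorem 1 bounds the probability of wrongly selecting P by $\beta/(1-\beta) = 1/9$.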
We compare our SSRE approach against the most common approach for testing the accuracy of forecasts: a one-sided version of the Diebold and Mariano (DM) test, which tests that the benchmark, Q, is not inferior to the alternative, P. While the two testing approaches differ slightly in their null and alternative hypotheses - we treat a sequence of conditional hypotheses, whereas the DM test treats a single average hypothesis - such a comparison is warranted as the DM test is far and away the most popular method for testing forecast accuracy.

The key feature of the DM test is that its validity requires the strong assumption that the difference in scores follows a weakly stationary process, which is known not to be satisfied when models are estimated or when working with non-stationary series. In contrast, our testing approach makes no such requirements and builds these features directly into the testing framework. Consequently, it is likely that our approach will
deliver accurate results in exactly the situations where standard testing methods will fail.

Across the experiments that follow, we compare the accuracy of the different models in terms of log-scores and root-mean-squared error. We compare the accuracy of our SSRE scheme and the DM test across $M = 1000$ replications from different assumed models, which we present in the subsequent subsections. In each example, the SSRE approach uses the testing protocol discussed in the previous section; we also set $\beta = 1/10$, so that $k_u \approx 11.11$, and conduct the test with this fixed threshold.

For each experiment, the replicated dataset is based on $T = 1000$ total observations, which we break into an in-sample fitting dataset comprised of $R = T/2 = 500$ observations and an out-of-sample test set based on the remaining observations. For simplicity, we consider a fixed in-sample period and do not consider rolling or expanding windows, and we fit each model by maximum likelihood. We remark that nothing in the construction of the SSRE prohibits such schemes, and that fixed windows are merely used to avoid the computational cost of model refitting. For the SSRE approach, we reserve only the last $N = 100$ periods to calculate the SSRE test statistics and conduct the sequential testing approach, while the DM test statistic is based on all $T-R = 500$ observations in the test set. This feature should positively bias the DM test results, due to it having a larger test set and thus being closer to the asymptotic regime. When calculating the DM test statistic we estimate the variance of the loss using HAC-based methods, and we reject the null if the corresponding test statistic is greater than $\Phi^{-1}(1-\beta) = 1.28$, i.e., if the average difference in the losses is larger than the 90% quantile of the standard normal, which is a standard rejection region for the one-sided DM test when the losses are negatively oriented.
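The DM statistic used as the comparator can be sketched as below; this is a minimal pure-Python version, with a Bartlett-kernel (Newey-West) HAC estimate of the long-run variance, and the lag truncation of 4 is an illustrative choice rather than the paper's setting.

```python
import math

def dm_statistic(loss_diffs, lags=4):
    """One-sided Diebold-Mariano statistic for the mean loss differential
    d_t = loss_P(t) - loss_Q(t) (losses negatively oriented), using a
    Newey-West (Bartlett-kernel) HAC long-run variance estimate."""
    n = len(loss_diffs)
    dbar = sum(loss_diffs) / n
    centered = [d - dbar for d in loss_diffs]

    def gamma(k):
        # sample autocovariance of d_t at lag k
        return sum(centered[t] * centered[t - k] for t in range(k, n)) / n

    lrv = gamma(0) + 2.0 * sum(
        (1.0 - k / (lags + 1)) * gamma(k) for k in range(1, lags + 1)
    )
    return dbar / math.sqrt(lrv / n)
```

Under the convention in the text, the benchmark is rejected at level $\beta = 0.10$ when the statistic exceeds $\Phi^{-1}(0.9) \approx 1.28$.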
To gauge the accuracy of our SSRE approach we simulate data from a fixed version of the benchmark Q and investigate the Monte Carlo frequency with which this null is rejected, and compare that to the theoretical results, which, by Theorem 1, under our specific choices should be controlled and less than $\beta/(1-\beta) \approx 11\%$ across all experiments. We then verify the accuracy of Theorem 2 by simulating data under a fixed version of model P and calculating the same rejection frequency, which should tend to one as $T$ diverges. We now briefly present the different simulation experiments and then present the results of these experiments.

5.2.1. Example 1: Unit Root versus AR(1). Consider comparing the accuracy of a benchmark forecasting model, Q, given by a unit root process, against the alternative forecasting model, P, given by an autoregressive process of order one (AR(1)). Both forecasts are based on the specification
$$(5)\qquad Y_t = \rho Y_{t-1} + \sigma\varepsilon_t, \quad \varepsilon_t \sim N(0,1), \quad t = 2,3,\dots,T,$$
where model P enforces $|\rho| < 1$, while model Q sets $\rho = 1$. In this case, method Q corresponds to the forecasting model $Y_t \mid Y_{t-1} \sim N(\mu + Y_{t-1}, \sigma^2)$, while method P corresponds to $Y_t \mid Y_{t-1} \sim N(\mu + \rho Y_{t-1}, \sigma^2)$, where $|\rho| < 1$. We fix $\sigma = 2$ in all experiments,
and consider $\rho \in \{1, 0.90\}$. The case where $\rho = 1$ corresponds to model Q being more accurate than model P, while the case $\rho = 0.90$ covers the converse. Under the stipulated choice of $k_u$, our theoretical results imply that the empirical rejection frequency should be less than 11% in the first case $(\rho = 1)$, while in the second $(\rho = 0.90)$ our results suggest that the empirical rejection frequency should be large.

5.2.2. Example 2: AR(2) versus ARMA(2,1). In the second example, we generate data from the ARMA(2,1) process
$$(6)\qquad Y_t = \rho_1 Y_{t-1} + \rho_2 Y_{t-2} + \theta\varepsilon_{t-1} + \sigma\varepsilon_t, \quad \varepsilon_t \sim N(0,1), \quad t = 2,3,\dots,T.$$
We consider two forecasting methods: a benchmark model, Q, based on fitting an ARMA(2,1) model, and an alternative, P, based on fitting an AR(2). In this example, we use the ARMA(2,1) model to explore situations where neither model should be favored, and so we require that both models deliver forecasts of similar accuracy. Following Poskitt (1987), we know that if data is generated from the ARMA(2,1) process with parameters $\rho_1 = 1.4$, $\rho_2 = -0.6$ and $\theta = 0.285$, the resulting forecasts based on the AR(2) model may actually be favored over those of the true ARMA(2,1) model in small-to-moderate samples. Hence, such a setting delivers a case where the benchmark method Q is indeed more accurate than method P, but where detection of this should be difficult. Conversely, it was also shown by Poskitt (1987) that if we fit a misspecified AR(2) model to the ARMA(2,1) process the pseudo-true parameter values are given by $\rho_1 = 1.50$ and $\rho_2 = -0.70$. Thus, if we were instead to generate data from the ARMA(2,1) model under these values of $\rho_1$ and $\rho_2$, while setting $\theta = 0$, we would have a situation where the alternative model P should actually be favored over the benchmark Q, but where the two models will again produce extremely similar predictions, and so we would not expect to reject the benchmark model Q in such a case.

5.2.3. Example 3: Forecast Combinations.
As our last example, we compare the case where forecasts are generated from a combination of time series models using regression. Forecast combination is one of the most common approaches to empirical forecasting, and testing forecast accuracy in such settings has many well-known issues. In this experiment, the observed data is generated according to
$$Y_t = \omega X_{1t} + (1-\omega)X_{2t} + \sigma\varepsilon_t, \quad \varepsilon_t \sim N(0,1), \quad t = 2,3,\dots,T,$$
$$X_{1t} = 0.50\,\epsilon_{t-1} + 0.285\,\epsilon_{t-2} + \epsilon_t, \quad \epsilon_t \sim N(0,1), \quad t = 3,\dots,T,$$
$$X_{2t} = 0.50\,\nu_{t-1} - 2\times 0.285\,\nu_{t-2} + \nu_t, \quad \nu_t \sim N(0,1), \quad t = 3,\dots,T.$$
The data generating process is designed to mimic a scenario involving alternative leading indicators, and we do not attempt to model the time-series dependence in $X_{1t}$ and $X_{2t}$ but treat them as exogenous regressors. Our only goal is to produce point forecasts for $Y_{t+1}$ and so we only consider forecast accuracy in terms of MSE.$^4$ Our benchmark method Q is an equally weighted combination of forecasts based on fitting individual regressions of $Y_t$ on $X_{1t}$, and of $Y_t$ on $X_{2t}$. This is tested against an optimally weighted forecast combination based on minimizing the sum of in-sample squared forecast errors, method P. This particular alternative is chosen since in this case the predictive model based on the equally weighted combination, model Q, coincides with the predictive model P when we set
$\omega = 1/2$. Given this, we consider two scenarios: in the first case we generate data so that method Q is approximately correct, which corresponds to setting $\omega = 1/2$; in the second, we set $\omega = 1/4$ so that the forecast model Q is less accurate than model P.

5.2.4. Results. We give the results across the different examples in Table 1, which contains results for which method is more accurate, Q or P, as well as for log-score and MSE, except for Example 3 where we only treat MSE. Across the examples and scores, when model Q is indeed more accurate than model P, our test controls the error associated with incorrectly choosing model P. Conversely, when model P is more accurate than model Q, our test clearly favours model P. We remind the reader that in Example 2 the forecasting models are essentially equivalent under the chosen parameterizations, and so we should expect that differentiating between the two models is not feasible; i.e., under the two designs the rejection rates should be similar, and if they are not, this indicates that the proposed testing method is incorrectly favoring one model over the other.

Comparing our results to those of the DM test, we see that the results of the DM test are more varied. In each scenario we analyze, the size of the DM test is much lower than the nominal level of 10% used when conducting the test. Such a result suggests that the underlying conditions on which the theoretical behavior of the test is founded are invalid. Generally, when the alternative is correct the DM test accounts for this difference adequately, with two notable exceptions. In Example 1, the SSRE approach has a significantly higher MSE rejection rate than the DM test, and in Example 2 under the log score the DM test chooses the alternative model about 20% of the time, when in fact the two models deliver equivalent forecasts under the chosen parameterizations.
These findings support the existing literature suggesting that DM-type tests may not be appropriate in complex forecast comparison settings with non-regular models and estimated parameters. In contrast, our testing approach does not suffer these same issues: Theorem 1 says that if method Q is more accurate, then we can control the probability of falsely selecting method P, while Theorem 2 guarantees that if model P is indeed more accurate than model Q, then with probability converging to one we will obtain enough evidence to reject model Q. Furthermore, no assumptions about the nature of Q and P are made in our setting.

$^4$A combination of the models based on the individual model densities will not deliver a predictive density in the correct class, and so we forgo analysis using the log score in this example.

TABLE 1
Comparison of SSRE and one-sided DM tests. The setting where the benchmark model Q is more accurate is denoted by $H_Q$, and $H_P$ denotes when the alternative model P is more accurate.
The average rejection probabilities under $H_Q$ and $H_P$ for SSRE and DM are presented in columns SSRE and DM respectively.

                     Example 1        Example 2        Example 3
                     SSRE    DM       SSRE    DM       SSRE    DM
MSE          HQ      0.010   0.003    0.007   0.001    0.050   0.020
             HP      0.950   0.810    0.012   0.003    0.470   0.460
Log Score    HQ      0.001   0.001    0.050   0.001    n/a     n/a
             HP      1.000   1.000    0.001   0.200    n/a     n/a

6. Conclusion. This paper has considered the theoretical analysis of sequential forecast production and evaluation based on proper scoring rules. We have shown that it is feasible to deliver reliable sequential tests of forecast accuracy under much weaker assumptions than those used in many testing approaches, by leveraging a particular type of test statistic and evaluation framework. The key novelty in this framework is the recognition that the test statistic we have formulated encodes forecast accuracy in a way that can be represented as a generalized e-variable (Dey, Martin and Williams, 2024a; Dey, Martin and Williams, 2024b) that depends on a particular "learning rate". We demonstrate that this learning rate encodes which forecasting model is more accurate, and discuss how it can be set in practice. In leveraging the behavior of e-variables, we show that our sequential testing framework controls the family-wise error rates associated with the testing procedure.

In this manuscript, we have considered the application of this testing procedure to the empirically relevant case where a benchmark forecasting model is compared against some alternative method, and where the null hypothesis is that this benchmark model is at least as accurate as the alternative. However, in theory, the proposed framework should extend beyond this setting.
An exciting possibility is the use of this framework for the construction of multi-comparisons across different models, with the ultimate aim being the construction of a sequential model confidence set (see Hansen, Lunde and Nason, 2011, for a discussion of the concept of a model confidence set). To keep the current paper brief, we have not explored this idea here, but we speculate that it should be feasible to extend our current framework to construct such model confidence sets while maintaining control of the error rates associated with incorrectly including models in the said set.

APPENDIX: PROOFS

A.0.0.1. Proof of Lemma 3. Firstly, from the definition of $H_Q$, for any $n \ge 1$ and $P \in H_Q$, $R_n(Q,P)$ is an e-variable: i) it is non-negative; ii) if $P \in H_Q$, then $E_{Y\sim P}[R_n(Q,P)] \le 1$ for any $n \in \mathbb{N}$. Since each $R_n(Q,P)$ is adapted to $\mathcal{B}_n$, by the tower property of conditional expectations $C_n(Q,P) = \prod_{m=1}^{n} R_m(Q,P)$, $n \in \mathbb{N}$, is a non-negative supermartingale. In particular, since $R_n(Q,P) \ge 0$, and since under $H_Q$ we have $E_{Y\sim P}[R_n(Q,P) \mid \mathcal{B}_{n-1}] \le 1$, it follows that
$$E_{Y\sim P}[C_n(Q,P)] = E_{Y\sim P}\left[E_{Y\sim P}\left[\prod_{m=1}^{n} R_m(Q,P) \,\Big|\, \mathcal{B}_{n-1}\right]\right] = E_{Y\sim P}\prod_{m=1}^{n} E_{Y\sim P}[R_m(Q,P) \mid \mathcal{B}_{m-1}],$$
with the convention that $E_{Y\sim P}[R_1(Q,P) \mid \mathcal{B}_0] = E_{Y\sim P}[R_1(Q,P)]$. Since under $H_Q$, $E_{Y\sim P}[R_m(Q,P) \mid \mathcal{B}_{m-1}] \le 1$ for each $m \ge 1$, we have $E_{Y\sim P}[C_n(Q,P)] \le 1$. For any $P \in \mathcal{P}$, the result then follows by Ville's inequality.

A.0.0.2. Proof of Proposition 1. Assume that for each $m$ the distribution of $C_m(Q,P)$ is
nondegenerate and that the probability of at least one of $E^Q_m$ or $E^P_m$ is non-zero, $m = 1,2,\dots$. Let $p_m = \Pr(k_l < C_m(Q,P) < k_u)$. Then $\Pr(N > n) \le \varrho_n = \prod_{m=1}^{n} p_m$, where $p_m < 1$, $m = 1,\dots,n$, and the sequence $\varrho_n$, $n = 1,2,\dots$, converges as $n$ increases since it is monotonically nonincreasing and bounded below by zero. Suppose that $\lim_{n\to\infty} \prod_{m=1}^{n} p_m > 0$. Cauchy's condition for the convergence of a product implies that $\prod_{m=1}^{n} p_m$ converges if, and only if, for every $\epsilon > 0$ there exists an $n_\epsilon$ such that $|p_{n_\epsilon+1} p_{n_\epsilon+2} \cdots p_n - 1| < \epsilon$ for all $n > n_\epsilon$. This implies that $p_n \to 1$ as $n_\epsilon \to \infty$, contradicting the condition that $p_n < 1$. We can therefore conclude that the product $\prod_{m=1}^{n} p_m$ 'diverges' and $\lim_{n\to\infty} \prod_{m=1}^{n} p_m = 0$. It follows that $\lim_{n\to\infty} \Pr(N > n) = 0$ and hence that the SSRE process terminates with probability one. Moreover, if we set $\varrho_\epsilon = \big(\prod_{m=1}^{n_\epsilon} p_m\big)^{1/n_\epsilon}$, then we have that for all $n > n_\epsilon$
$$\Pr(N > n) \le \left(\prod_{m=1}^{n_\epsilon} p_m\right)\left(\prod_{m=n_\epsilon+1}^{n} p_m\right) = \varrho_\epsilon^{n_\epsilon}\cdot\left(\prod_{m=n_\epsilon+1}^{n} p_m\right) < \exp\{n_\epsilon \log(\varrho_\epsilon)\},$$
where $0 < \varrho_\epsilon < 1$.

A.0.0.3. Proof of Corollary 1. By definition
$$E_P[\exp(Ns)] = \sum_{n=1}^{\infty} \exp(ns)\Pr(N = n) = \sum_{n=1}^{n_\epsilon} \exp(ns)\Pr(N = n) + \sum_{n=n_\epsilon}^{\infty} \exp(ns)\{\Pr(N > n+1) - \Pr(N > n)\}.$$
The second term can be rewritten as $\sum_{k=0}^{\infty} \exp((n_\epsilon+k)s)\{\Pr(N > n_\epsilon+k+1) - \Pr(N > n_\epsilon+k)\}$. By Proposition 1, $\Pr(N > n) < \exp\{n_\epsilon \log(\varrho_\epsilon)\}$ for all $n \ge n_\epsilon + 1$. We also have that $\Pr(N > n_\epsilon) \le \exp\{n_\epsilon \log(\varrho_\epsilon)\}$ and $\Pr(N > n+1) \le \Pr(N > n)$. So if $\Pr(N > n_\epsilon+k) \le \exp\{(n_\epsilon+k-1)\log(\varrho_\epsilon)\}$ then $\Pr(N > n_\epsilon+k+1) \le \exp\{(n_\epsilon+k)\log(\varrho_\epsilon)\}\exp\{-2\log(\varrho_\epsilon)\}$, and hence we find that for $k = 0,1,2,\dots$
$$|\Pr(N > n_\epsilon+k+1) - \Pr(N > n_\epsilon+k)| \le \exp\{(n_\epsilon+k-1)\log(\varrho_\epsilon)\}[\exp\{-\log(\varrho_\epsilon)\} + 1].$$
For $s$ such that $s < -\log(\varrho_\epsilon)$, the modulus of the second term in the previous expansion of $E[\exp(Ns)]$ can therefore be bounded above by
$$c_\epsilon \sum_{k=0}^{\infty} \exp((n_\epsilon+k)s)\exp\{(n_\epsilon+k-1)\log(\varrho_\epsilon)\} = c_\epsilon \exp\{-\log(\varrho_\epsilon)\}\sum_{k=0}^{\infty}[\exp\{s+\log(\varrho_\epsilon)\}]^{n_\epsilon+k} = C_\epsilon \sum_{k=0}^{\infty}[\exp\{s+\log(\varrho_\epsilon)\}]^{k} = \frac{C_\epsilon}{1 - \exp\{s+\log(\varrho_\epsilon)\}},$$
where $c_\epsilon = 1 + \exp\{-\log(\varrho_\epsilon)\}$ and $C_\epsilon = c_\epsilon \exp\{n_\epsilon s + (n_\epsilon - 1)\log(\varrho_\epsilon)\}$. Thus $E[\exp(Ns)]$ converges to a finite constant for all $s \in (-\infty, -\log(\varrho_\epsilon))$.

A.0.0.4. Proof of Lemma 4. First, let $\Delta^P(Q,P) = E_P[\Delta_n(Q,P)]$.
Decompose $\log C^{(\omega)}_n(Q,P)$ as
$$\log C^{(\omega)}_n(Q,P) = \log C^{(\omega)}_n(Q,P) - \omega E_P \Delta_n(Q,P) + \omega E_P \Delta_n(Q,P) = \omega\{\Delta_n(Q,P) - \Delta_P(Q,P)\} + \omega \Delta_P(Q,P) = \omega D_n(Q,P) + \omega \Delta_P(Q,P)$$
where $D_n(Q,P) = \{\Delta_n(Q,P) - \Delta_P(Q,P)\}$. Now, consider
$$\Pr\{C^{(\omega)}_n(Q,P) < \pi^{-1}\} = \Pr\{\log C^{(\omega)}_n(Q,P) < -\log\pi\} = \Pr\Big\{\tfrac{1}{n}D_n(Q,P) + \tfrac{1}{n}\Delta_P(Q,P) < -\tfrac{\log\pi}{n\omega}\Big\} \le \underbrace{\Pr\Big\{\tfrac{1}{n}D_n(Q,P) < 0\Big\}}_{\text{Term 1}} + \underbrace{\Pr\Big\{\tfrac{1}{n}\Delta_P(Q,P) < -\tfrac{\log\pi}{n\omega}\Big\}}_{\text{Term 2}}$$
where the inequality follows from the union bound. Since $\pi \in (0,1)$, we have $-\log\pi \ge 0$. By the hypothesis of the result, $\lim_n n^{-1}\Delta_P(Q,P) = \vartheta > 0$, and Term 2 in the above equation converges to zero for all $\omega > 0$ as $n \to \infty$. It remains to analyze Term 1. To simplify notation, write $\bar\Delta_n(Q,P) := n^{-1}\Delta_n(Q,P)$ and $\bar\Delta_P(Q,P) := E_P \bar\Delta_n(Q,P)$. By hypothesis, for any $\varepsilon > 0$,
$$\Pr\big\{\big|\bar\Delta_n(Q,P) - \bar\Delta_P(Q,P)\big| \ge \varepsilon\big\} = o(1). \quad (7)$$
Decompose the event above as
$$\big\{|\bar\Delta_n(Q,P) - \bar\Delta_P(Q,P)| \ge \varepsilon\big\} = \big\{[\bar\Delta_n(Q,P) - \bar\Delta_P(Q,P)] \ge \varepsilon\big\} \cup \big\{[\bar\Delta_n(Q,P) - \bar\Delta_P(Q,P)] \le -\varepsilon\big\} = \big\{[\bar\Delta_n(Q,P) - \bar\Delta_P(Q,P)] \ge \varepsilon\big\} \cup \big\{\bar\Delta_n(Q,P) \le \bar\Delta_P(Q,P) - \varepsilon\big\}$$
where the second equality comes from re-arranging the second set. If $\bar\Delta_n \in \{\bar\Delta_n(Q,P) \le \bar\Delta_P(Q,P) + \varepsilon\}$ for all $\varepsilon > 0$, then $\bar\Delta_n \in \{\bar\Delta_n(Q,P) \le \bar\Delta_P(Q,P)\}$. Hence, from (7) and the above we have
$$\Pr\Big\{\tfrac{1}{n}D_n(Q,P) < 0\Big\} = \Pr\big\{\bar\Delta_n(Q,P) < \bar\Delta_P(Q,P)\big\} = o(1),$$
which proves that $\Pr\{C^{(\omega)}_n(Q,P) < \pi^{-1}\} \le o(1)$. Taking complements then yields the stated result.

REFERENCES

BARNARD, G. A. (1946). Sequential Tests in Industrial Statistics. Journal of the Royal Statistical Society Supplement 8 1-26. Contains four pages of discussion.
BREHMER, J. R. and GNEITING, T. (2021). Scoring interval forecasts: Equal-tailed, shortest, and modal interval. Bernoulli 27 1993-2110.
BROWN,
B. M. (1969). Moments of a stopping rule related to the central limit theorem. Annals of Mathematical Statistics 40 1236-1249.
DEGROOT, M. H. (1970). Optimal Statistical Decisions. McGraw-Hill, New York.
DEY, N., MARTIN, R. and WILLIAMS, J. P. (2024a). Anytime-Valid Generalized Universal Inference on Risk Minimizers. arXiv preprint arXiv:2402.00202.
DEY, N., MARTIN, R. and WILLIAMS, J. P. (2024b). Multiple Testing in Generalized Universal Inference. arXiv preprint arXiv:2412.01008.
DIEBOLD, F. X. (2015). Comparing predictive accuracy, twenty years later: A personal perspective on the use and abuse of Diebold–Mariano tests. Journal of Business & Economic Statistics 33 1–1.
DIEBOLD, F. X. and MARIANO, R. S. (1995). Comparing predictive accuracy. Journal of Business & Economic Statistics 13 253–263.
DOOB, J. L. (1953). Stochastic Processes. John Wiley, New York.
DVORETZKY, A., KIEFER, J. and WOLFOWITZ, J. (1953). Sequential decision problems for processes with continuous time parameter. Annals of Mathematical Statistics 24 254-264.
FERGUSON, T. S. (1967). Mathematical Statistics: A Decision Theoretic Approach. Academic Press.
FISSLER, T. and ZIEGEL, J. F. (2016). Higher order elicitability and Osband's principle. The Annals of Statistics 44 1680–1707.
FRAZIER, D. T., COVEY, R., MARTIN, G. M. and POSKITT, D. S. (2023). Solving the forecast combination puzzle. Technical Report, arXiv preprint arXiv:2308.05263.
GIACOMINI, R. and WHITE, H. (2006). Tests of conditional predictive ability. Econometrica 74 1545–1578.
GNEITING, T. (2011). Making and evaluating point forecasts. Journal of the American Statistical Association 106 746–762.
GNEITING, T., BALABDAOUI, F. and RAFTERY, A. E. (2007). Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 69 243–268.
GNEITING, T. and RAFTERY, A. E. (2007).
Strictly Proper Scoring Rules, Prediction, and Estimation. Journal of the American Statistical Association 102 359–378.
GNEITING, T. and RANJAN, R. (2011). Comparing Density Forecasts Using Threshold- and Quantile-Weighted Scoring Rules. Journal of Business & Economic Statistics 29 411-422.
GRÜNWALD, P., DE HEIDE, R. and KOOLEN, W. (2024). Safe Testing. Journal of the Royal Statistical Society Series B: Statistical Methodology 86 1091-1128.
HALL, W. J. (1970). On Wald's equations in continuous time. Journal of Applied Probability 7 59-68.
HALMOS, P. R. (1950). Measure Theory. Van Nostrand Reinhold, New York.
HANSEN, P. R. (2005). A test for superior predictive ability. Journal of Business & Economic Statistics 23 365–380.
HANSEN, P. R., LUNDE, A. and NASON, J. M. (2011). The model confidence set. Econometrica 79 453–497.
LAI, T. Z., GROSS, S. T. and SHEN, D. B. (2011). Evaluating probability forecasts. Annals of Statistics 39 2356-2382.
LAZARUS, E., LEWIS, D. J., STOCK, J. H. and WATSON, M. W. (2018). HAR Inference: Recommendations for Practice. Journal of Business and Economic Statistics 36 541-559.
MARTIN, G. M., LOAIZA-MAYA, R., MANEESOONTHORN,
W., FRAZIER, D. T. and RAMÍREZ-HASSAN, A. (2022). Optimal probabilistic forecasts: When do they work? International Journal of Forecasting 38 384–406.
NOCEDAL, J. and WRIGHT, S. (2006). Numerical Optimization, 2nd ed. Springer.
PATTON, A. J. (2020). Comparing possibly misspecified forecasts. Journal of Business & Economic Statistics 38 796–809.
POSKITT, D. S. (1987). Precision, complexity and Bayesian model determination. Journal of the Royal Statistical Society: Series B 49 199-208.
POSKITT, D. S. and SENGARAPILLAI, A. (2010). Dual P-Values, Evidential Tension and Balanced Tests. Department of Econometrics and Business Statistics Working Paper 15/10.
ROBBINS, H. and SAMUEL, E. (1966). An extension of a lemma of Wald. Journal of Applied Probability 3 272-273.
SHAFER, G. (2021). Testing by betting: A strategy for statistical and scientific communication. Journal of the Royal Statistical Society Series A 184 407-431.
TASHMAN, L. J. (2000). Out-of-sample tests of forecasting accuracy: An analysis and review. International Journal of Forecasting 16 437-450.
VOVK, V. and WANG, R. (2021). E-values: Calibration, combination and applications. Annals of Statistics 39 1736-1754.
WALD, A. (1947). Sequential Analysis. John Wiley, New York.
WANG, R. and RAMDAS, A. (2022). False discovery rate control with e-values. Journal of the Royal Statistical Society Series B 84 822-852.
WASSERMAN, L., RAMDAS, A. and BALAKRISHNAN, S. (2020). Universal inference. Proceedings of the National Academy of Sciences 117 16880–16890.
WEST, K. D. (1996). Asymptotic inference about predictive ability. Econometrica 64 1067–1084.
WETHERILL, G. B. (1975). Sequential Methods in Statistics, 2nd ed. Chapman and Hall, Cambridge.
WHITE, H. (2000). A reality check for data snooping. Econometrica 68 1097–1126.
YEN, Y. and YEN, T. (2021). Testing forecast accuracy of expectiles and quantiles with the extremal consistent loss functions.
International Journal of Forecasting 37 733-758.
ZHU, Y. and TIMMERMANN, A. (2020). Can Two Forecasts Have the Same Conditional Expected Accuracy? arXiv:2006.03238v2 [stat.ME].
arXiv:2505.09098v1 [cs.IT] 14 May 2025

Statistical Mean Estimation with Coded Relayed Observations

Yan Hao Ling, Zhouhao Yang, and Jonathan Scarlett

Abstract. We consider a problem of statistical mean estimation in which the samples are not observed directly, but are instead observed by a relay ("teacher") that transmits information through a memoryless channel to the decoder ("student"), who then produces the final estimate. We consider the minimax estimation error in the large deviations regime, and establish achievable error exponents that are tight in broad regimes of the estimation accuracy and channel quality. In contrast, two natural baseline methods are shown to yield strictly suboptimal error exponents. We initially focus on Bernoulli sources and binary symmetric channels, and then generalize to sub-Gaussian and heavy-tailed settings along with arbitrary discrete memoryless channels.

I. INTRODUCTION

In this paper, we are interested in the fundamental statistical problem of mean estimation, in which $n$ samples $X_1, \ldots, X_n$ are drawn independently from some unknown distribution $P_X$, and we are interested in estimating $\theta^* := E[X]$. The related work on mean estimation is extensive, and is briefly discussed in Section I-A. The main distinction from standard mean estimation in this paper is that the samples are not observed directly by the estimator, but are instead observed by an intermediate agent and transmitted through a noisy communication channel. The setup is summarized in Figure 1, and is detailed as follows:
• There are two agents, which we call the teacher and student following the terminology of [1] (alternatively, they may be referred to as the relay and decoder).
• The teacher sequentially observes $X_1, \ldots, X_n$.
• At each time $t \in \{1, \ldots, n\}$, the teacher sends information to the student through a single use of a discrete memoryless channel (DMC); the input at time $t$ is denoted by $W_t$, the output by $Z_t$, and the channel by $P_{Z|W}$.
• Importantly, $W_t$ is only allowed to depend on the previous samples $X_1, \ldots, X_{t-1}$.
• Based on $Z_1, \ldots, Z_n$, the student forms an estimate $\hat\theta_n$ of $\theta^*$.

Y. H. Ling is with the Department of Computer Science, School of Computing, National University of Singapore (NUS). Z. Yang is with the Department of Applied Mathematics and Statistics at Johns Hopkins University. J. Scarlett is with the Department of Computer Science, Department of Mathematics, and Institute of Data Science, National University of Singapore. This research is supported by the Singapore National Research Foundation under its Global AI Visiting Professorship program. Emails: lingyh@nus.edu.sg; zyang145@jh.edu; scarlett@comp.nus.edu.sg

May 15, 2025 DRAFT

Fig. 1. Illustration of our problem setup; the quantity being estimated is $\theta^* = E[X]$.

We will initially focus on the case that $X \sim \mathrm{Ber}(\theta)$ and $P_{Z|W}$ is a binary symmetric channel (BSC) with parameter $p \in (0, \tfrac{1}{2})$, as this already captures the main ideas and difficulties. We will then turn to more general sources and channels in the later sections. In general, mean estimation can come with several different goals, such as mean squared error guarantees and small/moderate/large deviations bounds. In this paper,
https://arxiv.org/abs/2505.09098v1
we focus on the large deviations regime; our reasoning/motivation for this is discussed in Section I-B. Specifically, for some constant $\varepsilon > 0$ (not depending on $n$), we seek to minimize
$$P(|\hat\theta_n - \theta^*| > \varepsilon) \quad (1)$$
and analyze how quickly this quantity decays as a function of $n$. We will generally assume that $\varepsilon$ and $P_{Z|W}$ are known throughout the system, though in our protocols only the student will use this knowledge; the teacher will not, except "indirectly" in the sense of knowing a good error-correcting code for the channel $P_{Z|W}$. We say that a teaching and learning strategy achieves an error exponent $E$ for a family of distributions $\mathcal{P}_X$ if
$$\inf_{P_X \in \mathcal{P}_X} \limsup_{n\to\infty} -\frac{1}{n}\log P(|\hat\theta_n - \theta^*| > \varepsilon) \ge E, \quad (2)$$
and we define $E^* = E^*(\mathcal{P}_X, P_{Z|W}, \varepsilon)$ as the highest possible error exponent among all protocols. Our goal is to obtain upper and lower bounds on the optimal error exponent $E^*$, for various families $\mathcal{P}_X$ of interest. We highlight that our focus is on information-theoretic limits rather than computation or "practical" performance, though we will note in Section V-D that our protocols have polynomial runtime. We also note that our focus is the scalar-valued case in which $X$ takes values in $\mathbb{R}$, but we will discuss extensions to $\mathbb{R}^d$ in Section V-C.

A. Related work

1-bit and low-rate teaching/learning. Our work is closely related to a recent line of works on teaching/learning a single bit of information [1] and the essentially equivalent problem of error exponents for low-rate relaying over a tandem of channels [2]. The problem studied in [1], [2] resembles our problem specialized to Bernoulli+BSC, but the goal is to estimate a single bit observed by the teacher through another BSC, rather than estimating the Bernoulli distribution parameter. Hence, a fundamental difference is the distinction between learning a discrete quantity (e.g., a single bit) vs. learning a continuous quantity (e.g., a Bernoulli parameter).
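To make the criterion in (1)-(2) concrete, the following small sketch (ours, not part of the paper; the helper names are our own) computes the exact deviation probability of the sample mean for a Bernoulli source observed directly, and checks it against the standard Chernoff/Sanov bound $2e^{-n D(\theta+\varepsilon\|\theta)}$ that governs the exponential decay rate:

```python
import math

def kl_bern(a, b):
    """Binary KL divergence D(a || b) in nats."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def deviation_prob(n, theta, eps):
    """Exact P(|S/n - theta| > eps) for S ~ Binomial(n, theta)."""
    total = 0.0
    for s in range(n + 1):
        if abs(s / n - theta) > eps:
            total += math.comb(n, s) * theta**s * (1 - theta)**(n - s)
    return total

n, theta, eps = 200, 0.5, 0.1
p_dev = deviation_prob(n, theta, eps)
# Chernoff bound: each tail is at most exp(-n * D(theta + eps || theta)).
chernoff = 2 * math.exp(-n * kl_bern(theta + eps, theta))
print(p_dev, chernoff)            # deviation probability and its upper bound
print(-math.log(p_dev) / n)       # finite-n exponent; -> Sanov exponent as n grows
```

The printed finite-$n$ exponent exceeds the asymptotic Sanov value because of the polynomial prefactor in the binomial tail; the bound itself holds at every $n$.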
We will find that some ideas such as block-structured teaching are common to both problems, but the details and analysis are largely distinct; see Section I-C for further discussion. For the 1-bit teaching/learning problem, following the initial works in [1], [2], the optimal error exponent was identified for BSCs in [3], and for general binary-input DMCs in [4]. Extensions to multi-bit and multi-hop settings were given in [5], [6]. It was noted in [2] that the problem has connections to a "many-hop" (number of hops linear in the block length) problem called information velocity, which was subsequently studied in [7], [8], [9].

Mean estimation and large deviations theory. Mean estimation is one of the most fundamental and widely-studied problems in statistics, and we do not attempt to summarize the literature in detail. For distributions with light tails, a typical strategy is to estimate using the sample mean, and then apply concentration inequalities [10], [11] and/or large deviations analyses [12]. For heavy-tailed distributions, the sample mean is typically a very poor estimate in the large deviations regime, and other estimators are adopted such as the trimmed mean or median-of-means [13]. Although the problem that we study is one of mean estimation, we will heavily rely on
tools from binary hypothesis testing (e.g., see [14], [15], [16]), and our strategy will be to combine the results of several hypothesis tests to produce the final mean estimate. In Remark 15, we will discuss how this hypothesis testing approach can outperform approaches based on the sample mean, even in the setting of direct observations.

Communication-constrained statistical problems. There have recently been a wide range of works studying estimation and other statistical problems under various kinds of communication constraints, e.g., see [17], [18], [19], [20], [21], [22] and the references therein. Certain classical works also fall in this category, e.g., [23]. However, we are unaware of any such works that closely align with our problem setup; the works that we are aware of have significantly more differences to ours (compared to the "closer" 1-bit teaching/learning problems outlined above) and adopt substantially different methods. For example, notable differences include considering other recovery criteria such as mean squared error [18], [19], considering noiseless bits instead of noisy channels [17], [18], [19], [23], considering distinct problems such as distribution learning and testing [17], [21], [23], [24], and considering problems where all samples are observed before coding [23].

B. Discussion on large deviations and other regimes

In principle, our problem setup could be considered alongside several recovery criteria, including mean squared error, constant error probability, moderate deviations, or large deviations bounds. We focus on the latter for two main reasons:
1) In regimes other than large deviations, very simple strategies can be near-optimal in terms of the leading asymptotic terms (see below), and thus understanding the benefit of more complex protocols would require introducing higher-order asymptotic terms or moving to a non-asymptotic analysis.
2) The large deviations regime closely aligns with the recent line of works on 1-bit teaching/learning and low-rate relaying [2], [1], [3], [5], which itself has interesting connections to other problems such as information velocity [2], [7].

To elaborate on the first of these points, suppose that we consider a regime in which the deviation probability decays sub-exponentially in $n$, say with $\varepsilon_n \to 0$ in a manner such that we can expect $P(|\hat\theta_n - \theta^*| > \varepsilon_n) \le e^{-\Theta(n^{0.99})}$. Then fix $\delta \in (0,1)$ and consider a protocol in which the teacher estimates $\theta^*$ using the first $(1-\delta)n$ samples, and transmits a slightly quantized version of that estimate using the last $\delta n$ channel uses. Since $\delta n$ is linear in $n$, we can use $n^C$ quantization levels (for any fixed $C > 0$) while still maintaining exponential decay in the error probability of transmitting the quantized estimate. Thus, we attain essentially identical performance to directly estimating $\theta^*$ from $(1-\delta)n$ samples, which amounts to "losing" an arbitrarily small fraction of the samples. As we will see in Lemma 4 below and our numerical evaluations in Section III-A, the analogous loss of such an approach can become much more significant in the large deviations regime.

C. Discussion of protocols

In this subsection, we discuss a number of baselines and simple protocols, as well as briefly summarizing our own approach. The corresponding exponents will be compared numerically in Section III-A. Here and throughout the paper, it will be useful
to write the results in terms of two error exponents associated with the source and the channel individually, defined as follows.

Definition 1. For a given class $\mathcal{P}_X$ of source distributions and an accuracy parameter $\varepsilon > 0$, we define the source exponent (or direct observation exponent) $E^{\mathrm{src}}_\varepsilon(\mathcal{P}_X)$ as the highest $E$ such that the error exponent $E$ is achieved (in the sense of (2)) by some $\hat\theta_n$ formed directly from the $n$ samples $X_1, \ldots, X_n$.

Definition 2. For a discrete memoryless channel $P_{Z|W}$ and a number of messages $M$, we define the channel exponent $E^{\mathrm{chan}}_M(P_{Z|W})$ as the highest achievable error exponent (in the standard sense [14]; see Section II-B) for sending $M$ messages in $n$ channel uses. Moreover, we define the zero-rate error exponent $E^{\mathrm{chan}}(P_{Z|W}, 0) = \lim_{M\to\infty} E^{\mathrm{chan}}_M(P_{Z|W})$ (see Lemma 9 below for a justification of this equality).

When the class of distributions and/or the channel are clear from the context, we adopt the shorthands $E^{\mathrm{src}}_\varepsilon$, $E^{\mathrm{chan}}_M$, and $E^{\mathrm{chan}}(0)$. We briefly note some expressions for these quantities in special cases of interest (with $D(a\|b) = a\log\frac{a}{b} + (1-a)\log\frac{1-a}{1-b}$ being the binary KL divergence):
• For Bernoulli sources, we have $E^{\mathrm{src}}_\varepsilon = D(\tfrac{1}{2}\|\tfrac{1}{2}+\varepsilon)$ for $\varepsilon \in (0, \tfrac{1}{2})$ (see Section III).
• For Gaussian sources with mean in $[0,1]$ and variance $\sigma^2$, we have $E^{\mathrm{src}}_\varepsilon = \frac{\varepsilon^2}{2\sigma^2}$ (see Section IV).
• For the BSC, we have $E^{\mathrm{chan}}_M = c_M \cdot \tfrac{1}{2}D(\tfrac{1}{2}\|p)$ with $c_M \approx \frac{M}{M-1}$, and $E^{\mathrm{chan}}(0) = \tfrac{1}{2}D(\tfrac{1}{2}\|p)$ (see Lemma 11 below, which also makes the "$\approx$" statement more precise).

We now proceed to outline various protocols and their associated error exponents.

1) Non-causal setting: Recall that an important feature of our setting is that the $i$-th transmitted symbol $W_i$ can only depend on the previous samples $X_1, \ldots, X_{i-1}$. Nevertheless, it is useful to consider a hypothetical non-causal setting where the teacher first receives $X_1, \ldots, X_n$ and then transmits the entire sequence $W_1, \ldots, W_n$. In this setup, we have the following.

Lemma 3.
In the non-causal setting, for any distribution class $\mathcal{P}_X$ whose set of possible means is $\Theta^* = [0,1]$,¹ and any $\varepsilon \in (0, \tfrac{1}{2})$, we have the following:
• (Converse) Any non-causal protocol achieves error exponent at most $\min\{E^{\mathrm{src}}_\varepsilon, E^{\mathrm{chan}}_{\lceil 1/(2\varepsilon)\rceil}\}$.
• (Achievability) For any $\delta \in (0,1)$, there exists a non-causal protocol achieving error exponent at least $\min\{E^{\mathrm{src}}_{(1-\delta)\varepsilon}, E^{\mathrm{chan}}_{\lceil 1/(2\delta\varepsilon)\rceil}\}$.

Proof. See Appendix A.

¹This assumption is trivial for the Bernoulli class, and we will discuss in Section V how it can be relaxed in other cases.

Observe that in the limit as $\delta \to 0$, the achievable exponent approaches
$$\min\{E^{\mathrm{src}}_\varepsilon, E^{\mathrm{chan}}(0)\}. \quad (3)$$
Thus, whenever the first term achieves this minimum, this protocol achieves the optimal direct-observation exponent $E^{\mathrm{src}}_\varepsilon$, i.e., the exponent that would be achieved if the student had direct access to $X_1, \ldots, X_n$.

2) One-shot estimate-and-forward: Returning to the causal setting, a straightforward protocol is to perform the following for some parameters $\lambda, \delta \in (0,1)$:
• Use the first $(1-\lambda)n$ samples to estimate $\theta^*$ to within accuracy $(1-\delta)\varepsilon$;
• Use the last $\lambda n$ samples to transmit the estimate to within accuracy $\delta\varepsilon$.

This simple protocol (which is described
more formally in the proof of Lemma 4) can be analyzed in a similar manner to the non-causal setting, giving the following.

Lemma 4. For any distribution class $\mathcal{P}_X$ whose set of possible means is $\Theta^* = [0,1]$, the one-shot estimate-and-forward protocol described above with parameters $(\lambda, \delta)$ attains an error exponent of
$$\min\{(1-\lambda)E^{\mathrm{src}}_{(1-\delta)\varepsilon}, \lambda E^{\mathrm{chan}}_{\lceil 1/(2\delta\varepsilon)\rceil}\}.$$
Proof. See Appendix C.

We see that this approach suffers from shortening the "effective block length" in each term, thus multiplying the first term by $1-\lambda$ and the second term by $\lambda$. In particular, this means that the exponent is strictly worse than the direct-observation exponent whenever $E^{\mathrm{src}}_\varepsilon > 0$ and $E^{\mathrm{chan}}(0) < \infty$. We note that a straightforward calculation reveals that for fixed $\delta$, the best choice in Lemma 4 is
$$\lambda = \frac{E^{\mathrm{src}}_{(1-\delta)\varepsilon}}{E^{\mathrm{src}}_{(1-\delta)\varepsilon} + E^{\mathrm{chan}}_{\lceil 1/(2\delta\varepsilon)\rceil}},$$
which equates the two terms in the $\min\{\cdot,\cdot\}$.

3) Simple forwarding: In the case that the random variable $X$ has the same support as the channel input $W$, another natural strategy is simple forwarding, in which the previous sample observed is transmitted directly, i.e., $W_i = X_{i-1}$. In certain cases, the decoder can then estimate $\theta$ directly from the received sequence $Z_1, \ldots, Z_n$; for concreteness, we demonstrate this idea in the specific case of a Bernoulli source and a BSC.

Lemma 5. Consider the case of a Bernoulli source and a BSC with parameter $p \in (0, \tfrac{1}{2})$, and fix $\varepsilon \in (0, \tfrac{1}{2})$. When the teacher performs simple forwarding, there exists a design of the student such that an error exponent of $E^{\mathrm{src}}_{(1-2p)\varepsilon}$ is achieved.

A limitation of this protocol is that the randomness of the source and the noise from the channel get "mixed together", leading to an end-to-end distribution that is "noisier" than either of the two separately. Since $E^{\mathrm{src}}_\varepsilon = D(\tfrac{1}{2}\|\tfrac{1}{2}+\varepsilon)$ is strictly increasing in $\varepsilon$, we see that the exponent in Lemma 5 is strictly worse than the direct-observation exponent $E^{\mathrm{src}}_\varepsilon$ for all $p \in (0, \tfrac{1}{2})$.
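The de-mixing idea behind Lemma 5 can be sketched in a few lines: under simple forwarding, $\Pr(Z = 1) = p + \theta(1-2p)$, so the student can invert this affine map. The snippet below is our own illustration (function and parameter names are hypothetical), not the paper's code:

```python
import random

def simple_forwarding_estimate(n, theta, p, seed=0):
    """Teacher forwards W_i = X_{i-1} over a BSC(p); the student inverts the
    Bernoulli->BSC mixture, using E[Z] = p + theta * (1 - 2p)."""
    rng = random.Random(seed)
    z_sum = 0
    for _ in range(n):
        x = 1 if rng.random() < theta else 0       # source sample
        z = x ^ (1 if rng.random() < p else 0)     # BSC crossover
        z_sum += z
    z_bar = z_sum / n
    return (z_bar - p) / (1 - 2 * p)               # de-mixed estimate of theta

theta_hat = simple_forwarding_estimate(200_000, theta=0.3, p=0.1)
print(theta_hat)  # close to 0.3
```

The estimate is consistent, but the exponent cost relative to direct observation is exactly the shrinkage of $\varepsilon$ to $(1-2p)\varepsilon$ in Lemma 5: the mixed randomness slows the exponential decay of the deviation probability.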
4) Existing 1-bit teaching and learning protocol: In [1], [2], [3], the closely related problem of 1-bit 2-hop relaying was considered. In their setup, there is only 1 bit (rather than a continuous parameter) to be estimated, but the teacher only receives observations of that bit passed through a BSC with a known crossover probability. An asymptotically optimal strategy, proposed in [3], is to have the teacher read in blocks of size $k \ll n$, and transmit sequences of the form $1\ldots10\ldots0$ in size-$k$ blocks, where the number of 1s sent is chosen to be a carefully-designed function of the number of 1s received in the previous block. One could conceivably consider a similar approach in our setup. We do not make any claims on the degree of (sub)optimality of such a protocol, but we found it relatively difficult to analyze, and it was unclear to us what function should be used to decide the number of 1s sent.

5) Our approach: We still use a block-structured strategy in a similar spirit as [3] (see Section III for details), but within each
block, we simply have the teacher send information about the sum or average of its received symbols using a general codebook (whose codewords need not be of the form $1\ldots10\ldots0$). We then consider a decoder that performs binary hypothesis tests on a number of $(\theta, \theta')$ pairs, constructs a set $S$ of $\theta$ values that are favored over all other values within distance $2\varepsilon$, and outputs $\hat\theta_n$ as the midpoint of $S$. See Section III-B for the details.

D. Summary of our results

We now proceed to summarize our main results:
• In Section III, we study the case of a Bernoulli source and a binary symmetric channel. We provide a protocol that attains an error exponent of $\min\{E^{\mathrm{src}}_\varepsilon, E^{\mathrm{chan}}(0)\}$, and we show (via Lemma 3) that no protocol can attain an exponent better than $\min\{E^{\mathrm{src}}_\varepsilon, (1 + 2\varepsilon + O(\varepsilon^2))E^{\mathrm{chan}}(0)\}$, where an exact expression is also available for the $\varepsilon$-dependent factor, in particular equaling $\frac{1}{1-2\varepsilon}$ under "favorable rounding". Thus, we attain matching achievability and converse whenever the former $\min\{\cdot,\cdot\}$ is attained by the first term, and more generally a multiplicative gap of at most $1 + 2\varepsilon + O(\varepsilon^2)$. In addition, the converse holds even for non-causal protocols.
• In Section IV, we consider general sub-Gaussian sources and general discrete memoryless channels. We provide a protocol that attains an error exponent of $\min\{E^{\mathrm{src}}_\varepsilon, E^{\mathrm{chan}}(0)\}$, and we know from Lemma 3 that no protocol can attain an exponent better than $\min\{E^{\mathrm{src}}_\varepsilon, E^{\mathrm{chan}}_{\lceil 1/(2\varepsilon)\rceil}\}$, where $E^{\mathrm{chan}}_{\lceil 1/(2\varepsilon)\rceil} \ge E^{\mathrm{chan}}(0)$ but it holds that $E^{\mathrm{chan}}_{\lceil 1/(2\varepsilon)\rceil} \to E^{\mathrm{chan}}(0)$ as $\varepsilon \to 0$. Thus, we attain matching achievability and converse whenever the former $\min\{\cdot,\cdot\}$ is attained by the first term, and even in the second term the multiplicative gap approaches 1 as $\varepsilon$ decreases. In addition, the converse holds even for non-causal protocols.
• In Section V, we extend the results of Section IV beyond the sub-Gaussian case, in particular allowing general heavy-tailed distributions with a finite variance.
We also provide partial results for vector-valued sources, and we discuss the (polynomial-time) computational complexity of our protocols.

II. PRELIMINARIES

In this section, we introduce some useful mathematical tools and results that will be used throughout the paper.

A. Distance measures and distance-based codes

For any two discrete distributions $P_1, P_2$ defined on the same set, we denote the Bhattacharyya coefficient by
$$\rho(P_1, P_2) = \sum_x \sqrt{P_1(x)P_2(x)}, \quad (4)$$
and the Bhattacharyya distance by
$$d_B(P_1, P_2) = -\log\rho(P_1, P_2). \quad (5)$$
For $\theta_1, \theta_2 \in [0,1]$, we will also adopt the shorthand
$$\rho(\theta_1, \theta_2) = \rho(\mathrm{Ber}(\theta_1), \mathrm{Ber}(\theta_2)) \quad (6)$$
to represent the Bhattacharyya coefficient between the corresponding Bernoulli distributions, and similarly for $d_B(\theta_1, \theta_2)$ and $D(\theta_1\|\theta_2)$. The following simple identity will be useful.

Lemma 6. For any $\varepsilon \in (0, 1/2)$, it holds that
$$d_B(1/2-\varepsilon, 1/2+\varepsilon) = D(1/2\|1/2+\varepsilon) = D(1/2\|1/2-\varepsilon) = -\tfrac{1}{2}\log(1-4\varepsilon^2). \quad (7)$$
Proof. For $d_B$, a direct calculation gives $\rho(1/2-\varepsilon, 1/2+\varepsilon) = 2\sqrt{(1/2-\varepsilon)(1/2+\varepsilon)} = 2\sqrt{1/4-\varepsilon^2}$, and taking the negative log and simplifying gives $-\frac{1}{2}\log(1-4\varepsilon^2)$. For the relative entropy, a direct calculation gives $D(1/2\|1/2+\varepsilon) = \frac{1}{2}\log\frac{1/2}{1/2+\varepsilon} + \frac{1}{2}\log\frac{1/2}{1/2-\varepsilon}$, and we again get $-\frac{1}{2}\log(1-4\varepsilon^2)$ by simplifying; note also that $D(1/2\|p) = D(1/2\|1-p)$.

Consider a discrete memoryless channel $P_{Z|W}$, with input $W$ and output $Z$. Letting $\vec{W}, \vec{W}'$ be fixed strings of
the same length over $\mathcal{W}$, we adopt the shorthand
$$\rho(\vec{W}, \vec{W}', P_{Z|W}) = \rho\big(P(\cdot|\vec{W}), P(\cdot|\vec{W}')\big) \quad (8)$$
with $P(\cdot|\vec{W})$ being the distribution of the output string $\vec{Z}$ when $\vec{W}$ is sent over the memoryless channel $P_{Z|W}$, and similarly for $P(\cdot|\vec{W}')$. Similarly, we write $d_B(\vec{W}, \vec{W}', P_{Z|W}) = -\log\rho(\vec{W}, \vec{W}', P_{Z|W})$. The following tensorization property of $d_B$ is well known (e.g., see [14]).

Lemma 7. For any two distributions $P_1, P_2$ of the form $P_1(\vec{x}) = \prod_{i=1}^{k} P_{1,i}(x_i)$ and $P_2(\vec{x}) = \prod_{i=1}^{k} P_{2,i}(x_i)$, it holds that
$$\rho(P_1, P_2) = \prod_{i=1}^{k}\rho(P_{1,i}, P_{2,i}), \qquad d_B(P_1, P_2) = \sum_{i=1}^{k} d_B(P_{1,i}, P_{2,i}). \quad (9)$$
Consequently, given two codewords $\vec{W} = (W_1, \ldots, W_k)$ and $\vec{W}' = (W'_1, \ldots, W'_k)$ transmitted over a DMC $P_{Z|W}$, we have $\rho(\vec{W}, \vec{W}', P_{Z|W}) = \prod_{i=1}^{k}\rho(W_i, W'_i, P_{Z|W})$ and $d_B(\vec{W}, \vec{W}', P_{Z|W}) = \sum_{i=1}^{k} d_B(W_i, W'_i, P_{Z|W})$.

The following lemma states an achievable minimum distance for low-rate codes; this is a standard result, but we provide a short proof for completeness.

Lemma 8. For any constant $C > 0$, there exists a length-$k$ codebook with $k^C$ binary codewords and minimum Hamming distance $k/2 - o(k)$ as $k \to \infty$.

Proof. Consider random coding, where each codeword is chosen uniformly at random from $\{0,1\}^k$. The Hamming distance between a pair of strings follows a $\mathrm{Binomial}(k, \tfrac{1}{2})$ distribution. The probability of two specific codewords having Hamming distance at most $(\tfrac{1}{2}-c)k$ is exponentially small in $k$. Since there are polynomially many codewords, a union bound over all pairs establishes the existence of a codebook with minimum distance $(\tfrac{1}{2}-c)k$ when $k$ is large enough. Since $c$ is arbitrarily small, the result follows.

B. Channel coding error exponents

This subsection concerns error exponents for communication over discrete memoryless channels, not necessarily relating to mean estimation. For a fixed discrete memoryless channel $P_{Z|W}$ and integers $n$ and $M$, let $P^*_e(n, M)$ denote the smallest possible error probability for sending one of $M$ messages via $n$ uses of $P_{Z|W}$.
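The distance measures of Section II-A are easy to check numerically. The sketch below (ours, not from the paper; helper names are our own) verifies the identity $d_B(1/2-\varepsilon, 1/2+\varepsilon) = -\tfrac{1}{2}\log(1-4\varepsilon^2)$ from Lemma 6 and the tensorization of $d_B$ from Lemma 7 over two independent coordinates:

```python
import math

def rho(p1, p2):
    """Bhattacharyya coefficient of two distributions given as dicts."""
    return sum(math.sqrt(p1.get(x, 0.0) * p2.get(x, 0.0)) for x in set(p1) | set(p2))

def d_bhatt(p1, p2):
    """Bhattacharyya distance d_B = -log rho."""
    return -math.log(rho(p1, p2))

def bern(t):
    return {0: 1.0 - t, 1: t}

def product(p, q):
    """Product distribution over pairs (x, y)."""
    return {(x, y): p[x] * q[y] for x in p for y in q}

eps = 0.1
p_minus, p_plus = bern(0.5 - eps), bern(0.5 + eps)

# Lemma 6: d_B(1/2 - eps, 1/2 + eps) = -(1/2) log(1 - 4 eps^2).
identity = -0.5 * math.log(1 - 4 * eps**2)
print(d_bhatt(p_minus, p_plus), identity)

# Lemma 7 (tensorization): d_B adds over independent coordinates.
d_two = d_bhatt(product(p_minus, p_minus), product(p_plus, p_plus))
print(d_two, 2 * d_bhatt(p_minus, p_plus))
```

The factorization in the second check is exact, since $\rho$ of a product distribution is the product of the coordinate-wise $\rho$ values.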
(For the purposes of error exponents, it is inconsequential whether this error probability is for the worst-case message or a uniformly random message.) For a given rate $R > 0$, let
$$E^{\mathrm{chan}}(R) = \lim_{n\to\infty} -\frac{1}{n}\log P^*_e(n, \exp(nR)) \quad (10)$$
be the associated optimal error exponent, and define $E^{\mathrm{chan}}(0)$ by continuity, i.e.,
$$E^{\mathrm{chan}}(0) = \lim_{R\to 0^+} E^{\mathrm{chan}}(R). \quad (11)$$
Similarly, let
$$E^{\mathrm{chan}}_M = \lim_{n\to\infty} -\frac{1}{n}\log P^*_e(n, M) \quad (12)$$
be the optimal error exponent for sending a constant number $M$ of messages. Then, we have the following.

Lemma 9. ([14, Theorem 4])
1) For any discrete memoryless channel $P_{Z|W}$, we have
$$E^{\mathrm{chan}}(0) = \max_{P_W} \sum_{w}\sum_{w'} P_W(w) P_W(w') d_B(w, w', P_{Z|W}), \quad (13)$$
where the maximum over $P_W$ is taken over all probability distributions over the inputs of the channel.
2) It holds that
$$\lim_{M\to\infty} E^{\mathrm{chan}}_M = E^{\mathrm{chan}}(0). \quad (14)$$

For comparing various bounds, it will be useful to write $E^{\mathrm{chan}}_M$ as a fraction of $E^{\mathrm{chan}}(0)$, and for this purpose we introduce the following definition and lemma.

Definition 10. For any DMC $P_{Z|W}$, we define $c_M = c_M(P_{Z|W})$ as the value such that $E^{\mathrm{chan}}_M = c_M E^{\mathrm{chan}}(0)$.

Lemma 11. ([15, Thm. 3 and Cor. 3.2] and [14, Eq. (1.19)]) The quantity $c_M$ satisfies the following:
(i) For the BSC, we have
$$c_M = \begin{cases} \frac{M}{M-1} & M \text{ is even} \\ \frac{4\lfloor M/2\rfloor\lceil M/2\rceil}{M(M-1)} & M \text{ is odd}. \end{cases} \quad (15)$$
(ii) For any DMC, we
have the following as $M \to \infty$:
$$c_M \ge \frac{M}{M-1} - O\Big(\frac{1}{M^2}\Big), \qquad c_M \le 1 + O\Big(\frac{1}{\sqrt{\log\log M}}\Big), \quad (16)$$
and hence $\lim_{M\to\infty} c_M = 1$.

The following corollary for low rates will also be useful; this serves as a natural counterpart to Lemma 8.

Corollary 12. For any DMC $P_{Z|W}$ and any $C > 0$, there exists a length-$k$ codebook over $P_{Z|W}$ containing $k^C$ codewords such that $d_B(\vec{W}, \vec{W}', P_{Z|W}) \ge (k - o(k))E^{\mathrm{chan}}(0)$ for any pair of distinct codewords $\vec{W}$ and $\vec{W}'$.

Proof. The proof is analogous to that of Lemma 8 but with $d_B$ replacing the Hamming distance; see Appendix D for the details.

Remark 13. We allow both the cases $E^{\mathrm{chan}}(0) < \infty$ and $E^{\mathrm{chan}}(0) = \infty$, but we note that in the latter case the optimal error exponent for mean estimation can be attained by simply using one-shot estimate-and-forward (Lemma 4) and taking $\lambda \to 0$ and $\delta \to 0$.

III. BERNOULLI SOURCE AND BINARY SYMMETRIC CHANNEL

In this section, we prove our first main result, stated as follows.

Theorem 14. (Bernoulli+BSC Achievability) Under a Bernoulli source, a BSC with parameter $p \in (0, \tfrac{1}{2})$, and an accuracy parameter $\varepsilon \in (0, \tfrac{1}{2})$, the following error exponent is achievable:
$$E^* \ge \min\Big\{D\Big(\tfrac{1}{2}\,\Big\|\,\tfrac{1}{2}+\varepsilon\Big),\ \tfrac{1}{2}D(1/2\,\|\,p)\Big\}. \quad (17)$$

Remark 15. We make the following remarks on this result:
1) The exponent $D(\tfrac{1}{2}\|\tfrac{1}{2}+\varepsilon)$ can in fact exceed what would be achieved by the sample mean estimator (in the setting of direct observations), which is $\min\{D(\theta^*-\varepsilon\|\theta^*), D(\theta^*+\varepsilon\|\theta^*)\}$ by the method of types or Sanov's theorem (e.g., setting $\theta^* = \tfrac{1}{2}$ and $\varepsilon = 0.1$, the exponents are roughly 0.0204 and 0.0201 respectively). Hence, the sample mean can be suboptimal when the recovery criterion is the error exponent for a given $\varepsilon$. On the other hand, the sample mean does not require knowledge of $\varepsilon$, whereas our protocol does use such knowledge.
2) We recall that $E^*$ is defined with respect to the worst-case choice of $\theta^* \in [0,1]$, so our result is minimax in nature. Our proof will also reveal an instance-dependent achievable exponent (i.e., dependent on $\theta^*$; see (42) below), but for our converse in Corollary 16 below, we focus on minimax guarantees.
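The numerical claim in part 1) of Remark 15 can be reproduced directly (our sketch, not part of the paper):

```python
import math

def kl_bern(a, b):
    """Binary KL divergence D(a || b) in nats."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

eps, theta_star = 0.1, 0.5
# Exponent of the hypothesis-testing-based protocol (direct observations).
protocol_exp = kl_bern(0.5, 0.5 + eps)
# Sanov exponent of the plain sample-mean estimator at theta* = 1/2.
sample_mean_exp = min(kl_bern(theta_star - eps, theta_star),
                      kl_bern(theta_star + eps, theta_star))
print(round(protocol_exp, 4), round(sample_mean_exp, 4))  # 0.0204 vs 0.0201
```

So for this $(\theta^*, \varepsilon)$ pair the sample mean is strictly suboptimal under the fixed-$\varepsilon$ error-exponent criterion, as the remark states.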
To understand the tightness of Theorem 14, we state the following corollary of Lemma 3.

Corollary 16. (Bernoulli+BSC Converse) Under a Bernoulli source, a BSC with parameter $p \in (0, \tfrac{1}{2})$, and an accuracy parameter $\varepsilon \in (0, \tfrac{1}{2})$, it holds (even in the non-causal setting) that
$$E^* \le \min\Big\{D\Big(\tfrac{1}{2}\,\Big\|\,\tfrac{1}{2}+\varepsilon\Big),\ c_{\lceil 1/(2\varepsilon)\rceil}\cdot\tfrac{1}{2}D(1/2\,\|\,p)\Big\}, \quad (18)$$
where $c_{\lceil 1/(2\varepsilon)\rceil} = 1 + 2\varepsilon + O(\varepsilon^2)$ as $\varepsilon \to 0$, and the exact expression is given in (15).

Thus, the upper and lower bounds match whenever the $\min\{\cdot,\cdot\}$ in (17) is achieved by the first term. Moreover, the second terms match to within a multiplicative factor of $1 + 2\varepsilon + O(\varepsilon^2)$. In certain scenarios the exact (non-asymptotic) multiplicative factor has a simple expression; in particular, if $\frac{1}{2\varepsilon}$ is an even integer, then by (15) the factor is exactly $\frac{1}{1-2\varepsilon}$ (i.e., $\frac{M}{M-1}$ with $M = \frac{1}{2\varepsilon}$). We will prove Theorem 14 in Sections III-B and III-C, and we will prove Corollary 16 in Section III-D.

A. Numerical comparisons

We compare our Bernoulli+BSC exponents with the baselines
https://arxiv.org/abs/2505.09098v1
from Section I-C in two ways:
• In Figure 2, we fix the BSC parameter $p = 0.1$ and vary $\varepsilon \in (0, \tfrac12)$;
• In Figure 3, we fix the accuracy parameter $\varepsilon = 0.1$ and vary $p \in (0, \tfrac12)$.

We see that the achievable error exponents for the simple forwarding and one-shot estimate-and-forward strategies are mostly highly suboptimal, though simple forwarding becomes near-optimal for $\varepsilon$ near $\tfrac12$ (low accuracy), and one-shot estimate-and-forward becomes near-optimal for $\varepsilon$ near 0 (high accuracy) and for $p$ near $\tfrac12$ (high noise), i.e., when the source exponent or the channel exponent is small. We see in Figure 2 that our protocol has an optimal error exponent that matches the direct-observation exponent $E^{\mathrm{src}}_\varepsilon$ for a wide range of $\varepsilon$ values of the form $[0, \varepsilon_0]$; while not visible in this figure, the value of $\varepsilon_0$ turns out to increase (resp., decrease) as $p$ decreases (resp., increases). Moreover, we see in Figure 3 that, at least in this example with $\varepsilon = 0.1$, our protocol achieves the optimal exponent for most values of $p$, and the gap to the converse is still fairly small for the remaining values of $p$. We expect that improved protocols (both causal and non-causal) can be devised for $\varepsilon$ close to $\tfrac12$, e.g., by specifically using only $M = 2$ codewords of the form $0\ldots0$ and $1\ldots1$. However, since we view this regime as being of less interest compared to small-to-moderate $\varepsilon$, we do not pursue this point further.

B. Description of the protocol

1) Block structure: As noted in the introduction, we will use a block-structured design in the same way as the 1-bit learning setup of [3], but with differing details of what is done within each block. In more detail, the transmission protocol of length $n$ is broken down into $n/k$ blocks, each of size $k$.² For each $j = 1, 2, \ldots, \frac{n}{k} - 1$, we let $(W_{jk+1}, W_{jk+2}, \ldots, W_{(j+1)k})$ be a (deterministic) function of $(X_{(j-1)k+1}, X_{(j-1)k+2}, \ldots, X_{jk})$.
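The one-block-delay pipeline described above can be sketched as follows (a toy illustration with hypothetical helper names; the actual per-block encoder is specified below):

```python
def to_blocks(seq, k):
    """Split a length-n sequence into n//k blocks of size k."""
    return [seq[i * k:(i + 1) * k] for i in range(len(seq) // k)]

def teacher_schedule(x, k, encode):
    """One-block-delay pipeline: the block transmitted during slot j is a
    function of the samples observed during slot j-1. `encode` maps a sample
    block to a channel-input block (assumed given; a stand-in here)."""
    xb = to_blocks(x, k)
    # Nothing informative can be sent in slot 0, and the last sample block
    # is never used -- matching the text's "ignored" first/last blocks.
    return [encode(xb[j - 1]) for j in range(1, len(xb))]

# Toy run with an identity "encoder" (not the paper's actual code):
blocks = teacher_schedule(list(range(12)), k=4, encode=lambda b: b)
print(blocks)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```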
²To simplify notation we assume that $n$ is a multiple of $k$; if not, we can round down to the nearest multiple of $k$ by ignoring at most $k$ symbols, and this does not impact the final result since we will later set $k = o(n)$.

[Fig. 2. Comparison of error exponents in the Bernoulli+BSC setting with crossover probability $p = 0.1$; curves: Ours, Converse Non-Causal, Simple Forwarding, One-Shot. (The jagged behavior of the curve 'Non-Causal' for higher $\varepsilon$ comes from requiring an integer number of codewords and that integer becoming small.)]

[Fig. 3. Comparison of error exponents in the Bernoulli+BSC setting with accuracy parameter $\varepsilon = 0.1$; curves: Achievability, Converse Non-Causal, Simple Forwarding, One-Shot.]

The first $k$ symbols $Z_1, Z_2, \ldots, Z_k$ are ignored by the student, and the last $k$ samples $X_{n-k+1}, X_{n-k+2}, \ldots, X_n$ are ignored by the teacher. In other words, the $j$-th block received by the student only depends on the $(j-1)$-th block of samples observed by the teacher; see Figure 4 for an illustration. To simplify notation, we define $\vec{Z}_j =$
$$\vec{Z}_j = (Z_{jk+1}, Z_{jk+2}, \ldots, Z_{(j+1)k}) \qquad (19)$$

to be the $j$-th block received by the student (excluding the ignored one), and let $\vec{Z}$ refer to a generic block.

2) Teacher strategy: The teacher and student first agree on a length-$k$ codebook over BSC($p$) with $k+1$ codewords such that any two codewords differ in at least $\frac{k}{2} - o(k)$ bits (see Lemma 8). Let the codewords be $\vec{W}_0, \vec{W}_1, \ldots, \vec{W}_k$. The teacher reads in blocks of length $k$ as described above. For each block, the teacher counts the number of 1s and sends the corresponding message of the codebook. That is, when the number of 1s received in a block is $\alpha \in \{0, 1, \ldots, k\}$, the corresponding codeword is $\vec{W}_\alpha$.

3) Student strategy: Let $\vec{Z}$ represent a generic block of length $k$ received by the student, and define

$$g(\theta, \alpha, \vec{Z}) = \sqrt{P(\alpha \mid \theta) P(\vec{Z} \mid \vec{W}_\alpha)} \qquad (20)$$

[Fig. 4. A diagrammatic representation of the block protocol.]

and

$$f(\theta, \vec{Z}) = \sum_\alpha g(\theta, \alpha, \vec{Z}). \qquad (21)$$

Intuitively, $f$ behaves like a likelihood function up to polynomial pre-factors; this is because if the summation over $\alpha$ were inside the square root in (20), we would precisely have the square root of the likelihood. Defining the summation to be outside the square root turns out to be more convenient for the analysis.

Recall that $\vec{Z}_j$ is the $j$-th block received by the student. Let $T$ be the set of integer multiples of $\frac{1}{2n}$ in $[0,1]$, and define the set

$$S = \Big\{ \theta \in T : \prod_{j=1}^{n/k-1} f(\theta, \vec{Z}_j) > \prod_{j=1}^{n/k-1} f(\theta', \vec{Z}_j), \; \forall \theta' \in T \text{ s.t. } |\theta - \theta'| \ge 2\varepsilon - \tfrac{1}{n} \Big\}. \qquad (22)$$

The student then outputs the final estimate $\hat{\theta}_n = \frac12(\min S + \max S)$. Intuitively, the $\theta$ values in $S$ are those that "beat" all $2\varepsilon$-far $\theta'$ values in a binary hypothesis test when $f$ is used for the decision rule.

C. Proof of Theorem 14 (Bernoulli+BSC achievability)

By definition, $S$ is contained within an interval of length $2\varepsilon - \frac1n$; this is because if some $\theta \in S$ "beats" some $\theta'$, then the opposite is automatically false. Let $\theta^{**}$ be $\theta^*$ rounded to an integer multiple of $\frac{1}{2n}$, where we round down if $\theta^* > \frac12$ and round up if $\theta^* < \frac12$ (i.e., we always round towards $\frac12$).
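The student's decision rule (20)–(22) can be sketched directly for small parameters. The toy codebook below is ours for illustration (Lemma 8 only guarantees that a good codebook exists), and no attempt is made at efficiency:

```python
import math

def binom_pmf(a, k, th):
    # P(alpha | theta) when alpha counts 1s in a block of k Ber(theta) samples.
    return math.comb(k, a) * th**a * (1 - th)**(k - a)

def bsc_likelihood(z, w, p):
    # P(Z | W) for a BSC(p): p per flipped bit, (1-p) per matching bit.
    d = sum(zi != wi for zi, wi in zip(z, w))
    return p**d * (1 - p)**(len(z) - d)

def f(theta, z, codebook, p, k):
    # f(theta, Z) = sum_alpha sqrt(P(alpha|theta) P(Z | W_alpha))  -- cf. (20)-(21)
    return sum(math.sqrt(binom_pmf(a, k, theta) * bsc_likelihood(z, codebook[a], p))
               for a in range(k + 1))

def estimate(blocks, codebook, p, k, n, eps):
    # Grid T of multiples of 1/(2n); S as in (22); output the midpoint of S.
    T = [t / (2 * n) for t in range(2 * n + 1)]
    sc = {th: math.prod(f(th, z, codebook, p, k) for z in blocks) for th in T}
    S = [th for th in T
         if all(sc[th] > sc[th2] for th2 in T if abs(th - th2) >= 2 * eps - 1 / n)]
    return 0.5 * (min(S) + max(S)) if S else 0.5
```

For instance, with $k = 2$, a single received block $(1,1)$, and a hand-picked codebook $\{W_0=(0,0), W_1=(0,1), W_2=(1,1)\}$, the estimate lands near 1, as one would expect when the all-ones codeword is received cleanly.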
Observe that if $\theta^{**} \in S$, then $\theta^{**} \in [\min S, \max S]$, and therefore the choice $\hat{\theta}_n = \frac12(\min S + \max S)$ ensures that $|\theta^{**} - \hat{\theta}_n| \le \frac12\big(2\varepsilon - \frac1n\big) = \varepsilon - \frac{1}{2n}$, which implies $|\theta^* - \hat{\theta}_n| \le \varepsilon$ by the triangle inequality. Therefore, the error probability (for $\varepsilon$-deviations) is upper bounded as follows:

$$P(|\theta^* - \hat{\theta}_n| > \varepsilon) \le P(\theta^{**} \notin S). \qquad (23)$$

Towards bounding $P(\theta^{**} \notin S)$, we will first show that for any $\theta'$ satisfying $|\theta^* - \theta'| \ge 2\varepsilon$, it holds with suitably high probability that $\prod_j f(\theta^*, \vec{Z}_j) > 2\prod_j f(\theta', \vec{Z}_j)$. In Lemma 18 below, we will consider the effect of discretization and show that $f(\theta^{**}, \vec{Z})$ does not differ too much from $f(\theta^*, \vec{Z})$.

Lemma 17. Let $\theta^*$ be the true parameter and $\theta'$ be some other value. Then

$$\mathbb{E}\bigg[\frac{f(\theta', \vec{Z})}{f(\theta^*, \vec{Z})}\bigg] \le \exp\Big(-(k - o(k)) \cdot \min\Big\{ d_B(\theta^*, \theta'), \; \tfrac12 D\big(\tfrac12 \| p\big) \Big\}\Big). \qquad (24)$$

Proof. By the law of total probability and the definition of $f$, we have

$$\mathbb{E}\bigg[\frac{f(\theta', \vec{Z})}{f(\theta^*, \vec{Z})}\bigg] = \sum_\alpha P(\alpha \mid \theta^*)\, \mathbb{E}\bigg[\frac{f(\theta', \vec{Z})}{f(\theta^*, \vec{Z})} \,\Big|\, \alpha\bigg] \qquad (25)$$
$$\le \sum_\alpha P(\alpha \mid \theta^*)\, \mathbb{E}\bigg[\frac{f(\theta', \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} \,\Big|\, \alpha\bigg] \qquad (26)$$
$$= \sum_\alpha \sum_{\alpha'} P(\alpha \mid \theta^*)\, \mathbb{E}\bigg[\frac{g(\theta', \alpha', \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} \,\Big|\, \alpha\bigg]. \qquad (27)$$

We split the above sum into $\alpha = \alpha'$ and $\alpha \ne \alpha'$:
• When $\alpha = \alpha'$, the cancellation of the $P(\vec{Z} \mid \vec{W}_\alpha)$ terms in (20)
leads to the following:

$$\frac{g(\theta', \alpha, \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} = \sqrt{\frac{P(\alpha \mid \theta')}{P(\alpha \mid \theta^*)}}, \qquad (28)$$

which implies

$$\sum_\alpha P(\alpha \mid \theta^*) \frac{g(\theta', \alpha, \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} = \sum_\alpha P(\alpha \mid \theta^*) \sqrt{\frac{P(\alpha \mid \theta')}{P(\alpha \mid \theta^*)}} = \exp(-k \cdot d_B(\theta^*, \theta')). \qquad (29)$$

• When $\alpha \ne \alpha'$, substituting the definition of $g$ and applying $P(\alpha' \mid \theta') \le 1$ and $P(\alpha \mid \theta^*) \le 1$ gives

$$P(\alpha \mid \theta^*) \frac{g(\theta', \alpha', \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} \le P(\alpha \mid \theta^*) \sqrt{\frac{P(\vec{Z} \mid \vec{W}_{\alpha'})}{P(\alpha \mid \theta^*) P(\vec{Z} \mid \vec{W}_\alpha)}} \le \sqrt{\frac{P(\vec{Z} \mid \vec{W}_{\alpha'})}{P(\vec{Z} \mid \vec{W}_\alpha)}}, \qquad (30)$$

and taking the expectation given $\alpha$, we obtain

$$\mathbb{E}\Bigg[\sqrt{\frac{P(\vec{Z} \mid \vec{W}_{\alpha'})}{P(\vec{Z} \mid \vec{W}_\alpha)}} \,\Bigg|\, \alpha\Bigg] = \sum_{\vec{Z}} P(\vec{Z} \mid \vec{W}_\alpha) \sqrt{\frac{P(\vec{Z} \mid \vec{W}_{\alpha'})}{P(\vec{Z} \mid \vec{W}_\alpha)}} = \sum_{\vec{Z}} \sqrt{P(\vec{Z} \mid \vec{W}_\alpha) \cdot P(\vec{Z} \mid \vec{W}_{\alpha'})} = \rho(\vec{W}_\alpha, \vec{W}_{\alpha'}, P_{Z|W}). \qquad (31)$$

By Lemma 8, $\vec{W}_\alpha$ and $\vec{W}_{\alpha'}$ have Hamming distance at least $\frac{k}{2} - o(k)$, which gives

$$\sum_{(\alpha,\alpha') : \alpha \ne \alpha'} P(\alpha \mid \theta^*)\, \mathbb{E}\bigg[\frac{g(\theta', \alpha', \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} \,\Big|\, \alpha\bigg] \le \sum_{(\alpha,\alpha') : \alpha \ne \alpha'} \rho(\vec{W}_\alpha, \vec{W}_{\alpha'}, P_{Z|W}) \le \exp\Big(-(k - o(k)) \tfrac12 D\big(\tfrac12 \| p\big)\Big), \qquad (32)$$

where the last step follows from tensorization (Lemma 7) and the identity between $d_B$ and KL divergence (Lemma 6 with $\varepsilon = \frac12 - p$). Substituting (29) and (32) into (27) gives the required result.

Since the $n/k - 1$ blocks are independent, Lemma 17 implies the following when $k$ is chosen such that $k \to \infty$ and $n/k \to \infty$ (e.g., $k = \sqrt{n}$):

$$\mathbb{E}\Bigg[\prod_j \frac{f(\theta', \vec{Z}_j)}{f(\theta^*, \vec{Z}_j)}\Bigg] = \prod_j \mathbb{E}\bigg[\frac{f(\theta', \vec{Z}_j)}{f(\theta^*, \vec{Z}_j)}\bigg] \le \exp\Big(-(n - o(n)) \cdot \min\Big\{ d_B(\theta^*, \theta'), \; \tfrac12 D\big(\tfrac12 \| p\big) \Big\}\Big). \qquad (33)$$

Next, we provide the following lemma relating the true value $\theta^* \in [0,1]$ to its rounded version $\theta^{**} \in T$.

Lemma 18. For all possible $(\vec{Z}_1, \vec{Z}_2, \ldots, \vec{Z}_{n/k-1})$, it holds that $\prod_j \frac{f(\theta^*, \vec{Z}_j)}{f(\theta^{**}, \vec{Z}_j)} \le 2$.

Proof. Recall that $\theta^{**}$ is defined by rounding $\theta^*$ to the nearest multiple of $\frac{1}{2n}$ towards $\frac12$. We focus on the case that $\theta^* \ge \theta^{**} \ge 1/2$; the case $\theta^* \le \theta^{**} \le 1/2$ is analogous and is omitted. We have

$$\frac{P(\alpha \mid \theta^*)}{P(\alpha \mid \theta^{**})} \le \Big(\frac{\theta^*}{\theta^{**}}\Big)^k \le \Big(\frac{\theta^{**} + \frac{1}{2n}}{\theta^{**}}\Big)^k \le \Big(1 + \frac1n\Big)^k \le \exp(k/n), \qquad (34)$$

where we respectively used that $\alpha$ is a sum of $k$ i.i.d. Bernoulli random variables, the assumption $|\theta^* - \theta^{**}| \le \frac{1}{2n}$, the assumption $\theta^{**} \ge \frac12$, and the identity $1 + a \le e^a$. It follows that

$$f(\theta^*, \vec{Z}) = \sum_\alpha \sqrt{P(\alpha \mid \theta^*) P(\vec{Z} \mid \vec{W}_\alpha)} \le \exp(k/(2n)) \sum_\alpha \sqrt{P(\alpha \mid \theta^{**}) P(\vec{Z} \mid \vec{W}_\alpha)} = \exp(k/(2n)) f(\theta^{**}, \vec{Z}), \qquad (35)$$

and hence

$$\prod_j \frac{f(\theta^*, \vec{Z}_j)}{f(\theta^{**}, \vec{Z}_j)} \le \exp(k/(2n))^{n/k-1} \le \exp(1/2) \le 2. \qquad (36)$$

Recall that $T$ is the set of all integer multiples of $\frac{1}{2n}$ in $[0,1]$, and that $S$ is defined in (22).
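The two identities invoked in the last step of (32), tensorization of the Bhattacharyya distance and the $d_B$–KL identity of Lemma 6, can be checked numerically (function names below are ours):

```python
import math

def bc_bernoulli(a, b):
    """Bhattacharyya coefficient of Ber(a) and Ber(b)."""
    return math.sqrt(a * b) + math.sqrt((1 - a) * (1 - b))

def bc_binomial(k, a, b):
    """Bhattacharyya coefficient of Binom(k, a) and Binom(k, b)."""
    return sum(math.sqrt(math.comb(k, i) * a**i * (1 - a)**(k - i)
                         * math.comb(k, i) * b**i * (1 - b)**(k - i))
               for i in range(k + 1))

def kl(a, b):
    """KL divergence D(Ber(a) || Ber(b)) in nats."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

p, k = 0.1, 7

# Tensorization (Lemma 7): the BC of k i.i.d. pairs is the per-symbol BC^k,
# which is exactly the identity behind (29).
assert abs(bc_binomial(k, 0.3, 0.6) - bc_bernoulli(0.3, 0.6) ** k) < 1e-12

# dB-KL identity (Lemma 6 with eps = 1/2 - p): -ln(2 sqrt(p(1-p))) = D(1/2 || p).
d_b_bitflip = -math.log(2 * math.sqrt(p * (1 - p)))
assert abs(d_b_bitflip - kl(0.5, p)) < 1e-12
```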
It follows that if we define

$$T_\varepsilon = T \setminus \Big[\theta^{**} - 2\varepsilon + \tfrac1n, \; \theta^{**} + 2\varepsilon - \tfrac1n\Big], \qquad (37)$$

then the condition $\theta^{**} \notin S$ implies the existence of some $\theta' \in T_\varepsilon$ such that $\prod_j f(\theta', \vec{Z}_j) \ge \prod_j f(\theta^{**}, \vec{Z}_j)$. This further implies via Lemma 18 that $2\prod_j f(\theta', \vec{Z}_j) \ge 2\prod_j f(\theta^{**}, \vec{Z}_j) \ge \prod_j f(\theta^*, \vec{Z}_j)$. In addition, since $|\theta^{**} - \theta^*| \le \frac{1}{2n}$ and $|\theta' - \theta^{**}| \ge 2\varepsilon - \frac1n$, we have $|\theta' - \theta^*| \ge 2\varepsilon - \frac{3}{2n}$ due to the triangle inequality. Combining the above observations, we have

$$P(\theta^{**} \notin S) = P\Bigg(\bigcup_{\theta' \in T_\varepsilon} \Big\{ \prod_j f(\theta', \vec{Z}_j) \ge \prod_j f(\theta^{**}, \vec{Z}_j) \Big\}\Bigg) \le P\Bigg(\bigcup_{\theta' \in T_\varepsilon} \Big\{ 2\prod_j f(\theta', \vec{Z}_j) \ge \prod_j f(\theta^*, \vec{Z}_j) \Big\}\Bigg) \qquad (38)$$
$$\le \sum_{\theta' \in T_\varepsilon} P\Bigg( 2\prod_j f(\theta', \vec{Z}_j) \ge \prod_j f(\theta^*, \vec{Z}_j) \Bigg) \qquad (39)$$
$$\le 2 \sum_{\theta' \in T_\varepsilon} \mathbb{E}\Bigg[ \prod_j \frac{f(\theta', \vec{Z}_j)}{f(\theta^*, \vec{Z}_j)} \Bigg] \qquad (40)$$
$$\le \exp\Big(-(n - o(n)) \cdot \min\Big\{ \min_{|\theta' - \theta^*| \ge 2\varepsilon - \frac{3}{2n}} d_B(\theta^*, \theta'), \; \tfrac12 D\big(\tfrac12 \| p\big) \Big\}\Big), \qquad (41)$$

where (38) holds as discussed above, (39) uses a union bound, (40) applies Markov's inequality, and (41) applies Lemma 17 together with the above-established inequality $|\theta' - \theta^*| \ge 2\varepsilon - \frac{3}{2n}$ (the polynomial size of $T_\varepsilon$ is absorbed into the $o(n)$ term).

By (23), (41), and the continuity of $d_B(\theta_1, \theta_2)$, we deduce that it is possible to achieve an error exponent of

$$E \ge \min\Big\{ \min_{|\theta' - \theta^*| \ge 2\varepsilon} d_B(\theta^*, \theta'), \; \tfrac12 D\big(\tfrac12 \| p\big) \Big\}, \qquad (42)$$

and it remains to simplify the first term. This is done in the following lemma, which we prove in Appendix E using some elementary calculus.

Lemma 19. For all $\theta'$ and $\theta^*$ satisfying $|\theta' - \theta^*| \ge 2\varepsilon$, it holds that $d_B(\theta^*, \theta') \ge D\big(\tfrac12 \| \tfrac12 + \varepsilon\big)$.

Combining this lemma with (42) yields Theorem 14.

D. Proof of Corollary 16 (Converse for non-causal teacher)

We apply
the general converse result for non-causal protocols from Lemma 3, and then rewrite the channel term using $c_M$ from Definition 10 and its characterization for the BSC from Lemma 11. It then only remains to show that the source term simplifies as $E^{\mathrm{src}}_\varepsilon = D(\tfrac12 \| \tfrac12 + \varepsilon)$. It follows immediately from Theorem 14 (e.g., paired with a noiseless binary channel) that $E^{\mathrm{src}}_\varepsilon \ge D(\tfrac12 \| \tfrac12 + \varepsilon)$, so it suffices to show a matching converse, i.e., $E^{\mathrm{src}}_\varepsilon \le D(\tfrac12 \| \tfrac12 + \varepsilon)$. To see this, we simply observe that attaining accuracy $\varepsilon$ implies being able to successfully perform a hypothesis test between two Bernoulli distributions with means $\tfrac12 + \varepsilon'$ and $\tfrac12 - \varepsilon'$ for any $\varepsilon' > \varepsilon$. It is well known (e.g., see [14]) that the optimal error exponent for distinguishing these is $d_B(\tfrac12 - \varepsilon', \tfrac12 + \varepsilon')$, which equals $D(\tfrac12 \| \tfrac12 + \varepsilon')$ by Lemma 6. Taking $\varepsilon' \to \varepsilon$ from above gives the desired result.

IV. SUB-GAUSSIAN SOURCE AND ARBITRARY DISCRETE MEMORYLESS CHANNEL

A. Problem statement

We now turn to a more general setting where $X$ is not necessarily known to lie in any parametric family, but is instead only known to be sub-Gaussian with parameter $\sigma^2 > 0$ (after centering), i.e.,

$$\mathbb{E}[\exp(s(X - \theta^*))] \le \exp\Big(\tfrac12 \sigma^2 s^2\Big) \quad \forall s \in \mathbb{R}, \qquad (43)$$

where $\theta^* = \mathbb{E}[X]$. Note that $\sigma^2$ may differ from $\mathrm{Var}[X]$. We also generalize the communication channel to be an arbitrary discrete memoryless channel $P_{Z|W}$.³ Our goal remains to characterize the error exponent associated with the event $|\hat{\theta}_n - \theta^*| > \varepsilon$ for some $\varepsilon > 0$, i.e., to bound the optimal exponent $E^*$ as defined in (2), with $P_X$ being the sub-Gaussian class.

In this setting, allowing arbitrary $\theta^* \in \mathbb{R}$ is not possible, because the decoder can only receive at most $O(n)$ bits of information, which cannot represent an unbounded real number to within any given accuracy. To simplify the exposition, we assume that $\theta^* \in [0,1]$, but with only minor modifications we can handle an arbitrary fixed interval (known to the teacher and student) or even an interval whose width grows as $O(n^C)$ for any fixed constant $C > 0$; see Section V for further discussion.
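For a concrete instance of (43), a centered Rademacher variable (uniform on $\{-1, +1\}$) is sub-Gaussian with parameter 1, since $\cosh(s) \le e^{s^2/2}$ for all $s$; a quick numerical check over a grid:

```python
import math

def rademacher_mgf(s):
    # E[exp(s X)] for X uniform on {-1, +1}, i.e., cosh(s).
    return 0.5 * (math.exp(s) + math.exp(-s))

# Verify the sub-Gaussian bound (43) with sigma^2 = 1 on a grid of s values.
for s in [x / 10 for x in range(-50, 51)]:
    assert rademacher_mgf(s) <= math.exp(0.5 * s * s) + 1e-12
```

The inequality follows by comparing the Taylor series of $\cosh(s)$ and $e^{s^2/2}$ term by term, illustrating that boundedness (after centering) is one simple route to sub-Gaussianity.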
³This generalization can also be incorporated into Section III in the same way.

Under the sub-Gaussianity assumption, if we take $n$ samples $X_1, X_2, \ldots, X_n$ and let $\bar{X} = \frac1n \sum_{i=1}^n X_i$ be the sample mean, then we have (e.g., see [11, Sec. 2.5]):

$$P(\bar{X} \ge \theta^* + \varepsilon) \le \exp\Big(-\frac{n\varepsilon^2}{2\sigma^2}\Big), \qquad P(\bar{X} \le \theta^* - \varepsilon) \le \exp\Big(-\frac{n\varepsilon^2}{2\sigma^2}\Big). \qquad (44)$$

Hence, we have an error exponent of $\frac{\varepsilon^2}{2\sigma^2}$ when the samples are observed directly. We will shortly argue that this cannot be improved in general, by specializing to the Gaussian case. Our main result in this section is the following.

Theorem 20. (Sub-Gaussian Achievability) For sub-Gaussian sources $P_X$ with mean in $[0,1]$ and sub-Gaussianity parameter $\sigma^2 > 0$, any discrete memoryless channel $P_{Z|W}$, and any accuracy parameter $\varepsilon \in (0, \tfrac12)$, the following error exponent is achievable:

$$E^* \ge \min\Big\{ \frac{\varepsilon^2}{2\sigma^2}, \; E^{\mathrm{chan}}(0) \Big\}. \qquad (45)$$

To understand the tightness of this result, we state the following corollary of Lemma 3 for Gaussian sources (which are a special case of sub-Gaussian).

Corollary 21. (Gaussian Converse) For Gaussian sources $P_X$ with mean in $[0,1]$ and variance $\sigma^2 > 0$, under any discrete memoryless channel $P_{Z|W}$, and any accuracy parameter $\varepsilon \in (0, \tfrac12)$, it holds (even in the non-causal setting) that
$$E^* \le \min\Big\{ \frac{\varepsilon^2}{2\sigma^2}, \; c_{\lceil 1/(2\varepsilon)\rceil} E^{\mathrm{chan}}(0) \Big\}, \qquad (46)$$

where $c_{\lceil 1/(2\varepsilon)\rceil}$ is defined in Definition 10, and satisfies $c_{\lceil 1/(2\varepsilon)\rceil} \to 1$ as $\varepsilon \to 0$ (by Lemma 11).

Thus, the upper and lower bounds match whenever the $\min\{\cdot,\cdot\}$ in (45) is achieved by the first term. Moreover, even the second terms match to within a factor that approaches 1 as $\varepsilon \to 0$. (Unlike the BSC, we do not have any simple exact expression or precise convergence rate.)

We will prove Theorem 20 in Sections IV-B and IV-C, and we will prove Corollary 21 in Section IV-D.

B. Description of the protocol

We use the same block-structured approach as the one described in Section III for some block length $k$ satisfying $k \to \infty$ and $n/k \to \infty$, but with differing details described below. Before proceeding, we claim that it suffices to handle the case that $X$ satisfies the following assumption.

Assumption 22. The distribution $P_X$ is such that $X$ is a multiple of $1/k$ with probability one.

We proceed to argue that proving Theorem 20 under this assumption readily implies the general case. To see this, suppose that in the general case, the teacher rounds $X$ as follows:
• Let $X^\uparrow$ (respectively, $X^\downarrow$) equal $X$ rounded up (respectively, down) to the nearest multiple of $1/k$;
• Round to $X^\uparrow$ or $X^\downarrow$ with probability proportional to the relative position of $X$ in the interval $[X^\downarrow, X^\uparrow]$, i.e., the associated probabilities are $\frac{X - X^\downarrow}{X^\uparrow - X^\downarrow}$ and $\frac{X^\uparrow - X}{X^\uparrow - X^\downarrow}$ respectively.

A direct calculation of the conditional mean (given $X$) shows that the "rounding noise" has mean zero, and thus the rounded random variable still has mean $\theta^*$. (Note that this is often called stochastic quantization and is not a new idea.) However, the impact of the rounding on the sub-Gaussianity parameter $\sigma^2$ needs to be checked carefully. Let $\Delta$ be the difference between the rounded value and the original value of $X$. We characterize the sub-Gaussianity of the rounded random variable $X + \Delta$ as follows:
• By assumption, $X$ is sub-Gaussian with parameter $\sigma^2$.
• We can use the boundedness of $\Delta$ to deduce that it is sub-Gaussian; specifically, since $|\Delta| \le \frac1k$ and $\mathbb{E}[\Delta] = 0$, we have that $\Delta$ is sub-Gaussian with parameter $\frac{1}{k^2}$ [25, Corollary 2.6].
• The sub-Gaussianity of both $X$ and $\Delta$ implies the same for $X + \Delta$, but since the two are not independent we cannot simply sum their sub-Gaussianity parameters. Rather, a more general property (based on Hölder's inequality) not requiring independence gives an overall sub-Gaussian parameter of $(\sigma + \frac1k)^2$ [25, Theorem 2.7].⁴

Our final error exponent in Theorem 20 is continuous in $\sigma$, so the change from $\sigma^2$ to $(\sigma + \frac1k)^2$ (with $k \to \infty$) does not impact the final result. In the remainder of the section, we adopt Assumption 22.

1) Teacher strategy: Let $\bar{X}$ denote the sample mean of a block of $k$ samples, and define $\alpha$ as follows:

$$\alpha = \begin{cases} k & \bar{X} > k \\ -k & \bar{X} < -k \\ \bar{X} & -k \le \bar{X} \le k. \end{cases} \qquad (47)$$

Under Assumption 22, $\bar{X}$ is a multiple of $\frac{1}{k^2}$. Since $\alpha$ is simply given by $\bar{X}$ clipped to $[-k, k]$, it follows that $\alpha$ can take on one of at most $2k^3 + 1 \le 4k^3$ values. Accordingly, we consider a set of at most $4k^3$ codewords of length $k$ for the channel $P_{Z|W}$ satisfying Corollary 12. The teacher reads in blocks of $k$, and transmits the codeword corresponding to the sample mean. If the sample mean is $\alpha$,
we let $\vec{W}_\alpha$ be the corresponding codeword sent by the teacher.

We claim that $\alpha$ satisfies similar sub-Gaussian style bounds as the sample mean $\bar{X}$ (see (44)); specifically, the bound that will be useful for our purposes is

$$P(\alpha \mid \theta^*) \le \exp\Big(-\frac{k(\theta^* - \alpha)^2}{2\sigma^2}\Big), \qquad (48)$$

where here and subsequently, $P(\alpha \mid \theta^*)$ represents the PMF of $\alpha$ under an arbitrary distribution $P_X$ that (i) lies in the sub-Gaussian class $\mathcal{P}_X$, (ii) satisfies Assumption 22, and (iii) has mean $\theta^*$. To see that (48) holds, we consider the three cases in (47). Firstly, if $\alpha \notin \{k, -k\}$, then $\bar{X} = \alpha$ and the tail bound for $\bar{X}$ trivially implies the same for $\alpha$. On the other hand, if $\alpha = k$, then using $\theta^* < \alpha$ (recall that $\theta^* \in [0,1]$) and $\bar{X} > \alpha = k$ gives

$$P(\alpha \mid \theta^*) = P(\bar{X} \ge k \mid \theta^*) \le \exp\Big(-\frac{k(\theta^* - k)^2}{2\sigma^2}\Big) = \exp\Big(-\frac{k(\theta^* - \alpha)^2}{2\sigma^2}\Big). \qquad (49)$$

The case $\alpha = -k$ follows from an analogous argument.

⁴More generally, the property in [25, Theorem 2.7] is that if $X_1$ and $X_2$ have sub-Gaussianity parameters $\sigma_1^2$ and $\sigma_2^2$, then $X_1 + X_2$ is sub-Gaussian with parameter $(\sigma_1 + \sigma_2)^2$. In [25] this result is attributed to [26], but [25] also has a self-contained proof. If $X_1$ and $X_2$ are independent, the parameter further improves to $\sigma_1^2 + \sigma_2^2$.

2) Student strategy: Recall that $\vec{Z}$ denotes a generic length-$k$ block received by the student. We wish to take a similar approach to the Bernoulli+BSC case, for some function $g(\theta, \alpha, \vec{Z})$ defined similarly to (20). Since we are not given the probability distribution $P(\alpha \mid \theta)$, we instead use $\exp(-\frac{k(\theta - \alpha)^2}{2\sigma^2})$ to align with the sub-Gaussianity of $X$ (see (48)). Specifically, we define

$$g(\theta, \alpha, \vec{Z}) = \exp\Big(-\frac{k(\theta - \alpha)^2}{4\sigma^2}\Big) \cdot \sqrt{P(\vec{Z} \mid \vec{W}_\alpha)}, \qquad (50)$$

and

$$f(\theta, \vec{Z}) = \sum_\alpha g(\theta, \alpha, \vec{Z}), \qquad (51)$$

with $f$ serving as a "proxy for the likelihood function" as before. Furthermore, let $\vec{Z}_j$ denote the $j$-th block received by the student, and let $T$ be the set of all integer multiples of $\frac{1}{n^2}$ in $[0,1]$. Then, define

$$S = \Big\{ \theta \in T : \prod_j f(\theta, \vec{Z}_j) > \prod_j f(\theta', \vec{Z}_j) \;\; \forall \theta' \in T \text{ s.t. } |\theta - \theta'| \ge 2\varepsilon - \tfrac{2}{n^2} \Big\}. \qquad (52)$$

Similarly to Section III, $S$ is contained within an interval of length $2\varepsilon - \frac{2}{n^2}$, and we let the final estimate be $\hat{\theta}_n = \frac12(\min S + \max S)$.
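The stochastic quantization step described after Assumption 22 can be sketched as follows (helper names are ours); the final assertion verifies unbiasedness by direct computation rather than sampling:

```python
import math, random

def stochastic_round(x, k, rng):
    """Round x to a multiple of 1/k, rounding up with probability equal to
    the relative position of x in [x_down, x_up], so that E[output | x] = x."""
    lo = math.floor(x * k) / k
    hi = lo + 1.0 / k
    if x == lo:
        return lo
    p_up = (x - lo) / (hi - lo)
    return hi if rng.random() < p_up else lo

# Unbiasedness check by direct computation: for x = 0.437 and k = 10,
# the rounding targets are 0.4 and 0.5 with probabilities 0.63 and 0.37.
x, k = 0.437, 10
lo, hi = 0.4, 0.5
p_up = (x - lo) / (hi - lo)
assert abs((1 - p_up) * lo + p_up * hi - x) < 1e-12
```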
C. Proof of Theorem 20 (Achievability for sub-Gaussian sources)

Let $\theta^{**}$ be $\theta^*$ rounded to the nearest integer multiple of $\frac{1}{n^2}$. Observe that if $\theta^{**} \in S$, then $\theta^{**} \in [\min S, \max S]$. Hence, using $\max S - \min S \le 2\varepsilon - \frac{2}{n^2}$ as established above, the choice $\hat{\theta}_n = \frac12(\min S + \max S)$ ensures that $|\theta^{**} - \hat{\theta}_n| \le \varepsilon - \frac{1}{n^2}$, which implies $|\theta^* - \hat{\theta}_n| \le \varepsilon$ by the triangle inequality. Therefore, the error probability is upper bounded by $P(\theta^{**} \notin S)$.

Towards characterizing $P(\theta^{**} \notin S)$, we start with the following analog of Lemma 17.

Lemma 23. Let $\theta^*$ be the true parameter, and let $\theta'$ be some other value. Then

$$\mathbb{E}\bigg[\frac{f(\theta', \vec{Z})}{f(\theta^*, \vec{Z})}\bigg] \le \exp\Big(-(k - o(k)) \min\Big\{ \frac{(\theta^* - \theta')^2}{8\sigma^2}, \; E^{\mathrm{chan}}(0) \Big\}\Big). \qquad (53)$$

Proof. Recall the definition of $\alpha$ in (47), and that it takes on one of at most $4k^3$ values due to Assumption 22. We proceed as follows:

$$\mathbb{E}\bigg[\frac{f(\theta', \vec{Z})}{f(\theta^*, \vec{Z})}\bigg] = \sum_\alpha P(\alpha \mid \theta^*)\, \mathbb{E}\bigg[\frac{f(\theta', \vec{Z})}{f(\theta^*, \vec{Z})} \,\Big|\, \alpha\bigg] \qquad (54)$$
$$\le \sum_\alpha \exp\Big(-\frac{k(\theta^* - \alpha)^2}{2\sigma^2}\Big)\, \mathbb{E}\bigg[\frac{f(\theta', \vec{Z})}{f(\theta^*, \vec{Z})} \,\Big|\, \alpha\bigg] \qquad (55)$$
$$\le \sum_\alpha \exp\Big(-\frac{k(\theta^* - \alpha)^2}{2\sigma^2}\Big)\, \mathbb{E}\bigg[\frac{f(\theta', \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} \,\Big|\, \alpha\bigg] \qquad (56)$$
$$= \sum_\alpha \sum_{\alpha'} \exp\Big(-\frac{k(\theta^* - \alpha)^2}{2\sigma^2}\Big)\, \mathbb{E}\bigg[\frac{g(\theta', \alpha', \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} \,\Big|\, \alpha\bigg], \qquad (57)$$

where (54) uses the law of total expectation, (55) uses the sub-Gaussian concentration property (see (48)), and (56)–(57) use the definition of $f$. We split the above sum into the
cases $\alpha = \alpha'$ and $\alpha \ne \alpha'$:
• When $\alpha = \alpha'$, the $\sqrt{P(\vec{Z} \mid \vec{W}_\alpha)}$ terms from (50) cancel out, and we obtain

$$\exp\Big(-\frac{k(\theta^* - \alpha)^2}{2\sigma^2}\Big) \frac{g(\theta', \alpha, \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} = \exp\Big(-\frac{k(\theta' - \alpha)^2}{4\sigma^2} - \frac{k(\theta^* - \alpha)^2}{4\sigma^2}\Big). \qquad (58)$$

By a simple differentiation exercise, the maximum value of $-\frac{k(\theta' - \alpha)^2}{4\sigma^2} - \frac{k(\theta^* - \alpha)^2}{4\sigma^2}$ over $\alpha$ occurs at $\alpha = \frac12(\theta^* + \theta')$ and achieves the value $-\frac{k(\theta' - \theta^*)^2}{8\sigma^2}$. Hence, and recalling that there are at most $4k^3$ values of $\alpha$, we obtain

$$\sum_\alpha \exp\Big(-\frac{k(\theta^* - \alpha)^2}{2\sigma^2}\Big) \frac{g(\theta', \alpha, \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} \le 4k^3 \cdot \exp\Big(-\frac{k(\theta' - \theta^*)^2}{8\sigma^2}\Big). \qquad (59)$$

• When $\alpha \ne \alpha'$, we substitute the definition of $g$ to obtain

$$\exp\Big(-\frac{k(\theta^* - \alpha)^2}{2\sigma^2}\Big) \frac{g(\theta', \alpha', \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} = \exp\Big(-\frac{k(\theta^* - \alpha)^2}{4\sigma^2}\Big) \cdot \exp\Big(-\frac{k(\theta' - \alpha')^2}{4\sigma^2}\Big) \sqrt{\frac{P(\vec{Z} \mid \vec{W}_{\alpha'})}{P(\vec{Z} \mid \vec{W}_\alpha)}} \le \sqrt{\frac{P(\vec{Z} \mid \vec{W}_{\alpha'})}{P(\vec{Z} \mid \vec{W}_\alpha)}}, \qquad (60)$$

and taking the expectation given $\alpha$, we obtain

$$\mathbb{E}\Bigg[\sqrt{\frac{P(\vec{Z} \mid \vec{W}_{\alpha'})}{P(\vec{Z} \mid \vec{W}_\alpha)}} \,\Bigg|\, \alpha\Bigg] = \sum_{\vec{Z}} P(\vec{Z} \mid \vec{W}_\alpha) \sqrt{\frac{P(\vec{Z} \mid \vec{W}_{\alpha'})}{P(\vec{Z} \mid \vec{W}_\alpha)}} = \sum_{\vec{Z}} \sqrt{P(\vec{Z} \mid \vec{W}_\alpha) \cdot P(\vec{Z} \mid \vec{W}_{\alpha'})} = \rho(\vec{W}_\alpha, \vec{W}_{\alpha'}, P_{Z|W}). \qquad (61)$$

We have from Corollary 12 that $\rho(\vec{W}_\alpha, \vec{W}_{\alpha'}, P_{Z|W}) \le \exp(-(k - o(k)) E^{\mathrm{chan}}(0))$, and combining this with (60)–(61) gives

$$\sum_{(\alpha,\alpha') : \alpha \ne \alpha'} \exp\Big(-\frac{k(\theta^* - \alpha)^2}{2\sigma^2}\Big)\, \mathbb{E}\bigg[\frac{g(\theta', \alpha', \vec{Z})}{g(\theta^*, \alpha, \vec{Z})} \,\Big|\, \alpha\bigg] \qquad (62)$$
$$\le \sum_{(\alpha,\alpha') : \alpha \ne \alpha'} \mathbb{E}\Bigg[\sqrt{\frac{P(\vec{Z} \mid \vec{W}_{\alpha'})}{P(\vec{Z} \mid \vec{W}_\alpha)}} \,\Bigg|\, \alpha\Bigg] \qquad (63)$$
$$= \sum_{(\alpha,\alpha') : \alpha \ne \alpha'} \rho(\vec{W}_\alpha, \vec{W}_{\alpha'}, P_{Z|W}) \qquad (64)$$
$$\le \exp\big(-(k - o(k)) E^{\mathrm{chan}}(0)\big), \qquad (65)$$

where the last step also uses that there are at most $4k^3 = \mathrm{poly}(k)$ values of $\alpha$. Substituting (59) and (65) into (57) gives the desired result.

We now move from studying a single block to the entire collection of $\frac{n}{k} - 1$ blocks, and we let $k$ have an arbitrary dependence on $n$ satisfying $k \to \infty$ and $\frac{n}{k} \to \infty$ (e.g., $k = \sqrt{n}$). We have

$$P\Bigg( \prod_j \frac{f(\theta', \vec{Z}_j)}{f(\theta^*, \vec{Z}_j)} > \frac12 \Bigg) \le 2 \cdot \mathbb{E}\Bigg[ \prod_j \frac{f(\theta', \vec{Z}_j)}{f(\theta^*, \vec{Z}_j)} \Bigg] \qquad (66)$$
$$= 2 \cdot \prod_j \mathbb{E}\bigg[ \frac{f(\theta', \vec{Z}_j)}{f(\theta^*, \vec{Z}_j)} \bigg] \qquad (67)$$
$$\le \Bigg( \exp\Big(-(k - o(k)) \min\Big\{ \frac{(\theta^* - \theta')^2}{8\sigma^2}, \; E^{\mathrm{chan}}(0) \Big\}\Big) \Bigg)^{n/k - 1} \qquad (68)$$
$$\le \exp\Big(-(n - o(n)) \min\Big\{ \frac{(\theta^* - \theta')^2}{8\sigma^2}, \; E^{\mathrm{chan}}(0) \Big\}\Big), \qquad (69)$$

where (66) applies Markov's inequality, (67) uses the independence of the blocks, (68) applies Lemma 23, and (69) uses the above scaling assumptions on $k$.
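The "simple differentiation exercise" behind (59), namely that the exponent $-\frac{k(\theta'-\alpha)^2}{4\sigma^2} - \frac{k(\theta^*-\alpha)^2}{4\sigma^2}$ is maximized at the midpoint $\alpha = \frac12(\theta^* + \theta')$ with value $-\frac{k(\theta'-\theta^*)^2}{8\sigma^2}$, can be checked numerically by a grid search (parameter values below are arbitrary):

```python
k, sigma2 = 50, 1.0
th_star, th_prime = 0.2, 0.8

def exponent(alpha):
    # -k(theta' - alpha)^2 / (4 sigma^2) - k(theta* - alpha)^2 / (4 sigma^2)
    return -k * ((th_prime - alpha) ** 2 + (th_star - alpha) ** 2) / (4 * sigma2)

# Maximize over a fine grid of alpha values covering [-1, 2].
grid = [a / 1000 for a in range(-1000, 2001)]
best_alpha = max(grid, key=exponent)
midpoint = 0.5 * (th_star + th_prime)

assert abs(best_alpha - midpoint) < 1e-3
assert abs(exponent(midpoint) - (-k * (th_prime - th_star) ** 2 / (8 * sigma2))) < 1e-12
```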
Recalling that $\theta^{**}$ is $\theta^*$ rounded to the nearest multiple of $\frac{1}{n^2}$, it holds for all $\alpha$ that

$$\bigg| \frac{k(\theta^* - \alpha)^2}{4\sigma^2} - \frac{k(\theta^{**} - \alpha)^2}{4\sigma^2} \bigg| \le \frac{k}{4\sigma^2} |\theta^* - \theta^{**}| \cdot |\theta^* + \theta^{**} - 2\alpha| \le \frac{k^2}{n^2\sigma^2}, \qquad (70)$$

where the first inequality uses $a^2 - b^2 = (a - b)(a + b)$, and the second inequality bounds the two absolute values by $\frac{1}{n^2}$ and $2k + 2 \le 4k$ respectively. Therefore,

$$f(\theta^*, \vec{Z}) = \sum_\alpha \exp\Big(-\frac{k(\theta^* - \alpha)^2}{4\sigma^2}\Big) \cdot \sqrt{P(\vec{Z} \mid \vec{W}_\alpha)} \qquad (71)$$
$$\le \exp\Big(\frac{k^2}{n^2\sigma^2}\Big) \cdot \sum_\alpha \exp\Big(-\frac{k(\theta^{**} - \alpha)^2}{4\sigma^2}\Big) \cdot \sqrt{P(\vec{Z} \mid \vec{W}_\alpha)} \qquad (72)$$
$$= \exp\Big(\frac{k^2}{n^2\sigma^2}\Big) f(\theta^{**}, \vec{Z}), \qquad (73)$$

with the middle step using (70) and the other two steps using the definition of $f$ in (50)–(51). Taking the product over all blocks $j = 1, \ldots, n/k - 1$, we obtain

$$\prod_j f(\theta^*, \vec{Z}_j) \le \prod_j \exp\Big(\frac{k^2}{n^2\sigma^2}\Big) f(\theta^{**}, \vec{Z}_j) \le 2 \prod_j f(\theta^{**}, \vec{Z}_j), \qquad (74)$$

where the last step holds when $n/k$ is sufficiently large (we have assumed that it approaches $\infty$ as $n \to \infty$).

Let $T_\varepsilon = T \setminus [\theta^{**} - 2\varepsilon + \frac{2}{n^2}, \theta^{**} + 2\varepsilon - \frac{2}{n^2}]$. If there exists some $\theta' \in T_\varepsilon$ such that $\prod_j f(\theta', \vec{Z}_j) \ge \prod_j f(\theta^{**}, \vec{Z}_j)$, then (74) gives $2\prod_j f(\theta', \vec{Z}_j) \ge 2\prod_j f(\theta^{**}, \vec{Z}_j) \ge \prod_j f(\theta^*, \vec{Z}_j)$. Moreover, since $|\theta^{**} - \theta^*| \le \frac{1}{n^2}$ and $|\theta^{**} - \theta'| \ge 2\varepsilon - \frac{2}{n^2}$, the triangle inequality gives

$$|\theta' - \theta^*| \ge 2\varepsilon - \frac{3}{n^2}. \qquad (75)$$

We can now follow similar steps to (38)–(41) as follows:

$$P\Bigg(\bigcup_{\theta' \in T_\varepsilon} \Big\{ \prod_j f(\theta', \vec{Z}_j) \ge \prod_j f(\theta^{**}, \vec{Z}_j) \Big\}\Bigg) \qquad (76)$$
$$\le P\Bigg(\bigcup_{\theta' \in T_\varepsilon} \Big\{ 2\prod_j f(\theta', \vec{Z}_j) \ge \prod_j f(\theta^*, \vec{Z}_j) \Big\}\Bigg) \qquad (77)$$
$$\le \sum_{\theta' \in T_\varepsilon} P\Bigg( 2\prod_j f(\theta', \vec{Z}_j) \ge \prod_j f(\theta^*, \vec{Z}_j) \Bigg) \qquad (78)$$
$$\le 2 \sum_{\theta' \in T_\varepsilon} \mathbb{E}\Bigg[ \prod_j \frac{f(\theta', \vec{Z}_j)}{f(\theta^*, \vec{Z}_j)} \Bigg] \qquad (79)$$
$$\le \exp\Big(-(n - o(n)) \min\Big\{ \min_{|\theta' - \theta^*| \ge 2\varepsilon - \frac{3}{n^2}} \frac{(\theta^* - \theta')^2}{8\sigma^2}, \; E^{\mathrm{chan}}(0) \Big\}\Big) \qquad (80)$$
$$\le \exp\Big(-(n - o(n)) \min\Big\{ \frac{\varepsilon^2}{2\sigma^2} - O(1/n^2), \; E^{\mathrm{chan}}(0) \Big\}\Big) \qquad (81)$$
$$\le \exp\Big(-(n - o(n)) \min\Big\{ \frac{\varepsilon^2}{2\sigma^2}, \; E^{\mathrm{chan}}(0) \Big\}\Big), \qquad (82)$$

where (77) was established in the previous paragraph, (78) applies the union bound, (79) applies Markov's inequality, (80) applies Lemma 23, and
(81) applies (75) and the fact that $|T_\varepsilon|$ has polynomial size.

Equation (82) upper bounds the probability that $\theta^{**} \notin S$, and recalling that this in turn upper bounds the error probability, it follows that the error exponent $\min\big\{ \frac{\varepsilon^2}{2\sigma^2}, E^{\mathrm{chan}}(0) \big\}$ is achievable, as desired.

D. Proof of Corollary 21 (Converse for Gaussian sources)

We apply the general converse result for non-causal protocols from Lemma 3, and then rewrite the channel term using $c_M$ from Definition 10. It then only remains to show that $E^{\mathrm{src}}_\varepsilon = \frac{\varepsilon^2}{2\sigma^2}$. It follows immediately from Theorem 20 (e.g., paired with a noiseless binary channel) that $E^{\mathrm{src}}_\varepsilon \ge \frac{\varepsilon^2}{2\sigma^2}$, so it suffices to show a matching converse, i.e., $E^{\mathrm{src}}_\varepsilon \le \frac{\varepsilon^2}{2\sigma^2}$. To see this, we simply observe that attaining accuracy $\varepsilon$ implies being able to successfully perform a hypothesis test between two Gaussians with means separated by $2\varepsilon'$, for any $\varepsilon' > \varepsilon$. It is well known that such a hypothesis test has an optimal error exponent of $\frac{(\varepsilon')^2}{2\sigma^2}$ (e.g., see [27, Sec. 11.9]). Taking $\varepsilon' \to \varepsilon$ from above gives the desired result.

V. FURTHER EXTENSIONS AND DISCUSSION

A. Heavy-tailed random variables

In our analysis, the only place where we used the sub-Gaussian style concentration property is in (55). This observation lets us readily handle more general distributions by letting $\alpha$ itself be a more general "base" estimator for a single block (not necessarily the sample mean). In more detail, suppose that there exists an estimator $\alpha(\vec{X})$ operating on blocks $\vec{X}$ of size $k$ and satisfying the following properties:

(i) The estimator satisfies the following for some $\sigma^2 > 0$ and any $\varepsilon \in (0, \tfrac12)$:

$$P(|\alpha(\vec{X}) - \theta^*| \ge \varepsilon) \le 2 \exp\Big(-\frac{k\varepsilon^2}{2\sigma^2}(1 + o(1))\Big). \qquad (83)$$

(ii) In the case that $X$ is a multiple of $1/k$ with probability one, there are only at most $\mathrm{poly}(k)$ values within the range $[-k, k]$ that the estimator can output.
(iii) Property (i) continues to hold (possibly with a modified $1 + o(1)$ term) after rounding to a multiple of $1/k$ in the manner described following Assumption 22.⁵

Under these conditions, we can apply the exact same analysis as the sub-Gaussian case, with the encoder now forming $\alpha$ using the base estimator rather than the sample mean. The rest of the analysis is unchanged, and the error exponent in (45) is again achievable.

As a well-known example, we note that the median-of-means estimator with suitably-chosen parameters satisfies property (i) above with $\sigma^2 = 32\,\mathrm{Var}[X]$ [13, Thm. 2], and properties (ii) and (iii) are straightforward to verify.⁶ Thus, this estimator attains sub-Gaussian-like concentration assuming nothing other than finite variance. The above argument allows us to transfer this guarantee for direct observations to an error exponent for our setting with coded relayed observations.

B. Estimation beyond $\theta^* \in [0,1]$

In Section IV we mentioned that for sub-Gaussian sources some restriction is needed on $\theta^*$, and we focused on $\theta^* \in [0,1]$. However, the protocol and analysis easily extend to $\theta^* \in [-C, C]$ for any $C = \mathrm{poly}(k)$. Since our analysis permits setting $k = \sqrt{n}$, this means that we can handle any $C = \mathrm{poly}(n)$. The idea is that in the definition of $\alpha$ in (47), we clip the values to $[-C - k, C + k]$ instead of $[-k, k]$, and we are still left with
only polynomially many possible outcomes. The rest of the analysis is essentially unchanged except for the polynomial pre-factors. We also note that our converse results are based on Lemma 3, and if we generalize from $\theta^* \in [0,1]$ to $\theta^* \in [-C, C]$ then we should replace $E^{\mathrm{chan}}_{\lceil 1/(2\varepsilon)\rceil}$ by $E^{\mathrm{chan}}_{\lceil C/\varepsilon\rceil}$ therein.

C. Vector-valued sources

Consider the case where the source is $d$-dimensional, and the setup from one of our previous sections (Bernoulli in Section III or sub-Gaussian in Section IV) applies in every entry indexed by $i \in \{1, \ldots, d\}$. We claim that in such a scenario, we can achieve the same error exponent in terms of $\varepsilon$-accuracy in each component (separately) as what we achieved for the case that $d = 1$.

To see this, we simply modify the protocol such that if $f(k)$ codewords were used when $d = 1$, then $f(k)^d$ codewords are used more generally, and these are used to encode the behavior of each of the $d$ entries (e.g., the total number of 1s in each entry in the Bernoulli case). Since $f(k)$ is polynomial, so is $f(k)^d$ for any constant $d$, and thus we can still use Lemma 8 and Corollary 12. The subsequent analysis is essentially unchanged except that there are more $\alpha$ values, but still only polynomially many. Moreover, since the error events have exponentially small probability, we can safely take a union bound over the $d$ components.

⁵With only minor adjustments, we can generalize $1/k$ to $1/k^C$ for any $C > 0$.

⁶For property (ii), note that each of the ($k$ or fewer) intermediate means in the median-of-means method takes on one of $\mathrm{poly}(k)$ values (similar to Section IV-B), and the median will equal one of these intermediate means. For property (iii), similar to Section IV-B, we can write the rounded distribution as $X + \Delta$ and then use $\mathrm{Var}[X + \Delta] = \mathrm{Var}[X] + 2\,\mathrm{Cov}[X, \Delta] + \mathrm{Var}[\Delta]$, which simplifies to $\mathrm{Var}[X] + O(1/k)$ by writing $\mathrm{Cov}[X, \Delta] \le \sqrt{\mathrm{Var}[X]\,\mathrm{Var}[\Delta]}$ and $\mathrm{Var}[\Delta] = O(1/k^2)$.
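A minimal sketch of the median-of-means estimator invoked in Section V-A (the group count below is our arbitrary choice for illustration; [13] specifies the parameter choices needed for the stated guarantee):

```python
import random, statistics

def median_of_means(samples, num_groups):
    """Split samples into num_groups equal groups, average each group,
    and return the median of the group means."""
    m = len(samples) // num_groups
    means = [sum(samples[g * m:(g + 1) * m]) / m for g in range(num_groups)]
    return statistics.median(means)

# Heavy-tailed example: Pareto(3) has mean 1.5 and finite variance, so
# median-of-means concentrates around 1.5 despite occasional large samples.
rng = random.Random(0)
data = [rng.paretovariate(3) for _ in range(10_000)]
est = median_of_means(data, num_groups=20)
```

The median step makes the estimator insensitive to a few groups being corrupted by extreme samples, which is exactly the mechanism behind the sub-Gaussian-like tail bound (83) under a finite-variance assumption only.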
However, it may also be of interest to understand error guarantees beyond the entry-wise one, e.g., the squared $\ell_2$-error. In such scenarios, entry-wise analyses appear to be insufficient, and parts of our protocol may require non-trivial changes (e.g., how to generalize the choice $\hat{\theta}_n = \frac12(\min S + \max S)$). Such considerations are left for future work.

D. Discussion on computational complexity

While our focus is on information-theoretic limits rather than computational considerations, we note that our protocol runs in polynomial time, even when an "unstructured" (e.g., random) code is used for the codebook. At the teacher, all that needs to be done is to compute $\alpha$ and transmit the corresponding codeword. At the student, there are more steps of computation involved; starting with the Bernoulli+BSC case, we note the following:
• Regarding the codebook used, the $k+1$ codewords of length $k$ can be stored in a size-$O(k^2)$ lookup table.
• The function $g$ in (20) can be computed in $O(k)$ time by using a product over $k$ symbols to evaluate $P(\vec{Z} \mid \vec{W}_\alpha)$.
• The function $f$ in (21) can be computed in $O(k^2)$ time by summing over $k+1$ values of $\alpha$, each of which computes $g$.
• We can compute any given $\prod_j f(\theta, \vec{Z}_j)$ in (22) in $O(nk)$ time
since there are $n/k - 1$ values of $j$.
• Hence, we can compute the set $S$ (and thus $\hat{\theta}_n$) in time $O(n^2 k)$, since there are $O(n)$ values of $\theta$ to consider.

In the sub-Gaussian case, a similar argument applies, but there are $O(k^3)$ values of $\alpha$ and $O(n^2)$ values of $\theta$, so the computation time is $O(n^3 k^3)$. This is perhaps higher than ideal, but nevertheless polynomial time. For both the Bernoulli+BSC case and the sub-Gaussian case, an interesting direction for future work would be to attain similar results with a more efficient decoder (e.g., $n^{1+o(1)}$-time).

VI. CONCLUSION

We have studied large deviations bounds for the problem of statistical mean estimation with coded relayed observations. For both Bernoulli and sub-Gaussian sources, and both BSCs and general DMCs, we have provided a block-structured protocol whose error exponent is optimal in broad cases of interest. We conclude by highlighting some directions that may be of interest in future work:
1) Our current results are strongest in medium-accuracy and high-accuracy regimes (i.e., small-to-moderate $\varepsilon$). In contrast, gaps still remain in lower-accuracy regimes (i.e., high $\varepsilon$), and in such cases it remains unclear whether or not there exists a gap between the causal and non-causal settings.
2) We only provided partial results for vector-valued sources, considering component-wise recovery. It remains open to handle other performance measures such as the $\ell_2^2$ error, which may require different techniques.
3) We focused on minimax bounds (over $\theta^* \in [0,1]$ and/or over the sub-Gaussian class), and it would be of interest to better understand refined instance-dependent results (e.g., depending on $\theta^*$ in the Bernoulli case, or depending on other distributional properties in the sub-Gaussian case).
4) Our protocols rely on the teacher knowing both $\varepsilon$ and $P_{Z|W}$, as well as $\sigma^2$ in the sub-Gaussian setting, whereas ideally one might use a "universal" strategy that does not require such knowledge.
5) While our protocols are polynomial-time, the polynomial powers are higher than ideal, and generally speaking we do not claim our protocols to be "practical". It is therefore a natural direction to study protocols that have a smaller runtime and/or are specifically targeted at attaining strong experimental performance.

APPENDIX

A. Proof of Lemma 3 (Non-causal setting)

1) Converse part: The first term in the converse is trivial by the definition of $E^{\mathrm{src}}_\varepsilon$. For the second term, we consider the case that $\theta^*$ is restricted to be an integer multiple of $2\varepsilon'$ for some $\varepsilon' > \varepsilon$. Since $\theta^* \in [0,1]$, by letting $\varepsilon'$ be sufficiently close to $\varepsilon$, the number of such $\theta^*$ values is $M = \lceil \frac{1}{2\varepsilon} \rceil$. For example, if $\varepsilon = 1/8$ then the points are approximately $\{0, 1/4, 1/2, 3/4\}$ but with the gaps expanded very slightly, giving $M = 4$; for slightly smaller $\varepsilon$ this would increase to $M = 5$ due to the addition of a fifth point near 1. Observe that even if the teacher knows $\theta^*$ exactly, it is still required to send the student sufficient information to identify the value from the size-$M$ set. This amounts to sending one of $M = \lceil 1/(2\varepsilon) \rceil$ messages over the channel, and the associated exponent is $E^{\mathrm{chan}}_M = E^{\mathrm{chan}}_{\lceil 1/(2\varepsilon)\rceil}$, thus yielding the desired second term.

2) Achievability part: For the achievability part, we fix $\delta \in (0,1)$ and consider the
https://arxiv.org/abs/2505.09098v1
following protocol for the non-causal setting:
•The teacher uses an optimal estimation procedure for direct observations to estimate $\theta^*$ to within accuracy $(1-\delta)\varepsilon$; this estimate is denoted by $\tilde\theta$.
•The teacher rounds $\tilde\theta$ to the nearest point in a set $I$ of points in $[0,1]$. We design this set such that every $\theta \in [0,1]$ has distance at most $\delta\varepsilon$ to the nearest point. For instance, this can be achieved as follows:
–Start with the set $I = \{\delta\varepsilon, 3\delta\varepsilon, 5\delta\varepsilon, \dots\} \cap [0,1]$;
–If $\theta = 1$ is not $\delta\varepsilon$-close to the last point, then add $\theta = 1$ itself to $I$.
With this construction, the size of $I$ is $\lceil \frac{1}{2\delta\varepsilon} \rceil$; for example, if $\delta\varepsilon = \frac14$ then $I = \{1/4, 3/4\}$, but if $\varepsilon$ is decreased slightly then we need to add a third point (namely, 1). After rounding the initial estimate to $I$ (i.e., performing quantization), the corresponding index is sent to the student using an error-correcting code of length $n$ with $|I|$ messages.
•The student decodes the received sequence to obtain the corresponding point in $I$, and this forms the final estimate $\hat\theta_n$.
The teacher's own estimation incurs an error of up to $(1-\delta)\varepsilon$, and the subsequent rounding incurs up to another $\delta\varepsilon$, for a total of at most $\varepsilon$. Since the number of codewords in the codebook is $\lceil \frac{1}{2\delta\varepsilon} \rceil$, the result readily follows.

Remark 24. In the above analysis, we consider uniform quantization for simplicity, but improved quantization strategies may be possible, e.g., quantizing more finely in regions where estimation is known to be more difficult. (In particular, see (42) for a relevant $\theta^*$-dependent bound in the Bernoulli case.) The above strategy is only introduced as a baseline, rather than something that we seek to optimize carefully.

B. Proof of Lemma 5 (Achievability via simple forwarding)

Observe that under simple forwarding, the student has access to $n-1$ i.i.d. observations drawn from ${\rm Ber}(\tilde\theta^*)$, where
$$\tilde\theta^* = \theta^*(1-p) + (1-\theta^*)p = \theta^*(1-2p) + p.$$
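Two of the elementary steps above lend themselves to a short sketch: building the quantization grid $I$, and (anticipating the next step of the proof) inverting the map $\tilde\theta^* = \theta^*(1-2p)+p$ that describes a Bernoulli mean seen through a BSC($p$). The following is a minimal sketch; the function names `build_grid`, `quantize`, and `debias_bsc` are ours, not the paper's.

```python
def build_grid(delta_eps):
    """Grid I = {de, 3de, 5de, ...} intersected with [0,1], appending the
    endpoint 1 when it is not within de of the last grid point."""
    pts, x = [], delta_eps
    while x <= 1.0:
        pts.append(x)
        x += 2 * delta_eps
    if not pts or 1.0 - pts[-1] > delta_eps:
        pts.append(1.0)
    return pts

def quantize(theta, grid):
    """Round an estimate to the nearest grid point (at most de away)."""
    return min(grid, key=lambda g: abs(g - theta))

def debias_bsc(theta_tilde, p):
    """Invert the BSC mean map: theta = (theta_tilde - p)/(1 - 2p), p != 1/2."""
    return (theta_tilde - p) / (1 - 2 * p)

assert build_grid(0.25) == [0.25, 0.75]          # de = 1/4 gives I = {1/4, 3/4}
assert quantize(0.4, build_grid(0.25)) == 0.25
assert abs(debias_bsc(0.66, 0.1) - 0.7) < 1e-12  # Ber(0.7) through BSC(0.1): mean 0.66
```

For $\delta\varepsilon = 1/8$ the grid has $\lceil 1/(2\delta\varepsilon)\rceil = 4$ points, matching the size claimed above.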
(84)
Accordingly, the student may apply any estimation procedure to form an estimate $\tilde\theta$ of $\tilde\theta^*$, and then form the final estimate by shifting and scaling:
$$\hat\theta = \frac{\tilde\theta - p}{1 - 2p}. \qquad (85)$$
Clearly, the estimate $\hat\theta$ is $\varepsilon$-accurate if and only if the estimate $\tilde\theta$ is $\varepsilon(1-2p)$-accurate, and the desired result then follows from the definition of $E^{\rm src}_{\varepsilon}$ (the distinction between $n-1$ vs. $n$ samples does not impact the error exponent).

C. Proof of Lemma 4 (Achievability via one-shot estimate-and-forward)

The analysis is identical to the achievability part in Appendix A, except that the teacher's block length is reduced from $n$ to $(1-\lambda)n$, and the student's block length is reduced from $n$ to $\lambda n$. The details are omitted to avoid repetition.

D. Proof of Corollary 12 (Existence of good codebooks for general DMCs)

Let $P_W$ be the distribution achieving the maximum in (13). Consider random coding in which each symbol of each codeword is randomly drawn according to $P_W$. Then, we can write $d_B(\vec W, \vec W')$ as a sum of i.i.d. variables:
$$d_B(\vec W, \vec W', P_{Z|W}) = \sum_i d_B(\vec W_i, \vec W'_i, P_{Z|W}), \qquad (86)$$
where the equality follows from the tensorization of $d_B$ (Lemma 7). The expected value is $k \cdot E^{\rm chan}(0)$, and we proceed to separately study the cases $E^{\rm chan}(0) < \infty$ and $E^{\rm chan}(0) = \infty$. When $E^{\rm chan}(0) < \infty$, the terms $d_B(\vec W_i, \vec W'_i, P_{Z|W})$ in (86) are all finite. As a result, for any $c < 1$, the probability that $d_B(\vec W, \vec W', P_{Z|W}) < ck E^{\rm chan}(0)$ is exponentially
small in $k$ by Hoeffding's inequality [10, Sec. 2.6]. Since there are polynomially many codewords, by the union bound, it holds with high probability that $d_B(\vec W, \vec W', P_{Z|W}) \ge (k - o(k)) E^{\rm chan}(0)$. In the case that $E^{\rm chan}(0) = \infty$, there must exist two symbols $w, w'$ with $d_B(w, w', P_{Z|W}) = \infty$ and $P_W(w) P_W(w') > 0$. Moreover, we have $d_B(\vec W, \vec W') = \infty$ as long as there exists a single position $i \in \{1, \dots, n\}$ where one codeword has $w$ and the other has $w'$. Since the codewords are i.i.d., this occurs with probability approaching 1 exponentially fast, and thus a similar argument to the case $E^{\rm chan}(0) < \infty$ applies.

E. Proof of Lemma 19 (Simplification of exponent for the Bernoulli/BSC setting)

We seek to show that $|\theta' - \theta^*| \ge 2\varepsilon$ implies $d_B(\theta^*, \theta') \ge D(\tfrac12 \| \tfrac12 + \varepsilon)$. Using the shorthand $x = \frac{\theta^* + \theta'}{2}$ and $y = \frac{\theta^* - \theta'}{2}$, we have
$$\rho(\theta^*, \theta') = \sqrt{\theta^* \theta'} + \sqrt{(1-\theta^*)(1-\theta')} = \sqrt{(x+y)(x-y)} + \sqrt{(1-x-y)(1-x+y)}. \qquad (87)$$
We will now show that (87) achieves its maximum at $x = \frac12$. Differentiating with respect to $x$, we have⁷
$$\frac{\partial}{\partial x}\Big[\sqrt{(x+y)(x-y)} + \sqrt{(1-x-y)(1-x+y)}\Big] = \frac{x}{\sqrt{x^2 - y^2}} - \frac{1-x}{\sqrt{(1-x)^2 - y^2}}. \qquad (88)$$
It follows that when $x = \frac12$, the derivative is 0. We now compute the second derivative:
$$\frac{\partial^2}{\partial x^2}\Big[\sqrt{(x+y)(x-y)} + \sqrt{(1-x-y)(1-x+y)}\Big] = -\frac{y^2}{(x^2-y^2)^{3/2}} - \frac{y^2}{((1-x)^2 - y^2)^{3/2}} < 0, \quad \forall x, y, \qquad (89)$$
so $x = \frac12$ is a global maximum. Therefore,
$$\rho(\theta^*, \theta') \le 2\sqrt{\big(\tfrac12 + y\big)\big(\tfrac12 - y\big)} \le 2\sqrt{\tfrac14 - \varepsilon^2} = \exp\Big(-D\big(\tfrac12 \,\big\|\, \tfrac12 + \varepsilon\big)\Big), \qquad (90)$$
where the middle step uses $(\tfrac12 + y)(\tfrac12 - y) = \tfrac14 - y^2$ along with $|y| \ge \varepsilon$ (since $y = \frac{\theta^* - \theta'}{2}$ and $|\theta' - \theta^*| \ge 2\varepsilon$), and the last step uses Lemma 6. Taking the negative logarithm, we obtain
$$d_B(\theta^*, \theta') \ge D\big(\tfrac12 \,\big\|\, \tfrac12 + \varepsilon\big), \qquad (91)$$
as desired.

REFERENCES

[1] V. Jog and P. L. Loh, "Teaching and learning in uncertainty," IEEE Transactions on Information Theory, vol. 67, no. 1, pp. 598–615, 2021.
[2] W. Huleihel, Y. Polyanskiy, and O. Shayevitz, "Relaying one bit across a tandem of binary-symmetric channels," IEEE International Symposium on Information Theory (ISIT), 2019.
[3] Y. H. Ling and J. Scarlett, "Optimal rates of teaching and learning under uncertainty," IEEE Transactions on Information Theory, vol.
61, no. 11, pp. 7067–7080, 2021.
[4] ——, "Optimal 1-bit error exponent for 2-hop relaying with binary-input channels," IEEE Transactions on Information Theory, 2024.
[5] ——, "Multi-bit relaying over a tandem of channels," IEEE Transactions on Information Theory, vol. 69, no. 6, pp. 3511–3524, 2023.
[6] ——, "Maxflow-based bounds for low-rate information propagation over noisy networks," IEEE Transactions on Information Theory, vol. 70, no. 6, pp. 3840–3854, 2023.
[7] ——, "Simple coding techniques for many-hop relaying," IEEE Transactions on Information Theory, vol. 68, no. 11, pp. 7043–7053, 2022.
[8] E. Domanovitz, T. Philosof, and A. Khina, "The information velocity of packet-erasure links," in IEEE Conference on Computer Communications. IEEE, 2022, pp. 190–199.

⁷It is straightforward to check from the definitions of $x$ and $y$ that the terms inside the square roots in (88) are never negative.

[9] E. Domanovitz, A. Khina, T. Philosof, and Y. Kochman, "Information velocity of cascaded Gaussian channels with feedback," IEEE Journal on Selected Areas in Information Theory, 2024.
[10] S. Boucheron, G. Lugosi, and P. Massart, Concentration Inequalities: A Nonasymptotic
Theory of Independence. OUP Oxford, 2013.
[11] R. Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge University Press, 2018, vol. 47.
[12] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications. Springer Science & Business Media, 2009, vol. 38.
[13] G. Lugosi and S. Mendelson, "Mean estimation and regression under heavy-tailed distributions: A survey," Foundations of Computational Mathematics, vol. 19, no. 5, pp. 1145–1190, 2019.
[14] C. Shannon, R. Gallager, and E. Berlekamp, "Lower bounds to error probability for coding on discrete memoryless channels. I," Information and Control, vol. 10, no. 1, pp. 65–103, 1967.
[15] ——, "Lower bounds to error probability for coding on discrete memoryless channels. II," Information and Control, vol. 10, no. 5, pp. 522–552, 1967.
[16] R. Blahut, "Hypothesis testing and information theory," IEEE Transactions on Information Theory, vol. 20, no. 4, pp. 405–417, 1974.
[17] J. Acharya, C. Canonne, Y. Liu, Z. Sun, and H. Tyagi, "Distributed estimation with multiple samples per user: Sharp rates and phase transition," Advances in Neural Information Processing Systems, vol. 34, pp. 18920–18931, 2021.
[18] Y. Zhang, J. Duchi, M. I. Jordan, and M. J. Wainwright, "Information-theoretic lower bounds for distributed statistical estimation with communication constraints," in Advances in Neural Information Processing Systems, vol. 26. Curran Associates, Inc., 2013.
[19] Y. Han, A. Özgür, and T. Weissman, "Geometric lower bounds for distributed parameter estimation under communication constraints," in Conference On Learning Theory. PMLR, 2018, pp. 3163–3188.
[20] ——, "Geometric lower bounds for distributed parameter estimation under communication constraints," in Conference On Learning Theory. PMLR, 2018, pp. 3163–3188.
[21] S. Salehkalaibar, M. Wigger, and L.
Wang, "Hypothesis testing over the two-hop relay network," IEEE Transactions on Information Theory, vol. 65, no. 7, pp. 4411–4433, 2019.
[22] Y. Gao, W. Liu, H. Wang, X. Wang, Y. Yan, and R. Zhang, "A review of distributed statistical inference," Statistical Theory and Related Fields, vol. 6, no. 2, pp. 89–99, 2022.
[23] R. Ahlswede and I. Csiszár, "Hypothesis testing with communication constraints," IEEE Transactions on Information Theory, vol. 32, no. 4, pp. 533–542, 1986.
[24] J. Acharya, C. L. Canonne, Y. Liu, Z. Sun, and H. Tyagi, "Interactive inference under information constraints," IEEE Transactions on Information Theory, vol. 68, no. 1, pp. 502–516, 2021.
[25] O. Rivasplata, "Subgaussian random variables: An expository note," November 2012. [Online]. Available: https://www.stat.cmu.edu/~arinaldo/36788/subgaussians.pdf
[26] V. V. Buldygin and Y. V. Kozachenko, "Sub-Gaussian random variables," Ukrainian Mathematical Journal, vol. 32, pp. 483–489, 1980.
[27] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, Inc., 2006.
arXiv:2505.09152v1 [math.ST] 14 May 2025

Nelson-Aalen kernel estimator of the tail index of right-censored Pareto-type data

Nour Elhouda Guesmia, Abdelhakim Necir*, Djamel Meraghni
Laboratory of Applied Mathematics, Mohamed Khider University, Biskra, Algeria

On the basis of the Nelson-Aalen product-limit estimator of a randomly censored distribution function, we introduce a kernel estimator of the tail index of right-censored Pareto-like data. Under some regularity assumptions, the consistency and asymptotic normality of the proposed estimator are established. A small simulation study shows that the proposed estimator performs much better, in terms of bias and stability, than the existing ones, with a slight increase in the mean squared error. The results are applied to insurance loss data to illustrate the practical effectiveness of our estimator.

Keywords and phrases: Extreme value index; Kernel estimation; Random censoring; Weak convergence.
AMS 2020 Subject Classification: Primary 62G32; 62G05; 62G20.

*Corresponding author: ah.necir@univ-biskra.dz
E-mail addresses: nourelhouda.guesmia@univ-biskra.dz (N. Guesmia), djamel.meraghni@univ-biskra.dz (D. Meraghni)

1. Introduction

Right censored data, typified by partially observed values, present a common challenge in statistics. Indeed, this type of data is known only within a certain range and remains unspecified outside this range. This phenomenon is observed in fields such as actuarial science, survival analysis and reliability engineering. Heavy-tailed distributions are used to model data with extreme values. Compared to the normal distribution, they are characterized by a significant probability of observing values far from the mean. These distributions are crucial in applications like insurance, finance, and environmental studies, where rare values can have a substantial impact.
Let $X_1, X_2, \dots, X_n$ be a sample of size $n \ge 1$ from a random variable (rv) $X$, and let $C_1, C_2, \dots, C_n$ be another sample from a rv $C$, defined on a probability space $(\Omega, \mathcal{A}, \mathbb{P})$, with continuous cumulative distribution functions (cdf's) $F$ and $G$, respectively. Assume that $X$ and $C$ are independent. Also, we suppose that $X$ is right censored by $C$, that is, for each stage $1 \le j \le n$, we can only observe the variable $Z_j := \min(X_j, C_j)$ and the indicator variable $\delta_j := I\{X_j \le C_j\}$, which determines whether or not $X$ has been observed. We assume that the tail functions $\overline{F} := 1 - F$ and $\overline{G} := 1 - G$ are regularly varying at infinity (or Pareto-type) with positive tail indices $\gamma_1 > 0$ and $\gamma_2 > 0$ respectively. In other words, for any $x > 0$, we have
$$\lim_{u \to \infty} \frac{\overline{F}(ux)}{\overline{F}(u)} = x^{-1/\gamma_1} \quad \text{and} \quad \lim_{u \to \infty} \frac{\overline{G}(ux)}{\overline{G}(u)} = x^{-1/\gamma_2}. \qquad (1.1)$$
We denote the cdf of $Z$ by $H$; then by the independence of $X$ and $C$, we have $\overline{H} = \overline{F} \times \overline{G}$, which yields that $\overline{H}$ is also regularly varying at infinity, with tail index $\gamma := \gamma_1\gamma_2/(\gamma_1 + \gamma_2)$. In the presence of extreme values and right censored data, several methods have been proposed for estimating the tail index. Numerous techniques have been developed to address the challenges arising from the nature of such data. Many studies have focused on modifying traditional tail index estimation procedures, such as Hill's methodology (Hill, 1975), to make them suitable for censored data. For instance, Einmahl et al. (2008) adapted the latter to handle the characteristics of right censored data. Their estimator is given by $\widehat\gamma_1^{(EFG)} := \widehat\gamma^{(Hill)}/\widehat p$, where
$$\widehat\gamma^{(Hill)} := k^{-1}\sum_{i=1}^{k} \log(Z_{n-i+1:n}/Z_{n-k:n})$$
denotes the popular Hill estimator of the tail index $\gamma$ corresponding to the complete $Z$-sample and
$$\widehat p := k^{-1}\sum_{i=1}^{k} \delta_{[n-i+1:n]} \qquad (1.2)$$
stands for an estimator of the proportion of upper non-censored observations
https://arxiv.org/abs/2505.09152v1
$$p := \gamma_2/(\gamma_1 + \gamma_2). \qquad (1.3)$$
The integer sequence $k = k_n$ represents the number of top order statistics, satisfying $k \to \infty$ and $k/n \to 0$ as $n \to \infty$. The sequence of rv's $Z_{1:n} \le \dots \le Z_{n:n}$ represents the order statistics pertaining to the sample $Z_1, \dots, Z_n$, and $\delta_{[1:n]}, \dots, \delta_{[n:n]}$ denote the corresponding concomitant values, satisfying $\delta_{[j:n]} = \delta_i$ for $i$ such that $Z_{j:n} = Z_i$. Useful Gaussian approximations to $\widehat p$, $\widehat\gamma^{(Hill)}$ and $\widehat\gamma_1^{(EFG)}$ in terms of sequences of Brownian bridges are given in Brahimi et al. (2015). On the basis of Kaplan-Meier integration, R. Worms and J. Worms (2014) proposed a consistent estimator of $\gamma_1$ defined by
$$\widehat\gamma_1^{(W)} := \sum_{i=1}^{k} \frac{\overline{F}_n^{(KM)}(Z_{n-i:n})}{\overline{F}_n^{(KM)}(Z_{n-k:n})} \log\frac{Z_{n-i+1:n}}{Z_{n-i:n}},$$
where
$$F_n^{(KM)}(x) := 1 - \prod_{Z_{i:n} \le x} \left(\frac{n-i}{n-i+1}\right)^{\delta_{[i:n]}}$$
denotes the popular Kaplan-Meier estimator of the cdf $F$ (Kaplan and Meier, 1958). The asymptotic normality of $\widehat\gamma_1^{(W)}$ is established in Beirlant et al. (2019) by considering Hall's model (Hall, 1982). The bias reduction in tail index estimation for censored data is addressed by Beirlant et al. (2016), Beirlant et al. (2018) and Beirlant et al. (2019). By using a Nelson-Aalen integration, recently Meraghni et al. (2025) also derived a new $\gamma_1$ estimator
$$\widehat\gamma_1^{(MNS)} := \sum_{i=1}^{k} \frac{\delta_{[n-i+1:n]}}{i}\, \frac{\overline{F}_n^{(NA)}(Z_{n-i+1:n})}{\overline{F}_n^{(NA)}(Z_{n-k:n})} \log\frac{Z_{n-i+1:n}}{Z_{n-k:n}},$$
where
$$F_n^{(NA)}(z) = 1 - \prod_{Z_{i:n} < z} \exp\left\{-\frac{\delta_{[i:n]}}{n-i+1}\right\} \qquad (1.4)$$
denotes the well-known Nelson-Aalen estimator of the cdf $F$ (Nelson, 1972). The authors showed that both tail index estimators $\widehat\gamma_1^{(W)}$ and $\widehat\gamma_1^{(MNS)}$ exhibit similar performances with respect to bias and mean squared error. Actually, comparison studies made between the Kaplan-Meier and Nelson-Aalen estimators reached the conclusion that they exhibit almost similar statistical behaviors; see, for instance, Colosimo et al. (2002). On the other hand, establishing the asymptotic properties of extreme Kaplan-Meier integrals poses some difficulties.
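As a concrete illustration of the estimators recalled so far, the following NumPy sketch (function names are ours, not the paper's) computes the Hill estimator, the proportion $\widehat p$ from (1.2), and the Nelson-Aalen tail value $1 - F_n^{(NA)}(t)$ from (1.4):

```python
import numpy as np

def hill(z, k):
    """Hill estimator: mean of log(Z_{n-i+1:n}/Z_{n-k:n}) over the top k order stats."""
    zs = np.sort(np.asarray(z, dtype=float))
    return float(np.mean(np.log(zs[-k:] / zs[-k - 1])))

def p_hat(z, delta, k):
    """Proportion of non-censored observations among the k upper order stats, Eq. (1.2)."""
    order = np.argsort(z)
    return float(np.mean(np.asarray(delta, dtype=float)[order][-k:]))

def na_tail(z, delta, t):
    """Nelson-Aalen tail 1 - F_n^{NA}(t) = prod_{Z_{i:n} < t} exp(-d_{[i:n]}/(n-i+1)),
    per Eq. (1.4): exponentiated negative cumulative hazard up to t."""
    z, delta = np.asarray(z, dtype=float), np.asarray(delta, dtype=float)
    n = len(z)
    order = np.argsort(z)
    zs, ds = z[order], delta[order]
    mask = zs < t
    return float(np.exp(-np.sum(ds[mask] / (n - np.arange(n)[mask]))))

rng = np.random.default_rng(0)
x = rng.pareto(2.0, size=5000) + 1   # exact Pareto tail with gamma_1 = 1/2
d = np.ones(5000)                    # complete data: gamma^(EFG) = Hill / p_hat = Hill
assert 0.4 < hill(x, 500) / p_hat(x, d, 500) < 0.6
assert na_tail(x, d, float(np.median(x))) < na_tail(x, d, 1.0)  # tail decreases in t
```

With complete data ($\delta_j \equiv 1$) the ratio $\widehat\gamma^{(Hill)}/\widehat p$ reduces to the plain Hill estimator, which is why the sanity check above recovers a value near $\gamma_1 = 1/2$.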
To overcome this issue, Meraghni et al. (2025) introduced the Nelson-Aalen tail-product limit process and established its Gaussian approximation in terms of standard Wiener processes. Thus, they established the consistency and asymptotic normality of their estimator $\widehat\gamma_1^{(MNS)}$ by considering both regular variation conditions (1.1) and (2.10). They also showed that (as expected) both the asymptotic bias and variance of $\widehat\gamma_1^{(W)}$ and $\widehat\gamma_1^{(MNS)}$ are respectively equal. It is well known, in the complete data case, that introducing a kernel function to the tail index estimators produces nice properties in terms of stability and bias; see for instance Csörgő et al. (1985) and Groeneboom et al. (2003). In the right-truncation data case, this problem is addressed by Benchaira et al. (2016b), who proposed a kernel estimator of the tail index smoothing the one given by Benchaira et al. (2016a). Our main goal in this work is to propose a kernel version of $\widehat\gamma_1^{(MNS)}$ and to establish its consistency and asymptotic normality thanks to the aforementioned Gaussian approximation. To this end, let us define a specific function $K: \mathbb{R} \to \mathbb{R}_+$, called the kernel, which satisfies the following conditions:
•[A1] $K$ is non-increasing and right-continuous on $\mathbb{R}$.
•[A2] $K(s) = 0$ for $s \notin [0,1)$ and $K(s) \ge 0$ for $s \in [0,1)$.
•[A3] $\int_{\mathbb{R}} K(s)\,ds = 1$.
•[A4] $K$ and its first and second Lebesgue derivatives $K'$ and $K''$ are bounded on $\mathbb{R}$.
Such functions include, for example, the indicator kernel $K_1 := I\{0 \le s < 1\}$, and the biweight and triweight kernels $K_2$ and $K_3$, which are defined by
$$K_2(s) := \frac{15}{8}\left(1 - s^2\right)^2 \quad \text{and} \quad K_3(s) := \frac{35}{16}\left(1 - s^2\right)^3, \quad 0 \le s < 1, \qquad (1.5)$$
and zero elsewhere; see, e.g., Groeneboom et al. (2003). By applying Potter's inequalities, see e.g.
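The two kernels in (1.5) are simple polynomials, and condition [A3] can be checked numerically. A minimal sketch (vectorized NumPy versions, our own code):

```python
import numpy as np

def K2(s):
    """Biweight kernel (15/8)(1 - s^2)^2 on [0, 1), zero elsewhere, Eq. (1.5)."""
    s = np.asarray(s, dtype=float)
    return np.where((s >= 0) & (s < 1), (15 / 8) * (1 - s**2) ** 2, 0.0)

def K3(s):
    """Triweight kernel (35/16)(1 - s^2)^3 on [0, 1), zero elsewhere, Eq. (1.5)."""
    s = np.asarray(s, dtype=float)
    return np.where((s >= 0) & (s < 1), (35 / 16) * (1 - s**2) ** 3, 0.0)

# [A3]: both kernels integrate to 1 over [0, 1) (trapezoidal check).
s = np.linspace(0.0, 1.0, 100_001)
for K in (K2, K3):
    vals = K(s)
    integral = float(np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(s)))
    assert abs(integral - 1.0) < 1e-3
```

The normalizing constants follow from $\int_0^1 (1-s^2)^2\,ds = 8/15$ and $\int_0^1 (1-s^2)^3\,ds = 16/35$, which is what the numerical check confirms.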
Proposition B.1.10 in de Haan and Ferreira (2006), to the regularly varying function $\overline{F}$, together with assumptions [A1]-[A2], Benchaira et al. (2016b) stated that
$$\lim_{u\to\infty} \int_u^{\infty} g'_K\left(\frac{\overline{F}(x)}{\overline{F}(u)}\right) \log\frac{x}{u}\, \frac{dF(x)}{\overline{F}(u)} = \gamma_1 \int_0^{\infty} K(x)\,dx = \gamma_1, \qquad (1.6)$$
where $g_K(x) := xK(x)$ and $g'_K(x) := dg_K(x)/dx$ denotes the Lebesgue derivative of $g_K$. By letting $u = Z_{n-k:n}$ and
substituting $F$ by the Nelson-Aalen estimator $F_n^{(NA)}$, we derive a kernel estimator of the tail index $\gamma_1$ as follows:
$$\widehat\gamma_{1,K} := \int_{Z_{n-k:n}}^{\infty} g'_K\left(\frac{\overline{F}_n^{(NA)}(x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})}\right) \log\frac{x}{Z_{n-k:n}}\, \frac{dF_n^{(NA)}(x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})}. \qquad (1.7)$$
Let us rewrite the previous integral in a more friendly form, as a sum of terms from the $Z$-sample. To this end, we use the following crucial representation of the cdf $F$ in terms of the estimable functions $H$ and $H^{(1)}$:
$$\int_0^z \frac{dH^{(1)}(x)}{\overline{H}(x-)} = \int_0^z \frac{dF(x)}{\overline{F}(x)} = -\log \overline{F}(z), \quad \text{for } z > 0,$$
where $H^{(1)}(z) := \mathbb{P}(Z \le z, \delta = 1) = \int_0^z \overline{G}(x)\,dF(x)$. This implies that
$$\overline{F}(z) = \exp\left\{-\int_0^z \frac{dH^{(1)}(x)}{\overline{H}(x-)}\right\}, \quad \text{for } z > 0, \qquad (1.8)$$
where $\overline{H}(x-) := \lim_{\epsilon \downarrow 0} \overline{H}(x - \epsilon)$. The empirical counterparts of the cdf $H$ and the sub-distribution $H^{(1)}$ are given by
$$H_n(z) := n^{-1}\sum_{i=1}^{n} I\{Z_{i:n} \le z\} \quad \text{and} \quad H_n^{(1)}(z) := n^{-1}\sum_{i=1}^{n} \delta_{[i:n]} I\{Z_{i:n} \le z\};$$
see, for instance, Shorack and Wellner (1986), page 294. Note that $\overline{H}_n(Z_{n:n}) = 0$; for this reason, and to avoid dividing by zero, we use $\overline{H}(x-)$ instead of $\overline{H}(x)$ in formula (1.8). The Nelson-Aalen estimator $\overline{F}_n^{(NA)}(z)$ is obtained by substituting both $H_n$ and $H_n^{(1)}$ in the right-hand side of formula (1.8). That is, we have
$$\overline{F}_n^{(NA)}(z) := \exp\left\{-\int_0^z \frac{dH_n^{(1)}(x)}{\overline{H}_n(x-)}\right\}. \qquad (1.9)$$
Since $\overline{H}_n(Z_{i:n}-) = \overline{H}_n(Z_{(i-1):n}) = (n-i+1)/n$, the previous integral leads to the expression of $F_n^{(NA)}(z)$ in (1.4). Differentiating both sides of formula (1.9) yields that
$$dF_n^{(NA)}(z) = \frac{\overline{F}_n^{(NA)}(z)}{\overline{H}_n(z-)}\, dH_n^{(1)}(z).$$
By substituting $dF_n^{(NA)}(z)$ by the above in (1.7), we end up with
$$\widehat\gamma_{1,K} = \sum_{i=1}^{k} \frac{\delta_{[n-i+1:n]}}{i}\, \frac{\overline{F}_n^{(NA)}(Z_{n-i+1:n})}{\overline{F}_n^{(NA)}(Z_{n-k:n})}\, g'_K\left(\frac{\overline{F}_n^{(NA)}(Z_{n-i+1:n})}{\overline{F}_n^{(NA)}(Z_{n-k:n})}\right) \log\frac{Z_{n-i+1:n}}{Z_{n-k:n}}.$$
The remainder of the paper is structured as follows. In Section 2, we present our main results, namely the consistency and asymptotic normality of $\widehat\gamma_{1,K}$, the proofs of which are postponed to Section 5.
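The sum form of $\widehat\gamma_{1,K}$ displayed above can be sketched directly in code. The following is a rough sketch only (our own function names; the convention for whether the jump at each $Z_{i:n}$ is included in the tail ratios — strict versus weak inequality in (1.4) — is fixed arbitrarily here), using the biweight kernel, for which $g'_{K_2}(s) = \frac{15}{8}(1-s^2)(1-5s^2)$:

```python
import numpy as np

def gprime_K2(s):
    """Derivative of g_K(s) = s*K2(s) for the biweight kernel:
    g'(s) = (15/8)(1 - s^2)(1 - 5 s^2) on [0, 1), zero elsewhere."""
    s = np.asarray(s, dtype=float)
    return np.where((s >= 0) & (s < 1), (15 / 8) * (1 - s**2) * (1 - 5 * s**2), 0.0)

def gamma1_K(z, delta, k, gprime=gprime_K2):
    """Kernel estimator as the sum over the k upper order statistics, with the
    Nelson-Aalen tail values computed from the cumulative hazard (a sketch)."""
    z, delta = np.asarray(z, dtype=float), np.asarray(delta, dtype=float)
    n = len(z)
    order = np.argsort(z)
    zs, ds = z[order], delta[order]
    haz = ds / (n - np.arange(n))      # delta_{[i:n]} / (n - i + 1), i = 1..n
    surv = np.exp(-np.cumsum(haz))     # NA tail just after each Z_{i:n}
    denom = surv[n - k - 1]            # NA tail at the threshold Z_{n-k:n}
    total = 0.0
    for i in range(1, k + 1):
        r = float(surv[n - i] / denom)  # tail ratio at Z_{n-i+1:n}, in (0, 1]
        total += ds[n - i] / i * r * float(gprime(r)) * np.log(zs[n - i] / zs[n - k - 1])
    return float(total)

# Sanity check on complete (uncensored) exact-Pareto data with gamma_1 = 1/2.
rng = np.random.default_rng(0)
x = rng.pareto(2.0, size=5000) + 1
est = gamma1_K(x, np.ones(5000), k=500)
assert 0.35 < est < 0.65
```

Note that $g'_{K_2}$ changes sign at $s = 1/\sqrt{5}$, so individual summands have mixed signs; the estimator is consistent because $\int_0^1 g'_K(s)\log(1/s)\,ds = \int_0^1 K(s)\,ds = 1$ after integration by parts.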
In Section 3, we illustrate, through a simulation study, the finite sample behavior of our estimator, with a comparison with already existing ones. In Section 4, we apply our results to a real dataset of insurance losses.

2. Main results

Since weak approximations of extreme value theory based statistics are achieved in the second-order framework (see e.g. page 48 in de Haan and Ferreira, 2006), it seems quite natural to suppose that the cdf $F$ satisfies the well-known second-order condition of regular variation. That is, we assume that, for any $x > 0$, we have
$$\lim_{t\to\infty} \frac{U_F(tx)/U_F(t) - x^{\gamma_1}}{A_1^*(t)} = x^{\gamma_1}\frac{x^{\tau_1} - 1}{\tau_1}, \qquad (2.10)$$
where $\tau_1 \le 0$ is the second-order parameter and $A_1^*$ is a function tending to 0, not changing sign near infinity and having a regularly varying absolute value at infinity with index $\tau_1$. If $\tau_1 = 0$, interpret $(x^{\tau_1} - 1)/\tau_1$ as $\log x$. The functions $L^{\leftarrow}(s) := \inf\{x : L(x) \ge s\}$, for $0 < s < 1$, and $U_L(t) := L^{\leftarrow}(1 - 1/t)$, $t > 1$, respectively stand for the quantile and tail quantile functions of any cdf $L$. For convenience, we set $A_1(t) := A_1^*(1/\overline{F}(t))$, for $t > 1$, and $h := U_H(n/k)$.

Theorem 2.1. Assume that the cdf's $F$ and $G$ satisfy the first-order condition of regular variation (1.1) with $\gamma_1 < \gamma_2$, and let $K$ be a kernel function satisfying assumptions [A1]-[A4]. For a given integer sequence $k = k_n$ such that $k \to \infty$ and $k/n \to 0$, we have $\widehat\gamma_{1,K} \stackrel{\mathbb{P}}{\to} \gamma_1$ as $n \to \infty$. In addition, if we assume that the second-order condition of regular variation (2.10) holds with $\sqrt{k}A_1(h) \to \lambda < \infty$, then $\sqrt{k}(\widehat\gamma_{1,K} - \gamma_1) \stackrel{\mathcal{D}}{\to} \mathcal{N}(\mu_K, \sigma_K^2)$ as $n \to \infty$, where $\mu_K := \lambda\int_0^1 s^{-\tau_1}K(s)\,ds$ and $\sigma_K^2 := \gamma_1^2\int_0^1 s^{-1/p+1}K^2(s)\,ds$.

Remark 2.1. It is clear that $\mu_{K_1} = \lambda/(1 - \tau_1)$ and $\sigma_{K_1}^2 = p\gamma_1^2/(2p-1)$ respectively coincide with the asymptotic mean and variance of $\widehat\gamma_1^{(GNM)}$ and $\widehat\gamma_1^{(W)}$.

Remark 2.2. In the case of complete data,
that is when $p = 1$, $\mu_K$ and $\sigma_K^2$ respectively coincide with the asymptotic mean and variance of CDM's estimator (Csörgő et al., 1985), defined by $\widehat\gamma_K^{(CDM)} := \sum_{i=1}^{k} \varphi_K(i/k)\log(Z_{n-i+1:n}/Z_{n-k:n})$.

Remark 2.3. It is worth remembering that the kernel estimation procedure reduces the asymptotic bias. However, it increases the asymptotic variance, which is the price to pay when reducing the bias in general.

Remark 2.4. When considering the product limit estimators $F_n^{(KM)}$ or $F_n^{(NA)}$ in the process of estimating the tail index of Pareto-type models, the condition $\gamma_1 < \gamma_2$ arises; see for instance R. Worms and J. Worms (2014), Beirlant et al. (2018), Beirlant et al. (2019), Guesmia and Meraghni (2024) and Bladt and Rodionov (2024). Equivalent to $p > 1/2$, this condition means that the proportion of upper non-censored observations has to be greater than 50%, which seems to be quite reasonable.

3. Simulation study

To evaluate the performance of our estimator $\widehat\gamma_{1,K}$ and compare it with $\widehat\gamma_1^{(W)}$ and $\widehat\gamma_1^{(MNS)}$, we consider the Burr model with parameters $(\gamma_1, \eta)$ censored by the Fréchet model with parameter $\gamma_2$, respectively defined by
$$F(x) = 1 - \left(1 + x^{1/\eta}\right)^{-\eta/\gamma_1} \quad \text{and} \quad G(x) = \exp(-x^{-1/\gamma_2}), \quad \text{for } x > 0,$$
and zero elsewhere. We fix $\eta = 0.25$, consider two values 0.4 and 0.6 for $\gamma_1$, and choose two censoring rates $1 - p = 0.40$ and 0.10. For each combination of $\gamma_1$ and $p$, we solve the equation $p = \gamma_2/(\gamma_1 + \gamma_2)$ to get the corresponding value of $\gamma_2$. We construct two estimators $\widehat\gamma_{1,K_2}$ and $\widehat\gamma_{1,K_3}$, which are two versions of $\widehat\gamma_{1,K}$ corresponding to the biweight and triweight kernel functions $K_2$ and $K_3$, defined in (1.5), respectively. We generate 2000 samples of size 1000, and our results, obtained by averaging over all replications, are presented in Figure 3.1. As expected, we note that, unlike $\widehat\gamma_1^{(W)}$ and $\widehat\gamma_1^{(MNS)}$, our estimators enjoy smoothness and bias reduction.
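The data-generating step of this design can be sketched via the inverse-cdf forms $X = (U^{-\gamma_1/\eta} - 1)^{\eta}$ for the Burr model and $C = (-\log V)^{-\gamma_2}$ for the Fréchet model. The snippet below is a sketch assuming NumPy; `simulate_censored` is our own name, not the paper's.

```python
import numpy as np

def simulate_censored(n, gamma1, gamma2, eta=0.25, rng=None):
    """Burr(gamma1, eta) observations right-censored by Frechet(gamma2),
    generated by inverting the two cdf's; returns (Z, delta)."""
    if rng is None:
        rng = np.random.default_rng()
    u, v = rng.uniform(size=n), rng.uniform(size=n)
    x = (u ** (-gamma1 / eta) - 1) ** eta   # Burr: solves 1 - F(x) = u
    c = (-np.log(v)) ** (-gamma2)           # Frechet: solves G(c) = v
    z = np.minimum(x, c)
    delta = (x <= c).astype(int)
    return z, delta

gamma1, p = 0.4, 0.9
gamma2 = p * gamma1 / (1 - p)               # from p = gamma2 / (gamma1 + gamma2)
z, d = simulate_censored(2000, gamma1, gamma2, rng=np.random.default_rng(1))
assert z.shape == (2000,) and np.all(z > 0)
assert set(np.unique(d)) <= {0, 1}
```

Here $p = 0.9$ and $\gamma_1 = 0.4$ give $\gamma_2 = 3.6$, matching the way the paper fixes $\gamma_2$ from each $(\gamma_1, p)$ pair.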
Moreover, we can see in all four graphs of Figure 3.1 that, compared to $\widehat\gamma_1^{(W)}$ and $\widehat\gamma_1^{(MNS)}$, our estimators prove superior stability through the range of $k$. This is a very good and conveniently suitable asset for people dealing with real-world data in case studies, where only single datasets are available and no replications are possible as in simulation studies. Finally, the top-left graph shows that the accuracy of our estimators improves as both the censoring percentage and the tail index value decrease.

Figure 3.1. Plots of $\widehat\gamma_{1,K_2}$ (dotted blue line), $\widehat\gamma_{1,K_3}$ (dotdashed green line), $\widehat\gamma_1^{(MNS)}$ (dashed red line) and $\widehat\gamma_1^{(W)}$ (solid purple line), as functions of the number $k$ of upper order statistics, for $\gamma_1 = 0.4$ (top) and $\gamma_1 = 0.6$ (bottom) with $p = 0.6$ (left panel) and $p = 0.9$ (right panel). The horizontal line represents the true value of $\gamma_1$.

4. Real data application

The data used in this study pertain to insurance losses that are collected by the US Insurance Services Office, Inc. Available in the "copula" package of the statistical software R, these data have been analyzed in several studies, such as in Frees and Valdez (1998), Klugman and Parsa (1999) and Denuit et al. (2006), amongst others. The sample consists of
1500 observations, 34 of which are right censored. In insurance, censoring occurs when the loss exceeds a policy limit representing the maximum amount a company can compensate. Each loss has its own policy limit, which varies from one contract to another. We apply our estimation methodology to this dataset using the biweight and triweight kernel functions, resulting in the estimators $\widehat\gamma_{1,K_2}$ and $\widehat\gamma_{1,K_3}$ respectively. These are plotted, in the right panel of Figure 4.2, as functions of the number of upper order statistics $k$, along with $\widehat\gamma_1^{(MNS)}$ and $\widehat\gamma_1^{(W)}$. We also plot the empirical proportion of upper non-censored observations $\widehat p$ in the left panel of Figure 4.2, where we clearly see that $\widehat p > 1/2$ for the vast majority of $k$ values. Our estimators tend to provide lower values for the tail index, compared to the other two, which clearly appear more volatile, especially for small values of $k$. Overall, all curves exhibit an increasing trend, reflecting the impact of larger numbers of upper order statistics on extreme value based estimation. These results suggest that kernel estimators offer a smoother and more stable alternative, as discussed in Section 3.

Figure 4.2. Insurance loss dataset. Left panel: empirical proportion of upper non-censored observations. Right panel: $\widehat\gamma_{1,K_2}$ (dotted blue line), $\widehat\gamma_{1,K_3}$ (dotdash green line), $\widehat\gamma_1^{(MNS)}$ (dashed red line) and $\widehat\gamma_1^{(W)}$ (solid black line), as functions of $k$.

5. Proof

We begin by proving the consistency of $\widehat\gamma_{1,K}$. By using an integration by parts and a change of variables in formula (1.7), we get
$$\widehat\gamma_{1,K} = \int_1^{\infty} x^{-1} g_K\left(\frac{\overline{F}_n^{(NA)}(Z_{n-k:n}x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})}\right) dx.$$
Taylor's expansion to the second order yields that
$$g_K\left(\frac{\overline{F}_n^{(NA)}(Z_{n-k:n}x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})}\right) - g_K\left(\frac{\overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}\right) = \left(\frac{\overline{F}_n^{(NA)}(Z_{n-k:n}x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})} - \frac{\overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}\right) g'_K\left(\frac{\overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}\right) + \frac12 \left(\frac{\overline{F}_n^{(NA)}(Z_{n-k:n}x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})} - \frac{\overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}\right)^2 g''_K(\eta_n(x)),$$
where $g''_K$ denotes the Lebesgue second derivative of $g_K$ and $\eta_n(x)$ is between $\frac{\overline{F}_n^{(NA)}(Z_{n-k:n}x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})}$ and $\frac{\overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}$. Observe that
$$L_{n,k}(x) := \frac{\overline{F}_n^{(NA)}(Z_{n-k:n}x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})} - \frac{\overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}$$
may be decomposed into the sum of
$$M_{n1}(x) := \frac{\overline{F}_n(Z_{n-k:n}x) - \overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}, \qquad M_{n2}(x) := \left(\frac{\overline{F}(Z_{n-k:n})}{\overline{F}_n(Z_{n-k:n})} - 1\right) \frac{\overline{F}_n(Z_{n-k:n}x) - \overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})},$$
and
$$M_{n3}(x) := -\frac{\overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}\, \frac{\overline{F}_n(Z_{n-k:n}) - \overline{F}(Z_{n-k:n})}{\overline{F}_n(Z_{n-k:n})}.$$
It follows that $\widehat\gamma_{1,K} = I_{1n} + I_{2n} + I_{3n}$, where
$$I_{1n} := \int_1^{\infty} x^{-1} g_K\left(\frac{\overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}\right) dx, \qquad I_{2n} := \int_1^{\infty} x^{-1} \left\{\sum_{i=1}^{3} M_{ni}(x)\right\} g'_K\left(\frac{\overline{F}(Z_{n-k:n}x)}{\overline{F}(Z_{n-k:n})}\right) dx,$$
and $I_{3n} := \int_1^{\infty} x^{-1} L_{n,k}^2(x)\, g''_K(\eta_n(x))\,dx$. On integrating by parts and using the properties of the kernel $K$, we prove that assertion (1.6) is equivalent to
$$\int_1^{\infty} x^{-1} g_K\left(\frac{\overline{F}(ux)}{\overline{F}(u)}\right) dx \to \gamma_1, \quad \text{as } u \to \infty.$$
Since $Z_{n-k:n} \stackrel{\mathbb{P}}{\to} \infty$, then $I_{1n} \stackrel{\mathbb{P}}{\to} \gamma_1$ as $n \to \infty$ as well. Now, we need to show that both terms $I_{2n}$ and $I_{3n}$ tend to zero in probability.
Meraghni et al. (2025) stated in their Gaussian approximation (6.29) that there exists a sequence of standard Wiener processes $\{W_n(s);\, 0 \le s \le 1\}$ defined on the probability space $(\Omega, \mathcal{A}, \mathbb{P})$, such that for any small $\eta, \epsilon_0 > 0$, we have
$$\sqrt{k}\sum_{i=1}^{3} M_{ni}(x) = J_n(x) + o_{\mathbb{P}}\left(x^{(2\eta - p)/\gamma \pm \epsilon_0}\right), \qquad (5.11)$$
uniformly over $x \ge 1$, where $J_n(x) = J_{1n}(x) + J_{2n}(x)$, with
$$J_{1n}(x) := \sqrt{\frac{n}{k}}\left\{x^{1/\gamma_2}\, W_{n,1}\left(\frac{k}{n}x^{-1/\gamma}\right) - x^{-1/\gamma_1}\, W_{n,1}\left(\frac{k}{n}\right)\right\}, \qquad (5.12)$$
and
$$J_{2n}(x) := \frac{x^{-1/\gamma_1}}{\gamma}\sqrt{\frac{n}{k}} \int_1^x u^{1/\gamma - 1}\left\{p\, W_{n,2}\left(\frac{k}{n}u^{-1/\gamma}\right) - q\, W_{n,1}\left(\frac{k}{n}u^{-1/\gamma}\right)\right\} du, \qquad (5.13)$$
where $W_{n,1}$ and $W_{n,2}$ are two independent Wiener processes defined, for $0 \le s \le 1$, by $W_{n,1}(s) := \{W_n(\theta) - W_n(\theta - ps)\}\mathbf{1}(\theta - ps \ge 0)$ and $W_{n,2}(s) := W_n(1) - W_n(1 - qs)$, with $\theta := H^{(1)}(\infty)$ and $p = 1 - q := \gamma/\gamma_1$, uniformly on $x \ge 1$. On the other hand, since $g'_K$ is bounded on $\mathbb{R}$, then making use of the Gaussian approximation (5.11), we infer that
$$\sqrt{k}|I_{2n}| = O_{\mathbb{P}}(1)\int_1^{\infty} x^{-1}|J_n(x)|\,dx + o_{\mathbb{P}}(1)\int_1^{\infty} x^{(2\eta-p)/\gamma \pm \epsilon_0 - 1}\,dx.$$
Note that $p$
$> 1/2$ (see Remark 2.4), so the integral $\int_1^{\infty} x^{(2\eta-p)/\gamma \pm \epsilon_0 - 1}\,dx$ is finite for any small $\eta, \epsilon_0 > 0$. On the other hand, by using the properties of the Wiener process $W(t)$, we deduce that $\mathbb{E}|W_{n,1}(s)| \le (ps)^{1/2}$ and $\mathbb{E}|W_{n,2}(s)| \le (qs)^{1/2}$. Thus we may readily check that $\mathbb{E}|J_n(x)| \le M x^{-1/\gamma_1}\left(x^{1/(2\gamma)} - 1\right)$, for any $x \ge 1$ and for some constant $M > 0$. Recall that $1/\gamma = 1/\gamma_1 + 1/\gamma_2$ and $\gamma_1 < \gamma_2$; then $1/(2\gamma) - 1/\gamma_1 = (1/\gamma_2 - 1/\gamma_1)/2$ is a negative number. It follows that $\int_1^{\infty} x^{-1}\mathbb{E}|J_n(x)|\,dx$ is finite and therefore $I_{2n} \stackrel{\mathbb{P}}{\to} 0$ as $n \to \infty$ (because $1/\sqrt{k} \to 0$). Let us now consider the third term $I_{3n}$. We have
$$kI_{3n} = \int_1^{\infty} x^{-1}\left(\sqrt{k}\sum_{i=1}^{3} M_{ni}(x)\right)^2 g''_K(\eta_n(x))\,dx.$$
Once again, by using (5.11) with the fact that $g''_K$ is bounded, we may write
$$kI_{3n} = O_{\mathbb{P}}(1)\int_1^{\infty} x^{-1}J_n^2(x)\,dx + o_{\mathbb{P}}(1)\int_1^{\infty} x^{(2\eta-p)/\gamma \pm \epsilon_0 - 1}|J_n(x)|\,dx + o_{\mathbb{P}}\left(\int_1^{\infty} x^{2(2\eta-p)/\gamma \pm 2\epsilon_0 - 1}\,dx\right).$$
It is clear that both the second and the third terms in the right-hand side above are equal to $o_{\mathbb{P}}(1)$. Since $\mathbb{E}[J_n^2(x)] \le \sqrt{\mathbb{E}|J_n(x)|}$, then by using similar arguments as those used for $\int_1^{\infty} x^{-1}\mathbb{E}|J_n(x)|\,dx$, we infer that $\int_1^{\infty} x^{-1}\mathbb{E}[J_n^2(x)]\,dx$ is finite. Therefore, we have $I_{3n} \stackrel{\mathbb{P}}{\to} 0$ as $n \to \infty$. In summary, we showed that both terms $I_{2n}$ and $I_{3n}$ tend in probability to zero, while $I_{1n} \stackrel{\mathbb{P}}{\to} \gamma_1$; this means that $\widehat\gamma_{1,K} \stackrel{\mathbb{P}}{\to} \gamma_1$ as $n \to \infty$. Let us now establish the asymptotic normality of $\widehat\gamma_{1,K}$. Recall that $\int_{\mathbb{R}} K(t)\,dt = 1$; then it is easy to verify that $\int_1^{\infty} x^{-1}g_K(x^{-1/\gamma_1})\,dx = \gamma_1$, which allows us to rewrite the difference between $\widehat\gamma_{1,K}$ and $\gamma_1$ as
$$\widehat\gamma_{1,K} - \gamma_1 = \int_1^{\infty} x^{-1}\left\{g_K\left(\frac{\overline{F}_n^{(NA)}(Z_{n-k:n}x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})}\right) - g_K\left(x^{-1/\gamma_1}\right)\right\} dx.$$
Once again, the second-order Taylor expansion of $g_K$ yields that
$$\widehat\gamma_{1,K} - \gamma_1 = \frac{1}{\sqrt{k}}\int_1^{\infty} x^{-1}D_n(x)\,g'_K\left(x^{-1/\gamma_1}\right)dx + R_{n1},$$
where
$$D_n(x) := \sqrt{k}\left(\frac{\overline{F}_n^{(NA)}(Z_{n-k:n}x)}{\overline{F}_n^{(NA)}(Z_{n-k:n})} - x^{-1/\gamma_1}\right), \quad x > 0,$$
and $R_{n1} := \frac{1}{2k}\int_1^{\infty} x^{-1}D_n^2(x)\,g''_K(\zeta_n(x))\,dx$, with $g''_K$ denoting the second derivative of $g_K$ and $\zeta_n(x)$ being between $\overline{F}_n^{(NA)}(xZ_{n-k:n})/\overline{F}_n^{(NA)}(Z_{n-k:n})$ and $x^{-1/\gamma_1}$. Meraghni et al. (2025) stated in their Theorem 3 that
$$\sup_{x \ge 1}\, x^{\epsilon/(p\gamma_1)}\left|D_n(x) - J_n(x) - x^{-1/\gamma_1}\frac{x^{\tau_1/\gamma_1} - 1}{\tau_1\gamma_1}\sqrt{k}A_1(h)\right| \stackrel{\mathbb{P}}{\to} 0, \qquad (5.14)$$
for every $0 < \epsilon < 1/2$, as $n \to \infty$, provided that $p > 1/2$ and $\sqrt{k}A_1(h) = O(1)$. In order to establish the asymptotic normality of $\widehat\gamma_{1,K}$, we will use the following decomposition:
$$\sqrt{k}(\widehat\gamma_{1,K} - \gamma_1) = \int_1^{\infty} x^{-1}J_n(x)\,g'_K\left(x^{-1/\gamma_1}\right)dx + \sum_{i=1}^{3} \widetilde{R}_{ni},$$
where $\widetilde{R}_{n1} := \frac{1}{2\sqrt{k}}\int_1^{\infty} x^{-1}D_n^2(x)\,g''_K(\zeta_n(x))\,dx$,
$$\widetilde{R}_{n2} := \int_1^{\infty} x^{-1}\left\{D_n(x) - J_n(x) - x^{-1/\gamma_1}\frac{x^{\tau_1/\gamma_1} - 1}{\gamma_1\tau_1}\sqrt{k}A_1(h)\right\} g'_K\left(x^{-1/\gamma_1}\right)dx,$$
and
$$\widetilde{R}_{n3} := \sqrt{k}A_1(h)\int_1^{\infty} x^{-1}\left\{x^{-1/\gamma_1}\frac{x^{\tau_1/\gamma_1} - 1}{\gamma_1\tau_1}\right\} g'_K\left(x^{-1/\gamma_1}\right)dx.$$
Next, we show that $\widetilde{R}_{ni} = o_{\mathbb{P}}(1)$, for $i = 1, 2$, as $n \to \infty$, while $\widetilde{R}_{n3} \to \mu_K$, where $\mu_K$ is the asymptotic bias of $\widehat\gamma_{1,K}$ (given in the theorem).
Indeed, by using the weak approximation (5.14) and similar arguments as those used for the term $I_{3n}$, together with the assumption $\sqrt{k}A_1(h)\to\lambda$, we deduce that $\int_1^\infty x^{-1}D_n^2(x)\,dx=O_P(1)$ (we omit the details); therefore $\widetilde{R}_{n1}=o_P(1)$. Since $g$ is bounded over $\mathbb{R}$, then (once again) by using (5.14) we get $\widetilde{R}_{n2}=o_P\big(1/\sqrt{k}\big)=o_P(1)$, because $1/\sqrt{k}\to 0$ as $n\to\infty$. We already noticed that $\sqrt{k}A_1(h)\to\lambda$; therefore
$$\widetilde{R}_{n3}\to\lambda\int_1^\infty x^{-1}\bigg\{x^{-1/\gamma_1}\frac{x^{\tau_1/\gamma_1}-1}{\gamma_1\tau_1}\bigg\}g_K'\big(x^{-1/\gamma_1}\big)\,dx,\quad\text{as }n\to\infty.$$
The change of variables $s=x^{-1/\gamma_1}$ and an integration by parts transform the previous integral into $\int_0^1 s^{-\tau_1}K(s)\,ds$, which meets the asymptotic bias $\mu_K$. In summary, we demonstrated that $\sqrt{k}\big(\widehat{\gamma}_{1,K}-\gamma_1\big)=T_n+\mu_K+o_P(1)$, as $n\to\infty$, where $T_n:=\int_1^\infty x^{-1}J_n(x)\,g_K'\big(x^{-1/\gamma_1}\big)\,dx$ is a sequence of centered Gaussian rv's. We show that the latter converges in distribution to $\mathcal{N}\big(0,\sigma_K^2\big)$, with
$$\sigma_K^2:=\gamma_1^2\int_0^1 s^{-1/p+1}K^2(s)\,ds.\tag{5.15}$$
The change of variables $t=x^{-1/\gamma_1}$ with the definition of $g_K$ yields
$$T_n=\gamma_1\int_0^1 t^{-1}J_n\big(t^{-\gamma_1}\big)\,d\{tK(t)\},$$
where $J_n(t^{-\gamma_1})=J_{1n}(t^{-\gamma_1})+J_{2n}(t^{-\gamma_1})$, with $J_{1n}$ and $J_{2n}$ being defined in (5.12) and (5.13) respectively. We modify $J_{2n}(x)$ by a change of variables to get
$$J_{2n}(x)=-x^{-1/\gamma_1}\sqrt{\frac{n}{k}}\int_1^{x^{-1/\gamma}}s^{-2}\bigg\{pW_{n,2}\Big(\frac{k}{n}s\Big)-qW_{n,1}\Big(\frac{k}{n}s\Big)\bigg\}\,ds.\tag{5.16}$$
Now, we write $T_n$ as the sum of two terms:
$$T_n=\gamma_1\int_0^1 t^{-1}J_{1n}\big(t^{-\gamma_1}\big)\,d\{tK(t)\}+\gamma_1\int_0^1 t^{-1}J_{2n}\big(t^{-\gamma_1}\big)\,d\{tK(t)\},$$
where $J_{1n}$ and $J_{2n}$ are respectively given in (5.12) and (5.16).
https://arxiv.org/abs/2505.09152v1
In other words, $T_n$ is expressed in terms of the two independent Wiener processes $W_{n,1}$ and $W_{n,2}$. Consequently, $T_n$ is itself Gaussian. By squaring $T_n$, we write $E[T_n^2]$ as the sum of the following three terms:
$$S_1:=\gamma_1^2\,E\bigg(\int_0^1 t^{-1}J_{1n}\big(t^{-\gamma_1}\big)\,d\{tK(t)\}\bigg)^2,\qquad S_2:=\gamma_1^2\,E\bigg(\int_0^1 t^{-1}J_{2n}\big(t^{-\gamma_1}\big)\,d\{tK(t)\}\bigg)^2,$$
and
$$S_3:=2\gamma_1^2\,E\bigg(\int_0^1 t^{-1}J_{1n}\big(t^{-\gamma_1}\big)\,d\{tK(t)\}\int_0^1 t^{-1}J_{2n}\big(t^{-\gamma_1}\big)\,d\{tK(t)\}\bigg).$$
The following covariances
$$E[W_{n,1}(s)W_{n,2}(t)]=0,\tag{5.17}$$
$$E[W_{n,1}(s)W_{n,1}(t)]=p\min(s,t),\tag{5.18}$$
and
$$E[W_{n,2}(s)W_{n,2}(t)]=q\min(s,t)\tag{5.19}$$
are useful in computing $S_i$, $i=1,2,3$. We start with the calculation of $S_1$. We have
$$S_1=\gamma_1^2\int_0^1\!\int_0^1 s^{-1}t^{-1}E\big[J_{1n}\big(s^{-\gamma_1}\big)J_{1n}\big(t^{-\gamma_1}\big)\big]\,d\{sK(s)\}\,d\{tK(t)\}.$$
From representation (5.12) and using (5.18), it can be readily checked that
$$S_1=2\gamma_1^2 p\int_0^1 s^{-\gamma_1/\gamma_2-1}\{sK(s)\}\,d\{sK(s)\}=\gamma_1^2 p\int_0^1 s^{-\gamma_1/\gamma_2-1}\,d\{sK(s)\}^2.$$
Note that $-\gamma_1/\gamma_2-1=-\gamma_1/\gamma$; it then follows that $S_1=\gamma_1^2 p\int_0^1 s^{-\gamma_1/\gamma}\,d\{sK(s)\}^2$. For $S_2$, we note that $S_2=E[I^2]$, with $I:=\gamma_1\int_0^1 t^{-1}J_{2n}\big(t^{-\gamma_1}\big)\,d\{tK(t)\}$, which we simplify by using (5.16) and, after an integration by parts, we get
$$I=\frac{\gamma_1^2}{\gamma}\sqrt{\frac{n}{k}}\int_0^1 K(t)\,t^{-\gamma_1/\gamma}\bigg[pW_{n,2}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)-qW_{n,1}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)\bigg]\,dt.$$
By using results (5.18) and (5.19), we have
$$S_2=\frac{\gamma_1^3}{\gamma}\,q\bigg[\int_0^1 K(s)\,s^{-\gamma_1/\gamma}\int_0^s K(t)\,dt\,ds+\int_0^1\int_s^1 K(t)\,t^{-\gamma_1/\gamma}\,dt\;d\Big\{\int_0^s K(t)\,dt\Big\}\bigg].$$
Since $\gamma_1/\gamma=1/p>1/2$, after an integration by parts we infer that
$$S_2=2\gamma_1^2\frac{q}{p}\int_0^1\Big\{\int_0^s K(t)\,dt\Big\}K(s)\,s^{-\gamma_1/\gamma}\,ds.$$
Finally, we turn our attention to $S_3=2\gamma_1^2E[I']$, where
$$I':=\int_0^1 t^{-1}J_{1n}\big(t^{-\gamma_1}\big)\,d\{tK(t)\}\int_0^1 t^{-1}J_{2n}\big(t^{-\gamma_1}\big)\,d\{tK(t)\}.$$
By substituting equations (5.12) and (5.16) in $I'$, we find
$$I'=-\frac{n}{k}\int_0^1\bigg\{t^{-\gamma_1/\gamma_2-1}W_{n,1}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)-W_{n,1}\Big(\frac{k}{n}\Big)\bigg\}\,d\{tK(t)\}\times\int_0^1\int_1^{t^{\gamma_1/\gamma}}s^{-2}\bigg\{pW_{n,2}\Big(\frac{k}{n}s\Big)-qW_{n,1}\Big(\frac{k}{n}s\Big)\bigg\}\,ds\,d\{tK(t)\}.$$
Integrating the second factor in $I'$ by parts yields
$$I'=\frac{n}{k}\frac{\gamma_1}{\gamma}\int_0^1\bigg\{t^{-\gamma_1/\gamma_2-1}W_{n,1}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)-W_{n,1}\Big(\frac{k}{n}\Big)\bigg\}\,d\{tK(t)\}\times\int_0^1 K(t)\,t^{-\gamma_1/\gamma}\bigg\{pW_{n,2}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)-qW_{n,1}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)\bigg\}\,dt,$$
which may be decomposed into the sum of
$$I_1':=\frac{n}{k}\int_0^1\!\int_0^1 s^{-\gamma_1/\gamma_2-1}W_{n,1}\Big(\frac{k}{n}s^{\gamma_1/\gamma}\Big)W_{n,2}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)K(t)\,t^{-\gamma_1/\gamma}\,d\{sK(s)\}\,dt,$$
$$I_2':=-q\frac{\gamma_1}{\gamma}\frac{n}{k}\int_0^1\!\int_0^1 s^{-\gamma_1/\gamma_2-1}W_{n,1}\Big(\frac{k}{n}s^{\gamma_1/\gamma}\Big)W_{n,1}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)K(t)\,t^{-\gamma_1/\gamma}\,d\{sK(s)\}\,dt,$$
$$I_3':=-\frac{n}{k}\int_0^1\!\int_0^1 W_{n,1}\Big(\frac{k}{n}\Big)W_{n,2}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)K(t)\,t^{-\gamma_1/\gamma}\,d\{sK(s)\}\,dt,$$
and
$$I_4':=q\frac{\gamma_1}{\gamma}\frac{n}{k}\int_0^1\!\int_0^1 W_{n,1}\Big(\frac{k}{n}\Big)W_{n,1}\Big(\frac{k}{n}t^{\gamma_1/\gamma}\Big)K(t)\,t^{-\gamma_1/\gamma}\,d\{sK(s)\}\,dt.$$
It is clear from (5.17) that $E[I_1']=E[I_3']=0$, and using (5.18) we can readily show that $E[I_4']$ is zero as well.
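The covariance identities (5.17)-(5.19) drive all three computations, and they are easy to sanity-check by simulation. The Python sketch below is an illustration only: it realizes $W_{n,1}$ and $W_{n,2}$ as $\sqrt{p}\,W$ and $\sqrt{q}\,W'$ for independent standard Wiener processes $W$ and $W'$, one concrete construction consistent with (5.17)-(5.19); the values $p=0.7$, $q=0.3$ and the discretization parameters are arbitrary choices of ours.

```python
import numpy as np

# Monte Carlo check of the covariance identities (5.17)-(5.19):
#   E[W_{n,1}(s) W_{n,2}(t)] = 0,
#   E[W_{n,1}(s) W_{n,1}(t)] = p * min(s, t),
#   E[W_{n,2}(s) W_{n,2}(t)] = q * min(s, t),
# with W_{n,1} = sqrt(p) * W and W_{n,2} = sqrt(q) * W' for independent
# standard Wiener processes W and W'.
rng = np.random.default_rng(0)
p, q = 0.7, 0.3
s, t = 0.4, 0.8
n_paths, n_steps = 50_000, 50
dt = 1.0 / n_steps

# Simulate two independent Wiener processes on [0, 1].
dW = rng.normal(0.0, np.sqrt(dt), size=(2, n_paths, n_steps))
W = np.cumsum(dW, axis=2)
i_s, i_t = int(s * n_steps) - 1, int(t * n_steps) - 1  # indices of times s, t
W1_s, W1_t = np.sqrt(p) * W[0, :, i_s], np.sqrt(p) * W[0, :, i_t]
W2_s, W2_t = np.sqrt(q) * W[1, :, i_s], np.sqrt(q) * W[1, :, i_t]

cov_12 = np.mean(W1_s * W2_t)  # should be ~ 0
cov_11 = np.mean(W1_s * W1_t)  # should be ~ p * min(s, t)
cov_22 = np.mean(W2_s * W2_t)  # should be ~ q * min(s, t)
print(cov_12, cov_11, cov_22)
```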
Hence, we have $E[I']=E[I_2']$, to which we apply (5.18) to get
$$E[I_2']=-q\int_0^1\!\int_0^1 s^{-\gamma_1/\gamma_2-1}\min\big(s^{\gamma_1/\gamma},t^{\gamma_1/\gamma}\big)\,t^{-\gamma_1/\gamma}K(t)\,d\{sK(s)\}\,dt$$
(the factor $p$ coming from (5.18) cancels against $\gamma_1/\gamma=1/p$), that is,
$$E[I_2']=-q\int_0^1\Big\{\int_0^s K(t)\,dt\Big\}s^{-\gamma_1/\gamma_2-1}\,d\{sK(s)\}-q\int_0^1\Big\{\int_s^1 t^{-\gamma_1/\gamma}K(t)\,dt\Big\}s^{-\gamma_1/\gamma_2+\gamma_1/\gamma-1}\,d\{sK(s)\}.$$
Note that $-\gamma_1/\gamma_2+\gamma_1/\gamma-1=0$; then $E[I_2']=-q(I_{21}'+I_{22}')$, where
$$I_{21}':=\int_0^1\Big\{\int_0^s K(t)\,dt\Big\}s^{-\gamma_1/\gamma_2-1}\,d\{sK(s)\}\quad\text{and}\quad I_{22}':=\int_0^1\Big\{\int_s^1 t^{-\gamma_1/\gamma}K(t)\,dt\Big\}\,d\{sK(s)\},$$
which after integrations by parts become
$$I_{21}'=-\int_0^1\{K(s)\}^2 s^{-\gamma_1/\gamma_2}\,ds+\frac{\gamma_1}{\gamma}\int_0^1 K(s)\,s^{-\gamma_1/\gamma_2-1}\Big\{\int_0^s K(t)\,dt\Big\}\,ds,$$
and $I_{22}'=\int_0^1(K(s))^2 s^{-\gamma_1/\gamma_2}\,ds$. Thus, we obtain
$$E[I_2']=q\int_0^1(K(s))^2 s^{-\gamma_1/\gamma_2}\,ds-q\int_0^1(K(s))^2 s^{-\gamma_1/\gamma_2}\,ds-q\frac{\gamma_1}{\gamma}\int_0^1\Big\{\int_0^s K(t)\,dt\Big\}K(s)\,s^{-\gamma_1/\gamma_2-1}\,ds.$$
Since $S_3=2\gamma_1^2E[I_2']$ and $\gamma_1/\gamma=1/p$, this gives $S_3=-S_2$, so that $E[T_n^2]=S_1$. Finally, we end up with
$$E\big[T_n^2\big]=2\gamma_1^2 p\int_0^1 s^{-\gamma_1/\gamma}\{sK(s)\}\,d\{sK(s)\}=\gamma_1^2 p\int_0^1 s^{-1/p}\,d\{sK(s)\}^2,$$
which is exactly (5.15).

References

Beirlant, J., Bardoutsos, A., de Wet, T., & Gijbels, I., 2016. Bias reduced tail estimation for censored Pareto type distributions. Statist. Probab. Lett., 109, 78-88.
Beirlant, J., Maribe, G., & Verster, A., 2018. Penalized bias reduction in extreme value estimation for censored Pareto-type data, and long-tailed insurance applications. Insurance Math. Econom., 78, 114-122.
Beirlant, J., Worms, J., & Worms, R., 2019. Estimation of the extreme value index in a censorship framework: asymptotic and finite sample behavior. J. Statist. Plann. Inference, 202, 31-56.
Benchaira, S., Meraghni, D., & Necir, A., 2016. Tail product-limit process for truncated data with application to extreme value index estimation. Extremes, 19, no. 2, 219-251.
Benchaira, S., Meraghni, D., & Necir, A., 2016. Kernel estimation of the tail index of a right-truncated Pareto-type distribution. Statist. Probab. Lett., 119, 186-193.
Bladt, M., & Rodionov, I., 2024. Censored extreme value estimation. arXiv preprint: https://arxiv.org/abs/2312.10499
Brahimi, B., Meraghni, D., & Necir, A., 2015. Approximations to the tail index estimator of a heavy-tailed distribution under random censoring and application. Math. Methods Statist., 24, 266-279.
Colosimo, E., Ferreira, F.V., Oliveira, M., & Sousa, C., 2002. Empirical comparisons between Kaplan-Meier and Nelson-Aalen survival function estimators. J. Stat. Comput. Simul., 72, 299-308.
Csörgő, S., Deheuvels, P., & Mason, D., 1985. Kernel estimates of the tail index of a distribution. Ann. Statist., 13, 1050-1077.
Denuit, M., Purcaru, O., & Keilegom, I. V., 2006. Bivariate Archimedean copula models for censored data in non-life insurance. Journal of Actuarial Practice, 13, 5-32.
Einmahl, J.H.J., Fils-Villetard, A., & Guillou, A., 2008. Statistics of extremes under random censoring. Bernoulli, 14, 207-227.
Frees, E. W., & Valdez, E. A., 1998. Understanding relationships using copulas. N. Am. Actuar. J., 2, 1-25.
Groeneboom, P., Lopuhaä, H. P., & De Wolf, P. P., 2003. Kernel-type estimators for the extreme value index. Ann. Statist., 31, 1956-1995.
Guesmia, N., & Meraghni, M., 2024. Estimating the conditional tail expectation of randomly right-censored heavy-tailed data. J. Stat. Theory Pract., 18, no. 3, Paper No. 30, 36 pp.
de Haan, L., & Ferreira, A., 2006. Extreme Value Theory: An Introduction. Springer.
Hall, P., 1982. On some simple estimates of an exponent of regular variation. J. Roy. Statist. Soc. Ser. B, 44, 37-42.
Hill, B.M., 1975. A simple general approach to inference about the tail of a distribution. Ann. Statist., 3, 1163-1174.
Kaplan, E.
L., & Meier, P., 1958. Nonparametric estimation from incomplete observations. J. Amer. Statist. Assoc., 53, 457-481.
Klugman, S. A., & Parsa, R., 1999. Fitting bivariate loss distributions with copulas. Insurance Math. Econom., 24, 139-148.
Meraghni, D., Necir, A., & Soltane, L., 2025. Nelson-Aalen tail product-limit process and extreme value index estimation under random censorship. Sankhya A. https://doi.org/10.1007/s13171-025-00384-y.
Nelson, W., 1972. A short life test for comparing a sample with previous accelerated test results. Technometrics, 14, 175-185.
Shorack, G.R., & Wellner, J.A., 1986. Empirical Processes with Applications to Statistics. Wiley.
Worms, J., & Worms, R., 2014. New estimators of the extreme value index under random right censoring, for heavy-tailed distributions. Extremes, 17, 337-358.
Worms, J., & Worms, R., 2021. Estimation of extremes for heavy-tailed and light-tailed distributions in the presence of random censoring. Statistics, 55, 979-1017.
Robust Representation and Estimation of Barycenters and Modes of Probability Measures on Metric Spaces

Washington Mio and Tom Needham
Department of Mathematics, Florida State University, Tallahassee, FL, USA

Abstract

This paper is concerned with the problem of defining and estimating statistics for distributions on spaces such as Riemannian manifolds and more general metric spaces. The challenge comes, in part, from the fact that statistics such as means and modes may be unstable: for example, a small perturbation to a distribution can lead to a large change in Fréchet means on spaces as simple as a circle. We address this issue by introducing a new merge tree representation of barycenters called the barycentric merge tree (BMT), which takes the form of a measured metric graph and summarizes features of the distribution in a multiscale manner. Modes are treated as special cases of barycenters through diffusion distances. In contrast to the properties of classical means and modes, we prove that BMTs are stable; this is quantified as a Lipschitz estimate involving optimal transport metrics. This stability allows us to derive a consistency result for approximating BMTs from empirical measures, with explicit convergence rates. We also give a provably accurate method for discretely approximating the BMT construction and use this to provide numerical examples for distributions on spheres and shape spaces.

Keywords: Fréchet mean, barycenter, median, modes of probability distributions, merge trees.
2020 Mathematics Subject Classification: 62R20, 62R30, 62R40, 55N31

1 Introduction

The core theme of this paper is the development of stable and robust representations for barycenters and modes of probability distributions on metric spaces. The primary goal is to construct simple and provably stable summaries of barycenters and modes for datasets formed of objects that can be of varied nature, so long as they are representable as points in a metric space.
This is achieved through a summary representation termed the barycentric merge tree (BMT).

1.1 Barycenters

The mean of a probability measure µ on Euclidean space R^d, as the expected value of a random variable x ∈ R^d with law µ, has long been used as a simple statistical summary that can be robustly estimated from sufficiently many independent draws from µ. The reinterpretation of the mean as the most central value of x, as measured by variance, dates back at least to É. Cartan in studies of more general forms of centers of mass of distributions on Riemannian manifolds (cf. [6, p. 235]). If µ has finite second moment, the mean is the unique minimizer of the Fréchet variance function V2 : R^d → R given by
$$V_2(x):=\int_{\mathbb{R}^d}\|y-x\|^2\,d\mu(y),\tag{1}$$
the expected value of the squared distance to x [18].
arXiv:2505.09609v2 [math.ST] 28 May 2025
This viewpoint shed light on how to define barycenters for probability distributions on general metric spaces. More formally, a metric measure space (mm-space) is a triple X = (X, d_X, µ), where (X, d_X) is a Polish (complete and separable) metric space and µ is a Borel probability measure on (X, d_X). The measure µ has finite p-moments, p ≥ 1, if $\int_X d_X^p(x_0,y)\,d\mu(y)<\infty$ for some (and therefore all) x0 ∈ X. The collection of all Borel probability measures on (X, d_X) with finite p-moments is denoted B(X, d_X; p). For µ ∈ B(X, d_X; p), the Fréchet variance function of order p of X, denoted Vp : X → R, is defined by
$$V_p(x):=E_\mu\big[d_X^p(x,\cdot)\big]=\int_X d_X^p(x,y)\,d\mu(y).\tag{2}$$
The set of global minima of Vp (the global minimum may not be unique; see Example 1.1) is known as the p-barycenter of X, also as the Fréchet mean for p = 2 and the Fréchet median for p = 1. As in [23], we replace Vp with its pth root σp : X → R, σp(x) := (Vp(x))^{1/p}, and call it the p-deviation function of X. Clearly, the minimum sets of Vp and σp are the same, so we view p-barycenters as minimizers of σp. This is done because σp has better analytical properties than Vp. As local minimum sets are also important in our formulation, henceforth we relax the terminology to also include the local minima of Vp in the p-barycenter of X. Barycenters have received significant attention in the literature. The existence and uniqueness of global minima of V2 on Hadamard manifolds (that is, complete, non-positively curved, simply-connected Riemannian manifolds) was already known to Cartan, whereas local minima of V2 were first investigated in the Riemannian setting by Grove and Karcher [20], with subsequent developments in [21, 22, 26]. Barycenters for probability distributions on Wasserstein space have been investigated in [2, 28]. Other studies of barycenters include [7, 8, 1, 29, 4, 25], and [23] provides a persistent homology take on p-deviation functions. For additional historical remarks on developments related to barycenters, including applications, the reader may consult [1, 25].

Figure 1: Barycentric merge tree example. Left: A probability density on the circle. The points marked by ⋆ are local minima of σ2, with the darkest indicating the (global) Fréchet mean.
Right: The barycentric merge tree summarizes the local minima, together with the shape of the deviation function.

In spite of the above, as is illustrated in Examples 1.1 and 1.2, the global minima of σp (or equivalently, Vp) can exhibit unstable behavior that makes them unsuited for data representation and analysis. Although not as pronounced, this instability persists even if we view the barycenter as comprising all local minima of σp. For this reason, this paper seeks to construct a representation of barycenters that is as simple as possible and yet robust, with guarantees of stability and consistency. This leads us to refine barycenters and define richer barycentric merge tree (BMT) representations of X, show that they are stable, prove a consistency result for BMTs, and map a pathway to discrete models and computation. Roughly, the structure of the BMT representation of X is as follows: the BMT is a rooted tree whose leaves represent the connected components of the local minimum sets of σp (i.e., of the p-barycenter), such that the tree structure captures interrelationships and connectivity properties of the sets of global and local minima. A BMT thus provides a more complete statistical summary of the mm-space (X, d_X, µ) than just the barycenter; see Figure 1 for an example on the circle.

Example 1.1 (Instability of Means). This example illustrates the instability of global minima of σ2. Take the underlying metric space to be the unit circle endowed with geodesic distance and let µ be the distribution with two modes shown in Figure 2(a), which also displays the deviation function σ2 of µ and the Fréchet mean set formed by the north and south pole of the circle. We sample 100 points from µ and calculate the global Fréchet means of the associated empirical distributions. Figure 2(b) shows that the resulting Fréchet mean bounces between a point near the north pole and a point near the south pole, depending on the particular sample. This shows, qualitatively, that the Fréchet mean is unstable. Moreover, this same behavior persists as the number of samples is increased, as illustrated quantitatively in Example 4.6. ♢

Figure 2: Instability of global minima of σp on the circle. (a) A pdf on the circle, its corresponding deviation function σ2, and the global minimizers of σ2. The deviation function and the pdf are both rescaled in this plot for visual clarity. (b) Four draws of 100 samples from the distribution shown in (a); σ2 is plotted for each empirical distribution, along with its global minimizer, which varies wildly between samples. (c) A distribution on the circle consisting of equally weighted Dirac masses supported on antipodal points (represented by equal length arrows) and its deviation function σ1. Here, σ1 is constant, so every point on the circle is a minimizer (i.e., a median). (d) Measures consisting of Dirac masses where the weight on the mass at (1, 0) is given by µ1, together with their associated deviation functions σ1. If µ1 > 1/2, the unique median lies at (1, 0), while the median lies at (−1, 0) if µ1 < 1/2, indicated with a star in either case.

Example 1.2 (Instability of Medians). To see that medians are unstable, we once again consider an example where the metric space is the unit circle, endowed with geodesic distance.
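The resampling instability described in Example 1.1 is easy to reproduce numerically. The sketch below is illustrative only: the bimodal sampling distribution, the sample size, and the grid-search minimization are our own choices, not the exact setup behind Figure 2.

```python
import numpy as np

def geodesic_dist(a, b):
    """Geodesic distance between angles a and b on the unit circle."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def frechet_mean(samples, grid):
    """Minimizer over `grid` of the Frechet variance
    V_2(x) = mean of squared geodesic distances from x to the samples."""
    V2 = np.mean(geodesic_dist(grid[:, None], samples[None, :]) ** 2, axis=1)
    return grid[np.argmin(V2)]

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)

def draw(n):
    """Bimodal distribution: two antipodal bumps at angles 0 and pi."""
    modes = rng.choice([0.0, np.pi], size=n)
    return (modes + rng.normal(0.0, 0.3, size=n)) % (2 * np.pi)

# For a symmetric bimodal law, the empirical Frechet mean lands near one of
# the two "poles" pi/2 and 3*pi/2, and jumps between them across draws.
means = [frechet_mean(draw(100), grid) for _ in range(4)]
print(means)
```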
Let µ be a measure consisting of a weighted sum of Dirac masses: one with weight µ1 at the point (1, 0) and another with weight µ2 = 1 − µ1 at (−1, 0). Parameterizing the circle by angle φ ∈ [0, 2π), with φ = 0 corresponding to (1, 0), it is not hard to show that σ1 is given explicitly by
$$\sigma_1(\varphi)=\begin{cases}(\mu_1-\mu_2)\varphi+\mu_2\pi,&0\le\varphi\le\pi,\\(\mu_2-\mu_1)\varphi+(2\mu_1-\mu_2)\pi,&\pi\le\varphi<2\pi.\end{cases}\tag{3}$$
If µ1 = µ2 = 1/2 then σ1 is constant, as is illustrated in Figure 2(c), so all points on the circle are medians of µ. On the other hand, if µ1 > 1/2 then the unique minimizer of σ1 (i.e., the median) lies at (1, 0) and if µ1 < 1/2 then the unique minimizer lies at (−1, 0). Examples are shown in Figure 2(d). ♢

Example 1.3 (Stability of Barycentric Merge Trees). We reconsider the distributions from Example 1.1 and Figures 2(a)(b). Figure 3 shows the barycentric merge trees associated to the distributions. One can intuitively see that the resulting structures share a common structure; this stability property is a reflection of the Lipschitz bound established in Theorem 4.5. ♢

1.2 Modes

The representation and estimation of the modes of a probability distribution on a metric space (X, d_X) are problems conceptually analogous to their counterparts for barycenters, as modes can be thought of as the most central points within pockets of probability mass. The shared centrality theme suggests a close connection between barycenters and modes, the main contrast being that, for modes, centrality takes a more localized form. However, even in very simple cases as Example 1.1, barycenters as defined above can lie in "data deserts", away from regions of concentration of mass and contrary to the very concept of modes. For this reason, we broaden the definition of Fréchet variance and deviation functions to be able to frame the barycenter and mode problems as one. Instead of defining variance and deviation functions only with respect to the base metric d_X, in (2), we allow other (pseudo) metrics θ : X × X → R. Then, Vp takes the form
$$V_p(x)=E_\mu[\theta^p(x,\cdot)]=\int_X\theta^p(x,y)\,d\mu(y).\tag{4}$$

Figure 3: Stability of Barycentric Merge Trees on the circle. (a) A circle with a bimodal distribution, its deviation function σ2 and the Fréchet mean set, as in Figure 2. (b) The BMT for the distribution in (a); the leaves of the tree correspond to the global minima of σ2 (indicated by the stars), and the height of the merge point at the top corresponds to the maximum value of σ2. (c) BMTs for empirical distributions drawn from the pdf. Leaves of the trees correspond to marked points on the circles. Observe that the merge trees all have similar structure, reflecting the stability of the BMTs as summaries of the behavior of the deviation function.

Figure 4: Fréchet variance functions associated with the heat kernel at different scales for a pdf on the real line.

As the notions of pockets of mass or clusters are inherently scale dependent, in mode detection, we employ diffusion distances θ derived from kernel functions k : X × X → R+ [12], as detailed in Section 5.
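The localization effect of a kernel-based pseudo metric can be sketched in a few lines. The example below is our own stand-in for the paper's construction: instead of the heat-kernel diffusion distance of Section 5, it uses the feature-space pseudo metric $\theta_t(x,y)=\sqrt{k_t(x,x)+k_t(y,y)-2k_t(x,y)}$ induced by a Gaussian kernel on the real line. It exhibits the same qualitative behavior as Figure 4: for small t, the local minima of the resulting Fréchet variance sit near the local modes of the pdf.

```python
import numpy as np

# theta_t is the distance between kernel feature maps, a genuine pseudo
# metric; for small t it saturates for distant points, so V_2(x) =
# E[theta_t^2(x, .)] dips near pockets of mass (local modes).
rng = np.random.default_rng(2)

def theta2(x, y, t):
    """Squared kernel pseudo metric for k_t(x,y) = exp(-(x-y)^2 / (2 t^2))."""
    return 2.0 - 2.0 * np.exp(-((x - y) ** 2) / (2.0 * t ** 2))

# Bimodal sample on the real line, modes near -2 and +2.
data = np.concatenate([rng.normal(-2.0, 0.3, 500), rng.normal(2.0, 0.3, 500)])
grid = np.linspace(-4.0, 4.0, 801)

t = 0.5  # small scale: V_2 should have local minima near both modes
V2 = np.mean(theta2(grid[:, None], data[None, :], t), axis=1)

# Interior local minima of V_2 on the grid.
interior = (V2[1:-1] < V2[:-2]) & (V2[1:-1] < V2[2:])
minima = grid[1:-1][interior]
print(minima)
```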
The value k(x, y) can be thought of as quantifying the level of communication between the points x and y, so that if both points lie in the support of the measure, mass near x and mass near y get clustered together by the kernel if k(x, y) is sufficiently large. This produces a chaining effect that places a sequence of sufficiently θ-close local pockets of mass all into the same cluster. This diffusion distance formulation allows us to approach modes as a barycenter problem. For Riemannian manifolds with the geodesic distance, the heat kernel associated with the Laplace-Beltrami operator gives a natural one-parameter family of kernels kt, t > 0, to employ in mode analysis. We thus obtain a family of BMTs representing (X, d_X, µ), parameterized by t > 0. In the tradition of scale space approaches to data analysis (cf. [30, 11]), different structural properties are captured across the range of scales and the family can also reveal scales at which the barycentric merge trees undergo "phase transitions". Figure 4 illustrates the connection between diffusion distances and modes of a distribution on the real line. The variance function V2 is calculated with respect to diffusion distances derived from the heat kernel for several values of t > 0 (see Section 5 for details and also [15] for additional examples). Observe that, for small values of t, the local minima of the Fréchet variance now correspond to local maxima (i.e., local modes) of the pdf. This illustrates how the framework proposed in this paper simultaneously applies to the barycenter and mode estimation problems.

Figure 5: Merge trees for modes: each pair shows the distribution on the circle from Figure 1 with modes indicated by ⋆. The BMTs are calculated with respect to diffusion distances depending on an exponential kernel that has a scale parameter t > 0. The results shown are for various values of this parameter.

Example 1.4 (Modes and BMTs). Figure 5 shows the local mode detection pipeline applied to the circle distribution from Figure 1. Here an exponential kernel, depending on a scale parameter t > 0, is applied to the geodesic distance, which in turn determines the diffusion distance. Merge trees for the associated deviation function σ2 are shown, together with the associated local modes on the circle (i.e., points corresponding to leaves on the trees). Increasing the scale parameter gives a coarser picture of the mode structure. ♢

1.3 Barycentric Merge Trees

Continuous functions f : X → R defined on connected domains can be succinctly represented by merge trees Tf, certain silhouettes of f visualized as rooted trees, that have been studied primarily from a metric perspective (cf. [33, 5, 13]). The present approach to barycentric merge trees differs from the purely metric view as it is based on a metric-measure formulation. To our knowledge, such a formulation has only been considered in a heuristic manner in the merge-tree literature, as in [13]. Tf is the quotient space of X under the equivalence relation x ∼ y if there exists t ∈ R such that f(x) = f(y) = t and both x and y lie in the same connected component of the sublevel set f^{−1}((−∞, t]). By construction, f descends to a function $\hat{f}$ : Tf → R.
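The quotient construction just described has a direct discrete analogue: sweep the vertices of a finite graph in increasing order of f and track the connected components of the sublevel sets with a union-find structure. The sketch below is our own toy discretization (not the discretization scheme the paper develops in Section 6); leaves record births of components at local minima and merges record the heights at which components join.

```python
def merge_tree(f, edges):
    """Merge tree of a function f on a finite graph via a union-find sweep.
    f: function value per vertex; edges: (u, v) pairs.
    Returns (leaves, merges): leaves are vertices where a sublevel-set
    component is born; merges records (height, root_a, root_b) events."""
    n = len(f)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    active, leaves, merges = set(), [], []
    for v in sorted(range(n), key=lambda w: f[w]):
        comps = sorted({find(u) for u in adj[v] if u in active})
        if not comps:
            leaves.append(v)                     # component born at a local minimum
        for c in comps[1:]:
            merges.append((f[v], comps[0], c))   # components merge at height f[v]
        for c in comps:
            parent[c] = v
        active.add(v)
    return leaves, merges

# Path graph whose function has two local minima (vertices 1 and 5),
# merging at the saddle value f = 3.
f = [2.0, 0.0, 1.0, 3.0, 1.5, 0.5, 2.5]
edges = [(i, i + 1) for i in range(6)]
leaves, merges = merge_tree(f, edges)
print(leaves, merges)  # [1, 5] [(3.0, 0, 6)]
```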
Using merge heights for pairs of points in Tf, as measured by $\hat{f}$-values, one can equip Tf with a (pseudo) metric df [13, Definition 16] analogous to cophenetic distances for phylogenetic trees and dendrograms [34]. This turns a merge tree into a functional (pseudo) metric space (Tf, df, $\hat{f}$) and a functional variant of the Gromov-Hausdorff distance provides a means for quantifying similarities and contrasts between two merge trees (Tf, df, $\hat{f}$) and (Tg, dg, $\hat{g}$), as done in [14] in the more general setting of Reeb graphs. A key difference between this type of metric and, say, the sup metric applied directly to f and g, is the geometry captured in a more explicit and compact form. The approach to merge trees in [33, 5] is different and based on interleaving distances. The merge tree Tp associated with a p-deviation function σp : X → R not only is equipped with a metric structure dp and a function $\hat{\sigma}_p$ : Tp → R, as described above, but also with a probability measure µp, namely, the pushforward of the original distribution µ on X to the quotient space Tp. We thus have a representation of the original mm-space X = (X, d_X, µ) by a summary (functional) mm-space (Tp, dp, µp, $\hat{\sigma}_p$). This additional probabilistic structure enables us to analyze variation in barycentric merge trees in a (functional) Gromov-Wasserstein framework [32, 36, 3], instead of taking a purely metric approach. Our main stability result, obtained in Theorem 4.5, asserts that if µ and µ′ are probability measures on X, then the distance between their barycentric merge trees, viewed as functional mm-spaces, admits an upper bound of the order of the Wasserstein distance between µ and µ′. Corollary 4.7 records the fact that this form of stability implies consistency, and we also obtain estimates for the rate of convergence of empirical barycentric merge trees to their theoretical population model. These rates are derived directly from the Stability Theorem and results by Weed and Bach on rates for Wasserstein convergence of empirical measures [38]. Stability and consistency for modes are included in these results because we frame modes as special cases of barycenters. As the deviation functions for empirical measures are defined on the entire space X, we also propose a discretization scheme for empirical barycentric merge trees, which are the BMTs of practical interest. For X compact, starting from a finite δ-fine grid V ⊆ X, δ > 0, we construct a combinatorial barycentric merge tree whose vertex set supports a (functional) metric measure structure that is δ-close to the BMT of µn with respect to a Gromov-Wasserstein type distance. Since the combinatorial trees so obtained can be rather complex, with as many vertices as the cardinality of V, we also discuss a simplification method that gives a provably accurate approximation.

1.4 Organization

Section 2 defines Fréchet variance functions and p-deviation functions in a form that allows for a joint treatment of barycenters and modes, and also discusses connectivity properties on X that are needed to obtain stable barycentric merge trees. Section 3 constructs barycentric merge trees as (pseudo) metric measure spaces and examines some properties they satisfy.
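The role played by Wasserstein convergence of empirical measures can be illustrated on the real line, where the order-1 Wasserstein distance between two empirical measures of equal size has a closed form: the mean absolute difference of the sorted samples. The sketch below is illustrative only; the Gaussian law, sample sizes, and number of replicates are arbitrary choices of ours.

```python
import numpy as np

def w1_empirical(x, y):
    """Order-1 Wasserstein distance between two equal-size empirical
    measures on the real line (mean absolute difference after sorting)."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(3)
truth = lambda n: rng.normal(0.0, 1.0, n)

# W1 between two independent empirical measures shrinks as n grows; through
# a Lipschitz stability bound of the kind discussed above, this controls the
# distance between the corresponding summaries built from those measures.
d_small = np.mean([w1_empirical(truth(50), truth(50)) for _ in range(200)])
d_large = np.mean([w1_empirical(truth(5000), truth(5000)) for _ in range(200)])
print(d_small, d_large)  # the second value is much smaller
```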
Section 4 proves the main results of the paper, namely, stability and consistency of barycentric merge trees. The application to analysis of modes is developed in Section 5, and Section 6 is devoted to discrete models. Section 7 closes the paper with a summary and some discussion.

Acknowledgements

The authors thank Nelson A. Silva for carefully reading the manuscript and suggesting several improvements to the text. This research was partially done while WM was visiting the Institute for Mathematical Sciences, National University of Singapore in 2024. TN was partially supported by NSF grants DMS-2107808 and DMS-2324962.

2 Preliminaries

In this paper, unless otherwise stated, (X, d_X) is a connected and locally path-connected Polish metric space (Polish meaning complete and separable). For spaces satisfying these connectivity hypotheses, we repeatedly use the fact that any open, connected subset U ⊆ X is path connected. To simplify notation, for w, z ∈ R, we adopt the abbreviations w ∨ z = max{w, z} and w ∧ z = min{w, z} throughout the text.

2.1 Fréchet Variance and Deviation Functions

As explained in the Introduction, we take an approach to detection and estimation of modes of a probability distribution on a metric space (X, d_X) that requires versions of the Fréchet variance function based on distance functions other than d_X. For this reason, we define Fréchet variance with respect to more general pseudo metrics θ : X × X → R, as this lets us frame modes as special cases of barycenters. A discussion of pseudo metrics derived from kernel functions and well suited to mode analysis is presented in Section 5.

Definition 2.1. Let X = (X, d_X, µ) be an mm-space with µ ∈ B(X, d_X; p), p ≥ 1.
(i) A pseudo-metric θ : X × X → R is L-admissible, L > 0, if θ(x, y) ≤ L d_X(x, y), ∀x, y ∈ X. We refer to such L as an admissibility constant for θ. We say that θ is admissible if it is L-admissible for some L > 0.
(ii) The Fréchet p-variance function Vp : X → R associated with an admissible θ is defined by
$$V_p(x):=\int_X\theta^p(x,y)\,d\mu(y).$$
The fact that µ has finite p-moments ensures that Vp is well defined. We omit θ from the Fréchet function notation to keep it simple, as the choice of θ should always be clear from the context.
(iii) The p-deviation function σp : X → R is defined as σp(x) := Vp(x)^{1/p}.

The standard Fréchet variance function is that associated with θ = d_X and p = 2. The following proposition describes a basic property of deviation functions needed in our stability arguments.

Proposition 2.2. If θ is an admissible pseudo metric and µ ∈ B(X, d_X; p), then |σp(x) − σp(y)| ≤ θ(x, y), for any x, y ∈ X.

Proof. For x, y ∈ X, by the Minkowski inequality we have that
$$|\sigma_p(x)-\sigma_p(y)|=\bigg|\Big(\int_X\theta^p(x,z)\,d\mu(z)\Big)^{1/p}-\Big(\int_X\theta^p(y,z)\,d\mu(z)\Big)^{1/p}\bigg|\le\Big(\int_X|\theta(x,z)-\theta(y,z)|^p\,d\mu(z)\Big)^{1/p}\le\Big(\int_X\theta^p(x,y)\,d\mu(z)\Big)^{1/p}=\theta(x,y).\tag{5}$$

2.2 Connectivity Modulus

This section introduces the notions of connectivity constant and connectivity modulus for an admissible pseudo metric θ : X × X → R on (X, d_X). Let I = [0, 1]. Given x, y ∈ X, let Γ(x, y) denote the collection of all (continuous) paths γ : I → (X, d_X) such that γ(0) = x and γ(1) = y. Define the merge distance function rθ : X × X → R by
$$r_\theta(x,y):=\inf_{\gamma\in\Gamma(x,y)}\,\sup_{t\in I}\,\theta(x,\gamma(t))\vee\theta(\gamma(t),y).\tag{6}$$
For θ = d_X, we use the notation rX for the merge distance function. Clearly,
$$\sup_{t\in I}\theta(x,\gamma(t))\vee\theta(\gamma(t),y)\ge\theta(x,\gamma(0))\vee\theta(\gamma(0),y)=\theta(x,y),\tag{7}$$
so that rθ(x, y) ≥ θ(x, y), for any x, y ∈ X. In particular, rX ≥ d_X.
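Proposition 2.2 is easy to test numerically. The sketch below is our own toy setup: it uses the truncated metric θ(x, y) = min(|x − y|, 1) on the real line, which is a 1-admissible pseudo metric, together with an empirical measure µ with 300 atoms, and checks the Lipschitz bound |σp(x) − σp(y)| ≤ θ(x, y) at random point pairs.

```python
import numpy as np

# Numerical check of Proposition 2.2 with the 1-admissible pseudo metric
# theta(x, y) = min(|x - y|, 1) and an empirical measure mu on R.
rng = np.random.default_rng(4)
support = rng.normal(size=300)          # atoms of the empirical measure mu
theta = lambda x, y: np.minimum(np.abs(x - y), 1.0)

def sigma_p(x, p=2):
    """p-deviation function sigma_p(x) = (mean of theta(x, .)^p)^(1/p)."""
    return np.mean(theta(x, support) ** p) ** (1.0 / p)

xs = rng.uniform(-3, 3, 100)
ys = rng.uniform(-3, 3, 100)
gaps = np.abs([sigma_p(x) - sigma_p(y) for x, y in zip(xs, ys)])
bound = theta(xs, ys)
print(np.all(gaps <= bound + 1e-12))  # True: the Minkowski bound holds
```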
By reversing path orientation, one readily verifies that $r_\theta$ is a symmetric function. Intuitively, $r_\theta(x,y)$ quantifies how far out, as measured by $\theta$, we have to go from $x$ and $y$ in order to connect $x$ and $y$ by a path.

Definition 2.3. Let $(X,d_X)$ be a metric space, $\theta: X\times X\to\mathbb{R}$ an admissible pseudo metric, and $K>0$.

(i) $K$ is a $\theta$-connectivity constant for $(X,d_X)$ if $r_\theta(x,y)\le K\,d_X(x,y)$, for all $x,y\in X$. If $\theta=d_X$, then $K\ge 1$ and we refer to $K$ simply as a connectivity constant for $(X,d_X)$.

(ii) The $\theta$-connectivity modulus $K_\theta$ of $(X,d_X)$ is defined as
$$K_\theta := \sup_{\substack{x,y\in X \\ x\ne y}} \frac{r_\theta(x,y)}{d_X(x,y)}.$$
For $\theta=d_X$, we write $K_\theta=K_X$ and call $K_X$ the connectivity modulus of $(X,d_X)$.

If the $\theta$-connectivity modulus is finite, $K_\theta$ is the smallest $\theta$-connectivity constant for $(X,d_X)$.

Proposition 2.4. If $(X,d_X)$ is a geodesic space, then $K_X=1$.

Proof. We show that $r_X=d_X$, which implies that $K_X=1$. As noted above, $r_X\ge d_X$. For the opposite inequality, let $\gamma_{x,y}: I\to X$ be a geodesic from $x$ to $y$. Then,
$$d_X(x,\gamma_{x,y}(t)) \vee d_X(\gamma_{x,y}(t),y) \le d_X(x,\gamma_{x,y}(t)) + d_X(\gamma_{x,y}(t),y) = d_X(x,y), \quad (8)$$
for all $t\in I$. Taking the infimum over $\gamma\in\Gamma(x,y)$, we obtain $r_X(x,y)\le d_X(x,y)$.

Proposition 2.5. Let $\theta: X\times X\to\mathbb{R}$ be an $L$-admissible pseudo metric. If $K\ge 1$ is a connectivity constant for $(X,d_X)$, then $KL$ is a $\theta$-connectivity constant for $(X,d_X)$; that is, $r_\theta(x,y)\le KL\,d_X(x,y)$.

Proof. By assumption, for any $x,y\in X$ and $\epsilon>0$, there exists a path $\gamma\in\Gamma(x,y)$ such that $\sup_{t\in I} d_X(x,\gamma(t)) \vee d_X(\gamma(t),y) \le \epsilon + K\,d_X(x,y)$. Then,
$$\sup_{t\in I}\ \theta(x,\gamma(t)) \vee \theta(\gamma(t),y) \le L\sup_{t\in I}\ d_X(x,\gamma(t)) \vee d_X(\gamma(t),y) \le L\epsilon + LK\,d_X(x,y). \quad (9)$$
Since $\epsilon>0$ is arbitrary, we have $\sup_{t\in I}\theta(x,\gamma(t))\vee\theta(\gamma(t),y)\le KL\,d_X(x,y)$, which in turn implies that $r_\theta(x,y)\le KL\,d_X(x,y)$.

https://arxiv.org/abs/2505.09609v2

3 Barycentric Merge Trees

This section introduces our main objects of study, barycentric merge trees. For $t\in\mathbb{R}$, we adopt the notation $A(t) := \sigma_p^{-1}((-\infty,t]) = \sigma_p^{-1}([0,t])$ for the sublevel sets of $\sigma_p$. The last equality holds because $\sigma_p\ge 0$.

Definition 3.1. The barycentric merge tree (BMT) of $\mathbb{X}$ of order $p$ for an admissible pseudo metric $\theta$, $p\ge 1$, is denoted $T_p(\mathbb{X})$ and defined as the quotient space of $X$ under the equivalence relation $x\sim y$ if there exists $t\in\mathbb{R}$ such that $\sigma_p(x)=\sigma_p(y)=t$ and $x$ and $y$ lie in the same connected component of $A(t)$. The quotient map is denoted $\alpha_p: X\to T_p(\mathbb{X})$.

Points in $T_p(\mathbb{X})$ are in one-to-one correspondence with the connected components of the sublevel sets of $\sigma_p$, as follows. For $x\in X$, let $t=\sigma_p(x)$ and $\alpha_p(x)=a\in T_p(\mathbb{X})$. Then, the point $a$ represents the connected component $C_a$ of $A(t)$ containing $x$. Clearly, $C_a$ only depends on the equivalence class of $x$.

Figure 6: BMTs for distributions on the sphere (see Example 3.2). Left: Heat map representation of the pdf of a highly symmetric distribution on the 2-sphere, with modes at each intersection of the sphere with the coordinate axes. The BMT for $p=2$ is shown, together with points corresponding to leaves on the sphere. Right: A less symmetric distribution on the 2-sphere with its BMT and points corresponding to the leaves.

Example 3.2 (BMTs on the Sphere). We illustrate the barycentric merge tree construction for distributions on the 2-sphere in Figure 6. In each case, we use the standard geodesic distance as the metric, and consider the associated deviation function $\sigma_2$.
The first example is a highly symmetric distribution, given by a pdf which decays exponentially in distance from the set of points $\{(\pm 1,0,0),(0,\pm 1,0),(0,0,\pm 1)\}$. The resulting BMT has leaves corresponding to the faces of the standard cross-polytope, reflecting the symmetry of the measure. These points are the local minima of $\sigma_2$. The leaves merge at a common height, corresponding to the unique local maximum value of the pdf, after which the sublevel sets of $\sigma_2$ are connected. The second example is a random transformation of the first, with no obvious symmetries, and results in a BMT with two leaves. Observe that in both examples, the computed local Fréchet means fall in low-density areas, motivating the mode detection problem that we consider in Section 5. All numerical examples in the paper are implemented via discrete approximations of the underlying spaces, following the method presented in Section 6. ♢

By construction, the deviation function $\sigma_p: X\to\mathbb{R}$ descends to a function $\kappa_p: T_p(\mathbb{X})\to\mathbb{R}$ such that
$$\kappa_p\circ\alpha_p = \sigma_p \quad (10)$$
(this is the function which was denoted as $\hat\sigma_p$ in the introduction). $T_p(\mathbb{X})$ can be equipped with a poset (partially ordered set) structure that we now describe.
Definition 3.3. For $a\in T_p(\mathbb{X})$, let $C_a\subseteq X$ be the connected component of the sublevel set of $\sigma_p$ represented by $a$.

(i) The partial order $\preceq$ is defined by $a\preceq b$ if $C_a\subseteq C_b$.

(ii) A point $a\in T_p(\mathbb{X})$ is a leaf if it is a minimum point of the poset; that is, if $b\preceq a$, then $a=b$.

(iii) Given $a,b,c\in T_p(\mathbb{X})$, $c$ is a merge point for $a$ and $b$ if $a\preceq c$ and $b\preceq c$.

The partial ordering $\preceq$ is compatible with the quotient topology on $T_p(\mathbb{X})$ in the sense that the graph of $\preceq$, given by
$$\{(a,b)\in T_p(\mathbb{X})\times T_p(\mathbb{X}): a\preceq b\}, \quad (11)$$
is a closed subspace of $T_p(\mathbb{X})\times T_p(\mathbb{X})$. Thus, $T_p(\mathbb{X})$ is a pospace (partially ordered space), not just a poset.

Remark 3.4. The following properties satisfied by $\preceq$ are readily verified:

(i) The function $\kappa_p: T_p(\mathbb{X})\to\mathbb{R}$ is monotonic; that is, $\kappa_p(a)\le\kappa_p(b)$ if $a\preceq b$;

(ii) For $a\in T_p(\mathbb{X})$, $C_a=\alpha_p^{-1}(\downarrow a)$, where $\downarrow a := \{b\in T_p(\mathbb{X}): b\preceq a\}$ is the principal lower set of $a$;

(iii) If $a\preceq b$ and $a\preceq c$, then $b\preceq c$ or $c\preceq b$; that is, $T_p(\mathbb{X})$ has no branching points;

(iv) If $a\preceq b$ and $\kappa_p(a)=\kappa_p(b)$, then $a=b$.

We now turn $T_p(\mathbb{X})$ into a (pseudo) mm-space, starting with a (pseudo) metric on $T_p(\mathbb{X})$. For $a,b\in T_p(\mathbb{X})$, let
$$\Lambda(a,b) = \{c\in T_p(\mathbb{X}): a\preceq c \text{ and } b\preceq c\} \quad (12)$$
be the merge set of $a$ and $b$. Note that $\Lambda(a,b)\ne\emptyset$. Indeed, let $x_a,x_b\in X$ be such that $\alpha_p(x_a)=a$ and $\alpha_p(x_b)=b$, and let $\gamma: I\to X$ be a path from $x_a$ to $x_b$. Such a $\gamma$ exists because $X$ is path connected. Set $t_0=\operatorname{argmax}_{t\in I}\sigma_p(\gamma(t))$, $x_0=\gamma(t_0)$, and $c=\alpha_p(x_0)$. Then, $c\in\Lambda(a,b)$.

Definition 3.5 (cf. [13, 19]). The distance function $d_{p,X}: T_p(\mathbb{X})\times T_p(\mathbb{X})\to\mathbb{R}$ is defined as
$$d_{p,X}(a,b) := \inf_{c\in\Lambda(a,b)} \big(\kappa_p(c)-\kappa_p(a)\big) \vee \big(\kappa_p(c)-\kappa_p(b)\big) = \inf_{c\in\Lambda(a,b)} \kappa_p(c) - \big(\kappa_p(a)\wedge\kappa_p(b)\big).$$

The next goal is to give an alternative description of $d_{p,X}$ in terms of the variation of $\sigma_p$ along paths in $X$ and to show that $d_{p,X}$ is indeed a pseudo metric. For $\gamma\in\Gamma(x,y)$, let
$$\rho(\gamma) = \sup_{t\in I} \big(\sigma_p(\gamma(t))-\sigma_p(x)\big) \vee \big(\sigma_p(\gamma(t))-\sigma_p(y)\big) = \sup_{t\in I} \sigma_p(\gamma(t)) - \big(\sigma_p(x)\wedge\sigma_p(y)\big). \quad (13)$$

Remark 3.6. $\rho(\gamma)$ only depends on the trace of $\gamma$, as it is invariant under reparameterizations of the curve in a very flexible sense. Namely, for any mapping $h: I\to I$ such that $h(0)=0$ and $h(1)=1$, the curve $\gamma'=\gamma\circ h$ satisfies $\rho(\gamma)=\rho(\gamma')$.

Definition 3.7.
For $\mu\in\mathcal{B}(X,d_X;p)$, the merge radius function $m_{p,X}: X\times X\to\mathbb{R}$ of $\mathbb{X}=(X,d_X,\mu)$ is defined as
$$m_{p,X}(x,y) := \inf_{\gamma\in\Gamma(x,y)}\rho(\gamma) = \inf_{\gamma\in\Gamma(x,y)} \sup_{t\in I}\ \sigma_p(\gamma(t)) - \big(\sigma_p(x)\wedge\sigma_p(y)\big).$$

Proposition 3.8. Let $\mathbb{X}=(X,d_X,\mu)$ be an mm-space with $\mu\in\mathcal{B}(X,d_X;p)$, $p\ge 1$. Then, for any $x,x',y,y',z\in X$, the following statements hold:

(i) $m_{p,X}(x,z)\le m_{p,X}(x,y)+m_{p,X}(y,z)$;

(ii) if $\alpha_p(x)=\alpha_p(x')$, then $m_{p,X}(x,x')=0$;

(iii) if $\alpha_p(x)=\alpha_p(x')$ and $\alpha_p(y)=\alpha_p(y')$, then $m_{p,X}(x,y)=m_{p,X}(x',y')$.

Proof. The argument is similar to that given in [14, Proposition 4] for a related result for Reeb spaces.

(i) Given $\gamma_1\in\Gamma(x,y)$ and $\gamma_2\in\Gamma(y,z)$, let $\gamma\in\Gamma(x,z)$ be the concatenation of $\gamma_1$ and $\gamma_2$. Then,
$$\sup_{t\in I}\sigma_p(\gamma(t))-\sigma_p(x) = \Big(\sup_{t\in I}\sigma_p(\gamma_1(t))-\sigma_p(x)\Big) \vee \Big(\sup_{t\in I}\sigma_p(\gamma_2(t))-\sigma_p(x)\Big) = \Big(\sup_{t\in I}\sigma_p(\gamma_1(t))-\sigma_p(x)\Big) \vee \Big(\sup_{t\in I}\sigma_p(\gamma_2(t))-\sigma_p(y)+\sigma_p(y)-\sigma_p(x)\Big) \le \Big(\sup_{t\in I}\sigma_p(\gamma_1(t))-\sigma_p(x)\Big) + \Big(\sup_{t\in I}\sigma_p(\gamma_2(t))-\sigma_p(y)\Big) \le \rho(\gamma_1)+\rho(\gamma_2), \quad (14)$$
for $\sigma_p(y)-\sigma_p(x)\le\sup_{t\in I}\sigma_p(\gamma_1(t))-\sigma_p(x)$. Similarly, $\sup_{t\in I}\sigma_p(\gamma(t))-\sigma_p(z)\le\rho(\gamma_1)+\rho(\gamma_2)$. Therefore, $\rho(\gamma)\le\rho(\gamma_1)+\rho(\gamma_2)$, which implies that
$$m_{p,X}(x,z) = \inf_{\gamma\in\Gamma(x,z)}\rho(\gamma) \le \inf_{\gamma_1\in\Gamma(x,y)}\rho(\gamma_1) + \inf_{\gamma_2\in\Gamma(y,z)}\rho(\gamma_2) = m_{p,X}(x,y)+m_{p,X}(y,z). \quad (15)$$

(ii) Let $t=\sigma_p(x)=\sigma_p(x')$, so that $x$ and $x'$ lie in the same connected component of the sublevel set $\sigma_p^{-1}((-\infty,t])$. For any $\epsilon>0$, $x$ and $x'$ must then lie in the same connected component $C_\epsilon$ of the open set $\sigma_p^{-1}((-\infty,t+\epsilon))$. Since $X$ is locally path connected, $C_\epsilon$ is path connected, so there is a path $\gamma\in\Gamma(x,x')$ whose image is contained in $C_\epsilon$. This implies that $m_{p,X}(x,x')\le\rho(\gamma)<\epsilon$. Since $\epsilon>0$ is arbitrary, $m_{p,X}(x,x')=0$.

(iii) By (i) and (ii),
$$m_{p,X}(x,y) \le m_{p,X}(x,x') + m_{p,X}(x',y') + m_{p,X}(y',y) = m_{p,X}(x',y'). \quad (16)$$
Similarly, $m_{p,X}(x',y')\le m_{p,X}(x,y)$. This concludes the proof.

Proposition 3.8 implies that $m_{p,X}$ induces a pseudo metric on $T_p(\mathbb{X})$; that is, the function $\hat d_{p,X}: T_p(\mathbb{X})\times T_p(\mathbb{X})\to\mathbb{R}$ given by
$$\hat d_{p,X}(\alpha_p(x),\alpha_p(y)) := m_{p,X}(x,y) \quad (17)$$
is a well-defined pseudo metric.

Proposition 3.9. If $\mu\in\mathcal{B}(X,d_X;p)$, $p\ge 1$, then $d_{p,X}=\hat d_{p,X}$. In particular, $d_{p,X}$ is a pseudo metric.

Proof. We first show that $d_{p,X}\le\hat d_{p,X}$. Let $a,b\in T_p(\mathbb{X})$ and $x_a,x_b\in X$ be such that $\alpha_p(x_a)=a$ and $\alpha_p(x_b)=b$. Given $\epsilon>0$, there is $\gamma\in\Gamma(x_a,x_b)$ such that $\sup_{t\in I}\sigma_p(\gamma(t))-(\sigma_p(x_a)\wedge\sigma_p(x_b)) < \hat d_{p,X}(a,b)+\epsilon$. Let $t_0\in I$ be a point that realizes this supremum and set $c=\alpha_p(\gamma(t_0))$. Clearly, $\sigma_p(\gamma(t_0))\ge\sigma_p(x_a)\vee\sigma_p(x_b)$, so that $c\in\Lambda(a,b)$ and
$$d_{p,X}(a,b) \le \kappa_p(c) - (\kappa_p(a)\wedge\kappa_p(b)) = \sigma_p(\gamma(t_0)) - (\sigma_p(x_a)\wedge\sigma_p(x_b)) < \hat d_{p,X}(a,b)+\epsilon. \quad (18)$$
Since $\epsilon>0$ is arbitrary, we obtain the desired inequality.

For the opposite inequality, given $\epsilon>0$, let $c\in\Lambda(a,b)$ be such that $\kappa_p(c)-(\kappa_p(a)\wedge\kappa_p(b)) < d_{p,X}(a,b)+\epsilon$. Let $x_a,x_b,x_c\in X$ be representatives of the equivalence classes $a,b,c\in T_p(\mathbb{X})$, respectively, and $t_c=\kappa_p(c)$. Then, $x_a$, $x_b$ and $x_c$ are contained in the same connected component $C$ of the open set $\sigma_p^{-1}((-\infty,t_c+\epsilon))$. Since $C$ is path connected, there is $\gamma\in\Gamma(x_a,x_b)$ whose image lies in $C$. This implies that $\hat d_{p,X}(a,b) \le \sup_{t\in I}\sigma_p(\gamma(t)) - (\sigma_p(x_a)\wedge\sigma_p(x_b)) < t_c+\epsilon - (\kappa_p(a)\wedge\kappa_p(b)) < d_{p,X}(a,b)+2\epsilon$. Since $\epsilon>0$ is arbitrary, we get $\hat d_{p,X}(a,b)\le d_{p,X}(a,b)$.

One can show that if $X$ is compact and $T_p(\mathbb{X})$ has finitely many leaves, then $d_{p,X}$ metrizes the quotient topology on $T_p(\mathbb{X})$ (cf. [13]). This, however, may not be true in general.
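The path formula of Proposition 3.9 has a direct combinatorial analogue (developed formally in Section 6): on a finite graph with a deviation value at each vertex, $d_p([v],[w])$ is the minimax of $\sigma_p$ over paths from $v$ to $w$, minus $\sigma_p(v)\wedge\sigma_p(w)$. A brute-force sketch (hypothetical function name; exhaustive path search, so suitable only for tiny graphs):

```python
def merge_tree_distance(adj, sigma, v, w):
    """Discrete analogue of the path formula in Proposition 3.9:
    minimize over combinatorial paths gamma from v to w the quantity
    max_{u in gamma} sigma(u), then subtract min(sigma(v), sigma(w))."""
    best = float("inf")

    def dfs(u, seen, peak):
        nonlocal best
        peak = max(peak, sigma[u])
        if peak >= best:          # prune: this path cannot improve the minimax
            return
        if u == w:
            best = peak
            return
        for nb in adj[u]:
            if nb not in seen:
                dfs(nb, seen | {nb}, peak)

    dfs(v, {v}, sigma[v])
    return best - min(sigma[v], sigma[w])

# A 6-cycle with a "deviation" value per vertex: two basins (low values at
# vertices 0 and 3) separated by a high ridge (vertex 1) and a low one (4, 5).
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
sigma = {0: 1.0, 1: 5.0, 2: 2.0, 3: 0.5, 4: 2.0, 5: 2.0}

# The two basins merge through the lower ridge (height 2), not the higher one.
assert merge_tree_distance(adj, sigma, 0, 3) == 2.0 - 0.5
```

The exponential-time search is only for illustration; the combinatorial construction of Section 6 admits the usual near-linear union-find implementation.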
Nonetheless, we show that $d_{p,X}$ has properties that make it well suited for our purposes: (a) the topology $\tau_p$ induced by $d_{p,X}$ is fine enough for the mapping $\kappa_p: T_p(\mathbb{X})\to\mathbb{R}$ to be continuous (1-Lipschitz, as a matter of fact); (b) $\tau_p$ is coarse enough for $\alpha_p: X\to T_p(\mathbb{X})$ to be continuous; and (c) $d_{p,X}$ allows us to analyze BMT structural variation in a Gromov-Wasserstein type framework. As such, from this point on, we always assume that $T_p(\mathbb{X})$ is equipped with the metric $d_{p,X}$.

Proposition 3.10. If $\mu\in\mathcal{B}(X,d_X;p)$, then the following statements hold:

(i) the map $\kappa_p: (T_p(\mathbb{X}),d_{p,X})\to\mathbb{R}$ is 1-Lipschitz;

(ii) the map $\alpha_p: (X,d_X)\to(T_p(\mathbb{X}),d_{p,X})$ is continuous.

Proof. (i) Given $a,b\in T_p(\mathbb{X})$ and $\epsilon>0$, Definition 3.5 ensures that there exists $c\in\Lambda(a,b)$ such that
$$(\kappa_p(c)-\kappa_p(a)) \vee (\kappa_p(c)-\kappa_p(b)) < d_{p,X}(a,b)+\epsilon. \quad (19)$$
Since $c$ is a merge point for $a$ and $b$, we also have that $\kappa_p(a)\le\kappa_p(c)$ and $\kappa_p(b)\le\kappa_p(c)$. Therefore, $|\kappa_p(a)-\kappa_p(b)| < d_{p,X}(a,b)+\epsilon$. Since $\epsilon>0$ is arbitrary, $|\kappa_p(a)-\kappa_p(b)|\le d_{p,X}(a,b)$.

(ii) Let $U\subseteq T_p(\mathbb{X})$ be an open set and $\tilde U=\alpha_p^{-1}(U)\subseteq X$. To show that $\tilde U$ is open, given $x\in\tilde U$ we construct an open neighborhood $V$ of $x$ such that $\alpha_p(V)\subseteq U$. Let $a=\alpha_p(x)$ and pick $\epsilon>0$ such that $B(a,\epsilon)\subseteq U$, where $B(a,\epsilon)$ is the open ball of radius $\epsilon$ centered at $a$. Since $X$ is locally path connected, there exists a path-connected open neighborhood $V$ of $x$ with the property that $V\subseteq B(x,\epsilon/2L)$, where $L>0$ is an admissibility constant for $\theta$. We claim that $\alpha_p(V)\subseteq B(a,\epsilon)\subseteq U$. Indeed, given any $y\in V$, let $\beta\in\Gamma(x,y)$ be a path contained in $V$ and thus in $B(x,\epsilon/2L)$. Proposition 2.2 ensures that $\sigma_p$ is $L$-Lipschitz, so that
$$|\sigma_p(\beta(s))-\sigma_p(\beta(t))| \le |\sigma_p(\beta(s))-\sigma_p(x)| + |\sigma_p(\beta(t))-\sigma_p(x)| \le L\,d_X(\beta(s),x) + L\,d_X(\beta(t),x) < \epsilon, \quad (20)$$
for any $s,t\in I$. In particular, $\rho(\beta)=\sup_{t\in I}\big(\sigma_p(\beta(t))-\sigma_p(x)\big) \vee \big(\sigma_p(\beta(t))-\sigma_p(y)\big) < \epsilon$. This implies that
$$\hat d_{p,X}(a,\alpha_p(y)) = \hat d_{p,X}(\alpha_p(x),\alpha_p(y)) = \inf_{\gamma\in\Gamma(x,y)}\rho(\gamma) \le \rho(\beta) < \epsilon. \quad (21)$$
By Proposition 3.9, $d_{p,X}(a,\alpha_p(y))<\epsilon$, showing that $\alpha_p(y)\in B(a,\epsilon)$. This concludes the proof.

Definition 3.11. Let $\mu\in\mathcal{B}(X,d_X;p)$ and $\mathbb{X}=(X,d_X,\mu)$. The metric-measure barycentric merge tree of order $p$ of $\mathbb{X}$ is the triple $(T_p(\mathbb{X}),d_{p,X},\mu_p)$, where $\mu_p=\alpha_{p\#}(\mu)$ is the pushforward of $\mu$ under $\alpha_p: X\to T_p(\mathbb{X})$.

Henceforth, we always assume this mm-structure on $T_p(\mathbb{X})$ and simplify the terminology to barycentric merge tree (BMT) of $\mathbb{X}$ if the choice of $p$ (and the admissible pseudo metric $\theta$) is clear from the context. We close this section with additional discussion of the metric properties of the map $\alpha_p$ under the assumption that $(X,d_X)$ has finite connectivity modulus (see Definition 2.3).

Proposition 3.12. If $K>0$ is a connectivity constant for $(X,d_X)$ and $\theta$ is $L$-admissible, then the quotient map $\alpha_p: (X,d_X)\to(T_p(\mathbb{X}),d_{p,X})$ is $KL$-Lipschitz.

Proof. Given $x,y\in X$ and $\epsilon>0$, by Proposition 2.5 there is a path $\gamma_\epsilon\in\Gamma(x,y)$ such that
$$\sup_{t\in I}\ \theta(\gamma_\epsilon(t),x) \vee \theta(\gamma_\epsilon(t),y) < \epsilon + r_\theta(x,y) \le \epsilon + KL\,d_X(x,y), \quad (22)$$
with $r_\theta(x,y)$ as in (6). Proposition 2.2 and (22) imply that $\rho(\gamma_\epsilon) < \epsilon + KL\,d_X(x,y)$. Therefore,
$$d_{p,X}(\alpha_p(x),\alpha_p(y)) = \inf_{\gamma\in\Gamma(x,y)}\rho(\gamma) \le \rho(\gamma_\epsilon) < \epsilon + KL\,d_X(x,y). \quad (23)$$
Since $\epsilon>0$ is arbitrary, $d_{p,X}(\alpha_p(x),\alpha_p(y))\le KL\,d_X(x,y)$, as claimed.

4 Stability and Consistency for Barycentric Merge Trees

The main goal of this section is to establish the stability of barycentric merge trees associated with probability measures $\mu\in\mathcal{B}(X,d_X;p)$, where the BMTs are constructed with respect to a fixed admissible pseudo metric $\theta$. Since $T_p(\mathbb{X})$ is also equipped with a function $\kappa_p: T_p(\mathbb{X})\to\mathbb{R}$ induced by the deviation function $\sigma_p$ (see (10)), we address stability of the full functional barycentric merge tree $\mathcal{F}_p(\mathbb{X}) := (T_p(\mathbb{X}),d_{p,X},\mu_p,\kappa_p)$.
In our stability and consistency results, variation in $\mu$ is quantified with the Wasserstein distance in $(X,d_X)$ from classical optimal transport theory [37], whereas variation in $\mathcal{F}_p(\mathbb{X})$ is measured with a functional version of Sturm's $L^p$-transportation distance, so we begin with a discussion of this distance.

4.1 Distances Between Functional Metric Measure Spaces

In [35], Sturm introduced $L^p$-transportation distances, $p\ge 1$, between mm-spaces, building on Kantorovich's formulation of optimal transport distances, which are frequently referred to in the literature as the Wasserstein $p$-distances $w_p$. Later, Mémoli introduced a variant, termed the Gromov-Wasserstein $p$-distance, based on expected distortions of probabilistic couplings [31]. Since some of the subsequent literature has also called Sturm's version Gromov-Wasserstein, to avoid confusion we refer to Sturm's formulation as the Kantorovich-Sturm $p$-distance $d_{KS,p}$ and denote Mémoli's version $d_{GW,p}$. The distance $d_{GW,p}$ has the virtue of being more amenable to computation and is a lower bound for $d_{KS,p}$; that is, $d_{GW,p}(\mathcal{Z},\mathcal{Z}')\le d_{KS,p}(\mathcal{Z},\mathcal{Z}')$ [32, Theorem 5.1]. Because of this inequality, in studying the stability of barycentric merge trees, we use $d_{KS,p}$-type metrics, as they lead to stronger results that imply stability with respect to $d_{GW,p}$-type distances. The Kantorovich-Sturm distance can be extended to pseudo mm-spaces and described as follows.
Let $\mathcal{Z}=(Z,d,\mu)$ and $\mathcal{Z}'=(Z',d',\mu')$ be (pseudo) mm-spaces, where $\mu$ and $\mu'$ have finite $p$-moments. Let $A\subseteq Z$ and $A'\subseteq Z'$ be the supports of $\mu$ and $\mu'$, respectively. A metric coupling between $\mathcal{Z}$ and $\mathcal{Z}'$ is a (pseudo) metric $\delta: (Z\sqcup Z')\times(Z\sqcup Z')\to\mathbb{R}$ on the disjoint union $Z\sqcup Z'$ with the property that $\delta|_{A\times A}=d|_{A\times A}$ and $\delta|_{A'\times A'}=d'|_{A'\times A'}$. The set of all such metric couplings is denoted $M(\mathcal{Z},\mathcal{Z}')$. A probabilistic coupling between $\mu$ and $\mu'$ is a Borel probability measure $h$ on $Z\times Z'$ that marginalizes to $\mu$ and $\mu'$; that is, $\pi_\#(h)=\mu$ and $\pi'_\#(h)=\mu'$, where $\pi$ and $\pi'$ denote the projections to the first and second components, respectively. The set of all such probabilistic couplings is denoted $C(\mu,\mu')$.

Definition 4.1 ([35]). The (extended) Kantorovich-Sturm $p$-distance, $p\ge 1$, is defined as
$$d_{KS,p}(\mathcal{Z},\mathcal{Z}') := \inf_{\substack{h\in C(\mu,\mu') \\ \delta\in M(\mathcal{Z},\mathcal{Z}')}} \Big(\int_{Z\times Z'} \delta^p(z,z')\,dh(z,z')\Big)^{1/p}.$$

Since a merge tree $T_p(\mathbb{X})$ is also equipped with a function $\kappa_p: T_p(\mathbb{X})\to\mathbb{R}$, we adopt a functional analogue of $d_{KS,p}$. For $f: Z\to\mathbb{R}$ and $f': Z'\to\mathbb{R}$, we define a Kantorovich-Sturm distance between the functional mm-spaces $\mathcal{F}=(Z,d,\mu,f)$ and $\mathcal{F}'=(Z',d',\mu',f')$.

Definition 4.2. The (extended) functional Kantorovich-Sturm $p$-distance, $p\ge 1$, is defined as
$$D_{KS,p}(\mathcal{F},\mathcal{F}') := \inf_{\substack{h\in C(\mu,\mu') \\ \delta\in M(\mathcal{Z},\mathcal{Z}')}} \Big(\int_{Z\times Z'} \delta^p(z,z')\,dh(z,z')\Big)^{1/p} \vee \Big(\int_{Z\times Z'} |f(z)-f'(z')|^p\,dh(z,z')\Big)^{1/p}.$$

We refer to the first term in the above maximum as the structural offset of the pair $(\delta,h)$ and to the second term as the functional offset of the coupling $h$. Clearly, by their definitions, $d_{KS,p}(\mathcal{Z},\mathcal{Z}')\le D_{KS,p}(\mathcal{F},\mathcal{F}')$. The distance $D_{KS,p}$ is a variant of the Fused Gromov-Wasserstein distance of [36], which is a functional version of $d_{GW,p}$.

Metric couplings are closely tied to relations and correspondences between $Z$ and $Z'$ (a correspondence is a relation $R\subseteq Z\times Z'$ for which the projections $\pi: R\to Z$ and $\pi': R\to Z'$ are surjective). Here, we review a basic connection needed in the paper. Given a relation $\emptyset\ne R\subseteq Z\times Z'$, the distortion of $R$ is defined as
$$\operatorname{dis}(R) := \sup_{(x,x'),(y,y')\in R} |d(x,y)-d'(x',y')|.$$
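For finite (pseudo) metric spaces, the distortion of a correspondence is a direct computation over pairs of its elements. A small sketch (hypothetical names; distance matrices as NumPy arrays):

```python
import numpy as np

def distortion(R, d, d_prime):
    """Distortion of a relation R between finite metric spaces:
    dis(R) = max over (x,x'), (y,y') in R of |d(x,y) - d'(x',y')|.
    `d`, `d_prime` are distance matrices; R is a list of index pairs (i, j)."""
    return max(
        abs(d[i1, i2] - d_prime[j1, j2])
        for (i1, j1) in R
        for (i2, j2) in R
    )

# Three points on a line vs. a slightly perturbed copy.
X = np.array([0.0, 1.0, 2.0])
Y = np.array([0.0, 1.1, 2.0])
d = np.abs(X[:, None] - X[None, :])
d_prime = np.abs(Y[:, None] - Y[None, :])

R = [(0, 0), (1, 1), (2, 2)]  # the identity correspondence
assert abs(distortion(R, d, d_prime) - 0.1) < 1e-12
```

By Proposition 4.3 below, any $r\ge\operatorname{dis}(R)/2$ (here $r\ge 0.05$) then yields a pseudo-metric coupling $\delta_r$ on the disjoint union of the two spaces.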
(24)

For $r>0$, define $\delta_r: (Z\sqcup Z')\times(Z\sqcup Z')\to\mathbb{R}$ by $\delta_r|_{Z\times Z}=d$, $\delta_r|_{Z'\times Z'}=d'$, and
$$\delta_r(z,z') = \delta_r(z',z) := r + \inf_{(w,w')\in R}\ d(z,w)+d'(w',z'), \quad (25)$$
for any $z\in Z$ and $z'\in Z'$.

Proposition 4.3 ([9], see also [3]). If $\emptyset\ne R\subseteq Z\times Z'$ and $\operatorname{dis}(R)\le 2r$, then $\delta_r$ is a pseudo metric on $Z\sqcup Z'$.

Clearly, $\delta_r(z,z')\ge r>0$, for any $z\in Z$ and $z'\in Z'$. Thus, $\delta_r$ is a metric if and only if both $d$ and $d'$ are metrics.

4.2 Stability and Consistency

For $\mu,\mu'\in\mathcal{B}(X,d_X;p)$, we adopt the abbreviations $\mathcal{F}_p=(T_p,d_p,\mu_p,\kappa_p)$ and $\mathcal{F}'_p=(T'_p,d'_p,\mu'_p,\kappa'_p)$ for the functional barycentric merge trees of $\mu$ and $\mu'$, respectively, constructed with respect to a fixed admissible $\theta: X\times X\to\mathbb{R}$. Similarly, we let $\mathcal{T}_p=(T_p,d_p,\mu_p)$ and $\mathcal{T}'_p=(T'_p,d'_p,\mu'_p)$ be the structural parts of $\mathcal{F}_p$ and $\mathcal{F}'_p$, and $\sigma_p$ and $\sigma'_p$ the $p$-deviation functions of $\mu$ and $\mu'$.

To relate $D_{KS,p}(\mathcal{F}_p,\mathcal{F}'_p)$ with $w_p(\mu,\mu')$, we first construct a metric coupling between $\mathcal{T}_p$ and $\mathcal{T}'_p$, starting with the correspondence $R\subseteq T_p\times T'_p$ given by
$$R := \big\{(\alpha_p(x),\alpha'_p(x)): x\in X\big\}. \quad (26)$$
Given $r>0$, let $\delta_r: (T_p\sqcup T'_p)\times(T_p\sqcup T'_p)\to\mathbb{R}$ be defined as described in (25): namely, (i) $\delta_r|_{T_p\times T_p}=d_p$; (ii) $\delta_r|_{T'_p\times T'_p}=d'_p$; and (iii) for $a\in T_p$ and $b\in T'_p$,
$$\delta_r(a,b) = \delta_r(b,a) := r + \inf_{x\in X}\ \big(d_p(a,\alpha_p(x)) + d'_p(\alpha'_p(x),b)\big). \quad (27)$$
Note that if $x_1,x_2\in X$, then (27) implies that
$$\delta_r(\alpha_p(x_1),\alpha'_p(x_2)) \le r + d_p(\alpha_p(x_1),\alpha_p(x_2)), \quad (28)$$
an inequality that is used below in the proof of the stability of BMTs.

Lemma 4.4 (The Coupling Lemma). If the $p$-deviation functions satisfy $|\sigma_p(x)-\sigma'_p(x)|\le r$, for all $x\in X$, then $\delta_r$ defines a pseudo metric on $T_p\sqcup T'_p$. In particular, $\delta_r$ yields a metric coupling between $\mathcal{T}_p$ and $\mathcal{T}'_p$.

Proof. By Proposition 4.3, it suffices to check that $\operatorname{dis}(R)\le 2r$; that is, for any $x,y\in X$,
$$|d_p(\alpha_p(x),\alpha_p(y)) - d'_p(\alpha'_p(x),\alpha'_p(y))| \le 2r. \quad (29)$$
We first show that for any curve $\gamma\in\Gamma(x,y)$, $|\rho(\gamma)-\rho'(\gamma)|\le 2r$, where $\rho(\gamma)$ is as in (13) and $\rho'(\gamma)$ is its counterpart for $\sigma'_p$. Let $t_0\in I$ be such that
$$\rho(\gamma) = \big(\sigma_p(\gamma(t_0))-\sigma_p(x)\big) \vee \big(\sigma_p(\gamma(t_0))-\sigma_p(y)\big). \quad (30)$$
Then, by the hypothesis on the deviation functions and the triangle inequality, we have that
$$\sigma_p(\gamma(t_0))-\sigma_p(x) = \sigma_p(\gamma(t_0))-\sigma'_p(\gamma(t_0)) + \sigma'_p(\gamma(t_0))-\sigma'_p(x) + \sigma'_p(x)-\sigma_p(x) \le 2r + \sigma'_p(\gamma(t_0))-\sigma'_p(x) \le 2r + \sup_{t\in I}\sigma'_p(\gamma(t))-\sigma'_p(x). \quad (31)$$
Similarly,
$$\sigma_p(\gamma(t_0))-\sigma_p(y) \le 2r + \sup_{t\in I}\sigma'_p(\gamma(t))-\sigma'_p(y). \quad (32)$$
Taking the maximum of (31) and (32), we obtain
$$\rho(\gamma) = \big(\sigma_p(\gamma(t_0))-\sigma_p(x)\big) \vee \big(\sigma_p(\gamma(t_0))-\sigma_p(y)\big) \le 2r + \rho'(\gamma). \quad (33)$$
Symmetrically, we have that $\rho'(\gamma)\le 2r+\rho(\gamma)$. Therefore, $|\rho(\gamma)-\rho'(\gamma)|\le 2r$, as claimed.

We now estimate the difference $d'_p(\alpha'_p(x),\alpha'_p(y)) - d_p(\alpha_p(x),\alpha_p(y))$. For a fixed curve $\gamma\in\Gamma(x,y)$, we have
$$\inf_{\gamma'\in\Gamma(x,y)}\rho'(\gamma') - \rho(\gamma) \le \rho'(\gamma)-\rho(\gamma). \quad (34)$$
Therefore,
$$d'_p(\alpha'_p(x),\alpha'_p(y)) - d_p(\alpha_p(x),\alpha_p(y)) = \inf_{\gamma'\in\Gamma(x,y)}\rho'(\gamma') - \inf_{\gamma\in\Gamma(x,y)}\rho(\gamma) = \sup_{\gamma\in\Gamma(x,y)}\Big(\inf_{\gamma'\in\Gamma(x,y)}\rho'(\gamma') - \rho(\gamma)\Big) \le \sup_{\gamma\in\Gamma(x,y)}\big(\rho'(\gamma)-\rho(\gamma)\big) \le 2r. \quad (35)$$
Similarly, $d_p(\alpha_p(x),\alpha_p(y)) - d'_p(\alpha'_p(x),\alpha'_p(y)) \le 2r$. This shows that (29) holds and concludes the proof.

Theorem 4.5 (Stability of BMTs). Let $(X,d_X)$ be a connected and locally path-connected Polish metric space and $\theta: X\times X\to\mathbb{R}$ an $L$-admissible pseudo metric, $L>0$. If $\mu,\mu'\in\mathcal{B}(X,d_X;p)$ and $K\ge 1$ is a connectivity constant for $(X,d_X)$, then
$$D_{KS,p}(\mathcal{F}_p,\mathcal{F}'_p) \le L(1+K)\,w_p(\mu,\mu').$$
In particular, $D_{KS,p}(\mathcal{F}_p,\mathcal{F}'_p)\le 2\,w_p(\mu,\mu')$ if $(X,d_X)$ is a geodesic space and $\theta=d_X$.

Proof. We begin the proof by showing that
$$|\sigma_p(x)-\sigma'_p(x)| \le L\,w_p(\mu,\mu'), \quad (36)$$
for all $x\in X$, using a standard argument that employs the Minkowski inequality and the marginal conditions of a measure coupling.
Indeed, for any $h\in C(\mu,\mu')$, we have that
$$|\sigma_p(x)-\sigma'_p(x)| = \left|\Big(\int_X \theta^p(x,y)\,d\mu(y)\Big)^{1/p} - \Big(\int_X \theta^p(x,y')\,d\mu'(y')\Big)^{1/p}\right| = \left|\Big(\int_{X\times X} \theta^p(x,y)\,dh(y,y')\Big)^{1/p} - \Big(\int_{X\times X} \theta^p(x,y')\,dh(y,y')\Big)^{1/p}\right| \le \Big(\int_{X\times X} |\theta(x,y)-\theta(x,y')|^p\,dh(y,y')\Big)^{1/p} \le \Big(\int_{X\times X} \theta^p(y,y')\,dh(y,y')\Big)^{1/p} \le L\Big(\int_{X\times X} d_X^p(y,y')\,dh(y,y')\Big)^{1/p}. \quad (37)$$
Since the coupling $h$ is arbitrary, we obtain
$$|\sigma_p(x)-\sigma'_p(x)| \le L\inf_{h\in C(\mu,\mu')}\Big(\int_{X\times X} d_X^p(y,y')\,dh(y,y')\Big)^{1/p} = L\,w_p(\mu,\mu'), \quad (38)$$
as claimed. Therefore, the Coupling Lemma applied with $r=L\,w_p(\mu,\mu')$ ensures that $\delta_r$, defined in (27), gives a coupling between $d_p$ and $d'_p$. Also note that any $h\in C(\mu,\mu')$ induces a coupling $\bar h := (\alpha_p\times\alpha'_p)_\#(h)\in C(\mu_p,\mu'_p)$, where $\mu_p=\alpha_{p\#}(\mu)$ and $\mu'_p=\alpha'_{p\#}(\mu')$. Then, from inequality (28) and Proposition 3.12, we obtain
$$\delta_r(\alpha_p(x),\alpha'_p(y)) \le r + d_p(\alpha_p(x),\alpha_p(y)) \le L\,w_p(\mu,\mu') + KL\,d_X(x,y). \quad (39)$$
Therefore, using the Minkowski inequality, we get the following estimate for the structural offset of the pair $(\delta_r,\bar h)$:
$$\Big(\int_{T_p\times T'_p} \delta_r^p(\bar x,\bar y)\,d\bar h(\bar x,\bar y)\Big)^{1/p} = \Big(\int_{X\times X} \delta_r^p(\alpha_p(x),\alpha'_p(y))\,dh(x,y)\Big)^{1/p} \le L\,w_p(\mu,\mu') + KL\Big(\int_{X\times X} d_X^p(x,y)\,dh(x,y)\Big)^{1/p}. \quad (40)$$
For the functional offset of $\bar h$, (38) and Proposition 2.2 yield
$$\Big(\int_{T_p\times T'_p} |\kappa_p(\bar x)-\kappa'_p(\bar y)|^p\,d\bar h(\bar x,\bar y)\Big)^{1/p} = \Big(\int_{X\times X} |\sigma_p(x)-\sigma'_p(y)|^p\,dh(x,y)\Big)^{1/p} = \Big(\int_{X\times X} |\sigma_p(x)-\sigma'_p(x)+\sigma'_p(x)-\sigma'_p(y)|^p\,dh(x,y)\Big)^{1/p} \le L\,w_p(\mu,\mu') + L\Big(\int_{X\times X} d_X^p(x,y)\,dh(x,y)\Big)^{1/p}, \quad (41)$$
an upper bound that is smaller than that for the structural offset in (40) because $K\ge 1$. Since the coupling $h$ is arbitrary, it follows that
$$D_{KS,p}(\mathcal{F}_p,\mathcal{F}'_p) \le L\,w_p(\mu,\mu') + KL\inf_{h\in C(\mu,\mu')}\Big(\int_{X\times X} d_X^p(x,y)\,dh(x,y)\Big)^{1/p} = L(1+K)\,w_p(\mu,\mu'), \quad (42)$$
as claimed. For a geodesic space $(X,d_X)$ and $\theta=d_X$, we can choose $L=1$ and, by Proposition 2.4, $K=1$.

Figure 7: Quantitative stability of BMTs (see Example 4.6). Top row: A summary of the experiment. We begin with a smooth bimodal distribution on the circle, which is then sampled $n$ times ($n\in\{10,25,50,100,250,500,1000\}$), from which we compute a Fréchet mean (i.e., a point on the circle) and a barycentric merge tree ($p=2$). This simulation is repeated 50 times for each $n$. Bottom row: The distribution of Wasserstein distances between samples, treated as empirical distributions, for each value of $n$, shows convergence in the number of samples. On the other hand, the Fréchet mean has a consistent spread for any number of samples. Finally, the distribution of Gromov-Wasserstein distances between BMTs for each $n$ shows fast convergence.

Example 4.6 (Illustrating the Stability of BMTs). This example provides a quantitative illustration of the stability of BMTs and the lack thereof for (global) Fréchet means. We work with the unit circle, endowed with the geodesic distance, and with the bimodal distribution pictured in Figure 7. This distribution is sampled $n$ times, for $n\in\{10,25,50,100,250,500,1000\}$, from which we define the associated empirical distribution. The global Fréchet mean (i.e., the minimizer of $\sigma_2$) and the BMT, with respect to $\sigma_2$, of this empirical distribution are then calculated, and this experiment is repeated 50 times for each value of $n$. Observe that, as $n$ increases, the Wasserstein distance between the empirical distribution and the original measure converges to zero. Despite this fact, the variance in the resulting Fréchet means is essentially constant: the Fréchet mean bounces between roughly the north pole and the south pole, indicating the instability of this statistic.
On the other hand, the Gromov-Wasserstein distance between the associated BMTs converges to zero rather quickly as $n$ increases; that is, the BMTs are a stable invariant. We note that the Gromov-Wasserstein distance (with $p=2$) is used here, rather than the Kantorovich-Sturm distance, due to the availability of numerical solvers for the former; we use the Python Optimal Transport package for the calculation in this experiment [17]. Since the GW distance lower bounds the KS distance, the stability illustrated here is also guaranteed by the conclusion of Theorem 4.5. ♢

For an mm-space $\mathbb{X}=(X,d_X,\mu)$, let $d^*_p(\mu)$ denote the upper $p$-Wasserstein dimension of $\mathbb{X}$ (see [38]).

Corollary 4.7 (Empirical Estimation of FMTs). Let $\mathbb{X}=(X,d_X,\mu)$ be an mm-space with $\mu\in\mathcal{B}(X,d_X;p)$, $p\ge 1$, where $(X,d_X)$ is a connected and locally path-connected Polish space, and let $(x_i)_{i=1}^\infty$ be independent samples from $\mu$. For $n>0$, let $\mu_n=\sum_{i=1}^n \delta_{x_i}/n$ be the associated empirical measure and $\mathbb{X}_n=(X,d_X,\mu_n)$. If $(X,d_X)$ has finite connectivity modulus and $\theta$ is $L$-admissible, then the following hold:

(i) (Consistency) $D_{KS,p}(\mathcal{F}_p(\mathbb{X}),\mathcal{F}_p(\mathbb{X}_n))\to 0$ as $n\to\infty$, $(\otimes^\infty\mu)$-almost surely;

(ii) (Convergence Rate) If $s>d^*_p(\mu)$, then there is a constant $C>0$ such that
$$\mathbb{E}\big[D_{KS,p}(\mathcal{F}_p(\mathbb{X}),\mathcal{F}_p(\mathbb{X}_n))\big] \le C\operatorname{diam}(X)\,n^{-1/s},$$
where $C$ depends only on $s$, $p$, $L$ and the connectivity modulus of $(X,d_X)$.

Proof. (i) The statement follows from the Stability Theorem for barycentric merge trees and the facts that $\mu_n\to\mu$ weakly almost surely and that $w_p$ metrizes weak convergence of probability measures.

(ii) This follows from the Stability Theorem and the estimate $\mathbb{E}[w_p(\mu,\mu_n)]\le C'n^{-1/s}$ by Weed and Bach under the hypothesis that $\operatorname{diam}(X)\le 1$, where $C'$ depends only on $s$ and $p$ [38].

5 Modes

We begin our discussion of robust detection and estimation of modes of a probability distribution on a metric space $(X,d_X)$ with a review of diffusion distances on $X$ (cf. [12]). This assumes a fixed reference measure $\nu$ on $X$; for example, the volume measure on a Riemannian manifold. Let $k: X\times X\to\mathbb{R}$ be a kernel function, assumed to be continuous and non-negative. For each $x\in X$, define $k_x: X\to\mathbb{R}$ by $k_x(z)=k(x,z)$. Given $q\ge 1$, denote the $q$-norm on $L^q(X,\nu)$ by $\|\cdot\|_{\nu,q}$.

Definition 5.1. Let $k: X\times X\to\mathbb{R}$ be such that $k_x\in L^q(X,\nu)$, for all $x\in X$. For $q\ge 1$, the diffusion distance $\theta_{k,q}: X\times X\to\mathbb{R}$ is defined as
$$\theta_{k,q}(x,y) := \|k_x-k_y\|_{\nu,q} = \Big(\int_X |k(x,z)-k(y,z)|^q\,d\nu(z)\Big)^{1/q}.$$
Clearly, $\theta_{k,q}$ defines a pseudo metric on $X$ because it is the pullback of the $L^q$-distance under the map $X\to L^q(X,\nu)$ given by $x\mapsto k_x$. Sometimes we drop explicit reference to $q$ in the notation because it is fixed throughout.

Definition 5.2. Let $\mu\in\mathcal{B}(X,d_X;p)$, $p\ge 1$, and assume that $\theta_{k,q}$ is an admissible pseudo metric.

(i) A $p$-mode of $\mathbb{X}=(X,d_X,\mu)$ with respect to the kernel $k$ is a connected component of a local minimum set of the $p$-deviation function $\sigma_{k,p}: X\to\mathbb{R}$ given by
$$\sigma_{k,p}(x) = \Big(\int_X \theta^p_{k,q}(x,y)\,d\mu(y)\Big)^{1/p}.$$

(ii) The $p$-mode merge tree of $\mathbb{X}$ for the kernel $k$ is the functional $p$-barycentric merge tree $\mathcal{F}_{k,p}(\mathbb{X})$ constructed with respect to the pseudo metric $\theta_{k,q}$.

Example 5.3 (Modes of Distributions on Shape Space). Consider the moduli space $X$ of planar $n$-gons; each point in this space is an equivalence class of a closed $n$-sided polygon, considered up to rescaling, translation and rotation. We endow $X$ with a Riemannian structure by utilizing a (perhaps surprising) correspondence between the moduli space and the Grassmann manifold $\mathrm{Gr}_2(\mathbb{R}^n)$ of 2-planes in real $n$-space [24, 10].
The Grassmannian has a canonical Riemannian metric with an explicit formula for geodesic distance [16]. Denoting the induced distance on $X$ by $d_X$, we then consider the kernel $k: X\times X\to\mathbb{R}$ defined by $k(x,y)=\exp(-10\,d_X(x,y)^2)$. This kernel is used to compute mode merge trees for two distributions on the space of 10-gons, as follows.

Figure 8: Mode trees in shape space (see Example 5.3 for details). (a) Random samples from the moduli space of 10-sided polygons. We work with a dataset of 2000 samples. (b) A pdf is defined as the sum of kernels which decay exponentially in the geodesic distance from the polygons indicated by boxes. The 2-mode tree is computed with respect to an exponential kernel; the leaves correspond to modes, which are located at the boxed polygons. (c) A new pdf is defined which gives higher weight to polygons with especially low or especially high total curvature. The lowest leaves of the mode tree correspond to the polygons shown in the bottom of (d), whose total curvatures lie in the tails of the histogram in the top of (d).

1. First, we fix two (equivalence classes of) polygons $x_0,x_1\in X$ and define a pdf on $X$ by starting with $y\mapsto\exp(-d(x_0,y)^2)+\exp(-d(x_1,y)^2)$, then renormalizing by total volume. Intuitively, the modes of the resulting distribution should be located at the points $x_0$ and $x_1$; this intuition is borne out computationally, as is shown in Figure 8.

2. Second, we define a pdf by renormalizing the function $x\mapsto(\mathrm{TC}(x)-c_{\mathrm{TC}})^4$, where $\mathrm{TC}(x)$ is the total curvature of $x\in X$, that is, the sum of all of its turning angles, and $c_{\mathrm{TC}}$ is the average value of the total curvature. The resulting pdf assigns heavier weights to polygons with relatively low curvature or relatively high curvature. The mode tree shown in Figure 8 shows that the calculated local modes have extreme total curvature values.

As was indicated in Example 3.2, our numerics are performed by discretely approximating the underlying continuous spaces; in particular, we sample 2000 points from the Grassmannian by taking QR decompositions of random $10\times 2$ matrices. We also remark that the displayed mode trees have been simplified by removing very short leaf edges, to improve visual clarity. ♢

By definition, the leaves of $\mathcal{F}_{k,p}(\mathbb{X})$ represent the $p$-modes of $\mathbb{X}$ with respect to the kernel $k$. Theorem 4.5 and Corollary 4.7 ensure that if $(X,d_X)$ is a connected and locally path-connected Polish space with finite connectivity modulus and $\theta_{k,q}$ is an admissible pseudo metric, then the $p$-mode merge trees for distributions with finite $p$-moments are stable and can be reliably estimated from sufficiently large independent samples. Corollary 4.7 also provides an estimate for the rate of convergence of mode merge trees associated with empirical measures. As such, we conclude this section by examining conditions on the kernel $k$ that guarantee the admissibility of $\theta_{k,q}$.

Definition 5.4. The kernel $k: X\times X\to\mathbb{R}_+$ is uniformly $L$-Lipschitz, $L>0$, if $|k(x,z)-k(y,z)|\le L\,d_X(x,y)$, for every $x,y,z\in X$.

Proposition 5.5. If the kernel $k: X\times X\to\mathbb{R}$ is uniformly $L$-Lipschitz and $\nu(X)<\infty$, then $\theta_{k,q}$ is $Lm_{q,\nu}$-admissible for $(X,d_X)$, where $m_{q,\nu}=(\nu(X))^{1/q}$.

Proof.
The assumption that $k$ is uniformly $L$-Lipschitz implies that
$$\theta_{k,q}(x,y) = \Big(\int_X |k(x,z)-k(y,z)|^q\,d\nu(z)\Big)^{1/q} \le L\,d_X(x,y)\Big(\int_X d\nu(z)\Big)^{1/q} = Lm_{q,\nu}\,d_X(x,y). \quad (43)$$

Example 5.6. Let $(M,d_M)$ be a closed (compact and without boundary) and connected Riemannian manifold equipped with the geodesic distance $d_M$. Denote by $k_t: M\times M\to\mathbb{R}$, $t>0$, the heat kernel associated with the Laplace-Beltrami operator on $M$. By [27, Eq. (2.12)], there is a constant $C_t(M)>0$ that varies continuously with $t>0$ such that
$$|k_t(x,z)-k_t(y,z)| \le C_t(M)\,d_M(x,y), \quad (44)$$
for all $x,y,z\in M$. Thus, $k_t$ is uniformly Lipschitz and Proposition 5.5 ensures that $\theta_{k,q}$ is admissible. Since $(M,d_M)$ is a geodesic space, Theorem 4.5 guarantees that stability and consistency of mode merge trees hold for distributions on $M$. ♢

Proposition 5.5 does not apply to the heat kernel on $\mathbb{R}^d$ with $\nu$ the Lebesgue measure because of the finiteness condition on $\nu$. Since this is an important case for applications, particularly for $q=2$, in the next example we show by a direct calculation that, for $q=2$, the diffusion distance on $\mathbb{R}^d$ is admissible with respect to the Euclidean distance.

Example 5.7. Let $k_t: \mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$, $t>0$, be the heat kernel, given by
$$k_t(x,y) = \frac{1}{(4\pi t)^{d/2}}\exp\big(-\|x-y\|^2/4t\big).$$
For $q=2$ and $t>0$, denote the corresponding diffusion distance (with $\nu$ the Lebesgue measure) by $\theta_t: \mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$. A routine calculation (see [15, Eq. 6]) shows that
$$\theta_t^2(x,y) = 2\Big(\frac{1}{(8\pi t)^{d/2}} - k_{2t}(x,y)\Big).$$
For a fixed pair $(x,y)$, parameterize the segment from $x$ to $y$ by $\beta(s)=(1-s)x+sy$, $s\in I$, and let
$$g(s) := \theta_t^2(x,\beta(s)) = \frac{2}{(8\pi t)^{d/2}}\Big(1-\exp\Big(-\frac{\|x-\beta(s)\|^2}{8t}\Big)\Big). \quad (45)$$
Then, $\theta_t^2(x,y)=g(1)-g(0)=\int_0^1 g'(s)\,ds$. The derivative of $g$ is given by
$$g'(s) = \frac{(y-x)\cdot(\beta(s)-x)}{2t(8\pi t)^{d/2}}\exp\Big(-\frac{\|x-\beta(s)\|^2}{8t}\Big). \quad (46)$$
Since $\|\beta(s)-x\|\le\|x-y\|$, from (46) we obtain $\theta_t^2(x,y)\le\sup_{s\in I}|g'(s)|\le\frac{\|x-y\|^2}{2t(8\pi t)^{d/2}}$. This implies that $\theta_t(x,y)\le\frac{\|x-y\|}{\sqrt{2t}\,(8\pi t)^{d/4}}$, showing that $\theta_t$ is $L_t$-admissible with $L_t=\frac{1}{\sqrt{2t}\,(8\pi t)^{d/4}}$. ♢

Having shown in Example 5.7 that the diffusion distance $\theta_t$ is admissible, we now recall an interpretation given in [15, Eq. 10] of the Fréchet variance function $V_{p,t}: \mathbb{R}^d\to\mathbb{R}$, with respect to $\theta_t$, for $p=2$. At scale $t/2$, we have
$$V_{2,t/2}(x) = \int_{\mathbb{R}^d} \theta_{t/2}^2(x,y)\,d\mu(y) = \frac{2}{(4\pi t)^{d/2}} - 2u(t,x), \quad (47)$$
where $u(t,x)$ is the solution of the heat equation $\partial_t u=\Delta u$ with initial condition $\mu$. In particular, up to a scaling factor of 2, $V_{2,t/2}$ is an upside-down version of the Gaussian kernel density estimator of $\mu$ at scale $t$. Therefore, in this special case, the modes of $\mu$, defined as the connected components of the local minimum sets of $V_{2,t/2}$, coincide with the view of modes as local maxima of kernel density estimators. Note, however, that in the proof that merge trees provide a stable and consistent representation of the modes of a distribution, we do not make assumptions such as local minima occurring at isolated points. Moreover, our approach also shows how to define modes for other values of $p$ in a stable manner. For example, for $p=1$, mode merge trees can be interpreted as local-global summaries of medians of probability measures.

6 Discrete Models

Corollary 4.7 establishes the consistency of BMT estimators derived from independent samples of a distribution on a connected and locally path-connected Polish space $(X,d_X)$.
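The closed form for $\theta_t$ in Example 5.7 can be checked numerically. The sketch below (hypothetical helper names, $d=1$) compares the closed form against a direct Riemann-sum evaluation of $\|k_x-k_y\|_2$ and verifies the admissibility bound $\theta_t(x,y)\le L_t\,\|x-y\|$:

```python
import numpy as np

def heat_kernel(x, y, t, d=1):
    """Gaussian heat kernel k_t(x, y) on R^d (here used with d = 1)."""
    return (4 * np.pi * t) ** (-d / 2) * np.exp(-(x - y) ** 2 / (4 * t))

def theta_closed_form(x, y, t, d=1):
    """L2 diffusion distance on R^d via the identity
    theta_t^2(x, y) = 2 * ((8*pi*t)^(-d/2) - k_{2t}(x, y))."""
    return np.sqrt(2 * ((8 * np.pi * t) ** (-d / 2) - heat_kernel(x, y, 2 * t, d)))

t, x, y = 0.5, 0.0, 1.0

# Direct numerical evaluation of ||k_x - k_y||_2 on a fine grid (d = 1).
z = np.linspace(-20.0, 20.0, 200001)
integrand = (heat_kernel(x, z, t) - heat_kernel(y, z, t)) ** 2
theta_num = np.sqrt(np.sum(integrand) * (z[1] - z[0]))
assert abs(theta_num - theta_closed_form(x, y, t)) < 1e-6

# Admissibility: theta_t(x, y) <= |x - y| / (sqrt(2 t) * (8 pi t)^(d/4)).
L_t = 1.0 / (np.sqrt(2 * t) * (8 * np.pi * t) ** (1 / 4))
assert theta_closed_form(x, y, t) <= L_t * abs(x - y)
```

The same check can be repeated for other values of $t$ and $\|x-y\|$; the bound becomes tight only in the limit $\|x-y\|\to 0$.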
With an eye towards computation and applications, we now discuss provably accurate combinatorial approximations to barycentric merge trees associated with empirical measures on compact spaces. We begin with a discussion of general combinatorial BMTs and then apply the construction to empirical measures on (X, d X). 6.1 Combinatorial Barycentric Merge Trees LetG= (V, E) be a finite simple graph. We write (unoriented) edges as 2-element sets {v, w} ∈E. In particular, loops {v, v}={v}are disallowed. We also assume that Vis equipped with a metric dVand a probability measure ωso that V= (V, dV, ω) is a finite mm-space. For p≥1, as before, define the p-deviation function σp:V→Rby σp(v) =X v′∈Vdp V(v, v′)ω(v′)1/p . (48) Fort∈R, let Gt⊆Gbe the subgraph spanned by the sublevel set σ−1 p([0, t])⊆V; that is, the largest subgraph of Gwhose vertex set is σ−1 p([0, t]). On V, define an equivalence relation by v∼wif there exists t≥0 such that: (i) σp(v) =σp(w) =tand (ii) vandwfall in the same connected component of Gt. The quotient Wp=V/∼is viewed as the vertex set of a (simple) graph Tp= (Wp, Ep) whose
https://arxiv.org/abs/2505.09609v2
edge set is defined by {[v], [w]} ∈ E_p if {v, w} ∈ E and [v] ≠ [w]. Here, [·] denotes equivalence class. The quotient graph T_p is the p-barycentric merge tree of V. Clearly, σ_p induces a function κ_p: W_p → R such that σ_p = κ_p ∘ α_p, where α_p: V → W_p is the quotient map.

As in the continuous case, we introduce a poset structure and a pseudo-metric on the node set of T_p. For [v] ∈ W_p, let t = κ_p([v]) = σ_p(v) and C_v ⊆ G be the connected component of G_t containing v. It is simple to verify that C_{[v]} is independent of the choice of the representative v of the equivalence class. Define a partial order by [v] ⪯ [w] if C_v ⊆ C_w. A vertex [z] ∈ W_p is a merge point for [v] and [w] if [v] ⪯ [z] and [w] ⪯ [z]. The merge set Λ[v, w] is defined as the collection of all merge points for [v] and [w]. The function d_p: W_p × W_p → R, given by

d_p([v], [w]) = inf_{[z]∈Λ[v,w]} (σ_p(z) − σ_p(v)) ∨ (σ_p(z) − σ_p(w)),   (49)

defines a pseudo-metric on W_p. As in Proposition 3.9, we can interpret d_p([v], [w]) in terms of (combinatorial) paths γ connecting v to w, where a path is given by a finite sequence v = v_0, v_1, . . . , v_{m−1}, v_m = w of vertices of G such that {v_{i−1}, v_i} ∈ E, for every 1 ≤ i ≤ m. The distance d_p may be expressed as d_p([v], [w]) = inf_{γ∈Γ_c(v,w)} ρ_c(γ), where Γ_c(v, w) is the collection of all combinatorial paths from v to w and

ρ_c(γ) = max_{v_i∈γ} (σ_p(v_i) − σ_p(v)) ∨ (σ_p(v_i) − σ_p(w)).   (50)

Setting ω_p := α_p♯(ω), the pushforward of ω to W_p, we obtain a finite functional mm-space (W_p, d_p, ω_p, κ_p).

6.2 From Continuous to Discrete

Let (X, d_X) be a compact, connected and locally path connected metric space. We construct discrete graph models G = (V, E) of X from finite δ-coverings V ⊆ X, δ > 0. A subset V = {v_1, . . . , v_m} ⊆ X is a δ-covering if for each x ∈ X, there exists v_i ∈ V such that d_X(x, v_i) ≤ δ. The parameter δ is to be viewed as controlling the desired accuracy of the approximation. We denote the induced metric on V by d_V. The edge set is defined by {v_i, v_j} ∈ E if d_V(v_i, v_j) ≤ 3δ and i ≠ j. Given a dataset X_n = {x_1, . . .
, x_n} ⊆ X, let p_n be the normalized counting measure on X_n, given by p_n(x_i) = 1/n, and µ_n = Σ_{i=1}^n δ_{x_i}/n be the associated empirical measure on X. Define a map ϕ: X_n → V, where ϕ(x_i) is arbitrarily chosen among the points v_j ∈ V that satisfy d_X(x_i, v_j) ≤ δ. Define ω_n as the probability measure on V given by ω_n(v_i) = |ϕ^{−1}(v_i)|/n, the pushforward of p_n under ϕ. We refer to the finite mm-space V_n = (V, d_V, ω_n) (with the underlying graph structure described above) as a combinatorial δ-approximation to X_n = (X, d_X, µ_n). Further, if ı: V ↪ X denotes the inclusion map and ν_n = ı♯(ω_n), we obtain yet another mm-space W_n = (X, d_X, ν_n).

We thus have three mm-spaces in play: (i) X_n = (X, d_X, µ_n), which is the object of primary interest; (ii) V_n = (V, d_V, ω_n), intended as a combinatorial proxy for X_n; and (iii) W_n = (X, d_X, ν_n), which bridges the discrete and continuous models. Note that, by construction, if σ_{p,V_n}: V → R and σ_{p,W_n}: X → R are the p-deviation functions of V_n and W_n, then σ_{p,V_n}(v) = σ_{p,W_n}(v), for any v ∈ V. Moreover, w_p(µ_n, ν_n) ≤ δ, for any p ≥ 1. Indeed, let
ψ: X_n → X × X be given by ψ(x_i) = (x_i, ı ∘ ϕ(x_i)) and h = ψ♯(p_n). Then, h ∈ C(µ_n, ν_n) and

w_p(µ_n, ν_n) ≤ ( ∫_{X×X} d_X^p(x, y) dh(x, y) )^{1/p} = ( (1/n) Σ_{i=1}^n d_X^p(x_i, ϕ(x_i)) )^{1/p} ≤ δ.   (51)

Therefore, under the assumptions of Theorem 4.5, the functional barycentric merge trees for X_n and W_n satisfy

D_{KS,p}(F_p(X_n), F_p(W_n)) ≤ L(1 + K)δ.   (52)

As such, to establish the validity of F_p(V_n) as an approximation to F_p(X_n), it suffices to obtain δ-controlled upper bounds for the Kantorovich-Sturm distance D_{KS,p}(F_p(V_n), F_p(W_n)).

Let α_{p,V_n}: V → T_p(V_n) and α_{p,W_n}: X → T_p(W_n) be the quotient maps and write the functional BMTs as F_p(V_n) = (T_p(V_n), d_{p,V_n}, ω_{p,n}, κ_{p,V_n}) and F_p(W_n) = (T_p(W_n), d_{p,W_n}, ν_{p,n}, κ_{p,W_n}), where ω_{p,n} and ν_{p,n} are the pushforwards of ω_n and ν_n under the quotient maps to the respective merge trees. As in the proof of the Stability Theorem, the key step in estimating D_{KS,p}(F_p(V_n), F_p(W_n)) is the construction of a metric coupling between d_{p,V_n} and d_{p,W_n}, starting from a relation R ⊆ T_p(V_n) × T_p(W_n), which we define as

R := { (α_{p,V_n}(v), α_{p,W_n}(v)) : v ∈ V }.   (53)

Lemma 6.1. If K ≥ 1 is a connectivity constant for (X, d_X), θ is L-admissible, and V ⊆ X a δ-covering of (X, d_X), then the distortion of the relation R satisfies dis(R) ≤ KLδ.

Proof. Let v, w ∈ V. Given a continuous path γ ∈ Γ(v, w), take a grid 0 = t_0 < t_1 < . . . < t_N = 1 fine enough so that d_X(γ(t_i), γ(t_{i−1})) ≤ δ, for every 1 ≤ i ≤ N. Let z_0 = v, z_N = w, and for 0 < i < N, let z_i ∈ V be such that d_X(γ(t_i), z_i) ≤ δ. By the triangle inequality, d_V(z_{i−1}, z_i) ≤ 3δ, which implies that {z_{i−1}, z_i} is an edge of G = (V, E). Define β to be the combinatorial path v = z_0, z_1, . . . , z_N = w. Then, we have

σ_{p,V_n}(z_i) − σ_{p,V_n}(v) = σ_{p,W_n}(z_i) − σ_{p,W_n}(γ(t_i)) + σ_{p,W_n}(γ(t_i)) − σ_{p,W_n}(v)
  ≤ L d_X(z_i, γ(t_i)) + sup_{t∈I} (σ_{p,W_n}(γ(t)) − σ_{p,W_n}(v))
  ≤ Lδ + sup_{t∈I} (σ_{p,W_n}(γ(t)) − σ_{p,W_n}(v)).   (54)

Similarly, σ_{p,V_n}(z_i) − σ_{p,V_n}(w) ≤ Lδ + sup_{t∈I} (σ_{p,W_n}(γ(t)) − σ_{p,W_n}(w)).
(55)

Therefore,

ρ_c(β) = max_i (σ_{p,V_n}(z_i) − σ_{p,V_n}(v)) ∨ (σ_{p,V_n}(z_i) − σ_{p,V_n}(w))
  ≤ Lδ + sup_{t∈I} (σ_{p,W_n}(γ(t)) − σ_{p,W_n}(v)) ∨ (σ_{p,W_n}(γ(t)) − σ_{p,W_n}(w))
  = Lδ + ρ(γ).   (56)

Taking the infimum over γ, we obtain

d_{p,V_n}(α_{p,V_n}(v), α_{p,V_n}(w)) ≤ Lδ + d_{p,W_n}(α_{p,W_n}(v), α_{p,W_n}(w)),   (57)

for any v, w ∈ V. Conversely, let an edge path β ∈ Γ_c(v, w) be given by the sequence v = z_0, z_1, . . . , z_N = w in V. Given ϵ > 0, for each 1 ≤ i ≤ N, let γ_i be a path from z_{i−1} to z_i with the property that

sup_t ( d_X(v, γ_i(t)) ∨ d_X(γ_i(t), w) ) ≤ ϵ + K d_X(z_{i−1}, z_i),   (58)

whose existence is guaranteed by the fact that K is a connectivity constant for (X, d_X). Concatenating all γ_i, 1 ≤ i ≤ N, we obtain γ ∈ Γ(v, w) such that for each t ∈ I, there is i(t) ∈ {0, 1, . . . , N} such that d_X(γ(t), z_{i(t)}) ≤ ϵ + Kδ. Hence,

σ_{p,W_n}(γ(t)) − σ_{p,W_n}(v) = σ_{p,W_n}(γ(t)) − σ_{p,W_n}(z_{i(t)}) + σ_{p,V_n}(z_{i(t)}) − σ_{p,V_n}(v)
  ≤ L d_X(γ(t), z_{i(t)}) + max_i (σ_{p,V_n}(z_i) − σ_{p,V_n}(v)) ≤ ϵ + KLδ + ρ_c(β).   (59)

An identical argument shows that

σ_{p,W_n}(γ(t)) − σ_{p,W_n}(w) ≤ ϵ + KLδ + ρ_c(β).   (60)

Since ϵ > 0 is arbitrary, taking the infimum over β and using (59) and (60), we obtain

d_{p,W_n}(α_{p,W_n}(v), α_{p,W_n}(w)) ≤ KLδ + d_{p,V_n}(α_{p,V_n}(v), α_{p,V_n}(w)),   (61)

for any v, w ∈ V. Combining (57) and (61), since K ≥ 1, we get dis(R) ≤ KLδ, as claimed.

Theorem 6.2. If K ≥ 1 is a connectivity constant for the compact metric space (X, d_X), θ is L-admissible, and V ⊆ X a δ-covering of (X, d_X), then D_{KS,p}(F_p(V_n), F_p(W_n)) ≤ KLδ/2.

Proof. Proposition 4.3 and Lemma 6.1 imply that, for r = KLδ/2, there is a metric coupling δ_r ∈ M(d_{p,V_n}, d_{p,W_n}) such that for v̄ ∈ T_p(V_n) and x̄ ∈ T_p(W_n),

δ_r(v̄, x̄) = δ_r(x̄, v̄) = r + inf_{w∈V} ( d_{p,V_n}(v̄, α_{p,V_n}(w)) + d_{p,W_n}(α_{p,W_n}(w), x̄) ).   (62)

In particular, δ_r(α_{p,V_n}(v),
α_{p,W_n}(v)) = r. Let ∆: V → V × X be given by ∆(v) = (v, v). Since ν_n = ı♯(ω_n), h = ∆♯(ω_n) gives a coupling between ω_n and ν_n, which in turn induces a coupling h̄ := (α_{p,V_n} × α_{p,W_n})♯(h) ∈ C(ω_{p,n}, ν_{p,n}). Since the coupling h is diagonal, by Proposition 3.12 and (62), the structural offset of (δ_r, h̄) satisfies

( ∫_{T_p(V_n)×T_p(W_n)} δ_r^p(v̄, x̄) dh̄(v̄, x̄) )^{1/p} = ( ∫_V δ_r^p(α_{p,V_n}(v), α_{p,W_n}(v)) dω_n(v) )^{1/p} ≤ r.   (63)

As σ_{p,V_n}(v) = σ_{p,W_n}(v) for all v ∈ V, the functional offset of h̄ vanishes. Indeed,

∫_{T_p(V_n)×T_p(W_n)} |κ_{p,V_n}(v̄) − κ_{p,W_n}(x̄)|^p dh̄(v̄, x̄) = ∫_V |σ_{p,V_n}(v) − σ_{p,W_n}(v)|^p dω_n(v) = 0.   (64)

Therefore, D_{KS,p}(F_p(V_n), F_p(W_n)) ≤ r = KLδ/2.

Corollary 6.3. If K ≥ 1 is a connectivity constant for a compact metric space (X, d_X), θ is L-admissible, and V ⊆ X a δ-covering of (X, d_X), then D_{KS,p}(F_p(V_n), F_p(X_n)) ≤ C_{K,L} δ, where C_{K,L} = L + 3KL/2.

Proof. This follows from (52) and Theorem 6.2.

6.3 Binning

Section 6.2 constructs combinatorial approximations to barycentric merge trees with accuracy guarantees. A drawback of the construction is that, generically, the values of a p-deviation function on the nodes of the tree are all distinct, making the associated BMT rather large and complex, with many small branches. Such merge trees do not achieve the goal of providing a concise summary of the original probability distribution. For this reason, we introduce an additional binning step to simplify the function σ_{p,V_n}: V → R before constructing the merge tree.

Given ε > 0, define τ_ε: V → R by τ_ε(v) = ε⌊σ_{p,V_n}(v)/ε⌋, an approximation to σ_{p,V_n} that satisfies |σ_{p,V_n}(v) − τ_ε(v)| ≤ ε, for every v ∈ V. Here, ⌊·⌋ denotes the floor function. This has the effect of approximating σ_{p,V_n} by a function that has constant value iε on sets of the form σ_{p,V_n}^{−1}([iε, (i+1)ε)) ⊆ V, where i is an arbitrary non-negative integer. Construct a combinatorial functional merge tree F_{p,ε}(V_n), viewed as an approximation to F_p(V_n), from the same graph mm-structure (V, E, d_V, ω_n), only replacing σ_{p,V_n} with τ_ε.

Theorem 6.4. For any ε > 0, D_{KS,p}(F_p(V_n), F_{p,ε}(V_n)) ≤ ε.

Proof.
We use the abbreviations F_p(V_n) = (T, d, ω_p, κ), F_{p,ε}(V_n) = (T′, d′, ω′_p, κ′), σ = σ_{p,V_n}, and also α: V → T and α′: V → T′ for the quotient maps. Since |σ(v) − τ_ε(v)| ≤ ε, for every v ∈ V, arguing as in Lemma 4.4, we have that δ_ε, given on pairs (a, b) ∈ T × T′ by

δ_ε(a, b) = δ_ε(b, a) := ε + inf_{v′∈V} ( d(a, α(v′)) + d′(α′(v′), b) ),   (65)

defines a coupling δ_ε ∈ M(d, d′) for which δ_ε(α(v), α′(v)) = ε, for all v ∈ V. As in the proof of Theorem 6.2, let h = ∆♯(ω_n), where ∆: V → V × V is the diagonal map, and h̄ ∈ C(ω_p, ω′_p) be given by h̄ = (α, α′)♯(h). Then, the structural offset of (δ_ε, h̄) satisfies

( ∫_{T×T′} δ_ε^p(v̄, w̄) dh̄(v̄, w̄) )^{1/p} = ( ∫_V δ_ε^p(α(v), α′(v)) dω_n(v) )^{1/p} = ε.   (66)

Similarly,

( ∫_{T×T′} |κ(v̄) − κ′(w̄)|^p dh̄(v̄, w̄) )^{1/p} = ( ∫_V |σ(v) − τ_ε(v)|^p dω_n(v) )^{1/p} ≤ ε.   (67)

Therefore, D_{KS,p}(F_p(V_n), F_{p,ε}(V_n)) ≤ ε.

7 Summary and Concluding Remarks

As the p-barycenters and modes of a probability distribution µ on a metric space (X, d_X) exhibit an unstable behavior, this paper introduced a functional merge tree representation of barycenters and modes that is provably stable and can be estimated reliably from data. This has been done at the generality of all Borel probability measures on a Polish metric space with finite p-moments. The leaves of a barycentric merge tree represent the connected components of the local minimum sets of the p-deviation function (or equivalently, the Fréchet p-variance function) of (X, d_X, µ).
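The sublevel-set sweep that detects these components, and that underlies the combinatorial construction of Section 6.1, can be sketched with union-find. This is our own illustrative code, not the paper's implementation: it records only where components of the sublevel subgraphs are born and at what heights they merge, rather than building the full quotient tree T_p.

```python
def sublevel_merge_sweep(vertices, edges, sigma):
    """Sweep the sigma-sublevel subgraphs G_t of a finite graph.
    Returns (births, merges): vertices at which a new connected component
    appears (candidate merge-tree leaves), and (height, root_a, root_b)
    events where two previously distinct components join."""
    parent = {}

    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    adj = {v: [] for v in vertices}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    births, merges = [], []
    for v in sorted(vertices, key=lambda u: sigma[u]):
        parent[v] = v
        roots = list({find(u) for u in adj[v] if u in parent} - {v})
        if not roots:
            births.append(v)  # v opens a new sublevel-set component
        for j, r in enumerate(roots):
            parent[r] = v      # attach each existing component to v
            if j >= 1:         # later attachments merge distinct components
                merges.append((sigma[v], roots[j - 1], r))
    return births, merges
```

On a path graph with function values 0, 3, 1, 4, 2, the sweep reports three births (the local minima) and two merges, at heights 3 and 4, which is the qualitative shape of the corresponding merge tree.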
The merge tree provides a sketch of the interrelationships between the various sublevel sets of the p-deviation function, in particular, the merging patterns of the barycenters or modes of µ, thus providing a more complete summary of (X, d_X, µ) than the barycenters alone. A unifying framework was developed for the investigation of both the barycenter and mode problems, with modes viewed as special types of barycenters associated with diffusion distances. The main results are a stability theorem and a consistency result for barycentric merge trees, which were proven in a Gromov-Wasserstein type framework. The paper also presented a pathway to discretization and computation via combinatorial merge trees. The accuracy guarantees for these discrete approximations were established in Section 6.2 only for empirical measures, as this sufficed for our purposes because Corollary 4.7 guarantees that the BMT for a more general probability measure µ can be approximated by those for empirical measures. Nonetheless, we note that the argument in Section 6.2 can be adapted to prove a similar approximation result more directly for a general distribution µ. The choice of merge trees was largely guided by the goal of achieving stability while keeping the representation of barycenters as simple as possible. Similar representations can likely be obtained through Reeb graphs (cf. [14]) at the expense of increasing the complexity of both the model and computations. The focus of the paper was on foundational aspects of stable representations of barycenters, leaving questions specific to cases such as distributions on Riemannian manifolds, length spaces, networks, or Wasserstein spaces for further investigation. While we provided several computational examples in this paper, their primary function was to provide intuition and enhance exposition. Developing a full-fledged, efficient numerical framework for extracting BMTs from real data remains an important direction of future research.
The figures in the paper can be reproduced from the open access code available at our GitHub repository (https://github.com/trneedham/Barycentric-Merge-Trees).

References

[1] B. Afsari. Riemannian L^p center of mass: existence, uniqueness, and convexity. Proc. Amer. Math. Soc., 139:655–673, 2011.
[2] M. Agueh and G. Carlier. Barycenters in the Wasserstein space. SIAM J. Math. Anal., 43(2):904–924, 2011.
[3] S. Anbouhi, W. Mio, and O. B. Okutan. On metrics for analysis of functional data on geometric domains. Found. Data Sci., 7(3):671–704, 2025.
[4] M. Arnaudon and L. Miclo. Means in complete manifolds: uniqueness and approximation. ESAIM: Probability and Statistics, 18:185–206, 2014.
[5] K. Beketayev, D. Yeliussizov, D. Morozov, G. Weber, and B. Hamann. Measuring the distance between merge trees. In Topological Methods in Data Analysis and Visualization III, pages 151–166, 2014.
[6] M. Berger. A Panoramic View of Riemannian Geometry. Springer-Verlag, 2003.
[7] R. Bhattacharya and V. Patrangenaru. Large sample theory of intrinsic and extrinsic sample means on manifolds. I. Ann. Statist., 31:1–29, 2003.
[8] R. Bhattacharya and V. Patrangenaru. Large sample theory of intrinsic and extrinsic sample means on manifolds. II. Ann. Statist., 33:1225–1259, 2005.
[9] D. Burago, Y. Burago, and S. Ivanov. A Course in Metric Geometry. American Mathematical Society, 2001.
[10] J. Cantarella, T. Needham, C. Shonkwiler, and G. Stewart. Random triangles and polygons in the plane. The American Mathematical Monthly, 126(2):113–134, 2019.
[11] P. Chaudhuri and J. S. Marron. Scale space view of curve estimation. Ann. Statist., 28:408–428, 2000.
[12] R. R. Coifman and S. Lafon. Diffusion maps. Appl. Comput. Harmon. Anal., 21(1):5–30, 2006.
[13] J. Curry, H. Hang, W. Mio, T. Needham, and O. B. Okutan. Decorated merge trees for persistent topology. J. Appl. Comput. Topology, 6(3):371–428, 2022.
[14] J. Curry, W. Mio, T. Needham, O. B. Okutan, and F. Russold. Stability and approximations for decorated Reeb spaces. In Symposium on Computational Geometry, Athens, Greece, June 2024.
[15] D. H. Diaz Martinez, C. H. Lee, P. T. Kim, and W. Mio. Probing the geometry of data with diffusion Fréchet functions. Appl. Comput. Harmon. Anal., 47(3):935–947, 2019.
[16] A. Edelman, T. A. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20(2):303–353, 1998.
[17] R. Flamary, N. Courty, A. Gramfort, M. Z. Alaya, A. Boisbunon, S. Chambon, L. Chapel, A. Corenflos, K. Fatras, N. Fournier, et al. POT: Python optimal transport. Journal of Machine Learning Research, 22(78):1–8, 2021.
[18] M. Fréchet. Les éléments aléatoires de nature quelconque dans un espace distancié. Ann. Inst. H. Poincaré, 10:215–310, 1948.
[19] E. Gasparovic, E. Munch, S. Oudot, K. Turner, B. Wang, and Y. Wang. Intrinsic interleaving distance for merge trees. La Matematica, 4(1):40–65, 2025.
[20] K. Grove and H. Karcher. How to conjugate C1-close group actions. Math. Z., 132:11–20, 1973.
[21] K. Grove, H. Karcher, and E. A. Ruh. Group actions and curvature. Invent. Math., 23:31–48, 1974.
[22] K. Grove, H. Karcher, and E. A. Ruh. Jacobi fields and Finsler metrics on compact Lie groups with an application to differentiable pinching problems. Math. Ann., 211:7–21, 1974.
[23] H. Hang, F. Mémoli, and W. Mio. A topological study of functional data and Fréchet functions of metric measure spaces. J. Appl. Comput. Topology, 3(4):359–380, 2019.
[24] J.-C. Hausmann and A. Knutson. Polygon spaces and Grassmannians. L'Enseignement Mathématique, 43, 1997.
[25] S. Hundrieser, B. Eltzner, and S. Huckemann. A lower bound for estimating Fréchet means. arXiv:2402.12290, 2024.
[26] H. Karcher. Riemannian center of mass and mollifier smoothing. Comm. Pure Appl. Math., 30:509–541, 1977.
[27] A. Kasue and H. Kumura. Spectral convergence of Riemannian manifolds. Tohoku Math. J., 46(2):147–179, 1994.
[28] Y.-H. Kim and B. Pass. Wasserstein barycenters over Riemannian manifolds. Adv. Math., 307:640–683, 2017.
[29] H. Le. Locating Fréchet means with application to shape spaces. Adv. Appl. Prob., 33:324–338, 2001.
[30] T. Lindeberg. Scale Space Theory in Computer Vision. Kluwer, Boston, 1994.
[31] F. Mémoli. On the use of Gromov-Hausdorff distances for shape comparison. In Proceedings Point Based Graphics, pages 81–90, 2007.
[32] F. Mémoli. Gromov-Wasserstein distances and the metric approach to object matching. Found. Comput. Math., 11(4):417–487, 2011.
[33] D. Morozov, K. Beketayev, and G. Weber. Interleaving distance between merge trees. Discrete Comput. Geom., 49:22–45, 2013.
[34] R. R. Sokal and F. J. Rohlf. The comparison of dendrograms by objective methods. Taxon, 11:33–40, 1962.
[35] K.-T. Sturm. On the geometry of metric measure spaces. Acta Mathematica, 196(1):65–131, 2006.
[36] T. Vayer, L. Chapel, R. Flamary, R. Tavenard, and N. Courty. Fused Gromov-Wasserstein distance for structured objects. Algorithms, 13(9):212, 2020.
[37] C. Villani. Optimal Transport: Old and New, volume 338. Springer Science & Business Media, 2008.
[38] J. Weed and F. Bach. Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance. Bernoulli, 25(4A):2620–2648, 2019.
arXiv:2505.09647v1 [cs.DS] 12 May 2025

On Unbiased Low-Rank Approximation with Minimum Distortion

Leighton Pate Barnes, Stephen Cameron, and Benjamin Howard
Center for Communications Research – Princeton
May 2025

Abstract

We describe an algorithm for sampling a low-rank random matrix Q that best approximates a fixed target matrix P ∈ C^{n×m} in the following sense: Q is unbiased, i.e., E[Q] = P; rank(Q) ≤ r; and Q minimizes the expected Frobenius norm error E‖P − Q‖²_F. Our algorithm mirrors the solution to the efficient unbiased sparsification problem for vectors, except applied to the singular components of the matrix P. Optimality is proven by showing that our algorithm matches the error from an existing lower bound.

1 Introduction

Suppose that P is any complex-valued n by m matrix, and we would like to approximate it with a randomly sampled matrix Q (also in C^{n×m}) that is low rank. We require that rank(Q) ≤ r for any realization of Q, and that the sampling is unbiased in the sense that E[Q] = P entrywise. In this note, we show that a simple sampling scheme inspired by the problem of efficient unbiased sparsification [1], if applied to the singular components of P, produces a random matrix Q that minimizes the expected distortion

E‖P − Q‖²_F   (1)

where the expectation is over the random sampling and the norm is the usual Frobenius norm.

The problem of unbiased low-rank approximation has recently been considered in the machine learning literature, and in particular the authors of [2] show that a duality-based lower bound on the expected distortion can be matched by a complicated sampling procedure. Their procedure is only explicitly defined for the co-rank one scenario r + 1 = rank(P). For arbitrary r, they point out that an algorithm can be written down by recursively applying the co-rank one version (see the section titled Proof of Lemma A.6 in the supplementary material of [2]).
Our main contribution in the present note is to show that a simple sparsification procedure applied to the singular components of P achieves this lower bound for any r, and therefore gives a simpler and more direct sampling procedure for producing optimal, unbiased, low-rank approximations. Our sampling procedure mirrors the optimal way to sample a sparse approximation of a dense vector that is described in [1]. One main difference is that the present work is basis-independent; we apply this procedure to the singular components of the matrix P, and it turns out to be optimal among all rank-r approximations. It is not obvious that we can work only in the basis described by the singular components of P, and indeed, the sampling procedure from [2] does not work this way. In other words, suppose P were diagonal. Then there exist optimal, unbiased, low-rank approximations that are both strictly diagonal (as in our sampling procedure), and not strictly diagonal (as in [2]). This is true even when there are no repeated singular values.

To give a concrete example, suppose that we want a rank 1 unbiased approximation of

P = [ 4 0 ; 0 1 ].

Our Algorithm 1 introduced below generates a random variable Q which takes values

Q = [ 5 0 ; 0 0 ] with probability 4/5,   Q = [ 0 0 ; 0 5 ] with probability 1/5,

whereas the algorithm in [2] generates the random variable

Q′ = [ 4 ±2 ; ±2 1 ],

each sign with equal probability 1/2. Both Q, Q′ achieve the minimal distortion E‖P − Q‖²_F = E‖P − Q′‖²_F = 8, as does the mixture Q_t, which takes the values Q_t = Q with probability t and Q_t = Q′ with probability 1 − t, for any 0 ≤ t ≤ 1. This variety of optimal solutions is in stark contrast to the rigid description of optimizers for the vector-valued case [1], and suggests there is a rich space of possible optimal unbiased approximations to explore.

In the work [1], the vector sparsification procedure is shown to be optimal for a wide range of divergence measures, including both the KL divergence and the squared Euclidean distance. Unknown to the authors of [1] at the time of publication, this procedure was already known to be optimal in the special case of squared Euclidean distance, as described in [3]. However, [1] shows that it holds in much more generality and is simultaneously optimal for large classes of divergence measures. Most other works that have considered unbiased approximation of matrices are concerned more with computational constraints, rather than getting the exact approximation that minimizes some error [4, 5]; consider a noisy low-rank recovery problem [6]; or only focus on the rank one case [7, 8]. Our unbiased low-rank approximation procedure is conceptually similar to the spectral version of ATOMO [9] from the federated learning literature. However, we consider a "hard" constraint where each realization of Q has rank at most r, instead of their "soft" constraint, which only requires that the expected number of singular components be at most r, and they do not establish the optimality of spectral ATOMO with respect to Frobenius norm error. As far as we know, the optimality of these types of sampling procedures in the sense of (1) is not previously known in the literature.
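The two-by-two example above is easy to check numerically. The following sketch is our own verification code (using NumPy), not part of the paper: it confirms that both Q and Q′ are unbiased rank-1 approximations of P with the same expected distortion.

```python
import numpy as np

P = np.diag([4.0, 1.0])

# Q from Algorithm 1: diagonal rank-1 realizations
Q_vals = [np.diag([5.0, 0.0]), np.diag([0.0, 5.0])]
Q_probs = [4 / 5, 1 / 5]

# Q' from [2]: non-diagonal rank-1 realizations
Qp_vals = [np.array([[4.0, 2.0], [2.0, 1.0]]),
           np.array([[4.0, -2.0], [-2.0, 1.0]])]
Qp_probs = [1 / 2, 1 / 2]

def expected(vals, probs):
    return sum(p * M for p, M in zip(probs, vals))

def distortion(vals, probs):
    return sum(p * np.linalg.norm(P - M, 'fro') ** 2
               for p, M in zip(probs, vals))

assert np.allclose(expected(Q_vals, Q_probs), P)    # E[Q]  = P
assert np.allclose(expected(Qp_vals, Qp_probs), P)  # E[Q'] = P
assert all(np.linalg.matrix_rank(M) <= 1 for M in Q_vals + Qp_vals)
# both distortions equal the minimal value 8
```

The diagonal scheme contributes (4/5)·2 + (1/5)·32 = 8, and each realization of Q′ contributes 4 + 4 = 8, so the two very different samplers attain the same optimum.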
2 The Sampling Procedure

First note that by taking a singular value decomposition, we can restrict our attention to approximating a target matrix Λ that is real, non-negative, and diagonal. This is because the Frobenius norm is invariant to unitary transformations, and if P = UΛV* for unitary U, V then

‖P − Q‖_F = ‖Λ − U*QV‖_F

and Q′ = U*QV is an unbiased rank-r approximation of Λ. Without loss of generality we assume Λ = diag(d_1, . . . , d_N, 0, . . . , 0), where d_1 ≥ d_2 ≥ . . . ≥ d_N and N = rank(P).

Without yet knowing that it might be optimal, suppose our goal was to construct an unbiased rank-r approximation of Λ that is also diagonal. A reasonable idea might be to include component i as a nonzero component of Q′ with probability p_i proportional to d_i. Letting C_1 be this constant of proportionality, we would have

r = Σ_{i=1}^N p_i = Σ_{i=1}^N C_1 d_i   and   C_1 = r / Σ_{i=1}^N d_i.

In particular, the largest component d_1 would be included with probability

p_1 = r d_1 / Σ_{i=1}^N d_i.

If this probability is greater than one, then our original idea needs modification. In this case, let's simply include the first component in every realization of Q′, and repeat the procedure on the remaining components. This results in a number of components k (called the heavy components [1]), which is the smallest number such
that

(r − k) d_{k+1} / Σ_{i=k+1}^N d_i < 1  ⟺  (r − k) d_{k+1} < Σ_{i=k+1}^N d_i,

and these components are automatically included in every realization of the diagonal Q′. This notion of the first k indices being heavy (where k is potentially zero) is well-defined in the sense that

(r − k′) d_{k′+1} < Σ_{i=k′+1}^N d_i  ⟹  (r − (k′ + 1)) d_{k′+2} < Σ_{i=k′+2}^N d_i.

Of the remaining N − k components, sample a subset I of r − k elements of the indices {k + 1, . . . , N} with inclusion probabilities proportional to their d_i values. There are many ways to do this sampling, and we describe one simple way in the following paragraph (see also [1, 10]). Let's then set the resulting Q′ = diag(Q′_1, . . . , Q′_N, 0, . . . , 0) by the components

Q′_i = d_i for i = 1, . . . , k;   Q′_i = c = Σ_{j=k+1}^N d_j / (r − k) for i ∈ I;   Q′_i = 0 otherwise.   (2)

The matrix Q′ is an unbiased, rank-r approximation of Λ by construction.

As promised, one way to sample the index set I is as follows. Divide the interval [0, r − k] up into N − k disjoint segments, with segment i (for i = k + 1, . . . , N) having length proportional to d_i. Then randomly generate a single number S ∼ unif[0, 1], and if S + j is in segment i for some j = 0, 1, . . . , r − k − 1, then include the index i in I. Otherwise, do not include that index i in I. It may be desirable to randomly permute the order of the segments, but this is not required to achieve the correct inclusion probabilities.

One can verify that this gives the desired inclusion probabilities by zooming in on a single, unit-length subinterval [z, z + 1], z ∈ Z, of [0, r − k]. If this subinterval entirely contains segment i, then segment i will be included with probability equal to its length (which is proportional to d_i). If part of segment i lies in an adjacent subinterval, then the sum of the lengths of the two parts will be the inclusion probability (this uses the fact that no segment can have length more than one, so the event that both parts are chosen cannot happen). No segment can intersect more than two subintervals with nonzero measure, since their lengths are bounded by one.
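The single-draw segment scheme just described can be sketched as follows. This is our own code, using 0-based indices (so the heavy components are d[0], ..., d[k-1]) and rescaling the cumulative boundaries so that the last segment ends exactly at r − k:

```python
import math
import random

def systematic_sample(d, r, k, seed=None):
    """Sample I from {k, ..., len(d)-1} with |I| = r - k and inclusion
    probability of index i proportional to d[i]: lay the segments on
    [0, r - k] and include i iff some S + j (j = 0, 1, ...) hits segment i."""
    rng = random.Random(seed)
    S = rng.random()
    total = sum(d[k:])
    I, cum, start = [], 0.0, 0.0
    for i in range(k, len(d)):
        cum += d[i]
        end = (r - k) * cum / total  # right boundary of segment i
        # an integer shift of S lands in [start, end) iff the counts differ
        if math.ceil(end - S) > math.ceil(start - S):
            I.append(i)
        start = end
    return I
```

Because the points S, S + 1, . . . are exactly r − k apart on [0, r − k), every draw returns exactly r − k indices, and a segment of length ℓ ≤ 1 is hit with probability ℓ.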
Putting this all together, we summarize the unbiased low-rank approximation procedure in Algorithm 1 below. An illustration of the algorithm applied to grayscale images (treated as real-valued matrices) is shown in Figures 1 and 2. The runtime of the algorithm is dominated by the cost of computing the singular value decomposition of the matrix P at the beginning.

3 Matching Lower Bound

In this section, we show that the lower bound from the supplementary material in [2] matches the expected error achieved by Algorithm 1. This shows that Algorithm 1 produces unbiased low-rank approximations that are optimal in the sense that they minimize (1). For any fixed matrix B, we have

E‖Q′ − Λ‖²_F = E‖Q′ − (Λ − B)‖²_F − ‖B‖²_F   (3)
  ≥ min_{X∈C^{n×m}, rank(X)≤r} ‖X − (Λ − B)‖²_F − ‖B‖²_F.   (4)

Figure 1: Upper-left is an image of the third author's pet cat "Nutmeg", of rank 288, and the lower-right is the average of 16 unbiased low-rank approximations, all instances of Algorithm 1 with rank r = 30. The images in the middle are
the unbiased approximations.

Figure 2: Upper-left is an image of rank ≈ 700, and the lower-right is the average of 16 unbiased low-rank approximations, all instances of Algorithm 1 with rank r = 3. The images in the middle are the unbiased approximations.

Algorithm 1 Unbiased Low-Rank Approximation with Minimum Distortion
1: given matrix P ∈ C^{n×m}, rank constraint r
2: compute SVD P = UΛV*, Λ = diag(d_1, . . . , d_N, 0, . . . , 0)
3: for k′ = 0, . . . , N − 1 do
4:   if (r − k′) d_{k′+1} < Σ_{i=k′+1}^N d_i then
5:     k ← k′
6:     break
7:   end if
8: end for
9: S ← unif[0, 1]
10: I ← {}
11: length ← 0
12: C ← (r − k) / Σ_{i=k+1}^N d_i
13: for i = k + 1, . . . , N do
14:   length ← length + C d_i
15:   if length ≥ S then
16:     I ← I ∪ {i}
17:     S ← S + 1
18:   end if
19: end for
20: Q′ = diag(Q′_1, . . . , Q′_N, 0, . . . , 0), where Q′_i = d_i for i = 1, . . . , k; Q′_i = c = 1/C = Σ_{j=k+1}^N d_j / (r − k) for i ∈ I; Q′_i = 0 otherwise
21: return Q = UQ′V*

Equation (3) follows by the unbiasedness of Q′, so that E[B*(Q′ − Λ)] = 0. In equation (4), we lower bound the expected error by the smallest error that can be achieved, with any fixed low-rank matrix X, between Λ − B and X. In particular, we pick B so that

Λ − B = diag(d_1, . . . , d_k, c, . . . , c, 0, . . . , 0)   (5)

(with N − k copies of c), where c is defined as in (2). A key property that is needed is that d_k ≥ c, so that the diagonal elements in (5) are non-increasing. This follows because, by definition, the index k is heavy, so

(r − (k − 1)) d_k ≥ Σ_{j=k}^N d_j  ⟹  d_k ≥ Σ_{j=k+1}^N d_j / (r − k) = c.

Using this property and the Eckart-Young-Mirsky Theorem, it is clear that the expression in (4) is minimized exactly by the first r diagonal components of (5). Furthermore, if Q′ is defined as in (2), then the error in (3) is the same as in (4) for every realization of Q′. Therefore, for our choice of Q′, the inequality in (4) is actually equality, and we exactly match the minimum possible expected error.

References

[1] L. Barnes, T. Chow, E. Cohen, K. Frankston, B. Howard, F. Kochman, D. Scheinerman, and J.
VanderKam, "Efficient unbiased sparsification," in 2024 IEEE International Symposium on Information Theory (ISIT). IEEE, 2024, pp. 1854–1859.
[2] F. Benzing, M. M. Gauy, A. Mujika, A. Martinsson, and A. Steger, "Optimal Kronecker-sum approximation of real time recurrent learning," in International Conference on Machine Learning. PMLR, 2019, pp. 604–613.
[3] P. Fearnhead and P. Clifford, "On-line inference for hidden Markov models via particle filters," Journal of the Royal Statistical Society Series B: Statistical Methodology, vol. 65, no. 4, pp. 887–899, 2003.
[4] A. Frieze, R. Kannan, and S. Vempala, "Fast Monte-Carlo algorithms for finding low-rank approximations," Journal of the ACM (JACM), vol. 51, no. 6, pp. 1025–1041, 2004.
[5] T. Vogels, S. P. Karimireddy, and M. Jaggi, "PowerSGD: Practical low-rank gradient compression for distributed optimization," Advances in Neural Information Processing Systems, vol. 32, 2019.
[6] M. Carlsson, D. Gerosa, and C. Olsson, "An unbiased approach to low rank recovery," SIAM Journal on Optimization, vol. 32, no. 4, pp. 2969–2996,
ITMM – 2024

INFORMATION SPREADING IN RANDOM GRAPHS EVOLVING BY NORROS-REITTU MODEL

N. M. Markovich¹, D. V. Osipov²

¹ V.A. Trapeznikov Institute of Control Sciences, Russian Academy of Sciences, Profsoyuznaya Str. 65, 117997 Moscow, Russia
² The Department of Discrete Mathematics, Moscow Institute of Physics and Technology, Moscow, Russia

The paper is devoted to the spreading of a message within the random graph evolving by the Norros-Reittu preferential attachment model. The latter model forms random Poissonian numbers of edges between newly added nodes and existing ones. For a pre-fixed time T*, the probability mass functions of the number of nodes that obtained the message and the total number of nodes in the graph, as well as the distribution function of their ratio, are derived. To this end, the success probability to disseminate the message from a node with the message to a node without the message is proved. The exposition is illustrated by a simulation study.

Keywords: evolution, random graph, Norros-Reittu preferential attachment model, information spreading

Introduction

Preferential attachment models (PAMs) attract the attention of many researchers due to numerous applications to the evolution of real-world networks, such as social ones. PAMs allow us to model heavy-tailed empirical distributions of in- and out-degrees of nodes, as well as their PageRanks. An evolution starts from an initial graph G_0 that contains at least one isolated node. Usually, it is assumed that a new node is attached to existing nodes with a prefixed m ≥ 1 new edges. However, PAMs become more flexible and realistic when m is a random number. In the Norros-Reittu model [1] and the Poisson PAM in [3], the number of new edges is modeled by the Poisson distribution. The randomness of m is especially important during the start-up of the network. The discrepancy in the degree distribution between the traditional and Poisson PAMs becomes negligible over time [3]. Markovich N.M.
was supported by the Russian Science Foundation (RSF), project number 24-21-00183.

arXiv:2505.09713v1 [math.ST] 14 May 2025

Spreading information is an important problem in random networks such as multi-agent systems, telecommunication overlay networks, parallel computation in computational Grid systems [4], [5], social networks and the spread of infections [6], [7], [8], and gossip algorithms [9].

Let G_n = (V_n, E_n), n = 0, 1, 2, . . . denote the sequence of random graphs evolved by an evolution model, where V_n is a set of nodes and E_n is a set of edges. We denote the cardinality of a set A by #A. The evolution begins from the initial graph G_0. We may assume that G_0 contains a single node with the message. Our objective is to study the spreading of a unique message in undirected graphs evolved by the Norros-Reittu model. We aim to analyze the growth of the set of nodes S_k that obtained the message at evolution step k. Let N_k = #V_k be the number of nodes in the graph at evolution step k. By convention, we assume that the message can be transmitted from a node w ∈ S_k to a node v ∈ V_{k+1} ∖ S_k at step k + 1 if there exists an edge between these nodes. The transmission of the message is governed by
the ticks of a Poissonian clock, which drive the evolution of the graph, followed by the propagation of the message to the new node. We assume that message transmissions occur for a fixed time 𝑇*.𝐾*is the number of ticks of the clock (or evolution steps) up to 𝑇*. We aim to consider the dynamics of 𝑁𝑘and𝑆𝑘over time and obtain distributions of # 𝑆𝑘,𝑁𝑘, #𝑆𝐾*,𝑁𝐾*, #𝑆𝐾*/𝑁𝐾*. The paper is organized as follows. The statement of the problem is formulated in Section 1. In Section 2 related results are recalled. Our main results are presented in Section 3. The simulation study is given in Section 4. We finalize with conclusions. 1. Statement of the problem We aim to obtain the following probability mass functions (pmfs): 𝑃{#𝑆𝐾*=𝑖|Λ}, 𝑃{𝑁𝐾*=𝑖}, (1) and the distribution function of the proportion of nodes with the message at maximum step 𝐾*(a tick of a Poissonian clock) in the fixed time 𝑇* 𝑃{#𝑆𝐾*/𝑁𝐾*≤𝑥|Λ}. (2) A sequence Λ is defined in the next section. A new node appends to the graph by a tick of a Poissonian clock. The interarrival times between consec- utive pairs of new nodes are exponentially distributed with intensity 𝜆. The probability that the number of ticks 𝜈(𝑡) in time 𝑡is equal to 𝑘= 0,1,2, ... Information Spreading in Random Graphs Evoving by Norros-Reittu Model 3 is 𝑃{𝜈(𝑡) =𝑘}=(𝜆𝑡)𝑘𝑒−𝜆𝑡 𝑘!. (3) The number of evolution steps in time 𝑇*is𝐾*=𝜈(𝑇*), and its expectation is𝐸(𝐾*) =𝜆𝑇*. 2. Related work Here we consider the Norros-Reittu model [1], which is one of the key PAMs. In this model, new nodes join existing ones with probabilities de- pending on their mean degrees {Λ𝑖},𝑖= 0,1, ...called ”capacities” by a random number of edges and create self-loops. This allows us to model more complex network structures due to possible multiple edges and self loops. In [1], a sequence Λ = (Λ 0,Λ1, . . .) is formed beforehand. 
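The Poissonian-clock pmf (3) and the resulting expected number of evolution steps 𝐸(𝐾*) = 𝜆𝑇* can be checked numerically with a short sketch (the intensity and horizon values below are illustrative choices, not taken from the paper):

```python
import math

def clock_pmf(k, lam, t):
    """P{nu(t) = k}: probability of exactly k clock ticks (new nodes) in time t, Eq. (3).

    Computed in log-space to stay stable for large k."""
    mu = lam * t
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

# Illustrative parameters: intensity lam and time horizon T* are arbitrary.
lam, T_star = 2.0, 5.0
total_mass = sum(clock_pmf(k, lam, T_star) for k in range(200))
mean_steps = sum(k * clock_pmf(k, lam, T_star) for k in range(200))  # approximates E(K*) = lam * T*
```

Truncating the sums at k = 200 is harmless here since the Poisson(10) tail beyond that point is negligible.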
Here, {Λ𝑖}are assumed to be independent, strictly positive random variables (r.v.s) with a common distribution function 𝐹such that 𝑃{Λ1> 𝑥}=𝑥−𝜏+1ℓ(𝑥), 𝜏 > 1, (4) ℓ(𝑥) is a slowly varying function, i.e. by definition lim 𝑥→∞ℓ(𝑡𝑥)/ℓ(𝑥) = 1 for any𝑡 >0. For 𝜏∈(1,2],𝐸[Λ0] =∞, and for 𝜏∈(2,∞)𝐸[Λ0]<∞hold by Breiman’s theorem; see [10]. It is assumed that 𝑃(Λ0≥1) = 1. 𝐿𝑁=∑︀𝑁 𝑖=0Λ𝑖is the total capacity of the nodes in 𝐺𝑁. Using Λ, we construct a sequence of random graphs 𝐺𝑁,𝑁= 0,1, . . ., where the set of nodes in 𝐺𝑁is𝑉𝑁={0, . . . , 𝑁}. Let 𝐸𝑁(𝑖, 𝑗) be the number of edges between the nodes 𝑖and𝑗in𝐺𝑁. The graph sequence evolves through the following iterative procedure starting from the initial graph 𝐺0that contains a single isolated node. Step 1. One forms a random number of self-loops around the initial node in 𝐺0such that 𝐸0(0,0) is a r.v. with distribution Poisson(Λ 0). Step N + 1. To construct 𝐺𝑁+1from 𝐺𝑁, perform the following changes. (i) New edges are added to the new node 𝑁+ 1, namely, for 𝑖∈ {0, . . . , 𝑁 + 1}, the number 𝐸𝑁+1(𝑖, 𝑁+ 1) is generated as a Poisson r.v. with mean E[𝐸𝑁+1(𝑖,
𝑁+ 1)] =Λ𝑖Λ𝑁+1 𝐿𝑁+1. (5) New edges are added independently of the existing graph structure. 4 N. M. Markovich, D. V. Osipov (ii) Each old edge of 𝐺𝑁is independently deleted with probability 𝑃𝑁+1:= 1−𝐿𝑁 𝐿𝑁+1. The evolution model can play a double role: it serves for the growing net- work (evolution) and for the spread of information. We focus on spreading a single message among the nodes during the evolution by the Norros-Reittu PAM. At each evolution step 𝑘,𝑘= 1,2, ...we may observe two events: either #𝑆𝑘= #𝑆𝑘−1or #𝑆𝑘= #𝑆𝑘−1+ 1. The increase in # 𝑆𝑘is considered as success. It is assumed that the initial node has a single message, i.e. #𝑆0= 1. Our achievements will be based on the following lemma proved for the linear PAM proposed in [2]. Lemma 1. [11] The conditional pmf of # 𝑆𝑘for a maximum number of evolution steps 𝐾*in fixed time 𝑇*is the following. For 1 ≤𝑖≤𝐾*it holds 𝑃{#𝑆𝐾*=𝑖|𝐺𝐾*−1}=𝑒−𝜆𝑇*∞∑︁ 𝑘=𝑖(𝜆𝑇*)𝑘 𝑘!𝑃{#𝑆𝑘=𝑖|𝐺𝑘−1}, where 𝑃{#𝑆𝑘=𝑖|𝐺𝑘−1}=∑︁ 𝑐,𝑘,𝑖−1𝑖−1∏︁ 𝑛=1𝑝𝑗𝑛𝑘∏︁ 𝑚=𝑖(1−𝑝𝑗𝑚)1{𝑘≥𝑖≥2} +𝑘∏︁ 𝑚=1(1−𝑝𝑚)1{𝑘≥𝑖= 1}+𝑘∏︁ 𝑛=1𝑝𝑛1{𝑖=𝑘+ 1},=𝜓(𝑖, 𝑗, 𝑘 ),(6) 𝑃{#𝑆𝑘=𝑖|𝐺𝑘−1}= 0, 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒, (7) and success probabilities 𝑝1, ..., 𝑝 𝑘correspond to the PAM proposed in [2]. Here,∑︀ 𝑐,𝑘,𝑗denotes the sum of all(︀𝑘 𝑗)︀ =𝑘!/(𝑗!(𝑘−𝑗)!) distinct index combinations among {𝑗1, 𝑗2, ..., 𝑗 𝑘}of length 𝑗. The sum∑︀ 𝑐,𝑘,𝑗corresponds to a Poisson binomial distribution. An integer-valued r.v. 𝑋is called the Poisson binomial and is denoted as 𝑋∼𝑃𝐵(𝑝1, 𝑝2, ..., 𝑝 𝑘), if𝑋=𝑑𝜉1+...+𝜉𝑘, where 𝜉1, ..., 𝜉 𝑘are independent Bernoulli rvs with parameters 𝑝1, 𝑝2, . . . , 𝑝 𝑘. The probability distribution of𝑋is𝑃{𝑋=𝑗}=∑︀ 𝐴∈[𝑘],‖𝐴‖=𝑗(︁∏︀ 𝑖∈𝐴𝑝𝑖∏︀ 𝑖̸=𝐴(1−𝑝𝑖))︁ , where the sum ranges over all subsets of [ 𝑘] ={1, ..., 𝑘}of size 𝑗[12]. Our results are based on Lemma 1, where 𝑝𝑘,𝑘= 1,2, ...correspond to the Norros-Reittu PAM. Information Spreading in Random Graphs Evoving by Norros-Reittu Model 5 3. Main results Let the initial graph 𝐺0contain a single node with a message, i.e. # 𝑉0= #𝑆0= 1, # 𝐸0= 0. 
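The Poisson binomial pmf introduced above involves a sum over all size-𝑗 subsets of [𝑘], which is exponential if evaluated directly. A minimal sketch of the standard convolution recursion computes the whole pmf in O(𝑘²); the probabilities used below are arbitrary illustrations:

```python
def poisson_binomial_pmf(probs):
    """pmf of X = xi_1 + ... + xi_k for independent Bernoulli(p_i) summands:
    convolve the two-point distributions one success probability at a time."""
    pmf = [1.0]                          # distribution of an empty sum
    for p in probs:
        new = [0.0] * (len(pmf) + 1)
        for j, mass in enumerate(pmf):
            new[j] += mass * (1.0 - p)   # trial fails: count stays at j
            new[j + 1] += mass * p       # trial succeeds: count moves to j+1
        pmf = new
    return pmf
```

For `probs = [0.5, 0.5]` this returns `[0.25, 0.5, 0.25]`, matching the binomial special case.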
By the Norros-Reittu model, each new node connects to all old nodes existing at the previous evolution step by Poisson-distributed random numbers of new edges. The message cannot be transmitted to a new node if there is no new edge connecting this node with the old nodes holding the message. In other words, if the number of new edges from a new node to all old nodes with the message is equal to zero, then the transmission is unsuccessful. Otherwise, the transmission is successful irrespective of the number of new edges. Remark 1. The removal of old edges and self-loops does not affect the propagation of the message. In the Norros-Reittu PAM the numbers of new edges and their attachment to the nodes are independent at each evolution step. The distribution (4) affects the graph structure. As depicted in Figure 1, the graph evolves through distinct topological phases. Early-stage growth (𝑁 ≤ 20) exhibits random tree-like structures, while mature networks (𝑁 ≥ 100) develop scale-free properties consistent with empirical web graphs. The choice of the distribution parameter 𝜏 balances heavy-tailed node capacities: a lower 𝜏 (e.g., 𝜏 = 1.5) leads to numerous high-capacity hubs, enabling rapid
message propagation, while a higher 𝜏 (e.g., 𝜏 = 3.5) results in more homogeneous capacities, slowing the spread.
3.1. Success probability 𝑝𝑘
Lemma 2. Under the conditions of graph evolution according to the Norros-Reittu PAM, let #𝑆0 = 1 and let Λ be distributed as in (4). Then the success probability 𝑝𝑘+1, 𝑘 = 0, 1, . . ., conditionally on Λ and 𝑆𝑘, is
𝑝𝑘+1 = E{#𝑆𝑘+1 − #𝑆𝑘 | 𝑆𝑘, Λ} = 1 − ∏_{𝑤∈𝑆𝑘} exp(−Λ𝑤Λ𝑘+1/𝐿𝑘+1). (8)
Proof. By the Norros-Reittu model it follows that
∘ for each node 𝑤 ∈ 𝑉𝑘 the number of edges between the new node 𝑘+1 and an existing node 𝑤 is defined independently, with an intensity determined by their capacities Λ𝑤 and Λ𝑘+1;
∘ the average number of edges between the nodes 𝑤 ∈ 𝑉𝑘 and 𝑘+1 is given by (5): E[𝐸𝑘+1(𝑤, 𝑘+1) | Λ] = Λ𝑤Λ𝑘+1/𝐿𝑘+1, where 𝐿𝑘+1 = ∑_{𝑖=0}^{𝑘+1} Λ𝑖.
For the Poisson distribution (3), the probability that there are no edges between nodes 𝑤 and 𝑘+1 at step 𝑘+1 is 𝑃{𝐸𝑘+1(𝑤, 𝑘+1) = 0 | Λ} = exp(−Λ𝑤Λ𝑘+1/𝐿𝑘+1). Since edges between the new node 𝑘+1 and different existing nodes 𝑤 ∈ 𝑆𝑘 are created independently, the probability that the new node does not connect to any node in 𝑆𝑘 equals the product of the probabilities of the absence of edges over all 𝑤 ∈ 𝑆𝑘:
𝑃{𝑆𝑘 = 𝑆𝑘+1 | 𝑆𝑘, Λ} = ∏_{𝑤∈𝑆𝑘} exp(−Λ𝑤Λ𝑘+1/𝐿𝑘+1).
Hence, (8) follows.
Figure 1. The evolution of a network generated using the Norros-Reittu model with regularly varying node capacities (see (4)), where the nodes with messages are marked in black and those without the message in grey. The size of a circle is proportional to the node weight Λ. Multiple edges are shown as a single edge; self-loops are not shown. On the left, the graph with 𝑁2 = 3; in the middle, the emergent phase (𝑁19 = 20), where preferential attachment creates hub structures; on the right, the mature network (𝑁99 = 100) with the giant connected component. The first row corresponds to 𝜏 = 1.5, the second row to 𝜏 = 2.5, and the third row to 𝜏 = 3.5.
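The success probability (8) of Lemma 2 is straightforward to evaluate numerically; a minimal sketch follows, where the node capacities are made-up numbers and the helper name is ours:

```python
import math

def success_probability(capacities, informed):
    """p_{k+1} from Eq. (8): capacities = [Lambda_0, ..., Lambda_{k+1}] after the
    new node k+1 arrives; informed = indices of the nodes in S_k."""
    L = sum(capacities)                  # total capacity L_{k+1}
    lam_new = capacities[-1]             # capacity of the new node k+1
    no_edge = math.prod(math.exp(-capacities[w] * lam_new / L) for w in informed)
    return 1.0 - no_edge

# With two unit-capacity nodes and one informed node: p = 1 - exp(-1/2)
p = success_probability([1.0, 1.0], {0})
```

Since the exponents add, the product collapses to `1 - exp(-lam_new * sum_{w in S_k} Lambda_w / L)`, which is how the quantity is used in Corollary 1.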
Remark 2. Lemma 2 is valid for any initial graph with at least one node with the message, #𝑆0 ≥ 1.
3.2. Probability mass function of 𝑆𝑘
To analyze the pmfs of 𝑁𝑘 and #𝑆𝑘, we will use the Poisson binomial distribution model. Each evolution step 𝑘 corresponds to an independent experiment defined as follows:
∘ Experiment description: at step 𝑘+1, a new node 𝑣𝑘+1 is added to the graph. Edges between 𝑣𝑘+1 and existing nodes are created independently, with the number of edges between 𝑣𝑘+1 and each existing node 𝑤 following a Poisson distribution with mean Λ𝑤Λ𝑘+1/𝐿𝑘+1. This process is independent of previous steps.
∘ Message transmission condition: the message is transmitted to 𝑣𝑘+1 if and only if at least one edge is created between 𝑣𝑘+1 and some node 𝑤 ∈ 𝑆𝑘. If no such edges are formed, the transmission fails. The success probability 𝑝𝑘+1 at step 𝑘+1 is given by (8). Since edge
creation is independent across steps, the sequence of successes {𝑝𝑘}forms a Poisson binomial distribution model. Further analysis is based on the arguments of [11]. 8 N. M. Markovich, D. V. Osipov From Lemma 1 we have that the pmf of 𝑆𝑘,𝑘≥1, for the maximum number of evolution steps 𝐾*within a fixed time 𝑇*is defined as follows: 𝑃{#𝑆𝐾*=𝑖|Λ}=𝑒−𝜆𝑇*∞∑︁ 𝑘=𝑖(𝜆𝑇*)𝑘 𝑘!𝑃{#𝑆𝑘=𝑖|Λ} (9) where 𝑃(#𝑆𝑘=𝑖|Λ) substitutes 𝑃{#𝑆𝑘=𝑖|𝐺𝑘−1}in (6), (7). Corollary 1. Let the conditions of Lemma 2 be fulfilled. The proba- bility of complete non-propagation at evolution step 𝐾≥1 is given by 𝑃{#𝑆𝐾= 1|Λ}=𝑒−𝛼, 𝛼 = Λ 0𝐾∑︁ 𝑘=1Λ𝑘 𝐿𝑘. (10) Proof. As # 𝑆𝑘= 1 for each 1≤𝑘≤𝐾, then by (8) 1−𝑝𝑘=𝑒−Λ0Λ𝑘/𝐿𝑘. Hence, (10) follows by (6). Now we obtain (1) and (2). Lemma 3. Under the conditions of Lemma 2, the pmf of 𝑁𝑘,𝑘≥1, for the maximum number of evolution steps 𝐾*within a fixed time 𝑇*is defined as follows: 𝑃{𝑁𝐾*=𝑖}=(𝜆𝑇*)𝑖−1𝑒−𝜆𝑇* (𝑖−1)!, 𝑖≥1. Proof. Consider the process of adding new nodes to a graph. Since exactly one node is added at each step, the number of nodes 𝑁𝐾*at step 𝐾*is equal to 𝐾*+𝑁0=𝐾*+ 1. According to the condition, the addition of nodes follows a Poisson pro- cess, then from the above reasoning and (3), the desired result follows 𝑃{𝑁𝐾*=𝑖}=𝑃{𝐾*=𝑖−1}=(𝜆𝑇*)𝑖−1𝑒−𝜆𝑇* (𝑖−1)!. Lemma 4. Under the conditions of Lemma 2, let 𝐾*be determined as in (3). Then it holds 𝑃{#𝑆𝐾*/𝑁𝐾*≤𝑥|Λ}=𝑒−𝜆𝑇*+𝑒−𝜆𝑇*∞∑︁ 𝑘=1(𝜆𝑇*)𝑘 𝑘!⌊𝑥(𝑘+1)⌋∑︁ 𝑖=1𝜓(𝑖, 𝑗, 𝑘 ), where 𝜓(𝑖, 𝑗, 𝑘 ) is determined by (6). Information Spreading in Random Graphs Evoving by Norros-Reittu Model 9 Proof. By (9) we have 𝑃{#𝑆𝐾*/𝑁𝐾*≤𝑥|Λ}=𝑃{#𝑆𝐾*≤𝑥(𝐾*+ 1)|Λ} =⌊𝑥(𝐾*+1)⌋∑︁ 𝑖=1𝑃{#𝑆𝐾*=𝑖|Λ} =𝑃{𝐾*= 0}+∞∑︁ 𝑘=1⌊𝑥(𝑘+1)⌋∑︁ 𝑖=1𝑃{#𝑆𝐾*=𝑖, 𝐾*=𝑘|Λ} =𝑒−𝜆𝑇*+𝑒−𝜆𝑇*∞∑︁ 𝑘=1⌊𝑥(𝑘+1)⌋∑︁ 𝑖=1(𝜆𝑇*)𝑘 𝑘!𝑃{#𝑆𝑘=𝑖|Λ} Finally, using that 𝑃{#𝑆𝑘=𝑖|Λ}=𝜓(𝑖, 𝑗, 𝑘 ) (by Lemma 1) we get 𝑃{#𝑆𝐾*/𝑁𝐾*≤𝑥|Λ}=𝑒−𝜆𝑇*+𝑒−𝜆𝑇*∞∑︁ 𝑘=1(𝜆𝑇*)𝑘 𝑘!⌊𝑥(𝑘+1)⌋∑︁ 𝑖=1𝜓(𝑖, 𝑗, 𝑘 ) 4. 
Simulation study The simulation results of the Norros-Reittu model were used to plot the dynamics of the ratio # 𝑆𝑘/𝑁𝑘as a function of the size of the graph 𝑁𝑘for the following parameters: the node capacity distribution corre- sponds to parameters 𝜏∈{1.5,2.5,3.5}in (4), governing the heavy-tailed capacity distribution; initial conditions corresponding to the initial graph size𝑁0∈{10,50,100}and the initial number of nodes with the message #𝑆0∈{1,5,10}, see Figure 2. Each plot was averaged over 20 simulation runs. For each 𝜏, three panels are shown, corresponding to different 𝑁0with lines representing # 𝑆𝑘/𝑁𝑘 dynamics for different # 𝑆0. We may conclude the following. ∘Impact of initial nodes with the message ( 𝑆0):Increasing # 𝑆0 accelerates the growth of the # 𝑆𝑘/𝑁𝑘ratio during early graph evo- lution. However, in case 𝜏= 1.5 for large 𝑁𝑘(𝑁𝑘>800), the de- pendence on # 𝑆0diminishes, and all curves converge to a common limit. ∘Role of parameter 𝜏:The lower 𝜏values corresponding to distri- butions (4) with heavier tails promote faster message spreading due to high-capacity nodes actively forming connections. For instance, at 𝜏= 1.5 and 𝑁0= 10, the # 𝑆𝑘/𝑁𝑘ratio reaches 0 .7–0.8 by𝑁𝑘= 200, and ultimately exceeds 0 .9 at 𝑁𝑘≈800. In contrast, for 𝜏= 3.5, similar values (# 𝑆𝑘/𝑁𝑘>0.9) are not observed even at 𝑁𝑘= 3000. 10 N. M. Markovich, D. V. Osipov Figure 2. The average value of #
𝑆𝑘/𝑁𝑘against 𝑁𝑘for 20 repetitions of graph evolution by the Norros-Reittu PAM: on the left side −𝑁0= 10 and # 𝑆0∈ {1,5,10}; in the middle−𝑁0= 50 and # 𝑆0∈{1,5,10}; on the right−𝑁0= 100 and # 𝑆0∈{1,5,10}, where the first line is responsible for 𝜏= 1.5, the second line for 𝜏= 2.5, and the third line for 𝜏= 3.5. ∘Impact of initial graph size ( 𝑁0):Larger 𝑁0values amplify the effect of initial conditions: differences between # 𝑆0= 1,5,10 are sig- nificant. For smaller 𝑁0the dynamics rapidly stabilize, and variations in𝑆0become less pronounced. ∘Process stabilization: In all cases, the # 𝑆𝑘/𝑁𝑘ratio stabilizes as the graph grows, indicating equilibrium in the message dissemination. 5. Conclusions We study the propagation of a message in the Norros-Reittu PAM within a fixed time interval 𝑇*. The evolution starts from an initial graph contain- ing the 𝑆0nodes with the message, where the network grows according to a heavy-tailed capacity distribution governed by the tail index 𝜏−1. The Information Spreading in Random Graphs Evoving by Norros-Reittu Model 11 message is propogated to a new node only if it connects to at least one existing node with the message, with probabilities determined by the node capacities Λ 𝑖. We obtained the following results. ∘The pmfs for # 𝑆𝐾*(the number of nodes with the message), 𝑁𝐾* (the total number of nodes), and the distribution function of their ratio # 𝑆𝐾*/𝑁𝐾*at time 𝑇*. ∘Analytical expression for the success probability 𝑝𝑘, linking it to the Poisson binomial distribution and the node capacity dynamics. ∘Empirical validation through simulations, demonstrating the critical impact of 𝜏,𝑁0, and # 𝑆0on the # 𝑆𝑘/𝑁𝑘ratio. Evidence of equilib- rium in the message dissemination as 𝑁𝑘approaches 3000 nodes. 
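The spreading dynamics summarized above can be reproduced in outline. The sketch below assumes pure-Pareto capacities, i.e. ℓ(𝑥) ≡ 1 in (4), and the function name `simulate_ratio` is ours, not from the paper:

```python
import math, random

def simulate_ratio(n_steps, tau, n0=1, s0=1, seed=0):
    """Evolve a Norros-Reittu-type graph for n_steps steps and return #S_k / N_k.
    Capacities satisfy P{Lambda > x} = x^{-(tau-1)}, x >= 1 (Eq. (4) with l(x) = 1)."""
    rng = random.Random(seed)
    pareto = lambda: (1.0 - rng.random()) ** (-1.0 / (tau - 1.0))  # inverse-cdf sampling
    caps = [pareto() for _ in range(n0)]
    informed = set(range(s0))             # nodes already holding the message
    for _ in range(n_steps):
        lam_new = pareto()
        caps.append(lam_new)
        L = sum(caps)                     # total capacity L_{k+1}
        # message passes iff at least one Poisson edge links the new node to S_k (Lemma 2)
        p = 1.0 - math.exp(-lam_new * sum(caps[w] for w in informed) / L)
        if rng.random() < p:
            informed.add(len(caps) - 1)
    return len(informed) / len(caps)
```

Averaging such runs over repetitions and plotting against 𝑁𝑘 reproduces the qualitative behaviour reported above: heavier tails (smaller 𝜏) spread the message faster.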
Our future work may concern the following problems: ∘a generalization to time-dependent parameters (e.g., dynamic of 𝜏), node removal mechanisms and different initial graphs; ∘a robust estimation of parameters of the PAM associated with the network; ∘application to real-world networks, such as social media or IoT sys- tems. REFERENCES 1. Norros, I., Reittu, H.: On a conditionally poissonian graph process. Advances in Applied Probability, 38(1), 59–75. DOI: 10.1239/aap/1143936140. (2006) 2. Wan, P., Wang, T., Davis, R.A., Resnick, S.I.: Are extreme value estimation methods useful for network data? Extremes, vol. 23, pp. 171–195. (2020) 3. Wang, T., Resnick, S.: Poisson Edge Growth and Preferential Attachment Networks. Methodol Comput Appl Probab 25, 8 (2023).https://doi.org/10.1007/s11009-023-09997-y 4. Mosk-Aoyama, D., Shah, D.: Computing separable functions via gos- sip. In Proceedings of the 25th ACM symposium on Principles of dis- tributed computing (PODC ’06), ACM, New York, USA 113–122. DOI: 10.1145/1146381.1146401. (2006) 5. Censor-Hillel, K., Shachnai, H.: Partial Information Spreading with Appli- cation to Distributed Maximum Coverage. In Proceedings of the 29th ACM symposium on Principles of distributed computing (PODC’10), ACM, New York, USA. 161–170. DOI: 10.1145/1835698.1835739 (2010) 6. Patwardhan, S., Rao, V.K., Fortunato, S., Radicchi, F.: Epidemic Spreading in Group-Structured Populations. Physical Review X, 13(4), 041054. DOI: 10.1103/PhysRevX.13.041054 (2023) 12 N. M. Markovich, D. V. Osipov 7. Zeng, L., Tang M., Liu, Y., Yeop Yang, S., Do, Y.: Contagion dynamics in time-varying metapopulation networks with node’s activity and attractive-
ness. arXiv:2311.13856v1 [physics.soc-ph] (2023) 8. Pang, G., Pardoux, E., Velleret, A.: Stochastic SIR model with indi- vidual heterogeneity and infection-age dependent infectivity on large non- homogeneous random graphs. arXiv:2502.04225v1 (2025) 9. Shah, D.: Gossip algorithms. Foundations and Trends in Networking 3(1), 1–125 (2008) 10. Jessen, A.H., Mikosch, T.: Regularly varying functions. Publ. Inst. Math. (Beograd) (N.S.), 80, 171–192. (2006) 11. Markovich, N.M.: Spreading of a limited lifetime information in networks evolved by preferential attachment. Submission to e-Journal Reliability: Theory and Applications (2024) 12. Tang, W., Tang, F.: The Poisson Binomial Distribution— Old & New. Statist. Sci. 38(1) 108–119. DOI: 10.1214/22-STS852. (2023)
arXiv:2505.09860v1 [stat.ME] 14 May 2025
Robust and Computationally Efficient Trimmed L-Moments Estimation for Parametric Distributions
Chudamani Poudyal1, University of Central Florida
Qian Zhao2, Robert Morris University
Hari Sitaula3, Montana Technological University
© Copyright of this Manuscript is held by the Authors!
Abstract. This paper proposes a robust and computationally efficient estimation framework for fitting parametric distributions based on trimmed L-moments. Trimmed L-moments extend classical L-moment theory by downweighting or excluding extreme order statistics, resulting in estimators that are less sensitive to outliers and heavy tails. We construct estimators for both location-scale and shape parameters using asymmetric trimming schemes tailored to different moments, and establish their asymptotic properties for inferential justification using the general structural theory of L-statistics, deriving simplified single-integration expressions to ensure numerical stability. State-of-the-art algorithms are developed to resolve the sign ambiguity in estimating the scale parameter for location-scale models and the tail index for the Fréchet model. The proposed estimators offer improved efficiency over traditional robust alternatives for selected asymmetric trimming configurations, while retaining closed-form expressions for a wide range of common distributions, facilitating fast and stable computation. Simulation studies demonstrate strong finite-sample performance. An application to financial claim severity modeling highlights the practical relevance and flexibility of the approach.
Keywords. Computational efficiency; Location-scale models; L-statistics; Robust estimation; Tail index estimation; Trimmed L-moments.
1Corresponding Author: Chudamani Poudyal, PhD, ASA, is an Assistant Professor in the Department of Statistics and Data Science, University of Central Florida, Orlando, FL 32816, USA.
e-mail : Chudamani.Poudyal@ucf.edu 2Qian Zhao, PhD, ASA, is an Associate Professor in the Departm ent of Mathematics, Robert Morris University, Moon Township, PA 15108, USA. e-mail :zhao@rmu.edu 3Hari Sitaula, PhD, is an Assistant Professor in the Departme nt of Mathematical Sciences, Montana Technological University, Butte, MT 59701, USA. e-mail :Hsitaula@mtech.edu 1 Introduction Robust and computationally efficient parameter estimation i s essential in statistical learning, espe- cially under data contamination, measurement error, or hea vy tails. While the maximum likelihood estimator (MLE) is asymptotically efficient under correct mo del assumptions, it is well known to be highly sensitive to deviations from ideal conditions and to contamination in the data ( Hampel et al., 1986,Huber and Ronchetti ,2009). These issues frequently arise in real-world settings suc h as biomedical studies, engineering reliability analysis, an d environmental monitoring, where irregu- larities and extreme values are often present. Robust stati stics provides a formal framework for addressing such challenges by developing estimators that m aintain good performance even when standard assumptions are violated. These procedures aim to reduce the influence of atypical data while preserving efficiency, ensuring stability and interpr etability in practical applications. The theory of robust statistics has been systematically dev eloped since the mid-1960s ( Tukey , 1960), with foundational work introducing methods to reduce the sensitivity of estimators to model misspecification and data contamination ( Huber and Ronchetti ,2009). Within this framework, the method of trimmed moments (MTM)—a subclass of L-moments ( Chernoff et al.,1967)—has gained attention
as a practical and computationally efficien t alternative to likelihood-based esti- mation. MTM replaces classical moments with trimmed moment s computed by excluding extreme order statistics, yielding estimators that are finite, stab le, and robust to outliers. These estimators have been developed for various distributional families, i ncluding location-scale models, exponential- type distributions, and heavy-tailed models ( Brazauskas et al.,2009,Brazauskas and Kleefeld ,2009, Kleefeld and Brazauskas ,2012). MTMs have also been applied in several domains: Opdyke and Cavallo (2012) used them in operational risk modeling, Kim and Jeon (2013) applied them in credibility theory, and Haoet al. (2014) used them to construct bootstrap confidence intervals for r eliability data. Their application to incomplete loss data has been fur ther examined by Poudyal (2021) and Poudyal and Brazauskas (2023), demonstrating that MTM provides an effective framework fo r en- hancing robustness by mitigating the influence of heavy poin t masses at left truncation and right censoring boundaries ( Gatti and Wüthrich ,2025). Despite the existence of general asymptotic distributiona l results for MTM, independently estab- lished by Chernoff et al. (1967) and Brazauskas et al. (2009), and their equivalence formally shown byPoudyal (2025), most existing implementations adopt a single trimming co nfiguration uniformly across all moments, regardless of the number of parameters t o be estimated or the distributional 1 features of the data. This constraint can lead to different or suboptimal subsets of data being used for estimating each moment, reducing efficiency or discardin g relevant information. To address this limitation, Poudyal (2025) developed computationally tractable expressions for the asymptotic dis- tribution of MTM estimators that allow for distinct trimmin g proportions for each moment. 
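The robustness MTM gains from discarding extreme order statistics can be seen in a toy sketch; the data values below are invented for illustration:

```python
def trimmed_mean(xs, a, b):
    """Average of the order statistics left after dropping the floor(n*a) smallest
    and floor(n*b) largest observations."""
    n = len(xs)
    kept = sorted(xs)[int(n * a): n - int(n * b)]
    return sum(kept) / len(kept)

clean = [1.0, 2.0, 3.0, 4.0, 5.0]
dirty = [1.0, 2.0, 3.0, 4.0, 500.0]   # one gross outlier replaces the largest value
# the ordinary mean jumps from 3.0 to 102.0, while the 20%-trimmed mean stays at 3.0
```

The same mechanism underlies the trimmed moments of Eq. (1): observations beyond the trimming boundaries simply never enter the estimating equations.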
This extension maintains the foundational asymptotic structure of L-statistics (Chernoff et al., 1967) while allowing practitioners to achieve a more effective robustness-efficiency trade-off tailored to the nature of the parameter being estimated. Therefore, this work introduces a general and flexible framework for robust estimation using trimmed L-moments that allows for distinct trimming proportions across different moments. This framework tailors trimming to moment-specific sensitivity, accommodating asymmetry where needed and enabling a more effective balance between robustness and efficiency. The proposed framework includes previously studied symmetric-trimming methods as special cases, while extending the modeling flexibility to better adapt to real-world data structures. We develop closed-form estimators and their asymptotic distributional properties for location-scale and Fréchet models under the general L-statistic framework, along with computational expressions for the analytic variance. The estimators do not require iterative optimization, making them computationally efficient and scalable to large datasets. The explicit nature of the estimators ensures numerical stability and reproducibility, key requirements for scalable statistical methods in scientific computing and industrial analytics. We also address the sign ambiguity in scale and shape estimation through a principled selection strategy based on proximity to the full-sample estimate. The proposed methods are validated through extensive simulation studies and real-data applications, confirming their strong finite-sample performance and practical relevance. The remainder of the paper is organized as follows. Section 2 introduces the
general L-estimator framework and derives its theoretical properties, specifically focusing on MTM. Section 3 implements the designed methodology for specific parametric families, Section 4 presents a comprehensive simulation study, and Section 5 provides real-data applications. Section 6 concludes with a discussion and future directions.

2 General Method of Trimmed L-Moments

Consider a random variable $X$ with the cdf $F(x|\theta)$, where $\theta = (\theta_1, \ldots, \theta_k)$, for some positive integer $k$, is the parameter vector to be estimated. Consider a random sample $X_1, \ldots, X_n$ of size $n$ with the corresponding order statistics $X_{1:n} \le \ldots \le X_{n:n}$. Then the method of trimmed moments (MTM) estimators of $\theta_1, \theta_2, \ldots, \theta_k$ are found as follows:

• Compute the sample trimmed moments
$$\widehat{T}_j = \frac{1}{n - \lfloor n a_j \rfloor - \lfloor n b_j \rfloor} \sum_{i = \lfloor n a_j \rfloor + 1}^{n - \lfloor n b_j \rfloor} h_j(X_{i:n}), \quad 1 \le j \le k. \quad (1)$$
The $h_j$'s in (1) are specially chosen functions for mathematical convenience and are typically specified by the data analyst; for a detailed discussion, we refer the reader to Brazauskas et al. (2009) and Poudyal (2021). The proportions $0 \le a_j, b_j \le 1$ should be selected based on the desired balance between efficiency and robustness.

• Compute the corresponding population trimmed moments
$$T_j = \frac{1}{1 - a_j - b_j} \int_{a_j}^{\bar{b}_j} H_j(u)\, du, \quad 1 \le j \le k, \quad \bar{v} := 1 - v, \; v \in [0,1]. \quad (2)$$
In (2), $F^{-1}(u|\theta) = \inf\{x : F(x|\theta) \ge u\}$ is the quantile function, and $H_j := h_j \circ F^{-1}$.

• Match the sample and population trimmed moments from (1) and (2) to get the following system of equations for $\theta_1, \theta_2, \ldots, \theta_k$:
$$T_j(\theta_1, \ldots, \theta_k) = \widehat{T}_j, \quad 1 \le j \le k. \quad (3)$$

A solution, say $\widehat{\theta}_T = (\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k)$, if it exists, to the system of equations (3) is called the method of trimmed moments (MTM) estimator of $\theta$. Thus, $\widehat{\theta}_j =: g_j(\widehat{T}_1, \widehat{T}_2, \ldots, \widehat{T}_k)$, $1 \le j \le k$, are the MTM estimators of $\theta_1, \theta_2, \ldots, \theta_k$. Asymptotically, Eq.
(1) is equivalent to a general structure of L-statistics under the condition $0 \le a_j < \bar{b}_j \le 1$ with $a_j + b_j < 1$; see Serfling (1980, p. 264). Specifically,
$$\widehat{T}_j := \frac{1}{n} \sum_{i=1}^{n} J_j\!\left(\frac{i}{n+1}\right) h_j(X_{i:n}), \quad 1 \le j \le k, \quad (4)$$
with the specified weights-generating function
$$J_j(s) = \begin{cases} (1 - a_j - b_j)^{-1}, & a_j < s < \bar{b}_j, \\ 0, & \text{otherwise}, \end{cases} \quad 1 \le j \le k. \quad (5)$$
Similarly, Eq. (2) is equivalent to
$$T_j \equiv T_j(\theta) \equiv T_j(\theta_1, \ldots, \theta_k) = \int_0^1 J_j(u) H_j(u)\, du. \quad (6)$$
We define
$$\widehat{T} := (\widehat{T}_1, \widehat{T}_2, \ldots, \widehat{T}_k) \quad \text{and} \quad T := (T_1, T_2, \ldots, T_k). \quad (7)$$
As sample statistics, the vector $\widehat{T}$ is expected to converge in distribution to the corresponding population parameter $T$. In total, there are six possible combinations of trimming proportions $(a_i, b_i)$ and $(a_j, b_j)$ for $1 \le i, j \le k$; see Poudyal (2025) for a detailed discussion. Among these, we focus on the case defined by the following inequality:
$$0 \le a_j \le a_i < \bar{b}_j \le \bar{b}_i \le 1. \quad (8)$$
Under the trimming inequality (8), and following the general asymptotic distributional results of Chernoff et al. (1967), the asymptotic distribution of the vector $\widehat{T}$, along with the computational expressions developed by Poudyal (2025), is summarized in Theorem 1. The result is based on the following two integral quantities and a kernel function. Following Poudyal (2025), for $1 \le i \le k$ and $0 \le a \le b \le 1$, define
$$I_i(a,b) := b H_i(b) - a H_i(a) - \int_a^b H_i(v)\, dv, \qquad \bar{I}_i(a,b) := b H_i(b) - a H_i(a) + \int_a^b H_i(v)\, dv, \quad (9)$$
and define the kernel function $K(w,v)$ as
$$K(w,v) := K(v,w) = \min\{w,v\} - wv, \quad \text{for } 0 \le w, v \le 1. \quad (10)$$
Theorem 1.
With the trimming proportions satisfying inequality (8), it follows that √n/parenleftig /hatwideT−T/parenrightig ∼AN(0,ΣT),ΣT=/bracketleftbig σ2 ij/bracketrightbigk i,j=1, σ2 ij= Γ(i,j)V(i,j), (11) where Γ≡Γ(i,j) =/productdisplay r=i,j(1−ar−br)−1, (12) 4 V(i,j) =/integraldisplaybi ai/integraldisplaybj ajK(v,w)H′ j(v)H′ i(w)dvdw (13) =Ij(aj,ai)Ii(ai,bi)+biHi(bi)Ij(ai,bj)−aiHi(ai)Ij(ai,bj)+/integraldisplaybj aiHi(v)Hj(v)dv +/bracketleftbig bjHj(bj)−aiHj(ai)/bracketrightbig/integraldisplaybi bjHi(v)dv−/bracketleftbig aiHj(ai)+bjHj(bj)/bracketrightbig/integraldisplaybj aiHi(v)dv −/parenleftigg/integraldisplaybj aiHj(v)dv/parenrightigg/parenleftigg/integraldisplaybj aiHi(v)dv/parenrightigg −/parenleftigg/integraldisplaybj aiHj(v)dv/parenrightigg/parenleftigg/integraldisplaybi bjHi(v)dv/parenrightigg . (14) Different trimming proportions for different moments were us ed by Brazauskas and Kleefeld (2009) to estimate the parameters of generalized Pareto distribut ions. While they approximated the en-
triesσ2 ijof the variance-covariance matrix ΣTdirectly from Eq. ( 13) using a numerical bivariate trapezoidal rule ( Brazauskas and Kleefeld ,2009, Appendix A.2), our approach derives the simplified closed-form expression in Eq. ( 14), as presented in Theorem 1. Note 1. If the trimming inequality (8)is replaced by 0≤ai≤aj<bi≤bj≤1, (15) then the asymptotic result in Theorem 1remains still valid by simply interchanging the indices iand j. Using the delta method (see, e.g., Serfling ,1980, Theorem A, p. 122), along with /hatwideT=/parenleftig /hatwideT1,...,/hatwideTk/parenrightig andθj=gj(T1(θ),...,T k(θ)),we present the following asymptotic result for /hatwideθT. Theorem 2 (Delta method) .The MTM-estimator of θ, denoted by/hatwideθT, has the following asymptotic distribution: /hatwideθT=/parenleftig /hatwideθ1,...,/hatwideθk/parenrightig ∼AN/parenleftbigg θ,1 nST/parenrightbigg ,ST:=DTΣTD′ T, (16) where the Jacobian DTis given by DT=/bracketleftbigg ∂gi ∂/hatwideTj/vextendsingle/vextendsingle/vextendsingle/hatwideT=T/bracketrightbigg k×k=: [dij]k×kand the variance-covariance matrixΣThas the same form as in Theorem 1. With the asymptotic distributional properties from Theore m2, the asymptotic performance of MTM-estimators are assessed by computing their asymptotic relative efficiency (ARE) in relation to the maximum likelihood estimator (MLE). For a model param eterized by kparameters, the ARE is defined as (see, e.g., Serfling ,1980): ARE(C,MLE) =/parenleftbiggdet(ΣMLE) det(ΣC)/parenrightbigg1/k , (17) 5 whereΣMLEandΣCare the asymptotic covariance matrices of the MLE and the MTM -estimator C, respectively, with detdenoting the determinant operation on a square matrix. The M LE serves as a reference due to its superior asymptotic efficiency regar ding variance, contingent on specific regularity conditions being met. For additional insights, consult Serfling (1980), Section 4.1. Note 2. 
With trimming proportions satisfying 0≤ai≤bi≤1, ifai>0,bi>0, andai+bi<1for 1≤i≤k, then the resulting estimators are globally robust with low er and upper breakdown points LBP= min{a1,a2,...,ak}and UBP = min{b1,b2,...,bk}. These breakdown points quantify resistance to outliers: ob servations with order less than n×LBP or greater than n×(1−UBP)are effectively excluded from the estimation procedure. Suc h trimming ensures robustness against extreme values on both ends of th e distribution. For a rigorous treatment of breakdown points and robust estimation under heavy-tail ed models, see Hampel et al.(1986) and Serfling (2002). 3 Parametric Examples In this section, we derive general MTM estimators for the loc ation and scale parameters of broad location-scale families, which are not necessarily symmet ric, while also highlighting the advantages of the general MTM framework when the distribution is symmet ric about zero. We also obtain the entries of the corresponding asymptotic variance-covaria nce matrix. For numerical illustrations, we consider the lognormal and Fréchet distributions and prese nt state-of-the-art algorithms designed to resolve the sign ambiguity in estimating the scale parame ter for location-scale models and the tail index for the Fréchet model. Additionally, we evaluate the asymptotic relative efficiency (ARE) of the MTM estimators with respect to the maximum likelihood estimator (MLE), as defined in Eq. (17). 3.1 Location Scale Model Consider X1,X2,...,X niid∼X, whereXis a location-scale random variable with the CDF F(x) =F0/parenleftbiggx−θ σ/parenrightbigg ,−∞< x <∞, (18) where−∞< θ <∞andσ >0are, respectively, the location and scale parameters of X, and F0is the standard parameter-free version of F, i.e., with θ= 0andσ= 1. The corresponding 6 percentile/quantile function of Xis given by F−1(u) =θ+σF−1 0(u). (19) Since
https://arxiv.org/abs/2505.09860v1
we are estimating two unknown parameters, θandσ, we equate the first two sample trimmed- moments with their corresponding population trimmed-mome nts. Further, knowing −∞< θ <∞ andσ >0, we choose h1(x) =xandh2(x) =x2. (20) From Eq. ( 6), we note that Hj:=hj◦F−1. Then, from Eq. ( 19) and Eq. ( 20), we have H1(u) =h1/parenleftbig F−1(u)/parenrightbig =F−1(u) =θ+σF−1 0(u), (21) =⇒dH1(u) =σdF−1 0(u), (22) H2(u) =h2/parenleftbig F−1(u)/parenrightbig =θ2+2θσF−1 0(u)+σ2/bracketleftbig F−1 0(u)/bracketrightbig2, (23) =⇒dH2(u) = 2θσdF−1 0(u)+2σ2F−1 0(u)dF−1 0(u). (24) With only two parameters, θandσ, to estimate, we focus on the trimming inequality derived fr om (8), stated explicitly as: 0≤a2≤a1<b2≤b1≤1. (25) With the trimming proportions (a1,b1)and(a2,b2)as given in ( 25), then from Eq. ( 1), the first two sample trimmed-moments are given by:   /hatwideT1=1 n−⌊na1⌋−⌊nb1⌋n−⌊nb1⌋/summationdisplay i=⌊na1⌋+1h1(Xi:n) =1 n−⌊na1⌋−⌊nb1⌋n−⌊nb1⌋/summationdisplay i=⌊na1⌋+1Xi:n, /hatwideT2=1 n−⌊na2⌋−⌊nb2⌋n−⌊nb2⌋/summationdisplay i=⌊na2⌋+1h2(Xi:n) =1 n−⌊na2⌋−⌊nb2⌋n−⌊nb2⌋/summationdisplay i=⌊na2⌋+1X2 i:n.(26) The corresponding first two population trimmed-moments usi ng Eq. ( 2) takes the form:   T1≡T1(θ,σ) =1 1−a1−b1/integraldisplayb1 a1H1(u)du=θ+σc1(a1,b1), T2≡T2(θ,σ) =1 1−a2−b2/integraldisplayb2 a2H2(u)du=θ2+2θσc1(a2,b2)+σ2c2(a2,b2),(27) 7 where ck(a,b)≡ck(F0,a,b) =1 b−a/integraldisplayb a/bracketleftbig F−1 0(u)/bracketrightbigkdu, k≥1. 
(28) Equating T1=/hatwideT1andT2=/hatwideT2, and solving the resulting system of equations, we obtain th e explicit expressions for θandσas:   /hatwideθT=/hatwideT1−c1(a1,b1)/hatwideσT=:g1/parenleftig /hatwideT1,/hatwideT2/parenrightig , /hatwideσT=±1/radicalig η(a1,b2)/parenleftig /hatwideT2−ηr/hatwideT2 1/parenrightig1/2 +/hatwideT1/parenleftbig c1(a1,b1)−c1(a2,b2)/parenrightbig η(a1,b2)=:g2/parenleftig /hatwideT1,/hatwideT2/parenrightig ,(29) where for 1≤i, j≤2, η(ai,bj) :=c2 1(ai,bi)−2c1(ai,bi)c1(aj,bj)+c2(aj,bj), ηr:=η(aj,bj) η(ai,bj). (30) Note 3. The motivation for imposing the trimming inequality (25)is to ensure that the same data points are trimmed from both tails for the first and second mom ents, particularly when the sample data are approximately symmetric about zero, an approach th at, to our knowledge, has not been fully addressed in the existing literature. For illustration, consider the sample dataset: −15,−13,−8,−4,−2,3,5,7,9,12. •Under equal trimming proportions, say (a1,b1) = (a2,b2) = (0.2,0.2), the trimmed samples are: Trimmed sample for the first moment: −8,−4,−2,3,5,7. Trimmed sample for the second moment: (−4)2,52,72,(−8)2,92,122. In this case, observations −15,−13,9, and12are trimmed for the first moment, whereas −15,−13,−2, and3are trimmed for the second moment. Hence, different data poin ts are being trimmed for the first and second moments. •Now consider unequal trimming proportions, e.g., (a1,b1) = (0.2,0.2)and(a2,b2) = (0,0.4), which yield: Trimmed sample for the first moment: −8,−4,−2,3,5,7. 8 Trimmed sample for the second moment: (−8)2,(−4)2,(−2)2,32,52,72. In this case, the same observations −15,−13,9, and12are trimmed from both the first and second moments. Therefore, if the data arise from a distribution symmetric a bout zero, ensuring that the same ob- servations are trimmed from both the first and second moments generally requires using different trimming proportions, i.e., (a1,b1)/ne}ationslash= (a2,b2). 
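The two-moment system above lends itself to a direct numerical sketch. The following Python snippet is a minimal illustration, not the authors' implementation: it computes the sample trimmed moments of Eq. (26), the constants $c_k(a,b)$ of Eq. (28) for a standard normal $F_0$ (reading a and b as lower and upper trimming proportions, as in the sample moments), and the closed-form estimators of Eqs. (29)-(30). The sign of $\widehat{\sigma}_T$ is resolved by keeping the positive branch closest to the full-sample standard deviation, anticipating the rule formalized later in Algorithm 1.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def trimmed_sample_moment(x, h, a, b):
    # Eq. (26): average h over the order statistics, with the floor(n*a)
    # smallest and floor(n*b) largest observations trimmed away.
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    lo, hi = int(np.floor(n * a)), n - int(np.floor(n * b))
    return h(xs[lo:hi]).mean()

def c_k(k, a, b):
    # Eq. (28) with a, b read as trimming proportions, i.e. the quantile
    # range is [a, 1-b] and the normalization is 1/(1-a-b); F0 = std normal.
    val, _ = quad(lambda u: norm.ppf(u) ** k, a, 1.0 - b)
    return val / (1.0 - a - b)

def mtm_location_scale(x, a1, b1, a2, b2):
    """Closed-form MTM estimators (theta_hat, sigma_hat), Eqs. (29)-(30)."""
    T1 = trimmed_sample_moment(x, lambda t: t, a1, b1)
    T2 = trimmed_sample_moment(x, lambda t: t ** 2, a2, b2)
    c1_1, c1_2 = c_k(1, a1, b1), c_k(1, a2, b2)
    c2_2 = c_k(2, a2, b2)
    eta_12 = c1_1 ** 2 - 2.0 * c1_1 * c1_2 + c2_2   # eta(a1, b2), Eq. (30)
    eta_r = (c2_2 - c1_2 ** 2) / eta_12             # eta(a2, b2) / eta(a1, b2)
    s_ft = np.sqrt(abs(T2 - eta_r * T1 ** 2) / eta_12)
    s_st = T1 * (c1_1 - c1_2) / eta_12
    # Keep the positive branch closest to the full-sample sd (Eq. (36)).
    sd_full = np.std(x)
    candidates = [s for s in (s_st + s_ft, s_st - s_ft) if s > 0]
    sigma = min(candidates, key=lambda s: abs(s - sd_full))
    theta = T1 - c1_1 * sigma                       # g1 in Eq. (29)
    return theta, sigma
```

With equal symmetric trimming, the branch term $\widehat{\sigma}_{ST}$ vanishes and the estimator reduces to the single-root form discussed below Eq. (36).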
Even when all observations are negative, equal trimming proportions, e.g., (a1, b1) = (a2, b2) = (0.1, 0.2), can still result in trimming different observations for the first and second moments. In contrast, if all data points are positive, then equal trimming proportions typically lead to trimming the same observations for both moments.

Nevertheless, to enhance robustness, one may deliberately choose different trimming proportions that result in trimming different observations for the first and second moments, even when the underlying model assumes strictly positive support. In such cases, it is advisable to select trimming parameters that satisfy the
condition 0≤a1≤a2<b1≤b2≤1, (31) which allows greater flexibility in capturing potential dis tributional asymmetries and tail behaviors. Therefore, our motivation for adopting the trimming inequa lity(25)is to ensure consistent trimming across both moments when the underlying distribution is sym metric about zero, whereas inequality (31)offers an alternative strategy to enhance robustness when th e random variable is strictly positive. Proposition 1. With the trimming proportions (ai,bi)and(aj,bj)satisfying the inequality (8), it follows that (i)η/parenleftbig ai,bj/parenrightbig =c2 1/parenleftbig ai,bi/parenrightbig −2c1/parenleftbig ai,bi/parenrightbig c1/parenleftbig aj,bj/parenrightbig +c2/parenleftbig aj,bj/parenrightbig >0. (ii)ck/parenleftbig ai,bi/parenrightbig ≥ck/parenleftbig aj,bj/parenrightbig ,for any odd positive integer k. (iii)0< ηr=η(aj,bj) η(ai,bj)≤1. (iv)c2/parenleftbig aj,bj/parenrightbig ≥ηrc2 1/parenleftbig ai,bi/parenrightbig . 9 (v)T2−ηrT2 1≥0. Proof. See Appendix A. Note 4. Under the inequality condition (25), no fixed ordering can be established between c2(a2,b2) andc2 1(a1,b1), or equivalently between T2andT2 1. To illustrate both possible directions, consider the standard normal distribution, i.e., F0= Φ, withθ= 0andσ= 1. Then: 1. Fora2= 0.02,b2= 0.75anda1= 0.05,b1= 0.99, T2=c2(a2,b2) = 0.5702>0.0066 =c2 1(a1,b1) =T2 1. 2. Fora2= 0.02,b2= 0.75anda1= 0.50,b1= 0.99, T2=c2(a2,b2) = 0.5702<0.5773 =c2 1(a1,b1) =T2 1. However, by Proposition 1(iv), the inequality is restored as below c2(a2,b2)≥ηrc2 1(a1,b1),equivalently T2−ηrT2 1≥0. Similarly, for a nonzero location parameter, say θ= 5andσ= 2, we observe that T2= 19.9010≤T2 1= 42.5046,butT2= 19.9010≥ηrT2 1= 10.8001. (32) Corollary 1. 
Under the trimming inequality (15), that is, (31)for location-scale models, all re- sults of Proposition 1remain valid except for Part (ii), which instead takes the form ck/parenleftbig ai,bi/parenrightbig ≤ ck/parenleftbig aj,bj/parenrightbig ,for any odd positive integer k. Note 5. Define σFT=1/radicalig η(a1,b2)/parenleftbig T2−ηrT2 1/parenrightbig1/2andσST=T1/parenleftbig c1(a1,b1)−c1(a2,b2)/parenrightbig η(a1,b2). (33) Under the trimming inequality (25), a consistent directional relationship between σFTandσSTis not guaranteed. For illustration, consider X∼N(θ= 10,σ= 3). Then, for (a1,b1) = (0.02,0.02)and(a2,b2) = (0.00,0.03), 10 we have σFT= 2.192>0.808 =σST. However, for (a1,b1) = (0.02,0.02)and(a2,b2) = (0.00,0.10), we obtain σFT= 0.400<2.600 =σST. Since the trimmed estimators/parenleftig /hatwideθT,/hatwideσT/parenrightig in Eq. ( 29) are derived from Eq. ( 27), it is guaranteed in theory that one of the solutions for σfrom the±branch must be positive. However, in finite samples, it is possible that /hatwideσT<0,in which case the proposed trimmed L-moment estimation fails. Thus, to determine the appropriate sign of /hatwideσTin Eq. ( 29), the following strategy is proposed. First, as established in Proposition 1, it holds theoretically that T2−ηrT2 1≥0.Therefore, assuming/parenleftig /hatwideT2−ηr/hatwideT2 1/parenrightig1/2 ≥0,we consider /hatwideσFT=1/radicalig η(a1,b2)/parenleftig /hatwideT2−ηr/hatwideT2 1/parenrightig1/2 and/hatwideσST=/hatwideT1/parenleftbig c1(a1,b1)−c1(a2,b2)/parenrightbig η(a1,b2). (34) But fora1/ne}ationslash=a2orb1/ne}ationslash=b2, it is possible to have /hatwideT2−ηr/hatwideT2 1<0,i.e.,/hatwideT2//hatwideT2 1< ηr,and in that case we consider /hatwideσFT=1/radicalig η(a1,b2)/parenleftig/vextendsingle/vextendsingle/vextendsingle/hatwideT2−ηr/hatwideT2 1/vextendsingle/vextendsingle/vextendsingle/parenrightig1/2 and/hatwideσST=/hatwideT1/parenleftbig c1(a1,b1)−c1(a2,b2)/parenrightbig η(a1,b2). 
(35) Thus, for location-scale models, it is guaranteed that /hatwideσFT≥0.However,/hatwideσST∈R. If/hatwideσFT</hatwideσST,we define/hatwideσ−:=−/hatwideσFT+/hatwideσST>0.Similarly, if/hatwideσFT>|/hatwideσST|,we define/hatwideσ+:=/hatwideσFT+/hatwideσST> 0.Finally, if/hatwideσFT≤|/hatwideσST|and/hatwideσST≤0,then the method fails to produce a nonnegative estimate of /hatwideσT≥0. • Ifa1=a2=aandb1=b2=b, then it follows that η(a1,b2) =η(a2,b2),that is,ηr= 1. Also, c1(a1,b1)−c1(a2,b2) = 0.Thus, /hatwideσT=/hatwideσ+=/parenleftigg/hatwideT2−/hatwideT2 1 c2(a,b)−c1(a,b)2/parenrightigg1/2 , 11 Algorithm 1: Trimmed Estimation of/parenleftig /hatwideθT,/hatwideσT/parenrightig in Location-Scale Models Input: Sample data and trimming proportions (a1,b1),(a2,b2)satisfying ( 25) or (31). Output: Trimmed estimator vector/parenleftig /hatwideθT,/hatwideσT/parenrightig . 1: Compute ; 2:/hatwideT1and/hatwideT2from Eq. ( 26); 3:c1(a1,b1), c1(a2,b2), andc2(a2,b2)from Eq. ( 28); 4:η(a1,b2), η(a2,b2), andηrfrom Eq. ( 30); 5:/hatwideσFTand/hatwideσSTfrom Eq. ( 34) or Eq. ( 35); 6:/hatwideσMLEfrom
Eq. ( 36); 7: Set/hatwideσ+←/hatwideσFT+/hatwideσSTand/hatwideσ−←−/hatwideσFT+/hatwideσST; 8: ifa1=a2andb1=b2then 9:/hatwideσST←0, resulting in/hatwideσT←/hatwideσFT; 10: return /hatwideσT; 11: end 12: else if (a2≤a1andb1≤b2)or(a1≤a2andb2≤b1)then 13: if max{/hatwideσ−,/hatwideσ+}≤0then /* Exit and update trimming proportions to ensure /hatwideσT>0. */ 14: Ensure max{/hatwideσ−,/hatwideσ+}>0; 15: end 16: else if /hatwideσ−≤0and/hatwideσ+>0then 17:/hatwideσT←/hatwideσ+; 18: end 19: else if /hatwideσ−>0and/hatwideσ+≤0then 20:/hatwideσT←/hatwideσ−; 21: end 22: else if min{/hatwideσ−,/hatwideσ+}≥0then 23: if|/hatwideσ−−/hatwideσMLE|<|/hatwideσ+−/hatwideσMLE|then 24: /hatwideσT←/hatwideσ− 25: end 26: else 27: /hatwideσT←/hatwideσ+; 28: end 29: end 30: end 31: return/hatwideσT; 32:/hatwideθT←/hatwideT1−c1(a1,b1)/hatwideσT; 33: return/parenleftig /hatwideθT,/hatwideσT/parenrightig ; and this result coincides exactly with the solution present ed in Eq. (2.7) of Brazauskas et al. (2009), as expected. • Ifa1/ne}ationslash=a2orb1/ne}ationslash=b2, selecting the appropriate sign in Eq. ( 29) becomes nontrivial. For either trimming inequality ( 25) or (31), we begin by computing the standard deviation of the full 12 sampleX1,X2,...,X nas /hatwideσMLE=/radicaltp/radicalvertex/radicalvertex/radicalbt1 nn/summationdisplay i=1/parenleftbig Xi−X/parenrightbig2. (36) Ifmax{/hatwideσ−,/hatwideσ+}≤0,then there is no positive solution for the scale parameter σ. Otherwise, if/hatwideσ−≤0and/hatwideσ+>0, then we simply assign /hatwideσT=/hatwideσ+.Similarly, if/hatwideσ−>0and/hatwideσ+≤0,then /hatwideσT=/hatwideσ−.Finally, assuming min{/hatwideσ−,/hatwideσ+}≥0,and following a rationale similar to that used in selecting a decision threshold in logistic classificatio n, we choose between /hatwideσ−and/hatwideσ+based on their proximity to the maximum likelihood estimate. 
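The sign-selection branch of Algorithm 1 can be sketched as a small standalone helper. This is a hedged illustration of the decision rule in Eq. (37), not the authors' code, and it assumes $\widehat{\sigma}_{FT} \ge 0$ as guaranteed by Eq. (35):

```python
def select_sigma(sigma_ft, sigma_st, sigma_mle):
    """Branch selection for sigma_hat (Eq. (37) / Algorithm 1).
    sigma_ft is assumed nonnegative; sigma_st may take either sign."""
    s_minus = sigma_st - sigma_ft   # sigma^- := -sigma_FT + sigma_ST
    s_plus = sigma_st + sigma_ft    # sigma^+ :=  sigma_FT + sigma_ST
    if max(s_minus, s_plus) <= 0:
        return None                 # no positive root: retune the trimming
    if s_minus <= 0 < s_plus:
        return s_plus
    if s_plus <= 0 < s_minus:
        return s_minus
    # Both roots admissible: take the one nearer the full-sample MLE.
    return s_minus if abs(s_minus - sigma_mle) < abs(s_plus - sigma_mle) else s_plus
```

The `None` return corresponds to the failure case in Algorithm 1, where the trimming proportions must be updated before re-estimating.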
That is, /hatwideσT=/braceleftigg /hatwideσ−,if|/hatwideσ−−/hatwideσMLE|<|/hatwideσ+−/hatwideσMLE|, /hatwideσ+,if|/hatwideσ+−/hatwideσMLE|≤|/hatwideσ−−/hatwideσMLE|.(37) A formal procedure for estimating the parameter vector/parenleftig /hatwideθT,/hatwideσT/parenrightig is summarized in Algorithm 1. Note 6. For(a1,b1) = (a2,b2) = (a,b), it follows that η(a1,b2) =η(a2,b2),that is,ηr= 1, and the result given by Eq. (29)coincides exactly with the solution presented in Eq. (2.7) o fBrazauskas et al. (2009), as expected. From Theorem 1along with Eqs. ( 22) and ( 24), we calculate the entries of the covariance matrix ΣT=/bracketleftig σ2 ij/bracketrightig2 i,j=1as below: σ2 11= Γ(1,1)V(1,1) = Γ(1,1)/integraldisplayb1 a1/integraldisplayb1 a1K(w,v)H′ 1(w)H′ 1(v)dvdw =σ2Γ(1,1)/integraldisplayb1 a1/integraldisplayb1 a1K(w,v)dF−1 0(v)dF−1 0(w)dvdw =σ2Λ111, (38) σ2 12= Γ(1,2)V(1,2) = Γ(1,2)/integraldisplayb1 a1/integraldisplayb2 a2K(w,v)H′ 1(w)H′ 2(v)dvdw = Γ(1,2){2θσ2/integraldisplayb1 a1/integraldisplayb2 a2K(w,v)dF−1 0(v)dF−1 0(w)dvdw +2σ3/integraldisplayb1 a1/integraldisplayb2 a2K(w,v)F−1 0(v)dF−1 0(v)dF−1 0(w)dvdw} = 2θσ2Λ121+2σ3Λ122, (39) σ2 22= Γ(2,2)V(2,2) 13 = Γ(2,2)/integraldisplayb2 a2/integraldisplayb2 a2K(w,v)H′ 2(w)H′ 2(v)dvdw = Γ(2,2){4θ2σ2/integraldisplayb2 a2/integraldisplayb2 a2K(w,v)dF−1 0(v)dF−1 0(w)dvdw +8θσ3/integraldisplayb2 a2/integraldisplayb2 a2K(w,v)F−1 0(w)dF−1 0(v)dF−1 0(w)dvdw +4σ4/integraldisplayb2 a2/integraldisplayb2 a2K(w,v)F−1 0(w)F−1 0(v)dF−1 0(v)dF−1 0(w)dvdw} = 4θ2σ2Λ221+8θσ3Λ222+4σ4Λ223, (40) where the notations Λijk, for1≤i,j≤2and1≤k≤3do not depend on the parameters to be estimated and are listed in Appendix B. Note 7. For the equal trimming proportions (a1,b1) = (a2,b2) = (a,b), we have Λ111= Λ121= Λ221=c∗ 1,Λ122= Λ222=c∗ 2,andΛ223=c∗ 3, where the notations c∗ i,i= 1,2,3can be found in Brazauskas et al.(2009). 
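The covariance entries above reduce to weighted double integrals of the quantile-process covariance. As an illustrative numerical check, not the paper's closed-form route, the sketch below evaluates a Λ-type term by brute-force quadrature. It assumes the kernel K(w, v) = min(w, v) − wv (the Brownian-bridge covariance standard in L-statistic asymptotics) and the normalization Γ(1, 1) = (1 − a1 − b1)^{-2}; the paper's Theorem 1, not reproduced in this excerpt, gives the exact definitions of K and Γ(i, j).

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import dblquad

def lambda_111(a1, b1):
    """Double integral of K(w, v) dF0^{-1}(v) dF0^{-1}(w) over [a1, 1-b1]^2,
    with the assumed kernel K(w, v) = min(w, v) - w*v and F0 standard normal,
    so that dF0^{-1}(u) = du / phi(Phi^{-1}(u))."""
    q, phi = norm.ppf, norm.pdf
    integrand = lambda v, w: (min(w, v) - w * v) / (phi(q(w)) * phi(q(v)))
    val, _ = dblquad(integrand, a1, 1.0 - b1, lambda w: a1, lambda w: 1.0 - b1)
    return val

# Under the assumed Gamma(1,1) = 1/(1 - a1 - b1)^2, Gamma(1,1) * Lambda_111
# recovers the classical asymptotic variance of the symmetric trimmed mean.
g11 = 1.0 / (1.0 - 0.05 - 0.05) ** 2
avar_trimmed_mean = g11 * lambda_111(0.05, 0.05)
```

For 5% symmetric trimming of the standard normal this evaluates to roughly 1.026, matching the textbook trimmed-mean asymptotic variance $(1-2\alpha)^{-2}\big[\int_{-c}^{c} t^2\, d\Phi(t) + 2\alpha c^2\big]$ with $c = \Phi^{-1}(0.95)$, which supports the assumed form of K.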
As defined in Corollary 2, the entries of the matrix DT= [dij]2 i,j=1,are obtained by differentiating the functions gifrom Eqs. ( 29): d11= 1−c1(a1,b1)∂g2 ∂/hatwideT1, d12=−c1(a1,b1)∂g2 ∂/hatwideT2. The two entries d21andd22depend on the sign of the /hatwideσTas seen in Eq. ( 37). That is, if/hatwideσT=/hatwideσ−, then d− 21=∂g2 ∂/hatwideT1=ηr/hatwideT1/radicalig η(a1,b2)/parenleftig /hatwideT2−ηr/hatwideT2 1/parenrightig−1 2+c1(a1,b1)−c1(a2,b2) η(a1,b2), (41) d− 22=∂g2 ∂/hatwideT2=−1 2/radicalig η(a1,b2)/parenleftig /hatwideT2−ηr/hatwideT2 1/parenrightig−1 2. (42) And, if/hatwideσT=/hatwideσ+,then d+ 21=∂g2 ∂/hatwideT1=−ηr/hatwideT1/radicalig η(a1,b2)/parenleftig /hatwideT2−ηr/hatwideT2 1/parenrightig−1 2+c1(a1,b1)−c1(a2,b2) η(a1,b2), (43) d+ 22=∂g2 ∂/hatwideT2=1 2/radicalig η(a1,b2)/parenleftig /hatwideT2−ηr/hatwideT2 1/parenrightig−1 2. (44) 14 Lemma 1. Define D− T=/bracketleftigg d11d12 d− 21d− 22/bracketrightigg andD+ T=/bracketleftigg d11d12 d+ 21d+ 22/bracketrightigg , then it follows that det (D− T)+det(D+ T) = 0. We will be using the Jacobian matrix DTto compute the ARE as presented
in Eq. (17), where $\Sigma_{\mathcal{C}} = D_T \Sigma_T D_T'$. Since both $D_T$ and $\Sigma_T$ are 2 × 2 square matrices, it follows from Schott (2017, Theorem 1.7) that $\det(D_T \Sigma_T D_T') = (\det(D_T))^2 \det(\Sigma_T)$. Therefore, by the property established in Lemma 1, choosing either $D_T^-$ or $D_T^+$ for the Jacobian matrix $D_T$ does not affect the final ARE calculation. Thus, without loss of generality, we proceed with $D_T = D_T^+$. Further, define
$$\Omega := \big| \sigma\big(c_2(a_2,b_2) - c_1(a_1,b_1)\,c_1(a_2,b_2)\big) + \theta\big(c_1(a_2,b_2) - c_1(a_1,b_1)\big) \big|,$$
then it follows that
$$d_{11} = \frac{\partial g_1}{\partial \widehat{T}_1}\bigg|_{(T_1,T_2)} = \frac{\eta_r}{\Omega}\big[\sigma c_1^2(a_1,b_1) + \theta c_1(a_1,b_1)\big] + \frac{c_2(a_2,b_2) - c_1(a_1,b_1)\,c_1(a_2,b_2)}{\eta(a_1,b_2)},$$
$$d_{12} = \frac{\partial g_1}{\partial \widehat{T}_2}\bigg|_{(T_1,T_2)} = -\frac{c_1(a_1,b_1)}{2\Omega},$$
$$d_{21} = \frac{\partial g_2}{\partial \widehat{T}_1}\bigg|_{(T_1,T_2)} = -\frac{1}{\Omega\,\eta(a_1,b_2)}\big[\Omega\big(c_1(a_2,b_2) - c_1(a_1,b_1)\big) + \big(\theta + \sigma c_1(a_1,b_1)\big)\eta(a_2,b_2)\big],$$
$$d_{22} = \frac{\partial g_2}{\partial \widehat{T}_2}\bigg|_{(T_1,T_2)} = \frac{1}{2\Omega}.$$
Therefore, we get
$$\big(\widehat{\theta}_T, \widehat{\sigma}_T\big) \sim \mathcal{AN}\!\left((\theta, \sigma), \tfrac{1}{n} S_T\right), \quad \text{where } S_T = D_T \Sigma_T D_T'. \qquad (45)$$
Thus, from Eq. (17) and Eq. (45), we have
$$\mathrm{ARE}\big(\big(\widehat{\theta}_T, \widehat{\sigma}_T\big), \big(\widehat{\theta}_{\mathrm{MLE}}, \widehat{\sigma}_{\mathrm{MLE}}\big)\big) = \big(\det(S_{\mathrm{MLE}})/\det(S_T)\big)^{0.5}. \qquad (46)$$
Numerical values of the AREs, computed using Eq. (46), are reported in Table 1 for various trimming proportions satisfying inequality (25), under a normal distribution with fixed σ = 3 and varying location parameter θ. The corresponding ARE curve is shown in the top panel of Figure 1 (solid black line) for trimming proportions (a1, b1) = (0.05, 0.05) and (a2, b2) = (0.00, 0.10).

Table 1: From N(θ, σ² = 3²), varying the location parameter θ. Inequality used: (25).
(a1,b1)      (a2,b2)       θ = −25   −15     −10     −5      0       5       10      15      25
(0.02,0.02)  (0.02,0.02)   0.943   0.943   0.943   0.943   0.943   0.943   0.943   0.943   0.943
(0.02,0.02)  (0.00,0.04)   0.903   0.931   0.944   0.952   0.946   0.903   0.794   0.599   0.121
(0.05,0.05)  (0.05,0.05)   0.872   0.872   0.872   0.872   0.872   0.872   0.872   0.872   0.872
(0.05,0.05)  (0.00,0.10)   0.878   0.890   0.897   0.901   0.883   0.746   0.206   0.334   0.650
(0.10,0.10)  (0.10,0.10)   0.769   0.769   0.769   0.769   0.769   0.769   0.769   0.769   0.769
(0.10,0.10)  (0.00,0.20)   0.851   0.850   0.849   0.842   0.805   0.330   0.684   0.797   0.831
(0.15,0.15)  (0.15,0.15)   0.676   0.676   0.676   0.676   0.676   0.676   0.676   0.676   0.676
(0.15,0.15)  (0.00,0.30)   0.812   0.809   0.806   0.797   0.753   0.249   0.788   0.810   0.815

Due to space constraints, we omit the ARE table for the reverse setting where θ = 5 is fixed and σ varies, but the associated curve is displayed in the bottom panel of Figure 1 (solid black line), using the same trimming configuration. Although similar tables can be generated for other trimming proportions under inequality (31), we include only the corresponding ARE curves. For the case with σ = 3 fixed and varying θ, the top panel of Figure 1 shows the result as a dash-dot blue curve. Likewise, when θ = 5 is fixed and σ varies, the bottom panel presents the corresponding ARE curve using the same line style.

Several key findings emerge from Table 1 and Figure 1. In Figure 1, the horizontal red dotted line represents the ARE value obtained from a normal distribution using identical trimming proportions for both moments, i.e., (a1, b1) = (a2, b2) = (0.05, 0.05). When the location parameter θ is close to zero, the AREs corresponding to trimming inequalities (25) or (31) exceed those obtained under equal trimming for both moments. This observation supports the primary motivation of this study: when the data and underlying model are approximately symmetric about the origin, applying the same trimming to both moments can unintentionally exclude different sets of observations, thereby reducing efficiency (see Note 3).
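The determinant-based ARE of Eq. (46), together with the Jacobian-sign invariance argued via Lemma 1 and Schott (2017, Theorem 1.7), can be checked numerically. The matrices below are arbitrary illustrative values, not quantities taken from the paper:

```python
import numpy as np

def are(S_mle, S_C, k=2):
    # Eq. (17)/(46): ARE = (det(S_MLE) / det(S_C))^(1/k)
    return (np.linalg.det(S_mle) / np.linalg.det(S_C)) ** (1.0 / k)

# det(D Sigma D') = (det D)^2 det(Sigma), so the ARE is unchanged whether the
# Jacobian branch D_T^- or D_T^+ is used (their determinants differ in sign).
D = np.array([[1.0, 0.5],
              [0.2, 1.5]])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
lhs = np.linalg.det(D @ Sigma @ D.T)
rhs = np.linalg.det(D) ** 2 * np.linalg.det(Sigma)
```

For instance, an estimator whose covariance is 4 times that of the MLE in both coordinates has ARE $(1/16)^{1/2} = 0.25$.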
In contrast, under inequalities ( 25) or (31), asymmetric trimming may preserve informative
observations for specific moments, leading to higher efficiency. However, when |θ| becomes large, trimming inequalities (25) and (31) do not trim the same data points across moments, unlike equal trimming, which can lead to reduced efficiency and lower AREs.

Table 1 also illustrates that the ARE generally decreases with increasing trimming proportions. Nonetheless, the decline in ARE is slower under asymmetric trimming (inequalities (25) and (31)) compared to equal trimming. In other words, the ARE curves in Figure 1 exhibit similar shapes across trimming levels, but their height decreases more gradually under asymmetric configurations, suggesting a more stable robustness-efficiency trade-off.

[Figure 1 about here. Caption: Normal ARE curves under trimming inequalities (25) or (31). Legend: ARE1(θ-varies, σ = 3) and ARE1(θ = 5, σ-varies) use (a1, b1) = (0.05, 0.05), (a2, b2) = (0.00, 0.10); ARE2(θ-varies, σ = 3) and ARE2(θ = 5, σ-varies) use (a1, b1) = (0.05, 0.05), (a2, b2) = (0.10, 0.00).]

Importantly, the lowest ARE values occur when $T_2 - \eta_r T_1^2 \to 0^+$, indicating near-singularity in the scale estimation formula. If the true values of θ and σ result in $T_2 - \eta_r T_1^2 \to 0^+$, this may lead to $\widehat{T}_2 - \eta_r \widehat{T}_1^2 < 0$, or equivalently, $\widehat{T}_2/\widehat{T}_1^2 < \eta_r$, making the estimation unstable. For instance, with fixed θ = 5 and varying σ (Figure 1, bottom panel), the ARE drops sharply as $T_2 - \eta_r T_1^2 \to 0^+$ under the trimming inequality (25). Otherwise, for both trimming inequalities, the ARE stabilizes as σ increases, showing that the MTM method performs reliably even with widely dispersed or contaminated data.
For a fixed negative value, such as θ = −5, the ARE curves in Figure 1 (bottom panel) retain their shape, but the roles of the black and blue curves are reversed.

3.2 Fréchet Model

As a member of the location-scale family of distributions, the pdf and cdf of the Fréchet distribution are, respectively, given by
$$f(x) = \frac{\alpha}{\sigma}\left(\frac{x-\theta}{\sigma}\right)^{-(\alpha+1)} \exp\!\left[-\left(\frac{\sigma}{x-\theta}\right)^{\alpha}\right], \quad x > \theta, \qquad F(x) = \exp\!\left[-\left(\frac{\sigma}{x-\theta}\right)^{\alpha}\right],$$
where $-\infty < \theta < \infty$, σ > 0, and α > 0 are, respectively, the location, scale, and shape parameters. To investigate the performance of the general MTM for strictly positive distributions, we set the location parameter θ = 0. Thus, we are left to estimate the scale parameter, σ, and the shape parameter, α, of the distribution. Furthermore, for ease of estimation, instead of directly estimating the shape parameter α, we estimate the tail index, defined as its reciprocal, β := 1/α. The moments and quantile function are given by
$$\mathbb{E}\big[X^k\big] = \sigma^k \Gamma(1 - k\beta), \quad k\beta < 1, \qquad \text{and} \qquad F^{-1}(u) = \sigma(-\log(u))^{-\beta}.$$
Unlike for a location-scale member, we take the two functions
$$h_1(x) = \log(x) \quad \text{and} \quad h_2(x) = (\log(x))^2. \qquad (47)$$
Thus, with $H_j := h_j \circ F^{-1}$, it follows that
$$H_1(u) = h_1\big(F^{-1}(u)\big) = \log(\sigma) - \beta\log(-\log(u)), \qquad (48)$$
$$\implies H_1'(u) = -\frac{\beta}{u\log(u)}, \qquad (49)$$
$$H_2(u) = h_2\big(F^{-1}(u)\big) = (\log(\sigma))^2 - 2\beta\log(\sigma)\log(-\log(u)) + \beta^2(\log(-\log(u)))^2, \qquad (50)$$
$$\implies H_2'(u) = \frac{2\beta^2\log(-\log(u))}{u\log(u)} - \frac{2\beta\log(\sigma)}{u\log(u)} = \frac{2\beta}{u\log(u)}\big(\beta\log(-\log(u)) - \log(\sigma)\big). \qquad (51)$$

[Figure 2 about here: Fréchet(β, σ = 2) densities for β ∈ {0.2, 0.5, 1, 2, 5}.] Figure 2: Fréchet densities. A larger β (i.e., a smaller shape parameter α) implies a
heavier tail. This is becauseS(x;σ,β1)≤S(x;σ,β2)for allx≥0andβ1≤β2, whereSdenotes the survival function. With the trimming proportions (a1,b1)and(a2,b2)as given in ( 25), then from Eq. ( 1), the first two sample trimmed-moments are given by:   /hatwideT1=1 n−⌊na1⌋−⌊nb1⌋n−⌊nb1⌋/summationdisplay i=⌊na1⌋+1h1(Xi:n) =1 n−⌊na1⌋−⌊nb1⌋n−⌊nb1⌋/summationdisplay i=⌊na1⌋+1log(Xi:n), /hatwideT2=1 n−⌊na2⌋−⌊nb2⌋n−⌊nb2⌋/summationdisplay i=⌊na2⌋+1h2(Xi:n) =1 n−⌊na2⌋−⌊nb2⌋n−⌊nb2⌋/summationdisplay i=⌊na2⌋+1(log(Xi:n))2.(52) The corresponding first two population trimmed-moments usi ng Eq. ( 2) takes the form:   T1=1 1−a1−b1/integraldisplayb1 a1H1(u)du= log(σ)−βκ1(a1,b1), T2=1 1−a2−b2/integraldisplayb2 a2H2(u)du= (log(σ))2−2βlog(σ)κ1(a2,b2)+β2κ2(a2,b2),(53) where κk(a,b) :=1 b−a/integraldisplayb a[∆(u)]kdu,∆(u) := log(−log(u)), k≥1, (54) 19 do not depend on the parameters βandσto be estimated. Setting(T1,T2) =/parenleftig /hatwideT1,/hatwideT2/parenrightig from Eqs. ( 52) and ( 53), and solving for βandσ, we get the explicit trimmed L-estimators as   /hatwideβT=±1/radicalig ζ(a1,b2)/parenleftig /hatwideT2−ζr/hatwideT2 1/parenrightig1/2 +/hatwideT1/parenleftbig κ1(a2,b2)−κ1(a1,b1)/parenrightbig ζ(a1,b2)=:g1/parenleftig /hatwideT1,/hatwideT2/parenrightig , /hatwideσT= exp/braceleftig /hatwideT1+/hatwideβTκ1(a1,b1)/bracerightig =:g2/parenleftig /hatwideT1,/hatwideT2/parenrightig ,(55) where, as in Eq. ( 30), and for 1≤i,j≤2, we define ζ(ai,bj) :=κ2 1(ai,bi)−2κ1(ai,bi)κ1(aj,bj)+κ2(aj,bj),1≤i,j≤2, ζr:=ζ(aj,bj) ζ(ai,bj).(56) We now summarize some results, similar to those in Propositi on1, in Corollary 2. Corollary 2. With the trimming proportions (ai,bi)and(aj,bj)satisfying the inequality (8), it follows that (i)ζ/parenleftbig ai,bj/parenrightbig =κ2 1/parenleftbig ai,bi/parenrightbig −2κ1/parenleftbig ai,bi/parenrightbig κ1/parenleftbig aj,bj/parenrightbig +κ2/parenleftbig aj,bj/parenrightbig >0. 
(ii)κk/parenleftbig ai,bi/parenrightbig ≤κk/parenleftbig aj,bj/parenrightbig ,for any odd positive integer k. (iii)0< ζr=ζ(aj,bj) ζ(ai,bj)≤1. (iv)κ2/parenleftbig aj,bj/parenrightbig ≥ζrκ2 1/parenleftbig ai,bi/parenrightbig . (v)T2−ζrT2 1≥0. Proof. See Appendix A. Similar to Corollary 1, we have the following result. Corollary 3. Under the trimming inequality (15), that is, (31)for Fréchet model, all results of Corollary 2remain valid except for Part (ii), which instead takes the form κk/parenleftbig ai,bi/parenrightbig ≥κk/parenleftbig aj,bj/parenrightbig , for any odd positive integer k. Note 8. Define βFT=1/radicalig ζ(a1,b2)/parenleftbig T2−ζrT2 1/parenrightbig1/2andβST=T1/parenleftbig κ1(a2,b2)−κ1(a1,b1)/parenrightbig ζ(a1,b2). (57) 20 Similarly as mentioned in Note 5, under the trimming inequality (25), a consistent directional relationship between βFTandβSTis not guaranteed. For illustration, consider X∼Fréchet(β= 2,σ= 3). Then, for (a1,b1) = (0.02,0.02)and(a2,b2) = (0.00,0.03), we have βFT= 1.860>0.139 =βST. However, for (a1,b1) = (0.02,0.02)and(a2,b2) = (0.00,0.20), we obtain βFT= 0.738<1.262 =βST. As in Section 3.1, the trimmed estimators/parenleftig /hatwideβT,/hatwideσT/parenrightig in Eq. ( 55) are derived from Eq. ( 53), ensuring that one of the solutions from the ±branch for βis theoretically positive. However, for finite samples, it is possible that /hatwideβT<0,in which case the proposed trimmed L-moment estimation is invalid. To determine the correct sign of /hatwideβT, in Eq. ( 55), we adopt the following strategy. As established in Corollary 2, it holds that T2−ζrT2 1≥0.Therefore, assuming/parenleftig /hatwideT2−ηr/hatwideT2 1/parenrightig1/2 ≥0, we consider /hatwideβFT=1/radicalig ζ(a1,b2)/parenleftig /hatwideT2−ζr/hatwideT2 1/parenrightig1/2 and/hatwideβST=/hatwideT1/parenleftbig κ1(a2,b2)−κ1(a1,b1)/parenrightbig ζ(a1,b2). 
(58) But fora1/ne}ationslash=a2orb1/ne}ationslash=b2, it is possible to have /hatwideT2−ζr/hatwideT2 1<0,i.e.,/hatwideT2//hatwideT2 1< ζr,and in that case we consider /hatwideβFT=±1/radicalig ζ(a1,b2)/parenleftig/vextendsingle/vextendsingle/vextendsingle/hatwideT2−ζr/hatwideT2 1/vextendsingle/vextendsingle/vextendsingle/parenrightig1/2 and/hatwideβST=/hatwideT1/parenleftbig κ1(a2,b21)−κ1(a1,b1)/parenrightbig ζ(a1,b2). (59) Therefore, for Fréchet models, we have /hatwideβFT≥0,by construction. In contrast, /hatwideβSTmay take any real value, i.e., /hatwideβST∈R. If/hatwideβFT</hatwideβST,we define/hatwideβ−:=−/hatwideβFT+/hatwideβST>0.Similarly, if/hatwideβFT>/vextendsingle/vextendsingle/vextendsingle/hatwideβST/vextendsingle/vextendsingle/vextendsingle,we define/hatwideβ+:=/hatwideβFT+/hatwideβST> 0.Finally, if/hatwideβFT≤/vextendsingle/vextendsingle/vextendsingle/hatwideβST/vextendsingle/vextendsingle/vextendsingleand/hatwideβST≤0,then the method fails to yield a nonnegative estimate of /hatwideβT≥0. 21 Algorithm 2: Trimmed Estimation of/parenleftig /hatwideβT,/hatwideσT/parenrightig in Fréchet Model Input: Sample data and trimming proportions (a1,b1),(a2,b2)satisfying ( 25) or (31). Output: Trimmed estimator vector/parenleftig /hatwideβT,/hatwideσT/parenrightig . 1: Compute ; 2:/hatwideT1and/hatwideT2from Eq. ( 52); 3:κ1(a1,b1), κ1(a2,b2), andκ2(a2,b2)from Eq. ( 54); 4:ζ(a1,b2), ζ(a2,b2), andζrfrom Eq. ( 56); 5:/hatwideβFTand/hatwideβSTfrom Eq. ( 58) or Eq. (
59); 6:/hatwideβMLEfrom Eq. ( 60); 7: Set/hatwideβ+←/hatwideβFT+/hatwideβSTand/hatwideβ−←−/hatwideβFT+/hatwideβST; 8: ifa1=a2andb1=b2then 9:/hatwideβST←0, resulting in /hatwideβT←/hatwideβFT; 10: return /hatwideβT; 11: end 12: else if (a2≤a1andb1≤b2)or(a1≤a2andb2≤b1)then 13: if max/braceleftig /hatwideβ−,/hatwideβ+/bracerightig ≤0then /* Exit and update trimming proportions to ensure /hatwideβT>0. */ 14: Ensure max/braceleftig /hatwideβ−,/hatwideβ+/bracerightig >0; 15: end 16: else if /hatwideβ−≤0and/hatwideβ+>0then 17:/hatwideβT←/hatwideβ+; 18: end 19: else if /hatwideβ−>0and/hatwideβ+≤0then 20:/hatwideβT←/hatwideβ−; 21: end 22: else if min/braceleftig /hatwideβ−,/hatwideβ+/bracerightig ≥0then 23: if|/hatwideβ−−/hatwideβMLE|<|/hatwideβ+−/hatwideβMLE|then 24: /hatwideβT←/hatwideβ− 25: end 26: else 27: /hatwideβT←/hatwideβ+; 28: end 29: end 30: end 31: return/hatwideβT; 32:/hatwideσT←exp/braceleftig /hatwideT1+/hatwideβTκ1(a1,b1)/bracerightig ; 33: return/parenleftig /hatwideβT,/hatwideσT/parenrightig ; • Ifa1=a2=aandb1=b2=b, then it follows that ζ(a1,b2) =η(a2,b2),that is,ζr= 1. Also, 22 κ1(a1,b1)−κ1(a2,b2) = 0.Thus, /hatwideβT=/hatwideβ+=/parenleftigg/hatwideT2−/hatwideT2 1 κ2(a,b)−κ1(a,b)2/parenrightigg1/2 . • Ifa1/ne}ationslash=a2orb1/ne}ationslash=b2, selecting the appropriate sign in Eq. ( 55) becomes nontrivial. To proceed, and following Nawa and Nadarajah (2025, Eq. (6)), the L-moment estimate of βis given by /hatwideβLM=log(2)+log(/summationtextn i=1(i−1)Xi:n)−log/parenleftbig n(n−1)X/parenrightbig log(2), but this closed-form expression is valid only if β <1. Thus, instead of relying on this restricted formula, we treat βas a tunable parameter and estimate it using the method of max imum like- lihood. 
Following Bücher and Segers (2018),/hatwideβMLEis the unique root of the strictly increasing function ξ(β) =β+n/summationdisplay i=1x−1/β ilog(xi)/parenleftiggn/summationdisplay i=1x−1/β i/parenrightigg−1 −n−1n/summationdisplay i=1log(xi),i.e.,ξ/parenleftig /hatwideβMLE/parenrightig = 0.(60) The maximum likelihood estimator of σis given by /hatwideσMLE=/parenleftigg 1 nn/summationdisplay i=1x−1//hatwideβMLE i/parenrightigg−/hatwideβMLE . (61) Ifmax/braceleftig /hatwideβ−,/hatwideβ+/bracerightig ≤0,there is no valid positive solution for the tail index parame terβ. Otherwise, if /hatwideβ−≤0and/hatwideβ+>0,we set/hatwideβT=/hatwideβ+.Similarly, if/hatwideβ−>0and/hatwideβ+≤0,we set/hatwideβT=/hatwideβ−.Finally, if min/braceleftig /hatwideβ−,/hatwideβ+/bracerightig ≥0,we select the estimate closer to the maximum likelihood estimate. That is, /hatwideβT=  /hatwideβ−,if/vextendsingle/vextendsingle/vextendsingle/hatwideβ−−/hatwideβMLE/vextendsingle/vextendsingle/vextendsingle</vextendsingle/vextendsingle/vextendsingle/hatwideβ+−/hatwideβMLE/vextendsingle/vextendsingle/vextendsingle, /hatwideβ+,if/vextendsingle/vextendsingle/vextendsingle/hatwideβ+−/hatwideβMLE/vextendsingle/vextendsingle/vextendsingle≤/vextendsingle/vextendsingle/vextendsingle/hatwideβ−−/hatwideβMLE/vextendsingle/vextendsingle/vextendsingle.(62) Note 9. Heuristically, the shape parameter αof a distribution is often approximately inversely proportional to the coefficient of variation (CV). Thus, defin ingβ=α−1, we have that βis ap- proximately proportional to the CV. Therefore, to solve Eq. (60)numerically, we can initialize the iterative algorithm as βstart=/hatwidestCV. A formal procedure for estimating the parameter vector/parenleftig /hatwideβT,/hatwideσT/parenrightig is summarized in Algorithm 2. 23 We now calculate ΣTby using Theorem 1. 
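Equations (60)-(61) and the CV-based initialization of Note 9 can be sketched as follows; the bracket-growing step and Brent root search are our illustrative numerical choices, not necessarily the authors' scheme:

```python
import numpy as np
from scipy.optimize import brentq

def frechet_beta_mle(x):
    """Solve xi(beta) = 0 for the Frechet tail index (Eq. (60), Buecher and
    Segers, 2018). Since xi is strictly increasing, a sign-changing bracket
    grown around the CV-based initial guess of Note 9 suffices."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)

    def xi(beta):
        w = x ** (-1.0 / beta)
        return beta + (w @ logx) / w.sum() - logx.mean()

    b0 = np.std(x) / np.mean(x)   # Note 9: beta is roughly proportional to CV
    lo, hi = b0 / 2.0, b0 * 2.0
    while xi(lo) > 0:             # widen the bracket until it straddles 0
        lo /= 2.0
    while xi(hi) < 0:
        hi *= 2.0
    return brentq(xi, lo, hi)

def frechet_sigma_mle(x, beta):
    # Eq. (61): sigma_hat = ((1/n) * sum_i x_i^(-1/beta))^(-beta)
    return np.mean(np.asarray(x, dtype=float) ** (-1.0 / beta)) ** (-beta)
```

A sample from Fréchet(β, σ) with θ = 0 can be drawn by inversion as $x = \sigma(-\log u)^{-\beta}$ for uniform u, which makes the root-finder easy to validate against known parameter values.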
That is, σ2 11= Γ(1,1)V(1,1) = Γ(1,1)/integraldisplayb1 a1/integraldisplayb1 a1K(w,v)H′ 1(w)H′ 1(v)dvdw =β2Γ(1,1)/integraldisplayb1 a1/integraldisplayb1 a1K(w,v) vwlog(v) log(w)dvdw =β2Ψ111, σ2 12= Γ(1,2)V(1,2) = Γ(1,2)/integraldisplayb1 a1/integraldisplayb2 a2K(w,v)H′ 1(w)H′ 2(v)dvdw = 2β2Γ(1,2)/integraldisplayb1 a1/integraldisplayb2 a2K(w,v) vwlog(v)log(w)(log(σ)−βlog(−log(v)))dvdw = 2β2log(σ)Ψ121−2β3Ψ122, (63) σ2 22= Γ(2,2)V(2,2) = Γ(2,2)/integraldisplayb2 a2/integraldisplayb2 a2K(w,v)H′ 2(w)H′ 2(v)dvdw = 4β2Γ(2,2)/integraldisplayb2 a2/integraldisplayb2 a2K(w,v) vwlog(v)log(w)(βlog(−log(w))−log(σ))(βlog(−log(v))−log(σ))dvdw = 4β2(log(σ))2Ψ221−8β3log(σ)Ψ222+4β4Ψ223, (64) where the double integral terms Ψijk, for1≤i,j≤2and1≤k≤3, do not depend on the parameters to be estimated. The corresponding simplified si ngle-integral expressions, derived using Theorem 1, are provided in Appendix A. The Jacobian matrix DTis found by differentiating the functions g1andg2from Eq. ( 55) and then evaluating its derivatives at (T1,T2). That is, d11=∂g1 ∂/hatwideT1/vextendsingle/vextendsingle/vextendsingle/vextendsingle (T1,T2)=−ζrT1/radicalig ζ(a1,b2)/parenleftbig T2−ζrT2 1/parenrightbig−1 2+κ1(a2,b2)−κ1(a1,b1) ζ(a1,b2), (65) d12=∂g1 ∂/hatwideT2/vextendsingle/vextendsingle/vextendsingle/vextendsingle (T1,T2)=1 2/radicalig ζ(a1,b2)/parenleftbig T2−ζrT2 1/parenrightbig−1 2, (66) d21=∂g2 ∂/hatwideT1/vextendsingle/vextendsingle/vextendsingle/vextendsingle (T1,T2)=/parenleftbig 1+d11κ1/parenleftbig a1,b1/parenrightbig/parenrightbig exp/braceleftbig T1+βκ1(a1,b1)/bracerightbig =σ/parenleftbig 1+d11κ1/parenleftbig a1,b1/parenrightbig/parenrightbig ,(67) d22=∂g2 ∂/hatwideT2/vextendsingle/vextendsingle/vextendsingle/vextendsingle (T1,T2)=d12κ1/parenleftbig a1,b1/parenrightbig exp/braceleftbig T1+βκ1(a1,b1)/bracerightbig =σd12κ1/parenleftbig a1,b1/parenrightbig . 
Thus, we get
$$\big(\hat\beta_T,\hat\sigma_T\big) \sim AN\Big((\beta,\sigma),\ \tfrac{1}{n}S_T\Big), \quad\text{where } S_T = D_T\Sigma_T D_T'. \tag{69}$$
Again from Bücher and Segers (2018), we can derive
$$\big(\hat\beta_{MLE},\hat\sigma_{MLE}\big) \sim AN\Big((\beta,\sigma),\ \tfrac{1}{n}S_{MLE}\Big), \tag{70}$$
where
$$S_{MLE} = \frac{6}{\pi^2}\begin{bmatrix} \beta^2 & (1-\gamma)\sigma\beta^2 \\ (1-\gamma)\sigma\beta^2 & (\sigma\beta)^2\big((\gamma-1)^2+\pi^2/6\big) \end{bmatrix}, \quad\text{giving}\quad \det(S_{MLE}) = \frac{6\beta^4\sigma^2}{\pi^2}, \tag{71}$$
and $\gamma := -\Gamma'(1) = 0.57721566490\ldots$ is the Euler–Mascheroni constant. Finally,
https://arxiv.org/abs/2505.09860v1
from Eq. (17) and Eq. (69), we have
$$ARE\big(\big(\hat\beta_T,\hat\sigma_T\big),\big(\hat\beta_{MLE},\hat\sigma_{MLE}\big)\big) = \big(\det(S_{MLE})/\det(S_T)\big)^{0.5}. \tag{72}$$

Table 2: ARE values for the Fréchet(β, σ = 2) model as the tail index β (the reciprocal of the shape parameter) varies. A larger β (i.e., a smaller shape parameter α) implies a heavier tail. Trimming inequality (25) is used.

  Proportions                              β
  (a1,b1)      (a2,b2)        0.1    0.2    0.5    1      2      5      10     15     25
  (0.02,0.02)  (0.02,0.02)    0.771  0.771  0.771  0.771  0.771  0.771  0.771  0.771  0.771
  (0.02,0.02)  (0.00,0.04)    0.259  0.633  0.786  0.815  0.827  0.833  0.834  0.835  0.835
  (0.05,0.05)  (0.05,0.05)    0.754  0.754  0.754  0.754  0.754  0.754  0.754  0.754  0.754
  (0.05,0.05)  (0.00,0.10)    0.458  0.004  0.610  0.759  0.809  0.833  0.840  0.842  0.844
  (0.10,0.10)  (0.10,0.10)    0.693  0.693  0.693  0.693  0.693  0.693  0.693  0.693  0.693
  (0.10,0.10)  (0.00,0.20)    0.760  0.624  0.036  0.560  0.736  0.802  0.819  0.824  0.828
  (0.15,0.15)  (0.15,0.15)    0.623  0.623  0.623  0.623  0.623  0.623  0.623  0.623  0.623
  (0.15,0.15)  (0.00,0.30)    0.812  0.762  0.439  0.296  0.674  0.786  0.810  0.817  0.822

As in Section 3.1, several key findings emerge from Table 2 and Figure 3. In Figure 3, the horizontal red dotted line represents the ARE value obtained using identical trimming proportions for both moments, i.e., (a1,b1) = (a2,b2) = (0.05,0.05). Unlike the location-scale model, where the distribution may be symmetric about the origin, the Fréchet model considered here is strictly positive. Therefore, trimming inequalities (25) and (31) lead to different subsets of data being excluded for different moments. As shown in the top panel of Figure 3, and under trimming inequality (25), the ARE values remain higher than those from equal trimming when $T_2 - \zeta_r T_1^2 \gg 0$. For example, the ARE drops sharply at approximately β = 0.2 and σ = 2, where $T_2 - \zeta_r T_1^2 = 5.2052\times 10^{-7}$.
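Equations (71) and (72) are straightforward to code. The sketch below (an illustrative helper, not from the paper) builds S_MLE, checks its determinant against the closed form 6β⁴σ²/π², and evaluates the ARE of Eq. (72) for a supplied det(S_T).

```python
import math

GAMMA = 0.57721566490153286  # Euler–Mascheroni constant, gamma = -Gamma'(1)

def s_mle(beta, sigma):
    # S_MLE from Eq. (71)
    c = 6.0 / math.pi ** 2
    return [
        [c * beta ** 2, c * (1 - GAMMA) * sigma * beta ** 2],
        [c * (1 - GAMMA) * sigma * beta ** 2,
         c * (sigma * beta) ** 2 * ((GAMMA - 1) ** 2 + math.pi ** 2 / 6)],
    ]

def det2(m):
    # determinant of a 2x2 matrix given as nested lists
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def are(det_s_t, beta, sigma):
    # Eq. (72): ARE = (det S_MLE / det S_T)^{1/2}
    return (det2(s_mle(beta, sigma)) / det_s_t) ** 0.5

beta, sigma = 2.0, 2.0
closed_form = 6 * beta ** 4 * sigma ** 2 / math.pi ** 2  # Eq. (71)
```

Feeding `closed_form` in as det(S_T) returns an ARE of 1, which is the expected sanity check: an estimator with the MLE's asymptotic covariance has full efficiency.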
Figure 3: Fréchet ARE curves under trimming inequalities (25) or (31). Top panel (vertical axis: ARE, β varies): ARE1(β-varies, σ = 2) with (a1,b1) = (0.05,0.05), (a2,b2) = (0.00,0.10), and ARE2(β-varies, σ = 2) with (a1,b1) = (0.05,0.05), (a2,b2) = (0.10,0.00). Bottom panel (σ varies): ARE1(β = 2, σ-varies) and ARE2(β = 2, σ-varies) with the same trimming proportions.

For β ≥ 1 and σ = 2, the ARE values under trimming inequality (25) exceed those under equal trimming, as seen by the black curve lying above the red dotted line. This indicates that for heavier-tailed Fréchet distributions, using distinct trimming proportions for each moment can yield higher efficiency than applying the same trimming to both. In contrast, the ARE values under trimming inequality (31) remain consistently flat, because the scenario $T_2-\zeta_r T_1^2\to 0^+$ rarely occurs under this configuration. The interpretation of the bottom panel of Figure 3 is similar: the ARE curve corresponding to trimming inequality (25) decreases with increasing σ until $T_2-\zeta_r T_1^2\to 0^+$, after which it begins to rise again, while the ARE curve under trimming inequality (31) remains nearly constant throughout.

4 Simulation Study

This section complements the theoretical results developed in the previous sections with simulation studies. The primary objectives are to determine the sample size required for the estimators to become effectively unbiased (given their asymptotic unbiasedness), to validate their asymptotic normality, and to evaluate their finite-sample relative efficiencies (REs) in relation to their corresponding asymptotic relative efficiencies (AREs). To compute the RE of MTM estimators, we use the MLE as the benchmark. Accordingly, the definition of ARE in Eq. (17) is adapted for finite-sample performance as follows:
$$RE(MTM, MLE) = \frac{\text{asymptotic variance of the MLE estimator}}{\text{small-sample variance of the competing MTM estimator}}, \tag{73}$$
where the numerator is defined in Eq. (17), and the denominator is given by
$$\left(\det\begin{bmatrix} E\big[(\hat\theta-\theta)^2\big] & E\big[(\hat\theta-\theta)(\hat\sigma-\sigma)\big] \\ E\big[(\hat\theta-\theta)(\hat\sigma-\sigma)\big] & E\big[(\hat\sigma-\sigma)^2\big] \end{bmatrix}\right)^{1/2}.$$

From a specified distribution F, we generate 10,000 samples of a given length n using Monte Carlo simulation. For each sample, we estimate the parameters of F using various T-estimators under trimming inequality (25) or (31), and compute the sample mean and relative efficiency (RE) from the 10,000 replicates. This procedure is repeated 10 times, and the averages of the resulting 10 means and 10 REs, along with their standard deviations, are reported.

We start the simulation study with the normal distribution N(θ = 0.1, σ² = 5²), using the following specifications.

• Sample size: n = 100, 500, 1000.
• Estimators of θ and σ:
  – MLE,
  – MTM using trimming proportions specified by either inequality (25) or (31).

Table 3: Normal model with parameters θ = 0.1 and σ = 5.

  Estimator                        n = 100        n = 500        n = 1000       n → ∞
  (a1,b1)      (a2,b2)           θ̂/θ   σ̂/σ     θ̂/θ   σ̂/σ     θ̂/θ   σ̂/σ     θ̂/θ  σ̂/σ
  Mean values of θ̂/θ and σ̂/σ.
  MLE                            0.98   0.99    1.01   1.00    1.00   1.00    1     1
  (0.00,0.00)  (0.00,0.00)       0.98   0.99    1.01   1.00    1.00   1.00    1     1
  (0.00,0.05)  (0.00,0.05)       1.10   1.00    1.01   1.00    1.00   1.00    1     1
  (0.00,0.10)  (0.00,0.10)       1.10   1.00    1.02   1.00    1.01   1.00    1     1
  With strict trimming inequality (25).
  (0.10,0.00)  (0.05,0.05)       0.84   1.00    0.98   1.00    0.98   1.00    1     1
  (0.05,0.05)  (0.00,0.10)       1.00   0.99    1.00   1.00    1.00   1.00    1     1
  (0.10,0.10)  (0.00,0.20)       1.01   0.99    1.00   1.00    1.00   1.00    1     1
  (0.15,0.15)  (0.00,0.30)       0.97   0.99    1.01   1.00    1.01   1.00    1     1
  With strict trimming inequality (31).
  (0.00,0.10)  (0.05,0.05)       1.07   1.00    1.02   1.00    1.01   1.00    1     1
  (0.05,0.05)  (0.10,0.00)       1.00   0.99    0.99   1.00    1.00   1.00    1     1
  (0.10,0.10)  (0.20,0.00)       0.99   0.99    1.00   1.00    1.00   1.00    1     1
  (0.15,0.15)  (0.30,0.00)       1.00   0.99    1.00   1.00    1.00   1.00    1     1
  (0.25,0.50)  (0.50,0.25)       1.03   1.00    1.00   1.00    1.00   1.00    1     1
  Finite-sample efficiencies (RE) of MTMs relative to MLEs.
  MLE                            0.999          0.996          0.994          1
  (0.00,0.00)  (0.00,0.00)       0.999          0.996          0.994          1
  (0.00,0.05)  (0.00,0.05)       0.930          0.934          0.929          0.932
  (0.00,0.10)  (0.00,0.10)       0.877          0.876          0.876          0.872
  With strict trimming inequality (25).
  (0.10,0.00)  (0.05,0.05)       0.874          0.870          0.872          0.872
  (0.05,0.05)  (0.00,0.10)       0.884          0.882          0.881          0.883
  (0.10,0.10)  (0.00,0.20)       0.808          0.801          0.805          0.805
  (0.15,0.15)  (0.00,0.30)       0.753          0.748          0.752          0.752
  With strict trimming inequality (31).
  (0.00,0.10)  (0.05,0.05)       0.874          0.880          0.872          0.876
  (0.05,0.05)  (0.10,0.00)       0.888          0.889          0.881          0.884
  (0.10,0.10)  (0.20,0.00)       0.807          0.805          0.810          0.807
  (0.15,0.15)  (0.30,0.00)       0.758          0.754          0.760          0.754
  (0.25,0.50)  (0.50,0.25)       0.493          0.491          0.488          0.491

The simulation results are summarized in Table 3. All estimators in the normal case accurately recover both the location parameter θ and the scale parameter σ, becoming nearly unbiased for sample sizes as small as n = 100. The relative bias of the θ estimators under trimming inequality (25) is generally negative, while that under inequality (31) tends to be positive. This pattern aligns with the explanation in Note 3: when θ = 0.1, trimming under inequality
(25) is more likely to remove the same observations from both moments, preserving balance. In contrast, under inequality (31), where b1 ≤ b2, fewer large values are excluded from the second moment, potentially inflating the location estimate. The finite relative efficiencies (FREs) also approach their asymptotic limits, with some converging from above.

Table 4: Fréchet model with parameters β = 5 and σ = 2.

  Estimator                        n = 100        n = 500        n = 1000       n → ∞
  (a1,b1)      (a2,b2)           β̂/β   σ̂/σ     β̂/β   σ̂/σ     β̂/β   σ̂/σ     β̂/β  σ̂/σ
  Mean values of β̂/β and σ̂/σ.
  MLE                            0.99   1.17    1.00   1.03    1.00   1.01    1     1
  (0.00,0.00)  (0.00,0.00)       0.99   1.19    1.00   1.03    1.00   1.02    1     1
  (0.00,0.05)  (0.00,0.05)       1.00   1.18    1.00   1.03    1.00   1.02    1     1
  (0.00,0.10)  (0.00,0.10)       1.00   1.19    1.00   1.03    1.00   1.02    1     1
  With strict trimming inequality (25).
  (0.10,0.00)  (0.05,0.05)       1.01   1.15    1.00   1.03    1.00   1.01    1     1
  (0.05,0.05)  (0.00,0.10)       1.00   1.17    1.00   1.03    1.00   1.02    1     1
  (0.10,0.10)  (0.00,0.20)       1.00   1.18    1.00   1.03    1.00   1.02    1     1
  (0.15,0.15)  (0.00,0.30)       1.00   1.18    1.00   1.03    1.00   1.02    1     1
  With strict trimming inequality (31).
  (0.00,0.10)  (0.05,0.05)       1.00   1.19    1.00   1.03    1.00   1.02    1     1
  (0.05,0.05)  (0.10,0.00)       0.99   1.26    1.00   1.05    1.00   1.02    1     1
  (0.10,0.10)  (0.20,0.00)       0.99   1.27    1.00   1.05    1.00   1.03    1     1
  (0.15,0.15)  (0.30,0.00)       0.99   1.29    1.00   1.05    1.00   1.03    1     1
  (0.25,0.50)  (0.50,0.25)       1.00   1.23    1.00   1.04    1.00   1.02    1     1
  Finite-sample efficiencies (RE) of MTMs relative to MLEs.
  MLE                            0.729          0.934          0.971          1
  (0.00,0.00)  (0.00,0.00)       0.509          0.651          0.671          0.690
  (0.00,0.05)  (0.00,0.05)       0.626          0.803          0.831          0.856
  (0.00,0.10)  (0.00,0.10)       0.629          0.817          0.849          0.875
  With strict trimming inequality (25).
  (0.10,0.00)  (0.05,0.05)       0.448          0.588          0.606          0.627
  (0.05,0.05)  (0.00,0.10)       0.614          0.784          0.809          0.833
  (0.10,0.10)  (0.00,0.20)       0.583          0.753          0.773          0.802
  (0.15,0.15)  (0.00,0.30)       0.549          0.737          0.759          0.786
  With strict trimming inequality (31).
  (0.00,0.10)  (0.05,0.05)       0.563          0.723          0.753          0.774
  (0.05,0.05)  (0.10,0.00)       0.378          0.512          0.526          0.548
  (0.10,0.10)  (0.20,0.00)       0.348          0.467          0.487          0.509
  (0.15,0.15)  (0.30,0.00)       0.317          0.448          0.470          0.489
  (0.25,0.50)  (0.50,0.25)       0.308          0.424          0.436          0.457

Similarly, we continue the simulation study with the Fréchet distribution F(β = 5, σ = 2), using the following specifications.

• Sample size: n = 100, 500, 1000.
• Estimators of β and σ:
  – MLE,
  – MTM using trimming proportions specified by either inequality (25) or (31).

The simulation results are summarized in Table 4. As in the normal case, all estimators in the Fréchet model accurately recover both parameters β and σ as the sample size increases and the relative bias decreases. The estimator for β becomes nearly unbiased for sample sizes as small as n = 100. However, the relative bias of the σ estimators is generally lower under trimming inequality (25) than under inequality (31), which is expected. For instance, under inequality (25) with (a1,b1) = (0.05,0.05) and (a2,b2) = (0.00,0.10), a greater proportion of large observations is excluded from the second-moment calculation. In contrast, under inequality (31) with the same (a1,b1) but (a2,b2) = (0.10,0.00), the second moment retains all larger values due to the absence of right-side trimming, resulting in higher bias for the scale parameter σ. The finite relative efficiencies (FREs) also converge toward their asymptotic counterparts, though at a slower rate, with some approaching from above.

5 Real Data Analysis

In this section, we apply both the MTM and MLE approaches to analyze the normalized damage amounts from the 30 most damaging
hurricanes in the United States between 1925 and 1995, as reported by Pielke and Landsea (1998). This dataset has previously been examined using the same trimming proportions for all moments by Brazauskas et al. (2009), and using the same winsorizing proportions for all moments by Zhao et al. (2018). The damage figures were normalized to 1995 dollars, accounting for inflation, changes in personal property values, and coastal county population growth. Our objective is to assess how initial assumptions and parameter estimation methods influence model fit.

Both lognormal and Fréchet models provide a good fit to the hurricane damage data, with maximum likelihood estimates summarized in the MLE row of Table 5. The corresponding p-values are 0.9678 and 0.7611, respectively, and the fitted CDFs are shown alongside the empirical CDF in Figure 4.

Figure 4: Empirical cumulative distribution function (CDF) of the hurricane damage data (in billions of dollars) overlaid with fitted CDFs from the lognormal and Fréchet models, illustrating the diagnostic fit performance of each model.

At the 5% significance level, neither model is rejected, indicating that both are plausible for modeling the data.

To further evaluate the robustness of the proposed flexible trimming approach, we follow Brazauskas et al. (2009) and introduce a mild data modification by replacing the largest observation, 72.303, with 723.03. The resulting parameter estimates and goodness-of-fit (GOF) measures, using different trimming proportions defined by inequalities (25) and (31), are presented in Table 5 for both the original and modified datasets.
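The robustness mechanism behind this modification experiment can be illustrated in a few lines: a trimmed moment is insensitive to inflating the largest observation, while an untrimmed moment is not. The loss values below are hypothetical except for 72.303, the largest observation mentioned in the text; the trimming helper is a generic sketch, not the paper's T-estimators.

```python
def trimmed_mean(x, a, b):
    # Drop floor(n*a) smallest and floor(n*b) largest order statistics,
    # then average the remainder (a generic trimmed first moment).
    s = sorted(x)
    n = len(s)
    kept = s[int(n * a):n - int(n * b)]
    return sum(kept) / len(kept)

data = [3.5, 4.2, 5.0, 6.1, 7.3, 9.8, 12.4, 15.6, 33.1, 72.303]
modified = data[:-1] + [723.03]  # inflate the largest loss tenfold

t_orig = trimmed_mean(data, 1 / 10, 1 / 10)
t_mod = trimmed_mean(modified, 1 / 10, 1 / 10)
m_orig = sum(data) / len(data)       # untrimmed mean, original data
m_mod = sum(modified) / len(modified)  # untrimmed mean, modified data
```

The trimmed means of the two samples coincide exactly (the inflated value is discarded either way), whereas the ordinary mean is pulled sharply upward, which is the same qualitative behavior Table 5 reports for MTM versus MLE.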
The goodness-of-fit is assessed using the mean absolute deviation
$$FIT := \frac{1}{30}\sum_{j=1}^{30}\left|\log\!\Big(\hat F^{-1}\Big(\frac{j-0.5}{30}\Big)\Big) - \log\big(X_{j:30}\big)\right|$$
between the log-fitted and log-observed data (Brazauskas et al., 2009), along with AIC and BIC.

Several conclusions emerge from this analysis. First, the goodness-of-fit (GOF) statistics for most robust MTM estimators remain stable under data modification, provided the trimming proportions are nonzero. In contrast, the MLE fit changes substantially.

Table 5: Parameter estimates and goodness-of-fit measures for the lognormal and Fréchet models fitted to the original and modified hurricane loss data.

  Est.  Proportions                      Lognormal Model                 Fréchet Model
        (a1,b1)       (a2,b2)          θ̂     σ̂    FIT    AIC  BIC     β̂    σ̂*   FIT    AIC  BIC
  Original Data
  MLE   –                              22.80  0.83  0.1036  1446  1449  0.72  5.35  0.1277  1446  1448
  T1    (0,0)         (0,0)            22.80  0.83  0.1036  1446  1449  0.65  5.48  0.1168  1446  1449
  T2    (0,1/30)      (0,1/30)         22.79  0.80  0.1069  1446  1449  0.69  5.44  0.1199  1446  1449
  T3    (1/30,1/30)   (1/30,1/30)      22.77  0.85  0.1013  1446  1449  0.70  5.39  0.1222  1446  1449
  T4    (7/30,7/30)   (7/30,7/30)      22.78  0.77  0.1133  1447  1449  0.66  6.03  0.1219  1448  1451
  T5    (0,0)         (0,1/30)         22.80  2.91  1.6402  1494  1497  0.54  5.83  0.1719  1454  1457
  T6    (1/30,1/30)   (0,2/30)         22.77  7.30  5.1024  1558  1561  0.58  5.73  0.1443  1450  1453
  T7    (1/30,1/30)   (2/30,0)         22.77  0.88  0.1016  1446  1449  0.63  5.60  0.1186  1447  1450
  T8    (0,3/30)      (0,0)            22.80  0.90  0.1107  1447  1449  0.60  5.64  0.1298  1448  1451
  T9    (4/30,5/30)   (5/30,2/30)      22.76  0.87  0.1026  1446  1449  0.66  5.73  0.1114  1447  1449
  T10   (7/30,15/30)  (15/30,7/30)     22.78  0.78  0.1108  1447  1449  0.67  6.00  0.1250  1447  1450
  Modified Data
  MLE   –                              22.88  1.10  0.2932  1467  1470  0.77  5.47  0.1896  1456  1459
  T1    (0,0)         (0,0)            22.88  1.10  0.2932  1467  1470  0.86  5.26  0.2121  1457  1460
  T2    (0,1/30)      (0,1/30)         22.79  0.80  0.1838  1475  1478  0.69  5.44  0.1813  1457  1460
  T3    (1/30,1/30)   (1/30,1/30)      22.77  0.85  0.1781  1472  1475  0.70  5.39  0.1818  1457  1460
  T4    (7/30,7/30)   (7/30,7/30)      22.78  0.77  0.1900  1478  1480  0.66  6.03  0.1846  1459  1462
  T5    (0,0)         (0,1/30)         22.88  3.86  2.3122  1515  1518  1.50  3.62  0.7212  1476  1479
  T6    (1/30,1/30)   (0,2/30)         22.77  7.30  5.0255  1552  1554  0.58  5.73  0.2211  1463  1466
  T7    (1/30,1/30)   (2/30,0)         22.77  1.41  0.4813  1471  1474  1.00  4.61  0.3133  1461  1464
  T8    (0,3/30)      (0,0)            22.87  1.27  0.3887  1468  1471  0.85  5.27  0.2112  1457  1460
  T9    (4/30,5/30)   (5/30,2/30)      22.76  0.87  0.1793  1472  1474  0.66  5.73  0.1766  1458  1461
  T10   (7/30,15/30)  (15/30,7/30)     22.78  0.78  0.1876  1477  1479  0.67  6.00  0.1844  1459  1461
  Note: Est. stands for estimator. The Fréchet estimated scale parameter is σ̂ = σ̂* × 10⁹.

The GOF values are notably higher for T5 and T6 under both models. This may be attributed, as discussed in Note 3, to the strictly positive support of the sample data and the fitted lognormal model, suggesting that trimming based on inequality (31) is more appropriate than inequality (25), which assumes approximate symmetry about the origin. Second, the increase in GOF is more pronounced for T7 than for the MLE when comparing the original and modified data. This is expected, as T7 involves no right-tail trimming for the second moment, allowing the inflated maximum value to influence the fit. Third, under T2, T3, T4, T9, and T10, the parameter estimates for both models remain unchanged between the original and modified data, reflecting the robustness of MTM. Notably, T10, which uses disjoint middle portions of the data for the two moments, produces low GOF values under the Fréchet model, even with data contamination.
Fourth, the Fréchet tail index estimate β̂ is consistently below one, except for T5 and T7 under the modified data, suggesting a lighter right tail in the sample. This supports the higher p-value observed for the lognormal fit compared to the Fréchet fit. Overall, the lognormal model appears more suitable for the original data, while the robust Fréchet model is better suited when the data contain a large outlier.

6 Conclusion

This paper introduced a general and flexible framework for robust parametric estimation based on the method of trimmed moments (MTM), allowing distinct trimming proportions for different moments. The proposed approach extends traditional trimmed-moment methods by enabling asymmetric and moment-specific trimming strategies, which improve robustness against outliers and model misspecification without sacrificing computational tractability. Estimators derived under this framework maintain closed-form expressions and avoid iterative optimization, making them suitable for large-scale data analysis. We derived their asymptotic properties under the general theory of L-statistics and provided analytical variance expressions to support comparative efficiency analysis.

Simulation studies across various scenarios demonstrate that the proposed estimators offer strong finite-sample performance and effectively balance robustness and efficiency. The flexibility to assign distinct trimming proportions
to different moments enhances adaptability to the data structure and contamination patterns. This advantage is especially evident in asymmetric or heavy-tailed settings, where the general MTM consistently outperforms classical and symmetric-trimming methods in terms of bias and mean squared error.

The practical utility of the proposed methodology was further validated using a real-world dataset of the 30 most damaging hurricanes in the United States. Both lognormal and Fréchet models were fitted using MLE and several MTM variants. The results confirmed that MLE is highly sensitive to data perturbations, while MTM estimates remained stable and interpretable. When the largest loss was inflated by a factor of 10, the properly designed MTM estimators remained consistent, whereas the MLE fit deteriorated considerably, especially under the Fréchet model. Moreover, goodness-of-fit metrics (e.g., AIC, BIC, and empirical quantile deviation) reinforced the robustness and adaptability of the MTM framework in both the original and modified datasets.

Overall, the proposed general MTM approach offers a simple yet powerful extension of L-estimation techniques, making it a valuable tool for robust inference in the presence of contamination or heavy tails. Future work may include extending the framework to mixture models and multivariate distributions, incorporating covariate information, or applying the methodology to other domains such as environmental risk, finance, and reliability analysis.

References

Brazauskas, V., Jones, B.L., and Zitikis, R. (2009). Robust fitting of claim severity distributions and the method of trimmed moments. Journal of Statistical Planning and Inference, 139(6), 2028–2043.

Brazauskas, V. and Kleefeld, A. (2009). Robust and efficient fitting of the generalized Pareto distribution with actuarial applications in view. Insurance: Mathematics & Economics, 45(3), 424–435.

Bücher, A. and Segers, J. (2018).
Maximum likelihood estimation for the Fréchet distribution based on block maxima extracted from a time series. Bernoulli, 24(2), 1427–1462.

Chernoff, H., Gastwirth, J.L., and Johns, Jr., M.V. (1967). Asymptotic distribution of linear combinations of functions of order statistics with applications to estimation. Annals of Mathematical Statistics, 38(1), 52–72.

Gatti, S. and Wüthrich, M.V. (2025). Modeling lower-truncated and right-censored insurance claims with an extension of the MBBEFD class. European Actuarial Journal, 15(1), 199–240.

Hampel, F.R., Ronchetti, E.M., Rousseeuw, P.J., and Stahel, W.A. (1986). Robust Statistics: The Approach Based on Influence Functions. John Wiley & Sons, Inc., New York.

Hao, S., Yang, J., and Li, W. (2014). A robust bootstrap confidence interval for the two-parameter Weibull distribution based on the method of trimmed moments. 2014 Prognostics and System Health Management Conference (PHM-2014 Hunan), pages 478–481.

Huber, P.J. and Ronchetti, E.M. (2009). Robust Statistics. Second edition. John Wiley & Sons, Inc., Hoboken, NJ.

Kim, J.H.T. and Jeon, Y. (2013). Credibility theory based on trimming. Insurance: Mathematics & Economics, 53(1), 36–47.

Kleefeld, A. and Brazauskas, V. (2012). A statistical application of the quantile mechanics approach: MTM estimators for the parameters of t and gamma distributions. European Journal of Applied Mathematics, 23(5), 593–610.

Nawa, V. and Nadarajah, S. (2025). Logarithmic method of moments estimators for the
Fréchet distribution. Journal of Computational and Applied Mathematics, 457, Paper No. 116293.

Opdyke, J. and Cavallo, A. (2012). Estimating operational risk capital: the challenges of truncation, the hazards of MLE, and the promise of robust statistics. Journal of Operational Risk, 7(3), 3–90.

Pielke, Jr., R.A. and Landsea, C.W. (1998). Normalized hurricane damages in the United States: 1925–1995. Weather and Forecasting, 13, 621–631.

Poudyal, C. (2021). Robust estimation of loss models for lognormal insurance payment severity data. ASTIN Bulletin – The Journal of the International Actuarial Association, 51(2), 475–507.

Poudyal, C. (2025). On the asymptotic normality of trimmed and winsorized L-statistics. Communications in Statistics – Theory and Methods, 54(10), 3114–3133.

Poudyal, C. and Brazauskas, V. (2023). Finite-sample performance of the T- and W-estimators for the Pareto tail index under data truncation and censoring. Journal of Statistical Computation and Simulation, 93(10), 1601–1621.

Schott, J.R. (2017). Matrix Analysis for Statistics. Third edition. Wiley Series in Probability and Statistics. John Wiley & Sons, Inc., Hoboken, NJ.

Serfling, R. (2002). Efficient and robust fitting of lognormal distributions. North American Actuarial Journal, 6(4), 95–109.

Serfling, R.J. (1980). Approximation Theorems of Mathematical Statistics. John Wiley & Sons, New York.

Shaked, M. and Shanthikumar, J.G. (2007). Stochastic Orders. Springer, New York.

Tukey, J.W. (1960). A survey of sampling from contaminated distributions. Contributions to Probability and Statistics, pages 448–485. Stanford University Press, Stanford, CA.

Zhao, Q., Brazauskas, V., and Ghorai, J. (2018). Robust and efficient fitting of severity models and the method of Winsorized moments. ASTIN Bulletin – The Journal of the International Actuarial Association, 48(1), 275–309.
Appendix A: Proofs

Proof of Proposition 1: Define $U_\tau \sim \mathrm{Uniform}(a_\tau, b_\tau)$ and $W_\tau := F_0^{-1}(U_\tau)$, $\tau\in\{i,j\}$. Then, for any positive integer $k$, it follows that $c_k(a_\tau,b_\tau) = E\big[W_\tau^k\big]$.

(i) Thus, we get
$$\eta(a_i,b_j) = c_1^2(a_i,b_i) - 2c_1(a_i,b_i)c_1(a_j,b_j) + c_2(a_j,b_j) = (E[W_i])^2 - 2E[W_i]E[W_j] + E\big[W_j^2\big] = (E[W_i]-E[W_j])^2 + E\big[W_j^2\big] - (E[W_j])^2 = (E[W_i]-E[W_j])^2 + \mathrm{Var}[W_j]. \tag{74}$$
Since $W_j$ is a non-degenerate random variable, giving $\mathrm{Var}[W_j]>0$ and $(E[W_i]-E[W_j])^2\ge 0$, Eq. (74) takes the form
$$\eta(a_i,b_j) = (E[W_i]-E[W_j])^2 + \mathrm{Var}[W_j] > 0.$$

(ii) With the given inequality (8), $U_j$ is smaller than $U_i$ in stochastic order (see, e.g., Shaked and Shanthikumar, 2007, Ch. 1), i.e., $U_j \le_{st} U_i$. Being the quantile function of the standard location-scale family of distributions, $F_0^{-1}$ is strictly increasing, so it follows (Shaked and Shanthikumar, 2007, Theorem 1.A.3) that $W_j \le_{st} W_i$. For an odd positive integer $k$, $g(x)=x^k$ is a strictly increasing function, giving $W_j^k \le_{st} W_i^k$. Thus,
$$E\big[W_j^k\big] \le E\big[W_i^k\big], \quad\text{i.e.,}\quad c_k(a_j,b_j) \le c_k(a_i,b_i). \tag{75}$$

(iii) From (i), it immediately follows that $0<\eta_r$. Further, we know that $\eta(a_j,b_j) = \mathrm{Var}[W_j]$. Thus, from Eq. (74), it follows that
$$\eta_r = \frac{\eta(a_j,b_j)}{\eta(a_i,b_j)} = \frac{\mathrm{Var}[W_j]}{(E[W_i]-E[W_j])^2 + \mathrm{Var}[W_j]} \le 1.$$
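As a quick numerical sanity check of the decomposition in Eq. (74) (illustrative, not part of the proof): the identity is purely algebraic in the moments, so any quantile function works. The sketch below uses the logistic quantile for $F_0^{-1}$ and illustrative trimming bounds.

```python
import math
import random

def q_logistic(u):
    # Logistic quantile, a member of a location-scale family; the choice
    # of F0^{-1} is an illustrative assumption (the identity is algebraic).
    return math.log(u / (1.0 - u))

random.seed(7)
ai, bi, aj, bj = 0.05, 0.95, 0.10, 0.80  # illustrative trimming bounds
n = 50000
wi = [q_logistic(random.uniform(ai, bi)) for _ in range(n)]  # W_i = F0^{-1}(U_i)
wj = [q_logistic(random.uniform(aj, bj)) for _ in range(n)]  # W_j = F0^{-1}(U_j)

c1_i = sum(wi) / n                       # c1(a_i, b_i) = E[W_i]
c1_j = sum(wj) / n                       # c1(a_j, b_j) = E[W_j]
c2_j = sum(w * w for w in wj) / n        # c2(a_j, b_j) = E[W_j^2]

eta = c1_i ** 2 - 2 * c1_i * c1_j + c2_j             # definition of eta(a_i, b_j)
decomp = (c1_i - c1_j) ** 2 + (c2_j - c1_j ** 2)     # (E[Wi]-E[Wj])^2 + Var[Wj]
```

The two expressions agree to floating-point precision and are strictly positive, as Eq. (74) asserts.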
(iv) We have
$$c_2(a_j,b_j) \ge \eta_r c_1^2(a_i,b_i) \iff c_2(a_j,b_j)\,\eta(a_i,b_j) \ge c_1^2(a_i,b_i)\,\eta(a_j,b_j) \iff c_2(a_j,b_j)\big(c_1^2(a_i,b_i) - 2c_1(a_i,b_i)c_1(a_j,b_j) + c_2(a_j,b_j)\big) \ge c_1^2(a_i,b_i)\big(c_2(a_j,b_j) - c_1^2(a_j,b_j)\big) \iff c_2^2(a_j,b_j) - 2c_1(a_i,b_i)c_1(a_j,b_j)c_2(a_j,b_j) + c_1^2(a_i,b_i)c_1^2(a_j,b_j) \ge 0 \iff \big(c_2(a_j,b_j) - c_1(a_i,b_i)c_1(a_j,b_j)\big)^2 \ge 0,$$
a valid inequality, as desired.

(v) For θ = 0 and σ = 1, this reduces to the inequality given in (iv). Further, if
θ is positive, then this inequality again follows immediately from (iv). But in general, it follows that
$$\eta(a_i,b_j)T_2 - \eta(a_j,b_j)T_1^2 = \big(c_2(a_j,b_j)\sigma - c_1(a_i,b_i)c_1(a_j,b_j)\sigma + c_1(a_j,b_j)\theta - c_1(a_i,b_i)\theta\big)^2 = \big(\sigma\big(c_2(a_j,b_j) - c_1(a_i,b_i)c_1(a_j,b_j)\big) + \theta\big(c_1(a_j,b_j) - c_1(a_i,b_i)\big)\big)^2 \ge 0.$$
Thus,
$$\eta(a_i,b_j)T_2 - \eta(a_j,b_j)T_1^2 \ge 0 \implies T_2 - \eta_r T_1^2 \ge 0, \tag{76}$$
as $\eta(a_i,b_j)>0$ from (i).

Proof of Corollary 2: (i), (iii), (iv), and (v) follow by arguments similar to those in Proposition 1. Note that $\log(-\log(u))$ is a decreasing function of $u\in(0,1)$, which establishes (ii).

Appendix B: Asymptotic Covariance Matrix Entries

Here we summarize the simplified single-integral expressions for the asymptotic variance-covariance entries corresponding to the parametric examples discussed in Section 3. By applying Theorem 1 under the trimming inequality (25), the definitions of the notations Λ_{ijk}, for 1 ≤ i, j ≤ 2 and 1 ≤ k ≤ 3, as used in Eqs.
(38)–(40), are given below:
$$\begin{aligned}
\Lambda_{111} &= \Gamma(1,1)\int_{a_1}^{b_1}\!\int_{a_1}^{b_1} K(w,v)\,dF_0^{-1}(v)\,dF_0^{-1}(w)\\
&= \Gamma(1,1)\Big\{a_1^2\big[F_0^{-1}(a_1)\big]^2 + b_1^2\big[F_0^{-1}(b_1)\big]^2 - 2a_1b_1\,F_0^{-1}(a_1)F_0^{-1}(b_1)\\
&\qquad - 2(1-a_1-b_1)\big[a_1F_0^{-1}(a_1)+b_1F_0^{-1}(b_1)\big]c_1(a_1,b_1)\\
&\qquad - [\Gamma(1,1)]^{-1}c_1^2(a_1,b_1) + (1-a_1-b_1)c_2(a_1,b_1)\Big\},
\end{aligned}$$
$$\begin{aligned}
\Lambda_{121} &= \Gamma(1,2)\int_{a_1}^{b_1}\!\int_{a_2}^{b_2} K(w,v)\,dF_0^{-1}(v)\,dF_0^{-1}(w)\\
&= \Gamma(1,2)\Big\{a_2a_1\,F_0^{-1}(a_1)F_0^{-1}(a_2) + b_1b_2\,F_0^{-1}(b_1)F_0^{-1}(b_2) - a_1b_2\,F_0^{-1}(a_1)F_0^{-1}(b_2) - a_2b_1\,F_0^{-1}(a_2)F_0^{-1}(b_1)\\
&\qquad - (1-a_1-b_2)\big[2a_1F_0^{-1}(a_1)+b_1F_0^{-1}(b_1)+b_2F_0^{-1}(b_2)\big]c_1(a_1,b_2)\\
&\qquad - (1-a_1-b_2)^2c_1^2(a_1,b_2) + (1-a_1-b_2)c_2(a_1,b_2)\\
&\qquad + (1-a_1-b_1)\big[a_1F_0^{-1}(a_1)-a_2F_0^{-1}(a_2)\big]c_1(a_1,b_1)\\
&\qquad + (a_1-a_2)\big\{a_1F_0^{-1}(a_1)-b_1F_0^{-1}(b_1)-(1-a_1-b_1)c_1(a_1,b_1)\big\}c_1(a_2,a_1)\\
&\qquad + (b_2-b_1)\big\{b_2F_0^{-1}(b_2)-a_1F_0^{-1}(a_1)-(1-a_1-b_2)c_1(a_1,b_2)\big\}c_1(b_2,b_1)\Big\},
\end{aligned}$$
$$\begin{aligned}
\Lambda_{122} &= \Gamma(1,2)\int_{a_1}^{b_1}\!\int_{a_2}^{b_2} K(w,v)\,F_0^{-1}(v)\,dF_0^{-1}(v)\,dF_0^{-1}(w)\\
&= \frac{\Gamma(1,2)}{2}\Big\{a_2a_1\,F_0^{-1}(a_1)\big[F_0^{-1}(a_2)\big]^2 + b_1b_2\,F_0^{-1}(b_1)\big[F_0^{-1}(b_2)\big]^2\\
&\qquad - a_2b_1\,F_0^{-1}(b_1)\big[F_0^{-1}(a_2)\big]^2 - a_1b_2\,F_0^{-1}(a_1)\big[F_0^{-1}(b_2)\big]^2\\
&\qquad - (1-a_1-b_2)\big[a_1\big[F_0^{-1}(a_1)\big]^2 + b_2\big[F_0^{-1}(b_2)\big]^2\big]c_1(a_1,b_2)\\
&\qquad - (1-a_1-b_2)\big[a_1F_0^{-1}(a_1)+b_1F_0^{-1}(b_1)\big]c_2(a_1,b_2)\\
&\qquad - (1-a_1-b_2)^2c_1(a_1,b_2)c_2(a_1,b_2) + (1-a_1-b_2)c_3(a_1,b_2)\\
&\qquad + (1-a_1-b_1)\big[a_1\big[F_0^{-1}(a_1)\big]^2 - a_2\big[F_0^{-1}(a_2)\big]^2\big]c_1(a_1,b_1)\\
&\qquad + (a_1-a_2)\big\{a_1F_0^{-1}(a_1)-b_1F_0^{-1}(b_1)-(1-a_1-b_1)c_1(a_1,b_1)\big\}c_2(a_2,a_1)\\
&\qquad + (b_2-b_1)\big\{b_2\big[F_0^{-1}(b_2)\big]^2 - a_1\big[F_0^{-1}(a_1)\big]^2 - (1-a_1-b_2)c_2(a_1,b_2)\big\}c_1(b_2,b_1)\Big\},
\end{aligned}$$
$$\Lambda_{221} = \Lambda_{111} \text{ with } a_1 \text{ replaced by } a_2 \text{ and } b_1 \text{ replaced by } b_2,$$
$$\begin{aligned}
\Lambda_{222} &= \Gamma(2,2)\int_{a_2}^{b_2}\!\int_{a_2}^{b_2} K(w,v)\,F_0^{-1}(w)\,dF_0^{-1}(v)\,dF_0^{-1}(w)\\
&= \frac{\Gamma(2,2)}{2}\Big\{a_2^2\big[F_0^{-1}(a_2)\big]^3 + b_2^2\big[F_0^{-1}(b_2)\big]^3\\
&\qquad - a_2b_2\,F_0^{-1}(a_2)F_0^{-1}(b_2)\big[F_0^{-1}(a_2)+F_0^{-1}(b_2)\big]\\
&\qquad - (1-a_2-b_2)\big[a_2\big[F_0^{-1}(a_2)\big]^2 + b_2\big[F_0^{-1}(b_2)\big]^2\big]c_1(a_2,b_2)\\
&\qquad - (1-a_2-b_2)\big[a_2F_0^{-1}(a_2)+b_2F_0^{-1}(b_2)\big]c_2(a_2,b_2)\\
&\qquad - [\Gamma(2,2)]^{-1}c_1(a_2,b_2)c_2(a_2,b_2) + (1-a_2-b_2)c_3(a_2,b_2)\Big\},
\end{aligned}$$
$$\begin{aligned}
\Lambda_{223} &= \Gamma(2,2)\int_{a_2}^{b_2}\!\int_{a_2}^{b_2} K(w,v)\,F_0^{-1}(w)F_0^{-1}(v)\,dF_0^{-1}(v)\,dF_0^{-1}(w)\\
&= \frac{\Gamma(2,2)}{4}\Big\{a_2^2\big[F_0^{-1}(a_2)\big]^4 + b_2^2\big[F_0^{-1}(b_2)\big]^4 - 2a_2b_2\big[F_0^{-1}(a_2)\big]^2\big[F_0^{-1}(b_2)\big]^2\\
&\qquad - 2(1-a_2-b_2)\big[a_2\big[F_0^{-1}(a_2)\big]^2 + b_2\big[F_0^{-1}(b_2)\big]^2\big]c_2(a_2,b_2)\\
&\qquad - [\Gamma(2,2)]^{-1}c_2^2(a_2,b_2) + (1-a_2-b_2)c_4(a_2,b_2)\Big\}.
\end{aligned}$$

Note 10. The notations Λ_{ijk}, for 1 ≤ i, j ≤ 2 and 1 ≤ k ≤ 3, can be similarly evaluated under the trimming inequality (15), i.e., (31), as noted in Note 1.

It follows that Eq. (48) is equivalent to Eq. (21), and Eq. (50) is equivalent to Eq. (24), under the substitutions $\log(\sigma)\mapsto\theta$, $\beta\mapsto-\sigma$, and $\Delta(u)\mapsto F_0^{-1}(u)$ for $u\in(0,1)$.
Consequently, the expressions for Ψ_{ijk}, with 1 ≤ i, j ≤ 2 and 1 ≤ k ≤ 3, as defined in Eqs. (63)–(64), can be obtained by applying Theorem 1 under the trimming inequality (25), following the structure of Λ_{ijk} for the same index ranges, and are given below:
$$\begin{aligned}
\Psi_{111} &= \Gamma(1,1)\int_{a_1}^{b_1}\!\int_{a_1}^{b_1} \frac{K(w,v)}{vw\log(v)\log(w)}\,dv\,dw\\
&= \Gamma(1,1)\Big\{a_1^2[\Delta(a_1)]^2 + b_1^2[\Delta(b_1)]^2 - 2a_1b_1\,\Delta(a_1)\Delta(b_1)\\
&\qquad - 2(1-a_1-b_1)\big[a_1\Delta(a_1)+b_1\Delta(b_1)\big]\kappa_1(a_1,b_1)\\
&\qquad - [\Gamma(1,1)]^{-1}\kappa_1^2(a_1,b_1) + (1-a_1-b_1)\kappa_2(a_1,b_1)\Big\},
\end{aligned}$$
$$\begin{aligned}
\Psi_{121} &= \Gamma(1,2)\int_{a_1}^{b_1}\!\int_{a_2}^{b_2} \frac{K(w,v)}{vw\log(v)\log(w)}\,dv\,dw\\
&= \Gamma(1,2)\Big\{a_2a_1\,\Delta(a_1)\Delta(a_2) + b_1b_2\,\Delta(b_1)\Delta(b_2) - a_1b_2\,\Delta(a_1)\Delta(b_2) - a_2b_1\,\Delta(a_2)\Delta(b_1)\\
&\qquad - (1-a_1-b_2)\big[2a_1\Delta(a_1)+b_1\Delta(b_1)+b_2\Delta(b_2)\big]\kappa_1(a_1,b_2)\\
&\qquad - (1-a_1-b_2)^2\kappa_1^2(a_1,b_2) + (1-a_1-b_2)\kappa_2(a_1,b_2)\\
&\qquad + (1-a_1-b_1)\big[a_1\Delta(a_1)-a_2\Delta(a_2)\big]\kappa_1(a_1,b_1)\\
&\qquad + (a_1-a_2)\big\{a_1\Delta(a_1)-b_1\Delta(b_1)-(1-a_1-b_1)\kappa_1(a_1,b_1)\big\}\kappa_1(a_2,a_1)\\
&\qquad + (b_2-b_1)\big\{b_2\Delta(b_2)-a_1\Delta(a_1)-(1-a_1-b_2)\kappa_1(a_1,b_2)\big\}\kappa_1(b_2,b_1)\Big\},
\end{aligned}$$
$$\begin{aligned}
\Psi_{122} &= \Gamma(1,2)\int_{a_1}^{b_1}\!\int_{a_2}^{b_2} \frac{K(w,v)\log(-\log(v))}{vw\log(v)\log(w)}\,dv\,dw\\
&= \frac{\Gamma(1,2)}{2}\Big\{a_2a_1\,\Delta(a_1)[\Delta(a_2)]^2 + b_1b_2\,\Delta(b_1)[\Delta(b_2)]^2\\
&\qquad - a_2b_1\,\Delta(b_1)[\Delta(a_2)]^2 - a_1b_2\,\Delta(a_1)[\Delta(b_2)]^2\\
&\qquad - (1-a_1-b_2)\big[a_1[\Delta(a_1)]^2 + b_2[\Delta(b_2)]^2\big]\kappa_1(a_1,b_2)\\
&\qquad - (1-a_1-b_2)\big[a_1\Delta(a_1)+b_1\Delta(b_1)\big]\kappa_2(a_1,b_2)\\
&\qquad - (1-a_1-b_2)^2\kappa_1(a_1,b_2)\kappa_2(a_1,b_2)
\end{aligned}$$
https://arxiv.org/abs/2505.09860v1
arXiv:2505.09976v1 [math.PR] 15 May 2025

Results related to the Gaussian product inequality conjecture for mixed-sign exponents in arbitrary dimension

Guolie Lan^a, Frédéric Ouimet^b, Wei Sun^c

^a School of Economics and Statistics, Guangzhou University, Guangzhou, China
^b Department of Mathematics and Statistics, McGill University, Montreal, Canada
^c Department of Mathematics and Statistics, Concordia University, Montreal, Canada

Abstract

This note establishes that the opposite Gaussian product inequality (GPI) of the type proved by Russell & Sun (2022a) in two dimensions, and partially extended to higher dimensions by Zhou et al. (2024), continues to hold for an arbitrary mix of positive and negative exponents. A general quantitative lower bound is also obtained conditionally on the GPI conjecture being true.

Keywords: centered Gaussian random vector, componentwise convex order, Gaussian product inequality, Loewner order, multivariate normal distribution, opposite Gaussian product inequality, real linear polarization constant

2020 MSC: Primary: 60E15; Secondary: 26A48, 44A10, 62E15, 62H10, 62H12

1. Introduction

The real linear polarization constant conjecture grew out of the work of Benítez et al. (1998) on obtaining sharp lower bounds for the norms of products of polynomials on real Banach spaces, and, in the linear case, products of bounded linear functionals. A decade later, Frenkel (2008) recast the problem restricted to linear functionals on a real Euclidean space as the following Gaussian product inequality (GPI) conjecture: for every real centered Gaussian random vector $X=(X_1,\ldots,X_d)$ with $d\in\mathbb{N}$ and each $m\in\mathbb{N}$,

$$\mathrm{E}\Bigg[\prod_{i=1}^d X_i^{2m}\Bigg]\ \ge\ \prod_{i=1}^d \mathrm{E}\big[X_i^{2m}\big]. \tag{1}$$

Inequality (1) implies the (restricted) real linear polarization constant conjecture (Malicet et al., 2016).
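To make concrete what (1) asserts, here is a minimal Monte Carlo sketch for a correlated bivariate Gaussian, a case in which the GPI is known to hold. The covariance matrix, exponent $m$, and sample size are illustrative choices of ours, not values taken from the paper.

```python
# Monte Carlo sanity check of the GPI (1), E[prod X_i^{2m}] >= prod E[X_i^{2m}],
# for a correlated bivariate Gaussian (d = 2 is a settled case).
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])        # illustrative positive definite covariance
m = 2                                 # exponent 2m = 4
X = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)

lhs = np.mean(np.prod(X ** (2 * m), axis=1))   # estimates E[X1^4 X2^4]
rhs = np.prod(np.mean(X ** (2 * m), axis=0))   # estimates E[X1^4] E[X2^4]
print(lhs >= rhs)                              # (1) predicts True
```

For this correlation the gap is large (Isserlis' theorem gives $\mathrm{E}[X_1^4X_2^4]=9+72\rho^2+24\rho^4\approx 38$ versus $\mathrm{E}[X_1^4]\,\mathrm{E}[X_2^4]=9$), so Monte Carlo noise does not threaten the comparison.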
It is also closely linked to the U-conjecture: if two polynomials in the components of $X$ are independent, an orthogonal transformation can be found so that each depends on a disjoint subset of variables; see, e.g., Kagan et al. (1973); Malicet et al. (2016).

Building on this formulation, Li & Wei (2012) proposed a strengthened conjecture allowing arbitrary positive exponents, represented as $\nu=(\nu_1,\ldots,\nu_d)\in(0,\infty)^d$. The statement, denoted by $\mathrm{GPI}_d(\nu)$, reads

$$\mathrm{GPI}_d(\nu):\quad \mathrm{E}\Bigg[\prod_{i=1}^d |X_i|^{2\nu_i}\Bigg]\ \ge\ \prod_{i=1}^d \mathrm{E}\big[|X_i|^{2\nu_i}\big]. \tag{2}$$

The conjecture is settled for $d=2$ because $(|X_1|,|X_2|)$ is multivariate totally positive of order 2 (Karlin & Rinott, 1981). In dimension 3, numerous special cases have been confirmed:

(a) Lan et al. (2020): $\mathrm{GPI}_3(p,p,q)$ for all $p,q\in\mathbb{N}$;
(b) Russell & Sun (2023): $\mathrm{GPI}_3(1,p,q)$ for all $p,q\in\mathbb{N}$;
(c) Russell & Sun (2024): $\mathrm{GPI}_3(2,3,p)$ for all $p\in\mathbb{N}$;
(d) Herry et al. (2024): $\mathrm{GPI}_3(p,q,r)$ for all $p,q,r\in\mathbb{N}$;
(e) Kim & Kim (2025): $\mathrm{GPI}_3(p,q,r)$ for $p\in\mathbb{N}$ and $q,r\in(0,\infty)$;
(f) Kim et al. (2025): $\mathrm{GPI}_3(p,q,r)$ for all $p,q,r\in(1/2)\mathbb{N}$.

Email addresses: langl@gzhu.edu.cn (Guolie Lan), frederic.ouimet2@mcgill.ca (Frédéric Ouimet), wei.sun@concordia.ca (Wei Sun)

Preprint submitted to Statistics & Probability Letters, May 16, 2025

For higher $d$, additional results appear under various covariance constraints; see, for instance, Genest & Ouimet (2022), Russell & Sun (2022b), and Edelmann et al. (2023).

Interest has also turned to scenarios where some exponents in (2) are negative (subject to the expectations being finite). When every exponent is negative, Wei (2014) proved the inequality and even extended it to multivariate gamma random vectors (also called permanental vectors). Wei's results, and related developments, extend to traces and determinants of disjoint principal blocks of Wishart random matrices, as shown by
Genest & Ouimet (2023); Genest et al. (2024, 2025).

In the Gaussian setting, the mixed-sign case, where both positive and negative exponents appear, remains largely unresolved. The two-dimensional situation was settled by Russell & Sun (2022a) (tighter quantitative bounds have also been proved by Hu et al. (2023)), and partial progress for general $d$ was achieved recently by Zhou et al. (2024). Letting $\Sigma=(\sigma_{ij})_{1\le i,j\le d}$ be the covariance matrix of $X$, they established

$$\mathrm{E}\Bigg[|X_1|^{-2\nu_1}\prod_{i=2}^d X_i^2\Bigg]\ \ge\ \Bigg\{\prod_{i=2}^d\bigg[1-\frac{\sigma_{1i}^2}{\sigma_{11}\sigma_{ii}}\bigg]\Bigg\}\,\mathrm{E}\big[|X_1|^{-2\nu_1}\big]\prod_{i=2}^d \mathrm{E}[X_i^2], \tag{3}$$

for any $\nu_1\in[0,1/2)$, and

$$\mathrm{E}\Bigg[\bigg(\prod_{i=1}^{d-1}|X_i|^{-2\nu_i}\bigg)|X_d|^{2\nu_d}\Bigg]\ \le\ \mathrm{E}\Bigg[\prod_{i=1}^{d-1}|X_i|^{-2\nu_i}\Bigg]\,\mathrm{E}\big[|X_d|^{2\nu_d}\big], \tag{4}$$

for $\nu_1,\ldots,\nu_{d-1}\in[0,1/2)$ and $\nu_d\in(0,\infty)$.

The present note investigates the general mixed-sign problem, thereby significantly generalizing (3) and (4). Except for a narrow unresolved case discussed in Remark 2, the analysis resolves the question through Theorems 1 and 2.

The remainder of the paper is organized as follows. Section 2 states the main results; proofs are given in Section 3. In the interest of self-containment, some technical lemmas used in the proofs are relegated to Appendix A.

2. Results

The first theorem below generalizes (3) from one Gaussian component having a negative exponent to an arbitrary subset of the Gaussian components having negative exponents. The positive exponents $\nu_i$ are not restricted to be equal to 1, but this comes at the cost of assuming the validity of a lower-order GPI. However, this also means that if the GPI assumption is known to hold in the literature, then the result of the theorem holds 'unconditionally'.
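As a numerical illustration of the bound (3), the sketch below estimates both sides by Monte Carlo for $d=3$ with $\nu_1=0.2$. All numerical choices (the covariance matrix, $\nu_1$, and the sample size) are our own illustrative assumptions; $\nu_1<1/4$ is also chosen so that the simple estimator of $\mathrm{E}[|X_1|^{-2\nu_1}]$ has finite variance.

```python
# Monte Carlo sanity check of the Zhou et al. (2024) lower bound (3) in d = 3.
import numpy as np

rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])     # illustrative positive definite covariance
nu1 = 0.2                               # must lie in [0, 1/2) for (3)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=500_000)

w = np.abs(X[:, 0]) ** (-2 * nu1)                   # |X1|^{-2 nu1}
lhs = np.mean(w * X[:, 1] ** 2 * X[:, 2] ** 2)      # E[|X1|^{-2nu1} X2^2 X3^2]
factor = np.prod([1 - Sigma[0, i] ** 2 / (Sigma[0, 0] * Sigma[i, i])
                  for i in (1, 2)])
rhs = factor * np.mean(w) * np.mean(X[:, 1] ** 2) * np.mean(X[:, 2] ** 2)
print(lhs >= rhs)                                   # inequality (3) predicts True
```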
For a comprehensive list of results related to the GPI, refer to Section 1 of Genest et al. (2024).

Theorem 1 (Lower bound). Let $X\sim\mathcal{N}_d(0_d,\Sigma)$ for some positive definite matrix $\Sigma$, and let $\emptyset\ne\mathcal{J}\subseteq\{1,\ldots,d\}$ be given. Assume that $\nu_j\in[0,1/2)$ for all $j\in\mathcal{J}$ and $\nu_i\in(0,\infty)$ for all $i\in\mathcal{J}^\complement$. Then, conditionally on $\mathrm{GPI}_{d-|\mathcal{J}|}((\nu_i)_{i\in\mathcal{J}^\complement})$ being true, we have

$$\mathrm{E}\Bigg[\prod_{j\in\mathcal{J}}|X_j|^{-2\nu_j}\times\prod_{i\in\mathcal{J}^\complement}|X_i|^{2\nu_i}\Bigg]\ \ge\ \mathrm{E}\Bigg[\prod_{j\in\mathcal{J}}|X_j|^{-2\nu_j}\Bigg]\times\prod_{i\in\mathcal{J}^\complement}\bigg\{\frac{(\Sigma/\Sigma_{\mathcal{J}\mathcal{J}})_{ii}}{\Sigma_{ii}}\bigg\}^{\nu_i}\mathrm{E}\big[|X_i|^{2\nu_i}\big],$$

where $\mathcal{J}^\complement\equiv\{1,\ldots,d\}\setminus\mathcal{J}$, the matrix $\Sigma_{\mathcal{I}\mathcal{I}'}$ denotes $\Sigma$ restricted to the rows and columns indexed in $\mathcal{I}$ and $\mathcal{I}'$, respectively, and $\Sigma/\Sigma_{\mathcal{J}\mathcal{J}}=\Sigma_{\mathcal{J}^\complement\mathcal{J}^\complement}-\Sigma_{\mathcal{J}^\complement\mathcal{J}}\Sigma_{\mathcal{J}\mathcal{J}}^{-1}\Sigma_{\mathcal{J}\mathcal{J}^\complement}$ is the Schur complement of $\Sigma_{\mathcal{J}\mathcal{J}}$ in $\Sigma$.

Remark 1. In Theorem 1, the factor $(\Sigma/\Sigma_{\mathcal{J}\mathcal{J}})_{ii}/\Sigma_{ii}$ is smaller or equal to 1 because $\Sigma/\Sigma_{\mathcal{J}\mathcal{J}}\preceq\Sigma$ in the Loewner order. In the special case where $\mathcal{J}=\{1\}$ and $\mathrm{GPI}_d(1,\ldots,1)$ is known to hold (Frenkel, 2008), then we recover (3) with

$$\frac{(\Sigma/\Sigma_{\mathcal{J}\mathcal{J}})_{ii}}{\Sigma_{ii}}=\frac{\sigma_{ii}-\sigma_{i1}\sigma_{11}^{-1}\sigma_{1i}}{\sigma_{ii}}=1-\frac{\sigma_{1i}^2}{\sigma_{11}\sigma_{ii}},\qquad i\in\{2,\ldots,d\}.$$

From the work of Wei (2014), the GPI is known to hold when all the exponents are negative, so the following corollary is immediate.

Corollary 1. In the setting of Theorem 1, we have

$$\mathrm{E}\Bigg[\prod_{j\in\mathcal{J}}|X_j|^{-2\nu_j}\times\prod_{i\in\mathcal{J}^\complement}|X_i|^{2\nu_i}\Bigg]\ \ge\ \prod_{j\in\mathcal{J}}\mathrm{E}\big[|X_j|^{-2\nu_j}\big]\times\prod_{i\in\mathcal{J}^\complement}\bigg\{\frac{(\Sigma/\Sigma_{\mathcal{J}\mathcal{J}})_{ii}}{\Sigma_{ii}}\bigg\}^{\nu_i}\mathrm{E}\big[|X_i|^{2\nu_i}\big].$$

Next, the goal is to generalize (4) by replacing $|X_d|^{2\nu_d}$ with a product of positive powers of the absolute Gaussian components indexed in $\mathcal{J}^\complement$ or, more generally, a componentwise convex function of such components.

Theorem 2 (Upper bound). Let $X\sim\mathcal{N}_d(0_d,\Sigma)$ for some positive definite matrix $\Sigma$, and let $\emptyset\ne\mathcal{J}\subseteq\{1,\ldots,d\}$ be given. Assume that $\nu_j\in[0,1/2)$ for all $j\in\mathcal{J}$. Then, for any componentwise convex function $\psi:\mathbb{R}^{d-|\mathcal{J}|}\to\mathbb{R}$, we have

$$\mathrm{E}\Bigg[\prod_{j\in\mathcal{J}}|X_j|^{-2\nu_j}\times\psi\big((X_i)_{i\in\mathcal{J}^\complement}\big)\Bigg]\ \le\ \mathrm{E}\Bigg[\prod_{j\in\mathcal{J}}|X_j|^{-2\nu_j}\Bigg]\times\mathrm{E}\big[\psi\big((X_i)_{i\in\mathcal{J}^\complement}\big)\big].$$
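To make the Schur-complement factor in Theorem 1 concrete, here is a small deterministic sketch; the covariance matrix and the choice $\mathcal{J}=\{1\}$ are our own illustrative assumptions. It checks Remark 1's claim that each factor lies in $(0,1]$ and that, for $\mathcal{J}=\{1\}$, it reduces to $1-\sigma_{1i}^2/(\sigma_{11}\sigma_{ii})$.

```python
import numpy as np

# Illustrative positive definite covariance matrix (not from the paper).
Sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.5],
                  [0.3, 0.5, 1.0]])
J, Jc = [0], [1, 2]            # 0-based indices: J = {1}, J^c = {2, 3}

# Schur complement  Sigma/Sigma_JJ = Sigma_JcJc - Sigma_JcJ Sigma_JJ^{-1} Sigma_JJc
S_JJ = Sigma[np.ix_(J, J)]
S_cJ = Sigma[np.ix_(Jc, J)]
S_cc = Sigma[np.ix_(Jc, Jc)]
schur = S_cc - S_cJ @ np.linalg.inv(S_JJ) @ S_cJ.T

# Each factor (Sigma/Sigma_JJ)_ii / Sigma_ii lies in (0, 1]  (Remark 1) ...
ratios = np.diag(schur) / np.diag(S_cc)
assert np.all((ratios > 0) & (ratios <= 1))

# ... and for J = {1} it equals 1 - sigma_{1i}^2 / (sigma_11 sigma_ii).
closed_form = [1 - Sigma[0, i] ** 2 / (Sigma[0, 0] * Sigma[i, i]) for i in Jc]
print(np.allclose(ratios, closed_form))   # → True
```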
Since $(x_i)_{i\in\mathcal{J}^\complement}\mapsto\prod_{i\in\mathcal{J}^\complement}|x_i|^{2\nu_i}$ is componentwise convex for all $\nu_i\in[1/2,\infty)$, $i\in\mathcal{J}^\complement$,
the following corollary is immediate.

Corollary 2. Let $X\sim\mathcal{N}_d(0_d,\Sigma)$ for some positive definite matrix $\Sigma$, and let $\emptyset\ne\mathcal{J}\subseteq\{1,\ldots,d\}$ be given. Assume that $\nu_j\in[0,1/2)$ for all $j\in\mathcal{J}$ and $\nu_i\in[1/2,\infty)$ for all $i\in\mathcal{J}^\complement$. Then, we have

$$\mathrm{E}\Bigg[\prod_{j\in\mathcal{J}}|X_j|^{-2\nu_j}\times\prod_{i\in\mathcal{J}^\complement}|X_i|^{2\nu_i}\Bigg]\ \le\ \mathrm{E}\Bigg[\prod_{j\in\mathcal{J}}|X_j|^{-2\nu_j}\Bigg]\times\mathrm{E}\Bigg[\prod_{i\in\mathcal{J}^\complement}|X_i|^{2\nu_i}\Bigg].$$

Remark 2. The case where some of the exponents $\nu_i$, $i\in\mathcal{J}^\complement$, lie in the interval $(0,1/2)$ remains open. One possible approach would be to exploit the Bernstein function integral representation, $|x|^{2\nu}=\nu\int_0^\infty(1-e^{-sx^2})s^{-\nu-1}\,ds/\Gamma(1-\nu)$, which holds for $x\in\mathbb{R}$ and $\nu\in(0,1/2)$; see, e.g., Schilling et al. (2012, Eq. (1)). However, we were unable to turn this representation into a successful argument. A treatment of this case therefore lies beyond the scope of our paper.

In the two-dimensional case, we have the following generalization of Corollary 2.

Proposition 1. Suppose that the two-dimensional random vector $(X,Y)$ has an elliptical distribution, meaning that $(X,Y)\stackrel{\mathrm{law}}{=}R\,\Sigma^{1/2}U$, where $R$ is a nonnegative random variable (the radial part), $\Sigma^{1/2}$ is the symmetric square root of some positive definite matrix $\Sigma$, and $U$ is a random vector uniformly distributed on the unit circle $S^1$ that is independent of $R$. Also, let $f$ and $g$ be any nonincreasing and nondecreasing functions on $[0,\infty)$, respectively. Then

$$\mathrm{E}[f(|X|)\,g(|Y|)]\ \le\ \mathrm{E}[f(|X|)]\,\mathrm{E}[g(|Y|)].$$

3. Proofs

Proof of Theorem 1. Without loss of generality, assume that $\nu_j\in(0,1/2)$ for all $j\in\mathcal{J}$, and re-index the components of $X$ so that $\mathcal{J}=\{1,\ldots,k\}$ with $1\le k\le d-1$ and $\mathcal{J}^\complement=\{k+1,\ldots,d\}$. For every $x\in\mathbb{R}$ and $\nu\in(0,1/2)$, the identity

$$|x|^{-2\nu}=\frac{1}{\Gamma(\nu)}\int_0^\infty e^{-sx^2}s^{\nu-1}\,ds \tag{5}$$

holds; see, e.g., Schilling et al. (2012, Eq. (2)).
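Since the identity (5) drives the whole proof, a quick numerical check is worthwhile. The sketch below substitutes $t=s^{\nu}$, which turns the right-hand side of (5) into $\big(\nu\,\Gamma(\nu)\big)^{-1}\int_0^\infty \exp(-x^2t^{1/\nu})\,dt$ with a bounded integrand, and evaluates it by the trapezoidal rule; the values $\nu=0.3$ and $x=1.7$ are arbitrary test choices of ours.

```python
import numpy as np
from math import gamma

nu, x = 0.3, 1.7                       # nu in (0, 1/2), x nonzero
t = np.linspace(0.0, 5.0, 200_001)     # exp(-x^2 t^{1/nu}) is ~0 well before t = 5
integrand = np.exp(-(x * x) * t ** (1.0 / nu))

# Uniform-grid trapezoidal rule; the substitution t = s^nu absorbed s^{nu-1} ds.
h = t[1] - t[0]
integral = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
rhs_of_5 = integral / (nu * gamma(nu))  # = (1/Gamma(nu)) * int_0^inf e^{-s x^2} s^{nu-1} ds

print(abs(rhs_of_5 - abs(x) ** (-2 * nu)) < 1e-6)   # → True
```

The substitution matters: integrating $s^{\nu-1}e^{-sx^2}$ directly would put an integrable singularity at $s=0$, which a naive uniform-grid rule handles poorly.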
Applying (5) to $|X_1|^{-2\nu_1},\ldots,|X_k|^{-2\nu_k}$ and using Fubini's theorem yields

$$\mathrm{E}\Bigg[\prod_{j=1}^k|X_j|^{-2\nu_j}\prod_{i=k+1}^d|X_i|^{2\nu_i}\Bigg]=\frac{1}{\prod_{j=1}^k\Gamma(\nu_j)}\int_{(0,\infty)^k}\mathrm{E}\Bigg[\exp(-X^\top T_sX/2)\prod_{i=k+1}^d|X_i|^{2\nu_i}\Bigg]\prod_{j=1}^ks_j^{\nu_j-1}\,ds, \tag{6}$$

where $s=(s_1,\ldots,s_k)$ and $T_s=\mathrm{diag}(2s_1,\ldots,2s_k,0,\ldots,0)$. By a renormalization argument, we have

$$
\begin{aligned}
\mathrm{E}\Bigg[\exp(-X^\top T_sX/2)\prod_{i=k+1}^d|X_i|^{2\nu_i}\Bigg]
&=\int_{(0,\infty)^d}\exp(-x^\top T_sx/2)\,\frac{\exp(-x^\top\Sigma^{-1}x/2)}{(2\pi)^{d/2}|\Sigma|^{1/2}}\prod_{i=k+1}^d|x_i|^{2\nu_i}\,dx\\
&=\frac{|(\Sigma^{-1}+T_s)^{-1}|^{1/2}}{|\Sigma|^{1/2}}\int_{(0,\infty)^d}\frac{\exp\{-x^\top(\Sigma^{-1}+T_s)x/2\}}{(2\pi)^{d/2}|(\Sigma^{-1}+T_s)^{-1}|^{1/2}}\prod_{i=k+1}^d|x_i|^{2\nu_i}\,dx\\
&=\frac{|(\Sigma^{-1}+T_s)^{-1}|^{1/2}}{|\Sigma|^{1/2}}\,\mathrm{E}\Bigg[\prod_{i=k+1}^d|Y_i^{(s)}|^{2\nu_i}\Bigg],
\end{aligned}
$$

where $Y^{(s)}=(Y_1^{(s)},\ldots,Y_d^{(s)})\sim\mathcal{N}_d(0_d,(\Sigma^{-1}+T_s)^{-1})$. Then, using the assumption that $\mathrm{GPI}_{d-k}((\nu_i)_{i>k})$ holds, we obtain

$$\mathrm{E}\Bigg[\exp(-X^\top T_sX/2)\prod_{i=k+1}^d|X_i|^{2\nu_i}\Bigg]\ \ge\ \frac{|(\Sigma^{-1}+T_s)^{-1}|^{1/2}}{|\Sigma|^{1/2}}\prod_{i=k+1}^d\mathrm{E}\Big[|Y_i^{(s)}|^{2\nu_i}\Big]=\mathrm{E}\Big[\exp(-X^\top T_sX/2)\Big]\prod_{i=k+1}^d\mathrm{E}\Big[|Y_i^{(s)}|^{2\nu_i}\Big].$$

By applying the last bound in (6) and using (5) together with Fubini's theorem, it follows that

$$\mathrm{E}\Bigg[\prod_{j=1}^k|X_j|^{-2\nu_j}\prod_{i=k+1}^d|X_i|^{2\nu_i}\Bigg]\ \ge\ \mathrm{E}\Bigg[\prod_{j=1}^k|X_j|^{-2\nu_j}\Bigg]\prod_{i=k+1}^d\mathrm{E}\Big[|Y_i^{(s)}|^{2\nu_i}\Big]. \tag{7}$$

Finally, to evaluate $\mathrm{E}\big[|Y_i^{(s)}|^{2\nu_i}\big]$, note that, for $i>k$,

$$\mathrm{E}\Big[|Y_i^{(s)}|^{2\nu_i}\Big]=2\int_0^\infty u^{2\nu_i}\frac{1}{\sqrt{2\pi\,\mathrm{Var}(Y_i^{(s)})}}\exp\Bigg\{-\frac{u^2}{2\,\mathrm{Var}(Y_i^{(s)})}\Bigg\}\,du=\frac{2^{\nu_i}\{\mathrm{Var}(Y_i^{(s)})\}^{\nu_i}}{\sqrt{\pi}}\int_0^\infty t^{\nu_i-1/2}e^{-t}\,dt=\frac{\Gamma(\nu_i+1/2)}{\sqrt{\pi}}\,2^{\nu_i}\{\mathrm{Var}(Y_i^{(s)})\}^{\nu_i},$$

by applying the change of variable $t=u^2/(2\,\mathrm{Var}(Y_i^{(s)}))$. Next, write $\Sigma$ in block form:

$$\Sigma=\begin{bmatrix}A&B\\B^\top&C\end{bmatrix},$$

where $A$ is $k\times k$, $B$ is $k\times(d-k)$, and $C$ is $(d-k)\times(d-k)$.
Let $S=\mathrm{diag}(2s_1,\ldots,2s_k)$ so that $T_s=\mathrm{diag}(S,0_{(d-k)\times(d-k)})$. The matrix inversion formula for block partitions in Lemma 1 (i) gives

$$\Sigma^{-1}+T_s=\begin{bmatrix}A^{-1}+S+A^{-1}B(\Sigma/A)^{-1}B^\top A^{-1}&-A^{-1}B(\Sigma/A)^{-1}\\-(\Sigma/A)^{-1}B^\top A^{-1}&(\Sigma/A)^{-1}\end{bmatrix}\equiv\begin{bmatrix}P&Q\\Q^\top&R\end{bmatrix}.$$

Then Lemma 1 (ii) yields

$$(\Sigma^{-1}+T_s)^{-1}=\begin{bmatrix}(A^{-1}+S)^{-1}&\star\\\star&(\Sigma/A)+B^\top A^{-1}(A^{-1}+S)^{-1}A^{-1}B\end{bmatrix}.$$

Let $e_j$ denote the $j$th standard basis vector. Since $B^\top A^{-1}(A^{-1}+S)^{-1}A^{-1}B$ is positive definite and $\mathrm{Var}(X_i)=\Sigma_{ii}$, then, for $i>k$,

$$\mathrm{Var}(Y_i^{(s)})=e_{i-k}^\top\big\{(\Sigma/A)+B^\top A^{-1}(A^{-1}+S)^{-1}A^{-1}B\big\}e_{i-k}\ \ge\ e_{i-k}^\top(\Sigma/A)e_{i-k}=\frac{(\Sigma/A)_{i-k,i-k}}{\Sigma_{ii}}\,\mathrm{Var}(X_i),$$

and therefore,

$$\mathrm{E}\Big[|Y_i^{(s)}|^{2\nu_i}\Big]\ \ge\ \bigg\{\frac{(\Sigma/A)_{i-k,i-k}}{\Sigma_{ii}}\bigg\}^{\nu_i}\frac{\Gamma(\nu_i+1/2)}{\sqrt{\pi}}\,2^{\nu_i}\{\mathrm{Var}(X_i)\}^{\nu_i}=\bigg\{\frac{(\Sigma/A)_{i-k,i-k}}{\Sigma_{ii}}\bigg\}^{\nu_i}\mathrm{E}\big[|X_i|^{2\nu_i}\big].$$

The conclusion follows by plugging this lower bound in (7).

Proof of Theorem 2. Following the same steps as in the proof of Theorem 1 but without applying $\mathrm{GPI}_{d-k}((\nu_i)_{i>k})$, we have

$$\mathrm{E}\Bigg[\prod_{j=1}^k|X_j|^{-2\nu_j}\,\psi\big((X_i)_{i>k}\big)\Bigg]=\frac{1}{\prod_{j=1}^k\Gamma(\nu_j)}\int_{(0,\infty)^k}\mathrm{E}\Big[\exp(-X^\top T_sX/2)\,\psi\big((X_i)_{i>k}\big)\Big]\prod_{j=1}^ks_j^{\nu_j-1}\,ds,$$

with

$$\mathrm{E}\Big[\exp(-X^\top T_sX/2)\,\psi\big((X_i)_{i>k}\big)\Big]=\mathrm{E}\Big[\exp(-X^\top T_sX/2)\Big]\,\mathrm{E}\Big[\psi\big((Y_i^{(s)})_{i>k}\big)\Big].$$

Since $\Sigma^{-1}\preceq\Sigma^{-1}+T_s$ in the Loewner order, then $(\Sigma^{-1}+T_s)^{-1}\preceq\Sigma$, and thus $Y^{(s)}$ is smaller or equal to $X$ in the
componentwise convex order, written $Y^{(s)}\le_{\mathrm{ccx}}X$, by Lemma 2, so that

$$\mathrm{E}\Big[\psi\big((Y_i^{(s)})_{i>k}\big)\Big]\ \le\ \mathrm{E}\big[\psi\big((X_i)_{i>k}\big)\big].$$

Combining the last two equations and applying (5) again together with Fubini's theorem, we find that

$$\mathrm{E}\Bigg[\prod_{j=1}^k|X_j|^{-2\nu_j}\times\psi\big((X_i)_{i>k}\big)\Bigg]\ \le\ \mathrm{E}\Bigg[\prod_{j=1}^k|X_j|^{-2\nu_j}\Bigg]\times\mathrm{E}\big[\psi\big((X_i)_{i>k}\big)\big].$$

This concludes the proof.

Proof of Proposition 1. Let $f$ and $g$ be any nonincreasing and nondecreasing functions on $[0,\infty)$, respectively. For all $r\in[0,\infty)$, denote the scaled functions $f_r(x)=f(rx)$ and $g_r(y)=g(ry)$. Given the stochastic representation of elliptical distributions in the statement of the proposition, and the rotational invariance of $U=(U_1,U_2)$, it is sufficient to show that, for all $r\in[0,\infty)$,

$$\mathrm{E}\big[f_r(|(\Sigma^{1/2}U)_1|)\,g_r(|(\Sigma^{1/2}U)_2|)\big]\ \le\ \mathrm{E}[f_r(|U_1|)]\,\mathrm{E}[g_r(|U_2|)],$$

because we can just integrate the above inequality on both sides with respect to the distribution of $R$ to obtain the inequality claimed in the statement of the proposition. Moreover, for $r\in[0,\infty)$, the scaled functions $f_r$ and $g_r$ remain nonincreasing and nondecreasing, respectively. Therefore, to conclude, we can ignore $r$ and any scalings related to the rows of $\Sigma^{1/2}$; it is sufficient to prove

$$\mathrm{E}[f(|\alpha\cdot U|)\,g(|\beta\cdot U|)]\ \le\ \mathrm{E}[f(|e_1\cdot U|)]\,\mathrm{E}[g(|e_2\cdot U|)], \tag{8}$$

where $\alpha$ and $\beta$ denote the first and second rows of $\Sigma^{1/2}$ under the assumption $\|\alpha\|_2=\|\beta\|_2=1$, and $e_i$ denotes the $i$th standard basis vector. To this end, consider $T$ the $2\times2$ matrix for a $\pi/2$ counterclockwise rotation, and the function

$$H(u)=\big\{f(|\alpha\cdot u|)-f(|T\beta\cdot u|)\big\}\big\{g(|\beta\cdot u|)-g(|T\alpha\cdot u|)\big\},\qquad u\in S^1.$$

By Pythagoras, we have, for any $u\in S^1$,

$$|\alpha\cdot u|^2+|T\alpha\cdot u|^2=|\beta\cdot u|^2+|T\beta\cdot u|^2=\|u\|_2^2=1.$$

It follows that $|\alpha\cdot u|^2-|T\beta\cdot u|^2=|\beta\cdot u|^2-|T\alpha\cdot u|^2$, which in turn implies that $|\alpha\cdot u|\ge|T\beta\cdot u|\Leftrightarrow|\beta\cdot u|\ge|T\alpha\cdot u|$. Since $f$ is nonincreasing and $g$ is nondecreasing, we have $H(u)\le0$ for all $u\in S^1$, and thus $\mathrm{E}[H(U)]\le0$. On the other hand, it follows from the rotational symmetry of $U$ and $T\beta\cdot T\alpha=\alpha\cdot\beta$ that

$$\mathrm{E}[f(|\alpha\cdot U|)\,g(|\beta\cdot U|)]=\mathrm{E}[f(|T\beta\cdot U|)\,g(|T\alpha\cdot U|)].$$
Similarly, since $\alpha\cdot T\alpha=T\beta\cdot\beta=e_1\cdot e_2=0$, we have

$$\mathrm{E}[f(|\alpha\cdot U|)\,g(|T\alpha\cdot U|)]=\mathrm{E}[f(|T\beta\cdot U|)\,g(|\beta\cdot U|)]=\mathrm{E}[f(|e_1\cdot U|)\,g(|e_2\cdot U|)].$$

Putting the last three equations together yields

$$2\,\mathrm{E}[f(|\alpha\cdot U|)\,g(|\beta\cdot U|)]-2\,\mathrm{E}[f(|e_1\cdot U|)\,g(|e_2\cdot U|)]\ \le\ 0,$$

which proves (8). This concludes the proof.

Appendix A. Technical lemmas

The first lemma contains well-known formulas from matrix analysis for the inverse of a $2\times2$ block matrix; see, e.g., Theorem 2.1 of Lu & Shiou (2002).

Lemma 1. Let $\Sigma$ and $M$ be symmetric matrices of width at least 2, each partitioned into $2\times2$ block matrices.

(i) If $A$ is invertible and the Schur complement $\Sigma/A=C-B^\top A^{-1}B$ is invertible, then

$$\Sigma=\begin{bmatrix}A&B\\B^\top&C\end{bmatrix}\ \Rightarrow\ \Sigma^{-1}=\begin{bmatrix}A^{-1}+A^{-1}B(\Sigma/A)^{-1}B^\top A^{-1}&-A^{-1}B(\Sigma/A)^{-1}\\-(\Sigma/A)^{-1}B^\top A^{-1}&(\Sigma/A)^{-1}\end{bmatrix}.$$

(ii) If $R$ is invertible and the Schur complement $M/R=P-QR^{-1}Q^\top$ is invertible, then

$$M=\begin{bmatrix}P&Q\\Q^\top&R\end{bmatrix}\ \Rightarrow\ M^{-1}=\begin{bmatrix}(M/R)^{-1}&-(M/R)^{-1}QR^{-1}\\-R^{-1}Q^\top(M/R)^{-1}&R^{-1}+R^{-1}Q^\top(M/R)^{-1}QR^{-1}\end{bmatrix}.$$

The second lemma states that multivariate normal random vectors follow the componentwise convex order according to the Loewner order of their covariance matrices; see, e.g., Shaked & Shanthikumar (2007, Example 7.A.26).

Lemma 2. Let $\Sigma_1,\Sigma_2$ be two $d\times d$ positive definite matrices, and suppose that $Y\sim\mathcal{N}_d(0_d,\Sigma_1)$ and $X\sim\mathcal{N}_d(0_d,\Sigma_2)$. If $\Sigma_1\preceq\Sigma_2$, then $Y$ is smaller or equal to $X$ in the componentwise convex order, written $Y\le_{\mathrm{ccx}}X$, meaning that for every componentwise convex function $\psi:\mathbb{R}^d\to\mathbb{R}$, we have $\mathrm{E}[\psi(Y)]\le\mathrm{E}[\psi(X)]$.

Funding

Guolie Lan's research is supported by the National Natural Science Foundation of China (No. 12171335) and the Guangdong Provincial Natural Science Foundation (No. 2023A1515012170). Frédéric Ouimet's Research Associate position at McGill University is funded through Christian Genest's research grants (Grant RGPIN-2024-04088 [Natural Sciences and Engineering Research Council of Canada] and Grant 950-231937 [Canada Research Chairs Program]).
Frédéric Ouimet's postdoctoral fellowship is funded through the Natural Sciences and Engineering Research Council of Canada (Grant RGPIN-2024-05794 to Anne
MacKay). Wei Sun's research is funded through the Natural Sciences and Engineering Research Council of Canada (Grant RGPIN-2025-06779).

References

Benítez, C., Sarantopoulos, Y., & Tonge, A. 1998. Lower bounds for norms of products of polynomials. Math. Proc. Cambridge Philos. Soc., 124(3), 395–408.
Edelmann, D., Richards, D., & Royen, T. 2023. Product inequalities for multivariate Gaussian, gamma, and positively upper orthant dependent distributions. Statist. Probab. Lett., 197, 109820, 8 pp.
Frenkel, P. E. 2008. Pfaffians, Hafnians and products of real linear functionals. Math. Res. Lett., 15(2), 351–358.
Genest, C., & Ouimet, F. 2022. A combinatorial proof of the Gaussian product inequality beyond the MTP2 case. Depend. Model., 10(1), 236–244.
Genest, C., & Ouimet, F. 2023. Miscellaneous results related to the Gaussian product inequality conjecture for the joint distribution of traces of Wishart matrices. J. Math. Anal. Appl., 523(1), 126951, 10 pp.
Genest, C., Ouimet, F., & Richards, D. 2024. On the Gaussian product inequality conjecture for disjoint principal minors of Wishart random matrices. Electron. J. Probab., 29, Paper No. 166, 26 pp.
Genest, C., Ouimet, F., & Richards, D. 2025. An explicit Wishart moment formula for the product of two disjoint principal minors. Proc. Amer. Math. Soc., 153(3), 1299–1311.
Herry, R., Malicet, D., & Poly, G. 2024. A short proof of a strong form of the three dimensional Gaussian product inequality. Proc. Amer. Math. Soc., 152(1), 403–409.
Hu, Z.-C., Zhao, H., & Zhou, Q.-Q. 2023. Quantitative versions of the two-dimensional Gaussian product inequalities. J. Inequal. Appl., Paper No. 2, 11 pp.
Kagan, A. M., Linnik, Yu. V., & Rao, C. R. 1973. Characterization Problems in Mathematical Statistics. John Wiley & Sons, New York-London-Sydney.
Karlin, S., & Rinott, Y. 1981.
Total positivity properties of absolute value multinormal variables with applications to confidence interval estimates and related probabilistic inequalities. Ann. Statist., 9(5), 1035–1049.
Kim, B., & Kim, J. 2025. Extension of a strong form of the three-dimensional Gaussian product inequality. Statist. Probab. Lett., 216, 110276, 6 pp.
Kim, B., Kim, J., & Kim, J. 2025. Three-dimensional Gaussian product inequality with positive integer order moments. J. Math. Anal. Appl., 542(2), 128804, 21 pp.
Lan, G., Hu, Z.-C., & Sun, W. 2020. The three-dimensional Gaussian product inequality. J. Math. Anal. Appl., 485(2), 123858, 19 pp.
Li, W. V., & Wei, A. 2012. A Gaussian inequality for expected absolute products. J. Theoret. Probab., 25(1), 92–99.
Lu, T.-T., & Shiou, S.-H. 2002. Inverses of 2×2 block matrices. Comput. Math. Appl., 43(1-2), 119–129.
Malicet, D., Nourdin, I., Peccati, G., & Poly, G. 2016. Squared chaotic random variables: New moment inequalities with applications. J. Funct. Anal., 270(2), 649–670.
Russell, O., & Sun, W. 2022a. An opposite Gaussian product inequality. Statist. Probab. Lett., 191, 109656, 6 pp.
Russell, O., & Sun, W. 2022b. Some new Gaussian product inequalities. J. Math. Anal. Appl., 515(2), 126439, 21 pp.
Russell, O., & Sun, W. 2023. Moment ratio inequality of bivariate Gaussian distribution and three-dimensional Gaussian product inequality. J. Math. Anal. Appl., 527(1, part 2), 127410, 27 pp.
Russell, O., & Sun, W. 2024. Using