convex cone $\mathbb{S}^d_+$ of $d\times d$ symmetric PSD matrices in the maximum entry-wise distance. Thus, one replaces $\hat\Sigma_n$ by $\hat\Sigma_{n,\mathrm{PSD}}\in\operatorname*{arg\,min}_{S\in\mathbb{S}^d_+}\max_{1\le j,k\le d}\big|S_{j,k}-\hat\Sigma_{n,j,k}\big|$. By the minimizing property of $\hat\Sigma_{n,\mathrm{PSD}}$ and $\Sigma\in\mathbb{S}^d_+$, the triangle inequality implies that $\max_{1\le j,k\le d}\big|\ldots$
https://arxiv.org/abs/2504.08435v1
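The replacement of $\hat\Sigma_n$ by a nearby PSD matrix can be sketched numerically. The paper projects in the maximum entry-wise distance; the eigenvalue-clipping projection below minimizes the Frobenius distance instead and is only an easily computable illustrative surrogate, not the paper's estimator:

```python
import numpy as np

def nearest_psd_frobenius(S):
    """Clip negative eigenvalues at zero to obtain a PSD matrix.

    Note: the estimator in the text minimizes the *maximum entry-wise*
    distance over the PSD cone; eigenvalue clipping solves the
    Frobenius-norm projection and is used here only as an illustration
    of "replacing S by a nearby PSD matrix".
    """
    S = (S + S.T) / 2.0                      # symmetrize first
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T  # V diag(w_+) V^T

# usage: a symmetric matrix with one negative eigenvalue
S = np.array([[1.0, 0.9], [0.9, -0.2]])
S_psd = nearest_psd_frobenius(S)
assert np.all(np.linalg.eigvalsh(S_psd) >= -1e-10)
```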
that for a constant $C$ depending only on $b_2$, $c$, and $m$,
$$P\bigg(\max_{1\le j,k\le d}\big|\tilde\Sigma_{n,j,k}-\hat\Sigma_{n,j,k}\big|>C\Big(\eta_n^{1-\frac2m}+\Big[\frac{\log(dn)}{n}\Big]^{\frac12-\frac1m}\Big)\bigg)\le\frac{12}{n},\qquad(\mathrm{D.5})$$
which together with the triangle inequality and Theorem D.…
13) immediately as in the end of the proof of Theorem 2.1. Next, observe that (granted the quotients are well-defined)
$$A_n:=\bigg|\max_{j=1,\dots,d}\frac{1}{\sqrt n\,\tilde\sigma_{n,j}}\sum_{i=1}^n\big[\varphi_{\hat\alpha_j,\hat\beta_j}(\tilde X_{i,j})-\mu_j\big]-\max_{j=1,\dots,d}\frac{1}{\sqrt n\,\sigma_{2,j}}\sum_{i=1}^n\big[\ldots$$
absolute constant $C$ (since $\Sigma_{0,j,j}=1$ for all $j=1,\dots,d$). Consider first the case of
$$C_1\bigg[\eta_n^{1-\frac2m}+\Big(\frac{\log(dn)}{n}\Big)^{\frac12-\frac1m}\bigg]\le\frac{b_2}{2},\qquad(\mathrm{E.7})$$
where $C_1$ is the constant from Theorem 3.1 depending only on $b_2$, $c$, and $m$. Thus, by that theorem there exists a set $E_n$ of probability at…
by the Gaussian-to-Gaussian comparison inequality as stated in Proposition 2.1 of Chernozhuokov et al. (2022),
$$B_n\le C\Big(\log^2(d)\max_{1\le j,k\le d}\big|\tilde\Sigma_{n,j,k}-\Sigma_{j,k}\big|\Big)^{1/2}\le C\bigg(\log^2(d)\bigg[\eta_n^{1-\frac2m}+\Big(\frac{\log(dn)}{n}\Big)^{1\ldots}$$
arXiv:2504.08482v1 [math.ST] 11 Apr 2025
Winsorized mean estimation with heavy tails and adversarial contamination
Anders Bredahl Kock, University of Oxford, Department of Economics, 10 Manor Rd, Oxford OX1 3UQ, anders.kock@economics.ox.ac.uk
David Preinerstorfer, WU Vienna University of Economics and Business, Institute for…
https://arxiv.org/abs/2504.08482v1
settings. Lugosi and Mendelson (2021) have shown that a sample-split based winsorized mean estimator has sub-Gaussian concentration properties in an adversarial contamination setting. The multivariate case was studied as well. In the present paper, we focus on the univariate case and use the ideas in Lugosi and Mendelson …
of) a random variable with another one with a continuous cdf itself has a continuous cdf. Thus, continuity of the cdf of the observations can be enforced (without changing $\mu$) by adding, e.g., mean zero Gaussian noise to the $\tilde X_i$ and thus implicitly to the unobserved $X_i$.

3 Performance guarantees for known $\eta$

We first study …
of $c$ or, equivalently, $\lambda_{1,c}$ also determines a value of $\lambda_{2,c}(\delta,n)$. In contrast to $\lambda_{1,c}$, there is no natural lower bound for $\lambda_{2,c}(\delta,n)$, so we focus on allowing $\lambda_{1,c}$ arbitrarily close to one, while keeping $\lambda_{2,c}(\delta,n)$ small such that $\varepsilon_c<1/2$. We write $\hat\alpha=\hat\alpha_c=\tilde X^*_{\lceil\varepsilon_c n\rceil}$, $\hat\beta=\hat\beta_c=\tilde X^*_{\lceil(1-\varepsilon_c)n\rceil}$ for $\varepsilon_c\in(0,1/2)$, and $\hat\mu_{n,c}(\eta)=\frac1n\sum…$
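A minimal numerical sketch of this estimator, assuming only that $\hat\alpha$ and $\hat\beta$ are the $\lceil\varepsilon_c n\rceil$-th and $\lceil(1-\varepsilon_c)n\rceil$-th order statistics and that observations are clipped to $[\hat\alpha,\hat\beta]$ before averaging; the derivation of $\varepsilon_c$ from $c$, $\delta$, $n$, and $\eta$ in (5) is not reproduced here:

```python
import numpy as np

def winsorized_mean(x, eps_c):
    """Winsorized mean as in the construction above: clip at the
    ceil(eps_c*n)-th and ceil((1-eps_c)*n)-th order statistics, then
    average. eps_c in (0, 1/2) is taken as given (computing it from
    (c, delta, n, eta) as in the paper's (5) is not reproduced)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xs = np.sort(x)
    alpha_hat = xs[int(np.ceil(eps_c * n)) - 1]          # lower clipping point
    beta_hat = xs[int(np.ceil((1.0 - eps_c) * n)) - 1]   # upper clipping point
    return float(np.clip(x, alpha_hat, beta_hat).mean())

# a single gross outlier barely moves the winsorized mean
sample = np.array([1.0] * 99 + [1e6])
assert winsorized_mean(sample, 0.05) == 1.0
```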
3.2. A conservative upper bound on the contamination rate suffices: for $\bar\eta\ge\eta$, the estimator $\hat\mu_{n,c}(\bar\eta)$ satisfies the bound in Theorem 3.1 (as well as the more general one in Theorem C.1 in the appendix) with $\bar\eta$ replacing $\eta$.

4 Adapting to unknown $\eta$ by Lepski's method

In practice the least upper bound $\eta$ on the contamination rate …
estimator as $\tilde\mu_{n,c}=\hat\mu_{n,c}(\eta_{\hat g})$, which is an element of the grid of estimators $\{\hat\mu_{n,c}(\eta_j):j\in[g_{\max}]\}$ and thus arguably more natural than $\hat\mu_n$. In Remark D.1 in the appendix we establish an upper bound on $|\tilde\mu_{n,c}-\mu|$ similar to that in Theorem 4.1.

5 Relaxed moment assumptions

So far our results have …
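The adaptation step can be sketched generically. The helper below is a hypothetical illustration of a Lepski-type rule, assuming each grid estimator $\hat\mu_{n,c}(\eta_j)$ comes with a confidence radius: it intersects the confidence intervals along the grid and stops just before the first empty intersection; it is not the paper's exact selection rule:

```python
import numpy as np

def lepski_select(estimates, radii):
    """Hypothetical Lepski-type selection sketch: walk along the grid of
    estimators (ordered by increasing candidate contamination level),
    intersect their confidence intervals, and stop at the first empty
    intersection. Returns the number of intervals used and a point in
    their common intersection."""
    lo, hi = -np.inf, np.inf
    g_star = 0
    for mu, r in zip(estimates, radii):
        new_lo, new_hi = max(lo, mu - r), min(hi, mu + r)
        if new_lo > new_hi:            # first empty intersection: stop
            break
        lo, hi = new_lo, new_hi
        g_star += 1
    return g_star, (lo + hi) / 2.0

g_star, mu_tilde = lepski_select([0.0, 0.1, 5.0], [1.0, 1.0, 1.0])
assert g_star == 2 and abs(mu_tilde - 0.05) < 1e-12
```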
Devroye, L., M. Lerasle, G. Lugosi, and R. I. Oliveira (2016): "Sub-Gaussian mean estimators," Annals of Statistics, 44, 2695–2725.
Diakonikolas, I., G. Kamath, D. Kane, J. Li, A. Moitra, and A. Stewart (2019): "Robust estimators in high dimensions without the computational intractability," SIAM Journal on Computi…
$Q_p(Z)$ the $p$-quantile of the distribution of $Z$, that is
$$Q_p(Z)=\inf\{z\in\mathbb R:P(Z\le z)\ge p\}.\qquad(\mathrm{A.1})$$
Theorem 3.1 is a special case of Theorem C.1 below. To prove the latter, we follow the proof strategy used in Section 2.1 of Lugosi and Mendelson (2021): we first establish in Lemma B.3 that on a set $G_n$ of probab…
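The empirical analogue of definition (A.1) is straightforward; the sketch below (names illustrative) returns the smallest order statistic at which the empirical cdf reaches $p$:

```python
import numpy as np

def empirical_quantile(sample, p):
    """Empirical analogue of Q_p(Z) = inf{z : P(Z <= z) >= p}:
    the smallest observation z such that the empirical cdf at z
    is at least p."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    k = max(int(np.ceil(p * n)), 1)    # smallest k with k/n >= p
    return xs[k - 1]

assert empirical_quantile([1, 2, 3, 4], 0.5) == 2.0
```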
number with that property), the lemma remains valid for any $\bar\eta\ge\eta$.

Lemma B.3. Fix $c\in(1,\sqrt{1.5})$, $n\in\mathbb N$, $\delta\in(0,1)$. Furthermore, let $X_1,\dots,X_n$ be i.i.d. with continuous cdf and let (1) be satisfied. If $\varepsilon_c\in(0,1/2)$ with $\varepsilon_c$ as defined in (5), each of (B.2)–(B.5) below holds with probability at least $1-\delta/6$:
$$\tilde X^*_{\lceil\varepsilon_c n\rceil}>Q_{\varepsilon_c-c^{-1}\varepsilon_c}(X_1);\qquad(\mathrm{B.2})$$
$$\tilde X^*_{\lceil(1-\ldots}$$
$2(c^2-1)]\cdot b$, which is the case for $\varepsilon_c$ as in (5).

Subcase $b>(\varepsilon_c-c^{-1}\varepsilon_c)$: In this subcase
$$b>\varepsilon_c-c^{-1}\varepsilon_c=(\varepsilon_c+c^{-1}\varepsilon_c)\cdot\frac{\varepsilon_c-c^{-1}\varepsilon_c}{\varepsilon_c+c^{-1}\varepsilon_c}=\frac{c-1}{c+1}(\varepsilon_c+c^{-1}\varepsilon_c),$$
such that $(\varepsilon_c+c^{-1}\varepsilon_c)<\frac{c+1}{c-1}\cdot b$. Thus, (B.9) implies
$$S_n>(\varepsilon_c+c^{-1}\varepsilon_c)n-\sqrt{2\tfrac{c+1}{c-1}bn}-bn/3=(\varepsilon_c+c^{-1}\varepsilon_c)n-\Big(\sqrt{2\tfrac{c+1}{c-1}}+1/3\Big)\ldots$$
now establish (B.14). Using Hölder's inequality to bound the first two summands on the right-hand side of (B.16) and the first inequality of Lemma B.2 to bound the last two summands, it follows that
$$\mathbb E\,\varphi_{Q_{\varepsilon-a},Q_{1-\varepsilon-a}}(X_1)-\mu\ \ge\ -\sigma_m(\varepsilon-a)^{1-\frac1m}-\sigma_m(\varepsilon+a)^{1-\frac1m}-\frac{\sigma_m}{(\varepsilon-a)^{\frac1m}}(\varepsilon-a)-\frac{\sigma_m}{(1-\varepsilon-a)^{\frac1m}}(\varepsilon+a)=-2\sigma_m(\varepsilon-a)^{1-\frac1m}-\sigma_m\big(\ldots$$
3.1 (cf. Remark 3.2) that $\mu\in I(\eta_j)$ with probability at least $1-\delta/g_{\max}$. If $\varepsilon_c(\eta_j)\ge0.5$ then $I(\eta_j)=\mathbb R$ and $\mu\in I(\eta_j)$ with probability one. Thus, by the union bound, $\mu\in\bigcap_{j=1}^{g^*}I(\eta_j)$ with probability at least $1-\delta$. On $\big\{\mu\in\bigcap_{j=1}^{g^*}I(\eta_j)\big\}$, which we shall suppose to occur in…
, which has the same distribution as the random variable in (E.6), so that we can analogously conclude (since at most $\eta n$ of the $\tilde X_i$ differ from $X_i$) that with probability at least $1-\delta/6$ (redefining $\tilde S_n$)
$$\tilde S_n=\sum_{i=1}^n\mathbb 1\big(\tilde X_i\le Q_{\varepsilon'_c+c^{-1}\varepsilon'_c}(X_1)\big)>(\varepsilon'_c+c^{-1}\varepsilon'_c)n-\sqrt{0.5\log(6/\delta)n}-\eta n\ \ge\ \varepsilon'_c n.$$
Thus…
arXiv:2504.08513v1 [math.PR] 11 Apr 2025
Measure Theory of Conditionally Independent Random Function Evaluation
Felix Benning, felix.benning@uni-mannheim.de, University of Mannheim, April 14, 2025
Abstract. The next evaluation point $x_{n+1}$ of a random function $f=(f(x))_{x\in\mathcal X}$ (a.k.a. stochastic process or random field) is often ch…
https://arxiv.org/abs/2504.08513v1
they correspond to stable physical states. This research on high-dimensional random functions has also led to insights about the loss landscapes found in machine learning [e.g. 6, 4].

Measurability of random evaluations. The evaluation $f(X)$ of a random function $f=(f(x))_{x\in\mathcal X}$ at a random location $X$ is a complicated measu…
. (1) In practice, this hope is often naively treated as self-evident [e.g. 23, Lemma 5.1, p. 3258]. But while the collection of probability kernels $(\kappa_{x_{[0:n]}})_{x_{[0:n]}\in\mathcal X^{n+1}}$ may be treated as a function in $x_{[0:n]}$, there is no guarantee this function is even measurable. This means that the term on the right in (1) might…
$N_x$, specifically $N_x=\{U=x\}$. But the union of these null sets over all possible values of $x$ is not a null set. A joint null set can ensure that the consistency of one kernel implies the other. A sufficient criterion for such a joint null set is a notion of continuity. The following result implies that a continuous, joint co…
for all previsible sequences $(X_k)_{k\in\mathbb N_0}$ and all $A\in\mathcal B(E)$
$$P(Z\in A\mid\mathcal F^X_n)\stackrel{\text{a.s.}}{=}\kappa\big(W,f(X_0),\dots,f(X_n),X_{[0:n]};A\big).$$
Footnote 3: This is a non-trivial problem by itself [see e.g. 1, Thm. 18.19]. Footnote 4: There always exists such a random element $W$ since the identity map from the measurable space $(\Omega,\mathcal A)$ into $(\Omega,\mathcal F)$ is measurable and clearly generate…
a consistent joint conditional distribution $\kappa$ for $Z,f(x)$ given $\mathcal F$. That is,
$$P(Z,f(X)\in B\mid\mathcal F)(\omega)=\kappa(\omega,X(\omega);B)$$
for all $\mathcal F$-measurable $X$. Furthermore, $x\mapsto\kappa(\omega,x;\cdot)$ is continuous with respect to the weak topology on the space of measures for all $\omega\in\Omega$. If $\tilde\kappa$ is another joint conditional distribution for $Z,f(x)$ given $\mathcal F$, that is …
example, the random function was a dependent variable. In Theorem 3.1 we gave a sufficient condition for a unique continuous and consistent conditional distribution to exist in this case where the function value is the dependent variable. It is now time to consider the case where functions evaluated at random point…
$$\ldots y;A_3\times A_2)]\stackrel{(5)}{=}\mathbb E[\mathbb 1_{A_1}(\xi_1)\mathbb 1_{A_2}(\xi^y_2)\mathbb 1_{A_3}(\xi_3)]\stackrel{(6)}{=}\mathbb E[\mathbb 1_{A_1}(\xi_1)\mathbb 1_{A_2}(\xi^y_2)\kappa_{3|2,1}(\xi_1,\xi^y_2;A_3)]\stackrel{(*)}{=}\mathbb E\Big[\mathbb 1_{A_1}(\xi_1)\int\mathbb 1_{A_2}(x_2)\kappa_{3|2,1}(\xi_1,x_2,y;A_3)\,\kappa_{3,2|1}(\xi_1,y;E_3\times dx_2)\Big]$$
Note that the constant function $g\equiv y$ is always measurable for the application of (5). The last step $(*)$ is implied by disintegra…
$(X_k)_{k\in\mathbb N_0}$ and $B\in\mathcal B(E)$
$$P(Z\in B\mid\mathcal F^X_n)\stackrel{\text{a.s.}}{=}\kappa\big(W,f_0(X_0),\dots,f_n(X_n),X_{[0:n]};B\big).$$
(ii) Let $\kappa$ be a joint conditional distribution for $Z,f_n(x_n)\mid\mathcal F,(f_k(x_k))_{k\in[0:n)}$ such that $x_n\mapsto\kappa(y_{[0:n)},x_{[0:n]};\cdot)$ is continuous in the weak topology for all $x_{[0:n)}\in\mathcal X^n$ and all $y_{[0:n)}\in\mathcal X^n$. The…
$U$) for some measurable function $\tilde h$. Since $U$ is independent from $(Z,f,\mathcal F_{k-1})$, it is independent from $\tilde W$ as $\sigma(\tilde W)\subseteq\sigma(f,\mathcal F_{k-1})$. Using Prop. 6.13 from Kallenberg [15] again, $X_k$ is thereby independent from $(Z,f)$ conditional on $\tilde W$. What remains is the proof of (ii). Let $x_n$ be fixed and define $\tilde Z=(Z,f_n(x_n))$. Since $\kappa$ is a joint cond…
extension theorem [e.g. 17, Sec. 14.3] allows for the construction of random measures on product spaces. This is only compatible with the product topology, i.e. the topology of point-wise convergence. But the evaluation map is generally not continuous with respect to this topology [10, Prop. 2.6.11]. (iii) ensures …
thereby a compact exhaustion. It is straightforward to check that the metric defined in (ii) is a metric, so we will only prove that this metric induces the compact-open topology. (I) The compact-open topology is a subset of the metric topology. We need to show that the sets $M(K,U)$ are open with respect to the metric. This …
satisfy (11). Observe that $g\in V$ since for all $j$, $g(C_j)\subseteq g(O_{x_j})\subseteq B_{r/5}(f(x_j))\subseteq U_j$. Pick any other $h\in V$. Then for all $x\in K_N$ there exists $i$ such that $x\in O_{x_i}\subseteq C_i$, and by definition of $V$ this implies $h(x)\in U_i$ and also $g(x)\in U_i$ and thereby $d_Y(h(x),g(x))\le r/2$. This uniform bound implies $d_N(h,g)\le r/2$ and therefore $d(g,h)\le\Big(\sum\ldots$
9789811273919. doi: 10.1142/9789811273926_0029.
[3] M. Bayati and A. Montanari. The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing. IEEE Transactions on Information Theory, 57(2):764–785, Feb. 2011. ISSN 1557-9654. doi: 10.1109/TIT.2010.2094817.
[4] A. Choromanska, M. Henaff, M.…
of Locating the Maximum Point of an Arbitrary Multipeak Curve in the Presence of Noise. Journal of Basic Engineering, 86(1):97–106, Mar. 1964. ISSN 0021-9223. doi: 10.1115/1.3653121.
[20] G. Matheron. Principles of geostatistics. Economic Geology, 58(8):1246–1266, Dec. 1963. ISSN 0361-0128. doi: 10.2113/gsecongeo.5…
Regularized Infill Criteria for Multi-objective Bayesian Optimization with Application to Aircraft Design R. Grapin∗, ISAE-SUPAERO, Université de Toulouse, 31055 Toulouse, FRANCE Y. Diouane†, Polytechnique Montréal, Montréal, QC, Canada. J. Morlier‡, ICA, Université de Toulouse, ISAE-SUPAERO, MINES ALBI, UPS, INSA, CNR...
https://arxiv.org/abs/2504.08671v1
iterations. The super efficient global optimization with mixture of experts (SEGOMOE) framework [5–8] is an extension of the well-known unconstrained efficient global optimization framework [4] to handle constrained single-objective optimization problems. For multi-objective Bayesian optimization problems, scalarization techni…
Mixture Of Experts has been proposed by ONERA & ISAE-SUPAERO. The initial algorithm, Super Efficient Global Optimization (SEGO) [28], has been enhanced by the use of Mixture of Experts (MOE) [6], Kriging with Partial Least Squares (KPLS) [29] for high-dimensional problems, different acquisition function criteria for highly nonlinear objective fun…
to include additional information can be very useful for an efficient enrichment step. In mono-objective optimization, regularization techniques are known to lead to a significant improvement within BO. The Watson and Barnes ($WB2$) [21] and the scaled $WB2$ [5] criteria are among existing regularization techniques in the literature. …
defined such that for two sets of points $A$ and $B$, if the elements of $A$ dominate those of $B$ then $I(A)\le I(B)$. One efficient way to estimate the compliance indicator is given by inverted generational distance plus ($IGD^+$) [44]. The $IGD^+$ indicator is defined as follows, for a given se…
†https://github.com/SMTorg/smt
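For a minimization problem, $IGD^+$ averages, over a reference set $Z$, the smallest dominance-aware distance $d^+(a,z)=\lVert\max(a-z,0)\rVert_2$ to the approximation set $A$. A small sketch following the standard definition (variable names are illustrative):

```python
import numpy as np

def igd_plus(approx_front, ref_front):
    """IGD+ for minimization: average over reference points z of the
    smallest dominance-aware distance d+(a, z) = ||max(a - z, 0)||_2
    over the approximation set A. Unlike plain IGD, points that
    dominate z contribute zero distance."""
    A = np.asarray(approx_front, dtype=float)
    Z = np.asarray(ref_front, dtype=float)
    total = 0.0
    for z in Z:
        d_plus = np.linalg.norm(np.clip(A - z, 0.0, None), axis=1)
        total += d_plus.min()
    return total / len(Z)

# a set that dominates the reference point has indicator 0
assert igd_plus([[-1.0, -1.0]], [[0.0, 0.0]]) == 0.0
```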
= sum) given by Eq. (4). As a conclusion from our numerical tests, in particular if the targeted problem is high dimensional ($d>10$), we recommend using the combination of $PI$ and the regularization based on the maximum of the mean prediction (i.e., reg = max). (a) Unconstrained problems (from left to right: ZDT1, ZDT2…
the "CERAS" bi-objective optimization problem.

Function/variable                          Quantity   Range
Minimize
  Fuel mass                                1
  Operating Weight Empty                   1
  Total objective functions                2
with respect to
  x position of Mean Aerodynamic Chord     1          [16., 18.] (m)
  Wing aspect ratio                        1          [5., 11.]
  Horizontal tail aspect ratio             1          [1.5, 6.]
  Wing taper aspect ratio                  1          […
factor of 1.3 compared to the $MPI$ criterion (reg = sum). For illustration, four points have been selected on the Pareto front and their associated "CERAS" geometries are represented in Fig. 10: the first point is given by the minimum of fuel burn (min $f_1$), two points are located along the front, and the fourth one is associated to t…
solution lies on the boundary of the first constraint for the points that also satisfy the second constraint, making the Pareto optima a discontinuous set as shown in Fig. 13.
• The third constrained problem is from the Osyczka and Kundu paper [41]:
$$f:\mathbb R^6\to\mathbb R^2,\quad x\mapsto\Big[-\big(25(x_1-2)^2+(x_2-2)^2+(x_3-1)^2+(x_4-4)^2+(x_5-1)^2\big),\ x_1^2+x_2^2+\ldots$$
, edited by G. Rudolph, T. Jansen, N. Beume, S. Lucas, and C. Poloni, Springer Berlin Heidelberg, Berlin, Heidelberg, 2008, pp. 784–794.
[14] Emmerich, M., Giannakoglou, K., and Naujoks, B., "Single- and multiobjective evolutionary optimization assisted by Gaussian random field metamodels," IEEE Transactions on Evolutio…
Stilz, V., and Regis, R., "Improvement of efficient global optimization with application to aircraft wing design," 17th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Washington D.C., USA, 2016, p. 4001. https://doi.org/10.2514/6.2016-4001.
[35] Bartoli, N., Lefebvre, T., Dubreuil, S., Olivanti, R., Bons, N., Martins, J., Bouhl…
ESTIMATION OF CHANGE POINTS FOR NON-LINEAR (AUTO-)REGRESSIVE PROCESSES USING NEURAL NETWORK FUNCTIONS CLAUDIA KIRCH Otto-von-Guericke-University Magdeburg, Germany, Department of Mathematics, Institute of Mathematical Stochastics STEFANIE SCHWAAR Fraunhofer ITWM, Fraunhofer Platz 1, Kaiserslautern andUniversity of Appl...
https://arxiv.org/abs/2504.08956v1
that they do have power against all kinds of changes in the parameters, where Davis et al. [1995] used a likelihood-ratio test statistic while Hušková et al. [2007] used the corresponding (equivalent but easier to compute) score-type statistic. To move away from the linear structure assumed in these papers, Kirch…
$<m=m(n)=\lfloor\lambda n\rfloor\le n$, $0<\lambda\le1$, is called the change point, and the regression functions $g_1$ and $g_2$ are assumed to be unknown. We use a single-layer neural network as an approximation for the non-linear regression function, allowing us to construct change point methods based on least-squares estimation. The use of a shallow neu…
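Such a single-hidden-layer approximation has the generic form $f(y,\theta)=\nu_0+\sum_{i=1}^h\nu_i\,\sigma(\alpha_i+\beta_i^\top y)$ with a sigmoid activation; the parameter layout in the sketch below is an illustrative assumption, not necessarily the paper's exact convention:

```python
import numpy as np

def shallow_nn(y, theta):
    """Generic single-hidden-layer network with h hidden neurons:
    f(y, theta) = nu0 + sum_i nu_i * sigmoid(alpha_i + beta_i^T y).
    The layout theta = (nu0, nu, alpha, beta) with beta of shape (h, p)
    is an illustrative assumption."""
    nu0, nu, alpha, beta = theta
    hidden = 1.0 / (1.0 + np.exp(-(alpha + beta @ y)))  # shape (h,)
    return nu0 + nu @ hidden

# h = 1 hidden neuron with zero weights: sigmoid(0) = 0.5
theta = (1.0, np.array([2.0]), np.array([0.0]), np.zeros((1, 3)))
assert abs(shallow_nn(np.ones(3), theta) - 2.0) < 1e-12
```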
which follow from the following assumptions that have also been used by Kirch and Kamgaing [2012]. However, because we use a universal test statistic capable of detecting all changes in the best approximating parameters, we have to make slightly stronger moment assumptions.
A.1 Let $\{X^{(1)}_t:t\in\mathbb Z\}$ and $\{X^{(2)}_t:t\in\mathbb Z\}$ be stationa…
$$V=\lim_{n\to\infty}\frac1n\,\mathbb E\Big[\nabla Q_n(\tilde\theta)\big(\nabla Q_n(\tilde\theta)\big)^T\Big]$$
and $M$ as in A.6. The second part of the proposition gives in particular the rate of convergence which is used in the proofs below.
Footnote 0: Let $\theta=(\nu_0,\mu_1,\dots,\mu_h)$ with $\mu_i=(\nu_i,\alpha_i,\beta_i)$. Footnote 1: A symmetry transformation $\pi_k$ of $(\nu_0,\mu_1,\dots,\mu_h)$ is defined as $\pi_k(\nu_0,\mu_1,\dots,\mu_h)=(\nu_0+\mu_k,\mu\ldots$
the inverse of $\Gamma$ in Theorem 3.1 a) – or a matrix of lower rank that also uncorrelates the remaining components in a similar manner. Since usually $\Gamma$ is not known and estimators are often not very precise even for a moderate number of unknown parameters, and even more so if the long-run covariance (as in the misspecifi…
analyses, we use the following estimator $\hat\Gamma=(\hat\Gamma_{ij})_{i,j=1,\dots,h(p+2)+1}$ for $\Gamma=(\Gamma_{ij})_{i,j=1,\dots,h(p+2)+1}$:
$$\hat\Gamma_{ij}=\frac{1}{n-h(p+2)-1}\Bigg(\sum_{t=1}^{k_0}q_i(t,\hat\theta^1_N)\,q_j(t,\hat\theta^1_N)^T+\sum_{t=k_0+1}^{n}q_i(t,\hat\theta^2_N)\,q_j(t,\hat\theta^2_N)^T\Bigg)\qquad(12)$$
with $q(t,\theta):=\nabla f(Y_t,\theta)\,(X_t-f(Y_t,\theta))$, $k_0=\operatorname*{arg\,max}_{1\le k<n}\frac{1}{\sqrt n}\big\|S(k;\hat\theta_N)\big\|_{\hat\Sigma^{-1}}$, and $\hat\Sigma$ the covariance matrix estimator. 3.2.c…
one from the mean change situation as in Antoch and Hušková [1999]. In fact, this result for the mean change is contained as a special case where the neural network has $h=0$ hidden neurons. The same holds true in a regression setup with only exogenous variables that are i.i.d.

4. Simulations and Application
4.1. Simul…
$\ldots+\exp(0.5(1+0.7X_{t-1})))^{-1}+\varepsilon_t$ for $t\le m$, and $X_t=\mu+\alpha\big(1+\exp(0.5(1+\beta X_{t-1}))\big)^{-1}+\varepsilon_t$ for $t>m$. We consider different changes, where some will affect the mean and one will not. In detail, we define the following models: GAR 1: $\mu=0.1$, $\alpha=1$, $\beta=0.7$; GAR 2: $\mu=0.5$, $\alpha=-1$, $\beta=0.7$; GAR 3: $\mu=0.5$, $\alpha=1$, $\beta=-0.7$; GAR 4: $\mu=0.5$, $\alpha=-1$, $\beta=-0.7$. The…
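A quick way to generate trajectories from such a model with a change point; the Gaussian noise, its scale, the seed, and the starting value are illustrative choices, not the paper's exact simulation design:

```python
import numpy as np

def simulate_gar(n, m, pre, post, sigma=1.0, seed=0):
    """Simulate X_t = mu + alpha * (1 + exp(0.5*(1 + beta*X_{t-1})))**(-1) + eps_t
    with parameters pre = (mu, alpha, beta) up to the change point m and
    post afterwards. Noise distribution, sigma, and seed are assumptions
    for illustration only."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n + 1)                      # x[0] is the starting value
    for t in range(1, n + 1):
        mu, alpha, beta = pre if t <= m else post
        link = 1.0 / (1.0 + np.exp(0.5 * (1.0 + beta * x[t - 1])))
        x[t] = mu + alpha * link + sigma * rng.normal()
    return x[1:]

path = simulate_gar(200, 100, pre=(0.1, 1.0, 0.7), post=(0.5, -1.0, 0.7))
assert path.shape == (200,)
```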
to one, particularly for autoregressive (AR) processes. In the case of mean shifts in AR processes, the test designed to detect changes in the mean shows a distinct advantage. Conversely, for threshold autoregressive (TAR) processes, the derivative-based test exhibits significantly better performance. Notably, under co...
parameter. In the following analysis, we use $\rho=0.02$ to obtain a smoother estimate. The Fuller-transformed returns better satisfy the assumptions of the NLAR model. For the change point detection (CPT) procedure and the corresponding estimator, a single model…
the significance level $\alpha=0.01$. Even for this level the change point is quite significant, so the dot-com bubble had a significant influence on the stock prices of the DAX.
[Figure: DAX index (roughly 2000–8000) over 01.2000–01.2008; panel (a) DAX, h=1…]
$Q_n(\theta)$ and its first and second derivative towards $E_\theta$. This result will be needed to prove the null asymptotics as given in Theorem 3.1.
Proposition 6.1. Let Assumptions A.1, A.3 and A.4 hold and $Q_n(\cdot)$ be as in (5) and $E_\theta$, $\theta\in\Theta$, as in (6). Then for model (2) and $\Theta\subset\mathbb R^q$ ($q=(p+d+2)h+1$) compact, we have under $H_0$ as well as under $H\ldots$
To prove c), first note that by consistency of $\hat A_n$ it holds that $\|\hat A_n^{1/2}A^{-1/2}-\mathrm{Id}\|=o_P(1)$. Hence
$$T_n(\eta,\gamma;\hat A_n)-T_n(\eta,\gamma;A)\le\max_{1\le k<n}w(k/n)\big\|\hat A_n^{1/2}A^{-1/2}\big(A^{1/2}S(k;\hat\theta_N)\big)-A^{1/2}S(k;\hat\theta_N)\big\|\le\|\hat A_n^{1/2}A^{-1/2}-\mathrm{Id}\|\,T_n(\eta,\gamma;A)=o_P(1).$$
The result for the consistency follows because by similar arguments
$$\big\|\mathbb E\big[\nabla f(Y_t,\tilde\theta)\big(X_t-f(Y_t,\tilde\theta)\big)\big]\big\|_{\hat A_n}=\big\|\mathbb E\big[\nabla f(Y_t,\tilde\theta)\ldots$$
6.4. Let $q(t,\theta)$ be as in N.3 and let Assumptions A.1–A.7 hold. Denote $\tilde S(a,b;\theta)=\sum_{t=a}^b\big(q(t,\theta)-\mathbb Eq(1,\theta)\big)$ and $\tilde S(k;\theta)=\tilde S(1,k;\theta)$. Then there exists $\varsigma\in(0,1)$ such that for fixed $\kappa>0$ we have
$$\max_{1\le k\le m-\kappa}\Big\|\frac{1}{m-k}\tilde S(k+1,m;\tilde\theta)\Big\|_A=\kappa^{-\frac{\varsigma}{4+2\varsigma}}O_P(1),\qquad(21)$$
$$\max_{1\le k\le m-\kappa}\Big\|\frac{1}{m-k}\big(\tilde S(k+1,m;\hat\theta_N)-\tilde S(k+1,m;\tilde\theta)\big)\Big\|_A=o_P(1),\qquad(22)$$
$$\max_{1\le k\le m}\frac1n\big\|\tilde S(k;\hat\theta\ldots$$
by (28), the law of large numbers and the $\sqrt n$-consistency of $\hat\theta_N$ for $\tilde\theta$ (see Proposition 2.1). For $s>\lambda$ the same arguments apply after splitting the sum at the change point $m$. In this case, it holds by the definition of $\tilde\theta$ that
$$E_s(\tilde\theta)=\mathbb E\big[\nabla f(Y_1,\theta)(X_1-f(Y_1,\theta))\big]\,\frac{s-\lambda}{1-\lambda}(s-\lambda),$$
which finishes the proof on noting that $\frac{s-\lambda}{1-\lambda}(s-\lambda\ldots$
$$\ldots\Big\|\frac{1}{m-k}\tilde S(k+1,m;\tilde\theta)\Big\|_A\big\|\mathbb Eq(1,\tilde\theta)\big\|_A=O(1)o_P(1)=o_P(1),$$
$$\max_{\alpha n<k<m-\kappa}\frac{B_5}{n(m-k)}\le 2w_\gamma^2(m/n)\frac mn\max_{\alpha n<k<m-\kappa}\Big\|\frac{1}{m-k}\tilde S(k+1,m;\tilde\theta)\Big\|_A\big\|\mathbb Eq(1,\tilde\theta)\big\|_A=O_P\big(\kappa^{-\frac{\varsigma}{4+2\varsigma}}\big).$$
By the lower bound in Lemma 6.6 (c) it holds $\max_{\alpha\le k<m}\frac{n(m-k)}{|B_6|}=O(1)$ as well as $B_6<0$ for $k<m$ with $\max_{1\le k<m-\kappa}B_6\le c$ for an arbitrary constant $c<0$. This implies $\max_{1\le k<m-\kappa}\,B_1+B_2+\ldots$
uniform convergence and the existence of the limit for $n\to\infty$, we finally conclude
$$\lim_{n\to\infty}\lim_{\kappa\to\infty}P(\hat m-m\le x)=\lim_{n\to\infty}\lim_{\kappa\to\infty}P\big(\hat m-m\le x,\,|\hat m-m|\le\kappa\big)=\lim_{\kappa\to\infty}\lim_{n\to\infty}P\big(\hat m-m\le x,\,|\hat m-m|\le\kappa\big)=P\big(\operatorname*{arg\,max}W_l\le x\big).$$

7. Results Simulation Studies
7.1. Representative of $\tilde\theta$. We run a…
Econometrics and Statistics.
Davis, R. A., Huang, D., and Yao, Y.-C. (1995). Testing for a Change in the Parameter Values and Order of an Autoregressive Model. The Annals of Statistics, 23(1):282–304.
Franke, J., Hefter, M., Herzwurm, A., Ritter, K., and Schwaar, S. (2022). Adaptive quantile computation for Brown…
BAYESIAN SHRINKAGE PRIORS SUBJECT TO LINEAR CONSTRAINTS
A Preprint
Zhi Ling, Saw Swee Hock School of Public Health, National University of Singapore, lingzhi@nus.edu.sg; Shozen Dan, Department of Mathematics, Imperial College London, shozen.dan21@imperial.ac.uk
April 15, 2025
ABSTRACT. In Bayesian regression models with catego…
https://arxiv.org/abs/2504.09052v1
approach for incorporating linear constraints into Bayesian regression models with multivariate normal priors. Given coefficient vector $\beta=(\beta_1,\beta_2,\dots,\beta_K)^\top\in\mathbb R^K$, such that
$$\beta_k\mid\lambda_k\sim\mathcal N\big(0,\lambda_k^2\big),\quad k=1,2,\dots,K\qquad(1)$$
where $\lambda_k$ can be either a fixed hyperparameter or a random variable that forms a Gaussian scale mixture. Cons…
guaranteed by Lemma 6) $\Omega=LL^\top$, with $L\in\mathbb R^{(K-J)\times(K-J)}$;
4: Draw a sample $z\sim\mathcal N(0,I_{K-J})$.
5: Set $\beta=m^*+MLz$.
6: return $\beta$;
Theorem 2. The above procedure returns a sample from Eq. 2.
Proof. The mean is $\mathbb E[\beta]=m^*+ML\,\mathbb E[z]=m^*$; the covariance is $\operatorname{Var}(\beta)=ML\operatorname{Var}(z)L^\top M^\top=MLL^\top M^\top=M\Omega M^\top=\Sigma^*$. Therefore, the above algorithm first simplifies the samp…
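The algorithm above relies on $m^*$, $M$, and the Cholesky factor $L$ of $\Omega$ from earlier steps. A self-contained way to realize the same target law $\mathcal N(m^*,\Sigma^*)$ with $\Sigma^*=D-DA^\top(ADA^\top)^{-1}AD$ (a $\mathcal N(0,D)$ prior conditioned on $A\beta=b$) is the standard conditioning identity sketched below; this is an alternative route for illustration, not the paper's algorithm:

```python
import numpy as np

def sample_constrained_gaussian(d_diag, A, b, rng):
    """Draw beta ~ N(0, diag(d_diag)) conditioned on A @ beta = b via the
    conditioning identity beta = beta0 + D A^T (A D A^T)^{-1} (b - A beta0),
    whose mean and covariance match m* and Sigma* in the text."""
    D = np.diag(d_diag)
    beta0 = rng.normal(0.0, np.sqrt(d_diag))        # unconstrained draw
    gain = D @ A.T @ np.linalg.inv(A @ D @ A.T)     # kriging-type gain matrix
    return beta0 + gain @ (b - A @ beta0)

rng = np.random.default_rng(0)
A = np.ones((1, 4))                                 # sum-to-zero constraint
beta = sample_constrained_gaussian(np.ones(4), A, np.zeros(1), rng)
assert abs(beta.sum()) < 1e-10                      # constraint holds exactly
```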
The Horseshoe prior [Carvalho et al.] adopts a local-global hierarchical structure for adaptive shrinkage in regression analysis. For each regression coefficient,
$$\beta_k\sim\mathcal N\big(0,\tau^2\lambda_k^2\big),\quad k=1,2,\dots,K,\qquad\lambda_k\sim C^+(0,1),\qquad\tau\sim C^+(0,1)\qquad(17)$$
The global scale parameter $\tau$ controls the overall shrinkage level, while the local scale parameter $\lambda_k$…
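Drawing from the hierarchy in (17) is direct, since a half-Cauchy $C^+(0,1)$ draw is the absolute value of a standard Cauchy draw (a minimal sketch, with illustrative names):

```python
import numpy as np

def sample_horseshoe_prior(K, rng):
    """Draw beta from the Horseshoe prior (17): tau ~ C+(0,1) globally,
    lambda_k ~ C+(0,1) locally, beta_k | lambda_k, tau ~ N(0, tau^2 lambda_k^2).
    A half-Cauchy C+(0,1) draw is |standard Cauchy| here."""
    tau = abs(rng.standard_cauchy())     # global scale
    lam = np.abs(rng.standard_cauchy(K)) # local scales, shape (K,)
    return rng.normal(0.0, tau * lam)    # coefficient draws, shape (K,)

beta = sample_horseshoe_prior(5, np.random.default_rng(1))
assert beta.shape == (5,)
```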
invertible. Lemma 4. The covariance matrix
$$\Sigma^*=D-DA^\top(ADA^\top)^{-1}AD\qquad(\mathrm{A.24})$$
is positive semi-definite. Proof. Observe that
$$\Sigma^*=D-DA^\top(ADA^\top)^{-1}AD=D^{1/2}\big[I-D^{1/2}A^\top(ADA^\top)^{-1}AD^{1/2}\big]D^{1/2}.$$
Let $P=D^{1/2}A^\top(ADA^\top)^{-1}AD^{1/2}$. Since $A$ is full row rank and $D$ is a diagonal matrix with non-negative diagonal elements, it is known that $ADA^\top$ is inverti…
Statistical Inference for High-Dimensional Robust Linear Regression Models via Recursive Online-Score Estimation Dian Zheng and Lingzhou Xue Department of Statistics, The Pennsylvania State University Abstract This paper introduces a novel framework for estimation and inference in penalized M-estimators applied to robu...
https://arxiv.org/abs/2504.09253v1
growth and reproduction of humans and animals. The riboflavin production dataset, derived from Bacillus subtilis and provided by DSM Nutritional Products, is available in the R package "hdi". This dataset includes 71 observations, with a response variable representing the logarithm of the riboflavin production rate a…
and the true parameters, addressing the problem from an estimation perspective rather than an inferential one. Consequently, they are not directly equipped to resolve the second challenge encountered in real data analysis, such as addressing skewness and outlier effects. In this paper, we address both challenges simu…
of heavy-tailed errors and data contamination. The rest of this paper is organized as follows: Section 2.1 introduces our methods, Section 2.2 discusses the landscape analysis, and Section 2.3 establishes asymptotic normality. In Section 3, we evaluate the performance of our proposed procedure through simulation stud…
$\ldots\frac{\partial}{\partial\beta_{\widehat{\mathcal M}^{(t)}_{j_0}}}$. (3) Here, $\tilde\beta$ is still an initial estimator, derived from
$$\min_{\beta\in\mathbb B(r)}\ \frac1n\sum_{i=1}^n\ell\big(Y_i-X_i^T\beta\big)+\sum_{j=1}^p\lambda|\beta_j|,$$
where $\mathbb B(r)$ is the $p$-dimensional ball centered at the origin with radius $r$. The radius $r$ should be chosen sufficiently large to ensure that the true parameter vector $\beta_0$ is feasible. The function $\ell(\cdot)$ is…
$\hat\beta_{j_0}=\tilde\beta_{j_0}$. More specifically, for $l=1,2,\dots$, we can iteratively update $\hat\beta_{j_0}$ by
$$\hat\beta^{(l)}_{j_0}=\hat\beta^{(l-1)}_{j_0}-\frac{\sum_{t=0}^{n-1}\frac{1}{\hat\sigma_{\widehat{\mathcal M}^{(t)}_{j_0},j_0}}\hat Z_{t+1,j_0}\,\ell'\Big(Y_{t+1}-\big(X_{t+1,j_0}\beta_{j_0}+X_{t+1,\widehat{\mathcal M}^{(t)}_{j_0}}\tilde\beta_{\widehat{\mathcal M}^{(t)}_{j_0}}\big)\Big)}{\underbrace{\textstyle\sum_{t=0}^{n-1}\frac{1}{\hat\sigma_{\widehat{\mathcal M}^{(t)}_{j_0},j_0}}\hat Z_{t+1,j_0}\frac{\partial}{\partial\beta_{j_0}}\ell'\Big(Y_{t+1}-\big(X_{t+1,j_0}\beta_{j_0}+X_{t+1,\widehat{\mathcal M}^{(t)}_{j_0}}\tilde\beta_{\widehat{\mathcal M}^{(t)}_{j_0}}\big)\Big)}_{\Gamma^{*,(l-1)}_n}},$$
where we use a shorthand and writ…
the global minimizer. Theorem 1 provides $\ell_1/\ell_2$ estimation error bounds for all the stationary points, which is crucial for further inference. Moreover, part (b) proves the uniqueness of the solution if $n$ meets a stricter order criterion, in addition to the continuity requirement being fulfilled. The result in part (a) is…
procedure satisfies the sure screening property. Condition (b) assumes that the initial estimator $\tilde\beta$ satisfies certain $\ell_1$ and $\ell_2$ error bounds, which are crucial for ensuring valid subsequent inference. Condition (c) assumes that any linear combination of the covariates $X$ follows a sub-Gaussian distribution. Conditions (a)–…
…21.5 (1.99)   91.0   34.3 (3.41)   100.0   35.4 (3.67)
β5      95.4   21.5 (1.88)   93.8   34.0 (3.33)     0.0   35.5 (3.64)
σ = 10:
β1      95.2   21.1 (1.87)   90.4   59.8 (7.50)     0.0   64.3 (8.01)
β2      94.2   20.8 (1.73)   91.8   60.7 (7.42)     0.0   63.3 (7.88)
β3      95.6   21.1 (1.88)   92.4   61.3 (7.51)   100.0   64.3 (7.95)
β5      95.2   21.1 (1.89)   90.0   60.6 (7.37)     0.0   64.4 (7.89)
T…
evident that the 4002nd, 4003rd, 4004th, and 4006th genes exhibit high correlation in each pair. This observation is further supported by existing biological evidence indicating that these genes collectively belong to the SigY regulon (Pedreira et al., 2021). In contrast, Shi et al. (2021) only claimed three important gen…
A. and Montanari, A. (2014). Confidence intervals and hypothesis testing for high-dimensional regression. J. Mach. Learn. Res., 15(1):2869–2909.
Li, D., Srinivasan, A., Chen, Q., and Xue, L. (2023a). Robust covariance matrix estimation for high-dimensional compositional data with application to sales data analysis. Jo…
Yu, X., Li, D., and Xue, L. (2024a). Fisher's combined probability test for high-dimensional covariance matrices. Journal of the American Statistical Association, 119(545):511–524.
Yu, X., Li, D., Xue, L., and Li, R. (2023). Power-enhanced simultaneous test of high-dimensional mean vectors and covariance matrices wit…
arXiv:2504.09276v1 [q-fin.ST] 12 Apr 2025
On the rate of convergence of estimating the Hurst parameter of rough stochastic volatility models
Xiyue Han and Alexander Schied
April 15, 2025
Abstract. In [8], an easily computable scale-invariant estimator $\mathcal R^s_n$ was constructed to estimate the Hurst parameter of the drifted fracti…
https://arxiv.org/abs/2504.09276v1
an estimator $\widehat{\mathcal R}_n$ that estimates the Hurst parameter $H$ in (1.1) based on discrete observations of the integrated variance (1.3) was constructed in [8]. In contrast to other estimation schemes [3, 6], our estimator is constructed in a strictly pathwise setting and, in fact, estimates the so-called roughness expon…
$2^{-n/2}\sqrt{\log n}\big)$. (1.8) Here, the rate of convergence was only established under the assumption that $g$ is the identity function. However, this result cannot be applied directly to establish the consistency or the convergence rate of $\mathcal R^s_n$ for rough stochastic volatility models. In these models, we ty…
each $n\in\mathbb N$, plugging $\delta=\delta_n$ into the above inequality yields that
$$P\Bigg(\sup_{0\le i\le 2^n-1}\bigg|2^{n(2H-\frac32)}\bigg\|\frac{\bar\vartheta_{2^n,i}}{\sqrt{\tau_H}}\bigg\|_{\ell^2}-1\bigg|\ldots$$
$$\ldots x(\tau^\sharp_{2^{n+1},2k})\big)\Big)^2\big(\vartheta^y_{2^n,k}\big)^2\le\big(g'(x(\tau^b_{n,0}))\big)^2(1+\delta_n)^2\tau_H.$$
Finally, by the Cauchy–Schwarz inequality, for $n\ge n_{1,x}$, we have
$$2^{(4H-3)n}\Bigg|\sum_{k=0}^{2^n-1}g'\big(x(\tau^\sharp_{2\ldots}$$
law of $(X_t)_{t\in[0,1]}$ is absolutely continuous with respect to the law of $(x_0+W^H_t)_{t\in[0,1]}$. Hence, it suffices to prove this assertion for fractional Brownian motion $W^H$ and $Y^H_t=\int_0^t g(W^H_s)\,ds$. Now, suppose that $n=2^m$ for $m\in\mathbb N$. It then follows from Lemma 2.5 that with probability one, $\big|\ldots$
arXiv:2504.09564v1 [math.ST] 13 Apr 2025
THE WEAK-FEATURE-IMPACT EFFECT ON THE NPMLE IN MONOTONE BINARY REGRESSION
BY DARIO KIEFFER AND ANGELIKA ROHDE
Albert-Ludwigs-Universität Freiburg, dario.kieffer@stochastik.uni-freiburg.de; angelika.rohde@stochastik.uni-freiburg.de
The nonparametric maximum likelihood estima…
https://arxiv.org/abs/2504.09564v1
…ession model, weak feature impact, convergence rate, limiting distribution, phase transition.
then became the most important tool for deriving limits in nonparametric maximum likelihood estimation under several shape constraints. In that article, it was also the first time the $L^1$-limiting behaviour was consider…
the representation, we restrict our attention to isotonic binary regression. Clearly, the extremal case of no impact corresponds to $x\mapsto P(Y=1\mid X=x)$ being constant, while a very steep increase from 0 to 1, or even the jump function $P(Y=1\mid X)=\mathbb 1\{X\ge c\}$ for some real number $c$, is what one might consider as fully related. With this…
$Z(s)+s^2\big\}$. (ii) If $n\delta_n^2\to0$, then
$$\sqrt n\big(\hat\Phi_n(x_0)-\Phi_n(x_0)\big)\xrightarrow{\ \mathcal L\ }\sqrt{\Phi_0(0)(1-\Phi_0(0))}\,W^{*,\ell}(F_X(x_0)).$$
Note that, as long as $n\delta_n^2\to\infty$ (slow regime), the limiting law is a scaled Chernoff distribution as in the classical setting for a fixed function. The proof is based on the switch relati…
by $F_X$, write $p_X$ for the continuous Lebesgue density if it exists, and write $F_n$ for the empirical distribution function of $X_1,\dots,X_n$, where we also define
$$F_n^{-1}:[0,1]\to\mathbb R,\qquad F_n^{-1}(a):=\inf\{x\in\mathbb R\mid F_n(x)\ge a\}.$$
Moreover, we write $\mathcal F:=\{\Phi:\mathbb R\to[0,1]\mid\Phi\text{ monotonically increasing}\}$ for the set of monotonically increasing functions mapping $\mathbb R$ into…
aspect of the following result is that Hellinger consistency of $\hat\Phi_n$ is actually independent of the chosen sequence $(\delta_n)_{n\in\mathbb N}$ and is thus independent of the level of feature impact.
THEOREM 3.1 (Hellinger consistency). We have $d(\hat\Phi_n,\Phi_n)\to_P0$ for $n\to\infty$.
As shown in Theorem A.5, Hellinger consistency of $\hat\Phi_n$ is not only independe…
its greatest convex minorant and let $g^{*,\ell}$ denote the left-hand derivative of $g^*$ on the interior of $I$, i.e.
$$g^{*,\ell}(x)=\sup_{s<x}\inf_{t\ge x}\frac{g(t)-g(s)}{t-s}.$$
For more details on this, see (Ch. 3.3, Groeneboom and Jongbloed, 2014). For the remainder of this section, we will assume that $P_X$ is supported on $\mathcal X=[-T,T]$ for some $T>0$ and has a…
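Concretely, the left-hand slopes of the greatest convex minorant of a cumulative sum diagram are exactly what the pool-adjacent-violators algorithm computes; the generic sketch below illustrates that correspondence and is not code from the paper:

```python
def isotonic_pava(y, w=None):
    """Pool-adjacent-violators: the returned nondecreasing fit equals the
    left-hand slopes of the greatest convex minorant of the cumulative
    sum diagram of (w, y), which is how the isotonic NPMLE is
    characterized between jump points."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    level, weight, count = [], [], []
    for yi, wi in zip(y, w):
        level.append(float(yi)); weight.append(float(wi)); count.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(level) > 1 and level[-2] > level[-1]:
            tot = weight[-2] + weight[-1]
            level[-2] = (weight[-2] * level[-2] + weight[-1] * level[-1]) / tot
            weight[-2] = tot
            count[-2] += count[-1]
            level.pop(); weight.pop(); count.pop()
    fit = []
    for lv, c in zip(level, count):
        fit.extend([lv] * c)
    return fit

assert isotonic_pava([1.0, 0.0, 2.0]) == [0.5, 0.5, 2.0]
```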
$X(s)\}\big\}>F_X(x_0)\big)$. From Lemma 6.4, we know that the sequence of processes inside the $\operatorname{argmin}^+$ converges weakly in the space $\ell^\infty([0,1])$ to the process
$$\big(\sqrt{\Phi_0(0)(1-\Phi_0(0))}\,W(s)-vs\big)_{s\in[0,1]}$$
as long as $n\delta_n^{2\beta}\to0$, and from Proposition 6.5, we obtain convergence in distributi…
$$\ldots\Phi_0'(0)>0\Big)=P\bigg(\Phi_0'(0)\Big(\frac{4\Phi_0(0)(1-\Phi_0(0))}{\Phi_0'(0)^2\,p_X(x_0)}\Big)^{1/3}\operatorname*{arg\,min}_{s\in\mathbb R}\{Z(s)+s^2\}<-v\bigg)=P\bigg(-\Big(\frac{4\Phi_0(0)(1-\Phi_0(0))\Phi_0'(0)}{p_X(x_0)}\Big)^{1/3}\operatorname*{arg\,min}_{s\in\mathbb R}\{Z(s)+s^2\}<v\bigg)=P\bigg(\ldots$$
the left-hand derivative of $G_n$, where $G_n$ denotes the greatest convex minorant of $Y_n$. Note that this implies $\hat\Phi_n(X_{(i)})=g_n(i/n)=g_n\circ F_n(X_{(i)})$, $i=1,\dots,n$. Defining $\tilde U_n(a):=\operatorname{argmin}^+_{t\in[0,1]}\{\Lambda_n(t)-at\}$ for all $a\in[0,1]$, note that $\tilde U_n(a)$ maps into the set $\{i/n\mid i=0,\dots,n\}$ and we have
$$F_n^{-1}\circ\tilde U_n(a)=U_n(a),\qquad(6)$$
where $U_n$ denotes the generalized…
the proof of (Theorem 1, Durot, 2008). From a technical point of view, however, the $n$-dependence of $\Phi_n$ and its vanishing derivative required us to make some adjustments. In particular, it turned out that $n\delta_n^2\to\infty$ for $n\to\infty$ is in fact a necessary assumption, and we will see it being used multiple times throughout the pro…
multiple times throughout the proof. For ease of notation, let $X(a):=\operatorname*{arg\,min}_{s\in\mathbb R}\{Z(s)+(s-a)^2\}$ for $a\in\mathbb R$ and
$$C:=\int_0^\infty\operatorname{Cov}\big(|X(0)|,|X(a)-a|\big)\,da.$$
Further, let $\sigma_n^2(t):=\mathbb E[(Y_n-\Phi_n(X))^2\mid X=t]$ for $t\in[-T,T]$ and define
$$\mu_n:=\mathbb E[|X(0)|]\int_{-T}^T\big(4\sigma_n^2(t)\Phi_0'(\delta_n t)\big)^{1/3}p_X(t)^{-1/3}\,dt,$$
as w…
of all the jumping points of $\hat\Phi_n$, let $T^n_1,\dots,T^n_{j_n}$ denote the jumping points of $\hat\Phi_n$ and set $T^n_0:=X_{(1)}$, as well as $T^n_{j_n+1}:=X_{(n)}$ and $T^n_{j_n+2}:=T$. Then, from our characterization of $\hat\Phi_n$ between two jumping points (see Section 1), we know $\hat\Phi_n|_{(-\infty,T^n_0)}=0$, $\hat\Phi_n|_{[T^n_{j_n+1},\infty)}=\hat\Phi_n(X_{(n)})$, $\hat\Phi_n|_{[T^n_j,T^n_{j+1})}=\sum_{\ell=1}^n Y^n_\ell\,\mathbb 1\{T^n\ldots$
$$\sum_{i=1}^n\mathbb E\big[\|V^n_i\|^2\,\mathbb 1\{\|V^n_i\|>\varepsilon\}\big]\le\frac kn\sum_{i=1}^n\mathbb E\big[\mathbb 1\{\|V^n_i\|^2>\varepsilon^2\}\big]\le\frac kn\sum_{i=1}^n\mathbb E\big[\mathbb 1\{k>n\varepsilon^2\}\big]=k\,\mathbb 1\{k>n\varepsilon^2\}\longrightarrow0$$
for $n\to\infty$. For the sum of the covariance matrices of $V_i$, note that for $j,\ell\in\{1,\dots,k\}$, we have
$$\Big(\sum_{i=1}^n\operatorname{Cov}(V^n_i)\Big)\ldots$$