| text string | source string |
|---|---|
x,y)⟩\|. This completes the proof of our result. 5 Proof of Corollary 1.2 Recall the statement of Corollary 1.2. In the limit N → ∞, where the graph G in the main result approaches a graphon W: [0,1]² → [0,1], defined over the random variables x, y ∈ [0,1], then d_{G,W}(g, h) := 1 − E_x exp( E_{y\|x} log(1 − d(g(y), h(y))) W(x, y) )... | https://arxiv.org/abs/2505.06405v1 |
for (g_{N_1+1}, . . . , g_N). The vectors h^{(1)} and h^{(2)} are defined similarly. Then we have d_{X,G_1⊔G_2,P}(g,h) = 1 − (1/N) Σ_{j=1}^{N} Π_{i∈{1,...,N}: (j,i)∈E_1⊔E_2} [1 − d_i(g_i, h_i)]^{1/p_{ji}} = 1 − (1/N)(A + B), where A = Σ_{j=1}^{N_1} Π_{i∈{1,...,N}: (j,i)∈E_1} [1 − d_i(g_i, h_i)]^{1/p_{ji}} and B = Σ_{j=N_1+1}^{N} Π_{i∈{1,...,N}: (j,i)∈E_2} [1 − d_i(g_i, h_i)]^{1/p_{ji}} ... | https://arxiv.org/abs/2505.06405v1 |
Notice that for each vertex u_1 ∈ V(G_1), the corresponding multiplicand in A(v_1,v_2), that is, [1 − d_{(u_1,u_2)}(g(u_1,u_2), h(u_1,u_2))]^{1/p_{(u_2,v_2)}}, where (u_2, v_2) ∈ E(G_2), is independent of the vertex u_1 being connected to any other vertex in G_1. Hence, by varying g and h, we can view A(u_1,v_2) as a real-valued function on the space A(u_1,... | https://arxiv.org/abs/2505.06405v1 |
errors. IEEE Communications Surveys & Tutorials, 12(1):87–96, 2010. [12] Deen Dayal Mohan, Bhavin Jawade, Srirangaraj Setlur, and Venu Govindaraju. Deep metric learning for computer vision: A brief overview. In Handbook of Statistics, Special Issue - Deep Learning, volume 48, page 59. 2023. Book Chapter Published In H... | https://arxiv.org/abs/2505.06405v1 |
Semiparametric semi-supervised learning for general targets under distribution shift and decaying overlap. Lorenzo Testa^{1,2}, Qi Xu^{1}, Jing Lei^{1}, and Kathryn Roeder^{1,3}. 1 Department of Statistics & Data Science, Carnegie Mellon University; 2 L'EMbeDS, Sant'Anna School of Advanced Studies; 3 Department of Computational Biology, ... | https://arxiv.org/abs/2505.06452v1 |
conditions (Tsiatis, 2006). In SS learning, by contrast, decaying overlap is the norm, and asymptotic theory must be adapted accordingly. Moreover, much of the existing work in semi-supervised learning (Angelopoulos et al., 2023a; Xu et al., 2025) has been developed under the assumption that labels are missing complete... | https://arxiv.org/abs/2505.06452v1 |
2022), quantile estimation (Chakrabortty et al., 2022), and linear regression (Azriel et al., 2022; Cai and Guo, 2020). Notable exceptions are Song et al. (2024), which studies general M-estimation problems, and Xu et al. (2025), which encompasses all parameters satisfying some form of functional smoothness. However, t... | https://arxiv.org/abs/2505.06452v1 |
these frameworks assume strict positivity. As a result, applying semiparametric tools in semi-supervised learning with shrinking labeling fractions and decaying overlap requires substantial modifications to both asymptotic analysis and estimator construction. 1.2 Our contributions We propose a general framework for ide... | https://arxiv.org/abs/2505.06452v1 |
2016), where we revisit a question posed by Hsu and Lin (2023) and estimate the association between biomarkers previously identified (Cheng et al., 2021) and survival outcomes in patients with breast cancer. Concluding remarks and potential directions for future research are discussed in Section 6. Additional theoretic... | https://arxiv.org/abs/2505.06452v1 |
overlap to decay with the sample size, requiring new tools to characterize identifiability, efficiency, and convergence rates in this more general setting. Remark 2.3 (Triangular arrays). The propensity score, defined as π⋆(x) = P[R = 1 \| X = x], plays a central role in our theoretical development under decaying overlap. In p... | https://arxiv.org/abs/2505.06452v1 |
theoretical standpoint, the full-data estimator θ̂_F is of little practical relevance, as it relies on outcome labels that are missing in the unlabeled portion of the data. A natural first approximation is to construct an estimator using only the labeled sample. However, under the missing at random (MAR) assumption, such... | https://arxiv.org/abs/2505.06452v1 |
P^{[j]}_{n+N} with the j-th fold. Then, we learn µ̂^{[−j]} on P̂^{[−j]}. For most supervised methods, learning µ̂^{[−j]} on P̂^{[−j]} will involve using only labeled observations within P̂^{[−j]}. Then, we compute the estimator θ̂_{µ̂} by solving the estimating equation Σ_{j=1}^{J} Σ_{i ∈ P^{[j]}_{n+N}} ϕ(D_i; θ⋆; µ̂^{[−j]}; π⋆) = 0. (4) Estimating the true projection f... | https://arxiv.org/abs/2505.06452v1 |
space where the propensity score is small. Consequently, while the factor a_{n,N} in the condition vanishes, the tails of the influence function may become heavier due to extreme reweighting. This interplay can make the Lindeberg condition more delicate – not less. In this sense, the condition acts as a technical safeguard... | https://arxiv.org/abs/2505.06452v1 |
only consistency. 3.3 Prediction-powered inference under decaying MAR-SS. In this section, we extend the framework developed so far to incorporate additional information from an off-the-shelf predictive model f that maps the covariate space to the outcome space. We do not impose assumptions on the quality of f, treating ... | https://arxiv.org/abs/2505.06452v1 |
naturally accommodate the integration of multiple predictive models; that is, we can simultaneously leverage a collection of off-the-shelf models {f_1, . . . , f_m} within our estimation procedure, allowing us to flexibly combine information from diverse sources and enhance robustness in practical applications. 4 Simulati... | https://arxiv.org/abs/2505.06452v1 |
observed-data influence function takes the form ϕ(D; θ⋆) = µ⋆(X) + (R/π⋆(X))(Y − µ⋆(X)) − θ⋆, (21) where µ⋆: R^p → R^k is equal to E[Y \| X]. The previous influence function motivates the well-known augmented inverse probability weighting (AIPW) estimator (Bang and Robins, 2005) θ̂^{AIPW}_{µ̂;π̂} = (1/(n+N)) Σ_{i=1}^{n+N} [ µ̂(X_i) + (R_i/π̂(X_i))(Y_i − µ̂(X_i)) ]... | https://arxiv.org/abs/2505.06452v1 |
All other simulation results are shown in the Supplementary Material Section C. The boxplot of root mean squared error (RMSE) indicates that our proposed DS3 estimator and the OR and IPW methods achieve low and stable error across simulation replications. In contrast, the naive ... [figure: RMSE distribution boxplots for the DS3, OR, IPW, and Naive methods] ... | https://arxiv.org/abs/2505.06452v1 |
panel: joint bivariate distribution over the floor bivariate grid of the observed position of the device Y compared to the distribution of predicted positions from the estimated outcome model using signals recorded by remote sensors. The discrepancy between observed and predicted outcomes may suggest a missing-at-random ... | https://arxiv.org/abs/2505.06452v1 |
patients – either still alive or deceased from other causes – as unlabeled. The resulting dataset includes n = 611 labeled observations and N = 1231 unlabeled observations. We frame this task as a coefficient estimation problem in linear regression. Let Y_i denote the survival time (in months) since initial diagnosis for ind... | https://arxiv.org/abs/2505.06452v1 |
result aligns with the findings of Alfarsi et al. (2020), who identify DBN1 as a prognostic biomarker in luminal breast cancer. Their study shows that elevated DBN1 expression is associated with more aggressive tumor characteristics and higher rates of relapse among patients receiving adjuvant endocrine therapy. While ... | https://arxiv.org/abs/2505.06452v1 |
Lindeberg conditions and provided practical estimators that remain consistent in this more challenging regime. We further extended our methodology to incorporate black-box prediction models in the spirit of prediction-powered inference (PPI), broadening its applicability to MAR labeling mechanisms. Our simulation st... | https://arxiv.org/abs/2505.06452v1 |
its applications. Journal of the Royal Statistical Society Series B: Statistical Methodology , 82(2):391–419, 2020. Abhishek Chakrabortty, Guorong Dai, and Raymond J Carroll. Semi-supervised quantile estimation: Robust and efficient inference in high dimensional settings. arXiv preprint arXiv:2201.10208 , 2022. Olivier... | https://arxiv.org/abs/2505.06452v1 |
of 2,433 breast cancers refine their genomic and transcriptomic landscapes. Nature communications , 7(1):11479, 2016. Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika , 70(1):41–55, 1983. Christoph Rothe. Robust confidence intervals fo... | https://arxiv.org/abs/2505.06452v1 |
we show the result for a single cross-fitting fold. In particular, we denote the distribution where µ̂ is trained as P̂ and the distribution where the influence functions are approximated as P_{n+N}. Similarly, we denote as µ̄ the population limit of µ̂. The propensity score is known, so that π̂ = π̄ = π⋆. By von Mises expansion,... | https://arxiv.org/abs/2505.06452v1 |
do so by analyzing each term separately. First term (influence function). By definition of influence function, E[ϕ(D; θ⋆; µ̄; π̄)] = 0, and thus the first term has mean zero. We now analyze the variance. Define V̄_{θ⋆} = V[ϕ(D; θ⋆; µ̄; π̄)] = E[ϕ(D; θ⋆; µ̄; π̄) ϕ(D; θ⋆; µ̄; π̄)^T]. By Assumption 3.2, c, we have λ_max(V̄_{θ⋆}) ≍ a^{−1}_{n,N}. Theref... | https://arxiv.org/abs/2505.06452v1 |
= E[(Y_ℓ − E[Y_ℓ])²] = E[E[(Y_ℓ − µ⋆_ℓ(X))² \| X]] + E[E[(µ⋆_ℓ(X) − E[Y_ℓ])² \| X]] ≥ σ²_ℓ. (48) We now show that each diagonal element of V̄_{θ⋆} is of the same rate as a^{−1}_{n,N}. In particular, for each ℓ = 1, . . . , k we have: V̄_{θ⋆}[ℓ,ℓ] = E[ϕ(D; θ⋆; µ̄; π⋆)²_ℓ] = E[( µ̄_ℓ(X) + R(Y_ℓ − µ̄_ℓ(X))/π⋆(X) − E[Y_ℓ] )²] = E[( (R − π⋆(X))(Y_ℓ − µ̄_ℓ(X))/π⋆(X) + Y_ℓ − ... | https://arxiv.org/abs/2505.06452v1 |
[coverage panels for n = 100, N = 10000; n = 1000, N = 100000; n = 10000, N = 1000000, comparing DS3, OR, IPW, and Naive] Figure C.4: Multivariate outcome mean simulation results. Empirical coverage is reported for the MCAR setting, linear outcome regression model µ̂, and a logistic propensity model π̂. Each panel c... | https://arxiv.org/abs/2505.06452v1 |
[coverage panels for n = 100, N = 10000; n = 1000, N = 100000; n = 10000, N = 1000000, comparing DS3, OR, IPW, and Naive] Figure C.9: Linear coefficients simulation results. Empirical coverage is reported for the setting with logistic true propensity score π⋆, linear outcome regression model µ̂, and a constant prope... | https://arxiv.org/abs/2505.06452v1 |
[RMSE panels for n = 100, N = 10000; n = 1000, N = 100000; n = 10000, N = 1000000, comparing DS3, OR, IPW, and Naive] Figure C.14: Linear coefficients simulation results. RMSE is reported for the setting with logistic true propensity score π⋆, linear outcome regression model ... | https://arxiv.org/abs/2505.06452v1 |
arXiv:2505.07016v1 [cs.IT] 11 May 2025. Multi-Terminal Remote Generation and Estimation Over a Broadcast Channel With Correlated Priors. Maximilian Egger, Rawad Bitar, Antonia Wachter-Zeh, Nir Weinberger and Deniz Gündüz. Abstract — We study the multi-terminal remote estimation problem under a rate constraint, in which the ... | https://arxiv.org/abs/2505.07016v1 |
AI-R (ERC-Consolidator Grant, EP/X030806/1). The work of N.W. was partly supported by the Israel Science Foundation (ISF), grant no. 1782/22. ... multi-terminal setting, the use of a broadcast channel from the encoder to the decoders can exploit potential correlations of samples observed from the decoders' prior distributio... | https://arxiv.org/abs/2505.07016v1 |
the potential gains in the case of joint prior distributions with arbitrary correlation. We focus on a specific instance of the problem, where the samples obtained by the decoders are used to evaluate a function of interest. Our contributions are as follows. As a preliminary step, we refine a known result on the sampli... | https://arxiv.org/abs/2505.07016v1 |
results apply to the case where the function is different for each of the decoders, we assume for clarity the same function f at all decoders. The system model is depicted in Fig. 1. The direct method for estimating I(f) by the decoder is for the encoder to sample from p_Q, and then transmit deterministic quantized sample... | https://arxiv.org/abs/2505.07016v1 |
cannot be improved. However, when a broadcast link is available, and the joint prior distribution is of correlated samples, the cost compared to the unicast case can be reduced. As discussed above, in the extreme case of equal and fully dependent p_{Y_i}, broadcasting the same index to all decoders is trivially effective, ... | https://arxiv.org/abs/2505.07016v1 |
thus refer to this strategy as “hierarchical sampling”. It is not obvious whether the proposed strategy is inferior to the standard MRC procedure described in Section II. Thus, we next analyze the performance. Remark 1. We note that the communication cost to transmit the indices m_k and ℓ_{i,k} can be further reduced by known... | https://arxiv.org/abs/2505.07016v1 |
all c ∈ C. In the most extreme case, the function has zero variance conditioned on a block. Then, an intermediate result of the proof of Theorem 1 shows the following improved result. Corollary 2. Let f be constant on the support of p_{Q\|C}(·\|c) for all c ∈ C. For some fixed t_c ≥ 0, let ϵ ≜ ϵ(t_c). When n_c ≥ log(D_{KL}(p_{QC} ∥ p_C) + t_c) we hav... | https://arxiv.org/abs/2505.07016v1 |
thereby the probability law p_C of the random variable C. The encoder now samples from p_{Y_1,Y_2}, and the decoders observe samples from the marginals through shared randomness. Decoder 1, by observing samples from p_{Y_1}, can thus obtain samples from p_C by invoking g_1(·). Decoder 2 draws the same samples for C by observing sample... | https://arxiv.org/abs/2505.07016v1 |
a one-time overhead. This overhead vanishes with increasing K. If the joint distribution p_{Y_1,Y_2} was unknown to the decoders, it is similarly possible that the encoder determines desirable functions g_i(·) to minimize subsequent communication costs and transmits the partitioning of the alphabet given by g_i(·) to the indiv... | https://arxiv.org/abs/2505.07016v1 |
, vol. 33, pp. 16131–16141, 2020. [9] C. T. Li and A. El Gamal, “Strong functional representation lemma and applications to coding theorems,” IEEE Transactions on Information Theory, vol. 64, no. 11, pp. 6967–6978, 2018. [10] G. Flamich, S. Markou, and J. M. Hernández-Lobato, “Fast relative entropy coding with A* cod... | https://arxiv.org/abs/2505.07016v1 |
arXiv:2505.07295v1 [math.ST] 12 May 2025. GMM with Many Weak Moment Conditions and Nuisance Parameters: General Theory and Applications to Causal Inference. Rui Wang^1, Kwun Chuen Gary Chan^{∗1}, and Ting Ye^{∗1}. 1 Department of Biostatistics, University of Washington. Abstract: Weak identification is a common issue for many stat... | https://arxiv.org/abs/2505.07295v1 |
proximal causal inference is a general framework for the identification and estimation of causal effects with unmeasured confounding. This approach typically requires two types of proxies for the unmeasured confounders: treatment proxies and outcome proxies (Miao et al., 2018; Tchetgen Tchetgen et al., 2024), where the... | https://arxiv.org/abs/2505.07295v1 |
(Chernozhukov et al., 2022) and empirical likelihood (Bravo et al., 2020). Another challenge is to deal with many weak moment conditions. The related weak IVs problem has been extensively studied in the econometrics literature (Staiger and Stock, 1997; Stock and Wright, 2000; Han and Phillips, 2006; Newey and Windmeijer,... | https://arxiv.org/abs/2505.07295v1 |
Section 3.2. We establish high-level conditions for our proposed two-step CUE to be consistent and asymptotically normal. Consequently, our results can be applied to future problems by verifying our conditions. We provide three examples and verify the high-level regularity condition in those cases. The first two ... | https://arxiv.org/abs/2505.07295v1 |
I_p (the p-dimensional identity matrix), and Assumption 1(b) reduces to G^T Ω^{−1} G being a fixed positive semidefinite matrix. This scenario corresponds to the usual GMM asymptotics (Hansen, 1982) with G^T Ω^{−1} G being the asymptotic variance of efficient GMM. When µ_N/√N → 0, the parameter β_0 is weakly identified. Later on in ... | https://arxiv.org/abs/2505.07295v1 |
depend on β anymore; therefore the expectation of the objective function will be minimized at β_0. In practice, Ω(β, η_{0,N}) can be replaced by its empirical counterpart Ω̂(β, η_{0,N}), where Ω̂(β, η) := (1/N) Σ_{i=1}^{N} g(O_i; β, η) g(O_i; β, η)^T, as long as sup_{β∈B} ∥Ω̂(β, η_{0,N}) − Ω(β, η_{0,N})∥ converges to 0 sufficiently fast. This leads to the ... | https://arxiv.org/abs/2505.07295v1 |
of β, then g_1(o; β, η) h_1(β) + g_2(o; β, η) h_2(β) also satisfies global Neyman orthogonality on B_2. In practice, we may have an initial moment function g̃(o; β, η̃) with η̃ being the nuisance parameter in the initial moment condition with true value η̃_{0,N}. Usually, η̃_{0,N} is a subset of η_{0,N}, and the nuisance parameter can be w... | https://arxiv.org/abs/2505.07295v1 |
rate of (√N/µ_N) sup_{β∈B} ∥ĝ(β, η̂) − ĝ(β, η_{0,N})∥ becomes √N m δ_N / µ_N. We can see that under the standard asymptotic regime when m is a constant and µ_N = √N, we need δ_N = o(1); that is, as long as the nuisance estimate is consistent, the estimator will be consistent. However, when the moments are very weak in the sense that √m/µ_N = C for... | https://arxiv.org/abs/2505.07295v1 |
are again easy-to-verify assumptions in the examples we provide. The next two assumptions are about nuisance parameter estimation. Assumption 4. (Global Neyman orthogonality and continuous second-order Gateaux derivative). The moment function g(o; β, η) satisfies Global Neyman orthogonality on B. Furthermore, the secon... | https://arxiv.org/abs/2505.07295v1 |
convergence rates for two or more nuisance parameter estimators; see the supplementary materials for more details. Under the usual asymptotics where m is finite and µ_N = √N, we require δ_N = o(√N) and λ_N = o(1), which coincides with the usual doubly robust consistency results. Unlike the usual GMM estimator, ... | https://arxiv.org/abs/2505.07295v1 |
r = 1, ..., p, η, η′ ∈ T_N} are suitably measurable, and they are VC-type classes, which means their uniform covering numbers satisfy sup_Q log N(ϵ∥F∥_{Q,2}, F, ∥·∥_{Q,2}) ≤ v log(a/ϵ) for all 0 < ϵ ≤ 1, F ∈ G, where F is a measurable envelope function for F. Furthermore, we assume ∥F∥_{P_{0,N},q} exists for all q ≥ 1. (b) The following condition... | https://arxiv.org/abs/2505.07295v1 |
estimation of the nuisance parameters, which specialize Assumption 9 to the case with separable moment functions: Assumption 12. (Separable score regularity and nuisance parameters estimation). (a) g^{[b]}(o; η) satisfies global Neyman orthogonality for any b ∈ {1, 2, ..., B} on B. The second-order Gateaux derivative D^{(2)}_{β,t... | https://arxiv.org/abs/2505.07295v1 |
downward bias. 3.3 Over-identification test. We propose an over-identification test as a diagnostic tool, extending the traditional J-test for over-identification (Hansen, 1982). In practice, one often wants to test whether all moment conditions hold simultaneously—that is, whether E_{P_{0,N}}[g(O; β_0, η_{0,N})] = 0. The null hy... | https://arxiv.org/abs/2505.07295v1 |
X; β))(Z − η_Z(X)), where η_Z is a function of X, with true value η_{Z,0,N}(X) = E_{P_{0,N}}[Z \| X]. Then, E_{P_{0,N}}[g̃(O; β_0, η_{Z,0,N})] = 0, which defines a valid moment condition for estimating β_0. For ease of presentation, we consider the simple model that γ(a, z, x; β) = βa. Discussions with a more complicated γ(a, z, x; β) = Σ_{j=1}^{p} β^{(j)} f(... | https://arxiv.org/abs/2505.07295v1 |
asymptotic normality of the CUE when Z is a high-dimensional weak instrumental variable is given by the following theorem: Theorem 5. Suppose Assumption 13 holds. The CUE estimator is consistent and asymptotically normal, with the asymptotic distribution being the same as that in Theorem 2. 4.2 Multiplicative structural mean mo... | https://arxiv.org/abs/2505.07295v1 |
an m-dimensional vector of treatment proxies and W be a one-dimensional outcome proxy. We assume the following structural mean model holds for potential outcomes, that is, E_{P_{0,N}}[Y(a) − Y(0) \| Z, A = a, U, X] = β_{a,0} a. To construct moment conditions for β_{a,0}, we impose the following assumptions: Assumption 15. (Identification ... | https://arxiv.org/abs/2505.07295v1 |
dimension of the instrumental variable m is 4N^{1/4}; they are 22, 26, 33, 40, 47 under each sample size. They are generated from the model Z_j = X_1 + X_2 + X_3 + ϵ_j, where j = 1, ..., m, ϵ_j ∼ uniform[−3, 3]. Treatment: We generate a latent continuous treatment variable A∗ from the model A∗ = α Σ_{j=1}^{m} N^{−1/2−j/(3m)} Z_j + X_1 + sin(X_2) + expit(X_3) + U + N(... | https://arxiv.org/abs/2505.07295v1 |
dimension of the treatment proxy m is 4N^{1/4}; they are 22, 26, 33, 40, 47. Table 1: Simulation results for ASMM and MSMM. Column rejection rate reports the type-one error for the over-identification test with nominal level 0.05. Column Estimate reports the average point estimates. Column Bias reports the average bias. Column S... | https://arxiv.org/abs/2505.07295v1 |
downward bias of β_w for 2SLS. The standard errors of CUE estimates are larger than those of 2SLS estimates. The over-identification test controls the type-one error in all the simulation settings. Table 2: Simulation results for proximal causal inference. Column rejection rate reports the type-one error for the over-identif... | https://arxiv.org/abs/2505.07295v1 |
As pointed out by Bound et al. (1995), the 2SLS estimate may suffer from a weak instruments issue even though the sample size is enormous. To illustrate our methodology, we estimate the following three models using two-step GMM and our proposed two-step CUE to estimate the effect of education attainment on wage. Mod... | https://arxiv.org/abs/2505.07295v1 |
Model 1 (ASMM): GMM 0.0556 (SE 0.0354), 95% CI [−0.013, 0.125]; CUE 0.0481 (0.0770), [−0.103, 0.199]. Model 2 (ASMM): GMM 0.315 (0.194), [−0.065, 0.695]; CUE 0.305 (0.294), [−0.271, 0.881]. Model 3 (MSMM): GMM 0.381 (0.179), [0.030, 0.731]; CUE 0.232 (0.238), [−0.234, 0.698]. 7 Discussion. In this paper, we study estimation under many weak moment conditions in the... | https://arxiv.org/abs/2505.07295v1 |
on strongly identified functionals of weakly identified functions. Bickel, P. J., Klaassen, C. A. J., Ritov, Y., and Wellner, J. A. (1993). Efficient and adaptive estimation for semiparametric models, volume 4. Springer. Bound, J., Jaeger, D. A., and Baker, R. (1995). Problems with... | https://arxiv.org/abs/2505.07295v1 |
Morrison, J., Munafò, M. R., Palmer, T., Schooling, C. M., Wallace, C., Zhao, Q., et al. (2022). Mendelian randomization. Nature Reviews Methods Primers, 2(1):6. Staiger, D. and Stock, J. H. (1997). Instrumental variables regression with weak instruments. Econometrica, 65(3):557–586. Stock, J. H. and Wright, J. H. ... | https://arxiv.org/abs/2505.07295v1 |
P_n be the empirical measure. Let F be a class of measurable functions mapping from O to R with envelope function F. It is often of interest to study the convergence rate of empirical processes. Let ∥G_n∥_F = sup_{f∈F} \|G_n f\|. Our goal is to give a high-probability tail bound for ∥G_n∥_F. Chapter 2.14 of Chernozhukov et al. (2014b) ... | https://arxiv.org/abs/2505.07295v1 |
Windmeijer (2009). Lemma S5. If A is a positive semidefinite matrix with ∥Â − A∥_F → 0 in probability, λ_min(A) ≥ 1/C, and λ_max(A) ≤ C, then with probability approaching 1, ξ_min(Â) ≥ 1/(2C) and ξ_max(Â) ≤ 2C. The following lemma is Corollary 7.1 of Zhang and Chen (2021). Lemma S6. Let {X_i}_{i=1}^{N} be identically distributed but not necessarily independent, and assum... | https://arxiv.org/abs/2505.07295v1 |
N^{1/q} + (1/(α√N)) ∥M∥_{P,2} N^{2/q}) holds for all 1 ≤ j ≤ m simultaneously and for any α > 0, sup_{g∈G^{(j)}_η} ∥g∥²_{P,2} ≤ σ² ≤ ∥F∥²_{P,2}. By the assumption ∥F∥_{P,q} < ∞ for all q, taking σ as r^{(j)}_N, we have with probability larger than 1 − mN^{−1}, sup_{g∈G^{(j)}_η} \|G_n g\| ≲ (1+α) K ( r^{(j)}_N √( v log( a∥F∥_{P_{0,N},2} / r^{(j)}_N ) ) + ( v N^{1/q} ∥F∥_{P_{0,N},q} / √N ) log( a∥F∥_{P_{0,N},2} / r^{(j)}_N ) ) + K(q) ... | https://arxiv.org/abs/2505.07295v1 |
sup_{β∈B} ∥(P_n − P_{0,N})(Ω(β, η̂) − Ω(β, η_{0,N}))∥ = o_p(m/N). Proof. We vectorize the matrices Ω(β, η̂_l) and Ω(β, η_{0,N}) as m²-dimensional vectors. Then we can apply the same techniques we used in Lemmas S9 and S10. For the first term, we have sup_{β∈B} ∥(P_{n,l} − P_{0,N})(Ω(β, η̂_l) − Ω(β, η_{0,N}))∥ ≤ sup_{β∈B} ∥(P_{n,l} − P_{0,N})(Ω(β, η̂_l) − Ω(β, η_{0,N}))∥_F = su... | https://arxiv.org/abs/2505.07295v1 |
∂Q̂(β, η̂)/∂β^{(k)} \|_{β=β_0} = G^{(k),T} Ω̂(β_0, η̂)^{−1} ĝ(β_0, η̂) + (N^{−1} Σ_{i=1}^{N} Ǔ^{(k)}_i)^T Ω̂(β_0, η̂)^{−1} ĝ(β, η̂), ∂Q̃(β, η̂)/∂β^{(k)} \|_{β=β_0} = G^{(k),T} Ω̄(β_0, η_{0,N})^{−1} ĝ(β_0, η̂) + (N^{−1} Σ_{i=1}^{N} Ũ^{(k)}_i)^T Ω(β_0, η_{0,N})^{−1} ĝ(β, η̂), where Ũ^{(k)}_i = G^{(k)}_i(β_0, η̂_{l(i)}) − G^{(k)} − E_{P_{0,N}}(G^{(k)}_i g^T_i) Ω̄^{−1} g_i(β_0, η̂_{l(i)}), Ǔ^{(k)}_i = G^{(k)}_i(β_0, η̂_{l(i)}) − G^{(k)} − (N^{−1} Σ_{i=1}^{N} G^{(k)}_i(β_0, η̂_{l(i)}) g_i(β_0, η̂_{l(i)})) ... | https://arxiv.org/abs/2505.07295v1 |
∂Q̂(β, η)/∂β^{(k)} = (∂ĝ(β, η)^T/∂β^{(k)}) Ω̂(β, η)^{−1} ĝ(β, η) − (1/2) ĝ(β, η)^T Ω̂^{−1} (∂Ω̂(β, η)/∂β^{(k)}) Ω̂^{−1} ĝ(β, η), ∂²Q̂(β, η)/∂β^{(k)}∂β^{(r)} = (∂²ĝ(β, η)^T/∂β^{(k)}∂β^{(r)}) Ω̂(β, η)^{−1} ĝ(β, η) − (∂ĝ(β, η)^T/∂β^{(k)}) Ω̂^{−1}(β, η) (∂ĝ(β, η)^T/∂β^{(r)}) Ω̂^{−1}(β, η) ĝ(β, η) − (∂ĝ(β, η)^T/∂β^{(r)}) Ω̂^{−1}(β, η) (∂ĝ(β, η)^T/∂β^{(k)}) Ω̂^{−1}(β, η) ĝ(β, η) + (∂ĝ(β, η)^T/∂β^{(k)}) Ω̂(β, η)^{−1} (∂ĝ(β, η)^T/∂β^{(r)}) + ĝ(... | https://arxiv.org/abs/2505.07295v1 |
µ_N S^T_N (β − β_0). By the identification assumption, ∥S^T_N (β − β_0)∥/µ_N ≤ C √N ∥P_{0,N}[g(β, η_{0,N})]∥/µ_N. Therefore, it remains to prove that µ^{−1}_N √N ∥P_{0,N}[g(β̂, η_{0,N})]∥ = o_p(1). Since β̂ is the minimizer of Q̂(β, η̂), we have µ^{−2}_N N Q̂(β̂, η̂) ≤ µ^{−2}_N N Q̂(β_0, η̂). Later on, we will prove that sup_{∥δ(β)∥≤C} µ^{−2}_N N \|Q̂(β, η̂) − Q(β, η_{0,N})\| = o_p(1). (4) for a... | https://arxiv.org/abs/2505.07295v1 |
Proof. The variance estimator can be written as S^{−1}_N V̂ S^{−1,T}_N = (N S^{−1}_N Ĥ S^{−1,T}_N)^{−1} (N S^{−1}_N D̂^T Ω̂ D̂ S^{−1,T}_N) (N S^{−1}_N Ĥ S^{−1,T}_N)^{−1}. We introduce some new notation to assist the proof: D̂^{(j)}(β, η) = (1/N) Σ_{i=1}^{N} ∂g_i(β, η)/∂β_j − (1/N) Σ_{i=1}^{N} (∂g_i(β, η)/∂β_j) g_i(β, η)^T Ω̂(β, η)^{−1} ĝ(β, η), D̂(β, η) = (D̂^{(1)}, ..., D̂^{(p)}), H̃ = ∂²Q̂(β̂, η_{0,N}) ... | https://arxiv.org/abs/2505.07295v1 |
Assumption 17. (Linear score regularity and nuisance parameters estimation). (a) Given a random subset I of [N] of size N/L, the nuisance parameter estimator η̂ (estimated using samples outside I) belongs to a realization set T_N ∈ 𝒯_N with probability 1 − ∆_N, where T_N contains η_{0,N} and satisfies the following conditions. ... | https://arxiv.org/abs/2505.07295v1 |
Similarly, µ^{−1}_N √N ∥ĝ(β′, η_{0,N}) − ĝ(β, η_{0,N})∥ = µ^{−1}_N √N \|β′ − β\| ∥Ĝ∥ ≲ c_M \|β′ − β\|, where c_M := µ^{−1}_N √N ∥Ĝ∥ = O_p(1). Lemma S27. (E_{P_{0,N}}∥g_i∥⁴ + E_{P_{0,N}}∥G_i∥⁴) m/N → 0. Proof. The proof is similar to Lemma S24. The only difference is that we use the assumption m³/N → 0. Lemma S28. g satisfies Global Neyman orthogonality. Proof. Let η_1 = (η_{Y,1}, η_A... | https://arxiv.org/abs/2505.07295v1 |
case when M = Ω. The proofs for the other cases are similar. Ω(β, η_0) = (Z − η_{Z,0,N}(X))(Z − η_{Z,0,N}(X))^T (Y − η_{Y,0,N} − βA + β η_{A,0,N})² = (Z − η_{Z,0,N}(X))(Z − η_{Z,0,N}(X))^T [(A − η_{A,0,N})² β² − 2β(A − η_{A,0,N})(Y − η_{Y,0,N}) + (Y − η_{Y,0,N})²]. Therefore sup_{β∈B} ∥(P_n − P_{0,N}) Ω(β, η_{0,N})∥ ≤ ∥(P_n − P_{0,N}) (Z − η_{Z,0,N}(X))(Z − η_{Z,0,N}(X))^T (A − η_{A,0,N})²∥ + ∥(P_n − P_{0,N}) (Z − η_{Z,0,N}(X))(Z − η_{Z,0,N}(... | https://arxiv.org/abs/2505.07295v1 |
that exp(β) is bounded and β ∈ B, 1/C ≤ ξ_min(Ω(β, η_{0,N})) ≤ ξ_max(Ω(β, η_{0,N})) ≤ C for all β ∈ B, we have c_1 µ²_N/N ≤ g^{[1]}(O; η_0)^T g^{[1]}(O; η_0) ≤ c_2 µ²_N/N for some constants c_1 and c_2. Since g(β, η_{0,N}) = g(β, η_{0,N}) − g(β_0, η_{0,N}) = g^{[1]}(η_{0,N})(exp(β) − exp(β_0)), similar to the ASMM case, we have √N ∥g(β, η_{0,N})∥/µ_N = (√N/µ_N) \|exp(β) − exp(β_0)\| √(g^T(η_{0,N}... | https://arxiv.org/abs/2505.07295v1 |
stated in Assumption 12 hold. Lemma S42. Assumption 12 holds for the MSMM example. Proof. We have shown the Neyman orthogonality condition and λ_N ≤ N^{−1/2} δ_N. It remains to show that z_N ≤ δ_N for a_N = r^{(b_1),(j,q)}_N, r^{(b_1,b_2),(j,k,l,r)}_N. We will show that r^{(1),(1,1)}_N ≤ δ_N as an example. The other proofs are similar and omitted. We pick η_1 from ... | https://arxiv.org/abs/2505.07295v1 |
Therefore, ∥δ(β)∥ ≲ ∥√N G S^{T,−1}_N δ(β)∥ ≤ √N ∥ĝ(β, η_{0,N})∥/µ_N + √N ∥ĝ(β_0, η_{0,N})∥/µ_N + µ^{−1}_N √N ∥(1/N) Σ_{i=1}^{N} (G_i − G)(β − β_0)∥ (the last term being c_M). Since we have already shown that ∥ĝ(β_0, η_0)∥ = O_p(√(m/N)), therefore √N ∥ĝ(β, η_{0,N})∥/µ_N = O_p(√m/µ_N) = O(1). It remains to show that sup_{β∈B} µ^{−1}_N √N ∥(1/N) Σ_{i=1}^{N} (G_i − G)(β − β_0)∥ = O_p(1). Since E_{P_{0,N}} ∥(Ĝ − G)(β − β_0)∥... | https://arxiv.org/abs/2505.07295v1 |
arXiv:2505.07383v1 [math.ST] 12 May 2025. Some insights into depth estimators for location and scatter in the multivariate setting. Jorge G. Adrover∗, Marcelo Ruiz†. Abstract: The concept of statistical depth has received considerable attention as a way to extend the notions of the median and quantiles to other statistical mo... | https://arxiv.org/abs/2505.07383v1 |
Matemática, Astronomía, Física y Computación - Universidad Nacional de Córdoba, CIEM and CONICET, Argentina. jorge.adrover@unc.edu.ar †Departamento de Matemática, Facultad de Ciencias Exactas, Físico-Químicas y Naturales, Universidad Nacional de Río Cuarto, Argentina. mruiz@exa.unrc.edu.ar wedges fo... | https://arxiv.org/abs/2505.07383v1 |
1, the deepest location estimator is the median. In the univariate case the model (1) is the usual location-scale model Y = µ_0 + σ_0 U, σ_0 > 0, with U ∼ P_0 centrosymmetric. The residual smallness concept in (2) is easily adapted for a joint estimation of location and scale as follows: The depth of (µ, σ) ∈ R × (0, ∞) is taken to be,... | https://arxiv.org/abs/2505.07383v1 |
considered in a broader scenario showing a behavior also described in some other simultaneous procedures in the robust literature. For instance, a similar behavior is described in the simultaneous M-estimation for location and scale (Huber and Ronchetti, 2009, p.141) and in the context of simultaneous M-estimation for ... | https://arxiv.org/abs/2505.07383v1 |
P_{E_0}) = B_S(Γ̂, ε, P_0), B_I(Γ̂, ε, P_{E_0}) = B_I(Γ̂, ε, P_0), which entails that the maximum bias can be computed using µ_0 = 0 and Σ_0 = I. The maximum bias for Tukey's median turns out to be B(θ̂_T, ε, Φ) = Φ^{−1}((1+ε)/(2(1−ε))) (Chen and Tyler (2002)). Interest in scatter depth has been revitalized over the past decade, particula... | https://arxiv.org/abs/2505.07383v1 |
obtain that (see Lemma 10 below) inf_{Σ∈F(M), P∈P_ε(F_0)} P( ∥Σ̂ − Σ∥²_{op} ≤ C∗ [ max{p/n, B²_E(ε)} + log(1/δ)/n ] ) ≥ α, with B_E(ε) = [ (1/√β) Φ^{−1}(a(ε)) − 1 ]. By these means, the concentration inequality is also able to uncover the maximum bias of the deepest as will be shown in the next section. 4 Maximum bias for the deepest scatter m... | https://arxiv.org/abs/2505.07383v1 |
Φ^{−1}(δ/(1−δ)) a normal quantile, with δ < 1/3. Let ε_0 be the fixed point of g; i.e., ε_0 = g(ε_0). (5) Let P_n = (1−ε)Φ(·) + εH_n(·) be a sequence of contaminated distributions in the ε-contamination neighborhood P_ε(Φ), D_n the depth induced by these contaminated distributions and {(µ̂_n, σ̂_n)}_n the deepest estimators associated to thos... | https://arxiv.org/abs/2505.07383v1 |
f′(y) = (1/√(y² + 6 ln 2)) [φ(a+b) a − φ(a)(a+b)] < 0. If y > −√(ln 4), then a > 0, a < a+b, φ(a+b) < φ(a) and f′(y) < 0, which entails the result. Since the function y_δ = Φ^{−1}(δ/(1−δ)) is increasing in δ, the composition f(y_δ) = h( (1/3)(y_δ + 2√(y²_δ + 6 ln 2)), y_δ ) is decreasing in δ. Consequently, g(y) is a product of two positive decre... | https://arxiv.org/abs/2505.07383v1 |
2, (x_U − x_L)/2) > ε for all j ∈ {1,2,3}. If P_ε(A^{(1)}((x_L + x_U)/2, (x_U − x_L)/2)) = (1−ϵ)Φ(x_L) > ε then x_L > Φ^{−1}(ϵ/(1−ϵ)) = y_ϵ. Moreover, by (i) of Lemma 1, ε < P_ε(A^{(3)}((x_L + x_U)/2, (x_U − x_L)/2)) = (1−ϵ)(Φ(x_U) − Φ((x_L + x_U)/2)) = (1−ϵ) h(x_L, x_U) ≤ (1−ε) h(M(y_{ε_0}), y_{ε_0}) = g(ϵ). By Lemma 4 we get that ε < ε_0. Lemma 7. Given the contaminated distributions P_n = ... | https://arxiv.org/abs/2505.07383v1 |
set of symmetric positive definite matrices Σ such that the largest eigenvalue λ_1(Σ) is less than a constant M > 0. For ε ∈ [0, ε′], ε′ < 1/3 and (p + log(1/δ))/n sufficiently small, there exists a constant C > 0 (depending on ε′ but independent of p, n, ε), such that, with probability greater than 1 − 2δ, it holds that inf_{θ,Σ∈... | https://arxiv.org/abs/2505.07383v1 |
, 0 ≤ sup_{u∈S^{p−1}} x_u − 1 ≤ (1/(2√β)) (1/φ(Φ^{−1}(a(ε)) + (1/2) a_{n,δ})) a_{n,δ} + (1/√β) Φ^{−1}(a(ε)) − 1 = g_S(ε, n, δ, p). Then, B_E(ε) = (1/√β) Φ^{−1}(a(ε)) − 1. If the maximum in (13) occurs at Φ(√β) − Φ(√β inf_{u∈S^{p−1}} x_u), then inf_{u∈S^{p−1}} x_u ≤ 1 and we call b(ε) = 3/4 − ε/(2(1−ε)) = (3−5ε)/(4(1−ε)), which is decreasing in ε. Thus, we get that 0 ≤ Φ(√β) − Φ(√β inf_{u∈S^{p−1}} x_u) ≤ ε/(2(1−... | https://arxiv.org/abs/2505.07383v1 |
∞ or their smallest eigenvalue l^{(n)}_p → 0. Then ε > 1/3. Proof. Put a_{ε,n}(w) = (1−ε) P_0( w^t Γ^{−1/2}_n X X^t Γ^{−1/2}_n w ≤ Σ_{j=1}^{p} l^{(n)−1}_j (w^t e_j)² ) + ε P_n( w^t Γ^{−1/2}_n X X^t Γ^{−1/2}_n w ≤ Σ_{j=1}^{p} l^{(n)−1}_j (w^t e_j)² ), and b_{ε,n}(w) the analogous quantity with ≤ replaced by ≥ ... | https://arxiv.org/abs/2505.07383v1 |
If r < l^{1/2}_1 then B^e_r = ∅, G_{B^e_r} = −∞ and g_{B^e_r} = ∞. If {v_1, v_2, . . . , v_p} ∈ A^e_r then G_{A^e_r} = g(v_1) and g_{A^e_r} = g(v_p). Thus, D(Γ, P_{ε,r}) = min{(1−ε) g(v_p) + ε, (1−ε)(1 − g(v_1))}. If r > l^{1/2}_1 then {v_2, ..., v_p} ∉ B^e_r, v_1 ∈ B^e_r, G_{B^e_r} = g(v_1). Take now v_{0,r} ∈ S^{p−1} such that v^t_{0,r} (Γ − r² v_1 v^t_1) v_{0,r} = 0. Since g(v_2) ≤ g(v_{0,r}) < g(v_1), g_{B^e_r} = g(v_{0,r}). Obse... | https://arxiv.org/abs/2505.07383v1 |
to analyze, min{ (1−ε) P0( (w_r^t Γ_r^{−1/2} X X^t Γ_r^{−1/2} w_r) / Σ_{j=2}^p (l(r)_j)⁻¹(w_r^t e_j)² ≤ 1 ) + ε δ_{r²(l(r)_1)^{−1/2}}( w_r^t e e^t w_r ≤ 1 ), (1−ε) P0( (w_r^t Γ_r^{−1/2} X X^t Γ_r^{−1/2} w_r) / Σ_{j=2}^p (l(r)_j)⁻¹(w_r^t e_j)² ≥ 1 ) + ε δ_{r²(l(r)_1)^{−1/2}}( w_r^t e e^t w_r ≥ 1 ) }, where w_r is a vector which yields the ... | https://arxiv.org/abs/2505.07383v1
ε-contamination neighborhood. Therefore ε* ≤ 1/3. Next, we follow closely Theorems 4.1 and 4.2 of Chen and Tyler (2002). Let us take the contaminated distribution P_{ε,Q} = (1−ε)P0 + εQ and A ⊂ {Γ ⪰ 0}, and define ∥A∥ = sup_{Γ∈A} max{∥Γ∥op/∥βI∥op, ∥βI∥op/∥Γ⁻¹∥op}, L(η, P0) = {Γ ⪰ 0 : D(Γ, P0) ≥ DM(P0) − η}, Λ(ε, P0) = inf_Q DM(P_{ε,Q}), δ(ε, P0) = Λ(ε,... | https://arxiv.org/abs/2505.07383v1
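Sampling from the contaminated distribution P_{ε,Q} = (1−ε)P0 + εQ is direct; a minimal sketch, where the choices P0 = N(0,1) and Q = N(10,1) are illustrative assumptions, not distributions from the paper:

```python
import random

def sample_contaminated(n, eps, sample_p0, sample_q):
    # Each draw comes from Q with probability eps, otherwise from P0,
    # i.e. from the Huber mixture (1-eps)*P0 + eps*Q.
    return [sample_q() if random.random() < eps else sample_p0()
            for _ in range(n)]

random.seed(0)
data = sample_contaminated(10_000, 0.1,
                           lambda: random.gauss(0, 1),    # clean model P0
                           lambda: random.gauss(10, 1))   # contaminant Q
frac = sum(x > 5 for x in data) / len(data)  # roughly the contamination level eps
print(frac)
```

Since the two components are well separated here, the fraction of points above 5 estimates ε ≈ 0.1.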
and Z. Ren (2018). Robust covariance and scatter matrix estimation under Huber's contamination model. The Annals of Statistics, 46(5):1932–1960. • Z. Chen and D. E. Tyler (2002). The influence function and maximum bias of Tukey's median. The Annals of Statistics, 30(6):1737–1759. • D. L. Donoho and M. Gasko (1992). B... | https://arxiv.org/abs/2505.07383v1
arXiv:2505.07649v1 [math.ST] 12 May 2025. Constructing Bayes Minimax Estimators via Integral Transformations. Dominique Fourdrinier, William E. Strawderman, Martin T. Wells. May 23, 2025. Abstract: The problem of Bayes minimax estimation for the mean of a multivariate normal distribution under quadratic loss has attracted s... | https://arxiv.org/abs/2505.07649v1
marginal, the underlying prior is necessarily improper. However, if the function √m is superharmonic the prior may be proper. The results presented in this article focus on the construction of Bayes minimax procedures. In analyzing such estimators, it is useful to distinguish between construction and verification. Construct... | https://arxiv.org/abs/2505.07649v1
under suitable integrability conditions, that an unbiased estimator of the risk of δπ equals k + 2 div γ(X) + ∥γ(X)∥² = k + 4∆√(m(X))/√(m(X)), where the divergence and the Laplacian are denoted by div and ∆, respectively. Thus the risk of δπ equals Eθ[∥δπ − θ∥²] = k + 4Eθ[∆√(m... | https://arxiv.org/abs/2505.07649v1
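The identity behind this risk expression, 2 div γ + ‖γ‖² = 4∆√m/√m with γ = ∇ log m (the standard choice in this construction; stated here as an assumption), can be sanity-checked by finite differences in one dimension (k = 1). The marginal m below is an arbitrary smooth positive function chosen for illustration, not one from the paper:

```python
import math

def m(x):
    # arbitrary smooth, positive "marginal" for illustration only
    return math.exp(-x**2 / 2) + math.exp(-(x - 1)**2 / 2)

h = 1e-4
def d1(f, x): return (f(x + h) - f(x - h)) / (2 * h)          # central 1st derivative
def d2(f, x): return (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # central 2nd derivative

gamma = lambda x: d1(lambda t: math.log(m(t)), x)             # gamma = (log m)'

x0 = 0.7
lhs = 2 * d1(gamma, x0) + gamma(x0) ** 2                        # 2 div(gamma) + ||gamma||^2
rhs = 4 * d2(lambda t: math.sqrt(m(t)), x0) / math.sqrt(m(x0))  # 4 Δ√m / √m
print(abs(lhs - rhs) < 1e-5)  # True: the two expressions agree
```

Algebraically, 2(log m)″ + ((log m)′)² = 2m″/m − (m′)²/m² = 4(√m)″/√m, which is what the check confirms.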
2u² − (k+1)/2 < 0 (3.1), where F(u) is the I-transform of order (k−2)/2 of f in (2.6). 2. Let ϕ(u) = (1/u²) Σ_{j≥0} b_j u^j be any nonpositive generalized series. The function F defined by F(u) = (c1 z1(u) + c2 z2(u))² u^{(k−1)/2} exp(u²/2), (3.2), where z1 and z2 are two linearly independent solutions of the second-order differential equ... | https://arxiv.org/abs/2505.07649v1
for b ≤ (k−2)²/4. When the difference of the roots, ρ2 − ρ1, is not an integer or zero, two independent solutions of (3.11) are z1(u) = u^{ρ1} Σ_{j≥0} c_j u^j and z2(u) = u^{ρ2} Σ_{j≥0} c_j u^j, where the coefficients c_j can be determined by plugging the series solutions into (3.11), which yields Σ_{j=... | https://arxiv.org/abs/2505.07649v1
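The condition b ≤ (k−2)²/4 is exactly what makes the Frobenius exponents ρ1, ρ2 real if the indicial equation has the form ρ² − (k−2)ρ + b = 0; that form is an assumption here, inferred from the stated condition rather than quoted from the paper. A quick sketch:

```python
import math

def indicial_roots(k, b):
    # Roots of the assumed indicial equation rho^2 - (k-2)*rho + b = 0;
    # they are real exactly when b <= (k-2)**2 / 4.
    disc = (k - 2) ** 2 - 4 * b
    if disc < 0:
        return None  # complex exponents: no real Frobenius pair
    r = math.sqrt(disc)
    return ((k - 2 + r) / 2, (k - 2 - r) / 2)

print(indicial_roots(5, 1.0))   # two distinct real exponents; rho2 - rho1 = sqrt(5), not an integer
print(indicial_roots(5, 2.25))  # repeated root at the boundary b = (k-2)^2/4
print(indicial_roots(5, 3.0))   # None: b exceeds (k-2)^2/4
```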
Corollary 4.1 Assume that the prior is a variance mixture of normal distributions with mixing density h. 1. A sufficient condition for the corresponding spherical (generalized) Bayes estimator of the mean of the normal distribution to be minimax is G′(s)/G(s) − 2G″(s)/G′(s) ≤ k/s for all s > 0 (4.3), where G is the Laplace transfor... | https://arxiv.org/abs/2505.07649v1
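As a toy instance of condition (4.3), not an example from the paper: for the hypothetical choice G(s) = s^(−a) one has G′/G = −a/s and G″/G′ = −(a+1)/s, so the left side is (a+2)/s and (4.3) reduces to a + 2 ≤ k. A numeric sketch:

```python
# Left side of (4.3) for G(s) = s**(-a): G'/G - 2G''/G' = -a/s + 2(a+1)/s = (a+2)/s.
def lhs(s, a):
    return (-a / s) - 2 * (-(a + 1) / s)

k = 5
ok      = all(lhs(s, a=3) <= k / s + 1e-12 for s in (0.1, 1.0, 10.0))
too_big = all(lhs(s, a=4) <= k / s + 1e-12 for s in (0.1, 1.0, 10.0))
print(ok)       # True:  3 + 2 <= 5, so (4.3) holds
print(too_big)  # False: 4 + 2 >  5, so (4.3) fails
```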
suitable inverse Laplace transform and show that its Laplace transform satisfies (4.3), and that the function h that results is positive and integrable. 4.2.1 Example 1. Take L⁻¹[G](t) = tⁿ 1_{(0,1)}(t), where n is a nonnegative integer. Then an easy calculation gives G(s) = n!/s^{n+1} − e^{−s} Σ_{j=0}^{n} n!/(j! s^{n−j+1}) = n!/s^{n+1}... | https://arxiv.org/abs/2505.07649v1
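The closed form for G in this example can be checked against direct quadrature of the defining integral ∫₀¹ tⁿ e^(−st) dt; a minimal sketch (the test point s = 2, n = 3 and the grid size are arbitrary):

```python
import math

def G_closed(s, n):
    # G(s) = n!/s^(n+1) - e^(-s) * sum_{j=0}^{n} n!/(j! * s^(n-j+1))
    return (math.factorial(n) / s ** (n + 1)
            - math.exp(-s) * sum(math.factorial(n) / (math.factorial(j) * s ** (n - j + 1))
                                 for j in range(n + 1)))

def G_numeric(s, n, steps=100_000):
    # midpoint-rule quadrature of the integral defining L[t^n * 1_(0,1)](s)
    h = 1.0 / steps
    return h * sum(((i + 0.5) * h) ** n * math.exp(-s * (i + 0.5) * h)
                   for i in range(steps))

print(abs(G_closed(2.0, 3) - G_numeric(2.0, 3)) < 1e-8)  # True
```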
ϕ(s) ≤ (1/s) E*_s[{2(α+1) − α} − (β−1)T/(1−T) + γσT/(1−σT)] = (1/s){α + 2 + E*_s[(1−β)T/(1−T) + γσT/(1−σT)]} ≤ (α+2)/s, since the integrand term is negative. Case 2: Assume γ > 0. Similarly, ϕ(s) ≤ (1/s){α + 2 + E*_s[(1−β)T/(1−T)]... | https://arxiv.org/abs/2505.07649v1