arXiv:2505.10272v1 [cs.LG] 15 May 2025

Spike-timing-dependent Hebbian learning as noisy gradient descent

Niklas Dexheimer¹*, Sascha Gaudlitz²*, Johannes Schmidt-Hieber¹
¹University of Twente, ²Humboldt-Universität zu Berlin
{n.dexheimer, a.j.schmidt-hieber}@utwente.nl, sascha.gaudlitz@hu-berlin.de

Abstract: Hebbian learning i...
https://arxiv.org/abs/2505.10272v1
contribution lies in connecting STDP to noisy gradient descent and providing a rigorous convergence analysis of the noisy learning scheme. To this end, we introduce a learning rule for the weights $w_1, \dots, w_d$, which captures the locality and spike-time dependence of Hebbian STDP. We rewrite the learning rule as a n...
neurons and one output (or postsynaptic) neuron. The $i$th input neuron has a mean firing rate $\lambda_i > 0$ describing the expected number of spikes per time unit. The vector $\lambda = (\lambda_1, \dots, \lambda_d)$ contains the $d$ mean firing rates. The strength of the connection between the $i$th input neuron and the output neuron is modulated by the wei...
gradient
$$\nabla L(p) = -p \odot \big(p - \|p\|^2 \mathbf{1}\big) \in \mathbb{R}^d, \qquad p \in \mathbb{R}^d. \tag{6}$$
Dropping $O(\alpha^2)$ terms is only done for illustrative purposes. Our main result (Theorem 2.2) applies to the original learning rule (3). The subsequent lemma summarises the key properties of the loss function $L$ from (5). For $d = 3$, Figure 2 visualises the loss function $L$ and the learn...
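As a quick illustration of display (6), the following sketch (our own, not the paper's experiment; the step size $\alpha$ and dimension $d$ are arbitrary choices) runs the noise-free gradient recursion and shows that the iterates stay on the simplex while the largest coordinate is driven towards 1.

```python
import numpy as np

# Minimal sketch, assuming a noise-free version of the learning dynamics:
# p(k+1) = p(k) - alpha * grad L(p(k)), with grad L(p) = -p ⊙ (p - ||p||^2 1)
# as in display (6). The update preserves sum(p) = 1 and amplifies the
# largest coordinate, illustrating the claimed convergence behaviour.
rng = np.random.default_rng(0)
d, alpha = 5, 0.05

p = rng.dirichlet(np.ones(d))          # random start in the simplex interior
for k in range(2000):
    grad = -p * (p - np.sum(p**2))     # grad L(p) = -p ⊙ (p - ||p||^2 1)
    p = p - alpha * grad               # plain (noise-free) gradient step

print(np.round(p, 4))                  # initially largest entry dominates
print(np.sum(p))                       # total mass stays ≈ 1
```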
dominated by the gradient update on $\Theta$, if the learning rate is small enough (see (7)).
3. Similarly as in (4), we restate the learning increments of the dynamics as the sum of the true gradient and a centred noise vector $\xi(k)$. By (7), this decomposition yields linear convergence of $p_1(k) \to 1$ on $\Theta$.
4. To find a lower bound f...
Figure 3: Considered biological neural network with spike trains and membrane potential $Y_t$ of the postsynaptic neuron.

We study a biological neural network consisting of $d$ input (or presynaptic) neurons, which are connected to one output (or postsynaptic) neuron. For the subsequent argument, we assume that the spike times of...
neuron $j^*(k+1) \in [d]$ and, in a second step, picking a spike time $\tau^*$ among the spike times of the $j^*(k+1)$th presynaptic neuron. As the spike times of the $j$th presynaptic neuron are generated from a Poisson process with intensity $\lambda_j$ and result, by (10), in a jump of height $w_j(t) = w_j(t_k)$ for $t_k < t \le t_{k+1}$, the probability that th...
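The resulting selection probability can be checked by simulation. The sketch below is our own construction under simplified assumptions (homogeneous Poisson spike trains, selection of one spike with probability proportional to its jump height), not the paper's algorithm; it recovers a selection frequency close to $w_j\lambda_j/\sum_\ell w_\ell\lambda_\ell$.

```python
import numpy as np

# Illustrative sketch: each presynaptic neuron j emits a Poisson spike train
# with intensity lambda_j, and one spike is selected with probability
# proportional to its jump height w_j. Empirically, the selected spike
# belongs to neuron j with frequency close to w_j*lambda_j / sum_l w_l*lambda_l.
rng = np.random.default_rng(1)
lam = np.array([1.0, 2.0, 3.0])   # intensities lambda_j
w = np.array([0.5, 0.3, 0.2])     # synaptic weights w_j
T, reps = 50.0, 20_000

counts = np.zeros(3)
for _ in range(reps):
    n_spikes = rng.poisson(lam * T)             # spike counts per neuron
    labels = np.repeat(np.arange(3), n_spikes)  # neuron label of every spike
    probs = w[labels] / w[labels].sum()         # selection prob ∝ jump height
    counts[rng.choice(labels, p=probs)] += 1

print(counts / reps)                  # empirical selection frequencies
print(w * lam / np.sum(w * lam))      # predicted w_j*lambda_j / sum_l w_l*lambda_l
```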
consider a flow on probability vectors, a different perspective is to use the natural geometry of the probability simplex. To this end, we consider the interior of the probability simplex $\mathcal{M} := \operatorname{int}(\mathcal{P})$ as a Riemannian manifold with tangent space $T_p\mathcal{M} = \{x \in \mathbb{R}^d : \mathbf{1}^\top x = 0\}$ for every $p \in \mathcal{M}$. A natural metric on $\mathcal{M}$ is given by the Fisher ...
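For concreteness, the textbook form of the Fisher metric on the simplex interior is recorded below; the paper's normalisation may differ.

```latex
% Fisher--Rao metric on M = int(P) (standard textbook form; the paper's
% normalisation may differ):
g_p(u, v) = \sum_{i=1}^{d} \frac{u_i v_i}{p_i},
\qquad u, v \in T_p\mathcal{M} = \{x \in \mathbb{R}^d : \mathbf{1}^\top x = 0\}.
```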
time increases. If multiple intensities are equal, convergence is up to permutations within the group of equal intensities. We propose Algorithm 1, which constitutes an STDP rule, as lines 3 and 4 can be implemented using the spike trains and the learning rule (11).

Algorithm 1: Aligning multiple output neurons
Input: $K \in \mathbb{N}$...
allow for several output neurons. Moreover, we limited the analysis to static mean firing rates of the presynaptic neurons. A more realistic scenario is to consider time-dependent mean firing rates that are piecewise constant, corresponding to different input patterns [6]. When extending the analysis to time-v...
J. L. Van Hemmen. Representation of input structure in synaptic weights by spike-timing-dependent plasticity. Physical Review E, 82(2):021912, 2010.
[13] D. O. Hebb. The Organization of Behavior. Wiley, 1949.
[14] J. Hofbauer and K. Sigmund. Evolutionary Games and Population Dynamics. Cambridge University Press, 199...
Computation, 31(12):2523–2561, 2019.
[34] A. Vigneron and J. Martinet. A critical survey of STDP in spiking neural networks for pattern recognition. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–9, 2020.
[35] S. Weissmann, S. Klein, W. Azizian, and L. Döring. Almost sure convergence of st...
contained in (3). Recall that $|Y_i(k)| \le Q$ for all $i \in [d]$, $k = 0, 1, \dots$

Lemma A.2. For $i \in [d]$ and $k = 0, 1, \dots$ define
$$\xi_i(k) := p_i(k)\left( \mathbb{E}\Big[ Y_i(k) - \sum_{j=1}^{d} p_j(k) Y_j(k) \,\Big|\, \mathcal{F}_{k-1}\Big] - \Big( Y_i(k) - \sum_{j=1}^{d} p_j(k) Y_j(k) \Big) \right), \tag{17}$$
and assume $\alpha < 1/Q$. Then for any $i \in [d]$ and $k = 0, 1, \dots$, there exists a random variable $\theta_i(k)$, satisfying $|\theta_i(k)| \le \alpha^2 2Q^2(1-...$
arbitrary. It follows by Lemma A.2 that, on $\Omega(k)$, the bound
$$p_u(k+1) = p_u(k) + \alpha p_u(k)\big( p_u(k) - \|p(k)\|^2 \big) - \alpha\xi_u(k) - \theta_u(k) \le p_u(k) + \alpha p_u(k)\big( p_1(k) - \|p(k)\|^2 \big) - \alpha\xi_u(k) - \theta_u(k)$$
holds. Consequently, on $\Omega(k)$, we have
$$p_1(k+1) - p_u(k+1) \ge p_1(k) - p_u(k) + \alpha\big( p_1(k) - p_u(k) \big)\big( p_1(k) - \|p(k)\|^2 \big) + \alpha\big( \xi_u(k) - \xi_1(k) \big) - \theta_1(k) + \theta_u(k).$$
We have $p_u(k) \le 1 - p_1(k)$ and ...
$$\cdots = \alpha^2 \sum_{\ell=0}^{k} \mathbb{E}\Big[ \mathbb{E}\Big[ \Big( Y_1(\ell) - \sum_{j=1}^{d} p_j(\ell)Y_j(\ell) \Big)^2 \,\Big|\, \mathcal{F}_{\ell-1}\Big] \mathbb{1}_{\Omega(\ell)} - \mathbb{E}\Big[ Y_1(\ell) - \sum_{j=1}^{d} p_j(\ell)Y_j(\ell) \,\Big|\, \mathcal{F}_{\ell-1}\Big]^2 \mathbb{1}_{\Omega(\ell)} \Big]$$
$$\le \alpha^2 \sum_{\ell=0}^{k} \mathbb{E}\Big[ \Big( Y_1(\ell) - \sum_{j=1}^{d} p_j(\ell)Y_j(\ell) \Big)^2 \mathbb{1}_{\Omega(\ell)} \Big] = \alpha^2 \sum_{\ell=0}^{k} \mathbb{E}\Big[ \Big( (1 - p_1(\ell))Y_1(\ell) - \sum_{j=2}^{d} p_j(\ell)Y_j(\ell) \Big)^2 \mathbb{1}_{\Omega(\ell)} \Big]$$
$$\le 2\alpha^2 \sum_{\ell=0}^{k} \mathbb{E}\Big[ \big( (1 - p_1(\ell))Y_1(\ell) \big)^2 \mathbb{1}_{\Omega(\ell)} + \Big( \sum_{j=2}^{d} p_j(\ell)Y_j(\ell) \Big)^2 \mathbb{1}_{\Omega(\ell)} \Big] \le 2Q^2\alpha^2 \sum_{\ell=0}^{k} \mathbb{E}...$$
$\phi'$ is monotonically increasing. Thus, for a probability vector $q = (q_1, \dots, q_d)$, we have
$$\sum_{i=1}^{d} q_i \phi'(q_i)\big( q_i - \|q\|_2^2 \big) = \sum_{i=1}^{d} q_i \phi'(q_i)\Big( q_i \sum_{j=1}^{d} q_j - \sum_{j=1}^{d} q_j^2 \Big) = \sum_{i,j=1}^{d} q_i q_j \phi'(q_i)( q_i - q_j ) = \sum_{1 \le i < j \le d} q_i q_j \big( \phi'(q_i) - \phi'(q_j) \big)( q_i - q_j ) \ge 0.$$
(If $\phi$ is strictly convex, then equality holds if and only if $q$ is one of the stationa...
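The chain of identities can be verified numerically. The following check is ours, with the arbitrary choice $\phi(x) = x^2$, so that $\phi'(x) = 2x$ is increasing.

```python
import numpy as np

# Quick numerical check (not from the paper) of the symmetrisation identity:
# sum_i q_i phi'(q_i) (q_i - ||q||^2)
#   = sum_{i<j} q_i q_j (phi'(q_i) - phi'(q_j)) (q_i - q_j) >= 0,
# here with the convex choice phi(x) = x^2, so phi'(x) = 2x.
rng = np.random.default_rng(2)
q = rng.dirichlet(np.ones(6))
phi_prime = lambda x: 2 * x

lhs = np.sum(q * phi_prime(q) * (q - np.sum(q**2)))
rhs = sum(q[i] * q[j] * (phi_prime(q[i]) - phi_prime(q[j])) * (q[i] - q[j])
          for i in range(6) for j in range(i + 1, 6))
print(lhs, rhs)   # the two values agree and are nonnegative
```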
generated from a Poisson process with intensity $\lambda_j$. Thus, if $U \sim \mathrm{Poisson}(\lambda_j(t_+ - t_*))$, the probability that the $j$th presynaptic neuron spikes in $(t_*, t_+)$ is given by
$$\mathbb{P}(U \neq 0) = 1 - \mathbb{P}(U = 0) = 1 - \exp\big( -\lambda_j(t_+ - t_*) \big) \approx \lambda_j(t_+ - t_*) \approx e^{t_* - t_k} \frac{w_j \lambda_j}{\sum_{\ell \neq j} w_\ell \lambda_\ell}.$$
We can moreover approximate the denominator on the right-hand side by the full sum $\sum...$
arXiv:2505.10630v1 [cs.LG] 15 May 2025

How many measurements are enough? Bayesian recovery in inverse problems with general distributions

Ben Adcock, Department of Mathematics, Simon Fraser University, Canada
Nick Huang, Department of Mathematics, Simon Fraser University, Canada

Abstract: We study the sample complexity of Bayes...
https://arxiv.org/abs/2505.10630v1
strive to answer this question in the broadest possible terms, with a theoretical framework that allows for very general types of distributions. We now describe the corresponding conditions and present a simplified version of our main results.
(i) Closeness of the real and approximate distributions. Namely, W...
matrices, simplified). Consider the setup of Theorem 1.1, where $\mathcal{A}$ is a distribution of subgaussian random matrices. Then there is a constant $c > 0$ (depending on the subgaussian parameters $\beta, \kappa > 0$; see Definition 3.4) such that
$$\mathbb{P}\big[ \|x^* - \hat{x}\| \ge 34(\eta + \sigma) \big] \lesssim \delta, \quad \text{whenever} \quad m \ge c \cdot \big[ \log(\mathrm{Cov}_{\eta,\delta}(P)) + \log(1/\delta) \big].$$
Later, in Theorem 3.5, we sli...
sparse vectors. This extends a classical result for deterministic compressed sensing to the Bayesian setting.

1.3 Significance

The significance of this work is as follows. See §1.4 for additional discussion.
1. We provide the first results for Bayesian recovery with arbitrary real and approximate prior dis...
their results to allow for arbitrary distributions $\mathcal{A}$ and $\mathcal{E}$ for the forward operator and noise, respectively. Using Theorem 1.1, we derive guarantees for the case of subsampled orthogonal transforms (Theorem 1.3), which, unlike Gaussian random matrices, is very relevant to applications. In particular, if $U$ is a DFT matrix ou...
stated otherwise, $X = Y = \mathbb{R}^n$ and the cost function $c$ is the Euclidean distance.

2.2 Approximate covering numbers

As a measure of the complexity of a distribution, we define approximate covering numbers, as introduced in Jalal et al. [47].

Definition 2.1 (Approximate covering number). Let $(X, \mathcal{F}, P)$ be a probability space...
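Since the formal definition is cut off here, the following is only a plausible empirical sketch of the notion (our own construction, not the paper's): the smallest number of radius-$\eta$ balls whose union captures at least $1-\delta$ of the mass, estimated from samples by a greedy heuristic.

```python
import numpy as np

# Hedged sketch: empirically estimate an approximate covering number
# Cov_{eta,delta}(P), i.e. the least number of radius-eta balls whose union
# holds >= 1 - delta of P's mass, via a greedy heuristic on i.i.d. samples.
def approx_covering_number(samples, eta, delta):
    remaining = samples.copy()
    target = int(np.ceil((1 - delta) * len(samples)))
    covered, centers = 0, 0
    while covered < target:
        # greedily centre a ball on the sample that covers the most others
        d = np.linalg.norm(remaining[:, None] - remaining[None, :], axis=-1)
        best = np.argmax((d <= eta).sum(axis=1))
        mask = d[best] <= eta
        covered += mask.sum()
        remaining = remaining[~mask]
        centers += 1
    return centers

rng = np.random.default_rng(0)
P_samples = rng.normal(0.0, 1.0, size=(500, 2))   # P = N(0, I) in R^2
print(approx_covering_number(P_samples, eta=1.0, delta=0.05))
```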
$\le D_{\mathrm{shift}}(\varepsilon, \tau; E)\, p_E(v)$, $\forall u, v \in \mathbb{R}^n$, $\|u\| \le \tau$, $\|u - v\| \le \varepsilon$.

3 Main results

We now present our main results. The first, an extension of Theorem 1.1, is a general result that holds for arbitrary distributions $R$, $P$, $\mathcal{A}$ and $\mathcal{E}$.

Theorem 3.1. Let $1 \le p \le \infty$, $0 \le \delta \le 1/4$, $\varepsilon, \eta, t > 0$, $c, c' \ge 1$ and $\sigma \ge \varepsilon/\delta^{1/p}$. Let $\mathcal{E}$ be a distribution on $\mathbb{R}^m$ and $R, P$ be distributions on $\mathbb{R}^n$ satisfyi...
$+ \log(1/\delta)\big]. \tag{3.4}$
This theorem is derived from Theorem 3.1 by showing that the various concentration bounds are exponentially small in $m$ for subgaussian random matrices (see §B). It is a direct generalization of [47], which considered the Gaussian case only. It shows that subgaussian random matrices are near-optimal f...
$\mathbb{R}^n$. To derive Theorem 3.8 from Theorem 3.1 (see §B), we show exponentially fast concentration in $m/\mu(U; D)$. Had we considered the whole of $\mathbb{R}^n$ then, since $\mu(U; \mathbb{R}^n) = n$, this would have led to an undesirable measurement condition of the form $m = O(n)$, scaling linearly in the ambient dimension $n$.

4 Covering number and sample c...
matrices, we also need to consider the coherence. For $P = P_s$ as above, one can easily show that $\mu(U; \mathrm{supp}(P_s) - \mathrm{supp}(P_s)) \le 2s\,\mu^*(U)$, where
$$\mu^*(U) = n \cdot \max_{i,j} |u_{ij}|^2. \tag{4.3}$$
The term $\mu^*(U)$ is often referred to as the coherence of $U$ (see, e.g., [4, Defn. 5.8] or [24]). Notice that $\mu^*(U) \approx 1$ for DFT matrices, which is one reason why su...
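A quick check of (4.3) for the unitary-normalised DFT matrix (our own illustration, not from the paper):

```python
import numpy as np

# Sketch, following (4.3): the coherence mu*(U) = n * max |u_ij|^2 of a
# unitary-normalised DFT matrix equals 1, the smallest possible value.
n = 64
U = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix
print(n * np.max(np.abs(U) ** 2))           # 1.0 (up to floating point)

# For comparison, the identity matrix is maximally coherent: mu*(I) = n.
print(n * np.max(np.abs(np.eye(n)) ** 2))   # 64.0
```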
As shown therein, this can lead to significant performance gains over sampling uniformly at random. We believe a similar approach can be taken in the Bayesian setting as a consequence of our general framework, and we intend to explore this in future work. Finally, our results in this paper are theoretical, and strive to st...
measurements. IEEE J. Sel. Areas Inf. Theory, 3(3):502–512, 2022.
[15] A. Berk, S. Brugiapaglia, Y. Plan, M. Scott, X. Sheng, and O. Yilmaz. Model-adapted Fourier sampling for generative compressed sensing. arXiv:2310.04984, 2023.
[16] S. Bhadra, V. ...
M. Fazlyab, A. Robey, H. Hassani, M. Morari, and G. J. Pappas. Efficient and accurate estimation of Lipschitz constants for deep neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associate...
Advances in Neural Information Processing Systems, volume 33, pages 713–727. Curran Associates, Inc., 2020.
[49] Z. Kadkhodaie and E. Simoncelli. Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, edit...
Y. C. Eldar. Theoretical perspectives on deep learning methods in inverse problems. IEEE J. Sel. Areas Inf. Theory, 3(3):433–453, 2022.
[65] V. Shah and C. Hegde. Solving linear inverse problems using GAN priors: An algorithm with provable guarantees. In 2018 IEEE International Conference on Acoustics, Speech and Si...
pushforward measure
$$G_\sharp \gamma\Big[ \bigcup_{i=1}^{k} B_\eta(z_i) \Big] = \gamma\Big[ \bigcup_{i=1}^{k} G^{-1}(B_\eta(z_i)) \Big] \ge \gamma\Big[ \bigcup_{i=1}^{k} B_{\eta/L}(x_i) \Big] \ge 1 - \delta.$$
This gives the result.

Lemma A.2 (Approximate covering number of a normal distribution). Let $P = N(0, \sigma^2 I)$ on $\mathbb{R}^n$. Then its approximate covering number (Definition 2.1) satisfies
$$\mathrm{Cov}_{\eta,\delta}(P) \le \Big( 1 + \frac{2\sqrt{n}\,\sigma t}{\eta} \Big)^n, \quad \text{where } t = 1 + \sqrt{\tfrac{2}{n}\log(1/\delta)}.$$
Proof. O...
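The bound rests on standard Gaussian norm concentration: a draw $x \sim N(0,\sigma^2 I)$ satisfies $\|x\| \le \sqrt{n}\,\sigma t$ with probability at least $1-\delta$. A Monte Carlo sanity check (ours, not from the paper):

```python
import numpy as np

# Sanity check of the norm-concentration fact behind Lemma A.2:
# ||x|| <= sqrt(n) * sigma * t with probability >= 1 - delta,
# where t = 1 + sqrt((2/n) * log(1/delta)).
rng = np.random.default_rng(0)
n, sigma, delta = 50, 2.0, 0.01
t = 1 + np.sqrt(2 / n * np.log(1 / delta))

x = rng.normal(0.0, sigma, size=(100_000, n))
frac = np.mean(np.linalg.norm(x, axis=1) <= np.sqrt(n) * sigma * t)
print(frac)            # comfortably above 1 - delta = 0.99
```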
that $E$ has density
$$p_E(e) = (2\pi\sigma^2/m)^{-m/2} \exp\Big( -\frac{m}{2\sigma^2}\|e\|^2 \Big).$$
Therefore
$$\frac{p_E(u)}{p_E(v)} = \exp\Big( \frac{m}{2\sigma^2}(\|v\| - \|u\|)(\|v\| + \|u\|) \Big).$$
Now suppose that $\|u\| \le \tau$ and $\|u - v\| \le \varepsilon$. Then
$$\frac{p_E(u)}{p_E(v)} \le \exp\Big( \frac{m(2\tau + \varepsilon)}{2\sigma^2}\varepsilon \Big).$$
Hence $D_{\mathrm{shift}}(\varepsilon, \tau; E) \le \exp\big( \frac{m(2\tau + \varepsilon)}{2\sigma^2}\varepsilon \big)$, which gives the second result.

B.2 Subgaussian concentration inequalities

Lemma B.2 (Lower...
$-cm)$, for some universal constant $c > 0$, and
$$D_{\mathrm{shift}}(2\varepsilon', 2\sigma; E) = \exp\Big( \frac{m(4\sigma + 2\varepsilon')}{2\sigma^2} 2\varepsilon' \Big) \lesssim 1,$$
where we used the facts that $m \ge 1$ and $\sigma \ge \varepsilon/\delta^{1/p}$. We deduce that
$$p \lesssim \delta + \exp(k - c(\beta,\kappa)m)$$
for a possibly different constant $c(\beta,\kappa) > 0$. The condition (3.4) on $m$ and (3.1) now give that $p \lesssim \delta$, as required.

Proof of Theorem 3.8. Let $p = \mathbb{P}[\|x^* - \hat{x}\| \ge 34(...$
metric with respect to $d$ and $W_\infty$ be the Wasserstein-$\infty$ metric with cost function $d$. Then
$$d_H(\mathrm{supp}(\mu), \mathrm{supp}(\nu)) \le W_\infty(\mu, \nu).$$
In particular, $\mathrm{supp}(\mu) \subseteq B_\eta(\mathrm{supp}(\nu))$ for any $\eta \ge W_\infty(\mu, \nu)$.

Proof. We follow the arguments presented in [44]. Since
$$d_H(\mathrm{supp}(\mu), \mathrm{supp}(\nu)) = \max\Big( \sup_{x \in \mathrm{supp}(\nu)} d(x, \mathrm{supp}(\mu)), \ \sup_{y \in \mathrm{supp}(\mu)} d(y, \mathrm{supp}(\nu)) \Big)$$
we...
the mixture property, $\mathcal{H}_i \ll \mathcal{H}$ and therefore its Radon–Nikodym derivative $h_i = \frac{\mathrm{d}\mathcal{H}_i}{\mathrm{d}\mathcal{H}}$ exists. This means we may write
$$p = a_i \int \mathbb{P}(\hat{y} \sim \mathcal{H}_j \mid y^*) \, h_i(y^*) \, \mathrm{d}\mathcal{H}(y^*).$$
By definition, we have $\mathbb{P}(\hat{y} \sim \mathcal{H}_j \mid y^*) = \mathbb{P}(y^* \sim \mathcal{H}_j \mid y^*)$ and, using (C.2), we deduce that
$$p = \int \frac{a_i a_j h_i(y^*) h_j(y^*)}{\sum_{l=1}^{k} a_l h_l(y^*)} \, \mathrm{d}\mathcal{H}(y^*).$$
We now write
$$p = \int a_i h_i(y^*) h_j(y^*) \frac{a_j}{a_i h_i(y^*) + a_j h_j(y^*)}...$$
$x$, $x \in S_{\tilde{x},\mathrm{ext}}$, and therefore
$$I_2 = \mathbb{E}_{x \sim P_{\mathrm{ext}}}\big[ \mathbb{E}_{A \sim \mathcal{A}}\big[ E(B_A - Ax)\,\mathbb{1}_{C_x^c} \big] \big] \le \mathbb{E}_{x \sim P_{\mathrm{ext}}}\big[ \mathbb{E}_{A \sim \mathcal{A}}\big[ E(B_{A,x} - Ax)\,\mathbb{1}_{C_x^c} \big] \big].$$
But we notice that $B_{A,x} - Ax = B^c_{\sqrt{c}(\eta+\sigma)}$. Now since $\eta \ge 0$, we have $B^c_{\sqrt{c}(\eta+\sigma)} \subseteq B^c_{\sigma\sqrt{c}}$. Hence
$$E(B_{A,x} - Ax) = E\big( B^c_{\sqrt{c}(\eta+\sigma)} \big) \le E\big( B^c_{\sigma\sqrt{c}} \big) \le D_{\mathrm{upp}}(\sigma\sqrt{c}; E),$$
and therefore $I_2 \le D_{\mathrm{upp}}(\sigma\sqrt{c}; E)$. Combining this with (C.5), (C.6) and (C.7), we deduce that $\mathbb{E}_A[\mathrm{He}...$
$$\int\!\!\int\!\!\int \mathbb{1}_{B_{1,\hat{x}}}(x^*) \, \mathrm{d}P(\cdot \mid Ax^* + e, A)(\hat{x}) \, \mathrm{d}E(e) \, \mathrm{d}\mathcal{A}(A) \, \mathrm{d}\Pi(x^*, z^*).$$
Define $E = \{(x^*, z^*) : \|x^* - z^*\| \le \varepsilon\}$ and observe that $\Pi(E) = 1$. Then, for fixed $A$, $e$, we have
$$\int\!\!\int \mathbb{1}_{B_{1,\hat{x}}}(x^*) \, \mathrm{d}P(\cdot \mid Ax^* + e, A)(\hat{x}) \, \mathrm{d}\Pi(x^*, z^*) = \int_E \int \mathbb{1}_{B_{1,\hat{x}}}(x^*) \, \mathrm{d}P(\cdot \mid Ax^* + e, A)(\hat{x}) \, \mathrm{d}\Pi(x^*, z^*)$$
$$\int\!\!\int \mathbb{1}_{B_{2,\hat{x}}}(z^*) \, \mathrm{d}P(\cdot \mid Ax^* + e, A)(\hat{x}) \, \mathrm{d}\Pi(x^*, z^*) = \int_E \int \mathbb{1}_{B_{2,\hat{x}}}(z^*) \, \mathrm{d}P(\cdot \mid Ax^* + e, A)(\hat{x}) \, \mathrm{d}\Pi(x^*, z...$$
to the statement (concerning the set $S$ and the distribution in whose support it is contained), which is important for proving our main result.

Lemma C.7 (Decomposing distributions). Let $R, P$ be arbitrary distributions on $\mathbb{R}^n$, $p \ge 1$ and $\eta, \rho, \delta > 0$. If $W_p(R, P) \le \rho$ and $k \in \mathbb{N}$ is such that
$$\min\{ \log \mathrm{Cov}_{\eta,\delta}(P), \ \log \mathrm{Cov}_{\eta,\delta}(R) \} \le k, \tag{C.15}$$
then ther...
$$\cdots \int_{B(u_i,\eta)} f(x) \, \mathrm{d}P(x) + \mathbb{1}_F(\hat{u})(1 - c^*) \equiv Q'(F),$$
which gives the result for the first marginal. For the other, we have
$$\Pi(E, \mathbb{R}^n) = \sum_{i=1}^{l} \int_{B(u_i,\eta) \cap E} f(x) \, \mathrm{d}P(x) + P\Big( E \setminus \bigcup_{i=1}^{l} B(u_i, \eta) \Big).$$
By definition of $f$, this is precisely
$$\Pi(E, \mathbb{R}^n) = P\Big( E \cap \bigcup_{i=1}^{l} B(u_i, \eta) \Big) + P\Big( E \setminus \bigcup_{i=1}^{l} B(u_i, \eta) \Big) \equiv P(E),$$
which gives the result for the second marginal. N...
as required. Using (C.17), we also have the analogous result for $E_2$. Hence
$$\Omega(E^c) = \Omega(E_1 \cup E_2) \le \Omega(E_1) + \Omega(E_2) =: 2\delta' \le 2\delta,$$
and consequently, $\Omega(E) = 1 - 2\delta' \ge 1 - 2\delta$, where
$$E := \{ (x_1, x_2, x_3) : \|x_1 - x_3\| \le \rho/\delta^{1/p} \text{ and } \|x_1 - x_2\| \le \eta \}. \tag{C.18}$$
4. Decomposing $P$, $R$. Finally, we define $P'$, $P''$, $R'$, $R''$ and $Q$ by conditioning on the events $E$ and $E^c$, as fo...
$\ge (c+1)(\eta+\sigma)\big] \tag{C.20}$
and $D = \{ x^* - z^* : (x^*, z^*) \in \mathrm{supp}(\Pi) \}$, for $\Pi$ being the $W_\infty$-optimal coupling of $R', P'$ guaranteed by (ii). Lemma C.1 implies that $\mathrm{supp}(\Pi) \subseteq \mathrm{supp}(R') \times \mathrm{supp}(P')$ and therefore $D \subseteq \mathrm{supp}(R') - \mathrm{supp}(P')$. Now (iii) implies that $\mathrm{supp}(P') \subseteq \mathrm{supp}(P)$. Similarly, (iii) implies that $\mathrm{supp}(R') \subseteq \mathrm{supp}(R)$. But Lemma C.2 and (ii) imply ...
$_{\tilde{z},\mathrm{ext}}(\cdot \mid A, u)]$. Now fix $A \in \mathbb{R}^{m \times n}$. Let $\mathcal{H}_{\tilde{z},\mathrm{int},A}$ be the distribution of $y^* = Az^* + e$ for $z^* \sim P_{\tilde{z},\mathrm{int}}$ and $e \sim E$ independently, and define $\mathcal{H}_{\tilde{z},\mathrm{ext},A}$ similarly. Then, by Fubini's theorem, we have
$$Q(\tilde{z}) \, \mathbb{E}_{z^* \sim \Gamma_{\tilde{z}},\, A \sim \mathcal{A},\, e \sim E,\, \hat{z} \sim P(\cdot \mid A, u)}[\mathbb{1}_E] \le \frac{1}{1 - 2\delta'}\, \mathbb{E}_{A \sim \mathcal{A},\, e \sim E}\, \mathbb{P}\big[ y^* \sim \mathcal{H}_{\tilde{z},\mathrm{int},A},\ \hat{y} \sim \mathcal{H}_{\tilde{z},\mathrm{ext},A}(\cdot \mid y^*) \big].$$
Now let $\mathcal{H}_{\tilde{z},A}$ be the distribution of $y = Az + e$ for $z \sim ...$
arXiv:2505.10715v1 [stat.ME] 15 May 2025

Dependency-Aware Shrinkage Priors for High Dimensional Regression

Javier Enrique Aguilar¹* and Paul-Christian Bürkner¹†
¹Department of Statistics, TU Dortmund University, Germany

Abstract: In high dimensional regression, global-local shrinkage priors have gained significant tr...
https://arxiv.org/abs/2505.10715v1
computational inefficiencies and overfitting (Bühlmann and Van De Geer, 2011; Giraud, 2014; Hastie et al., 2015; Wainwright, 2019). Among shrinkage priors, the class of continuous global-local shrinkage priors has gained significant attention (Mitchell et al., 1988; Carvalho et al., 2010; Armagan et al., 2013; Bh...
resemble frequentist regularization techniques (such as interpreting the log-prior as a penalty), the comparison can be misleading. The theoretical properties of Bayesian shrinkage priors and frequentist regularizers differ in fundamental ways (Simpson et al., 2017; Castillo, 2024). For example, under suitable conditio...
estimation in structured settings but do not yield substantial improvements in prediction. We conclude that, while prior dependence structures can be useful in specific inferential contexts, they are not universally beneficial and should not be used by default. In high dimensional problems, the advantages of flexible p...
priors of this form (3) as dependency-aware shrinkage priors (DASP). The standard shrinkage prior model (2) is recovered by setting $\Omega = I$.

2.2.1. Related priors

Priors for regression coefficients that capture dependence structures typically do so through the incorporation of a covariance matrix. In what follows, we ...
George and McCulloch (1993, 1995, 1997), who propose the spike-and-slab prior with a non-diagonal precision matrix to encode dependencies among coefficients. In their formulation, the prior on $b$ takes the form $N(0, \tau^2 \Gamma \Omega^{-1} \Gamma)$, where $\Gamma$ is a diagonal matrix of binary inclusion indicators $\gamma_i \in \{0, 1\}$ for $i = 1, \dots, p$. The i...
the user, either through direct selection or by placing a prior distribution on it; both of which present open opportunities for further investigation. In contrast, our dependency-aware shrinkage prior avoids imposing any constraint on the second moment of the local scales $\lambda_i$. This flexibility allows it to encompass ...
deviates substantially from independence (i.e., large $\|\Omega^{-1} - I\|_2$). Conversely, the upper bound shows that the difference can be negligible when shrinkage is weak (i.e., large $\lambda_p$ and hence large $\lambda_1$), the design is well-conditioned (i.e., large $\nu_p$), and the...
consider commonly used correlation patterns, including the autoregressive model of order 1 (AR1), moving average models of orders 1 and 2 (MA1 and MA2), as well as their blocked counterparts: BAR1, BMA1, and BMA2. These same structures are also employed in our experiments in Section 3. We provide their algebraic defini...
multivariate Gaussian, even when $p(b \mid \lambda, \tau)$ is. The plots in Figure 2 show how the correlation from the conditional distribution of $b$ is propagated to the marginal distributions. The standard uncorrelated priors are represented when $\rho = 0$. The contour plots in Figure 2 show prior mass concentrated near the axes, indicatin...
signal (i.e., a large effect), $\kappa_i \approx 0$, whereas for noise, $\kappa_i \approx 1$. Shrinkage factors offer a unified framework for comparing the behavior of different shrinkage priors across a range of hyperparameter settings (Polson and Scott, 2012; Tadesse and Vannucci, 2021). A desirable property of such priors is their ability to effec...
fying $m_{\mathrm{eff}} \le p$, and reflects how the combination of local and global shrinkage affects model complexity. When prior information or domain expertise suggests a plausible number of active predictors, visualizing the prior distribution of $m_{\mathrm{eff}}$ under different hyperparameter settings offers practical guidance for prior elicit...
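As an illustration of such prior elicitation, the sketch below (ours) draws the prior of $m_{\mathrm{eff}}$ for a horseshoe-type prior using the shrinkage-factor form $\kappa_i = 1/(1 + n\sigma^{-2}\tau^2\lambda_i^2)$ from Piironen and Vehtari (2017); the exact definition used in this paper may differ.

```python
import numpy as np

# Hedged sketch: prior draws of the effective number of nonzero coefficients
# m_eff = sum_i (1 - kappa_i) for a horseshoe-type prior, using the shrinkage
# factor kappa_i = 1 / (1 + n * sigma^-2 * tau^2 * lambda_i^2)
# (cf. Piironen and Vehtari, 2017).
rng = np.random.default_rng(0)
p, n, sigma, tau = 100, 200, 1.0, 0.1
draws = 10_000

lam = np.abs(rng.standard_cauchy(size=(draws, p)))      # half-Cauchy local scales
kappa = 1.0 / (1.0 + n * sigma**-2 * tau**2 * lam**2)   # shrinkage factors
m_eff = np.sum(1.0 - kappa, axis=1)                     # effective nonzeros

print(np.mean(m_eff), np.percentile(m_eff, [5, 95]))    # prior summary of m_eff
```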
other, we can model the dependencies between the regression coefficients $b$ based on the structure of $X$. A well-known example of this approach is Zellner's $g$-prior (see Section 2.2.1), which achieves this by setting the prior covariance matrix of the coefficients proportional to the covariance of the MLE, i.e., $\mathrm{Cov}(b \mid \sigma, g)...$
the model can be viewed as augmenting the data with additional pseudo-observations that share the same design matrix $X$, but with response values set to zero (Zellner, 1986). These pseudo-observations are scaled by the coefficients $\lambda_i$ and $\tau$, which determine their relative weight. Smaller values of $\lambda_i^2 \tau^2$ correspond to f...
Ledoit and Wolf, 2020). A key advancement in this area is the linear shrinkage estimator introduced by Ledoit and Wolf (2004b), given by
$$\Sigma^* = \varphi_1 I + \varphi_2 S = \frac{\beta^2}{\delta^2}\mu I + \frac{\alpha^2}{\delta^2} S, \tag{21}$$
where $\delta^2 = \alpha^2 + \beta^2$ and $\Sigma^*$ minimizes the expected quadratic loss $\mathbb{E}\,\| \tilde{\Sigma} - \Sigma_X \|_F^2$ subject to $\tilde{\Sigma} = \varphi_1 I + \varphi_2 S$, with respect to nonrandom coefficients $\varphi_1, \varphi_2$. Here, $\|\cdot\|_F...$
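In practice, such an estimator is available off the shelf, for instance via scikit-learn's LedoitWolf class; the snippet below is our illustration, not an implementation prescribed by the paper.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Sketch: linear (Ledoit-Wolf) shrinkage of a sample covariance towards the
# identity target, as in (21), using scikit-learn's built-in estimator.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(20), np.eye(20) * 2.0, size=50)

lw = LedoitWolf().fit(X)
print(lw.shrinkage_)         # estimated weight placed on the identity target
print(lw.covariance_.shape)  # shrunk covariance estimate, here 20 x 20
```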
covariance structures with those assumed in our simulations. We considered the following shrinkage priors: Beta Prime (BP), Dirichlet-Laplace (DL), Horseshoe (HS), Regularized Horseshoe (RHS), and R2D2 (D2) (Bai and Ghosh, 2019; Carvalho et al., 2010; Bhattacharya et al., 2015; Piironen and Vehtari, 2017; Zhang et al....
95% marginal credibility intervals. We report average coverage proportion, average width, sensitivity (power), specificity, and coverage of non-zeros (Neyman et al., 1933; Berger, 1985; Benjamini and Hochberg, 1995). We also show Receiver Operating Characteristic (ROC) curves to understand how coverage properties wo...
regularization across coefficients, improving parameter recovery under structured sparsity. However, a particularly challenging

[Figure: results for priors D2, DL, HS, R... across panels p = 50/250, R² = 0.2/0.8, ρ = 0.5/0.95, in the Block3 and Random Coefs settings]...
under the fixed block coefficient setting, for the BMA1 and BAR1 structures, respectively. Figure 7 shows that incorporating dependency information consistently reduces RMSE for nonzero coefficients across all scenarios. Notably, improvements appear even in the most challenging setting: high dimensionality ($p = 250$), l...
includes gene expression levels for 20 genes across 120 samples obtained from microarray experiments on mammalian eye tissue (Scheetz et al., 2006). It is available in the R package flare (Li et al., 2024). Figure 9 displays histograms of the pairwise correlations among covariates. We refrain from displaying the fu...
4. Discussion

We propose an extension to the continuous global-local shrinkage priors framework that allows for the inclusion of dependence structures via a correlation matrix $\Omega$. Specifically, we move from the standard independent prior on each coefficient, $b_i \mid \lambda_i, \tau, \sigma \sim \mathcal{N}(0, \lambda_i^2 \tau...$
modeling of $p(p-1)/2$ correlations, an impractical task in most real-world applications. For shrinkage priors to remain practically useful, it is essential that their complexity remains manageable, ideally governed by a small number of hyperparameters, which is satisfied by our approach.

4.1. Conclusion

Our study purs...
The Horseshoe prior is defined hierarchically as
$$b_j \mid \lambda_j, \tau, \sigma \sim \mathcal{N}\big( 0, \sigma^2 \lambda_j^2 \tau^2 \big), \quad \lambda_j \sim \mathcal{C}^+(0, 1), \quad \tau \sim \mathcal{C}^+(0, 1), \quad \sigma \sim p(\sigma),$$
where $\mathcal{C}^+(0, 1)$ denotes the half-Cauchy distribution with unit scale (Carvalho et al., 2010). Notice that the Horseshoe prior does not possess hyperparameters other than the ones present in the distribution of ...
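A minimal sketch (ours) of drawing coefficients from this hierarchy, with $\sigma$ fixed to 1 for illustration:

```python
import numpy as np

# Minimal sketch: sampling regression coefficients from the hierarchical
# Horseshoe prior above, with sigma fixed to 1 for illustration.
rng = np.random.default_rng(0)
p, sigma = 1000, 1.0

lam = np.abs(rng.standard_cauchy(p))    # lambda_j ~ C+(0, 1), local scales
tau = np.abs(rng.standard_cauchy())     # tau ~ C+(0, 1), global scale
b = rng.normal(0.0, sigma * lam * tau)  # b_j ~ N(0, sigma^2 lambda_j^2 tau^2)

# Heavy tails plus a spike at zero: many tiny values, a few very large ones.
print(np.mean(np.abs(b) < 0.1), np.max(np.abs(b)))
```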
norm:
$$\|AB\|_2 \le \|A\|_2 \|B\|_2. \tag{27}$$
• If $A$ and $C$ are invertible then:
$$\|ABC\|_2 \ge \|A^{-1}\|_2^{-1}\, \|B\|_2\, \|C^{-1}\|_2^{-1}. \tag{28}$$
• Rayleigh characterization. Let $A \in \mathbb{R}^{p \times p}$ be a real symmetric matrix. Then its smallest and largest eigenvalues can be characterized as (Anderson, 2003)
$$\lambda_{\min}(A) = \min_{\|x\|_2 = 1} x'Ax, \qquad \lambda_{\max}(A) = \max_{\|x\|_2 = 1} x'Ax. \tag{29}$$
• Weyl's inequalit...
$(i,j)$-th entry of the matrix is given by
$$\mathrm{Corr}(i, j) = \rho^{|i-j|}, \tag{41}$$
where $\rho \in (-1, 1)$ is the autocorrelation parameter.

5.3.2. Moving Average of order 1

The moving average correlation matrix of order 1, MA(1), captures correlations between immediate neighbors. The $(i,j)$-th entry of the matrix is $\mathrm{Corr}(i, j) = 1$ if ...
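The sketch below (ours, following (41) and the MA(1) description) constructs both correlation matrices; the MA(1) off-diagonal value, here called theta, is an assumed parameter name since the definition is cut off above.

```python
import numpy as np

# Sketch: the AR(1) and MA(1) correlation matrices used as dependency
# structures Omega. The MA(1) parameter name "theta" is our assumption.
def ar1_corr(p, rho):
    idx = np.arange(p)
    return rho ** np.abs(idx[:, None] - idx[None, :])   # Corr(i,j) = rho^|i-j|

def ma1_corr(p, theta):
    idx = np.arange(p)
    dist = np.abs(idx[:, None] - idx[None, :])
    # 1 on the diagonal, theta for immediate neighbours, 0 otherwise
    return np.where(dist == 0, 1.0, np.where(dist == 1, theta, 0.0))

print(ar1_corr(5, 0.5).round(3))
print(ma1_corr(5, 0.3))
```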
[Figure: curves shown in panels for p = 50/250, ρ = 0.5/0.95 and R² = 0.2/0.8]
in BAR1).

Table 2: Coverage metrics under the BMA1 structure with fixed block signals ($p = 250$, $R^2 = 0.8$). We compare shrinkage priors across two correlation levels ($\rho = 0.5$ and $\rho = 0.95$). Models with an "O" suffix incorporate the BMA1 structure to induce dependency-aware shrinkage.

Model ID | Coverage | Specificity | Sensitivit...
URL https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-17/issue-1/Intuitive-joint-priors-for-Bayesian-linear-multilevel-models--The/10.1214/23-EJS2136.full
Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis. Wiley.
Data: Methods, Theory and Applications. Springer Series in Statistics. Berlin, Heidelberg: Springer. URL https://link.springer.com/10.1007/978-3-642-20192-9
Bürkner, P.-C. (2017). "brms: An R Package for Bayesian Multilevel Models Using Stan." Journal of Statistical Software, 80(1): 1–28. URL https://www.jstatsof...
Statistical Science, 7(4): 473–483. URL https://doi.org/10.1214/ss/1177011137
Ghosal, S., Ghosh, J. K., and van der Vaart, A. W. (2000). "Convergence rates of posterior distributions." The Annals of Statistics, 28(2): 500–531. URL https://projecteuclid.org/journals/ann...
Wiley & Sons.
Johnstone, I. M. and Silverman, B. W. (2004). "Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences." The Annals of Statistics, 32(4): 1594–1649. URL https://projecteuclid.org/journals/ann...
London. Series A, Containing Papers of a Mathematical or Physical Character, 231(694–706): 289–337. URL https://royalsocietypublishing.org/doi/10.1098/rsta.1933.0009
O'Hagan, A. and Pericchi, L. (2012). "Bayesian heavy-tailed models and conflict resolution: A review." Brazilian Journal of...
fourth Berkeley symposium on mathematical statistics and probability, volume 1: contributions to the theory of statistics, 547–562. University of California Press.
Scheetz, T. E., Kim, K.-Y. A., Swiderski, R. E., Philp, A. R., Braun, T. A., Knudtson, K. L., Dorrance, A. M., DiBona, G. F., Huang, J., Casa...
analysis with g-prior distributions." Bayesian Inference and Decision Techniques.
— (1996). "Models, prior information, and Bayesian analysis." Journal of Econometrics, 75(1): 51–68. URL https://www.sciencedirect.com/science/article/pii/0304407695017682
Zhang, Y. D., Naughton, B. P., Bondell, H. D., ...
arXiv:2505.10738v2 [stat.ME] 19 May 2025

Statistically Significant Linear Regression Coefficients Solely Driven by Outliers in Finite-Sample Inference

Felix Reichel*

May 2025

A Preprint

Abstract: In this paper, we investigate the impact of outliers on the statistical significance of coefficients in linear regression. We demonstrate, through...
https://arxiv.org/abs/2505.10738v2
Normalized Maximum Ordinary Residual
4.4 Distributional Properties and Critical Values
4.5 Upper Bound of Single Outlier Test Statistics
5 On the No...
6.10 Spline Regression
6.11 Generalized Linear Models (GLMs)
6.12 Generalized Additive Models (GAMs)
...
predictor variable [18]. This paper investigates how outliers can lead to misleading conclusions in regression analysis. Through simulations in R, we demonstrate the fragility of OLS-based inference when finite-sample inference is manipulated by the insertion of a single outlier. To mitigate this issue, we additionally...
[18, 19]. We can show this with a simple example. We generate 100 observations $(x, y)$ in R using seed 123, with no true relationship. Model 1, based on this clean data, shows no significant coefficient. In Model 2, we add one extreme outlier. This one point changes the slope to 1.62 and creates a highly significant res...
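The following is a re-creation of this experiment in Python rather than the paper's R, so the exact numbers differ; the outlier coordinates are our own choice.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: one extreme outlier flips an insignificant OLS slope
# into a "highly significant" one.
def slope_t_test(x, y):
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                        # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])     # SE of the slope
    t = beta[1] / se
    return beta[1], 2 * stats.t.sf(abs(t), df=n - 2)    # slope and p-value

rng = np.random.default_rng(123)
x, y = rng.normal(size=100), rng.normal(size=100)       # no true relationship
print(slope_t_test(x, y))                               # typically insignificant

x_out, y_out = np.append(x, 10.0), np.append(y, 20.0)   # one extreme outlier
print(slope_t_test(x_out, y_out))                       # now "significant"
```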
outliers), an approximate critical value can be obtained using the Bonferroni correction. To control the family-wise error rate at level $\alpha$, we compare $R_n$ to the quantile of the Student's $t$-distribution with $n - p - 1$ degrees of freedom:
$$\mathbb{P}\big( R_n > t_{1-\alpha/(2n),\, n-p-1} \big) \le \alpha. \tag{17}$$

4.3 Normalized Maximum Ordinary...
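In code, the critical value in (17) is a single quantile call (a sketch, assuming scipy is available):

```python
from scipy import stats

# Bonferroni-corrected critical value from (17): the maximum normalized
# residual R_n is compared against t_{1 - alpha/(2n), n - p - 1}.
def bonferroni_critical_value(n, p, alpha=0.05):
    return stats.t.ppf(1 - alpha / (2 * n), df=n - p - 1)

print(bonferroni_critical_value(n=100, p=1))   # ≈ 3.6 for n = 100, one predictor
```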
These assumptions include linearity of the model, meaning the outcome variable $y$ is expressed as a linear combination. The covariate matrix $X$ must have full column rank so that $X^\top X$ is invertible, ensuring that the parameter estimates are uniquely defined. Exogeneity is also required, meaning the regressors are uncorrela...
incorrect inferences. The normality assumption on the residuals is critical for valid $t$-tests, and its violation, commonly due to outliers, necessitates careful diagnostic checks or the use of robust methods. It is advised to always conduct residual diagnostics and apply robust regression techniques to assess and mitigate t...
model fit to the contaminated data. These plots provide visual evidence of how outliers affect model assumptions and how robust methods mitigate their influence.

A.1 Residual Plots: SLR OLS Model (Clean Data/No Outlier)

Figure 2: Diagnostic plots for the OLS model. The residuals appear homoscedastic (constant variance)...
arXiv:2505.10747v1 [math.ST] 15 May 2025

Assumption-lean weak limits and tests for two-stage adaptive experiments

Ziang Niu and Zhimei Ren
Department of Statistics and Data Science, University of Pennsylvania

May 24, 2025

Abstract: Adaptive experiments are becoming increasingly popular in real-world applications for ef...
https://arxiv.org/abs/2505.10747v1
stage. In the pilot stage, i.i.d. data $D_P \sim P_P$ are collected and inform a selection algorithm $S(D_P)$. Then, in the follow-up stage, new data $D_F \sim P_F(D_P)$ are gathered according to the output of the selection algorithm $S$, resulting in data that are conditionally i.i.d. given $D_P$ (see Algorithm 1 for a complete description). The c...
et al. (2020) and Hadad et al. (2021) establish asymptotic normality results on the outcome means by imposing strong assumptions on the signal strength or the distribution of the potential outcomes.¹ Hirano et al. (2023) provide a general representation of the limiting distribution of test statistics in a multi-stage setu...
al., 2021). We establish weak convergence results under minimal assumptions. Building on this foundation, we propose a valid and computationally efficient bootstrap procedure for hypothesis testing in the presence of non-normal limiting null distributions. Specifically, our main contributions are summarized as follow...
WIPW test statistic. In Section 3, we present the formal results on weak convergence and the bootstrap methodology, instantiate our general theory in various adaptive experiments, and establish the connection to existing works. In Section 4, we evaluate the finite-sample performance of the derived tests. We conclude t...
$S$.

Remark 1 (Generality of $S$). Our main results readily generalize to settings where the selection algorithm depends on more complex functions of the data beyond the simple difference in means. However, for clarity of presentation and broad applicability, we focus on selection algorithms $S$ based on the estimator of the...
equivalent to the sample mean estimator.

A broad class of weighting choices. We consider the class of weights of the form $h^{(t)}_N(s) \equiv \bar{e}^{m}_N(s, H_{t-1})/N^{1/2}$, where different choices of $m$ allow for a variety of weighting strategies (see the sketch following this list):
• $m = 0$: Constant weighting, $h^{(t)}_N(s) \equiv 1/N^{1/2}$;
• $m = 1/2$: Adaptive weighting, $h^{(t)}_N(s) \equiv \bar{e}^{1/2}_N(s, H_{t-1})...$
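A minimal sketch of this weighting family (our notation guesses: we read $\bar{e}_N(s, H_{t-1})$ as the estimated sampling probability of arm $s$ given the history, and the remaining choices of $m$ are cut off above):

```python
import numpy as np

# Hedged sketch of the weighting family h_N^(t)(s) = e_bar^m / sqrt(N):
# m = 0 recovers constant weighting, m = 1/2 the adaptive weighting above.
def weight(e_bar, N, m):
    return e_bar**m / np.sqrt(N)

e_bar, N = 0.25, 1000                   # assumed propensity for arm s
print(weight(e_bar, N, m=0.0))          # constant: 1/sqrt(N)
print(weight(e_bar, N, m=0.5))          # adaptive: sqrt(e_bar)/sqrt(N)
```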
state our main weak limit results in Section 3.1, followed by several remarks in Section 3.2. Section 3.3 describes the phase transition of the limiting distribution of the test statistic across different signal strengths. We then apply the general results to two specific adaptive experimental designs in Section 3.4. I...
be expressed as weighted sums of two dependent Gaussian vectors, $A^{(t)} \equiv (A^{(t)}(0), A^{(t)}(1))^\top$ for $t \in [2]$. Intuitively, $A^{(1)}$ corresponds to the randomness in the pilot stage and $A^{(2)}$ to that in the follow-up stage and is dependent on $A^{(1)}$. Concretely, the distributions of $A^{(t)}$ can be defined as $A^{(1)} \sim N(0, \Sigma^{(1)})$ and $A^{(2)} \mid A^{(1)} \sim N(0, \Sigma...$
3 is known as the positivity assumption (Crump et al., 2009; Imbens et al., 2015). The adaptive weighting enables a less stringent requirement on the sampling probability in the second stage, encouraging further exploitation. Assumption 4 allows the minimum sampling probability in the second stage to go to zero at the rate sl...
more intuitive way. We will discuss how the limiting distributions of $T_N$ and $W_N$ change as the signal strength $c$ changes. Specifically, the shapes of the limiting distributions are determined by the covariance $\mathrm{Cov}^{(2)}(A^{(1)})$ and the random weights $w^{(2)}_{W,V}(s)$, which are both influenced by the signal strength $c$. Consider the following tw...
Our results accommodate this full range of signal strengths while maintaining minimal assumptions on the sampling functions and underlying distributions.

Comparison to existing literature. To highlight the significance of our results, we compare them to those in related work. Zhang et al. (2020) establish asymp...
rate $l_N$, the follow-up sampling probability $\mathbb{P}[A^{(2)}_{uN} = 0 \mid H_1]$ corresponds to an $\varepsilon$-greedy algorithm with $\varepsilon = 2 l_N$:
$$(1 - l_N)\,\mathbb{1}\big( S^{(1)}_N(0) \ge S^{(1)}_N(1) \big) + l_N\,\mathbb{1}\big( S^{(1)}_N(0) < S^{(1)}_N(1) \big). \tag{8}$$

Subgroup enrichment experiments. The assignment variable $A$ may indicate subgroup membership rather than treatment assignment. In this context, ...
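A sketch (ours) of the $\varepsilon$-greedy assignment probability in (8), with hypothetical pilot-stage scores:

```python
import numpy as np

# Epsilon-greedy follow-up assignment per (8): arm 0 is sampled with
# probability 1 - l_N if its pilot-stage score is at least that of arm 1,
# and with the small exploration probability l_N otherwise.
def follow_up_prob_arm0(score0, score1, l_N):
    return (1 - l_N) if score0 >= score1 else l_N

rng = np.random.default_rng(0)
l_N = 0.05
score0, score1 = 0.8, 1.1              # pilot-stage estimates S_N^(1)(a), assumed
p0 = follow_up_prob_arm0(score0, score1, l_N)
arm = rng.choice([0, 1], p=[p0, 1 - p0])
print(p0, arm)                         # arm 1 favoured; arm 0 kept with prob l_N
```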