arXiv:2505.10272v1 [cs.LG] 15 May 2025

Spike-timing-dependent Hebbian learning as noisy gradient descent

Niklas Dexheimer¹* Sascha Gaudlitz²* Johannes Schmidt-Hieber¹
¹University of Twente ²Humboldt-Universität zu Berlin
{n.dexheimer, a.j.schmidt-hieber}@utwente.nl sascha.gaudlitz@hu-berlin.de

Abstract

Hebbian learning is a key principle underlying learning in biological neural networks. It postulates that synaptic changes occur locally, depending on the activities of pre- and postsynaptic neurons. While Hebbian learning based on neuronal firing rates is well explored, much less is known about learning rules that account for precise spike-timing. We relate a Hebbian spike-timing-dependent plasticity rule to noisy gradient descent with respect to a natural loss function on the probability simplex. This connection allows us to prove that the learning rule eventually identifies the presynaptic neuron with the highest activity. We also discover an intrinsic connection to noisy mirror descent.

1 Introduction

[Figure 1: Neural network with a single output neuron — input neurons $x_1, \dots, x_d$ connected to one output neuron $y$ via weights $w_1, \dots, w_d$.]

Hebbian learning is a fundamental concept in computational neuroscience, dating back to Hebb [13]. In this work, we provide a rigorous analysis of a Hebbian spike-timing-dependent plasticity (STDP) rule. These are learning rules for the synaptic strength parameters that depend only on the spike times of the involved neurons. More precisely, we consider a neural network composed of $d$ presynaptic/input neurons, which are connected to one postsynaptic/output neuron. The presynaptic neurons communicate with the postsynaptic neuron by sending spike sequences, the so-called spike-trains. Reweighted by synaptic strength parameters $w_1, \dots, w_d \ge 0$, they contribute to the postsynaptic membrane potential. Whenever the postsynaptic membrane potential exceeds a threshold, the postsynaptic neuron emits a spike, and the membrane potential is reset to zero.
Experiments have shown the following stylised facts, which lie at the core of Hebbian learning based on spikes: (1) Locality: the change of the synaptic weight $w_i$ depends only on the spike-train of neuron $i$ and the postsynaptic spike-train. (2) Spike-timing: the change of the synaptic weight $w_i$ depends on the relative timing of presynaptic spikes of neuron $i$ and of the postsynaptic neuron. More precisely, a pre-post spike sequence tends to increase $w_i$, whereas a post-pre sequence tends to decrease $w_i$. We refer to Morrison et al. [24] for a more comprehensive list of experimental results on STDP rules.

Hebbian learning rules are well studied if, instead of the precise timings of pre- and postsynaptic spikes, only the mean firing rates are taken into account. These rate-based models exhibit many desirable properties, including performing streaming PCA [19, 16] and receptive field development [10, Section 11.1.4]. Much less is known if the precise timing of pre- and postsynaptic spikes is considered, since the intrinsic randomness of the dynamics complicates the mathematical analysis. Common approaches to understanding STDP restrict to the mean behaviour after taking the ensemble average, e.g. [10, 18, 12], or compute the full distribution using the master equation of the Markov process [10, Section 11.2.4]. Unfortunately, the latter is only feasible in specific scenarios. We refer to [1, 17, 37, 30, 11, 34] and the references therein for further results on STDP.

* These authors contributed equally to this work. Preprint. Under review.

Our main
contribution lies in connecting STDP to noisy gradient descent and providing a rigorous convergence analysis of the noisy learning scheme. To this end, we introduce a learning rule for the weights $w_1, \dots, w_d$, which captures the locality and spike-time dependence of Hebbian STDP. We rewrite the learning rule as a noisy gradient descent scheme with respect to a suitable loss function. The connection to noisy gradient descent and stochastic approximation [20, 28] paves the way for applying mathematical tools from stochastic process theory to analyse the STDP rule. Our analysis of STDP is inspired by the work on noisy gradient descent for non-convex loss functions of Mertikopoulos et al. [23]. By refining their arguments and carefully tracking the error terms, we show an exponentially fast alignment of the output neuron with the input neuron of the highest mean firing rate on an event of high probability. The specialisation of the output neuron to the input neuron of the highest intensity is related to the winner-take-all mechanism in decision making [9, 38, 26, 21, 33]. The competitive nature of Hebbian STDP has been observed by [32, 31, 12], and the specialisation to few input neurons is important for receptive field development [6]. By connecting Hebbian STDP to noisy gradient descent, we are able to provide a mathematical analysis beyond ensemble averages and to quantify the speed of convergence. Taking into account the intrinsic geometry of the probability simplex, we also relate our learning rule to noisy mirror descent, more precisely to noisy entropic gradient descent, which has been proposed for brain-like learning by Cornford et al. [7].

The key contributions are:

1. STDP as noisy gradient descent. We deduce a new framework in which Hebbian STDP is interpreted as noisy gradient descent. This connection allows us to employ powerful tools from the theory of stochastic processes for analysing Hebbian STDP.

2. Linear convergence.
We prove the alignment of the output neuron with the input neuron of highest intensity at an exponential rate on an event of high probability.

3. Connection to noisy mirror descent. We relate our learning rule to noisy mirror descent, more specifically to entropic gradient descent. This connection facilitates the integration of techniques from both areas, potentially leading to future synergistic effects.

1.1 Notation

Linear algebra. For a positive integer $d$, we write $[d] := \{1, \dots, d\}$ and $\mathbf{1} := (1, \dots, 1)^\top \in \mathbb{R}^d$. For $i \in [d]$ we denote by $e_i$ the $i$th standard basis vector of $\mathbb{R}^d$. The Hadamard product between two vectors $a, b \in \mathbb{R}^d$ is denoted by $a \odot b := (a_1 b_1, \dots, a_d b_d)^\top \in \mathbb{R}^d$. We write $I \in \mathbb{R}^{d \times d}$ for the identity matrix on $\mathbb{R}^d$ and $\|u\|^2 = \sum_{i=1}^d u_i^2$ for the squared Euclidean norm of a vector $u \in \mathbb{R}^d$.

Probability. $M(1, p)$ denotes the multinomial distribution with one trial ($n = 1$) and probability vector $p = (p_1, \dots, p_d)^\top$, that is, $\xi \sim M(1, p)$ if and only if $P(\xi = i) = p_i$ for any $i \in [d]$. We denote by
$$\mathcal{P} := \Big\{ p \in \mathbb{R}^d : p_i \ge 0 \ \forall i \in [d], \ \sum_{i=1}^d p_i = 1 \Big\}$$
the probability simplex in $\mathbb{R}^d$. We denote by $\mathbb{1}_A$ the indicator function on a set $A$.

1.2 Hebbian inspired learning rule

Inspired by Hebbian learning, we consider an unsupervised learning dynamic with $d$ input (or presynaptic)
neurons and one output (or postsynaptic) neuron. The $i$th input neuron has a mean firing rate $\lambda_i > 0$ describing the expected number of spikes per time unit. The vector $\lambda = (\lambda_1, \dots, \lambda_d)$ contains the $d$ mean firing rates. The strength of the connection between the $i$th input neuron and the output neuron is modulated by the weight parameter $w_i \ge 0$, and changes to encode the information of the input firing rates.

We introduce a Hebbian STDP rule in Subsection 2.3 and show that, under some assumptions on the spike-trains, it is equivalent to the following dynamics. If $w(0) = (w_1(0), \dots, w_d(0))^\top$ are the $d$ weights at initialisation, the updating rule from $w(k)$ to $w(k+1)$ is given by
$$w(k+1) = w(k) \odot \big( \mathbf{1} + \alpha (B(k) + Z(k)) \big), \quad k = 0, 1, \dots \quad (1)$$
where $\alpha > 0$ is the learning rate. The $d$-dimensional vector $B(k)$ is the standard basis vector pointing to the presynaptic neuron which triggered the $(k+1)$st postsynaptic spike. It is given as $B(k) = \sum_{i=1}^d \mathbb{1}_{\zeta_k = i} e_i$, the one-hot encoding of independent multinomial random variables $\zeta_k \sim M(1, p(k))$, with $k$-dependent probability vector
$$p(k) = \frac{\lambda \odot w(k)}{\lambda^\top w(k)} \in \mathbb{R}^d, \quad k = 0, 1, \dots \quad (2)$$
Since the probabilities $p_i(k)$ model the probability that the $(k+1)$st postsynaptic spike is triggered by neuron $i = 1, \dots, d$, we call them (postsynaptic) spike-triggering-probabilities. The i.i.d. $d$-dimensional vectors $Z(k)$, $k = 0, 1, \dots$, model the contribution to the weight change of the presynaptic spikes which did not trigger the $(k+1)$st postsynaptic spike. They are modelled to have i.i.d. components $Z_1(k), \dots, Z_d(k)$, which are supported in $[-(Q-1), (Q-1)]$ for some $Q > 1$, and centred such that $E[Z(k)] = 0$.

In the remainder of the paper, we analyse the long-run behaviour of $p(k)$ as $k \to \infty$ under the learning rule (3). We say that the output neuron aligns with the $j$th input neuron if $p_j(k) \to 1$ as $k \to \infty$. Since the input intensities $\lambda_1, \dots, \lambda_d > 0$ are fixed throughout the dynamic, this condition is equivalent to $w_j(k) / \sum_{i=1}^d w_i(k) \to 1$ as $k \to \infty$. Figure 1 visualises the learning rule (2).
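For concreteness, the dynamics (1)–(2) can be simulated in a few lines. The following NumPy sketch uses $Z(k) \sim \mathrm{Unif}([-1,1]^d)$ (so $Q = 2$, as in the simulations reported later); the intensities, learning rate, and iteration count are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def stdp_step(w, lam, alpha, rng):
    """One update of the multiplicative rule (1):
    w(k+1) = w(k) ⊙ (1 + α(B(k) + Z(k)))."""
    p = lam * w / (lam @ w)                   # spike-triggering probabilities (2)
    B = np.zeros_like(w)
    B[rng.choice(len(w), p=p)] = 1.0          # one-hot B(k), ζ_k ~ M(1, p(k))
    Z = rng.uniform(-1.0, 1.0, size=len(w))   # centred noise, here Q = 2
    return w * (1.0 + alpha * (B + Z))

lam = np.array([10.0, 5.0, 2.5])              # illustrative mean firing rates, λ_1 largest
w = np.ones(3)                                # equal weights at initialisation
alpha = 0.01
for k in range(20000):
    w = stdp_step(w, lam, alpha, rng)
p = lam * w / (lam @ w)
print(p)   # p_1 close to 1: the output neuron aligns with the most active input
```

Note that the weights themselves grow without bound (see the discussion of weight explosion in Section 5); only the normalised probabilities $p(k)$ converge.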
2 Representation as noisy gradient descent

We continue by relating the learning rule (3) to noisy gradient descent. For notational simplicity, define $Y(k) := B(k) + Z(k)$ for $k = 0, 1, \dots$. Combining the weight updates (1) with the formula for the probabilities $p$ from (2), we find
$$p(k+1) = \frac{\lambda \odot \big( w(k) \odot (\mathbf{1} + \alpha Y(k)) \big)}{\lambda^\top \big( w(k) \odot (\mathbf{1} + \alpha Y(k)) \big)} = \frac{p(k) \odot (\mathbf{1} + \alpha Y(k))}{p(k)^\top (\mathbf{1} + \alpha Y(k))}, \quad k = 0, 1, \dots \quad (3)$$
The normalisation in the denominator and the multiplicative nature of the update ensure that the dynamic of $p(k)$ is restricted to the probability simplex. By a Taylor expansion around $\alpha = 0$, we find
$$p(k+1) = p(k) \odot \big( \mathbf{1} + \alpha \big( Y(k) - p(k)^\top Y(k)\, \mathbf{1} \big) \big) + O(\alpha^2).$$
Since $E\big[ Y(k) - p(k)^\top Y(k)\, \mathbf{1} \mid p(k) \big] = p(k) - \|p(k)\|^2 \mathbf{1}$, the random vectors
$$\xi(k) := p(k) \odot \big( Y(k) - p(k)^\top Y(k)\, \mathbf{1} - p(k) + \|p(k)\|^2 \mathbf{1} \big), \quad k = 0, 1, \dots$$
are centred. The distribution of $\xi(k)$ depends on $w(k)$ and $p(k)$. Up to $O(\alpha^2)$-terms, we can write the learning rule (3) as a noisy gradient descent scheme
$$p(k+1) = p(k) \odot \big( \mathbf{1} + \alpha ( p(k) - \|p(k)\|^2 \mathbf{1} ) \big) + \alpha \xi(k) = p(k) - \alpha \nabla L(p(k)) + \alpha \xi(k), \quad k = 0, 1, \dots \quad (4)$$
for the loss function
$$L(p) := -\frac{1}{3} \sum_{i=1}^d p_i^3 + \frac{1}{4} \Big( \sum_{i=1}^d p_i^2 \Big)^2 = -\frac{1}{3} p^\top (p \odot p) + \frac{1}{4} \|p\|^4, \quad p \in \mathbb{R}^d \quad (5)$$
with
gradient
$$\nabla L(p) = -p \odot \big( p - \|p\|^2 \mathbf{1} \big) \in \mathbb{R}^d, \quad p \in \mathbb{R}^d. \quad (6)$$
Dropping the $O(\alpha^2)$-terms is only done for illustrative purposes. Our main result (Theorem 2.2) applies to the original learning rule (3). The subsequent lemma summarises the key properties of the loss function $L$ from (5). For $d = 3$, Figure 2 visualises the loss function $L$ and the learning dynamics (3).

Lemma 2.1. All critical points of the loss function (5) can be written as $p^* = \frac{1}{|S|} \sum_{j \in S} e_j$ for some $S \subseteq [d]$. Every critical point with $|S| \ge 2$ is a saddle point. The local minima of the loss function $L$ from (5) are the standard basis vectors $\{e_1, \dots, e_d\}$. Furthermore, every local minimum of $L$ is also a global minimum.

Figure 2: Contour plot of the loss function $L$ from (5) on the probability simplex $\mathcal{P}$ for $d = 3$ with different overlays. Left: three sample trajectories of (3) with different initial configurations $p(0)$. Middle: stream plot of the gradient field given by (6). Right: 100 sample trajectories of (3) with $p(0) = (0.3, 0.3, 0.4)^\top$. All trajectories are simulated with 2000 iteration steps, learning rate $\alpha = 0.01$ and $Z(k) \sim \mathrm{Unif}([-1,1]^d)$.

2.1 Linear convergence of the learning rule

We state the convergence guarantee for the learning rule (3). Renaming the indices, we can assume that $p_1(0)$ is the largest initial probability. Provided that $p_1(0)$ is strictly larger than every other component of $p(0)$, the theorem shows linear convergence of the first component to 1 in expectation on an event $\Theta$. The probability of $\Theta$ can be chosen arbitrarily close to one by reducing the learning rate $\alpha$.

Theorem 2.2. Given $\varepsilon \in (0, 1)$, assume $\Delta := p_1(0) - \max_{i=2,\dots,d} p_i(0) > 0$ and
$$0 < \alpha \le \frac{\Delta^2}{16 Q^2} (1 - Q\alpha)^3 \wedge \frac{1}{256 (1 - p_1(0))} \Big( \frac{4\Delta}{d} + \Delta^2 \Big) \varepsilon.$$
Then there exists an event $\Theta$ with probability $\ge 1 - \varepsilon/2$ such that
$$E\big[ \|p(k) - e_1\|_1 \mathbb{1}_\Theta \big] \le 2 (1 - p_1(0)) \exp\Big( -\frac{\alpha}{16} \Big( \frac{4\Delta}{d} + \Delta^2 \Big) k \Big), \quad \text{for all } k = 0, 1, \dots$$
Consequently, given $\delta > 0$,
$$P\big( \|p(k) - e_1\|_1 \ge \delta \big) \le \varepsilon \quad \text{for all } k \ge \frac{16 d}{\alpha \Delta (4 + d\Delta)} \log \frac{4 (1 - p_1(0))}{\varepsilon \delta}.$$
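The loss (5), its gradient (6), and the location of the minima described in Lemma 2.1 are easy to sanity-check numerically. A minimal sketch (the dimension, test point, and finite-difference step are arbitrary choices):

```python
import numpy as np

def loss(p):
    # L(p) = -(1/3) Σ p_i^3 + (1/4) (Σ p_i^2)^2, cf. (5)
    return -np.sum(p**3) / 3 + np.sum(p**2)**2 / 4

def grad(p):
    # ∇L(p) = -p ⊙ (p - ||p||^2 · 1), cf. (6)
    return -p * (p - np.sum(p**2))

# check (6) against central finite differences of (5) at a random simplex point
rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(5))
h = 1e-6
num = np.array([(loss(p + h*e) - loss(p - h*e)) / (2*h) for e in np.eye(5)])
print(np.max(np.abs(num - grad(p))))   # agreement up to O(h^2)

# the value at a standard basis vector, a global minimum by Lemma 2.1
print(loss(np.eye(5)[0]))              # -1/3 + 1/4 = -1/12
```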
Remark 2.3. If all weights are equal at the starting point of the learning algorithm, the assumption $p_1(0) - \max_{i=2,\dots,d} p_i(0) > 0$ is equivalent to requiring $\lambda_1 - \max_{i=2,\dots,d} \lambda_i > 0$. In this case, the convergence of $p(k)$ to $e_1$ corresponds to the network performing a winner-take-all mechanism [9, 38, 26, 33, 21]. The competitive selection of input neurons in Hebbian STDP has been observed by [32, 31, 12], among others. Our results extend existing findings by going beyond ensemble averages and also provide a rate of convergence for the random dynamics.

The convergence on an event of high probability is in line with other recent results on noisy/stochastic gradient descent for non-convex loss functions, see e.g. [23, 35] or [8, Theorem 2.5]. Contrary to these results, we can choose a constant learning rate and obtain linear convergence. To illustrate the reason for this, we give a brief overview of the proof of Theorem 2.2. The full proof can be found in Section A.1.

1. We restrict the analysis to the event $\Theta$ on which $p_1(k) - \max_{i=2,\dots,d} p_i(k) \ge c > 0$ holds for all iterates $k$. On this event, the derivative of the first component can be bounded from below by
$$p_1(k) \big( p_1(k) - \|p(k)\|^2 \big) \gtrsim 1 - p_1(k). \quad (7)$$

2. As described in (3), we apply a Taylor approximation to the original dynamics. We bound the error term for the $i$th component $p_i(k+1)$ by the order $\alpha^2 p_i(k) (1 - p_i(k))$. The approximation error is
dominated by the gradient update on $\Theta$ if the learning rate is small enough (see (7)).

3. Similarly as in (4), we restate the learning increments of the dynamics as the sum of the true gradient and a centred noise vector $\xi(k)$. By (7), this decomposition yields linear convergence of $p_1(k) \to 1$ on $\Theta$.

4. To find a lower bound for the probability of the chosen event $\Theta$, we employ a similar strategy as Mertikopoulos et al. [23]. Through the representation (4), we can show that $\Theta$ occurs as soon as $M(k) := \alpha \sum_{i=1}^k \xi(i)$ is uniformly bounded by some sufficiently small constant. As $(M(k))_{k \in \mathbb{N}}$ is a martingale, the probability of the latter event can be controlled through Doob's submartingale inequality (see (22)).

5. To apply Doob's submartingale inequality, we bound the second moment of $M(k)$. Since the variance of the components of $\xi(k)$ is also dominated by $1 - p_1(k)$, we achieve a bound of the order $\alpha^2 \sum_{i=1}^\infty (1 - p_1(i)) \mathbb{1}_\Theta$. This series is summable as we have linear convergence to 0, which allows us to choose a constant learning rate $\alpha$.

2.2 Associated gradient flow

In this subsection, we consider the associated gradient flow of probabilities $p(t)$ as a vector-valued function, which solves the ODE
$$\frac{d}{dt} p(t) = p(t) \odot \big( p(t) - \|p(t)\|^2 \mathbf{1} \big) = -\nabla L\big( p(t) \big), \quad t \ge 0 \quad (8)$$
and is initialised by the probability vector $p(0)$. By definition, $\frac{d}{dt} \sum_{i=1}^d p_i(t) = 0$, such that $\sum_{i=1}^d p_i(t) = \sum_{i=1}^d p_i(0) = 1$. Since the updating rule is multiplicative, the gradient flow produces a probability vector for all $t \ge 0$. The gradient flow (8) also occurs as a specific replicator equation in evolutionary game theory, see Hofbauer and Sigmund [14, Chapter 7].

Example 2.4. For $d = 2$, the gradient flow admits an explicit solution. In this case, $p(t) = (p_1(t), p_2(t))^\top$. If $p(0) = (1/2, 1/2)^\top$, then this is a stationary solution and $p(t) = p(0) = (1/2, 1/2)^\top$ for all $t \ge 0$. If $p_1(0) > 1/2$, then
$$p_1(t) = \frac{1}{2} + \frac{1}{2 \sqrt{C e^{-t} + 1}}, \quad \text{with } C := \frac{1}{(2 p_1(0) - 1)^2} - 1. \quad (9)$$
If $p_1(0) < 1/2$, then $p_2(t) = 1 - p_1(t) > 1/2$ follows the dynamic in (9). A proof of (9) is given in the supplementary material.
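The explicit solution (9) can be checked against a direct numerical integration of the gradient flow (8). A small sketch with an arbitrary initial value $p_1(0) = 0.6$ and a classical Runge–Kutta integrator:

```python
import numpy as np

def p1_exact(t, p10):
    # explicit solution (9) for d = 2, valid for p1(0) > 1/2
    C = 1.0 / (2*p10 - 1)**2 - 1.0
    return 0.5 + 0.5 / np.sqrt(C * np.exp(-t) + 1.0)

def flow_rhs(p):
    # gradient flow (8): dp/dt = p ⊙ (p - ||p||^2 · 1)
    return p * (p - np.sum(p**2))

# integrate (8) with classical RK4 and compare with (9) at time T
p, dt, T = np.array([0.6, 0.4]), 1e-3, 5.0
for _ in range(int(T / dt)):
    k1 = flow_rhs(p)
    k2 = flow_rhs(p + dt/2 * k1)
    k3 = flow_rhs(p + dt/2 * k2)
    k4 = flow_rhs(p + dt * k3)
    p = p + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
print(p[0], p1_exact(T, 0.6))   # the two values agree to high accuracy
```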
This formula implies that $p_1(t)$ converges exponentially fast to 1. The following lemma summarises different properties of the gradient flow (8). In its statement, differentiable on $[0, 1]$ means differentiable on $(0, 1)$ and continuous on $[0, 1]$.

Lemma 2.5. The gradient flow (8) exhibits the following properties.
(a) If $\phi: [0, 1] \to \mathbb{R}$ is a convex and differentiable function, then $t \mapsto \sum_{i=1}^d \phi(p_i(t))$ is monotonically increasing.
(b) Let $i, j \in [d]$ with $i \ne j$. If $p_i(0) > p_j(0)$, respectively $p_i(0) = p_j(0)$, then $p_i(t) > p_j(t)$, respectively $p_i(t) = p_j(t)$, for all $t \ge 0$. Moreover, if $\Delta := p_1(0) - \max_{i=2,\dots,d} p_i(0) > 0$, then $p_1(t) \ge \max_{i=2,\dots,d} p_i(t) + \Delta$ for all $t \ge 0$.

Lemma 2.5 implies that the $q$-norm $t \mapsto \|p(t)\|_q$ is monotonically increasing whenever $1 \le q < \infty$. The result also implies that if instead $\phi$ is concave and differentiable, then $t \mapsto \sum_{i=1}^d \phi(p_i(t))$ is monotonically decreasing. Although the loss function $L(p)$ in (5) does not satisfy a global Polyak-Łojasiewicz condition, in particular it is not globally convex, we can deduce the following convergence for the ODE (8).

Theorem 2.6. Assume $p_1(0) \ge \max_{i=2,\dots,d} p_i(0) + \Delta$ for some $\Delta > 0$. Then
$$\|e_1 - p(t)\|_1 \le 2 \big( 1 - p_1(0) \big) \exp\Big( -\frac{\Delta}{d} \big( 1 + (d-1)\Delta \big) t \Big),$$
that is, linear convergence of $p(t) \to e_1$ as $t \to \infty$.

2.3 Biological plausibility of the proposed learning rule

[Figure 3: Biological neural network with spike trains $T_1, \dots, T_d$, postsynaptic spike times $t_1, t_2, \dots$, and membrane potential $Y_t = \sum_{j=1}^d \sum_{\tau \in T_j \cap (t_k, t]} w_j(\tau) e^{-(t-\tau)}$ of the postsynaptic neuron.]
We study a biological neural network consisting of $d$ input (or presynaptic) neurons, which are connected to one output (or postsynaptic) neuron. For the subsequent argument, we assume that the spike times of the $d$ input neurons are given by the corresponding jump times of $d$ independent Poisson processes $(X^{(1)}_t)_{t \ge 0}, \dots, (X^{(d)}_t)_{t \ge 0}$ with respective intensities $\lambda_1, \dots, \lambda_d$. All neurons are excitatory, and each connection between an input neuron $j \in [d]$ and the output neuron has a time-varying and non-negative synaptic strength parameter, which we denote by $w_j(t) \ge 0$. An idealised model is that a spike of the $j$th input neuron at time $\tau$ causes an exponentially decaying contribution to the postsynaptic membrane potential of the form $t \mapsto w_j(\tau) C e^{-c(t-\tau)} \mathbb{1}_{t \ge \tau}$. We set the parameters $c, C > 0$ to one, as this can always be achieved by a time change $t \mapsto tc$ and a change of units of the voltage in the membrane potential. If $T_j$ denotes the spike times of neuron $j \in [d]$, the postsynaptic membrane potential $(Y_t)_{t \ge 0}$ is given by $Y_0 = 0$ and $Y_t = \sum_{j=1}^d \sum_{\tau \in T_j, \tau \le t} w_j(\tau) e^{-(t-\tau)}$ for all $t \ge 0$ until $Y_t \ge S$, where $S > 0$ is a given threshold value. Once the threshold $S$ is surpassed, the postsynaptic neuron emits a spike and its membrane potential is reset to its rest value, which we assume to be 0. Afterwards, the incoming spikes contribute to rebuilding the postsynaptic membrane potential. If $t_0 := 0 < t_1 < t_2 < \dots$ denote the postsynaptic spike times, the membrane potential at an arbitrary time is therefore given by
$$Y_t = \sum_{j=1}^d \sum_{\tau \in T_j \cap (t_k, t]} w_j(\tau) e^{-(t-\tau)} \quad \text{for all } t_k < t \le t_{k+1}. \quad (10)$$
We consider the following pair-based spike-timing-dependent plasticity (STDP) rule ([11, Section 19.2.2]): a spike of the $j$th presynaptic neuron at time $\tau$ causes the weight parameter function $t \mapsto w_j(t)$ to decrease at $\tau$ by $\alpha e^{-(\tau - t_-)}$, where $t_-$ is the last postsynaptic spike time before $\tau$, and to increase at the next postsynaptic spike time $t_{k+1}$ by $\alpha \sum_{\tau \in T_j \cap (t_k, t_{k+1}]} e^{-(t_{k+1} - \tau)}$, with $\alpha > 0$ the learning rate. As is common in the literature, spike times that occurred before $t_k$ have only a minor influence and are neglected in the updating of the weights after $t_k$. The term $\sum_{\tau \in T_j \cap (t_k, t_{k+1}]} e^{-(t_{k+1} - \tau)}$ is then the trace ([11, Equation (19.12)]) of the $j$th presynaptic neuron at time $t_{k+1}$. For mathematical convenience, we will assume that all weight updates in $(t_k, t_{k+1}]$ are delayed to the postsynaptic spike times, in the sense that the learning rule becomes
$$w_j(t_{k+1}) = w_j(t_k) \Big( 1 + \alpha \sum_{\tau \in T_j \cap (t_k, t_{k+1}]} \big( e^{-(t_{k+1} - \tau)} - e^{-(\tau - t_k)} \big) \Big), \quad k = 0, 1, \dots \quad (11)$$
The postsynaptic spike times $t_k$ are the moments at which the postsynaptic membrane potential $Y_t$ reaches the threshold $S$. They depend on the presynaptic spike times; however, the exact dependence is hard to characterise in the assumed model. For mathematical tractability, we will instead work with an adjusted rule to select the postsynaptic spike times $t_1, t_2, \dots$. A key observation is that $Y_t$ only increases at the presynaptic spike times, so $t_{k+1}$ has to happen at a presynaptic spike time. Assuming $t_k$ has been selected as a reference point, we propose a rule that selects a presynaptic spike time $\tau^* > t_k$ and then sets $t_{k+1} = \tau^*$. This is done in two stages, by first choosing a presynaptic
neuron $j^*(k+1) \in [d]$ and, in a second step, picking a spike time $\tau^*$ among the spike times of the $j^*(k+1)$th presynaptic neuron. As the spike times of the $j$th presynaptic neuron are generated from a Poisson process with intensity $\lambda_j$ and result, by (10), in a jump of height $w_j(t) = w_j(t_k)$ for $t_k < t \le t_{k+1}$, the probability that the $j$th presynaptic neuron caused the spike can be modelled as
$$\frac{\lambda_j w_j(t_k)}{\sum_{\ell=1}^d \lambda_\ell w_\ell(t_k)}. \quad (12)$$
Drawing the index $j^*(k+1)$ from this distribution, the postsynaptic firing time $t_{k+1}$ is drawn from a distribution on $T_{j^*(k+1)} \cap (t_k, \infty)$, which can depend on the parameters $\{\lambda_\ell, w_\ell(t_k) : \ell \in [d]\}$. Here, a distribution should be selected such that the sampled postsynaptic spike times closely resemble the real postsynaptic spike times. For the derivation of the biologically inspired learning rule considered in this work, the specific form of this distribution is irrelevant.

We proceed by arguing that the proposed selection rule for the next postsynaptic spike time $t_{k+1}$ captures the key features of the original dynamic. The selection probabilities (12) for the presynaptic neuron provide a realistic model if the weights are small compared to the threshold. The formula also holds if all weights are equal, which is rigorously proved in Lemma A.5 of the supplementary material. If, however, all weights are much larger than the threshold $S$, every presynaptic spike causes a postsynaptic spike. The proof of Lemma A.5 can be adapted to this case to show that the probability that the $j$th neuron emits the first spike is $\lambda_j / \sum_{\ell=1}^d \lambda_\ell$. Since Hebbian learning is intrinsically unstable, we argue that the subsequently derived learning rule describes the dynamic at the beginning of the learning process. This view is corroborated by experimental results, see point (vi) of Morrison et al. [24, Section 2.1].
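The equal-weights case of (12) is easy to probe by event-driven simulation: spikes arrive as a superposition of Poisson processes, the membrane potential decays between spikes as in (10), and the neuron whose spike pushes $Y_t$ over the threshold is recorded. The intensities, the common weight, and the threshold below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = np.array([10.0, 7.5, 5.0])    # presynaptic Poisson intensities (illustrative)
Lam = lam.sum()
w, S = 0.3, 1.0                     # common synaptic weight and threshold (illustrative)

counts = np.zeros(3)                # how often each neuron triggers a postsynaptic spike
Y, n_post = 0.0, 0
while n_post < 20000:
    dt = rng.exponential(1.0 / Lam)           # next presynaptic spike (superposition)
    j = rng.choice(3, p=lam / Lam)            # which neuron spiked (Poisson thinning)
    Y = Y * np.exp(-dt) + w                   # exponential decay, then jump of height w
    if Y >= S:                                # postsynaptic spike: record trigger, reset
        counts[j] += 1
        Y = 0.0
        n_post += 1
print(counts / counts.sum())   # ≈ λ / Σλ, matching (12) for equal weights
```

With equal weights the crossing time depends on the spike times only, while the neuron labels are independent of the times, so the empirical triggering frequencies match $\lambda_j / \sum_\ell \lambda_\ell$ exactly in distribution.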
Compared to the original definition of $t_{k+1}$, the proposed sampling scheme has the advantage that the presynaptic spike times which were not selected as the postsynaptic spike time add centred noise to the updates. More precisely, one can show that, by the construction of $t_{k+1}$ and the properties of the underlying Poisson processes, the conditional distribution of $\tau$ given $\{\tau \in (t_k, t_{k+1})\}$ is uniform on $(t_k, t_{k+1})$. By the symmetry relation $e^{-(b-u)} - e^{-(u-a)} = -\big( e^{-(b-v)} - e^{-(v-a)} \big) \in [-1, 1]$, which holds for all real numbers $a \le u \le b$ with $v = b + a - u \in [a, b]$, this implies that, conditionally on $\tau \in (t_k, t_{k+1})$, the random variable $e^{-(t_{k+1} - \tau)} - e^{-(\tau - t_k)}$ is centred and supported on $[-1, 1]$. The update rule (11) then becomes
$$w_j(t_{k+1}) = w_j(t_k) \Big( 1 + \alpha \mathbb{1}_{\{j = j^*(k+1)\}} \big( 1 - e^{t_k - t_{k+1}} \big) + \alpha \sum_{\tau \in T_j \cap (t_k, t_{k+1})} Z(\tau, j) \Big), \quad (13)$$
with centred random variables $Z(\tau, j)$ satisfying $|Z(\tau, j)| \le 1$. Assuming that the postsynaptic firing rate is slow compared to the learning dynamic, we discard the term $e^{t_k - t_{k+1}} \ll 1$. Since $j^*(k+1)$ follows a multinomial distribution with parameters $\lambda_j w_j(t_k) / \big( \sum_{\ell=1}^d \lambda_\ell w_\ell(t_k) \big)$, the term $\mathbb{1}_{\{j = j^*(k+1)\}}$ corresponds to the $j$th component of $B(k)$ in (1). This motivates the learning rule (1). Additional details on the derivation are given in Subsection A.3 of the supplementary material.

3 A mirror descent perspective

In this section, we rewrite the gradient flow (8) as natural gradient descent on the probability simplex and relate the discrete-time learning rule (3) for the probabilities $p$ to noisy mirror descent. Recall from (4) that the learning rule (3) can be interpreted as noisy gradient descent with respect to the loss function $L$ from (5) in the Euclidean geometry. As we
consider a flow on probability vectors, a different perspective is to use the natural geometry of the probability simplex. To this end, we consider the interior of the probability simplex $\mathcal{M} := \mathrm{int}(\mathcal{P})$ as a Riemannian manifold with tangent space $T_p\mathcal{M} = \{x \in \mathbb{R}^d : \mathbf{1}^\top x = 0\}$ for every $p \in \mathcal{M}$. A natural metric on $\mathcal{M}$ is given by the Fisher information metric / Shahshahani metric [4, 14], which is induced by the metric tensor $d_p: T_p\mathcal{M} \times T_p\mathcal{M} \to \mathbb{R}$, $(u, v) \mapsto u^\top \mathrm{diag}(p)^{-1} v$ at $p \in \mathcal{M}$. Here, $\mathrm{diag}(p) \in \mathbb{R}^{d \times d}$ is the diagonal matrix with diagonal entries given by $p$. We refer to Figure 1 of Mertikopoulos and Sandholm [22] for an illustration of unit balls in this metric. The (Riemannian) gradient of the loss function $\tilde{L}(p) = -\|p\|^2 / 2$ with respect to $d$ is given by $\nabla_d \tilde{L}(p) = \mathrm{diag}(p) \nabla \tilde{L}(p) \in T_p\mathcal{M}$, where we denote by $\nabla \tilde{L}$ the Euclidean gradient of $\tilde{L}$. The Riemannian gradient flow is called the natural gradient flow in information geometry [3] and the Shahshahani gradient flow in evolutionary game theory [15]. When transforming the Euclidean gradient flow for $L$ into a Riemannian gradient flow on the probability simplex, the part $+\|p\|^2 \mathbf{1}$ is orthogonal to $T_p\mathcal{M}$ and hence does not contribute a direction on the probability simplex. Consequently, the Riemannian gradient flow of $\tilde{L}$ and the Euclidean gradient flow of $L$ coincide.

The mirror descent algorithm [25] prescribes the discrete-time optimisation scheme
$$p(k+1) \in \operatorname*{argmin}_{p \in \mathcal{M}} \Big\{ p^\top \nabla f(p(k)) + \frac{1}{\alpha} \Phi(p, p(k)) \Big\}, \quad k = 0, 1, \dots, \quad (14)$$
where $f: \mathcal{M} \to \mathbb{R}$ is the function to be minimised and $\Phi: \mathcal{M} \times \mathcal{M} \to \mathbb{R}_+$ is a suitable proximity function. Euclidean gradient descent is recovered by the choice $\Phi(p, p(k)) = \|p - p(k)\|^2$. It is well known that the natural gradient flow is the continuous-time analogue of exponentiated gradient descent or entropic mirror descent, where $\Phi(p, p(k)) = \mathrm{KL}(p \,\|\, p(k))$ is chosen as the Kullback–Leibler divergence between $p$ and $p(k)$ [2, 36, 27]. Consequently, the gradient flow (8) can also be viewed as a continuous-time version of entropic mirror descent with respect to $f = \tilde{L}$.
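The correspondence can also be illustrated in discrete time: an exponentiated-gradient (entropic mirror descent) step on $\tilde{L}(p) = -\|p\|^2/2$ matches the Euclidean gradient step on $L$ from (4)–(6) up to $O(\alpha^2)$. A small sketch (the test point and step sizes are arbitrary):

```python
import numpy as np

def grad_L(p):
    # Euclidean gradient (6) of the loss L from (5)
    return -p * (p - np.sum(p**2))

def entropic_step(p, alpha):
    # exponentiated-gradient / entropic mirror descent step on ˜L(p) = -||p||^2/2:
    # p_i ← p_i exp(-α ∂˜L/∂p_i) = p_i exp(α p_i), renormalised onto the simplex
    q = p * np.exp(alpha * p)
    return q / q.sum()

p = np.array([0.5, 0.3, 0.2])
for alpha in [1e-2, 1e-3, 1e-4]:
    euclid = p - alpha * grad_L(p)            # Euclidean gradient step on L, cf. (4)
    mirror = entropic_step(p, alpha)          # entropic mirror step on ˜L
    print(alpha, np.max(np.abs(mirror - euclid)))   # gap shrinks like O(α^2)
```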
This connection transfers to the discrete-time and noisy updating rule (3). An alternative approach for connecting our proposed discrete-time learning rule (3) to entropic mirror descent is included in Subsection A.4 of the supplementary material.

4 Multiple weight vectors

[Figure 4: Neural network with $d$ input neurons $x_1, \dots, x_d$ and $d$ output neurons $y_1, \dots, y_d$.]

The learning rule (1) aligns the output neuron with the input neuron of the highest intensity, but no information about the remaining input neurons is unveiled. As a proof of concept, we generalise the learning algorithm (1) to estimate the order of the intensities $\lambda_1, \dots, \lambda_d$. To this end, we consider $d$ different output neurons, which are connected to the $d$ input neurons via the weight vectors $w_1, \dots, w_d \in \mathbb{R}^d$. The weights at time $k$ are combined into the matrix
$$W(k) = [w_1(k) \cdots w_d(k)] \in \mathbb{R}^{d \times d}, \quad k = 0, 1, \dots$$
and the corresponding probabilities $p_1, \dots, p_d$ are combined into the matrix $P(k) = [p_1(k) \cdots p_d(k)] \in \mathbb{R}^{d \times d}$. By reordering the neurons, we can achieve $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_d$. If the intensities are strictly ordered, our goal is the alignment of the $j$th output neuron with the $j$th input neuron, which amounts to the convergence of $P(k)$ to the identity matrix $I \in \mathbb{R}^{d \times d}$ as
time increases. If multiple intensities are equal, convergence holds up to permutations within the group of equal intensities. We propose Algorithm 1, which constitutes an STDP rule, as lines 3–4 can be implemented using the spike-trains and the learning rule (11).

Algorithm 1: Aligning multiple output neurons
Input: $K \in \mathbb{N}$: number of iterations, $W(0) \in \mathbb{R}^{d \times d}$: weight initialisation, $\alpha_1, \dots, \alpha_d$: learning rates of the output neurons.
1 for $k = 0, 1, \dots, K-1$ do
2   for $j = 1, \dots, d$ do
3     Receive $B_j(k) \sim M(1, p_j(k))$ with $p_j(k) \leftarrow \lambda \odot w_j(k) / \lambda^\top w_j(k)$ and $Z_j(k) \sim \mathrm{Unif}([-1, 1]^d)$ from spike trains;
4     Compute the base change $\Delta w_j(k) \leftarrow \alpha_j\, w_j(k) \odot (B_j(k) + Z_j(k))$;
5     Update $w_j(k+1) \leftarrow w_j(k) + \Delta w_j(k) - \sum_{i=1}^{j-1} \frac{(\Delta w_j(k))^\top w_i(k)}{\|w_i(k)\|^2}\, w_i(k)$;
6   end
7 end
Output: the weight evolution $W(k) = [w_1(k) \cdots w_d(k)]$, $k = 0, \dots, K$, and the probability evolution $P(k) = [p_1(k) \cdots p_d(k)]$, $k = 1, \dots, K$.

Figure 5: Probability matrix $P(k)$ arising from the weight dynamic $W(k)$ of Algorithm 1 for dimension $d = 3$. The weights are initialised equally, and the intensities are given by $\lambda = (10, 7.5, 5)^\top$. The resulting initial probabilities are $p_1(0) = p_2(0) = p_3(0) = (4/9, 1/3, 2/9)^\top$. Left: a single trajectory with learning rates $10^{-3} (1, 0.75, 0.5)^\top$ and $4 \times 10^4$ iterations. The markers × and • correspond to the probabilities at $k = 4 \times 10^3$ and $k = 10^4$. Middle & right: the Frobenius error $\|P(k) - I\|^2 / 2$ of 100 trajectories with learning rates $10^{-3} (1, 0.75, 0.5)^\top$ and $10^{-4} (1, 0.75, 0.5)^\top$, respectively.

Algorithm 1 is inspired by Sanger's rule [29] for learning $d$ principal components in streaming PCA. The first weight vector $w_1(k)$ aligns with $e_1$ by Theorem 2.2, since its dynamic equals the learning rule (1). By removing the components of the change $\Delta w_j(k)$ in the directions of $w_1(k), \dots, w_{j-1}(k)$ in line 5 of Algorithm 1, the weight vector $w_j(k)$ is forced to converge to $e_j$, similarly to the Gram–Schmidt algorithm. Simulations of the corresponding probability matrix $P(k)$ with varying learning rates and $Z$ drawn i.i.d. from $\mathrm{Unif}([-1, 1]^d)$ are included in Figure 5.
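A NumPy sketch of Algorithm 1, under one reading of lines 4–5: the change $\Delta w_j(k)$ is deflated against the earlier weight vectors and added to $w_j(k)$, so that the $j = 1$ column follows rule (1). The clipping inside the probability computation is a numerical safeguard not present in the pseudocode (deflation can push weight entries slightly negative), and the intensities, learning rates, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3
lam = np.array([10.0, 5.0, 2.5])                 # illustrative intensities, λ_1 largest
alphas = 1e-2 * np.array([1.0, 0.75, 0.5])       # ordered learning rates α_1 > α_2 > α_3
W = np.ones((d, d))                              # columns w_1, ..., w_d, equal initialisation

def probs(w):
    q = np.maximum(lam * w, 0.0)                 # clipping: safeguard, not part of Alg. 1
    return q / q.sum()

for k in range(20000):
    W_new = W.copy()
    for j in range(d):
        p = probs(W[:, j])
        B = np.zeros(d)
        B[rng.choice(d, p=p)] = 1.0              # B_j(k) ~ M(1, p_j(k)), line 3
        Z = rng.uniform(-1.0, 1.0, d)            # Z_j(k) ~ Unif([-1,1]^d), line 3
        dw = alphas[j] * W[:, j] * (B + Z)       # base change Δw_j(k), line 4
        for i in range(j):                       # line 5: deflate the change against w_i(k)
            dw -= (dw @ W[:, i]) / (W[:, i] @ W[:, i]) * W[:, i]
        W_new[:, j] = W[:, j] + dw
    W = W_new

P = np.column_stack([probs(W[:, j]) for j in range(d)])
print(P)   # ideally close to the identity: output neuron j aligns with intensity rank j
```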
We choose different learning rates for the $d$ weight vectors, satisfying $\alpha_1 > \dots > \alpha_d > 0$. This ordering ensures fast convergence of the lower-order weight vectors to the correct standard basis vector and counteracts the impact of initial misalignments of the higher-order weight vectors. The simulation of Algorithm 1 shown in Figure 5 displays a decrease of the Frobenius error $\|P(k) - I\|^2 / 2$ over the iteration index $k$ when averaged (blue line). Nevertheless, we observe that for a single trajectory the error can plateau around 1 and 2. Given that the probability vectors tend to converge to standard basis vectors $\{e_1, \dots, e_d\}$, a non-vanishing error is due to an incorrect ordering or duplicates. Consequently, the error $\|P(k) - I\|^2 / 2$ corresponds to the number of output neurons aligning with the incorrect input neuron, and plateaus at 1, 2 and 3 can arise. Theorem 2.2 shows that this phenomenon can be mitigated by slower learning rates, which is corroborated by decreasing the base learning rate from $10^{-3}$ to $10^{-4}$ in the simulation.

5 Discussion and limitations

Small biological neural network. In this paper, we mathematically analyse the convergence behaviour of a small biological neural network with $d$ presynaptic/input neurons and one postsynaptic/output neuron. It is natural to generalise the setting to larger networks. As outlined in Section 4, the most natural extension is to
allow for several output neurons. Moreover, we limited the analysis to static mean firing rates of the presynaptic neurons. A more realistic scenario amounts to considering time-dependent mean firing rates that are piecewise constant, corresponding to different input patterns [6]. When extending the analysis to time-varying input patterns and multiple carefully coupled postsynaptic neurons, we conjecture that STDP rules perform a version of online principal component analysis (online PCA), similarly to rate-based models [11, Section 19.3.2], [19, 16].

Weight explosion. The learning rule (1) for the weights $w(k)$ causes them to increase without bound as the iteration index $k$ increases. When the weights exceed the spike threshold, the model becomes biologically implausible and the derivation of the probabilities (12) is no longer valid. This unstable nature is well known to be intrinsic to Hebbian learning algorithms and is commonly countered by soft or hard bounds, or by including mean-reverting terms in the dynamic, see Gerstner et al. [11, pages 497-498]. We follow a different route, namely viewing Hebbian learning as a temporal phase of limited length, which is followed by a stabilising homeostatic learning phase. This view is corroborated by experimental results, compare point (vi) in Morrison et al. [24, Section 2.1].

Acknowledgments and Disclosure of Funding

All authors acknowledge support from ERC grant A2B (grant agreement no. 101124751). S. G. has been partially funded by DFG CRC/TRR 388 "Rough Analysis, Stochastic Dynamics and Related Fields", Project B07. Parts of the research were carried out while the authors visited the Simons Institute in Berkeley.

References

[1] L. Abbott and S. Song. Temporally asymmetric Hebbian learning, spike timing and neural response variability. In M. Kearns, S. Solla, and D. Cohn, editors, Advances in Neural Information Processing Systems, volume 11. MIT Press, 1998.

[2] F. Alvarez, J. Bolte, and O. Brahic.
Hessian Riemannian gradient flows in convex programming. SIAM Journal on Control and Optimization, 43(2):477–501, 2004.
[3] S.-i. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[4] S.-i. Amari and H. Nagaoka. Methods of Information Geometry. Translations of Mathematical Monographs. American Mathematical Society, 2007.
[5] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
[6] C. Clopath, L. Büsing, E. Vasilaki, and W. Gerstner. Connectivity reflects coding: A model of voltage-based STDP with homeostasis. Nature Neuroscience, 13(3), 2010.
[7] J. Cornford, R. Pogodin, A. Ghosh, K. Sheng, B. A. Bicknell, O. Codol, B. A. Clark, G. Lajoie, and B. A. Richards. Brain-like learning with exponentiated gradients, 2024. URL http://biorxiv.org/lookup/doi/10.1101/2024.10.25.620272.
[8] S. Dereich and A. Jentzen. Convergence rates for the Adam optimizer, 2024. URL https://arxiv.org/abs/2407.21078.
[9] J. Feldman and D. Ballard. Connectionist models and their properties. Cognitive Science, 6(3):205–254, 1982.
[10] W. Gerstner and W. M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[11] W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press, 2014.
[12] M. Gilson, A. N. Burkitt, D. B. Grayden, D. A. Thomas, and
J. L. van Hemmen. Representation of input structure in synaptic weights by spike-timing-dependent plasticity. Physical Review E, 82(2):021912, 2010.
[13] D. O. Hebb. The Organization of Behavior. Wiley, 1949.
[14] J. Hofbauer and K. Sigmund. Evolutionary Games and Population Dynamics. Cambridge University Press, 1998.
[15] J. Hofbauer and K. Sigmund. Evolutionary game dynamics. Bulletin of the American Mathematical Society, 40(4):479–519, 2003.
[16] D. Huang, J. Niles-Weed, and R. Ward. Streaming k-PCA: Efficient guarantees for Oja's algorithm, beyond rank-one updates. In M. Belkin and S. Kpotufe, editors, Proceedings of Thirty Fourth Conference on Learning Theory, volume 134 of Proceedings of Machine Learning Research, pages 2463–2498. PMLR, 2021.
[17] R. Kempter, W. Gerstner, and J. L. van Hemmen. Hebbian learning and spiking neurons. Physical Review E, 59(4):4498–4514, 1999.
[18] R. Kempter, W. Gerstner, and J. L. van Hemmen. Intrinsic stabilization of output rates by spike-based Hebbian learning. Neural Computation, 13(12):2709–2741, 2001.
[19] S. Kumar and P. Sarkar. Oja's algorithm for streaming sparse PCA. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 74528–74578. Curran Associates, Inc., 2024.
[20] T. L. Lai. Stochastic approximation: invited paper. The Annals of Statistics, 31(2):391–406, 2003.
[21] N. Lynch, C. Musco, and M. Parter. Winner-Take-All computation in spiking neural networks, 2019.
[22] P. Mertikopoulos and W. H. Sandholm. Riemannian game dynamics. Journal of Economic Theory, 177:315–364, 2018.
[23] P. Mertikopoulos, N. Hallak, A. Kavis, and V. Cevher. On the almost sure convergence of stochastic gradient descent in non-convex problems. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1117–1128.
Curran Associates, Inc., 2020.
[24] A. Morrison, M. Diesmann, and W. Gerstner. Phenomenological models of synaptic plasticity based on spike timing. Biological Cybernetics, 98(6):459–478, 2008.
[25] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience Series in Discrete Mathematics. Wiley, 1983.
[26] M. Oster and S.-C. Liu. Spiking inputs to a winner-take-all network. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems, volume 18. MIT Press, 2005.
[27] G. Raskutti and S. Mukherjee. The information geometry of mirror descent. IEEE Transactions on Information Theory, 61(3):1451–1457, 2015.
[28] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
[29] T. D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 2(6):459–473, 1989.
[30] J. Sjöström and W. Gerstner. Spike-timing dependent plasticity. Scholarpedia, 5(2):1362, 2010.
[31] S. Song and L. Abbott. Cortical development and remapping through spike timing-dependent plasticity. Neuron, 32(2):339–350, 2001.
[32] S. Song, K. D. Miller, and L. F. Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3(9):919–926, 2000.
[33] L. Su, C.-J. Chang, and N. Lynch. Spike-based winner-take-all computation: Fundamental limits and order-optimal circuits. Neural
Computation, 31(12):2523–2561, 2019.
[34] A. Vigneron and J. Martinet. A critical survey of STDP in spiking neural networks for pattern recognition. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–9, 2020.
[35] S. Weissmann, S. Klein, W. Azizian, and L. Döring. Almost sure convergence of stochastic gradient methods under gradient domination. Transactions on Machine Learning Research, 2025.
[36] A. Wibisono, A. C. Wilson, and M. I. Jordan. A variational perspective on accelerated methods in optimization. Proceedings of the National Academy of Sciences, 113(47), 2016.
[37] Y. Dan and M.-m. Poo. Spike timing-dependent plasticity of neural circuits. Neuron, 44(1):23–30, 2004.
[38] A. L. Yuille and N. M. Grzywacz. A Winner-Take-All mechanism based on presynaptic inhibition feedback. Neural Computation, 1(3):334–347, 1989.

A Technical Appendix

We first study the loss landscape
$$p \mapsto L(p) = -\frac{1}{3}\sum_{i=1}^d p_i^3 + \frac{1}{4}\Big(\sum_{i=1}^d p_i^2\Big)^2.$$
Lemma 2.1 identifies the stationary points if we view the landscape as a function on $\mathbb{R}^d$.

Proof of Lemma 2.1. The formula (6) for the gradient $\nabla L(p)$ shows that the set of critical points is given by
$$\mathrm{Crit} := \{p\in P : \nabla L(p)=0\} = \{p\in P : p_i\in\{0,\|p\|^2\}\ \forall i\in[d]\} = \Big\{p\in P : \exists n\in\{1,\dots,d\},\ J\subset[d] \text{ with } \#J=n \text{ such that } p=\tfrac{1}{n}\sum_{j\in J}e_j\Big\}.$$
To identify the local extrema, we compute the Hessian matrix
$$J(p) = 2pp^\top + \|p\|^2 I - 2\,\mathrm{diag}(p) \in \mathbb{R}^{d\times d}, \qquad p\in\mathbb{R}^d,$$
where $\mathrm{diag}(p)\in\mathbb{R}^{d\times d}$ is the diagonal matrix with diagonal entries given by $p$. Substituting a critical point $p^*$ with $n\in[d]$ non-zero entries yields (up to permutations of rows and columns)
$$J(p^*) = \frac{1}{n}\begin{pmatrix} J_n & 0 \\ 0 & I_{d-n}\end{pmatrix}, \qquad J_n = -I_n + \frac{2}{n}\mathbf{1}_{n\times n}, \quad n\in[d],$$
where $\mathbf{1}_{n\times n}$ is the $n\times n$ matrix consisting of ones. The corresponding eigenvalues are
$$\frac{1}{n}>0 \text{ with multiplicity } 1 \text{ and eigenspace } E_n := \mathrm{span}\Big(\sum_{i=1}^n e_i\Big), \qquad -\frac{1}{n}<0 \text{ with multiplicity } n-1 \text{ and eigenspace } E_n^\perp, \qquad \frac{1}{n}>0 \text{ with multiplicity } d-n \text{ and eigenspace } \mathrm{span}(e_{n+1},\dots,e_d), \tag{15}$$
where $E_n^\perp$ is the orthogonal complement of $E_n$ in $\mathrm{span}(e_1,\dots,e_n)$.
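As a quick numerical sanity check (not part of the original argument), the gradient, the Hessian $J(p)$ and the eigenvalue structure (15) at a critical point with $n$ non-zero entries can be verified with NumPy:

```python
import numpy as np

# Loss L(p) = -(1/3) * sum_i p_i^3 + (1/4) * (sum_i p_i^2)^2 on R^d
def grad(p):
    # gradient: (grad L(p))_i = p_i * (||p||^2 - p_i)
    return p * (np.sum(p**2) - p)

def hessian(p):
    # J(p) = 2 p p^T + ||p||^2 I - 2 diag(p)
    d = len(p)
    return 2 * np.outer(p, p) + np.sum(p**2) * np.eye(d) - 2 * np.diag(p)

d, n = 5, 3
p_star = np.zeros(d)
p_star[:n] = 1 / n                      # critical point with n non-zero entries

assert np.allclose(grad(p_star), 0)     # p_star is indeed a critical point

eigvals = np.sort(np.linalg.eigvalsh(hessian(p_star)))
# predicted by (15): -1/n with multiplicity n-1, +1/n with multiplicity d-n+1
predicted = np.sort([-1 / n] * (n - 1) + [1 / n] * (d - n + 1))
assert np.allclose(eigvals, predicted)
print(eigvals)
```

For $n\ge 2$ the mix of positive and negative eigenvalues confirms the saddle-point structure discussed below; for $n=1$ all eigenvalues are positive.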
Consequently, only those critical points $p^*\in\mathrm{Crit}$ with $n=1$, i.e. $p^*\in\{e_1,\dots,e_d\}$, are local minima. Since all local minima attain the same loss $-1/12$ and $L(p)\to\infty$ as $\|p\|\to\infty$, every local minimum is also a global minimum.

Remark A.1. The eigenvalues of the Hessian of the loss function computed in (15) also imply that if $n\ge 2$, then $p^*\in\mathrm{Crit}$ is a saddle point in $\mathbb{R}^d$. Interestingly, when restricting to directions within the probability simplex, the case $n=d$ is not a saddle point but a maximum, since the direction $\sum_{i=1}^d e_i$ is orthogonal to $P$.

A.1 Proofs for Subsection 2.1

In the following we will always assume that
$$p_1(0) \ge \max_{i=2,\dots,d} p_i(0) + \Delta \tag{16}$$
for some $\Delta\in(0,1)$. This is a deterministic constraint; the randomness enters through the noise in the updates. We assume that all random variables are defined on a filtered probability space $(\Omega,\mathcal{F},\mathbb{P})$, and denote by $\mathcal{F}_n$, $n=0,1,\dots$, the natural filtration of $(B_n,Z_n)_{n\in\mathbb{N}}$. By a slight abuse of notation we also introduce $\mathcal{F}_{-1}=\{\emptyset,\Omega\}$. In particular, $p_n$ is then $\mathcal{F}_{n-1}$-measurable for $n=0,1,\dots$. The starting point for the proof of the linear convergence of STDP is the following lemma, which explicitly bounds the error term in the Taylor approximation
contained in (3). Recall that $|Y_i(k)|\le Q$ for all $i\in[d]$, $k=0,1,\dots$

Lemma A.2. For $i\in[d]$ and $k=0,1,\dots$ define
$$\xi_i(k) := p_i(k)\bigg(\mathbb{E}\Big[Y_i(k)-\sum_{j=1}^d p_j(k)Y_j(k)\,\Big|\,\mathcal{F}_{k-1}\Big] - \Big(Y_i(k)-\sum_{j=1}^d p_j(k)Y_j(k)\Big)\bigg), \tag{17}$$
and assume $\alpha<1/Q$. Then for any $i\in[d]$ and $k=0,1,\dots$, there exists a random variable $\theta_i(k)$, satisfying
$$|\theta_i(k)| \le \alpha^2\frac{2Q^2}{(1-Q\alpha)^3}\,p_i(k)\big(1-p_i(k)\big) \quad\text{almost surely},$$
such that
$$p_i(k+1) = p_i(k) + \alpha p_i(k)\big(p_i(k)-\|p(k)\|^2\big) - \alpha\xi_i(k) - \theta_i(k).$$

Proof. By definition,
$$p_i(k+1) = p_i(k)\,\frac{1+\alpha Y_i(k)}{1+\alpha\sum_{j=1}^d p_j(k)Y_j(k)}, \qquad k=0,1,\dots,\ i\in[d].$$
Now, for $a,b\in[-Q,Q]$, the first two derivatives of the function
$$f:\Big[0,\frac{1}{Q}\Big)\to\mathbb{R}, \qquad x\mapsto\frac{1+ax}{1+bx},$$
are given by
$$f'(x) = \frac{a(1+bx)-b(1+ax)}{(1+bx)^2} = \frac{a-b}{(1+bx)^2}, \qquad f''(x) = -2b\,\frac{a-b}{(1+bx)^3}.$$
Thus, a Taylor expansion around $x=0$ gives that there exists some $\gamma\in(0,x)$ such that
$$f(x) = 1 + (a-b)x - b\,\frac{a-b}{(1+b\gamma)^3}\,x^2.$$
Hence we obtain that for some $\gamma\in(0,\alpha)$,
$$p_i(k+1) = p_i(k) + \alpha p_i(k)\Big(Y_i(k)-\sum_{j=1}^d p_j(k)Y_j(k)\Big) - \alpha^2 p_i(k)\,\frac{\sum_{j=1}^d p_j(k)Y_j(k)\big(Y_i(k)-\sum_{j=1}^d p_j(k)Y_j(k)\big)}{\big(1+\gamma\sum_{j=1}^d p_j(k)Y_j(k)\big)^3}.$$
Using that $|Y_i(k)|\le Q$ almost surely for all $i\in[d]$, $k=0,1,\dots$, the absolute value of the error term can be bounded as follows:
$$\alpha^2 p_i(k)\,\frac{\big|\sum_{j=1}^d p_j(k)Y_j(k)\big|\,\big|Y_i(k)-\sum_{j=1}^d p_j(k)Y_j(k)\big|}{\big|1+\gamma\sum_{j=1}^d p_j(k)Y_j(k)\big|^3} \le \alpha^2 p_i(k)\,\frac{Q}{(1-Q\alpha)^3}\Big|Y_i(k)-\sum_{j=1}^d p_j(k)Y_j(k)\Big| \le \alpha^2 p_i(k)\,\frac{Q}{(1-Q\alpha)^3}\Big(\big(1-p_i(k)\big)|Y_i(k)| + \sum_{j=1,\,j\neq i}^d p_j(k)|Y_j(k)|\Big) \le \alpha^2\,\frac{2Q^2}{(1-Q\alpha)^3}\,p_i(k)\big(1-p_i(k)\big).$$
Since $p_i(k)$ is $\mathcal{F}_{k-1}$-measurable for any $i\in[d]$ and $\mathbb{E}[Y_i(k)\,|\,\mathcal{F}_{k-1}]=p_i(k)$, we also obtain
$$\xi_i(k) = p_i(k)\Big(p_i(k)-\|p(k)\|^2 - Y_i(k) + \sum_{j=1}^d p_j(k)Y_j(k)\Big),$$
which concludes the proof.

For $\Delta$ given in (16), we define a sequence of benign events
$$\Omega(k) := \Big\{p_1(u) \ge \max_{i=2,\dots,d}p_i(u) + \frac{\Delta}{2}\ \ \forall u\in[k]\Big\}, \qquad k=1,2,\dots,$$
and due to the assumption (16) we set $\Omega(0)=\Omega$. On the above events, the gradient is bounded away from zero and
$$p_1(k) = \frac{1}{d}\sum_{j=1}^d\big(p_1(k)-p_j(k)\big) + \frac{1}{d} \ge \frac{(d-1)\Delta}{2d} + \frac{1}{d}. \tag{18}$$
Using these properties, we can prove a recursive upper bound for $1-p_1(k)$.

Proposition A.3. If $0<\alpha\le\frac{(1-Q\alpha)^3}{8Q^2}\Delta$, then, on the event $\Omega(k)$,
$$1-p_1(k+1) \le \Big(1-\frac{\alpha\Delta}{4d}\Big(1+\frac{\Delta}{2}(d-1)\Big)\Big)\big(1-p_1(k)\big) + \alpha\xi_1(k).$$

Proof.
By definition, we have on the event $\Omega(k)$,
$$p_1(k)-\|p(k)\|^2 = \sum_{j=1}^d p_j(k)\big(p_1(k)-p_j(k)\big) \ge \frac{\Delta}{2}\big(1-p_1(k)\big). \tag{19}$$
The constraint imposed on the learning rate implies that $\alpha<1/Q$, so Lemma A.2 becomes applicable. Now, combining the previous inequality with the assumption on $\alpha$ and applying Lemma A.2 with $i=1$, as well as (18), gives, on the event $\Omega(k)$,
$$1-p_1(k+1) \le 1-p_1(k) - \alpha\frac{\Delta}{2}\,p_1(k)\big(1-p_1(k)\big) + \alpha\xi_1(k) + \theta_1(k) \le 1-p_1(k) - \alpha\Big(\frac{\Delta}{2}-\frac{2Q^2}{(1-Q\alpha)^3}\alpha\Big)p_1(k)\big(1-p_1(k)\big) + \alpha\xi_1(k) \le 1-p_1(k) - \frac{\Delta}{4}\alpha\,p_1(k)\big(1-p_1(k)\big) + \alpha\xi_1(k) \le \Big(1-\frac{\alpha\Delta}{4d}\Big(1+\frac{\Delta}{2}(d-1)\Big)\Big)\big(1-p_1(k)\big) + \alpha\xi_1(k).$$
This concludes the proof.

Having understood the dynamics of $p$ on the favourable event $\Omega(k)$, we aim for a lower bound on its probability. A key step is the following lemma, which states that $\Omega(k)$ holds as soon as the processes
$$M_j(k) := \sum_{\ell=0}^k \alpha\,\xi_j(\ell)\,\mathbf{1}_{\Omega(\ell)}, \qquad k=0,1,\dots, \tag{20}$$
with $\xi_j(\ell)$ defined in (17), exhibit a uniform concentration behaviour.

Lemma A.4. Define the sets
$$E_j(k) := \Big\{\max_{u\in[k]}|M_j(u)| \le \frac{\Delta}{4}\Big\}, \qquad E(k) := \bigcap_{j=1}^d E_j(k), \qquad k=0,1,\dots$$
Then if $0<\alpha\le\frac{\Delta^2(1-Q\alpha)^3}{16Q^2}$, the following set inclusion holds for any $k=0,1,\dots$:
$$E(k) \subseteq \Omega(k+1).$$

Proof. Let $u\in\{2,\dots,d\}$ be
arbitrary. It follows from Lemma A.2 that on $\Omega(k)$ the bound
$$p_u(k+1) = p_u(k) + \alpha p_u(k)\big(p_u(k)-\|p(k)\|^2\big) - \alpha\xi_u(k) - \theta_u(k) \le p_u(k) + \alpha p_u(k)\big(p_1(k)-\|p(k)\|^2\big) - \alpha\xi_u(k) - \theta_u(k)$$
holds. Consequently, on $\Omega(k)$, we have
$$p_1(k+1)-p_u(k+1) \ge p_1(k)-p_u(k) + \alpha\big(p_1(k)-p_u(k)\big)\big(p_1(k)-\|p(k)\|^2\big) + \alpha\big(\xi_u(k)-\xi_1(k)\big) - \theta_1(k) + \theta_u(k).$$
We have $p_u(k)\le 1-p_1(k)$ and thus, on $\Omega(k)$,
$$-\theta_1(k)+\theta_u(k) \ge -\alpha^2\frac{2Q^2}{(1-Q\alpha)^3}\Big(p_1(k)\big(1-p_1(k)\big) + p_u(k)\big(1-p_u(k)\big)\Big) \ge -\alpha^2\frac{2Q^2}{(1-Q\alpha)^3}\big(1-p_1(k)+p_u(k)\big) \ge -\alpha^2\frac{4Q^2}{(1-Q\alpha)^3}\big(1-p_1(k)\big) \ge -\alpha\frac{\Delta^2}{4}\big(1-p_1(k)\big),$$
invoking the constraint on the learning rate in the last step. From the assumptions on $\alpha$ we deduce that on the event $\Omega(k)$,
$$p_1(k+1)-p_u(k+1) - \alpha\big(\xi_u(k)-\xi_1(k)\big) \ge p_1(k)-p_u(k) + \alpha\big(p_1(k)-p_u(k)\big)\big(p_1(k)-\|p(k)\|^2\big) - \alpha\frac{\Delta^2}{4}\big(1-p_1(k)\big) \ge p_1(k)-p_u(k) + \alpha\frac{\Delta}{2}\big(p_1(k)-p_u(k)\big)\big(1-p_1(k)\big) - \alpha\frac{\Delta^2}{4}\big(1-p_1(k)\big) \ge p_1(k)-p_u(k) + \alpha\frac{\Delta^2}{4}\big(1-p_1(k)\big) - \alpha\frac{\Delta^2}{4}\big(1-p_1(k)\big) \ge p_1(k)-p_u(k),$$
where we applied (19) in the second inequality and $p_1(k)-p_u(k)\ge\Delta/2$ on $\Omega(k)$ in the third. Because of $\Omega(k)\subseteq\Omega(k-1)$, it then follows that
$$\big(p_1(k+1)-p_u(k+1)\big)\mathbf{1}_{\Omega(k)} \ge \big(p_1(k)-p_u(k)\big)\mathbf{1}_{\Omega(k)} + \alpha\big(\xi_u(k)-\xi_1(k)\big)\mathbf{1}_{\Omega(k)} = \mathbf{1}_{\Omega(k)}\Big(\big(p_1(k)-p_u(k)\big)\mathbf{1}_{\Omega(k-1)} + \alpha\big(\xi_u(k)-\xi_1(k)\big)\mathbf{1}_{\Omega(k)}\Big).$$
This gives
$$\big(p_1(k+1)-p_u(k+1)\big)\mathbf{1}_{\Omega(k)} \ge \mathbf{1}_{\Omega(k)}\Big(p_1(0)-p_u(0) + \sum_{\ell=0}^k \alpha\big(\xi_u(\ell)-\xi_1(\ell)\big)\mathbf{1}_{\Omega(\ell)}\Big) \ge \Delta\,\mathbf{1}_{\Omega(k)} - |M_u(k)| - |M_1(k)|. \tag{21}$$
We now prove by induction that $E(k)\subseteq\Omega(k+1)$ for all $k=0,1,\dots$ For $k=0$, this directly follows from (21), since $\Omega(0)=\Omega$ due to assumption (16). Assume the assertion holds for some $k=0,1,\dots$ Then $E(k+1)\subseteq E(k)\subseteq\Omega(k+1)$, so that for any $u\in\{2,\dots,d\}$ it holds on $E(k+1)$, by (21), that
$$p_1(k+2)-p_u(k+2) \ge \Delta - |M_u(k+1)| - |M_1(k+1)| \ge \frac{\Delta}{2},$$
which proves the assertion.

Having assembled the previous results, we are able to prove the linear convergence of STDP stated in Theorem 2.2. As Proposition A.3 already suggests the desired behaviour of $p$ on $\Omega(k)$, the main part of the proof is to show that $\Omega(k)$ holds with high probability. For that we deploy Doob's submartingale inequality, which states that for a martingale $(X_n)_{n\in\mathbb{N}}$, any $p\ge 1$ and any $u>0$,
$$\mathbb{P}\Big(\max_{i\in[n]}|X_i|\ge u\Big) \le \frac{\mathbb{E}[|X_n|^p]}{u^p}. \tag{22}$$
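As an illustration (not from the paper), Doob's maximal inequality with $p=2$ can be checked by Monte Carlo simulation for a simple $\pm 1$ random walk, a martingale with $\mathbb{E}[S_n^2]=n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, u, trials = 100, 25, 20000

# symmetric +/-1 random walk: a martingale with E[S_n^2] = n
steps = rng.choice([-1, 1], size=(trials, n))
paths = np.cumsum(steps, axis=1)

empirical = np.mean(np.max(np.abs(paths), axis=1) >= u)
doob_bound = n / u**2          # E[|S_n|^2] / u^2, i.e. the case p = 2

print(empirical, doob_bound)
assert empirical <= doob_bound
```

The empirical exceedance probability sits well below the bound $n/u^2$, as expected for a crude second-moment estimate.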
This will be applied to derive lower bounds for the probability of the event $E(k)$ defined in Lemma A.4, which, by the same lemma, are also lower bounds for the probability of $\Omega(k+1)$. For the reader's convenience we restate Theorem 2.2 before giving its proof.

Theorem 2.2. Given $\varepsilon\in(0,1)$, assume $\Delta := p_1(0)-\max_{i=2,\dots,d}p_i(0) > 0$ and
$$0 < \alpha \le \frac{\Delta^2}{16Q^2}\bigg[(1-Q\alpha)^3 \wedge \frac{1}{256(1-p_1(0))}\Big(\frac{4\Delta}{d}+\Delta^2\Big)\varepsilon\bigg].$$
Then there exists an event $\Theta$ with probability at least $1-\varepsilon/2$ such that
$$\mathbb{E}\big[\|p(k)-e_1\|_1\mathbf{1}_\Theta\big] \le 2\big(1-p_1(0)\big)\exp\bigg(-\frac{\alpha}{16}\Big(\frac{4\Delta}{d}+\Delta^2\Big)k\bigg) \quad\text{for all } k=0,1,\dots$$
Consequently, given $\delta>0$, it holds that $\mathbb{P}\big(\|p(k)-e_1\|_1\ge\delta\big)\le\varepsilon$ for all
$$k \ge \frac{16d}{\alpha\Delta(4+d\Delta)}\log\frac{4(1-p_1(0))}{\varepsilon\delta}.$$

Proof of Theorem 2.2. The recursive definition ensures that $p(k)$ is $\mathcal{F}_{k-1}$-measurable. Thus, also $\Omega(k)\in\mathcal{F}_{k-1}$ for any $k=0,1,\dots$ One can check that $(M_i(k))_{k=0,1,\dots}$, defined in (20), forms a martingale for each $i\in[d]$. This allows us to apply Doob's submartingale inequality with $p=2$, for which we deduce a bound on the second moment. Since $\mathbb{E}[\xi_1(k_2)\,|\,\mathcal{F}_{k_1}]=0$ for any $k_2>k_1=0,1,\dots$, the cross terms vanish and
$$\mathbb{E}[M_1(k)^2] = \alpha^2\,\mathbb{E}\bigg[\Big(\sum_{\ell=0}^k\xi_1(\ell)\mathbf{1}_{\Omega(\ell)}\Big)^2\bigg] = \alpha^2\sum_{\ell=0}^k\mathbb{E}\big[(\xi_1(\ell)\mathbf{1}_{\Omega(\ell)})^2\big].$$
Writing $S(\ell) := Y_1(\ell)-\sum_{j=1}^d p_j(\ell)Y_j(\ell)$, we have $\xi_1(\ell) = p_1(\ell)\big(\mathbb{E}[S(\ell)\,|\,\mathcal{F}_{\ell-1}]-S(\ell)\big)$, and since $p_1(\ell)\le 1$, conditioning on $\mathcal{F}_{\ell-1}$ yields
$$\mathbb{E}[M_1(k)^2] \le \alpha^2\sum_{\ell=0}^k\mathbb{E}\Big[\Big(\mathbb{E}\big[S(\ell)^2\,\big|\,\mathcal{F}_{\ell-1}\big]-\mathbb{E}\big[S(\ell)\,\big|\,\mathcal{F}_{\ell-1}\big]^2\Big)\mathbf{1}_{\Omega(\ell)}\Big] \le \alpha^2\sum_{\ell=0}^k\mathbb{E}\big[S(\ell)^2\mathbf{1}_{\Omega(\ell)}\big].$$
Using $S(\ell) = \big(1-p_1(\ell)\big)Y_1(\ell)-\sum_{j=2}^d p_j(\ell)Y_j(\ell)$, the inequality $(a+b)^2\le 2(a^2+b^2)$, the bound $|Y_j(\ell)|\le Q$ and $1-p_1(\ell)\le 1$, this gives
$$\mathbb{E}[M_1(k)^2] \le 2Q^2\alpha^2\sum_{\ell=0}^k\mathbb{E}\bigg[\Big(\big(1-p_1(\ell)\big)^2+\Big(\sum_{j=2}^d p_j(\ell)\Big)^2\Big)\mathbf{1}_{\Omega(\ell)}\bigg] \le 4Q^2\alpha^2\sum_{\ell=0}^k\mathbb{E}\big[\big(1-p_1(\ell)\big)\mathbf{1}_{\Omega(\ell)}\big].$$
Arguing similarly, we obtain for $u\in\{2,\dots,d\}$,
$$\mathbb{E}[M_u(k)^2] = \alpha^2\sum_{\ell=0}^k\mathbb{E}\big[(\xi_u(\ell)\mathbf{1}_{\Omega(\ell)})^2\big] \le \alpha^2\sum_{\ell=0}^k\mathbb{E}\Big[p_u(\ell)^2\Big(Y_u(\ell)-\sum_{j=1}^d p_j(\ell)Y_j(\ell)\Big)^2\mathbf{1}_{\Omega(\ell)}\Big] \le 4Q^2\alpha^2\sum_{\ell=0}^k\mathbb{E}\big[p_u(\ell)\mathbf{1}_{\Omega(\ell)}\big].$$
Hence, a union bound and Doob's submartingale inequality (22) with $p=2$ give, for any $k=0,1,\dots$,
$$\mathbb{P}(E(k)) = 1-\mathbb{P}\Big(\bigcup_{j=1}^d\Big\{\max_{u\in[k]}|M_j(u)|\ge\Delta/4\Big\}\Big) \ge 1-\frac{64Q^2\alpha^2}{\Delta^2}\sum_{\ell=0}^k\bigg(\mathbb{E}\big[\big(1-p_1(\ell)\big)\mathbf{1}_{\Omega(\ell)}\big]+\sum_{j=2}^d\mathbb{E}\big[p_j(\ell)\mathbf{1}_{\Omega(\ell)}\big]\bigg) = 1-\frac{128Q^2\alpha^2}{\Delta^2}\sum_{\ell=0}^k\mathbb{E}\big[\big(1-p_1(\ell)\big)\mathbf{1}_{\Omega(\ell)}\big].$$
Proposition A.3 gives, for any $k=0,1,\dots$, the bound
$$\mathbb{E}\big[\big(1-p_1(k+1)\big)\mathbf{1}_{\Omega(k+1)}\big] \le \mathbb{E}\big[\big(1-p_1(k+1)\big)\mathbf{1}_{\Omega(k)}\big] \le \Big(1-\frac{\alpha\Delta}{4d}\Big(1+\frac{\Delta}{2}(d-1)\Big)\Big)\mathbb{E}\big[\big(1-p_1(k)\big)\mathbf{1}_{\Omega(k)}\big],$$
using $\mathbb{E}[\xi_1(k)\mathbf{1}_{\Omega(k)}]=0$, which implies
$$\mathbb{E}\big[\big(1-p_1(k)\big)\mathbf{1}_{\Omega(k)}\big] \le \big(1-p_1(0)\big)\Big(1-\frac{\alpha\Delta}{4d}\Big(1+\frac{\Delta}{2}(d-1)\Big)\Big)^k. \tag{23}$$
We set
$$\Theta := \bigcap_{k=0}^\infty\Omega(k) = \Big\{p_1(u)\ge\max_{i=2,\dots,d}p_i(u)+\frac{\Delta}{2}\ \ \forall u\in\mathbb{N}\Big\}.$$
The continuity of probability measures and Lemma A.4 then imply
$$\mathbb{P}(\Theta) = \lim_{k\to\infty}\mathbb{P}(\Omega(k)) \ge \lim_{k\to\infty}\mathbb{P}(E(k)) \ge 1-\frac{128Q^2(1-p_1(0))\alpha^2}{\Delta^2}\sum_{\ell=0}^\infty\Big(1-\frac{\alpha\Delta}{4d}\Big(1+\frac{\Delta}{2}(d-1)\Big)\Big)^\ell = 1-\frac{1024Q^2(1-p_1(0))d\alpha}{\Delta^3\big(2+\Delta(d-1)\big)} \ge 1-\frac{2048Q^2(1-p_1(0))\alpha}{\Delta^3\big(\frac{4}{d}+\Delta\big)} \ge 1-\frac{\varepsilon}{2},$$
where we used that we can assume $d\ge 2$ without loss of generality. Additionally, (23) and the elementary inequality $1-x\le\exp(-x)$, valid for any real number $x$, give
$$\mathbb{E}\big[\big(1-p_1(k)\big)\mathbf{1}_\Theta\big] \le \mathbb{E}\big[\big(1-p_1(k)\big)\mathbf{1}_{\Omega(k)}\big] \le \big(1-p_1(0)\big)\exp\Big(-\frac{\alpha\Delta}{4d}\Big(1+\frac{\Delta}{2}(d-1)\Big)k\Big).$$
When $d=1$, the right-hand side of this inequality is $0$. For $d\ge 2$, we can also use the bound $d-1\ge d/2$.
Together with
$$\|p(k)-e_1\|_1 = 1-p_1(k)+\sum_{i=2}^d p_i(k) = 2\big(1-p_1(k)\big),$$
this concludes the proof of the first statement. For the proof of the second statement, we apply Markov's inequality to obtain
$$\mathbb{P}\big(\|p(k)-e_1\|_1\ge\delta\big) \le \mathbb{P}(\Theta^C)+\mathbb{P}\big(\|p(k)-e_1\|_1\mathbf{1}_\Theta\ge\delta\big) \le \frac{\varepsilon}{2}+2\big(1-p_1(0)\big)\exp\bigg(-\frac{\alpha}{16}\Big(\frac{4\Delta}{d}+\Delta^2\Big)k\bigg)\delta^{-1}.$$
Hence, if
$$k \ge \bigg(\frac{\alpha}{16}\Big(\frac{4\Delta}{d}+\Delta^2\Big)\bigg)^{-1}\log\frac{4(1-p_1(0))}{\varepsilon\delta} = \frac{16d}{\alpha\Delta(4+d\Delta)}\log\frac{4(1-p_1(0))}{\varepsilon\delta},$$
then $\mathbb{P}\big(\|p(k)-e_1\|_1\ge\delta\big)\le\varepsilon$.

A.2 Proofs for Subsection 2.2

This section contains the proofs of Lemma 2.5 and Theorem 2.6 on the gradient flow.

Proof of Formula (9). Throughout the proof we set $p(t) := p_1(t)$ and do not use the previous notation $p_1(t),p_2(t)$ for the first and second probability. For $d=2$, the gradient flow ODE (8) becomes
$$\frac{\mathrm{d}}{\mathrm{d}t}p(t) = p(t)\Big(p(t)-\big(p(t)^2+(1-p(t))^2\big)\Big) = 3p(t)^2-2p(t)^3-p(t). \tag{24}$$
Rewriting this in the variable $u(t)=1-2p(t)$ gives the dynamic
$$\frac{\mathrm{d}}{\mathrm{d}t}u(t) = -2\frac{\mathrm{d}}{\mathrm{d}t}p(t) = -6p(t)^2+4p(t)^3+2p(t) = \frac{1}{2}\big(1-u(t)^2\big)u(t).$$
This is solved by $u(t)=-1/\sqrt{Ce^{-t}+1}$, since
$$-\frac{\mathrm{d}}{\mathrm{d}t}\frac{1}{\sqrt{Ce^{-t}+1}} = -\frac{1}{2}\cdot\frac{Ce^{-t}}{(Ce^{-t}+1)^{3/2}} = \frac{1}{2}\big(1-u(t)^2\big)u(t).$$
Thus, $p(t)=\frac{1}{2}\big(1-u(t)\big)$ solves (24). Finally, $C$ is determined by the initial condition $p(0)=\frac{1}{2}\big(1+1/\sqrt{C+1}\big)$.

Proof of Lemma 2.5. (a) Since $\phi$ is convex,
$\phi'$ is monotonically increasing. Thus, for a probability vector $q=(q_1,\dots,q_d)$, we have
$$\sum_{i=1}^d q_i\phi'(q_i)\big(q_i-\|q\|_2^2\big) = \sum_{i=1}^d q_i\phi'(q_i)\Big(q_i\sum_{j=1}^d q_j-\sum_{j=1}^d q_j^2\Big) = \sum_{i,j=1}^d q_iq_j\phi'(q_i)\big(q_i-q_j\big) = \sum_{1\le i<j\le d}q_iq_j\big(\phi'(q_i)-\phi'(q_j)\big)\big(q_i-q_j\big) \ge 0.$$
(If $\phi$ is strictly convex, then equality holds if and only if $q$ is one of the stationary points described above.) Using this and the gradient flow formula,
$$\frac{\mathrm{d}}{\mathrm{d}t}\sum_{i=1}^d\phi(p_i(t)) = \sum_{i=1}^d\phi'(p_i(t))\frac{\mathrm{d}}{\mathrm{d}t}p_i(t) = \sum_{i=1}^d p_i(t)\phi'(p_i(t))\big(p_i(t)-\|p(t)\|_2^2\big) \ge 0,$$
proving the result.

(b) By definition, it holds for $i,j\in[d]$ that
$$\frac{\mathrm{d}}{\mathrm{d}t}\big(p_i(t)-p_j(t)\big) = p_i(t)\big(p_i(t)-\|p(t)\|^2\big)-p_j(t)\big(p_j(t)-\|p(t)\|^2\big) = \big(p_i(t)-p_j(t)\big)\big(p_i(t)+p_j(t)-\|p(t)\|^2\big). \tag{25}$$
If $p_i(0)>p_j(0)$, we can apply Grönwall's inequality together with (25) to obtain for any $t\ge 0$,
$$p_j(t)-p_i(t) \le \big(p_j(0)-p_i(0)\big)\exp\Big(\int_0^t p_i(s)+p_j(s)-\|p(s)\|^2\,\mathrm{d}s\Big) < 0.$$
Similarly, if $p_i(0)=p_j(0)$, we obtain
$$p_j(t)-p_i(t) \le \big(p_j(0)-p_i(0)\big)\exp\Big(\int_0^t p_i(s)+p_j(s)-\|p(s)\|^2\,\mathrm{d}s\Big) = 0$$
and
$$p_i(t)-p_j(t) \le \big(p_i(0)-p_j(0)\big)\exp\Big(\int_0^t p_i(s)+p_j(s)-\|p(s)\|^2\,\mathrm{d}s\Big) = 0,$$
concluding the proof of the first statement. To prove the second statement, we have
$$p_1(t) = p_1(0)+\int_0^t p_1(s)\big(p_1(s)-\|p(s)\|^2\big)\,\mathrm{d}s \ge p_1(0)-t,$$
and similarly, for any $i\in\{2,\dots,d\}$,
$$p_i(t) \le p_i(0)+t \le p_1(0)+t-\Delta.$$
Hence, whenever $t\in[0,\Delta/2]$, we find $p_1(t)\ge\max_{i=2,\dots,d}p_i(t)$, which also implies
$$p_1(t) \ge p_1(t)^2+\max_{i=2,\dots,d}p_i(t)\big(1-p_1(t)\big) \ge \|p(t)\|^2 \quad\text{for all } t\in[0,\Delta/2].$$
Therefore, for any $t\in[0,\Delta/2]$ and $i\in[d]$, it holds that
$$\frac{\mathrm{d}}{\mathrm{d}t}\big(p_1(t)-p_i(t)\big) = \big(p_1(t)-p_i(t)\big)\big(p_1(t)+p_i(t)-\|p(t)\|^2\big) \ge 0,$$
which implies for any $i\in\{2,\dots,d\}$ and $t\in[0,\Delta/2]$,
$$p_1(t)-p_i(t) \ge p_1(0)-p_i(0) \ge \Delta.$$
Applying this argument iteratively concludes the proof.

For the reader's convenience we restate Theorem 2.6 before giving its proof.

Theorem 2.6. Assume $p_1(0)\ge\max_{i=2,\dots,d}p_i(0)+\Delta$ for some $\Delta>0$. Then
$$\|e_1-p(t)\|_1 \le 2\big(1-p_1(0)\big)\exp\Big(-\frac{\Delta}{d}\big(1+(d-1)\Delta\big)t\Big),$$
that is, linear convergence of $p(t)\to e_1$ as $t\to\infty$.

Proof of Theorem 2.6. Arguing as in (19) and (18), Lemma 2.5(b) implies
$$\frac{\mathrm{d}}{\mathrm{d}t}\big(1-p_1(t)\big) = -p_1(t)\big(p_1(t)-\|p(t)\|^2\big) \le -\mu\big(1-p_1(t)\big), \qquad\text{where } \mu := \frac{\Delta}{d}\big(1+(d-1)\Delta\big).$$
Grönwall's inequality entails $1-p_1(t)\le(1-p_1(0))\exp(-\mu t)$, which gives
$$\|p(t)-e_1\|_1 = 1-p_1(t)+\sum_{i=2}^d p_i(t) = 2\big(1-p_1(t)\big) \le 2\big(1-p_1(0)\big)\exp(-\mu t).$$

A.3 Proofs for Subsection 2.3

In this subsection, we heuristically derive the expression for the probabilities
$$\frac{\lambda_j w_j(t_k)}{\sum_{\ell=1}^d\lambda_\ell w_\ell(t_k)}, \qquad j=1,\dots,d, \tag{26}$$
in (12). To this end, we assume that the weights are small compared to the threshold $S$, that the weights are only updated at the postsynaptic spike times, and that $\sum_{\ell=1}^d\lambda_\ell w_\ell(t_k)\gg S$. For convenience, we write $w_\ell$ for $w_\ell(t_k)$, for all $\ell\in[d]$. The constraint $\sum_{\ell=1}^d\lambda_\ell w_\ell\gg S$ guarantees that after the postsynaptic spike time $t_k$, the membrane potential $Y_t$ will again reach $S$ and thus emit another spike at time $t_{k+1}$. Taking the expectation of the membrane potential $Y_t=\sum_{j=1}^d\sum_{\tau\in T_j\cap(t_k,t]}w_je^{-(t-\tau)}$ with respect to all except the $j$th spike-train gives
$$Z_t := \sum_{\tau\in T_j\cap(t_k,t]}w_je^{-(t-\tau)}+\sum_{\ell\neq j}w_\ell\lambda_\ell\int_{t_k}^t e^{-(t-s)}\,\mathrm{d}s = \sum_{\tau\in T_j\cap(t_k,t]}w_je^{-(t-\tau)}+\sum_{\ell\neq j}w_\ell\lambda_\ell\big(1-e^{-(t-t_k)}\big)$$
for all $t_k\le t<t_{k+1}$. Introduce $t^* := \inf\{t\ge t_k : Z_t\ge S-w_j\}$ and write $t^+$ for the first time after $t^*$ where
$$t\mapsto\underbrace{Z_{t^*}+e^{-(t^*-t_k)}\sum_{\ell\neq j}w_\ell\lambda_\ell\big(1-e^{-(t-t^*)}\big)}_{=:V_t}$$
reaches the threshold $S$. If there are sufficiently many neurons, the probability that the $j$th presynaptic neuron spikes at time $t^*$ is small and will be neglected. We have $Z_{t^*}=S-w_j$, so $V_{t^+}=S$ exactly when the increment $V_{t^+}-V_{t^*}$ equals $w_j$. Approximating $1-e^{-(t^+-t^*)}\approx t^+-t^*$ gives
$$t^+-t^* \approx e^{t^*-t_k}\frac{w_j}{\sum_{\ell\neq j}w_\ell\lambda_\ell}.$$
The $j$th presynaptic neuron causes the next postsynaptic spike if and only if it spikes in the interval $(t^*,t^+)$. The spike times of the $j$th presynaptic neuron are
generated from a Poisson process with intensity $\lambda_j$. Thus, if $U\sim\mathrm{Poisson}(\lambda_j(t^+-t^*))$, the probability that the $j$th presynaptic neuron spikes in $(t^*,t^+)$ is given by
$$\mathbb{P}(U\neq 0) = 1-\mathbb{P}(U=0) = 1-\exp\big(-\lambda_j(t^+-t^*)\big) \approx \lambda_j(t^+-t^*) \approx e^{t^*-t_k}\frac{w_j\lambda_j}{\sum_{\ell\neq j}w_\ell\lambda_\ell}.$$
We can moreover approximate the denominator on the right-hand side by the full sum $\sum_{\ell=1}^d w_\ell\lambda_\ell$. Since the probabilities add up to one, we must have $e^{t^*-t_k}\approx 1$. This shows that the probability of the $j$th presynaptic neuron triggering the first postsynaptic spike after $t_k$ is approximately given by (26).

Lemma A.5. Consider the setting outlined in Subsection 2.3. If at some time point $t>0$ all weights are the same, then the probability that the $j$th neuron triggers the next postsynaptic spike after $t$ is given by $\lambda_j/\sum_{\ell=1}^d\lambda_\ell$.

Proof. Since all weights are the same, we can denote their value by $w$. The $j$th neuron causes a postsynaptic spike if and only if it is the first one to spike after the postsynaptic membrane potential $Y_s$ has reached a level $\ge S-w$. As $t^*=\inf\{s\ge t : Y_s\ge S-w\}$ is a jump time and a stopping time, we can restart the process at $t^*$. As the increments of Poisson processes are independent, and the times between the jumps are exponentially distributed with parameters $\lambda_j$, the probability that the $j$th neuron causes the next postsynaptic spike is given by $\mathbb{P}\big(X_j=\min(X_1,\dots,X_d)\big)$, where $(X_i)_{i\in[d]}$ are independent random variables satisfying $X_i\sim\mathrm{Exp}(\lambda_i)$. If $U\sim\mathrm{Exp}(\lambda)$ and $V\sim\mathrm{Exp}(\lambda')$ are independent, then $\min(U,V)\sim\mathrm{Exp}(\lambda+\lambda')$ and $\mathbb{P}(U\le V)=\lambda/(\lambda+\lambda')$. Thus, $\min_{i\neq j}X_i\sim\mathrm{Exp}\big(\sum_{i\neq j}\lambda_i\big)$ and
$$\mathbb{P}\big(X_j=\min(X_1,\dots,X_d)\big) = \mathbb{P}\Big(X_j\le\min_{i\neq j}X_i\Big) = \frac{\lambda_j}{\sum_{\ell=1}^d\lambda_\ell}.$$

A.4 On the connection to entropic mirror descent

An alternative approach to connecting our proposed learning rule (3) for the probabilities $p$ with entropic mirror descent in discrete time is as follows. The entropic mirror descent step (14) with Kullback–Leibler divergence and potential $f$ can be solved explicitly and yields
$$p_i(k+1) = \frac{p_i(k)\exp\big(-\alpha(\nabla f(p(k)))_i\big)}{\sum_{j=1}^d p_j(k)\exp\big(-\alpha(\nabla f(p(k)))_j\big)}, \qquad i\in[d],\ k=0,1,\dots \tag{27}$$
see Section 5 of Beck and Teboulle [5] for details. With $f(p)=\tilde{L}(p)=-\|p\|^2/2$ and the first-order approximation $\exp(x)\approx 1+x$ for small $x$, we deduce
$$p_i(k+1) = \frac{p_i(k)\exp\big(-\alpha(\nabla\tilde{L}(p(k)))_i\big)}{\sum_{j=1}^d p_j(k)\exp\big(-\alpha(\nabla\tilde{L}(p(k)))_j\big)} = \frac{p_i(k)\exp\big(\alpha p_i(k)\big)}{\sum_{j=1}^d p_j(k)\exp\big(\alpha p_j(k)\big)} = \frac{p_i(k)\exp\big(\alpha(p_i(k)-\|p(k)\|^2)\big)}{\sum_{j=1}^d p_j(k)\exp\big(\alpha(p_j(k)-\|p(k)\|^2)\big)} \approx \frac{p_i(k)\big(1+\alpha(p_i(k)-\|p(k)\|^2)\big)}{\sum_{j=1}^d p_j(k)\big(1+\alpha(p_j(k)-\|p(k)\|^2)\big)}$$
for any $i=1,\dots,d$ and $k=0,1,\dots$ As our proposed learning rule (3) is a noisy version of the last line, it is naturally connected to noisy entropic gradient descent.
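To illustrate (a sketch not contained in the paper), iterating the explicit multiplicative update (27) with a small learning rate drives $p$ to the vertex $e_1$ whenever the first coordinate starts largest:

```python
import numpy as np

def entropic_md_step(p, alpha):
    # explicit entropic mirror descent step for f(p) = -||p||^2 / 2:
    # p_i <- p_i * exp(alpha * p_i) / sum_j p_j * exp(alpha * p_j)
    w = p * np.exp(alpha * p)
    return w / w.sum()

alpha = 0.1
p = np.array([0.35, 0.25, 0.25, 0.15])   # first coordinate starts largest
for _ in range(2000):
    p = entropic_md_step(p, alpha)

print(p)                                  # mass concentrates on coordinate 1
assert np.linalg.norm(p - np.array([1.0, 0.0, 0.0, 0.0]), 1) < 1e-6
```

The winner-take-all behaviour mirrors the convergence to $e_1$ established for the noisy rule in Theorem 2.2.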
arXiv:2505.10630v1 [cs.LG] 15 May 2025

How many measurements are enough? Bayesian recovery in inverse problems with general distributions

Ben Adcock, Department of Mathematics, Simon Fraser University, Canada
Nick Huang, Department of Mathematics, Simon Fraser University, Canada

Abstract

We study the sample complexity of Bayesian recovery for solving inverse problems with general prior, forward operator and noise distributions. We consider posterior sampling according to an approximate prior $\mathcal{P}$, and establish sufficient conditions for stable and accurate recovery with high probability. Our main result is a non-asymptotic bound showing that the sample complexity depends on (i) the intrinsic complexity of $\mathcal{P}$, quantified by its approximate covering number, and (ii) concentration bounds for the forward operator and noise distributions. As a key application, we specialize to generative priors, where $\mathcal{P}$ is the pushforward of a latent distribution via a Deep Neural Network (DNN). We show that the sample complexity scales log-linearly with the latent dimension $k$, thus establishing the efficacy of DNN-based priors. Generalizing existing results on deterministic (i.e., non-Bayesian) recovery for the important problem of random sampling with an orthogonal matrix $U$, we show how the sample complexity is determined by the coherence of $U$ with respect to the support of $\mathcal{P}$. Hence, we establish that coherence plays a fundamental role in Bayesian recovery as well. Overall, our framework unifies and extends prior work, providing rigorous guarantees for the sample complexity of solving Bayesian inverse problems with arbitrary distributions.

1 Introduction

Inverse problems are of fundamental importance in science, engineering and industry. In a standard setting, the aim is to recover an unknown vector (e.g., a signal or image) $x^*\in\mathbb{R}^n$ from measurements
$$y = Ax^*+e \in \mathbb{R}^m. \tag{1.1}$$
Here $e\in\mathbb{R}^m$ is measurement noise and $A\in\mathbb{R}^{m\times n}$, often termed the measurement matrix, represents the forwards operator.
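The measurement model (1.1) can be instantiated in a few lines (a toy sketch with hypothetical dimensions, not taken from the paper); with the common $1/m$ variance normalization, a Gaussian measurement matrix preserves squared norms in expectation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 128, 32

x_star = rng.standard_normal(n)                 # toy stand-in for the unknown signal
A = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian measurement matrix, entries of variance 1/m
e = 0.05 * rng.standard_normal(m)               # measurement noise
y = A @ x_star + e                              # the measurements of model (1.1)

# with the 1/m scaling, E||Ax||^2 = ||x||^2 (an isotropy property)
norms = [np.sum((rng.standard_normal((m, n)) / np.sqrt(m) @ x_star)**2)
         for _ in range(2000)]
print(np.mean(norms) / np.sum(x_star**2))       # close to 1
assert abs(np.mean(norms) / np.sum(x_star**2) - 1) < 0.1
```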
While simple, the discrete, linear problem (1.1) is sufficient to model many important applications [23,60,64]. It is common to solve (1.1) using a Bayesian approach (see, e.g., [28,67]). Here one assumes that $x^*$ is drawn from some prior distribution $\mathcal{R}$. However, in practice, $\mathcal{R}$ is never known exactly. Especially in modern settings that employ Deep Learning (DL) [11,31], it is common to learn an approximate prior $\mathcal{P}$ and then recover $x^*$ from $y$ by approximate posterior sampling, i.e., sampling $\hat{x}$ from the posterior $\mathcal{P}(\cdot|y,A)$. An increasingly popular approach involves using generative models to learn $\mathcal{P}$ (see, e.g., [11,19,31,64,66,76] and references therein).

A major concern in many inverse problems is that the number of measurements $m$ is highly limited, due to physical constraints such as time (e.g., in Magnetic Resonance Imaging (MRI)), power (e.g., in portable sensors), money (e.g., seismic imaging), radiation exposure (e.g., X-Ray CT), or other factors [4,60,64]. Hence, one aims to recover $x^*$ well while keeping the number of measurements $m$ as small as possible. With this in mind, in this work we address the following broad question: How many measurements suffice for stable and accurate recovery of $x^*\sim\mathcal{R}$ via approximate posterior sampling $\hat{x}\sim\mathcal{P}(\cdot|y,A)$, and what are conditions on $\mathcal{R}$, $\mathcal{P}$ and the distributions $\mathcal{A}$ and $\mathcal{E}$ of the measurement matrix $A$ and noise $e$, respectively, that ensure this recovery?

Preprint. Under review.

1.1 Overview

In this work, we
strive to answer this question in the broadest possible terms, with a theoretical framework that allows for very general types of distributions. We now describe the corresponding conditions needed and present a simplified version of our main results.

(i) Closeness of the real and approximate distributions. Namely, $W_p(\mathcal{R},\mathcal{P})$ is small, where $W_p$ denotes the Wasserstein $p$-metric for some $1\le p\le\infty$.

(ii) Low complexity of $\mathcal{P}$. Since $m\ll n$ in many applications, to have any prospect of accurate recovery we need to impose that $\mathcal{P}$ (or equivalently $\mathcal{R}$, in view of the previous assumption) has an inherent low complexity. Following [47], we quantify this in terms of its approximate covering number $\mathrm{Cov}_{\eta,\delta}(\mathcal{P})$. This is equal to the minimum number of balls of radius $\eta$ required to cover a region of $\mathbb{R}^n$ having $\mathcal{P}$-measure at least $1-\delta$. See Definition 2.1 for the full definition.

(iii) Concentration of $\mathcal{A}$. We consider constants $C_{\mathrm{low}}(t)=C_{\mathrm{low}}(t;\mathcal{A},D)\ge 0$ and $C_{\mathrm{upp}}(t)=C_{\mathrm{upp}}(t;\mathcal{A},D)\ge 0$ (see Definition 2.3 for the full definition) such that
$$\mathbb{P}_{A\sim\mathcal{A}}\big[\|Ax\|\le t\|x\|\big] \le C_{\mathrm{low}}(t), \qquad \mathbb{P}_{A\sim\mathcal{A}}\big[\|Ax\|\ge t\|x\|\big] \le C_{\mathrm{upp}}(t)$$
for all $x\in D := \mathrm{supp}(\mathcal{P})-\mathrm{supp}(\mathcal{P})$ and $t>0$. If $\mathcal{A}$ is isotropic, i.e., $\mathbb{E}_{A\sim\mathcal{A}}\|Ax\|^2=\|x\|^2$ for all $x\in\mathbb{R}^n$, as is often the case in practice, these constants measure how fast $\|Ax\|^2$ concentrates around its mean. Notice that this condition is imposed only on $D=\mathrm{supp}(\mathcal{P})-\mathrm{supp}(\mathcal{P})$, rather than the whole space $\mathbb{R}^n$. As we see later, this is crucial in obtaining meaningful recovery guarantees.

Finally, in order to present a simplified result in this section, we make two further simplifying assumptions. Both of these will be relaxed in our full result, Theorem 3.1.

(iv) Gaussian noise. Namely, $\mathcal{E}=N(0,\frac{\sigma^2}{m}I)$ where $\sigma>0$.

(v) Bounded forwards operators. We assume that $\|A\|\le\theta$ a.s. for $A\sim\mathcal{A}$ and some $\theta>0$.

Theorem 1.1 (Simplified main result). Let $1\le p\le\infty$, $0<\delta\le 1/4$, $\eta>0$ and suppose that conditions (i)–(v) hold with $W_p(\mathcal{R},\mathcal{P})\le\varepsilon/(2m\theta)$ and $\sigma\ge\varepsilon/\delta^{1/p}$. Suppose that $x^*\sim\mathcal{R}$, $A\sim\mathcal{A}$, $e\sim\mathcal{E}$ independently and $\hat{x}\sim\mathcal{P}(\cdot|y,A)$, where $y=Ax^*+e$.
Then, for any $d\ge 2$,
$$\mathbb{P}\big[\|x^*-\hat{x}\|\ge(8d^2+2)(\eta+\sigma)\big] \lesssim \delta+\mathrm{Cov}_{\eta,\delta}(\mathcal{P})\Big[C_{\mathrm{low}}(1/d)+C_{\mathrm{upp}}(d)+e^{-m/16}\Big]. \tag{1.2}$$
Note that in this and all subsequent results, the term on the left-hand side of (1.2) is the probability with respect to all variables, i.e., $x^*\sim\mathcal{R}$, $A\sim\mathcal{A}$, $e\sim\mathcal{E}$ and $\hat{x}\sim\mathcal{P}(\cdot|y,A)$.

This result is extremely general, in that it allows for essentially arbitrary (real and approximate) signal distributions $\mathcal{R}$ and $\mathcal{P}$ and an essentially arbitrary distribution $\mathcal{A}$ for the forwards operator. In broad terms, it bounds the probability that the error $\|x^*-\hat{x}\|$ of posterior sampling exceeds a constant times the noise level $\sigma$ plus an arbitrary parameter $\eta$. It does so in terms of the approximate covering number $\mathrm{Cov}_{\eta,\delta}(\mathcal{P})$, which measures the complexity of the approximate distribution $\mathcal{P}$; the concentration bounds $C_{\mathrm{low}}$ and $C_{\mathrm{upp}}$ for $\mathcal{A}$, which measure how much $A$ elongates or shrinks a fixed vector; and an exponentially-decaying term $e^{-m/16}$, which stems from the (Gaussian) noise. In particular, by analyzing these terms for different classes of distributions $\mathcal{P}$ and $\mathcal{A}$, we can derive concrete bounds for various exemplar problems. We next describe two such problems.

1.2 Examples

We first consider $\mathcal{A}$ to be a distribution of subgaussian random matrices. Here $A\sim\mathcal{A}$ if its entries are i.i.d. subgaussian random variables with mean zero and variance $1/m$ (see Definition 3.4).

Theorem 1.2 (Subgaussian measurement
matrices, simplified). Consider the setup of Theorem 1.1, where $\mathcal{A}$ is a distribution of subgaussian random matrices. Then there is a constant $c>0$ (depending on the subgaussian parameters $\beta,\kappa>0$; see Definition 3.4) such that
$$\mathbb{P}\big[\|x^*-\hat{x}\|\ge 34(\eta+\sigma)\big] \lesssim \delta, \qquad\text{whenever}\quad m\ge c\cdot\big[\log(\mathrm{Cov}_{\eta,\delta}(\mathcal{P}))+\log(1/\delta)\big].$$
Later, in Theorem 3.5, we slightly refine and generalize this result. This theorem shows the efficacy of Bayesian recovery with subgaussian random matrices: namely, the sample complexity scales linearly in the distribution complexity, i.e., the log of the approximate covering number.

Gaussian random matrices are very commonly studied, due to their amenability to analysis and tight theoretical bounds [4,20,32,47], with Theorem 1.2 being a case in point. However, they are largely irrelevant to practical inverse problems, where physical constraints impose certain structures on the forwards operator distribution $\mathcal{A}$ [4,60,64]. For instance, in MRI physical constraints mean that the measurements are samples of the Fourier transform of the image. This has motivated researchers to consider much more practically-relevant distributions, in particular, so-called subsampled orthogonal transforms (see, e.g., [4]). Here $U\in\mathbb{R}^{n\times n}$ is a fixed orthogonal matrix – for example, the matrix of the Discrete Fourier Transform (DFT) in the case of MRI – and the distribution $\mathcal{A}$ is defined by randomly selecting $m$ rows of $U$. See Definition 3.6 for the formal definition.

Theorem 1.3 (Subsampled orthogonal transforms, simplified). Consider the setup of Theorem 1.1, where $\mathcal{A}$ is a distribution of subsampled orthogonal matrices based on the matrix $U$. Then there is a universal constant $c>0$ such that
$$\mathbb{P}\big[\|x^*-\hat{x}\|\ge 34(\eta+\sigma)\big] \lesssim \delta, \qquad\text{whenever}\quad m\ge c\cdot\mu(U;D)\cdot\big[\log(\mathrm{Cov}_{\eta,\delta}(\mathcal{P}))+\log(1/\delta)\big],$$
where $D=\mathrm{supp}(\mathcal{P})-\mathrm{supp}(\mathcal{P})$ and $\mu(U;D)$ is the coherence of $U$ relative to $D$, defined as
$$\mu(U;D) = n\sup\big\{\|Ux\|_\infty^2/\|x\|^2 : x\in D,\ x\neq 0\big\}.$$
See Theorem 3.8 for the full result.
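As a quick illustration (not part of the paper), the quantity inside the supremum defining $\mu(U;D)$ can be evaluated for two standard choices of $U$ when $D$ consists of sparse vectors: the identity, which is maximally coherent, and the unitary DFT, which is maximally incoherent. The worst cases below are the single spike for the identity and a flat $s$-sparse support for the DFT:

```python
import numpy as np

n, s = 64, 4
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
I = np.eye(n)

def mu_at(U, x):
    # n * ||Ux||_inf^2 / ||x||_2^2, the quantity inside the sup defining mu(U; D)
    return len(x) * np.max(np.abs(U @ x))**2 / np.sum(x**2)

e1 = np.zeros(n); e1[0] = 1.0            # a 1-sparse vector
x = np.zeros(n); x[:s] = 1.0             # an s-sparse vector with flat support

# identity: mu = n already for a single spike (worst case for sparse recovery)
assert abs(mu_at(I, e1) - n) < 1e-9
# unitary DFT: mu over s-sparse vectors equals s, attained by the flat vector
assert abs(mu_at(F, x) - s) < 1e-9
print(mu_at(I, e1), mu_at(F, x))
```

This matches the familiar compressed-sensing picture: Fourier sampling of sparse signals needs on the order of $s$ measurements (up to logs), whereas sampling rows of the identity is hopeless without coherence-adapted sampling.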
This result shows that stable and accurate recovery similar to the Gaussian case can be achieved using subsampled orthogonal matrices, provided the number of measurements scales like the coherence. We discuss this term further in §3.2.

Theorems 1.2 and 1.3 establish a key relationship for successful Bayesian recovery in inverse problems, in each case relating the number of measurements m to the intrinsic complexity log(Cov_{η,δ}(P)) of P. It is therefore informative to see how this complexity behaves for cases of interest. As noted, it is common to use a generative model to learn P. This means that P = G♯γ, where G : R^k → R^n is a Deep Neural Network (DNN) and γ is some fixed probability measure on the latent space R^k. Typically, γ = N(0, I). If G is L-Lipschitz, we show in Proposition 4.1 that

log(Cov_{η,δ}(P)) = O(k log[L(√k + log(1/δ))/η]). (1.3)

This scales log-linearly in the latent space dimension k, confirming the intrinsic low complexity of the distribution P. Combining (1.3) with Theorems 1.2 and 1.3, we see that posterior sampling achieves stable and accurate recovery, provided the number of measurements m scales log-linearly (and thereby near-optimally) with k.

Further, in order to compare with deterministic settings such as classical compressed sensing, which concerns the recovery of s-sparse vectors, we also consider distributions P = P_s of s-sparse vectors. In this case, we show in Proposition 4.3 that

log(Cov_{η,δ}(P)) = O(s log(n/s) + log[(√s + log(1/δ))/η]). (1.4)

Hence s measurements, up to log terms, are sufficient for recovery of approximately
sparse vectors. This extends a classical result for deterministic compressed sensing to the Bayesian setting.

1.3 Significance

The significance of this work is as follows. See §1.4 for additional discussion.

1. We provide the first results for Bayesian recovery with arbitrary real and approximate prior distributions R, P and forwards operator and noise distributions A, E.

2. Unlike much of the theory of Bayesian inverse problems, which is asymptotic in nature [11,28,67], our results are non-asymptotic. They hold for arbitrary values of the various parameters, within given ranges (e.g., the failure probability δ, the noise level σ, the number of measurements m, and so forth).

3. For priors defined by Lipschitz generative DNNs, we establish the first result demonstrating that the sample complexity of Bayesian recovery depends log-linearly on the latent dimension k and logarithmically on the Lipschitz constant.

4. For the important class of subsampled orthogonal transforms, we show that the sample complexity of Bayesian recovery depends on the coherence, thus resolving several key open problems in the literature (see next).

5. It is increasingly well-known that DL-based methods for inverse problems are susceptible to hallucinations and other undesirable effects [10,13,16,23,27,37,39,42,45,57,58,59,61,62,75]. This is a major issue that may limit the uptake of these methods in safety-critical domains such as medical imaging [23,52,54,58,70,72,73]. Our results provide theoretical guarantees for stable and accurate recovery, and therefore show conditions under which hallucinations provably cannot occur. This is not only theoretically interesting, but it also has practical consequences for the development of robust DL methods for inverse problems – a topic we intend to explore in future work.

1.4 Related work

Bayesian methods for inverse problems have become increasingly popular over the last several decades [28,67].
Many state-of-the-art DL methods for inverse problems now follow a Bayesian approach (see [6,11,19,43,53,60] and references therein). Moreover, learned priors, such as those stemming from generative models, are now increasingly used in applications [1,6,11,19,26,31,35,43,46,49,50,51,55,60,64,66,74,76].

This work is motivated in part by the (non-Bayesian) theory of generative models for solving inverse problems (see [20,31,40,41,46,64,65,66,76] and references therein). This theory was first developed in [20], where compressed sensing techniques were used to show recovery guarantees for a Gaussian random matrix A when computing an approximate solution to (1.1) in the range Σ := ran(G) of a Lipschitz map G : R^k → R^n, typically assumed to be a generative NN. This is a deterministic approach. Besides the random forward operator and (potentially) random noise, it recovers a fixed (i.e., nonrandom) underlying signal x∗ in a deterministic fashion with a point estimator ˆx that is obtained as a minimizer of the empirical ℓ2-loss min_{z∈Σ} ∥Az − y∥². In particular, no information about the latent space distribution γ is used. In this work, following, e.g., [1,19,46,47,66] and others, we consider a Bayesian setting, where x∗ ∼ R is random and where we quantify the number of measurements that suffice for accurate and stable recovery via posterior sampling ˆx ∼ P(·|y, A). Our work is a generalization of [47], which considered Bayesian recovery with Gaussian random matrices and standard Gaussian noise. We significantly extend
their results to allow for arbitrary distributions A and E for the forward operator and noise, respectively. Using Theorem 1.1, we derive guarantees for the case of subsampled orthogonal transforms (Theorem 1.3), which, unlike Gaussian random matrices, are very relevant to applications. In particular, if U is a DFT matrix, our work addresses open problems posed in [46, §3] and [64, §II.F] on recovery guarantees with Fourier measurements. Moreover, addressing an open problem posed in [47, §6], we derive concrete bounds for the approximate covering number of distributions given by Lipschitz generative DNNs (see (1.3) and Proposition 4.1) and distributions of sparse vectors (see (1.4) and Proposition 4.3). In particular, we demonstrate stable and accurate recovery in a Bayesian sense with a number of measurements that scales linearly in the model complexity, i.e., k in the former case and s in the latter case.

Recently, [14] extended the results of [20] in the non-Bayesian setting from Gaussian random matrices to subsampled orthogonal transforms when considering ReLU generative DNNs. Theorem 1.3 provides a Bayesian analogue of this work. We also generalize [14] by allowing for general measurement distributions A and priors P, which need not arise from ReLU generative DNNs.

Classical compressed sensing considers the recovery of approximately s-sparse vectors from (1.1). It has since been extended to much more general types of low-complexity signal models, such as joint or block sparse vectors, tree sparse vectors, cosparse vectors and many others [5,12,21,29,33,69]. However, most recovery guarantees for general model classes consider only (sub)Gaussian random measurements (see, e.g., [12,32]). Recently, [3] introduced a general framework for compressed sensing that allows for essentially arbitrary low-complexity models and arbitrary (random) measurement matrices. Ours is a Bayesian analogue of this deterministic work.
Similar to [3], a key feature of our work is that we consider arbitrary real and approximate signal distributions R, P (analogous to arbitrary low-complexity models) and arbitrary distributions A for the forwards operator. Unsurprisingly, a number of the conditions for our main result, Theorem 1.1 – namely, the low-complexity condition (ii) and concentration condition (iii) – share similarities with those that ensure stable and accurate recovery in non-Bayesian compressed sensing. See Remarks 2.2 and 3.3 for further discussion. However, the proof techniques used in this work are entirely different.

2 Preliminaries

2.1 Notation

We write ∥·∥ for the ℓ2-norm on the Euclidean space R^n. We let B_r(x) = {z ∈ R^n : ∥z − x∥ ≤ r} and, when x = 0, we write B_r := B_r(0). Given a set X ⊆ R^n, we write X^c = R^n \ X for its complement. We also write B_r(X) = ∪_{x∈X} B_r(x) for the r-neighbourhood of X.

Let (X, F, µ) be a Borel probability space. We write supp(µ) for its support, i.e., the smallest closed set A ⊆ X for which µ(A) = 1. Given probability spaces (X, F1, µ), (Y, F2, ν), we write Γ = Γ_{µ,ν} for the set of couplings, i.e., probability measures on the product space (X × Y, σ(F1 ⊗ F2)) whose marginals are µ and ν, respectively. Given a cost function c : X × Y → [0, ∞) and 1 ≤ p < ∞, the Wasserstein-p metric is defined as

W_p(µ, ν) = inf_{γ∈Γ} ( ∫_{X×Y} c(x, y)^p dγ(x, y) )^{1/p}.

If p = ∞, then W_∞(µ, ν) = inf_{γ∈Γ} (esssup_γ c(x, y)). In this paper, unless
stated otherwise, X = Y = R^n and the cost function c is the Euclidean distance.

2.2 Approximate covering numbers

As a measure of the complexity of measures, we will use the concept of approximate covering numbers, as introduced in Jalal et al. [47].

Definition 2.1 (Approximate covering number). Let (X, F, P) be a probability space and δ, η ≥ 0. The (η, δ)-approximate covering number of P is defined as

Cov_{η,δ}(P) = min{ k ∈ N : ∃ {x_i}_{i=1}^k ⊆ supp(P), P( ∪_{i=1}^k B_η(x_i) ) ≥ 1 − δ }.

This quantity measures how many balls of radius η are required to cover at least 1 − δ of the P-mass of R^n. See [47] for further discussion. Note that [47] does not require the centres x_i of the approximate cover to belong to supp(P). However, this requirement is useful in our more general setting and presents no substantial restriction: at worst, it changes η by a factor of 1/2.

Remark 2.2 (Relation to non-Bayesian compressed sensing) Note that when δ = 0, the approximate covering number Cov_{η,0}(P) ≡ Cov_η(supp(P)) is just the classical covering number of the set supp(P), i.e., the minimal number of balls of radius η that cover supp(P). Classical covering numbers play a key role in (non-Bayesian) compressed sensing theory. Namely, the covering number of the model class Σ ⊆ R^n directly determines the number of measurements that suffice for stable and accurate recovery. See, e.g., [3,32]. In the Bayesian setting, the approximate covering number plays the same role – see Theorem 1.1.

2.3 Bounds for A and E

Since our objective is to establish results that hold for arbitrary measurement and noise distributions A and E, we require several key definitions. These are a variety of (concentration) bounds.

Definition 2.3 (Concentration bounds for A). Let A be a distribution on R^{m×n}, t ≥ 0 and D ⊆ R^n. Then a lower concentration bound for A is any constant C_low(t) = C_low(t; A, D) ≥ 0 such that

P_{A∼A}{∥Ax∥ ≤ t∥x∥} ≤ C_low(t; A, D), ∀x ∈ D.

Similarly, an upper concentration bound for A is any constant C_upp(t) = C_upp(t; A, D) ≥ 0 such that

P_{A∼A}{∥Ax∥ ≥ t∥x∥} ≤ C_upp(t; A, D), ∀x ∈ D.
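Definition 2.1 can also be illustrated numerically. The sketch below (our own toy code, not the authors' construction) greedily builds an approximate cover using centres drawn from samples of P – which automatically lie in supp(P) – and reports an upper estimate of Cov_{η,δ}(P) for a simple Lipschitz pushforward of a Gaussian measure. The function name `approx_covering_number` is hypothetical.

```python
import numpy as np

def approx_covering_number(samples, eval_samples, eta, delta):
    """Greedy upper estimate of Cov_{eta,delta}(P): pick centres from `samples`
    (hence in supp(P)) until at least a (1 - delta) fraction of `eval_samples`
    lies within distance eta of some centre."""
    centres = []
    uncovered = eval_samples.copy()
    target = int(np.ceil((1 - delta) * len(eval_samples)))
    covered = 0
    for x in samples:
        if covered >= target:
            break
        d = np.linalg.norm(uncovered - x, axis=1)
        hit = d <= eta
        if hit.any():
            centres.append(x)
            covered += int(hit.sum())
            uncovered = uncovered[~hit]
    return len(centres)

# P = pushforward of a 2-D standard Gaussian under a linear, 0.5-Lipschitz map
rng = np.random.default_rng(1)
G = lambda z: 0.5 * z
samples = G(rng.standard_normal((2000, 2)))
evals = G(rng.standard_normal((2000, 2)))
k = approx_covering_number(samples, evals, eta=0.5, delta=0.05)
print(k)
```

Greedy covering only gives an upper estimate, but it already exhibits the behaviour used throughout the paper: shrinking η or δ increases the estimate, while a smaller Lipschitz constant of G decreases it.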
Finally, given t, s ≥ 0, an (upper) absolute concentration bound for A is any constant C_abs(s, t; A, D) such that

P(∥Ax∥ > t) ≤ C_abs(s, t; A, D), ∀x ∈ D, ∥x∥ ≤ s.

Notice that if A is isotropic, i.e., E∥Ax∥² = ∥x∥², ∀x ∈ R^n, then C_low and C_upp determine how well ∥Ax∥ concentrates around its mean ∥x∥ for any fixed x ∈ D. In general, one can show that C_low(t; A, R^n), C_upp(t; A, R^n) = (t − 1)^{−2} = O(t^{−2}), t → ∞. However, in order to obtain desirable sample complexity estimates (e.g., Theorems 1.2 and 1.3), we need to derive concentration bounds that decay exponentially in m. A crucial component of this analysis is considering the concentration bound over some subset D (related to the support of P), as, in general, one cannot expect fast concentration over the whole of R^n. See Remarks 3.2 and 3.9.

Definition 2.4 (Concentration bound for E). Let E be a distribution on R^m and t ≥ 0. Then an (upper) concentration bound for E is any constant D_upp(t) = D_upp(t; E) ≥ 0 such that

E(B^c_t) = P_{e∼E}(∥e∥ ≥ t) ≤ D_upp(t; E).

Notice that this bound just measures the probability that the noise is large. We also need the following bound, which estimates how much the density of E changes in a σ-neighbourhood of the origin when perturbed by an amount ε.

Definition 2.5 (Density shift bounds for E). Let E be a distribution on R^m with density p_E and ε, τ ≥ 0. Then a density shift bound for E is any constant D_shift(ε, τ) = D_shift(ε, τ; E) ≥ 0 (possibly +∞) such that

p_E(u) ≤ D_shift(ε, τ; E)·p_E(v), ∀u, v ∈ R^m, ∥u∥ ≤ τ, ∥u − v∥ ≤ ε.

3 Main results

We now present our main results. The first, an extension of Theorem 1.1, is a general result that holds for arbitrary distributions R, P, A and E.

Theorem 3.1. Let 1 ≤ p ≤ ∞, 0 ≤ δ ≤ 1/4, ε, η, t > 0, c, c′ ≥ 1 and σ ≥ ε/δ^{1/p}. Let E be a distribution on R^m and R, P be distributions on R^n satisfying W_p(R, P) ≤ ε and

min(log Cov_{η,δ}(R), log Cov_{η,δ}(P)) ≤ k (3.1)

for some k ∈ N. Suppose that x∗ ∼ R, A ∼ A, e ∼ E independently and ˆx ∼ P(·|y, A), where y = Ax∗ + e. Then p := P[∥x∗ − ˆx∥ ≥ (c + 2)(η + σ)] satisfies

p ≤ 2δ + C_abs(ε/δ^{1/p}, tε/δ^{1/p}; A, D1) + D_upp(c′σ; E) + 2 D_shift(tε/δ^{1/p}, c′σ; E)·e^k·[ C_low(2√2/√c; A, D2) + C_upp(√c/(2√2); A, D2) + 2 D_upp(√c·σ/(2√2); E) ],

where

D1 = B_{ε/δ^{1/p}}(supp(P)) ∩ supp(R) − supp(P) (3.2)

and

D2 = supp(P) − supp(P) if P attains the minimum in (3.1), and D2 = supp(R) − supp(P) otherwise. (3.3)

This theorem bounds the probability p of unstable or inaccurate recovery in terms of the various parameters, using the constants introduced in the previous section and the approximate covering numbers of R, P. This result is powerful in its generality but, as a consequence, rather opaque. Later in this section we use it to deduce more explicit results for specific distributions of interest.

Remark 3.2 (The concentration bounds in Theorem 3.1) A particularly important facet of this result, for the reasons discussed above, is that the various concentration bounds C_abs, C_low and C_upp are taken over sets D1, D2 – given by (3.2) and (3.3), respectively, and related to the supports of P and R – rather than the whole space R^n. We exploit this fact crucially later in Theorem 3.8.

Remark 3.3 (Relation to non-Bayesian compressed sensing) The constants C_low and C_upp are similar, albeit not identical, to conditions such as the Restricted Isometry Property (RIP) (see, e.g., [4, Chpt. 5]) or Restricted Eigenvalue Condition (REC) [17,20] that appear in non-Bayesian compressed sensing.
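As a quick numerical sanity check of this RIP-like behaviour (a Monte Carlo sketch of our own, not from the paper), one can estimate C_low(t) and C_upp(t) for a matrix with i.i.d. N(0, 1/m) entries and a fixed unit vector:

```python
import numpy as np

def concentration_probs(m, n, t_low, t_upp, trials=2000, seed=0):
    """Monte Carlo estimates of P(||Ax|| <= t_low ||x||) and
    P(||Ax|| >= t_upp ||x||) for A with i.i.d. N(0, 1/m) entries and a fixed
    unit vector x (by rotation invariance, the choice of x is irrelevant)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    x[0] = 1.0
    low = upp = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        nrm = np.linalg.norm(A @ x)  # ||Ax||^2 ~ chi^2_m / m, concentrating at 1
        low += nrm <= t_low
        upp += nrm >= t_upp
    return low / trials, upp / trials

c_low, c_upp = concentration_probs(m=50, n=100, t_low=0.5, t_upp=1.5)
print(c_low, c_upp)  # both small: ||Ax|| concentrates around ||x|| = 1
```

Both empirical probabilities are tiny even at m = 50, reflecting the exponential-in-m concentration that the theory requires.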
There, one considers a fixed model class Σ ⊆ R^n, such as the set Σ_s of s-sparse vectors or, as in [20], the range ran(G) of a generative DNN. Conditions such as the RIP or REC impose that ∥Ax∥ is concentrated above and below around ∥x∥ for all x belonging to the difference set Σ − Σ. In Theorem 3.1, assuming P attains the minimum in (3.1), there is a similar condition with Σ replaced by supp(P). Indeed, C_low(2√2/√c; A, D2) measures how small ∥Ax∥ is in relation to ∥x∥, with C_upp(√c/(2√2); A, D2) measuring how large ∥Ax∥ is in relation to ∥x∥.

3.1 Example: Subgaussian random matrices with Gaussian noise

We now apply this theorem to the first example introduced in §1.2. Recall that a random variable X on R is subgaussian with parameters β, κ > 0 if P(|X| ≥ t) ≤ βe^{−κt²} for all t > 0.

Definition 3.4 (Subgaussian random matrix). A random matrix A ∈ R^{m×n} is subgaussian with parameters β, κ > 0 if A = (1/√m)·Ã, where the entries of Ã are independent mean-zero subgaussian random variables with variance 1 and the same subgaussian parameters β, κ. Note that 1/√m is a scaling factor which ensures that A is isotropic, i.e., E∥Ax∥² = ∥x∥², ∀x ∈ R^n.

Theorem 3.5. Let 1 ≤ p ≤ ∞, 0 ≤ δ ≤ 1/4, ε, η > 0 and σ ≥ ε/δ^{1/p}. Let E = N(0, (σ²/m)I) and A be a distribution of subgaussian random matrices with parameters β, κ > 0. Suppose that x∗ ∼ R, A ∼ A, e ∼ E independently and ˆx ∼ P(·|y, A), where y = Ax∗ + e. Then there is a constant c(β, κ) > 0 depending on β, κ only such that

P[∥x∗ − ˆx∥ ≥ 34(η + σ)] ≲ δ,

provided

m ≥ c(β, κ)·[min(log Cov_{η,δ}(R), log Cov_{η,δ}(P))
+ log(1/δ)]. (3.4)

This theorem is derived from Theorem 3.1 by showing that the various concentration bounds are exponentially small in m for subgaussian random matrices (see §B). It is a direct generalization of [47], which considered the Gaussian case only. It shows that subgaussian random matrices are near-optimal for Bayesian recovery, in the sense that m scales linearly with the log of the approximate covering number (3.4). We estimate these covering numbers for several key cases in §4.

3.2 Example: randomly-subsampled orthogonal transforms with Gaussian noise

As discussed, subgaussian random matrices are largely impractical. We now discuss the more practical case of subsampled orthogonal transforms.

Definition 3.6 (Randomly-subsampled orthogonal transform). Let U ∈ R^{n×n} be orthogonal (i.e., U⊤U = UU⊤ = I) and write u1, . . . , un ∈ R^n for its rows. Let X1, . . . , Xn ∼ Ber(m/n) be independent Bernoulli random variables with P(Xi = 1) = m/n and P(Xi = 0) = 1 − m/n. Then we define a distribution A as follows. We say that A ∼ A if

A = √(n/m)·[u⊤_{i1}; . . . ; u⊤_{iq}],

where {i1, . . . , iq} ⊆ {1, . . . , n} is the set of indices i for which Xi = 1. The factor √(n/m) ensures that E(A∗A) = I. Note that the number of measurements q in this model is itself a random variable, with E(q) = m. However, q concentrates exponentially around its mean.

Definition 3.7. Let U ∈ R^{n×n} and D ⊆ R^n. The coherence of U relative to D is

µ(U; D) = n·sup{ ∥Ux∥²_∞ / ∥x∥² : x ∈ D, x ≠ 0 }.

Coherence is a well-known concept in classical compressed sensing with sparse vectors. Definition 3.7 is a generalization that allows for arbitrary model classes. This definition is similar to that of [14], which considered non-Bayesian compressed sensing with generative models. It is related to the (somewhat more general) concept of variation introduced in [3].

Theorem 3.8. Let 1 ≤ p ≤ ∞, 0 ≤ δ ≤ 1/4, ε, η > 0 and σ ≥ ε/δ^{1/p}. Let E = N(0, (σ²/m)I) and A be a distribution of randomly-subsampled orthogonal matrices based on a matrix U.
Suppose that x∗ ∼ R, A ∼ A, e ∼ E independently and ˆx ∼ P(·|y, A), where y = Ax∗ + e. Then there is a universal constant c > 0 such that

P[∥x∗ − ˆx∥ ≥ 34(η + σ)] ≲ δ,

provided

m ≥ c·µ(U; D)·[log(Cov_{η,δ}(P)) + log(1/δ)], where D = supp(P) − supp(P). (3.5)

This theorem can be considered a generalization of results from [3,14] to the Bayesian setting. In [3,14], the measurement conditions scale linearly with µ(U; D), where D = Σ − Σ and Σ ⊆ R^n is the low-complexity model class. Similarly, the number of measurements (3.5) scales linearly with respect to the coherence relative to D = supp(P) − supp(P), which, as discussed in Remark 3.3, plays the role of the low-complexity model class in the Bayesian setting. It is known in classical compressed sensing that coherence determines the sample complexity of recovering sparse vectors from randomly-subsampled orthogonal transforms [24]. A similar argument can be made in the Bayesian setting. Notice that µ(U; D) ≤ µ(U; R^n) = n. However, we are particularly interested in cases where µ(U; D) ≪ n, in which case (3.5) may be significantly smaller than the ambient dimension n. We discuss this in the context of several examples in the next section.

Remark 3.9 (Concentration over subsets) Theorem 3.8 illustrates why it is important that Theorem 3.1 involves concentration bounds over subsets of
R^n. To derive Theorem 3.8 from Theorem 3.1 (see §B), we show exponentially-fast concentration in m/µ(U; D). Had we considered the whole of R^n then, since µ(U; R^n) = n, this would have led to an undesirable measurement condition of the form m = O(n), scaling linearly in the ambient dimension n.

4 Covering number and sample complexity estimates

We conclude by applying our results to two different types of approximate prior distributions.

4.1 Generative DNNs

Proposition 4.1 (Approximate covering number for a Lipschitz pushforward of a Gaussian measure). Let G : R^k → R^n be Lipschitz with constant L ≥ 0, i.e., ∥G(x) − G(z)∥ ≤ L∥x − z∥, ∀x, z ∈ R^k, and define P = G♯γ, where γ = N(0, I) is the standard normal distribution on R^k. Then

log(Cov_{η,δ}(P)) ≤ k log[1 + (2√k·L/η)(1 + √((2/k)·log(1/δ)))]. (4.1)

This result shows that P has low complexity, since log(Cov_{η,δ}(P)) scales log-linearly in k. Notice that L only appears logarithmically in (4.1). While it is not the main focus of this work, we note that Lipschitz constants of DNNs have been studied quite extensively [34,68,71]. Moreover, it is also possible to design and train DNNs with small Lipschitz constants [56]. This result, combined with Theorem 3.5, shows that accurate and stable Bayesian recovery is possible with such a prior, with sample complexity that is near-optimal in the latent dimension k, i.e., O(k log(k)).

Remark 4.2 (Quadratic bottleneck) In Theorem 3.8, the measurement condition (3.5) also depends on the coherence. This quantity has been considered in [14] for the case of ReLU DNNs. In particular, if a ReLU DNN G : R^k → R^n has random weights drawn from a standard normal distribution, then its coherence µ(U; D) scales like O(k) up to log factors [14, Thm. 3]. Combining this with Theorem 3.8 and Proposition 4.1, we see that the overall sample complexity for Bayesian recovery scales like O(k² log(k)) in this case. This is worse than the subgaussian case, where there is no coherence factor.
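The bound (4.1) is straightforward to evaluate. The short script below (our illustration, with arbitrarily chosen parameter values) tabulates the right-hand side, showing the log-linear growth in k and the merely logarithmic dependence on L:

```python
import numpy as np

def cov_bound(k, L, eta, delta):
    """Right-hand side of (4.1): upper bound on log Cov_{eta,delta}(G#gamma)
    for an L-Lipschitz map G from R^k."""
    return k * np.log(1 + (2 * np.sqrt(k) * L / eta)
                      * (1 + np.sqrt((2 / k) * np.log(1 / delta))))

eta, delta = 0.1, 0.01
for k in (10, 100, 1000):
    # grows roughly like k log k in the latent dimension k
    print(k, cov_bound(k, L=1e3, eta=eta, delta=delta))

# a 10x larger Lipschitz constant changes the bound only logarithmically
print(cov_bound(100, 1e3, eta, delta), cov_bound(100, 1e4, eta, delta))
```

This mirrors the discussion above: the latent dimension k is the dominant factor in the sample complexity, while the Lipschitz constant enters only inside a logarithm.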
The quadratic bottleneck that occurs when considering subsampled orthogonal matrices also arises in the non-Bayesian setting, as was discussed in detail in [14]. Its removal is an open problem (see §5). Note also that the coherence is not guaranteed to be small for general (in particular, trained) DNNs. However, [14] also discusses strategies for training generative models to have small coherence.

4.2 Distributions of sparse vectors

Let s ∈ {1, . . . , n}. We define a distribution P = P_s of s-sparse vectors in R^n as follows. To draw x ∼ P, we first choose a support set S ⊆ {1, . . . , n}, |S| = s, uniformly at random amongst all (n choose s) such subsets. We then set x_i = 0 for i ∉ S, and for each i ∈ S we draw x_i randomly and independently from N(0, 1). Note that there are other ways to define distributions of sparse vectors, which can be analyzed similarly. However, for brevity we only consider the above setup.

Proposition 4.3 (Approximate covering number for distributions of sparse vectors). Let P = P_s be a distribution of s-sparse vectors in R^n. Then

log(Cov_{η,δ}(P_s)) ≤ s[ log(en/s) + log(1 + (2√s/η)(1 + √((2/s)·log(1/δ)))) ]. (4.2)

As in the previous case, we deduce Bayesian recovery from O(s log(n/s)) subgaussian measurements, i.e., near-optimal, log-linear sample complexity. In the case of randomly-subsampled orthogonal
matrices, we also need to consider the coherence. For P = P_s as above, one can easily show that

µ(U; supp(P_s) − supp(P_s)) ≤ 2s·µ∗(U), where µ∗(U) = n·max_{i,j} |u_{ij}|². (4.3)

The term µ∗(U) is often referred to as the coherence of U (see, e.g., [4, Defn. 5.8] or [24]). Notice that µ∗(U) ≈ 1 for DFT matrices, which is one reason why subsampled Fourier transforms are particularly effective in (non-Bayesian) compressed sensing. Our work implies a similar conclusion in the Bayesian setting: indeed, substituting (4.2) and (4.3) into (3.5), we immediately deduce that the measurement condition for Bayesian recovery with P_s behaves like m = O(s²) up to log terms.

Remark 4.4 (Quadratic bottleneck) Once more we witness a quadratic bottleneck. In the non-Bayesian setting, one can show stable and accurate recovery of sparse vectors from O(s) measurements, up to log terms (see, e.g., [4, Cor. 13.15]). However, this requires specialized theoretical techniques that heavily leverage the structure of the set of sparse vectors. In the setting of this paper, the bottleneck arises from the generality of the approach: specifically, the fact that our main results hold for arbitrary probability distributions P.

5 Limitations and future work

We end by discussing a number of limitations and avenues for future work.

First, although our main result, Theorem 3.1, is very general, we have only applied it to a handful of cases, such as Lipschitz pushforward measures and Gaussian random matrices or subsampled orthogonal transforms. We believe many other important problems can be studied as corollaries of our main results. These include sampling with heavy-tailed vectors [48], sampling with random convolutions [63], multi-sensor acquisition problems [25], generative models augmented with sparse deviations [30], block sampling [3,18,22] with applications to practical MRI acquisition, sparse tomography [7], and deconvolution and inverse source problems [8].
We believe our framework can also be applied to various types of non-Gaussian noise, as well as problems involving sparsely-corrupted measurements [48]. We are actively investigating applying our framework to these problems.

Second, as noted in Remarks 4.2 and 4.4, there is a quadratic bottleneck when considering subsampled orthogonal transforms. In the non-Bayesian case, this can be overcome for (structured) sparse models using more technical arguments [3]. We believe similar ideas could also be exploited in the Bayesian setting. On a related note, both [14,20] also consider ReLU generative models in the non-Bayesian setting, and derive measurement conditions that do not involve the Lipschitz constant of the DNN. It is unclear whether these results can be extended to the Bayesian setting.

Third, our main result involves the density shift bound (Definition 2.5). In particular, the noise distribution must have a density. This is a rather unpleasant technical assumption. It would be interesting to see whether it could be removed through a refined analysis.

Fourth, as noted in [14], the coherence µ(U; D) arising in Theorem 3.8 may be high. In the non-Bayesian setting, this has been addressed in [2,3,15] by using a nonuniform probability distribution for drawing the rows of U, with probabilities given in terms of so-called local coherences of U relative to D.
As shown therein, this can lead to significant performance gains over sampling uniformly at random. We believe a similar approach can be considered in the Bayesian setting as a consequence of our general framework. We intend to explore this in future work.

Finally, our results in this paper are theoretical, studying the sample complexity of Bayesian recovery. We do not address the practical problem of sampling from the posterior. This is a key computational challenge in Bayesian inverse problems [11]. However, efficient techniques for doing so are emerging. See [46,66] and references therein.

Acknowledgments and Disclosure of Funding

BA acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through grant RGPIN-2021-611675.

References

[1] A. Aali, M. Arvinte, S. Kumar, and J. I. Tamir. Solving inverse problems with score-based generative priors learned from noisy data. In 2023 57th Asilomar Conference on Signals, Systems, and Computers, pages 837–843, 2023.

[2] B. Adcock, J. M. Cardenas, and N. Dexter. CS4ML: A general framework for active learning with arbitrary data based on Christoffel functions. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 19990–20037, 2023.

[3] B. Adcock, J. M. Cardenas, and N. Dexter. A unified framework for learning with nonlinear model classes from arbitrary linear samples. In International Conference on Machine Learning, 2024.

[4] B. Adcock and A. C. Hansen. Compressive Imaging: Structure, Sampling, Learning. Cambridge University Press, Cambridge, UK, 2021.

[5] B. Adcock, A. C. Hansen, C. Poon, and B. Roman. Breaking the coherence barrier: a new theory for compressed sensing. Forum Math. Sigma, 5:e4, 2017.

[6] J. Adler and O. Öktem. Deep Bayesian inversion. In Data-Driven Models in Inverse Problems, pages 359–412. De Gruyter, Berlin, Boston, 2025.

[7] G. S. Alberti, A.
Felisi, M. Santacesaria, and S. I. Trapasso. Compressed sensing for inverse problems and the sample complexity of the sparse Radon transform. J. Eur. Math. Soc. (in press), 2025.

[8] G. S. Alberti, A. Felisi, M. Santacesaria, and S. I. Trapasso. Compressed sensing for inverse problems II: applications to deconvolution, source recovery, and MRI. arXiv:2501.01929, 2025.

[9] L. Ambrosio, E. Brué, and D. Semola. Lectures on Optimal Transport. UNITEXT. Springer, Cham, Switzerland, 2nd edition, 2024.

[10] V. Antun, F. Renna, C. Poon, B. Adcock, and A. C. Hansen. On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc. Natl. Acad. Sci. USA, 117(48):30088–30095, 2020.

[11] S. Arridge, P. Maass, O. Öktem, and C.-B. Schönlieb. Solving inverse problems using data-driven models. Acta Numer., 28:1–174, 2019.

[12] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde. Model-based compressive sensing. IEEE Trans. Inform. Theory, 56(4):1982–2001, 2010.

[13] C. Belthangady and L. A. Royer. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nature Methods, 16(12):1215–1225, 2019.

[14] A. Berk, S. Brugiapaglia, B. Joshi, Y. Plan, M. Scott, and O. Yilmaz. A coherence parameter characterizing generative compressed sensing with Fourier
measurements. IEEE J. Sel. Areas Inf. Theory, 3(3):502–512, 2022.

[15] A. Berk, S. Brugiapaglia, Y. Plan, M. Scott, X. Sheng, and O. Yilmaz. Model-adapted Fourier sampling for generative compressed sensing. arXiv:2310.04984, 2023.

[16] S. Bhadra, V. A. Kelkar, F. J. Brooks, and M. A. Anastasio. On hallucinations in tomographic image reconstruction. IEEE Trans. Med. Imaging, 40(11):3249–3260, 2021.

[17] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Ann. Statist., 37(4):1705–1732, 2009.

[18] J. Bigot, C. Boyer, and P. Weiss. An analysis of block sampling strategies in compressed sensing. IEEE Trans. Inform. Theory, 62(4):2125–2139, 2016.

[19] P. Bohra, J. Pham, T.-A. Dong, and M. Unser. Bayesian inversion for nonlinear imaging models using deep generative priors. IEEE Trans. Comput. Imag., 8:1237–1249, 2023.

[20] A. Bora, A. Jalal, E. Price, and A. G. Dimakis. Compressed sensing using generative models. In International Conference on Machine Learning, pages 537–546, 2017.

[21] A. Bourrier, M. E. Davies, T. Peleg, P. Pérez, and R. Gribonval. Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems. IEEE Trans. Inform. Theory, 60(12):7928–7946, 2014.

[22] C. Boyer, J. Bigot, and P. Weiss. Compressed sensing with structured sparsity and structured acquisition. Appl. Comput. Harmon. Anal., 46(2):312–350, 2019.

[23] M. Burger and T. Roith. Learning in image reconstruction: A cautionary tale. SIAM News, 57(08), Oct 2024.

[24] E. J. Candès and Y. Plan. A probabilistic and RIPless theory of compressed sensing. IEEE Trans. Inform. Theory, 57(11):7235–7254, 2011.

[25] I.-Y. Chun and B. Adcock. Compressed sensing and parallel acquisition. IEEE Trans. Inform. Theory, 63(8):4860–4882, 2017.

[26] H. Chung and J. C. Ye. Score-based diffusion models for accelerated MRI. Medical Image Analysis, 80:102479, 2022.
[27] M. J. Colbrook, V. Antun, and A. C. Hansen. The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem. Proc. Natl. Acad. Sci. USA, 119(12):e2107151119, 2022.

[28] M. Dashti and A. M. Stuart. The Bayesian approach to inverse problems. In R. Ghanem et al., editors, Handbook of Uncertainty Quantification. Springer, 2017.

[29] M. A. Davenport, M. F. Duarte, Y. C. Eldar, and G. Kutyniok. Introduction to compressed sensing. In Y. C. Eldar and G. Kutyniok, editors, Compressed Sensing: Theory and Applications, pages 1–64. Cambridge University Press, Cambridge, UK, 2012.

[30] M. Dhar, A. Grover, and S. Ermon. Modeling sparse deviations for compressed sensing using generative models. In International Conference on Machine Learning, pages 1214–1223. PMLR, 2018.

[31] A. G. Dimakis. Deep generative models and inverse problems. In P. Grohs and G. Kutyniok, editors, Mathematical Aspects of Deep Learning, chapter 9, pages 400–421. Cambridge University Press, Cambridge, UK, 2022.

[32] S. Dirksen. Dimensionality reduction with subgaussian matrices: a unified theory. Found. Comput. Math., 16:1367–1396, 2016.

[33] M. F. Duarte and Y. C. Eldar. Structured compressed sensing: from theory to applications. IEEE Trans. Signal Process., 59(9):4053–4085, 2011.

[34]
|
https://arxiv.org/abs/2505.10630v1
|
M. Fazlyab, A. Robey, H. Hassani, M. Morari, and G. J. Pappas. Efficient and accurate estimation of Lipschitz constants for deep neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[35] B. T. Feng, J. Smith, M. Rubinstein, H. Chang, K. L. Bouman, and W. T. Freeman. Score-based diffusion models as principled priors for inverse imaging. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 10486–10497, 2023.
[36] S. Foucart and H. Rauhut. A Mathematical Introduction to Compressive Sensing. Appl. Numer. Harmon. Anal. Birkhäuser, New York, NY, 2013.
[37] M. Genzel, J. Macdonald, and M. März. Solving inverse problems with deep neural networks – robustness included? IEEE Trans. Pattern Anal. Mach. Intell., 45(1):1119–1134, 2023.
[38] C. R. Givens and R. M. Shortt. A class of Wasserstein metrics for probability distributions. Michigan Math. J., 31(2):231–240, 1984.
[39] N. M. Gottschling, V. Antun, A. C. Hansen, and B. Adcock. The troublesome kernel – on hallucinations, no free lunches and the accuracy-stability trade-off in inverse problems. SIAM Rev., 67(1):73–104, 2025.
[40] P. Hand, O. Leong, and V. Voroninski. Phase retrieval under a generative prior. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
[41] P. Hand and V. Voroninski. Global guarantees for enforcing deep generative priors by empirical risk. In S. Bubeck, V. Perchet, and P. Rigollet, editors, Proceedings of the Thirty-First Conference on Learning Theory, volume 75 of Proceedings of Machine Learning Research, pages 970–978. PMLR, 2018.
[42] D. P. Hoffman, I. Slavitt, and C. A. Fitzpatrick. The promise and peril of deep learning in microscopy. Nature Methods, 18(2):131–132, 2021.
[43] M. Holden, M. Pereyra, and K. C. Zygalakis. Bayesian imaging with data-driven priors encoded by neural networks. SIAM J. Imaging Sci., 15(2):892–924, 2022.
[44] L. Monsaingeon (https://mathoverflow.net/users/33741/leo monsaingeon). Proving the inequality involving Hausdorff distance and Wasserstein infinity distance. MathOverflow. URL: https://mathoverflow.net/q/467377 (version: 2024-03-21).
[45] Y. Huang, T. Würfl, K. Breininger, L. Liu, G. Lauritsch, and A. Maier. Some investigations on robustness of deep learning in limited angle tomography. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 145–153, 2018.
[46] A. Jalal, M. Arvinte, G. Daras, E. Price, A. G. Dimakis, and J. Tamir. Robust compressed sensing MRI with deep generative priors. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 14938–14954. Curran Associates, Inc., 2021.
[47] A. Jalal, S. Karmalkar, A. Dimakis, and E. Price. Instance-optimal compressed sensing via posterior sampling. In 38th International Conference on Machine Learning, pages 4709–4720, 2021.
[48] A. Jalal, L. Liu, A. G. Dimakis, and C. Caramanis. Robust compressed sensing using generative models. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors,
Advances in Neural Information Processing Systems, volume 33, pages 713–727. Curran Associates, Inc., 2020.
[49] Z. Kadkhodaie and E. Simoncelli. Stochastic solutions for linear inverse problems using the prior implicit in a denoiser. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 13242–13254. Curran Associates, Inc., 2021.
[50] B. Kawar, M. Elad, S. Ermon, and J. Song. Denoising diffusion restoration models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 23593–23606. Curran Associates, Inc., 2022.
[51] B. Kawar, G. Vaksman, and M. Elad. SNIPS: solving noisy inverse problems stochastically. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 21757–21769. Curran Associates, Inc., 2021.
[52] R. F. Laine, I. Arganda-Carreras, R. Henriques, and G. Jacquemet. Avoiding a replication crisis in deep-learning-based bioimage analysis. Nature Methods, 18(10):1136–1144, 2021.
[53] R. Laumont, V. D. Bortoli, A. Almansa, J. Delon, A. Durmus, and M. Pereyra. Bayesian imaging using plug & play priors: when Langevin meets Tweedie. SIAM J. Imaging Sci., 15(2):701–737, 2022.
[54] X. Liu, B. Glocker, M. M. McCradden, M. Ghassemi, A. K. Denniston, and L. Oakden-Rayner. The medical algorithmic audit. The Lancet Digital Health, 4(5):e384–e397, 2022.
[55] G. Luo, M. Blumenthal, M. Heide, and M. Uecker. Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models. Magn. Reson. Med., 90(1):295–311, 2023.
[56] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.
[57] J. N. Morshuis, S. Gatidis, M. Hein, and C. F. Baumgartner.
Adversarial robustness of MR image reconstruction under realistic perturbations. arXiv:2208.03161, 2022.
[58] M. J. Muckley, B. Riemenschneider, A. Radmanesh, S. Kim, G. Jeong, J. Ko, Y. Jun, H. Shin, D. Hwang, M. Mostapha, et al. Results of the 2020 fastMRI challenge for machine learning MR image reconstruction. IEEE Trans. Med. Imaging, 2021.
[59] C. R. Noordman, D. Yakar, J. Bosma, F. F. J. Simonis, and H. Huisman. Complexities of deep learning-based undersampled MR image reconstruction. Eur. Radiol. Exp., 7:58, 2023.
[60] G. Ongie, A. Jalal, C. A. Metzler, R. G. Baraniuk, A. G. Dimakis, and R. Willett. Deep learning techniques for inverse problems in imaging. IEEE J. Sel. Areas Inf. Theory, 1(1):39–56, 2020.
[61] A. Raj, Y. Bresler, and B. Li. Improving robustness of deep-learning-based image reconstruction. In H. Daumé III and A. Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 7932–7942. PMLR, 2020.
[62] A. J. Reader and B. Pan. AI for PET image reconstruction. Brit. J. Radiol., 96(1150):20230292, 2023.
[63] J. Romberg. Compressive sensing by random convolution. SIAM J. Imaging Sci., 2(4):1098–1128, 2009.
[64] J. Scarlett, R. Heckel, M. R. D. Rodrigues, P. Hand, and
Y. C. Eldar. Theoretical perspectives on deep learning methods in inverse problems. IEEE J. Sel. Areas Inf. Theory, 3(3):433–453, 2022.
[65] V. Shah and C. Hegde. Solving linear inverse problems using GAN priors: An algorithm with provable guarantees. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4609–4613. IEEE, 2018.
[66] Y. Song, L. Shen, L. Xing, and S. Ermon. Solving inverse problems in medical imaging with score-based generative models. In International Conference on Learning Representations, 2022.
[67] A. M. Stuart. Inverse problems: a Bayesian perspective. Acta Numer., 19:451–559, 2010.
[68] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In Proceedings of the International Conference on Learning Representations, 2014.
[69] Y. Traonmilin and R. Gribonval. Stable recovery of low-dimensional cones in Hilbert spaces: one RIP to rule them all. Appl. Comput. Harmon. Anal., 45(1):170–205, 2018.
[70] G. Varoquaux and V. Cheplygina. Machine learning for medical imaging: methodological failures and recommendations for the future. NPJ Digital Medicine, 5(1):1–8, 2022.
[71] A. Virmaux and K. Scaman. Lipschitz regularity of deep neural networks: analysis and efficient estimation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
[72] E. Wu, K. Wu, R. Daneshjou, D. Ouyang, D. E. Ho, and J. Zou. How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nature Medicine, 27(4):582–584, 2021.
[73] T. Yu, T. Hilbert, G. G. Piredda, A. Joseph, G. Bonanno, S. Zenkhri, P. Omoumi, M. B. Cuadra, E. J. Canales-Rodríguez, T. Kober, et al.
Validation and generalizability of self-supervised image reconstruction methods for undersampled MRI. arXiv:2201.12535, 2022.
[74] M. Zach, F. Knoll, and T. Pock. Stable deep MRI reconstructions using generative priors. IEEE Trans. Med. Imag., 42(12):3817–3831, 2023.
[75] C. Zhang, J. Jia, B. Yaman, S. Moeller, S. Liu, M. Hong, and M. Akçakaya. Instabilities in conventional multi-coil MRI reconstruction with small adversarial perturbations. In 2021 55th Asilomar Conference on Signals, Systems, and Computers, pages 895–899, 2021.
[76] Z. Zhao, J. C. Ye, and Y. Bresler. Generative models for inverse imaging problems: from mathematical foundations to physics-driven applications. IEEE Signal Process. Mag., 40(1):148–163, 2023.

A Covering number estimates and the proofs of Propositions 4.1 and 4.3

The proof of Proposition 4.1 relies on the following two lemmas.

Lemma A.1 (Approximate covering number under a Lipschitz pushforward map). Let $G : \mathbb{R}^k \to \mathbb{R}^n$ be Lipschitz with constant $L \ge 0$, i.e., $\|G(x) - G(z)\| \le L\|x - z\|$ for all $x, z \in \mathbb{R}^k$, and define $\mathcal{P} = G_\sharp \gamma$, where $\gamma$ is any probability distribution on $\mathbb{R}^k$. Then
$$\mathrm{Cov}_{\eta,\delta}(\mathcal{P}) \le \mathrm{Cov}_{\eta/L,\delta}(\gamma), \qquad \forall \delta, \eta \ge 0.$$

Proof. Let $\{x_i\}_{i=1}^k \subseteq \mathrm{supp}(\gamma)$ and define $z_i = G(x_i) \in \mathrm{supp}(G_\sharp \gamma)$ for $i = 1, \ldots, k$. Let $z \in G(B_{\eta/L}(x_i))$ and write $z = G(x)$ for some $x \in B_{\eta/L}(x_i)$. Then
$$\|z - z_i\| = \|G(x) - G(x_i)\| \le L\|x - x_i\| \le \eta.$$
Hence $z \in B_\eta(z_i)$. Since $z$ was arbitrary, we deduce that $G(B_{\eta/L}(x_i)) \subseteq B_\eta(z_i)$. It follows that $B_{\eta/L}(x_i) \subseteq G^{-1}(B_\eta(z_i))$. Now suppose that $\gamma\big[\bigcup_{i=1}^k B_{\eta/L}(x_i)\big] \ge 1 - \delta$. Then, by definition of the
pushforward measure $G_\sharp \gamma$,
$$G_\sharp \gamma\Big[\bigcup_{i=1}^k B_\eta(z_i)\Big] = \gamma\Big[\bigcup_{i=1}^k G^{-1}(B_\eta(z_i))\Big] \ge \gamma\Big[\bigcup_{i=1}^k B_{\eta/L}(x_i)\Big] \ge 1 - \delta.$$
This gives the result.

Lemma A.2 (Approximate covering number of a normal distribution). Let $\mathcal{P} = N(0, \sigma^2 I)$ on $\mathbb{R}^n$. Then its approximate covering number (Definition 2.1) satisfies
$$\mathrm{Cov}_{\eta,\delta}(\mathcal{P}) \le \Big(1 + \frac{2\sqrt{n}\sigma t}{\eta}\Big)^n, \qquad \text{where } t = 1 + \sqrt{\tfrac{2}{n}\log(1/\delta)}.$$

Proof. Observe that, for $t \ge 0$,
$$\mathcal{P}(B^c_{\sqrt{n}\sigma t}) = \mathbb{P}(X \ge n t^2) \le (t^2 e^{1-t^2})^{n/2},$$
where $X \sim \chi^2_n$ is a chi-squared random variable, and the inequality follows from a standard Chernoff bound. Now $t^2 \le e^{2t}$ gives that $\mathcal{P}(B^c_{\sqrt{n}\sigma t}) \le (e^{-(t-1)^2})^{n/2}$. Now set $t = 1 + \sqrt{\tfrac{2}{n}\log(1/\delta)}$ so that $\mathcal{P}(B^c_{\sqrt{n}\sigma t}) \le \delta$. Hence, we have shown that $\mathrm{Cov}_{\eta,\delta}(\mathcal{P}) \le \mathrm{Cov}_\eta(B_{\sqrt{n}\sigma t})$, where $\mathrm{Cov}_\eta$ is the classical covering number of a set, i.e.,
$$\mathrm{Cov}_\eta(A) = \min\Big(k : \exists \{x_i\}_{i=1}^k \subseteq A,\ A \subseteq \bigcup_{i=1}^k B_\eta(x_i)\Big).$$
Using standard properties of covering numbers (see, e.g., [4, Lem. 13.22]), we get
$$\mathrm{Cov}_\eta(B_{\sqrt{n}\sigma t}) = \mathrm{Cov}_{\eta/(\sqrt{n}\sigma t)}(B_1) \le \Big(1 + \frac{2\sqrt{n}\sigma t}{\eta}\Big)^n,$$
as required.

Proof of Proposition 4.1. By Lemma A.1, $\mathrm{Cov}_{\eta,\delta}(\mathcal{P}) \le \mathrm{Cov}_{\eta/L,\delta}(N(0, I))$. The result now follows from Lemma A.2.

To prove Proposition 4.3, we first require the following lemma.

Lemma A.3 (Approximate covering number of a mixture). Let $\mathcal{P} = \sum_{i=1}^r p_i \mathcal{P}_i$ be a mixture of probability distributions $\mathcal{P}_i$ on $\mathbb{R}^n$, where $p_i \ge 0$ for all $i$ and $\sum_{i=1}^r p_i = 1$. Then
$$\mathrm{Cov}_{\eta,\delta}(\mathcal{P}) \le \sum_{i=1}^r \mathrm{Cov}_{\eta,\delta}(\mathcal{P}_i).$$

Proof. For each $i = 1, \ldots, r$, let $\{x^{(i)}_j\}_{j=1}^{k_i} \subseteq \mathbb{R}^n$, and, in particular, $x^{(i)}_j \in \mathrm{supp}(\mathcal{P}_i)$, be such that
$$\mathcal{P}_i\Big[\bigcup_{j=1}^{k_i} B_\eta(x^{(i)}_j)\Big] \ge 1 - \delta.$$
Then
$$\mathcal{P}\Big[\bigcup_{i=1}^r \bigcup_{j=1}^{k_i} B_\eta(x^{(i)}_j)\Big] = \sum_{i=1}^r p_i\, \mathcal{P}_i\Big[\bigcup_{i'=1}^r \bigcup_{j=1}^{k_{i'}} B_\eta(x^{(i')}_j)\Big] \ge \sum_{i=1}^r p_i\, \mathcal{P}_i\Big[\bigcup_{j=1}^{k_i} B_\eta(x^{(i)}_j)\Big] \ge \sum_{i=1}^r p_i (1 - \delta) = 1 - \delta.$$
Notice that $\mathrm{supp}(\mathcal{P}_i) \subseteq \mathrm{supp}(\mathcal{P})$; therefore, $x^{(i)}_j \in \mathrm{supp}(\mathcal{P})$. The result now follows.

Proof of Proposition 4.3. Let $\mathcal{S} = \{S : S \subseteq \{1, \ldots, n\},\ |S| = s\}$. Then we can write $\mathcal{P}_s$ as the mixture
$$\mathcal{P}_s = \frac{1}{|\mathcal{S}|} \sum_{S \in \mathcal{S}} \mathcal{P}_S,$$
where $\mathcal{P}_S$ is defined as follows: $x \sim \mathcal{P}_S$ if $x_i = 0$ for $i \notin S$ and, for $i \in S$, $x_i$ is drawn independently from the standard normal distribution on $\mathbb{R}$. Notice that $\mathcal{P}_S = G_\sharp \gamma$, where $\gamma = N(0, I)$ is the standard multivariate normal distribution on $\mathbb{R}^s$ and $G : \mathbb{R}^s \to \mathbb{R}^n$ is a zero-padding map. The map $G$ is Lipschitz with constant $L = 1$.
Hence, by Lemmas A.1 and A.2,
$$\mathrm{Cov}_{\eta,\delta}(\mathcal{P}_S) \le \Big(1 + \frac{2\sqrt{s}\, t}{\eta}\Big)^s, \qquad t = 1 + \sqrt{\tfrac{2}{s}\log(1/\delta)}.$$
We now apply Lemma A.3 and the fact that $|\mathcal{S}| = \binom{n}{s} \le \big(\tfrac{en}{s}\big)^s$, the latter being a standard bound, to obtain
$$\mathrm{Cov}_{\eta,\delta}(\mathcal{P}_s) \le \Big(\frac{en}{s}\Big)^s \Big(1 + \frac{2\sqrt{s}\, t}{\eta}\Big)^s, \qquad t = 1 + \sqrt{\tfrac{2}{s}\log(1/\delta)}.$$
Taking logarithms gives the result.

B Concentration inequalities and the proofs of Theorems 3.5 and 3.8

We now aim to prove Theorems 3.5 and 3.8. To do this, we first derive concentration inequalities for subgaussian random matrices and subsampled orthogonal transforms.

B.1 Gaussian concentration and density shift bounds

Lemma B.1 (Concentration and density shift bounds for Gaussian noise). Let $\mathcal{E} = N(0, \tfrac{\sigma^2}{m} I)$. Then the upper concentration bound $D_{\mathrm{upp}}(t; \mathcal{E})$ (Definition 2.4) can be taken as
$$D_{\mathrm{upp}}(t; \mathcal{E}) = \Big(\frac{t^2}{\sigma^2}\, e^{1 - t^2/\sigma^2}\Big)^{m/2}, \qquad \forall t > \sigma,$$
and the density shift bound $D_{\mathrm{shift}}(\varepsilon, \tau; \mathcal{E})$ (Definition 2.5) can be taken as
$$D_{\mathrm{shift}}(\varepsilon, \tau; \mathcal{E}) = \exp\Big(\frac{m(2\tau + \varepsilon)}{2\sigma^2}\,\varepsilon\Big), \qquad \forall \varepsilon, \tau \ge 0.$$

Proof. Write $e \sim \mathcal{E}$ as $e = \tfrac{\sigma}{\sqrt{m}} n$, where $n \sim N(0, I)$. Then
$$\mathbb{P}(\|e\| \ge t) = \mathbb{P}(\|n\|^2 \ge t^2 m/\sigma^2) = \mathbb{P}(X \ge t^2 m/\sigma^2),$$
where $X = \|n\|^2 \sim \chi^2_m$ is a chi-squared random variable with $m$ degrees of freedom. Using a standard Chernoff bound once more, we have $\mathbb{P}(X \ge zm) \le (z e^{1-z})^{m/2}$ for any $z > 1$. Setting $z = \tfrac{t^2}{\sigma^2}$, we have
$$\mathbb{P}(\|e\| \ge t) \le \Big(\frac{t^2}{\sigma^2}\, e^{1 - t^2/\sigma^2}\Big)^{m/2},$$
which gives the first result. For the second result, we recall
that $\mathcal{E}$ has density
$$p_{\mathcal{E}}(e) = (2\pi\sigma^2/m)^{-m/2} \exp\Big(-\frac{m}{2\sigma^2}\|e\|^2\Big).$$
Therefore
$$\frac{p_{\mathcal{E}}(u)}{p_{\mathcal{E}}(v)} = \exp\Big(\frac{m}{2\sigma^2}(\|v\| - \|u\|)(\|v\| + \|u\|)\Big).$$
Now suppose that $\|u\| \le \tau$ and $\|u - v\| \le \varepsilon$. Then
$$\frac{p_{\mathcal{E}}(u)}{p_{\mathcal{E}}(v)} \le \exp\Big(\frac{m(2\tau + \varepsilon)}{2\sigma^2}\,\varepsilon\Big).$$
Hence $D_{\mathrm{shift}}(\varepsilon, \tau; \mathcal{E}) \le \exp\big(\frac{m(2\tau + \varepsilon)}{2\sigma^2}\varepsilon\big)$, which gives the second result.

B.2 Subgaussian concentration inequalities

Lemma B.2 (Lower and upper concentration bounds for subgaussian random matrices). Let $\mathcal{A}$ be a distribution of subgaussian random matrices with parameters $\beta, \kappa > 0$ (Definition 3.4). Then the lower and upper concentration bounds for $\mathcal{A}$ (Definition 2.3) can be taken as
$$C_{\mathrm{upp}}(t; \mathcal{A}, \mathbb{R}^n) = C_{\mathrm{low}}(1/t; \mathcal{A}, \mathbb{R}^n) = 2\exp(-c(t, \beta, \kappa) m)$$
for any $t > 1$, where $c(t, \beta, \kappa) > 0$ depends on $t, \beta, \kappa$ only.

Proof. Let $x \in \mathbb{R}^n$ and observe that
$$\mathbb{P}(\|Ax\| \ge t\|x\|) \le \mathbb{P}\big(\big|\|Ax\|^2 - \|x\|^2\big| \ge (t^2 - 1)\|x\|^2\big), \qquad \mathbb{P}(\|Ax\| \le t^{-1}\|x\|) \le \mathbb{P}\big(\big|\|Ax\|^2 - \|x\|^2\big| \ge (1 - t^{-2})\|x\|^2\big). \tag{B.1}$$
We now use [36, Lem. 9.8]. Note that this result only considers a bound of the form $\mathbb{P}(|\|Ax\|^2 - \|x\|^2| \ge s\|x\|^2)$ for $s \in (0, 1)$. But the proof straightforwardly extends to $s > 0$.

B.3 Concentration inequalities for randomly-subsampled orthogonal transforms

Lemma B.3 (Concentration bounds for randomly-subsampled orthogonal transforms). Let $D \subseteq \mathbb{R}^n$ and $\mathcal{A}$ be a distribution of randomly-subsampled orthogonal transforms based on a matrix $U$ (Definition 3.6). Then the lower and upper concentration bounds for $\mathcal{A}$ (Definition 2.3) can be taken as
$$C_{\mathrm{upp}}(t; \mathcal{A}, D) = C_{\mathrm{low}}(1/t; \mathcal{A}, D) = 2\exp\Big(-\frac{m\, c(t)}{\mu(U; D)}\Big)$$
for any $t > 1$, where $\mu(U; D)$ is as in Definition 3.7 and $c(t) > 0$ depends on $t$ only.

Proof. Due to (B.1), it suffices to bound $\mathbb{P}\big(\big|\|Ax\|^2 - \|x\|^2\big| \ge s\|x\|^2\big)$ for $s > 0$. The result uses Bernstein's inequality for bounded random variables (see, e.g., [4, Thm. 12.18]). Let $x \in D$. By definition of $\mathcal{A}$ and the fact that $U$ is orthogonal, we can write
$$\|Ax\|^2 - \|x\|^2 = \sum_{i=1}^n \Big(\frac{n}{m}\,\mathbb{I}_{X_i = 1} - 1\Big)|\langle u_i, x\rangle|^2 =: \sum_{i=1}^n Z_i.$$
Notice that the random variables $Z_i$ are independent, with $\mathbb{E}(Z_i) = 0$. We also have
$$|Z_i| \le \frac{n}{m}|\langle u_i, x\rangle|^2 \le \frac{\mu(U; D)}{m}\|x\|^2 =: K$$
and
$$\sum_{i=1}^n \mathbb{E}|Z_i|^2 \le \sum_{i=1}^n \frac{n^2 |\langle u_i, x\rangle|^4}{m^2}\,\mathbb{E}(\mathbb{I}^2_{X_i = 1}) \le K \sum_{i=1}^n \frac{n |\langle u_i, x\rangle|^2}{m} \cdot \frac{m}{n} = K\|x\|^2 =: \sigma^2.$$
Therefore, by Bernstein's inequality,
$$\mathbb{P}(|\|Ax\|^2 - \|x\|^2| \ge s\|x\|^2) \le 2\exp\Big(-\frac{s^2\|x\|^4/2}{\sigma^2 + K s\|x\|^2/3}\Big) = 2\exp\Big(-\frac{m s^2/2}{\mu(U; D)(1 + s/3)}\Big)$$
for any $s > 0$ and $x \in D$. The result now follows.

Lemma B.4 (Absolute concentration bounds for subsampled orthogonal transforms). Let $D \subseteq \mathbb{R}^n$ and $\mathcal{A}$ be a distribution of randomly-subsampled orthogonal transforms based on a matrix $U$ (Definition 3.6). Then $\|A\| \le \sqrt{n/m}$ a.s. for $A \sim \mathcal{A}$, and consequently the absolute concentration bound for $\mathcal{A}$ (Definition 2.3) can be taken as $C_{\mathrm{abs}}(s, t; \mathcal{A}, D) = 0$ for any $s \ge 0$ and $t \ge s\sqrt{n/m}$.

Proof. Recall that $A$ consists of $m$ rows of an orthogonal matrix $U$ multiplied by the scalar $\sqrt{n/m}$. Hence $\|A\| \le \sqrt{n/m}\,\|U\| = \sqrt{n/m}$. Now let $x \in D$ with $\|x\| \le s$. Then $\|Ax\| \le \|A\|\|x\| \le \|A\| s \le \sqrt{n/m}\, s \le t$, meaning that $\mathbb{P}(\|Ax\| > t) = 0$. This gives the result.

B.4 Proofs of Theorems 3.5 and 3.8

Proof of Theorem 3.5. Let $p = \mathbb{P}[\|x^* - \hat{x}\| \ge 34(\eta + \sigma)]$. We use Theorem 3.1 with $c = 32$, $c' = 2$ and $t = 2$. Let $\varepsilon' = \varepsilon/\delta^{1/p}$. Then Theorem 3.1 gives
$$p \lesssim \delta + C_{\mathrm{abs}}(\varepsilon', 2\varepsilon'; \mathcal{A}, \mathbb{R}^n) + D_{\mathrm{upp}}(2\sigma; \mathcal{E}) + 2 D_{\mathrm{shift}}(2\varepsilon', 2\sigma; \mathcal{E})\, e^k \big[C_{\mathrm{low}}(1/2; \mathcal{A}, \mathbb{R}^n) + C_{\mathrm{upp}}(2; \mathcal{A}, \mathbb{R}^n) + 2 D_{\mathrm{upp}}(2\sigma; \mathcal{E})\big].$$
Consider $C_{\mathrm{abs}}(\varepsilon', 2\varepsilon'; \mathcal{A}, \mathbb{R}^n)$. If $x \in \mathbb{R}^n$ with $\|x\| \le s$, then $\mathbb{P}(\|Ax\| > t) \le \mathbb{P}(\|Ax\| > (t/s)\|x\|)$. Hence, in this case, we may take
$$C_{\mathrm{abs}}(\varepsilon', 2\varepsilon'; \mathcal{A}, \mathbb{R}^n) = C_{\mathrm{upp}}(2; \mathcal{A}, \mathbb{R}^n). \tag{B.2}$$
Now by Lemma B.2, we have that $C_{\mathrm{low}}(1/2; \mathcal{A}, \mathbb{R}^n) = C_{\mathrm{upp}}(2; \mathcal{A}, \mathbb{R}^n) = \exp(-c(\beta, \kappa) m)$, where $c(\beta, \kappa) > 0$ depends on $\beta, \kappa$ only. Also, by Lemma B.1, we have $D_{\mathrm{upp}}(2\sigma; \mathcal{E}) = (4 e^{-3})^{m/2} = \exp($
−cm), for some universal constant $c > 0$, and
$$D_{\mathrm{shift}}(2\varepsilon', 2\sigma; \mathcal{E}) = \exp\Big(\frac{m(4\sigma + 2\varepsilon')}{2\sigma^2}\, 2\varepsilon'\Big) \lesssim 1,$$
where we used the facts that $m \ge 1$ and $\sigma \ge \varepsilon/\delta^{1/p}$. We deduce that
$$p \lesssim \delta + \exp(k - c(\beta, \kappa) m)$$
for a possibly different constant $c(\beta, \kappa) > 0$. The condition (3.4) on $m$ and (3.1) now give that $p \lesssim \delta$, as required.

Proof of Theorem 3.8. Let $p = \mathbb{P}[\|x^* - \hat{x}\| \ge 34(\eta + \sigma)]$. In this case, the forward operator $A$ satisfies $\|A\| \le \sqrt{n/m}$ (Lemma B.4). Hence we may apply Theorem 1.1 with $\theta = \sqrt{n/m}$ and $d = 2$ to obtain
$$p \lesssim \delta + \mathrm{Cov}_{\eta,\delta}(\mathcal{P})\big[C_{\mathrm{low}}(1/2; \mathcal{A}, D) + C_{\mathrm{upp}}(2; \mathcal{A}, D) + \exp(-cm)\big]$$
for some universal constant $c > 0$, where $D = \mathrm{supp}(\mathcal{P}) - \mathrm{supp}(\mathcal{P})$. Lemma B.3 now gives that
$$p \lesssim \delta + \mathrm{Cov}_{\eta,\delta}(\mathcal{P})\Big[\exp\Big(-\frac{cm}{\mu(U; D)}\Big) + \exp(-cm)\Big]$$
for a possibly different constant $c > 0$. The result now follows from the condition (3.5) on $m$.

C Proofs of Theorems 1.1 and 3.1

To prove these theorems, we first require some additional background on couplings, followed by a series of technical lemmas.

C.1 Background on couplings

For a number of our results, we require some background on couplings. We first recall some notation. Given probability spaces $(X, \mathcal{F}_1, \mu)$, $(Y, \mathcal{F}_2, \nu)$, we write $\Gamma = \Gamma_{\mu,\nu}$ for the set of couplings, i.e., probability measures on the product space $(X \times Y, \sigma(\mathcal{F}_1 \otimes \mathcal{F}_2))$ whose marginals are $\mu$ and $\nu$, respectively. For convenience, we write $\pi_1 : X \times Y \to X$ and $\pi_2 : X \times Y \to Y$ for the projections $\pi_1(x, y) = x$ and likewise $\pi_2(x, y) = y$. In particular, for any coupling $\gamma$ we have $\pi_{1\sharp}\gamma = \mu$ and $\pi_{2\sharp}\gamma = \nu$, where $\sharp$ denotes the pushforward operation. As an immediate consequence, we observe that for any measurable function $\varphi : X \to \mathbb{R}$,
$$\int_X \varphi(x)\,\mathrm{d}\mu(x) = \int_X \varphi(x)\,\mathrm{d}\pi_{1\sharp}\gamma(x) = \int_{X \times Y} \varphi(x)\,\mathrm{d}\gamma(x, y). \tag{C.1}$$
Given a cost function $c : X \times Y \to [0, \infty)$, the Wasserstein-$p$ metric is defined as
$$W_p(\mu, \nu) = \inf_{\gamma \in \Gamma}\Big(\int_{X \times Y} c(x, y)^p\,\mathrm{d}\gamma(x, y)\Big)^{1/p}$$
for $1 \le p < \infty$ and
$$W_\infty(\mu, \nu) = \inf_{\gamma \in \Gamma}\Big(\operatorname*{ess\,sup}_\gamma c(x, y)\Big).$$
We say that $\gamma \in \Gamma$ is a $W_p$-optimal coupling of $\mu$ and $\nu$ if
$$\Big(\int_{X \times Y} c(x, y)^p\,\mathrm{d}\gamma(x, y)\Big)^{1/p} = W_p(\mu, \nu)$$
for $1 \le p < \infty$, or $\operatorname*{ess\,sup}_\gamma c(x, y) = W_\infty(\mu, \nu)$ when $p = \infty$.
Note that such a coupling exists whenever $X, Y$ are Polish spaces and the cost function is lower semicontinuous [38]. In our case, we generally work with Euclidean spaces with the cost function being the Euclidean norm, hence both conditions are satisfied. For convenience, if $\gamma$ is a probability measure on the product space $(X \times Y, \sigma(\mathcal{F}_1 \otimes \mathcal{F}_2))$, we will often write $\gamma(E_1, E_2)$ instead of $\gamma(E_1 \times E_2)$ for $E_i \in \mathcal{F}_i$, $i = 1, 2$. Moreover, if $x \in X$ is a singleton, we write $\gamma(x, E_2)$ for $\gamma(\{x\} \times E_2)$ and likewise for $\gamma(E_1, y)$. We now need several lemmas on couplings.

Lemma C.1. Suppose that $(X, \mathcal{F}_1, \mu)$, $(Y, \mathcal{F}_2, \nu)$ are Borel probability spaces, and let $\gamma$ be a coupling of $\mu, \nu$ on the space $(X \times Y, \sigma(\mathcal{F}_1 \otimes \mathcal{F}_2))$. Then $\mathrm{supp}(\gamma) \subseteq \mathrm{supp}(\mu) \times \mathrm{supp}(\nu)$.

Proof. Let $(x, y) \in \mathrm{supp}(\gamma)$. Then $\gamma(U_{x,y}) > 0$ for every open set $U_{x,y} \subseteq X \times Y$ that contains $(x, y)$. Now, to show that $(x, y) \in \mathrm{supp}(\mu) \times \mathrm{supp}(\nu)$, we show that $x \in \mathrm{supp}(\mu)$ and $y \in \mathrm{supp}(\nu)$. Let $U_x \subseteq X$ be open with $x \in U_x$. By definition, $\mu(U_x) = \gamma(U_x \times Y)$. Since $U_x \times Y$ is open and contains $(x, y)$, it follows that $\gamma(U_x \times Y) > 0$. Since $U_x$ was arbitrary, we deduce that $x \in \mathrm{supp}(\mu)$. The argument that $y \in \mathrm{supp}(\nu)$ is identical.

Lemma C.2. Let $X$ be a Polish space with a complete metric $d$. Let $\mu, \nu$ be Borel probability measures on $X$. Let $d_H$ be the Hausdorff
metric with respect to $d$ and $W_\infty$ be the Wasserstein-$\infty$ metric with cost function $d$. Then
$$d_H(\mathrm{supp}(\mu), \mathrm{supp}(\nu)) \le W_\infty(\mu, \nu).$$
In particular, $\mathrm{supp}(\mu) \subseteq B_\eta(\mathrm{supp}(\nu))$ for any $\eta \ge W_\infty(\mu, \nu)$.

Proof. We follow the arguments presented in [44]. Since
$$d_H(\mathrm{supp}(\mu), \mathrm{supp}(\nu)) = \max\Big(\sup_{x \in \mathrm{supp}(\nu)} d(x, \mathrm{supp}(\mu)),\ \sup_{y \in \mathrm{supp}(\mu)} d(y, \mathrm{supp}(\nu))\Big),$$
we may, without loss of generality, assume that the maximum is achieved by $\sup_{x \in \mathrm{supp}(\nu)} d(x, \mathrm{supp}(\mu)) =: D$. Take a sequence $\{x_n\}_{n \in \mathbb{N}} \subseteq \mathrm{supp}(\nu)$ such that $D_n := d(x_n, \mathrm{supp}(\mu)) \to D$. Since $x_n \in \mathrm{supp}(\nu)$, for any $\varepsilon > 0$ we have $\nu(B_\varepsilon(x_n)) > 0$. Note that $B_\varepsilon(x_n)$ is measurable, as $X$ is Polish and the measures are Borel. For each $n \in \mathbb{N}$, define $\varepsilon_n = \frac{1}{n} D_n$. We show that $d(y, x) \ge D_n(1 - \frac{1}{n})$ for all $y \in \mathrm{supp}(\mu)$ and $x \in B_{\varepsilon_n}(x_n)$. Indeed, since $d(x_n, y) \ge D_n$, the triangle inequality gives
$$d(y, x) \ge d(x_n, y) - d(x_n, x) \ge D_n - D_n/n = D_n(1 - 1/n).$$
Notice also that $D_n(1 - \frac{1}{n}) \le D$ and converges to $D$ as $n \to \infty$. This implies that
$$A := \{(y', x') : d(y', x') \ge D_n(1 - 1/n)\} \supseteq \mathrm{supp}(\mu) \times B_{\varepsilon_n}(x_n).$$
Now consider any coupling $\gamma \in \Gamma_{\mu,\nu}$. Since $\gamma((X \setminus \mathrm{supp}(\mu)) \times X) = 0$, we have
$$\gamma(A) \ge \gamma(\mathrm{supp}(\mu) \times B_{\varepsilon_n}(x_n)) = \gamma(X \times B_{\varepsilon_n}(x_n)) = \nu(B_{\varepsilon_n}(x_n)) > 0.$$
Therefore $\operatorname*{ess\,sup}_\gamma d(y, x) \ge D_n(1 - 1/n)$. Now, since $D_n(1 - \frac{1}{n}) \to D$, we have $\operatorname*{ess\,sup}_\gamma d(y, x) \ge D$. This holds for any coupling, therefore the result follows.

When working with a coupling between a finitely-supported distribution and a continuous distribution, the following lemma is often useful.

Lemma C.3. Let $(X, \mathcal{F}_1, \mu)$, $(Y, \mathcal{F}_2, \nu)$ be probability spaces, such that $\nu$ is finitely supported on a set $S \subseteq Y$. Let $\gamma$ be a coupling of $\mu, \nu$ and $E \subseteq X \times Y$ be $\gamma$-measurable. Write $E_y = \{x : (x, y) \in E\}$ for the slice of $E$ at $y \in Y$. Then
$$\gamma(E) = \sum_{s \in S,\, s \in \pi_2(E)} \gamma(E_s \times \{s\}).$$

Proof. Write
$$\gamma(E) = \sum_{s \in S,\, s \in \pi_2(E)} \gamma(E_s \times \{s\}) + \gamma(\hat{E}),$$
where $\hat{E} = E \setminus \bigcup_{s \in S,\, s \in \pi_2(E)} (E_s \times \{s\})$. It suffices to show that $\gamma(\hat{E}) = 0$. Since $F \subseteq \pi_2^{-1}(\pi_2(F))$ for any set $F$, we have
$$\gamma(\hat{E}) \le \gamma(\pi_2^{-1}(\pi_2(\hat{E}))) = \nu(\pi_2(\hat{E})) = \nu(\pi_2(\hat{E}) \cap S).$$
But $\hat{E} = \{(x, y) \in E : y \notin S\}$ and therefore $\pi_2(\hat{E}) \cap S = \emptyset$. The result now follows.

C.2 Technical lemmas

C.2.1 Separation lemma

The lemma considers a scenario where two random variables are drawn from a mixture of $k$ probability distributions.
The second random variable is conditioned on the draw of the first. It then considers the probability that the two random variables are drawn from different distributions in the mixture, bounding this in terms of their total variation (TV) distance. It generalizes [47, Lem. 3.1].

Lemma C.4 (Separation lemma). Let $\mathcal{H}_1, \ldots, \mathcal{H}_k$ be Borel probability measures and consider the mixture $\mathcal{H} = \sum_{i=1}^k a_i \mathcal{H}_i$. Let $y^* \sim \mathcal{H}$ and $\hat{y} \sim \mathcal{H}(\cdot\,|\,y^*)$. Then
$$\mathbb{P}[y^* \sim \mathcal{H}_i,\ \hat{y} \sim \mathcal{H}_j] \le 1 - \mathrm{TV}(\mathcal{H}_i, \mathcal{H}_j).$$
To clarify, in this lemma and elsewhere we use the notation $y^* \sim \mathcal{H}_i$ (and similar) to mean the event that $y^*$ is drawn from the $i$th distribution $\mathcal{H}_i$. We also write $\mathcal{H}(\cdot\,|\,y^*)$ for the posterior distribution.

Proof. Note that, conditional on $y^*$, $\hat{y}$ is independent of the event $y^* \sim \mathcal{H}_i$. Hence, given $y^*$, $\hat{y}$ follows the mixture distribution $\sum_{i=1}^k \mathbb{P}(y^* \sim \mathcal{H}_i\,|\,y^*)\,\mathcal{H}_i(\cdot)$, where the $\mathbb{P}(y^* \sim \mathcal{H}_i\,|\,y^*)$ are the posterior weights. Note that if the $\mathcal{H}_i$ have densities $h_i$ with respect to some measure, these weights are given by
$$\mathbb{P}(y^* \sim \mathcal{H}_i\,|\,y^*) = \frac{a_i h_i(y^*)}{\sum_{j=1}^k a_j h_j(y^*)}. \tag{C.2}$$
We now write
$$p := \mathbb{P}[y^* \sim \mathcal{H}_i,\ \hat{y} \sim \mathcal{H}_j] = \mathbb{P}[y^* \sim \mathcal{H}_i]\,\mathbb{E}[\mathbb{P}(\hat{y} \sim \mathcal{H}_j\,|\,y^*)\,|\,y^* \sim \mathcal{H}_i] = a_i\,\mathbb{E}[\mathbb{P}(\hat{y} \sim \mathcal{H}_j\,|\,y^*)\,|\,y^* \sim \mathcal{H}_i].$$
Since the conditional distribution of $y^*$ given the event $y^* \sim \mathcal{H}_i$ is $\mathcal{H}_i$, we have
$$p = a_i\,\mathbb{E}[\mathbb{P}(\hat{y} \sim \mathcal{H}_j\,|\,y^*)\,|\,y^* \sim \mathcal{H}_i] = a_i \int \mathbb{P}(\hat{y} \sim \mathcal{H}_j\,|\,y^*)\,\mathrm{d}\mathcal{H}_i(y^*).$$
Now, because of
the mixture property, $\mathcal{H}_i \ll \mathcal{H}$ and therefore its Radon–Nikodym derivative $h_i = \frac{\mathrm{d}\mathcal{H}_i}{\mathrm{d}\mathcal{H}}$ exists. This means we may write
$$p = a_i \int \mathbb{P}(\hat{y} \sim \mathcal{H}_j\,|\,y^*)\, h_i(y^*)\,\mathrm{d}\mathcal{H}(y^*).$$
By definition, we have $\mathbb{P}(\hat{y} \sim \mathcal{H}_j\,|\,y^*) = \mathbb{P}(y^* \sim \mathcal{H}_j\,|\,y^*)$ and, using (C.2), we deduce that
$$p = \int \frac{a_i a_j h_i(y^*) h_j(y^*)}{\sum_{l=1}^k a_l h_l(y^*)}\,\mathrm{d}\mathcal{H}(y^*).$$
We now write
$$p = \int \frac{a_i a_j h_i(y^*) h_j(y^*)}{a_i h_i(y^*) + a_j h_j(y^*) + \sum_{l \neq i,j} a_l h_l(y^*)}\,\mathrm{d}\mathcal{H}(y^*) \le \int \frac{a_i a_j h_i(y^*) h_j(y^*)}{a_i h_i(y^*) + a_j h_j(y^*)}\,\mathrm{d}\mathcal{H}(y^*) \le \int \frac{a_i a_j h_i(y^*) h_j(y^*)}{\max\{a_i h_i(y^*), a_j h_j(y^*)\}}\,\mathrm{d}\mathcal{H}(y^*) = \int \min\{a_i h_i(y^*), a_j h_j(y^*)\}\,\mathrm{d}\mathcal{H}(y^*).$$
Since $a_i, a_j \le 1$, this yields
$$p \le \int \min\{h_i(y^*), h_j(y^*)\}\,\mathrm{d}\mathcal{H}(y^*) = 1 - \int \frac{1}{2}\big(h_i(y^*) + h_j(y^*)\big) - \min\{h_i(y^*), h_j(y^*)\}\,\mathrm{d}\mathcal{H}(y^*) = 1 - \int \frac{1}{2}\big|h_i(y^*) - h_j(y^*)\big|\,\mathrm{d}\mathcal{H}(y^*) = 1 - \mathrm{TV}(\mathcal{H}_i, \mathcal{H}_j),$$
as required.

C.2.2 Disjointly-supported measures induce well-separated measurement distributions

The following lemma pertains to the pushforwards of measures supported in $\mathbb{R}^n$ via the forward operator $A$ and noise $e$. Specifically, it states that if two distributions $\mathcal{P}_{\mathrm{int}}$ and $\mathcal{P}_{\mathrm{ext}}$ are disjointly supported, then their corresponding pushforwards $\mathcal{H}_{\mathrm{int},A}$ and $\mathcal{H}_{\mathrm{ext},A}$ are, on average with respect to $A \sim \mathcal{A}$, well-separated in the sense of their TV-distance. It is a generalization of [47, Lem. 3.2] that allows for arbitrary distributions $\mathcal{A}$ of the forward operators, as opposed to just distributions of Gaussian random matrices.

Lemma C.5 (Disjointly-supported measures induce well-separated measurement distributions). Let $\tilde{x} \in \mathbb{R}^n$, $\sigma \ge 0$, $\eta \ge 0$, $c \ge 1$, $\mathcal{P}_{\mathrm{ext}}$ be a distribution supported in the set
$$S_{\tilde{x},\mathrm{ext}} = \{x \in \mathbb{R}^n : \|x - \tilde{x}\| \ge c(\eta + \sigma)\}$$
and $\mathcal{P}_{\mathrm{int}}$ be a distribution supported in the set
$$S_{\tilde{x},\mathrm{int}} = \{x \in \mathbb{R}^n : \|x - \tilde{x}\| \le \eta\}.$$
Given $A \in \mathbb{R}^{m \times n}$, let $\mathcal{H}_{\mathrm{int},A}$ be the distribution of $y = Ax^* + e$, where $x^* \sim \mathcal{P}_{\mathrm{int}}$ and $e \sim \mathcal{E}$ independently, and define $\mathcal{H}_{\mathrm{ext},A}$ in a similar way. Then
$$\mathbb{E}_{A \sim \mathcal{A}}[\mathrm{TV}(\mathcal{H}_{\mathrm{int},A}, \mathcal{H}_{\mathrm{ext},A})] \ge 1 - \Big[C_{\mathrm{low}}\big(\tfrac{2}{\sqrt{c}}; \mathcal{A}, D_{\mathrm{ext}}\big) + C_{\mathrm{upp}}\big(\tfrac{\sqrt{c}}{2}; \mathcal{A}, D_{\mathrm{int}}\big) + 2 D_{\mathrm{upp}}\big(\tfrac{\sqrt{c}\sigma}{2}; \mathcal{E}\big)\Big],$$
where $D_{\mathrm{ext}} = \{x - \tilde{x} : x \in \mathrm{supp}(\mathcal{P}_{\mathrm{ext}})\}$, $D_{\mathrm{int}} = \{x - \tilde{x} : x \in \mathrm{supp}(\mathcal{P}_{\mathrm{int}})\}$ and $C_{\mathrm{upp}}(\cdot\,; \mathcal{A})$, $C_{\mathrm{low}}(\cdot\,; \mathcal{A})$ and $D_{\mathrm{upp}}(\cdot\,; \mathcal{E})$ are as in Definitions 2.3 and 2.4, respectively.
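The growth of the TV-distance with the separation of the underlying measures, which Lemma C.5 quantifies in general, can be illustrated numerically in the simplest possible setting. The following sketch (not part of the proof; plain Python, one dimension, equal variances, with `normal_pdf`, `tv_numeric` and `tv_closed_form` as illustrative helper names) compares a numerical evaluation of $\mathrm{TV}(N(\mu_1, \sigma^2), N(\mu_2, \sigma^2))$ with its known closed form and shows that the TV-distance increases with $|\mu_1 - \mu_2|$.

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def tv_numeric(mu1, mu2, sigma, lo=-20.0, hi=20.0, n=200_000):
    # TV(H1, H2) = (1/2) * integral of |h1 - h2|, via a midpoint Riemann sum.
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += abs(normal_pdf(x, mu1, sigma) - normal_pdf(x, mu2, sigma))
    return 0.5 * total * h

def tv_closed_form(mu1, mu2, sigma):
    # For equal-variance 1-D Gaussians: TV = erf(|mu1 - mu2| / (2*sqrt(2)*sigma)).
    return math.erf(abs(mu1 - mu2) / (2 * math.sqrt(2) * sigma))

if __name__ == "__main__":
    sigma = 1.0
    for delta in [0.5, 1.0, 2.0, 4.0]:
        num = tv_numeric(0.0, delta, sigma)
        cf = tv_closed_form(0.0, delta, sigma)
        print(f"separation {delta}: TV numeric {num:.4f}, closed form {cf:.4f}")
```

As the separation grows (the analogue of increasing $c$ in the lemma), the TV-distance approaches $1$, i.e., the two measurement distributions become essentially disjoint.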
Notice that the average TV-distance is bounded below using the concentration bounds $C_{\mathrm{low}}$ and $C_{\mathrm{upp}}$ for $\mathcal{A}$ (Definition 2.3) and the concentration bound $D_{\mathrm{upp}}$ for $\mathcal{E}$ (Definition 2.4). This is unsurprising. The pushforward measures are expected to be well-separated if, firstly, the action of $A$ approximately preserves the lengths of vectors (which explains the appearance of $C_{\mathrm{low}}$ and $C_{\mathrm{upp}}$) and, secondly, adding noise via $\mathcal{E}$ does not, with high probability, cause well-separated vectors to become close to each other (which explains the appearance of $D_{\mathrm{upp}}$). Also as expected, as $c$ increases, i.e., the distributions $\mathcal{P}_{\mathrm{int}}$ and $\mathcal{P}_{\mathrm{ext}}$ become further separated, the lower bound on the average TV-distance increases.

Proof. Given $A \in \mathbb{R}^{m \times n}$, let $B_A = \{y \in \mathbb{R}^m : \|y - A\tilde{x}\| \le \sqrt{c}(\eta + \sigma)\}$. We claim that
$$\mathbb{E}_A[\mathcal{H}_{\mathrm{ext},A}(B_A)] \le C_{\mathrm{low}}\big(\tfrac{2}{\sqrt{c}}; \mathcal{A}, D_{\mathrm{ext}}\big) + D_{\mathrm{upp}}(\sigma\sqrt{c}; \mathcal{E}), \tag{C.3}$$
$$\mathbb{E}_A[\mathcal{H}_{\mathrm{int},A}(B_A)] \ge 1 - \Big[C_{\mathrm{upp}}\big(\tfrac{\sqrt{c}}{2}; \mathcal{A}, D_{\mathrm{int}}\big) + D_{\mathrm{upp}}\big(\tfrac{\sqrt{c}}{2}\sigma; \mathcal{E}\big)\Big]. \tag{C.4}$$
Notice that these claims immediately imply the result, since
$$\mathbb{E}_{A \sim \mathcal{A}}\,\mathrm{TV}(\mathcal{H}_{\mathrm{ext},A}, \mathcal{H}_{\mathrm{int},A}) \ge \mathbb{E}_{A \sim \mathcal{A}}[\mathcal{H}_{\mathrm{int},A}(B_A)] - \mathbb{E}_{A \sim \mathcal{A}}[\mathcal{H}_{\mathrm{ext},A}(B_A)].$$
Therefore, the rest of the proof is devoted to showing (C.3) and (C.4). For the former, we write
$$\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{H}_{\mathrm{ext},A}(B_A)] = \mathbb{E}_{A \sim \mathcal{A}}\int\!\!\int \mathbb{1}_{B_A}(Ax + e)\,\mathrm{d}\mathcal{P}_{\mathrm{ext}}(x)\,\mathrm{d}\mathcal{E}(e) = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B_A - Ax)]], \tag{C.5}$$
where $B_A - Ax = \{b - Ax : b \in B_A\}$. We now bound $\mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{E}_{A \sim \mathcal{A}}\mathcal{E}(B_A - Ax)]$. Given $x \in \mathbb{R}^n$, let $C_x = \{A : \|Ax - A\tilde{x}\| < 2\sqrt{c}(\eta + \sigma)\} \subseteq \mathbb{R}^{m \times n}$ and write
$$I_1 = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B_A - Ax)\mathbb{1}_{C_x}]], \qquad I_2 = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B_A - Ax)\mathbb{1}_{C^c_x}]],$$
so that
$$\mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B_A - Ax)]] = I_1 + I_2. \tag{C.6}$$
We will bound $I_1, I_2$ separately. For $I_1$, we first write
$$I_1 \le \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathbb{1}_{C_x}]] = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{P}_{A \sim \mathcal{A}}(\|Ax - A\tilde{x}\| < 2\sqrt{c}(\eta + \sigma))],$$
where the inequality follows from the fact that $\mathcal{E}(B_A - Ax) \le 1$. Now since $x \sim \mathcal{P}_{\mathrm{ext}}$, we have $x \in S_{\tilde{x},\mathrm{ext}}$ and therefore $\|x - \tilde{x}\| \ge c(\eta + \sigma)$. Hence
$$\mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{P}_{A \sim \mathcal{A}}(\|Ax - A\tilde{x}\| < 2\sqrt{c}(\eta + \sigma))] \le \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}\big[\mathbb{P}_{A \sim \mathcal{A}}\big(\|Ax - A\tilde{x}\| < \tfrac{2}{\sqrt{c}}\|x - \tilde{x}\|\big)\big].$$
Since the outer expectation has $x \sim \mathcal{P}_{\mathrm{ext}}$, we have that $x \in \mathrm{supp}(\mathcal{P}_{\mathrm{ext}})$ with probability one. Using Definition 2.3, we deduce that
$$I_1 \le C_{\mathrm{low}}\big(\tfrac{2}{\sqrt{c}}; \mathcal{A}, D_{\mathrm{ext}}\big). \tag{C.7}$$
We now bound $I_2$. Let $x \in S_{\tilde{x},\mathrm{ext}}$ and $A \in C^c_x$, i.e., $\|A(x - \tilde{x})\| \ge 2\sqrt{c}(\eta + \sigma)$.
We now show thatBA⊆BA,x, where BA,x={y∈Rm:∥y−Ax∥ ≥√c(η+σ)}. Suppose that y∈BA, i.e., ∥y−A˜x∥ ≤√c(η+σ). We have ∥y−Ax∥=∥y−A˜x+A˜x−Ax∥ ≥ ∥A(˜x−x)∥ − ∥ y−A˜x∥ >2√c(η+σ)−√c(η+σ) =√c(η+σ), and therefore y∈BA,x, as required. Using this, we have E(BA−Ax)≤ E(BA,x−Ax),∀A∈Cc
x, $x \in S_{\tilde{x},\mathrm{ext}}$, and therefore
$$I_2 = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B_A - Ax)\mathbb{1}_{C^c_x}]] \le \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B_{A,x} - Ax)\mathbb{1}_{C^c_x}]].$$
But we notice that $B_{A,x} - Ax = B^c_{\sqrt{c}(\eta + \sigma)}$. Now since $\eta \ge 0$, we have $B^c_{\sqrt{c}(\eta + \sigma)} \subseteq B^c_{\sigma\sqrt{c}}$. Hence
$$\mathcal{E}(B_{A,x} - Ax) = \mathcal{E}(B^c_{\sqrt{c}(\eta + \sigma)}) \le \mathcal{E}(B^c_{\sigma\sqrt{c}}) \le D_{\mathrm{upp}}(\sigma\sqrt{c}; \mathcal{E}),$$
and therefore $I_2 \le D_{\mathrm{upp}}(\sigma\sqrt{c}; \mathcal{E})$. Combining this with (C.5), (C.6) and (C.7), we deduce that
$$\mathbb{E}_A[\mathcal{H}_{\mathrm{ext},A}(B_A)] \le \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{ext}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B_A - Ax)]] = I_1 + I_2 \le C_{\mathrm{low}}\big(\tfrac{2}{\sqrt{c}}; \mathcal{A}, D_{\mathrm{ext}}\big) + D_{\mathrm{upp}}(\sigma\sqrt{c}; \mathcal{E}),$$
which shows (C.3). We will now establish (C.4). With similar reasoning to (C.5), we have
$$\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{H}_{\mathrm{int},A}(B^c_A)] = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{int}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B^c_A - Ax)]].$$
Proceeding as before, let $D_x = \{A : \|Ax - A\tilde{x}\| < \tfrac{\sqrt{c}}{2}(\eta + \sigma)\}$, $I_1 = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{int}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B^c_A - Ax)\mathbb{1}_{D^c_x}]]$ and $I_2 = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{int}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B^c_A - Ax)\mathbb{1}_{D_x}]]$, so that
$$\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{H}_{\mathrm{int},A}(B^c_A)] = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{int}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathcal{E}(B^c_A - Ax)]] = I_1 + I_2. \tag{C.8}$$
The terms $I_1, I_2$ are similar to those considered in the previous case. We bound them similarly. For $I_1$, we have, by dropping the inner probability terms,
$$I_1 \le \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{int}}}[\mathbb{E}_{A \sim \mathcal{A}}[\mathbb{1}_{D^c_x}]] = \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{int}}}\big[\mathbb{P}_{A \sim \mathcal{A}}\big(\|A(x - \tilde{x})\| \ge \tfrac{\sqrt{c}}{2}(\eta + \sigma)\big)\big].$$
Since $x \in S_{\tilde{x},\mathrm{int}}$, we have $\|x - \tilde{x}\| \le \eta \le \eta + \sigma$, which gives
$$\mathbb{E}_{x \sim \mathcal{P}_{\mathrm{int}}}\big[\mathbb{P}_{A \sim \mathcal{A}}\big(\|A(x - \tilde{x})\| \ge \tfrac{\sqrt{c}}{2}(\eta + \sigma)\big)\big] \le \mathbb{E}_{x \sim \mathcal{P}_{\mathrm{int}}}\big[\mathbb{P}_{A \sim \mathcal{A}}\big(\|A(x - \tilde{x})\| \ge \tfrac{\sqrt{c}}{2}\|x - \tilde{x}\|\big)\big]$$
and therefore
$$I_1 \le C_{\mathrm{upp}}\big(\tfrac{\sqrt{c}}{2}; \mathcal{A}, D_{\mathrm{int}}\big). \tag{C.9}$$
We now bound $I_2$. Let $x \in S_{\tilde{x},\mathrm{int}}$ and suppose that $A \in D_x$, i.e., $\|x - \tilde{x}\| \le \eta$ and $\|A(x - \tilde{x})\| < \tfrac{\sqrt{c}}{2}(\eta + \sigma)$. Define $\hat{B}_{A,x} = \{y \in \mathbb{R}^m : \|y - Ax\| < \tfrac{\sqrt{c}}{2}(\eta + \sigma)\}$. We will show $\hat{B}_{A,x} \subseteq B_A$ in this case. Let $y \in \hat{B}_{A,x}$. Then
$$\|y - A\tilde{x}\| \le \|y - Ax\| + \|Ax - A\tilde{x}\| < \tfrac{\sqrt{c}}{2}(\eta + \sigma) + \tfrac{\sqrt{c}}{2}(\eta + \sigma) = \sqrt{c}(\eta + \sigma),$$
as required. This implies that $B^c_A \subseteq \hat{B}^c_{A,x}$. Hence
$$\mathcal{E}(B^c_A - Ax) \le \mathcal{E}(\hat{B}^c_{A,x} - Ax) = \mathcal{E}\big(B^c_{\frac{\sqrt{c}}{2}(\eta + \sigma)}\big) \le \mathcal{E}\big(B^c_{\frac{\sqrt{c}}{2}\sigma}\big),$$
which implies that $I_2 \le D_{\mathrm{upp}}\big(\tfrac{\sqrt{c}}{2}\sigma; \mathcal{E}\big)$. Combining with (C.8) and (C.9), we get
$$\mathbb{E}_A[\mathcal{H}_{\mathrm{int},A}(B^c_A)] \le C_{\mathrm{upp}}\big(\tfrac{\sqrt{c}}{2}; \mathcal{A}, D_{\mathrm{int}}\big) + D_{\mathrm{upp}}\big(\tfrac{\sqrt{c}}{2}\sigma; \mathcal{E}\big),$$
which implies (C.4). This completes the proof.

C.2.3 Replacing the real distribution with the approximate distribution

We next establish a result that allows one to upper bound the failure probability based on draws from the real distribution $\mathcal{R}$ by the failure probability based on draws from the approximate distribution $\mathcal{P}$.
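The elementary mechanism behind this replacement is a pointwise triangle-inequality argument under a $W_\infty$-coupling: if $\|x^* - z^*\| \le \varepsilon$ surely, then the event $\|x^* - \hat{x}\| \ge d + \varepsilon$ implies $\|z^* - \hat{x}\| \ge d$. The following sketch (purely illustrative, not part of the argument; the uniform distributions, the perturbation model and the reconstruction `x_hat` are arbitrary choices) checks this domination empirically in one dimension.

```python
import random

def run_check(trials=10_000, eps=0.1, d=1.0, seed=0):
    """Empirically verify that the failure frequency at level d + eps under x*
    never exceeds the failure frequency at level d under the coupled z*."""
    rng = random.Random(seed)
    fail_real = fail_approx = 0
    for _ in range(trials):
        x_star = rng.uniform(-5.0, 5.0)           # draw from the "real" distribution
        z_star = x_star + rng.uniform(-eps, eps)  # coupled draw, |x* - z*| <= eps surely
        x_hat = rng.uniform(-5.0, 5.0)            # an arbitrary reconstruction
        big_real = abs(x_star - x_hat) >= d + eps
        big_approx = abs(z_star - x_hat) >= d
        # pointwise domination: the first event implies the second
        assert (not big_real) or big_approx
        fail_real += big_real
        fail_approx += big_approx
    return fail_real, fail_approx

if __name__ == "__main__":
    fr, fa = run_check()
    print(fr, fa)
    assert fr <= fa
```

The lemma below refines this idea: it additionally accounts for the change of conditional distribution from $\mathcal{P}(\cdot|Ax^* + e, A)$ to $\mathcal{P}(\cdot|Az^* + e, A)$, which is where the concentration and density shift bounds enter.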
This lemma is a key technical step that aligns the prior distribution with the posterior. The specific bound is given in terms of the Wasserstein distance between $\mathcal{R}$ and $\mathcal{P}$ and several of the concentration bounds defined in §2. It is a significant generalization of [47, Lem. 3.3] that allows for arbitrary distributions $\mathcal{A}, \mathcal{E}$ for the forward operator and noise.

Lemma C.6 (Replacing the real distribution with the approximate distribution). Let $\varepsilon, \sigma, d, t \ge 0$, $c \ge 1$, $\mathcal{E}$ be a distribution on $\mathbb{R}^m$ and $\mathcal{R}, \mathcal{P}$ be distributions on $\mathbb{R}^n$ such that $W_\infty(\mathcal{R}, \mathcal{P}) \le \varepsilon$. Let $\Pi$ be a $W_\infty$-optimal coupling of $\mathcal{R}$ and $\mathcal{P}$ and define the set $D = \{x^* - z^* : (x^*, z^*) \in \mathrm{supp}(\Pi)\}$. Let
$$p = \mathbb{P}_{x^* \sim \mathcal{R},\, A \sim \mathcal{A},\, e \sim \mathcal{E},\, \hat{x} \sim \mathcal{P}(\cdot|Ax^* + e, A)}[\|x^* - \hat{x}\| \ge d + \varepsilon]$$
and
$$q = \mathbb{P}_{z^* \sim \mathcal{P},\, A \sim \mathcal{A},\, e \sim \mathcal{E},\, \hat{z} \sim \mathcal{P}(\cdot|Az^* + e, A)}[\|z^* - \hat{z}\| \ge d].$$
Then
$$p \le C_{\mathrm{abs}}(\varepsilon, t\varepsilon; \mathcal{A}, D) + D_{\mathrm{upp}}(c\sigma; \mathcal{E}) + D_{\mathrm{shift}}(t\varepsilon, c\sigma; \mathcal{E})\, q,$$
where $C_{\mathrm{abs}}(\varepsilon, t\varepsilon; \mathcal{A}, D)$, $D_{\mathrm{upp}}(c\sigma; \mathcal{E})$ and $D_{\mathrm{shift}}(t\varepsilon, c\sigma; \mathcal{E})$ are as in Definitions 2.3, 2.4 and 2.5, respectively.

As expected, this lemma involves a trade-off. The constant $C_{\mathrm{abs}}(\varepsilon, t\varepsilon; \mathcal{A}, D)$ is made smaller (for fixed $\varepsilon$) by making the constant $t$ larger. However, this increases $D_{\mathrm{shift}}(t\varepsilon, c\sigma; \mathcal{E})$, which can be compensated by making $c$ smaller, and this in turn increases the constant $D_{\mathrm{upp}}(c\sigma; \mathcal{E})$.

Proof. Define the events
$$B_{1,\hat{x}} = \{x^* : \|x^* - \hat{x}\| \ge d + \varepsilon\}, \qquad B_{2,\hat{z}} = \{z^* : \|z^* - \hat{z}\| \ge d\},$$
so that
$$p = \mathbb{P}_{x^* \sim \mathcal{R},\, A \sim \mathcal{A},\, e \sim \mathcal{E},\, \hat{x} \sim \mathcal{P}(\cdot|Ax^* + e, A)}[x^* \in B_{1,\hat{x}}], \qquad q = \mathbb{P}_{z^* \sim \mathcal{P},\, A \sim \mathcal{A},\, e \sim \mathcal{E},\, \hat{z} \sim \mathcal{P}(\cdot|Az^* + e, A)}[z^* \in B_{2,\hat{z}}]. \tag{C.10}$$
Observe that
$$p = \mathbb{E}_{x^* \sim \mathcal{R}}\big[\mathbb{E}_{A \sim \mathcal{A}}\mathbb{E}_{e \sim \mathcal{E}}\big[\mathbb{E}_{\hat{x} \sim \mathcal{P}(\cdot|Ax^* + e, A)}[\mathbb{1}_{B_{1,\hat{x}}}]\big]\big] = \int\!\!\int\!\!\int\!\!\int \mathbb{1}_{B_{1,\hat{x}}}(x^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\mathcal{R}(x^*)$$
and similarly
$$q = \int\!\!\int\!\!\int\!\!\int \mathbb{1}_{B_{2,\hat{z}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Az^* + e, A)(\hat{z})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\mathcal{P}(z^*).$$
Therefore, to obtain the result, it suffices to replace samples from the real distribution $\mathcal{R}$ with samples from the approximate distribution $\mathcal{P}$ and to replace the indicator function of $B_{1,\hat{x}}$ by the indicator function of $B_{2,\hat{z}}$. For the first task, we use couplings. Since $W_\infty(\mathcal{R}, \mathcal{P}) \le \varepsilon$, there exists a coupling $\Pi$ between $\mathcal{R}, \mathcal{P}$ with $\Pi(\|x^* - z^*\| \le \varepsilon) = 1$. By (C.1), we can write $p = \int$
$\int\!\!\int\!\!\int \mathbb{1}_{B_{1,\hat{x}}}(x^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*)$. Define $E = \{(x^*, z^*) : \|x^* - z^*\| \le \varepsilon\}$ and observe that $\Pi(E) = 1$. Then, for fixed $A, e$, we have
$$\int\!\!\int \mathbb{1}_{B_{1,\hat{x}}}(x^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\Pi(x^*, z^*) = \int_E\int \mathbb{1}_{B_{1,\hat{x}}}(x^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\Pi(x^*, z^*),$$
$$\int\!\!\int \mathbb{1}_{B_{2,\hat{x}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\Pi(x^*, z^*) = \int_E\int \mathbb{1}_{B_{2,\hat{x}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\Pi(x^*, z^*).$$
We now show $\mathbb{1}_{B_{1,\hat{x}}}(x^*) \le \mathbb{1}_{B_{2,\hat{x}}}(z^*)$ for $(x^*, z^*) \in E$. Let $(x^*, z^*) \in E$ and suppose that $x^* \in B_{1,\hat{x}}$. Then $\|x^* - \hat{x}\| \ge d + \varepsilon$ and, since $\|x^* - z^*\| \le \varepsilon$, we also have that $\|z^* - \hat{x}\| \ge d$ and therefore $z^* \in B_{2,\hat{x}}$, as required. Hence
$$\int \mathbb{1}_{B_{1,\hat{x}}}(x^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x}) \le \int \mathbb{1}_{B_{2,\hat{x}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})$$
for $(x^*, z^*) \in E$. Now, since indicator functions are non-negative, Fubini's theorem immediately implies that
$$p = \int\!\!\int\!\!\int\!\!\int \mathbb{1}_{B_{1,\hat{x}}}(x^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*) \le \int\!\!\int\!\!\int\!\!\int \mathbb{1}_{B_{2,\hat{x}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*).$$
Having introduced the coupling $\Pi$ and replaced $\mathbb{1}_{B_{1,\hat{x}}}$ by $\mathbb{1}_{B_{2,\hat{x}}}$, to establish the result it remains to replace the conditional distribution $\mathcal{P}(\cdot|Ax^* + e, A)$ by $\mathcal{P}(\cdot|Az^* + e, A)$. With a similar technique to that used in the proof of Lemma C.5, we define $C_{x^*,z^*} = \{A : \|A(x^* - z^*)\| > t\varepsilon\}$ and
$$I_1 = \int\!\!\int \mathbb{1}_{C_{x^*,z^*}}(A)\int\!\!\int \mathbb{1}_{B_{2,\hat{x}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*),$$
$$I_2 = \int\!\!\int \mathbb{1}_{C^c_{x^*,z^*}}(A)\int\!\!\int \mathbb{1}_{B_{2,\hat{x}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*),$$
so that
$$p \le \int\!\!\int\!\!\int\!\!\int \mathbb{1}_{B_{2,\hat{x}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*) = I_1 + I_2. \tag{C.11}$$
We first bound $I_1$. As before, we write
$$I_1 \le \int\!\!\int \mathbb{1}_{C_{x^*,z^*}}(A)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*).$$
Recalling the definition of the set $E$ above, we get
$$\int\!\!\int \mathbb{1}_{C_{x^*,z^*}}(A)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*) \le \int_E \mathbb{P}_{A \sim \mathcal{A}}\{\|A(x^* - z^*)\| > t\varepsilon\}\,\mathrm{d}\Pi(x^*, z^*).$$
Using the definition of $C_{\mathrm{abs}}$, $E$ and $D$, we deduce that
$$I_1 \le C_{\mathrm{abs}}(\varepsilon, t\varepsilon; \mathcal{A}, D). \tag{C.12}$$
Now we bound $I_2$. We further split the integral $I_2$ as follows:
$$I_2 = I_{21} + I_{22}, \tag{C.13}$$
where
$$I_{21} = \int\!\!\int \mathbb{1}_{C^c_{x^*,z^*}}(A)\int \mathbb{1}_{B^c_{c\sigma}}(e)\int \mathbb{1}_{B_{2,\hat{x}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*),$$
$$I_{22} = \int\!\!\int \mathbb{1}_{C^c_{x^*,z^*}}(A)\int \mathbb{1}_{B_{c\sigma}}(e)\int \mathbb{1}_{B_{2,\hat{x}}}(z^*)\,\mathrm{d}\mathcal{P}(\cdot|Ax^* + e, A)(\hat{x})\,\mathrm{d}\mathcal{E}(e)\,\mathrm{d}\mathcal{A}(A)\,\mathrm{d}\Pi(x^*, z^*).$$
Let us first find an upper bound for $I_{21}$.
We have I21=Z Z 1Cc x∗,z∗(A)Z 1Bccσ(e)Z 1B2,ˆx(z∗) dP(·|Ax∗+e, A)(ˆx) dE(e) dA(A) dΠ( x∗, z∗) ≤Z Z 1Cc x∗,z∗(A)Z 1Bccσ(e) dE(e) dA(A) dΠ( x∗, z∗) ≤Z 1Bccσ(e) dE(e), and therefore, by Definition 2.4, I21=E(Bc cσ)≤Dupp(cσ;E). (C.14) We now find a bound for I22. We first use Definition 2.5 to write I22=Z Z 1Cc x∗,z∗(A)Z 1Bcσ(e)pE(e)Z 1B2,ˆx(z∗)dP(·|Ax∗+e, A)(ˆx) dedA(A) dΠ( x∗, z∗). Now define the new variable e′=e+A(x∗−z∗). Since, in the integrand, ∥e∥ ≤cσ(due to the indicator function 1Bcσ(e)) and∥A(x∗−z∗)∥ ≤tε(due to the indicator function 1Cc x∗,z∗(A)), Definition 2.5 yields the bound I22≤Dshift(tε, cσ ;E)Z Z 1Cc x∗,z∗(A)Z 1B2σ(e′−A(x∗−z∗))pE(e′) ×Z 1B2,ˆx(z∗) dP(·|Az∗+e′, A)(ˆx) de′dA(A) dΠ( x∗, z∗). We now drop the first two indicator functions and relabel the variables e′andˆxaseandˆz, respectively, to obtain I22≤Dshift(tε, cσ ;E)Z Z Z Z 1B2,ˆz(z∗) dP(·|Az∗+e, A)(ˆz) dE(e) dA(A) dΠ( x∗, z∗). This gives I22≤Dshift(tε, cσ ;E)q, where qis as in (C.10). Combining this with (C.13), we deduce that I2≤Dupp(cσ, ε) +Dshift(tε, cσ ;E)q. To complete the proof, we combine this with (C.11) and(C.12) , and then recall (C.10) once more. C.2.4 Decomposing distributions The following lemma is in large part similar to [ 47, Lem. A.1]. However, we streamline and rewrite its proof for clarity and completeness, fix a number of small issues and make an addition
to the statement (concerning the set Sthe distribution in whose support it is contained) which is important for proving our main result. Lemma C.7 (Decomposing distributions) .LetR,Pbe arbitrary distributions on Rn,p≥1and η, ρ, δ > 0. IfWp(R,P)≤ρandk∈Nis such that min{log Cov η,δ(P),log Cov η,δ(R)} ≤k, (C.15) then there exist distributions R′,R′′,P′,P′′, a constant 0< δ′≤δand a discrete distribution Q withsupp(Q) =Ssatisfying (i)min{W∞(P′,Q), W∞(R′,Q)} ≤η, (ii)W∞(R′,P′)≤ρ δ1/p, (iii)P= (1−2δ′)P′+ (2δ′)P′′andR= (1−2δ′)R′+ (2δ′)R′′, (iv)|S| ≤ek, (v) and S⊆supp(P)ifPattains the minimum in (C.15) withS⊆supp(R)otherwise. 27 This lemma states that two distributions that are close in Wasserstein p-distance, and for which at least one has small approximate covering number (C.15) , can be decomposed into mixtures (iii) of distributions, where the following holds. One of the distributions, say P′, is close (i) in Wasserstein- ∞distance to a discrete distribution Qwith the cardinality of its support (iv) bounded by the approximate covering number. The other R′is close in Wasserstein- ∞distance to P′. Moreover, both mixtures (iii) are dominated by these distributions: the ‘remainder’ terms P′′andR′′associated with a small constant δ′≤δ, meaning they are sampled with probability ≤δwhen drawing from eitherPorR. Note that if p <∞then the Wasserstein- ∞distance between R′andP′may get larger as δshrinks, i.e., as the remainder gets smaller. However, this does not occur when p=∞, as (ii) is independent of δin this case. Proof. Without loss of generality, we assume that log Cov η,δ(P)≤k. Then Covη,δ(P)≤ekand hence there is a set S={ui}l i=1⊆supp(P)withl≤ek, where the uiare the centres of the balls used to cover at least 1−δof the measure of P. That is, P"l[ i=1B(ui, η)# =Px∼P" x∈l[ i=1B(ui, η)# =: 1−c∗≥1−δ. We now define f:Rn→Rso that f(x) = 0 ifxlies outside these balls, and otherwise, f(x)is the equal to the reciprocal of the number of balls in which xis contained. 
Namely, f(x) =(1Pl i=11B(ui,η)(x)ifx∈Sl i=1B(ui, η) 0 otherwise. We divide the remainder of the proof into a series of steps. 1. Construction of Q′.We will now define a finite measure Q′. The point of Q′is to, concentrate the mass of the measure Pinto the centres of the balls ui. If the sets B(ui, η)are disjoint, then this is straightforward. However, to ensure that Q′is indeed a probability measure, we need to normalize and account for any non-trivial intersections. This is done via the function f. Pick some arbitrary ˆu /∈ {u1, . . . , u l}and define Q′=lX i=1 Z B(ui,η)f(x) dP(x)! δui+c∗δˆu. Observe that Z dQ′(x) =lX i=1Z B(ui,η)f(x) dP(x) +c∗=P l[ i=1B(ui, η)! +c∗= (1−c∗) +c∗= 1, and therefore Q′is a probability distribution supported on the finite set S∪ {ˆu}. 2. Coupling Q′,P.Now that we have associated all the mass of Pwith the points ui, we can define a coupling Πbetween Q′andPthat associates the mass of Panduiwith a single measure. Moreover, this measure will keep points within ηdistance of each other with high probability. We define Πas follows for measurable sets E, F⊆Rn: Π(E, F) =lX i=11F(ui)Z B(ui,η)∩Ef(x) dP(x) + 1 F(ˆu)P E\l[ i=1B(ui, η)! . To see that this is a coupling, we first observe that Π(Rn, F) =lX i=11F(ui)Z
B(ui,η)f(x) dP(x) + 1 F(ˆu)(1−c∗)≡ Q′(F), which gives the result for the first marginal. For the other, we have Π(E,Rn) =lX i=1Z B(ui,η)∩Ef(x) dP(x) +P E\l[ i=1B(ui, η)! . 28 By definition of f, this is precisely Π(E,Rn) =P E∩l[ i=1B(ui, η)! +P E\l[ i=1B(ui, η)! ≡ P(E), which gives the result for the second marginal. Note that Πwas only defined for product sets, but, since Q′is finitely supported, it follows directly from Lemma C.3 that it extends to arbitrary measurable sets in the product sigma-algebra. We now show that Π[∥x1−x2∥> η]≤c∗≤δ. That is, we show that most points drawn from Πare within ηdistance of each other. By law of total probability we have Π(∥x1−x2∥> η) =lX i=1Π(∥x1−x2∥> η|x2=ui)Π(x2=ui) + Π(∥x1−x2∥> η|x2= ˆu)Π(x2= ˆu) =lX i=1Π(Ui, ui)Q′(ui) + Π( ˆU,ˆu)Q′(ˆu), where Ui={x:∥x−ui∥> η}andˆU={x:∥x−ˆu∥> η}. Notice that Ui∩B(ui, η) =∅and therefore Π(Ui, ui) =Z {x:∥x−ui∥>η}∩B(ui,η)fdP= 0. Hence lX i=1Π(Ui, ui)Q′(ui) + Π( ˆU,ˆu)Q′(ˆu) = Π( ˆU,ˆu)Q′(ˆu) and, since Q′(ˆu) =c∗, we have Π(ˆU,ˆu)Q′(ˆu)≤c∗≤δ. This gives Π(∥x1−x2∥> η)≤δ, (C.16) as required. 3. Coupling P,R.The next step is to introduce R. With the assumption that Wp(R,P)≤ρ, by definition there exists a coupling Γbetween PandRsuch that EΓ[∥x1−x2∥p]≤ρp. Markov’s inequality then gives that Γ ∥x1−x2∥ ≥ρ δ1/p ≤EΓ[∥x1−x2∥p] ρp δ≤δ. (C.17) 4. Coupling P,Q′,R.We next couple P,Q′andR. Before doing so, we first discuss the goal of our final coupling. Recall that we have the distribution P, the distribution Πthat couples P,Q′closely except for up to δmass of P, and Γwhich keeps P,Rclose again except for up to δof the mass of Γ. We want to decompose Pinto the portions that are ηclose to Q′, and points that are not. These will become P′andP′′, respectively. At the same time, we want to decompose Rto points that are ρ δ1/pclose to P′, and points that are not. Naturally this will become R′andR′′. 
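Step 3 above is nothing more than Markov's inequality applied to a near-optimal W_p coupling, and it is easy to sanity-check numerically: if E_Γ[∥x1 − x2∥^p] ≤ ρ^p, then at most a δ-fraction of coupled pairs can be farther apart than ρ/δ^{1/p}. A minimal sketch (the Gaussian toy coupling below is an illustrative assumption, not the coupling Γ from the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
p, rho, delta, n = 2.0, 0.5, 0.05, 100_000

# Toy coupling of (x1, x2): x2 = x1 + d, with d rescaled so that
# E[||x1 - x2||^p] = rho^p, i.e. the two marginals are within W_p distance rho.
x1 = rng.normal(size=(n, 3))
d = rng.normal(size=(n, 3))
d *= rho / np.mean(np.linalg.norm(d, axis=1) ** p) ** (1.0 / p)
x2 = x1 + d

# Markov's inequality (C.17): P[||x1 - x2|| >= rho / delta^(1/p)] <= delta.
threshold = rho / delta ** (1.0 / p)
frac = np.mean(np.linalg.norm(x1 - x2, axis=1) >= threshold)
print(frac, "<=", delta)
```

The empirical tail fraction sits well below δ here, as the bound only needs the p-th moment of the displacement.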
To achieve this, we couple P,Q′andRin this step and then use this to construct the final decomposition in the next step. We have measures P,Q′andRand couplings ΠofP,Q′andΓofP,R. We will in a sense, couple Π,Γ. Since (Rn)3is a Polish space, by [9, Lem. 8.4], there exists a coupling Ωwith π1,2♯Ω = Π , π 1,3♯Ω = Γ , where π1,2(x1, x2, x3) = ( x1, x2)and likewise for π1,3. One should intuitively think of the x1 component as samples from P, thex2component as samples from Q, and the x3component as samples from R. With the base measure defined, we still want to ensure that x1, x3are sampled closely, and x1, x2are as well. Consider the event such that x1, x3areρ δ1/pclose and x1, x2areη close: namely, E:={(x1, x2, x3) :∥x1−x3∥ ≤ρ/δ1/pand∥x1−x2∥ ≤η}. 29 Split up the negation of the two events of ∥x1−x3∥ ≤ρ/δ1/pand∥x1−x2∥ ≤ηinto the events E1:={(x1, x2, x3) :∥x1−x2∥> η, z ∈Rn}, E2:={(x1, x2, x3) :∥x1−x3∥> ρ/δ1/p, y∈Rn}, so that Ec=E1∪E2. We will now show Ω(E1)≤δ. Write E1=E′ 1×Rnwhere E′ 1={(x1, x2) : ∥x1−x2∥> η}satisfies Π(E′ 1)≤δ′by (C.16). Then δ′≥Π(E′ 1) =Z 1E′ 1(x1, x2) dπ1,2♯Ω(x1, x2) =Z 1E′ 1(p1,2(x1, x2, x3)) dΩ( x1, x2, x3) =Z 1E1(x1, x2, x3) dΩ( x1, x2, x3) = Ω(E1),
as required. Using (C.17), we also have the analogous result for E2. Hence

Ω(Ec) = Ω(E1 ∪ E2) ≤ Ω(E1) + Ω(E2) =: 2δ′ ≤ 2δ,

and consequently,

Ω(E) = 1 − 2δ′ ≥ 1 − 2δ, where E := {(x1, x2, x3) : ∥x1 − x3∥ ≤ ρ/δ^{1/p} and ∥x1 − x2∥ ≤ η}. (C.18)

5. Decomposing P, R. Finally, we define P′, P′′, R′, R′′ and Q by conditioning on the events E and Ec, as follows:

P′(A) = Ω(A, Rn, Rn | E),  R′(A) = Ω(Rn, Rn, A | E),
P′′(A) = Ω(A, Rn, Rn | Ec),  R′′(A) = Ω(Rn, Rn, A | Ec),
Q(A) = Ω(Rn, A, Rn | E).

This gives

P(A) = Ω(A, Rn, Rn) = Ω(E) Ω(A, Rn, Rn | E) + Ω(Ec) Ω(A, Rn, Rn | Ec) = (1 − 2δ′) P′(A) + 2δ′ P′′(A)

and similarly

R(A) = Ω(Rn, Rn, A) = Ω(E) Ω(Rn, Rn, A | E) + Ω(Ec) Ω(Rn, Rn, A | Ec) = (1 − 2δ′) R′(A) + 2δ′ R′′(A).

We now claim that these distributions satisfy (i)-(v) in the statement of the lemma. We have already shown that (iii) holds. To show (i), we define a coupling γ of P′, Q by γ(B) = Ω(B, Rn | E) for any B ⊆ (Rn)². Observe that γ(A, Rn) = Ω(A, Rn, Rn | E) = P′(A) and γ(Rn, A) = Ω(Rn, A, Rn | E) = Q(A). Hence this is indeed a coupling of P′, Q. Therefore it suffices to show that γ(B) = 0, where B is the event {∥x1 − x2∥ > η}. We have γ(B) = Ω(B, Rn | E). Recall that for (x1, x2, x3) ∈ E, we have ∥x1 − x2∥ ≤ η. Hence Ω(B, Rn | E) = 0. Therefore, W∞(P′, Q) ≤ η, which gives (i).

Similarly, for (ii) we define a coupling γ′ of R′, P′ by γ′(B) = Ω(B̃ | E), where B̃ := {(x1, x2, x3) : (x1, x3) ∈ B, x2 ∈ Rn}. By similar reasoning to the previous case, γ′ is a coupling of R′, P′ and, for (x1, x2, x3) ∈ E, we have ∥x1 − x3∥ ≤ ρ/δ^{1/p}, so letting B be the event {∥x1 − x3∥ > ρ/δ^{1/p}}, we conclude that W∞(R′, P′) ≤ ρ/δ^{1/p}. This gives (ii).

Finally, we verify (iv) and (v). First recall that both properties hold for Q′ by construction. The results now follow from the fact that Q′(·) = Ω(Rn, ·, Rn) and Q(·) = Ω(Rn, ·, Rn | E).

C.3 Proof of Theorem 3.1

We now prove Theorem 3.1. This follows a similar approach to that of [47, Thm. 3.4], but with a series of significant modifications to account for the substantially more general setup considered in this work. We also streamline the proof and clarify a number of key steps.
30 Proof of Theorem 3.1. By Lemma C.7, we can decompose P,Rinto measures P′,P′′andR′,R′′, and construct a finite distribution Qsupported on a finite set Ssuch that (i)min{W∞(P′,Q), W∞(R′,Q)} ≤η, (ii)W∞(R′,P′)≤ε′:=ε δ1/p, (iii)P= (1−2δ′)P′+ (2δ′)P′′andR= (1−2δ′)R′+ (2δ′)R′′for some 0≤δ′≤δ, (iv)|S| ≤ek, (v) and S⊆supp(P)ifPattains the minimum in (C.15) with S⊆supp(R)otherwise. It is helpful to briefly recall the construction of these sets. Beginning with δ, ηas parameters for the approximate covering numbers, the distribution Qconcentrates 1−δof the mass of Pinto the centres of the η-radius balls used. Then the distributions P′,R′are the measures P,Rwithin the balls. We now write p:=Px∗∼R,A∼A,e∼E,ˆx∼P(·|y,A)[∥x∗−ˆx∥ ≥(c+ 2)η+ (c+ 2)σ] ≤Px∗∼R,A∼A,e∼E,ˆx∼P(·|y,A)[∥x∗−ˆx∥ ≥(c+ 2)η+ (c+ 1)σ+ε′] ≤2δ′+ (1−2δ′)Px∗∼R′,A∼A,e∼E,ˆx∼P(·|y,A)[∥x∗−ˆx∥ ≥(c+ 1)( η+σ) +ε′] =: 2δ′+ (1−2δ′)q.(C.19) Here, in the first inequality we used the fact that σ≥ε′, and in the second, we used the decomposition R= (1−2δ′)R′+ 2δ′R′′and the fact that δ′≤δ. We now bound qby using Lemma C.6 to replace the distribution R′by the distribution P′. Writing u=Az∗+e, this lemma and (ii) give that q≤Cabs(ε′, tε′;A, D) +Dupp(c′σ;E) +Dshift(tε′, c′σ;E)r, where r=Pz∗∼P′,A∼A,e∼E,ˆz∼P(·|u,A)[∥z∗−ˆz∥
≥(c+ 1)( η+σ)](C.20) andD={x∗−z∗: (x∗, z∗)∈supp(Π) }, for Πbeing the W∞-optimal coupling of R′,P′ guaranteed by (ii). Lemma C.1 implies that supp(Π) ⊆supp(R′)×supp(P′)and therefore D⊆supp(R′)−supp(P′). Now (iii) implies that supp(P′)⊆supp(P). Similarly, (iii) implies that supp(R′)⊆supp(R). But Lemma C.2 and (ii) imply that supp(R′)⊆Bε′(supp( P′)). Therefore D⊆Bε′(supp( P))∩supp(R)−supp(P) =D1, where D1as in (3.2). We now bound r. Observe first that W∞(P′,Q)≤η′:=η+ε′. Indeed, from (i) either W∞(P′,Q)≤ηorW∞(R′,Q)≤η. In the former case, the inequality trivially holds. In the latter case, we can use the triangle inequality and (ii) to obtain the desired bound. This implies that there is a coupling ΓofP′,QwithesssupΓ∥x−y∥ ≤η′. Fix ˜z∈Sand, for any Borel set E⊆Rn, define Γ˜z(E) =Γ(E,˜z) Q(˜z). Then it is readily checked that Γ˜z(·)defines a probability measure. Note also that Γ˜zis supported on a ball of radius η′around ˜z, since esssupΓ(∥x−y∥)≤η′. Recall that Γis a coupling between P′ andQ. LetE⊆Rnbe a Borel set. Then Lemma C.3 gives that P′(E) = Γ( E,Rn) =X ˜z∈SΓ((E,Rn)˜z,˜z) =X ˜z∈SΓ(E,˜z) =X ˜z∈SΓ˜z(E)Q(˜z). Therefore, we can express P′as the mixture P′(·) =X ˜z∈SΓ˜z(·)Q(˜z). 31 Define the event E={∥z∗−ˆz∥ ≥(c+ 1)( η+σ)} ⊆Rn×Rnso that the probability rdefined in (C.20) can be expressed as r=Ez∗∼P′,A∼A,e∼E,ˆz∼P(·|A,u)[1E]. Using the above expression for P′we now write r=Z Z Z Z 1E(z∗,ˆz) dP(·|A, u)(ˆz) dE(e) dA(A) dP′(z∗) =Z Z Z Z 1E(z∗,ˆz) dP(·|A, u)(ˆz) dE(e) dA(A) d X ˜z∈SQ(˜z)Γ˜z(·)! (z∗) =X ˜z∈SQ(˜z)Z Z Z Z 1E(z∗,ˆz) dP(·|A, u)(ˆz) dE(e) dA(A) dΓ ˜z(z∗) , where the last line holds as Q(˜z)is a constant. Hence r=X ˜z∈SQ(˜z)Pz∗∼Γ˜z,A∼A,e∼E,ˆz∼P(·|A,u)[E]. (C.21) Now we bound each term in this sum. We do this by decomposing Pinto a mixture of three probability measures depending on ˜z∈S. 
To do this, let θ=c(η+σ)and observe that, for any Borel set E⊆Rn, P(E) =P(E∩Bθ(˜z)) +P(E∩Bc θ(˜z)) =P(E∩Bθ(˜z)) +P(E∩Bc θ(˜z)) + (1 −2δ′)Q(˜z)Γ˜z(E)−(1−2δ′)Q(z∗)Γ˜z(E) =P(E∩Bθ(˜z))−(1−2δ′)Q(˜z)Γ˜z(E∩Bθ(˜z)) +P(E∩Bc θ(˜z))−(1−2δ′)Q(˜z)Γ˜z(E∩Bc θ(˜z)) + (1−2δ′)Q(˜z)Γ˜z(E). Now define the constants c˜z,mid=P(Bθ(˜z))−(1−2δ′)Q(˜z)Γ˜z(Bθ(˜z)), c ˜z,ext=P(Bc θ(˜z))−(1−2δ′)Q(˜z)Γ˜z(Bc θ(˜z)). and let P˜z,int(E) = Γ ˜z(E) P˜z,mid(E) =1 c˜z,mid(P(E∩Bθ(z∗))−(1−2δ′)Q(z∗)Γ˜z(E∩Bθ(z∗))), P˜z,ext(E) =1 c˜z,ext(P(E∩Bc θ(z∗))−(1−2δ′)Q(z∗)Γ˜z(E∩Bc θ(z∗))). ThenPcan be expressed as the mixture P= (1−2δ′)Q(˜z)P˜z,int+c˜z,midP˜z,mid+c˜z,extP˜z,ext. (C.22) To ensure this is a well-defined mixture, we need to show that P˜z,midandP˜z,extare probability measures. However, by (iii) we have, for any Borel set E⊆Rn, P(E)≥(1−2δ′)P′(E) = (1 −2δ′)X ˜z∈SΓ˜z(E)Q(˜z)≥(1−2δ′)Γ˜z(E)Q(˜z). Therefore, P˜z,midandP˜z,extare well-defined, provided the constants c˜z,int, c˜z,ext>0. However, if one of these constants is zero, then we can simply exclude this term from the mixture (C.22) . For the rest of the theorem, we will assume that, at least, c˜z,ext>0. It is now useful to note that supp(P˜z,mid)⊆Bθ(˜z)and supp(P˜z,ext)⊆Bc θ(˜z), which follows immediately from their definitions, and also that supp(P˜z,int)⊆Bη′(˜z)⊆Bθ(˜z). 32 where in the second inclusion we used the fact that η′=η+ε/δ1/p≤η+σ≤c(η+σ) =θ, as σ≥ε/δ1/pandc≥1. We now return to the sum (C.21) . Consider an arbitrary term. First, observe that, for z∗∼ P, we haveP(z∗∼Γ˜z) =Q(˜z)(1−2δ′)by (C.22). Hence Q(˜z)Ez∗∼Γ˜z,ˆz∼P(·|A,u)[1E] =P(z∗∼Γ˜z) 1−2δ′Ez∗∼P,ˆz∼P(·|A,u)[1E|z∗∼Γ˜z] =P(z∗∼Γ˜z) (1−2δ′)1 P(z∗∼Γ˜z)Z Z 1E1z∗∼Γ˜zdP(z∗) dP(·|A, u)(ˆz). Recall that z∗∼Γ˜zis supported in Bη′(˜z). Therefore, for the event Eto occur, i.e., ∥z∗−ˆz∥> (c+ 1)( η+σ), it must be that ˆz∈Bc θ(˜z), which means that ˆz∼ P ˜z,ext(·|A, u). Hence Q(˜z)Ez∗∼Γ˜z,ˆz∼P(·|A,u)[1E]≤1 1−2δ′Z Z 1ˆz∼P˜z,ext(·|A,u)1z∗∼Γ˜zdP(z∗) dP(·|A, u)(ˆz) =1 1−2δ′P[z∗∼ P ˜z,int,ˆz∼ P
˜z,ext(·|A, u)]. Now fix A∈Rm×n. LetH˜z,int,Abe the distribution of y∗=Az∗+eforz∗∼ P ˜z,intande∼ E independently, and define H˜z,ext,Asimilarly. Then, by Fubini’s theorem, we have Q(˜z)Ez∗∼Γ˜z,A∼A,e∼E,ˆz∼P(·|A,u)[1E]≤1 1−2δ′EA∼A,e∼EP[y∗∼ H ˜z,int,A,ˆy∼ H ˜z,ext,A(·|y∗)]. Now let H˜z,Abe the distribution of y=Az+eforz∼ P ande∼ E independently. Then Lemma C.4 (with H=H˜z,A,H1=H˜z,int,A,H2=H˜z,mid,A,H3=H˜z,ext,Aanda1= (1−2δ′)Q(˜z), a2=c˜z,mid,a3=c˜z,ext) gives Q(˜z)Ez∗∼Γ˜z,A∼A,e∼E,ˆz∼P(·|A,u)[1E]≤1 1−2δ′EA∼A[1−TV(H˜z,int,A,H˜z,ext,A)]. Finally, summing over all ˜zwe deduce that r=X ˜z∈SQ(˜z)Ez∗∼Γ˜z,A∼A,e∼E,ˆz∼P(·|A,u)[1E]≤1 1−2δ′X ˜z∈SEA∼A[1−TV(H˜z,int,A,H˜z,ext,A)]. (C.23) Now recall that H˜z,int,Ais the pushforward of a measure P˜z,intsupported in Bη′(˜z), where η′= η+ε′≤η+σandH˜z,ext,Ais the pushforward of a measure P˜z,extsupported in Bc θ(˜z), where θ=c(η+σ)≥c 2(η′+σ). Therefore, Lemma C.5 (with creplaced by c/2) gives that EA∼A[1−TV(H˜z,int,A,H˜z,ext,A)]≤Clow 2√ 2√c;A, D˜z,ext! +Cupp√c 2√ 2;A, D˜z,int + 2Dupp√cσ 2√ 2;E , where D˜z,ext={x−˜z:x∈supp(P˜z,ext)}andD˜z,int={x−˜z:x∈supp(P˜z,int)}. It follows immediately from (C.22) that supp(P˜z,int),supp(P˜z,ext)⊆supp(P). Moreover, ˜z∈Sand therefore D˜z,ext, D˜z,int⊆D2, where D2is as in (3.3). Using this, the previous bound and (C.23), we deduce that r≤|S| 1−2δ′" Clow 2√ 2√c;A, D2! +Cupp√c 2√ 2;A, D2 + 2Dupp√cσ 2√ 2;E# . To complete the proof, now substitute this into (C.19) and (C.20), to obtain p≤2δ′+ [Cabs(ε′, tε′;A, D1) +Dupp(c′σ;E)] + 2Dshift(tε′, c′σ;E)|S|" Clow 2√ 2√c;A, D2! +Cupp√c 2√ 2;A, D2 + 2Dupp√cσ 2√ 2;E# . The result now follows after recalling (iv), i.e., |S| ≤ek, and the fact that δ′≤δ≤1/4. 33 C.4 Proof of Theorem 1.1 Finally, we now show how Theorem 3.1 implies the simplified result, Theorem 1.1. Proof of Theorem 1.1. Letp=P ∥x∗−ˆx∥ ≥(8d2+ 2)( η+σ) . We use Theorem 3.1 with ε replaced by ε/(2mθ). Letc= 8d2,c′= 2,t=θandε′=ε/(2δ1/pmθ). 
Then Theorem 3.1 gives that

p ≲ δ + Cabs(ε′, θε′; A, D1) + Dupp(2σ; E) + 2 Dshift(θε′, 2σ; E) e^k [Clow(1/d; A, D2) + Cupp(d; A, D2) + 2 Dupp(dσ; E)],

where D1 = B_{ε′}(supp(P)) ∩ supp(R) − supp(P) ⊆ B_{ε′}(supp(P) − supp(P)), D2 = D = supp(P) − supp(P) and k = ⌈log Cov_{η,δ}(P)⌉. Now, since ∥Ax∥ ≤ ∥A∥ ∥x∥ ≤ θ∥x∥ for all x ∈ Rn, we may take Cabs(ε′, θε′; A, D1) = 0. Moreover, by Lemma B.1, we have

Dshift(θε′, 2σ; E) ≤ exp( m θε′ (4σ + θε′) / (2σ²) ) = exp( ε/(δ^{1/p} σ) + ε²/(8 δ^{2/p} σ² m) ) ≲ 1,

where we used the facts that m ≥ 1 and σ ≥ ε/δ^{1/p}. Hence

p ≲ δ + e^k [Clow(1/d; A, D2) + Cupp(d; A, D2) + Dupp(dσ; E)].

Finally, Lemma B.1 implies that Dupp(dσ; E) ≤ (d e^{1−d})^{m/2} ≤ exp(−m/16), where in the final step we used the fact that d ≥ 2. This gives the result.
arXiv:2505.10715v1 [stat.ME] 15 May 2025Dependency-Aware Shrinkage Priors for High Dimensional Regression Javier Enrique Aguilar1,∗and Paul-Christian B¨ urkner1,† 1Department of Statistics, TU Dortmund University, Germany Abstract: In high dimensional regression, global local shrinkage priors have gained significant traction for their ability to yield sparse estimates, improve parameter recovery, and support accurate predictive modeling. While recent work has explored increasingly flexible shrinkage prior struc- tures, the role of explicitly modeling dependencies among coefficients re- mains largely unexplored. In this paper, we investigate whether incorpo- rating such structures into traditional shrinkage priors improves their per- formance. We introduce dependency-aware shrinkage priors, an extension of continuous shrinkage priors that integrates correlation structures inspired by Zellner’s gprior approach. We provide theoretical insights into how de- pendence alters the prior and posterior structure, and evaluate the method empirically through simulations and real data. We find that modeling de- pendence can improve parameter recovery when predictors are strongly correlated, but offers only modest gains in predictive accuracy. These find- ings suggest that prior dependence should be used selectively and guided by the specific inferential goals of the analysis. Keywords and phrases: Prior specification, shrinkage priors, structured shrinkage, high dimensional regression, regularization. 1. Introduction Regression analysis has long been a cornerstone of statistical modeling (Gelman and Hill, 2006; Hastie et al., 2009, 2015; Gelman et al., 2020). As datasets have grown in size and complexity, the need for models capable of handling high dimensional settings while avoiding overfitting has become increasingly evident (Buehlmann et al., 2014; Giraud, 2014; Vershynin, 2018). 
To address this challenge, a wide array of priors with strong theoretical and empirical properties has been proposed over the past decade (Carvalho et al., 2010; Griffin and Brown, 2010; Bhattacharya et al., 2015; Piironen and Vehtari, 2017; Roˇ ckov´ a and George, 2018; Zhang et al., 2020). Letyi∈Rdenote the ith response variable, related to pexplanatory variables x′ i= (xi1, . . . , x ip)∈Rpthrough the linear regression model yi=α+x′ ib+εi, i= 1, . . . , n, (1) where α∈Ris the intercept, b= (b1, . . . , b p)′∈Rpis the vector of regression ∗Corresponding author: e-mail: javier.aguilarr@icloud.com url: https://jear2412.github.io †e-mail: paul.buerkner@gmail.com url:https://paul-buerkner.github.io 1 J.E. Aguilar and P-C. B¨ urkner / Dependency-Aware Shrinkage Priors 2 coefficients, and εiis the error term, assumed to satisfy εi∼ N (0, σ2) with unknown residual variance σ2(Gelman et al., 2013). Although model (1) is standard, we are particularly interested in the high dimensional regime where p > n (Tibshirani, 1996; Giraud, 2014; Wainwright, 2019). In this setting, a central challenge is the specification of prior distributions for the coefficients band the residual variance σ2(Gelman, 2006; Tadesse and Vannucci, 2021). Introducing additional assumptions about the data-generating process can simplify this task. In particular, if we assume that the true model is sparse—that is, only a small subset of the coefficients are nonzero—then shrinkage priors become an attractive modeling choice (Johnstone and Silver- man, 2004; Castillo and Vaart, 2012; Pas et al., 2016, 2017). Sparse solutions are heavily recommended: dense models not only hinder interpretability and statis- tical parsimony, but may also lead to
https://arxiv.org/abs/2505.10715v1
computational inefficiencies and overfitting (B¨ uhlmann and Van De Geer, 2011; Giraud, 2014; Hastie et al., 2015; Wain- wright, 2019). Among shrinkage priors, the class of continuous global-local shrinkage priors has gained significant attention (Mitchell et al., 1988; Carvalho et al., 2010; Ar- magan et al., 2013; Bhattacharya et al., 2015; Pas et al., 2016; Bhadra et al., 2016; Zhang et al., 2020; Tadesse and Vannucci, 2021). These priors balance sparsity and flexibility by combining global shrinkage, which controls the over- all level of shrinkage, with local shrinkage, which allows individual coefficients to escape strong penalization. Their popularity stems from a combination of ex- cellent theoretical properties—such as minimax optimality and desirable poste- rior concentration rates—and strong empirical performance (Castillo and Vaart, 2012; Castillo et al., 2015; Pas et al., 2016; Ghosh and Chakrabarti, 2017; Pas, 2021; Tadesse and Vannucci, 2021). Furthermore, their compatibility with effi- cient computational strategies and probabilistic programming frameworks has contributed to their widespread adoption (George and McCulloch, 1993; Piiro- nen and Vehtari, 2017; Bhattacharya et al., 2016; Aguilar and B¨ urkner, 2023). Incorporating prior knowledge about regression coefficients can, in princi- ple, improve both parameter estimation and predictive performance, provided that the prior sufficiently reflects true underlying structure (Pas et al., 2017; Simpson et al., 2017). For instance, modeling multivariate dependencies among coefficients allows to capture joint uncertainty in related parameters (Gelman et al., 2013, 2020). While such approaches have been explored in non-shrinkage contexts, analogous developments within shrinkage prior frameworks remain relatively limited, since conditional independence of the coefficients is typically assumed (Zellner, 1996; Agliari and Parisetti, 1988; Casella et al., 2010; Griffin and Brown, 2013; Griffin et al., 2024). 
A foundational example of priors that incorporate dependencies is Zellner’s g prior, which uses the design matrix to encode the dependence structure of the re- gression coefficients via their prior covariance matrix (Zellner, 1986; Maruyama and George, 2011; Li and Clyde, 2018). Although not originally conceived as a shrinkage prior, the gprior exhibits a global shrinkage structure and remains widely used in Bayesian model selection and averaging due to its closed-form marginal likelihood (Robert and Casella, 2004; Liang et al., 2008; Maruyama J.E. Aguilar and P-C. B¨ urkner / Dependency-Aware Shrinkage Priors 3 and George, 2011; Li and Clyde, 2018). Some approaches attempt to encode coefficient dependence by treating regu- larized objectives as priors; specifically, by normalizing expressions of the form b′Ωb+R(b), where Ω is a positive semidefinite matrix and R(b) is a penalty term (Casella et al., 2010; Pauger and Wagner, 2019; Goplerud, 2021). Choosing R(b) as a convex penalty leads to computationally tractable optimization problems (Tibshirani, 1996; Hastie et al., 2015). However, translating these penalties into fully Bayesian priors is not straightforward: the role of hyperparameters and their interaction with the likelihood can be difficult to interpret or tune (Casella et al., 2010; Hahn et al., 2015; Griffin et al., 2024). Moreover, the key advantages of regularization methods, namely computational efficiency and the ability to produce sparse solutions, are often diminished in the Bayesian setting (Hahn et al., 2015). Additionally, although these constructions superficially
resemble frequentist regularization techniques (such as interpreting the log-prior as a penalty), the comparison can be misleading. The theoretical properties of Bayesian shrinkage priors and frequentist regularizers differ in fundamental ways (Simpson et al., 2017; Castillo, 2024). For example, under suitable conditions on the design ma- trix, the Lasso estimator can consistently recover sparse signals (Zhao and Yu, 2006). In contrast, the Laplace prior, despite its formal similarity to the Lasso penalty, tends to over-shrink coefficients due to its insufficient tail mass, often leading to biased estimates and poor uncertainty quantification (Castillo et al., 2015; Pas et al., 2017; Bhadra et al., 2019; Castillo, 2024). While initial analo- gies between Bayesian and frequentist approaches may be instructive, they can obscure important differences in parameter recovery, shrinkage behavior, and predictive performance. A more recent and notable development is the Structured Shrinkage Prior proposed by Griffin et al. (2024), which incorporates dependence by construct- ing a covariance matrix as the elementwise product of the second-moment ma- trix of the local scales and an arbitrary structure-imposing matrix. While this framework generalizes existing approaches and offers conceptual elegance, it is strongly limited to shrinkage priors with unit second moments for the lo- cal scales. This constraint is nontrivial, as the second moment is sensitive to the choice of hyperparameters—an important tuning mechanism in high dimen- sional models—and may not equal one in practice. See Section 2 for further details. While prior dependence structures have been explored in low dimensional asymptotics, their role in high dimensional models remains poorly understood. For instance, Hagar and Stevens (2024) show that in low dimensions, complex dependence encoded through copula-based priors may be overridden by the like- lihood as sample size increases. 
However, their results rely on the Bernstein–von Mises theorem and do not apply in high dimensional settings, where the behav- ior of posterior dependence structures remains largely an open question (Ghosal et al., 2000; Ghosh, 2003; Castillo et al., 2015). J.E. Aguilar and P-C. B¨ urkner / Dependency-Aware Shrinkage Priors 4 1.1. Contributions of this paper The main aim of this paper is to investigate whether traditional continuous global-local shrinkage priors can benefit from the inclusion of dependency struc- tures in high dimensional sparse settings, and to determine the conditions un- der which such benefits arise. To this end, we introduce a generalization of global-local shrinkage priors, which we term dependency-aware shrinkage priors (DASP). Our approach extracts correlation information from the design matrix of the predictors and combines it with the shrinkage scales in a data driven manner. To evaluate the impact of incorporating dependence structures, we present both theoretical and empirical results. We characterize how introducing de- pendence modifies the structure of the prior and posterior distributions. We conduct simulation studies using both the true correlation matrix and our data- driven approach. The results show that incorporating dependence can improve parameter recovery when strong correlation exists among groups of coefficients. However, the gains in predictive performance are generally modest. We fur- ther validate our method on multiple real-world datasets, where we observe consistent patterns: dependency-aware priors may aid
estimation in structured settings but do not yield substantial improvements in prediction. We conclude that, while prior dependence structures can be useful in specific inferential contexts, they are not universally beneficial and should not be used by default. In high dimensional problems, the advantages of flexible prior de- pendence appear modest and highly context dependent. Our conclusion is that such structures should be employed judiciously, with clear justification based on the goals of the analysis. 2. Methods 2.1. Shrinkage priors Continuous global-local shrinkage priors are constructed as scale mixtures of normal distributions (West, 1987), yielding a hierarchical model for the regres- sion coefficients. Each coefficient biis modeled as bi|σ, τ, λ i∼ N(0, σ2τ2λ2 i), λ i∼p(λi), τ∼p(τ), σ∼p(σ), (2) where τis a global scale parameter that controls the overall level of shrinkage, andλiare local scale parameters that allow coefficient-specific deviations. The global scale reflects the belief that most coefficients are near zero, while the local scales allow signals to escape shrinkage when warranted by the data. This hierarchy encourages sharing of information about sparsity across coefficients while retaining flexibility at the individual level. We include the residual variance σ2within the prior variance of bi, following standard practice, as this improves adaptivity to varying signal-to-noise ratios (Pas, 2021). J.E. Aguilar and P-C. B¨ urkner / Dependency-Aware Shrinkage Priors 5 With appropriate choices for the priors on λiandτ, global-local shrinkage priors induce approximate sparsity, shrinking irrelevant signals strongly toward zero while avoiding excessive shrinkage of relevant signals. 
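As a concrete instance of the hierarchy (2), the well-known horseshoe prior places half-Cauchy priors on both the global and local scales. A minimal NumPy sketch of drawing coefficients from this prior (purely illustrative; sampling from the prior, not fitting a model):

```python
import numpy as np

rng = np.random.default_rng(1)
p, sigma = 10, 1.0

# Horseshoe instance of the global-local hierarchy (2):
# tau ~ C+(0,1) (global), lambda_i ~ C+(0,1) (local),
# b_i | sigma, tau, lambda_i ~ N(0, sigma^2 tau^2 lambda_i^2).
tau = np.abs(rng.standard_cauchy())
lam = np.abs(rng.standard_cauchy(size=p))
b = rng.normal(0.0, sigma * tau * lam)

# Heavy-tailed local scales let a few coefficients escape shrinkage,
# while the shared global scale pulls the bulk toward zero.
print(np.round(b, 3))
```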
The combination of local and global scales also endows these priors with desirable theoretical properties, such as minimax optimality, posterior consistency, and optimal contraction rates under sparsity (Castillo and Vaart, 2012; Ghosh and Chakrabarti, 2017; Pas et al., 2016; Pas, 2021). Although they do not produce exact zeros, which may be required in some applications, model selection can be handled post hoc by decoupling inference from selection (Hahn et al., 2015; Piironen et al., 2020). The specific choices for the priors on λi and τ give rise to the wide array of shrinkage priors used today. Common examples include the Horseshoe, Three-Parameter Beta, Normal-Gamma, and Beta Prime priors, among others (Carvalho et al., 2010; Armagan et al., 2011; Griffin and Brown, 2010; Bai and Ghosh, 2019). These models belong to the family of shrinkage priors that treat the local scales λi as conditionally independent. More recently, alternative approaches have been proposed that introduce joint modeling of the local scales via a multivariate distribution; examples include the Dirichlet–Laplace, R2D2, and Generalized R2 Decomposition priors (Bhattacharya et al., 2015; Zhang et al., 2020; Aguilar and Bürkner, 2025).

2.2. Dependency-aware shrinkage priors

Regression coefficients are typically assumed to be conditionally independent a priori under the shrinkage prior setup (2). We propose a natural extension of this model that introduces dependencies among the regression coefficients via a correlation matrix Ω. The model we propose is the following:

b | σ, τ, λ ∼ N(0, σ²τ² Dλ Ω Dλ),  λi ∼ p(λi),  τ ∼ p(τ),  σ ∼ p(σ), (3)

where Dλ is the diagonal matrix that contains the scales λi as diagonal elements and Ω is a correlation matrix. We refer to
priors of this form (3) as dependency- aware shrinkage priors (DASP). The standard shrinkage prior model (2) is re- covered by setting Ω = I. 2.2.1. Related priors Priors for regression coefficients that capture dependence structures typically do so through the incorporation of a covariance matrix. In what follows, we discuss two such priors: Zellner’s gprior and Structured Shrinkage Priors (SSPs), emphasizing their respective advantages and identifying their limitations. We argue that the prior introduced in (3) offers a natural and robust extension that directly addresses these limitations. Zellner’s gprior (Zellner, 1986; Agliari and Parisetti, 1988; Maruyama and George, 2011; Li and Clyde, 2018) incorporates dependence information directly from the design matrix Xby specifying b|σ, g∼ N 0, σ2g(X′X)−1 , σ∼p(σ), (4) J.E. Aguilar and P-C. B¨ urkner / Dependency-Aware Shrinkage Priors 6 where g >0 is either fixed or endowed with a prior distribution, and Xis the design matrix composed of the covariate vectors x′ i. This formulation ensures that the prior covariance of bis proportional to the covariance of the maximum likelihood estimator (MLE) ˆb, thus preserving scale invariance with respect to the regressors (Hoff, 2009). Zellner’s prior is widely appreciated for its analytical tractability, particularly in Bayesian model selection. When gis fixed, it yields closed-form expressions for the marginal likelihood and posterior, thereby elim- inating the need for sampling-based approximations (Robert and Casella, 2004; Maruyama and George, 2011; Li and Clyde, 2018). Within this framework, gfunctions as a global shrinkage parameter, apply- ing uniform regularization across all components of b. Although ( X′X)−1pro- vides some localized scaling, the gprior lacks adaptivity, as it does not include coefficient-specific local scales λi(Liang et al., 2008; Gelman et al., 2013). 
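The two covariance constructions, (4) and (3), can be put side by side in a few lines of NumPy. This is only a sketch: the half-Cauchy local scales and the use of the empirical predictor correlation matrix for Ω are illustrative assumptions, not the authors' exact data-driven construction (which is given in Section 2.3):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma, tau = 100, 8, 1.0, 0.5
g = float(n)  # one common default choice for g (unit-information style)

X = rng.normal(size=(n, p))

# Zellner's g-prior covariance (4): sigma^2 g (X'X)^{-1}.
cov_g = sigma**2 * g * np.linalg.inv(X.T @ X)

# DASP covariance (3): sigma^2 tau^2 D_lambda Omega D_lambda.
lam = np.abs(rng.standard_cauchy(size=p))   # local scales (illustrative choice)
D_lam = np.diag(lam)
Omega = np.corrcoef(X, rowvar=False)        # one data-driven choice of Omega
cov_dasp = sigma**2 * tau**2 * D_lam @ Omega @ D_lam

# Omega = I recovers the conditionally independent prior (2).
cov_indep = sigma**2 * tau**2 * D_lam @ D_lam
b = rng.multivariate_normal(np.zeros(p), cov_dasp)
print(np.round(b, 3))
```

With Ω = I the off-diagonal entries of the DASP covariance vanish, whereas a correlation-based Ω lets strongly correlated predictors share prior dependence.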
As a result, fixed or poorly chosen values of $g$ can lead to over-shrinkage of relevant variables or under-penalization of noise. While various priors for $g$ have been proposed to mitigate this (Liang et al., 2008; Maruyama and George, 2011; Li and Clyde, 2018), the absence of local scales ultimately excludes the $g$-prior from the class of adaptive shrinkage priors. In high dimensional settings, the $g$-prior is further limited by its reliance on $(X'X)^{-1}$, making it sensitive to collinearity and rank deficiency. Common remedies include replacing $X'X$ with $X'X + \eta I$ for some $\eta > 0$, though this introduces an arbitrary tuning parameter. Alternatively, the Moore–Penrose pseudoinverse can be used, but this yields an improper prior with support limited to the column space of $X$, leading to an improper posterior. Reparametrization via QR or SVD decomposition offers a more numerically stable alternative by restricting the prior to the column space of $X$, but this comes at the cost of interpretability with respect to the original covariates. By contrast, our formulation (3) is capable of mimicking the $g$-prior through appropriate specification of $\Omega$, as shown in Section 2.3. Moreover, the inclusion of local scale parameters $\lambda_i$ directly addresses the issue of adaptivity (Liang et al., 2008), offering a flexible and robust alternative in both low and high dimensional settings. In the broader context of shrinkage priors, a structurally related idea appears in the work of
George and McCulloch (1993, 1995, 1997), who propose the spike-and-slab prior with a non-diagonal precision matrix to encode dependencies among coefficients. In their formulation, the prior on $b$ takes the form $\mathcal{N}(0, \tau^2 \Gamma \Omega^{-1} \Gamma)$, where $\Gamma$ is a diagonal matrix of binary inclusion indicators $\gamma_i \in \{0, 1\}$ for $i = 1, \ldots, p$. Their work suggests using $\Omega = (X_\gamma' X_\gamma)^{-1}$, where $X_\gamma$ is the submatrix of $X$ corresponding to the active variables, and also explores special cases such as $\Omega = I$ and $\Omega = (X'X)^{-1}$, independent of $\gamma$. However, in practice, their implementations focus on mixtures of normals with diagonal $\Omega$, retaining conditional independence among coefficients, which gives rise to the typical spike-and-slab prior. Our formulation differs in several key respects: we consider continuous local scales $\lambda_i$ instead of binary indicators, yielding a fully continuous global-local prior that avoids discrete model search (Hahn et al., 2015; Tadesse and Vannucci, 2021), and we construct $\Omega$ from the full design matrix or a structured estimate thereof, allowing scalable and adaptive shrinkage in high dimensional settings. We show the latter in Section 2.3.

Turning to continuous global-local formulations, Griffin et al. (2024) propose the Structured Shrinkage Priors (SSPs) framework, which introduces dependence structures via a product representation of normal scale mixtures. Specifically, if $z \sim \mathcal{N}(0, \Phi)$ and $\lambda$ is a vector of stochastic scales independent of $z$, then the elementwise product $b = \lambda \circ z$ defines a scale mixture of normals (West, 1987), with prior covariance

$$\Sigma_b = E(\lambda\lambda') \circ \Phi. \tag{5}$$

In standard settings, $\Phi = I$ yields unstructured shrinkage priors. However, when $\Phi \neq I$, identifiability issues arise: the individual scales $\lambda_i$ are not separately identifiable from the diagonal entries of $\Phi$. To address this, Griffin et al. (2024) impose the constraint $E(\lambda_i^2) = 1$, $i = 1, \ldots, p$.
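As a quick illustration of the product representation above, the covariance identity (5) can be checked by Monte Carlo. The Gamma choice for the scales and the equicorrelation $\Phi$ below are illustrative assumptions, not settings from Griffin et al. (2024).

```python
# Sketch: Monte Carlo check of Sigma_b = E(lambda lambda') ∘ Phi, Eq. (5).
# The Gamma(2, 0.5) scales and equicorrelated Phi are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
p, n_draws = 3, 400_000

# Structured Gaussian factor z ~ N(0, Phi)
rho = 0.6
Phi = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
z = rng.multivariate_normal(np.zeros(p), Phi, size=n_draws)

# Independent positive scales lambda_i (illustrative Gamma choice)
lam = rng.gamma(shape=2.0, scale=0.5, size=(n_draws, p))

b = lam * z                       # elementwise product b = lambda ∘ z
Sigma_b_mc = np.cov(b, rowvar=False)

# Analytic E(lambda lambda') for independent Gamma(2, 0.5) coordinates:
m1 = 2.0 * 0.5                    # E[lambda_i]   = k * theta = 1
m2 = 2.0 * 3.0 * 0.5**2           # E[lambda_i^2] = k(k+1) theta^2 = 1.5
E_ll = np.full((p, p), m1 * m1)
np.fill_diagonal(E_ll, m2)

Sigma_b = E_ll * Phi              # Hadamard product, Eq. (5)
print(np.max(np.abs(Sigma_b_mc - Sigma_b)))  # small Monte Carlo error
```

With 400,000 draws the empirical covariance of $b$ matches the Hadamard-product formula to within Monte Carlo noise.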
While this restores identifiability, it significantly restricts the flexibility of the prior by limiting the range of hyperparameter choices available for the distribution of $\lambda_i$. For instance, under this constraint, the Normal-Gamma prior (Griffin and Brown, 2010), with $\lambda_i^2 \sim \mathrm{Gamma}(\alpha, \beta)$, is only admissible when $\alpha = \beta$. Similarly, heavy-tailed priors such as the Beta Prime (Johnson et al., 1994), which are often used to induce marginal heavy tails in $b$, become infeasible unless their parameter values satisfy the constraint (Pas et al., 2016; Ghosh and Chakrabarti, 2017; Bai and Ghosh, 2019; Zhang et al., 2020; Aguilar and Bürkner, 2023). This constraint also prevents straightforward generalizations of well-established priors like the Horseshoe (Carvalho et al., 2010), since it assumes $\lambda_i \sim \mathcal{C}^+(0, 1)$, implying that $E(\lambda_i^2)$ does not exist. Finally, we observe that the matrix $\Phi$ in the formulation of Structured Shrinkage Priors (SSPs) remains relatively underexplored, even though it plays a critical role in determining the properties of the model. As Griffin et al. (2024) illustrate, altering $\Phi$ within traditional shrinkage priors can lead to a notable increase in computational cost, emphasizing the importance of a more systematic study of its effects. In addition, the task of specifying $\Phi$ is typically left to
the user, either through direct selection or by placing a prior distribution on it; both of these present open opportunities for further investigation. In contrast, our dependency-aware shrinkage prior avoids imposing any constraint on the second moment of the local scales $\lambda_i$. This flexibility allows it to encompass a broad class of shrinkage priors, including those with undefined or heavy-tailed moments, as long as $\Omega$ is appropriately specified. In the following sections, we analyze some properties of our proposed prior and examine the practical consequences of incorporating $\Omega$.

2.2.2. Conditional means

First, we discuss how the conditional posterior mean is affected by the presence of $\Omega$. The conditional posterior distribution of $b$ under our dependency-aware shrinkage prior (3) is multivariate Gaussian. Its mean and covariance matrix are given by

$$E(b \mid y, \lambda, \tau, \sigma, \Omega) = Q_\Omega^{-1} X'y, \qquad \mathrm{Cov}(b \mid y, \lambda, \tau, \sigma, \Omega) = \sigma^2 Q_\Omega^{-1}, \tag{6}$$

where $Q_\Omega := X'X + \frac{1}{\tau^2} D_\lambda^{-1} \Omega^{-1} D_\lambda^{-1}$. We explicitly condition on $\Omega$ to highlight its role in shaping both the posterior mean and covariance structure. Assuming that $X$ is of full rank, the posterior mean can alternatively be expressed in terms of the MLE $\hat{b}$ as

$$E(b \mid y, \lambda, \tau, \sigma, \Omega) = \tau^2 D_\lambda \Omega D_\lambda \bigl(\tau^2 D_\lambda \Omega D_\lambda + (X'X)^{-1}\bigr)^{-1} \hat{b}. \tag{7}$$

This expression makes explicit the nature of shrinkage applied to $\hat{b}$ through the interaction of the global scale $\tau$, the local scales $\lambda$, and the structure-inducing matrix $\Omega$. To assess the specific influence of $\Omega$, we consider the difference between the posterior means obtained under $\Omega$ and the identity matrix $I$:

$$E(b \mid y, \lambda, \tau, \sigma, \Omega) - E(b \mid y, \lambda, \tau, \sigma, I) = \bigl(Q_\Omega^{-1} - Q_I^{-1}\bigr) X'y. \tag{8}$$

This formulation allows us to isolate and quantify the effect of introducing $\Omega$ on the regularization applied to the MLE. In particular, the term $Q_\Omega^{-1} - Q_I^{-1}$ captures the deviation from standard isotropic shrinkage, revealing how structured dependence alters the posterior behavior.
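A short numerical sanity check that the precision form (6) and the MLE-shrinkage form (7) coincide; the data, scales, and AR(1) correlation below are illustrative, not settings from the paper.

```python
# Sketch: verifying that the two expressions (6) and (7) for the conditional
# posterior mean agree on simulated data (all numeric settings illustrative).
import numpy as np

rng = np.random.default_rng(1)
n, p, tau = 50, 4, 1.3

X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = rng.gamma(2.0, 0.5, size=p)
D = np.diag(lam)
rho = 0.5
Omega = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # AR(1)

# Eq. (6): mean via the precision-form matrix Q_Omega
Q = X.T @ X + (1 / tau**2) * np.linalg.inv(D) @ np.linalg.inv(Omega) @ np.linalg.inv(D)
mean_Q = np.linalg.solve(Q, X.T @ y)

# Eq. (7): the same mean written as shrinkage of the MLE b_hat
b_hat = np.linalg.solve(X.T @ X, X.T @ y)
P = tau**2 * D @ Omega @ D
mean_shrunk = P @ np.linalg.solve(P + np.linalg.inv(X.T @ X), b_hat)

print(np.allclose(mean_Q, mean_shrunk))  # True
```

Note that $\sigma$ cancels in the posterior mean, which is why it does not appear in either computation.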
Let $A$, $B$, and $C$ be invertible matrices of dimension $p$, and let $x \in \mathbb{R}^p$ be a vector. Denote by $\|A\|_2$ the spectral norm of $A$, i.e., its largest singular value. To derive two-sided bounds for the spectral norm of the difference between $Q_\Omega^{-1}$ and $Q_I^{-1}$, we make use of the resolvent identity $A^{-1} - B^{-1} = A^{-1}(B - A)B^{-1}$, the submultiplicative property $\|AB\|_2 \le \|A\|_2 \|B\|_2$ and its reverse form $\|ABC\|_2 \ge \|A^{-1}\|_2^{-1} \|B\|_2 \|C^{-1}\|_2^{-1}$ for invertible $A$ and $C$, as well as Weyl's inequality for eigenvalues of symmetric matrices (Weyl, 1912; Horn and Johnson, 2012). (See Appendix 5 for the proof.)

$$\frac{\|\Omega^{-1} - I\|_2}{\lambda_1^2 \left(\nu_1 + \frac{1}{\lambda_p^2 \omega_p}\right)\left(\nu_1 + \frac{1}{\lambda_p^2}\right)} \;\le\; \|Q_\Omega^{-1} - Q_I^{-1}\|_2 \;\le\; \frac{\|\Omega^{-1} - I\|_2}{\lambda_p^2 \left(\nu_p + \frac{1}{\lambda_1^2 \omega_1}\right)\left(\nu_p + \frac{1}{\lambda_1^2}\right)}. \tag{9}$$

Here, $\lambda_1$, $\omega_1$, and $\nu_1$ denote the largest local scale, the largest eigenvalue of $\Omega$, and the largest eigenvalue of $X'X$, respectively, while $\lambda_p$, $\omega_p$, and $\nu_p$ denote the corresponding smallest values. We assume $\tau^2 = 1$ for notational simplicity. These inequalities quantify how structural deviations in the prior covariance, the spread of local shrinkage parameters, and the conditioning of the design matrix influence the posterior precision matrix. In particular, the difference between the structured and independent posterior precisions is amplified when shrinkage is globally strong (i.e., small $\lambda_1$ and hence small $\lambda_p$), the design is poorly conditioned (i.e., small $\nu_p$), or the prior structure
deviates substantially from independence (i.e., large $\|\Omega^{-1} - I\|_2$). Conversely, the upper bound shows that the difference can be negligible when shrinkage is weak (i.e., large $\lambda_p$ and hence large $\lambda_1$), the design is well-conditioned (i.e., large $\nu_p$), and the structural deviation is modest (i.e., small $\|\Omega^{-1} - I\|_2$). In this regime, the dependence structure plays a limited role, and the posterior mean is largely data driven. Crucially, $\Omega$ alters the geometry of the posterior. Through the matrix $D_\lambda \Omega D_\lambda$ in (7), it interacts with the design matrix to modulate how prior structure shapes the shrinkage pattern. When $\Omega$ encodes interpretable structure, such as sparsity or neighborhood dependencies, it reweights the contribution of each component of $\hat{b}$ along meaningful axes in parameter space. Clearly, this can also enhance stability in settings with ill-conditioned designs or multicollinearity.

2.2.3. Divergences

We analyze how the inclusion of $\Omega$ alters our model by studying the divergence between probability measures, while keeping all other components fixed. In particular, we focus on the Kullback–Leibler (KL) divergence (Kullback and Leibler, 1951), which quantifies the discrepancy between two probability distributions. A key advantage of using divergence-based metrics (Rényi, 1961) is that they account for the entire distribution, rather than relying solely on pointwise comparisons. Let $P \sim \mathcal{N}(\mu_P, \Sigma_P)$ and $Q \sim \mathcal{N}(\mu_Q, \Sigma_Q)$ be two $p$-dimensional multivariate normal distributions. Their KL divergence has the following closed-form expression (Pardo, 2018):

$$\mathrm{KL}(P \,\|\, Q) = \frac{1}{2}\left[(\mu_Q - \mu_P)' \Sigma_Q^{-1} (\mu_Q - \mu_P) + \mathrm{tr}\bigl(\Sigma_Q^{-1} \Sigma_P\bigr) - \ln\frac{|\Sigma_P|}{|\Sigma_Q|} - p\right]. \tag{10}$$

Now consider $P$ as the standard shrinkage prior for $b$ with $\Omega = I$ (see (2)), and let $Q$ be a prior of the form (3). The KL divergence between the conditional prior distributions for $b \mid \sigma, \tau, \lambda$ depends explicitly on $\Omega$ and, carrying the $\frac{1}{2}$ prefactor over from (10), is given by

$$\mathrm{KL}(P \,\|\, Q) = \frac{1}{2}\bigl(\mathrm{tr}(\Omega^{-1}) + \ln|\Omega| - p\bigr). \tag{11}$$
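The closed form above follows from the general Gaussian formula (10) with equal (zero) means, since everything except $\Omega$ cancels. A minimal numeric check, with illustrative values for the scales and an AR(1) correlation:

```python
# Sketch: numerical check of the prior-to-prior KL expression via the general
# Gaussian formula (10) with mu_P = mu_Q = 0 (all numeric values illustrative).
import numpy as np

p, sigma, tau = 4, 0.7, 1.2
lam = np.array([0.3, 1.0, 2.0, 0.5])
D = np.diag(lam)
rho = 0.6
Omega = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # AR(1)

Sigma_P = sigma**2 * tau**2 * D @ D           # standard prior, Omega = I
Sigma_Q = sigma**2 * tau**2 * D @ Omega @ D   # dependency-aware prior

# General formula (10); the quadratic term vanishes because the means are equal
iQ = np.linalg.inv(Sigma_Q)
kl_general = 0.5 * (np.trace(iQ @ Sigma_P)
                    - np.log(np.linalg.det(Sigma_P) / np.linalg.det(Sigma_Q))
                    - p)

# Closed form: everything except Omega cancels
kl_closed = 0.5 * (np.trace(np.linalg.inv(Omega))
                   + np.log(np.linalg.det(Omega)) - p)

print(kl_general, kl_closed)  # identical up to floating point
```

The agreement confirms that the divergence between the two conditional priors depends on $\sigma$, $\tau$, and $\lambda$ only through cancellation, leaving a function of $\Omega$ alone.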
This expression emphasizes how $\Omega$ influences the relative entropy between the two priors. The trace term $\mathrm{tr}(\Omega^{-1})$ indicates that nearly ill-conditioned (or nearly singular) matrices lead to a large KL divergence, reflecting different patterns of joint shrinkage. Specifically, this suggests that small eigenvalues of $\Omega$ result in strong heterogeneous shrinkage along particular directions, as the model is more sensitive to those directions with weaker or nearly non-invertible components. The term $\ln|\Omega|$ captures the overall volume scaling of the covariance structure imposed by $\Omega$. The KL divergence of the conditional posteriors of $b \mid y, \sigma, \tau, \lambda$ also possesses a closed-form expression. We do not include it here since it is more difficult to interpret. Figure 1 illustrates how the KL divergence varies across different specifications of $\Omega$. In the provided examples, each correlation matrix $\Omega$ is parameterized by a single hyperparameter $\rho \in (0, 1)$, which fully determines its structure.

Fig 1. KL divergence as a function of the correlation parameter $\rho$ for various structures of $\Omega$ (AR1, MA1, MA2, BAR, BMA1, BMA2; 10, 50, 250, or 500 covariates). The corresponding algebraic forms are provided in Appendix 5.

We
consider commonly used correlation patterns, including the autoregressive model of order 1 (AR1), moving average models of orders 1 and 2 (MA1 and MA2), as well as their blocked counterparts: BAR1, BMA1, and BMA2. These same structures are also employed in our experiments in Section 3. We provide their algebraic definitions in Appendix 5. The results in Figure 1 show that the KL divergence increases monotonically with the number of covariates, and the discrepancy between the conditional distributions becomes more pronounced as $\rho$ increases, especially with a sharp rise around $\rho \approx 0.9$. This indicates that differences between prior specifications are most substantial when parameters are strongly correlated. Consequently, we conjecture that posterior inference under the DASP will diverge most noticeably from standard shrinkage priors in high-correlation settings.

2.2.4. Contour plots of marginal distributions

The properties of shrinkage priors are typically analyzed through the marginal distributions of individual coefficients, $p(b_i)$, focusing on two desirable features: strong concentration near zero to shrink noise and heavy tails to retain large signals (Pas et al., 2014, 2016; Pas, 2021). In contrast, the joint prior distribution of the full coefficient vector,

$$p(b) = \int p(b, \lambda, \tau)\, d\lambda\, d\tau, \tag{12}$$

is rarely studied, largely because most priors assume conditional independence among the components of $b$. However, the joint distribution can reveal important aspects of the prior's global behavior (Piironen et al., 2020). In our setting, where we explicitly introduce dependencies across coefficients, examining $p(b)$ becomes particularly meaningful. Unlike the marginals, the joint captures how shrinkage is applied collectively across $b$, offering insights into how the prior balances local adaptivity with global structure. Since a closed-form expression for $p(b)$ is generally intractable, we rely on Monte Carlo approximations (Robert and Casella, 2004). Figure 2 illustrates
how introducing a dependency structure through $\Omega$ influences the joint prior distribution $p(b)$.

Fig 2. Monte Carlo approximations of the bivariate joint marginal prior distribution $p(b_1, b_2)$ under different shrinkage priors, Beta Prime (BP), Dirichlet-Laplace (DL), Horseshoe (HS), Normal-Gamma (NG), and R2D2, and for various correlation levels $\rho$. We use default hyperparameters for each prior, with $\sigma = 1$, and $\Omega = (1 - \rho)I + \rho JJ'$, where $J$ is a 2-dimensional vector of ones.

In this analysis, we fix $\sigma = 1$, $p = 2$, and set $\Omega = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$, varying the value of $\rho$ to control the degree of correlation. We apply this procedure to several widely used shrinkage priors: Beta Prime (BP), Dirichlet–Laplace (DL), Horseshoe (HS), Normal–Gamma (NG), and R2D2 (Bai and Ghosh, 2021; Bhattacharya et al., 2015; Carvalho et al., 2010; Griffin and Brown, 2010; Zhang et al., 2020), using default hyperparameter values as recommended in the literature. We show the form of these priors as well as their default hyperparameters in Appendix 5. $\Omega$ represents the correlation between $b = (b_1, b_2)'$, as expressed in the conditional distribution $p(b \mid \lambda, \tau)$. However, the joint distribution $p(b)$ can exhibit a markedly different dependency structure. This is particularly evident because $p(b)$ is rarely a
multivariate Gaussian, even when $p(b \mid \lambda, \tau)$ is. The plots in Figure 2 show how the correlation in the conditional distribution of $b$ propagates to the marginal distributions. The standard uncorrelated priors are recovered when $\rho = 0$. The contour plots in Figure 2 show prior mass concentrated near the axes, indicating a prior preference for sparse vectors. For shrinkage to take place, it is expected that the contours show a non-convex behavior, approximating the axes. As $\rho$ increases, the prior mass shifts toward the main diagonal, suggesting that, a priori, the coefficients are expected to take similar values. The resulting convexity of certain contours indicates that the dependency-aware shrinkage prior may not always shrink for fixed values of $\rho$, potentially weakening the prior's regularizing effect. When high correlations are present, the DASP forces both coefficients to take the same value. This behavior is akin to Ridge regularization, which also sets coefficients to the same values if they belong to highly correlated, same-scaled variables (Hoerl and Kennard, 1970).

2.2.5. Conditional distributions

Fig 3. Monte Carlo approximations of the unnormalized conditional prior distribution $p(b_1 \mid b_2)$ for varying values of $b_2$ and $\rho$. As both $b_2$ and $\rho$ increase, the distribution shifts toward larger values of $b_1$, indicating reduced shrinkage.

Figure 3 displays Monte Carlo approximations of the conditional distributions $p(b_1 \mid b_2)$, using the same covariance structure $\Omega$ as in previous figures, while varying $\rho$ and $b_2$. When $\rho = 0$, increasing $b_2$ leads to a marginal distribution for $b_1$ with reduced mass near zero. As both $\rho$ and $b_2$ increase, the conditional distribution shifts markedly to the right, with $b_2$ primarily determining the magnitude of this shift.
The pronounced spikes in the plot arise because we are visualizing $p(b_1 \mid b_2)$ rather than the fully conditional $p(b_1 \mid b_2, \lambda, \tau)$, which is Gaussian. Consistent with the contour plots in Figure 2, the marginal distribution becomes smoother as $\rho$ increases, inducing less shrinkage on $b_1$ in the presence of stronger correlation.

2.2.6. Shrinkage properties

Shrinkage properties can be studied through shrinkage factors, which quantify how strongly each coefficient $b_i$ is pulled toward zero (Carvalho et al., 2010; Piironen and Vehtari, 2017; Aguilar and Bürkner, 2023). The well-known normal means problem (Castillo and Vaart, 2012) arises in Model (1) when the design matrix is the identity, i.e., $X = I$. Under a standard shrinkage prior, an application of Fubini's theorem yields $E(b_i \mid y_i, \lambda_i) = \frac{\lambda_i^2}{1 + \lambda_i^2} y_i = (1 - \kappa_i) y_i$, where the quantity

$$\kappa_i := \frac{1}{1 + \lambda_i^2} \tag{13}$$

is referred to as the shrinkage factor (Carvalho et al., 2010; Piironen and Vehtari, 2017; Bai and Ghosh, 2019; Aguilar and Bürkner, 2023). It represents the proportion of the posterior mean of $b_i$ that is attributed to shrinkage toward zero after observing the data $y$. Since $0 \le \kappa_i \le 1$, the law of total expectation implies $|E(b_i \mid y_i)| = |(1 - E(\kappa_i \mid y_i)) y_i| \le |y_i|$. In this context, $y_i$ serves as the MLE of $b_i$, so $\kappa_i$ can be interpreted as the degree to which the MLE is shrunk toward zero. When $b_i$ corresponds to
signal (i.e., a large effect), $\kappa_i \approx 0$, whereas for noise, $\kappa_i \approx 1$. Shrinkage factors offer a unified framework for comparing the behavior of different shrinkage priors across a range of hyperparameter settings (Polson and Scott, 2012; Tadesse and Vannucci, 2021). A desirable property of such priors is their ability to effectively distinguish between noise and signal components. This can be studied by examining the prior distribution of the shrinkage factor $\kappa_i$, which reveals how aggressively the prior shrinks coefficients under varying hyperparameter configurations. For instance, the Horseshoe prior gained popularity due to the bimodal nature of its implied distribution $\kappa_i \sim \mathrm{Beta}(1/2, 1/2)$, with mass concentrated near 0 and 1 (Carvalho et al., 2010; Armagan et al., 2011; Pas et al., 2017). This characteristic enables the prior to strongly shrink noise (large $\kappa_i$) while retaining signals (small $\kappa_i$). In contrast, earlier shrinkage priors often concentrated their mass either near minimal shrinkage or excessive shrinkage, limiting their adaptability (Carvalho et al., 2009). The notion of shrinkage factors from Equation (13) can be extended to settings with nontrivial dependency structures ($\Omega \neq I$) by examining the conditional posterior mean of $b \mid \sigma, \tau, \lambda$ given in Equation (6). This allows us to assess how the posterior mean deviates from the MLE under more general dependence structures. Specifically, we define the matrix-valued shrinkage factor as

$$\kappa_\Omega = I - \tau^2 D_\lambda \Omega D_\lambda \bigl(\tau^2 D_\lambda \Omega D_\lambda + (X'X)^{-1}\bigr)^{-1}. \tag{14}$$

This definition becomes cumbersome when approaching shrinkage factors from the perspective of MLE shrinkage. Unlike the classical scalar shrinkage factors, $\kappa_\Omega$ is matrix-valued and generally non-diagonal, reflecting the induced dependence across coefficients due to $\Omega$. To illustrate this, consider the special case with $\tau = \sigma = 1$, $\Omega = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$, and $X = I$.
In this case, the conditional posterior mean becomes

$$E(b \mid y, \lambda) = \frac{1}{1 + \lambda_1^2 + \lambda_2^2 + \lambda_1^2 \lambda_2^2 (1 - \rho^2)} \begin{pmatrix} \lambda_1^2 \bigl(1 + \lambda_2^2 (1 - \rho^2)\bigr) \hat{b}_1 + \rho \lambda_1 \lambda_2 \hat{b}_2 \\ \lambda_2^2 \bigl(1 + \lambda_1^2 (1 - \rho^2)\bigr) \hat{b}_2 + \rho \lambda_1 \lambda_2 \hat{b}_1 \end{pmatrix}. \tag{15}$$

When $\rho = 0$, $\kappa_\Omega$ reduces to a diagonal matrix, and we recover the classical shrinkage factors $\kappa_i = \frac{1}{1 + \lambda_i^2}$ by taking the diagonal. The correlation parameter $\rho$ controls the degree of interaction between the local scales $\lambda_1$ and $\lambda_2$. As $\rho$ increases, the posterior means of $b_1$ and $b_2$ become more entangled, and shrinkage toward zero weakens unless the MLEs are already near zero. This also reflects a global pooling effect, where coefficients are increasingly pulled toward each other. Hence posterior estimates are pulled not only toward zero, but also toward one another. Importantly, the introduction of correlation through $\Omega$ does not eliminate shrinkage toward zero. Instead, it makes shrinkage less aggressive in the direction of differences between MLEs, especially when both estimates are large in magnitude. The correlation parameter $\rho$ controls a trade-off between promoting sparsity (via local shrinkage through $\lambda_i$) and encouraging similarity across coefficients (via pooling). Full shrinkage remains possible due to the continued flexibility of the local scales $\lambda_i$. While shrinkage factors offer a local, coefficient-wise view of regularization, a more global perspective is provided by the effective number of parameters, $m_{\mathrm{eff}}$, introduced by Piironen and Vehtari (2017) and defined as

$$m_{\mathrm{eff}} = \sum_{i=1}^{p} (1 - \kappa_i). \tag{16}$$

This quantity serves as a measure of the model's effective dimensionality, satisfying $m_{\mathrm{eff}} \le p$, and reflects how the combination of local and global shrinkage affects model complexity. When prior information or domain expertise suggests a plausible number of active predictors, visualizing the prior distribution of $m_{\mathrm{eff}}$ under different hyperparameter settings offers practical guidance for prior elicitation (Piironen and Vehtari, 2017; Aguilar and Bürkner, 2023). The user can rely on $m_{\mathrm{eff}}$ to match the expected model complexity to domain knowledge without needing to examine each $\lambda_i$ individually. This approach is especially informative in high-dimensional regimes, where balancing flexibility and parsimony is critical. We extend this definition to our dependency-aware shrinkage prior by using the matrix-valued shrinkage factor $\kappa_\Omega$, leading to

$$m_{\mathrm{eff}} = \mathrm{tr}(I - \kappa_\Omega). \tag{17}$$

This extension preserves the original interpretation: the trace still quantifies the influence of the data on the posterior mean, but now incorporates the dependence structure among coefficients induced by $\Omega$. In other words, $m_{\mathrm{eff}}$ continues to represent an effective model size, but under a prior where shrinkage is applied not just individually, but collectively. This generalization is particularly meaningful because it reflects how correlation alters the balance between sparsity (zero shrinkage) and similarity (pooling). Notably, we recover the classical definition when $\Omega = I$.

Fig 4. Implied priors of the effective number of parameters (Equation (17)) for different shrinkage priors (rows) and correlations (columns). We have used the default hyperparameters for the different shrinkage priors, set $p = 100$ covariates, $\sigma = 1$, and $\Omega = (1 - \rho)I + \rho JJ'$, where $J$ is a $p$-dimensional vector of ones.

The posterior distribution of $m_{\mathrm{eff}}$ serves as a useful diagnostic to monitor how the model adapts to the data.
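A compact check, with $\tau = \sigma = 1$, $X = I$, and illustrative values for the local scales and MLEs, that the matrix shrinkage factor (14) reproduces the componentwise posterior mean (15), and that the generalized $m_{\mathrm{eff}}$ of (17) stays within $[0, p]$:

```python
# Sketch: matrix shrinkage factor (14), the bivariate mean (15), and the
# effective number of parameters (17), with tau = sigma = 1 and X = I.
# The lambda values and MLEs are illustrative.
import numpy as np

l1, l2, rho = 1.5, 0.4, 0.7
D = np.diag([l1, l2])
Omega = np.array([[1.0, rho], [rho, 1.0]])
M = D @ Omega @ D                      # tau^2 D Omega D with tau = 1

b_hat = np.array([2.0, -1.0])          # with X = I, the MLE is y itself
kappa = np.eye(2) - M @ np.linalg.inv(M + np.eye(2))   # Eq. (14), (X'X)^{-1} = I
mean_matrix = (np.eye(2) - kappa) @ b_hat

# Eq. (15), written out componentwise
den = 1 + l1**2 + l2**2 + l1**2 * l2**2 * (1 - rho**2)
mean_explicit = np.array([
    (l1**2 * (1 + l2**2 * (1 - rho**2)) * b_hat[0] + rho * l1 * l2 * b_hat[1]) / den,
    (l2**2 * (1 + l1**2 * (1 - rho**2)) * b_hat[1] + rho * l1 * l2 * b_hat[0]) / den,
])
print(np.allclose(mean_matrix, mean_explicit))  # True

# Effective number of parameters, Eq. (17); equals sum(1 - kappa_i) at rho = 0
m_eff = np.trace(np.eye(2) - kappa)
print(0.0 < m_eff < 2.0)  # True
```

Since the eigenvalues of $\tau^2 D_\lambda \Omega D_\lambda (\tau^2 D_\lambda \Omega D_\lambda + I)^{-1}$ lie in $[0, 1)$, the trace in (17) is automatically bounded by $p$, mirroring the classical case.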
Comparing posterior expectations of $m_{\mathrm{eff}}$ across different prior choices (e.g., independent vs. correlated shrinkage) helps quantify how strongly the priors influence model complexity in practice, and to what extent the dependencies in $\Omega$ are effectively "used" in posterior inference. We illustrate the prior distribution of $m_{\mathrm{eff}}$ for various shrinkage priors in Figure 4. For these simulations, we fixed $p = 100$, used $X = I$, and employed standard hyperparameters for each prior. The correlation structure was specified as $\Omega = (1 - \rho)I + \rho JJ'$, where $J$ is a $p$-dimensional vector of ones, and we varied $\rho$ to assess the impact of correlation. As $\rho$ increases, the effective model size $m_{\mathrm{eff}}$ grows, indicating less global shrinkage. This is due to the increased dependence among the coefficients, which results in more pooling across them. However, this effect can be moderated by adjusting the hyperparameters responsible for shrinkage, such as increasing concentration near zero, which would counterbalance the growth in effective model size induced by the correlation.

2.3. Specification of dependence structures

A significant challenge in the use of model (3) is the specification of the matrix $\Omega$. We discuss our preferred approach to specifying $\Omega$; potential alternative methods are detailed in the Discussion section. It is possible to extract correlational information about the regression coefficients $b$ directly from the design matrix $X$. Since $X$ encodes information about how the predictors relate to each
other, we can model the dependencies between the regression coefficients $b$ based on the structure of $X$. A well-known example of this approach is Zellner's $g$-prior (see Section 2.2.1), which achieves this by setting the prior covariance matrix of the coefficients proportional to the covariance of the MLE, i.e., $\mathrm{Cov}(b \mid \sigma, g) = \sigma^2 g (X'X)^{-1}$, where $g > 0$. An alternative interpretation of Zellner's prior is that it uses a covariance structure for $b$ that is proportional to the true covariance matrix of the predictors, $\Sigma_X$. This is because when $n$ is sufficiently large, the unbiased estimator of the covariance of $X$ is approximately $X'X/n$, which converges to $\Sigma_X$ as $n \to \infty$ with $p$ fixed.

Inspired by Zellner's prior, we propose a generalization of shrinkage priors that incorporates the structure of the design matrix $X$, aligning with the prior structure of our proposed model in Equation (3). Specifically, we derive the dependence structure $\Omega$ directly from $X$, using its covariance matrix $\Sigma_X$ to inform the relationships between the regression coefficients. Let $\mathrm{Cov}(X) = \Sigma_X$ denote the true covariance matrix of $X$. To define $\Omega$, we first compute the inverse of $\Sigma_X$, i.e., the precision matrix of $X$, which we denote by $\Theta_X = \Sigma_X^{-1}$. We then standardize $\Theta_X$ to obtain a correlation matrix, which we use as $\Omega$. Explicitly, we write

$$\Omega = \mathrm{Cor}(\Theta_X) = \mathrm{Cor}\bigl(\Sigma_X^{-1}\bigr), \tag{18}$$

where the $\mathrm{Cor}$ operator standardizes the matrix by dividing each off-diagonal entry by the square root of the product of the corresponding diagonal entries. This results in a matrix $\Omega$ that encodes the correlation structure between the regression coefficients, reflecting the dependencies among them based on the design matrix $X$. This leads to the following prior covariance for $b$:

$$\mathrm{Cov}(b \mid \lambda, \tau, \sigma) = \sigma^2 \tau^2 D_\lambda \mathrm{Cor}\bigl(\Sigma_X^{-1}\bigr) D_\lambda. \tag{19}$$

This choice of $\Omega$ has several key interpretations:

1. Relationship to the partial precisions. Let $\Psi_X$ denote the partial precision matrix of $X$ (Lauritzen, 1996; Lam, 2020). Under our formulation, we have $\Omega_{ij} = -\Psi_{ij}$ for $i \neq j$,
which implies that $\Omega_{ij}$ encodes the conditional correlation between $b_i$ and $b_j$ after adjusting for all other variables. Since $b$ follows a conditionally multivariate Gaussian distribution, whenever $\Omega_{ij} = 0$, the coefficients $b_i$ and $b_j$ are conditionally uncorrelated given all other variables (Giraud, 2014). This structure allows us to model the dependencies between the regression coefficients directly from the design matrix. The relationship to partial precision matrices emphasizes how $\Omega$ reflects conditional dependencies between coefficients, which is critical for understanding the underlying structure in the regression model.

2. Prior-likelihood agreement. The conditional posterior mean of $b \mid \sigma, \tau, \lambda$ (see Equation (7)) suggests that aligning the prior correlation structure of $\Omega$ with the correlation structure implied by $(X'X)^{-1}$ enhances the coherence between the prior and the likelihood (Zellner, 1996; O'Hagan and Pericchi, 2012). Specifically, both the prior and the likelihood would then imply the same posterior correlation structure. In principle, this alignment should reduce prior-likelihood conflict and facilitate inference.

3. Imaginary observations. Another interpretation of this prior structure is in terms of imaginary or pseudo-observations. Specifically,
the model can be viewed as augmenting the data with additional pseudo-observations that share the same design matrix $X$, but with response values set to zero (Zellner, 1986). These pseudo-observations are scaled by the coefficients $\lambda_i$ and $\tau$, which determine their relative weight. Smaller values of $\lambda_i^2 \tau^2$ correspond to more pseudo-observations for the $i$th coefficient, resulting in a stronger prior, while larger values weaken the prior by downweighting the pseudo-data. This perspective emphasizes how the prior adjusts based on the observed data.

In practice, the true covariance matrix $\Sigma_X$ is typically unknown. Instead, it must be estimated from the observed design matrix $X$, which can be particularly challenging when $p > n$. We discuss methods for estimating $\Sigma_X$ in Section 2.4. By using a suitable estimator for $\Sigma_X$, we can proceed with standard Bayesian inference. Importantly, since the entire regression model is conditioned on $X$, allowing the prior to depend on it poses no issues in terms of consistency or model specification.

2.4. Estimation of covariance matrices

In practice, $\Sigma_X$ is rarely known and must be estimated from the design matrix $X$. When $p < n$ and $X$ is of full rank, a natural approach is to proceed in a way that mirrors Zellner's $g$-prior. Specifically, we propose setting the correlation structure implied by $X$ as

$$\Omega = \mathrm{Cor}\bigl(S^{-1}\bigr), \tag{20}$$

where $S = \frac{1}{n-1} X'X$ is the sample estimator of $\Sigma_X$, which is unbiased, i.e., $E(S) = \Sigma_X$ (Anderson, 2003). This choice assumes that the design matrix $X$ contains sufficient information to estimate the relationships between the predictors and therefore the dependence structure of the regression coefficients (Zellner, 1986; George and McCulloch, 1993). When $X$ is not of full rank or when $p > n$, a different approach is required. To illustrate why, consider the sample covariance estimator $S$ of $\Sigma_X$. It is known that this estimator performs poorly when $p$ is large relative to $n$. In particular,
when $p/n \to c \in (0, \infty)$ as both $p$ and $n$ grow, $S$ becomes an inconsistent estimator of $\Sigma_X$ (Ledoit and Wolf, 2004a,b; Lam, 2020; Oriol and Miot, 2025). Even in the simple case where $\Sigma_X = I$, the empirical distribution of the eigenvalues of $S$ will not converge to a point mass at 1, as expected. Instead, it will follow the Marčenko–Pastur distribution (Marčenko and Pastur, 1967; Bai and Silverstein, 2010; Wainwright, 2019). This phenomenon highlights the need for a more robust approach to approximating $\Sigma_X$. Among the wide range of methods proposed for estimating $\Sigma_X$ in high-dimensional settings, we focus on approaches that (i) do not impose structural assumptions for consistency guarantees and (ii) prevent the presence of diverging eigenvalues as $n, p \to \infty$. This ensures that our approach remains as general as possible while minimizing the burden on users, who would otherwise need to specify structural assumptions on $\Sigma_X$ or provide prior information about the data. Shrinkage covariance estimation, in particular, provides well-conditioned estimates and has demonstrated significant empirical improvements, even in complex settings (Lam, 2020;
Ledoit and Wolf, 2020). A key advancement in this area is the linear shrinkage estimator introduced by Ledoit and Wolf (2004b), given by

Σ* = φ₁I + φ₂S = (β²/δ²)µI + (α²/δ²)S, (21)

where δ² = α² + β², and Σ* minimizes the expected quadratic loss E∥Σ̃ − Σ_X∥²_F subject to Σ̃ = φ₁I + φ₂S, with respect to nonrandom coefficients φ₁, φ₂. Here, ∥·∥_F denotes the scaled Frobenius norm, i.e., ∥A∥²_F = tr(A′A)/p. The shrinkage parameters in Equation (21) are computed as follows:

µ = tr(S)/p, α² = ∥Σ_X − µI∥²_F, β² = E∥S − Σ_X∥²_F. (22)

Since Σ_X is unknown, consistent estimators for µ, α², and β² must be used to construct a consistent estimator S* of Σ*. The explicit form of S* is provided in Appendix 5; see also Ledoit and Wolf (2004b) for derivation details.

One key advantage of the Ledoit-Wolf estimator S* is that it does not require computationally expensive procedures, such as cross-validation or numerical optimization. Moreover, Ledoit and Wolf (2004b) demonstrate that the empirical counterpart S* is consistent under general asymptotics, where both p and n grow at a proportional rate (Girko, 1992; Silverstein, 1995), retaining the same properties as Σ*. Furthermore, this estimator remains optimal in terms of minimizing the expected Frobenius loss, regardless of the distribution of the design matrix X (Ledoit and Wolf, 2004b, 2020; Lam, 2020). The resulting covariance matrix is always positive definite and well-conditioned, even in settings where p ≫ n. Crucially, S* preserves the eigenvectors of the sample covariance matrix S while shrinking the eigenvalues toward a multiple of the identity matrix (Ledoit and Wolf, 2004a,b). Although there are extensions of this method, such as shrinkage toward alternative target matrices or combining linear shrinkage with sparsity constraints, these typically introduce additional hyperparameters that require careful tuning (Ledoit and Wolf, 2020; Lam, 2020; Oriol and Miot, 2025).
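The estimation pipeline of Equations (20) through (23) can be sketched end to end in numpy. This is an illustrative implementation under stated assumptions (demeaning and 1/n scaling inside the Ledoit-Wolf step, simulated design matrices), not the authors' code:

```python
import numpy as np

def ledoit_wolf(X):
    """Linear shrinkage S* = (b2/d2) m I + (a2/d2) S (Equations (21)-(22));
    plug-in quantities follow Proposition 1 in the appendix. Demeaning and
    the 1/n scaling are assumptions of this sketch."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    sq = lambda A: np.trace(A @ A.T) / p          # scaled squared Frobenius norm
    m = np.trace(S) / p                           # m_n
    d2 = sq(S - m * np.eye(p))                    # d_n^2
    b2 = min(sum(sq(np.outer(x, x) - S) for x in Xc) / n**2, d2)   # b_n^2
    return (b2 / d2) * m * np.eye(p) + (1 - b2 / d2) * S

def cor_of_inverse(M):
    """Cor(M^{-1}): invert, then rescale to unit diagonal."""
    M_inv = np.linalg.inv(M)
    d = 1.0 / np.sqrt(np.diag(M_inv))
    return d[:, None] * M_inv * d[None, :]

def build_omega(X):
    """Equation (20) when p < n, Equation (23) otherwise."""
    n, p = X.shape
    if p < n:
        S = X.T @ X / (n - 1)                     # sample covariance, as in the text
        return cor_of_inverse(S)
    return cor_of_inverse(ledoit_wolf(X))

rng = np.random.default_rng(0)
for n, p in [(100, 5), (20, 50)]:                 # low- and high-dimensional cases
    Omega = build_omega(rng.standard_normal((n, p)))
    assert np.allclose(np.diag(Omega), 1.0)       # valid correlation matrix
    assert np.linalg.eigvalsh(Omega).min() > 0    # positive definite
```

In both regimes the resulting Ω has unit diagonal and is positive definite; the shrinkage step is what keeps the inverse well defined when p > n.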
Given our goal of minimizing the user's burden, we focus on the standard Ledoit-Wolf estimator. Consequently, in practice we compute Ω as

Ω = Cor((S*)⁻¹). (23)

2.5. Workflow of the Method

We briefly summarize the practical steps for applying a dependency-aware shrinkage prior. The procedure begins by selecting a standard shrinkage prior. If the covariance matrix Σ_X is known or specified by the user, the dependence matrix Ω is computed directly via Equation (18). When Σ_X is unknown and p < n, Ω is constructed using the sample covariance matrix as per Equation (20). In high-dimensional settings (p > n) or when X is ill-conditioned, we recommend estimating Σ_X using the Ledoit-Wolf shrinkage estimator S*, Equation (21), and computing Ω via Equation (23). This workflow integrates dependence information into the prior while maintaining numerical stability. We adopt this procedure in the experiments presented in Section 3.

3. Experiments

3.1. Simulations

We conducted simulation studies to evaluate the impact of incorporating dependency structures in shrinkage priors on model performance. Specifically, we examined whether encoding dependence via correlation matrices, as discussed in Section 2, influences parameter recovery and predictive performance. Our simulation conditions reflect realistic scenarios encountered in practice. Additionally, we analyze real-world datasets commonly used as benchmarks for dependence-structured models. This allows us to compare commonly observed empirical
covariance structures with those assumed in our simulations.

We considered the following shrinkage priors: Beta Prime (BP), Dirichlet-Laplace (DL), Horseshoe (HS), Regularized Horseshoe (RHS), and R2D2 (D2) (Bai and Ghosh, 2019; Carvalho et al., 2010; Bhattacharya et al., 2015; Piironen and Vehtari, 2017; Zhang et al., 2020). These priors have been widely adopted in Bayesian modeling over the last decade due to their strong theoretical properties and empirical performance. While this list is not exhaustive, it adequately represents the main approaches explored in shrinkage prior research. Given that we are interested in the sparse scenario, we have used default hyperparameters that imply strong shrinkage, as recommended in the literature. See Appendix 5 for details.

All models were implemented in Stan (Carpenter et al., 2017; Stan Development Team, 2024), which employs an adaptive Hamiltonian Monte Carlo (HMC) sampler known as the No-U-Turn Sampler (NUTS) (Neal, 2011; Brooks et al., 2011; Hoffman and Gelman, 2014) to sample draws from posterior distributions. The associated data and code are available at https://osf.io/fuean/.

3.1.1. Evaluation Metrics

To assess the effect of incorporating correlation structures into shrinkage priors, we compare models with and without dependence structures. Let M denote a model with a standard shrinkage prior and M_Ω the same model incorporating a correlation matrix Ω. Since the only difference between these models is the inclusion of Ω, any variation in performance can be attributed to its presence. For a given quantity of interest Q, we define the change in performance as ∆Q = Q(M_Ω) − Q(M). We evaluate model performance using two primary metrics: out-of-sample predictive performance and parameter recovery (Robert, 2007; Vehtari and Ojanen, 2012).
Out-of-Sample Predictive Performance. Predictive accuracy was assessed using the expected log-pointwise predictive density (ELPD) (Vehtari and Ojanen, 2012; Vehtari et al., 2016), computed as

ELPD = Σ_{i=1}^{N_new} ln( (1/S) Σ_{s=1}^{S} p(y_new,i | θ^(s)) ),

where θ^(s) represents the sth draw from the posterior distribution p(θ|y) for s = 1, ..., S. The ELPD quantifies predictive performance across a set of N_new observations unseen during model training, with higher values indicating better accuracy (Bernardo and Smith, 2009; Vehtari and Ojanen, 2012). Since log-probability scores are widely recommended as a default Bayesian evaluation metric, the ELPD provides a robust benchmark for model comparison (Bernardo and Smith, 2009; Vehtari et al., 2016).

Parameter Recovery. We assess parameter recovery by computing the posterior Root Mean Squared Error (RMSE) for the regression coefficients b, defined as

RMSE = (1/K) Σ_{k=1}^{K} sqrt( (1/S) Σ_{s=1}^{S} (b_k^(s) − b_k)² ),

where b_k^(s) is the sth posterior draw for coefficient b_k, and b_k denotes its true value. RMSE provides a global measure of estimation error, naturally capturing the bias-variance tradeoff (Robert, 2007; Bernardo and Smith, 2009). To gain deeper insights, we compute three variations of RMSE: 1) overall RMSE, averaged across all coefficients; 2) RMSE for truly zero coefficients, measuring the ability to shrink the effect of irrelevant predictors; and 3) RMSE for truly nonzero coefficients, assessing how well true signals are recovered.

Coverage. We examine coverage properties by using
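Both metrics are simple functions of the posterior draws. A minimal numpy sketch (the toy draw matrices are hypothetical; the log-mean-exp trick is used for numerical stability):

```python
import numpy as np

def elpd(log_lik):
    """ELPD from an (S, N_new) matrix of pointwise log-likelihoods
    ln p(y_new_i | theta^(s)): log-mean-exp over draws, summed over points."""
    m = log_lik.max(axis=0)
    return float((m + np.log(np.exp(log_lik - m).mean(axis=0))).sum())

def posterior_rmse(draws, b_true):
    """Posterior RMSE averaged over coefficients; draws has shape (S, K)."""
    return float(np.sqrt(((draws - b_true) ** 2).mean(axis=0)).mean())

# Hand-checkable toy example: two draws, two coefficients.
draws = np.array([[1.0, 0.0],
                  [3.0, 0.0]])
b_true = np.array([2.0, 0.0])
# Per-coefficient RMSEs are 1 and 0, so the average is 0.5.
assert np.isclose(posterior_rmse(draws, b_true), 0.5)
# With all pointwise likelihoods equal to 0.5, ELPD = ln(0.5).
assert np.isclose(elpd(np.log(np.full((2, 1), 0.5))), np.log(0.5))
```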
95% marginal credibility intervals. We report the average coverage proportion, average width, sensitivity, specificity (power), and coverage of nonzero coefficients (Neyman et al., 1933; Berger, 1985; Benjamini and Hochberg, 1995). We also show Receiver Operating Characteristic (ROC) curves to understand how coverage properties change as we modify the size of the credibility interval (Bradley, 1997). See Appendix 5 for these results.

Convergence Diagnostics. To ensure reliable posterior inference, we report two key MCMC diagnostics: 1) the R̂ statistic (Vehtari et al., 2021), which compares within-chain and between-chain variance, with values close to 1.0 indicating convergence; and 2) the Effective Sample Size (ESS) (Geyer, 1992), which measures the number of effectively independent samples. Higher ESS values indicate lower autocorrelation and better mixing (Brooks et al., 2011; Margossian et al., 2024).

3.1.2. Generative Models

We use model (1) as the generative model for our simulations, adapting it to accommodate various types of data encountered in practice, varying sparsity levels, and correlation structures among covariates. The design matrix X is drawn from a zero-mean multivariate normal distribution with covariance Σ_X, generated from one of the following: (1) AR1: autoregressive of order 1; (2) MAq: moving average processes with q ∈ {1, 2}; and (3) BAR1 and BMAq: blocked versions using nonoverlapping blocks of size 5 with zero off-block correlations. All structures are parameterized by ρ ∈ {0.5, 0.95}, which governs the correlation strength. See Appendix 5 for details.

We fix n = 100 while varying p ∈ {50, 250} to study both low- and high-dimensional settings. The residual variance σ² is calibrated to yield R²₀ ∈ {0.5, 0.8} (Gelman et al., 2013). We construct Ω using either the true Σ_X or its estimate: the sample covariance S when p < n, and the Ledoit-Wolf shrinkage estimator S̃ when p > n.
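The covariance structures above can be generated directly; a numpy sketch (function names are ours):

```python
import numpy as np

def ar1(p, rho):
    """AR(1): Corr(i, j) = rho^{|i-j|}."""
    idx = np.arange(p)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def ma1(p, rho):
    """MA(1): rho on the first off-diagonals, zero beyond."""
    return np.eye(p) + rho * (np.eye(p, k=1) + np.eye(p, k=-1))

def blocked(base, p, rho, block=5):
    """Blocked variant (BAR1/BMA1): nonoverlapping blocks of size 5,
    zero correlation across blocks."""
    C = np.zeros((p, p))
    for s in range(0, p, block):
        e = min(s + block, p)
        C[s:e, s:e] = base(e - s, rho)
    return C

# Draw a design matrix from N(0, Sigma_X) with a BAR1 structure.
rng = np.random.default_rng(2)
Sigma_X = blocked(ar1, 50, 0.95)
X = rng.multivariate_normal(np.zeros(50), Sigma_X, size=100)
assert Sigma_X[0, 1] == 0.95 and Sigma_X[0, 5] == 0.0   # within vs. across blocks
assert X.shape == (100, 50)
```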
Regression coefficients b_i are generated under two schemes commonly used in the shrinkage prior literature (Carvalho et al., 2010; Griffin and Brown, 2010; Bhattacharya et al., 2015; Zhang et al., 2020). 1) In the Block Coefficients setup, the first and last five entries of b are set to b* ∈ {3, 7}, with all others set to zero. 2) In the Random Block Coefficients setup, the same blocks are sampled from N(0, Σ_b), while the rest remain zero. We consider two versions of Σ_b: (A) diagonal with variance 9, and (B) AR(1) with ρ_b = 0.8 and the same variance. Sparsity is induced by setting each coefficient to zero with probability 0.75. Lastly, we sample the intercept as b₀ ~ N(0, 3).

3.1.3. Results

We focus on the subset of simulations using the blocked versions of the correlation matrix with the true value of Σ_X. The results presented here are representative of the full results, which can be found on OSF (https://osf.io/fuean/).

We hypothesize that our dependency-aware shrinkage priors are most effective when groups (or blocks) of highly correlated predictors collectively carry strong signal, i.e., when several correlated covariates each contribute substantially to predicting the response. In such cases, encoding correlation in the prior should allow for more coherent
regularization across coefficients, improving parameter recovery under structured sparsity. However, a particularly challenging scenario arises when strong signal predictors are highly correlated with irrelevant predictors. Here, the prior faces a tension between two competing goals: promoting sparsity by shrinking noise coefficients to zero, and preserving signal strength by allowing large coefficients to escape shrinkage. This trade-off can lead to suboptimal inference, either by over-shrinking the signal due to its proximity to noise, or by inflating noise variables that inherit mass from nearby signals.

Fig 5. ∆ELPD for each model under the BAR1 structure across simulation scenarios. Model names represent the difference between the dependency-aware and standard versions, where positive values indicate improved predictive performance of the dependency-aware versions. We have omitted the BP prior due to high variability.

Figures 5 and 6 display the results of the BAR1 and BMA1 experiments, respectively, and illustrate the impact of incorporating the dependency matrix Ω on predictive performance. Overall, the findings indicate that predictive performance is not meaningfully improved by introducing dependency structures into shrinkage priors. In most configurations, the distribution of ELPD differences is centered near zero, suggesting that dependency-aware priors perform comparably to their standard counterparts, at least on average.
This pattern is particularly evident in the low-dimensional setting, where even under high correlation and clearly structured signals, the inclusion of Ω offers no measurable advantage. Similarly, in high-dimensional settings, the inclusion of dependency structures does not produce strong predictive improvements across priors and design scenarios. These results imply that standard shrinkage priors are already capable of achieving strong predictive performance and that additional modeling flexibility through dependence structures does not translate into predictive gains. However, a notable observation is that predictive accuracy remains stable across all conditions, indicating that dependency-aware priors do not harm generalization. While gains in prediction are limited, these priors remain useful when the focus shifts to parameter recovery, which we discuss below.

Fig 6. ∆ELPD for each model under the BMA1 structure across simulation scenarios. Model names represent the difference between the dependency-aware and standard versions, where positive values indicate improved predictive performance of the dependency-aware versions. We have omitted the BP prior due to high variability.

Figures 7 and 8 illustrate the impact of incorporating correlational information into the prior on parameter recovery
under the fixed block coefficient setting, for the BMA1 and BAR1 structures, respectively. Figure 7 shows that incorporating dependency information consistently reduces RMSE for nonzero coefficients across all scenarios. Notably, improvements appear even in the most challenging setting: high dimensionality (p = 250), low signal (R² = 0.2), and strong correlation (ρ = 0.95). When the signal is stronger (R² = 0.8), and thus noise is reduced, dependency-aware shrinkage priors still yield substantial improvements in parameter recovery. These results support the hypothesis that when Ω accurately reflects the groupwise structure of the relevant predictors, dependency-aware priors facilitate more effective signal recovery.

Fig 7. ∆RMSE for each model under the BMA1 structure with fixed block signals. Rows correspond to all coefficients, nonzero (signal) coefficients, and zero (noise) coefficients. Model names represent the difference between the dependency-aware and standard versions, where negative values indicate improved parameter recovery of the dependency-aware versions. Dependency-aware priors show consistent gains for signal coefficients, while differences for noise coefficients are more modest.

Figure 8 focuses on recovery of nonzero coefficients under the BAR1 structure. We limit the display to signal coefficients due to space constraints, but the results for all and zero coefficients mirror the patterns observed under BMA1. Contrary to BMA1, however, recovery of nonzero coefficients for BAR1 is not improved, and in fact, RMSE worsens uniformly across conditions.

The difference in performance between BMA1 and BAR1 can be attributed to the structure of the prior correlation matrix Ω. While both scenarios assign strong signals to groups of correlated predictors, the implied Ω matrices differ in how they encode these relationships. In BAR1, Ω has a tridiagonal block structure, inducing local dependencies that connect only neighboring predictors within each block. In contrast, BMA1 produces blocks in which all predictors within a block are strongly correlated with one another, resulting in a dense (possibly high) correlation pattern. This richer within-block structure in Ω leads to stronger prior correlations among the nonzero coefficients, which enhances parameter recovery in the presence of structured signals. These results suggest that the effectiveness of dependency-aware priors depends not just on the presence of correlated signals, but on how exactly the signals are correlated.

3.2. Real world case studies

We evaluate how incorporating covariate dependence structures affects the predictive performance of shrinkage priors using two real high-dimensional datasets.
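The dependence in such datasets is conveniently summarized by the distribution of pairwise covariate correlations (as in Figure 9). A generic numpy sketch with simulated stand-in data, since the real datasets ship with R packages:

```python
import numpy as np

# Simulated stand-in for a design matrix with substantial covariate dependence.
rng = np.random.default_rng(5)
n, p = 120, 20
Sigma = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)    # equicorrelated example
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Pairwise covariate correlations: upper-triangle entries of Cor(X).
R = np.corrcoef(X, rowvar=False)
pairwise = R[np.triu_indices_from(R, k=1)]         # p(p-1)/2 unique pairs
assert pairwise.shape == (p * (p - 1) // 2,)
assert np.all(np.abs(pairwise) <= 1)
```

Histogramming `pairwise` reproduces the kind of summary shown in Figure 9.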
The first dataset, Cereal, contains starch content measurements for 15 samples, each with 145 infrared spectral predictors, and is provided by the R package chemometrics (Filzmoser, 2023). The second dataset, Eye,
includes gene expression levels for 20 genes across 120 samples obtained from microarray experiments on mammalian eye tissue (Scheetz et al., 2006). It is available in the R package flare (Li et al., 2024). Figure 9 displays histograms of the pairwise correlations among covariates. We refrain from displaying the full correlation matrices due to the high dimensionality of the covariate space.

Fig 8. ∆RMSE for each model under the BMA1 structure with fixed block signals. Only nonzero coefficients are shown. Model names represent the difference between the dependency-aware and standard versions, where negative values indicate improved parameter recovery of the dependency-aware versions. Improvements are limited and parameter recovery often worsens across conditions.

To assess out-of-sample predictive performance, we use the ELPD; see Section 3.1.1 for details. As an independent test set is not available, we rely on exact leave-one-out cross-validation (LOO-CV) (Vehtari et al., 2016). For each fold, the ELPD is computed on the held-out observation and then aggregated across all folds to obtain an overall estimate. We compare the performance of the Horseshoe, Regularized Horseshoe, Dirichlet-Laplace, and R2D2 priors, along with their dependency-aware variants (Carvalho et al., 2010; Piironen and Vehtari, 2017; Bhattacharya et al., 2015).

We summarize the results in Table 1, which reports ELPD differences and standard deviations computed via pairwise comparisons against the best-performing model for each dataset. Negative values indicate lower predictive accuracy relative to the best model.
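The pairwise comparisons reported in Table 1 follow the usual pointwise construction: pointwise ELPD differences are summed, and the standard error is sqrt(N) times the standard deviation of the pointwise differences (Vehtari et al., 2016). A sketch with hypothetical numbers:

```python
import numpy as np

def elpd_diff(elpd_a, elpd_b):
    """Pairwise ELPD comparison from pointwise values (one per held-out fold):
    summed difference and sqrt(N) * sd of the pointwise differences."""
    d = np.asarray(elpd_a) - np.asarray(elpd_b)
    return float(d.sum()), float(np.sqrt(len(d)) * d.std(ddof=1))

# Hypothetical pointwise ELPDs for two models over N = 4 LOO folds.
a = np.array([-1.0, -2.0, -1.5, -0.5])
b = np.array([-1.2, -2.5, -1.4, -1.0])
diff, se = elpd_diff(a, b)
assert np.isclose(diff, 1.1) and se > 0
```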
Across real-world correlation structures, a consistent pattern emerges: standard shrinkage priors often outperform their dependency-aware counterparts, even in settings with substantial predictor dependence (see Figure 9). While one might expect that explicitly modeling such correlations would enhance predictive performance, the empirical results suggest that standard priors already adapt well to complex structures without requiring explicit dependence modeling.

Fig 9. Histograms of pairwise correlations among covariates across the datasets used in the case study.

Table 1. Differences in ELPD and standard deviations for the datasets, computed via pairwise comparisons with the model having the highest ELPD (first column). The initial value is zero, and subsequent columns display negative values that indicate the difference with the best model (Vehtari et al., 2023). See Vehtari et al. (2016) for details on standard error calculations. Models ending in "O" denote dependency-aware variants.

Cereal, (n, p) = (15, 145):
DLO 0.0 (0.0) | DL -4.5 (2.9) | D2 -4.9 (3.4) | D2O -8.7 (5.6) | HS -17.7 (11.5) | RHS -20.1 (12.5) | HSO -20.7 (11.6) | RHSO -27.7 (13.4)

Eye, (n, p) = (20, 120):
HS 0.0 (0.0) | D2 -2.5 (2.3) | RHS -2.6 (1.1) | HSO -5.8 (1.5) | RHSO -10.0 (1.6) | D2O -12.7 (2.3) | DL -14.2 (4.6) | DLO -22.4 (5.2)

4. Discussion

We propose an extension to the continuous global-local shrinkage prior framework that allows for the inclusion of dependence structures via a correlation matrix Ω. Specifically, we move from the standard independent prior on each coefficient,

b_i | λ_i, τ, σ ~ N(0, λ_i²τ²σ²),

to a multivariate prior

b | σ, τ, λ ~ N(0, σ²τ² D_λ Ω D_λ),

where D_λ is a diagonal matrix containing the local scales λ_i, and Ω is a correlation matrix encoding dependencies among coefficients.

Importantly, our construction imposes no additional constraints on the distributions of the local scales λ_i, the global scale τ, or their hyperparameters. This preserves the full flexibility of the original shrinkage prior framework. It enables a multivariate generalization of continuous shrinkage priors such as the Horseshoe, without restricting their marginal heavy-tailed behavior or imposing constraints on finite-moment conditions, as was necessary in some earlier approaches (Griffin et al., 2024).

In the absence of knowledge about the true dependence structure, we propose estimating Ω using information from the design matrix X, following ideas akin to Zellner's g-prior. When p < n, we use the empirical covariance matrix of X. In the high-dimensional case (p > n), where this estimator becomes unstable, we adopt an automated shrinkage-based estimator that is computationally efficient and enables a fully automated specification of Ω.

Our simulation studies yield mixed results regarding the benefits of incorporating dependence structures. While the intuition that modeling correlations can improve performance is appealing, our results indicate that the benefits are scenario-specific. In particular, improvements in parameter recovery emerge when groups of highly correlated predictors align with the true signal structure.
We attribute this to the regularization effect of Ω on the joint prior for b, which tends to mitigate excessive shrinkage of correlated signals. However, when the signal structure does not reflect the correlation pattern, such as in scenarios with randomly distributed nonzero coefficients, this alignment is lost, and the incorporation of Ω offers no meaningful advantage. Importantly, we find that predictive performance remains stable even when Ω does not match the signal, suggesting that dependency-aware priors can be safely employed without risk of degradation. If a practitioner is concerned about the limited shrinkage induced by incorporating Ω, hyperparameters of the prior distribution can be adjusted to compensate, and the exerted shrinkage can be quantified via the distribution of the effective size of the model. It is worth noting that, for the same set of hyperparameters, dependency-aware variants generally require more computational time to fit.

We emphasize that our approach to incorporating dependency structures via Ω in the shrinkage prior is only one possible strategy for modeling dependencies. Alternatives include placing a prior directly over Ω or requiring the user to specify it manually. Placing a prior over Ω results in a highly overparameterized model, with p(p−1)/2 additional parameters to estimate, increasing data requirements and computational burden. Manual specification shifts a heavy burden onto the user, requiring explicit
modeling of p(p−1)/2 correlations, an impractical task in most real-world applications. For shrinkage priors to remain practically useful, it is essential that their complexity remains manageable, ideally governed by a small number of hyperparameters, which is satisfied by our approach.

4.1. Conclusion

Our study pursued two specific objectives: first, to develop a streamlined, automated procedure for introducing dependence structures into shrinkage priors; second, to evaluate whether doing so yields tangible practical benefits. The latter question is often overlooked in the literature, where methodological innovations are proposed without systematically investigating whether they meaningfully improve inference or prediction.

Ultimately, while the incorporation of dependence structures into shrinkage priors is theoretically appealing, our results suggest that in many practical settings, the added complexity is difficult to justify. Standard shrinkage priors, with their simplicity, robustness, and strong predictive performance, often provide a better balance between model flexibility and computational efficiency. We therefore recommend that practitioners prioritize simpler formulations unless there is strong prior evidence of structured correlations among predictors. In short, dependence modeling should be seen not as a default strategy, but as a targeted tool: useful when needed, but costly when applied indiscriminately.

Acknowledgments

Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project 500663361. We acknowledge the computing time provided on the Linux HPC cluster at Technical University Dortmund (LiDO3), partially funded in the course of the Large-Scale Equipment Initiative by DFG Project 271512359.
Paul-Christian Bürkner further acknowledges support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via the Collaborative Research Center 391 (Spatio-Temporal Statistics for the Transition of Energy and Transport), Project 520388526. The authors would like to thank Yuexi Wang, Luna Fazio, and David Kohns for their thoughtful comments and discussion on earlier versions of the manuscript.

5. Appendix

5.1. Shrinkage priors

We consider the following shrinkage priors, with default hyperparameter settings detailed below. Following the recommendations of Gelman (2006), we place a Half Student-t prior on the error scale parameter σ, with ν degrees of freedom and scale parameter η. Consistent with the approach of Bürkner (2017), we set η ≈ sd(y), as both the prior mean and variance are proportional to η.

5.1.1. Beta Prime

The Beta Prime (BP) prior is specified via:

b_j | λ_j, σ ~ N(0, λ_j²σ²), λ_j² ~ BetaPrime(α, β), σ ~ p(σ),

where BetaPrime(α, β) has density proportional to λ^(α−1)(1 + λ)^(−α−β) (Bai and Ghosh, 2019, 2021). The suggested default choice is α ≤ 0.5, which generates unbounded marginal distributions for the coefficients. We set α ~ Gamma(1, 2) and β = 1.

5.1.2. Dirichlet Laplace

The Dirichlet Laplace (DL) prior induces global-local shrinkage through the following hierarchy:

b_j | ψ_j, ϕ, τ, σ ~ N(0, ψ_j(ϕ_jτ)²σ²), ψ_j ~ Exp(1/2), ϕ = (ϕ₁, ..., ϕ_p) ~ Dirichlet(a_π, ..., a_π), τ ~ Gamma(na_π, 1/2), σ ~ p(σ),

where Exp(1/2) denotes the exponential distribution with rate 1/2. We set a_π = 0.5 as suggested in the simulation study by Bhattacharya et al. (2015).

5.1.3. Horseshoe
The Horseshoe prior is defined hierarchically as:

b_j | λ_j, τ, σ ~ N(0, σ²λ_j²τ²), λ_j ~ C⁺(0, 1), τ ~ C⁺(0, 1), σ ~ p(σ),

where C⁺(0, 1) denotes the half-Cauchy distribution with unit scale (Carvalho et al., 2010). Notice that the Horseshoe prior does not possess hyperparameters other than the one present in the distribution of σ.

5.1.4. Normal-Gamma

The Normal-Gamma prior proposed by Griffin and Brown (2010) is specified as follows:

β_i | ψ_i, σ ~ N(0, σ²ψ_i²), ψ_i² | λ, γ ~ Gamma(λ, 1/(2γ²)), ν = 2λγ², ν | λ ~ InverseGamma(2, M), λ ~ Exponential(1).

Griffin and Brown (2010) argue that centering the prior for λ around 1 introduces variability around the Bayesian LASSO prior. In this specification, ν has expectation M, which serves as an informed guess for the squared ℓ₂-norm of b. Following this recommendation, we set

M = (1/p) Σ_{i=1}^{p} b̂_i²,

where b̂ is the maximum likelihood estimator (MLE) of b when X has full rank, and the minimum-length least squares estimator when p > n (Brown and Brown, 1994). Setting λ = 1/2, M = 1, and 1/ν ~ Gamma(1/2, 1) results in the Horseshoe prior (Griffin and Brown, 2021).

5.1.5. R2D2

The R2D2 prior places a distribution on the proportion of explained variance R², and allocates the total prior variance ω² among coefficients using a Dirichlet distribution (Zhang et al., 2020). Specifically, a Beta prior is assigned to R², which induces a Beta Prime prior on ω² with parameters a₁, a₂ > 0. The total variance ω² is decomposed across coefficients using proportions ϕ = (ϕ₁, ..., ϕ_p), where ϕ ~ Dirichlet(α) and λ_i² = ϕ_iω². Originally, Zhang et al. (2020) specified a Double Exponential prior for the coefficients, while Aguilar and Bürkner (2023) proposed a Normal prior. Following the recommendations of Aguilar and Bürkner (2023), we adopt the Normal prior specification and set the hyperparameters α = (a_π, ..., a_π) with a_π = 0.25 and a₂ = 0.5 to ensure unbounded marginal distributions and heavy-tailed behavior.
Aguilar and Bürkner (2025) show that other decompositions of the variance are available; however, we have only considered the Dirichlet decomposition in our experiments. The full hierarchical structure of the R2D2 prior is:

b_j | λ_j², σ² ~ N(0, σ²λ_j²), λ_j² = ϕ_jω², ϕ ~ Dirichlet(a_π, ..., a_π), ω² ~ BetaPrime(a₁, a₂), σ ~ p(σ).

5.2. Bounds

In this subsection we prove the bounds presented in Section 2.2.2, shown in (9). Our objective is to bound the spectral norm of Q_Ω⁻¹ − Q_I⁻¹, where

Q_Ω = X′X + D_λ⁻¹Ω⁻¹D_λ⁻¹, Q_I = X′X + D_λ⁻². (24)

In the following, let λ₁ = max_i λ_i, λ_p = min_i λ_i, ν₁ = λ_max(X′X), ν_p = λ_min(X′X), ω₁ = λ_max(Ω), and ω_p = λ_min(Ω), where λ_max(A) and λ_min(A) denote the maximum and minimum eigenvalues of A, respectively. We make use of the following facts:

• Let A ∈ R^{p×p} be a real symmetric matrix. Then its spectral norm is given by

∥A∥₂ = max_{x≠0} ∥Ax∥₂/∥x∥₂ = max_i |λ_i(A)|, (25)

where λ_i(A) denotes the i-th eigenvalue of A. In particular, if A is positive semidefinite, the spectral norm equals its largest eigenvalue.

• The resolvent identity, which states that for invertible matrices A and B:

A⁻¹ − B⁻¹ = A⁻¹(B − A)B⁻¹. (26)

• The submultiplicative property of the spectral
norm:

∥AB∥₂ ≤ ∥A∥₂∥B∥₂. (27)

• If A and C are invertible, then:

∥ABC∥₂ ≥ ∥A⁻¹∥₂⁻¹ ∥B∥₂ ∥C⁻¹∥₂⁻¹. (28)

• The Rayleigh characterization. Let A ∈ R^{p×p} be a real symmetric matrix. Then its smallest and largest eigenvalues can be characterized as (Anderson, 2003)

λ_min(A) = min_{∥x∥₂=1} x′Ax, λ_max(A) = max_{∥x∥₂=1} x′Ax. (29)

• Weyl's inequality for symmetric matrices A and B (Weyl, 1912):

λ_min(A + B) ≥ λ_min(A) + λ_min(B), λ_max(A + B) ≤ λ_max(A) + λ_max(B). (30)

We begin by proving the upper bound. An application of the resolvent identity yields

Q_Ω⁻¹ − Q_I⁻¹ = Q_Ω⁻¹(Q_I − Q_Ω)Q_I⁻¹ = Q_Ω⁻¹ D_λ⁻¹(I − Ω⁻¹)D_λ⁻¹ Q_I⁻¹. (31)

Taking spectral norms and applying the submultiplicative property:

∥Q_Ω⁻¹ − Q_I⁻¹∥₂ ≤ ∥Q_Ω⁻¹∥₂ · ∥D_λ⁻¹(I − Ω⁻¹)D_λ⁻¹∥₂ · ∥Q_I⁻¹∥₂.

We now proceed to bound each term. First, applying the submultiplicative property to the middle factor on the right-hand side, and using ∥I − Ω⁻¹∥₂ = ∥Ω⁻¹ − I∥₂, we have

∥D_λ⁻¹(I − Ω⁻¹)D_λ⁻¹∥₂ ≤ ∥D_λ⁻¹∥₂² ∥Ω⁻¹ − I∥₂ = (1/λ_p²) ∥Ω⁻¹ − I∥₂. (32)

Next, we bound the spectral norms of Q_Ω⁻¹ and Q_I⁻¹. Since Q_Ω and Q_I are symmetric positive definite, the spectral norms of Q_Ω⁻¹ and Q_I⁻¹ are given by the reciprocals of the smallest eigenvalues of Q_Ω and Q_I, respectively. By Weyl's inequality:

λ_min(Q_Ω) = λ_min(X′X + D_λ⁻¹Ω⁻¹D_λ⁻¹) ≥ λ_min(X′X) + λ_min(D_λ⁻¹Ω⁻¹D_λ⁻¹) ≥ ν_p + 1/(λ₁²ω₁). (33)

In the last line we have used

λ_min(D_λ⁻¹Ω⁻¹D_λ⁻¹) = 1/λ_max(D_λΩD_λ) ≥ 1/(λ_max(D_λ²)λ_max(Ω)) = 1/(λ₁²ω₁),

which follows from the submultiplicative property of the spectral norm. Using the same argument we obtain λ_min(Q_I) ≥ ν_p + 1/λ₁². Therefore

∥Q_Ω⁻¹∥₂ ≤ (ν_p + 1/(λ₁²ω₁))⁻¹, ∥Q_I⁻¹∥₂ ≤ (ν_p + 1/λ₁²)⁻¹. (34)

Combining (32) and (34) yields

∥Q_Ω⁻¹ − Q_I⁻¹∥₂ ≤ ∥Ω⁻¹ − I∥₂ / [λ_p² (ν_p + 1/(λ₁²ω₁)) (ν_p + 1/λ₁²)]. (35)

We proceed to prove the lower bound. From the identity

Q_Ω⁻¹ − Q_I⁻¹ = Q_Ω⁻¹ D_λ⁻¹(I − Ω⁻¹)D_λ⁻¹ Q_I⁻¹,

we apply the reverse inequality for products of invertible matrices shown in (28) to get

∥Q_Ω⁻¹ − Q_I⁻¹∥₂ ≥ ∥Q_Ω∥₂⁻¹ ∥D_λ⁻¹(I − Ω⁻¹)D_λ⁻¹∥₂ ∥Q_I∥₂⁻¹. (36)

To lower bound ∥Q_Ω∥₂⁻¹ and ∥Q_I∥₂⁻¹, we apply Weyl's inequality to ∥Q_Ω∥₂ and ∥Q_I∥₂:

λ_max(Q_Ω) = λ_max(X′X + D_λ⁻¹Ω⁻¹D_λ⁻¹) ≤ λ_max(X′X) + λ_max(D_λ⁻¹Ω⁻¹D_λ⁻¹) ≤ ν₁ + 1/(λ_p²ω_p). (37)

In the last line we have used that

λ_max(D_λ⁻¹Ω⁻¹D_λ⁻¹) = 1/λ_min(D_λΩD_λ) ≤ 1/(λ_min(D_λ²)λ_min(Ω)) = 1/(λ_p²ω_p).

This statement follows from the Rayleigh characterization shown in (29). Given that λ_min(D_λΩD_λ) = min_{∥x∥₂=1} x′D_λΩD_λx, letting y = D_λx leads to ∥y∥₂² = y′y = x′D_λ²x, which implies λ_min(D_λ²) ≤ ∥y∥₂² ≤ λ_max(D_λ²). Therefore x′D_λΩD_λx = y′Ωy ≥ λ_min(Ω)y′y. Taking the minimum yields λ_min(D_λΩD_λ) ≥ λ_min(Ω)λ_min(D_λ²) = ω_pλ_p². A similar argument produces

λ_max(Q_I) ≤ ν₁ + 1/λ_p². (38)

Therefore

∥Q_Ω∥₂ ≤ ν₁ + 1/(λ_p²ω_p), ∥Q_I∥₂ ≤ ν₁ + 1/λ_p². (39)

We proceed to bound ∥D_λ⁻¹(I − Ω⁻¹)D_λ⁻¹∥₂ in (36). By applying the reverse inequality for products of invertible matrices we obtain

∥D_λ⁻¹(I − Ω⁻¹)D_λ⁻¹∥₂ ≥ (1/λ₁²) ∥Ω⁻¹ − I∥₂. (40)

Combining (39) and (40) results in

∥Q_Ω⁻¹ − Q_I⁻¹∥₂ ≥ ∥Ω⁻¹ − I∥₂ / [λ₁² (ν₁ + 1/(λ_p²ω_p)) (ν₁ + 1/λ_p²)].

5.3. Correlation Matrices

We have used the following correlation matrix structures in the main text. For further details on these and related constructions, see Rue and Held (2005) and Box et al. (2015).

5.3.1. Autoregressive of Order 1

The autoregressive correlation matrix of order 1, AR(1), assumes that the correlation between two observations decays exponentially with their distance. Specifically, for observations i and j, the
$(i,j)$-th entry of the matrix is given by
\[
\mathrm{Corr}(i,j) = \rho^{|i-j|}, \tag{41}
\]
where $\rho \in (-1, 1)$ is the autocorrelation parameter.

5.3.2. Moving Average of Order 1

The moving average correlation matrix of order 1, MA(1), captures correlations between immediate neighbors. The $(i,j)$-th entry of the matrix is
\[
\mathrm{Corr}(i,j) =
\begin{cases}
1 & \text{if } i = j, \\
\rho & \text{if } |i - j| = 1, \\
0 & \text{otherwise},
\end{cases}
\]
where $\rho \in (-1, 1)$ measures the strength of correlation between adjacent observations.

5.3.3. Moving Average of Order 2

The moving average correlation matrix of order 2, MA(2), extends the MA(1) structure to also include second neighbors. The $(i,j)$-th entry of the matrix is
\[
\mathrm{Corr}(i,j) =
\begin{cases}
1 & \text{if } i = j, \\
\rho_1 & \text{if } |i - j| = 1, \\
\rho_2 & \text{if } |i - j| = 2, \\
0 & \text{otherwise},
\end{cases}
\]
where $\rho_1, \rho_2 \in (-1, 1)$ control the correlation between first and second neighbors, respectively. Additionally, it is required that $\rho_1 \pm \rho_2 < 1$. To parametrize the MA(2) in terms of a single $\rho \in (-1, 1)$, we have set $\rho_1 = \rho$ and $\rho_2 = (1 - \rho)\rho$.

5.4. Ledoit–Wolf estimator

Ledoit and Wolf (2004b) propose the following linear shrinkage estimator:
\[
\Sigma^* = \varphi_1 I + \varphi_2 S = \frac{\beta^2}{\delta^2}\,\mu I + \frac{\alpha^2}{\delta^2}\, S, \tag{42}
\]
where $\delta^2 = \alpha^2 + \beta^2$ and $\Sigma^*$ minimizes the expected quadratic loss $\mathbb{E}\,\|\tilde\Sigma - \Sigma_X\|_F^2$ subject to $\tilde\Sigma = \varphi_1 I + \varphi_2 S$, with respect to nonrandom coefficients $\varphi_1, \varphi_2$. Here, $\|\cdot\|_F$ denotes the scaled Frobenius norm, i.e., $\|A\|_F^2 = \mathrm{tr}(A'A)/p$. The shrinkage parameters in (42) are computed as follows:
\[
\mu = \frac{\mathrm{tr}(\Sigma_X)}{p}, \qquad
\alpha^2 = \|\Sigma_X - \mu I\|_F^2, \qquad
\beta^2 = \mathbb{E}\,\|S - \Sigma_X\|_F^2. \tag{43}
\]
Since $\Sigma_X$ is unknown, consistent estimators for $\mu$, $\alpha^2$, and $\beta^2$ must be used to construct a consistent estimator $S^*$ of $\Sigma^*$.

Proposition 1. Let $X \in \mathbb{R}^{n \times p}$ be a design matrix, and define
\[
m_n = \frac{\mathrm{tr}(S)}{p}, \qquad
d_n^2 = \|S - m_n I\|_F^2, \qquad
b_n^2 = \min\Big(\frac{1}{n^2}\sum_{k=1}^{n} \big\|x_{k\cdot}\, x_{k\cdot}^\top - S\big\|_F^2, \; d_n^2\Big), \qquad
a_n^2 = d_n^2 - b_n^2,
\]
where $x_{k\cdot}$ denotes the $k$-th row of $X$, and $S$ is the sample covariance matrix. Then:

1. $m_n$, $d_n^2$, and $b_n^2$ are consistent (in quadratic mean) estimators of $\mu$, $\delta^2$, and $\beta^2$, respectively.

2.
The resulting linear shrinkage estimator
\[
S^* = \frac{b_n^2}{d_n^2}\, m_n I + \frac{a_n^2}{d_n^2}\, S \tag{44}
\]
is a consistent estimator of the Ledoit–Wolf shrinkage estimator $\Sigma^*$.

See Ledoit and Wolf (2004b) for the proof and further details.

5.5. Simulations: Further results

5.5.1. Coverage

Tables 2 and 3 summarize the coverage performance of various shrinkage priors under two correlation levels ($\rho = 0.5$ and $\rho = 0.95$), with fixed $p = 250$ and high signal strength ($R^2 = 0.8$), using 95% posterior credible intervals with the BMA1 and BAR1 structures, respectively. Models with an "O" suffix correspond to dependency-aware priors. The high-$R^2$ setting ensures that signal is present and identifiable, making this a meaningful context in which to evaluate how different priors handle multicollinearity. When $R^2$ is low, most of the variation in the response remains unexplained, making it difficult to discern meaningful signals from noise regardless of the prior.

Regarding the results for structure BMA1 shown in Table 2, all models maintain high coverage and specificity across settings. The most striking differences emerge in sensitivity and nonzero coverage, particularly as $\rho$ increases. While base priors suffer sharp declines in power at high $\rho$ (for instance, the HS drops from 0.729 to 0.352, while the HSO only moves from 0.921 to 0.906), dependency-aware priors remain robust, consistently recovering signals with high probability. This echoes the results on parameter recovery that we show in Section 2.

Fig 10. ROC curves for the BMA1 with fixed block signals ($p \in \{50, 250\}$, $R^2 \in \{0.2, 0.8\}$). We compare standard shrinkage priors (solid lines) and their dependency-aware counterparts (dashed lines). Models with an "O" suffix incorporate the BMA1 structure to induce dependency-aware shrinkage. Performance gains are most visible at high $\rho$ and high $R^2$, where structured shrinkage improves signal recovery.

Fig 11. ROC curves for the BAR1 with fixed block signals ($p \in \{50, 250\}$, $R^2 \in \{0.2, 0.8\}$). We compare standard shrinkage priors (solid lines) and their dependency-aware counterparts (dashed lines). Models with an "O" suffix incorporate the BAR1 structure to induce dependency-aware shrinkage. As a result, performance is comparable or slightly degraded relative to the standard versions.

In contrast to the BMA1 structure, where dependency-aware priors delivered clear improvements under high collinearity, the BAR1 fixed design proves more challenging.
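The coverage, specificity, and sensitivity columns reported in Tables 2 and 3 can be computed from posterior draws along the following lines. This is a minimal sketch under our own conventions, not the authors' code: we assume a coefficient counts as "selected" when its 95% credible interval excludes zero, and all function and variable names are illustrative.

```python
import numpy as np

def interval_metrics(draws, beta_true, level=0.95):
    """Selection and coverage metrics from posterior draws.

    draws: (n_draws, p) array of posterior samples of the coefficients.
    beta_true: (p,) array of true coefficients.
    Returns coverage, specificity, sensitivity (power), and average CI width.
    """
    alpha = 1.0 - level
    lo = np.quantile(draws, alpha / 2, axis=0)
    hi = np.quantile(draws, 1 - alpha / 2, axis=0)

    covered = (lo <= beta_true) & (beta_true <= hi)  # CI contains the truth
    selected = (lo > 0) | (hi < 0)                   # CI excludes zero
    nonzero = beta_true != 0

    coverage = covered.mean()
    specificity = (~selected[~nonzero]).mean()       # true zeros left at zero
    sensitivity = selected[nonzero].mean()           # true signals detected
    avg_width = (hi - lo).mean()
    return coverage, specificity, sensitivity, avg_width

# Toy check: two strong signals, three true zeros, draws centered at the truth.
rng = np.random.default_rng(0)
beta = np.array([2.0, 2.0, 0.0, 0.0, 0.0])
draws = beta + 0.1 * rng.standard_normal((4000, 5))
cov, spec, sens, width = interval_metrics(draws, beta)
```

With draws this concentrated, the intervals around the signals exclude zero while those around the true zeros do not, so both sensitivity and specificity come out at 1.0; in the tables, the interesting cases are exactly those where high correlation pulls these numbers apart.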
At $\rho = 0.5$, all models achieve high coverage and specificity, but sensitivity remains modest, generally between 0.2 and 0.5. Notably, the dependency-aware variants often underperform their unstructured counterparts, suggesting that incorporating the BMA1-based precision matrix can over-shrink when the signal is not well aligned with the imposed structure. As $\rho$ increases to 0.95, sensitivity deteriorates across all models, falling below 0.15. Even dependency-aware priors, which excelled under BMA1, fail to recover meaningful signal here.

Figures 10 and 11 show ROC curves under the BMA1 and BAR1 structures, respectively, across varying levels of correlation ($\rho \in \{0.5, 0.95\}$), dimensionality ($p \in \{50, 250\}$), and signal strength ($R^2 \in \{0.2, 0.8\}$). As before, each standard shrinkage prior is plotted alongside its dependency-aware version (dashed line of the same color). Under the BMA1 structure, dependency-aware priors consistently outperform or match their unstructured counterparts, especially at high $\rho$ and high $R^2$. In these settings, dependency-aware shrinkage priors improve true positive rates without sacrificing specificity, resulting in ROC curves that dominate or closely track above their base versions. In contrast, under the BAR1 structure, the dependency-aware variants perform similarly to or slightly worse than the standard priors. The ROC curves largely overlap, and in some cases the dependency-aware versions lag behind. This reinforces the earlier observation that the utility of structured priors is highly design dependent: they are most effective when the imposed dependence structure is aligned with the true generative mechanism (as in BMA1), but offer limited or no benefit when the structure is mismatched (as
in BAR1).

Table 2. Coverage metrics under the BMA1 structure with fixed block signals ($p = 250$, $R^2 = 0.8$). We compare shrinkage priors across two correlation levels ($\rho = 0.5$ and $\rho = 0.95$). Models with an "O" suffix incorporate the BMA1 structure to induce dependency-aware shrinkage.

Condition: ρ = 0.5
Model   Coverage  Specificity  Sensitivity (Power)  Avg. CI Width  Coverage Zero  Coverage Nonzero
BP      0.984     0.996        0.831                1.032          0.996          0.696
BPO     0.989     0.997        0.990                1.028          0.997          0.794
D2      0.961     1.000        0.067                0.972          1.000          0.029
D2O     0.971     0.998        0.733                1.156          0.998          0.321
DL      0.989     0.999        0.860                0.869          0.999          0.733
DLO     0.992     0.999        0.988                0.819          0.999          0.817
HS      0.987     1.000        0.729                0.603          1.000          0.688
HSO     0.994     1.000        0.921                0.669          1.000          0.850
RHS     0.990     1.000        0.721                0.694          1.000          0.756
RHSO    0.993     1.000        0.942                0.758          1.000          0.825

Condition: ρ = 0.95
Model   Coverage  Specificity  Sensitivity (Power)  Avg. CI Width  Coverage Zero  Coverage Nonzero
BP      0.974     0.995        0.569                0.916          0.995          0.463
BPO     0.984     0.997        0.983                0.891          0.997          0.656
D2      0.960     1.000        0.065                0.740          1.000          0.000
D2O     0.967     1.000        0.865                1.008          1.000          0.181
DL      0.976     0.999        0.494                0.787          0.999          0.419
DLO     0.989     1.000        0.967                0.674          1.000          0.735
HS      0.975     1.000        0.352                0.457          1.000          0.371
HSO     0.991     1.000        0.906                0.550          1.000          0.781
RHS     0.967     0.993        0.310                0.557          0.993          0.346
RHSO    0.991     1.000        0.940                0.636          1.000          0.769

Table 3. Coverage metrics under the BAR1 structure with fixed block signals ($p = 250$, $R^2 = 0.8$). We compare shrinkage priors across two correlation levels ($\rho = 0.5$ and $\rho = 0.95$). Models with an "O" suffix incorporate the BMA1 structure to induce dependency-aware shrinkage.

Condition: ρ = 0.5
Model   Coverage  Specificity  Sensitivity (Power)  Avg. CI Width  Coverage Zero  Coverage Nonzero
BP      0.987     0.995        0.506                2.358          0.995          0.802
BPO     0.986     0.996        0.281                2.645          0.996          0.746
D2      0.996     1.000        0.492                2.385          1.000          0.915
D2O     0.993     1.000        0.158                2.869          1.000          0.817
DL      0.993     0.999        0.423                1.965          0.999          0.863
DLO     0.990     0.999        0.275                2.190          0.999          0.769
HS      0.993     1.000        0.323                1.469          1.000          0.823
HSO     0.989     1.000        0.233                1.505          1.000          0.735
RHS     0.994     1.000        0.377                1.720          1.000          0.869
RHSO    0.989     1.000        0.212                1.688          1.000          0.740

Condition: ρ = 0.95
Model   Coverage  Specificity  Sensitivity (Power)  Avg. CI Width  Coverage Zero  Coverage Nonzero
BP      0.969     0.973        0.142                6.425          0.973          0.858
BPO     0.987     0.995        0.092                7.523          0.995          0.788
D2      1.000     1.000        0.031                3.554          1.000          0.994
D2O     0.996     1.000        0.062                5.119          1.000          0.894
DL      0.998     1.000        0.040                3.422          1.000          0.948
DLO     0.994     1.000        0.042                3.935          1.000          0.863
HS      0.985     1.000        0.065                1.363          1.000          0.629
HSO     0.984     1.000        0.056                1.493          1.000          0.606
RHS     0.992     1.000        0.042                1.571          1.000          0.796
RHSO    0.989     1.000        0.050                1.744          1.000          0.725

References

Agliari, A. and Parisetti, C. C. (1988). "A-g Reference Informative Prior: A Note on Zellner's g Prior." The Statistician, 37(3): 271. URL https://www.jstor.org/stable/2348164?origin=crossref

Aguilar, J. E. and Bürkner, P.-C. (2025). "Generalized Decomposition Priors on R2." Bayesian Analysis, 1–34. URL https://doi.org/10.1214/25-BA1524

Aguilar, J. E. and Bürkner, P.-C. (2023). "Intuitive joint priors for Bayesian linear multilevel models: The R2D2M2 prior." Electronic Journal of Statistics, 17(1): 1711–1767.
URL https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-17/issue-1/Intuitive-joint-priors-for-Bayesian-linear-multilevel-models--The/10.1214/23-EJS2136.full

Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis. Wiley.

Armagan, A., Clyde, M., and Dunson, D. B. (2011). "Generalized beta mixtures of Gaussians." In Advances in Neural Information Processing Systems, 523–531.

Armagan, A., Dunson, D. B., and Lee, J. (2013). "Generalized double Pareto shrinkage." Statistica Sinica, 23(1): 119.

Bai, R. and Ghosh, M. (2019). "Large-scale multiple hypothesis testing with the normal-beta prime prior." Statistics, 53(6): 1210–1233. URL https://doi.org/10.1080/02331888.2019.1662017

Bai, R. and Ghosh, M. (2021). "On the Beta Prime Prior for Scale Parameters in High-Dimensional Bayesian Regression Models." Statistica Sinica. URL http://arxiv.org/abs/1807.06539

Bai, Z. and Silverstein, J. W. (2010). Spectral Analysis of Large Dimensional Random Matrices. Springer Series in Statistics. Springer. URL https://link.springer.com/book/10.1007/978-1-4419-0661-8

Benjamini, Y. and Hochberg, Y. (1995). "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing." Journal of the Royal Statistical Society. Series B (Methodological), 57(1): 289–300. URL http://www.jstor.org/stable/2346101

Berger, J. (1985). Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics. Springer. URL https://books.google.de/books?id=oY_x7dE15_AC

Bernardo, J. M. and Smith, A. F. M. (2009). Bayesian Theory. John Wiley & Sons.

Bhadra, A., Datta, J., Polson, N. G., and Willard, B. (2016). "Default Bayesian analysis with global-local shrinkage priors." Biometrika, 103(4): 955–969. URL http://www.jstor.org/stable/26363497

Bhadra, A., Datta, J., Polson, N. G., and Willard, B. (2019). "Lasso Meets Horseshoe: A Survey." Statistical Science, 34(3): 405–427. URL https://doi.org/10.1214/19-STS700

Bhattacharya, A., Chakraborty, A., and Mallick, B. K. (2016). "Fast sampling with Gaussian scale mixture priors in high-dimensional regression." Biometrika, 103(4): 985–991. URL https://doi.org/10.1093/biomet/asw042

Bhattacharya, A., Pati, D., Pillai, N. S., and Dunson, D. B. (2015). "Dirichlet–Laplace Priors for Optimal Shrinkage." Journal of the American Statistical Association, 110(512): 1479–1490. URL https://doi.org/10.1080/01621459.2014.960967

Box, G., Jenkins, G., Reinsel, G., and Ljung, G. (2015). Time Series Analysis: Forecasting and Control. Wiley Series in Probability and Statistics. Wiley. URL https://books.google.de/books?id=rNt5CgAAQBAJ

Bradley, A. P. (1997). "The use of the area under the ROC curve in the evaluation of machine learning algorithms." Pattern Recognition, 30(7): 1145–1159. URL https://www.sciencedirect.com/science/article/pii/S0031320396001422

Brooks, S., Gelman, A., and Jones, G. (2011). Handbook of Markov Chain Monte Carlo. Chapman and Hall/CRC, 1st edition.

Brown, P. J. (1994). Measurement, Regression, and Calibration. Oxford Statistical Science Series. Oxford, New York: Oxford University Press.

Buehlmann, P., Kalisch, M., and Meier, L. (2014). "High-Dimensional Statistics with a View Toward Applications in Biology." Annual Review of Statistics and Its Application, 1(1): 255–278. URL https://doi.org/10.1146/annurev-statistics-022513-115545

Bühlmann, P. and Van De Geer, S. (2011). Statistics for High-Dimensional
Data: Methods, Theory and Applications. Springer Series in Statistics. Berlin, Heidelberg: Springer. URL https://link.springer.com/10.1007/978-3-642-20192-9

Bürkner, P.-C. (2017). "brms: An R Package for Bayesian Multilevel Models Using Stan." Journal of Statistical Software, 80(1): 1–28. URL https://www.jstatsoft.org/index.php/jss/article/view/v080i01

Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P., and Riddell, A. (2017). "Stan: A Probabilistic Programming Language." Journal of Statistical Software, 76(1): 1–32. URL https://www.jstatsoft.org/index.php/jss/article/view/v076i01

Carvalho, C. M., Polson, N. G., and Scott, J. G. (2010). "The horseshoe estimator for sparse signals." Biometrika, 97(2): 465–480. URL http://www.jstor.org/stable/25734098

Carvalho, C. M., Polson, N. G., and Scott, J. G. (2009). "Handling Sparsity via the Horseshoe." In van Dyk, D. and Welling, M. (eds.), Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, volume 5 of Proceedings of Machine Learning Research, 73–80. Hilton Clearwater Beach Resort, Clearwater Beach, Florida, USA: PMLR. URL https://proceedings.mlr.press/v5/carvalho09a.html

Casella, G., Ghosh, M., Gill, J., and Kyung, M. (2010). "Penalized regression, standard errors, and Bayesian lassos." Bayesian Analysis, 5(2). URL https://projecteuclid.org/journals/bayesian-analysis/volume-5/issue-2/Penalized-regression-standard-errors-and-Bayesian-lassos/10.1214/10-BA607.full

Castillo, I. (2024). "Bayesian nonparametric statistics, St-Flour lecture notes." ArXiv:2402.16422. URL http://arxiv.org/abs/2402.16422

Castillo, I., Schmidt-Hieber, J., and van der Vaart, A. (2015). "Bayesian Linear Regression with Sparse Priors." The Annals of Statistics, 43(5): 1986–2018. URL https://www.jstor.org/stable/43818568

Castillo, I. and van der Vaart, A. (2012). "Needles and Straw in a Haystack: Posterior concentration for possibly sparse sequences." The Annals of Statistics, 40(4): 2069–2101. URL https://doi.org/10.1214/12-AOS1029

Filzmoser, P. (2023). chemometrics: Multivariate Statistical Analysis in Chemometrics. R package version 1.4.4. URL https://CRAN.R-project.org/package=chemometrics

Gelman, A. (2006). "Prior Distributions for Variance Parameters in Hierarchical Models." Bayesian Analysis, 1.

Gelman, A., Carlin, J., Stern, H., Dunson, D., Vehtari, A., and Rubin, D. (2013). Bayesian Data Analysis, Third Edition. Chapman & Hall/CRC Texts in Statistical Science. Taylor & Francis. URL https://books.google.de/books?id=ZXL6AQAAQBAJ

Gelman, A. and Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. Analytical Methods for Social Research. Cambridge University Press.

Gelman, A., Hill, J., and Vehtari, A. (2020). Regression and Other Stories. Analytical Methods for Social Research. Cambridge University Press.

George, E. and McCulloch, R. (1995). "Stochastic search variable selection." Markov Chain Monte Carlo in Practice.

George, E. I. and McCulloch, R. E. (1993). "Variable Selection Via Gibbs Sampling." Journal of the American Statistical Association, 88(423): 881–889. URL https://www.jstor.org/stable/2290777

George, E. I. and McCulloch, R. E. (1997). "Approaches for Bayesian Variable Selection." Statistica Sinica, 7(2): 339–373. URL https://www.jstor.org/stable/24306083

Geyer, C. J. (1992). "Practical Markov Chain Monte Carlo."
Statistical Science, 7(4): 473–483. URL https://doi.org/10.1214/ss/1177011137

Ghosal, S., Ghosh, J. K., and van der Vaart, A. W. (2000). "Convergence rates of posterior distributions." The Annals of Statistics, 28(2): 500–531. URL https://doi.org/10.1214/aos/1016218228

Ghosh, J. K. (2003). Bayesian Nonparametrics. Springer Series in Statistics. New York, NY: Springer New York.

Ghosh, P. and Chakrabarti, A. (2017). "Asymptotic Optimality of One-Group Shrinkage Priors in Sparse High-dimensional Problems." Bayesian Analysis, 12(4): 1133–1161. URL https://projecteuclid.org/journals/bayesian-analysis/volume-12/issue-4/Asymptotic-Optimality-of-One-Group-Shrinkage-Priors-in-Sparse-High/10.1214/16-BA1029.full

Giraud, C. (2014). Introduction to High-Dimensional Statistics. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. Taylor & Francis. URL https://books.google.de/books?id=qRuVoAEACAAJ

Girko, V. L. (1992). "G-analysis of high-dimensional observations." Journal of Soviet Mathematics, 60(4): 1631–1635. URL https://doi.org/10.1007/BF01097622

Goplerud, M. (2021). "Modelling Heterogeneity Using Bayesian Structured Sparsity." ArXiv:2103.15919. URL http://arxiv.org/abs/2103.15919

Griffin, J. E. and Brown, P. J. (2010). "Inference with normal-gamma prior distributions in regression problems." Bayesian Analysis, 5(1): 171–188.

Griffin, J. E. and Brown, P. J. (2013). "Some Priors for Sparse Regression Modelling." Bayesian Analysis, 8(3): 691–702. URL https://doi.org/10.1214/13-BA827

Griffin, J. E. and Brown, P. J. (2021). "Bayesian global-local shrinkage methods for regularisation in the high dimension linear model." Chemometrics and Intelligent Laboratory Systems, 210: 104255. URL https://www.sciencedirect.com/science/article/pii/S016974392100023X

Griffin, M. and Hoff, P. D. (2024). "Structured Shrinkage Priors." Journal of Computational and Graphical Statistics, 33(1): 1–14. URL https://www.tandfonline.com/doi/citedby/10.1080/10618600.2023.2233577

Hagar, L. and Stevens, N. T. (2024). "Posterior Ramifications of Prior Dependence Structures." ArXiv:2312.06437. URL http://arxiv.org/abs/2312.06437

Hahn, P. R. and Carvalho, C. M. (2015). "Decoupling Shrinkage and Selection in Bayesian Linear Models: A Posterior Summary Perspective." Journal of the American Statistical Association, 110(509): 435–448. URL https://doi.org/10.1080/01621459.2014.993077

Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition. Springer Science & Business Media.

Hastie, T., Tibshirani, R., and Wainwright, M. (2015). Statistical Learning with Sparsity: The Lasso and Generalizations. CRC Press. URL https://books.google.de/books?id=f-A_CQAAQBAJ

Hoerl, A. E. and Kennard, R. W. (1970). "Ridge Regression: Biased Estimation for Nonorthogonal Problems." Technometrics, 12(1): 55–67. URL http://www.jstor.org/stable/1267351

Hoff, P. D. (2009). A First Course in Bayesian Statistical Methods. Springer New York.

Hoffman, M. D. and Gelman, A. (2014). "The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo." Journal of Machine Learning Research, 15(1): 1593–1623.

Horn, R. A. and Johnson, C. R. (2012). Matrix Analysis. Cambridge University Press. URL https://www.cambridge.org/highereducation/books/matrix-analysis/FDA3627DC2B9F5C3DF2FD8C3CC136B48

Johnson, N. L., Kotz, S., and Balakrishnan, N. (1994). Continuous Univariate Distributions.
Wiley & Sons.

Johnstone, I. M. and Silverman, B. W. (2004). "Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences." The Annals of Statistics, 32(4): 1594–1649. URL https://projecteuclid.org/journals/annals-of-statistics/volume-32/issue-4/Needles-and-straw-in-haystacks--Empirical-Bayes-estimates-of/10.1214/009053604000000030.full

Kullback, S. and Leibler, R. A. (1951). "On Information and Sufficiency." The Annals of Mathematical Statistics, 22(1): 79–86. URL https://doi.org/10.1214/aoms/1177729694

Lam, C. (2020). "High-dimensional covariance matrix estimation." Wiley Interdisciplinary Reviews: Computational Statistics, 12(2). URL https://onlinelibrary.wiley.com/journal/19390068

Lauritzen, S. L. (1996). Graphical Models. Clarendon Press.

Ledoit, O. and Wolf, M. (2004a). "Honey, I Shrunk the Sample Covariance Matrix." The Journal of Portfolio Management, 30(4): 110–119. URL https://www.pm-research.com/content/iijpormgmt/30/4/110

Ledoit, O. and Wolf, M. (2004b). "A well-conditioned estimator for large-dimensional covariance matrices." Journal of Multivariate Analysis, 88(2): 365–411. URL https://www.sciencedirect.com/science/article/pii/S0047259X03000964

Ledoit, O. and Wolf, M. (2020). "Analytical Nonlinear Shrinkage of Large-Dimensional Covariance Matrices." The Annals of Statistics, 48(5): 3043–3065. URL https://www.jstor.org/stable/27028732

Li, X., Zhao, T., Wang, L., Yuan, X., and Liu, H. (2024). "flare: Family of Lasso Regression." URL https://cran.r-project.org/web/packages/flare/index.html

Li, Y. and Clyde, M. A. (2018). "Mixtures of g-Priors in Generalized Linear Models." Journal of the American Statistical Association, 113(524): 1828–1845. URL https://doi.org/10.1080/01621459.2018.1469992

Liang, F., Paulo, R., Molina, G., Clyde, M. A., and Berger, J. O. (2008). "Mixtures of g Priors for Bayesian Variable Selection." Journal of the American Statistical Association, 103(481): 410–423. URL https://doi.org/10.1198/016214507000001337

Margossian, C. C., Hoffman, M. D., Sountsov, P., Riou-Durand, L., Vehtari, A., and Gelman, A. (2024). "Nested R̂: Assessing the Convergence of Markov Chain Monte Carlo When Running Many Short Chains." Bayesian Analysis, 1–28. URL https://doi.org/10.1214/24-BA1453

Maruyama, Y. and George, E. I. (2011). "Fully Bayes factors with a generalized g-prior." The Annals of Statistics, 39(5): 2740–2765. URL https://projecteuclid.org/journals/annals-of-statistics/volume-39/issue-5/Fully-Bayes-factors-with-a-generalized-g-prior/10.1214/11-AOS917.full

Marčenko, V. A. and Pastur, L. A. (1967). "Distribution of eigenvalues for some sets of random matrices." Mathematics of the USSR-Sbornik, 1(4): 457. URL https://iopscience.iop.org/article/10.1070/SM1967v001n04ABEH001994/meta

Mitchell, T. J. and Beauchamp, J. J. (1988). "Bayesian Variable Selection in Linear Regression." Journal of the American Statistical Association, 83(404): 1023–1032. URL https://www.tandfonline.com/doi/abs/10.1080/01621459.1988.10478694

Neal, R. M. (2011). MCMC using Hamiltonian dynamics. ArXiv:1206.1901. URL http://arxiv.org/abs/1206.1901

Neyman, J., Pearson, E. S., and Pearson, K. (1933). "IX. On the problem of the most efficient tests of statistical hypotheses." Philosophical Transactions of the Royal Society of
London. Series A, Containing Papers of a Mathematical or Physical Character, 231(694-706): 289–337. URL https://royalsocietypublishing.org/doi/10.1098/rsta.1933.0009

O'Hagan, A. and Pericchi, L. (2012). "Bayesian heavy-tailed models and conflict resolution: A review." Brazilian Journal of Probability and Statistics, 26(4): 372–401. URL https://www.jstor.org/stable/43601225

Oriol, B. and Miot, A. (2025). "Ledoit-Wolf linear shrinkage with unknown mean." Journal of Multivariate Analysis, 208: 105429. URL https://www.sciencedirect.com/science/article/pii/S0047259X25000247

Pardo, L. (2018). Statistical Inference Based on Divergence Measures. New York: Chapman and Hall/CRC.

Pas, S. L. van der (2021). "Theoretical guarantees for the horseshoe and other global-local shrinkage priors." In Handbook of Bayesian Variable Selection, 133–160. Chapman and Hall/CRC.

Pas, S. L. van der, Kleijn, B. J. K., and van der Vaart, A. W. (2014). "The horseshoe estimator: Posterior concentration around nearly black vectors." Electronic Journal of Statistics, 8(2): 2585–2618. URL https://doi.org/10.1214/14-EJS962

Pas, S. L. van der, Salomond, J.-B., and Schmidt-Hieber, J. (2016). "Conditions for posterior contraction in the sparse normal means problem." Electronic Journal of Statistics, 10(1): 976–1000. URL https://doi.org/10.1214/16-EJS1130

Pas, S. L. van der, Szabó, B., and van der Vaart, A. (2017). "Uncertainty Quantification for the Horseshoe (with Discussion)." Bayesian Analysis, 12(4): 1221–1274. URL https://doi.org/10.1214/17-BA1065

Pauger, D. and Wagner, H. (2019). "Bayesian Effect Fusion for Categorical Predictors." Bayesian Analysis, 14(2). URL https://projecteuclid.org/journals/bayesian-analysis/volume-14/issue-2/Bayesian-Effect-Fusion-for-Categorical-Predictors/10.1214/18-BA1096.full

Piironen, J., Paasiniemi, M., and Vehtari, A. (2020). "Projective inference in high-dimensional problems: Prediction and feature selection." Electronic Journal of Statistics, 14(1): 2155–2197. URL https://doi.org/10.1214/20-EJS1711

Piironen, J. and Vehtari, A. (2017). "Sparsity information and regularization in the horseshoe and other shrinkage priors." Electronic Journal of Statistics, 11(2): 5018–5051. URL https://doi.org/10.1214/17-EJS1337SI

Polson, N. G. and Scott, J. G. (2012). "Local Shrinkage Rules, Lévy Processes and Regularized Regression." Journal of the Royal Statistical Society Series B: Statistical Methodology, 74(2): 287–311. URL https://doi.org/10.1111/j.1467-9868.2011.01015.x

Robert, C. (2007). The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation. Springer Texts in Statistics. Springer New York. URL https://books.google.ch/books?id=6oQ4s8Pq9pYC

Robert, C. P. and Casella, G. (2004). Monte Carlo Statistical Methods. Springer Texts in Statistics. New York, NY: Springer. URL http://link.springer.com/10.1007/978-1-4757-4145-2

Ročková, V. and George, E. I. (2018). "The Spike-and-Slab LASSO." Journal of the American Statistical Association, 113(521): 431–444. URL https://doi.org/10.1080/01621459.2016.1260469

Rue, H. and Held, L. (2005). Gaussian Markov Random Fields: Theory and Applications. New York: Chapman and Hall/CRC.

Rényi, A. (1961). "On measures of entropy and information." In Proceedings of the
|
https://arxiv.org/abs/2505.10715v1
|
fourth Berkeley symposium on mathematical statistics and probability, vol- ume 1: contributions to the theory of statistics , volume 4, 547–562. University of California Press. 9 Scheetz, T. E., Kim, K.-Y. A., Swiderski, R. E., Philp, A. R., Braun, T. A., Knudtson, K. L., Dorrance, A. M., DiBona, G. F., Huang, J., Casavant, T. L., Sheffield, V. C., and Stone, E. M. (2006). “Regulation of gene expression in the mammalian eye and its relevance to eye disease.” Proceedings of the National Academy of Sciences of the United States of America , 103(39): 14429–14434. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1636701/ 24 Silverstein, J. W. (1995). “Strong Convergence of the Empirical Distribution of Eigenvalues of Large Dimensional Random Matrices.” Journal of Multi- variate Analysis , 55(2): 331–339. URL https://www.sciencedirect.com/science/article/pii/ S0047259X85710834 18 Simpson, D., Rue, H., Riebler, A., Martins, T. G., and Sørbye, S. H. (2017). “Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors.” Statistical Science , 32(1): 1 – 28. Publisher: Institute of Mathematical Statistics. URL https://doi.org/10.1214/16-STS576 2, 3 J.E. Aguilar and P-C. B¨ urkner / Dependency-Aware Shrinkage Priors 47 Stan Development Team (2024). “Stan Modeling Language Users Guide and Reference Manual, Version 2.36.” URL http://mc-stan.org/ 19 Tadesse, M. and Vannucci, M. (2021). Handbook of Bayesian Variable Selec- tion. Chapman & Hall/CRC Handbooks of Modern Statistical Methods. CRC Press. URL https://books.google.de/books?id=Cn1TEAAAQBAJ 2, 6, 13 Tibshirani, R. (1996). “Regression shrinkage and selection via the lasso.” Jour- nal of the Royal Statistical Society. Series B (Methodological) , 267–288. Pub- lisher: JSTOR. 2, 3 Vehtari, A., Gabry, J., Magnusson, M., Yao, Y., B¨ urkner, P.-C., Paananen, T., and Gelman, A. (2023). “loo: Efficient leave-one-out cross-validation and WAIC for Bayesian models.” R package version 2.6.0. 
URL https://mc-stan.org/loo/ 26 Vehtari, A., Gelman, A., and Gabry, J. (2016). “Practical Bayesian model eval- uation using leave-one-out cross-validation and WAIC.” Statistics and Com- puting , 27(5): 1413–1432. Publisher: Springer Science and Business Media LLC. URL http://dx.doi.org/10.1007/s11222-016-9696-4 20, 25, 26 Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., and B¨ urkner, P.-C. (2021). “Rank-Normalization, Folding, and Localization: An Improved bRfor Assess- ing Convergence of MCMC (with Discussion).” Bayesian Analysis , 16(2): 667 – 718. URL https://doi.org/10.1214/20-BA1221 21 Vehtari, A. and Ojanen, J. (2012). “A survey of Bayesian predictive methods for model assessment, selection and comparison.” Statistics Surveys , 6(none): 142 – 228. Publisher: Amer. Statist. Assoc., the Bernoulli Soc., the Inst. Math. Statist., and the Statist. Soc. Canada. URL https://doi.org/10.1214/12-SS102 20 Vershynin, R. (2018). High-Dimensional Probability: An Introduction with Ap- plications in Data Science . Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press. URL https://books.google.de/books?id=J-VjswEACAAJ 1 Wainwright, M. J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint . Cambridge University Press. Google-Books-ID: IluHDwAAQBAJ. 2, 18 West, M. (1987). “On scale mixtures of normal distributions.” Biometrika , 74(3): 646–648. eprint: https://academic.oup.com/biomet/article- pdf/74/3/646/656269/74-3-646.pdf. URL https://doi.org/10.1093/biomet/74.3.646 4, 7 Weyl, H. (1912). “Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung).” Mathematische Annalen , 71(4): 441–479. URL https://doi.org/10.1007/BF01456804 8, 31 Zellner, A. (1986). “On assessing prior distributions and Bayesian regression J.E. Aguilar and P-C. B¨ urkner / Dependency-Aware Shrinkage Priors 48
|
https://arxiv.org/abs/2505.10715v1
|
analysis with g-prior distributions.” Bayesian inference and decision tech- niques . 2, 5, 17 — (1996). “Models, prior information, and Bayesian analysis.” Journal of Econometrics , 75(1): 51–68. URL https://www.sciencedirect.com/science/article/pii/ 0304407695017682 2, 17 Zhang, Y. D., Naughton, B. P., Bondell, H. D., and Reich, B. J. (2020). “Bayesian Regression Using a Prior on the Model Fit: The R2-D2 Shrinkage Prior.” Journal of the American Statistical Association , 0(0): 1–13. Publisher: Taylor & Francis eprint: https://doi.org/10.1080/01621459.2020.1825449. URL https://doi.org/10.1080/01621459.2020.1825449 1, 2, 5, 7, 11, 19, 21, 30 Zhao, P. and Yu, B. (2006). “On Model Selection Consistency of Lasso.” Journal of Machine Learning Research , 7(90): 2541–2563. URL http://jmlr.org/papers/v7/zhao06a.html 3
|
https://arxiv.org/abs/2505.10715v1
|
arXiv:2505.10738v2 [stat.ME] 19 May 2025

Statistically Significant Linear Regression Coefficients Solely Driven by Outliers in Finite-Sample Inference

Felix Reichel∗
May 2025
A Preprint

Abstract
In this paper, we investigate the impact of outliers on the statistical significance of coefficients in linear regression. We demonstrate, through numerical simulation using R, that a single outlier can cause an otherwise insignificant coefficient to appear statistically significant. We compare this with robust Huber regression, which reduces the effects of outliers. Afterwards, we approximate the influence of a single outlier on estimated regression coefficients and discuss common diagnostic statistics to detect influential observations in regression (e.g., studentized residuals). Furthermore, we relate this issue to the optional normality assumption in simple linear regression [15], required for exact finite-sample inference but asymptotically justified for large $n$ by the Central Limit Theorem (CLT). We also address the general dangers of relying solely on p-values without performing adequate regression diagnostics. Finally, we provide a brief overview of regression methods and discuss how they relate to the assumptions of the Gauss-Markov theorem.

Keywords: outliers, finite-sample inference, regression diagnostics, robust regression, t-statistics, p-values, p-hacking, linear models, single outlier test, CLT, normality assumption
JEL Codes: C10, C12, C13, C80, C81
License: This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). You are free to share and adapt the material, provided appropriate credit is given and any derivatives are distributed under the same license.
∗felix.reichel@jku.at, B.S., Department of Economics, Johannes Kepler University of Linz, Linz, Austria, 4040
Thanks to T. Cunningham (MSU), this version now fixes a typo in the title.
Contents
1 Introduction
2 Linear Regression and Statistical Inference
2.1 OLS Estimation
2.2 Distribution of the OLS Estimator
2.3 Estimating Variance
2.4 Standard Error and Hypothesis Testing
3 The Effect of a Single Outlier on Regression Coefficients
4 Single Outlier Tests for Linear Regression Models
4.1 Internally Studentized Residuals
4.2 Maximum Absolute Internally Studentized Residual
4.3
Normalized Maximum Ordinary Residual
4.4 Distributional Properties and Critical Values
4.5 Upper Bound of Single Outlier Test Statistics
5 On the Normality Assumption in Regression Analysis
6 An Overview of Regression Methods
6.1 Deming Regression
6.2 Ridge Regression
6.3 Lasso Regression
6.4 Elastic Net
6.5 Robust Regression
6.6 Quantile Regression
6.7 Principal Component Regression (PCR)
6.8 Partial Least Squares (PLS)
6.9 LOESS
6.10 Spline Regression
6.11 Generalized Linear Models (GLMs)
6.12 Generalized Additive Models (GAMs)
7 Conclusion
A Residual Diagnostics
A.1 Residual Plots: SLR OLS Model (Clean Data/No Outlier)
A.2 Residual Plots: SLR OLS Model With An Outlier
A.3 Residual Plots: SLR Huber Robust With An Outlier

1 Introduction
Linear regression is a foundational method widely used for modeling due to its simplicity, interpretability, and strong theoretical underpinnings. The classical simple linear regression (SLR) model in scalar notation is given by:
$Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i, \quad \varepsilon_i \sim N(0, \sigma^2)$, (1)
where $\beta_0$ (the intercept) and $\beta_1$ (the slope) are the coefficients capturing the linear relationship between the predictor $X_i$ and the dependent/outcome/response variable $Y_i$, for $i = 1, \dots, n$. Estimation via OLS yields interpretable, closed-form solutions and enables simple statistical inference using t-tests and confidence intervals, assuming that the classical linear model conditions hold. As shown in [16], under assumptions SLR.1 through SLR.4 for simple linear regression, the OLS estimates remain unbiased; SLR.5, which assumes normally distributed errors, is only required for valid finite-sample inference using the t-distribution, but can in general be relaxed in large samples due to the Central Limit Theorem (CLT). One of the major advantages of linear regression is the interpretability of the estimated coefficient $\hat{\beta}_1$, representing the expected change in the response $Y$ for a one-unit increase in the predictor $X$ (assuming no re-scaling, e.g. to standard units).
Additionally, hypothesis testing on coefficients is conducted using Student's t-statistic:
$t = \frac{\hat{\beta}_1}{\mathrm{SE}(\hat{\beta}_1)} \sim t_{n-2}, \quad H_0: \beta_1 = 0$, (2)
which provides measures of statistical significance with the associated p-values, assuming normality of the error term or a (sufficiently) large sample size.
Despite its upsides, a critical limitation of OLS is its strong sensitivity to outliers. The use of the squared loss function $\sum_{i=1}^n (Y_i - \hat{Y}_i)^2$ for model fitting causes observations with large residuals to have a disproportionately high influence on the estimated coefficients. Although this sensitivity can in general be informative, it can also lead to highly misleading subsequent inferences and effect calculations. Even a single outlier can distort the estimated coefficient $\hat{\beta}_1$, resulting in underestimated standard errors and thus false statistical significance of the
predictor variable [18].
This paper investigates how outliers can lead to misleading conclusions in regression analysis. Through simulation using R, we demonstrate the fragility of OLS-based inference when finite-sample inference is manipulated by the insertion of a single outlier. To mitigate this issue, we additionally fit a robust Huber regression [5]. In doing so, we aim to demonstrate the limitations of classical OLS regression based on the quadratic loss function for statistical inference and advocate the use of additional robust alternatives and model diagnostic tools. These include regression diagnostics (for example, residual analysis, leverage, studentized residuals, Cook's distance [19]), model fit statistics (for example, $R^2$, F-tests), and formal outlier detection methods such as single-outlier statistical tests [20, 21, 17] using residuals, to ensure reliable and transparent communication of statistically significant results.

2 Linear Regression and Statistical Inference
We begin by recalling the standard form of the linear regression model in matrix notation:
$Y = X\beta + \varepsilon, \quad \varepsilon \sim N(0, \sigma^2 I)$, (3)
where $Y \in \mathbb{R}^n$ is the dependent/outcome/response variable, $X \in \mathbb{R}^{n \times p}$ is the covariate matrix of predictors (assumed to have full column rank), $\beta \in \mathbb{R}^p$ is the vector of coefficients to be estimated, and $\varepsilon \in \mathbb{R}^n$ is the error term, assumed to be independent and identically distributed with zero mean and constant variance $\sigma^2$ [16, Ch. 2]. These assumptions are known as the classical linear model assumptions, denoted MLR.1–MLR.5 in [16] in the context of multiple regression.

2.1 OLS Estimation
The coefficients $\beta$ are estimated using OLS, which minimizes the sum of squared residuals:
$\hat{\beta} = \arg\min_\beta \|Y - X\beta\|^2$. (4)
The solution to this optimization problem is given by:
$\hat{\beta} = (X^\top X)^{-1} X^\top Y$. (5)
This formula is valid under assumption MLR.3 (no perfect multicollinearity), which ensures that $X^\top X$ is invertible [16, Ch. 3].
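The estimation pipeline of Eqs. (4)–(5), together with the standard errors, t-statistics, and p-values derived in the following subsections, fits in a few lines. Below is a minimal Python/numpy sketch (the paper's own simulations use R); the simulated data, the seed, and the true slope of 0.5 are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

def ols_inference(X, y):
    """OLS fit with standard errors, t-statistics and two-sided p-values."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                     # closed-form OLS estimate
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)             # unbiased variance estimate
    se = np.sqrt(sigma2 * np.diag(XtX_inv))      # standard errors
    t = beta / se                                # t-statistics for H0: beta_j = 0
    pval = 2 * stats.t.sf(np.abs(t), df=n - p)   # two-sided p-values
    return beta, se, t, pval

# Illustrative data: intercept 0, true slope 0.5
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])             # design matrix with intercept
beta, se, t, pval = ols_inference(X, y)
```

With $n = 100$ and a true slope of 0.5, the slope's p-value typically falls far below conventional significance thresholds.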
2.2 Distribution of the OLS Estimator
Under the optional assumption MLR.5, which states that the error term $\varepsilon$ is normally distributed, the OLS estimator $\hat{\beta}$ also follows a multivariate normal distribution:
$\hat{\beta} \sim N\big(\beta, \sigma^2 (X^\top X)^{-1}\big)$. (6)
This result forms the basis for inference procedures such as hypothesis testing and confidence interval construction [16, Ch. 4].

2.3 Estimating Variance
Because $\sigma^2$ is unknown in practice, it is estimated using the residuals from the fitted model. Under assumption MLR.4 (homoskedasticity), an unbiased estimator of the variance is given by:
$\hat{\sigma}^2 = \frac{1}{n-p} \|Y - X\hat{\beta}\|^2 = \frac{1}{n-p} \sum_{i=1}^n \hat{\varepsilon}_i^2$, (7)
where $\hat{\varepsilon} = Y - X\hat{\beta}$ is the vector of residuals [16, Ch. 3].

2.4 Standard Error and Hypothesis Testing
The standard error of the estimated coefficient $\hat{\beta}_j$ is computed as:
$\mathrm{SE}(\hat{\beta}_j) = \sqrt{\hat{\sigma}^2 [(X^\top X)^{-1}]_{jj}}$. (8)
To test the null hypothesis $H_0: \beta_j = 0$, we compute the corresponding t-statistic as:
$t_j = \frac{\hat{\beta}_j}{\mathrm{SE}(\hat{\beta}_j)}$. (9)
Under the assumptions of the classical linear model, and particularly MLR.5, this statistic follows a Student's t-distribution with $n-p$ (or $n-p-1$, depending on how the intercept is counted) degrees of freedom:
$t_j \sim t_{n-p}$. (10)
The two-sided p-value can be calculated as:
$p\text{-val.} = 2 \cdot P(T > |t_j|), \quad T \sim t_{n-p}$, (11)
which tests whether the coefficient $\beta_j$ is statistically different from zero in the population regression model [16, Ch. 4].

3 The Effect of a Single Outlier on Regression Coefficients
OLS is sensitive to outliers. One unusual point can strongly affect the estimated slope, especially if the point has high leverage, that is, if it lies far from the center of the predictor distribution. This can make a non-significant result appear statistically significant
[18, 19]. We can show this with a simple example. We generate 100 observations $(x, y)$ in R using the seed 123, with no true relationship. Model 1, based on this clean data, shows no significant coefficient. In Model 2, we add one extreme outlier. This one point changes the slope to 1.62 and creates a highly significant result. Model 3 uses robust regression (Huber's M-estimator), which reduces the outlier's impact and gives a smaller slope again (see Table 1).

Table 1: Impact of a Single Outlier on the Statistical Significance of Linear Regression Estimates

                        No Outlier           With Outlier         Robust Regression
Variable                Estimate    SE       Estimate    SE       Estimate    SE
Intercept               -0.103      0.098    -0.115      0.230    -0.162      0.097
Coefficient on x        -0.052      0.107     1.620***   0.171     0.184**    0.072
Residual Std. Error      0.971 (df = 98)      2.289 (df = 99)      0.966 (df = 99)
R2 / Adj. R2             0.002 / -0.008       0.476 / 0.471        --
F Statistic              0.241 (1, 98)        89.99*** (1, 99)     --
Observations             100                  101                  101

Notes: Model 1 is estimated on clean data. Model 2 adds one outlier, which changes the slope and makes the result statistically significant. Model 3 uses robust regression (Huber) to reduce the influence of the outlier. Robust models often omit R2 or F-statistics because they do not apply directly. See Appendix A for residual plots. Significance levels: *p < 0.1; **p < 0.05; ***p < 0.01.

We can also illustrate the effect mathematically. When a single value in the data changes, the resulting change in the OLS estimate is, to first order, given by:
$\Delta\hat{\beta} \approx (X^\top X)^{-1} x_i^\top \Delta y_i$, (12)
where $x_i$ is the row vector for the $i$-th observation. This shows that the change in the estimated slope depends on both the residual size ($\Delta y_i$) and the leverage of the (outlier) point. Leverage is defined as:
$h_i = x_i (X^\top X)^{-1} x_i^\top$, (13)
where $h_i$ measures how far $x_i$ is from the center of the predictor space. High-leverage points can have a disproportionate effect on the fit [18]. Robust methods help reduce this sensitivity.
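Because $\hat{\beta}$ is linear in $y$, the influence formula of Eq. (12) is in fact exact for a pure response perturbation, and the leverages of Eq. (13) always sum to the number of parameters $p$. Both facts can be checked numerically; the following Python/numpy sketch uses arbitrary simulated data (the paper's simulations are in R):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])    # intercept + one predictor
y = rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y

# Perturb a single response value and refit
i, dy = 0, 5.0
y2 = y.copy()
y2[i] += dy
beta2 = XtX_inv @ X.T @ y2

# Eq. (12): predicted coefficient change; exact here since OLS is linear in y
delta = XtX_inv @ X[i] * dy

# Eq. (13): leverage of each observation, h_i = x_i (X'X)^{-1} x_i'
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)
```

Here `beta2 - beta` matches `delta` to machine precision, and `h.sum()` equals $p = 2$, the trace of the hat matrix.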
Huber's M-estimator [5] uses a different loss function that grows quadratically near zero but linearly in the tails, limiting the influence of large residuals. Another robust approach is Least Trimmed Squares (LTS), which fits the model using only the subset of observations with the smallest residuals [13]. Robust regression methods provide more stable and reliable estimates when datasets contain outliers.

4 Single Outlier Tests for Linear Regression Models
Outlier detection in linear regression models typically revolves around residuals, i.e., differences between observed and fitted values. Several test statistics target unusually large residuals to identify potential outliers.

4.1 Internally Studentized Residuals
The internally studentized residual for observation $i$ is:
$R_i := \frac{e_i}{\hat{\sigma}\sqrt{1 - h_{ii}}}, \quad \text{where } e_i := y_i - \hat{y}_i$, (14)
$h_{ii} := x_i^\top (X^\top X)^{-1} x_i, \quad \hat{\sigma}^2 := \frac{1}{n-p} \sum_{j=1}^n e_j^2$. (15)
Here, $h_{ii}$ is the leverage (the $i$-th diagonal entry of the so-called hat matrix $H := X (X^\top X)^{-1} X^\top$). The denominator rescales the residuals by their local standard deviation. Under the assumption of normally distributed errors (MLR.5), $R_i$ approximately follows a standard normal distribution for large $n$, but does not exactly follow a t-distribution. The externally studentized residual, which removes the $i$-th observation when estimating the variance, follows a $t_{n-p-1}$ distribution.

4.2 Maximum Absolute Internally Studentized Residual
The test statistic for detecting a single outlier is the maximum studentized residual:
$R_n := \max_{i=1,\dots,n} |R_i|$. (16)
Under the null hypothesis $H_0$ (no
outliers), an approximate critical value can be obtained using the Bonferroni correction. To control the family-wise error rate at level $\alpha$, we compare $R_n$ to the quantile of the Student's t-distribution with $n-p-1$ degrees of freedom:
$P\big(R_n > t_{1-\alpha/(2n),\, n-p-1}\big) \le \alpha$. (17)

4.3 Normalized Maximum Ordinary Residual
An alternative approach avoids using the hat matrix $H$ and instead relies on the unadjusted, or raw, residuals. The corresponding test statistic is the normalized maximum ordinary residual:
$R_n^* = \sqrt{n} \cdot \frac{\max_i |e_i|}{\|e\|_2} = \sqrt{n} \cdot \frac{\max_i |y_i - \hat{y}_i|}{\sqrt{\sum_{j=1}^n e_j^2}}$. (18)
This statistic highlights large absolute residuals relative to the overall residual magnitude. Since it does not account for leverage, it can be more sensitive to large deviations in response values. However, this also means it may overlook influential outliers associated with high-leverage points.

4.4 Distributional Properties and Critical Values
The exact distributions of $R_n$ (maximum studentized residual) and $R_n^*$ (normalized maximum ordinary residual) are not analytically tractable. Therefore, critical values are typically estimated using one or more of the following methods:
• Bonferroni-adjusted t-tests applied to individual residuals,
• Monte Carlo or permutation tests under the null hypothesis $H_0$,
• Conservative bounds derived from F-distributions or inverted Student's t-distributions.

4.5 Upper Bound of Single Outlier Test Statistics
Ugah et al. [17] derive an upper bound, identical for both statistics:
$R_0^* = \sqrt{\frac{(n-p)\, F_{\alpha/n,\,1,\,n-p-1}}{n-p-1 + F_{\alpha/n,\,1,\,n-p-1}}}$. (19)

Figure 1: Upper bounds of critical values for $R_n$ by sample size $n$ and significance level $\alpha$.

5 On the Normality Assumption in Regression Analysis
Recall again MLR.5 for exact finite-sample inference. Under this assumption, the least squares estimator
$\hat{\beta} = (X^\top X)^{-1} X^\top y$ (20)
is unbiased, efficient, and follows:
$\hat{\beta} \sim N(\beta, \sigma^2 (X^\top X)^{-1})$. (21)
This enables valid inference using standard test statistics.
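Before moving on, the single-outlier test of Sections 4.1–4.2 can be sketched end to end: compute the internally studentized residuals of Eqs. (14)–(15), take the maximum as in Eq. (16), and compare it with the Bonferroni-adjusted t-quantile of Eq. (17). A Python sketch with an artificially planted outlier (the shift of 8 and the seed are arbitrary choices):

```python
import numpy as np
from scipy import stats

def max_studentized_residual(X, y):
    """Internally studentized residuals and the statistic R_n = max_i |R_i|."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix
    e = y - H @ y                           # ordinary residuals
    sigma2 = e @ e / (n - p)
    R = e / np.sqrt(sigma2 * (1 - np.diag(H)))
    return np.max(np.abs(R))

def bonferroni_critical_value(n, p, alpha=0.05):
    """Bonferroni-adjusted t-quantile, t_{1 - alpha/(2n), n-p-1}."""
    return stats.t.ppf(1 - alpha / (2 * n), df=n - p - 1)

rng = np.random.default_rng(7)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)
y[0] += 8.0                                 # plant one gross outlier
Rn = max_studentized_residual(X, y)
crit = bonferroni_critical_value(n, p=2)    # R_n > crit flags an outlier
```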
The t-statistic for testing $H_0: \beta_j = 0$ is:
$t_j = \frac{\hat{\beta}_j - \beta_j}{\mathrm{SE}(\hat{\beta}_j)} = \frac{\hat{\beta}_j}{\sqrt{\hat{\sigma}^2 (X^\top X)^{-1}_{jj}}} \sim t_{n-p}$, (22)
where
$\hat{\sigma}^2 = \frac{1}{n-p} \sum_{i=1}^n (y_i - \hat{y}_i)^2$. (23)
The residuals $\hat{\varepsilon}_i$, however, are correlated due to their dependence on the fitted values via the hat matrix $H = X (X^\top X)^{-1} X^\top$, where $\hat{y} = Hy$ and $\hat{\varepsilon} = (I - H)y$. While OLS remains unbiased and consistent under weaker assumptions (e.g., finite variance and exogeneity), the exact distribution of test statistics like $t_j$ requires normality.

Outlier Effects on the Normality Assumption in Regression Analysis
Outliers violate this assumption in two key ways:
1. Non-normality of residuals: A single outlier induces skewness or kurtosis in the residual distribution, invalidating $t_j \sim t_{n-p}$. Deviations from linearity are often visible in the Q-Q plot. ⟹ Inflated or deflated Type I error rates.
2. Distortion of variance estimates: Outliers distort $\hat{\sigma}^2$, directly affecting $\mathrm{SE}(\hat{\beta}_j)$ and thus statistical inference. For high-leverage points $h_{ii}$ (diagonal entries of the hat matrix), even a small residual $e_i$ can disproportionately influence:
$\mathrm{SE}(\hat{\beta}_j) \propto \sqrt{(X^\top X)^{-1}_{jj}}$. (24)
⟹ Misestimated p-values, false significance, or masked effects. (Robust methods reduce the outlier's influence.)
Under outlier contamination, the OLS estimator becomes:
$\hat{\beta} = \hat{\beta}^{(0)} + \Delta(x_o, y_o)$, (25)
where $\Delta$ increases with the leverage $h_{oo}$ and the residual magnitude $|e_o|$. Thus, $\hat{\beta}$ can become biased in finite samples and may become inconsistent, particularly if the outlier introduces dependence between the covariates and the errors. By the Gauss-Markov theorem, OLS is the Best Linear Unbiased Estimator (BLUE) under the classical conditions, regardless of normality. In large samples, the Central Limit Theorem (CLT) implies that $\hat{\beta}$ is approximately normal, allowing for asymptotic inference even when the error distribution is not normal.

6 An Overview of Regression Methods
Linear regression relies on the Gauss-Markov assumptions, which ensure that the OLS estimator is the BLUE.
These assumptions include linearity of the model, meaning the outcome variable $y$ is expressed as a linear combination of the covariates. The covariate matrix $X$ must have full column rank so that $X^\top X$ is invertible, ensuring that the parameter estimates are uniquely defined. Exogeneity is also required, meaning the regressors are uncorrelated with the error term: $E[\varepsilon \mid X] = 0$, which implies $\mathrm{Cov}(X, \varepsilon) = 0$. The error terms must be homoscedastic, with constant variance $\mathrm{Var}(\varepsilon) = \sigma^2 I$, and they must be uncorrelated across observations, i.e., $\mathrm{Cov}(\varepsilon_i, \varepsilon_j) = 0$ for all $i \ne j$. While these assumptions are sufficient for OLS to be unbiased and efficient, the additional assumption of normally distributed errors, $\varepsilon \sim N(0, \sigma^2 I)$, is required for exact finite-sample inference using t-tests and F-tests. In practice, these conditions are often violated. To address such limitations, various extensions and robust methods have been developed. Below, we outline some of the most widely used regression methods and highlight their relation to the underlying Gauss-Markov assumptions.

6.1 Deming Regression
Deming regression accounts for measurement error in both $x$ and $y$, minimizing orthogonal distances:
$\min_{\beta_0, \beta_1} \sum_{i=1}^n \frac{(y_i - \beta_0 - \beta_1 x_i)^2}{1 + \lambda}, \quad \lambda = \frac{\sigma_x^2}{\sigma_y^2}$ (26)
Useful when both variables are noisy [1]. Assumptions addressed: Addresses the violation of fixed $x$; assumes homoscedastic, independent errors.

6.2 Ridge Regression
Ridge regression applies an L2 penalty to control variance from multicollinearity:
$\min_\beta \sum_{i=1}^n (y_i - x_i^\top \beta)^2 + \lambda \sum_{j=1}^p \beta_j^2$ (27)
[2]. Assumptions addressed: Mitigates multicollinearity.

6.3 Lasso Regression
Lasso uses an L1 penalty to induce sparsity:
$\min_\beta \sum_{i=1}^n (y_i - x_i^\top \beta)^2 + \lambda \sum_{j=1}^p |\beta_j|$ (28)
Some coefficients may be exactly zero [3]. Assumptions addressed: Mitigates multicollinearity.
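The ridge problem of Eq. (27) has the well-known closed form $\hat{\beta}_{\text{ridge}} = (X^\top X + \lambda I)^{-1} X^\top y$, and the penalty guarantees that the coefficient norm can only shrink as $\lambda$ grows. A Python/numpy sketch on deliberately near-collinear predictors (the data and the choice of $\lambda$ are arbitrary; in practice the intercept is usually left unpenalized and is omitted here):

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator: (X'X + lam * I)^{-1} X'y; lam=0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Two nearly collinear predictors: OLS coefficients are unstable,
# ridge keeps them small and stable.
rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 1e-2 * rng.normal(size=n)     # x2 is almost a copy of x1
X = np.column_stack([x1, x2])
y = x1 + rng.normal(size=n)

beta_ols = ridge(X, y, lam=0.0)
beta_ridge = ridge(X, y, lam=1.0)
```

Increasing $\lambda$ can only decrease $\|\hat{\beta}_{\text{ridge}}\|$, a direct consequence of the penalized objective.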
6.4 Elastic Net
Elastic Net combines L1 and L2 penalties:
$\min_\beta \sum_{i=1}^n (y_i - x_i^\top \beta)^2 + \lambda_1 \sum_j |\beta_j| + \lambda_2 \sum_j \beta_j^2$ (29)
Balances sparsity and stability [4]. Assumptions addressed: Mitigates multicollinearity.

6.5 Robust Regression
Robust regression reduces sensitivity to outliers by using a loss function $\rho(\cdot)$ less sensitive than the squared error:
$\min_\beta \sum_{i=1}^n \rho(y_i - x_i^\top \beta)$ (30)
Huber's loss is a common default choice [5]. Assumptions addressed: Handles non-normality and heteroscedasticity.

6.6 Quantile Regression
Quantile regression estimates conditional quantiles by minimizing an asymmetric loss:
$\min_\beta \sum_{i=1}^n \rho_\tau(y_i - x_i^\top \beta), \quad \rho_\tau(u) = u(\tau - \mathbf{1}\{u < 0\})$ (31)
[6]. Assumptions addressed: Handles heteroscedasticity and non-normality.

6.7 Principal Component Regression (PCR)
PCR applies PCA to $X$, then regresses $y$ on the top components. It reduces multicollinearity and variance [7]. Assumptions addressed: Mitigates multicollinearity.

6.8 Partial Least Squares (PLS)
PLS projects $X$ onto components maximizing covariance with $y$, often outperforming PCR when predictors are correlated with the outcome [8]. Assumptions addressed: Mitigates multicollinearity.

6.9 LOESS
LOESS fits local linear models weighted by distance:
$\min_{\beta_0, \beta_1} \sum_{i=1}^n w_i(x)(y_i - \beta_0 - \beta_1 x_i)^2$ (32)
Flexible and nonparametric [9]. Assumptions addressed: Relaxes linearity; locally handles heteroscedasticity.

6.10 Spline Regression
Spline regression uses piecewise polynomials with a smoothness penalty:
$\min_f \sum_i (y_i - f(x_i))^2 + \lambda \int (f''(x))^2\, dx$ (33)
The smoothing parameter $\lambda$ controls complexity [10]. Assumptions addressed: Relaxes linearity.

6.11 Generalized Linear Models (GLMs)
GLMs generalize linear models via a link function:
$g(E[y_i]) = x_i^\top \beta$ (34)
Includes logistic and Poisson models [11]. Assumptions addressed: Handles non-normality and heteroscedasticity.
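The robust objective of Eq. (30) with Huber's loss is typically minimized by iteratively reweighted least squares (IRLS). The Python sketch below qualitatively mirrors the experiment behind Table 1: data with no true $x$-$y$ relationship, one extreme outlier that inflates the OLS slope, and a Huber fit that pulls it back. This is a simplified illustration, not the paper's R code; the tuning constant $c = 1.345$ and the MAD-based scale are standard choices, and the outlier coordinates are arbitrary.

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber M-estimation via iteratively reweighted least squares.
    Simplified sketch: the scale is re-estimated by the MAD each iteration."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting values
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745
        u = r / max(scale, 1e-12)
        # Huber weights: 1 inside [-c, c], c/|u| in the tails
        w = np.minimum(1.0, c / np.maximum(np.abs(u), 1e-12))
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

# Data with no true relationship, plus one extreme high-leverage outlier
rng = np.random.default_rng(123)
n = 100
x = np.append(rng.normal(size=n), 10.0)
y = np.append(rng.normal(size=n), 20.0)
X = np.column_stack([np.ones(n + 1), x])

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # slope dragged toward outlier
beta_hub = huber_irls(X, y)                      # outlier downweighted
```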
6.12 Generalized Additive Models (GAMs)
GAMs allow additive nonlinear effects:
$E[y_i] = \alpha + f_1(x_{i1}) + \dots + f_p(x_{ip})$ (35)
Each $f_j$ is estimated nonparametrically (e.g., via splines) [12]. Assumptions addressed: Relaxes linearity; handles non-normality and mild heteroscedasticity.

7 Conclusion
Outliers can inflate the significance of regression coefficients, leading to
incorrect inferences. The normality assumption on the residuals is critical for valid t-tests, and its violation, commonly due to outliers, necessitates careful diagnostic checks or the use of robust methods. It is advised to always conduct residual diagnostics and apply robust regression techniques to assess and mitigate these issues, even after outlier removal, especially in finite-sample inference. The use of outlier diagnostics utilizing both residuals and leverage is also recommended, such as standardized residuals, leverage plots, Cook's distance, DFBETAS, and influence plots. Normality of residuals can be assessed using tools like the Q-Q plot, the Shapiro-Wilk test, or a histogram of residuals.

References
[1] P. J. Cornbleet and N. Gochman. Incorrect least-squares regression coefficients in method-comparison analysis. Clinical Chemistry, 35(4):584–585, 1989.
[2] Arthur E. Hoerl and Robert W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55–67, 1970.
[3] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58(1):267–288, 1996.
[4] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B, 67(2):301–320, 2005.
[5] Peter J. Huber. Robust estimation of a location parameter. Annals of Mathematical Statistics, 35(1):73–101, 1964.
[6] Roger Koenker and Gilbert Bassett. Regression quantiles. Econometrica, 46(1):33–50, 1978.
[7] Ian T. Jolliffe. Principal Component Analysis. Springer, 2nd edition, 2002.
[8] Herman Wold, Axel Ruhe, Svante Wold, and William J. Dunn III. The collinearity problem in linear regression. The partial least squares (PLS) approach to generalized inverses. In W. J. Krzanowski, editor, Multivariate Analysis, pages 167–198. Academic Press, 1984.
[9] William S. Cleveland. Robust locally weighted regression and smoothing scatterplots.
Journal of the American Statistical Association, 74(368):829–836, 1979.
[10] Carl de Boor. A Practical Guide to Splines. Springer, 1978.
[11] Peter McCullagh and John A. Nelder. Generalized Linear Models. Chapman and Hall, 2nd edition, 1989.
[12] Trevor Hastie and Robert Tibshirani. Generalized Additive Models. Chapman and Hall, 1990.
[13] Peter J. Rousseeuw and Annick M. Leroy. Robust Regression and Outlier Detection. Wiley, 1987.
[14] John Fox. An R and S-Plus Companion to Applied Regression. Sage Publications, 2002.
[15] Amand F. Schmidt and Chris Finan. Linear regression and the normality assumption. Journal of Clinical Epidemiology, 98:146–151, 2018.
[16] Jeffrey M. Wooldridge. Introductory Econometrics: A Modern Approach. Cengage Learning, 7th edition, 2019.
[17] Tobias E. Ugah, Emmanuel I. Mba, Micheal C. Eze, Kingsley C. Arum, Ifeoma C. Mba, and Henrietta E. Oranye. On the upper bounds of test statistics for a single outlier test in linear regression models. Journal of Applied Mathematics, 2021:5 pages, 2021.
[18] David A. Belsley, Edwin Kuh, and Roy E. Welsch. Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. John Wiley & Sons, 1980.
[19] R. Dennis Cook. Detection of influential observations in linear regression. Technometrics, 19(1):15–18, 1977.
[20] Vic Barnett and Toby Lewis. Outliers in Statistical Data. Wiley, 1978.
[21] Sanford Weisberg. Multiple outlier detection in regression using studentized residuals. Technometrics, 22(2):139–144, 1980.

A Residual Diagnostics
This appendix presents residual diagnostic plots for three models: (1) a clean OLS model without any outliers, (2) an OLS model with a single high-leverage outlier, and (3) a robust regression
model fit to the contaminated data. These plots provide visual evidence of how outliers affect model assumptions and how robust methods mitigate their influence.

A.1 Residual Plots: SLR OLS Model (Clean Data/No Outlier)
Figure 2: Diagnostic plots for the OLS model. The residuals appear homoscedastic (constant variance), symmetrically distributed, and independent. The Q-Q plot indicates approximate normality, validating the assumptions of OLS.

A.2 Residual Plots: SLR OLS Model With An Outlier
Figure 3: Diagnostic plots for the OLS model including a high-leverage outlier. The residuals show clear distortion: heteroscedasticity, skewness, and heavy tails are evident. The Q-Q plot deviates significantly from the normal line, and the residuals vs. fitted plot shows a large residual corresponding to the outlier. This illustrates the breakdown of classical OLS assumptions.

A.3 Residual Plots: SLR Huber Robust With An Outlier
Figure 4: Residuals vs. fitted values for the robust regression model applied to the data with an outlier. Compared to the standard OLS fit (A.2), the residuals are more evenly spread and the extreme influence of the outlier is visibly diminished. This confirms that the robust method effectively downweights the anomalous observation, preserving the integrity of the regression fit.
arXiv:2505.10747v1 [math.ST] 15 May 2025

Assumption-lean weak limits and tests for two-stage adaptive experiments

Ziang Niu and Zhimei Ren
Department of Statistics and Data Science, University of Pennsylvania

May 24, 2025

Abstract

Adaptive experiments are becoming increasingly popular in real-world applications for effectively maximizing in-sample welfare and efficiency by data-driven sampling. Despite their growing prevalence, however, the statistical foundations for valid inference in such settings remain underdeveloped. Focusing on two-stage adaptive experimental designs, we address this gap by deriving new weak convergence results for mean outcomes and their differences. In particular, our results apply to a broad class of estimators, the weighted inverse probability weighted (WIPW) estimators. In contrast to prior works, our results require significantly weaker assumptions and sharply characterize phase transitions in limiting behavior across different signal regimes. Through this common lens, our general results unify previously fragmented results under the two-stage setup. To address the challenge of potential non-normal limits in conducting inference, we propose a computationally efficient and provably valid plug-in bootstrap method for hypothesis testing. Our results and approaches are sufficiently general to accommodate various adaptive experimental designs, including batched bandit and subgroup enrichment experiments. Simulations and semi-synthetic studies demonstrate the practical value of our approach, revealing statistical phenomena unique to adaptive experiments.

1 Introduction

Adaptive experiments are able to achieve substantial efficiency gains compared with traditional non-adaptive experimental designs. They often allocate resources more effectively and require fewer samples or observations to attain the same statistical power or estimation precision.
Such designs have been successfully applied in areas such as clinical trials (Sampson et al., 2005; Hu et al., 2006; Magnusson et al., 2013), online learning (Slivkins et al., 2019; Lattimore et al., 2020), mobile health interventions (Klasnja et al., 2019; Liao et al., 2020), and online education platforms (Rafferty et al., 2019; Kizilcec et al., 2020). However, the adaptive design of these experiments introduces dependencies among observations, violating the independence and identical distribution (i.i.d.) assumptions underlying classical inference methods. As a result, widely used estimators—such as the sample mean and the inverse probability weighted estimator—may exhibit bias and non-normal sampling distributions under adaptive data collection (Bowden et al., 2017; Shin et al., 2019; Hadad et al., 2021; Shin et al., 2021). In practice, analyzing adaptive experiments using conventional statistical tools while ignoring the dependencies can lead to severe selection bias (Dwork et al., 2015). The limited theoretical understanding of the statistical behavior of adaptive experiments continues to hinder the development of valid and generalizable inference methods. This represents a major barrier to their reliable use in real-world applications.

In this paper, we study two-stage adaptive experiments, a design framework that has been widely adopted in practice (Sampson et al., 2005; Sladek et al., 2007; Sill et al., 2009; Wu et al., 2010; Gasperini et al., 2019; Lin et al., 2021; Kasy et al., 2021; Che et al., 2023; Schraivogel et al., 2023). The typical data generating process in these experiments can be outlined as follows: there are two stages of data collection, the pilot stage and the follow-up
https://arxiv.org/abs/2505.10747v1
stage. In the pilot stage, i.i.d. data $\mathcal{D}_P \sim P_P$ are collected and inform a selection algorithm $\mathcal{S}(\mathcal{D}_P)$. Then, in the follow-up stage, new data $\mathcal{D}_F \sim P_F(\mathcal{D}_P)$ are gathered according to the output of the selection algorithm $\mathcal{S}$, resulting in data that are conditionally i.i.d. given $\mathcal{D}_P$ (see Algorithm 1 for a complete description). The choice of $\mathcal{S}$ depends on the goal of the experiment. Common objectives include welfare maximization (Sampson et al., 2005; Wu et al., 2010; Che et al., 2023) and scientific exploration (Sladek et al., 2007; Gasperini et al., 2019).

Algorithm 1: Two-stage adaptive experiment.
Input: Selection algorithm $\mathcal{S}$.
Pilot stage:
1. Observe i.i.d. data $\mathcal{D}_P$ from the law $P_P$.
2. Apply the selection criterion $\mathcal{S}(\mathcal{D}_P)$ to modify the law $P_P$ to $P_F(\mathcal{D}_P)$.
Follow-up stage:
3. Collect conditionally i.i.d. data $\mathcal{D}_F$ from the modified law $P_F(\mathcal{D}_P)$.
Output: Pooled outcome $\mathcal{D}_P \cup \mathcal{D}_F$.

The main challenge of conducting valid statistical inference with data $\mathcal{D}_P \cup \mathcal{D}_F$ is to handle the complex dependence structure introduced by the selection algorithm $\mathcal{S}$.

1.1 Relevant literature

Existing work on inference for adaptive experiments largely falls into two main categories, distinguished by the type of inference they offer.

Conditional inference provides valid inference conditional on the output of the selection algorithm $\mathcal{S}(\mathcal{D}_P)$. Approaches that achieve this guarantee include data splitting (Cox, 1975), data carving (Fithian et al., 2014; Chen et al., 2023), and randomization-based selective inference (Freidling et al., 2024). This type of guarantee captures the effect of the selection procedure and adjusts for conditional bias. However, when the estimand is a marginal parameter (e.g., the outcome mean), conditional inference typically incurs an efficiency loss (Hu et al., 2006; Marschner, 2021). Moreover, due to the potentially complex conditioning event, methods developed for conditional inference can be computationally demanding, with the exception of data splitting.
Marginal inference accounts for all sources of randomness, leading to more straightforward interpretation when the inferential target is the marginal estimand. Towards this end, different approaches have been proposed, with finite-sample or asymptotic guarantees. Among the former, some seek to achieve exact finite-sample validity (Sampson et al., 2005; Sill et al., 2009; Wu et al., 2010; Neal et al., 2011; Nair et al., 2023). However, these methods are relatively restrictive, as they are either computationally intensive and/or highly sensitive to distributional assumptions. Anytime-valid inference methods (Johari et al., 2015; Howard et al., 2021; Howard et al., 2022; Maharaj et al., 2023; Ramdas et al., 2023; Waudby-Smith et al., 2024) provide finite-sample validity via probabilistic bounds. These bounds, however, are usually conservative and can substantially reduce power (although they generalize well beyond two-stage settings like Algorithm 1). In contrast, asymptotic inferential methods (Zhang et al., 2020; Hadad et al., 2021; Lin et al., 2021; Adusumilli, 2023; Hirano et al., 2023) tend to be statistically more efficient than conditional or anytime-valid approaches. Relying solely on large-sample behavior, asymptotic methods are generally agnostic to outcome distributions. Our work falls into the category of marginal inference with asymptotic validity.

The works of Zhang et al. (2020), Hadad et al. (2021), Adusumilli (2023), and Hirano et al. (2023) are the most related ones. Zhang
et al. (2020) and Hadad et al. (2021) establish asymptotic normality results for the outcome means by imposing strong assumptions on the signal strength or the distribution of the potential outcomes.^1 Hirano et al. (2023) provide a general representation of the limiting distribution of test statistics in a multi-stage setup, similar to that in Algorithm 1. Later, Adusumilli (2023) uses these representations to study the optimality of the tests under the same setup. Their results are built upon Le Cam's theory on limits of experiments (Le Cam et al., 1972), which are valid only under contiguous alternatives and smooth (semi-)parametric outcome distributions. Despite the elegance of the results, the general representation requires verifying the existence of certain weak limits, which must be addressed on a case-by-case basis and therefore poses a barrier for practitioners. A more detailed comparison with the related literature is provided in Section 3.3.

From the inferential perspective, strong assumptions about signal strength and data generating distributions in this line of work can have significant practical consequences. Misusing the limiting distribution in hypothesis testing may yield Type-I error inflation (see an example in Figure 6 in Appendix E.2). Likewise, the stringent distributional requirements on $P_F$ and $P_P$ are often unrealistic when outcomes display complex or non-standard behavior. A vivid illustration arises in exploratory biological studies, such as single-cell CRISPR screens (Dixit et al., 2016; Gasperini et al., 2019). In these experiments, the main outcomes, gene-expression measurements, typically exhibit overdispersion, measurement error, and technical batch effects, all of which may violate the prespecified distributional assumptions. These caveats highlight the need for a robust inferential framework that can accommodate a broader range of signal strengths and remain agnostic to the outcome distribution.
Beyond inference, assumption-lean weak limits offer a compelling tool for experimental design and power analysis. Alternatively, simulation methods have been proposed to design new adaptive experiments (Chapter VII; Food et al., 2019), which can bypass the derivation of limiting distributions. Simulation-based strategies typically rely on stringent parametric assumptions, are largely heuristic, and become computationally intensive when navigating a vast parameter space. In contrast, distribution-agnostic weak limits provide a far more efficient alternative—so long as one can sample from the limiting distribution effectively. Pursuing this direction, Che et al. (2023) prove a joint weak-limit result for batched multi-armed bandits and leverage it to design batch-level allocation rules that minimize Bayes simple regret. Their framework, however, prioritizes regret minimization rather than hypothesis testing, and is not sufficient for inference or for designing experiments that aim to maximize statistical testing power.

^1 The setting of Zhang et al. (2020) and Hadad et al. (2021) can be more general than the two-stage adaptive experiments we consider in Algorithm 1.

1.2 Our contributions

To address these gaps in asymptotic inference for two-stage experiments, we study the asymptotic distribution of a broad class of weighted inverse probability weighted (WIPW) statistics. This class includes several widely used statistics, such as the IPW statistic (Bowden et al., 2017) and the variance-stabilizing IPW statistic (Luedtke et al., 2016; Bibaut et al., 2021; Hadad et
al., 2021). We establish weak convergence results under minimal assumptions. Building on this foundation, we propose a valid and computationally efficient bootstrap procedure for hypothesis testing in the presence of non-normal limiting null distributions. Specifically, our main contributions are summarized as follows.

1. Assumption-lean weak limits: We derive new weak convergence results for WIPW estimators in two-stage experimental settings (Algorithm 1). These results apply to a wide range of signal strengths under minimal distributional assumptions. Such generality ensures valid inference across the null (zero signal), contiguous (weak signal), and fixed (strong signal) regimes, addressing key demands in hypothesis testing. Our analysis also uncovers a smooth transition of the limiting distribution across signal regimes, offering a unified perspective that connects several existing results in the literature. The proofs of these results require a new set of probabilistic tools to account for the dependence structure induced by adaptive data collection. To this end, we derive new results on conditional normal approximation (Lemma 15) and continuous mapping theorems (Lemmas 17 and 18), generalizing the existing unconditional tools (Chatterjee et al., 2008; Meckes, 2009a). These new tools may be of independent interest for analyzing other adaptive experiments.

2. A fast bootstrap testing procedure: Building on the general weak convergence results, we define a class of asymptotically valid tests using WIPW test statistics. The critical values in the tests are determined by quantiles of the non-normal limiting distributions under the null. To obtain the analytically intractable critical values, we propose a computationally efficient and provably valid plug-in bootstrap procedure to estimate the critical values.
The procedure rests on the key insight that the derived weak limits can be expressed as a randomly weighted sum of dependent Gaussian random variables. By leveraging the new bootstrap procedure, valid and practical hypothesis tests can be conducted despite the complexity of the limiting distribution. Importantly, the procedure is nonparametric and thus agnostic to the outcome distribution. Moreover, it has time complexity that is independent of the sample size (conditional on estimated nuisance parameters), making it highly scalable.

We apply our general results to two real-world adaptive experimental designs: batched bandit experiments and subgroup enrichment experiments. Although these experiments arise in distinct scientific contexts, both can be naturally accommodated within our theoretical framework. We then conduct extensive numerical simulations to evaluate the finite-sample performance of the proposed bootstrap-based testing procedures. Additionally, we perform a semi-synthetic data analysis based on the Systolic Blood Pressure Intervention Trial (SPRINT) (Ambrosius et al., 2014), a large-scale randomized controlled study. The results demonstrate the practical utility of our methods in realistic settings. Notably, our analysis reveals several phenomena that are unique to adaptive designs and would not arise in classical randomized controlled experiments. Moreover, our results can be readily applied to the design of adaptive experiments, as they offer a clear understanding of the limiting behavior of the test statistics. Code to reproduce these analyses is available at https://github.com/ZiangNiu6/AdaInf-manuscript.

1.3 Organization of the paper

Section 2 introduces the two-stage adaptive data collection procedure and the
WIPW test statistic. In Section 3, we present the formal results on weak convergence and the bootstrap methodology, instantiate our general theory in various adaptive experiments, and establish the connection to existing works. In Section 4, we evaluate the finite-sample performance of the derived tests. We conclude the paper with a discussion in Section 5.

2 Data generating procedure and test statistic

We will discuss the data collection procedure in Section 2.1 and the test statistics of interest in Section 2.2.

2.1 Two-stage adaptive data collection

We denote the sample sizes for the pilot and follow-up stages as $N_1$ and $N_2$, respectively, and treat them as given. The total sample size is defined as $N \equiv N_1 + N_2$, and the sample size ratio for the two stages is fixed as $q_t \equiv N_t/N \in (0,1)$ for $t \in \{1,2\}$. Throughout this paper, we adopt the triangular array framework, allowing the distribution to vary with $N$. To emphasize this dependence, we use the subscript $N$ when defining the random variables. Also, we define $[I] \equiv \{1, \ldots, I\}$ for any integer $I \geq 1$.

In our setup, there are two competing treatments indexed by 0 and 1. Let $A \in \{0,1\}$ denote the assigned treatment. Suppose $(A^{(t)}_{uN}, Y^{(t)}_{uN})_{u \in [N_t]}$ denotes the observed data at stage $t$, where $Y^{(t)}_{uN}$ is the observed outcome corresponding to the assigned treatment $A^{(t)}_{uN}$. Let $\mathcal{H}_t = \sigma\big((A^{(t)}_{uN}, Y^{(t)}_{uN})_{u \in [N_t]}\big)$ be the $\sigma$-algebra generated by the observed data at stage $t$. Additionally, define $\mathcal{H}_0 \equiv \{\emptyset, \Omega\}$. Adopting the potential outcome framework, for the $N$ subjects the potential outcomes are denoted as $\{(Y^{(t)}_{uN}(0), Y^{(t)}_{uN}(1)) : t \in [2], u \in [N_t]\}$. They are independently and identically distributed as $(Y_{uN}(0), Y_{uN}(1))$ for any fixed $N$. To identify the distribution of the potential outcome variables, we assume the following consistency and unconfoundedness conditions throughout this paper.

• Consistency: $Y^{(t)}_{uN} = A^{(t)}_{uN} Y^{(t)}_{uN}(1) + (1 - A^{(t)}_{uN})\, Y^{(t)}_{uN}(0)$, $u \in [N_t]$, $t \in [2]$;

• Unconfoundedness: $(Y^{(t)}_{uN}(0), Y^{(t)}_{uN}(1)) \perp\!\!\!\perp A^{(t)}_{uN} \mid \mathcal{H}_{t-1}$, $u \in [N_t]$, $t \in [2]$.
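As a concrete illustration, the two conditions above and the two-stage collection of Algorithm 1 can be simulated in a few lines. All specifics below are illustrative assumptions rather than the paper's setup: Gaussian potential outcomes, a fair pilot propensity $e(1) = 0.5$, and a logistic sampling function for the follow-up stage.

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2 = 500, 500

def draw_stage(n, p_treat, mu=(0.0, 0.3)):
    """Draw one stage: potential outcomes (Y(0), Y(1)), a treatment A with
    P[A = 1] = p_treat drawn independently of the potential outcomes
    (unconfoundedness), and the observed outcome via consistency."""
    Y0 = rng.normal(mu[0], 1.0, size=n)
    Y1 = rng.normal(mu[1], 1.0, size=n)
    A = rng.binomial(1, p_treat, size=n)
    Y = A * Y1 + (1 - A) * Y0          # consistency: Y = A*Y(1) + (1-A)*Y(0)
    return A, Y

# Pilot stage with fixed assignment probability e(1) = 0.5.
A1, Y1_obs = draw_stage(N1, p_treat=0.5)

# Interim scaled IPW estimates for each arm, and an illustrative selection
# rule: assign arm 0 with probability increasing in S(0) - S(1)
# (a logistic sampling function; any rule satisfying the paper's
# assumptions could be substituted here).
S0 = ((A1 == 0) * Y1_obs / 0.5).sum() / np.sqrt(N1)
S1 = ((A1 == 1) * Y1_obs / 0.5).sum() / np.sqrt(N1)
p0_follow = 1.0 / (1.0 + np.exp(-(S0 - S1)))

# Follow-up stage: conditionally i.i.d. given the pilot data.
A2, Y2_obs = draw_stage(N2, p_treat=1.0 - p0_follow)
```

Pooling the two stages then yields the data $\mathcal{D}_P \cup \mathcal{D}_F$ on which the estimators of Section 2.2 operate.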
These are two assumptions that are commonly made in the causal inference literature (Imbens et al., 2015). The consistency assumption states that the observed outcome is equal to the potential outcome under the assigned treatment. The unconfoundedness assumption states that the potential outcomes are independent of the treatment assignment, given the information from previous stages. Now we describe the observed data generating procedure.

1. Pilot stage: In the pilot stage, we observe $(A^{(1)}_{uN}, Y^{(1)}_{uN})$ for $u \in [N_1]$, with treatment assignment probabilities $e(s) \equiv P[A^{(1)}_{uN} = s]$^2 for $s \in \{0,1\}$, where $e(0) + e(1) = 1$. A selection algorithm $\mathcal{S}$ (introduced in Algorithm 1) determines treatment assignment for the follow-up stage. For an estimator $S^{(1)}_N(0) - S^{(1)}_N(1)$ of the difference in means $E[Y_{uN}(0)] - E[Y_{uN}(1)]$, the sampling probabilities are then updated based on $\mathcal{S}(S^{(1)}_N(0) - S^{(1)}_N(1))$.

2. Follow-up stage: In the follow-up stage, we define the new sampling probabilities as $P[A^{(2)}_{uN} = 0 \mid \mathcal{H}_1] = \mathcal{S}(S^{(1)}_N(0) - S^{(1)}_N(1))$ and $P[A^{(2)}_{uN} = 1 \mid \mathcal{H}_1] = 1 - P[A^{(2)}_{uN} = 0 \mid \mathcal{H}_1]$. With these updated probabilities, we collect the data $(A^{(2)}_{uN}, Y^{(2)}_{uN})$ for $u \in [N_2]$. These observations are independently and identically distributed, conditional on the information from the pilot stage ($\mathcal{H}_1$).

We make one comment on the selection algorithm
$\mathcal{S}$.

Remark 1 (Generality of $\mathcal{S}$). Our main results readily generalize to settings where the selection algorithm depends on more complex functions of the data beyond the simple difference in means. However, for clarity of presentation and broad applicability, we focus on selection algorithms $\mathcal{S}$ based on the estimator of the difference in means, $S^{(1)}_N(1) - S^{(1)}_N(0)$. In fact, such an algorithm reflects several important practical objectives in adaptive experimentation. A natural sampling strategy is to assign higher probability to the treatment yielding a higher average outcome in the pilot stage. This strategy aligns well with the goal of revenue maximization, a widely studied objective in e-commerce, online recommendation systems, and inventory control (Bakshy et al., 2018; Che et al., 2024). Similar selection algorithms are also applicable to the identification of optimal policies in political science (Offer-Westort et al., 2021). Moreover, such adaptive assignment rules can support objectives related to welfare and ethics (Burnett et al., 2020) by minimizing the allocation of inferior treatments. From another perspective, the treatment indicators 0 and 1 may represent subgroups within a population. Selecting the subgroup that appears more beneficial based on observed outcomes is a common practice in clinical trials. These so-called subgroup enrichment designs have been extensively studied in the literature (Magnusson et al., 2013; Tanniou et al., 2016; Lin et al., 2021).

^2 We will assume that there exist $1 > c_u > c_l > 0$ such that $e(s) \in (c_l, c_u)$ for any $s \in \{0,1\}$ to guarantee enough exploration for the two competing treatments in the pilot stage.

The choice of the interim statistic $S^{(1)}_N(0) - S^{(1)}_N(1)$ for estimating the difference in treatment means using pilot data is flexible. Throughout the paper, we consider the (scaled) IPW estimator for $S^{(1)}_N(s)$, defined as:

$$S^{(1)}_N(s) \equiv \frac{1}{N_1^{1/2}} \sum_{u=1}^{N_1} \frac{\mathbf{1}(A^{(1)}_{uN} = s)\, Y^{(1)}_{uN}}{e(s)} \quad \text{for } s \in \{0,1\}.$$
(1)

This estimator is a popular choice in the literature on batched bandit algorithms (Agarwal et al., 2014; Dimakopoulou et al., 2017) and multi-stage clinical trials (Shen et al., 2014; Bowden et al., 2017). We believe our general theoretical framework can be extended to accommodate more sophisticated estimators, such as the augmented IPW estimator (Dimakopoulou et al., 2021).^3 For simplicity and consistency between the estimators used for selection and inference, we adopt the IPW estimator.

2.2 Weighted IPW test statistic

We are interested in estimating the mean of the potential outcomes $E[Y_{uN}(s)]$ for $s \in \{0,1\}$, as well as the difference in means $E[Y_{uN}(0)] - E[Y_{uN}(1)]$. The WIPW estimator is a natural choice for estimating these quantities. As the name suggests, the WIPW estimator can be viewed as a weighted average of the IPW estimators from the two stages:

$$\mathrm{WIPW}(s) \equiv \sum_{t=1}^{2} \frac{N_t h^{(t)}_N(s)}{\sum_{t'=1}^{2} N_{t'} h^{(t')}_N(s)}\, \hat\Lambda^{(t)}_N(s), \quad \text{where } h^{(t)}_N(s) \text{ is a weight function}, \tag{2}$$

and $\hat\Lambda^{(t)}_N(s)$ is an IPW statistic using data from stage $t$, defined as

$$\hat\Lambda^{(t)}_N(s) \equiv \frac{1}{N_t} \sum_{u=1}^{N_t} \hat\Lambda^{(t)}_{uN}(s), \quad \text{where } \hat\Lambda^{(t)}_{uN}(s) \equiv \frac{\mathbf{1}(A^{(t)}_{uN} = s)\, Y^{(t)}_{uN}}{\bar e_N(s, \mathcal{H}_{t-1})} \text{ and } \bar e_N(s, \mathcal{H}_{t-1}) \equiv P[A^{(t)}_{uN} = s \mid \mathcal{H}_{t-1}].$$

The choice of the weights $h^{(t)}_N(s)$ plays a critical role in the performance of the WIPW estimator. In what follows, we describe how the weights can be selected in practice.

^3 When there is no covariate, the augmented IPW estimator is asymptotically
equivalent to the sample mean estimator.

A broad class of weighting choices. We consider the class of weights of the form $h^{(t)}_N(s) \equiv \bar e^m_N(s, \mathcal{H}_{t-1}) / N^{1/2}$, where different choices of $m$ allow for a variety of weighting strategies:

• $m = 0$: Constant weighting, $h^{(t)}_N(s) \equiv 1/N^{1/2}$;

• $m = 1/2$: Adaptive weighting, $h^{(t)}_N(s) \equiv \bar e^{1/2}_N(s, \mathcal{H}_{t-1}) / N^{1/2}$.

The constant weighting corresponds to the usual IPW estimator (Bowden et al., 2017). The adaptive weighting method is sometimes referred to as the variance-stabilizing weighting (Luedtke et al., 2016; Hadad et al., 2021; Bibaut et al., 2024). It is particularly useful for compensating for the high variability in $\hat\Lambda^{(2)}_N(s)$ arising from the potential downsampling in the follow-up stage caused by the selection algorithm $\mathcal{S}$. Other data-dependent weighting choices under the same class of statistics include $m = 1$. The resulting statistic can be shown to be (asymptotically) equivalent to the sample mean statistic after proper augmentation (Lemma 27). Our results in the main text can be applied to $m = 0$ and $m = 1/2$, and we extend the results to $m = 1$ in Appendix N.1.

Two scaling schemes. Motivated by the normalized statistic proposed in Hadad et al. (2021), we study the following two test statistics based on the WIPW estimator, depending on whether normalization is applied. Define the normalization $\hat S_N \equiv (N \hat V_N(0) + N \hat V_N(1))^{1/2}$, where the variance estimator $\hat V_N(s)$ is defined as

$$\hat V_N(s) \equiv \sum_{t=1}^{2} \left( \frac{N_t h^{(t)}_N(s)}{\sum_{t'=1}^{2} N_{t'} h^{(t')}_N(s)} \right)^2 \frac{1}{N_t^2} \sum_{u=1}^{N_t} \left( \hat\Lambda^{(t)}_{uN}(s) - \mathrm{WIPW}(s) \right)^2. \tag{3}$$

Then we can define the corresponding unnormalized and normalized test statistics as

$$T_N \equiv \mathrm{WIPW}(0) - \mathrm{WIPW}(1) \quad \text{and} \quad W_N \equiv T_N / \hat S_N.$$

It is commonly understood that tests based on normalized and unnormalized statistics are asymptotically equivalent, as the normalization factor converges to a constant. This holds true in many settings with i.i.d. data, as exemplified by the Wald and score tests. With adaptively collected data, however, normalization can impact the performance of the testing procedure.
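The estimator (2), the variance estimator (3), and the statistics $T_N$ and $W_N$ can be sketched as follows. The per-stage data layout (arrays plus one scalar propensity per stage and arm) is an illustrative simplification, not the paper's implementation.

```python
import numpy as np

def wipw_and_var(stages, s, m=0.5):
    """WIPW(s) as in (2) and the variance estimator V-hat_N(s) as in (3).

    `stages` is a list of (A, Y, e_bar) triples, one per stage, where
    e_bar[s] is the stage propensity P[A = s | H_{t-1}] (a scalar here).
    """
    N = sum(len(A) for A, _, _ in stages)
    w_raw, per_unit = [], []
    for A, Y, e_bar in stages:
        h = e_bar[s] ** m / np.sqrt(N)              # weight h^(t)_N(s)
        w_raw.append(len(A) * h)                    # N_t * h^(t)_N(s)
        per_unit.append((A == s) * Y / e_bar[s])    # Lambda-hat^(t)_{uN}(s)
    w = np.array(w_raw) / sum(w_raw)                # normalized stage weights
    est = sum(wi * t.mean() for wi, t in zip(w, per_unit))
    var = sum(wi ** 2 / len(t) ** 2 * ((t - est) ** 2).sum()
              for wi, t in zip(w, per_unit))
    return est, var

def tn_wn(stages, m=0.5):
    """Unnormalized T_N = WIPW(0) - WIPW(1) and normalized W_N = T_N / S-hat_N."""
    est0, v0 = wipw_and_var(stages, 0, m)
    est1, v1 = wipw_and_var(stages, 1, m)
    N = sum(len(A) for A, _, _ in stages)
    T = est0 - est1
    return T, T / np.sqrt(N * (v0 + v1))
```

Setting `m=0` recovers the constant (IPW) weighting and `m=0.5` the adaptive, variance-stabilizing weighting.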
We refer the readers to the simulation results in Section 4.1.

A peek at the sampling distribution. Understanding the asymptotic distributions of the statistics $\mathrm{WIPW}(s)$, $T_N$, and $W_N$ is important for downstream inferential tasks. To build intuition about the sampling distributions, we begin with a simulation study. Consider the scaled difference in means: $c_N \equiv \sqrt{N}\,(E[Y_{uN}(0)] - E[Y_{uN}(1)])$. Without loss of generality, we assume $c_N \leq 0$ in this paper. We plot the sampling distribution of the centered statistic $\sqrt{N} T_N - c_N$, where $c_N$ takes values in $\{0, -5, -10, -15\}$, and repeat the computation 5000 independent times. The detailed simulation setup is provided in Appendix B, and the results are shown in Figure 1.

[Figure 1: Sampling distribution of $\sqrt{N} T_N - c_N$ with adaptive weighting ($m = 1/2$), for $c_N \in \{0, -5, -10, -15\}$.]

In the figure, we observe the following pattern: when $c_N = 0$ (left-most panel), the sampling distribution of the estimator is highly skewed; as the magnitude of $c_N$ increases, the distribution becomes more symmetric and eventually approaches a near-normal distribution (right-most panel). The shape transition of the sampling distribution from $c_N = 0$ to $c_N = -15$ suggests that the limiting distribution of the test statistic depends on the signal strength $c_N$. The results presented in the next section provide an exact characterization of such dependence.

3 Theoretical results

The organization of this section is as follows. We
state our main weak limit results in Section 3.1, followed by several remarks in Section 3.2. Section 3.3 describes the phase transition of the limiting distribution of the test statistic across different signal strengths. We then apply the general results to two specific adaptive experimental designs in Section 3.4. In Section 3.5, we propose a bootstrap procedure for hypothesis testing based on the weak limits.

3.1 General theory: signal-dependent weak limits

We first explicitly define the selection algorithm $\mathcal{S}(S^{(1)}_N(0) - S^{(1)}_N(1))$. We allow $\mathcal{S}$ to vary with the sample size $N$, so we write $\mathcal{S}_N$ to emphasize this dependence. Suppose $\bar e(s, x) \in [0,1]$ is a sampling function for $s \in \{0,1\}$ such that $\bar e(0, x) + \bar e(1, x) = 1$ for any $x \in \mathbb{R}$. Then define the sampling function

$$\mathcal{S}_N(S^{(1)}_N(0) - S^{(1)}_N(1)) \equiv \min\{1 - l_N, \max\{l_N, \bar e(0, S^{(1)}_N(0) - S^{(1)}_N(1))\}\}, \tag{4}$$

where $l_N$ is a sequence with $l_N \in [0, 1/2)$. We note that the minimum and maximum functions are mainly used to ensure that both treatments are assigned with nonzero probability in the follow-up stage when $l_N > 0$. This is a reasonable assumption for many practical applications/algorithms, as it is often desirable to maintain a certain level of exploration in the follow-up stage: such exploration reduces the risk of assigning treatment to the inferior group and stabilizes the variance of the downstream test statistic. Such a clipping strategy has been widely adopted in the literature on adaptive experiments (Zhang et al., 2020; Hadad et al., 2021). The results in this section assume $l_N \in (0, 1/2)$, i.e., strictly positive, but we extend the results to $l_N = 0$ in Section N.1. We defer the discussion on the choice of $l_N$ to Remarks 2 and 3.

Define the extended real space as $\bar{\mathbb{R}} \equiv \mathbb{R} \cup \{-\infty\}$. We consider the following assumptions.

Assumption 1 (Moment conditions). For any $s \in \{0,1\}$,

$$0 < \inf_N \left( E[Y^2_{uN}(s)] - E[Y_{uN}(s)]^2 \right) \leq \sup_N E[Y^4_{uN}(s)]^{1/2} < \infty.$$

For $p \in \{1,2\}$, $\lim_{N \to \infty} E[Y^p_{uN}(s)]$ exists. Recalling $c_N \equiv \sqrt{N}\,(E[Y_{uN}(0)] - E[Y_{uN}(1)])$, we assume $\lim_{N \to \infty} c_N = c$ for $c \in [-\infty, 0]$.
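The clipping in (4) is straightforward to implement. The logistic sampling function below is only an illustrative Lipschitz choice of $\bar e(0, \cdot)$, not the paper's recommendation.

```python
import math

def clipped_sampling(delta, e_bar0, l_N):
    """S_N(delta) as in (4): clip e_bar(0, delta) into [l_N, 1 - l_N] so both
    treatments keep nonzero assignment probability in the follow-up stage."""
    return min(1.0 - l_N, max(l_N, e_bar0(delta)))

# Illustrative Lipschitz sampling function: favor arm 0 when its interim
# IPW estimate exceeds arm 1's, i.e. when delta = S_N^(1)(0) - S_N^(1)(1) > 0.
logistic = lambda x: 1.0 / (1.0 + math.exp(-x))

print(clipped_sampling(0.0, logistic, l_N=0.05))    # 0.5 (no clipping needed)
print(clipped_sampling(-8.0, logistic, l_N=0.05))   # 0.05 (clipped up from ~3e-4)
```

With `l_N = 0` the clipping disappears and the rule may assign one arm with probability zero, which is exactly the early-dropping regime discussed in Remark 3.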
Assumption 2 (Sampling designs). The sampling function $\bar e(s, x)$ satisfies $\bar e(0, x) + \bar e(1, x) = 1$ for any $x \in \bar{\mathbb{R}}$. Moreover, one of the following assumptions holds:

1. Lipschitz condition: For any $s \in \{0,1\}$, $\bar e(s, x)$ is a Lipschitz function over $x \in \bar{\mathbb{R}}$, with universal Lipschitz constant $L > 0$, and $\bar e(s, -\infty)$ takes values in $\{0, 1\}$;

2. Step-function condition: There exist some $m_1 \in \mathbb{R}$, $K \in \mathbb{N}$, and a continuous function $g: \bar{\mathbb{R}} \to \bar{\mathbb{R}}$ with $g(-\infty) = -\infty$, such that for any $s \in \{0,1\}$,

$$\bar e(s, x) = \sum_{k=1}^{K} c_k\, \mathbf{1}(g(x) \in C_k), \quad \text{where } c_k \in [0,1] \text{ and the } C_k \text{ are disjoint sets},$$

where $C_1 = [-\infty, m_1]$ and the $C_k$ are open sets for $k \geq 2$, such that $\cup_{k=2}^{K} C_k = (m_1, \infty)$.

Assumption 3 (Constant weighting). Suppose $m = 0$ is used and the clipping rate $l_N$ (introduced in (4)) satisfies $0 < c_l < \bar l = l_N < c_u < 1/2$ for any $N \in \mathbb{N}$.

Assumption 4 (Adaptive weighting). Suppose $m = 1/2$ is used and the clipping rate $l_N$ satisfies $l_N \in (0, 1/2)$ for any $N \in \mathbb{N}$. Moreover, $\lim_{N \to \infty} l_N = 0$ and $N l_N \to \infty$.

We postpone the discussion of the assumptions to Section 3.2. Before stating the theorem, we first describe the limiting distributions of the WIPW estimator and the test statistics $T_N$, $W_N$.

Form of limiting distributions. To ease the presentation, we use $(a)_{2 \times 2}$ to denote a symmetric $2 \times 2$ matrix with diagonal entries equal to 1 and off-diagonal entries equal to $a$. The limiting distributions of both $\mathrm{WIPW}(s)$ and the test statistics $T_N$ and $W_N$ can
be expressed as weighted sums of two dependent Gaussian vectors, $A^{(t)} \equiv (A^{(t)}(0), A^{(t)}(1))^\top$ for $t \in [2]$. Intuitively, $A^{(1)}$ corresponds to the randomness in the pilot stage and $A^{(2)}$ to that in the follow-up stage, with the latter dependent on $A^{(1)}$. Concretely, the distributions of the $A^{(t)}$ can be defined as

$$A^{(1)} \sim N(0, \Sigma^{(1)}) \quad \text{and} \quad A^{(2)} \mid A^{(1)} \sim N(0, \Sigma^{(2)}(A^{(1)})),$$

where $\Sigma^{(1)} \equiv (\mathrm{Cov}^{(1)})_{2 \times 2}$ and $\Sigma^{(2)}(A^{(1)}) \equiv (\mathrm{Cov}^{(2)}(A^{(1)}))_{2 \times 2}$. The covariance $\mathrm{Cov}^{(2)}(A^{(1)})$ depends on the realization of $A^{(1)}$; the explicit definitions can be found in Appendix A.

• Weak limits of $\mathrm{WIPW}(s)$. Suppose $W$ stands for the weighting scheme and takes values in $\{A, C\}$, standing for adaptive and constant weighting, respectively. We use $\bar{\mathcal{W}}_W(s)$ to denote the limiting distribution of $\mathrm{WIPW}(s)$ after proper centering and scaling, and $\stackrel{d}{=}$ to denote equality in distribution. Then $\bar{\mathcal{W}}_W(s)$ can be written as

$$\bar{\mathcal{W}}_W(s) \stackrel{d}{=} \sum_{t=1}^{2} A^{(t)}(s)\, \bar w^{(t)}_W(s) \quad \text{for any } s \in \{0,1\}, \tag{5}$$

where $\bar w^{(t)}_W(s)$ varies with the weighting scheme and $\bar w^{(2)}_W(s)$ may depend on $A^{(1)}$. The form of $\bar w^{(t)}_W(s)$ can be found in Appendix A.

• Weak limits of $T_N$ and $W_N$. Suppose $V$ stands for the scaling scheme and takes values in $\{N, U\}$, standing for the normalized ($W_N$) and unnormalized ($T_N$) statistics, respectively. We use $\mathcal{W}^W_V$ to denote the limiting distributions for $T_N$ and $W_N$ after proper centering and scaling. We emphasize the dependence of $\mathcal{W}^W_V$ on the limiting signal strength $c$ by writing $\mathcal{W}^W_V = \mathcal{W}^W_V(c)$. Then we can write

$$\mathcal{W}^W_V(c) \stackrel{d}{=} \sum_{t=1}^{2} A^{(t)}(0)\, w^{(t)}_{W,V}(0) - \sum_{t=1}^{2} A^{(t)}(1)\, w^{(t)}_{W,V}(1), \tag{6}$$

where the weights $w^{(t)}_{W,V}(s)$ can be written as

$$w^{(t)}_{W,U}(s) = \bar w^{(t)}_W(s) \quad \text{and} \quad w^{(t)}_{W,N}(s) = \frac{\bar w^{(t)}_W(s)}{\left( \sum_{s=0}^{1} \sum_{t=1}^{2} \big( \bar w^{(t)}_W(s) \big)^2 \right)^{1/2}}.$$

Note that both $\bar w^{(t)}_W(s)$ and $A^{(t)}(s)$ also depend on the signal strength $c$, but we omit this dependence for simplicity. We summarize the definitions of the different limiting distributions in Table 1.

Table 1: Summary of different limiting distributions. $W$ takes values in $\{A, C\}$.
| Test statistic | Estimand | Weak limit | Limiting weight |
| $\mathrm{WIPW}(s)$ | $E[Y_{uN}(s)]$ | $\bar{\mathcal{W}}_W$ | $\bar w^{(t)}_W(s)$ |
| $T_N$ | $E[Y_{uN}(0)] - E[Y_{uN}(1)]$ | $\mathcal{W}^W_U$ | $w^{(t)}_{W,U}(s)$ |
| $W_N$ | $E[Y_{uN}(0)] - E[Y_{uN}(1)]$ | $\mathcal{W}^W_N$ | $w^{(t)}_{W,N}(s)$ |

Now we are ready to state the main results.

Theorem 1 (Weak convergence). Suppose Assumptions 1-2 hold. Recall that $c = \lim_{N \to \infty} c_N$ with $c_N \equiv \sqrt{N}\,(E[Y_{uN}(0)] - E[Y_{uN}(1)])$. The following statements hold.

1. Suppose Assumption 3 holds. Then $\sqrt{N}\,(\mathrm{WIPW}(s) - E[Y_{uN}(s)])$ converges weakly to $\bar{\mathcal{W}}_C(s)$ for any $s \in \{0,1\}$. Moreover, $\sqrt{N} T_N - c_N$ and $W_N - c_N/\hat S_N$ converge weakly to $\mathcal{W}^C_U(c)$ and $\mathcal{W}^C_N(c)$, respectively.

2. Suppose Assumption 4 holds. Then $\sqrt{N}\,(\mathrm{WIPW}(s) - E[Y_{uN}(s)])$ converges weakly to $\bar{\mathcal{W}}_A(s)$ for any $s \in \{0,1\}$. Moreover, $\sqrt{N} T_N - c_N$ and $W_N - c_N/\hat S_N$ converge weakly to $\mathcal{W}^A_U(c)$ and $\mathcal{W}^A_N(c)$, respectively.

The proof of Theorem 1 can be found in Appendix J.

3.2 Remarks on the assumptions and technical challenges

We present several remarks on the assumptions and the technical challenges of proving Theorem 1.

Remark 2 (Comments on Assumptions 1-4). Assumption 1 is a mild regularity assumption. Assumption 2 imposes mild restrictions on the sampling function, providing substantial flexibility in the choice of the sampling function $\bar e(s, x)$ (and hence the selection algorithm $\mathcal{S}$) to encode different experimental designs. To demonstrate the generality of these assumptions, we present two classes of experiments in Section 3.4. Now we comment on Assumption 3 and Assumption 4. Constant weighting, informed by Assumption 3, requires the minimum sampling probability to be uniformly bounded away from 0. In the causal inference literature, Assumption
3 is known as the positivity assumption (Crump et al., 2009; Imbens et al., 2015). Adaptive weighting enables a less stringent requirement on the sampling probability in the second stage, encouraging further exploitation: Assumption 4 allows the minimum sampling probability in the second stage to go to zero at a rate slower than $1/N$. A similar assumption has also been adopted in Zhang et al. (2020) and Hadad et al. (2021).

Remark 3 (Early-dropping experiments). Relevant to Remark 2, one kind of adaptive sampling that Theorem 1 does not cover is the so-called early-dropping experiments (Sampson et al., 2005; Sill et al., 2009). In these experiments, the inferior treatment is dropped from the follow-up stage, so $\mathcal{S}_N(S^{(1)}_N(0) - S^{(1)}_N(1))$ can be 0 or 1. We will show in Appendix N.1 that when $m = 1$, we can get rid of the clipping in (4) by allowing $l_N = 0$.

Remark 4 (Technical challenges behind Theorem 1). Despite the clean results, proving Theorem 1 is technically challenging. Most existing weak convergence results for adaptive experiments rely on variants of the martingale central limit theorem (Zhang et al., 2020; Hadad et al., 2021). However, the limiting distributions derived in Theorem 1 are generally non-normal when $c \in (-\infty, 0]$, rendering those results inapplicable in our setting. Instead, we establish weak convergence from first principles using the test function approach (Dudley, 2002). While inspired by the framework of Che et al. (2023), our proofs are considerably more technical due to two key challenges. First, unlike the test statistic adopted in Che et al. (2023),^4 our test statistics $T_N$ and $W_N$ have more complex dependencies on the two-stage sampled data. In particular, we must establish joint convergence for a vector of statistics that includes possibly nonlinear functionals, such as $\mathcal{S}_N(S^{(1)}_N(0) - S^{(1)}_N(1))$.
This poses a challenge for establishing the conditional convergence of the vector of statistics contributed by the second stage, necessitating an extension of classical asymptotic tools. Notably, we introduce two new continuous mapping theorems to handle convergence under conditioning (Lemmas 17 and 18). Second, under adaptive weighting (Assumption 4), we allow the clipping rate $l_N$ to decay to zero relatively fast. As a result, higher-order moments of the test statistics (e.g., $T_N$ and $W_N$) may diverge, depending on the rate at which $l_N$ vanishes. This constrains the choice of test functions used for establishing the weak convergence, and we choose to work with bounded Lipschitz test functions.4 As a by-product of our analysis, we establish a new version of the conditional CLT with bounded Lipschitz test functions (Lemma 15). These new theoretical results apply to general settings involving dependent data and thus may be of independent interest.

Footnote 4: We refer the reader to Appendix E.4 for more details on the test statistic analyzed in Che et al. (2023).

3.3 Phase transition and implication on hypothesis testing

A key strength of our results in Theorem 1 is that the weak limits can be expressed explicitly as weighted sums of two dependent Gaussian variables $A^{(t)}$, as shown in (5) and (6). This allows us to understand the limiting distributions of different test statistics in a
more intuitive way. We discuss how the limiting distributions of $T_N$ and $W_N$ change as the signal strength $c$ changes. Specifically, the shapes of the limiting distributions are determined by the covariance $\mathrm{Cov}^{(2)}(A^{(1)})$ and the random weights $w^{(2)}_{W,V}(s)$, both of which are influenced by the signal strength $c$. Consider the following two regimes.

• Strong signal regime. When $c = -\infty$, i.e., the absolute difference between the two expected outcomes is much larger than $1/\sqrt{N}$, it can be shown that $\mathrm{Cov}^{(2)}(A^{(1)})$ and $w^{(2)}_{W,V}(s)$ are deterministic constants. This implies that the fluctuation of the first-stage estimator does not affect the final limit through the limiting covariance. Therefore, the final limit $W^{W}_{V}(-\infty)$ follows a Gaussian distribution.

• Zero and weak signal regimes. When $c \in (-\infty, 0]$, the limiting distribution may no longer be normal. This is because $A^{(1)}$ appears in the conditional distribution of $A^{(2)} \mid A^{(1)}$ through the limiting covariance $\mathrm{Cov}^{(2)}(A^{(1)})$. Similarly, $w^{(2)}_{W,V}(s)$ depends on the realization of $A^{(1)}$. When the signal is “weak”, the non-normal behavior is therefore the “price” one pays for choosing the adaptive sampling scheme.

Additional insights on the non-normal limiting behavior, from the double-dipping and data-generating-process perspectives, can be found in Appendix D. To build intuition, we simulate $W^{W}_{V}(c)$ with $V = U$ and $W = A$; this is the limiting distribution corresponding to the sampling distribution presented in Figure 1. We vary the limiting signal strength $c$ and show the simulated results in Figure 2, which align closely with those in Figure 1. As $c$ approaches $-\infty$, i.e., as the signal gets stronger, the limiting distribution of $W^{W}_{V}(c)$ approaches a normal distribution. This phase transition, as formalized by Theorem 2, is smooth with respect to the signal strength $c$ under the 1-Wasserstein distance $W_1$.

Theorem 2 (Smooth transition of limiting distributions). Suppose the assumptions of Theorem 1 hold.
Then, for any $V \in \{U, N\}$ and $W \in \{A, C\}$, we have $W_1(W^{W}_{V}(-\infty), W^{W}_{V}(c)) \to 0$ as $c \to -\infty$.

The proof of Theorem 2 can be found in Appendix L, where the definition of the Wasserstein distance can also be found. Gathering these insights, we now discuss the implications of our results for hypothesis testing.

[Figure 2: Distribution $W^{A}_{U}(c)$ as a function of the limiting signal strength $c$, for $c = 0, -5, -10, -15$.]

Implication on hypothesis testing. Another strength of our results lies in establishing weak convergence under minimal moment conditions, mild assumptions on the sampling functions, and broad signal strength regimes. These assumption-lean properties are not merely of theoretical interest: they carry practical significance for downstream hypothesis testing. To demonstrate the implications of our results for hypothesis testing, consider the null hypothesis
$H_{0N}: E[Y_{u_N}(0)] - E[Y_{u_N}(1)] = 0$
and two alternatives within the general hypothesis $H_{1N}: E[Y_{u_N}(0)] - E[Y_{u_N}(1)] \neq 0$:
$H_{2N}: E[Y_{u_N}(0)] - E[Y_{u_N}(1)] = b_2/\sqrt{N}$ and $H_{3N}: E[Y_{u_N}(0)] - E[Y_{u_N}(1)] = b_3/N^{\beta}$,
where $b_2, b_3 \in (-\infty, 0)$ and $\beta \in [0, 1/2)$. The contiguous alternative $H_{2N}$ corresponds to a weak signal regime, while $H_{3N}$ reflects a strong signal setting. Ideally, a test should control the Type-I error under $H_{0N}$ and achieve non-trivial power under $H_{2N}$, while attaining power approaching one under $H_{3N}$. It is also desirable for the test to remain assumption-lean with respect to the potential outcome distributions.
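The qualitative behavior in Figure 2 and Theorem 2 can be mimicked with a small Monte Carlo sketch. The exact limit laws (5)-(6) are not reproduced in this section, so the code below is a purely illustrative stand-in: a weighted sum of two Gaussians in which the second-stage scale depends on the first-stage draw through a clipped Thompson-type probability, in the spirit of (7). All function names, the variance-inflation form, and the parameter choices are ours, not the paper's.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def phi_cdf(x):
    # standard normal CDF, computed elementwise via the error function
    return 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2.0)) for v in x]))

def simulate_limit(c, n_draws=50_000, l=0.05):
    """Illustrative stand-in for a weighted sum of two dependent Gaussians.

    a1 plays the role of the first-stage fluctuation A^(1); the second-stage
    scale reacts to a1 + c through a clipped Thompson-type probability, so
    dependence on A^(1) enters the sum only through that random scale.
    """
    a1 = rng.standard_normal(n_draws)
    e = np.clip(phi_cdf(a1 + c), l, 1.0 - l)   # clipped allocation probability
    a2 = rng.standard_normal(n_draws)          # independent second-stage shock
    # IPW-style variance inflation: the scale grows as e approaches 0 or 1
    return a1 + np.sqrt(1.0 / e + 1.0 / (1.0 - e)) * a2

def w1(x, y):
    # empirical 1-Wasserstein distance between equal-size samples
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

ref = simulate_limit(-30.0)  # proxy for the Gaussian strong-signal limit
for c in (0.0, -5.0, -10.0, -15.0):
    print(f"c = {c:>5}: W1 to strong-signal limit = {w1(ref, simulate_limit(c)):.3f}")
```

As $c$ decreases, the allocation probability is pinned at the clipping boundary for essentially every first-stage draw, the random scale becomes deterministic, and the empirical $W_1$ distance to the Gaussian reference shrinks, mirroring the smooth transition described in Theorem 2.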
Our results accommodate this full range of signal strengths while maintaining minimal assumptions on the sampling functions and underlying distributions.

Comparison to existing literature. To highlight the significance of our results, we compare them to those in related work. Zhang et al. (2020) establish asymptotic normality via stage-wise normalization under general hypotheses. Their method, however, relies on restrictive outcome distribution assumptions and yields lower power than tests based on sample means with pooled two-stage data (Hirano et al., 2023). Hadad et al. (2021) analyze the WIPW estimator for $m = 1/2$, but only in strong signal regimes (e.g., $H_{3N}$), excluding $H_{0N}$ and local alternatives. Both Zhang et al. (2020) and Hadad et al. (2021) achieve asymptotic normality under strong signals, potentially at the cost of power. Hirano et al. (2023) and Adusumilli (2023) derive asymptotic representations for general test statistics under batched designs and contiguous alternatives $H_{2N}$, relying on classical limit-of-experiments theory by Le Cam et al. (1972). However, to apply Le Cam's theory, one must first establish the required weak convergence of certain statistics (see, for example, Theorem 2 in Hirano et al. (2023)). Their results also assume quadratic mean differentiability (QMD) (or smooth semiparametric models), which may be restrictive in practice. A complementary line of work uses diffusion approximations with increasing batch numbers (Fan et al., 2021; Kuang et al., 2024), which differs from our two-stage setup with growing per-stage samples. Further discussion appears in Appendix E.

3.4 Case study: application to different adaptive experiments

In this section, we demonstrate the applicability of Theorem 1 to a variety of adaptive experimental designs. Specifically, we focus on two widely used paradigms: batched bandit experiments and subgroup enrichment designs.
These designs have been studied in the recent statistics and machine learning literature (Russo, 2016; Lin et al., 2021; Che et al., 2024; Freidling et al., 2024).

Batched bandit experiments. Batched bandit experiments typically employ adaptive algorithms to balance exploration and exploitation. Two commonly studied strategies are Thompson sampling and the $\varepsilon$-greedy algorithm. Thompson sampling is a Bayesian approach that selects actions according to their posterior probabilities of being optimal. The $\varepsilon$-greedy algorithm chooses the empirically best arm with probability $1 - \varepsilon/2$ and explores the inferior arm with probability $\varepsilon/2$ when there are two arms. We show that Theorem 1 applies to two-batch bandit experiments employing these algorithms.

• Modified Thompson sampling: Assuming suitable prior distributions, the posterior for each expected outcome $E[Y_{u_N}(s)]$, conditional on the pilot data, is (approximately) $N(S^{(1)}_N(s), 1/(2N_1))$, where $N$ denotes the normal distribution. This yields the sampling function
$\bar{e}(s, x) = (1 - \Phi(x))\,1(s = 1) + \Phi(x)\,1(s = 0)$,
which is Lipschitz continuous over $x \in \bar{\mathbb{R}}$ and satisfies the Lipschitz condition of Assumption 2. This algorithm has been used in Hadad et al. (2021). Incorporating a clipping rate $l_N$ (see Eq. (4)), the follow-up sampling probability $P[A^{(2)}_{u_N} = 0 \mid H_1]$ becomes a modified Thompson sampling rule:
$\max\{l_N, \min\{1 - l_N, \Phi(S^{(1)}_N(0) - S^{(1)}_N(1))\}\}$. (7)

• $\varepsilon$-greedy algorithm: Consider the non-smooth sampling function
$\bar{e}(s, x) = 1(x < 0)\,1(s = 1) + 1(x \geq 0)\,1(s = 0)$,
which satisfies the step-function condition in Assumption 2. With clipping
rate $l_N$, the follow-up sampling probability $P[A^{(2)}_{u_N} = 0 \mid H_1]$ corresponds to an $\varepsilon$-greedy algorithm with $\varepsilon = 2 l_N$:
$(1 - l_N)\,1(S^{(1)}_N(0) \geq S^{(1)}_N(1)) + l_N\,1(S^{(1)}_N(0) < S^{(1)}_N(1))$. (8)

Subgroup enrichment experiments. The assignment variable $A$ may indicate subgroup membership rather than treatment assignment. In this context, $Y_{u_N}(0)$ and $Y_{u_N}(1)$ represent outcomes for two distinct subgroups. Adaptive enrichment designs aim to identify and focus on the subgroup that benefits more from the treatment, based on interim results from the pilot stage. Common strategies include enrichment based on the estimated effect size or on an interim p-value (Food et al., 2019; Ben-Eltriki et al., 2024).

• Enrichment based on effect size: The sampling function in this case is
$\bar{e}(s, x) = 1(x < \beta)\,1(s = 1) + 1(x \geq \beta)\,1(s = 0)$,
where $\beta$ is a pre-specified threshold for the effect size.

• Enrichment based on interim p-values: Let $\hat{\sigma}$ denote the estimated standard deviation of $S^{(1)}_N(0) - S^{(1)}_N(1)$ under the null hypothesis $H_{0N}$. Define the left-sided and right-sided p-values as $p_l = \Phi(x/\hat{\sigma})$ and $p_r = 1 - p_l$, respectively. Given a pre-specified significance level $\alpha$, the sampling function based on interim p-values becomes
$\bar{e}(s, x) = 1(p_l < \alpha)\,1(s = 0) + 1(p_r < \alpha)\,1(s = 1) + 1(p_l \in [\alpha, 1 - \alpha]) \cdot 0.5$,
which introduces randomization when the interim result is inconclusive.

The follow-up sampling probabilities $P[A^{(2)}_{u_N} = 0 \mid H_1]$ and $P[A^{(2)}_{u_N} = 1 \mid H_1]$ can be similarly obtained from these sampling functions.

Remark 5 (Generalization of Theorem 1 to accommodate a nuisance parameter). The second example of subgroup enrichment experiments falls outside the scope of Theorem 1 due to the presence of the nuisance parameter $\hat{\sigma}$ in the sampling mechanism. To address this, we extend our theoretical results to accommodate such nuisance parameters. This generalization is presented in Theorem 5 in Appendix N.3.
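All four sampling rules above map the pilot-stage statistic $x = S^{(1)}_N(0) - S^{(1)}_N(1)$ to a follow-up sampling probability for arm (or subgroup) 0. A minimal Python transcription of (7), (8), and the two enrichment rules follows; the function names are ours, and each function returns $P[A^{(2)}_{u_N} = 0 \mid H_1]$.

```python
import math

def normal_cdf(x):
    """Standard normal CDF, Phi."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clip(p, l):
    """Clipping as in (4): force the probability into [l, 1 - l]."""
    return max(l, min(1.0 - l, p))

def modified_thompson(diff, l):
    """Rule (7): clipped Phi of the pilot-stage difference."""
    return clip(normal_cdf(diff), l)

def eps_greedy(diff, l):
    """Rule (8): epsilon-greedy with epsilon = 2 l."""
    return 1.0 - l if diff >= 0.0 else l

def enrich_effect_size(diff, beta):
    """Enrichment by effect size: e(0, x) = 1(x >= beta)."""
    return 1.0 if diff >= beta else 0.0

def enrich_p_value(diff, sigma_hat, alpha):
    """Enrichment by interim p-values: e(0, x) = 1(p_l < alpha),
    with 50/50 randomization when p_l lies in [alpha, 1 - alpha]."""
    p_left = normal_cdf(diff / sigma_hat)
    if p_left < alpha:
        return 1.0
    if 1.0 - p_left < alpha:
        return 0.0
    return 0.5

print(modified_thompson(0.0, 0.05))   # inconclusive pilot -> 0.5
print(eps_greedy(1.3, 0.05))          # arm 0 looks better -> 0.95
```

The clipping helper makes the trade-off in Remark 2 concrete: under constant weighting the floor $l$ stays bounded away from 0, while under adaptive weighting $l = l_N$ may shrink toward 0, letting the design exploit more aggressively.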
3.5 Asymptotically valid tests with plug-in bootstrap

Theorem 1 characterizes the limiting behavior of the WIPW test statistics, forming the basis for constructing asymptotically valid tests. In this section, we focus on testing whether the difference in means $E[Y_{u_N}(0)] - E[Y_{u_N}(1)]$ equals 0 for demonstration. Similar results can be established for other hypotheses, for example, the single-outcome hypothesis $E[Y_{u_N}(s)] = 0$ for some $s \in \{0, 1\}$. Based on Theorem 1, we can define the following asymptotically valid tests: for any $W \in \{A, C\}$,
$\phi^{W}_{U} \equiv 1\{\sqrt{N}\,T_N \geq Q_{1-\alpha}(W^{W}_{U}(0))\}$ and $\phi^{W}_{N} \equiv 1\{W_N \geq Q_{1-\alpha}(W^{W}_{N}(0))\}$. (9)
The critical values $Q_{1-\alpha}(W^{W}_{V}(0))$ are the $(1-\alpha)$-th quantiles of the limiting distributions $W^{W}_{V}(0)$ defined in Theorem 1. However, the tests in (9) cannot be implemented directly for two reasons. First, the asymptotic distributions $W^{W}_{V}(0)$ are generally non-normal (see also the left-most subplot in Figure 2). Second, the limiting distributions involve unknown nuisance parameters, which have to be estimated from the observed data. In this section, we propose a bootstrap procedure to address these challenges and construct valid tests that can be implemented in practice.

A fast bootstrap procedure. Note that the limiting distribution in (6) is a weighted sum of two dependent Gaussian variables. Motivated by this observation, we propose a plug-in bootstrap procedure to obtain the quantile information $Q_{1-\alpha}(W^{W}_{V}(0))$. For the ease of presentation, we