We denote the inner product equipped on $\mathcal{H}_K$ by $\langle\cdot,\cdot\rangle_K$ and the endowed norm by $\|\cdot\|_K$. For any $v\in\mathcal{Z}$, writing $K_v:=K(v,\cdot)\in\mathcal{H}_K$, the reproducing property asserts that
$$\langle f,K_v\rangle_K=f(v),\quad\forall f\in\mathcal{H}_K.\tag{11}$$
Equipped with $\mathcal{H}_K$, we adopt KRR by regressing the response $Y=(Y_1,\ldots,Y_n)^\top$ onto the predicted inputs $\hat g(X_1),\ldots,\hat g(X_n)$ by solving the...
https://arxiv.org/abs/2505.20022v1
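As a concrete illustration of the KRR step just described, the sketch below solves the regularized least-squares problem in closed form via the representer theorem. The RBF kernel, its bandwidth, and the ridge level are illustrative assumptions rather than the paper's choices, and the true inputs stand in here for the predicted inputs $\hat g(X_i)$.

```python
import numpy as np

def krr_fit_predict(Z_train, Y_train, Z_test, lam=1e-4, gamma=5.0):
    """Kernel ridge regression: by the representer theorem, the minimizer of
    (1/n) sum_i (Y_i - f(Z_i))^2 + lam * ||f||_K^2 is f = sum_i alpha_i K(Z_i, .)
    with alpha = (K + n * lam * I)^{-1} Y."""
    def rbf(A, B):
        # pairwise squared distances -> Gaussian (RBF) kernel matrix
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    n = len(Y_train)
    alpha = np.linalg.solve(rbf(Z_train, Z_train) + n * lam * np.eye(n), Y_train)
    return rbf(Z_test, Z_train) @ alpha

rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, size=(200, 1))
Y = np.sin(3 * Z[:, 0]) + 0.1 * rng.standard_normal(200)
pred = krr_fit_predict(Z, Y, Z)
print(np.mean((pred - np.sin(3 * Z[:, 0])) ** 2))  # small in-sample error
```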
choice of $f$, we make the following blanket assumption. Recall that $\|\cdot\|_\rho$ is the induced norm of $L^2(\rho)$.

Assumption 1. There exists some $f_{\mathcal H}\in\mathcal H_K$ such that $\|f_{\mathcal H}-f^*\|_\rho^2=\inf_{f\in\mathcal H_K}\|f-f^*\|_\rho^2$.

We do not assume $f^*\in\mathcal H_K$ but rather the weaker Assumption 1. If the former holds, we immediately have $f_{\mathcal H}=f^*$. Since $\mathcal H_K$ is convex, existen...
eigenvalues $\{\mu_j\}_{j=1}^\infty$ arranged in non-increasing order and corresponding eigenfunctions $\{\phi_j\}_{j=1}^\infty$ that form an orthonormal basis of $L^2(\rho)$, such that $L_K=\sum_{j=1}^\infty\mu_j\langle\phi_j,\cdot\rangle_\rho\,\phi_j$. Assumption 3 is common in the literature on kernel methods. It is guaranteed, for instance, by the celebrated Mercer's theorem (Mercer, 1909) when the ...
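The spectral decomposition above has a simple empirical analogue: the eigenvalues of the normalized kernel matrix $n^{-1}K$ approximate the leading eigenvalues $\mu_j$ of the integral operator. A minimal numerical sketch (the kernel choice and sample size are illustrative assumptions):

```python
import numpy as np

# Empirical analogue of L_K = sum_j mu_j <phi_j, .>_rho phi_j: the eigenvalues
# of the normalized kernel matrix n^{-1} K approximate the leading mu_j.
rng = np.random.default_rng(1)
n = 500
Z = rng.uniform(0, 1, size=(n, 1))
d2 = (Z[:, None, 0] - Z[None, :, 0]) ** 2
K = np.exp(-d2)                                # Gaussian kernel, K(z, z) = 1
mu_hat = np.linalg.eigvalsh(K / n)[::-1]       # non-increasing order
print(mu_hat[:5])                              # fast decay of the spectrum
```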
$\kappa$, $\|f^*\|_\infty$, $\|f_{\mathcal H}\|_K$ and $\gamma_\epsilon^2$. The explicit dependence of $C$ and $C'$ on $\kappa$, $\|f^*\|_\infty$, $\|f_{\mathcal H}\|_K$ and $\gamma_\epsilon^2$ can be found in the proof. Since the proof of Theorem 2 is rather long, we defer its full statement to Appendix B.2. In Section 3.4 we offer a proof sketch and highlight the main technical difficulties. As shown in Theorem 2, the excess risk i...
$\|K_z-K_{z'}\|_K\le C_K\|z-z'\|_2,\ \forall z,z'\in\mathcal Z$. Assumption 5 can be verified for various kernels. One sufficient condition is the following uniform boundedness condition on the mixed partial derivatives of the kernel function (Blanchard et al., 2011): there exists some constant $C>0$ such that
$$\sup_{z,z'\in\mathcal Z}\Big\|\frac{\partial^2 K(z,z')}{\partial z\,\partial z'}\Big\|_{\mathrm{op}}\le C.\tag{27}$$
Cond...
et al. (2017, Theorem 1) shows that in the fixed design setting (the $Z_i$'s are treated as deterministic), an empirical version of $\delta_n$ serves as a fundamental lower bound on the in-sample prediction risk of any estimator:
$$\inf_{f}\sup_{\|f^*\|_K\le 1}\mathbb E_n[f(Z)-f^*(Z)]^2\gtrsim\hat\delta_n,$$
where the infimum is taken over all estimators based on $\{Y_i,Z_i\}_{i=1}^n$...
Consider the polynomial decay kernels. Grant model (4) with $f^*\in\mathcal H_K$ and Assumptions 1–5. For any $\eta\in(0,1)$, the following holds with probability at least $1-\eta$,
$$\mathcal E(\hat f\circ\hat g)\lesssim_\eta n^{-\frac{2\alpha}{2\alpha+1}}+\mathbb E\|\hat g(X)-Z\|_2^2.\tag{34}$$
The first term in the risk bound (34) corresponds to the minimax optimal prediction risk established in Caponnetto and De Vi...
we provide a proof sketch of Theorem 2 and highlight its main challenges. The proof consists of three main components, which are discussed separately in the following subsections.

3.4.1 Bounding the empirical process with a reduction argument

Recall that the excess loss function of $f\circ\hat g$ relative to $f_{\mathcal H}\circ\hat g$ is: $\ell_{f\circ\hat g}(y,x)$ ...
denote the eigenvalues of the kernel matrix $K_x$ with entries $n^{-1}K(\hat g(X_i),\hat g(X_j))$ for $i,j\in[n]$. Since $\hat R_x(\delta)$ depends only on the predicted inputs, its introduction is pivotal not only for uniformly bounding the cross-term but also, as detailed in the next section, for determining the rate of $\delta_x$, the fixed point of the loc...
relationship between $R_x(\delta)$ and $R(\delta)$, however, is generally intractable without imposing strong assumptions on the relationship between $\rho_x$ and $\rho$. To deal with this mismatch issue, we have to work with the empirical counterparts of the aforementioned quantities. Start with the empirical counterpart of $\psi_x(\delta)$, defined as: fo...
construct such a predictor from auxiliary data $X'\in\mathbb R^{n'\times p}$ (for simplicity, we assume $n'=n$) that is independent of the training data $\mathcal D$. Under the factor model (3), the $n\times p$ matrix $X'=(X'_1,\ldots,X'_n)^\top$ satisfies
$$X'=Z'A^\top+W',\tag{48}$$
with $Z'=(Z'_1,\ldots,Z'_n)^\top$ and $W'=(W'_1,\ldots,W'_n)^\top$. Prediction of $Z'$ and estimation of ...
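A hedged sketch of the factor model (48) and of factor prediction by PCA: the top-$r$ right singular vectors of $X'$ estimate the span of the loadings up to rotation. The dimensions, loading strengths, and SVD-based predictor below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

# Simulate X' = Z' A^T + W' (eq. (48)) and predict the factors by PCA:
# the top-r right singular vectors of X' estimate span(A) up to rotation.
rng = np.random.default_rng(2)
n, p, r = 500, 50, 3
Z = rng.standard_normal((n, r))                    # latent factors
A = rng.standard_normal((p, r)) * np.sqrt(10.0)    # strong loadings (assumed)
W = rng.standard_normal((n, p))                    # idiosyncratic noise
X = Z @ A.T + W

_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z_hat = X @ Vt[:r].T / np.sqrt(p)   # predicted factors, up to rotation/scale
print(Z_hat.shape)
```

With a strong signal-to-noise ratio as above, a linear rotation of the predicted factors recovers the latent factors almost exactly.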
Stock and Watson, 2002). The requirement that $\lambda_1,\ldots,\lambda_r$ be distinct in (6) is used to ensure that the factor $Z$ can be consistently predicted. We remark that this requirement can be removed once kernels that are invariant to orthogonal transformations are used (see also Section 3.2.1). Under Assumptions 6 and 7, t...
expectation. The following theorem states the minimax lower bounds on the excess risk under the above specification. Its proof can be found in Appendix B.9.

Theorem 3. Under models (3) and (4), consider any kernel function $K\in\mathcal K$. Assume $n$ is sufficiently large such that $d(\delta_n)\ge 128\log 2$. There exists some constant $c>0$ dep...
observable, Assumption 9 reduces to
$$\mathbb E[L(Y,f(Z))-L(Y,f^*(Z))]\asymp\|f-f^*\|_\rho^2,\quad\forall f\in\mathcal F_b,$$
a condition, or its variants, frequently adopted in the existing literature for analyzing general loss functions (Farrell et al., 2021; Li et al., 2019; Steinwart and Christmann, 2008; Wei et al., 2017). It can be seen from (63) that Assum...
Confidence intervals for diffusion index forecasts and inference for factor-augmented regressions. Econometrica, 74(4):1133–1150, 2006. Jushan Bai and Serena Ng. Simpler proofs for approximate factor models of large dimensions. arXiv preprint arXiv:2008.00254, 2020. D. Bank, N. Koenigstein, and R. Giryes. Machine Learn...
Processing Systems (NeurIPS), 2020. Andreas Buja and Nermin Eyuboglu. Remarks on parallel analysis. Multivariate Behavioral Research, 27(4):509–540, 1992. Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7:331–368, 2007. G...
estimators under general conditions. Annals of Statistics, pages 755–770, 1998. Vladimir Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902–1914, 2001. Vladimir Koltchinskii and Dmitriy Panchenko. Rademacher processes and bounding the risk of func...
Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(140):1–67, 2020. Alessandro Rudi and Lorenzo Rosasco. Generalization properties of learning with random features. Advances in Neural Info...
true regression function $f^*$ is set as
$$f^*(Z)=2\sin(3\pi Z_1)+3|Z_1-0.5|-\exp(Z_2^2-Z_3^2),$$
the regression error $\epsilon\sim N(0,0.8^2)$, the loading matrix $A$ is generated with rows independently sampled from $N(0,\mathrm{diag}(10,5.5,1))$, and $W$ is generated with entries independently sampled from $N(0,1.5^2)$. In each setting, we generate the trai...
[Table 2 residue: prediction errors with standard errors in parentheses across sample sizes; e.g. the row for LR-$Z$ reads 3.61, 3.58, 3.62, 3.60, 3.61, 3.60 with standard errors (0.29), (0.16), (0.13), (0.10), (0.11), (0.08).] As shown in Table 2, the proposed KRR-$\hat Z'$ outperforms all other predictors, except for the oracle approach, KRR-$Z$, and its variant KRR-$\hat Z$ that does not use the auxiliary data. KRR-$\hat Z'$ ha...
As expected, increasing the SNR leads to better prediction performance for KRR-$\hat Z'$, KRR-$\hat Z$ and KRR-$X$, while the prediction errors of the other predictors remain unchanged. However, KRR-$X$ continues to perform poorly even in the settings with large values of the SNR, due to underfitting. On the contrary, when the SNR is sufficient...
adding and subtracting terms, we have
$$\mathcal E(\hat f\circ\hat g)=\mathbb E[Y-(\hat f\circ\hat g)(X)]^2-\mathbb E[Y-(f\circ\hat g)(X)]^2+\mathbb E[Y-(f\circ\hat g)(X)]^2-\sigma^2.$$
Note that
$$\begin{aligned}
\mathbb E[Y-(f\circ\hat g)(X)]^2-\sigma^2&=\mathbb E\big[(Y-f^*(Z)+f^*(Z)-(f\circ\hat g)(X))^2\big]-\sigma^2\\
&=\mathbb E[f^*(Z)-(f\circ\hat g)(X)]^2+2\,\mathbb E[\epsilon(f^*(Z)-(f\circ\hat g)(X))]\\
&=\mathbb E[f^*(Z)-(f\circ\hat g)(X)]^2,
\end{aligned}\tag{66}$$
where the penultimate step uses $Y-f^*(Z)=\epsilon$ and the fact that $\epsilon$ is independent of $Z$, $X$ and $\hat g$.

B.2 Pro...
$+\lambda\|\hat f\|_K^2-\lambda\|f_{\mathcal H}\|_K^2\}$. Since $\hat\alpha>0$, if we could show that
$$\mathbb E_n[\ell_{f_{\hat\alpha}}(Y,X)]+\lambda\|f_{\hat\alpha}\|_K^2-\lambda\|f_{\mathcal H}\|_K^2>0,\tag{77}$$
then (77) also holds for $\hat f$ in lieu of $f_{\hat\alpha}$, which contradicts (71). To prove (77), recalling that $f_{\hat\alpha}\in\mathcal F_b$, invoking the uniform bound in (75) with $f=f_{\hat\alpha}$ gives
$$-2\mathbb E_n[\ell_{f_{\hat\alpha}}(Y,X)]+2\lambda\|f_{\mathcal H}\|_K^2-2\lambda\|f_{\hat\alpha}\|_K^2\le-\mathbb E[\ell_{f_{\hat\alpha}}(Y,X)]+4\lambda\|f\ldots$$
statistical learning problems. Precisely, when deriving a bound on the error of a learning algorithm, the key step is to select an appropriate sub-root function, and its corresponding fixed point will appear in the prediction error bound. In Appendix C.1 we review the definition and some useful properties of sub-root...
any $f\in\mathcal F_b$,
$$\begin{aligned}
\frac V2\mathbb E[\xi_f^2(X)]&=\frac V2\mathbb E[(f\circ\hat g)(X)-(f_{\mathcal H}\circ\hat g)(X)]^2\\
&\le V\mathbb E[(f\circ\hat g)(X)-f^*(Z)]^2+V\mathbb E[f^*(Z)-(f_{\mathcal H}\circ\hat g)(X)]^2\\
&=V\mathbb E[h_f(Z,X)]+2V\,\mathbb E[f^*(Z)-(f_{\mathcal H}\circ\hat g)(X)]^2\\
&\le V\mathbb E[h_f(Z,X)]+4\|f_{\mathcal H}\|_K^2\,\mathbb E\Delta_{\hat g}+4\|f_{\mathcal H}-f^*\|_\rho^2\quad\text{by (19) with }\theta=1\\
&=T(h_f).
\end{aligned}\tag{87}$$
It follows that for any $\delta\ge 0$,
$$\mathcal R_n\big\{h_f\in\mathcal F_h:T(h_f)\le\delta\big\}\le\mathcal R_n\big\{h_f\in\mathcal F_h:V\mathbb E[\xi_f^2(X)]\le 2\delta\big\}=\mathbb E\Big[\sup_{f\in\mathcal F_b,\;V\mathbb E[\xi_f^2(X)]\le 2\delta}\ldots$$
(102). To proceed, by applying Lemma 13, for any $\eta\in(0,1)$, it holds with probability at least $1-\eta$ that
$$\hat R(\delta)\le C\Big(R(\delta)+\kappa\sqrt{\frac{\log(2/\eta)}{n}}\Big),\tag{103}$$
provided that
$$n\delta\ge 32\kappa^2\log\frac{256(\kappa^2\vee 1)}{\eta}\tag{104}$$
and $R(\delta)\le\delta$. (105) Collecting the bounds (98), (100), (102) and (103) yields
$$\psi_x(\delta)\le C(\|f_{\mathcal H}\|_K\vee 1)\Big(R(\delta)+\sqrt{\frac{\mathbb E\Delta_{\hat g}}{n}}+\kappa\sqrt{\frac{\log(2/\eta)}{n}}\Big)\tag{106}$$
pro...
..., so that $\|f-f_{\mathcal H}\|_K^2=\sum_{j=1}^\infty\beta_j^2$. Additionally, from (111), we have
$$\mathbb E_n[\xi_f^2(X)]=\mathbb E_n[(f\circ\hat g)(X)-(f_{\mathcal H}\circ\hat g)(X)]^2=\langle f-f_{\mathcal H},\hat T_{x,K}(f-f_{\mathcal H})\rangle_K=\sum_{j=1}^\infty\hat\mu_{x,j}\beta_j^2.$$
This implies that $f\in\mathcal F_b$ satisfies $\mathbb E_n[\xi_f^2(X)]\le q$ if and only if the vector $\beta=(\beta_1,\beta_2,\ldots)^\top\in\mathbb R^\infty$ satisfies
$$\sum_{j=1}^\infty\hat\mu_{x,j}\beta_j^2\le q,\qquad\sum_{j=1}^\infty\beta_j^2\le 9\|f_{\mathcal H}\|_K^2.$$
Additionally, any vector $\beta$ sati...
for all $i\in[n]$. Then, by repeating similar arguments to those proving (94), we can apply the Ledoux–Talagrand contraction inequality in Lemma 23 to obtain
$$\mathcal R_n\big\{\xi_f^2\in\Xi_b^2:T(\xi_f^2)\le\delta\big\}\le 8\kappa\|f_{\mathcal H}\|_K\,\mathcal R_n\big\{\xi_f\in\Xi_b:T(\xi_f^2)\le\delta\big\}=8\kappa\|f_{\mathcal H}\|_K\,\psi_x\Big(\frac{\delta}{V}\Big)$$
so that for every $\delta\ge\delta^\star$,
$$\psi(\delta)\ge V\,\mathcal R_n\big\{\xi_f^2\in\Xi_b^2:T(\xi_f^2)\le\delta\big\}.$$
Step 4: Bounding $\delta^\star$ by $\delta_x$. Using Lemma ...
$$\begin{aligned}
&\le C\log(1/\eta)\Big(R(q_\lambda)+\sqrt{\frac{\mathbb E\Delta_{\hat g}}{n}}+\kappa\sqrt{\frac{\log(2/\eta)}{n}}\Big)&&\text{by Lemma 13}\\
&\le C'q_\lambda+C'C_0(\|f_{\mathcal H}\|_K\vee 1)\gamma_\epsilon\Big(\sqrt{\frac{\mathbb E\Delta_{\hat g}\log(1/\eta)}{n}}+\frac{\kappa\log(2/\eta)}{n}\Big)&&\text{by (123)}\\
&\le C'q_\lambda+C'C_0(\|f_{\mathcal H}\|_K\vee 1)\gamma_\epsilon\Big(\frac12\mathbb E\Delta_{\hat g}+\frac{3(\kappa\vee 1)\log(2/\eta)}{2n}\Big)\\
&\le C''q_\lambda&&\text{by (122)}.
\end{aligned}$$
Finally, since $q_\lambda\lesssim\lambda\|f_{\mathcal H}\|_K^2$ by (72), choosing $C_0$ in (67) sufficiently large and using (72) completes the proof. The following lemma is used in ...
$$A(\delta)\le 2\hat A(2\delta)+2G_t.\tag{125}$$
Finally, using Lemma 20 ensures that $\hat A(\delta)$ is sub-root in $\delta$. By the definition of a sub-root function, $\hat A(\delta)/\sqrt\delta$ is non-increasing in $\delta$, so that
$$\frac{\hat A(2\delta)}{\sqrt{2\delta}}\le\frac{\hat A(\delta)}{\sqrt\delta},$$
which, combined with (125), completes the proof.

B.3.2 Relating the empirical local Rademacher complexity in (41) to the empirical kernel ...
on them.

Proof. In view of (133), we turn to bounding the difference between $\hat D_x(\delta)$ and $\hat D(\delta)$. To this end, define the operators
$$\Sigma_x:\ell^2(\mathbb N)\to\ell^2(\mathbb N),\quad\Sigma_x=\frac1n\sum_{i=1}^n\Phi(\hat Z_i)\Phi(\hat Z_i)^\top;\qquad\Sigma:\ell^2(\mathbb N)\to\ell^2(\mathbb N),\quad\Sigma=\frac1n\sum_{i=1}^n\Phi(Z_i)\Phi(Z_i)^\top.\tag{135}$$
For short, for any $\delta>0$, we write
$$A(\delta)=(\Sigma+\delta I)^{-\frac12}(\Sigma-\Sigma_x)(\Sigma+\delta I)^{-\frac12};\qquad B(\delta)=(\Sigma+\delta I)^{-1}(\Sigma-\Sigma\ldots$$
$$\le\sqrt{\frac{\bar\Delta_{\hat g}}{\delta}}\,\|u\|_2^2,\quad\text{implying}\quad\|T_1(\delta)\|_{\mathrm{op}}\le\sqrt{\bar\Delta_{\hat g}/\delta}.\tag{143}$$
Repeating similar arguments gives
$$\|T_1'(\delta)\|_{\mathrm{op}}\le\sqrt{\bar\Delta_{\hat g}/\delta}.\tag{144}$$
For $\mathrm{II}_a$, by using the Cauchy-Schwarz inequality and (141), we have
$$\mathrm{II}_a\le\frac1n\sum_{i=1}^n\Big(\big(\Phi(\hat Z_i)-\Phi(Z_i)\big)^\top\big(\Sigma+\delta I\big)^{-\frac12}u\Big)^2\le\frac1{n\delta}\sum_{i=1}^n\Delta_{i,\hat g}\|u\|_2^2=\frac1\delta\bar\Delta_{\hat g}\|u\|_2^2,$$
implying
$$\|T_2(\delta)\|_{\mathrm{op}}\le\bar\Delta_{\hat g}/\delta.\tag{145}$$
Combining (142), (143), (...
$$\le\frac1n\Big(\sum_{i=1}^n\|T_2(\delta)\|_{\mathrm{op}}^2\big\|(\Sigma+\delta I)^{-\frac12}\big(\Phi(\hat Z_i)-\Phi(Z_i)\big)\big\|_2^2\Big)^{\frac12}\Big(\sum_{i=1}^n\big\|(\Sigma+\delta I)^{-\frac12}\Phi(Z_i)\big\|_2^2\Big)^{\frac12}\le\Big(\frac{\bar\Delta_{\hat g}}{\delta}\Big)^{\frac32}\hat D(\delta).$$
Repeating similar arguments gives
$$\mathrm{Tr}\big(T_1'(\delta)T_2(\delta)\big)\le\Big(\frac{\bar\Delta_{\hat g}}{\delta}\Big)^{\frac32}\hat D(\delta).$$
Summing the bounds for each term on the right-hand side of (148) yields
$$\|A(\delta)\|_{\mathrm{HS}}^2\le\frac{4\bar\Delta_{\hat g}\hat D(\delta)}{\delta}+4\Big(\frac{\bar\Delta_{\hat g}}{\delta}\Big)^{\frac32}\hat D(\delta)+\Big(\frac{\bar\Delta_{\hat g}}{\delta}\Big)^2.$$
Finally, f...
$0$ for all $j>n$, for any $\delta>0$, we have
$$\hat D(\delta)=\sum_{j=1}^n\frac{\hat\mu_j}{\hat\mu_j+\delta}=\sum_{j=1}^\infty\frac{\hat\mu_j}{\hat\mu_j+\delta}=\mathrm{Tr}\big((\hat T_K+\delta I)^{-1}\hat T_K\big).$$
In the proofs of this section, define for any $\delta>0$,
$$D(\delta):=\mathrm{Tr}\big((T_K+\delta I)^{-1}T_K\big)=\sum_{j=1}^\infty\frac{\mu_j}{\mu_j+\delta}.$$
Then by (132), we have
$$\frac12 D(\delta)\le\frac{nR^2(\delta)}{\delta}\le D(\delta).\tag{150}$$
Lemma 13. Grant Assumptions 2 and 3. Fix any $\eta\in(0,1)$. For any $\delta>0$ such that ...
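The effective dimension $D(\delta)=\sum_j\mu_j/(\mu_j+\delta)$ defined above is easy to evaluate numerically. The sketch below uses polynomially decaying eigenvalues $\mu_j=j^{-2\alpha}$ with $\alpha=1$ as an illustrative assumption, for which $D(\delta)$ grows like $\delta^{-1/(2\alpha)}$ as $\delta\downarrow 0$:

```python
import numpy as np

def effective_dimension(mu, delta):
    # D(delta) = sum_j mu_j / (mu_j + delta)
    return float(np.sum(mu / (mu + delta)))

mu = np.arange(1, 10**6 + 1, dtype=float) ** (-2.0)   # mu_j = j^{-2}, alpha = 1
for delta in (1e-1, 1e-2, 1e-3):
    print(delta, effective_dimension(mu, delta))      # grows like delta^{-1/2}
```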
operators on $\mathcal H_K$ with $H=2\kappa^2\delta^{-1}$ and $S=\kappa^2\delta^{-1}D(\delta)$, for any $\eta\in(0,1)$, the following holds with probability at least $1-\eta$,
$$\|A(\delta)\|_{\mathrm{HS}}\overset{(159)}{=}\Big\|\frac1n\sum_{i=1}^n\big(\mathbb E[\zeta(Z_i)]-\zeta(Z_i)\big)\Big\|_{\mathrm{HS}}\le\frac{4\kappa^2}{n\delta}\log\frac2\eta+\sqrt{\frac{2\kappa^2D(\delta)}{n\delta}\log\frac2\eta}.\tag{161}$$
Bounding $\delta|\mathrm{Tr}(B(\delta))|$. For any $z\in\mathcal Z$, define the random variable $\xi(z)$ as
$$\xi(z):=\delta\,\mathrm{Tr}\big((T_K+\delta I)^{-1}K_zK_z^\top(T_K+\delta I)^{-1}\big),$$
and it is cle...
$$\Big\|\frac1n\sum_{i=1}^n\big(\mathbb E[\zeta(Z_i)]-\zeta(Z_i)\big)\Big\|_{\mathrm{op}}\le\frac{4\kappa^2}{3n\delta}\log\frac{2\,\mathrm{Tr}(S)}{\eta\,\|S\|_{\mathrm{op}}}+\sqrt{\frac{2\kappa^2}{n\delta}\log\frac{2\,\mathrm{Tr}(S)}{\eta\,\|S\|_{\mathrm{op}}}}.\tag{168}$$
By using (132), we find that
$$\big\|(T_K+\delta I)^{-\frac12}T_K(T_K+\delta I)^{-\frac12}\big\|_{\mathrm{op}}=\frac{\mu_1}{\mu_1+\delta}\ge\frac{\mu_1\wedge\delta}{2\delta}$$
so that
$$\frac{\mathrm{Tr}(S)}{\|S\|_{\mathrm{op}}}=\frac{D(\delta)}{\big\|(T_K+\delta I)^{-\frac12}T_K(T_K+\delta I)^{-\frac12}\big\|_{\mathrm{op}}}\le\frac{2\delta D(\delta)}{\mu_1\wedge\delta}.$$
Combining with (168) completes the proof.

Remark 6. It is worth mentioning that bounding the LHS ...
$$\frac1n\cdot\frac{1}{2\alpha-1}\big(d(\delta)+1\big)^{1-2\alpha}\le\frac{C_\alpha}{n}\,\delta^{\frac{2\alpha-1}{2\alpha}},$$
where $C_\alpha$ is some constant depending only on $\alpha$. Then solving $\sqrt{C_\alpha/n}\;\delta^{\frac{2\alpha-1}{4\alpha}}=\delta$ and applying Lemma 18 yield $\delta_n\le C'_\alpha n^{-\frac{2\alpha}{2\alpha+1}}$. Invoking Theorem 2 gives the bound (in order)
$$n^{-\frac{2\alpha}{2\alpha+1}}\log(1/\eta)+\mathbb E\|\hat g(X)-Z\|_2^2+\frac{\log(1/\eta)}{n}.$$
This completes the proof.

B.8 Proof of Corollary 4
Proof. Fix any ...
define the function $f_\beta$ as $f_\beta(z)=z^\top\beta$. By the universality of the kernel $K$, for any $\beta\in\mathbb S^{r-1}$, there must exist a function $f_{\vartheta,\beta}\in\mathcal H_K$ such that $\|f_{\vartheta,\beta}-f_\beta\|_\rho\le\|f_{\vartheta,\beta}-f_\beta\|_\infty\le\vartheta$. For any $h:\mathbb R^p\to\mathbb R$, observe that
$$\begin{aligned}
\sup_{\theta\in\Theta}\mathbb E_\theta[f^*(Z)-h(X)]^2&\ge\sup_{\beta\in\mathbb S^{r-1}}\mathbb E_\theta[f_{\vartheta,\beta}(Z)-h(X)]^2\\
&\ge\sup_{\beta\in\mathbb S^{r-1}}\frac12\mathbb E_\theta[f_\beta(Z)-h(X)]^2-\mathbb E_\theta[f_\beta(Z)-f_{\vartheta,\beta}(Z)]^2\\
&\ge\sup_{\beta\in\mathbb S^{r-1}}\frac12\mathbb E_\theta[f_\beta(Z)-h(X\ldots
\end{aligned}$$
17. Grant Assumptions 1, 2, 8 and 9. Fix any $\eta\in(0,1)$. With probability at least $1-\eta$, for any $f\in\mathcal F_b$, one has
$$\mathbb E[\ell_f(Y,X)]\le 2\mathbb E_n[\ell_f(Y,X)]+c_1C_L\max\{C_\ell^2C_L^{-2},1\}\,\delta_x+c_2\big(C_\ell^2C_L^{-1}+C_\ell\kappa\|f_{\mathcal H}\|_K\big)\frac{\log(1/\eta)}{n}+4C_U\|f_{\mathcal H}\|_K^2\,\mathbb E\Delta_{\hat g}+4C_U\|f_{\mathcal H}-f^*\|_\rho^2.$$
Here $c_1$ and $c_2$ are some absolute positive constants.

Proof. Define the scalar $V:=2C_\ell^2C_L^{-1}$ and t...
A function $\psi:[0,\infty)\to[0,\infty)$ is a sub-root function if it is nonnegative, nondecreasing, and if $\delta\mapsto\psi(\delta)/\sqrt\delta$ is nonincreasing for $\delta>0$. A useful property of sub-root functions is stated as follows.

Lemma 18 (Lemma 3.2 in Bartlett et al. (2005)). If $\psi:[0,\infty)\to[0,\infty)$ is a nontrivial sub-root function, then it is continu...
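The fixed-point property behind Lemma 18 can be sketched numerically: for the prototypical sub-root function $\psi(\delta)=a\sqrt\delta$ (an illustrative choice), the unique positive fixed point is $\delta^*=a^2$, and fixed-point iteration converges to it from any positive start:

```python
import math

def fixed_point(psi, delta0=1.0, iters=200):
    # iterate delta <- psi(delta); for this sub-root psi the iteration
    # contracts towards the unique positive fixed point
    d = delta0
    for _ in range(iters):
        d = psi(d)
    return d

a = 0.3
psi = lambda d: a * math.sqrt(d)   # prototypical sub-root function
print(fixed_point(psi))            # converges to a**2 = 0.09
```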
family of $B$-Lipschitz functions. Then, we have
$$\mathbb E\Big[\sup_{\theta\in T}\sum_{j=1}^d\varepsilon_j\psi_j(\theta_j)\Big]\le B\,\mathbb E\Big[\sup_{\theta\in T}\sum_{j=1}^d\varepsilon_j\theta_j\Big].$$

C.3 Auxiliary concentration inequalities
This section provides some useful concentration and tail inequalities used in the proofs in the Appendix. The first lemma states a variant of the classic Bernstein's ine...
arXiv:2505.20151v1 [stat.AP] 26 May 2025. The evolving categories multinomial distribution: introduction with applications to movement ecology and vote transfer. Ricardo Carrizo Vergara¹, Marc Kéry¹, Trevor Hefley². ¹Population Biology Group, Swiss Ornithological Institute, 6204 Sempach, Switzerland. ²Department of Statist...
https://arxiv.org/abs/2505.20151v1
sense the distribution is not new. Our contribution is to pay special attention to this mathematical object, giving it a name, studying its basic properties, developing methods for statistical inference, and demonstrating its utility by addressing important questions in scientific fields such as ecology and sociology.
We illustrate by inferring vote transfer during the 2021 Chilean presidential election. Our work is organized as follows. In Section 2 we introduce the ECM distribution with its main properties: characteristic function, one-time and two-time marginal distributions, and first and second order moments. In Section 3 we...
on the number of individuals counted in each category at each time. For each $k\in\{1,\ldots,n\}$ and $l\in\{1,\ldots,m_k\}$, we define the random variable
$$Q^{(k)}_l:=\text{“number of individuals belonging to category $l$ at time $k$”}.\tag{2.3}$$
We assemble the variables (2.3) into a single random arrangement $Q:=\big(Q^{(k)}_l\big)_{k\in\{1,\ldots,n\},\,l\in\{1,\ldots,m_k\}}$. (2....
al., 2023), which is constructed as the sum of independent multinomial random vectors of size 1 with different bin probabilities. Propositions 2.2 and 2.3 allow us to compute the first and second moment structures of $Q$. We use the Kronecker delta notation $\delta_{l,l'}=1$ if $l=l'$, and $\delta_{l,l'}=0$ if $l\ne l'$.

Proposition 2.4. The mean and ...
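The Kronecker-delta moment structure announced in Proposition 2.4 can be sanity-checked by Monte Carlo through the one-time multinomial marginal; the category probabilities and sample sizes below are illustrative assumptions:

```python
import numpy as np

# Monte Carlo check of E[Q_l] = N p_l and
# Cov(Q_l, Q_l') = N (delta_{l,l'} p_l - p_l p_l')
# via the one-time multinomial marginal of the arrangement.
rng = np.random.default_rng(3)
N, p = 50, np.array([0.2, 0.5, 0.3])
Q = rng.multinomial(N, p, size=200_000)            # 200k replicates

mean_theory = N * p
cov_theory = N * (np.diag(p) - np.outer(p, p))     # Kronecker-delta structure
print(Q.mean(axis=0), mean_theory)
```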
$\ldots,p^{(k'|k)}_{m_{k'}|l}\big)$ (indep. sums). (3.3) Similar distributions have been worked out in the context of evolving Poisson point processes from moving particles. See (Roques et al., 2022, Section 4), where a future point process conditioned on a past Poisson process is described as the union of an inhomogeneous Poisson ...
single-realization case, is to express the path probabilities as functions of a much smaller set of parameters for which inference is possible. The methods presented here can help in both scenarios.

4.1 Maximum Gaussian Likelihood Estimation (MGLE)
If $Q^{(N)}$ is an ECM random arrangement of $N$ individuals, it can be expresse...
and 3 allow us to obtain the log-likelihoods of the pairs $\ell_{Q^{(k)}_l,Q^{(k')}_{l'}}(\theta)$. For the ECM case, the pairs at the same time $k=k'$ are sub-vectors of a multinomial vector, so their distribution is simple. When $k\ne k'$ a special structure appears in $(Q^{(k)}_l,Q^{(k')}_{l'})$. In the case with just 1 individual, it is a random binary vecto...
random measure (Kallenberg, 2017), more particularly a spatial point process (Daley & Vere-Jones, 2006). Let us assume the $N$ individuals follow iid trajectories, that is, the processes $(X_j)_j$ are iid with the distribution of a reference process $X=(X(t))_{t\ge 0}$. Let $t_1,\ldots,t_n\ge 0$ be survey time points (possibly continuously...
used trajectory model (Uhlenbeck & Ornstein, 1930; Gardiner, 2009, Section 3.8). One parameter (here $\sigma$) occurs only in the covariance of the movement, and thus in the space-time autocorrelation of the counts, but not in the mean. It is therefore impossible to estimate it using only the intensity information. For instance, an ...
which seems to be the most difficult to estimate in terms of reducing variance when adding extra information. In general, MCLE has less bias and variance than MGLE, especially for $\tau$ and $\lambda$, where the former is under- and the latter is overestimated. MCLE presents little correlation among estimates. The Gaussian approximation...
infer movement parameters even under loss of information, we transformed the tracking data into counts to be modeled with the ECM distribution. Although individuals were released at different places and on different days, the telemetry data were translated in space-time so that all individuals start at an origin $x_0=(0,0)$ and...
obtained through parametric bootstrapping. Bootstrapped sampling histograms are presented in Figure 4, and Figure 5 contains the associated correlograms (with $\sigma$, $\tau$, $-t_0$ in log-scale, and $\alpha$ in logit-scale). Pairwise MCLE results: $\hat\tau=0.0324$ (dist-unit), $\hat\sigma=0.1085$ (dist-unit/days$^{1/2}$), $\hat t_0=-0.1362$ (days), $\hat\alpha=0.3682$, $c\ell=-8524.08$; 95% b...
both rounds, resulting in $m_1=\tilde m_1+1$ categories at time 1 and $m_2=3$ at time 2. Assume there are $n_D$ electoral districts and that in each district $j\in\{1,\ldots,n_D\}$, there is a known number of voters, denoted $N_j$. Since we focus on vote transfer, one-time probabilities at the first round are not important, the focus being on the con...
vote is $N=15{,}030{,}974$ for both rounds. As districts, we considered the territorial division of Chile into comunas. Voters abroad were grouped into an extra fictional district. This resulted in $n_D=347$ districts with numbers of voters ranging from 233 to 403,129. Null and blank votes are considered as abstention. We obtain...
counts of space-time Poisson processes can be subsumed as a special case of this distribution. In such a case, the time index $k=1,\ldots,n$ refers to disjoint time intervals, and the categories at each $k$ represent disjoint spatial regions of count. The one-time probabilities govern the intensity. If we apply this logic t...
2014). One issue is still present when considering an underlying continuous Markov process $X$ in the scenario of Section 5.1. The Markovianity of $X$ does not imply the Markovianity of the counts. This is a form of continuous-to-discrete lumping, similar to that encountered in discrete-valued Markov chains (Gurvits & Ledoux, 200...
sociological applications. However, the ECM distribution can still be used as a basis for building more complex models. For example, counts of individuals with two or more different behaviors (such as in Section 5.1.2, but knowing the number of exploratory individuals) could be modeled as the sum of independent ECM ...
used marginalization for the one-time probabilities
$$p^{(k)}_l=\sum_{\substack{l_1,\ldots,l_n\\ l_k=l}}p^{(1,\ldots,n)}_{l_1,\ldots,l_n}.\ \blacksquare\tag{A.7}$$

A.3 Proof of Proposition 2.3
We recall that for two random vectors $\vec X,\vec Y$ with values in $\mathbb R^m$ and $\mathbb R^n$ respectively, the conditioned characteristic function $\varphi_{\vec X|\vec Y=\vec y}$ satisfies
$$\varphi_{\vec X,\vec Y}(\vec\xi,\vec\eta)=\int_{\mathbb R^n}\varphi_{\vec X|\vec Y=\vec y}(\vec\xi)\,e^{i\vec\eta^\top\vec y}\,d\mu_{\vec Y}(...
$-1)\big)$, (A.19) which is a product of Poisson characteristic functions. $\blacksquare$

A.7 Proof of Proposition 3.3
Let $\vec\xi=(\xi_1,\ldots,\xi_{m_{k'}})\in\mathbb R^{m_{k'}}$ and $\vec\eta=(\eta_1,\ldots,\eta_{\tilde m_k})\in\mathbb R^{\tilde m_k}$. We consider an arrangement $\xi=(\xi^{(k)}_l)_{l\in\{1,\ldots,m_k\},\,k\in\{1,\ldots,n\}}$ such that
$$\xi^{(k_2)}_l=\begin{cases}\xi_l&\text{if }k_2=k'\\ \eta_l&\text{if }k_2=k\text{ and }l\le\tilde m_k\\ 0&\text{otherwise}.\end{cases}\tag{A.20}$$
Then, the joint ch...
probabilities $p_X,p_Y$. Then, $(X,Y)$ is said to follow a bivariate binomial distribution. Its characteristic function is then given by
$$\varphi_{X,Y}(\xi,\eta)=\Big[e^{i(\xi+\eta)}p_{X,Y}+e^{i\xi}(p_X-p_{X,Y})+e^{i\eta}(p_Y-p_{X,Y})+1-p_X-p_Y+p_{X,Y}\Big]^N.\tag{A.32}$$
Consider the components $(Q^{(k')}_{l'},Q^{(k)}_l)$ of an ECM random arrangement, with $k\ne k'$. Its characteristic function ...
each method, MGLE and MCLE, 1045 simulation-estimations were performed, in order to discard potential erratic scenarios while still aiming for a total of 1000 samples. By “erratic” we mean the following. The Hessians at the optimum values proposed by optim are computed using the hessian function from the library numDeri...
[Figure residue: in point clouds, the theoretical point value is indicated with a red point, the sample mean with an empty-interior blue circle; sample size 1000 in all settings. Figure 9: correlograms of (A) MGLE for ECM, $N=10^4$; (B) MCLE for ECM, $N=10^4$; (C) MGLE for ECM-Poisson, $\lambda=10^4$; (D) MCLE for ECM-Poisson, $\lambda=10^4$.]
Biometrika, 91(3), 729–737. Daley, D. J., & Vere-Jones, D. (2006). An introduction to the theory of point processes: volume I: elementary theory and methods. Springer Science & Business Media. Daskalakis, C., Kamath, G., & Tzamos, C. (2015). On the structure, covering, and learning of Poisson multinomial distributions...
of spatial point patterns. John Wiley & Sons. Kallenberg, O. (2017). Random measures, theory and applications. Springer. Kawamura, K. (1973). The structure of bivariate Poisson distribution. In Kodai Mathematical Seminar Reports (Vol. 25, pp. 246–256). Kemeny, J. G., Snell, J. L., & others. (1969). Finite Markov cha...
Sciences, 96(19), 10578–10581. Seber, G. A. (1986). A review of estimating animal abundance. Biometrics, 267–292. Sisson, S. A., Fan, Y., & Beaumont, M. (2018). Handbook of approximate Bayesian computation. CRC Press. Smyth, P., Heckerman, D., & Jordan, M. I. (1997). Probabilistic independence networks for hidden...
arXiv:2505.20153v1 [math.ST] 26 May 2025. Submitted to the Annals of Statistics. SIMPLE, EFFICIENT ENTROPY ESTIMATION USING HARMONIC NUMBERS. By Octavio César Mesner, Brown School, Washington University in St. Louis, mesner@wustl.edu. The estimation of entropy, a fundamental measure of uncertainty, is central to dive...
https://arxiv.org/abs/2505.20153v1
unknown. As entropy is a functional whose domain is the space of PMFs, in order to accurately estimate the entropy of $X$, one must typically estimate $p_j$, or some transformation of it, for each $j\in S:=\{j\in\mathbb N:p_j>0\}$, the support of $p$. The earliest method for entropy estimation was the plug-in (or maximum likelihood) estimat...
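The plug-in estimator mentioned above replaces each $p_j$ by its empirical frequency; a minimal sketch:

```python
import math
from collections import Counter

def plug_in_entropy(sample):
    # maximum likelihood estimator: -sum_j (m_j/n) log(m_j/n)
    n = len(sample)
    return -sum((m / n) * math.log(m / n) for m in Counter(sample).values())

x = [1, 1, 2, 2, 3, 3, 4, 4]
print(plug_in_entropy(x))   # log(4) for a uniform empirical distribution
```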
and Schürmann [38]. Wolpert and Wolf [46], working from a Bayesian paradigm, propose another similar estimator, which is equal to $\hat H_\psi$ if the prior offsets are all set to zero. Section 2.1 provides more background on previous estimators. The first main result bounds the estimator's mean squared error, combining bounds on the...
as is often the case, some recent methods [21, 47] use a variation of this adjustment, estimating $\widehat{-p_j\log p_j}=-\hat p_j\log\hat p_j-\frac{1}{2n}$ for some values of $j$ appearing in the sample. Paninski [31] develops the Best Upper Bound Estimator (BUB) by minimizing the squared bias and variance of a linear function of the sequence $\big(|\{j\in\mathbb N:m_j=k\}|:k\ldots$
Wolpert and Wolf [46] apply a Dirichlet prior with parameter $\delta:=(\delta_1,\ldots,\delta_s)$ to the PMF, yielding a Bayes estimator for entropy as
$$\hat H_{WW}(X^n):=\psi\Big(1+\sum_{j=1}^s(m_j+\delta_j)\Big)-\sum_{j=1}^s\frac{m_j+\delta_j}{\sum_{j=1}^s(m_j+\delta_j)}\,\psi(1+m_j+\delta_j).\tag{2.9}$$
This estimator is equal to the proposed estimator if $\delta=0$. For entropy estimation, especially when $n\ll s$, this mode...
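Setting $\delta=0$ in (2.9) and using the identity $\psi(1+z)=H_z-\gamma$ (the Euler constants cancel because the weights $m_j/n$ sum to one), the estimator can be written with plain harmonic numbers. A sketch of this harmonic-number form, assuming the $\delta=0$ reduction stated above:

```python
import math
from collections import Counter

def harmonic(z):
    # H_z = sum_{k=1}^{z} 1/k, related to digamma by psi(1 + z) = H_z - gamma
    return sum(1.0 / k for k in range(1, z + 1))

def entropy_hat(sample):
    # H_hat = H_n - sum_j (m_j / n) H_{m_j}: the delta = 0 case of (2.9)
    n = len(sample)
    return harmonic(n) - sum((m / n) * harmonic(m) for m in Counter(sample).values())

x = [1, 1, 2, 2, 3, 3, 4, 4]
print(entropy_hat(x))   # H_8 - H_2 = 341/280
```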
$\log p=-\sum_{m=1}^\infty\frac{(1-p)^m}{m}$ and converges for $p\in(0,1]$, which concludes the proof. Many papers have indicated that $S\setminus X^n$, the support points of $X$ not in the sample, contribute to estimator bias [21, 41, 44]. A distinctive feature of this estimator is that its bias can be expressed using $P(X^{n+1}\in S\setminus X^n)=\mathbb E[(1-p_X)^n]$. Specifically, support ...
knowledge, this is the first entropy estimator whose variance is calculated explicitly under the mild condition that $p_j=o(1/j)$.

THEOREM 3.3. Let $X^n:=(X^{(1)},\ldots,X^{(n)})$ be an i.i.d. random sample from $p$ where $p_j=O(j^{-1/\alpha})$ for $\alpha\in[0,1)$; then
$$\mathrm{Var}\big[\hat H(X^n)\big]=\frac{\mathrm{Var}[\log p_X]}{n}+o\Big(\frac1n\Big)+O\Big(\frac{1}{n^{2(1-\alpha)}}\Big).\tag{3.11}$$
PROOF OF THEOREM 3.3. For b...
the proof shows that the bounded Lipschitz distance between $\sqrt n\big[\hat H(X^n)-H(X)\big]$ and a Gaussian-distributed random variable converges to zero, which implies weak convergence of the former to the latter.

PROOF OF THEOREM 1.3. For random variables $U$ and $V$, define the bounded Lipschitz metric as
$$d(U,V):=\sup_{f\in BL(\mathbb R)}|\mathbb E[f(U)\ldots\tag{3.25}$$
an unbounded PMF with power-law decay. For each PMF and sample size, the simulation generates 100 datasets and then applies all four estimation methods to each dataset. The estimation results are displayed as empirical mean squared errors (MSE) and in a series of violin plots representing the distribution of the entropy esti...
[Figure 3 residue: axis ticks and a legend listing JVHW, Miller, PYM, Plug-In, Proposed, and True Entropy for a Geometric(0.1) panel.] FIG 3. The figure shows plots for the geometric PMF with parameter $p=0.5$. On the left are average mean squared errors (MSE) by sample size and on the right are the corresponding violin plots. In ...
a large class of discrete random variables and provides its mean squared error. The algebraic approach used throughout much of the paper provides the mathematical precision necessary to establish these findings. This development allows for statistical inference on entropy and other information-theoretic measures for unbou...
$(n+1)^2$ (B.7), using Claim 2 on line B.6. Together with $S_1=1-p$, then
$$S_n=\sum_{m=1}^n\sum_{k=1}^m\frac{3(1-p)^m-(1-p)^{m-k}}{mk}+\sum_{m=1}^n\frac{1-2(1-p)^m}{m^2}\tag{B.8}$$
$$=\sum_{m=1}^n\sum_{k=1}^m\frac{2(1-p)^m}{mk}-\sum_{m=1}^n\frac{2(1-p)^m}{m^2}\tag{B.9}$$
$$\quad+\sum_{m=1}^n\sum_{k=1}^m\frac{(1-p)^m-(1-p)^{m-k}}{mk}+\sum_{m=1}^n\frac1{m^2}.\tag{B.10}$$
Working with these lines separately, on line B.9, re-indexing, canceling, and using the fact ...
$$\sum_ip_i^2\frac{(1-p_i)^m}{m}\sum_{k=2}^\infty\frac1k\Big(\frac mn\Big)^k=\frac1n\sum_ip_i^2(1-p_i)^m\sum_{k=1}^\infty\frac{1}{k+1}\Big(\frac mn\Big)^k\tag{B.44}$$
$$\le-\frac{\sum_ip_i^2(1-p_i)^m}{n}\log\Big(1-\frac mn\Big)=o\Big(\frac1n\Big).\tag{B.45}$$
For all values $m\ge m_0+1$, we use Lemma 3.2,
$$\sum_{m=m_0+1}^n\sum_ip_i^2\frac{(1-p_i)^m}{m}\sum_{k=2}^\infty\frac1k\Big(\frac mn\Big)^k=\sum_{m=m_0+1}^n\sum_{k=2}^\infty O\Big(\frac{m^{k-3+\alpha}}{kn^k}\Big)\tag{B.46}$$
$$\le\sum_{k=0}^\infty\int_{m_0}^nO\Big(\frac{x^{k-1+\alpha}}{(k+2)n^{k+2}}\Big)dx.\tag{B.47}$$
If $\alpha\in(0,1)$, then $\sum_{k=0}^\infty$ ...
$$\cdots\sum_{k=m+1}^\infty\frac{(1-z)^{m-1}\big(1-\frac{q}{1-z}\big)^k}{k}\,dz\tag{B.89}$$
$$=\int_{1-q}^p\sum_{m=1}^n\sum_{k=m}^\infty\frac{(1-z)^{m-1}\big(1-\frac{q}{1-z}\big)^k}{k}\,dz\tag{B.90}$$
$$=-\int_{1-q}^p\sum_{m=1}^n\sum_{k=m}^\infty\int_{1-z}^q(1-z)^{m-2}\Big(1-\frac{y}{1-z}\Big)^{k-1}dy\,dz\tag{B.91}$$
$$=-\int_{1-q}^p\int_{1-z}^q\sum_{m=1}^n\frac{(1-z)^{m-1}\big(1-\frac{y}{1-z}\big)^{m-1}}{y}\,dy\,dz\tag{B.92}$$
$$=-\int_{1-q}^p\int_{1-z}^q\sum_{m=1}^n\frac{(1-y-z)^{m-1}}{y}\,dy\,dz=-\int_{1-q}^p\int_{1-z}^q\frac{1-(1-y-z)^n}{y(y+z)}\,dy\,dz.\tag{B.93}$$
Now, focusing on part of line...
C.9, let $m^{(2)}_{-1}:=\sum_{i=2}^n\mathbb I\big(X^{(2)}=X^{(i)}\big)$. If $X^{(1)}=X^{(2)}$, then $m^{(2)}=m^{(2)}_{-1}+1$, and if $X^{(1)}\ne X^{(2)}$, then $m^{(2)}=m^{(2)}_{-1}$. Recall that $J(z+1)=J(z)+\frac1z$ for $z\in\mathbb N$. Then, using the law of total expectation to separate and then again to join, and Claim 1,
$$\mathbb E\Big[\big(J(m^{(2)})-J(n)\big)\log p_{X^{(1)}}\Big]\tag{C.18}$$
$$=P\big(X^{(1)}=X^{(2)}\big)\,\mathbb E\Big[\big(J(m^{(2)}_{-1}+1)-J(n)\big)\log\ldots$$
[11] Chow, C. and Liu, C. (1968). Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory 14, 462–467. [12] Dimitrov, A. G., Lazar, A. A. and Victor, J. D. (2011). Information theory in neuroscience. Journal of Computational Neuroscience 30, 1–5. [13] Durret...
dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 1226–1238. [33] Pinchas, A., Ben-Gal, I. and Painsky, A. (2024). A Comparative Analysis of Discrete Entropy Estimators for Large-Alphabet Problems. Entropy 26, 369. [34] Pitman, J. and Yor, M. (199...
arXiv:2505.20157v1 [math.ST] 26 May 2025. Gaussian Process Methods for Covariate-Based Intensity Estimation. Patric Dolmeta and Matteo Giordano, ESOMAS Department, University of Turin. Abstract. We study nonparametric Bayesian inference for the intensity function of a covariate-driven point process. We extend recent results...
https://arxiv.org/abs/2505.20157v1
link functions, achieve minimax-optimal rates of posterior contraction towards the ground truth in $L^1$-distance. In this paper, we build on the investigation of [10], obtaining in the same setting optimal posterior contraction rates for a much wider class of (rescaled) Gaussian priors, including widespread choices su...
$h=1,\ldots,d$, be independent, almost surely locally bounded, centred and stationary Gaussian processes with integrable covariance functions. Further assume, without loss of generality, that $\mathrm{Var}[Z^{(h)}(x)]=1$ for all $h$ and $x$. Let $Z$ be given by
$$Z(x):=[\phi(\tilde Z^{(1)}(x)),\ldots,\phi(\tilde Z^{(d)}(x))],\quad x\in\mathbb R^D,\tag{4}$$
where $\phi$ is the standard nor...
tuning in practice.

2.4 Main result
In our main result, we characterise the speed of asymptotic concentration of the posterior distributions $\Pi_n(\cdot\,|\,N^{(n)},Z^{(n)})$ arising from the log-Gaussian priors (5), under the frequentist assumption that the data $(N^{(n)},Z^{(n)})$ have been generated by some fixed ground truth $\rho_0$. We work in th...
also of interest in spatial statistics. For covariate-based intensity estimation, this setting can be formulated as having access to repeated (and possibly independent) realisations of the covariates and the points over a fixed observation window. We refer to [10] for possible extensions of the proof techniques emplo...
intensities. In Proceedings of the 26th Annual International Conference on Machine Learning (New York, NY, USA, 2009), ICML '09, Association for Computing Machinery, pp. 9–16. [2] Baddeley, A., Chang, Y.-M., Song, Y., and Turner, R. Nonparametric estimation of the dependence of a spatial point process on spatial covar...
Strong Low Degree Hardness for the Number Partitioning Problem. Rushil Mallarapu*, Mark Sellke†. Abstract. In the number partitioning problem (NPP) one aims to partition a given set of $N$ real numbers into two subsets with approximately equal sum. The NPP is a well-studied optimization problem...
https://arxiv.org/abs/2505.20607v1
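The Karmarkar-Karp differencing heuristic referenced in this literature can be sketched in a few lines: repeatedly commit the two largest numbers to opposite sides by replacing them with their difference; the final remaining value is the achieved discrepancy.

```python
import heapq

# Karmarkar-Karp differencing heuristic for the NPP: repeatedly replace the
# two largest numbers by their difference (committing them to opposite
# sides); the last remaining value is the achieved discrepancy.
def karmarkar_karp(values):
    heap = [-v for v in values]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))   # a >= b, so the difference is >= 0
    return -heap[0]

print(karmarkar_karp([4, 5, 6, 7, 8]))  # 2 (the optimum here is 0: {8,7} vs {6,5,4})
```

Note that the heuristic need not find the global optimum, which is exactly the statistical-to-computational gap discussed in the text.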
1.2 Heuristic Optimality of Theorem 1.4
1.3 Notations and Preliminaries
2.6 Truly Random Rounding
References
* Harvard University. Em...
Johnson, as well as Tsai, looked at utilizing algorithms for the NPP for designing multiprocessor schedulers or large integrated circuits [CGJ78], [Tsa92]. Coffman and Lueker further discuss how the NPP can be used for resource allocation problems [CL91]. The NPP was also used to...
independently and uniformly from the discrete set $\{1,2,3,\ldots,2^M\}$, Gent and Walsh conjectured that the hardness of finding perfect partitions (i.e., with discrepancy zero if $\sum_ig_i$ is even, and one else) was controlled by the parameter $\kappa:=\frac MN$ [GW98], [GW00]. Mertens soon ga...
no exception, and exhibits an exponentially wide statistical-to-computational gap. For the global optimum, Karmarkar-Karp-Lueker-Odlyzko showed that when the $g_i$ are i.i.d. random variables with a sufficiently nice distribution,¹ the minimum discrepancy of (1.1) is $\Theta(\sqrt N\,2^{-N})$ w...
C08], [AMS25], [GJK23]. In the seminal work [GS14], Gamarnik and Sudan showed how to use a strong form of this disconnectivity to deduce rigorous hardness results for classes of stable algorithms, in what has become known as the overlap gap property (OGP) [Gam21]. In its simpl...
simulated annealing [AFG96], [SFD96], [Joh+89], [Joh+91], [Ali+05]. Previously, Gamarnik and Kızıldağ applied the OGP methodology to the NPP, proving that a more complicated multi-OGP holds for discrepancies of $2^{-\Theta(N)}$ (i.e., the statistical near-optimum), but is absent for small...
$g$); this terminology is motivated by the statistical physics literature [Mer01]. Rephrasing the preceding discussion, the statistically optimal energy level is $E=\Theta(N)$, while the best computational energy level currently known to be achievable in polynomial time is $E=\Theta(\log^2N)$ ...
in Definition 1.2. The first is traditional polynomial degree, applicable for algorithms given in each coordinate by low degree polynomial functions of the inputs. In this case, we show:

Theorem 1.3 (Hardness for LDP Algorithms). Let $g\sim\mathcal N(0,I_N)$ be a standard Normal random vecto...