
$D_0 > 0$ such that whenever the constant $D'$ in the definition of $S_i(\mathbf{p}^*)$ is larger than $D_0$,

$$r_{i,S_i(\mathbf{p}^*)}^{\ell_u^u}(\mathbf{Q}) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2} - \delta n^{-u/2} \quad \text{for all } u \ge 1. \quad (23)$$

The constant $D'$ is required to be large for the local asymptotic normality arguments to hold (refer again to [15, Chapter 2, Theorem 1.1] and [24, Ch. 7]).

Proposition 4.1. Let $\mathcal{P}$ be the Euclidean ball around $\mathbf{p}_U$ defined in (13). For a sufficiently large constant $D$ and any $u \ge 1$ we have

$$r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2} - o(n^{-u/2}). \quad (24)$$

Proof. We can view $\mathcal{P}$ as a union of (uncountably many) parallel line segments with direction vector $\mathbf{v}_i$ defined in (15). Each of these line segments can be written as $S_i(\mathbf{p}^*)$ (see (16)) with a suitably chosen midpoint $\mathbf{p}^* \in \mathcal{P}$. Since the midpoints of all the line segments lie inside $\mathcal{P}$, which is a neighborhood of the uniform distribution, (23) implies that for any estimator $\hat{\mathbf{p}}_i$, the average $\ell_u^u$ estimation loss $r_{i,S_i(\mathbf{p}^*)}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}_i)$ on any of these line segments $S_i(\mathbf{p}^*)$ with $D' \ge D_0$ is lower bounded by

$$r_{i,S_i(\mathbf{p}^*)}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}_i) \ge r_{i,S_i(\mathbf{p}^*)}^{\ell_u^u}(\mathbf{Q}) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2} - \delta n^{-u/2}$$

for $u \ge 1$. To compute the average estimation loss $r_{i,\text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}_i)$ on $\mathcal{P}$, we average over all the segments with weight proportional to the length of each segment. Given $D_0$, we can choose $D$ in (13) large enough that the proportion of segments $S_i(\mathbf{p}^*)$ with $D' \ge D_0$ among all segments in $\mathcal{P}$ is arbitrarily close to one. Formally, denoting the union of such segments by $\mathcal{P}_0$, the ratio $\text{Vol}(\mathcal{P}_0)/\text{Vol}(\mathcal{P})$ can be made arbitrarily close to 1 by taking $D/D_0$ large enough. The average estimation loss along each of these segments is uniformly bounded below as in (23), and thus the average loss on $\mathcal{P}_0$ is lower bounded by the same quantity. Combining this with the fact that $\text{Vol}(\mathcal{P}_0)/\text{Vol}(\mathcal{P}) = 1 - o(1)$, we have
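The passage from the $\delta n^{-u/2}$ remainder in (23) to the $o(n^{-u/2})$ remainder below can be made explicit as follows. This is a sketch; the shorthand $R_u$ for the main term is an abbreviation not used in the original, and it assumes, as is standard in such arguments, that $\delta > 0$ in (23) can be made arbitrarily small by enlarging $D_0$ (and hence $D$):
$$
\begin{aligned}
r_{i,\text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}_i)
&\ge \frac{\text{Vol}(\mathcal{P}_0)}{\text{Vol}(\mathcal{P})}\,\bigl( R_u - \delta n^{-u/2} \bigr),
\qquad R_u := C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2}, \\
&= (1-o(1))\, R_u - (1-o(1))\, \delta n^{-u/2}
\;\ge\; R_u - o(n^{-u/2}),
\end{aligned}
$$
where the first inequality discards the (nonnegative) contribution of $\mathcal{P} \setminus \mathcal{P}_0$ to the volume-weighted average, and the last step uses $R_u = O(n^{-u/2})$, so that both the $o(1) \cdot R_u$ term and the $\delta n^{-u/2}$ term are absorbed into $o(n^{-u/2})$.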

$$r_{i, \text{Bayes}}^{\ell_u^u}(\mathbf{Q}, \hat{\mathbf{p}}_i) \ge C_u \left( \frac{nk^2}{(k-1)^2} \sum_{j=1}^{L} \frac{(q_{ji}-q_j)^2}{q_j} \right)^{-u/2} - o(n^{-u/2}) \quad \text{for all } u \ge 1.$$

This lower bound holds for any estimator $\hat{\mathbf{p}}_i$, which implies the claimed lower bound (24). □