
Now by Claim 2.4, $\pi(\mathbf{t}_i)$ is uniformly distributed mod $\pi(\mathcal{L})$, and therefore by Lemma 2.3,

$$\mathbb{E}[\mathrm{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L}))^2] \geq \mu(\pi(\mathcal{L}))^2/4.$$

Furthermore, since the $\mathbf{t}_i$ are independent and identically distributed with $\mathrm{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L})) \leq \mu(\pi(\mathcal{L}))$, we can apply the Chernoff-Hoeffding bound (Lemma 2.19) to get

$$\mathrm{Pr} \left[ \sum \mathrm{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L}))^2 \leq m\mu(\pi(\mathcal{L}))^2/5 \right] \leq \exp(-Cm^2).$$

The result follows by noting that $\mu(\pi(\mathcal{L}))^2/(5k) \ge 3$ by Claim 2.1, together with the fact that $\det(\pi(\mathcal{L})) \ge 100^k$. $\square$
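As a sanity check on the concentration step above, the following toy simulation (not the lattice setting; the variables, bound $m/5$, and trial counts are illustrative choices) shows how the probability that a sum of $m$ i.i.d. bounded random variables with mean at least $1/4$ falls below $m/5$ decays as $m$ grows, in line with the Chernoff-Hoeffding bound.

```python
import random

def empirical_failure_rate(m, trials=2000, seed=0):
    """Fraction of trials in which the sum of m i.i.d. uniform [0,1]
    draws (mean 1/2, so well above 1/4) falls below the threshold m/5."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        s = sum(rng.random() for _ in range(m))
        if s <= m / 5:
            failures += 1
    return failures / trials

# Hoeffding predicts the failure rate shrinks exponentially as m grows.
rate_small = empirical_failure_rate(5)
rate_large = empirical_failure_rate(50)
```

Here each uniform draw plays the role of a single normalized term $\mathrm{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L}))^2/\mu(\pi(\mathcal{L}))^2$; the actual proof applies Lemma 2.19 to those terms directly.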

Corollary 3.5 (Soundness). For any $\varepsilon \in (0, 1/2)$ and lattice $\mathcal{L} \subset \mathbb{R}^n$ with basis $\mathbf{B}$ satisfying $n \ge 2$ and $\eta_\varepsilon(\mathcal{L}) \ge 100C_\eta(n)\sqrt{\log(1/\varepsilon)}$ (and in particular any lattice with $\eta_\varepsilon(\mathcal{L}) \ge 1000(\log(n)+2)\sqrt{\log(1/\varepsilon)}$), if $\mathbf{t}_1, \dots, \mathbf{t}_m$ are sampled uniformly from $\mathcal{P}(\mathbf{B})$, then the probability that there exists a proof $\mathbf{e}_1, \dots, \mathbf{e}_m$ with $\mathbf{e}_i \equiv \mathbf{t}_i \bmod \mathcal{L}$ and

$$\left\| \sum \mathbf{e}_i \mathbf{e}_i^T \right\| \le 3m$$

is at most $\exp(-\Omega(m^2))$. In other words, the proof system in Figure 1 is $\exp(-\Omega(m^2))$-statistically sound.

Proof. By Lemma 2.9, we have $\eta_{1/2}(\mathcal{L}) \ge 100C_\eta(n)$, and the result follows from Theorem 3.4. $\square$

Making the prover efficient. Finally, following [PV08] we observe that the prover in the proof system shown in Figure 1 can be made efficient if we relax the approximation factor. In particular, if $\eta_\varepsilon(\mathcal{L}) \le 1/\sqrt{n\log n}$, then by Corollary 2.14, there is in fact an efficient prover. Theorem 1.2 then follows immediately from the above analysis.

3.2 A Proof via Entropy Approximation

We recall from Goldreich, Sahai, and Vadhan [GSV99] the Entropy Approximation problem, which asks us to approximate the entropy of the distribution obtained by calling some input circuit $C$ on the uniform distribution over its input space. In particular, we recall that [GSV99] proved that this problem is NISZK-complete. (Formally, we only need the fact that Entropy Approximation is in NISZK.)

Definition 3.6. An instance of the Entropy Approximation problem is a circuit $C$ and an integer $k$. It is a YES instance if $H(C(U)) > k+1$ and a NO instance if $H(C(U)) < k-1$, where $U$ is the uniform distribution on the input space of $C$.
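For intuition, the quantity $H(C(U))$ in Definition 3.6 can be computed exactly for toy circuits by brute-force enumeration of the input space (real instances are of course exponentially large, so this is purely illustrative; the function names are our own):

```python
import collections
import itertools
import math

def entropy_of_circuit(circuit, n_bits):
    """Shannon entropy (in bits) of circuit(x) when x is drawn
    uniformly from {0,1}^n_bits, computed by full enumeration."""
    counts = collections.Counter(
        circuit(bits) for bits in itertools.product((0, 1), repeat=n_bits)
    )
    total = 2 ** n_bits
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A circuit that drops its last input bit has output uniform over
# {0,1}^(n-1), so its output entropy is exactly n - 1 bits.
drop_last = lambda bits: bits[:-1]
h = entropy_of_circuit(drop_last, 4)  # -> 3.0
```

With the gap promised in Definition 3.6, deciding whether such an entropy is above $k+1$ or below $k-1$ is exactly the Entropy Approximation problem.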

Theorem 3.7 ([GSV99]). Entropy Approximation is NISZK-complete.

In the rest of this section, we show a Karp reduction from $O(\log(n)\sqrt{\log(1/\varepsilon)})$-GapSPP$_\varepsilon$ to Entropy Approximation. That is, we give an efficient algorithm that takes as input a basis for a lattice $\mathcal{L}$ and outputs a circuit $C_\mathcal{L}$ such that (1) if $\eta_\varepsilon(\mathcal{L}) \le 1$, then $H(C_\mathcal{L}(U))$ is large; but (2) if $\eta_\varepsilon(\mathcal{L}) \ge C \log(n)\sqrt{\log(1/\varepsilon)}$, then $H(C_\mathcal{L}(U))$ is small.

Intuitively, we want to use a circuit that samples from the continuous Gaussian with parameter one modulo the lattice $\mathcal{L}$. Then, by Claim 2.7, if $\eta_\varepsilon(\mathcal{L}) \le 1$, the resulting distribution will be nearly uniform over $\mathbb{R}^n/\mathcal{L}$. On the other hand, we know that, with high probability, the continuous Gaussian lies in a set of