diff --git a/samples/texts/1693838/page_1.md b/samples/texts/1693838/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a06a0b27a2d00e5a653e527f20c502f94b5aad7
--- /dev/null
+++ b/samples/texts/1693838/page_1.md
@@ -0,0 +1,25 @@
+ON MODULI OF INSTANTON BUNDLES ON $\mathbb{P}^{2n+1}$
+
+VINCENZO ANCONA AND G. OTTAVIANI
+
+Let $M\mathcal{I}_{\mathbb{P}^{2n+1}}(k)$ be the moduli space of stable instanton bundles on $\mathbb{P}^{2n+1}$ with $c_2 = k$. We prove that $M\mathcal{I}_{\mathbb{P}^{2n+1}}(2)$ is smooth, irreducible, unirational and has zero Euler-Poincaré characteristic, as is the case for $\mathbb{P}^3$. We find instead that $M\mathcal{I}_{\mathbb{P}^5}(3)$ and $M\mathcal{I}_{\mathbb{P}^5}(4)$ are singular.
+
+## 1. Definition and preliminaries.
+
+Instanton bundles on a projective space $\mathbb{P}^{2n+1}(\mathbb{C})$ were introduced in [OS] and [ST]. In [AO] we studied their stability, proving in particular that special symplectic instanton bundles on $\mathbb{P}^{2n+1}$ are stable, and that on $\mathbb{P}^5$ every instanton bundle is stable.
+
+In this paper we study some moduli spaces $M\mathcal{I}_{\mathbb{P}^{2n+1}}(k)$ of stable instanton bundles on $\mathbb{P}^{2n+1}$ with $c_2 = k$. For $k=2$ we prove that $M\mathcal{I}_{\mathbb{P}^{2n+1}}(2)$ is smooth, irreducible, unirational and has zero Euler-Poincaré characteristic (Theor. 3.2), just as in the case of $\mathbb{P}^3$ [Har].
+
+We find instead that $M\mathcal{I}_{\mathbb{P}^5}(k)$ is singular for $k=3,4$ (Theor. 3.3), in contrast with the case of $\mathbb{P}^3$ [ES], [P]. To be more precise, all points corresponding to symplectic instanton bundles are singular. Theor. 3.3 gives, to the best of our knowledge, the first example of a singular moduli space of stable bundles on a projective space. The proof of Theorem 3.3 relies on a personal computer calculation of the dimensions of some cohomology groups [BaS].
+
+We recall from [OS], [ST] and [AO] the definition of instanton bundle on $\mathbb{P}^{2n+1}(\mathbb{C})$.
+
+**Definition 1.1.** A vector bundle $E$ of rank $2n$ on $\mathbb{P}^{2n+1}$ is called an instanton bundle of quantum number $k$ if
+
+(i) The Chern polynomial is $c_t(E) = (1-t^2)^{-k} = 1 + kt^2 + \binom{k+1}{2}t^4 + \dots$
+
+(ii) $E(q)$ has natural cohomology in the range $-2n - 1 \le q \le 0$ (that is $h^i(E(q)) \ne 0$ for at most one $i = i(q)$)
+
+(iii) $E|_{\tau} \simeq \mathcal{O}_{\tau}^{2n}$ for a general line $\tau$.
+
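+As a sanity check of condition (i) (a worked expansion added for the reader's convenience), the binomial series gives all coefficients of the Chern polynomial at once:
+
+```latex
+% Binomial series expansion of the Chern polynomial in condition (i).
+(1-t^2)^{-k} = \sum_{j \ge 0} \binom{k+j-1}{j}\, t^{2j}
+             = 1 + k\,t^2 + \binom{k+1}{2}\,t^4 + \cdots
+```
+
+so that $c_2(E) = k$ (the quantum number) and all odd Chern classes vanish.
+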
+Every instanton bundle is simple [AO]. There is the following characterization:
\ No newline at end of file
diff --git a/samples/texts/1693838/page_10.md b/samples/texts/1693838/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/samples/texts/1693838/page_2.md b/samples/texts/1693838/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..8efa9f65b80da0598eca31f89844156c8f428229
--- /dev/null
+++ b/samples/texts/1693838/page_2.md
@@ -0,0 +1,19 @@
+**Theorem 1.2 ([ST], [AO]).** A vector bundle $E$ of rank $2n$ on $\mathbb{P}^{2n+1}$ satisfies the properties (i) and (ii) if and only if $E$ is the cohomology of a monad
+
+$$ (1.1) \qquad \mathcal{O}(-1)^k \xrightarrow{A} \mathcal{O}^{2n+2k} \xrightarrow{B} \mathcal{O}(1)^k. $$
+
+With respect to a fixed system of homogeneous coordinates, the morphism $A$ (resp. $B$) of the monad can be identified with a $k \times (2n + 2k)$ (resp. $(2n + 2k) \times k$) matrix whose entries are homogeneous polynomials of degree 1. The conditions for (1.1) to be a monad are then equivalent to:
+
+> $A, B$ have rank $k$ at every point $x \in \mathbb{P}^{2n+1}$, $A \cdot B = 0$.
+
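+Since $E$ is the cohomology of (1.1), its rank and Chern polynomial can be read off directly (a routine verification we spell out here):
+
+```latex
+% Rank and Chern polynomial of the monad cohomology E = Ker B / Im A.
+\operatorname{rank} E = (2n+2k) - k - k = 2n, \qquad
+c_t(E) = \frac{c_t(\mathcal{O})^{2n+2k}}{c_t(\mathcal{O}(-1))^{k}\, c_t(\mathcal{O}(1))^{k}}
+       = \frac{1}{(1-t)^{k}(1+t)^{k}} = (1-t^2)^{-k},
+```
+
+in agreement with condition (i) of Definition 1.1.
+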
+**Definition 1.3.** A bundle $S$ appearing in an exact sequence:
+
+$$ (1.2) \qquad 0 \to S^* \to \mathcal{O}^d \xrightarrow{B} \mathcal{O}(1)^c \to 0 $$
+
+is called a Schwarzenberger type bundle (*STB*).
+
+The kernel bundle $\mathrm{Ker}\,B$ in the monad (1.1) is the dual of a STB.
+
+**Definition 1.4.** An instanton bundle is called special if it arises from a monad (1.1) where the morphism *B* is defined in some system of homogeneous coordinates $(x_0, \dots, x_n, y_0, \dots, y_n)$ on $\mathbb{P}^{2n+1}$ by the matrix
+
+$$ B = \begin{bmatrix} x_0 & & \\ \vdots & \ddots & \\ x_n & & x_0 \\ & \ddots & \vdots \\ & & x_n \\ y_0 & & \\ \vdots & \ddots & \\ y_n & & y_0 \\ & \ddots & \vdots \\ & & y_n \end{bmatrix} $$
\ No newline at end of file
diff --git a/samples/texts/1693838/page_3.md b/samples/texts/1693838/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..63a451a9df1fafbe6acd6c944126377693c39c9e
--- /dev/null
+++ b/samples/texts/1693838/page_3.md
@@ -0,0 +1,65 @@
+**Example 1.5.** Take
+
+$$
+A = \begin{bmatrix}
+y_n & \cdots & y_0 & & & -x_n & \cdots & -x_0 & & \\
+ & \ddots & & \ddots & & & \ddots & & \ddots & \\
+ & & y_n & \cdots & y_0 & & & -x_n & \cdots & -x_0
+\end{bmatrix}
+$$
+
+$$
+B = \begin{bmatrix}
+x_0 & & \\
+\vdots & \ddots & \\
+x_n & & x_0 \\
+ & \ddots & \vdots \\
+ & & x_n \\
+y_0 & & \\
+\vdots & \ddots & \\
+y_n & & y_0 \\
+ & \ddots & \vdots \\
+ & & y_n
+\end{bmatrix}
+$$
+
+$E = \text{Ker } B / \text{Im } A$ is a special instanton bundle.
+
+Property (iii) of the definition 1.1 can be checked by the following:
+
+**Theorem 1.6 [OS].** Let $E = \text{Ker } B / \text{Im } A$ as in (1.1). Let $r$ be the line joining two distinct points $P, Q \in \mathbb{P}^{2n+1}$. Then
+
+$$
+E|_{r} \cong \mathcal{O}_{r}^{2n} \Leftrightarrow A(P) \cdot B(Q) \text{ is an invertible matrix.}
+$$
+
+**Example 1.7.** Consider the special instanton bundle $E$ of the Example
+1.5. Let $P = (1, 0, \dots, 0)$, $Q = (0, \dots, 0, 1)$. Then
+
+the only nonzero entries of $A(P)$ are the $-1$'s coming from the coefficients of $-x_0$, and the only nonzero entries of $B(Q)$ are the $1$'s coming from the coefficients of $y_n$; they match up so that
+
+$$
+A(P) \cdot B(Q) = -I_k
+$$
+
+is invertible. Hence $E$ is trivial on the line $\{x_1 = \dots = x_n = y_0 = \dots = y_{n-1} = 0\}$.
+
+**Proposition 1.8.** Let $E$ be an instanton bundle as in (1.1). Then
+
+$$
+H^2(E \otimes E^*) = H^2[(\mathrm{Ker}\,B) \otimes (\mathrm{Ker}\,A^t)]
+$$
\ No newline at end of file
diff --git a/samples/texts/1693838/page_4.md b/samples/texts/1693838/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..2311d6b63629e9c30a5f18571861015b113c306a
--- /dev/null
+++ b/samples/texts/1693838/page_4.md
@@ -0,0 +1,42 @@
+*Proof.* See [AO] Theorem 3.13 and Remark 2.22.
+
+**Remark 1.9.** If $E \simeq E^*$, then
+
+$$H^2(E \otimes E^*) = H^2[(\ker A^t) \otimes (\ker A^t)] = H^2[(\ker B) \otimes (\ker B)].$$
+
+**Remark 1.10.** The single complex associated with the double complex obtained by tensoring the two sequences
+
+$$0 \to \ker A^t \to \mathcal{O}^{2n+2k} \xrightarrow{A^t} \mathcal{O}(1)^k \to 0$$
+
+$$0 \to \ker B \to \mathcal{O}^{2n+2k} \xrightarrow{B} \mathcal{O}(1)^k \to 0$$
+
+gives the resolution
+
+$$0 \to (\ker A^t) \otimes (\ker B) \to \mathcal{O}^{2n+2k} \otimes \mathcal{O}^{2n+2k} \\ \to \mathcal{O}^{2n+2k} \otimes \mathcal{O}(1)^k \oplus \mathcal{O}(1)^k \otimes \mathcal{O}^{2n+2k} \xrightarrow{\alpha} \mathcal{O}(1)^k \otimes \mathcal{O}(1)^k \to 0$$
+
+where $\alpha = (A^t \otimes \text{id}, \text{id} \otimes B)$.
+
+Hence
+
+$$H^2(E \otimes E^*) = \operatorname{Coker} H^0(\alpha)$$
+
+and its dimension can be computed using [BaS]. For the convenience of the reader we sketch the steps needed in the computations.
+
+$A, B^t$ are given by $k \times (2n + 2k)$ matrices whose entries are linear homogeneous polynomials.
+
+$$A \otimes \mathrm{Id}_k = (a_1, \dots, a_{k(2n+2k)})$$
+
+and
+
+$$\mathrm{Id}_k \otimes B^t = (b_1, \dots, b_{k(2n+2k)})$$
+
+are both $k^2 \times (2n + 2k)k$ matrices. Let
+
+$$C = (a_1, \dots, a_{k(2n+2k)}, b_1, \dots, b_{k(2n+2k)}).$$
+
+We will denote by $\text{syz}_m C$ the dimension of the space of the syzygies of $C$ of degree $m$. Then
+
+$$h^2(E \otimes E^*) = h^0(\mathcal{O}(2)^{k^2}) - (4n + 4k)h^0(\mathcal{O}(1)^k) + \text{syz}_1 C \\
+= k(n + 1)[k(2n - 5) - 8n] + \text{syz}_1 C \\
+h^1(E \otimes E^*) = h^2(E \otimes E^*) + 1 - k^2 + 8n^2k - 4n^2 + 3nk^2 - 2n^2k^2 \\
+= 1 - 6k^2 - 8kn - 4n^2 + \text{syz}_1 C.$$
\ No newline at end of file
diff --git a/samples/texts/1693838/page_5.md b/samples/texts/1693838/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..af2232aae734b29dd350bc745560b815d23676f8
--- /dev/null
+++ b/samples/texts/1693838/page_5.md
@@ -0,0 +1,53 @@
+Note also that $h^0(E(1)) = \text{syz}_1 B^t - k$ and $h^0(E^*(1)) = \text{syz}_1 A - k$.
+
+**Remark 1.11.** In the same way we obtain
+
+$$h^1(E \otimes E^*(-1)) = \text{syz}_0 C$$
+
+$$h^2(E \otimes E^*(-1)) = 2k(nk - 2n - k) + \text{syz}_0 C.$$
+
+## 2. Examples on $\mathbb{P}^5$.
+
+Let $(a, b, c, d, e, f)$ be homogeneous coordinates in $\mathbb{P}^5$.
+
+**Example 2.1.** ($k=3$) Let
+
+$$B^t = \begin{bmatrix}
+a & b & c & & & d & e & f & & \\
+ & a & b & c & & & d & e & f & \\
+ & & a & b & c & & & d & e & f
+\end{bmatrix}$$
+
+$$A = \begin{bmatrix}
+f & e & d & & & -c & -b & -a & & \\
+ & f & e & d & & & -c & -b & -a & \\
+ & & f & e & d & & & -c & -b & -a
+\end{bmatrix}$$
+
+The corresponding monad gives a special symplectic instanton bundle on $\mathbb{P}^5$ with $k=3$. With the notation of Remark 1.10, using [BaS] we can compute $\text{syz}_0 C = 14$, $\text{syz}_1 C = 174$. Hence $h^2(E \otimes E^*) = 3$ from the formulas of Remark 1.10. Moreover $h^0(E(1)) = 4$.
+
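+Plugging $n = 2$, $k = 3$ and $\mathrm{syz}_1 C = 174$ into the formulas of Remark 1.10, the arithmetic spelled out is:
+
+```latex
+h^2(E \otimes E^*) = k(n+1)\bigl[k(2n-5) - 8n\bigr] + \mathrm{syz}_1 C
+                   = 9 \cdot (-19) + 174 = 3,
+\\
+h^1(E \otimes E^*) = 1 - 6k^2 - 8kn - 4n^2 + \mathrm{syz}_1 C
+                   = 1 - 54 - 48 - 16 + 174 = 57.
+```
+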
+**Example 2.2.** ($k=3$) Let $B^t$ be as in Example 2.1 and
+
+$$A = \begin{bmatrix}
+f & e & d & -c-b-a \\
+c & d & 2f-b-a & -2c \\
+d & f & e & -c-b
+\end{bmatrix}$$
+
+We have $\text{syz}_0 C = 10$, $\text{syz}_1 C = 171$. Hence $h^2(E \otimes E^*) = 0$. We can also compute the syzygies of $B^t$ and $A$, obtaining $h^0(E(1)) = 4$ and $h^0(E^*(1)) = 3$; hence $E$ is not self-dual.
+
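+The same formulas of Remark 1.10 with $\mathrm{syz}_1 C = 171$ give, spelled out:
+
+```latex
+h^2(E \otimes E^*) = 9 \cdot (-19) + 171 = 0, \qquad
+h^1(E \otimes E^*) = 1 - 54 - 48 - 16 + 171 = 54,
+```
+
+which is the dimension appearing later in the proof of Theorem 3.3.
+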
+**Example 2.3.** ($k=4$) Let
+
+$$B^t = \begin{bmatrix}
+a & b & c & & & & d & e & f & & & \\
+ & a & b & c & & & & d & e & f & & \\
+ & & a & b & c & & & & d & e & f & \\
+ & & & a & b & c & & & & d & e & f
+\end{bmatrix}$$
+
+$$A = \begin{bmatrix}
+f & e & d & & & & -c & -b & -a & & & \\
+ & f & e & d & & & & -c & -b & -a & & \\
+ & & f & e & d & & & & -c & -b & -a & \\
+ & & & f & e & d & & & & -c & -b & -a
+\end{bmatrix}$$
\ No newline at end of file
diff --git a/samples/texts/1693838/page_6.md b/samples/texts/1693838/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..56cc5af1abf061986e2d9ea4ed815e83b49cbf5c
--- /dev/null
+++ b/samples/texts/1693838/page_6.md
@@ -0,0 +1,39 @@
+$E$ is a special symplectic instanton bundle with $k = 4$. We compute
+
+$$h^2(E \otimes E^*) = 12.$$
+
+**Example 2.4.** ($k=4$) Let $B^t$ be as in Example 2.3. Let
+
+$$A = \begin{bmatrix}
+f & e & d & & -c & -b & -a \\
+& e & d & 2f & -b & -a & -2c \\
+3d & f & e & -3a & & -c & -b \\
+& f & e & d & -c & -b & -a
+\end{bmatrix}$$
+
+In this case $h^2(E \otimes E^*) = 6$, $h^0(E(1)) = 4$, $h^0(E^*(1)) = 3$.
+
+**Example 2.5.** ($k=4$) Let $B^t$ be as in Example 2.3. Let
+
+$$A = \begin{bmatrix}
+f & e & d & & -c & -b & -a \\
+e & d & 2f & -b & -a & -2c \\
+3d & f & e & -3a & & -c & -b \\
+5d & f & e & -5a & -c & -b - a - c & -b
+\end{bmatrix}$$
+
+Now $h^2(E \otimes E^*) = 0$, $h^0(E(1)) = 4$, $h^0(E^*(1)) = 2$.
+
+## 3. On the singularities of moduli spaces.
+
+The stable Schwarzenberger type bundles on $\mathbb{P}^m$ (see (1.2)) form a Zariski open subset of the moduli space of stable bundles. Let $N_{\mathbb{P}^m}(k, q)$ be the moduli space of stable STB whose first Chern class is $k$ and whose rank is $q$. The following proposition is easy and well known:
+
+**Proposition 3.1.** The space $N_{\mathbb{P}^m}(k,q)$ is smooth, irreducible of dimension $1-k^2-(q+k)^2+k(q+k)(m+1)$.
+
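+For the STBs arising from instanton monads (1.1), one has $d = 2n+2k$ and $c = k$ in (1.2), so the rank is $q = 2n+k$ and $c_1 = k$. For instance, with $k = 2$, $q = 2n+2$ and $m = 2n+1$ the formula of Proposition 3.1 gives (arithmetic spelled out):
+
+```latex
+\dim N_{\mathbb{P}^{2n+1}}(2,\,2n+2)
+  = 1 - 4 - (2n+4)^2 + 2(2n+4)(2n+2)
+  = 1 - 4 - (4n^2+16n+16) + (8n^2+24n+16)
+  = 4n^2 + 8n - 3.
+```
+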
+We denote by $\text{MI}_{\mathbb{P}^{2n+1}}(k)$ the moduli space of stable instanton bundles with quantum number $k$. It is an open subset of the moduli space of stable $2n$-bundles on $\mathbb{P}^{2n+1}$ with Chern polynomial $(1-t^2)^{-k}$.
+
+On $\mathbb{P}^5$ (as on $\mathbb{P}^3$) all instanton bundles are stable by [AO], Theorem 3.6. $\text{MI}_{\mathbb{P}^{2n+1}}(2)$ is smooth ([AO] Theorem 3.14), unirational of dimension $4n^2 + 12n - 3$ and has zero Euler-Poincaré characteristic ([BE], [K]).
+
+**Theorem 3.2.** The space $\text{MI}_{\mathbb{P}^{2n+1}}(2)$ is irreducible.
+
+*Proof.* The moduli space $N = N_{\mathbb{P}^{2n+1}}(2, 2n+2)$ of stable STB of rank $2n+2$ and $c_1 = 2$ is irreducible of dimension $4n^2 + 8n - 3$ by Prop. 3.1.
\ No newline at end of file
diff --git a/samples/texts/1693838/page_7.md b/samples/texts/1693838/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..50838144bc5f515193000b3966e92270b6d3bfcd
--- /dev/null
+++ b/samples/texts/1693838/page_7.md
@@ -0,0 +1,19 @@
+For a given instanton bundle $E$ there is a STB $S$ associated with $E$, which is stable ([AO], Theorem 2.8) and unique (ibid., Prop. 2.17). It is easy to prove that the map $\pi: M \to N$ defined by $\pi([E]) = [S]$, where $M = \mathrm{MI}_{\mathbb{P}^{2n+1}}(2)$, is algebraic; moreover $\pi$ is dominant by [ST]. If $m = [E] \in M$, the fiber $\pi^{-1}(\pi(m))$ is a Zariski open subset of the Grassmannian of planes in the vector space $H^0(\mathbb{P}^{2n+1}, S^*(1))$, where $\pi(m) = [S]$; by Theorem 3.14 of [AO], $h^0(\mathbb{P}^{2n+1}, S^*(1)) = 2n+2$, hence $\dim \pi^{-1}(\pi(m)) = 4n$.
+
+In order to prove that $M$ is irreducible, suppose by contradiction that $M$ has at least two irreducible components $M_0$ and $M_1$. Then $M_0 \cap M_1 = \emptyset$ (because $M$ is smooth), and $\pi(M_0)$ and $\pi(M_1)$ are constructible subsets of $N$ by Chevalley's theorem. Looking at the dimensions of $M_0, M_1, N$ and of the fibers of $\pi$, we conclude that both $\pi(M_0)$ and $\pi(M_1)$ must contain an open subset of $N$, which implies $\pi(M_0) \cap \pi(M_1) \neq \emptyset$ by the irreducibility of $N$. This is a contradiction because the fibers of $\pi$ are connected. □
+
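+Summarizing the dimension count in the proof above: the fiber is an open subset of the Grassmannian of $2$-planes in a $(2n+2)$-dimensional space, of dimension $2 \cdot 2n = 4n$, so
+
+```latex
+\dim M = \dim N + \dim \pi^{-1}(\pi(m))
+       = (4n^2 + 8n - 3) + 4n
+       = 4n^2 + 12n - 3,
+```
+
+matching the dimension of $\mathrm{MI}_{\mathbb{P}^{2n+1}}(2)$ quoted before Theorem 3.2.
+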
+For $n \ge 2$ and $k \ge 3$, it is no longer true that $\mathrm{MI}_{\mathbb{P}^{2n+1}}(k)$ is smooth. In fact on $\mathbb{P}^5$ we have:
+
+**Theorem 3.3.** The space $\mathrm{MI}_{\mathbb{P}^5}(k)$ is singular for $k=3,4$. To be more precise, the irreducible component $M_0(k)$ of $\mathrm{MI}_{\mathbb{P}^5}(k)$ containing the special instanton bundles is generically reduced of dimension $54$ ($k=3$) or $65$ ($k=4$), and $\mathrm{MI}_{\mathbb{P}^5}(k)$ is singular at the points corresponding to special symplectic instanton bundles.
+
+*Proof.* Let $E_0$ be the instanton bundle on $\mathbb{P}^5$ of Example 2.2 ($k=3$) or of Example 2.5 ($k=4$). Then $h^2(E_0 \otimes E_0^*) = 0$ and $M_0(k)$ is smooth at the point corresponding to $E_0$, of dimension $h^1(E_0 \otimes E_0^*) = 54$ ($k=3$) or $65$ ($k=4$). In particular, $M_0(k)$ is generically reduced. If $E_1$ is a special symplectic instanton bundle on $\mathbb{P}^5$, the computations in Examples 2.1 and 2.3 show that $h^2(E_1 \otimes E_1^*) = 3$ ($k=3$) or $12$ ($k=4$), and $h^1(E_1 \otimes E_1^*) = 57$ or $77$ respectively. Hence $\mathrm{MI}_{\mathbb{P}^5}(k)$ is singular at $E_1$ for $k=3$ and $4$. □
+
+**Remark 3.4.** It is natural to conjecture that $\mathrm{MI}_{\mathbb{P}^{2n+1}}(k)$ is singular for all $n \ge 2$ and $k \ge 3$.
+
+**Theorem 3.5.** Let $E$ be an instanton bundle on $\mathbb{P}^{2n+1}$ with $c_2(E) = k$. Then
+
+$$h^1(E(t)) = 0 \text{ for } t \le -2 \text{ and } k-1 \le t.$$
+
+*Proof.* The result is obvious for $t \le -2$. It is sufficient to prove $h^1(S^*(t)) = 0$ for $t \ge k-1$. We have
+
+$$S^*(t) = \bigwedge^{2n+k-1} S(t-k).$$
\ No newline at end of file
diff --git a/samples/texts/1693838/page_8.md b/samples/texts/1693838/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..6b1cc3585c1d92ec2161817e304866d944209109
--- /dev/null
+++ b/samples/texts/1693838/page_8.md
@@ -0,0 +1,81 @@
+Taking wedge products of (1.2) we have the exact sequence
+
+$$
+\begin{align*}
+0 \to \mathcal{O}(t+1-2n-2k)^{\alpha_0} \to \dots \to \mathcal{O}(t-k-1)^{\alpha_{2n+k-2}} \\
+\qquad \to \mathcal{O}(t-k)^{\alpha_{2n+k-1}} \to \bigwedge^{2n+k-1} S(t-k) \to 0
+\end{align*}
+$$
+
+for suitable $\alpha_i \in \mathbb{N}$, and from this sequence we can conclude. $\square$
+
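+The identity $S^*(t) = \bigl(\bigwedge^{2n+k-1} S\bigr)(t-k)$ used above follows from standard facts, which we spell out: here $S$ has rank $2n+k$ and $\det S = \mathcal{O}(k)$ (from (1.2) with $d = 2n+2k$, $c = k$), hence
+
+```latex
+\bigwedge^{2n+k-1} S \;\cong\; S^* \otimes \det S \;=\; S^*(k).
+```
+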
+Ellia proves Theorem 3.5 in the case of $\mathbb{P}^3$ ([E], Prop. IV.1). He also remarks that the given bound is sharp. The same holds on $\mathbb{P}^{2n+1}$, as is shown by the following theorem, which points out that the special symplectic instanton bundles are the “furthest” from having natural cohomology.
+
+**Theorem 3.6.** Let $E$ be a special symplectic instanton bundle on $\mathbb{P}^{2n+1}$ with $c_2 = k$. Then
+
+$$h^1(E(t)) \neq 0 \text{ for } -1 \le t \le k-2.$$
+
+*Proof*. For $n=1$ the claim is immediate from the exact sequence
+
+$$0 \rightarrow \mathcal{O}(t-1) \rightarrow E(t) \rightarrow \mathcal{J}_C(t+1) \rightarrow 0$$
+
+where $C$ is the union of $k+1$ disjoint lines in a smooth quadric surface. Then the result follows by induction on $n$ by considering the sequence
+
+$$0 \rightarrow E(t-2) \rightarrow E(t-1)^2 \rightarrow E(t) \rightarrow E(t)|_{\mathbb{P}^{2n-1}} \rightarrow 0$$
+
+and the fact that, for a particular choice of the subspace $\mathbb{P}^{2n-1}$, the restriction $E|_{\mathbb{P}^{2n-1}}$ splits as the direct sum of a rank-2 trivial bundle and a special symplectic instanton bundle on $\mathbb{P}^{2n-1}$ ([ST] 5.9). $\square$
+
+**Remark 3.7.** In [OT] it is proved that if $E_k$ is a special symplectic instanton bundle on $\mathbb{P}^5$ with $c_2 = k$ then $h^1(\mathrm{End}\,E_k) = 20k - 3$.
+
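+As a cross-check, for $k = 3, 4$ this formula agrees with the values obtained from the syzygy computations of Section 2 and used in the proof of Theorem 3.3:
+
+```latex
+20 \cdot 3 - 3 = 57 = h^1(E_3 \otimes E_3^*), \qquad
+20 \cdot 4 - 3 = 77 = h^1(E_4 \otimes E_4^*).
+```
+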
+In the following table we summarize what we know about the component $M_0(k) \subset \mathrm{MI}_{\mathbb{P}^5}(k)$ containing $E_k$.
+
+**Table 3.10**
+
+| | $h^1(E_k \otimes E_k^*)$ | $h^2(E_k \otimes E_k^*)$ | $\dim M_0(k)$ | $\mathrm{MI}_{\mathbb{P}^5}(k)$ |
+| --- | --- | --- | --- | --- |
+| $k = 1$ | $14$ | $0$ | $14$ | open subset of $\mathbb{P}^{14}$ |
+| $k = 2$ | $37$ | $0$ | $37$ | smooth, irreduc., unirat. |
+| $k = 3$ | $57$ | $3$ | $54$ | singular |
+| $k = 4$ | $77$ | $12$ | $65$ | singular |
+| $k \ge 2$ | $20k - 3$ | $3(k-2)^2$ | ? | ? |
diff --git a/samples/texts/1693838/page_9.md b/samples/texts/1693838/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..8ed2bd59f9f8cae7cecc16bd3fe38c1f7205ab41
--- /dev/null
+++ b/samples/texts/1693838/page_9.md
@@ -0,0 +1,56 @@
+References
+
+[AO] V. Ancona and G. Ottaviani, *On the stability of special instanton bundles on P2n+1*,
+Trans. Amer. Math. Soc., **341** (1994), 677-693.
+
+[BE] J. Bertin and G. Elencwajg, *Symétries des fibrés vectoriels sur Pn et nombre d'Euler*, Duke Math. J., **49** (1982), 807-831.
+
+[BaS] D. Bayer and M. Stillman, *Macaulay, a computer algebra system for algebraic geometry*.
+
+[BoS] G. Bohnhorst and H. Spindler, *The stability of certain vector bundles on Pn*,
+Proc. Bayreuth Conference "Complex Algebraic Varieties", LNM **1507**, Springer Berlin
+(1992), 39-50.
+
+[E] Ph. Ellia, *Some vanishings for the cohomology of stable rank two vector bundles on P3*,
+J. reine angew. Math., **451** (1994), 1-14.
+
+[ES] G. Ellingsrud and S.A. Stromme, *Stable rank-2 bundles on P3 with c1 = 0 and c2 = 3*, Math. Ann., **255** (1981), 123-137.
+
+[Har] R. Hartshorne, *Stable vector bundles of rank 2 on P3*,
+Math. Ann., **238** (1978), 229-280.
+
+[K] T. Kaneyama, *Torus-equivariant vector bundles on projective spaces*, Nagoya Math.
+J., **111** (1988), 25-40.
+
+[M] M. Maruyama, *Moduli of stable sheaves*, II, J. Math. Kyoto Univ., **18** (1978),
+557-614.
+
+[OS] C. Okonek and H. Spindler, *Mathematical instanton bundles on P2n+1*,
+Journal reine angew. Math., **364** (1986), 35-50.
+
+[OT] G. Ottaviani and G. Trautmann, *The tangent space at a special symplectic instanton bundle on P2n+1*,
+Manuscr. Math., **85** (1994), 97-107.
+
+[P] J. Le Potier, *Sur l'espace de modules des fibrés de Yang et Mills*, in Mathématique et Physique, Sém. E.N.S. 1979-1982, Basel-Stuttgart-Boston 1983.
+
+[S] R.L.E. Schwarzenberger, *Vector bundles on the projective plane*, Proc. London Math. Soc., **11** (1961), 623-640.
+
+[ST] H. Spindler and G. Trautmann, *Special instanton bundles on P2n+1, their geometry and their moduli*, Math. Ann., **286** (1990), 559-592.
+
+Received January 5, 1993. Both authors were supported by MURST and by GNSAGA of CNR.
+
+DIPARTIMENTO DI MATEMATICA
+VIALE MORGAGNI 67 A
+I-50134 FIRENZE
+
+E-mail address: ancona@udininw.math.unifi.it
+
+AND
+
+DIPARTIMENTO DI MATEMATICA
+VIA VETOIO, COPPITO
+I-67010 L'AQUILA
+
+E-mail address: ottaviani@vxscaq.aquila.infn.it
+
+Added in proof. After this paper was written, we received a preprint of R.M. Miró-Roig and J. Orus-Lacort where they prove that the conjecture stated in Remark 3.4 is true.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_1.md b/samples/texts/1897687/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..576b8a69dfaa2d5847668697b4f53ce5d7fd6382
--- /dev/null
+++ b/samples/texts/1897687/page_1.md
@@ -0,0 +1,27 @@
+New (and Old) Proof Systems for Lattice Problems
+
+Navid Alamati*
+
+Chris Peikert†
+
+Noah Stephens-Davidowitz‡
+
+December 19, 2017
+
+Abstract
+
+We continue the study of statistical zero-knowledge (SZK) proofs, both interactive and noninteractive, for computational problems on point lattices. We are particularly interested in the problem GapSPP of approximating the $\epsilon$-smoothing parameter (for some $\epsilon < 1/2$) of an $n$-dimensional lattice. The smoothing parameter is a key quantity in the study of lattices, and GapSPP has been emerging as a core problem in lattice-based cryptography, e.g., in worst-case to average-case reductions.
+
+We show that GapSPP admits SZK proofs for remarkably low approximation factors, improving on prior work by up to roughly $\sqrt{n}$. Specifically:
+
+* There is a noninteractive SZK proof for $O(\log(n)\sqrt{\log(1/\epsilon)})$-approximate GapSPP. Moreover, for any negligible $\epsilon$ and a larger approximation factor $\tilde{O}(\sqrt{n\log(1/\epsilon)})$, there is such a proof with an efficient prover.
+
+* There is an (interactive) SZK proof with an efficient prover for $O(\log n + \sqrt{\log(1/\epsilon)/\log n})$-approximate coGapSPP. We show this by proving that $O(\log n)$-approximate GapSPP is in coNP.
+
+In addition, we give an (interactive) SZK proof with an efficient prover for approximating the lattice covering radius to within an $O(\sqrt{n})$ factor, improving upon the prior best factor of $\omega(\sqrt{n \log n})$.
+
+*Computer Science and Engineering, University of Michigan. Email: alamati@umich.edu.
+
+†Computer Science and Engineering, University of Michigan. Email: cpeikert@umich.edu. This material is based upon work supported by the National Science Foundation under CAREER Award CCF-1054495 and CNS-1606362, the Alfred P. Sloan Foundation, and by a Google Research Award. The views expressed are those of the authors and do not necessarily reflect the official policy or position of the National Science Foundation, the Sloan Foundation, or Google.
+
+‡Courant Institute of Mathematical Sciences, New York University. Email: noahsd@gmail.com. Supported by the National Science Foundation (NSF) under Grant No. CCF-1320188, and the Defense Advanced Research Projects Agency (DARPA) and Army Research Office (ARO) under Contract No. W911NF-15-C-0236. Part of this work was done while visiting the second author at the University of Michigan.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_10.md b/samples/texts/1897687/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..420564bd810de3401d0b2436d8b2100c8f25151b
--- /dev/null
+++ b/samples/texts/1897687/page_10.md
@@ -0,0 +1,37 @@
+**Theorem 2.11.** There is an efficient algorithm that takes as input a basis $\mathbf{B} \in \mathbb{Q}^{n \times n}$ and any parameter $s \ge \|\tilde{\mathbf{B}}\| \cdot \sqrt{\log n}$ and outputs a sample from $D_{\mathcal{L},s}$, where $\mathcal{L} \subset \mathbb{R}^n$ is the lattice generated by $\mathbf{B}$.
+
+**Corollary 2.12.** There is an efficient algorithm that takes as input a (basis for a) lattice $\mathcal{L} \subset \mathbb{Q}^n$ and parameter $s \ge 2^n \eta_\varepsilon(\mathcal{L})$ and outputs a sample from $D_{\mathcal{L},s}$.
+
+*Proof.* Combine the above with the celebrated LLL algorithm [LLL82], which in particular allows us to find a basis for $\mathcal{L}$ with $\|\tilde{\mathbf{B}}\| \le 2^{n/2}\eta_\varepsilon(\mathcal{L})$. $\square$
+
+We also need the following result, which is implicit in [Ban93]. See, e.g., [DR16] for a proof.
+
+**Lemma 2.13.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ and $\varepsilon \in (0, 1/2)$,
+
+$$ \lambda_n(\mathcal{L}) \le 2\mu(\mathcal{L}) \le \sqrt{n} \cdot \eta_\varepsilon(\mathcal{L}). $$
+
+In particular, there exists a basis $\mathbf{B}$ of $\mathcal{L}$ with $\|\tilde{\mathbf{B}}\| \le \lambda_n(\mathcal{L}) \le \sqrt{n} \cdot \eta_{1/2}(\mathcal{L})$.
+
+**Corollary 2.14.** For any lattice $\mathcal{L} \subset \mathbb{Q}^n$ with basis $\mathbf{B}$, there exists preprocessing $P$ whose size is polynomial in the bit length of $\mathbf{B}$, and an efficient algorithm that, on input $P$ and $s \ge \sqrt{n \log n} \cdot \eta_{1/2}(\mathcal{L})$, outputs a sample from $D_{\mathcal{L},s}$.
+
+*Proof.* By Lemma 2.13, there exists a basis $\mathbf{B}'$ with $\|\tilde{\mathbf{B}}'\| \le \sqrt{n} \cdot \eta_{1/2}(\mathcal{L})$. By Lemma 2.10, the bit length of $\mathbf{B}'$ is polynomial in the bit length of $\mathbf{B}$. We use this as our preprocessing $P$. The result then follows by Theorem 2.11. $\square$
+
+## 2.5 Computational Problems
+
+Here we define two promise problems that will be considered in this paper.
+
+**Definition 2.15 (Covering Radius Problem).** For any approximation factor $\gamma = \gamma(n) \ge 1$, an instance of $\gamma$-GapCRP is a (basis for a) lattice $\mathcal{L} \subset \mathbb{Q}^n$. It is a YES instance if $\mu(\mathcal{L}) \le 1$ and a NO instance if $\mu(\mathcal{L}) > \gamma$.
+
+**Definition 2.16 (Smoothing Parameter Problem).** For any approximation factor $\gamma = \gamma(n) \ge 1$ and $\varepsilon = \varepsilon(n) > 0$, an instance of $\gamma$-GapSPP$_\varepsilon$ is a (basis for a) lattice $\mathcal{L} \subset \mathbb{Q}^n$. It is a YES instance if $\eta_\varepsilon(\mathcal{L}) \le 1$ and a NO instance if $\eta_\varepsilon(\mathcal{L}) > \gamma$.
+
+We will need the following result from [CDLP13].
+
+**Theorem 2.17.** For any $\varepsilon \in (0, 1/2)$, $\gamma$-GapSPP$_\varepsilon$ is in SZK for $\gamma = O(1 + \sqrt{\log(1/\varepsilon)/\log n})$.⁴
+
+## 2.6 Noninteractive Proof Systems
+
+**Definition 2.18 (Noninteractive Proof System).** A pair $(P, V)$ is a noninteractive proof system for a promise problem $\Pi = (\Pi^{\text{YES}}, \Pi^{\text{NO}})$ if $P$ is a (possibly unbounded) algorithm and $V$ is a polynomial-time algorithm such that
+
+* Completeness: for every $x \in \Pi_n^{\text{YES}}$, $\Pr[V(x, r, P(x, r)) \text{ accepts}] \ge 1 - \varepsilon$; and
+
+⁴In [CDLP13], this result is proven only for $\varepsilon < 1/3$. However, it is immediate from, e.g., Lemma 2.9 that the result can be extended to any $\varepsilon < 1/2$.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_11.md b/samples/texts/1897687/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..1cf737fd4a1a79a2f226d1242bdafa58dade76a4
--- /dev/null
+++ b/samples/texts/1897687/page_11.md
@@ -0,0 +1,43 @@
+* Soundness: for every $x \in \Pi_n^{\text{NO}}$, $\Pr[\exists \pi : V(x, r, \pi) \text{ accepts}] \le \varepsilon$,
+
+where $n$ is the input length, $\varepsilon = \varepsilon(n) \le \text{negl}(n)$, and the probabilities are taken over $r$, which is sampled uniformly at random from $\{0, 1\}^{\text{poly}(n)}$.
+
+A noninteractive proof system $(P, V)$ for a promise problem $\Pi = (\Pi^{\text{YES}}, \Pi^{\text{NO}})$ is statistical zero knowledge if there exists a probabilistic polynomial-time algorithm $S$ (called a *simulator*) such that for all $x \in \Pi^{\text{YES}}$, the statistical distance between $S(x)$ and $(r, P(x, r))$ is negligible in $n$. The class of promise problems having noninteractive statistical zero-knowledge proof systems is denoted NISZK.
+
+## 2.7 Probability
+
+The entropy of a random variable $X$ over a countable set $S$ is given by
+
+$$H(X) := \sum_{a \in S} \Pr[X = a] \cdot \log_2(1/\Pr[X = a]) .$$
+
+We will also need the Chernoff-Hoeffding bound [Hoe63].
+
+**Lemma 2.19 (Chernoff-Hoeffding bound).** Let $X_1, \dots, X_m \in [0, 1]$ be independent and identically distributed random variables with $\bar{X} := \mathbb{E}[X_i]$. Then, for any $s > 0$,
+
+$$\Pr[m\bar{X} - \sum X_i \ge s] \le \exp(-s^2/(2m)) .$$
+
+Finally, we will need a minor variant of the above inequality.
+
+**Lemma 2.20.** Let $X_1, \dots, X_m \in \mathbb{R}$ be independent (but not necessarily identically distributed) random variables. Suppose that there exists an $\alpha \ge 0$ and $s > 0$ such that for any $r > 0$,
+
+$$\Pr[|X_i| \ge r] \le \alpha \exp(-r^2/s^2) .$$
+
+Then, for any $r > 0$,
+
+$$\Pr\left[\sum X_i^2 \ge r\right] \le (1+\alpha)^m \exp(-r/(2s^2)).$$
+
+*Proof.* For any index $i$, we have
+
+$$
+\begin{align*}
+\mathbb{E}[\exp(X_i^2/(2s^2))] &= 1 + \frac{1}{s^2} \cdot \int_0^\infty r \exp(r^2/(2s^2)) \Pr[|X_i| \ge r] dr \\
+&\le 1 + \frac{\alpha}{s^2} \cdot \int_0^\infty r \exp(-r^2/(2s^2)) dr \\
+&= 1 + \alpha.
+\end{align*}
+$$
+
+Since the $X_i$ are independent, it follows that
+
+$$\mathbb{E}\left[\exp\left(\sum X_i^2/(2s^2)\right)\right] = \mathbb{E}\left[\prod_i \exp(X_i^2/(2s^2))\right] \le (1+\alpha)^m .$$
+
+The result then follows by Markov's inequality: $\Pr\left[\sum X_i^2 \ge r\right] = \Pr\left[\exp\left(\sum X_i^2/(2s^2)\right) \ge \exp(r/(2s^2))\right] \le (1+\alpha)^m \exp(-r/(2s^2))$. $\square$
\ No newline at end of file
diff --git a/samples/texts/1897687/page_12.md b/samples/texts/1897687/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..c341768c86ce6f7a2f9716bf1b45bb64f3efc86a
--- /dev/null
+++ b/samples/texts/1897687/page_12.md
@@ -0,0 +1,51 @@
+# 3 Two NISZK Proofs for GapSPP
+
+Recall the definition
+
+$$
+\eta_{\det}(\mathcal{L}) := \max_{\pi} \det(\pi(\mathcal{L}))^{1/\mathrm{rank}(\pi(\mathcal{L}))}.
+$$
+
+We will also need the following definition from [DR16]:
+
+$$
+C_{\eta}(n) := \sup_{\mathcal{L}} \frac{\eta_{1/2}(\mathcal{L})}{\eta_{\det}(\mathcal{L})},
+$$
+
+where the supremum is taken over all lattices $\mathcal{L} \subset \mathbb{R}^n$. In this notation, Theorem 1.6 is equivalent to the
+inequality
+
+$$
+C_{\eta}(n) \leq 10(\log n + 2).
+$$
+
+We note that the true value of $C_\eta(n)$ is still not known. (In particular, the best lower bound is $C_\eta(n) \ge \sqrt{\log(n)/\pi} + o(1)$, which follows from the fact that $\eta_{1/2}(\mathbb{Z}^n) = \sqrt{\log(n)/\pi} + o(1)$.) We therefore state our results in terms of $C_\eta(n)$.
+
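+Combining the two displayed bounds, the current state of knowledge on $C_\eta(n)$ can be summarized as
+
+```latex
+\sqrt{\log(n)/\pi} + o(1) \;\le\; C_\eta(n) \;\le\; 10(\log n + 2),
+```
+
+i.e., $C_\eta(n)$ is pinned down only up to a multiplicative gap of roughly $\sqrt{\log n}$.
+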
+## 3.1 An Explicit Proof System
+
+We first consider the NISZK proof system for $\sqrt{n}$-coGapSVP due to [PV08], shown in Figure 1. We show that this is actually also a NISZK proof system for $O(\sqrt{\log(1/\epsilon)} \cdot \log n)$-GapSPP$_\epsilon$ for negligible $\epsilon$. (In Section 3.2, we show a different proof system that works for all $\epsilon \in (0, 1/2)$, also with an approximation factor of $O(\log(n)\sqrt{\log(1/\epsilon)})$.)
+
+NISZK proof system for GapSPP.
+
+**Common Input:** A basis $\mathbf{B}$ for a lattice $\mathcal{L} \subset \mathbb{Q}^n$.
+
+**Random Input:** $m$ vectors $t_1, \dots, t_m \in \mathcal{P}(\mathbf{B})$, sampled uniformly at random.
+
+**Prover P:** Sample $m$ vectors $e_1, \dots, e_m \in \mathbb{R}^n$ independently from $D_{\mathcal{L}+t_i}$, and output them as the
+proof.
+
+**Verifier V:** Accept if and only if $e_i \equiv t_i \bmod \mathcal{L}$ for all $i$ and $\left\| \sum e_i e_i^T \right\| \le 3m$.
+
+Figure 1: The non-interactive zero-knowledge proof system for GapSPP, where $m := 100n$.
+
+**Theorem 3.1.** For any $\varepsilon \le \text{negl}(n)$, $\gamma$-GapSPP$_\varepsilon$ is in NISZK for
+
+$$
+\gamma := O(C_{\eta}(n) \sqrt{\log(1/\varepsilon)}) \le O(\log(n) \sqrt{\log(1/\varepsilon)})
+$$
+
+via the proof system shown in Figure 1.
+
+We will prove in turn that the proof system is statistical zero knowledge, complete, and sound. In fact,
+the proofs of statistical zero knowledge and completeness are nearly identical to the corresponding proofs
+in [PV08].
\ No newline at end of file
diff --git a/samples/texts/1897687/page_13.md b/samples/texts/1897687/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..a992c27ccd985895e0480b43f31adf30bab91278
--- /dev/null
+++ b/samples/texts/1897687/page_13.md
@@ -0,0 +1,34 @@
+To prove the zero-knowledge property of the proof system, we consider the simulator that behaves as follows. Let $\mathbf{e}_1, \dots, \mathbf{e}_m \in \mathbb{R}^n$ be sampled independently from the continuous Gaussian with parameter one centered at $\mathbf{0}$. Let $\mathbf{t}_1, \dots, \mathbf{t}_m \in \mathcal{P}(\mathbf{B})$ be such that $\mathbf{e}_i \equiv \mathbf{t}_i \bmod \mathcal{L}$. The simulator then outputs $\mathbf{t}_1, \dots, \mathbf{t}_m$ as the random input and $\mathbf{e}_1, \dots, \mathbf{e}_m$ as the proof.
+
+**Lemma 3.2 (Statistical zero knowledge).** For any $\varepsilon \in (0, 1)$ and lattice $\mathcal{L} \subset \mathbb{Q}^n$ with $\eta_\varepsilon(\mathcal{L}) \le 1$, the output of the simulator described above is within statistical distance $\varepsilon m$ of an honestly generated random input and proof as in Figure 1. In particular, the proof system in Figure 1 is statistical zero knowledge for negligible $\varepsilon$.
+
+*Proof.* Notice that, conditioned on the random input $\mathbf{t}_i$, the distribution of $\mathbf{e}_i$ is exactly $D_{\mathcal{L}+\mathbf{t}_i}$. So, we only need to show that the random input $\mathbf{t}_1, \dots, \mathbf{t}_m \in \mathcal{P}(\mathbf{B})$ chosen by the simulator is within statistical distance $\varepsilon m$ of uniform. Indeed, this follows from Lemma 2.8 and the union bound. $\square$
+
+The proof of completeness is a bit tedious and nearly identical to proofs of similar statements in [AR04, PV08, DR16]. We include a proof in Appendix A.
+
+**Lemma 3.3 (Completeness).** For any lattice $\mathcal{L} \subset \mathbb{Q}^n$ with $\eta_{1/2}(\mathcal{L}) \le 1$, the proof given in Figure 1 will be accepted except with negligible probability. I.e., the proof system is complete.
+
+**Soundness.** We now show the soundness of the proof system shown in Figure 1, using Theorem 1.6. We note that [DR16] contains an implicit proof of a very similar result in a different context. (Dadush and Regev conjectured a form of Theorem 1.6 and showed a number of implications [DR16]. In particular, they showed that with non-negligible probability over a single uniformly random shift $\mathbf{t} \in \mathbb{R}^n/\mathcal{L}$, there is no list of vectors $\mathbf{e}_1, \dots, \mathbf{e}_m \in \mathcal{L} + \mathbf{t}$ with small covariance.)
+
+**Theorem 3.4.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ with basis **B** satisfying $\eta_{1/2}(\mathcal{L}) \ge 100C_\eta(n)$ (and in particular any lattice with $\eta_{1/2}(\mathcal{L}) \ge 1000(\log(n) + 2)$), if $\mathbf{t}_1, \dots, \mathbf{t}_m$ are sampled uniformly from $\mathbb{R}^n/\mathcal{L}$, then the probability that there exists any proof $\mathbf{e}_1, \dots, \mathbf{e}_m$ with $\mathbf{e}_i \equiv \mathbf{t}_i \bmod \mathcal{L}$ and
+
+$$
+\left\| \sum_i \mathbf{e}_i \mathbf{e}_i^T \right\| \le 3m
+$$
+
+is at most $\exp(-\Omega(m^2))$.
+
+*Proof.* By the definition of $C_\eta(n)$, we have $\eta_{\det}(\mathcal{L}) \ge \eta_{1/2}(\mathcal{L})/C_\eta(n) \ge 100$, so there is a lattice projection $\pi$ such that $\det(\pi(\mathcal{L})) \ge 100^k$, where $k := \text{rank}(\pi(\mathcal{L}))$. For any $\mathbf{e}_1, \dots, \mathbf{e}_m$ with $\mathbf{e}_i \equiv \mathbf{t}_i \bmod \mathcal{L}$, we have
+
+$$
+\begin{align*}
+\left\| \sum_i \mathbf{e}_i \mathbf{e}_i^T \right\| &\geq \left\| \sum_i \pi(\mathbf{e}_i) \pi(\mathbf{e}_i)^T \right\| \\
+&\geq \frac{1}{k} \operatorname{Tr} \left( \sum_i \pi(\mathbf{e}_i) \pi(\mathbf{e}_i)^T \right) \\
+&= \frac{1}{k} \sum_i \| \pi(\mathbf{e}_i) \|_2^2 \\
+&\geq \frac{1}{k} \sum_i \operatorname{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L}))^2,
+\end{align*}
+$$
+
+where the first inequality on the spectral norms follows from the fact that $\langle \mathbf{u}, \pi(\mathbf{e}_i) \rangle = \langle \pi(\mathbf{u}), \pi(\mathbf{e}_i) \rangle$ and
+$\|\pi(\mathbf{u})\| \leq \|\mathbf{u}\|$; the second inequality follows from the fact that the spectral norm is the largest eigenvalue
+and the trace is the sum of the $k$ eigenvalues; and the equality is by definition of trace.
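
The middle step above, $\|M\| \ge \operatorname{Tr}(M)/k$ for a symmetric PSD $k \times k$ matrix $M$, can be sanity-checked directly; a minimal sketch (not from the paper) in dimension $k = 2$, where the largest eigenvalue has a closed form:

```python
import math
import random

random.seed(0)

def lmax2(M):
    # Largest eigenvalue of a symmetric 2x2 matrix [[a, b], [b, c]].
    (a, b), (_, c) = M
    return (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)

# M = sum_i e_i e_i^T for random vectors e_i in R^2 (so M is PSD).
es = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
M = [[sum(e[i] * e[j] for e in es) for j in range(2)] for i in range(2)]

# Tr(M)/k equals (1/k) * sum_i ||e_i||^2, as in the equality step above.
trace_over_k = (M[0][0] + M[1][1]) / 2
print(lmax2(M) >= trace_over_k)  # True
```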
\ No newline at end of file
diff --git a/samples/texts/1897687/page_14.md b/samples/texts/1897687/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..6f6f7d1ea701dc548f1ab44a27841956547a149a
--- /dev/null
+++ b/samples/texts/1897687/page_14.md
@@ -0,0 +1,32 @@
+Now by Claim 2.4, $\pi(\mathbf{t}_i)$ is uniformly distributed mod $\pi(\mathcal{L})$, and therefore by Lemma 2.3,
+
+$$ \mathbb{E}[\mathrm{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L}))^2] \geq \mu(\pi(\mathcal{L}))^2/4. $$
+
+Furthermore, since the $\mathbf{t}_i$ are independent and identically distributed with $\mathrm{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L})) \leq \mu(\pi(\mathcal{L}))$, we
+can apply the Chernoff-Hoeffding bound (Lemma 2.19) to get
+
+$$ \mathrm{Pr} \left[ \sum \mathrm{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L}))^2 \leq m\mu(\pi(\mathcal{L}))^2/5 \right] \leq \exp(-Cm^2). $$
+
+The result follows by noting that $\mu(\pi(\mathcal{L}))^2/(5k) \ge 3$, which holds by Claim 2.1 together with the fact that $\det(\pi(\mathcal{L})) \ge 100^k$. $\square$
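
As a one-dimensional sanity check (not from the paper) of the bound $\mathbb{E}[\mathrm{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L}))^2] \ge \mu(\pi(\mathcal{L}))^2/4$ used above: for $\pi(\mathcal{L}) = \mathbb{Z}$ we have $\mu = 1/2$, and a uniform shift satisfies $\mathbb{E}[\mathrm{dist}(t, \mathbb{Z})^2] = 1/12 > 1/16$.

```python
import random

random.seed(0)

def dist_to_Z(t: float) -> float:
    # Distance from t to the nearest integer.
    return abs(t - round(t))

N = 200_000
mean_sq = sum(dist_to_Z(random.random()) ** 2 for _ in range(N)) / N
mu = 0.5  # covering radius of Z
print(mean_sq >= mu ** 2 / 4)  # True: empirical mean ~ 1/12 = 0.083 >= 1/16
```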
+
+**Corollary 3.5 (Soundness).** For any $\varepsilon \in (0, 1/2)$ and lattice $\mathcal{L} \subset \mathbb{R}^n$ with basis **B** satisfying $n \ge 2$ and $\eta_\varepsilon(\mathcal{L}) \ge 100C_\eta(n)\sqrt{\log(1/\varepsilon)}$ (and in particular any lattice with $\eta_\varepsilon(\mathcal{L}) \ge 1000(\log(n)+2)\sqrt{\log(1/\varepsilon)}$), if $\mathbf{t}_1, \dots, \mathbf{t}_m$ are sampled uniformly from $\mathcal{P}(\mathbf{B})$, then the probability that there exists a proof $\mathbf{e}_1, \dots, \mathbf{e}_m$ with $\mathbf{e}_i \equiv \mathbf{t}_i \bmod \mathcal{L}$ and
+
+$$ \left\| \sum \mathbf{e}_i \mathbf{e}_i^T \right\| \le 3m $$
+
+is at most $\exp(-\Omega(m^2))$. In other words, the proof system in Figure 1 is $\exp(-\Omega(m^2))$-statistically sound.
+
+*Proof.* By Lemma 2.9, we have $\eta_{1/2}(\mathcal{L}) \ge 100C_\eta(n)$, and the result follows from Theorem 3.4. $\square$
+
+**Making the prover efficient.** Finally, following [PV08] we observe that the prover in the proof system shown in Figure 1 can be made efficient if we relax the approximation factor. In particular, if $\eta_\epsilon(\mathcal{L}) \le 1/\sqrt{n\log n}$, then by Corollary 2.14, there is in fact an efficient prover. Theorem 1.2 then follows immediately from the above analysis.
+
+## 3.2 A Proof via Entropy Approximation
+
+We recall from Goldreich, Sahai, and Vadhan [GSV99] the Entropy Approximation problem, which asks us to approximate the entropy of the distribution obtained by calling some input circuit $C$ on the uniform distribution over its input space. In particular, we recall that [GSV99] proved that this problem is NISZK-complete. (Formally, we only need the fact that Entropy Approximation is in NISZK.)
+
+**Definition 3.6.** An instance of the Entropy Approximation problem is a circuit $C$ and an integer $k$. It is a YES instance if $H(C(U)) > k+1$ and a NO instance if $H(C(U)) < k-1$, where $U$ is the uniform distribution on the input space of $C$.
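
For intuition (a toy example, not from [GSV99]), the entropy $H(C(U))$ of a very small "circuit" can be computed exhaustively:

```python
import math
from collections import Counter

def entropy_of_circuit(C, n_in: int) -> float:
    # Shannon entropy (in bits) of C(U), U uniform over n_in-bit inputs.
    counts = Counter(C(x) for x in range(2 ** n_in))
    total = 2 ** n_in
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A map that keeps only the low two bits of a 5-bit input: the output is
# uniform over 4 values, so its entropy is exactly 2 bits.
H = entropy_of_circuit(lambda x: x % 4, 5)
print(H)  # 2.0
```

With $k = 2$, this $C$ would be a YES instance only if $H > 3$ and a NO instance only if $H < 1$; here it is neither, illustrating the promise gap in Definition 3.6.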
+
+**Theorem 3.7 ([GSV99]).** *Entropy Approximation is NISZK-complete.*
+
+In the rest of this section, we show a Karp reduction from $O(\log(n)\sqrt{\log(1/\varepsilon)})$-GapSPP$_\varepsilon$ to Entropy Approximation. I.e., we give an efficient algorithm that takes as input a basis for a lattice $\mathcal{L}$ and outputs a circuit $C_\mathcal{L}$ such that (1) if $\eta_\varepsilon(\mathcal{L}) \le 1$, then $H(C_\mathcal{L}(U))$ is large; but (2) if $\eta_\varepsilon(\mathcal{L}) \ge C \log(n)\sqrt{\log(1/\varepsilon)}$, then $H(C_\mathcal{L}(U))$ is small.
+
+Intuitively, we want to use a circuit that samples from the continuous Gaussian with parameter one modulo the lattice $\mathcal{L}$. Then, by Claim 2.7, if $\eta_\varepsilon(\mathcal{L}) \le 1$, the resulting distribution will be nearly uniform over $\mathbb{R}^n/\mathcal{L}$. On the other hand, we know that, with high probability, the continuous Gaussian lies in a set of
\ No newline at end of file
diff --git a/samples/texts/1897687/page_15.md b/samples/texts/1897687/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e15b6e4b2fc9958103244ea9feb32dc18df8aaf
--- /dev/null
+++ b/samples/texts/1897687/page_15.md
@@ -0,0 +1,31 @@
+volume roughly one. And, by definition, if $\eta_\epsilon(\mathcal{L}) \ge \Omega(C_\eta(n)\sqrt{\log(1/\epsilon)})$, then there exists a projection $\pi$ such that, say, $\text{vol}(\pi(\mathbb{R}^n/\mathcal{L})) = \det(\pi(\mathcal{L})) \ge 100$. Therefore, the projected Gaussian lies in a small fraction of $\pi(\mathbb{R}^n/\mathcal{L})$ with high probability.
+
+To make this precise, we must discretize $\mathbb{R}^n/\mathcal{L}$ appropriately to, say, $(\mathcal{L}/q)/\mathcal{L}$ for some large integer $q > 1$ and sample from a discretized version of the continuous Gaussian. Naturally, we choose $D_{\mathcal{L}/q}$. The following theorem shows that $D_{\mathcal{L}/q} \bmod \mathcal{L}$ lies in a small subset of $(\mathcal{L}/q)/\mathcal{L}$ when $\eta_{1/2}(\mathcal{L})$ is large.
+
+**Theorem 3.8.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ with sufficiently large $n$ and integer $q \ge 2^n (\eta_{2^{-n}}(\mathcal{L}) + \mu(\mathcal{L}))$, if $\eta_{1/2}(\mathcal{L}) \ge 1000C_\eta(n)$ (and in particular if $\eta_{1/2}(\mathcal{L}) \ge 10^4(\log(n)+2)$), then there is a subset $S \subset (\mathcal{L}/q)/\mathcal{L}$ with $|S| \le q^n/200$ such that
+
+$$ \Pr_{\mathbf{X} \sim D_{\mathcal{L}/q} \bmod \mathcal{L}} [\mathbf{X} \in S] \ge \frac{9}{10}. $$
+
+*Proof.* It is easy to see that $D_{\mathcal{L}/q}$ is statistically close to the distribution obtained by sampling from a continuous Gaussian with parameter one and rounding to the closest vector in $\mathcal{L}/q$. (One must simply recall from Lemma 2.6 that nearly all of the mass of $D_{\mathcal{L}/q}$ lies in a ball of radius $\sqrt{n}$ and notice that for such short points, shifts of size $\mu(\mathcal{L}/q) < 2^{-n}$ have little effect on the Gaussian mass.) It therefore suffices to show that the above probability is at least 19/20 when $\mathbf{X}$ is sampled from this new distribution. We write $CVP(\mathbf{t})$ for the closest vector in $\mathcal{L}/q$ to $\mathbf{t}$.
+
+By assumption, there is a lattice projection $\pi$ onto a $k$-dimensional subspace such that $\det(\pi(\mathcal{L})) \ge 1000^k$. Notice that $\|\pi(CVP(\mathbf{t}))\| \le \|\pi(\mathbf{t})\| + \mu(\mathcal{L})/q \le \|\mathbf{t}\| + 2^{-n}$ for any $\mathbf{t} \in \mathbb{R}^n$. In particular, if $\mathbf{X}$ is sampled from a continuous Gaussian with parameter one,
+
+$$ \Pr \left[ \| \pi(CVP(\mathbf{X})) \| \ge \sqrt{k} \right] \le \Pr \left[ \| \pi(\mathbf{X}) \| \ge \sqrt{k} - 2^{-n} \right] \le \frac{1}{20}, $$
+
+where we have applied Lemma 2.20. But, by Lemma 2.2, there are at most $(q/200)^k$ points $y \in (\pi(\mathcal{L})/q)/\pi(\mathcal{L}) \cap \sqrt{k}B_2^k$. Therefore, there is a set of at most $q^n/200^k \le q^n/200$ points $y \in (\mathcal{L}/q)/\mathcal{L}$ containing at least 19/20 of the mass, as needed. $\square$
+
+**Corollary 3.9.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ with $n \ge 2$, $\varepsilon \in (0, 1/2)$, and integer $q \ge 2$, let $\mathbf{X} \sim D_{\mathcal{L}/q} \bmod \mathcal{L}$. Then,
+
+1. if $\eta_\varepsilon(\mathcal{L}) \le 1$, then $H(\mathbf{X}) > n \log_2 q - 2$; but
+
+2. if $\eta_\varepsilon(\mathcal{L}) \ge 1000C_\eta(n) \cdot \sqrt{\log(1/\varepsilon)}$ (and in particular if $\eta_\varepsilon(\mathcal{L}) \ge 10^4 \log(n)\sqrt{\log(1/\varepsilon)}$) and $q \ge 2^n(\eta_{2^{-n}}(\mathcal{L}) + \mu(\mathcal{L}))$, then $H(\mathbf{X}) < n \log_2 q - 6$.
+
+*Proof.* Suppose that $\eta_\varepsilon(\mathcal{L}) \le 1$. Then, by Claim 2.7, for any $y \in (\mathcal{L}/q)/\mathcal{L}$,
+
+$$ \Pr_{\mathbf{X} \sim D_{\mathcal{L}/q} \bmod \mathcal{L}} [\mathbf{X} = y] = \frac{\rho(\mathcal{L} + y)}{\rho(\mathcal{L}/q)} \le \frac{1+\varepsilon}{1-\varepsilon} \cdot \frac{1}{q^n}. $$
+
+It follows that
+
+$$ H(D_{\mathcal{L}/q} \bmod \mathcal{L}) \ge n \log_2 q + \log_2(1-\varepsilon) - \log_2(1+\varepsilon) > n \log_2 q - 2, $$
+
+as needed.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_16.md b/samples/texts/1897687/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..9211e8bc945f814fa3e8a0c3379149a8a520e7c0
--- /dev/null
+++ b/samples/texts/1897687/page_16.md
@@ -0,0 +1,35 @@
+Suppose, on the other hand, that $\eta_{\varepsilon}(\mathcal{L}) \ge 1000C_{\eta}(n) \cdot \sqrt{\log(1/\varepsilon)}$ and $q \ge 2^n(\eta_{2^{-n}}(\mathcal{L}) + \mu(\mathcal{L}))$. By Lemma 2.9, $\eta_{1/2}(\mathcal{L}) \ge 1000C_{\eta}(n)$, so that by Theorem 3.8, there is a set $S$ of size $|S| \le q^n/200$ with at least 9/10 of the mass of $D_{\mathcal{L}/q} \bmod \mathcal{L}$. Therefore,
+
+$$H(D_{\mathcal{L}/q} \bmod \mathcal{L}) \le \frac{9}{10} \cdot \log_2 |S| + \frac{1}{10} \cdot n \log_2 q \le n \log_2 q - \frac{9}{10}\log_2 200 < n \log_2 q - 6,$$
+
+as needed. $\square$
+
+Corollary 3.9 shows that, in order to reduce $O(\log(n)\sqrt{\log(1/\varepsilon)})$-GapSPP$_{\varepsilon}$ to Entropy Approximation, it suffices to construct a circuit that samples from $D_{\mathcal{L}/q} \bmod \mathcal{L}$. The main result of this section follows immediately from Corollary 2.12.
+
+**Theorem 3.10.** There is an efficient Karp reduction from $\gamma$-GapSPP$_{\varepsilon}$ to Entropy Approximation for
+
+$$\gamma := O(C_{\eta}(n)\sqrt{\log(1/\varepsilon)}) \le O(\log(n)\sqrt{\log(1/\varepsilon)})$$
+
+and any $\varepsilon \in (0, 1/2)$. I.e., $\gamma$-GapSPP$_{\varepsilon}$ is in NISZK.
+
+*Proof.* The reduction behaves as follows on input $\mathcal{L} \subset \mathbb{Q}^n$. By Lemma 2.10, we can find an integer $q \ge 2$ with polynomial bit length that satisfies $q \ge 2^n(\eta_{2^{-n}}(\mathcal{L}) + \mu(\mathcal{L}))$. The reduction constructs the circuit $C_{\mathcal{L}/q}$ from Corollary 2.12 and modifies it to obtain the circuit $C_{(\mathcal{L}/q)/\mathcal{L}}$ that takes the output of $C_{\mathcal{L}/q}$ and reduces it modulo $\mathcal{L}$. It then outputs the Entropy Approximation instance $(C_{(\mathcal{L}/q)/\mathcal{L}}, k := n \log_2 q - 4)$.
+
+The running time is clear. Suppose that $\eta_{\varepsilon}(\mathcal{L}) \le 1$. Then, by Corollary 3.9,
+
+$$H(D_{\mathcal{L}/q} \bmod \mathcal{L}) > n \log_2 q - 2.$$
+
+Since the output of $C_{(\mathcal{L}/q)/\mathcal{L}}$ is statistically close to $D_{\mathcal{L}/q} \bmod \mathcal{L}$, it follows that $H(C_{(\mathcal{L}/q)/\mathcal{L}}(U)) > n \log_2 q - 3$, as needed.
+
+If, on the other hand, $\eta_{\varepsilon}(\mathcal{L}) \ge \Omega(C_{\eta}(n) \cdot \sqrt{\log(1/\varepsilon)})$, then by Corollary 3.9,
+
+$$H(D_{\mathcal{L}/q} \bmod \mathcal{L}) < n \log_2 q - 6.$$
+
+Since the output of $C_{(\mathcal{L}/q)/\mathcal{L}}$ is statistically close to $D_{\mathcal{L}/q} \bmod \mathcal{L}$, it follows that $H(C_{(\mathcal{L}/q)/\mathcal{L}}(U)) < n \log_2 q - 5$. $\square$
+
+# 4 A coNP Proof for $O(\log n)$-GapSPP
+
+We will need the following result from [RS17], which extends Theorem 1.6 to smaller $\varepsilon$ by noting that $\rho_{1/s}(\mathcal{L}^* \setminus \{\mathbf{0}\})$ decays at least as quickly as $\rho_{1/s}(\lambda_1(\mathcal{L}^*))$.
+
+**Theorem 4.1.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ and any $\varepsilon \in (0, 1/2)$,
+
+$$\eta_{\varepsilon}(\mathcal{L})^2 \le C_{\eta}(n)^2 \eta_{\det}(\mathcal{L})^2 + \frac{\log(1/\varepsilon)}{\pi \lambda_1(\mathcal{L}^*)^2} \le 100(\log n + 2)^2 \eta_{\det}(\mathcal{L})^2 + \frac{\log(1/\varepsilon)}{\pi \lambda_1(\mathcal{L}^*)^2}.$$
\ No newline at end of file
diff --git a/samples/texts/1897687/page_17.md b/samples/texts/1897687/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..057a77995defd58d54babbf7e3b93e56048f211d
--- /dev/null
+++ b/samples/texts/1897687/page_17.md
@@ -0,0 +1,51 @@
+*Proof.* We may assume without loss of generality that $\eta_{\det}(\mathcal{L}) = 1$. Then, by definition, $\rho_{1/C_\eta(n)}(\mathcal{L}^*\setminus\{\mathbf{0}\}) \le 1/2$. Therefore, for any $s \ge C_\eta(n)$,
+
+$$
+\begin{align*}
+\rho_{1/s}(\mathcal{L}^*) &= 1 + \sum_{\mathbf{w} \in \mathcal{L}^* \setminus \{\mathbf{0}\}} \exp(-\pi(s^2 - C_\eta(n)^2) \|\mathbf{w}\|^2) \rho_{1/C_\eta(n)}(\mathbf{w}) \\
+&\le 1 + \sum_{\mathbf{w} \in \mathcal{L}^* \setminus \{\mathbf{0}\}} \exp(-\pi(s^2 - C_\eta(n)^2) \lambda_1(\mathcal{L}^*)^2) \rho_{1/C_\eta(n)}(\mathbf{w}) \\
+&\le 1 + \exp(-\pi(s^2 - C_\eta(n)^2) \lambda_1(\mathcal{L}^*)^2)/2,
+\end{align*}
+$$
+
+and the result follows. $\square$
+
+Next, we prove an easy lower bound with a similar form (by taking the average of two trivial lower bounds).
+
+**Lemma 4.2.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ and any $\varepsilon \in (0, 1/2)$,
+
+$$
+\eta_{\varepsilon}(\mathcal{L})^2 \geq \eta_{\det}(\mathcal{L})^2/8 + \frac{\log(2/\varepsilon)}{2\pi\lambda_1(\mathcal{L}^*)^2}.
+$$
+
+*Proof.* First, note that $\rho_{1/s}(\mathcal{L}^* \setminus \{\mathbf{0}\}) \ge 2\rho_{1/s}(\lambda_1(\mathcal{L}^*))$. Rearranging, we see that
+
+$$
+\eta_{\epsilon}(\mathcal{L})^2 \geq \frac{\log(2/\epsilon)}{\pi \lambda_1 (\mathcal{L}^*)^2}.
+$$
+
+On the other hand, recall that for any lattice projection $\pi$ onto a subspace $W$, $\det(\mathcal{L}^* \cap W) = 1/\det(\pi(\mathcal{L}))$. I.e., $\eta_{\det}(\mathcal{L}) = \max_{\mathcal{L}' \subseteq \mathcal{L}^*} \det(\mathcal{L}')^{-1/\operatorname{rank}(\mathcal{L}')}$. So, suppose $s \le \eta_{\det}(\mathcal{L})/2$. Then, by Lemma 2.5,
+
+$$
+\rho_{1/s}(\mathcal{L}^*) \geq \max_{\mathcal{L}' \subseteq \mathcal{L}^*} \rho_{1/s}(\mathcal{L}') \geq \max_{\mathcal{L}' \subseteq \mathcal{L}^*} \frac{s^{-\operatorname{rank}(\mathcal{L}')}}{\det(\mathcal{L}')} \geq 2.
+$$
+
+So, $\eta_\epsilon(\mathcal{L})^2 \geq \eta_1(\mathcal{L})^2 \geq \eta_{\det}(\mathcal{L})^2/4$. The result follows by taking the average of the two bounds. $\square$
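
The two bounds sandwiching $\eta_\varepsilon$ can be sanity-checked numerically in the simplest case $\mathcal{L} = \mathbb{Z}$ (an illustration, not from the paper), where $\eta_{\det}(\mathbb{Z}) = \lambda_1(\mathbb{Z}^*) = 1$ and, by scale invariance in dimension one, $C_\eta(1) = \eta_{1/2}(\mathbb{Z})$:

```python
import math

def eta_Z(eps: float) -> float:
    # Smallest s with rho_{1/s}(Z \ {0}) = 2 * sum_{k>=1} exp(-pi s^2 k^2) <= eps.
    def mass(s: float) -> float:
        return 2.0 * sum(math.exp(-math.pi * s * s * k * k) for k in range(1, 60))
    lo, hi = 0.05, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mass(mid) <= eps:
            hi = mid
        else:
            lo = mid
    return hi

eps = 0.01
eta_sq = eta_Z(eps) ** 2
lower = 1 / 8 + math.log(2 / eps) / (2 * math.pi)       # Lemma 4.2
upper = eta_Z(0.5) ** 2 + math.log(1 / eps) / math.pi   # Theorem 4.1 with C_eta(1)
print(lower <= eta_sq <= upper)  # True
```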
+
+The main theorem of this section now follows immediately.
+
+**Theorem 4.3.** For any $\varepsilon \in (0, 1/2)$, $\gamma$-GapSPP$_\varepsilon$ is in coNP for $\gamma = O(C_\eta(n)) \le O(\log n)$.
+
+*Proof.* Let $\gamma := 2\sqrt{2}\,C_\eta(n)$. On input a lattice $\mathcal{L} \subset \mathbb{R}^n$, the prover simply sends a lattice projection $\pi$ with $\det(\pi(\mathcal{L}))^{1/\operatorname{rank}(\pi(\mathcal{L}))} = \eta_{\det}(\mathcal{L})$ and a vector $\mathbf{w} \in \mathcal{L}^*$ with $\|\mathbf{w}\| = \lambda_1(\mathcal{L}^*)$. The verifier checks that $\pi$ is indeed a lattice projection and that $\mathbf{w} \in \mathcal{L}^* \setminus \{\mathbf{0}\}$. It then answers NO if and only if
+
+$$
+\gamma^2 \det(\pi(\mathcal{L}))^{2/\operatorname{rank}(\pi(\mathcal{L}))}/8 + \frac{\log(1/\varepsilon)}{\pi \|\mathbf{w}\|^2} > \gamma^2. \quad (4.1)
+$$
+
+To prove completeness, suppose that $\eta_\varepsilon(\mathcal{L}) > \gamma$. Then, by Theorem 4.1,
+
+$$
+\gamma^2 \eta_{\det}(\mathcal{L})^2 / 8 + \frac{\log(1/\varepsilon)}{\pi \lambda_1 (\mathcal{L}^*)^2} \geq \eta_\varepsilon(\mathcal{L})^2 > \gamma^2.
+$$
+
+I.e., there exists a valid proof, as needed.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_18.md b/samples/texts/1897687/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..b66abe00bff2c9d703fd3f1f091f9df90043b6d6
--- /dev/null
+++ b/samples/texts/1897687/page_18.md
@@ -0,0 +1,50 @@
+To prove soundness, suppose that $\eta_\varepsilon(\mathcal{L}) \le 1$. Then, by Lemma 4.2,
+
+$$
+\eta_{\det}(\mathcal{L})^2/8 + \frac{\log(1/\varepsilon)}{2\pi\lambda_1(\mathcal{L}^{*})^2} \leq \eta_{\varepsilon}(\mathcal{L})^2 \leq 1.
+$$
+
+Therefore,
+
+$$
+\begin{align*}
+\gamma^2 \eta_{\det}(\mathcal{L})^2 / 8 + \frac{\log(1/\varepsilon)}{\pi \lambda_1 (\mathcal{L}^*)^2} &\le \frac{\gamma^2 \eta_{\det}(\mathcal{L})^2 / 8 + \frac{\log(1/\varepsilon)}{\pi \lambda_1 (\mathcal{L}^*)^2}}{\eta_{\det}(\mathcal{L})^2 / 8 + \frac{\log(1/\varepsilon)}{2\pi \lambda_1 (\mathcal{L}^*)^2}} \\
+&\le \max\{\gamma^2, 2\} \\
+&\le \gamma^2.
+\end{align*}
+$$
+
+In other words, Equation (4.1) cannot hold for any pair $\mathbf{w} \in \mathcal{L}^* \setminus \{\mathbf{0}\}$ and lattice projection $\pi$. I.e., the
+verifier will always answer YES, as needed. $\square$
+
+Finally, we derive the following corollary.
+
+**Corollary 4.4.** For any $\epsilon \in (0, 1/2)$, $\gamma$-coGapSPP$_\epsilon$ has an SZK proof system with an efficient prover for
+
+$$
+\gamma := O(C_{\eta}(n) + \sqrt{\log(1/\epsilon)/\log n}) \le O(\log n + \sqrt{\log(1/\epsilon)/\log n}).
+$$
+
+*Proof.* By Theorem 2.17, $\gamma$-GapSPP$_\epsilon$ is in SZK. Since SZK is closed under complements [SV97, Oka96], $\gamma$-coGapSPP$_\epsilon$ is in SZK as well. By Theorem 4.3, $\gamma$-coGapSPP$_\epsilon$ is in NP. The result then follows by the fact that any language in SZK $\cap$ NP has an SZK proof system with an efficient prover [NV06]. $\square$
+
+# 5 An SZK Proof for $O(\sqrt{n})$-GapCRP
+
+In this section we prove that $O(\sqrt{n})$-GapCRP is in SZK, which improves on the previously known result by a $\omega(\sqrt{\log n})$ factor [PV08]. First we need the following result from [CDLP13].
+
+**Lemma 5.1.** For any lattice $\mathcal{L}$ and parameter $s > 0$,
+
+$$
+\rho_s(\mathcal{L}) \cdot \gamma_s(\mathcal{V}(\mathcal{L})) \leq 1.
+$$
+
+Here we prove an upper bound on the smoothing parameter of a lattice in terms of its covering radius. This
+bound is implicit in [DR16].
+
+**Lemma 5.2.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ and $\epsilon > 0$, we have
+
+$$
+\eta_{\epsilon}(\mathcal{L}) \leq \sqrt{\frac{\pi}{\log(1+\epsilon)}} \cdot \mu(\mathcal{L}).
+$$
+
+In particular, $\eta_\epsilon(\mathcal{L}) \le O(\mu(\mathcal{L}))$ for any $\epsilon \ge \Omega(1)$.
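
A quick numerical check of this bound (an illustration, not from the paper) for $\mathcal{L} = \mathbb{Z}$, where $\mu(\mathbb{Z}) = 1/2$ and $\mathbb{Z}^* = \mathbb{Z}$, computing $\eta_\epsilon(\mathbb{Z})$ by bisection:

```python
import math

def eta_Z(eps: float) -> float:
    # Smallest s with rho_{1/s}(Z \ {0}) = 2 * sum_{k>=1} exp(-pi s^2 k^2) <= eps.
    def mass(s: float) -> float:
        return 2.0 * sum(math.exp(-math.pi * s * s * k * k) for k in range(1, 60))
    lo, hi = 0.05, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mass(mid) <= eps:
            hi = mid
        else:
            lo = mid
    return hi

mu = 0.5  # covering radius of Z
ok = all(eta_Z(e) <= math.sqrt(math.pi / math.log(1 + e)) * mu
         for e in (0.1, 0.5, 1.0, 2.0))
print(ok)  # True
```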
\ No newline at end of file
diff --git a/samples/texts/1897687/page_19.md b/samples/texts/1897687/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..abf8862249ba35a4dd1105cf5bef4856052809b7
--- /dev/null
+++ b/samples/texts/1897687/page_19.md
@@ -0,0 +1,37 @@
+*Proof.*
+
+$$
+\begin{align*}
+\rho_{1/s}(\mathcal{L}^*) &= s^{-n} \cdot \det(\mathcal{L}) \cdot \rho_s(\mathcal{L}) && (\text{Lemma 2.5}) \\
+&\le \frac{s^{-n} \cdot \det(\mathcal{L})}{\gamma_s(\mathcal{V}(\mathcal{L}))} && (\text{Lemma 5.1}) \\
+&= \frac{s^{-n} \cdot \det(\mathcal{L})}{\int_{\mathcal{V}(\mathcal{L})} s^{-n} \exp(-\pi \|\mathbf{x}\|^2/s^2) \, d\mathbf{x}} \\
+&\le \frac{\det(\mathcal{L})}{\int_{\mathcal{V}(\mathcal{L})} \exp(-\pi \mu(\mathcal{L})^2/s^2) \, d\mathbf{x}} \\
+&= \exp(\pi \mu(\mathcal{L})^2/s^2),
+\end{align*}
+$$
+
+where we used the fact that $\|\mathbf{x}\| \le \mu(\mathcal{L})$ for any $\mathbf{x} \in \mathcal{V}(\mathcal{L})$. By setting $s = \sqrt{\frac{\pi}{\log(1+\epsilon)}} \cdot \mu(\mathcal{L})$ we have the desired result. $\square$
+
+**Theorem 5.3.** The problem $O(\sqrt{n})$-GapCRP has an SZK proof system with an efficient prover, as does $O(\sqrt{n})$-coGapCRP.
+
+*Proof.* Fix some constant $\epsilon \in (0, 1/2)$. By Lemma 2.13 and Lemma 5.2, we know that there exist constants $C_1$ and $C_2$ such that
+
+$$ C_1\eta_\varepsilon(\mathcal{L}) \leq \mu(\mathcal{L}) \leq C_2\sqrt{n} \cdot \eta_\varepsilon(\mathcal{L}), $$
+
+and hence there is a simple reduction from $O(\sqrt{n})$-GapCRP to $O(1)$-GapSPP$_\varepsilon$. It follows from Theorem 2.17 that $O(\sqrt{n})$-GapCRP is in SZK. To see that the prover can be made efficient, we recall from [GMR04] that $O(\sqrt{n})$-GapCRP is in NP $\cap$ coNP. The result then follows by the fact that any language in SZK $\cap$ NP has an SZK proof system with an efficient prover [NV06]. $\square$
+
+## References
+
+[ADRS15] D. Aggarwal, D. Dadush, O. Regev, and N. Stephens-Davidowitz. Solving the shortest vector problem in $2^n$ time using discrete Gaussian sampling. In *STOC*, pages 733–742. 2015.
+
+[ADS15] D. Aggarwal, D. Dadush, and N. Stephens-Davidowitz. Solving the closest vector problem in $2^n$ time - the discrete Gaussian strikes again! In *FOCS*, pages 563–582. 2015.
+
+[Ajt96] M. Ajtai. Generating hard instances of lattice problems. *Quaderni di Matematica*, 13:1–32, 2004. Preliminary version in STOC 1996.
+
+[AR04] D. Aharonov and O. Regev. Lattice problems in NP $\cap$ coNP. *J. ACM*, 52(5):749–765, 2005. Preliminary version in FOCS 2004.
+
+[Bab85a] L. Babai. Trading group theory for randomness. In *STOC*, pages 421–429. 1985.
+
+[Bab85b] L. Babai. On Lovász' lattice reduction and the nearest lattice point problem. *Combinatorica*, 6(1):1–13, 1986. Preliminary version in STACS 1985.
+
+[Ban93] W. Banaszczyk. New bounds in some transference theorems in the geometry of numbers. *Mathematische Annalen*, 296(4):625–635, 1993.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_2.md b/samples/texts/1897687/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..351123169076b45b78f18dd3d9528a7e8ef49ef6
--- /dev/null
+++ b/samples/texts/1897687/page_2.md
@@ -0,0 +1,11 @@
+# 1 Introduction
+
+Informally, a *proof system* [GMR85, Bab85a] is a protocol that allows a (possibly unbounded and malicious) prover to convince a skeptical verifier of the truth of some statement. A proof system is *zero knowledge* if the verifier “learns nothing more” from the interaction, other than the statement’s veracity. The system is said to be *statistical zero knowledge* if the revealed information is negligible, even to an unbounded verifier; the class of problems having such proof systems is called SZK. Since their introduction, proof systems and zero-knowledge have found innumerable applications in cryptography and complexity theory. As a few examples, they have been used in constructions of secure multiparty computation [GMW87], digital signatures [BG89], actively secure public-key encryption [NY90], and “ZAPs” [DN00]. And if a problem has an SZK (or even coAM) proof, it is not NP-hard unless the polynomial-time hierarchy collapses [BHZ87], so interactive proofs have been used as evidence against NP-hardness; see, e.g., [GMR85, GMW91, GG98, HR14].
+
+A proof system is *noninteractive* [BDMP88, GSV99] if it consists of just one message from the prover, assuming both it and the verifier have access to a truly random string. Noninteractive statistical zero-knowledge (NISZK) proof systems are especially powerful cryptographic primitives: they have minimal message complexity; they are concurrently and even “universally” composable [Can01]; and their security holds against unbounded malicious provers and verifiers, without any computational assumptions. However, we do not understand the class NISZK of problems that have noninteractive statistical zero-knowledge proof systems nearly as well as SZK. In particular, while NISZK is known to have complete problems, it is not known whether it is closed under complement or disjunction [GSV99], unlike SZK [SV97, Oka96].
+
+**Lattices and proofs.** An $n$-dimensional lattice is a (full-rank) discrete additive subgroup of $\mathbb{R}^n$, and consists of all integer linear combinations of some linearly independent vectors $\mathbf{B} = \{\mathbf{b}_1, \dots, \mathbf{b}_n\}$, called a *basis* of the lattice. Lattices have been extensively studied in computer science, and lend themselves to many natural computational problems. Perhaps the most well-known of these are the *Shortest Vector Problem* (SVP), which is to find a shortest nonzero vector in a given lattice, and the *Closest Vector Problem* (CVP), which is to find a lattice point that is closest to a given vector in $\mathbb{R}^n$. Algorithms for these problems and their approximation versions have many applications in computer science; see, e.g., [LLL82, Len83, Kan83, Odl90, JS98, NS01, DPV11]. In addition, many cryptographic primitives, ranging from public-key encryption and signatures to fully homomorphic encryption, are known to be secure assuming the (worst-case) hardness of certain lattice problems (see, e.g., [MR04, Reg05, GPV08, Pei09, BV11, BGV12]).
+
+Due to the importance of lattices in cryptography, proof systems and zero-knowledge protocols for lattice problems have received a good deal of attention. Early on, Goldreich and Goldwasser [GG98] showed that for $\gamma = O(\sqrt{n}/\log n)$, the $\gamma$-approximate Shortest and Closest Vector Problems, respectively denoted $\gamma$-GapSVP and $\gamma$-GapCVP, have SZK proof systems; this was later improved to coNP for $\gamma = O(\sqrt{n})$ factors [AR04].¹ Subsequently, Micciancio and Vadhan [MV03] gave different SZK proofs for the same problems, where the provers are *efficient* when given appropriate witnesses; this is obviously an important property if the proof systems are to be used by real entities as components of other protocols. Peikert and Vaikuntanathan [PV08] gave the first *noninteractive* statistical zero-knowledge proof systems for certain lattice problems, showing that, for example, $O(\sqrt{n})$-coGapSVP has an NISZK proof. The proof systems from [PV08] also have efficient provers, although for larger $\tilde{O}(n)$ approximation factors.
+
+¹As described, the proofs from [GG98] are statistical zero knowledge against only honest verifiers, but any such proof can unconditionally be transformed into one that is statistical zero knowledge against malicious verifiers [GSV98]. We therefore ignore the distinction for the remainder of the paper.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_20.md b/samples/texts/1897687/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..0b9fb8d1ffd8e90ae28b599addb101fd444e4431
--- /dev/null
+++ b/samples/texts/1897687/page_20.md
@@ -0,0 +1,35 @@
+[Ban95] W. Banaszczyk. Inequalities for convex bodies and polar reciprocal lattices in $\mathbb{R}^n$. *Discrete & Computational Geometry*, 13:217–231, 1995.
+
+[BDMP88] M. Blum, A. De Santis, S. Micali, and G. Persiano. Noninteractive zero-knowledge. *SIAM J. Comput.*, 20(6):1084–1118, 1991. Preliminary version in STOC 1988.
+
+[BG89] M. Bellare and S. Goldwasser. New paradigms for digital signatures and message authentication based on non-interactive zero knowledge proofs. In *CRYPTO*, pages 194–211. 1989.
+
+[BGV12] Z. Brakerski, C. Gentry, and V. Vaikuntanathan. (Leveled) fully homomorphic encryption without bootstrapping. *TOCT*, 6(3):13, 2014. Preliminary version in ITCS 2012.
+
+[BHZ87] R. B. Boppana, J. Håstad, and S. Zachos. Does co-NP have short interactive proofs? *Inf. Process. Lett.*, 25(2):127–132, 1987.
+
+[BLP+13] Z. Brakerski, A. Langlois, C. Peikert, O. Regev, and D. Stehlé. Classical hardness of learning with errors. In *STOC*, pages 575–584. 2013.
+
+[BV11] Z. Brakerski and V. Vaikuntanathan. Efficient fully homomorphic encryption from (standard) LWE. *SIAM J. Comput.*, 43(2):831–871, 2014. Preliminary version in FOCS 2011.
+
+[Can01] R. Canetti. Universally composable security: A new paradigm for cryptographic protocols. In *FOCS*, pages 136–145. 2001.
+
+[CDLP13] K. Chung, D. Dadush, F. Liu, and C. Peikert. On the lattice smoothing parameter problem. In *IEEE Conference on Computational Complexity*, pages 230–241. 2013.
+
+[DN00] C. Dwork and M. Naor. Zaps and their applications. *SIAM J. Comput.*, 36(6):1513–1543, 2007.
+
+[DPV11] D. Dadush, C. Peikert, and S. Vempala. Enumerative lattice algorithms in any norm via M-ellipsoid coverings. In *FOCS*, pages 580–589. 2011.
+
+[DR16] D. Dadush and O. Regev. Towards strong reverse Minkowski-type inequalities for lattices. In *FOCS*, pages 447–456. 2016.
+
+[GG98] O. Goldreich and S. Goldwasser. On the limits of nonapproximability of lattice problems. *J. Comput. Syst. Sci.*, 60(3):540–563, 2000. Preliminary version in STOC 1998.
+
+[GMR85] S. Goldwasser, S. Micali, and C. Rackoff. The knowledge complexity of interactive proof systems. *SIAM J. Comput.*, 18(1):186–208, 1989. Preliminary version in STOC 1985.
+
+[GMR04] V. Guruswami, D. Micciancio, and O. Regev. The complexity of the covering radius problem. *Computational Complexity*, 14(2):90–121, 2005. Preliminary version in CCC 2004.
+
+[GMW87] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game or A completeness theorem for protocols with honest majority. In *STOC*, pages 218–229. 1987.
+
+[GMW91] O. Goldreich, S. Micali, and A. Wigderson. Proofs that yield nothing but their validity for all languages in NP have zero-knowledge proof systems. *J. ACM*, 38(3):691–729, 1991.
+
+[GPV08] C. Gentry, C. Peikert, and V. Vaikuntanathan. Trapdoors for hard lattices and new cryptographic constructions. In *STOC*, pages 197–206. 2008.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_21.md b/samples/texts/1897687/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..455c1d17ed0c96401bf77019f35bd89c1a9def78
--- /dev/null
+++ b/samples/texts/1897687/page_21.md
@@ -0,0 +1,33 @@
+[GSV98] O. Goldreich, A. Sahai, and S. P. Vadhan. Honest-verifier statistical zero-knowledge equals general statistical zero-knowledge. In *STOC*, pages 399–408. 1998.
+
+[GSV99] O. Goldreich, A. Sahai, and S. P. Vadhan. Can statistical zero knowledge be made non-interactive? or on the relationship of SZK and NISZK. In *CRYPTO*, pages 467–484. 1999.
+
+[Hoe63] W. Hoeffding. Probability inequalities for sums of bounded random variables. *Journal of the American Statistical Association*, 58:13–30, 1963.
+
+[HR14] I. Haviv and O. Regev. On the lattice isomorphism problem. In *SODA*, pages 391–404. 2014.
+
+[JS98] A. Joux and J. Stern. Lattice reduction: A toolbox for the cryptanalyst. *J. Cryptology*, 11(3):161–185, 1998.
+
+[Kan83] R. Kannan. Improved algorithms for integer programming and related lattice problems. In *STOC*, pages 193–206. 1983.
+
+[Kle00] P. N. Klein. Finding the closest lattice vector when it’s unusually close. In *SODA*, pages 937–941. 2000.
+
+[Len83] H. W. Lenstra. Integer programming with a fixed number of variables. *Mathematics of Operations Research*, 8(4):538–548, November 1983.
+
+[LLL82] A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovász. Factoring polynomials with rational coefficients. *Mathematische Annalen*, 261(4):515–534, December 1982.
+
+[MG02] D. Micciancio and S. Goldwasser. *Complexity of Lattice Problems: a cryptographic perspective*, volume 671 of *The Kluwer International Series in Engineering and Computer Science*. Kluwer Academic Publishers, Boston, Massachusetts, 2002.
+
+[MP12] D. Micciancio and C. Peikert. Trapdoors for lattices: Simpler, tighter, faster, smaller. In *EUROCRYPT*, pages 700–718. 2012.
+
+[MR04] D. Micciancio and O. Regev. Worst-case to average-case reductions based on Gaussian measures. *SIAM J. Comput.*, 37(1):267–302, 2007. Preliminary version in FOCS 2004.
+
+[MV03] D. Micciancio and S. P. Vadhan. Statistical zero-knowledge proofs with efficient provers: Lattice problems and more. In *CRYPTO*, pages 282–298. 2003.
+
+[NS01] P. Q. Nguyen and J. Stern. The two faces of lattices in cryptology. In *CaLC*, pages 146–180. 2001.
+
+[NV06] M. Nguyen and S. P. Vadhan. Zero knowledge with efficient provers. In *STOC*, pages 287–295. 2006.
+
+[NY90] M. Naor and M. Yung. Public-key cryptosystems provably secure against chosen ciphertext attacks. In *STOC*, pages 427–437. 1990.
+
+[Odl90] A. M. Odlyzko. The rise and fall of knapsack cryptosystems. In C. Pomerance, editor, *Cryptography and Computational Number Theory*, volume 42 of *Proceedings of Symposia in Applied Mathematics*, pages 75–88. 1990.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_22.md b/samples/texts/1897687/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..e92902760404face8c82e43e29d8e43525354266
--- /dev/null
+++ b/samples/texts/1897687/page_22.md
@@ -0,0 +1,37 @@
+[Oka96] T. Okamoto. On relationships between statistical zero-knowledge proofs. *J. Comput. Syst. Sci.*, 60(1):47–108, 2000. Preliminary version in STOC 1996.
+
+[Pei09] C. Peikert. Public-key cryptosystems from the worst-case shortest vector problem. In *STOC*, pages 333–342. 2009.
+
+[PV08] C. Peikert and V. Vaikuntanathan. Noninteractive statistical zero-knowledge proofs for lattice problems. In *CRYPTO*, pages 536–553. 2008.
+
+[Reg05] O. Regev. On lattices, learning with errors, random linear codes, and cryptography. *J. ACM*, 56(6):1–40, 2009. Preliminary version in STOC 2005.
+
+[RS17] O. Regev and N. Stephens-Davidowitz. A reverse Minkowski theorem. In *STOC*, pages 941–953. 2017.
+
+[SV97] A. Sahai and S. P. Vadhan. A complete problem for statistical zero knowledge. *J. ACM*, 50(2):196–249, 2003. Preliminary version in FOCS 1997.
+
+[Ver12] R. Vershynin. *Introduction to the non-asymptotic analysis of random matrices*, chapter 5, pages 210–268. Cambridge University Press, 2012. Available at http://www-personal.umich.edu/~romanv/papers/non-asymptotic-rmt-plain.pdf.
+
+# A Proof of Lemma 3.3
+
+**Definition A.1.** For any $\delta > 0$, $S \subseteq \mathbb{R}^n$, we say that $A \subseteq S$ is a $\delta$-net of $S$ if for each $\mathbf{v} \in S$, there is some $\mathbf{u} \in A$ such that $\|\mathbf{u} - \mathbf{v}\| \le \delta$.
+
+**Lemma A.2.** For any $\delta > 0$, there exists a $\delta$-net of the unit sphere in $\mathbb{R}^n$ with at most $(1 + 2/\delta)^n$ points.
+
+*Proof.* Let $N$ be maximal such that $N$ points can be placed on the unit sphere in such a way that no pair of points is within distance $\delta$ of each other. By maximality, every point of the sphere lies within distance $\delta$ of one of these $N$ points, so they form a $\delta$-net of size $N$.
+
+So, it suffices to show that any collection of vectors $A$ in the unit sphere with $|A| > (1 + 2/\delta)^n$ must contain two points within distance $\delta$ of each other. Let
+
+$$ B := \bigcup_{\mathbf{u} \in A} ((\delta/2)B_2^n + \mathbf{u}) $$
+
+be the union of balls of radius $\delta/2$ centered at each point in $A$. Notice that $B \subseteq (1 + \delta/2)B_2^n$. If all of these balls were disjoint, then we would have
+
+$$ \mathrm{vol}(B) = |A| \cdot (\delta/2)^n \mathrm{vol}(B_2^n) > \mathrm{vol}((1 + \delta/2)B_2^n), $$
+
+a contradiction. Therefore, two such balls must overlap. I.e., there must be two points within distance $\delta$ of each other, as needed. $\square$
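The maximal-packing construction in the proof above is easy to carry out numerically. The following sketch (our illustration, not part of the paper) greedily builds such a packing on the unit circle ($n = 2$) and checks both the net property and the $(1 + 2/\delta)^n$ size bound of Lemma A.2.

```python
import math

def greedy_net(delta, samples=2000):
    """Greedily pick points on the unit circle, keeping each new point only
    if it is farther than delta from every point kept so far. The result is
    a maximal delta-packing, hence (by the argument above) a delta-net."""
    pts = []
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        p = (math.cos(theta), math.sin(theta))
        if all(math.dist(p, q) > delta for q in pts):
            pts.append(p)
    return pts

delta = 0.5
net = greedy_net(delta)

# Size bound from Lemma A.2: |net| <= (1 + 2/delta)^n with n = 2.
assert len(net) <= (1 + 2 / delta) ** 2

# Net property: every sampled point of the circle is within delta of the net.
for k in range(2000):
    theta = 2 * math.pi * k / 2000
    p = (math.cos(theta), math.sin(theta))
    assert min(math.dist(p, q) for q in net) <= delta
```

The greedy construction mirrors the proof: any rejected candidate was already within $\delta$ of a kept point, which is exactly the net property.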
+
+We will need the following result from [Ver12, Lemma 5.4].
+
+**Lemma A.3.** For a symmetric matrix $M \in \mathbb{R}^{n \times n}$ and a $\delta$-net $A$ of the unit sphere with $\delta \in (0, 1/2)$,
+
+$$ \|M\| \le \frac{1}{1 - 2\delta} \cdot \max_{\mathbf{v} \in A} |\langle M\mathbf{v}, \mathbf{v}\rangle|. $$
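Lemma A.3 is easy to sanity-check numerically in a small dimension. The sketch below (illustrative only; the matrix and net spacing are our choices) compares the right-hand side against the exact operator norm of a symmetric $2 \times 2$ matrix, using a fine net of the unit circle.

```python
import math

# Symmetric test matrix M = [[a, b], [b, c]].
a, b, c = 3.0, 1.0, -2.0

# Exact operator norm: the largest absolute eigenvalue of a symmetric matrix.
mean, rad = (a + c) / 2, math.hypot((a - c) / 2, b)
op_norm = max(abs(mean + rad), abs(mean - rad))

# A delta-net of the unit circle: angular spacing below delta suffices,
# since chord length is at most arc length.
delta = 0.1
N = int(2 * math.pi / delta) + 1
net = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
       for k in range(N)]

# max over the net of |<Mv, v>| = |a x^2 + 2 b x y + c y^2|.
best = max(abs(a * x * x + 2 * b * x * y + c * y * y) for x, y in net)

# Lemma A.3: ||M|| <= (1 / (1 - 2 delta)) * max_{v in net} |<Mv, v>|.
assert op_norm <= best / (1 - 2 * delta)
```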
\ No newline at end of file
diff --git a/samples/texts/1897687/page_23.md b/samples/texts/1897687/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..c603e10999141053bc82c911263327ccfda5add0
--- /dev/null
+++ b/samples/texts/1897687/page_23.md
@@ -0,0 +1,19 @@
+We will also need the following result from [MP12, Lemma 2.8], which shows that the discrete Gaussian distribution is subgaussian.
+
+**Lemma A.4.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ with $\eta_{1/2}(\mathcal{L}) \le 1$, shift vector $\mathbf{t} \in \mathbb{R}^n$, unit vector $\mathbf{v} \in \mathbb{R}^n$, and any $r > 0$,
+
+$$ \Pr_{\mathbf{X} \sim D_{\mathcal{L}-\mathbf{t}}} [|\langle \mathbf{v}, \mathbf{X} \rangle| \ge r] \le 10 \exp(-\pi r^2). $$
+
+*Proof of Lemma 3.3.* Let $\{\mathbf{v}_1, \dots, \mathbf{v}_N\}$ be a (1/10)-net of the unit sphere with $N \le 25^n$, as guaranteed by Lemma A.2. By Lemma A.4, we have that for any $\mathbf{e}_i$ in the proof, any $\mathbf{v}_j$, and any $r \ge 0$, $\Pr[|\langle \mathbf{v}_j, \mathbf{e}_i \rangle| \ge r] \le 10 \exp(-\pi r^2)$. Therefore, by Lemma 2.20,
+
+$$ \Pr \left[ \sum_i |\langle \mathbf{v}_j, \mathbf{e}_i \rangle|^2 \geq r \right] \leq 2^m e^{-\pi r/2}. $$
+
+Applying the union bound, we have
+
+$$ \Pr[\exists j, \sum_i |\langle \mathbf{v}_j, \mathbf{e}_i \rangle|^2 \geq r] \leq N 2^m e^{-\pi r/2}. $$
+
+Taking $r := 2m$, we see that this probability is negligible. Applying Lemma A.3 shows that
+
+$$ \left\| \sum_i \mathbf{e}_i \mathbf{e}_i^T \right\| \le 2m \cdot \frac{5}{4} < 3m, $$
+
+except with negligible probability, as needed. $\square$
\ No newline at end of file
diff --git a/samples/texts/1897687/page_3.md b/samples/texts/1897687/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..c041a017619bc35810dc512cfebd4cb9a9af3c02
--- /dev/null
+++ b/samples/texts/1897687/page_3.md
@@ -0,0 +1,19 @@
+**Gaussians and the smoothing parameter.** Gaussian measures have become an increasingly important tool in the study of lattices. For $s > 0$, the Gaussian measure of parameter (or width) $s$ on $\mathbb{R}^n$ is defined as $\rho_s(\mathbf{x}) = \exp(-\pi\|\mathbf{x}\|^2/s^2)$; for a lattice $\mathcal{L} \subset \mathbb{R}^n$, the Gaussian measure of the lattice is then
+
+$$\rho_s(\mathcal{L}) := \sum_{\mathbf{v} \in \mathcal{L}} \rho_s(\mathbf{v}).$$
+
+Gaussian measures on lattices have innumerable applications, including in worst-case to average-case reductions for lattice problems [MR04, Reg05], the construction of cryptographic primitives [GPV08], the design of algorithms for SVP and CVP [ADRS15, ADS15], and the study of the geometry of lattices [Ban93, Ban95, DR16, RS17].
+
+In all of the above applications, a key quantity is the lattice *smoothing parameter* [MR04]. Informally, for a parameter $\epsilon > 0$ and a lattice $\mathcal{L}$, the smoothing parameter $\eta_\epsilon(\mathcal{L})$ is the minimal Gaussian parameter that “smooths out” the discrete structure of $\mathcal{L}$, up to error $\epsilon$. Formally, for $\epsilon > 0$ we define
+
+$$\eta_{\epsilon}(\mathcal{L}) := \min\{s > 0 : \rho_{1/s}(\mathcal{L}^{*}) \le 1 + \epsilon\},$$
+
+where $\mathcal{L}^* := \{\mathbf{w} \in \mathbb{R}^n : \forall \mathbf{y} \in \mathcal{L}, \langle \mathbf{w}, \mathbf{y} \rangle \in \mathbb{Z}\}$ is the dual lattice of $\mathcal{L}$. All of the computational applications from the previous paragraph rely in some way on the “smoothness” of the Gaussian with parameter $s \ge \eta_\epsilon(\mathcal{L})$ where $2^{-n} \ll \epsilon < 1/2$.² For example, several of the proof systems from [PV08] start with deterministic reductions to an intermediate problem, which asks whether a lattice is “smooth” or well-separated.
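For intuition, the smoothing parameter of the integer lattice $\mathbb{Z}^n$ (which is self-dual) can be computed numerically straight from this definition. The following Python sketch is our illustration only; it truncates the Gaussian mass $\rho_{1/s}(\mathbb{Z})$ to a finite sum and binary-searches for the threshold $s$.

```python
import math

def rho(s, K=60):
    """Truncated Gaussian mass rho_s(Z) = sum_{|k| <= K} exp(-pi k^2 / s^2)."""
    return 1 + 2 * sum(math.exp(-math.pi * (k / s) ** 2) for k in range(1, K + 1))

def eta(eps, n, lo=0.01, hi=10.0):
    """Binary search for eta_eps(Z^n) = min { s : rho_{1/s}((Z^n)^*) <= 1 + eps },
    using that Z^n is self-dual and rho_{1/s}(Z^n) = rho_{1/s}(Z)^n."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if rho(1 / mid) ** n <= 1 + eps:
            hi = mid  # mid already smooths the lattice; try smaller s
        else:
            lo = mid
    return hi

s = eta(0.5, n=1)
# Sanity checks: the defining constraint holds at s and fails just below s.
assert rho(1 / s) <= 1.5
assert rho(1 / (0.99 * s)) > 1.5
# The smoothing parameter grows with the dimension and shrinks with eps.
assert eta(0.5, n=10) > s
assert eta(0.01, n=1) > s
```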
+
+**The GapSPP problem.** Given the prominence of the smoothing parameter in the theory of lattices, it is natural to ask about the complexity of computing it. Chung et al. [CDLP13] formally defined the problem $\gamma$-GapSPP$_\epsilon$ of approximating the smoothing parameter $\eta_\epsilon(\mathcal{L})$ to within a factor of $\gamma \ge 1$ and gave upper bounds on its complexity in the form of proof systems for remarkably low values of $\gamma$. For example, they showed that $\gamma$-GapSPP$_\epsilon \in$ SZK for $\gamma = O(1 + \sqrt{\log(1/\epsilon)/\log n})$. This in fact subsumes the prior result that $O(\sqrt{n/\log n})$-GapSVP $\in$ SZK of [GG98], via known relationships between the minimum distance and the smoothing parameter.
+
+Chung et al. also showed a worst-case to average-case (quantum) reduction from $\tilde{O}(\sqrt{n}/\alpha)$-GapSPP to a very important average-case problem in lattice-based cryptography, Regev’s Learning With Errors (LWE), which asks us to decode from a random “q-ary” lattice under error proportional to $\alpha$ [Reg05]. Again, this subsumes the prior best reduction for GapSVP due to Regev. Most recently, Dadush and Regev [DR16] showed a similar worst-case to average-case reduction from GapSPP to the Short Integer Solution problem [Ajt96, MR04], another widely used average-case problem in lattice-based cryptography.
+
+In hindsight, the proof systems and reductions of [GG98, Reg05, MR04] can most naturally be viewed as applying to GapSPP all along. This suggests that GapSPP may be a better problem than GapSVP on which to base the security of lattice-based cryptography. However, both [CDLP13] and [DR16] left open several questions and asked for a better understanding of the complexity of GapSPP. In particular, while interactive proof systems for this problem seem to be relatively well understood, nothing nontrivial was previously known about noninteractive proof systems (whether zero knowledge or not) for this problem.
+
+²For $\epsilon = 2^{-\Omega(n)}$ the smoothing parameter is determined (up to a constant factor) by the dual minimum distance, so it is much less interesting to consider as a separate quantity. The upper bound of $1/2$ could be replaced by any constant less than one. For $\epsilon \ge 1$, $\eta_\epsilon(\mathcal{L})$ is still formally defined, but its interpretation in terms of the “smoothness” of the corresponding Gaussian measure over $\mathcal{L}$ is much less clear.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_4.md b/samples/texts/1897687/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..29b23d2209584a14f051bf3224392376ae57d7dd
--- /dev/null
+++ b/samples/texts/1897687/page_4.md
@@ -0,0 +1,31 @@
+## 1.1 Our Results
+
+In this work we give new proof systems for lattice problems, and extend the reach of prior proof systems to new problems. Our new results, and how they compare to the previous state of the art, are as follows.
+
+Our first main result is a NISZK proof system for $\gamma$-GapSPP$_\epsilon$ with $\gamma = O(\log(n)\sqrt{\log(1/\epsilon)})$. This improves, by a $\Theta(\sqrt{n}/\log n)$ factor, upon the previous best approximation factor of $\gamma = O(\sqrt{n\log(1/\epsilon)})$, which follows from [PV08].
+
+**Theorem 1.1.** For any $\epsilon \in (0, 1/2)$, $O(\log(n)\sqrt{\log(1/\epsilon)})$-GapSPP$_\epsilon \in$ NISZK.
+
+In fact, we demonstrate two different proof systems to establish this theorem (see Section 3). The first is identical to a proof system from [PV08], but with a very different analysis that relies on a recent geometric theorem of [RS17]. However, this proof system only works for negligible $\epsilon < n^{-\omega(1)}$, so we also show an alternative that works for any $\epsilon \in (0, 1/2)$ via reduction to the NISZK-complete *Entropy Approximation problem* [GSV99].
+
+The prover in the proof system from [PV08] can be made efficient at the expense of a factor of $O(\sqrt{n \log n})$ in the approximation factor. From this we obtain the following.
+
+**Theorem 1.2.** For any negligible $0 < \epsilon < n^{-\omega(1)}$, there is a NISZK proof system with an efficient prover for $O(\sqrt{n \log^3(n) \log(1/\epsilon)})$-GapSPP$_\epsilon$.
+
+Next, we show that $O(\log n)$-GapSPP$_\epsilon \in$ coNP for any $\epsilon \in (0, 1)$. This improves, again by up to a $\Theta(\sqrt{n}/\log n)$ factor, the previous best known result of $O(1 + \sqrt{n}/\log(1/\epsilon))$-GapSPP$_\epsilon \in$ coNP, which follows from [Ban93].
+
+**Theorem 1.3.** For any $\epsilon \in (0, 1/2)$, $O(\log n)$-GapSPP$_\epsilon \in$ coNP.
+
+From this, together with the SZK protocol of [CDLP13] and the result of Nguyen and Vadhan [NV06] that any problem in SZK $\cap$ NP has an SZK proof system with an efficient prover, we obtain the following corollary. (The proof systems in [CDLP13] do not have efficient provers.)
+
+**Corollary 1.4.** For any $\epsilon \in (0, 1/2)$, there is an SZK proof system with an efficient prover for $O(\log n + \sqrt{\log(1/\epsilon)/\log n})$-coGapSPP$_\epsilon$.
+
+Finally, we observe that $O(\sqrt{n})$-GapCRP $\in$ SZK, where GapCRP is the problem of approximating the covering radius of a lattice, i.e., the maximum distance from a point in $\mathbb{R}^n$ to the lattice. For comparison, the previous best approximation factor was due to [PV08], who showed that $\gamma$-GapCRP $\in$ NISZK $\subseteq$ SZK for any $\gamma = \omega(\sqrt{n}\log n)$. We obtain this result via a straightforward reduction to $O(1)$-GapSPP$_\epsilon$ for constant $\epsilon < 1/2$, which, to recall, is in SZK [CDLP13]. Furthermore, since Guruswami, Micciancio, and Regev showed that $O(\sqrt{n})$-GapCRP $\in$ NP $\cap$ coNP [GMR04], it follows that the protocol can be made efficient.
+
+**Theorem 1.5.** We have $O(\sqrt{n})$-GapCRP $\in$ SZK. Furthermore, $O(\sqrt{n})$-GapCRP and $O(\sqrt{n})$-coGapCRP each have an SZK proof system with an efficient prover.
+
+## 1.2 Techniques
+
+**Sparse projections.** Our main technical tool will be sparse lattice projections. In particular, we use the determinant of a lattice, defined as $\det(\mathcal{L}) := |\det(\mathbf{B})|$ for any basis $\mathbf{B}$ of $\mathcal{L}$, as our measure of sparsity.³ It
+
+³This is indeed a measure of sparsity because $1/\det(\mathcal{L})$ is the average number of lattice points inside a random shift of any unit-volume body, or equivalently, the limit as $r$ goes to infinity of the number of lattice points per unit volume in a ball of radius $r$.
\ No newline at end of file
diff --git a/samples/texts/1897687/page_5.md b/samples/texts/1897687/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..d3c17fc30e6be35637e33819b309929d13c4192c
--- /dev/null
+++ b/samples/texts/1897687/page_5.md
@@ -0,0 +1,25 @@
+is an immediate consequence of the Poisson Summation Formula (Lemma 2.5) that $\det(\mathcal{L})^{1/n} \le 2\eta_{1/2}(\mathcal{L})$. Notice that this inequality formalizes the intuitive notion that “a lattice cannot be smooth and sparse simultaneously.”
+
+Dadush and Regev made the simple observation that the same statement is true when we consider projections of the lattice [DR16]. I.e., for any projection $\pi$ such that $\pi(\mathcal{L})$ is still a lattice, we have $\det(\pi(\mathcal{L}))^{1/\operatorname{rank}(\pi(\mathcal{L}))} \le 2\eta_{1/2}(\mathcal{L})$, where $\operatorname{rank}(\pi(\mathcal{L}))$ is the dimension of the span of $\pi(\mathcal{L})$. (Indeed, this fact is immediate from the above together with the identity $(\pi(\mathcal{L}))^* = \mathcal{L}^* \cap \operatorname{span}(\pi(\mathcal{L}))$.) Therefore, if we define
+
+$$ \eta_{\det}(\mathcal{L}) := \max_{\pi} \det(\pi(\mathcal{L}))^{1/\operatorname{rank}(\pi(\mathcal{L}))}, $$
+
+where the maximum is taken over all projections $\pi$ such that $\pi(\mathcal{L})$ is a lattice, then we have
+
+$$ \eta_{\det}(\mathcal{L}) \le 2\eta_{1/2}(\mathcal{L}). \quad (1.1) $$
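As a toy illustration of why $\eta_{\det}$ captures sparse projections (our example, not from the text): a diagonal lattice $d_1\mathbb{Z} \times \cdots \times d_n\mathbb{Z}$ can have tiny determinant overall while a coordinate projection is still sparse. Coordinate-subset projections are lattice projections for such lattices, so maximizing the normalized determinant over them lower-bounds $\eta_{\det}$.

```python
from itertools import combinations
from math import prod

# Diagonal lattice L = d_1 Z x d_2 Z x d_3 Z; projecting onto the
# coordinates in a subset S gives a lattice of determinant prod_{i in S} d_i.
d = [1.0, 0.01, 5.0]
n = len(d)

full = prod(d) ** (1 / n)  # det(L)^{1/n} is small because of the 0.01 factor

# Lower bound on eta_det: maximize det(pi(L))^{1/rank} over all
# coordinate-subset projections (a subset of all lattice projections).
eta_det_lb = max(
    prod(d[i] for i in S) ** (1 / len(S))
    for r in range(1, n + 1)
    for S in combinations(range(n), r)
)

assert abs(eta_det_lb - 5.0) < 1e-9  # the singleton projection onto the 5.0 axis
assert full < 0.4                    # (1 * 0.01 * 5)^(1/3) ~ 0.368
```

So the lattice looks dense "on average" ($\det^{1/n} \approx 0.37$), yet it has a projection of determinant $5$, and by Equation (1.1) it cannot be smooth at parameter much below $5$.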
+
+Dadush and Regev conjectured that Equation (1.1) is tight up to a factor of polylog(n). I.e., up to polylog factors, a lattice is not smooth if and only if it has a sparse projection. Regev and Stephens-Davidowitz proved this conjecture [RS17], and the resulting theorem, presented below, will be our main technical tool.
+
+**Theorem 1.6 ([RS17]).** For any lattice $\mathcal{L} \subset \mathbb{R}^n$,
+
+$$ \eta_{1/2}(\mathcal{L}) \le 10(\log n + 2)\eta_{\det}(\mathcal{L}). $$
+
+I.e., if $\eta_{1/2}(\mathcal{L}) \ge 10(\log n + 2)$, then there exists a lattice projection $\pi$ such that $\det(\pi(\mathcal{L})) \ge 1$.
+
+**coNP proof system.** Notice that Theorem 1.6 (together with Equation (1.1)) immediately implies that $O(\log n)$-GapSPP$_\epsilon$ is in coNP for $\epsilon = 1/2$. Indeed, a projection $\pi$ such that $\det(\pi(\mathcal{L}))^{1/\operatorname{rank}(\pi(\mathcal{L}))} \ge \eta_{1/2}(\mathcal{L})/O(\log n)$ can be used as a witness of “non-smoothness.” Theorem 1.6 shows that such a witness always exists, and Equation (1.1) shows that no such witness exists with $\det(\pi(\mathcal{L}))^{1/\operatorname{rank}(\pi(\mathcal{L}))} > 2\eta_{1/2}(\mathcal{L})$. In order to extend this result to all $\epsilon \in (0, 1)$, we use basic results about how $\eta_\epsilon(\mathcal{L})$ varies with $\epsilon$. (See Section 4.)
+
+**NISZK proof systems.** We give two different NISZK proof systems for $O(\log(n)\sqrt{\log(1/\epsilon)})$-GapSPP$_\epsilon$, both of which rely on Theorem 1.6.
+
+Our first proof system (shown in Figure 1, Section 3.1) uses many vectors $\mathbf{t}_1, \dots, \mathbf{t}_m$ sampled uniformly at random from a fundamental region of the lattice $\mathcal{L}$ as the common random string. The prover samples short vectors $\mathbf{e}_i$ (for $i = 1, \dots, m$) from the discrete Gaussian distributions over the lattice cosets $\mathcal{L} - \mathbf{t}_i$. The verifier accepts if and only if the matrix $E = \sum \mathbf{e}_i \mathbf{e}_i^T$ has small enough spectral norm. (I.e., the verifier accepts if the $\mathbf{e}_i$ are “short in all directions.”) In fact, Peikert and Vaikuntanathan used the exact same proof system for the different lattice problem $O(\sqrt{n})$-coGapSVP, and their proofs of correctness and zero knowledge also apply to our setting. However, the proof of soundness is quite different: we show that, if the lattice has a sparse projection $\pi$, then $\operatorname{dist}(\pi(\mathbf{t}_i), \pi(\mathcal{L}))$ will tend to be fairly large. It follows that $\sum \|\pi(\mathbf{e}_i)\|^2 = \operatorname{Tr}(\sum \pi(\mathbf{e}_i)\pi(\mathbf{e}_i)^T)$ will be fairly large with high probability, and therefore $\sum \mathbf{e}_i \mathbf{e}_i^T$ must have large spectral norm.
+
+Our second proof system follows from a reduction to the Entropy Approximation problem, which asks to estimate the entropy of the output distribution of a circuit on random input. Goldreich, Sahai, and Vadhan [GSV99] showed that Entropy Approximation is NISZK-complete, so that a problem is in NISZK if and only if it can be (Karp-)reduced to approximating the entropy of a circuit. If $\eta_\epsilon(\mathcal{L})$ is small, then we
\ No newline at end of file
diff --git a/samples/texts/1897687/page_6.md b/samples/texts/1897687/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..e6d95948a8a00a4a5e0057da0bd3c5673e5022bd
--- /dev/null
+++ b/samples/texts/1897687/page_6.md
@@ -0,0 +1,35 @@
+know that a continuous Gaussian modulo the lattice will be very close to the uniform distribution, and so (a suitable discretization of) this distribution will have high entropy. On the other hand, if $\eta_\varepsilon(\mathcal{L})$ is large, then Theorem 1.6 says that most of the measure of a continuous Gaussian modulo the lattice lies in a low-volume subset of $\mathbb{R}^n/\mathcal{L}$, and so (a discretization of) this distribution must have low entropy.
+
+This second proof system works for a wider range of $\varepsilon$. In particular, the first proof system is only statistical zero knowledge when $\varepsilon$ is negligible in the input size, whereas the second proof system works for any $\varepsilon \in (0, 1/2)$.
+
+## 1.3 Organization
+
+The remainder of the paper is organized as follows.
+
+* In Section 2 we recall the necessary background on lattices, proof systems, and probability.
+
+* In Section 3 we give two different NISZK proof systems for $O(\log(n)\sqrt{\log(1/\varepsilon)})$-GapSPP$_\varepsilon$.
+
+* In Section 4 we give a coNP proof system for $O(\log n)$-GapSPP$_\varepsilon$.
+
+* In Section 5 we show that $O(\sqrt{n})$-GapCRP $\in$ SZK, via a simple reduction to $O(1)$-GapSPP$_{1/4}$.
+
+# 2 Preliminaries
+
+## 2.1 Notation
+
+For any positive integer $d$, $[d]$ denotes the set $\{1, \dots, d\}$. We use bold lower-case letters to denote vectors. We write matrices in capital letters. The $i$th component (column) of a vector $\mathbf{x}$ (matrix $\mathbf{X}$) is written as $\mathbf{x}_i$ ($\mathbf{X}_i$). The function $\log$ denotes the natural logarithm unless otherwise specified. For $\mathbf{x} \in \mathbb{R}^n$, $\|\mathbf{x}\| := \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$ is the Euclidean norm. For a matrix $\mathbf{A} \in \mathbb{R}^{n \times m}$, $\|\mathbf{A}\| := \max_{\|\mathbf{x}\|=1} \|\mathbf{A}\mathbf{x}\|$ is the operator norm.
+
+We write $rB_2^n$ for the $n$-dimensional Euclidean ball of radius $r$. A set $S \subseteq \mathbb{R}^n$ is said to be *symmetric* if $-S = S$. The distance from a point $\mathbf{x} \in \mathbb{R}^n$ to a set $S \subseteq \mathbb{R}^n$ is defined to be $\text{dist}(\mathbf{x}, S) = \inf_{\mathbf{s} \in S} \|\mathbf{x} - \mathbf{s}\|$. We write $S^\perp$ to denote the subspace of vectors orthogonal to $S$. For a set $S \subseteq \mathbb{R}^n$ and a point $\mathbf{x} \in \mathbb{R}^n$, $\pi_S(\mathbf{x})$ denotes the orthogonal projection of $\mathbf{x}$ onto $\text{span}(S)$. For sets $A, B \subseteq \mathbb{R}^n$, we denote their Minkowski sum by $A + B = \{\mathbf{a} + \mathbf{b} : \mathbf{a} \in A, \mathbf{b} \in B\}$. We extend a function $f$ to a countable set $A$ in the natural way by defining $f(A) := \sum_{\mathbf{a} \in A} f(\mathbf{a})$.
+
+Throughout the paper, we write $C$ for an arbitrary universal constant $C > 0$, whose value might change from one use to the next.
+
+## 2.2 Lattices
+
+Here we provide some background on lattices. An $n$-dimensional *lattice* $\mathcal{L} \subset \mathbb{R}^n$ of rank $d$ is the set of integer linear combinations of $d$ linearly independent vectors $\mathbf{B} := (\mathbf{b}_1, \dots, \mathbf{b}_d)$,
+
+$$ \mathcal{L} = \mathcal{L}(\mathbf{B}) = \left\{ \mathbf{B}\mathbf{z} = \sum_{i \in [d]} z_i \cdot \mathbf{b}_i : \mathbf{z} \in \mathbb{Z}^d \right\}. $$
+
+We usually work with full-rank lattices, where $d = n$. A *sublattice* $\mathcal{L}' \subseteq \mathcal{L}$ is an additive subgroup of $\mathcal{L}$. The *dual lattice* of $\mathcal{L}$, denoted by $\mathcal{L}^*$, is defined as the set
+
+$$ \mathcal{L}^* = \left\{ \mathbf{y} \in \mathbb{R}^n : \forall \mathbf{v} \in \mathcal{L}, \langle \mathbf{v}, \mathbf{y} \rangle \in \mathbb{Z} \right\} $$
\ No newline at end of file
diff --git a/samples/texts/1897687/page_7.md b/samples/texts/1897687/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..5c53414bfdb7c2cf387a36b3b548a7745b7723ed
--- /dev/null
+++ b/samples/texts/1897687/page_7.md
@@ -0,0 +1,39 @@
+of all vectors having integer inner products with all vectors in $\mathcal{L}$. It is easy to check that $(\mathcal{L}^*)^* = \mathcal{L}$ and that, if $\mathbf{B}$ is a basis for $\mathcal{L}$, then $\mathbf{B}^* = \mathbf{B}(\mathbf{B}^T\mathbf{B})^{-1}$ is a basis for $\mathcal{L}^*$. The fundamental parallelepiped of a lattice $\mathcal{L}$ with respect to basis $\mathbf{B}$ is the set
+
+$$ \mathcal{P}(\mathbf{B}) = \left\{ \sum_{i \in [d]} c_i \mathbf{b}_i : 0 \le c_i < 1 \right\}. $$
+
+It is easy to see that $\mathcal{P}(\mathbf{B})$ is a fundamental domain of $\mathcal{L}$. I.e., it tiles $\mathbb{R}^n$ with respect to $\mathcal{L}$. For any lattice $\mathcal{L}(\mathbf{B})$ and point $\mathbf{x} \in \mathbb{R}^n$, there exists a unique point $\mathbf{y} \in \mathcal{P}(\mathbf{B})$ such that $\mathbf{y} - \mathbf{x} \in \mathcal{L}(\mathbf{B})$. We denote this vector by $\mathbf{y} = \mathbf{x} \bmod \mathbf{B}$. Notice that $\mathbf{y}$ can be computed in polynomial time given $\mathbf{B}$ and $\mathbf{x}$. We sometimes write $\mathbf{x} \bmod \mathcal{L}$ when the specific fundamental domain is not important, and we write $\mathbb{R}^n/\mathcal{L}$ for an arbitrary fundamental domain.
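The reduction $\mathbf{x} \bmod \mathbf{B}$ described above is simply a matter of writing $\mathbf{x}$ in the basis and discarding the integer parts of the coefficients. A minimal Python sketch for a $2 \times 2$ basis (our illustration; the basis and point are arbitrary choices):

```python
import math

def mod_B(b1, b2, x):
    """Return y = x mod B for the 2D basis (b1, b2): write x = c0*b1 + c1*b2,
    keep only the fractional parts of c0, c1, so y lies in P(B)."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    # Solve x = c0*b1 + c1*b2 with the explicit 2x2 inverse.
    c0 = (b2[1] * x[0] - b2[0] * x[1]) / det
    c1 = (-b1[1] * x[0] + b1[0] * x[1]) / det
    f0, f1 = c0 - math.floor(c0), c1 - math.floor(c1)  # each in [0, 1)
    return (f0 * b1[0] + f1 * b2[0], f0 * b1[1] + f1 * b2[1])

b1, b2 = (2.0, 0.0), (1.0, 3.0)
x = (7.3, -4.1)
y = mod_B(b1, b2, x)

# y - x must be a lattice vector: its coefficients in the basis are integers.
det = b1[0] * b2[1] - b2[0] * b1[1]
g0 = (b2[1] * (y[0] - x[0]) - b2[0] * (y[1] - x[1])) / det
g1 = (-b1[1] * (y[0] - x[0]) + b1[0] * (y[1] - x[1])) / det
assert abs(g0 - round(g0)) < 1e-9 and abs(g1 - round(g1)) < 1e-9
```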
+
+The determinant of a lattice $\mathcal{L}$ is defined to be $\det(\mathcal{L}) = \sqrt{\det(\mathbf{B}^T\mathbf{B})}$ for any basis $\mathbf{B}$ of $\mathcal{L}$. It is easy to verify that the determinant does not depend on the choice of basis and that $\det(\mathcal{L})$ is the volume of any fundamental domain of $\mathcal{L}$.
+
+The minimum distance of a lattice $\mathcal{L}$ is the length of the shortest non-zero lattice vector,
+
+$$ \lambda_1(\mathcal{L}) := \min_{\mathbf{y} \in \mathcal{L} \setminus \{\mathbf{0}\}} \|\mathbf{y}\|. $$
+
+Similarly, we define
+
+$$ \lambda_n(\mathcal{L}) := \min \max_{i \in [n]} \|\mathbf{y}_i\|, $$
+
+where the minimum is taken over linearly independent lattice vectors $\mathbf{y}_1, \dots, \mathbf{y}_n \in \mathcal{L}$. The covering radius of a lattice $\mathcal{L}$ is
+
+$$ \mu(\mathcal{L}) := \max_{\mathbf{t} \in \mathbb{R}^n} \mathrm{dist}(\mathbf{t}, \mathcal{L}). $$
+
+The Voronoi cell of a lattice $\mathcal{L}$ is the set
+
+$$ \nu(\mathcal{L}) := \{\mathbf{x} \in \mathbb{R}^n : \| \mathbf{x} \| \le \| \mathbf{y} - \mathbf{x} \|, \forall \mathbf{y} \in \mathcal{L} \setminus \{\mathbf{0}\}\} $$
+
+of vectors in $\mathbb{R}^n$ that are at least as close to $\mathbf{0}$ as to any other point of $\mathcal{L}$. It is easy to check that $\nu(\mathcal{L})$ is a symmetric polytope and that it tiles $\mathbb{R}^n$ with respect to $\mathcal{L}$. The following claim is an immediate consequence of the fact that an $n$-dimensional unit ball has volume at most $(2\pi e/n)^{n/2}$.
+
+**Claim 2.1.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$,
+
+$$ \mu(\mathcal{L}) \geq \sqrt{n/(2\pi e)} \cdot \det(\mathcal{L})^{1/n}. $$
+
+**Lemma 2.2.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ and $r \ge 0$,
+
+$$ |\mathcal{L} \cap r B_2^n| \le (5/\sqrt{n})^n \cdot \frac{(r + \mu(\mathcal{L}))^n}{\det(\mathcal{L})}. $$
+
+*Proof.* For each vector $\mathbf{y} \in \mathcal{L} \cap rB_2^n$, notice that $\nu(\mathcal{L}) + \mathbf{y} \subseteq (r + \mu(\mathcal{L}))B_2^n$. And, for distinct vectors $\mathbf{y}, \mathbf{y}' \in \mathcal{L}$, the sets $\nu(\mathcal{L}) + \mathbf{y}$ and $\nu(\mathcal{L}) + \mathbf{y}'$ are disjoint (up to a set of measure zero). Therefore,
+
+$$ \mathrm{vol}((r + \mu(\mathcal{L}))B_2^n) \geq \mathrm{vol}\left(\bigcup_{\mathbf{y} \in \mathcal{L} \cap rB_2^n} (\nu(\mathcal{L}) + \mathbf{y})\right) = |\mathcal{L} \cap rB_2^n| \mathrm{vol}(\nu(\mathcal{L})) = |\mathcal{L} \cap rB_2^n| \det(\mathcal{L}). $$
+
+The result follows by recalling that for any $r' > 0$, $\mathrm{vol}(r'B_2^n) \le (5r'/\sqrt{n})^n$. □
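For $\mathcal{L} = \mathbb{Z}^2$ (where $\det(\mathcal{L}) = 1$ and $\mu(\mathcal{L}) = \sqrt{2}/2$), the bound of Lemma 2.2 is easy to verify by brute force. An illustrative Python check (our example, not from the text):

```python
import math

def count_points(r):
    """Count the integer points in the closed disk of radius r (L = Z^2)."""
    R = int(r) + 1
    return sum(
        1
        for x in range(-R, R + 1)
        for y in range(-R, R + 1)
        if x * x + y * y <= r * r
    )

mu = math.sqrt(2) / 2  # covering radius of Z^2
for r in [1, 3, 5, 10]:
    # Lemma 2.2 with n = 2 and det(L) = 1.
    bound = (5 / math.sqrt(2)) ** 2 * (r + mu) ** 2
    assert count_points(r) <= bound
```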
\ No newline at end of file
diff --git a/samples/texts/1897687/page_8.md b/samples/texts/1897687/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..9667230831c9b516538681a9f8a28fd82c1eabe1
--- /dev/null
+++ b/samples/texts/1897687/page_8.md
@@ -0,0 +1,36 @@
+**Lemma 2.3 ([GMR04]).** For any lattice $\mathcal{L} \subset \mathbb{R}^n$,
+
+$$ \underset{\mathbf{t} \sim \mathbb{R}^n / \mathcal{L}}{\mathbb{E}} [\mathrm{dist}(\mathbf{t}, \mathcal{L})^2] \geq \mu(\mathcal{L})^2/4, $$
+
+where $\mathbf{t} \in \mathbb{R}^n/\mathcal{L}$ is sampled uniformly at random.
+
+*Proof.* Let $\mathbf{v} \in \mathbb{R}^n$ be such that $\mathrm{dist}(\mathbf{v}, \mathcal{L}) = \mu(\mathcal{L})$. Notice that $\mathbf{v} - \mathbf{t} \bmod \mathcal{L}$ is uniformly distributed. And, by the triangle inequality, $\mathrm{dist}(\mathbf{v} - \mathbf{t}, \mathcal{L}) + \mathrm{dist}(\mathbf{t}, \mathcal{L}) \geq \mathrm{dist}(\mathbf{v}, \mathcal{L}) = \mu(\mathcal{L})$. So,
+
+$$ \underset{\mathbf{t} \sim \mathbb{R}^n / \mathcal{L}}{\mathbb{E}} [\mathrm{dist}(\mathbf{t}, \mathcal{L})] = \frac{1}{2} \cdot \underset{\mathbf{t} \sim \mathbb{R}^n / \mathcal{L}}{\mathbb{E}} [\mathrm{dist}(\mathbf{v} - \mathbf{t}, \mathcal{L}) + \mathrm{dist}(\mathbf{t}, \mathcal{L})] \geq \mu(\mathcal{L})/2. $$
+
+The result then follows from Jensen's inequality, since $\mathbb{E}[\mathrm{dist}(\mathbf{t}, \mathcal{L})^2] \geq \big(\mathbb{E}[\mathrm{dist}(\mathbf{t}, \mathcal{L})]\big)^2$. $\square$
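For $\mathcal{L} = \mathbb{Z}^n$ the lemma can be checked directly: $\mathrm{dist}(\mathbf{t}, \mathbb{Z}^n)^2$ has expectation exactly $n/12$ for uniform $\mathbf{t}$, while $\mu(\mathbb{Z}^n)^2/4 = n/16$. A quick seeded Monte Carlo sketch (our illustration only):

```python
import random

random.seed(0)

n, trials = 2, 50_000
total = 0.0
for _ in range(trials):
    # Uniform t in the fundamental cube [0,1)^n; dist(t, Z^n)^2 is the sum
    # of squared distances of each coordinate to its nearest integer.
    total += sum(min(u, 1 - u) ** 2 for u in (random.random() for _ in range(n)))
est = total / trials

mu_sq = n / 4  # mu(Z^n)^2, since mu(Z^n) = sqrt(n)/2
assert est >= mu_sq / 4          # Lemma 2.3: E[dist^2] >= mu^2 / 4
assert abs(est - n / 12) < 0.01  # the exact expectation is n/12
```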
+
+A *lattice projection* for a lattice $\mathcal{L} \subset \mathbb{R}^n$ is an orthogonal projection $\pi : \mathbb{R}^n \to \mathbb{R}^n$ defined by $\pi(\mathbf{x}) := \pi_{S^\perp}(\mathbf{x})$ for some set of lattice vectors $S \subset \mathcal{L}$.
+
+**Claim 2.4.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ and any lattice projection $\pi$, $\pi(\mathcal{L})$ is a lattice. Furthermore, if $\mathbf{t} \in \mathbb{R}^n/\mathcal{L}$ is sampled uniformly at random, then $\pi(\mathbf{t})$ is uniform mod $\pi(\mathcal{L})$.
+
+*Proof.* The first statement follows from the well known fact that, if $W = \operatorname{span} S$ for some set of lattice vectors $S \subset \mathcal{L}$, then there exists a basis $\mathbf{B} := (\mathbf{b}_1, \dots, \mathbf{b}_n)$ of $\mathcal{L}$ such that $\operatorname{span}(\mathbf{b}_1, \dots, \mathbf{b}_k) = W$, where $k := \dim W$. (See, e.g., [MG02].) From this, it follows immediately that $\pi(\mathbf{b}_{k+1}), \dots, \pi(\mathbf{b}_n)$ are linearly independent and $\pi(\mathcal{L})$ is the lattice spanned by these vectors, where $\pi := \pi_{S^\perp}$.
+
+The second statement follows from the following similarly well known fact. Let $\tilde{\mathbf{b}}_i := \pi_{\{\mathbf{b}_1, \dots, \mathbf{b}_{i-1}\}}^\perp (\mathbf{b}_i)$ be the Gram-Schmidt vectors of the basis $\mathbf{B}$ described above. Then, the hyperrectangle
+
+$$ \tilde{\mathbb{R}} := \left\{ \sum_i a_i \tilde{\mathbf{b}}_i : -1/2 < a_i \le 1/2 \right\} $$
+
+is a fundamental domain of the lattice. (See, e.g., [Bab85b].) I.e., for each $\mathbf{t} \in \mathbb{R}^n/\mathcal{L}$, there is a unique representative $\tilde{\mathbf{t}} \in \tilde{\mathbb{R}}$ with $\tilde{\mathbf{t}} = \mathbf{t} \bmod \mathcal{L}$. The result then follows by noting that, if $\tilde{\mathbf{t}} \in \tilde{\mathbb{R}}$ is chosen uniformly at random, then clearly $\pi(\tilde{\mathbf{t}})$ is uniform in $\pi(\tilde{\mathbb{R}})$, which is a fundamental region of $\pi(\mathcal{L})$. $\square$
+
+## 2.3 Gaussian Measure
+
+Here we review some useful background on Gaussians over lattices. For a positive parameter $s > 0$ and vector $\mathbf{x} \in \mathbb{R}^n$, we define the Gaussian mass of $\mathbf{x}$ as $\rho_s(\mathbf{x}) = e^{-\pi\|\mathbf{x}\|^2/s^2}$. For a measurable set $A \subseteq \mathbb{R}^n$, we define $\gamma_s(A) = s^{-n} \int_A \rho_s(\mathbf{x}) d\mathbf{x}$. It is easy to see that $\gamma_s(\mathbb{R}^n) = 1$ and hence $\gamma_s$ is a probability measure. We define the discrete Gaussian distribution over a countable set $A$ as
+
+$$ D_{A,s}(\boldsymbol{x}) = \frac{\rho_s(\boldsymbol{x})}{\rho_s(A)}, \forall \boldsymbol{x} \in A. $$
+
+In all cases, the parameter $s$ is taken to be one when omitted. The following lemma is the Poisson Summation Formula for the Gaussian mass of a lattice.
+
+**Lemma 2.5.** For any (full-rank) lattice $\mathcal{L}$ and $s > 0$,
+
+$$ \rho_s(\mathcal{L}) = \frac{s^n}{\det(\mathcal{L})} \cdot \rho_{1/s}(\mathcal{L}^*). $$
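
For $\mathcal{L} = \mathbb{Z}$ (where $\det = 1$ and $\mathbb{Z}^* = \mathbb{Z}$), this is the classical theta-function identity $\rho_s(\mathbb{Z}) = s\,\rho_{1/s}(\mathbb{Z})$, which is easy to check numerically (a sketch, our addition):

```python
import math

# Numeric check of Poisson summation for the lattice Z (det = 1, Z* = Z):
# rho_s(Z) = sum_k exp(-pi k^2 / s^2) satisfies rho_s(Z) = s * rho_{1/s}(Z).
def rho(s, terms=60):
    return 1.0 + 2.0 * sum(math.exp(-math.pi * k * k / s**2)
                           for k in range(1, terms))

s = 1.7
lhs = rho(s)
rhs = s * rho(1.0 / s)
print(lhs, rhs)  # both ~ 1.7004
```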
\ No newline at end of file
diff --git a/samples/texts/1897687/page_9.md b/samples/texts/1897687/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..03052efd0aa9c4aedbb12ad3553dddaab241d838
--- /dev/null
+++ b/samples/texts/1897687/page_9.md
@@ -0,0 +1,41 @@
+We will also need Banaszczyk's celebrated lemma [Ban93, Lemma 1.5].
+
+**Lemma 2.6 ([Ban93]).** For any lattice $\mathcal{L} \subset \mathbb{R}^n$, shift vector $\mathbf{t} \in \mathbb{R}^n$, and $r \ge 1/\sqrt{2\pi}$,
+
+$$ \rho((\mathcal{L} + \mathbf{t}) \setminus r\sqrt{n}B_2^n) \le (\sqrt{2\pi e r^2}e^{-\pi r^2})^n \cdot \rho(\mathcal{L}). $$
+
+Micciancio and Regev introduced a lattice parameter called the smoothing parameter. For an $n$-dimensional lattice $\mathcal{L}$ and $\epsilon > 0$, the smoothing parameter $\eta_{\epsilon}(\mathcal{L})$ is defined as the smallest $s$ such that $\rho_{1/s}(\mathcal{L}^*) \le 1 + \epsilon$. The motivation for defining the smoothing parameter comes from the following two facts [MR04].
+
+**Claim 2.7.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$, shift vector $\mathbf{t} \in \mathbb{R}^n$, $\epsilon \in (0, 1)$, and parameter $s \ge \eta_{\epsilon}(\mathcal{L})$,
+
+$$ \frac{1 - \epsilon}{1 + \epsilon} \cdot \rho_s(\mathcal{L}) \le \rho_s(\mathcal{L} - \mathbf{t}) \le \rho_s(\mathcal{L}). $$
+
+**Lemma 2.8.** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ and $s \ge \eta_{\epsilon}(\mathcal{L})$,
+
+$$ \Delta((D_s \bmod \mathcal{L}), U(\mathbb{R}^n/\mathcal{L})) \le \epsilon/2, $$
+
+where $D_s$ is the continuous Gaussian distribution with parameter $s$ and $U(\mathbb{R}^n/\mathcal{L})$ denotes the uniform distribution over $\mathbb{R}^n/\mathcal{L}$.
+
+We use the following tool for decreasing $\epsilon$, introduced in [CDLP13].
+
+**Lemma 2.9 ([CDLP13], Lemma 2.4).** For any lattice $\mathcal{L} \subset \mathbb{R}^n$ and any $0 < \epsilon' \le \epsilon < 1$,
+
+$$ \eta_{\epsilon'}(\mathcal{L}) \le \sqrt{\log(1/\epsilon')/\log(1/\epsilon)} \cdot \eta_{\epsilon}(\mathcal{L}). $$
+
+*Proof.* We may assume without loss of generality that $\eta_{\epsilon}(\mathcal{L}) = 1$. Notice that this implies that $\lambda_1(\mathcal{L}^*) \ge \sqrt{\log(1/\epsilon)/\pi}$. Then, for any $s \ge 1$,
+
+$$ \rho_{1/s}(\mathcal{L}^* \setminus \{\mathbf{0}\}) = \sum_{\mathbf{w} \in \mathcal{L}^* \setminus \{\mathbf{0}\}} \exp(-\pi(s^2-1)\|\mathbf{w}\|^2) \cdot \rho(\mathbf{w}) \le \exp(-\pi(s^2-1)\lambda_1(\mathcal{L}^*)^2) \cdot \rho(\mathcal{L}^* \setminus \{\mathbf{0}\}) \le \epsilon^{s^2}. $$
+
+Setting $s := \sqrt{\log(1/\epsilon')/\log(1/\epsilon)}$ gives the result. $\square$
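
For intuition (our addition, not part of the original), the lemma can be checked numerically for $\mathcal{L} = \mathbb{Z}$ (so $\mathcal{L}^* = \mathbb{Z}$), computing $\eta_\epsilon$ by bisection on the defining condition $\rho_{1/s}(\mathbb{Z} \setminus \{0\}) \le \epsilon$:

```python
import math

# eta_eps(Z): smallest s with rho_{1/s}(Z*) <= 1 + eps, i.e. (with Z* = Z)
# 2 * sum_{k>=1} exp(-pi s^2 k^2) <= eps.
def tail(s, terms=200):
    return 2.0 * sum(math.exp(-math.pi * s * s * k * k) for k in range(1, terms))

def eta(eps):
    lo, hi = 1e-3, 10.0
    for _ in range(100):            # bisection: tail(s) is decreasing in s
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if tail(mid) > eps else (lo, mid)
    return hi

eps, eps2 = 0.5, 0.01               # eps' = 0.01 <= eps = 0.5
bound = math.sqrt(math.log(1 / eps2) / math.log(1 / eps)) * eta(eps)
print(eta(eps2), bound)             # Lemma 2.9: eta_{eps'}(Z) <= bound
```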
+
+**Lemma 2.10.** For any lattice $\mathcal{L} \subset \mathbb{Q}^n$ with basis $\mathbf{B}$ whose bit length is $\beta$ and any $\epsilon \in (0, 1/2)$, we have $\eta_{\epsilon}(\mathcal{L}) \le 2^{\text{poly}(\beta)}\sqrt{\log(1/\epsilon)}$ and $\lambda_n(\mathcal{L}) \le 2\mu(\mathcal{L}) \le 2^{\text{poly}(\beta)}$.
+
+## 2.4 Sampling from the Discrete Gaussian
+
+For any $\mathbf{B} = (\mathbf{b}_1, \dots, \mathbf{b}_n) \in \mathbb{R}^{n \times n}$, let
+
+$$ \| \tilde{\mathbf{B}} \| := \max_i \| \pi_{\{\mathbf{b}_1, \dots, \mathbf{b}_{i-1}\}}^\perp (\mathbf{b}_i) \|, $$
+
+i.e., $\|\tilde{\mathbf{B}}\|$ is the length of the longest Gram-Schmidt vector of $\mathbf{B}$.
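
The quantity $\|\tilde{\mathbf{B}}\|$ can be computed with a plain Gram-Schmidt pass (an illustrative sketch, our addition; the basis below is a made-up example):

```python
# Compute the Gram-Schmidt vectors of a basis (given as row vectors) and
# return the length of the longest one, i.e. ||B~|| as defined above.
def gram_schmidt_max(B):
    ortho = []
    for b in B:
        v = list(b)
        for u in ortho:  # subtract the projection of v onto each earlier b~
            coef = sum(x * y for x, y in zip(v, u)) / sum(x * x for x in u)
            v = [x - coef * y for x, y in zip(v, u)]
        ortho.append(v)
    return max(sum(x * x for x in v) ** 0.5 for v in ortho)

B = [[1.0, 0.0], [1.0, 2.0]]    # hypothetical basis: b1 = (1,0), b2 = (1,2)
print(gram_schmidt_max(B))      # b~1 = (1,0), b~2 = (0,2), so ||B~|| = 2.0
```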
+
+We recall the following result from a sequence of works due to Klein [Kle00]; Gentry, Peikert, and Vaikuntanathan [GPV08]; and Brakerski et al. [BLP$^{+}$13].
\ No newline at end of file
diff --git a/samples/texts/4164463/page_1.md b/samples/texts/4164463/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..0fa49bd17575d6d41759369a0eb5e3d526cea819
--- /dev/null
+++ b/samples/texts/4164463/page_1.md
@@ -0,0 +1,30 @@
+# Bivariate Beta Distributions and Beyond
+
+Jacobs, Rianne
+
+University of Pretoria, Department of Statistics
+cnr. Lynnwood Road and Roper Street
+Hatfield 0083, South Africa
+E-mail: rianne.jacobs@up.ac.za
+
+Bekker, Andriette
+
+University of Pretoria, Department of Statistics
+cnr. Lynnwood Road and Roper Street
+Hatfield 0083, South Africa
+E-mail: andriette.bekker@up.ac.za
+
+Human, Schalk
+
+University of Pretoria, Department of Statistics
+cnr. Lynnwood Road and Roper Street
+Hatfield 0083, South Africa
+E-mail: schalk.human@up.ac.za
+
+## Introduction
+
+Bivariate beta distributions make up a major part of statistical distribution theory. They form part of the Dirichlet family of distributions, but have become an important family of distributions in their own right. Many bivariate beta distributions have been derived from applications or as extensions or generalizations of other well-known bivariate beta distributions. Since bivariate beta distributions are used in a wide variety of applications such as Bayesian statistics and reliability theory, there exists a large body of research on the development and derivation of new bivariate beta distributions. In this research we considered the Kummer type beta distributions and, more specifically, in this paper, the *bivariate Kummer-beta type IV* distribution. This distribution is an extension of the *bivariate beta type IV* distribution (or Jones' model), which has its roots in the model proposed by Libby and Novick (1982). It was, however, more explicitly derived by Jones (2001) and Olkin and Liu (2003). Furthermore, it is a special case of the model proposed by Sarabia and Castillo (2006). Although in this paper we explicitly derive the *bivariate Kummer-beta type IV* distribution, it is also a special case of the *bimatrix variate Kummer-beta type IV distribution* defined by Bekker et al. (2010).
+
+Kummer-type distributions form an integral part of statistical distribution theory and a number of these distributions have been proposed. In the univariate case, for example, the *Kummer-gamma* and the *Kummer-beta* were introduced by Armero and Bayarri (1997) and Ng and Kotz (1995). The latter also proposed and studied the *multivariate Kummer-gamma* and *multivariate Kummer-beta* families of distributions. In the matrix variate case, there is the work by Gupta et al. (2001), Nagar and Gupta (2002) and Nagar and Cardeño (2001). These authors proposed and studied matrix variate generalizations of the multivariate Kummer-beta and the multivariate Kummer-gamma families of distributions, called the *matrix variate Kummer-Dirichlet* (or *matrix variate Kummer-beta*) and the *matrix variate Kummer-gamma* distributions, respectively. It should be noted that these Kummer distributions get their name from the fact that their normalizing constants are all defined in terms of one of the two so-called Kummer functions (see e.g. Rainville, 1960, pp. 124-126).
+
+The rest of this paper is structured as follows. First we derive the joint probability density function (pdf), $g(x_1, x_2)$, of the bivariate Kummer-beta type IV distribution. Then, the pdf's of
\ No newline at end of file
diff --git a/samples/texts/4164463/page_2.md b/samples/texts/4164463/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..4aa6a7f082a429006c42b81386500f0c0b3884b1
--- /dev/null
+++ b/samples/texts/4164463/page_2.md
@@ -0,0 +1,35 @@
+the marginal distributions, $m(x_i)$ for $i=1,2$, the pdf's of the conditional distributions, $h(x_i|x_j)$ for $i,j=1,2$ and $i \neq j$, and the product moment, $E(X_1^r X_2^s)$, of the bivariate Kummer-beta type IV are derived. We investigate the influence of the shape parameter, $\psi$, of the bivariate Kummer-beta type IV distribution, that is, its effect on the correlation between the components of this distribution as well as on the pdf of $(X_1, X_2)$ and the marginal pdf of $X_1$. An example of a possible application will be presented.
+
+## The Bivariate Kummer-Beta type IV distribution
+
+In this section we derive the pdf of the new bivariate Kummer-beta type IV distribution, $g(x_1, x_2)$, the pdf's of the marginal distributions, $m(x_i)$ for $i = 1, 2$, the pdf's of the conditional distributions, $h(x_i|x_j)$ for $i, j = 1, 2$ and $i \neq j$, and the product moment, $E(X_1^r X_2^s)$.
+
+The pdf, $g(x_1, x_2)$, is derived by extending the kernel of the Jones' bivariate beta distribution, which has pdf
+
+$$f(x_1, x_2) = C x_1^{a-1} x_2^{b-1} (1-x_1)^{b+c-1} (1-x_2)^{a+c-1} (1-x_1 x_2)^{-(a+b+c)}$$
+
+for $0 \le x_1, x_2 \le 1$, $a, b, c > 0$ and where $C^{-1} = B(a, b, c) = \frac{\Gamma(a)\Gamma(b)\Gamma(c)}{\Gamma(a+b+c)}$ denotes the normalizing constant (Jones, 2001). Clearly, $X_1$ and $X_2$ each have a standard beta type I distribution, i.e. $X_1 \sim Beta(a, c)$ and $X_2 \sim Beta(b, c)$ over $0 \le x_1, x_2 \le 1$.
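
As a quick numerical check (our addition, assuming $a = b = c = 2$), the constant $C = 1/B(a, b, c)$ indeed makes $f$ integrate to one over the unit square:

```python
import math

# Midpoint-rule check that Jones' bivariate beta pdf integrates to 1 for
# a = b = c = 2, where C = Gamma(a+b+c) / (Gamma(a)Gamma(b)Gamma(c)).
a = b = c = 2.0
C = math.gamma(a + b + c) / (math.gamma(a) * math.gamma(b) * math.gamma(c))

def f(x1, x2):
    return (C * x1 ** (a - 1) * x2 ** (b - 1) * (1 - x1) ** (b + c - 1)
            * (1 - x2) ** (a + c - 1) * (1 - x1 * x2) ** (-(a + b + c)))

n = 500
h = 1.0 / n
total = sum(f((i + 0.5) * h, (j + 0.5) * h)
            for i in range(n) for j in range(n)) * h * h
print(total)  # ~ 1.0
```

Note that the integrand stays bounded near $(1, 1)$, since the $(1-x_1)$ and $(1-x_2)$ factors tame the singular $(1-x_1x_2)$ term.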
+
+**Theorem 1**
+
+The pdf of the bivariate Kummer-beta type IV distribution is given by
+
+$$ (1) \quad g(x_1, x_2) = K x_1^{a-1} x_2^{b-1} (1-x_1)^{b+c-1} (1-x_2)^{a+c-1} (1-x_1x_2)^{-(a+b+c)} e^{-\psi(x_1+x_2)} $$
+
+where $0 \le x_1, x_2 \le 1$, $a, b, c > 0$, $-\infty < \psi < \infty$, and the normalizing constant K is given by
+
+$$ (2) \quad K^{-1} = \sum_{k=0}^{\infty} \frac{(a+b+c)_k}{k!} \frac{{}_1F_1(a+k; a+b+c+k; -\psi)\, {}_1F_1(b+k; a+b+c+k; -\psi)}{(B(a+c,b+k)B(b+c,a+k))^{-1}} $$
+
+where $B(\cdot)$ denotes the beta function and ${}_1F_1(\cdot)$ denotes the confluent hypergeometric function (Gradshteyn, 2007, Section 9.2, p 1022). This distribution is denoted as $(X_1, X_2) \sim BKB^{IV}(a, b, c, \psi)$.
+
+**Proof**
+
+Define the bivariate Kummer-beta type IV distribution as
+
+$$ g(x_1, x_2) = K x_1^{a-1} x_2^{b-1} (1-x_1)^{b+c-1} (1-x_2)^{a+c-1} (1-x_1 x_2)^{-(a+b+c)} e^{-\psi(x_1+x_2)}. $$
+
+The normalizing constant is obtained by integrating over the full support of $g(x_1, x_2)$:
+
+$$ \begin{align*} K^{-1} &= \int_{0}^{1} \int_{0}^{1} x_1^{a-1} x_2^{b-1} (1-x_1)^{b+c-1} (1-x_2)^{a+c-1} (1-x_1 x_2)^{-(a+b+c)} e^{-\psi(x_1+x_2)}\, dx_1\, dx_2 \\ &= \sum_{k=0}^{\infty} \frac{(a+b+c)_k}{k!} \int_{0}^{1} x_{2}^{b+k-1} (1-x_{2})^{a+c-1} e^{-\psi x_{2}} \int_{0}^{1} x_{1}^{a+k-1} (1-x_{1})^{b+c-1} e^{-\psi x_{1}}\, dx_{1}\, dx_{2} \\ &= \sum_{k=0}^{\infty} \frac{(a+b+c)_k}{k!} \int_{0}^{1} x_{2}^{b+k-1} (1-x_{2})^{a+c-1} e^{-\psi x_{2}} \frac{{}_1F_1(a+k; a+b+c+k; -\psi)}{(B(b+c,a+k))^{-1}}\, dx_{2} \\ &= \sum_{k=0}^{\infty} \frac{(a+b+c)_k}{k!} \frac{{}_1F_1(a+k; a+b+c+k; -\psi)\, {}_1F_1(b+k; a+b+c+k; -\psi)}{(B(a+c,b+k)B(b+c,a+k))^{-1}} \end{align*} $$
+
+The above result is obtained by expanding the term $(1-x_1x_2)^{-(a+b+c)}$ as a power series and then using the integral representation of the confluent hypergeometric function, ${}_1F_1(\cdot)$ (Gradshteyn, 2007, Eq 3.383, p 347). ■
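
One consistency check of (2) (our addition): at $\psi = 0$ every ${}_1F_1$ factor equals one, so $K^{-1}$ must reduce to Jones' constant $B(a,b,c) = \Gamma(a)\Gamma(b)\Gamma(c)/\Gamma(a+b+c)$. A pure-Python sketch with a series implementation of ${}_1F_1$:

```python
import math

def hyp1f1(alpha, beta, z, terms=120):
    # confluent hypergeometric 1F1 via its defining power series
    term, total = 1.0, 1.0
    for m in range(terms):
        term *= (alpha + m) / (beta + m) * z / (m + 1)
        total += term
    return total

def beta_fn(p, q):
    # beta function via log-gammas (safe for large q)
    return math.exp(math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q))

def K_inv(a, b, c, psi, kmax=1500):
    total, poch = 0.0, 1.0           # poch = (a+b+c)_k / k!
    for k in range(kmax):
        total += (poch * beta_fn(a + c, b + k) * beta_fn(b + c, a + k)
                  * hyp1f1(a + k, a + b + c + k, -psi)
                  * hyp1f1(b + k, a + b + c + k, -psi))
        poch *= (a + b + c + k) / (k + 1)
    return total

a = b = c = 2.0
jones = math.gamma(a) * math.gamma(b) * math.gamma(c) / math.gamma(a + b + c)
print(K_inv(a, b, c, 0.0), jones)   # both ~ 1/120
```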
\ No newline at end of file
diff --git a/samples/texts/4164463/page_3.md b/samples/texts/4164463/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..a0c1eb0d47a5be50b83761f7b1dcb1e80f99f552
--- /dev/null
+++ b/samples/texts/4164463/page_3.md
@@ -0,0 +1,39 @@
+Note, the bivariate Kummer-beta type IV distribution may also be obtained by substituting $p = 1$ in the pdf of the bimatrix variate Kummer-beta type IV distribution defined by Bekker et al. (2010) (Section 5.3).
+
+**Theorem 2**
+
+If $(X_1, X_2) \sim BKB^{IV}(a, b, c, \psi)$, the marginal pdf of $X_1$ is given by
+
+$$ (3) \qquad m(x_1) = K x_1^{a-1} (1-x_1)^{b+c-1} e^{-\psi x_1} B(b, a+c)\, \Phi_1(b, a+b+c; a+b+c; x_1, -\psi) $$
+
+$$ (4) \qquad = K x_1^{a-1} (1-x_1)^{b+c-1} e^{-\psi x_1} \sum_{k=0}^{\infty} \frac{(a+b+c)_k}{k!}\, x_1^k\, \frac{{}_1F_1(b+k; a+b+c+k; -\psi)}{(B(b+k, a+c))^{-1}} $$
+
+where $0 \le x_1, x_2 \le 1$, $a, b, c > 0$, $B(\cdot, \cdot)$ denotes the beta function, $\Phi_1(\cdot)$ denotes the confluent hypergeometric series of two variables (Gradshteyn, 2007, Eq 9.261, p 1031) and $K$ is defined in (2).
+
+**Proof**
+
+Using (1), the first representation of $m(x_1)$ given in (3), is obtained by using the integral representation of the confluent hypergeometric series of two variables, $\Phi_1(\cdot)$ (Gradshteyn, 2007, Eq 3.385, p 349). The second representation of $m(x_1)$ given in (4), is obtained by expanding the term $(1-x_1x_2)^{-(a+b+c)}$ as a power series and using the integral representation of the confluent hypergeometric function, ${}_1F_1(\cdot, \cdot)$.
+
+Equation (4) is more useful (in the sense that it is easier to implement and/or program) in computer packages such as Mathematica when we want to graph the pdf $m(x_i)$ for $i = 1, 2$.
+
+The expression for the conditional pdf of $X_2|X_1$ can easily be obtained by using the definition $h(x_2|x_1) = \frac{g(x_1,x_2)}{m(x_1)}$. Note that the marginal pdf of $X_2$ and the conditional pdf of $X_1|X_2$ are obtained by substituting $x_2$ for $x_1$ in the respective expressions and interchanging the parameters $a$ and $b$.
+
+**Theorem 3**
+
+If $(X_1, X_2) \sim BKB^{IV}(a, b, c, \psi)$, the product moment, $E(X_1^r X_2^s)$, equals
+
+$$ (5) \qquad K \sum_{k=0}^{\infty} \frac{(a+b+c)_k\, {}_1F_1(a+k+r; a+b+c+k+r; -\psi)\, {}_1F_1(b+k+s; a+b+c+k+s; -\psi)}{k!\, (B(a+c,b+k+s)B(b+c,a+k+r))^{-1}} \\ = (A(a,b,c,0,0))^{-1} \times A(a,b,c,s,r) $$
+
+where
+
+$$ A(a,b,c,s,r) = \sum_{k=0}^{\infty} \frac{(a+b+c)_k\, {}_1F_1(a+k+r; a+b+c+k+r; -\psi)\, {}_1F_1(b+k+s; a+b+c+k+s; -\psi)}{k!\, (B(a+c,b+k+s)B(b+c,a+k+r))^{-1}} $$
+
+and $A(a,b,c,0,0) = K$ defined in (2).
+
+**Proof**
+
+From (1), expanding the term $(1-x_1x_2)^{-(a+b+c)}$ as a power series and using the integral representation of the confluent hypergeometric function, ${}_1F_1(\cdot, \cdot)$, we obtain (5).
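
To illustrate (5) (our addition, with a series implementation of ${}_1F_1$), the raw moments and hence the covariance can be computed directly; at $\psi = 0$ with $a = b = c = 2$ the marginal means must equal the $Beta(2,2)$ mean $1/2$:

```python
import math

def hyp1f1(alpha, beta, z, terms=120):
    # confluent hypergeometric 1F1 via its defining power series
    term, total = 1.0, 1.0
    for m in range(terms):
        term *= (alpha + m) / (beta + m) * z / (m + 1)
        total += term
    return total

def beta_fn(p, q):
    return math.exp(math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q))

def A(a, b, c, psi, s, r, kmax=1500):
    # the series A(a,b,c,s,r) of Theorem 3; E(X1^r X2^s) = A(..,s,r)/A(..,0,0)
    total, poch = 0.0, 1.0           # poch = (a+b+c)_k / k!
    for k in range(kmax):
        total += (poch * beta_fn(a + c, b + k + s) * beta_fn(b + c, a + k + r)
                  * hyp1f1(a + k + r, a + b + c + k + r, -psi)
                  * hyp1f1(b + k + s, a + b + c + k + s, -psi))
        poch *= (a + b + c + k) / (k + 1)
    return total

a = b = c = 2.0
psi = 0.0
norm = A(a, b, c, psi, 0, 0)
m10 = A(a, b, c, psi, 0, 1) / norm   # E[X1]
m01 = A(a, b, c, psi, 1, 0) / norm   # E[X2]
m11 = A(a, b, c, psi, 1, 1) / norm   # E[X1 X2]
print(m10, m01, m11 - m10 * m01)     # means ~ 0.5; covariance positive
```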
+
+**Shape Analysis**
+
+Figure 1 illustrates the effect of the parameter $\psi$ for $\psi = -1.1, 0$ and $1.1$ on the bivariate Kummer-beta type IV pdf (see (1)) for $a = b = c = 2$. We also show the contour plots for easy comparison. We note that the domain of each graph in Figure 1 is $[0, 1] \times [0, 1]$. Considering the graphs in Figure 1 we see that the parameter $\psi$ shifts the bell of the density. For negative $\psi$, the bell of the density moves away from the origin, while for positive $\psi$, the bell of the density moves towards the origin.
\ No newline at end of file
diff --git a/samples/texts/4164463/page_4.md b/samples/texts/4164463/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..c94dd914e4608c23460284235d8a2453127231c9
--- /dev/null
+++ b/samples/texts/4164463/page_4.md
@@ -0,0 +1,13 @@
+**Figure 1: Bivariate Kummer-beta type IV density function (a) and contour plots (b)**
+
+**Figure 2: Correlation between dependent components**
+
+The five curves are: thick solid line $a = 1, b = 1, c = 10$; medium solid line $a = 2, b = 4, c = 5$; thin solid line $a = 0.5, b = 0.9, c = 0.1$; dashed line $a = 2.5, b = 4, c = 0.5$; dotted line $a = 0.1, b = 0.5, c = 5$.
+
+Figure 2 illustrates the effect of the parameter $\psi \in [-10, 10]$ on the correlation between $X_1$ and $X_2$ using (5). We see that: (i) the parameter $\psi$ can both increase and decrease the correlation for different values of $a, b$ and $c$; and (ii) a wide range of correlations between 0 and 1 can be obtained, depending on the values of $a, b, c$ and $\psi$. The correlation cannot be negative because the central bivariate Kummer-beta type IV distribution is totally positive of order 2.
+
+When looking at the marginal density, we found that the parameter $\psi$ introduces skewness to the symmetric cases of the Jones' bivariate beta distribution and changes the kurtosis of the asymmetric cases.
+
+## An Application
+
+The stress-strength model in the context of reliability is a well-known application of various
\ No newline at end of file
diff --git a/samples/texts/4164463/page_5.md b/samples/texts/4164463/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..fae4287733439065c5d2977491b50dcd15a4a7a4
--- /dev/null
+++ b/samples/texts/4164463/page_5.md
@@ -0,0 +1,37 @@
+bivariate beta distributions. This model describes the life of a component with a random strength $X_2$ subjected to a random stress $X_1$. The reliability of a component can be expressed as $P(X_1 < X_2)$ or $P(\frac{X_1}{X_2} < 1) = P(R < 1)$. A frequently used method for obtaining the reliability is to derive the distribution of the ratio of the two components, stress $X_1$ and strength $X_2$. In this section we derive the exact expression for the pdf of the ratio of the correlated components of the bivariate Kummer-beta type IV distribution, i.e. $R = \frac{X_1}{X_2}$, in terms of Meijer's G-function (see Mathai, 1993, Definition 2.1, p 60), using the Mellin transform and the inverse Mellin transform (see Mathai, 1993, Definition 1.8, p 23).
+
+**Theorem 4**
+
+If $(X_1, X_2) \sim BKB^{IV}(a, b, c, \psi)$ and we let $R = \frac{X_1}{X_2}$, then the pdf of R is given by
+
+$$ (6) \qquad w(r) = K\Gamma(a+c)\Gamma(b+c) \sum_{k=0}^{\infty} \sum_{l=0}^{\infty} \sum_{j=0}^{\infty} (a+b+c)_k \frac{(-\psi)^{j+l}}{j!\,k!\,l!}\, G_{2,2}^{1,1}\!\left( r \,\middle|\, \begin{matrix} \alpha_1, \alpha_2 \\ \beta_1, \beta_2 \end{matrix} \right) \text{ for } r \ge 0 $$
+
+where
+
+$$ -\alpha_1 = b+k+j \quad \alpha_2 = a+b+c+k+l-1 \quad \beta_1 = a+k+l-1 \quad -\beta_2 = a+b+c+k+j $$
+
+with *K* defined in (2).
+
+**Proof**
+
+Setting $r = h-1$ and $s = 1-h$ in (5) and using the series representation of the confluent hypergeometric function, ${}_1F_1(\cdot)$, we obtain an expression for the Mellin transform of the pdf of $R$ as
+
+$$
+\begin{aligned}
+M_g(h) &= E(R^{h-1}) = E\left(\left(\frac{X_1}{X_2}\right)^{h-1}\right) \\
+&= (A(a, b, c, 0, 0))^{-1} \times A(a, b, c, h-1, 1-h) \\
+&= K \frac{\Gamma(b+c)\Gamma(a+c)}{\Gamma(a+b+c)} \sum_{k=0}^{\infty} \sum_{l=0}^{\infty} \sum_{j=0}^{\infty} \Gamma(a+b+c+k) \frac{\Gamma(1-\alpha_1-h)\Gamma(\beta_1+h)}{\Gamma(1-\beta_2-h)\Gamma(\alpha_2+h)} \frac{(-\psi)^{j+l}}{j!k!l!}
+\end{aligned}
+$$
+
+with
+
+$$ -\alpha_1 = b + k + j \quad \alpha_2 = a + b + c + k + l - 1 \quad \beta_1 = a + k + l - 1 \quad -\beta_2 = a + b + c + k + j $$
+
+Using the inverse Mellin transform, the density of R given in (6) follows. ■
+
+Using (6), we can now calculate the reliability for various combinations of parameter values. Table 1 provides the reliability of the bivariate Kummer-beta type IV distribution for $\psi = -1.1, 0, 1.1$ with parameters $a = 1, b = c = 2$ and $a = b = c = 2$. For example, when $a = 1, b = c = 2$ and $\psi = -1.1$, we see that $P(R < 1) = 0.68471$; this implies that the probability that the component will function satisfactorily is 0.68471; in other words, the component will fail with probability 0.31529.
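
The Table 1 values can be cross-checked by naive Monte Carlo (our sketch, not from the paper): rejection sampling from the unnormalized kernel of (1) with $a = 1$, $b = c = 2$, $\psi = 0$, where the marginals are $Beta(1,2)$ and $Beta(2,2)$:

```python
import math, random

# Monte Carlo cross-check of P(R < 1) = P(X1 < X2) for the BKB^IV density
# (1), via rejection sampling from its (unnormalized) kernel.
a, b, c, psi = 1.0, 2.0, 2.0, 0.0

def kernel(x1, x2):
    return (x1 ** (a - 1) * x2 ** (b - 1) * (1 - x1) ** (b + c - 1)
            * (1 - x2) ** (a + c - 1) * (1 - x1 * x2) ** (-(a + b + c))
            * math.exp(-psi * (x1 + x2)))

# crude envelope: grid scan for the kernel maximum, padded by 10%
M = 1.1 * max(kernel((i + 0.5) / 200, (j + 0.5) / 200)
              for i in range(200) for j in range(200))

random.seed(1)
n_acc, hits, s1, s2 = 0, 0, 0.0, 0.0
while n_acc < 100_000:
    x1, x2 = random.random(), random.random()
    if random.random() * M <= kernel(x1, x2):
        n_acc += 1
        hits += x1 < x2
        s1 += x1
        s2 += x2
p, mean1, mean2 = hits / n_acc, s1 / n_acc, s2 / n_acc
print(mean1, mean2, p)  # means ~1/3 and ~1/2 (the Beta marginals)
```

For this parameter row Table 1 reports $P(R < 1) = 0.74904$, which the sampled estimate should reproduce up to Monte Carlo noise.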
+
+**Table 1: Some reliability values**
+
+| a | b | c | ψ | P(R < 1) |
|---|---|---|---|---|
| 1 | 2 | 2 | -1.1 | 0.68471 |
| 1 | 2 | 2 | 0 | 0.74904 |
| 1 | 2 | 2 | 1.1 | 0.75217 |
| 2 | 2 | 2 | -1.1 | 0.42325 |
| 2 | 2 | 2 | 0 | 0.49782 |
| 2 | 2 | 2 | 1.1 | 0.48213 |
\ No newline at end of file
diff --git a/samples/texts/4164463/page_6.md b/samples/texts/4164463/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..613c915d9b87300bd47b55230b3144e25d0b673b
--- /dev/null
+++ b/samples/texts/4164463/page_6.md
@@ -0,0 +1,21 @@
+Figure 3 illustrates the shape of the density of $R = \frac{X_1}{X_2}$ (see Equation 6) for the cases $a = 1, b = c = 2$ and $a = b = c = 2$ for different values of $\psi$. The domain for these graphs is $[0, \infty)$.
+
+**Figure 3: The pdf of the ratio**
+
+(i) $a=1, b=c=2$ and (ii) $a=b=c=2$. The three curves in each panel are:
+dashed line $\psi = -1.1$, solid line $\psi = 0$, dotted line $\psi = 1.1$.
+
+## Conclusion
+
+In this paper we introduced, derived and studied the new bivariate Kummer-beta type IV distribution. It was shown that the densities can take different shapes and, therefore, the bivariate Kummer-beta type IV distribution can be used to analyse skewed bivariate data sets. The expressions derived in this paper are a valuable addition to the existing literature on continuous bivariate distributions, such as the comprehensive work by Balakrishnan and Lai (2009). We also obtained an exact expression for the density function of the ratio of the components of the bivariate distribution, which is useful in reliability. For future research, another possible application can be explored in the Beta-Binomial context.
+
+## REFERENCES
+
+1. Armero, C. and Bayarri, M.J. (1997). A Bayesian analysis of queueing system with unlimited service, *Journal of Statistical Planning and Inference*, **58**, 241-261.
+2. Balakrishnan, N. and Lai, C.D. (2009). *Continuous Bivariate Distributions*, 2nd Edition, Springer, New York.
+3. Bekker, A., Roux, J.J.J., Ehlers, R. and Arashi, M. (2010). Bimatrix variate beta type IV distribution: relation to Wilks's statistic and bimatrix variate Kummer-beta type IV distribution, *Communications in Statistics - Theory and Methods* (in press).
+4. Gradshteyn, I.S. and Ryzhik, I.M. (2007). *Table of Integrals, Series, and Products*, Academic Press, Amsterdam.
+5. Gupta, A.K., Cardeño, L. and Nagar, D.K. (2001). Matrix-variate Kummer-Dirichlet distributions, *J. Appl. Math.*, **1**(3), 117-139.
+6. Johnson, N.L., Kemp, A.W. and Kotz, S. (2005). *Univariate Discrete Distributions*, 3rd Edition, Wiley, Hoboken.
+7. Jones, M.C. (2001). Multivariate t and the beta distributions associated with the multivariate F distribution, *Metrika*, **54**, 215-231.
+8. Libby, D.L. and Novick, M.R. (1982). Multivariate generalized beta distributions with applications to utility assessment, *Journal of Educational Statistics*, **7**(4), 271-294.
\ No newline at end of file
diff --git a/samples/texts/4164463/page_7.md b/samples/texts/4164463/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..329e923716592fc24baef2f6607d3869bd888721
--- /dev/null
+++ b/samples/texts/4164463/page_7.md
@@ -0,0 +1,13 @@
+9. Mathai, A.M. (1993). *A Handbook of Generalized Special Functions for Statistical and Physical Sciences*, Clarendon Press, Oxford.
+
+10. Nagar, D.K. and Cardeño, L. (2001). Matrix variate Kummer-Gamma distributions, *Random Oper. and Stoch. Equ.*, 9(3), 207-218.
+
+11. Nagar, D.K. and Gupta, A.K. (2002). Matrix-variate Kummer-Beta distribution, *J. Austral. Math. Soc.*, **73**, 11-25.
+
+12. Ng, K.W. and Kotz, S. (1995). Kummer-gamma and Kummer-beta univariate and multivariate distributions, *Research report*, **84**, The University of Hong Kong.
+
+13. Olkin, I. and Liu, R. (2003). A Bivariate beta distribution, *Statistics and Probability Letters*, **62**, 407-412.
+
+14. Rainville, E.D. (1960). *Special Functions*, Macmillan, New York.
+
+15. Sarabia, J.M. and Castillo, E. (2006). Bivariate Distributions Based on the Generalized Three-Parameter Beta Distribution, In *Advances in Distribution Theory, Order Statistics, and Inference*, (Ed., N. Balakrishnan, E. Castillo, J.M. Sarabia), pp. 85-110, Birkhäuser, Boston.
\ No newline at end of file
diff --git a/samples/texts/5670049/page_1.md b/samples/texts/5670049/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..df1d9983adc4acf3c7067147b30990e7b7d1a51b
--- /dev/null
+++ b/samples/texts/5670049/page_1.md
@@ -0,0 +1,62 @@
+H13-65
+
+DIRECT IMPLEMENTATION OF NON-LINEAR CHEMICAL REACTION TERMS FOR OZONE CHEMISTRY
+IN CFD-BASED AIR QUALITY MODELLING
+
+Bart De Maerschalck¹, Stijn Janssen¹ and Clemens Mensink²
+
+¹VITO, Dep. Environmental Modelling, Boeretang 200, 2400 Mol, Belgium
+
+**Abstract:** In this paper we present the implementation of a chemistry model for a CFD-based air quality model that transforms the NOx composition dynamically during transport. To that end, the scalar advection equations for NO, NO2 and O3 are coupled by non-linear reaction terms and solved simultaneously. The model is implemented and tested in the Envi-met local air quality and micro climate model (Bruse 2007; De Maerschalck, Janssen *et al.* 2009).
+
+**Key words:** CFD-based air quality modelling, ozone chemistry.
+
+INTRODUCTION
+
+Over the last decade, Computational Fluid Dynamics (CFD) has gained interest as a practical tool for local air quality modelling in complex environments such as street canyons, urbanized areas and industrial plants. CFD-based air quality models are capable of solving complex three-dimensional flows around obstacles like buildings, trees and vehicles. After solving the wind and turbulence field, the dispersion of pollutants in the atmosphere can be simulated either in a Lagrangian approach, tracking individual particles after release, or in an Eulerian approach, i.e. solving a 3D scalar advection equation. Until now, most CFD-based air quality models solve for an inert gas, similar to wind tunnel modelling. However, it is well understood that nitrogen oxides react fast with ozone, while the European air quality directive sets limit values specifically for NO₂. Regarding traffic emissions, about 80% of the NOₓ emission is NO, but depending on the ozone background concentrations and meteorological conditions this NO will react to form secondary NO₂, which can have a significant effect on the local air quality.
+
+OZONE CHEMISTRY IN THE TROPOSPHERE
+
+Nitrogen oxides are ubiquitous urban air pollutants, mainly emitted by traffic, power plants and industry. Nitric oxide is, on a mass basis, the most important nitrogen compound emitted into the atmosphere. It is formed from atmospheric nitrogen (N₂) at high temperatures, as in combustion processes. More than 90 percent of the emitted nitrogen oxides consists of nitric oxide (NO), while the remaining part is emitted as nitrogen dioxide (NO₂) (Berkowicz 1998). Once emitted from the tail pipe, nitric oxide will react with ozone:
+
+$$
+\mathrm{NO} + \mathrm{O}_{3} \rightarrow \mathrm{NO}_{2} + \mathrm{O}_{2}, \qquad (1)
+$$
+
+Under typical tropospheric boundary layer conditions, this reaction takes place within a time span of a couple of seconds up to minutes, depending on the background concentrations of NO, NO₂ and O₃ and on the meteorological conditions.
+NO₂ is the first reaction product of the atmospheric oxidation process of the emitted NO. However, the freshly formed nitrogen dioxide will absorb solar ultraviolet radiation (200 nm ≤ λ ≤ 420 nm) and dissociate again into NO and atomic oxygen, which reforms O₃:
+
+$$
+\text{NO}_2 + h\nu \rightarrow \text{NO} + \text{O}. \qquad (2)
+$$
+
+$$
+O + O_2 + M \rightarrow O_3 + M . \tag{3}
+$$
+
+Reaction (3) happens quasi-immediately. Therefore, reactions (2) and (3) are generally considered as one, and the O₂ concentration in the atmosphere is taken to be constant.
+
+The reaction of NO with O₃ and the photolysis of NO₂ form a cycle which occurs rapidly over the timescales of seconds up to minutes. Under most tropospheric conditions, NO and NO₂ will coexist as a mixture, called NOₓ. If a steady state is reached, the following equilibrium holds:
+
+$$
+\frac{[\text{NO}][\text{O}_3]}{[\text{NO}_2]} = \frac{j_{\text{NO}_2}}{k_{\text{NO}}}, \quad (4)
+$$
+
+where the square brackets indicate the number concentration of the compound in molecules/cm³. $k_{NO}$ is the second order (bimolecular) reaction rate coefficient of (1) and depends on the ambient temperature (Seinfeld and Pandis 2006):
+
+$$
+k_{NO} = A_0 \exp \left( -\frac{E}{R} \frac{1}{T} \right). \qquad (5)
+$$
+
+with
+
+$$
+A_0 = 2.2 \times 10^{-12}\, \frac{\mathrm{cm}^3}{\mathrm{molecules}\cdot\mathrm{s}}, \qquad (6)
+$$
+
+$$
+\frac{E}{R} = 1430 K . \qquad (7)
+$$
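
Equations (5)-(7) are straightforward to evaluate; at 298 K they give $k_{NO} \approx 1.8 \times 10^{-14}\ \mathrm{cm^3\,molecules^{-1}\,s^{-1}}$ (a quick sketch, our addition):

```python
import math

# Bimolecular rate coefficient k_NO(T) from eqs. (5)-(7):
# k = A0 * exp(-(E/R) / T), A0 = 2.2e-12 cm^3/(molecules*s), E/R = 1430 K.
def k_no(T):
    return 2.2e-12 * math.exp(-1430.0 / T)

print(k_no(298.0))  # ~1.8e-14, consistent with a seconds-to-minutes timescale
```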
+
+Figure 1 plots the reaction rate coefficient as a function of the temperature.
\ No newline at end of file
diff --git a/samples/texts/5670049/page_2.md b/samples/texts/5670049/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..439e995ebd2c705fce4199bb562affc877919955
--- /dev/null
+++ b/samples/texts/5670049/page_2.md
@@ -0,0 +1,21 @@
+Figure 1: Bimolecular reaction rate coefficient as function of the temperature
+
+$j_{NO_2}$ is the photolysis coefficient of equation (2) and is dependent on the solar ultraviolet radiation. The computation is rather complicated. Theoretically one should integrate over the product of the NO₂ specific absorption cross section with the quantum yield for photolysis and the spectral actinic flux within the limits of the ultraviolet spectrum (Seinfeld and Pandis 2006). However, for a fast estimate different parameterizations are available based on solar angle, solar radiation and cloud coverage (Berkowicz and Hertel 1989; de Leeuw 1995; van Ham and Pulles 1998).
+
+For the implementation in the Envi-met model, the following empirical formulation based on the solar radiation is used:
+
+$$j_{NO_2} = 0.8 \times 10^{-3} \exp(-10/R_s) + 7.4 \times 10^{-6} R_s, \quad (8)$$
+
+with $R_s$ the solar radiation measured in [W/m²]. In ENVI-met, the solar radiation is calculated in every cell based on the position of the sun, cloud cover, local shadows and reflections. Figure 6 shows the estimated values during two different days at a location in the Netherlands, based on different parameterization schemes. The red line is the one according to (8), where $R_s$ is dynamically computed by the Envi-met model.
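
Equation (8) is cheap to evaluate; for example (illustrative radiation values, our addition, not from the paper), a few hundred W/m² yields $j_{NO_2}$ of a few times $10^{-3}\ \mathrm{s^{-1}}$:

```python
import math

# Photolysis coefficient j_NO2 from the empirical parameterization (8),
# with R_s the solar radiation in W/m^2. The exp(-10/R_s) term requires
# R_s > 0; j_NO2 tends to 0 as R_s -> 0+ (night-time).
def j_no2(Rs):
    return 0.8e-3 * math.exp(-10.0 / Rs) + 7.4e-6 * Rs

for Rs in (100.0, 400.0, 800.0):
    print(Rs, j_no2(Rs))  # increases monotonically with solar radiation
```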
+
+Figure 6: Computed photolysis coefficient during the day for Vaassen, The Netherlands (Left: 23/08/2006, mean cloud coverage 76%; Right: 26/09/2006, 99%)
+
+## CHEMICAL EQUILIBRIUM
+
+Assume that $[NO]_0$, $[NO_2]_0$ and $[O_3]_0$ are the initial number concentrations put in a reactor of constant volume at constant temperature and radiation. After a short time a steady state will be reached for which the photostationary state relation (4) holds. From the conservation of nitrogen and the stoichiometric reaction of O₃ with NO follows (Seinfeld and Pandis 2006):
+
+$$[NO] + [NO_2] = [NO]_0 + [NO_2]_0, \qquad (9)$$
+
+$$[O_3]_0 - [O_3] = [NO]_0 - [NO], \qquad (10)$$
+
+One can now solve the chemical equilibrium in the reactor and obtain:
\ No newline at end of file
diff --git a/samples/texts/5670049/page_3.md b/samples/texts/5670049/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..b4f26758a31bb7dda978cc17ff0b3ec98c9fa3c3
--- /dev/null
+++ b/samples/texts/5670049/page_3.md
@@ -0,0 +1,21 @@
+$$
+\begin{align}
+& [NO_2] = [NO_2]_0 + \frac{1}{2} \left( [O_3]_0 + [NO]_0 + \frac{j_{NO_2}}{k_{NO}} \right) - \frac{1}{2} \sqrt{D}, \tag{11} \\
+& [NO] = -\frac{1}{2} \left( [O_3]_0 - [NO]_0 + \frac{j_{NO_2}}{k_{NO}} \right) + \frac{1}{2} \sqrt{D}, \tag{12} \\
+& [O_3] = \frac{1}{2} \left( [O_3]_0 - [NO]_0 - \frac{j_{NO_2}}{k_{NO}} \right) + \frac{1}{2} \sqrt{D}, \tag{13}
+\end{align}
+$$
+
+with:
+
+$$
+D = \left( [NO]_0 - [O_3]_0 - \frac{j_{NO_2}}{k_{NO}} \right)^2 + 4 \frac{j_{NO_2}}{k_{NO}} ([NO_2]_0 + [O_3]_0) \quad (14)
+$$
+
+One can assume that the rural background concentrations of NO, NO₂ and O₃ are in equilibrium. We can now verify that the parameterization in (8), together with the modelled solar radiation, holds by using the computed photolysis coefficients to estimate the equilibrium state according to (11) to (14). The initial values are taken from nearby rural measurement stations. If the measured background concentration is in equilibrium and the photolysis coefficient is estimated well, the computed equilibrium should not differ from the local measurements.
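
As a plausibility check, the equilibrium state can be computed directly from the initial concentrations; a minimal sketch (our own helper names; the discriminant is written in the sign convention of Seinfeld and Pandis (2006)):

```python
import math

def photostationary_equilibrium(no_0, no2_0, o3_0, j_no2, k_no):
    """Equilibrium concentrations of NO, NO2 and O3 in a closed reactor,
    following the photostationary-state solution (11)-(14)."""
    r = j_no2 / k_no
    # discriminant, Seinfeld & Pandis (2006) sign convention
    d = (no_0 - o3_0 + r) ** 2 + 4.0 * r * (no2_0 + o3_0)
    sqrt_d = math.sqrt(d)
    no = -0.5 * (o3_0 - no_0 + r) + 0.5 * sqrt_d
    o3 = -0.5 * (no_0 - o3_0 + r) + 0.5 * sqrt_d
    no2 = no_0 + no2_0 - no  # nitrogen conservation, eq. (9)
    return no, no2, o3
```

At the returned state the photostationary relation $[NO][O_3]/[NO_2] = j_{NO_2}/k_{NO}$ holds, together with the conservation relations (9) and (10), which makes the function easy to verify.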
+
+The measured background concentrations are compared to the modelled equilibrium state for two days at the location of Vaassen, the Netherlands (Janssen, De Maerschalck et al. 2008). The measured background concentration is the mean of three Dutch rural background stations. Again, the red line is based on the parameterization in (8); the red line with the bullets is the measured mean background concentration.
+
+Figure 3: Measured rural background concentrations and computed equilibrium for NO (left), NO2 (middle), and O3 (right). (Vaassen, The Netherlands, 23/08/2006, 76% cloudiness)
+
+Figure 4: Measured rural background concentrations and computed equilibrium for NO (left), NO2 (middle), and O3 (right). (Vaassen, The Netherlands, 26/09/2006, 99% cloudiness)
\ No newline at end of file
diff --git a/samples/texts/5670049/page_4.md b/samples/texts/5670049/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..6c2fb27fa37e7bb01ebe40cc97bd214327307bb2
--- /dev/null
+++ b/samples/texts/5670049/page_4.md
@@ -0,0 +1,31 @@
+DYNAMIC CHEMICAL TRANSFORMATION PROCESSES IN CFD BASED AIR QUALITY MODELS
+
+The dispersion of a certain gas *i* can be described by a scalar dispersion equation for the concentration $C_i(x, y, z)$:
+
+$$ \frac{\partial C_i}{\partial t} + \mathbf{u} \cdot \nabla C_i - \nabla \cdot (\mathbf{K}_i \nabla C_i) = E_i - S_i + R_i, \quad (15) $$
+
+with $E_i(x, y, z)$ the local emissions of compound *i* and $S_i(x, y, z)$ the sum of all sink terms (deposition, interaction with vegetation, sedimentation, ...). $R_i$ is the chemical reaction term and in general depends on the concentrations of all compounds involved in the reactions. The advection velocity $\mathbf{u}$ and the turbulent diffusion coefficients $\mathbf{K}_i$ are computed by the flow solver of the CFD model.
+
+For the photochemical reactions described above the dispersion equations for NO, $NO_2$ and $O_3$ have to be solved simultaneously. The partial differential equations are coupled with the following non-linear reaction terms:
+
+$$ R_{NO} = \left( \frac{d[NO]}{dt} \right)_R = -k_{NO}[NO][O_3] + j_{NO_2}[NO_2], \quad (16) $$
+
+$$ R_{NO_2} = \left( \frac{d[NO_2]}{dt} \right)_R = k_{NO}[NO][O_3] - j_{NO_2}[NO_2], \quad (17) $$
+
+$$ R_{O_3} = \left( \frac{d[O_3]}{dt} \right)_R = -k_{NO}[NO][O_3] + j_{NO_2}[NO_2]. \quad (18) $$
+
+Notice that these reaction terms are given in number concentrations, while equation (15) typically describes conservation of mass. In ENVI-met all concentrations are mixing ratios measured in $\mu g/kg_{air}$. Therefore, equations (16) to (18) have to be converted to mass concentrations first.
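
In a sketch (our own helper names; concentrations and rates in the number-concentration units of (16)–(18)), the coupled terms share one production and one loss rate, which makes the nitrogen balance $R_{NO} + R_{NO_2} = 0$ explicit:

```python
def reaction_terms(no, no2, o3, j_no2, k_no):
    """Coupled reaction terms (16)-(18) for the NO/NO2/O3 system."""
    p = k_no * no * o3   # rate of NO + O3 -> NO2
    q = j_no2 * no2      # rate of NO2 photolysis
    r_no = -p + q        # eq. (16)
    r_no2 = p - q        # eq. (17)
    r_o3 = -p + q        # eq. (18)
    return r_no, r_no2, r_o3
```

Because NO and O₃ are consumed and produced in lockstep, their reaction terms are identical, and at the photostationary state all three terms vanish.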
+
+Figure 5 (Janssen, De Maerschalck et al. 2008) illustrates the local effect of the oxidation of traffic-emitted NO on the local air quality. The continuous lines show the modelled NO and $NO_2$ concentrations downwind of a motorway. The green lines are for a motorway with a vegetation barrier, the red lines for one without. The positions of the driving lanes and the vegetation barrier are indicated by the red and green blocks. The green and red dots with error bars are the measured concentrations.
+
+One can see that NO decreases faster than the $NO_2$ concentration, both with and without a vegetation barrier. This is due to the fact that NO reacts with ozone and forms secondary $NO_2$. One can also notice that the effect is even stronger with vegetation. The vegetation slows down the local wind speed, so there is more time for the chemistry. At the same time, due to increased turbulence, more fresh ozone is mixed in, which enhances the oxidation process as well.
+
+Figure 5: NO and $NO_2$ concentrations downwind of a highway with and without a vegetation barrier.
+
+REFERENCES
+
+Berkowicz, R. (1998), Street Scale Models in *Urban Air Pollution - European Aspects*, J. Fenger, O. Hertel and F. Palmgren, Kluwer Academic Publishers, 223-251.
+
+Berkowicz, R. and O. Hertel (1989), Technical Report DMU LUFT - A131, National Environmental Research Institute, Roskilde, Denmark.
+
+Bruse, M. (2007), Particle filtering capacity of urban vegetation: A microscale numerical approach. *Berliner Geographische Arbeiten* 109, 61-70.
\ No newline at end of file
diff --git a/samples/texts/5670049/page_5.md b/samples/texts/5670049/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..060e0e58f5191f2f43dfde936f7432cfd252604e
--- /dev/null
+++ b/samples/texts/5670049/page_5.md
@@ -0,0 +1,9 @@
+de Leeuw, F. A. A. M. (1995), *Parametrization of NO2 photodissociation rate*, Technical Report 722501004, Dutch National Institute for Public Health and Environment (RIVM), Bilthoven, The Netherlands
+
+De Maerschalck, B., S. Janssen, et al. (2009), *CFD Simulations Of The Impact of a Vegetation Barrier Along a Motor Way on Local Air Quality*. Air Quality, Science and Application, Istanbul, Turkey, University of Hertfordshire.
+
+Janssen, S., B. De Maerschalck, et al. (2008), *Modelanalyse van de IPL meetcampagne langs de A50 te Vaassen ter bepaling van het effect van vegetatie op luchtkwaliteit langs snelwegen*, DVS-2008-044, VITO, Mol, Belgium
+
+Seinfeld, J. H. and S. N. Pandis (2006), *Atmospheric chemistry and physics*, New Jersey, John Wiley.
+
+van Ham, J. and M. P. J. Pulles (1998), *Nieuw nationaal model*, Technical Report TNO R 98/306, TNO, KEMA, KNMI, VNONCW, RIVM, Apeldoorn, The Netherlands
\ No newline at end of file
diff --git a/samples/texts/6004336/page_1.md b/samples/texts/6004336/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..a1486e056c1398db94cd422632f66d24be490c29
--- /dev/null
+++ b/samples/texts/6004336/page_1.md
@@ -0,0 +1,35 @@
+A MULTI-PERIOD NEWSVENDOR PROBLEM WITH
+PRE-SEASON EXTENSION UNDER FUZZY DEMAND
+
+Hülya Behret¹, Cengiz Kahraman²
+
+¹,²Istanbul Technical University, Industrial Engineering Department,
+Macka, 34367, Istanbul, Turkey
+
+E-mails: ¹behreth@itu.edu.tr (corresponding author); ²kahramanc@itu.edu.tr
+
+Received 3 January 2010; accepted 1 September 2010
+
+**Abstract.** This paper proposes a fuzzy multi-period newsvendor model with a pre-season extension for innovative products. The demand of the product is represented by fuzzy numbers with triangular membership functions. The holding and shortage cost parameters are considered imprecise and are also represented by triangular fuzzy numbers. As the selling season draws closer, suppliers' lead times shorten and thus production costs increase. In contrast, demand fuzziness decreases as the selling season approaches, and more accurate demand forecasts can be obtained, which leads to lower overage/underage costs. The objective of the model is to find the best order period and the best order quantity that minimize the fuzzy expected total cost. The model is demonstrated with an illustrative example and supported by sensitivity analyses.
+
+**Keywords:** inventory problem, fuzzy modeling, newsvendor, innovative product, fuzzy demand, fuzzy inventory cost, pre-season.
+
+Reference to this paper should be made as follows: Behret, H.; Kahraman, C. 2010.
+A Multi-period Newsvendor Problem with Pre-season Extension under Fuzzy Demand,
+*Journal of Business Economics and Management* 11(4): 613–629.
+
+# 1. Introduction
+
+In today's competitive market conditions, the development of innovative products is extremely important for an enterprise to achieve and sustain superiority. Innovative products are products that have shorter life cycles with higher innovation and fashion contents (Lee 2002). Fashion goods, electronic products and mass-customized goods are examples of innovative products. Although innovative products tend to have higher profit margins, the cost of obsolescence is high for them. These kinds of products have much product variety and short product life cycles. Owing to the innovative nature of the product, usually no historical data are available for forecasting its demand. In addition to the short life cycle and high demand unpredictability of these products, another important feature is the long supply lead time. Because of such long supply lead times and short sales seasons, the procurement problem of these products corresponds to the single-period inventory (also known as newsvendor or newsboy) problem.
\ No newline at end of file
diff --git a/samples/texts/6004336/page_10.md b/samples/texts/6004336/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..737e00507bda77cbec5211f8558905fb93919d2f
--- /dev/null
+++ b/samples/texts/6004336/page_10.md
@@ -0,0 +1,9 @@
+Fig. 3. Fuzzy concentration of $\tilde{X}$
+
+For example, let us order a quantity of 2000 units at the beginning of January. The unit penalty cost ($\tilde{\text{PC}}$) is a level-2 fuzzy set including imprecise demand and costs. For $x_1 = 1000$, the fuzzy penalty cost will be $\tilde{\text{PC}} = (1,2,3) \times 1000 = (1000,2000,3000)$ with possibility 0. For $x_5 = 3000$, the fuzzy penalty cost will be $\tilde{\text{PC}} = (4,5,6) \times 1000 = (4000,5000,6000)$ with possibility 0.80, and so on. For the ordering period January, the fuzzy unit penalty cost values for $Q = 2000$ are given in Table 3.
+
+The graphical representations of level-2 fuzzy sets of ($\tilde{\text{PC}}$) for January when $Q = 2000$ and the corresponding s-fuzzified set $s - \text{fuzz}(\tilde{\text{PC}})$ are shown in Figs. 4a and 4b, respectively.
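
The entries of Table 3 follow from scaling the fuzzy unit costs by the crisp overage or underage quantity; a minimal sketch (our own helper names, with the cost parameters of the example assumed as defaults):

```python
def triangular_scale(tfn, units):
    """Scale a triangular fuzzy number (a, b, c) by a crisp quantity."""
    a, b, c = tfn
    return (a * units, b * units, c * units)

def fuzzy_penalty(q, x, c_h=(1, 2, 3), c_s=(4, 5, 6)):
    """Fuzzy penalty cost for order quantity q and demand realization x:
    holding cost for leftover units, shortage cost for unmet demand."""
    if x <= q:
        return triangular_scale(c_h, q - x)   # overage
    return triangular_scale(c_s, x - q)       # underage
```

For $Q = 2000$ this reproduces the worked cases in the text: demand 1000 gives $(1000,2000,3000)$ and demand 3000 gives $(4000,5000,6000)$.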
+
+Table 3. Unit penalty cost values for January, $Q = 2000$
+
+| $x_i$ | Demand | $\mu_{\tilde{X}}(x_i)$ | $\tilde{OC}$ | $\tilde{UC}$ | $\tilde{PC}$ |
+|---|---|---|---|---|---|
+| x1 | 1000 | 0 | (1000,2000,3000) | (0,0,0) | (1000,2000,3000) |
+| x2 | 1500 | 0.20 | (500,1000,1500) | (0,0,0) | (500,1000,1500) |
+| x3 | 2000 | 0.40 | (0,0,0) | (0,0,0) | (0,0,0) |
+| x4 | 2500 | 0.60 | (0,0,0) | (2000,2500,3000) | (2000,2500,3000) |
+| x5 | 3000 | 0.80 | (0,0,0) | (4000,5000,6000) | (4000,5000,6000) |
+| x6 | 3500 | 1 | (0,0,0) | (6000,7500,9000) | (6000,7500,9000) |
+| x7 | 4000 | 0.80 | (0,0,0) | (8000,10000,12000) | (8000,10000,12000) |
+| x8 | 4500 | 0.60 | (0,0,0) | (10000,12500,15000) | (10000,12500,15000) |
+| x9 | 5000 | 0.40 | (0,0,0) | (12000,15000,18000) | (12000,15000,18000) |
+| x10 | 5500 | 0.20 | (0,0,0) | (14000,17500,21000) | (14000,17500,21000) |
+| x11 | 6000 | 0 | (0,0,0) | (16000,20000,24000) | (16000,20000,24000) |
\ No newline at end of file
diff --git a/samples/texts/6004336/page_11.md b/samples/texts/6004336/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..22ef79c3d56d98b400096f25b0d9b151b6b91e3c
--- /dev/null
+++ b/samples/texts/6004336/page_11.md
@@ -0,0 +1,17 @@
+**Fig. 4a.** Level-2 fuzzy set ($\tilde{PC}$)
+
+**Fig. 4b.** s-fuzz($\tilde{PC}$)
+
+According to the s-fuzzified value of the penalty cost, total cost for the given values when $Q = 2000$ is calculated as below:
+
+$$ \tilde{TC}(2000, \tilde{X}) = [3.50 \times 2000 + defuzz(s - fuzz(\tilde{PC}))]. \quad (21) $$
+
+Here, the operator “defuzz” denotes the centroid method for defuzzification. Centroid defuzzification values have been obtained by using MATLAB R2008a Fuzzy Logic Toolbox as in Fig. 5.
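
The MATLAB toolbox computes the centroid of the continuous membership function; for a discretely sampled fuzzy set the same idea reduces to a membership-weighted mean of the support points, sketched here (our own helper, not the toolbox implementation):

```python
def centroid_defuzz(xs, mus):
    """Discrete centroid defuzzification: the membership-weighted
    mean of the support points xs with membership grades mus."""
    return sum(x * m for x, m in zip(xs, mus)) / sum(mus)
```

For a symmetric membership function the centroid falls at the center of the support, as expected.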
+
+$$ \tilde{TC}(2000, \tilde{X}) = [3.5 \times 2000 + 10,022] = \$17,022. \quad (22) $$
+
+Fuzzy total cost values for all other order quantities of the ordering period January are given in Table 4. For the ordering period January, the best order quantity, which gives the minimum fuzzy total cost, is found to be 3000 units with a fuzzy total cost of \$16,820.
+
+The procedure continues with the calculation of the other periods' best order quantities. Using Equation (19), the fuzzy total costs for the pre-season ordering periods are calculated and given in Table 5 and Fig. 6.
+
+As Table 5 states, the best order period ($M^*$) is found to be January, with a fuzzy total cost of \$16,820 at 3000 units. Here the decrease of demand fuzziness causes a decline in the fuzzy total cost values across ordering periods. However, the change of production cost also affects the cost function. The proposed model offers the best order period and the corresponding order quantity for the given fuzzy variables.
\ No newline at end of file
diff --git a/samples/texts/6004336/page_12.md b/samples/texts/6004336/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..53d1a9fe05bc2f654c525ad7d1c1c15635f6dc65
--- /dev/null
+++ b/samples/texts/6004336/page_12.md
@@ -0,0 +1,11 @@
+**Table 4.** Fuzzy total costs for January, ($)
+
+| Order quantity | Fuzzy total cost |
+|---|---|
+| 1000 | 17,356 |
+| 1500 | 17,159 |
+| 2000 | 17,022 |
+| 2500 | 16,930 |
+| 3000 | 16,820* |
+| 3500 | 16,828 |
+| 4000 | 17,258 |
+| 4500 | 19,079 |
+| 5000 | 21,514 |
+| 5500 | 24,045 |
+| 6000 | 26,681 |
+
+Fig. 5. Centroid defuzzification of s - fuzz($\tilde{P}C$)
+
+**Table 5.** Fuzzy total cost for pre-season ordering periods, $\tilde{c}_h = \$(1,2,3)$, $\tilde{c}_s = \$(4,5,6)$
+
+| Q | T̃C(Jan) | T̃C(Feb) | T̃C(March) | T̃C(April) | T̃C(May) | T̃C(June) |
+|---|---|---|---|---|---|---|
+| $c_p$ | 3.50 | 3.68 | 3.86 | 4.05 | 4.25 | 4.47 |
+| 1000 | 17,356 | 17,335 | 17,360 | 17,327 | 17,262 | 17,332 |
+| 1500 | 17,159 | 17,161 | 17,219 | 17,184* | 17,098 | 17,202 |
+| 2000 | 17,022 | 17,089 | 17,210* | 17,202 | 17,087* | 17,202* |
+| 2500 | 16,930 | 17,082 | 17,282 | 17,333 | 17,223 | 17,350 |
+| 3000 | 16,820* | 17,075* | 17,375 | 17,515 | 17,448 | 17,617 |
+| 3500 | 16,828 | 17,168 | 17,553 | 17,787 | 17,815 | 18,087 |
+| 4000 | 17,258 | 17,718 | 18,237 | 18,659 | 19,003 | 19,589 |
+| 4500 | 19,079 | 19,709 | 20,401 | 21,063 | 21,708 | 22,508 |
+| 5000 | 21,514 | 22,252 | 23,059 | 23,861 | 24,678 | 25,636 |
+| 5500 | 24,045 | 24,902 | 25,831 | 26,775 | 27,756 | 28,843 |
+| 6000 | 26,681 | 27,658 | 28,703 | 29,778 | 30,880 | 32,058 |
+
+H. Behret, C. Kahraman. A multi-period newsvendor problem with pre-season extension under fuzzy demand
\ No newline at end of file
diff --git a/samples/texts/6004336/page_13.md b/samples/texts/6004336/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..d77ba1018bea8f60caebf47128ddcbf2431ff226
--- /dev/null
+++ b/samples/texts/6004336/page_13.md
@@ -0,0 +1,9 @@
+**Fig. 6.** Fuzzy total cost for pre-season ordering periods
+
+### 3.3. Sensitivity analysis
+
+In this section, various experiments have been performed to analyze the effect of membership function shapes to the fuzzy models. Through these experiments fuzzy unit holding cost and fuzzy unit shortage cost values have been changed (Table 6).
+
+**Table 6.** Performed experiments and values of the cost parameters
+
+| Experiments | c̃h ($) | c̃s ($) |
+|---|---|---|
+| 1 | (1,2,3) | (4,5,6) |
+| 2 | (1,2,3) | (4,5,8) |
+| 3 | (1,2,3) | (4,5,10) |
+| 4 | (1,2,3) | (4,5,12) |
+| 5 | (1,2,5) | (4,5,6) |
+| 6 | (1,2,5) | (4,5,8) |
+| 7 | (1,2,5) | (4,5,10) |
+| 8 | (1,2,5) | (4,5,12) |
+| 9 | (1,2,7) | (4,5,6) |
+| 10 | (1,2,7) | (4,5,8) |
+| 11 | (1,2,7) | (4,5,10) |
+| 12 | (1,2,7) | (4,5,12) |
+| 13 | (1,2,9) | (4,5,6) |
+| 14 | (1,2,9) | (4,5,8) |
+| 15 | (1,2,9) | (4,5,10) |
+| 16 | (1,2,9) | (4,5,12) |
\ No newline at end of file
diff --git a/samples/texts/6004336/page_14.md b/samples/texts/6004336/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..1077ff262cb8aa5ea7a13341590cc70aeb31a73b
--- /dev/null
+++ b/samples/texts/6004336/page_14.md
@@ -0,0 +1,9 @@
+In these experiments, production cost values and demand membership functions are considered the same as the previous illustration while the membership of holding cost and the membership of shortage cost have been changed. According to the provided membership values, the fuzzy model generates the results represented in Table 7.
+
+**Table 7.** $M^*$, $Q^*$ and $\tilde{TC}$ for different $\tilde{c}_h$ and $\tilde{c}_s$
+
+| Experiments | $M^*$ | $Q^*$ | $\tilde{TC}$ (\$) |
+|---|---|---|---|
+| 1 | Jan | 3000 | 16,820 |
+| 2 | Jan | 2500 | 17,716 |
+| 3 | May | 2500 | 18,484 |
+| 4 | June | 3000 | 19,152 |
+| 5 | Jan | 3000 | 16,626 |
+| 6 | Jan | 3000 | 17,619 |
+| 7 | May | 2500 | 18,457 |
+| 8 | May | 3000 | 19,093 |
+| 9 | Jan | 3000 | 16,509 |
+| 10 | Jan | 3000 | 17,545 |
+| 11 | May | 3000 | 18,431 |
+| 12 | May | 3000 | 19,062 |
+| 13 | Jan | 3000 | 16,449 |
+| 14 | Jan | 3000 | 17,498 |
+| 15 | May | 3000 | 18,417 |
+| 16 | May | 3000 | 19,048 |
+
+The results provided in Table 7 show that an increase in the fuzziness of the cost parameters causes an increase in the fuzzy total cost values. For example, in experiment 2 we increased the fuzziness of the shortage cost by changing its membership function to $\tilde{c}_s = \$(4,5,8)$; this change increased the fuzzy total cost from \$16,820 to \$17,716. Since the shortage costs have the higher fuzziness, they also affect the order period. In experiments 3 and 4 the membership supports of the shortage costs are wider than those in experiments 1 and 2. For experiments 1 and 2 the best order period is found to be January, which has the maximum demand fuzziness and the minimum production cost. However, for experiments 3 and 4 the best order period changes to May and June, respectively, which have the minimum demand fuzziness and the maximum production costs.
+
+The best order quantities do not change much across the different scenarios. The reason is the symmetrical shape of the demand membership function: only in three experiments does the best order quantity decrease from 3000 to 2500. If nonsymmetrical membership functions were used for the demand data, more changes in the suggested order quantities could be observed.
\ No newline at end of file
diff --git a/samples/texts/6004336/page_15.md b/samples/texts/6004336/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..95f12827bb11f9271335ebd4626df9fb9b32d062
--- /dev/null
+++ b/samples/texts/6004336/page_15.md
@@ -0,0 +1,23 @@
+**4. Conclusion**
+
+The classical single-period inventory problem has been considered extensively in the literature. Most of the extensions have been made in the probabilistic and the possibilistic frameworks, and most papers in the possibilistic framework propose single-period inventory models with only a fuzzy-environment extension. In this paper, we combine two extensions of the single-period inventory problem: an extension to a fuzzy environment and an extension to models with more than one period in which to prepare for the selling season. The model is developed for innovative products. The demand, holding and shortage cost parameters are represented by fuzzy numbers. The model has been demonstrated with an illustrative example and supported by sensitivity analyses.
+
+Contrary to the crisp model, the fuzzy model proposes highly flexible solutions for all possible states. When the solutions of the model are analyzed, it is seen that the decrease of demand fuzziness causes a decline in the fuzzy total cost values across ordering periods. The fuzzy model offers the best order period and the corresponding order quantity for the given fuzzy variables. Furthermore, an increase in the fuzziness of the cost parameters causes an increase in the fuzzy total cost values; since the shortage costs have the higher fuzziness, they also affect the order period. The best order quantities do not change much across the different scenarios, owing to the symmetrical shape of the demand membership function.
+
+For further research, we suggest examining the influence of nonsymmetrical and other types of membership functions for the demand data on the order quantities. Furthermore, the use of an imprecise continuous demand function instead of the discrete case of this paper can be examined; this will require optimization techniques for the solution procedure. Lastly, a dependent demand structure among the ordering periods is suggested as an extension.
+
+**References**
+
+Dubois, D.; Prade, H. 1980. *Fuzzy Sets and Systems: Theory and Applications*. New York: Academic Press.
+
+Dutta, P.; Chakraborty, D.; Roy, A. R. 2005. A single-period inventory model with fuzzy random variable demand, *Mathematical and Computer Modelling* 41(8-9): 915-922.
+doi:10.1016/j.mcm.2004.08.007
+
+Dutta, P.; Chakraborty, D.; Roy, A. R. 2007. An inventory model for single-period products with reordering opportunities under fuzzy demand, *Computers & Mathematics with Applications* 53(10): 1502-1517. doi:10.1016/j.camwa.2006.04.029
+
+Ishii, H.; Konno, T. 1998. A stochastic inventory problem with fuzzy shortage cost, *European Journal of Operational Research* 106(1): 90-94. doi:10.1016/S0377-2217(97)00173-2
+
+Ji, X. Y.; Shao, Z. 2006. Model and algorithm for bilevel newsboy problem with fuzzy demands and discounts, *Applied Mathematics and Computation* 172(1): 163-174.
+doi:10.1016/j.amc.2005.01.139
+
+Kao, C.; Hsu, W.-K. 2002. A single-period inventory model with fuzzy demand, *Computers & Mathematics with Applications* 43(6-7): 841-848. doi:10.1016/S0898-1221(01)00325-X
\ No newline at end of file
diff --git a/samples/texts/6004336/page_16.md b/samples/texts/6004336/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..d1991a4fefcde625e6dff52f84e51872724123b7
--- /dev/null
+++ b/samples/texts/6004336/page_16.md
@@ -0,0 +1,38 @@
+Khouja, M. 1999. The single-period (news-vendor) problem: literature review and suggestions for future research, *Omega – International Journal of Management Science* 27(5): 537–553.
+doi:10.1016/S0305-0483(99)00017-1
+
+Kopytov, E.; Greenglaz, L.; Tissen, F. 2006. Stochastic inventory control model with two stages in ordering process, *Journal of Business Economics and Management* 7(1): 21–24.
+
+Lee, H. L. 2002. Aligning supply chain strategies with product uncertainties, *California Management Review* 44(3): 105–119.
+
+Li, L. S.; Kabadi, S. N.; Nair, K. P. K. 2002. Fuzzy models for single-period inventory problem, *Fuzzy Sets and Systems* 132(3): 273–289.
+doi:10.1016/S0165-0114(02)00104-5
+
+Lu, H.-F. 2008. A fuzzy newsvendor problem with hybrid data of demand, *Journal of the Chinese Institute of Industrial Engineers* 25(6): 472–480.
+doi:10.1080/10170660809509109
+
+Nahmias, S. 1996. *Production and Operations Management*. Third edition. Boston MA: Irwin.
+
+Petrovic, D.; Petrovic, R.; Vujosevic, M. 1996. Fuzzy models for the newsboy problem, *International Journal of Production Economics* 45(1–3): 435–441.
+doi:10.1016/0925-5273(96)00014-X
+
+Ross, T. J. 1995. *Fuzzy Logic with Engineering Applications*. 2nd edition. New York: McGraw-Hill.
+
+Shao, Z.; Ji, X. 2006. Fuzzy multi-product constraint newsboy problem, *Applied Mathematics and Computation* 180(1): 7–15.
+doi:10.1016/j.amc.2005.11.123
+
+Silver, E. A.; Pyke, D. F.; Peterson, R. P. 1998. *Inventory Management and Production Planning and Scheduling*. Third edition. New York: John Wiley.
+
+Xu, R.; Zhai, X. 2008. Optimal models for single-period supply chain problems with fuzzy demand, *Information Sciences* 178(17): 3374–3381.
+doi:10.1016/j.ins.2008.05.012
+
+Xu, R.; Zhai, X. 2010. Analysis of supply chain coordination under fuzzy demand in a two-stage supply chain, *Applied Mathematical Modeling* 34(1): 129–139.
+doi:10.1016/j.apm.2009.03.032
+
+Zadeh, L. A. 1965. Fuzzy sets, *Information and Control* 8(3): 338–353.
+doi:10.1016/S0019-9958(65)90241-X
+
+Zadeh, L. A. 1971. Quantitative fuzzy semantics, *Information Sciences* 3(2): 159–176.
+doi:10.1016/S0020-0255(71)80004-X
+
+Zadeh, L. A. 1978. Fuzzy sets as a basis for a theory of possibility, *Fuzzy Sets and Systems* 100(1): 9–34.
\ No newline at end of file
diff --git a/samples/texts/6004336/page_17.md b/samples/texts/6004336/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..c871a077366cc6f695431f08cbb262ce6d2efc67
--- /dev/null
+++ b/samples/texts/6004336/page_17.md
@@ -0,0 +1,13 @@
+# A MULTI-PERIOD NEWSVENDOR MODEL WITH A PRE-SEASON EXTENSION UNDER FUZZY DEMAND
+
+H. Behret, C. Kahraman
+
+## Summary
+
+The paper proposes a fuzzy multi-period newsvendor model with a pre-season extension for innovative products. The demand of the product is described by fuzzy numbers with a triangular membership function. The holding and shortage cost parameters are considered imprecise and are likewise described by triangular fuzzy numbers. As the selling season approaches, the supply lead time shortens and the production cost increases. Conversely, the demand uncertainty decreases as the selling season draws near, and demand can be forecast more accurately, which helps to reduce the costs of product overage or shortage. The objective of the model is to find the best order time and the best order quantity that minimize the fuzzy expected total cost. An application example of the model and a sensitivity analysis are presented.
+
+**Keywords:** inventory problem, fuzzy modeling, newsvendor, innovative product, fuzzy demand, fuzzy inventory cost, pre-season period.
+
+**Hülya BEHRET.** M. Sc., a research assistant at Istanbul Technical University, Industrial Engineering Department and working on her PhD dissertation on "Fuzzy Single-Period Inventory Models in Production Systems" under the supervision of Prof. Cengiz Kahraman. Her research areas are Production and Inventory Systems, Fuzzy Modeling, and Simulation.
+
+**Cengiz KAHRAMAN.** A professor of industrial engineering at Istanbul Technical University, Industrial Engineering Department. He has published about 100 international journal papers and 50 international book chapters on fuzzy applications of industrial engineering. His research areas are engineering economics, quality control and management, statistical decision making, and production engineering.
\ No newline at end of file
diff --git a/samples/texts/6004336/page_2.md b/samples/texts/6004336/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..6a77c10b2d6906d69c1844a90154be5ec604ade4
--- /dev/null
+++ b/samples/texts/6004336/page_2.md
@@ -0,0 +1,5 @@
+The single-period inventory problem (SPP) deals with finding the product order quantity that minimizes the expected total cost or maximizes the expected total profit under linear purchasing, holding, and shortage costs and probabilistic demand. In the SPP, product orders are placed before the selling period begins; there is either no option for an additional order during the selling period, or such a re-order incurs a penalty cost. The assumption of the SPP is that if any inventory remains at the end of the period, it is either sold at a discount or disposed of (Nahmias 1996). If the order quantity is smaller than the realized demand, the seller misses some profit. An extensive literature review on a variety of extensions of the single-period inventory problem and related multi-stage inventory control models can be found in Khouja (1999) and Silver et al. (1998). Most of the extensions have been made in the probabilistic framework, in which the uncertainty of demand is described by probability distributions. However, methods based on probability theory can capture only quantitative uncertainties. For instance, Kopytov et al. (2006) developed a stochastic single-product inventory control model for the chain "producer – wholesaler – customer" for transport and industrial companies; the optimization criterion is the minimum of the average expenses per time unit for goods holding, ordering and losses from deficit.
+
+In reality, most evaluations are imprecise and fuzzy and cannot be easily quantified. Fuzzy set theory, introduced by Zadeh (1965), is well suited to incorporating such uncertainty into a model. When subjective evaluations are considered, possibility theory takes the place of probability theory (Zadeh 1978). Fuzzy set theory can represent linguistic data which cannot be easily modeled by other methods (Dubois and Prade 1980).
+
+In the literature, fuzzy modeling has been used for single-period inventory problems by several researchers. Ishii and Konno (1998) introduced a fuzzy newsboy model in which the shortage cost is given by an L-shaped fuzzy number while the demand is still stochastic; the total expected profit function then also becomes a fuzzy number, and an optimum order quantity is obtained in the sense of a fuzzy max order of the profit function. Petrovic et al. (1996) developed two fuzzy models to handle uncertainty in the SPP. In the first model, uncertain demand is represented by a discrete fuzzy set while the inventory costs are treated as precise; in the second model the inventory costs are also described as triangular fuzzy numbers. In that paper the concept of a level-2 fuzzy set and the method of arithmetic defuzzification are employed to obtain an optimum order quantity. Li et al. (2002) studied the single-period inventory problem in two different cases to maximize the profit, ordering fuzzy numbers with respect to their total integral values. In order to minimize the fuzzy total cost, Kao and Hsu (2002) constructed a single-period inventory model; they adopted a method for ranking fuzzy numbers to find the order quantity that is optimum in terms of cost. In their first study, Dutta et al. (2005) presented an SPP in an imprecise and uncertain mixed environment. They introduced demand as a fuzzy random variable and developed a new methodology to determine the optimum order quantity, where the optimum is achieved using a graded
\ No newline at end of file
diff --git a/samples/texts/6004336/page_3.md b/samples/texts/6004336/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..e0cf8d13360380bd6f53dcc19a7c8245dd36463f
--- /dev/null
+++ b/samples/texts/6004336/page_3.md
@@ -0,0 +1,5 @@
+mean integration representation. In their second study, Dutta et al. (2007) proposed a single-period inventory model of profit maximization with a reordering strategy in an imprecise environment. They divided the entire ordering period into two slots, considered the customer demand as a fuzzy number in both slots, and presented a solution procedure based on ordering fuzzy numbers with respect to their possibilistic mean values. Ji and Shao (2006) extended the SPP to a bi-level context, where the decision of the retailer (lower level) was considered separately from that of the manufacturer (upper level). In another study, Shao and Ji (2006) extended the SPP to the multi-product case with fuzzy demands under a budget constraint. In both studies they adopted credibility theory and used chance-constrained programming, solving their models by a hybrid intelligent algorithm based on a genetic algorithm and fuzzy simulation. Lu (2008) studied a fuzzy newsvendor problem to analyze the optimum order policy based on probabilistic fuzzy sets with hybrid data, and verified that the fuzzy newsvendor model is an extension of the crisp models. Xu and Zhai (2008) proposed a fuzzy model to find an optimal technique for dealing with the fuzziness aspect of demand uncertainty. They used a triangular fuzzy number to model the external demand, and they developed optimal decision models for both an independent retailer and an integrated supply chain. In a more recent study, Xu and Zhai (2010) considered a two-stage supply chain coordination problem and focused on the fuzziness aspect of demand uncertainty. They used fuzzy numbers to depict customer demand, investigated the optimization of the vertically integrated two-stage supply chain under perfect coordination, and contrasted it with the non-coordination case.
+
+Most of the papers mentioned above proposed single-period inventory models with only fuzzy-environment extensions. In this paper, we combine two extensions of the SPP: an extension to a fuzzy environment and an extension to models with more than one period in which to prepare for the selling season. The idea behind the second extension is that there may be many periods in which to produce or purchase the items that will be sold in a single season. The question becomes when and what orders should be placed with the suppliers as the selling season draws closer. Related references on this extension can be found in Silver et al. (1998); however, those papers consider only probabilistic demand. In this study, a multi-period newsvendor problem with a pre-season extension under fuzzy demand is analyzed. As the selling season draws closer, suppliers' lead times shorten and the production cost increases. In contrast, demand fuzziness decreases, so more accurate demand forecasts can be obtained, leading to lower overage/underage costs. The purpose of this study is to find the best order period and the best order quantity for the single-period inventory problem with a multi-period pre-season extension under fuzzy demand.
+
+The remainder of the paper is organized as follows. The single-period inventory problem is described in Section 2. In Section 3, a fuzzy model with a multi-period pre-season extension is described, a numerical illustration of the presented model is given, and sensitivity analyses over the cost parameters are presented. Lastly, the paper is concluded in Section 4.
\ No newline at end of file
diff --git a/samples/texts/6004336/page_4.md b/samples/texts/6004336/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1da1241c915766262045427ce7e3c0dc9e95a92
--- /dev/null
+++ b/samples/texts/6004336/page_4.md
@@ -0,0 +1,43 @@
+## 2. Single-period inventory problem
+
+The objective of the stochastic single-period inventory problem is to determine the order quantity $Q^*$ for a fixed time period that minimizes the expected total cost. The total cost is the linear sum of the purchase, overage, and underage costs. It is assumed that there is no initial inventory on hand. Demand is a random variable represented by a probability distribution. Items are purchased (or produced) for a single period at a unit cost of $c_p$. The holding cost, which is the cost of storing excess products minus their salvage value, is $c_h$, and the shortage cost, which is the cost of lost sales due to the inability to supply the demand, is $c_s$. The total cost function is as follows:
+
+$$TC(Q, X) = c_p Q + c_h \max\{(Q - X), 0\} + c_s \max\{(X - Q), 0\}, \quad (1)$$
+
+where $Q$ represents order quantity and $X$ stands for the demand given by the domain $X = \{x_0, x_1, x_2, ..., x_n\}$. If demand has a probability distribution function $p_X(x_i)$, then the total expected cost function will be:
+
+$$
+\begin{aligned}
+E[TC(Q, X)] &= \sum_{i=0}^{n} TC(Q, x_i) p(x_i) \\
+&= c_p Q + \sum_{i=0}^{n} [c_h \max\{(Q - x_i), 0\} + c_s \max\{(x_i - Q), 0\}] p(x_i),
+\end{aligned}
+\quad (2)
+$$
+
+which is equal to:
+
+$$E[TC(Q, X)] = c_p Q + \sum_{i=0}^{Q-1} [c_h (Q - x_i) p(x_i)] + \sum_{i=Q}^{n} [c_s (x_i - Q) p(x_i)]. \quad (3)$$
+
+Let
+
+$$\Delta E[TC(Q, X)] = E[TC(Q+1, X)] - E[TC(Q, X)]. \quad (4)$$
+
+Then, $\Delta E[TC(Q, X)]$ is the change in expected total cost when we switch from $Q$ to $Q+1$. For a convex cost function, the optimum $Q$ will be the lowest $Q$ where $\Delta E[TC(Q, X)]$ is greater than zero. Therefore, we select the smallest $Q$ for which,
+
+$$\Delta E[TC(Q, X)] \geq 0. \quad (5)$$
+
+The equation above holds if
+
+$$E[TC(Q+1, X)] - E[TC(Q, X)] \geq 0. \quad (6)$$
+
+Substituting Equation 3 into Equation 6 leads to:
+
+$$
+\begin{aligned}
+& c_p(Q+1) + \sum_{i=0}^{Q} [c_h(Q+1-x_i)p(x_i)] + \sum_{i=Q+1}^{n} [c_s(x_i-Q-1)p(x_i)] - \\
+& \quad [c_p(Q) + \sum_{i=0}^{Q-1} [c_h(Q-x_i)p(x_i)] + \sum_{i=Q}^{n} [c_s(x_i-Q)p(x_i)]] \geq 0 \quad (7) \\
+& c_p + \sum_{i=0}^{Q} [c_h p(x_i)] - \sum_{i=Q+1}^{n} [c_s p(x_i)] \geq 0.
+\end{aligned}
+$$
+
+Since $\sum_{i=0}^{n} p_X(x_i) = 1$,
\ No newline at end of file
diff --git a/samples/texts/6004336/page_5.md b/samples/texts/6004336/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..dacf9b4c4a97418e8373f5d5e0c144afd61699c3
--- /dev/null
+++ b/samples/texts/6004336/page_5.md
@@ -0,0 +1,31 @@
+$$p_{X \le} (Q) \ge \frac{c_s - c_p}{c_h + c_s}, \quad (8)$$
+
+where $p_{X \le}(Q)$ is the probability that the demand $X$ is smaller than or equal to the order quantity $Q$. The expected total cost $E[TC(Q, X)]$ is minimized by the smallest value of $Q$ (call it $Q^*$) satisfying Equation 8.
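The marginal analysis above can be sketched directly for a discrete demand distribution. This is a minimal illustration of Equation 8; the demand values and cost parameters below are illustrative, not taken from the paper.

```python
def optimal_order_quantity(demand, probs, c_p, c_h, c_s):
    """Smallest Q with P(X <= Q) >= (c_s - c_p)/(c_h + c_s), per Equation 8.

    `demand` lists the possible demand values in increasing order and
    `probs` their probabilities.
    """
    ratio = (c_s - c_p) / (c_h + c_s)
    cumulative = 0.0
    for x, p in zip(demand, probs):
        cumulative += p
        if cumulative >= ratio:
            return x
    return demand[-1]

# Illustrative data: the critical ratio is (5 - 1)/(2 + 5) ≈ 0.571, and
# the cumulative probabilities are 0.1, 0.3, 0.7, ..., so Q* = 2.
Q_star = optimal_order_quantity([0, 1, 2, 3, 4],
                                [0.1, 0.2, 0.4, 0.2, 0.1],
                                c_p=1.0, c_h=2.0, c_s=5.0)
print(Q_star)  # 2
```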
+
+### 3. Fuzzy model with multi-period pre-season extension
+
+The fuzzy set theory provides a proper framework for description of uncertainty related to vagueness of natural language expressions and judgments. Fuzzy sets have been introduced by Zadeh (1965) as an extension of the classical notion of set. In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition where an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1].
+
+Let $U$ be a classical set of objects, called the universe, whose generic elements are denoted by $x$. Membership in a classical subset $X$ of $U$ is often viewed as a characteristic function $\mu_X$ from $U$ to $\{0,1\}$ such that:
+
+$$\mu_X(x) = \begin{cases} 1 & \text{iff } x \in X \\ 0 & \text{iff } x \notin X. \end{cases} \qquad (9)$$
+
+If the valuation set $\{0,1\}$ is allowed to be the real interval $[0,1]$, then $\tilde{X}$ is called a fuzzy set (Zadeh 1965), and $\mu_{\tilde{X}}(x)$ is the grade of membership of $x$ in $\tilde{X}$. The closer the value of $\mu_{\tilde{X}}(x)$ is to 1, the more $x$ belongs to $\tilde{X}$. $\tilde{X}$ is completely characterized by the set of pairs:
+
+$$\tilde{X} = \{(x, \mu_{\tilde{X}}(x)), x \in U\}. \qquad (10)$$
+
+Fuzzy numbers are a particular kind of fuzzy set. A fuzzy number is a fuzzy set of the real numbers $R$ with a continuous, compactly supported, and convex membership function.
+
+Let $U$ be a universal set; a fuzzy subset $\tilde{X}$ of $U$ is defined by a function $\mu_{\tilde{X}}: U \to [0,1]$ called its membership function. Here, $U$ is assumed to be the set of real numbers $R$, and $F$ is the space of fuzzy sets.
+
+The fuzzy set $\tilde{X} \in F$ is a fuzzy number iff:
+
+I. For all $\alpha \in [0,1]$, the set $X^\alpha = \{x \in R: \mu_{\tilde{X}}(x) \ge \alpha\}$, called the $\alpha$-cut of $\tilde{X}$, is a convex set.
+
+II. $\mu_{\tilde{X}}(x)$ is a continuous function.
+
+III. $\text{supp}(\tilde{X}) = \{x \in R: \mu_{\tilde{X}}(x) > 0\}$, the support of $\tilde{X}$, is a bounded set in $R$.
+
+IV. $\text{height}(\tilde{X}) = \max_{x \in U} \mu_{\tilde{X}}(x) = h > 0$.
+
+By conditions (I) and (II), each $\alpha$-cut is a compact and convex subset of $R$ hence it is a closed interval in $R$, $X^\alpha = [X_L(\alpha), X_R(\alpha)]$. If $h=1$ we say that the fuzzy number is normal.
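The closed-interval form of the $\alpha$-cut can be computed for a triangular fuzzy number by inverting the two linear branches of its membership function. A minimal sketch with an illustrative number, not taken from the paper:

```python
def alpha_cut(x_l, x_m, x_u, alpha):
    """Alpha-cut [X_L(alpha), X_R(alpha)] of the triangular fuzzy
    number (x_l, x_m, x_u).

    Each branch of the membership function is linear, so inverting it
    at level alpha (0 < alpha <= 1) gives the interval endpoints.
    """
    x_left = x_l + alpha * (x_m - x_l)
    x_right = x_u - alpha * (x_u - x_m)
    return (x_left, x_right)

# For the triangular number (1, 2, 3), the 0.5-cut is [1.5, 2.5].
print(alpha_cut(1.0, 2.0, 3.0, 0.5))  # (1.5, 2.5)
```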
\ No newline at end of file
diff --git a/samples/texts/6004336/page_6.md b/samples/texts/6004336/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..c51163d0f30d517dd70c9e846bbea4d3628c743e
--- /dev/null
+++ b/samples/texts/6004336/page_6.md
@@ -0,0 +1,22 @@
+A (trapezoidal) fuzzy number can be written in the form $\tilde{X} = (x_l, x_m, x_n, x_u)$ with $x_l < x_m \le x_n < x_u$.
+The fuzzy number $\tilde{X}$ is a so-called triangular fuzzy number $\tilde{X} = (x_l, x_m = x_n, x_u)$,
+$x_l < x_m = x_n < x_u$, if its membership function $\mu_{\tilde{X}}(x) : R \rightarrow [0,1]$ is equal to (Fig. 1):
+
+$$ \mu_{\tilde{X}}(x) = \begin{cases}
+0 & x < x_l \\
+\left( \frac{x-x_l}{x_m-x_l} \right) & x_l \le x < x_m \\
+\left( \frac{x_u-x}{x_u-x_m} \right) & x_m \le x \le x_u \\
+0 & x > x_u
+\end{cases} . \qquad (11) $$
+
+**Fig. 1.** Triangular membership function
+
+The fuzzy number $\tilde{X}$ is a function of the parameters $x_l, x_m$ and $x_u$, known as the lower, middle and upper values, respectively. Fig. 1 shows the membership function for a fuzzy number “approximately $x_m$”.
+
+In this study, a multi-period newsvendor problem with a pre-season extension under fuzzy demand is analyzed. As the selling season draws closer, suppliers' lead times shorten and the production cost increases. In contrast, demand fuzziness decreases, so more accurate demand forecasts can be obtained, leading to lower overage/underage costs.
+
+The purpose of the model is to find the best order period ($M^*$) and the best order quantity ($Q^*$) for the multi-period newsvendor problem with a pre-season extension under discrete fuzzy demand $\tilde{X}$ with membership function $\mu_{\tilde{X}}(x_i)$, precise production cost $c_p$, fuzzy unit holding cost $\tilde{c}_h$ with membership function $\mu_{\tilde{c}_h}$, and fuzzy unit shortage cost $\tilde{c}_s$ with membership function $\mu_{\tilde{c}_s}$.
+
+### 3.1. Fuzzy demand and fuzzy inventory costs with precise purchasing cost
+
+Let the membership function $\mu_{\tilde{X}}(x_i)$ of the fuzzy demand $\tilde{X}$, given by the domain $\tilde{X} = \{x_0, x_1, x_2, ..., x_n\}$, have the triangular form of Fig. 1. The unit production cost ($c_p$) is precise, while the unit holding cost ($\tilde{c}_h$) and unit shortage cost ($\tilde{c}_s$) are represented by triangular fuzzy numbers. The uncertain demand and uncertain cost
\ No newline at end of file
diff --git a/samples/texts/6004336/page_7.md b/samples/texts/6004336/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..9d3a5258d8a31b1c859d6737831a3f5f337af849
--- /dev/null
+++ b/samples/texts/6004336/page_7.md
@@ -0,0 +1,38 @@
+parameters cause uncertain overage and underage costs. For a given $Q$ and $x_i \in \tilde{X}$, the fuzzy overage cost ($\tilde{OC}$) and underage cost ($\tilde{UC}$) are as follows:
+
+$$
+\begin{aligned}
+\tilde{OC} &= \tilde{c}_h \max\{(Q - \tilde{X}), 0\} \\
+\tilde{UC} &= \tilde{c}_s \max\{(\tilde{X} - Q), 0\}.
+\end{aligned}
+\quad (12) $$
+
+The unit penalty cost ($\tilde{PC}$) is the sum of unit overage cost and unit underage cost with the membership function $\mu_{\tilde{PC}}$.
+
+$$ \tilde{PC} = \tilde{OC} + \tilde{UC}. \quad (13) $$
+
+The unit penalty cost ($\tilde{PC}$) is a level-2 fuzzy set, which means that its elements are themselves fuzzy values, each with a corresponding membership degree.
+
+A level-2 fuzzy set can be reduced to an ordinary fuzzy set by the s-fuzzification process (Zadeh 1971). The membership function of the resulting ordinary fuzzy set is obtained via s-fuzzification ("s" stands for support) as follows:
+
+$$ \mu_{s\text{-}fuzz(\tilde{PC})}(x) = \sup_{i=0,1,2,...,n} [\mu_{\tilde{PC}}(i) \times \mu_{\tilde{c}_i}(x)], \quad x \in \tilde{X}, \quad (14) $$
+
+where $\tilde{c}_i$ is the $i^{th}$ possible fuzzy cost in $\tilde{PC}$ and $\mu_{\tilde{PC}}(i)$ is the possibility of that cost. According to the properties of the possibility measure (Zadeh 1978), $\mu_{\tilde{PC}}(i)$ is obtained as follows:
+
+$$ \mu_{\tilde{PC}}(i) = \max_{x_i \in \tilde{X}} \mu_{\tilde{X}}(x_i), \quad i = 0,1,2,...,n. \quad (15) $$
+
+The fuzzy total cost is minimized by the best order quantity ($Q^*$):
+
+$$ \widetilde{TC}(Q^*, \tilde{X}) = \min_Q [c_p Q + \text{defuzz}(s\text{-}fuzz(\widetilde{PC}))]. \quad (16) $$
+
+Here, the operator “defuzz” denotes the defuzzification process. Defuzzification is the conversion of a fuzzy quantity to a precise quantity; in contrast, fuzzification is the conversion of a precise quantity to a fuzzy quantity. Usually, a fuzzy system has a number of rules that transform a number of variables into a “fuzzy” result, and defuzzification transforms this result into a single number (Ross 1995). There are several methods for defuzzification. In this study, the centroid defuzzification method is applied. The centroid method (also called center of area or center of gravity) is the most common of all the defuzzification methods. It is given by the following algebraic expressions.
+
+For continuous functions:
+
+$$ z^* = \frac{\int \mu_{\tilde{C}}(z)\, z\, dz}{\int \mu_{\tilde{C}}(z)\, dz}. \quad (17) $$
+
+For discrete functions:
+
+$$ z^* = \frac{\sum \mu_{\tilde{C}}(z)\, z}{\sum \mu_{\tilde{C}}(z)}, \quad (18) $$
+
+where $\tilde{C} = \bigcup_{i} \tilde{C}_i$ and $\tilde{C}_i$ is one of the membership functions that form the fuzzy output. This method is represented in Fig. 2.
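For the discrete case, Equation 18 reduces to a membership-weighted average over the support points. A minimal sketch with illustrative membership values, not taken from the paper:

```python
def centroid(zs, mus):
    """Discrete centroid defuzzification (Equation 18):
    z* = sum(mu(z) * z) / sum(mu(z))."""
    return sum(m * z for z, m in zip(zs, mus)) / sum(mus)

# Symmetric illustrative output set: the centroid falls at the center.
z_star = centroid([1.0, 2.0, 3.0], [0.5, 1.0, 0.5])
print(z_star)  # 2.0
```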
\ No newline at end of file
diff --git a/samples/texts/6004336/page_8.md b/samples/texts/6004336/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..5377fd2344fa42b3766023581d1a2fc4171e154b
--- /dev/null
+++ b/samples/texts/6004336/page_8.md
@@ -0,0 +1,21 @@
+Fig. 2. Centroid defuzzification method
+
+The best order quantity ($Q^*$), which minimizes the fuzzy total cost, is found by a brute-force search algorithm: a very general problem-solving technique that systematically enumerates all possible candidates for the solution and checks whether each candidate satisfies the problem's statement.
+
+The procedure explained above is applied to every pre-season ordering period. Through these periods, as the selling season draws closer, the production cost increases, demand fuzziness decreases, and more accurate demand forecasts can be obtained, leading to lower overage/underage costs.
+
+In the multi-period model, $c_{pj}$ denotes the unit production cost for period $j$. Demand fuzziness decreases as the ordering period approaches the selling season. This decrease in fuzziness is modeled by fuzzy concentration of the membership function of demand over the periods. The membership function $\mu_{\tilde{X}}(x_i)$ of the fuzzy demand $\tilde{X}$, given by the domain $\tilde{X} = \{x_0, x_1, x_2, ..., x_n\}$, concentrates as the selling season draws closer.
+
+Concentration reduces the degree of membership of all elements that have lower membership degrees in the set: the less an element is in the set, the more its membership is reduced through concentration (Ross 1995).
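Concentration can be sketched as raising every membership degree to a power $p > 1$ (the classical operator in Ross 1995 uses $p = 2$); the membership values below are illustrative:

```python
def concentrate(mus, p=2.0):
    """Concentration operator: mu -> mu**p with p > 1, which reduces
    small membership degrees proportionally more than large ones."""
    return [m ** p for m in mus]

# 0.2 falls by 80% under p = 2, while full membership 1.0 is unchanged.
print([round(m, 2) for m in concentrate([0.2, 0.6, 1.0])])  # [0.04, 0.36, 1.0]
```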
+
+The membership function of the fuzzy demand for period $j$ is denoted by $\mu_{\tilde{X}_j}(x_i)$. The fuzzy total cost for the pre-season ordering periods is as follows:
+
+$$ \widetilde{TC}_j(Q_j^*, \tilde{X}_j) = \min_{Q_j} [c_{pj}Q_j + \text{defuzz}(s\text{-}fuzz(\widetilde{PC}_j))]. \quad (19) $$
+
+Here $\widetilde{PC}_j$ represents the penalty cost for period $j$, where $j=1,2,...,m$. The best order period ($M^*$) is the period with the minimum fuzzy total cost.
+
+## 3.2. Numerical illustration
+
+Suppose that a retailer wants to introduce an innovative product to the market at the beginning of July and has six possible periods (January, February, March, April, May, June) in which to place the manufacturing order with the manufacturer. As the selling season draws closer, suppliers' lead times shorten and the production cost increases. On the other hand, demand fuzziness decreases, so more accurate demand forecasts can be obtained, leading to lower overage/underage costs.
+
+Let the unit holding cost ($\tilde{c}_h$) and unit shortage cost ($\tilde{c}_s$) be imprecise and represented by the triangular fuzzy numbers $\tilde{c}_h = [1, 2, 3]$ and $\tilde{c}_s = [4, 5, 6]$, respectively. The unit purchasing cost is precise and increases by 5% over the previous month's cost (Table 1).
\ No newline at end of file
diff --git a/samples/texts/6004336/page_9.md b/samples/texts/6004336/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..f201b019e04db5fb40d16663a834ac89bf72418e
--- /dev/null
+++ b/samples/texts/6004336/page_9.md
@@ -0,0 +1,13 @@
+**Table 1.** Unit purchasing cost per month ($)
+
+| Jan | Feb | March | April | May | June |
+|---|---|---|---|---|---|
+| 3.50 | 3.68 | 3.86 | 4.05 | 4.25 | 4.47 |
+
+Let the fuzzy demand $\tilde{X}$ for January be given by the domain $\tilde{X} = \{1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000\}$ with the triangular membership function $\mu_{\tilde{X}}(x_i)$ as follows:
+
+$$ \mu_{\tilde{X}}(x_i) = \begin{cases} 0 & x_i \le 1000 \\ \left(\frac{x_i - 1000}{2500}\right) & 1000 < x_i \le 3500 \\ \left(\frac{6000 - x_i}{2500}\right) & 3500 < x_i < 6000 \\ 0 & x_i \ge 6000 \end{cases} \quad (20) $$
+
+For example, let $x_1 = 1000$; its membership value is $\mu_{\tilde{X}}(x_1) = 0$. If $x_7 = 4000$, then its membership value is $\mu_{\tilde{X}}(x_7) = 0.80$, and so on. The membership functions for the other months are obtained from the January demand membership function by fuzzy concentration (Table 2 and Fig. 3).
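The January column of Table 2 follows directly from Equation 20, and the later columns are consistent with concentrating the January memberships with month-specific exponents. The exponents below (1, 1.25, 1.5, 2, 3, 4) are inferred from the tabulated values, which they reproduce to three decimals; they are not stated explicitly in the text:

```python
def mu_jan(x):
    """January membership per Equation 20 (triangular, peak at 3500)."""
    if x <= 1000 or x >= 6000:
        return 0.0
    if x <= 3500:
        return (x - 1000) / 2500
    return (6000 - x) / 2500

# Month-specific concentration exponents, inferred from Table 2.
EXPONENTS = {"Jan": 1.0, "Feb": 1.25, "March": 1.5,
             "April": 2.0, "May": 3.0, "June": 4.0}

demand = list(range(1000, 6001, 500))
table = {month: [round(mu_jan(x) ** p, 3) for x in demand]
         for month, p in EXPONENTS.items()}
# e.g. mu(1500) = 0.2 in January concentrates to 0.2**1.25 ≈ 0.134 in
# February and to 0.2**2 = 0.040 in April, as in Table 2.
print(table["Feb"][1], table["April"][1])
```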
+
+**Table 2.** Membership values of demand per month
+
+| Demand | Jan | Feb | March | April | May | June |
+|---|---|---|---|---|---|---|
+| 1000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
+| 1500 | 0.200 | 0.134 | 0.089 | 0.040 | 0.008 | 0.002 |
+| 2000 | 0.400 | 0.318 | 0.253 | 0.160 | 0.064 | 0.026 |
+| 2500 | 0.600 | 0.528 | 0.465 | 0.360 | 0.216 | 0.130 |
+| 3000 | 0.800 | 0.757 | 0.716 | 0.640 | 0.512 | 0.410 |
+| 3500 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
+| 4000 | 0.800 | 0.757 | 0.716 | 0.640 | 0.512 | 0.410 |
+| 4500 | 0.600 | 0.528 | 0.465 | 0.360 | 0.216 | 0.130 |
+| 5000 | 0.400 | 0.318 | 0.253 | 0.160 | 0.064 | 0.026 |
+| 5500 | 0.200 | 0.134 | 0.089 | 0.040 | 0.008 | 0.002 |
+| 6000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
\ No newline at end of file
diff --git a/samples/texts/6409431/page_1.md b/samples/texts/6409431/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..aeb1c0276b9fa56d7e257cb46a823051bd77343e
--- /dev/null
+++ b/samples/texts/6409431/page_1.md
@@ -0,0 +1,16 @@
+# Electrochemical Detection of Daphnetin Based on a Glassy Carbon Electrode Modified with Nafion and RGO-TEPA
+
+Ming Zhou, Yunjie Sheng, Yangchun Li, Yurong Wang*
+
+College of Pharmaceutical Science, Zhejiang Chinese Medical University, Hangzhou 310053, China
+*E-mail: wangyr@zcmu.edu.cn
+
+Received: 2 December 2020 / Accepted: 25 January 2021 / Published: 28 February 2021
+
+A sensitive electrochemical sensor composed of Nafion, tetraethylene pentamine-functionalized reduced graphene oxide (RGO-TEPA), and a glassy carbon electrode (GCE) was developed to analyze daphnetin (DAPH). The electrochemical behavior of DAPH on Nafion/RGO-TEPA/GCE was studied by cyclic voltammetry. The electroanalytical method for DAPH detection was established using differential pulse voltammetry. A detection limit of 0.5 nM ($S/N = 3$) and a linear calibration range of 0.01 to 10 µM were obtained. The repeatability and reproducibility of the method were calculated to be 1.6% and 4.8%, respectively. The sensor showed good selectivity for DAPH and was successfully applied to detect DAPH in Zushima tablets. The content of DAPH in Zushima tablets detected by our sensor agreed well with that detected by HPLC.
+
+**Keywords:** Daphnetin; electrochemical sensor; Nafion; reduced graphene oxide-tetraethylene pentamine (RGO-TEPA)
+
+## 1. INTRODUCTION
+
+Coumarin frameworks are ubiquitous in various medicinal compounds and play a crucial role in pharmaceuticals [1]. In particular, daphnetin (7,8-dihydroxycoumarin, DAPH) is one of the most important coumarin derivatives because of its multiple pharmacological effects and broad clinical applications [2–5]. For example, DAPH was found to be one of the major active ingredients in the Chinese medicinal herb Zushima, which is widely used to treat coagulation disorders and rheumatoid arthritis in China [6]. Due to its importance, the development of convenient, sensitive, and inexpensive methods for DAPH content detection is a meaningful task for quality control in pharmaceuticals. Traditionally, liquid chromatography (LC) and high-performance liquid chromatography (HPLC) are applied for the detection of DAPH content [7,8]. However, these methods have major drawbacks: they are relatively time-consuming, require expensive equipment, and use toxic solvents. Alternatively, electrochemical approaches have been used to detect the DAPH content in pharmaceutical
\ No newline at end of file
diff --git a/samples/texts/6409431/page_10.md b/samples/texts/6409431/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..c923010952f53b70ca8ecb21120a3157cfca436c
--- /dev/null
+++ b/samples/texts/6409431/page_10.md
@@ -0,0 +1,11 @@
+0.9852). Based on the slopes of the $E_{pa}$-ln$v$ and $E_{pc}$-ln$v$ lines, $n$ and $\alpha$ were calculated to be 1.7 and 0.64, respectively. Since $n$ should be an integer, there were 2 electrons involved in the redox reaction of DAPH at Nafion/RGO-TEPA/GCE. Additionally, according to equation (4), $k_s$ was calculated to be 0.42 s⁻¹.
+
+**Figure 5.** Relationship between the peak potential and ln$v$.
+
+### 3.4 Chronocoulometric studies
+
+The saturated adsorption capacity ($\Gamma^*$) and diffusion coefficient ($D$) of DAPH on the Nafion/RGO-TEPA/GCE surface were calculated by the chronocoulometry curves of Nafion/RGO-TEPA/GCE in a 0.1 M PBS (pH = 2.2) solution in the absence (Fig. 6A, curve a) and presence (Fig. 6A, curve b) of DAPH (10 µM).
+
+The corresponding $Q-t^{1/2}$ plots are provided in Fig. 6B, and the linear relationship of $Q-t^{1/2}$ corresponded to the following equations: $Q (\mu C) = -55.44t^{1/2} - 156.42$ ($R^2 = 0.9995$) and $Q (\mu C) = -87.15t^{1/2} - 314.25$ ($R^2 = 0.9978$). According to the Anson equation (equation (1) provided in Sec. 3.1), the value of $Q_{ads}$ was calculated to be $1.47 \times 10^{-4}$ C for the oxidation of adsorbed daphnetin, and $D$ was $4.0 \times 10^{-8}$ cm²·s⁻¹. The $\Gamma^*$ of DAPH on Nafion/RGO-TEPA/GCE could be determined to be $3.79 \times 10^{-9}$ mol·cm⁻² using Laviron's theory, equation (5):
+
+$$Q_{ads} = nFA\Gamma^* \quad (5)$$
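A quick numerical check of equation (5) with the reported values. The electrode area $A$ is not given on this page; 0.201 cm² is an assumption back-calculated from the reported $Q_{ads}$, $n$ and $\Gamma^*$ (it is also consistent with the area-normalized sensitivities in Sec. 3.6.2):

```python
F = 96485.0      # Faraday constant, C/mol
n = 2            # electrons transferred (Sec. 3.3)
Q_ads = 1.47e-4  # adsorption charge, C (reported above)
A = 0.201        # electrode area, cm^2 -- assumed, back-calculated value

# Equation 5 rearranged: Gamma* = Q_ads / (n * F * A), in mol/cm^2.
gamma_star = Q_ads / (n * F * A)
print(f"{gamma_star:.2e}")  # 3.79e-09, matching the reported value
```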
\ No newline at end of file
diff --git a/samples/texts/6409431/page_11.md b/samples/texts/6409431/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..a2bb765ba27677ac4252afd48a58ccf8476b5270
--- /dev/null
+++ b/samples/texts/6409431/page_11.md
@@ -0,0 +1 @@
+**Figure 6.** (A) Chronocoulometric curves of the background (curve a) and 10 µM DAPH in 0.1 M PBS (pH 2.2) on Nafion/RGO-TEPA/GCE (b). (B) Corresponding Q-t¹/² plots.
\ No newline at end of file
diff --git a/samples/texts/6409431/page_12.md b/samples/texts/6409431/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..f92076f81ad74932ca7cec43171bdbd41ad933ef
--- /dev/null
+++ b/samples/texts/6409431/page_12.md
@@ -0,0 +1,8 @@
+### 3.5 Optimization of measurement parameters
+
+#### 3.5.1 Effect of pH
+
+The redox peak current of DAPH was also greatly influenced by the pH value of the supporting electrolyte.
+
+**Figure 7.** (A) CV curves of DAPH (10 µM) on Nafion/RGO-TEPA/GCE in 0.1 M PBS at different pH values (1.6, 1.8, 2.2, 2.6, 3.0) and a scan rate of 50 mV·s⁻¹. (B) pH dependence of $E_p$ and $I$.
+
diff --git a/samples/texts/6409431/page_13.md b/samples/texts/6409431/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c60643bf253be3b7d51be10bea4eea37c1bbbd8
--- /dev/null
+++ b/samples/texts/6409431/page_13.md
@@ -0,0 +1,7 @@
+Thus, the electrochemical response of DAPH (10 µM) on Nafion/RGO-TEPA/GCE was investigated by CV in PBS over a pH range of 1.6 ~ 3.0. As shown in Fig. 7A, the current intensity of DAPH first increased with an increasing pH value from 1.6 to 2.2, while continually increasing the pH value led to a decrease in the current intensity. Therefore, considering the detection sensitivity, a pH of 2.2 was chosen for further experiments.
+
+From Fig. 7B, it is clearly seen that the redox peak potential of DAPH shifted negatively with increasing pH, which means that protons are involved in the electrode reaction. The oxidation peak potential ($E_{pa}$) followed the linear equation $E_{pa} (V) = 0.685 - 0.0375 \text{ pH}$ ($R^2 = 0.9935$), and the reduction peak potential ($E_{pc}$) followed the linear equation $E_{pc} (V) = 0.624 - 0.0776 \text{ pH}$ ($R^2 = 0.9929$). The slope value (-37.5 mV·pH⁻¹) was less than the theoretical value (-59.0 mV·pH⁻¹), suggesting the involvement of an unequal number of protons and electrons. According to the Nernst equation, the number of electrons transferred in the oxidation process was double the number of protons. From the above results, it appears that two electrons and one proton were involved in the redox reaction of daphnetin. This result differs from the reported references, where equal numbers of electrons and protons are involved in the redox reaction of daphnetin [9-11]. However, a similar result has also been reported in other references on Nafion-covered electrodes [26-29]. The possible reason might be the competition between DAPH and protons for the Nafion film [28].
+
+### 3.5.2 Effect of the amount of RGO-TEPA on the oxidation peaks of DAPH
+
+The amount of RGO-TEPA can directly change the properties and functions of the electrode surface and finally affect the detection of DAPH. Thus, the effect of the amount of RGO-TEPA was investigated by covering the GCE surface with 4 ~ 11 µL of RGO-TEPA solution. As shown in Fig. 8, the oxidation peak current increased with an increasing amount of RGO-TEPA up to 8 µL and then decreased gradually. This result should be attributed to the decreased electron-transfer rate resulting from the considerable thickness of the nanofilm formed by an excessive amount of RGO-TEPA. Therefore, 8 µL of RGO-TEPA solution was selected as the optimal amount for the preparation of the modified electrode.
\ No newline at end of file
diff --git a/samples/texts/6409431/page_14.md b/samples/texts/6409431/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..afff05b7e29dd9d9e92176cc4af14acb8847b5bc
--- /dev/null
+++ b/samples/texts/6409431/page_14.md
@@ -0,0 +1,11 @@
+**Figure 8.** Effect of the accumulated volume of RGO-TEPA on the oxidation peak current of DAPH at a concentration of 1 mg mL⁻¹ and a scan rate of 50 mV·s⁻¹.
+
+## 3.6 Analytical performance
+
+### 3.6.1 Repeatability and reproducibility
+
+The repeatability of Nafion/RGO-TEPA/GCE was evaluated by measuring the oxidation peak current of 10 µM DAPH at one modified electrode by DPV. The relative standard deviation (RSD) for seven independent measurements was 1.6%. The reproducibility of the proposed sensor was estimated from the oxidation peak current of 10 µM DAPH at five parallel modified electrodes by DPV under the same conditions. The RSD of the current response was 4.8%, which indicated the good reproducibility of the sensor.
+
+### 3.6.2 Calibration curve and detection limit
+
+Fig. 9A shows the oxidation peak current of different concentrations of DAPH under the optimal DPV conditions using Nafion/RGO-TEPA/GCE. The peak current had a linear relationship with the concentration of DAPH in the ranges of 0.01~1.0 µM and 1.0~10 µM (Fig. 9B). The linear regression equations could be expressed as $I_{pa1}$ (µA) = 4.798C (µM) + 0.021 ($R^2$ = 0.9975, 0.01 ~ 1.0 µM) with a sensitivity of 23.87 µA·µM⁻¹·cm⁻², and $I_{pa2}$ (µA) = 1.678C (µM) + 3.816 ($R^2$ = 0.9931, 1.0 ~ 10 µM) with a sensitivity of 8.35 µA·µM⁻¹·cm⁻². The oxidation of DAPH on the modified electrode exhibited a high sensitivity at low concentrations and a low sensitivity at high concentrations. The low sensitivity at high concentrations may be due to fewer active sites and the kinetic limitations for the oxidation of DAPH [30]. Furthermore, the detection limit was calculated to be 0.5 nM ($S/N$ = 3). As shown in Table
\ No newline at end of file
diff --git a/samples/texts/6409431/page_15.md b/samples/texts/6409431/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..000403206963f1877303fff3e6f33fb934273790
--- /dev/null
+++ b/samples/texts/6409431/page_15.md
@@ -0,0 +1,3 @@
+1, comparing the performance of our Nafion/RGO-TEPA/GCE sensor with previously reported DAPH sensors, Nafion/RGO-TEPA/GCE had a wider linear range and lower detection limit.
+
+**Figure 9.** (A) DPV curves of DAPH on Nafion/RGO-TEPA/GCE in 0.1 M PBS (pH = 2.2) with different concentrations: (a) 0.01, (b) 0.05, (c) 0.1, (d) 0.5, (e) 1.0, (f) 2.0, (g) 5.0, (h) 8.0, and (i) 10.0 µM. The inset curve represents the DPV curves of Nafion/RGO-TEPA/GCE at low DAPH concentrations (a-c). (B) Calibration curve corresponding to the response of the modified sensor to the concentration of DAPH. Scan rate: 50 mV·s⁻¹.
\ No newline at end of file
diff --git a/samples/texts/6409431/page_16.md b/samples/texts/6409431/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..c8563d5bd22fa2f56e94a2f38f22d03e652cd297
--- /dev/null
+++ b/samples/texts/6409431/page_16.md
@@ -0,0 +1,9 @@
+**Table 1.** Comparison of the electrochemical sensing performance of different modified electrodes toward DAPH detection.
+
+| Electrode | Linear range (µM) | LOD (nM) | Technique | Reference |
+|---|---|---|---|---|
+| ERGO/GCE | 0.05~2 | 30 | DPV | 9 |
+| SDS-GN/SnO2/GCE | 0.03~8 | 8 | DPV | 10 |
+| Ca2GeO4-GR/GCE | 0.02~0.9 | 6 | DPV | 11 |
+| Nafion/RGO-TEPA/GCE | 0.01~10 | 0.5 | DPV | This work |
+
+### 3.6.3 Interference studies
+
+Under optimal experimental conditions, the potential interference from metal ions and organic compounds that may be present in pharmaceutical samples was investigated in a 10 µM DAPH solution by DPV. The tolerance limit was defined as the maximum concentration of the interfering species that caused an error of less than ±5%. As shown in Fig. 10, 200-fold concentrations (relative to the DAPH concentration) of Na⁺, Ca²⁺, Zn²⁺, K⁺, and Cl⁻, 100-fold concentrations of Cu²⁺, SO₄²⁻, malic acid, glucose, and tartaric acid, and 50-fold concentrations of sucrose, citric acid, and ascorbic acid showed no interference with the detection of DAPH. These results indicated that the Nafion/RGO-TEPA/GCE sensor has excellent anti-interference ability.
+
+**Figure 10.** Relative analytical response ($I_{Interference}/I_{DAPH}$) of 10 µM DAPH in the presence of different interferents.
\ No newline at end of file
diff --git a/samples/texts/6409431/page_17.md b/samples/texts/6409431/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..469a982a096dbbcb07d4e70e191bce03de55bb64
--- /dev/null
+++ b/samples/texts/6409431/page_17.md
@@ -0,0 +1,15 @@
+### 3.6.4 Real sample analysis
+
+To verify the validity of the present method, it was employed to determine DAPH in Zushima tablets. The sample preparation procedure is described in Section 2.4. Briefly, 4.44 µL of the real sample solution was diluted to 25.00 mL with the supporting electrolyte and analyzed. A recognizable oxidation peak was observed for DAPH in the real sample. To evaluate the accuracy of the proposed method, a recovery test was also carried out by adding known levels of DAPH to the sample. Moreover, the sample was also analyzed by HPLC to further verify the accuracy of the proposed method. The obtained results are listed in Table 2. The recoveries of DAPH on Nafion/RGO-TEPA/GCE ranged from 101.5% to 108.5%, and the contents of DAPH detected by HPLC and by the Nafion/RGO-TEPA/GCE sensor were in good agreement. These results confirmed Nafion/RGO-TEPA/GCE as an efficient electrode for the effective and accurate detection of DAPH in real samples.
+
+**Table 2.** Recovery results for DAPH in Zushima tablet samples (n = 3)
+
+| Sample | Added (µM) | Expected (µM) | Found (µM) | RSD (%) | Recovery (%) | Content by DPV (mg·tablet⁻¹) | Content by HPLC (mg·tablet⁻¹) |
+|---|---|---|---|---|---|---|---|
+| Zushima tablets | 0.000 | 0.000 | 0.197 | 4.8 | – | 5.03 | 5.12 |
+| | 0.200 | 0.397 | 0.404 | 1.3 | 103.5 | | |
+| | 0.400 | 0.597 | 0.631 | 2.5 | 108.5 | | |
+| | 0.600 | 0.797 | 0.806 | 1.5 | 101.5 | | |
+
+## 4. CONCLUSIONS
+
+In summary, a novel, reliable and sensitive sensor for the electrochemical detection of DAPH was successfully developed from RGO-TEPA and Nafion by a layer-by-layer drop-casting method. The sensor was applied for the first time to detect DAPH in Zushima tablets. Under optimal conditions, a wide linear range (0.01 ~ 10.0 µM) with a detection limit of 0.05 nM, good reproducibility, and good selectivity were achieved on the Nafion/RGO-TEPA/GCE sensor, owing to the good electroconductivity and excellent electron transfer capability of RGO-TEPA. This performance compares favorably with the reported results for other DAPH-detecting sensors. The proposed sensing strategy may be applicable to the detection of other pharmaceuticals.
+
+## ACKNOWLEDGMENTS
+
+The authors greatly acknowledge the financial support from Opening Project of Zhejiang Provincial First-rate Subject (Chinese Traditional Medicine), Zhejiang Chinese Medical University (No. Ya2017009).
\ No newline at end of file
diff --git a/samples/texts/6409431/page_18.md b/samples/texts/6409431/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..12da64658c93231f8988063ca01a31b9e880a1e9
--- /dev/null
+++ b/samples/texts/6409431/page_18.md
@@ -0,0 +1,67 @@
+**References**
+
+1. F. G. Medina, J. G. Marrero, M. Macias-Alonso, M. C. Gonzalez, I. Cordova-Guerrero, A. G. Teissier Garcia and S. Osegueda-Robles, *Nat. Prod. Rep.*, **32** (2015) 1472.
+
+2. G. Du, H. Tu, X. Li, A. Pei, J. Chen, Z. Miao, J. Li, C. Wang, H. Xie, X. Xu and H. Zhao, *Neurochem. Res.*, **39** (2014) 269.
+
+3. G. J. Finn, B. S. Creaven and D. A. Egan, *Biochem. Pharmacol.*, **67** (2004) 1779.
+
+4. K. C. Fylaktakidoua, D. J. Hadjipavlou-Litinab, K. E. Litinasc and D. N. Nicolaides, *Curr. Pharm. Des.*, **10** (2004), 3813.
+
+5. Q. Gao, J. Shan, H. Xu, X. Cai, H. Tong and L. Jiang, *Chin. J. New Drugs*, **16** (2007) 2021.
+
+6. Q. Gao, J. J. Shan, L. Q. Di, L. J. Jiang and H. Q. Xu, *J. Ethnopharmacol.*, **120** (2008) 259.
+
+7. D. A. Egan, C. Duff, L. Jordan, R. Fitzgerald, S. Connolly and G. Finn, *Chromatographia*, **58** (2003) 649.
+
+8. J. Chen, X. Liu and Y. P. Shi, *Anal. Chim. Acta.*, **523** (2004) 29.
+
+9. W. Qiao, Y. Li, L. Wang, G. Li, J. Li and B. Ye, *J. Electroanal. Chem.*, **74** (2015) 68.
+
+10. S. Jing, H. Zheng, L. Zhao, L. Qu and L. Yu, *J. Electroanal. Chem.*, **787** (2017) 72.
+
+11. Y. Fu, L. Wang, D. Huang, L. Zou and B. Ye, *J. Electroanal. Chem.*, **801** (2017) 77.
+
+12. G. Darabdhara, M. R. Das, S. P. Singh, A. K. Rengan, S. Szunerits and R. Boukherroub, *Adv. Colloid Interface Sci.*, **217** (2019) 101991.
+
+13. S. J. Rowley-Neale, E. P. Randviir, A. S. Abo Dena and C. E. Banks, *Appl. Mater. Today*, **10** (2018) 218.
+
+14. W. Mao, J. He, Z. Tang, C. Zhang, J. Chen, J. Li and C. Yu, *Biosens. Bioelectron.*, **131** (2019) 67.
+
+15. J. Chen, C. Yu, Y. Zhao, Y. Niu, L. Zhang, Y. Yu, J. Wu and J. He, *Biosens. Bioelectron.*, **91** (2017) 892.
+
+16. H. Zhong, C. Yu, R. Gao, J. Chen, Y. Yu, Y. Geng, Y. Wen and J. He, *Biosens. Bioelectron.*, **144** (2019) 111635.
+
+17. H. Ma, X. Zhang, X. Li, R. Li, B. Du and Q. Wei, *Talanta*, **143** (2015) 77.
+
+18. S. Zhao, Y. Zhang, S. Ding, J. Fan, Z. Luo, K. Liu, Q. Shi, W. Liu and G. Zang, *J. Electroanal. Chem.*, **834** (2019) 33.
+
+19. X. Zhang, F. Li, Q. Wei, B. Du, D. Wu and H. Li, *Sensor. Actuat. B-Chem.*, **194** (2014) 64.
+
+20. S. Yu, G. Zou and Q. Wei, *Talanta*, **156-157** (2016) 11.
+
+21. L. Cao, C. Fang, R. Zeng, X. Zhao, F. Zhao, Y. Jiang and Z. Chen, *Sensor. Actuat. B-Chem.*, **252** (2017) 44.
+
+22. D. Wu, A. Guo, Z. Guo, L. Xie, Q. Wei and B. Du, *Biosens. Bioelectron.*, **54** (2014) 634.
+
+23. F. C. Anson, *Anal. Chem.*, **38** (1966) 54.
+
+24. H. Zhang, Y. Shang, T. Zhang, K. Zhuo and J. Wang, *Sensor. Actuat. B-Chem.*, **242** (2017) 492.
+
+25. E. Laviron, *J. Electroanal. Chem.*, **101** (1979) 19.
+
+26. E. Mynttinen, N. Wester, T. Lilius, E. Kalso, J. Koskinen and T. Laurila, *Electrochim. Acta*, **295** (2019) 347.
+
+27. J. Liu, W. Weng, C. Yin, G. Luo, H. Xie, Y. Niu, X. Li, G. Li, Y. Xi, Y. Gong, S. Zhang and W. Sun, *Int. J. Electrochem. Sci.*, **14** (2019) 1310.
+
+28. Y. Wang, H. Ge, G. Ye, H. Chen and X. Hu, *J. Mater. Chem. B*, **3** (2015) 3747.
+
+29. C. Montes, A. M. Contento, M. J. Villaseñor and Á. Ríos, *Microchim. Acta*, **187** (2020) 190.
+
+30. R. Nehru, Y. F. Hsu and S. F. Wang, *Anal. Chim. Acta*, **1122** (2020) 76.
+
+© 2021 The Authors.
+Published by ESG (www.electrochemsci.org). This article is an open access
+article distributed under the terms and conditions of the Creative Commons Attribution license
+(http://creativecommons.org/licenses/by/4.0/).
\ No newline at end of file
diff --git a/samples/texts/6409431/page_2.md b/samples/texts/6409431/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b78b09f14a4e9ab1b6c8f49dbc83ae2b4d6eeed
--- /dev/null
+++ b/samples/texts/6409431/page_2.md
@@ -0,0 +1,15 @@
+formulations with good detection sensitivity and a reasonable detection limit [9]. Furthermore, it has
+been demonstrated that nanocomposite-modified electrodes can sensitively increase the electrochemical
+response to DAPH [10,11].
+
+In the last decade, reduced graphene oxide (RGO) has been increasingly studied for fabricating electrochemical sensors and biosensors because of its excellent electrochemical properties and large surface area [12,13]. Reduced graphene oxide-tetraethylenepentamine (RGO-TEPA), a novel combination of RGO and TEPA through C-N covalent bonding, has been developed as an excellent substrate material for electrochemical immunosensors detecting numerous target analytes, such as monocyte chemoattractant protein-1 [14], the fibroblast growth factor receptor 3 gene [15], T-2 toxin [16], and several cancer markers [17–22]. It has been found that the large number of amino groups in RGO-TEPA not only improves the stability of RGO but also allows the material to be modified by metals.
+
+In this study, we develop a sensitive electrochemical method for the detection of DAPH based on using an RGO-TEPA-modified glassy carbon electrode (GCE) as a novel electrochemical sensor. To the best of our knowledge, this is the first attempt to use an RGO-TEPA-modified electrochemical sensor to sensitively increase the electrochemical response of a pharmaceutical formulation.
+
+## 2. EXPERIMENTAL
+
+### 2.1 Reagents and instruments
+
+A standard DAPH reagent (98%) was purchased from Shanghai Yuanye Bio-Technology Co., Ltd. (Shanghai, China). RGO and RGO-TEPA were provided by Jiangsu XFNANO Materials Tech Co., Ltd. (Nanjing, China). A 5% Nafion solution was received from Sigma-Aldrich (St. Louis, Missouri, USA). The standard DAPH stock solution (1 mM) was prepared with anhydrous methanol and stored at 4°C in the dark. The supporting electrolyte (0.1 M phosphate buffered saline, PBS) was prepared by mixing NaH₂PO₄ and Na₂HPO₄ solutions, and low pH values were adjusted with 0.1 M H₃PO₄ solution. Working standards were prepared daily by diluting the stock solution to the desired concentration with PBS. All other chemicals were of analytical grade or better and used as received. All water used in this work was obtained from a Smart-DUVF water purification system (Shanghai Hitech Instruments Co., Ltd., China) with a resistivity of 18.2 MΩ·cm.
+
+Cyclic voltammetry (CV) and differential pulse voltammetry (DPV) were conducted with a CHI660 electrochemical workstation (Shanghai Chenhua Instrument Co., Ltd., Shanghai, China). Electrochemical impedance spectroscopy (EIS) was carried out on an RST5210F electrochemical workstation (Suzhou Risetest Electronic Co., Ltd., Suzhou, China). A conventional three-electrode system was applied in the experiment with a bare or modified glassy carbon electrode (GCE, 3 mm diameter) as the working electrode and Ag/AgCl and platinum wire as the reference and counter electrodes, respectively. A PHS-3E pH meter (China Shanghai Electric Instrument Co., Ltd., China) was used for pH adjustments.
\ No newline at end of file
diff --git a/samples/texts/6409431/page_3.md b/samples/texts/6409431/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..68034fcda4ea1de9e4dcef718285244d450a5ed7
--- /dev/null
+++ b/samples/texts/6409431/page_3.md
@@ -0,0 +1,17 @@
+## 2.2 Preparation of the modified electrodes
+
+Before modification, the bare GCE was mechanically polished to a mirror-like surface, first with abrasive paper and then with 0.3 and 0.05 µm alumina slurries. The electrode was then cleaned ultrasonically in ethanol and in ultrapure water for 5 min each and finally dried in air at room temperature.
+
+The purchased RGO-TEPA powder was ultrasonically dispersed in ultrapure water for 1 h to obtain a 1.0 mg·mL⁻¹ homogeneous dispersion. Before coating, a 0.5 wt.% Nafion solution was prepared by diluting the 5 wt.% Nafion solution in absolute ethanol (≥ 99.7 wt.%). RGO-TEPA/GCE was obtained by carefully casting 8 µL of the RGO-TEPA dispersion onto the electrode surface and drying under an infrared lamp. Nafion/RGO-TEPA/GCE was then obtained by adding 2 µL of the 0.5% Nafion ethanol solution dropwise onto the surface of the dried RGO-TEPA/GCE. The Nafion/RGO-TEPA/GCE sensor was thus built layer by layer.
+
+## 2.3 Real sample solution preparation
+
+Zushima tablets were purchased from a local pharmacy in Hangzhou. Tablet powder was obtained by finely grinding 10 Zushima tablets in an agate mortar. Accurately weighed powder (0.3610 g) was transferred into 50 mL of 85% methanol and dissolved by ultrasonication for 30 min to obtain a homogenized suspension. The supernatant was collected by centrifugation and filtration. The sample solution was stored at 4 °C in the dark. Before each measurement, a certain amount of sample was diluted to 10 mL with 0.1 M PBS (pH = 2.2).
+
+# 3. RESULTS AND DISCUSSION
+
+## 3.1 Electrochemical behaviors of Nafion/RGO-TEPA/GCE
+
+The electron transfer ability of the redox reaction on the Nafion/RGO-TEPA/GCE surface was verified by EIS using a 5 mM K₃Fe(CN)₆/K₄Fe(CN)₆ solution (containing 0.1 M KCl) as an electrochemical probe with an applied frequency range from 0.01 Hz to 1 MHz and an amplitude of 5 mV. The bare GCE, RGO/GCE, RGO-TEPA/GCE, and Nafion/RGO/GCE were also detected for comparison, and the obtained results were reported in the form of Nyquist plots (Fig. 1).
+
+In the Nyquist plot, the diameter of the semicircle corresponds to the charge transfer resistance (Rct) obtained by fitting a Randles equivalent circuit, which reflects the electron transfer kinetics of the redox probe at the electrode/electrolyte interface. The Rct values of the bare GCE, RGO/GCE, RGO-TEPA/GCE, Nafion/RGO/GCE, and Nafion/RGO-TEPA/GCE were calculated to be approximately 149, 66, 60, 55, and 43 Ω, respectively. These results demonstrated that Nafion/RGO-TEPA accelerated the electron transfer between the electrode surface and the electrochemical probe K₃Fe(CN)₆/K₄Fe(CN)₆.
\ No newline at end of file
diff --git a/samples/texts/6409431/page_4.md b/samples/texts/6409431/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..d4016269857f4b2d981dd0af22d3c2175f60a985
--- /dev/null
+++ b/samples/texts/6409431/page_4.md
@@ -0,0 +1,7 @@
+**Figure 1.** EIS of the bare GCE, RGO/GCE, RGO-TEPA/GCE, Nafion/RGO/GCE, and Nafion/RGO-TEPA/GCE in 5.0 mM $K_3Fe(CN)_6/K_4Fe(CN)_6$ solution containing 0.1 M KCl.
+
+The active area of an electrode can be estimated by chronocoulometry. Fig. 2A shows the single-potential-step chronocoulometric curves of the bare GCE, RGO/GCE, RGO-TEPA/GCE, Nafion/RGO/GCE, and Nafion/RGO-TEPA/GCE in a 1 mM $K_3Fe(CN)_6$ solution containing 0.1 M KCl. The corresponding $Q-t^{1/2}$ plots are provided in Fig. 2B, and the linear relationships of $Q-t^{1/2}$ were expressed as $Q (\mu C) = 14.4694t^{1/2} + 0.6694 (R^2 = 0.9996)$, $Q (\mu C) = 21.9014t^{1/2} + 1.2621 (R^2 = 0.9923)$, $Q (\mu C) = 28.6747t^{1/2} + 4.2133 (R^2 = 0.9942)$, $Q (\mu C) = 50.4908t^{1/2} + 3.9289 (R^2 = 0.9977)$ and $Q (\mu C) = 60.2945t^{1/2} + 7.8804 (R^2 = 0.9904)$, for the bare GCE, RGO/GCE, RGO-TEPA/GCE, Nafion/RGO/GCE, and Nafion/RGO-TEPA/GCE, respectively. According to the Anson equation (1) [23],
+
+$$Q = \frac{2nFAC(Dt)^{1/2}}{\pi^{1/2}} + Q_{dl} + Q_{ads} \quad (1)$$
+
+where $Q_{dl}$ is the double-layer charge, $Q_{ads}$ is the Faradaic charge due to adsorbed species, $F$ is the Faraday constant, $A$ ($\text{cm}^2$) is the active area of the working electrode, $C$ ($\text{mol} \cdot \text{cm}^{-3}$) is the concentration of the substrate, $n$ is the number of electrons transferred, and $D$ ($\text{cm}^2 \cdot \text{s}^{-1}$) is the diffusion coefficient. In the $K_3Fe(CN)_6$ solution, $n$ = 1 and $D$ = $7.6 \times 10^{-6} \text{ cm}^2 \cdot \text{s}^{-1}$ [24].
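As a sanity check on the reported areas, the Anson slope can be inverted for $A$; a minimal sketch using the slopes and constants quoted above (the function name is ours):

```python
import math

# Recover the electrode active area A from the slope of the Q vs t^(1/2) line
# via the Anson equation: slope = 2 n F A C (D / pi)^(1/2).
# n = 1, D = 7.6e-6 cm^2/s (from [24]), C = 1 mM K3Fe(CN)6 = 1e-6 mol/cm^3.
F = 96485.0                 # Faraday constant (C/mol)
n, C, D = 1, 1.0e-6, 7.6e-6

def active_area_cm2(slope_uC_per_sqrt_s):
    slope = slope_uC_per_sqrt_s * 1e-6        # convert uC to C
    return slope * math.sqrt(math.pi) / (2 * n * F * C * math.sqrt(D))

# slopes of the Q-t^(1/2) fits reported above (uC s^(-1/2))
for name, slope in [("bare GCE", 14.4694), ("RGO/GCE", 21.9014),
                    ("RGO-TEPA/GCE", 28.6747), ("Nafion/RGO/GCE", 50.4908),
                    ("Nafion/RGO-TEPA/GCE", 60.2945)]:
    print(name, round(active_area_cm2(slope), 3))   # 0.048 ... 0.201 cm^2
```

These values reproduce the active areas of 0.048 to 0.201 cm² reported in the text.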
\ No newline at end of file
diff --git a/samples/texts/6409431/page_5.md b/samples/texts/6409431/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..bfd2367b0d7b2d66ae27eb1002b2d409f6a987ab
--- /dev/null
+++ b/samples/texts/6409431/page_5.md
@@ -0,0 +1 @@
+**Figure 2.** (A) Chronocoulometric curves of the bare GCE, RGO/GCE, RGO-TEPA/GCE, Nafion/RGO/GCE, and Nafion/RGO-TEPA/GCE in a 1 mM $K_3Fe(CN)_6$ solution containing 0.1 M KCl. (B) The corresponding $Q-t^{1/2}$ plots.
\ No newline at end of file
diff --git a/samples/texts/6409431/page_6.md b/samples/texts/6409431/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..16897dcca01c9fc7dd0b3525a145299b8afa6f0f
--- /dev/null
+++ b/samples/texts/6409431/page_6.md
@@ -0,0 +1,7 @@
+From the slope value of the regression equation of $Q-t^{1/2}$, the active areas of the above electrodes were calculated to be 0.048 cm², 0.073 cm², 0.096 cm², 0.168 cm² and 0.201 cm². These results indicated that the active area of Nafion/RGO-TEPA/GCE was approximately 4 times that of the bare GCE and much larger than that of the other electrodes. This large active area was beneficial for providing more electroactive sites to increase the electrochemical response of DAPH on Nafion/RGO-TEPA/GCE.
+
+## 3.2 Electrochemical behavior of DAPH on the Nafion/RGO-TEPA/GCE
+
+The electrochemical behaviors of DAPH on the bare GCE, RGO/GCE, RGO-TEPA/GCE, Nafion/RGO/GCE, and Nafion/RGO-TEPA/GCE were investigated by CV. As shown in Fig. 3, an enhanced redox current peak of DAPH with the largest background current was obtained on the Nafion/RGO-TEPA/GCE sensor compared with that of the RGO-TEPA/GCE, Nafion/RGO/GCE, RGO/GCE, and bare GCE sensors. The highest peak current observed might be due to the fast electron transfer kinetics on the Nafion/RGO-TEPA/GCE-modified electrode and the hydrogen bond formed between the amino groups in TEPA and the carbonyl groups in DAPH. The largest background current of the Nafion/RGO-TEPA/GCE might be due to the large effective surface area of the electrode. Therefore, it is feasible to detect DAPH with Nafion/RGO-TEPA/GCE.
+
+**Figure 3.** CV curves of the bare GCE, RGO/GCE, Nafion/RGO/GCE, RGO-TEPA/GCE, and Nafion/RGO-TEPA/GCE sensors in 0.1 M PBS (pH = 2.2) containing 10 µM DAPH at a scan rate of 50 mV·s⁻¹.
\ No newline at end of file
diff --git a/samples/texts/6409431/page_7.md b/samples/texts/6409431/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..5740c6010e3c41d1c7dba39174b69002e3b5696d
--- /dev/null
+++ b/samples/texts/6409431/page_7.md
@@ -0,0 +1,3 @@
+### 3.3 Effect of the scan rate
+
+The electrochemical responses of DAPH (10 µM) at different scan rates before and after accumulation were investigated by CV to elucidate the redox mechanism (Fig. 4). As shown in Fig. 4A-B, the redox peak currents increased with the scan rate from 0.02 to 0.3 V·s⁻¹, and the peak currents after accumulation were greater than those before accumulation. Fig. 4C shows that both the oxidation peak current ($I_{pa}$) and the reduction peak current ($I_{pc}$) vary linearly with the scan rate ($v$) before accumulation. The fitted regression equations are $I_{pa}$ (µA) = -67.10$v$ (V·s⁻¹) + 2.15 ($R^2$ = 0.9939) and $I_{pc}$ (µA) = 56.35$v$ (V·s⁻¹) - 1.51 ($R^2$ = 0.9956), indicating that the electrochemical process of DAPH on Nafion/RGO-TEPA/GCE before accumulation is adsorption-controlled. Additionally, Fig. 4D shows that after accumulation $I_{pa}$ and $I_{pc}$ vary linearly with the square root of the scan rate ($v^{1/2}$): $I_{pa}$ (µA) = -75.89$v^{1/2}$ (V$^{1/2}$·s$^{-1/2}$) + 3.73 ($R^2$ = 0.9980) and $I_{pc}$ (µA) = 65.53$v^{1/2}$ (V$^{1/2}$·s$^{-1/2}$) - 2.87 ($R^2$ = 0.9984), indicating that the electrochemical process of DAPH on Nafion/RGO-TEPA/GCE after accumulation is diffusion-controlled.
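The adsorption-versus-diffusion diagnostic above amounts to comparing two linear fits; a minimal sketch (the helper is hypothetical, and the test series is synthesized from the after-accumulation $I_{pc}$ fit quoted in the text, not from raw data):

```python
import math

# Regress Ip on v and on sqrt(v) and compare the fits: near-perfect linearity
# in v suggests adsorption control, in sqrt(v) diffusion control.
def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot          # intercept, slope, R^2

v = [0.02 * k for k in range(1, 16)]                 # 0.02 ... 0.30 V/s
ipc = [65.53 * math.sqrt(vi) - 2.87 for vi in v]     # synthetic currents (uA)
_, _, r2_v = linear_fit(v, ipc)
_, _, r2_sqrt_v = linear_fit([math.sqrt(vi) for vi in v], ipc)
# r2_sqrt_v is ~1 and exceeds r2_v, flagging a diffusion-controlled process
```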
\ No newline at end of file
diff --git a/samples/texts/6409431/page_8.md b/samples/texts/6409431/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..a902790b8b68f0bbfa7ab68ec142e66a7c4ff6e4
--- /dev/null
+++ b/samples/texts/6409431/page_8.md
@@ -0,0 +1,3 @@
+(B)
+
+(C)
\ No newline at end of file
diff --git a/samples/texts/6409431/page_9.md b/samples/texts/6409431/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..ffd30ceead73ae8d7a975ea00f683f6015c42b49
--- /dev/null
+++ b/samples/texts/6409431/page_9.md
@@ -0,0 +1,13 @@
+**Figure 4.** CV curves of DAPH (10 µM) on Nafion/RGO-TEPA/GCE in 0.1 M PBS (pH = 2.2) at different scan rates from 0.02 to 0.30 V·s⁻¹ with 20 mV s⁻¹ intervals before (A) and after (B) accumulation. (C) Linear relationship obtained for the redox peak current vs. the scan rate. (D) Linear relationship obtained for the redox peak current vs. the square root of the scan rate.
+
+The relationship between the scan rate and the redox peak potential ($E_p$) was deduced from Fig. 4A and is shown in Fig. 5. With increasing scan rate, the oxidation peak potential ($E_{pa}$) shifted positively and the reduction peak potential ($E_{pc}$) shifted negatively, indicating a quasi-reversible electrode process. For a surface-controlled process, the electron transfer kinetics of daphnetin on Nafion/RGO-TEPA/GCE can be deduced from the Laviron equations (2)-(4) [25]:
+
+$$E_{\text{pa}} = E^{0'} + \frac{RT}{(1-\alpha)nF} \ln v \quad (2)$$
+
+$$E_{\text{pc}} = E^{0'} - \frac{RT}{\alpha nF} \ln v \quad (3)$$
+
+$$\lg k_s = \alpha \lg(1-\alpha) + (1-\alpha)\lg\alpha - \lg\frac{RT}{nFv} - \alpha(1-\alpha)\frac{nF\Delta E_p}{2.303RT} \quad (4)$$
+
+where, $E^{0'}$ is the formal standard potential, $k_s$ is the apparent heterogeneous electron transfer rate constant, $n$ is the electron transfer number, $\alpha$ is the charge transfer coefficient, $v$ is the scan rate, and $\Delta E_p$ is the peak-to-peak potential separation.
+
+When $v > 0.06 \text{ V·s}^{-1}$, $E_{\text{pa}}$ and $E_{\text{pc}}$ had a good linear relationship with $\ln v$ providing regression equations of $E_{\text{pa}}$ (V) = 0.0418$\ln v$ + 0.6794 ($R^2$ = 0.9868) and $E_{\text{pc}}$ (V) = -0.0231$\ln v$ + 0.4428 ($R^2$ =
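From the two slopes reported above, eqs. (2) and (3) determine $\alpha$ and $n$; a minimal sketch, assuming $T$ = 298.15 K (the variable names are illustrative):

```python
# Extract alpha and n from the Ep vs ln(v) slopes via eqs. (2)-(3):
#   anodic slope  s_a = RT / ((1 - alpha) n F)
#   cathodic |slope| s_c = RT / (alpha n F)   =>   s_a / s_c = alpha / (1 - alpha)
R, F, T = 8.314, 96485.0, 298.15    # T = 298.15 K assumed
s_a, s_c = 0.0418, 0.0231           # |slopes| reported above, in V

alpha = s_a / (s_a + s_c)           # charge transfer coefficient
n = R * T / ((1 - alpha) * F * s_a) # electron transfer number
# alpha ~ 0.64 and n ~ 1.7, consistent with an overall two-electron transfer
```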
\ No newline at end of file
diff --git a/samples/texts/7149586/page_1.md b/samples/texts/7149586/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..0f6fc907d7b687bbf2776ec881d00e9d15f59e6c
--- /dev/null
+++ b/samples/texts/7149586/page_1.md
@@ -0,0 +1,12 @@
+# Mixed-Integer Nonlinear Programming Models and Algorithms for Large-Scale Supply Chain Design with Stochastic Inventory Management
+
+Fengqi You, Ignacio E. Grossmann*
+
+Department of Chemical Engineering, Carnegie Mellon University
+Pittsburgh, PA 15213, USA
+
+## Abstract
+
+An important challenge for most chemical companies is to simultaneously consider inventory optimization and supply chain network design under demand uncertainty. This leads to a problem that requires integrating a stochastic inventory model with the supply chain network design model. This problem can be formulated as a large scale combinatorial optimization model that includes nonlinear terms. Since these models are very difficult to solve, they require exploiting their properties and developing special solution techniques to reduce the computational effort. In this work, we analyze the properties of the basic model and develop solution techniques for a joint supply chain network design and inventory management model for a given product. The model is formulated as a nonlinear integer programming problem. By reformulating it as a mixed-integer nonlinear programming (MINLP) problem and using an associated convex relaxation model for initialization, we first propose a heuristic method to quickly obtain good quality solutions. Further, a decomposition algorithm based on Lagrangean relaxation is developed for obtaining global or near-global optimal solutions. Extensive computational examples with up to 150 distribution centers and 150 retailers are presented to illustrate the performance of the algorithms and to compare them with the full-space solution.
+
+* To whom all correspondence should be addressed. E-mail: grossmann@cmu.edu
\ No newline at end of file
diff --git a/samples/texts/7149586/page_10.md b/samples/texts/7149586/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..5678d38963675dc7b6857e2ef6ca490a26e48a3a
--- /dev/null
+++ b/samples/texts/7149586/page_10.md
@@ -0,0 +1,50 @@
+$d_{ij}$ Unit transportation cost from DC j to retailer i
+
+$\chi$ Days per year (to convert daily demand and variance values to annual costs)
+
+$\mu_i$ Mean demand at retailer i (daily)
+
+$\sigma_i^2$ Variance of demand at retailer i (daily)
+
+$F_j$ Fixed cost of placing an order from the supplier to the DC at candidate site j
+
+$g_j$ Fixed transportation cost from the supplier to the DC at candidate site j
+
+$a_j$ Unit transportation cost from the supplier to the DC at candidate site j
+
+$L$ Lead time from the supplier to the candidate DC sites (in days)
+
+$h$ Unit inventory holding cost
+
+$\alpha$ Desired probability of retailer orders satisfied
+
+$\beta$ Weight factor assigned to transportation costs
+
+$\theta$ Weight factor assigned to inventory costs
+
+$z_{\alpha}$ Standard normal deviate such that $\text{Pr}(z \le z_{\alpha}) = \alpha$
+
+**Decision Variables (0-1)**
+
+$X_j$ 1 if we locate a DC in candidate site j, and 0 otherwise
+
+$Y_{ij}$ 1 if retailer i is served by the DC at candidate site j, and 0 otherwise
+
+**4.1. Objective Function**
+
+The objective of this model is to minimize the total weighted cost of the following
+items:
+
+* fixed costs for locating facilities,
+
+* transportation costs from DCs to retailers,
+
+* fixed order placement costs, transportation costs from the supplier to DCs, and the expected working inventory costs in the DCs,
+
+* safety stock costs in the DCs.
+
+The facility location cost is given by,
+
+$$
+\sum_{j \in J} f_j X_j \tag{1}
+$$
\ No newline at end of file
diff --git a/samples/texts/7149586/page_11.md b/samples/texts/7149586/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..bff4ad12050774cf4377ac35bd7a078fea7234c0
--- /dev/null
+++ b/samples/texts/7149586/page_11.md
@@ -0,0 +1,13 @@
+The product of yearly expected mean demand ($\chi\mu_i$) and the unit transportation cost ($d_{ij}$) leads to the annual DC to retailer transportation costs. If the retailer $i$ is not served by the DC in candidate location $j$, the transportation cost is zero. Hence, the total expected transportation costs from DCs to retailers can be expressed as:
+
+$$ \sum_{j \in J} \sum_{i \in I} \chi d_{ij} \mu_i Y_{ij} \quad (2) $$
+
+As all the retailers have stochastic demands and all the DCs manage the inventory using a $(Q, r)$ policy with *Type I Service* constraint, the working inventory cost can be approximated with an economic order quantity model (EOQ) with very small error bound.$^{30, 31}$ Let $n$ be the number of replenishments per year and $D$ be the annual demand for the product. Thus, the annual costs of ordering, shipping and working inventory from the supplier to the DCs are approximated by:
+
+$$ Fn + \beta \left( g + a \frac{D}{n} \right) n + \theta \frac{hD}{2n} \quad (3) $$
+
+The first term $Fn$ is the total ordering cost per year. The second term is the annual transportation cost times the weighted factor ($\beta$), where $(D/n)$ is the expected shipment size, and the shipping cost is given by a linear function $v(x) = g + ax$. The third term is the annual working inventory costs times the weighted factor ($\theta$), where $D/(2n)$ is the average inventory level on hand. Considering (3) as a function of annual order number $n$, by setting the first order derivative to zero with respect to $n$, we can obtain the optimal order number $n = \sqrt{\theta h D / (2(F + \beta g))}$. Therefore, by substituting into (3), the total optimal cost for replenishments, including ordering, transportation and working inventory holding cost is given by,
+
+$$ \beta aD + \sqrt{2\theta h(F + \beta g)D} \quad (4) $$
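The optimality argument above is easy to verify numerically; a minimal sketch with illustrative parameter values (not from the paper):

```python
import math

# Check that the closed form (4) equals cost (3) at the optimal order number
#   n* = sqrt(theta * h * D / (2 * (F + beta * g)))
# All parameter values below are illustrative.
F, g, a = 10.0, 5.0, 0.1            # ordering / fixed shipping / unit shipping costs
beta, theta, h, D = 1.0, 1.0, 2.0, 1000.0

def cost(n):                        # eq. (3): ordering + shipping + working inventory
    return F * n + beta * (g + a * D / n) * n + theta * h * D / (2 * n)

n_star = math.sqrt(theta * h * D / (2 * (F + beta * g)))
closed_form = beta * a * D + math.sqrt(2 * theta * h * (F + beta * g) * D)   # eq. (4)

assert abs(cost(n_star) - closed_form) < 1e-6
assert cost(0.9 * n_star) > cost(n_star) < cost(1.1 * n_star)   # n* minimizes (3)
```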
+
+Substituting the demand $D$ with the annual expected demand of the product in each DC ($\sum_{i \in I} \chi\mu_i Y_{ij}$), the total replenishment costs for all the DCs can be expressed by,
\ No newline at end of file
diff --git a/samples/texts/7149586/page_12.md b/samples/texts/7149586/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..d7966b8bc572c266b4a494fb0c4dfe3864673501
--- /dev/null
+++ b/samples/texts/7149586/page_12.md
@@ -0,0 +1,33 @@
+$$ \beta \sum_{j \in J} a_j \sum_{i \in I} \chi \mu_i Y_{ij} + \sum_{j \in J} \sqrt{2\theta h(F_j + \beta g_j) \sum_{i \in I} \chi \mu_i Y_{ij}} \quad (5) $$
+
+As the demand at each retailer follows a given normal distribution, let $\mu_i$ and $\sigma_i^2$ be the mean and variance of demand of the product at retailer $i$. Due to the risk-pooling effect,³² the lead time demand at a DC serving a set $S$ of retailers is also normally distributed, with mean $L \sum_{i \in S} \mu_i$ and variance $L \sum_{i \in S} \sigma_i^2$. Thus, the safety stock required in the DC at candidate location $j$ to ensure that stockouts occur with a probability of at most $1-\alpha$ is,
+
+$$ z_{\alpha} \sqrt{L \sum_{i \in I} \sigma_i^2 Y_{ij}}. \quad (6) $$
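Expression (6) can be sketched directly, and doing so also illustrates the risk-pooling effect it encodes; all numbers below are illustrative:

```python
import math
from statistics import NormalDist

# Risk-pooled safety stock (6) at a DC: z_alpha * sqrt(L * sum(sigma_i^2))
# over the retailers assigned to it. Numbers below are illustrative.
def safety_stock(alpha, L, variances):
    z_alpha = NormalDist().inv_cdf(alpha)     # Pr(z <= z_alpha) = alpha
    return z_alpha * math.sqrt(L * sum(variances))

# one DC serving three retailers, 97.5 % service level, 7-day lead time
pooled = safety_stock(0.975, 7, [4.0, 9.0, 16.0])
# pooling beats holding stock separately: sqrt of a sum <= sum of sqrts
separate = sum(safety_stock(0.975, 7, [s2]) for s2 in [4.0, 9.0, 16.0])
assert pooled < separate
```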
+
+Therefore, the objective function of this model is given by
+
+$$
+\begin{aligned}
+\text{Min:} & \quad \sum_{j \in J} f_j X_j + \beta \sum_{j \in J} \sum_{i \in I} \chi d_{ij} \mu_i Y_{ij} + \beta \sum_{j \in J} a_j \sum_{i \in I} \chi \mu_i Y_{ij} \\
+& \quad + \sum_{j \in J} \sqrt{2\theta h(F_j + \beta g_j) \sum_{i \in I} \chi \mu_i Y_{ij}} + \theta h z_{\alpha} \sum_{j \in J} \sqrt{\sum_{i \in I} L \sigma_i^2 Y_{ij}}
+\end{aligned}
+\quad (7)
+$$
+
+where each term accounts for the fixed facility location cost, DC to retailer transportation costs, replenishment costs (including supplier to DC transportation costs, fixed ordering costs and working inventory costs) and safety stock costs.
+
+## 4.2. Network Constraints
+
+Two constraints are used to define the network structure. The first one is that each retailer $i$ should be served by only one DC,
+
+$$ \sum_{j \in J} Y_{ij} = 1, \quad \forall i \in I. \quad (8) $$
+
+The second constraint states that if a retailer $i$ is served by the DC in candidate location $j$, the DC must exist,
+
+$$ Y_{ij} \le X_j, \quad \forall i \in I, \forall j \in J. \quad (9) $$
+
+Finally, all the decision variables are binary variables in this model:
+
+$$ X_j \in \{0,1\}, \quad \forall j \in J \quad (10) $$
+
+$$ Y_{ij} \in \{0,1\}, \quad \forall i \in I, \forall j \in J \quad (11) $$
\ No newline at end of file
diff --git a/samples/texts/7149586/page_13.md b/samples/texts/7149586/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..4bd8e446d11932c7e6d6a2475d630b15b622edb3
--- /dev/null
+++ b/samples/texts/7149586/page_13.md
@@ -0,0 +1,39 @@
+**4.3. INLP Model**
+
+Grouping the parameters, we can rearrange the objective function and formulate the
+problem as the following integer nonlinear programming (INLP) model (P0):
+
+$$
+\begin{align}
+\textbf{(P0)} \quad & \text{Min:} \quad \sum_{j \in J} (f_j X_j + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}}) \tag{12} \\
+& \text{s.t.} \quad \sum_{j \in J} Y_{ij} = 1, \qquad \forall i \in I \tag{8} \\
+& \qquad Y_{ij} \le X_j, \qquad \forall i \in I, \forall j \in J \tag{9} \\
+& \qquad X_j \in \{0,1\}, \qquad \forall j \in J \tag{10} \\
+& \qquad Y_{ij} \in \{0,1\}, \qquad \forall i \in I, \forall j \in J \tag{11}
+\end{align}
+$$
+
+where
+
+$$
+\begin{align*}
+\hat{d}_{ij} &= \beta \chi \mu_i (d_{ij} + a_j) \\
+K_j &= \sqrt{2\theta h \chi (F_j + \beta g_j)} \\
+q &= \theta h z_{\alpha} \\
+\hat{\sigma}_i^2 &= L \sigma_i^2
+\end{align*}
+$$
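To make the grouped form of (P0) concrete, objective (12) can be evaluated for any candidate assignment; a minimal sketch with made-up data (encoding $Y_{ij}$ as an assignment list is our choice, not the paper's):

```python
import math

# Evaluate objective (12) of (P0) for a candidate assignment, using the grouped
# parameters d_hat, K_j, q, sigma_hat2 defined above. All data are illustrative.
def total_cost(X, assign, f, d_hat, K, q, mu, sigma_hat2):
    """assign[i] = j means retailer i is served by DC j (i.e., Y_ij = 1)."""
    cost = 0.0
    for j in range(len(f)):
        served = [i for i in range(len(mu)) if assign[i] == j]
        cost += f[j] * X[j]                                   # location cost
        cost += sum(d_hat[i][j] for i in served)              # linear assignment cost
        cost += K[j] * math.sqrt(sum(mu[i] for i in served))  # working inventory
        cost += q * math.sqrt(sum(sigma_hat2[i] for i in served))  # safety stock
    return cost

# two candidate DCs, three retailers (hypothetical data)
f, K, q = [100.0, 120.0], [4.0, 5.0], 2.0
d_hat = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]]
mu, sigma_hat2 = [10.0, 20.0, 30.0], [1.0, 4.0, 9.0]
value = total_cost([1, 1], [0, 0, 1], f, d_hat, K, q, mu, sigma_hat2)
```

The square-root terms are exactly the concave costs that make (P0) hard: they reward pooling retailers at fewer DCs.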
+
+## 5. Solution Approach
+
+The joint supply chain network design and inventory management model ((8)-(12))
+is a nonlinear integer program in which all the decision variables are binary.
+Besides its combinatorial nature, the nonlinear terms are nonconvex, which makes
+the optimization model very difficult to solve. To address this problem,
+previous researchers¹⁹, ³³ have simplified the model by assuming that the
+variance-to-mean ratio is identical at all the retailers; in practice, however,
+this ratio may vary from retailer to retailer, so an efficient algorithm is
+required to solve model (P0) without this assumption and thereby provide a good
+approximation for real cases. In the next section, we reformulate the INLP model
+(P0) as a mixed-integer nonlinear programming (MINLP) problem with fewer 0-1
+variables and solve it with different solution approaches, including a heuristic method
\ No newline at end of file
diff --git a/samples/texts/7149586/page_14.md b/samples/texts/7149586/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..b06db69520b8e26f30e8ad6a4a1923b8fdfaed7b
--- /dev/null
+++ b/samples/texts/7149586/page_14.md
@@ -0,0 +1,32 @@
+to obtain “good quality” solutions very quickly, and a Lagrangean relaxation algorithm
+for obtaining global or near-global optimal solutions.
+
+**5.1. MINLP Formulation**
+
+The original INLP model (P0) is very difficult to solve for large instances due to
+the potentially large number of binary variables (see Table 2 in Section 7 for
+examples). As shown in the proposition below, the assignment variables ($Y_{ij}$) in
+the model can be relaxed as continuous variables without changing the optimal
+integer solution. This allows us to reformulate (P0) as an MINLP problem with
+fewer 0-1 variables, most of which appear in linear form.
+
+**Proposition 1.** The continuous variables $Y_{ij}$ take 0-1 integer values when (P0) is globally optimized, or locally optimized for fixed 0-1 values of $X_j$.
+
+Proposition 1 means that the following problem (P1) yields integer values for the
+assignment variables $Y_{ij}$ when it is globally optimized, or locally optimized
+for fixed 0-1 values of $X_j$.
+
+$$
+\begin{align}
+& \text{(P1)} \quad \text{Min:} \quad \sum_{j \in J} \left( f_j X_j + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j \sqrt{\sum_{i \in I} \mu_i Y_{ij}} + q \sqrt{\sum_{i \in I} \hat{\sigma}_i^2 Y_{ij}} \right) \tag{13} \\
+& \text{s.t.} \quad \sum_{j \in J} Y_{ij} = 1, \qquad \forall i \in I. \tag{8} \\
+& \qquad Y_{ij} \le X_j, \qquad \forall i \in I, \forall j \in J. \tag{9} \\
+& \qquad X_j \in \{0,1\}, \qquad \forall j \in J \tag{10} \\
+& \qquad Y_{ij} \ge 0, \qquad \forall i \in I, \forall j \in J. \tag{14}
+\end{align}
+$$
+
+The proof, given in Appendix A, is based on the fact that for fixed $X_j$,
+problem (P1) is a concave minimization problem defined over a polyhedron, for
+which both local and global solutions yield integer values for the continuous
+variables $Y_{ij}$.
\ No newline at end of file
diff --git a/samples/texts/7149586/page_15.md b/samples/texts/7149586/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..33dd7e403ae97a9390863cb164c7f429f7d00b6e
--- /dev/null
+++ b/samples/texts/7149586/page_15.md
@@ -0,0 +1,5 @@
+Proposition 1 allows us to solve the MINLP model (P1) instead of the INLP model (P0) significantly reducing the computational effort. It is interesting to note that if we set the unit inventory holding cost $h = 0$, the square root terms in the objective function (13) can be removed and problem (P1) reduces to the widely studied “Uncapacitated Facility Location” (UFL) problem,³, ⁴, ³⁴, ³⁵ which is known to exhibit integer solutions for relaxed variables $Y_{ij}$. Furthermore, this problem is also known to be solvable through its LP relaxation for most instances.
+
+(P1) is an MINLP problem with linear constraints and a nonlinear objective function including nonconvexities in the continuous variables. Optimization methods that can be used for obtaining the global optimal solution of problem (P1) include the branch and reduce method,³⁶, ³⁷ the α-BB method,³⁸ the spatial branch and bound search method for bilinear and linear fractional terms³⁹, ⁴⁰ and the outer-approximation method by Kesavan et al.⁴¹ All these methods rely on a branch and bound solution procedure. The difference among these methods lies on the definition of the convex envelopes for computing the lower bound, and on how to perform the branching on the discrete and continuous variables. The global optimization solver that is commercially available is BARON,⁴² which implements a branch-and-reduce solution method.
+
+Since a global optimization algorithm can be expensive, an alternative is to use an MINLP method that relies on the assumption that the functions are convex. Although in this case global optimality cannot be guaranteed, the solutions can be obtained much faster, because a local optimal solution can be found efficiently for a fixed value of the integer variables (optimal or near-optimal). A general review of these MINLP methods is given in Grossmann.⁴³ Methods include the branch and bound method,⁴⁴ Generalized Benders Decomposition,⁴⁵ Outer-Approximation,⁴⁶, ⁴⁷ LP/NLP based branch and bound,⁴⁸ and Extended Cutting Plane Method.⁴⁹ A number of computer codes are available that implement these methods. The program DICOPT⁴⁷ is an MINLP solver that is based on the Outer Approximation
\ No newline at end of file
diff --git a/samples/texts/7149586/page_18.md b/samples/texts/7149586/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..2307d5b11c92ba9bd87ee4e43e4b21dc84e11084
--- /dev/null
+++ b/samples/texts/7149586/page_18.md
@@ -0,0 +1,30 @@
+From (15) and (16), it is easy to see that the lower bounds of $Z1_j$ and $Z2_j$ are both 0, and their upper bounds are $\sqrt{\sum_{i \in I} \mu_i}$ and $\sqrt{\sum_{i \in I} \hat{\sigma}_i^2}$, respectively.
+
+Therefore, the secant of (18) is given by,
+
+$$ -\sqrt{\sum_{i \in I} \mu_i} \cdot Z1_j + \sum_{i \in I} \mu_i Y_{ij} \leq 0, \quad \forall j \in J \quad (21) $$
+
+Similarly, the secant of (19) is given by,
+
+$$ -\sqrt{\sum_{i \in I} \hat{\sigma}_i^2} \cdot Z2_j + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \leq 0, \quad \forall j \in J \quad (22) $$
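A quick sanity check that these secants yield a valid relaxation: $\sqrt{u}$ is concave, so its secant through $(0,0)$ and $(U, \sqrt{U})$, namely $u/\sqrt{U}$, under-estimates $\sqrt{u}$ on all of $[0, U]$. The relaxed constraints therefore admit smaller values of $Z1_j$, $Z2_j$ than (18)-(19), so the relaxation bounds the original problem from below. A short numerical check in Python (the value of $U$, standing in for $\sum_{i \in I} \mu_i$, is illustrative):

```python
import math

U = 37.5  # stands for sum_i mu_i (illustrative value)

# Secant of z = sqrt(u) through (0, 0) and (U, sqrt(U)): z = u / sqrt(U).
# Constraint (21) reads  -sqrt(U) * Z1 + u <= 0,  i.e.  Z1 >= u / sqrt(U).
for k in range(1001):
    u = U * k / 1000.0
    secant = u / math.sqrt(U)
    # The secant under-estimates sqrt on the whole interval [0, U].
    assert secant <= math.sqrt(u) + 1e-12
```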
+
+In this way the convex relaxation of model (P2) can be formulated as Problem (P3):
+
+$$
+\begin{align}
+& \text{(P3)} \quad \text{Min:} \quad \sum_{j \in J} \left( f_j X_j + \sum_{i \in I} \hat{d}_{ij} Y_{ij} + K_j Z1_j + q Z2_j \right) \tag{20} \\
+& \text{s.t.} \quad \sum_{j \in J} Y_{ij} = 1, \qquad \forall i \in I. \tag{8} \\
+& \qquad Y_{ij} \le X_j, \qquad \forall i \in I, \forall j \in J. \tag{9} \\
+& \qquad -\sqrt{\sum_{i \in I} \mu_i} \cdot Z1_j + \sum_{i \in I} \mu_i Y_{ij} \le 0, \qquad \forall j \in J \tag{21} \\
+& \qquad -\sqrt{\sum_{i \in I} \hat{\sigma}_i^2} \cdot Z2_j + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \le 0, \qquad \forall j \in J \tag{22} \\
+& \qquad X_j \in \{0,1\}, \qquad \forall j \in J \tag{10} \\
+& \qquad Y_{ij} \ge 0, \qquad \forall i \in I, \forall j \in J. \tag{14} \\
+& \qquad Z1_j \ge 0, \ Z2_j \ge 0, \qquad \forall j \in J \tag{17}
+\end{align}
+$$
+
+(P3) is a mixed-integer linear programming (MILP) problem which is the convex relaxation of problem (P2). The optimal solution of variables $X_j$ and $Y_{ij}$ of problem (P3) is a feasible solution of problem (P2) due to the linear constraints (8) and (9), and it can provide an initial point before solving (P2) with an MINLP solver. In this way, we can greatly speed up the computation and enhance the likelihood of obtaining a near-optimal solution of model (P2). In summary, the heuristic algorithm for obtaining a good quality solution with reasonable computational effort by using MINLP solvers that rely on convexity assumptions is as follows:
\ No newline at end of file
diff --git a/samples/texts/7149586/page_20.md b/samples/texts/7149586/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..353f8808b7832fe38d4bec50475546df81ed5ccb
--- /dev/null
+++ b/samples/texts/7149586/page_20.md
@@ -0,0 +1,41 @@
+$$
+\begin{align}
+& -Z2_j^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij} \le 0, \quad \forall j \in J \tag{19} \\
+& X_j \in \{0,1\}, \quad \forall j \in J \tag{10} \\
+& Y_{ij} \ge 0, \quad \forall i \in I, \forall j \in J. \tag{14} \\
+& Z1_j \ge 0, Z2_j \ge 0, \quad \forall j \in J \tag{17}
+\end{align}
+$$
+
+where $V$ is the objective function value. Next, we observe that $(\mathbf{P}(\boldsymbol{\lambda}))$ can be
+decomposed into $|J|$ subproblems, one for each candidate DC site $j \in J$. Each
+subproblem is denoted by $(\mathbf{P}_j(\boldsymbol{\lambda}))$ and is shown below for a specific candidate DC
+site $j^*$:
+
+$$
+\begin{equation}
+\begin{split}
+(\mathbf{P}_{j^*}(\boldsymbol{\lambda})) V_{j^*} = \text{Min: } & f_{j^*} X_{j^*} + \sum_{i \in I} (\hat{d}_{ij^*} - \lambda_i) Y_{ij^*} + K_{j^*} Z 1_{j^*} + q Z 2_{j^*}
+\end{split}
+\tag{24}
+\end{equation}
+$$
+
+s.t.
+$$
+\begin{array}{ll}
+Y_{ij^*} \le X_{j^*}, & \forall i \in I \\[0.5em]
+-Z1_{j^*}^2 + \sum_{i \in I} \mu_i Y_{ij^*} \le 0 \\[0.5em]
+-Z2_{j^*}^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij^*} \le 0 \\[0.5em]
+Y_{ij^*} \ge 0, & \forall i \in I \\[0.5em]
+X_{j^*} \in \{0,1\} \\[0.5em]
+Z1_{j^*} \ge 0, \ Z2_{j^*} \ge 0
+\end{array}
+$$
+
+Subproblem $(\mathbf{P}_{j^*}(\boldsymbol{\lambda}))$ has one binary variable ($X_{j^*}$), $|I|+2$ continuous variables ($Z1_{j^*}$, $Z2_{j^*}$, $Y_{ij^*}$) and $2|I|+2$ constraints. Because we have $|J|$ subproblems $(\mathbf{P}_j(\boldsymbol{\lambda}))$, one for each candidate DC site $j \in J$, we call this a "spatial" decomposition scheme, i.e. a decomposition by the spatial structure of the supply chain network.¹²,²⁶ Let $V_j$ denote the globally optimal objective function value of subproblem $(\mathbf{P}_j(\boldsymbol{\lambda}))$. As a result of the decomposition procedure, the globally optimal objective function value of $(\mathbf{P}(\boldsymbol{\lambda}))$, which corresponds to a lower bound of problem (P2), can be calculated by:
\ No newline at end of file
diff --git a/samples/texts/7149586/page_22.md b/samples/texts/7149586/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..b2ea80ca4b9036a07925a10fa41dec211ddbe937
--- /dev/null
+++ b/samples/texts/7149586/page_22.md
@@ -0,0 +1,32 @@
+of solving problem $\mathbf{P}_j(\boldsymbol{\lambda})$ for each candidate DC location $j$.
+
+First, consider the problem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$, which is actually a special case of $(\mathbf{P}_j(\boldsymbol{\lambda}))$
+when $X_j = 1$. The formulation for a specific $j^*$ is given as:
+
+$$
+\begin{equation}
+\begin{array}{@{}l@{\quad}l@{}}
+(\mathbf{PR}_{j^*}(\boldsymbol{\lambda})) & \hat{V}_{j^*} = \text{Min: } f_{j^*} + \sum_{i \in I} (\hat{d}_{ij^*} - \lambda_i) Y_{ij^*} + K_{j^*} Z 1_{j^*} + qZ 2_{j^*} \quad (27) \\
+& \text{s.t. } Y_{ij^*} \le 1, \quad \forall i \in I \\
+& -Z 1_{j^*}^2 + \sum_{i \in I} \mu_i Y_{ij^*} \le 0 \\
+& -Z 2_{j^*}^2 + \sum_{i \in I} \hat{\sigma}_i^2 Y_{ij^*} \le 0 \\
+& Y_{ij^*} \ge 0, \quad \forall i \in I \\
+& Z 1_{j^*} \ge 0, \ Z 2_{j^*} \ge 0
+\end{array}
+\end{equation}
+$$
+
+where $\hat{V}_{j^*}$ denotes the globally optimal objective function value of problem $(\mathbf{PR}_{j^*}(\boldsymbol{\lambda}))$.
+
+Note that the $X_j$ variable does not appear in subproblem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$. Therefore,
+the minimum objective function value of subproblem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$ is equal to the
+minimum objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ when $X_j = 1$. However, it is not
+always the same as the globally minimum objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$,
+because $(\mathbf{P}_j(\boldsymbol{\lambda}))$ may attain its global optimum at $X_j = 0$.
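For small instances, $\hat{V}_{j^*}$ itself can be computed by brute force: with $Z1_{j^*}$, $Z2_{j^*}$ at their optimal values $\sqrt{\cdot}$, the objective of $(\mathbf{PR}_{j^*}(\boldsymbol{\lambda}))$ is concave in $Y$ over the box $[0,1]^{|I|}$, so it suffices to enumerate the $2^{|I|}$ vertex assignments; one then takes $V_{j^*} = \min(0, \hat{V}_{j^*})$, since $X_{j^*} = 0$ always gives objective value 0. A Python sketch with made-up data (four customers; $f$, $\hat d$, $\lambda$, $\mu$, $\hat\sigma^2$, $K$, $q$ are all illustrative):

```python
import itertools
import math

# Illustrative data for one candidate DC j*: 4 customers.
f = 1.0
dhat = [1.0, 0.2, 3.0, 0.5]    # transport costs d^_ij*
lam = [5.0, 6.0, 4.0, 3.0]     # Lagrange multipliers lambda_i
mu = [4.0, 9.0, 1.0, 2.0]      # mean demands
sig2 = [2.0, 3.0, 0.5, 1.0]    # demand variances
K, q = 1.5, 2.0

def obj(Y):
    """Objective (27) with Z1, Z2 substituted by their optimal sqrt values."""
    lin = sum((dhat[i] - lam[i]) * Y[i] for i in range(4))
    return (f + lin
            + K * math.sqrt(sum(mu[i] * Y[i] for i in range(4)))
            + q * math.sqrt(sum(sig2[i] * Y[i] for i in range(4))))

# Concave objective over the box [0,1]^4: the minimum is at a 0-1 vertex.
V_hat = min(obj(list(Y)) for Y in itertools.product([0.0, 1.0], repeat=4))
V_j = min(0.0, V_hat)  # X_j* = 0 always yields objective value 0
```

For this data $\hat{V}_{j^*}$ is negative, so opening the DC pays off under the given multipliers and $V_{j^*} = \hat{V}_{j^*}$.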
+
+For each fixed value of the Lagrange multiplier $\lambda_i$, if the globally minimum
+objective function value of the Lagrange subproblem $(\mathbf{PR}_j(\boldsymbol{\lambda}))$ is negative, it means
+that when $X_j = 1$ the minimum objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ is
+negative. Because we know when $X_j = 0$ the objective function value of problem
+$(\mathbf{P}_j(\boldsymbol{\lambda}))$ is 0, it follows that under this value of the Lagrange multiplier, the globally
+minimum objective function value of problem $(\mathbf{P}_j(\boldsymbol{\lambda}))$ is the same as the minimum
\ No newline at end of file
diff --git a/samples/texts/7740975/page_20.md b/samples/texts/7740975/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..c75017f6b1f8137e9cd078e2debe9b2cf8814d1b
--- /dev/null
+++ b/samples/texts/7740975/page_20.md
@@ -0,0 +1,17 @@
+We give an implementation of Algorithm 3 for *sparse* = false in Appendix B. For brevity, we skip the *sparse* = true case. We also tested our implementation on several parameter sets:⁷
+
+1. Considering an LWE instance with $n = 100$ and $q \approx 2^{23}$, $\alpha = 8/q$ and $h = 20$, we first BKZ-50 reduced the basis **L** for $c = 16$. This produced a short vector **w** such that $|\langle \mathbf{w}, \mathbf{c} \rangle| \approx 2^{15.3}$. Then, running LLL 256 times, we produced short vectors such that $E[|\langle \mathbf{w}_i, \mathbf{c} \rangle|] = 2^{15.7}$ and standard deviation $2^{16.6}$.
+
+2. Considering an LWE instance with $n = 140$ and $q \approx 2^{40}$, $\alpha = 8/q$ and $h = 32$, we first BKZ-70 reduced the basis **L** for $c = 1$. This took 64 hours and produced a short vector **w** such that $|\langle \mathbf{w}, \mathbf{c} \rangle| \approx 2^{23.7}$, with $E[|\langle \mathbf{w}, \mathbf{c} \rangle|] \approx 2^{25.5}$ conditioned on $|\mathbf{w}|$. Then, running LLL 140 times (each run taking about 50 seconds on average), we produced short vectors such that $E[|\langle \mathbf{w}_i, \mathbf{c} \rangle|] = 2^{26.0}$ and standard deviation $2^{26.4}$ for $\langle \mathbf{w}_i, \mathbf{c} \rangle$.
+
+3. Considering the same LWE instance with $n = 140$ and $q \approx 2^{40}$, $\alpha = 8/q$ and $h = 32$, we first BKZ-70 reduced the basis **L** for $c = 16$. This took 65 hours and produced a short vector **w** such that $|\langle \mathbf{w}, \mathbf{c} \rangle| \approx 2^{24.7}$ after scaling by c, cf. $E[|\langle \mathbf{w}, \mathbf{c} \rangle|] \approx 2^{24.8}$. Then, running LLL 140 times (each run taking about 50 seconds on average), we produced short vectors such that $E[|\langle \mathbf{w}_i, \mathbf{c} \rangle|] = 2^{25.5}$ and standard deviation $2^{25.9}$ for $\langle \mathbf{w}_i, \mathbf{c} \rangle$.
+
+4. Considering again the same LWE instance with $n = 140$ and $q \approx 2^{40}$, $\alpha = 8/q$ and $h = 32$, we first BKZ-70 reduced the basis **L** for $c = 1$. This took 30 hours and produced a short vector **w** such that $|\langle \mathbf{w}, \mathbf{c} \rangle| \approx 2^{25.2}$, cf. $E[|\langle \mathbf{w}, \mathbf{c} \rangle|] \approx 2^{25.6}$. Then, running LLL 1024 times (each run taking about 50 seconds on average), we produced 1016 short vectors such that $E[|\langle \mathbf{w}_i, \mathbf{c} \rangle|] = 2^{25.8}$ and standard deviation $2^{26.1}$ for $\langle \mathbf{w}_i, \mathbf{c} \rangle$.
+
+5. Considering an LWE instance with $n = 180$ and $q \approx 2^{40}$, $\alpha = 8/q$ and $h = 48$, we first BKZ-70 reduced the basis **L** for $c = 8$. This took 198 hours⁸ and produced a short vector **w** such that $|\langle \mathbf{w}, \mathbf{c} \rangle| \approx 2^{26.7}$, cf. $E[|\langle \mathbf{w}, \mathbf{c} \rangle|] \approx 2^{25.9}$. Then, running LLL 180 times (each run taking about 500 seconds on average), we produced short vectors such that $E[|\langle \mathbf{w}_i, \mathbf{c} \rangle|] = 2^{26.6}$ and standard deviation $2^{26.9}$ for $\langle \mathbf{w}_i, \mathbf{c} \rangle$.
+
+All our experiments match our prediction bounding the growth of the norms of our vectors by a factor of two. Note, however, that in the fourth experiment 1 in 128 vectors found with LLL was a duplicate of a previously discovered vector, indicating that re-randomisation is not perfect. While the effect of this loss on the running time of the overall algorithm is small, it highlights that further research is required on the interplay of re-randomisation and lattice reduction.
+
+⁷ All experiments on “strombenzin” with Intel(R) Xeon(R) CPU E5-2667 v2 @ 3.30GHz.
+
+⁸ We ran 49 BKZ tours until fplll’s auto abort triggered. After 16 tours the norm of the then shortest vector was by a factor 1.266 larger than the norm of the shortest vector found after 49 tours.
\ No newline at end of file
diff --git a/samples/texts/7740975/page_21.md b/samples/texts/7740975/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..8d6b143bc76e806d1b2d145e392ec2ed986786e6
--- /dev/null
+++ b/samples/texts/7740975/page_21.md
@@ -0,0 +1,18 @@
+Applying Algorithm 3 to parameter choices from HElib and SEAL, we arrive at
+the estimates in Table 1. These estimates were produced using the Sage [S+15]
+code available at http://bitbucket.org/malb/lwe-estimator which optimises
+the parameters $c$, $\ell$, $k$, $\beta$ to minimise the overall cost.
+
+For the HElib parameters in Table 1 we chose the sparse strategy. Here, amortising
+costs as in Section 3 did not lead to a significant improvement, which is why we
+did not use it in these cases. All considered lattices have dimension < 2n. Hence,
+one Ring-LWE sample is sufficient to mount these attacks. Note that this is fewer
+samples than the dual attack as described in [GHS12a] would require (two samples).
+
+For the SEAL parameter choices in Table 1, dimension $n = 1024$ requires two
+Ring-LWE samples, larger dimensions only require one sample. Here, amortising
+costs as in Algorithm 1 does lead to a modest improvement and is hence enabled.
+
+Finally, we note that reducing $q$ to $\approx 2^{34}$ resp. $\approx 2^{560}$ leads to an estimated cost of 80 bits for $n = 1024$ resp. $n = 16384$ for $s \leftarrow_S B_{64}^-$. For $s \leftarrow_S B^-$, $q \approx 2^{40}$ resp. $q \approx 2^{660}$ leads to an estimated cost of 80 bits under the techniques described here. In both cases, we assume $\sigma \approx 3.2$.
+
+**Acknowledgements.** We thank Kenny Paterson and Adeline Roux-Langlois for helpful comments on an earlier draft of this work. We thank Hao Chen for reporting an error in an earlier version of this work.
\ No newline at end of file
diff --git a/samples/texts/7740975/page_23.md b/samples/texts/7740975/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..aafb4f4c96a82571e8c26a696e5dee52473e271f
--- /dev/null
+++ b/samples/texts/7740975/page_23.md
@@ -0,0 +1,31 @@
+[BDGL16] Anja Becker, Léo Ducas, Nicolas Gama, and Thijs Laarhoven. New directions in nearest neighbor searching with applications to lattice sieving. In Robert Krauthgamer, editor, 27th SODA, pages 10–24. ACM-SIAM, January 2016.
+
+[BG14] Shi Bai and Steven D. Galbraith. Lattice decoding attacks on binary LWE. In Willy Susilo and Yi Mu, editors, ACISP 14, volume 8544 of LNCS, pages 322–337. Springer, Heidelberg, July 2014.
+
+[BGPW16] Johannes A. Buchmann, Florian Göpfert, Rachel Player, and Thomas Wunderer. On the hardness of LWE with binary error: Revisiting the hybrid lattice-reduction and meet-in-the-middle attack. In David Pointcheval, Abderrahmane Nitaj, and Tajjeeddine Rachidi, editors, AFRICACRYPT 16, volume 9646 of LNCS, pages 24–43. Springer, Heidelberg, April 2016.
+
+[BGV12] Zvika Brakerski, Craig Gentry, and Vinod Vaikuntanathan. (Leveled) fully homomorphic encryption without bootstrapping. In Shafi Goldwasser, editor, ITCS 2012, pages 309–325. ACM, January 2012.
+
+[BKW00] Avrim Blum, Adam Kalai, and Hal Wasserman. Noise-tolerant learning, the parity problem, and the statistical query model. In 32nd ACM STOC, pages 435–440. ACM Press, May 2000.
+
+[BLLN13] Joppe W. Bos, Kristin Lauter, Jake Loftus, and Michael Naehrig. Improved security for a ring-based fully homomorphic encryption scheme. In Martijn Stam, editor, 14th IMA International Conference on Cryptography and Coding, volume 8308 of LNCS, pages 45–64. Springer, Heidelberg, December 2013.
+
+[BLP+13] Zvika Brakerski, Adeline Langlois, Chris Peikert, Oded Regev, and Damien Stehlé. Classical hardness of learning with errors. In Dan Boneh, Tim Roughgarden, and Joan Feigenbaum, editors, 45th ACM STOC, pages 575–584. ACM Press, June 2013.
+
+[Bra12] Zvika Brakerski. Fully homomorphic encryption without modulus switching from classical GapSVP. In Safavi-Naini and Canetti [SNC12], pages 868–886.
+
+[BV11] Zvika Brakerski and Vinod Vaikuntanathan. Efficient fully homomorphic encryption from (standard) LWE. In Rafail Ostrovsky, editor, 52nd FOCS, pages 97–106. IEEE Computer Society Press, October 2011.
+
+[Che13] Yuanmi Chen. Réduction de réseau et sécurité concrète du chiffrement complètement homomorphe. PhD thesis, Paris 7, 2013.
+
+[CKH+16] Jung Hee Cheon, Jinsu Kim, Kyoo Hyung Han, Yongha Son, and Changmin Lee. Practical post-quantum public key cryptosystem based on LWE. In Seokhie Hong and Jong Hwan Park, editors, 19th Annual International Conference on Information Security and Cryptology (ICISC), Lecture Notes in Computer Science. Springer, 2016.
+
+[CN11] Yuanmi Chen and Phong Q. Nguyen. BKZ 2.0: Better lattice security estimates. In Dong Hoon Lee and Xiaoyun Wang, editors, ASIACRYPT 2011, volume 7073 of LNCS, pages 1–20. Springer, Heidelberg, December 2011.
+
+[CN12] Yuanmi Chen and Phong Q. Nguyen. BKZ 2.0: Better lattice security estimates (full version). http://www.di.ens.fr/~ychen/research/Full_BKZ.pdf, 2012.
+
+[CS15] Jung Hee Cheon and Damien Stehlé. Fully homomorphic encryption over the integers revisited. In Oswald and Fischlin [OF15], pages 513–536.
+
+[CS16] Ana Costache and Nigel P. Smart. Which ring based somewhat homomorphic encryption scheme is best? In Kazue Sako, editor, CT-RSA 2016, volume 9610 of LNCS, pages 325–340. Springer, Heidelberg, February / March 2016.
+
+[DTV15] Alexandre Duc, Florian Tramèr, and Serge Vaudenay. Better algorithms for LWE and LWR. In Oswald and Fischlin [OF15], pages 173–202.
\ No newline at end of file
diff --git a/samples/texts/7740975/page_24.md b/samples/texts/7740975/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..e028260e2a2085b7ec19fe5f7dd35fbbd5f8dc63
--- /dev/null
+++ b/samples/texts/7740975/page_24.md
@@ -0,0 +1,35 @@
+[DXL12] Jintai Ding, Xiang Xie, and Xiaodong Lin. A simple provably secure key exchange scheme based on the learning with errors problem. Cryptology ePrint Archive, Report 2012/688, http://eprint.iacr.org/2012/688.
+
+[FPL16] The FPLLL development team. FPLLL 5.0, a lattice reduction library, 2016. Available at https://github.com/fplll/fplll.
+
+[FV12] Junfeng Fan and Frederik Vercauteren. Somewhat practical fully homomorphic encryption. Cryptology ePrint Archive, Report 2012/144, 2012. http://eprint.iacr.org/2012/144.
+
+[GHS12a] Craig Gentry, Shai Halevi, and Nigel P. Smart. Homomorphic evaluation of the AES circuit. In Safavi-Naini and Canetti [SNC12], pages 850–867.
+
+[GHS12b] Craig Gentry, Shai Halevi, and Nigel P. Smart. Homomorphic evaluation of the AES circuit. Cryptology ePrint Archive, Report 2012/099, 2012. http://eprint.iacr.org/2012/099.
+
+[GJS15] Qian Guo, Thomas Johansson, and Paul Stankovski. Coded-BKW: Solving LWE using lattice codes. In Gennaro and Robshaw [GR15], pages 23–42.
+
+[GR15] Rosario Gennaro and Matthew J. B. Robshaw, editors. *CRYPTO 2015, Part I*, volume 9215 of LNCS. Springer, Heidelberg, August 2015.
+
+[GSW13] Craig Gentry, Amit Sahai, and Brent Waters. Homomorphic encryption from learning with errors: Conceptually-simpler, asymptotically-faster, attribute-based. In Ran Canetti and Juan A. Garay, editors, *CRYPTO 2013, Part I*, volume 8042 of LNCS, pages 75–92. Springer, Heidelberg, August 2013.
+
+[HG07] Nick Howgrave-Graham. A hybrid lattice-reduction and meet-in-the-middle attack against NTRU. In Alfred Menezes, editor, *CRYPTO 2007*, volume 4622 of LNCS, pages 150–169. Springer, Heidelberg, August 2007.
+
+[HPS11] Guillaume Hanrot, Xavier Pujol, and Damien Stehlé. Analyzing blockwise lattice algorithms using dynamical systems. In Phillip Rogaway, editor, *CRYPTO 2011*, volume 6841 of LNCS, pages 447–464. Springer, Heidelberg, August 2011.
+
+[HS14] Shai Halevi and Victor Shoup. Algorithms in HElib. In Juan A. Garay and Rosario Gennaro, editors, *CRYPTO 2014, Part I*, volume 8616 of LNCS, pages 554–571. Springer, Heidelberg, August 2014.
+
+[KF15] Paul Kirchner and Pierre-Alain Fouque. An improved BKW algorithm for LWE with applications to cryptography and lattices. In Gennaro and Robshaw [GR15], pages 43–62.
+
+[KF16] Paul Kirchner and Pierre-Alain Fouque. Comparison between subfield and straightforward attacks on NTRU. *IACR Cryptology ePrint Archive*, 2016:717, 2016.
+
+[KL15] Miran Kim and Kristin Lauter. Private genome analysis through homomorphic encryption. *BMC Medical Informatics and Decision Making*, 15(5):1–12, 2015.
+
+[Laa15] Thijs Laarhoven. Sieving for shortest vectors in lattices using angular locality-sensitive hashing. In Gennaro and Robshaw [GR15], pages 3–22.
+
+[LCP16] Kim Laine, Hao Chen, and Rachel Player. Simple Encrypted Arithmetic Library - SEAL (v2.1). Technical report, Microsoft Research, September 2016. MSR-TR-2016-68.
+
+[LN13] Mingjie Liu and Phong Q. Nguyen. Solving BDD by enumeration: An update. In Ed Dawson, editor, *CT-RSA 2013*, volume 7779 of LNCS, pages 293–309. Springer, Heidelberg, February / March 2013.
+
+[LN14] Tancrède Lepoint and Michael Naehrig. A comparison of the homomorphic encryption schemes FV and YASHE. In David Pointcheval and Damien
\ No newline at end of file
diff --git a/samples/texts/7740975/page_25.md b/samples/texts/7740975/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..46d13511292efa7bfdbb8754d1a71ede2e1d0352
--- /dev/null
+++ b/samples/texts/7740975/page_25.md
@@ -0,0 +1,35 @@
+Vergnaud, editors, *AFRICACRYPT 14*, volume 8469 of LNCS, pages 318-335. Springer, Heidelberg, May 2014.
+
+[LP11] Richard Lindner and Chris Peikert. Better key sizes (and attacks) for LWE-based encryption. In Aggelos Kiayias, editor, *CT-RSA 2011*, volume 6558 of LNCS, pages 319-339. Springer, Heidelberg, February 2011.
+
+[LP16] Kim Laine and Rachel Player. Simple Encrypted Arithmetic Library - SEAL (v2.0). Technical report, Microsoft Research, September 2016. MSR-TR-2016-52.
+
+[LPR10] Vadim Lyubashevsky, Chris Peikert, and Oded Regev. On ideal lattices and learning with errors over rings. In Henri Gilbert, editor, *EUROCRYPT 2010*, volume 6110 of LNCS, pages 1-23. Springer, Heidelberg, May 2010.
+
+[LPR13] Vadim Lyubashevsky, Chris Peikert, and Oded Regev. A toolkit for ring-LWE cryptography. Cryptology ePrint Archive, Report 2013/293, 2013. http://eprint.iacr.org/2013/293.
+
+[LTV12] Adriana López-Alt, Eran Tromer, and Vinod Vaikuntanathan. On-the-fly multiparty computation on the cloud via multikey fully homomorphic encryption. In Howard J. Karloff and Toniann Pitassi, editors, *44th ACM STOC*, pages 1219-1234. ACM Press, May 2012.
+
+[MR09] Daniele Micciancio and Oded Regev. Lattice-based cryptography. In Daniel J. Bernstein, Johannes Buchmann, and Erik Dahmen, editors, *Post-Quantum Cryptography*, pages 147-191. Springer, Berlin, Heidelberg, New York, 2009.
+
+[OF15] Elisabeth Oswald and Marc Fischlin, editors. *EUROCRYPT 2015, Part I*, volume 9056 of LNCS. Springer, Heidelberg, April 2015.
+
+[Pei09] Chris Peikert. Some recent progress in lattice-based cryptography. In Omer Reingold, editor, *TCC 2009*, volume 5444 of LNCS, page 72. Springer, Heidelberg, March 2009.
+
+[Reg05] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. In Harold N. Gabow and Ronald Fagin, editors, *37th ACM STOC*, pages 84-93. ACM Press, May 2005.
+
+[Reg09] Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. *J. ACM*, 56(6), 2009.
+
+[S+15] William Stein et al. *Sage Mathematics Software Version 7.1*. The Sage Development Team, 2015. Available at http://www.sagemath.org.
+
+[Sch03] Claus-Peter Schnorr. Lattice reduction by random sampling and birthday methods. In Helmut Alt and Michel Habib, editors, *STACS 2003, 20th Annual Symposium on Theoretical Aspects of Computer Science, Berlin, Germany, February 27 - March 1, 2003, Proceedings*, volume 2607 of Lecture Notes in Computer Science, pages 145-156. Springer, 2003.
+
+[Sho01] Victor Shoup. NTL: A library for doing number theory. http://www.shoup.net/ntl/, 2001.
+
+[SL12] Jayalal Sarma and Princy Lunawat. IITM-CS6840: Advanced Complexity Theory — Lecture 11: Amplification Lemma. http://www.cse.iitm.ac.in/~jayalal/teaching/CS6840/2012/lecture11.pdf, 2012.
+
+[SNC12] Reihaneh Safavi-Naini and Ran Canetti, editors. *CRYPTO 2012*, volume 7417 of LNCS. Springer, Heidelberg, August 2012.
+
+[Wal15] Michael Walter. Lattice point enumeration on block reduced bases. In Anja Lehmann and Stefan Wolf, editors, *ICITS 15*, volume 9063 of LNCS, pages 269-282. Springer, Heidelberg, May 2015.
+
+[Wun16] Thomas Wunderer. Revisiting the hybrid attack: Improved analysis and refined security estimates. Cryptology ePrint Archive, Report 2016/733, 2016. http://eprint.iacr.org/2016/733.
\ No newline at end of file
diff --git a/samples/texts/7740975/page_26.md b/samples/texts/7740975/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..efacd3f4f91420ea2deeac5b3b60792cc565a79b
--- /dev/null
+++ b/samples/texts/7740975/page_26.md
@@ -0,0 +1,76 @@
+## A Rerandomisation
+
+**Data:** $n \times m$ matrix $\mathbf{L}$
+
+**Data:** density parameter $d$, default $d = 3$
+
+**Result:** $\mathbf{U} \cdot \mathbf{L}$ where $\mathbf{U}$ is a sparse, unimodular matrix.
+
+for $i \leftarrow 0$ to $4 \cdot n - 1$ do
+ $a \leftarrow \$ \{0, n-1\}$;
+ $b \leftarrow \$ \{0, n-1\} \setminus \{a\}$;
+ $\mathbf{L}_{(b)}, \mathbf{L}_{(a)} \leftarrow \mathbf{L}_{(a)}, \mathbf{L}_{(b)}$;
+
+end
+
+for $a \leftarrow 0$ to $n-2$ do
+ for $i \leftarrow 0$ to $d-1$ do
+ $b \leftarrow \$ \{a+1, n-1\}$;
+ $s \leftarrow \$ \{0, 1\}$;
+ $\mathbf{L}_{(a)} \leftarrow \mathbf{L}_{(a)} + (-1)^s \cdot \mathbf{L}_{(b)}$;
+
+end
+
+end
+
+return $\mathbf{L}$;
+
+**Algorithm 4:** Rerandomisation strategy in the fplll library [FPL16].
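For readers without fplll at hand, Algorithm 4 transcribes directly into Python. The sketch below operates on a list-of-rows integer matrix and mimics the strategy above; it is not fplll's actual implementation.

```python
import random

def rerandomise(L, d=3):
    """Apply a sparse unimodular transformation U to L (list of integer rows), in place."""
    n = len(L)
    # Phase 1: random row swaps (the permutation part of U).
    for _ in range(4 * n):
        a = random.randrange(n)
        b = random.choice([i for i in range(n) if i != a])
        L[a], L[b] = L[b], L[a]
    # Phase 2: add +/- one later row into each earlier row, d times.
    for a in range(n - 1):
        for _ in range(d):
            b = random.randrange(a + 1, n)
            s = random.choice([1, -1])
            L[a] = [x + s * y for x, y in zip(L[a], L[b])]
    return L
```

Both phases are elementary unimodular row operations, so the lattice generated by the rows is unchanged and the implicit transformation $\mathbf{U}$ has determinant $\pm 1$.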
+
+## B Implementation
+
+```python
+# -*- coding: utf-8 -*-
+
+
+from sage.all import shuffle, randint, ceil, next_prime, log, cputime, mean, variance, set_random_seed, sqrt
+from copy import copy
+from sage.all import GF, ZZ
+from sage.all import random_matrix, random_vector, vector, matrix, identity_matrix
+from sage.stats.distributions.discrete_gaussian_integer import DiscreteGaussianDistributionIntegerSampler \
+ as DiscreteGaussian
+from estimator.estimator import preprocess_params, stddevf
+
+def gen_fhe_instance(n, q, alpha=None, h=None, m=None, seed=None):
+ """
+ Generate FHE-style LWE instance
+
+ :param n: dimension
+ :param q: modulus
+ :param alpha: noise rate (default: 8/q)
+ :param h: hamming weight of the secret (default: 2/3n)
+ :param m: number of samples (default: n)
+
+ """
+ if seed is not None:
+ set_random_seed(seed)
+
+ q = next_prime(ceil(q)-1, proof=False)
+ if alpha is None:
+ alpha = ZZ(8)/q
+
+ n, alpha, q = preprocess_params(n, alpha, q)
+
+ stddev = stddevf(alpha*q)
+
+ if m is None:
+ m = n
+
+ K = GF(q, proof=False)
+ A = random_matrix(K, m, n)
+
+ if h is None:
+ s = random_vector(ZZ, n, x=-1, y=1)
+ else:
+        S = [-1, 1]
+```
\ No newline at end of file
diff --git a/samples/texts/7740975/page_27.md b/samples/texts/7740975/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..7341fea4ea3c9bd85bbc6a239d80b047c1090348
--- /dev/null
+++ b/samples/texts/7740975/page_27.md
@@ -0,0 +1,75 @@
+```python
+        s = [S[randint(0, 1)] for i in range(h)]
+        s += [0 for _ in range(n-h)]
+        shuffle(s)
+        s = vector(ZZ, s)
+    c = A*s
+
+    D = DiscreteGaussian(stddev)
+
+    for i in range(m):
+        c[i] += D()
+
+    return A, c
+
+
+def dual_instance0(A):
+    """
+    Generate dual attack basis.
+
+    :param A: LWE matrix A
+
+    """
+    q = A.base_ring().order()
+    B0 = A.left_kernel().basis_matrix().change_ring(ZZ)
+    m = B0.ncols()
+    n = B0.nrows()
+    r = m-n
+    B1 = matrix(ZZ, r, n).augment(q*identity_matrix(ZZ, r))
+    B = B0.stack(B1)
+    return B
+
+
+def dual_instance1(A, scale=1):
+    """
+    Generate dual attack basis for LWE normal form.
+
+    :param A: LWE matrix A
+
+    """
+    q = A.base_ring().order()
+    n = A.ncols()
+    B = A.matrix_from_rows(range(0, n)).inverse().change_ring(ZZ)
+    L = identity_matrix(ZZ, n).augment(B)
+    L = L.stack(matrix(ZZ, n, n).augment(q*identity_matrix(ZZ, n)))
+
+    for i in range(0, 2*n):
+        for j in range(n, 2*n):
+            L[i, j] = scale*L[i, j]
+
+    return L
+
+
+def balanced_lift(e):
+    """
+    Lift e mod q to an integer such that the result is between -q/2 and q/2.
+
+    :param e: a value or vector mod q
+
+    """
+    from sage.rings.finite_rings.integer_mod import is_IntegerMod
+
+    q = e.base_ring().order()
+    if is_IntegerMod(e):
+        e = ZZ(e)
+        if e > q//2:
+            e -= q
+        return e
+    else:
+        return vector(ZZ, [balanced_lift(ee) for ee in e])
+
+
+def apply_short1(y, A, c, scale=1):
+    """
+    Compute `y*A`, `y*c` where `y` is a vector in the integer row span of
+    ``dual_instance(A)``.
+
+    :param y: (short) vector in scaled dual lattice
+    :param A: LWE matrix
+    :param c: LWE vector
+
+    """
+    m = A.nrows()
+    y = vector(ZZ, 1/ZZ(scale) * y[-m:])
+    a = balanced_lift(y*A)
+    e = balanced_lift(y*c)
+    return a, e
+
+
+def log_mean(X):
+    return log(mean([abs(x) for x in X]), 2)
+```
+
diff --git a/samples/texts/7740975/page_28.md b/samples/texts/7740975/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a50f33dba5385805aa792c9cc61f1d390bcf8ab
--- /dev/null
+++ b/samples/texts/7740975/page_28.md
@@ -0,0 +1,82 @@
+```python
+def log_var(X):
+    return log(variance(X).sqrt(), 2)
+
+
+def silke(A, c, beta, h, m=None, scale=1, float_type="double"):
+    """
+    :param A:          LWE matrix
+    :param c:          LWE vector
+    :param beta:       BKZ block size
+    :param h:          Hamming weight of the secret
+    :param m:          number of samples to consider
+    :param scale:      scale rhs of lattice by this factor
+
+    """
+    from fpylll import BKZ, IntegerMatrix, LLL, GSO
+    from fpylll.algorithms.bkz2 import BKZReduction as BKZ2
+
+    if m is None:
+        m = A.nrows()
+
+    L = dual_instance1(A, scale=scale)
+    L = IntegerMatrix.from_matrix(L)
+    L = LLL.reduction(L, flags=LLL.VERBOSE)
+    M = GSO.Mat(L, float_type=float_type)
+    bkz = BKZ2(M)
+    t = 0.0
+    param = BKZ.Param(block_size=beta,
+                      strategies=BKZ.DEFAULT_STRATEGY,
+                      auto_abort=True,
+                      max_loops=16,
+                      flags=BKZ.VERBOSE|BKZ.AUTO_ABORT|BKZ.MAX_LOOPS)
+    bkz(param)
+    t += bkz.stats.total_time
+
+    H = copy(L)
+
+    import pickle
+    pickle.dump(L, open("L-%d-%d.sobj"%(L.nrows, beta), "wb"))
+
+    E = []
+    Y = set()
+    V = set()
+    y_i = vector(ZZ, tuple(L[0]))
+    Y.add(tuple(y_i))
+    E.append(apply_short1(y_i, A, c, scale=scale)[1])
+
+    v = L[0].norm()
+    v_ = v/sqrt(L.ncols)
+    v_r = 3.2*sqrt(L.ncols - A.ncols())*v_/scale
+    v_l = sqrt(h)*v_
+
+    fmt = u'{"t": %5.1f, "log(sigma)": %5.1f, "log(|y|)": %5.1f, "log(E[sigma])": %5.1f}'
+
+    print
+    print fmt%(t,
+               log(abs(E[-1]), 2),
+               log(L[0].norm(), 2),
+               log(sqrt(v_r**2 + v_l**2), 2))
+    print
+
+    for i in range(m):
+        t = cputime()
+        M = GSO.Mat(L, float_type=float_type)
+        bkz = BKZ2(M)
+        t = cputime()
+        bkz.randomize_block(0, L.nrows, stats=None, density=3)
+        LLL.reduction(L)
+        y_i = vector(ZZ, tuple(L[0]))
+        l_n = L[0].norm()
+        if L[0].norm() > H[0].norm():
+            L = copy(H)
+        t = cputime(t)
+
+        Y.add(tuple(y_i))
+        V.add(y_i.norm())
+        E.append(apply_short1(y_i, A, c, scale=scale)[1])
+        if len(V) >= 2:
+            fmt = u'{"i": %4d, "t": %5.1f, "log(|e_i|)": %5.1f, "log(|y_i|)": %5.1f,'
+            fmt += u' "log(sigma)": (%5.1f,%5.1f), "log(|y|)": (%5.1f,%5.1f), "|Y|": %5d}'
+            print fmt%(i+2, t, log(abs(E[-1]), 2), log(l_n, 2),
+                       log_mean(E), log_var(E), log_mean(V), log_var(V), len(Y))
+
+    return E
+```
\ No newline at end of file
diff --git a/samples/texts/7740975/page_8.md b/samples/texts/7740975/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..d736541debadd1b928c7f06f6a74e39bfa0d9ab9
--- /dev/null
+++ b/samples/texts/7740975/page_8.md
@@ -0,0 +1,23 @@
+To produce a short enough **y**, we may call a lattice-reduction algorithm, in particular the BKZ algorithm with block size $β$. After performing BKZ-$β$ reduction, the first vector in the transformed lattice basis has norm approximately $\delta_0^m \cdot \det(\Lambda)^{1/m}$, where $\det(\Lambda)$ is the determinant of the lattice under consideration, $m$ its dimension, and the root-Hermite factor $\delta_0$ a constant depending on the block size parameter $β$. Increasing the parameter $β$ leads to a smaller $\delta_0$ but also to an increase in run-time; the run-time grows at least exponentially in $β$ (see below).
+
+In our case, the expression above simplifies to $\|\mathbf{y}\| \approx \delta_0^m \cdot q^{n/m}$ whp, where $n$ is the LWE dimension and $m$ is the number of samples we consider. The minimum of this expression is attained at $m = \sqrt{\frac{n \log q}{\log \delta_0}}$ [MR09].
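As a sanity check on this trade-off, the following sketch (our own illustration, not code from the paper) evaluates $\log_2$ of $\delta_0^m \cdot q^{n/m}$ and the optimising $m$:

```python
import math

def log2_norm_y(n, q, log2_delta0, m):
    # log2 of delta_0^m * q^(n/m): the predicted norm of the shortest
    # basis vector after reduction, for m samples.
    return m * log2_delta0 + (n / m) * math.log(q, 2)

def optimal_m(n, q, log2_delta0):
    # The minimiser m = sqrt(n * log q / log delta_0) from the text.
    return math.sqrt(n * math.log(q, 2) / log2_delta0)
```

At the optimal $m$ the expression equals $2\sqrt{n \log q \log \delta_0}$, which is where the asymptotic estimates later in this section come from.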
+
+Explicitly, we are given a matrix $\mathbf{A} \in \mathbb{Z}_q^{m \times n}$, construct a basis $\mathbf{Y}$ for its left kernel modulo $q$ and then consider the $q$-ary lattice $\Lambda_q(\mathbf{Y})$ spanned by the rows of $\mathbf{Y}$. With high probability, $\mathbf{Y}$ is an $(m-n) \times m$ matrix and $\Lambda_q(\mathbf{Y})$ has volume $q^n$. Let $m' = m - n$ and write $\mathbf{Y} = [\mathbf{I}_{m'} \,|\, \mathbf{Y}']$; then a basis $\mathbf{L}$ for $\Lambda_q(\mathbf{Y})$ is given by
+
+$$ \mathbf{L} = \begin{pmatrix} \mathbf{I}_{m'} & \mathbf{Y}' \\ 0 & q\mathbf{I}_n \end{pmatrix}. $$
+
+In other words, we are attempting to find a short vector **y** in the integer row span of **L**.
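The construction of $\mathbf{L}$ can be spelled out directly; this small sketch (our own, for illustration) builds it from $\mathbf{Y}'$ as nested lists:

```python
def dual_basis(Yp, q):
    # Rows of L = [[I_{m'} | Y'], [0 | q*I_n]] for Y = [I_{m'} | Y'].
    mp, n = len(Yp), len(Yp[0])
    rows = [[int(j == i) for j in range(mp)] + list(Yp[i]) for i in range(mp)]
    rows += [[0] * mp + [q * int(j == i) for j in range(n)] for i in range(n)]
    return rows
```

Since $\mathbf{L}$ is upper triangular with $m'$ ones and $n$ copies of $q$ on the diagonal, its determinant, and hence the volume of $\Lambda_q(\mathbf{Y})$, is $q^n$.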
+
+Given a target for the norm of **y** and hence for $\delta_0$, HElib² estimates the cost of lattice reduction by relying on the following formula from [LP11]:
+
+$$ \log t_{BKZ}(\delta_0) = \frac{1.8}{\log \delta_0} - 110, \quad (1) $$
+
+where $t_{BKZ}(\delta_0)$ is the time in seconds it takes to BKZ reduce a basis to achieve root-Hermite factor $\delta_0$. This estimate is based on experiments with BKZ in the NTL library [Sho01] and extrapolation.
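In code, Eq. (1) is a one-liner; the sketch below (our own) evaluates it for a given $\delta_0$:

```python
import math

def lp_log_seconds(delta0):
    # Eq. (1): log_2 t_BKZ(delta_0) = 1.8 / log_2(delta_0) - 110
    return 1.8 / math.log(delta0, 2) - 110
```

For example, a target of $\delta_0 = 1.005$ yields roughly $2^{140}$ seconds under this model, and the predicted cost grows as the target $\delta_0$ shrinks.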
+
+## 2.3 LP model
+
+The [LP11] model for estimating the cost of lattice reduction is not correct, for three reasons.
+
+Firstly, it expresses the runtime in seconds rather than in units of computation. As Moore's law progresses and more parallelism is introduced, the number of instructions that can be performed in a second increases. Hence, we must first translate
+
+² https://github.com/shaih/HElib/blob/a5921a08e8b418f154be54f4e39a849e74489319/src/FHEContext.cpp#L22
\ No newline at end of file
diff --git a/samples/texts/7740975/page_9.md b/samples/texts/7740975/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac8374799cc562984e5a7f542a4fadcbe0752b1a
--- /dev/null
+++ b/samples/texts/7740975/page_9.md
@@ -0,0 +1,22 @@
+Eq. (1) to units of computation. The experiments of Lindner and Peikert were
+performed on a 2.33 GHz AMD Opteron machine, so we may assume that about
+$2.33 \cdot 10^9$ operations can be performed on such a machine in one second and we
+scale Eq. (1) accordingly.³
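Concretely, this rescaling just adds $\log_2(2.33 \cdot 10^9) \approx 31$ to the right-hand side of Eq. (1); a sketch (our own illustration):

```python
import math

CLOCK = 2.33e9  # assumed operations per second on a 2.33 GHz machine

def lp_log_operations(delta0):
    # Eq. (1) rescaled from seconds to an approximate operation count.
    log_seconds = 1.8 / math.log(delta0, 2) - 110
    return log_seconds + math.log(CLOCK, 2)
```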
+
+Secondly, the LP model does not fit the implementation of BKZ in NTL. The BKZ algorithm internally calls an oracle for solving the shortest vector problem in smaller dimension. The most practically relevant algorithms for realising this oracle are enumeration without preprocessing (Fincke-Pohst) which costs $2^{\Theta(\beta^2)}$ operations, enumeration with recursive preprocessing (Kannan) which costs $\beta^{\Theta(\beta)}$ and sieving which costs $2^{\Theta(\beta)}$. NTL implements enumeration without preprocessing. That is, while it was shown in [Wal15] that BKZ with recursive BKZ pre-processing achieves a run-time of $\text{poly}(n) \cdot \beta^{\Theta(\beta)}$, NTL does not implement the necessary recursive preprocessing with BKZ in smaller dimensions. Hence, it runs in time $\text{poly}(n) \cdot 2^{\Theta(\beta^2)}$ for block size $\beta$.
+
+Thirdly, the LP model assumes a linear relation between $1/\log(\delta_0)$ and the log of the running time of BKZ, but from the "lattice rule-of-thumb" ($\delta_0 \approx \beta^{1/(2\beta)}$) and $2^{\Theta(\beta)}$ being the complexity of the best known algorithm for solving the shortest vector problem, we get:
+
+**Lemma 2 ([APS15]).** *The log of the time complexity to achieve a root-Hermite factor $\delta_0$ with BKZ is*
+
+$$ \Theta\left(\frac{\log(1/\log \delta_0)}{\log \delta_0}\right) $$
+
+if calling the SVP oracle costs $2^{\Theta(\beta)}$.
+
+To illustrate the difference between Lemma 2 and Eq. (1), consider Regev’s original parameters [Reg05] for LWE: $q \approx n^2$, $\alpha q \approx \sqrt{n}$. Then, solving LWE with the dual attack and advantage $\epsilon$ requires a log root-Hermite factor $\log \delta_0 = \log^2\!\left( \alpha \sqrt{\ln(1/\epsilon)/\pi} \right) / (4n \log q)$ [APS15]. Picking $\epsilon$ such that $\log \sqrt{\ln(1/\epsilon)/\pi} \approx 1$, the log root-Hermite factor becomes $\log \delta_0 \approx \frac{9 \log n}{32 n}$. Plugging this into Eq. (1), we would estimate that solving LWE for these parameters takes $\log t_{BKZ}(\delta_0) = \frac{32 n}{5 \log n} - 110$ seconds, which is subexponential in $n$.
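The arithmetic of this worked example is easy to check mechanically; the sketch below (ours) plugs $\log \delta_0 = 9\log n/(32n)$ into Eq. (1) and confirms the closed form $\frac{32n}{5 \log n} - 110$:

```python
import math

def log_delta0_regev(n):
    # Simplified log root-Hermite factor from the text: 9*log(n)/(32*n).
    return 9 * math.log(n, 2) / (32 * n)

def lp_log_seconds(log_delta0):
    # Eq. (1), taking log delta_0 directly.
    return 1.8 / log_delta0 - 110

def closed_form(n):
    # 32*n/(5*log n) - 110, as stated in the text.
    return 32 * n / (5 * math.log(n, 2)) - 110
```

The exponent grows like $n/\log n$, i.e. the predicted cost is $2^{O(n/\log n)}$, subexponential as claimed.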
+
+## 2.4 Parameter choices in SEAL v2.0
+
+SEAL v2.0 [LP16] largely leaves parameter choices to the user. However, it provides the ChooserEvaluator::default_parameter_options() function which returns
+
+³ The number of operations on integers of size $\log q$ depends on $q$ and is not constant. However, constant scaling provides a reasonable approximation for the number of operations for the parameter ranges we are interested in here.
\ No newline at end of file