**Figure 6:** This is a summary of the game transitions in Lemma 3.17. In the “crs₁” column, “B” (resp. “H”) means that commitments are perfectly binding and proofs are perfectly sound (resp. commitments are perfectly hiding and proofs are perfectly zero-knowledge). In the “guess” column, if the simulator chooses a random guess $\beta \in \{0,1\}$, then $\beta$ is written. In the “$w_1$” column, “$z_{0,i} = z_{1,i}$” and “$z_{2,i} = x_2$” mean that the corresponding equation holds in $\mathcal{L}_1$. For $z_{0,i}$ and $z_{1,i}$, we consider two cases: one is $\beta \neq \mu_i[k+1]$ and the other is $\beta = \mu_i[k+1]$. In the former case, $(z_{2,i} - x_2) = 0$ holds in $\mathcal{L}_1$ and $\rho_1$ is a valid proof; in the latter case, $(z_{0,i} - z_{1,i}) = 0$ holds in $\mathcal{L}_1$ and $\rho_1$ is a valid proof. In the “forgery check” column, $Z_j^*$ is set to $\text{Dec}(sk_j, \text{ct}_j^*)$ for $j \in \{0, 1\}$, and $Z_{0,i}$ is the plaintext of $\text{ct}_0$ in the $i$-th signature $\sigma_i$. In the “abort cond.” column, if $Z_2^* \neq G_1^\beta$ holds, the simulator aborts. In the “reduction” column, we write what kind of security is used: “CRS IND”, “Soundness”, and “Hiding” mean the CRS indistinguishability, the perfect soundness, and the perfect hiding of the Groth-Sahai proof system, respectively. “Stat. Diff.” means the statistical difference between the two advantages. Note that $\mathbf{RF}_{k+1}(\mu|k+1) := \mathbf{RF}_k(\mu|k)$ if $\mu[k+1] = \beta$ and $\mathbf{RF}_{k+1}(\mu|k+1) := \mathbf{RF}'_k(\mu|k)$ if $\mu[k+1] = 1-\beta$.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_16.md b/samples/texts/2864204/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..7d0bcb80f7e12395f2cb8035f63ca634194e8023
--- /dev/null
+++ b/samples/texts/2864204/page_16.md
@@ -0,0 +1,36 @@
+**Lemma 3.18 ($H_{4,k}$ to $H_{4,k,1}$).** $\text{AdvH}_{4,k,1} = \text{AdvH}_{4,k}$.
+
+*Proof.* In $H_{4,k,1}$, $x_2$ is switched from $0$ to $1 - \beta$, where $\beta \stackrel{\$}{\leftarrow} \{0,1\}$. Though $x_2 \neq z_{2,i}$ may happen in $H_{4,k,1}$, $z_{0,i} = z_{1,i}$ still holds and hence $\text{ins}_1$ is in $\mathcal{L}_1$ in both games. Thus the commitment $\mathbf{c}_2 \stackrel{\$}{\leftarrow} \text{Com}(\text{crs}_1, x_2)$ and the proof $\rho_1$ are distributed identically in both games due to the witness indistinguishability under $\text{crs}_1$ generated by HG(par). Thus, $\text{AdvH}_{4,k,1} = \text{AdvH}_{4,k}$. $\blacksquare$
+
+**Lemma 3.19 ($H_{4,k,1}$ to $H_{4,k,2}$).** There exists an adversary $\mathcal{B}$ against IND-mCPA security of PKE with running time $\mathbf{T}(\mathcal{B}) \approx \mathbf{T}(\mathcal{A})$ and $\text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{B}) \ge |\text{AdvH}_{4,k,2} - \text{AdvH}_{4,k,1}|$.
+
+*Proof.* In $H_{4,k,2}$, $\text{ct}_2$ encrypts $Z_{2,i} = G^{\mu_i[k+1]}$ instead of $Z_{2,i} = G^0$. Observe that $\text{sk}_2$ is used only in making the commitment $\mathbf{k}_2$ and the proof $\rho_1$ with $\text{crs}_1$ generated by HG(par) in both games. Thus we can construct a straightforward reduction to bound the difference by the IND-mCPA security of PKE, using the perfect zero-knowledge simulator Sim for making $\rho_1$ and the relevant commitments. $\blacksquare$
+
+**Lemma 3.20 ($H_{4,k,1}$ to $H_{4,k,3}$).** $\text{AdvH}_{4,k,3} = \frac{1}{2}\text{AdvH}_{4,k,2}$.
+
+*Proof.* In $H_{4,k,3}$, $\beta$ and $b$ are chosen uniformly at random and are independent of the adversary's view. $\mathbf{c}_2$ perfectly hides $\beta$ since $\text{crs}_1$ is generated by HG(par), and the simulation of SIGN is independent of $\beta$. Thus, the event ABORT is independent of the adversary's success event and
+
+$$
+\begin{align*}
+\Pr[\text{ABORT}] &= \Pr[(z_2^* \in \{0,1\}) \land z_2^* = 1-\beta] + \Pr[z_2^* \notin \{0,1\} \land b=0] \\
+&= \frac{1}{2}\Pr[z_2^* \in \{0,1\}] + \frac{1}{2}(1-\Pr[z_2^* \in \{0,1\}]) = \frac{1}{2},
+\end{align*}
+$$
+
+where $z_2^*$ is the discrete log of $Z_2^*$ to base $G$ and is independent of $b$. This only halves $\mathcal{A}$'s advantage. We note that, for all accepted forgeries in Games $H_{4,k,3}$ to $H_{4,k,8}$, the following equation holds:
+
+$$ z_2^* \neq x_2. \tag{1} $$
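The abort accounting in Lemma 3.20 can be sanity-checked numerically. The sketch below is a toy model, not part of the proof: we represent $z_2^*$ by an integer, with any value outside $\{0,1\}$ standing for the $z_2^* \notin \{0,1\}$ case, and draw $\beta$ and $b$ uniformly.

```python
import random

def abort_event(z_star: int) -> bool:
    """One abort decision: the simulator draws an independent uniform
    guess beta and coin b, and aborts if either (z* in {0,1} and
    z* = 1 - beta) or (z* not in {0,1} and b = 0)."""
    beta = random.randint(0, 1)
    b = random.randint(0, 1)
    if z_star in (0, 1):
        return z_star == 1 - beta
    return b == 0

def estimate(z_star: int, trials: int = 200_000) -> float:
    """Empirical Pr[ABORT] for a fixed adversarial choice of z*."""
    return sum(abort_event(z_star) for _ in range(trials)) / trials

# Pr[ABORT] is 1/2 however the adversary chooses z_2^*.
for z in (0, 1, 7):
    assert abs(estimate(z) - 0.5) < 0.01
```

Since $\beta$ and $b$ are fresh and uniform in every run, the estimate stays near $1/2$ for every choice of $z^*$, matching the case analysis in the displayed equation.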
+
+In the following games, we define the random function:
+
+$$ \mathbf{RF}_{k+1}(\mu|k+1) := \begin{cases} \mathbf{RF}_k(\mu|k) & (\mu[k+1] = \beta) \\ \mathbf{RF}'_k(\mu|k) & (\mu[k+1] = 1-\beta) \end{cases}, \quad (2) $$
+
+where $\mathbf{RF}_k$ and $\mathbf{RF}'_k$ are two independent random functions from $\{0,1\}^k \to \mathbb{Z}_p$. By the definition, we note that $\mathbf{RF}_{k+1} : \{0,1\}^{k+1} \to \mathbb{Z}_p$ is a random function.
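As an illustration of Equation (2), the following sketch (toy modulus; `LazyRF` is our own helper, not from the paper) builds $\mathbf{RF}_{k+1}$ by routing on the last input bit to one of two independently lazily-sampled random functions:

```python
import random

class LazyRF:
    """Lazily sampled random function {0,1}^k -> Z_p (toy modulus here;
    the proof uses the group order p)."""
    def __init__(self, p: int):
        self.p, self.table = p, {}

    def __call__(self, prefix: str) -> int:
        if prefix not in self.table:
            self.table[prefix] = random.randrange(self.p)
        return self.table[prefix]

p, beta = 101, 1
RF, RFprime = LazyRF(p), LazyRF(p)  # two independent random functions on k bits

def RF_next(prefix: str) -> int:
    """Equation (2): on input mu|k+1, answer RF_k(mu|k) when the last
    bit equals beta, and RF'_k(mu|k) otherwise.  Because the two halves
    come from independent random functions, the combined map is again
    uniform on (k+1)-bit inputs."""
    head, last = prefix[:-1], prefix[-1]
    return RF(head) if last == str(beta) else RFprime(head)

# RF_next is a well-defined function on (k+1)-bit strings.
assert RF_next("0101") == RF_next("0101")
assert 0 <= RF_next("1110") < p
```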
+
+**Lemma 3.21 ($H_{4,k,3}$ to $H_{4,k,4}$).** There exists an adversary $\mathcal{B}$ against IND-mCPA security of PKE with running time $\mathbf{T}(\mathcal{B}) \approx \mathbf{T}(\mathcal{A})$ and $\text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{B}) \ge |\text{AdvH}_{4,k,4} - \text{AdvH}_{4,k,3}|$.
+
+*Proof.* In game $H_{4,k,4}$, $x_2 = z_{2,i}$ holds if $\mu_i[k+1] \neq \beta$; otherwise $z_{0,i} = z_{1,i}$. If $\mu_i[k+1] = \beta$, then $z_{0,i} = z_{1,i} = \mathbf{RF}_k(\mu_i|k)$; otherwise $x_2 = z_{2,i} = 1-\beta$ by Equation (2). Thus, in either case, $(z_{0,i}-z_{1,i})(x_2-z_{2,i}) = 0$ holds and $\text{ins}_1 \in \mathcal{L}_1$. Another difference between $H_{4,k,3}$ and $H_{4,k,4}$ is that $\text{ct}_1$ is a ciphertext either of $Z_{1,i} = G^{\mathbf{RF}_{k+1}(\mu_i|k+1)}$ (in $H_{4,k,4}$) or $Z_{1,i} = G^{\mathbf{RF}_k(\mu_i|k)}$ (in $H_{4,k,3}$). Moreover, $\text{sk}_1$ is used only for making $\mathbf{k}_1$ and $\rho_1$ with respect to $\text{crs}_1$ generated by HG(par) in both games. Thus, as in Lemma 3.19, we can construct a straightforward reduction to bound this difference by the IND-mCPA security of PKE using Sim for simulating $\rho_1$ and the relevant commitments. Lemma 3.21 is concluded. $\blacksquare$
+
+**Lemma 3.22 ($H_{4,k,4}$ to $H_{4,k,5}$).** There exists an adversary $\mathcal{B}$ against CRS indistinguishability of GS with running time $\mathbf{T}(\mathcal{B}) \approx \mathbf{T}(\mathcal{A})$ and $2\text{Adv}_{\text{GS}}^{\text{crsind}}(\mathcal{B}) \ge |\text{AdvH}_{4,k,5} - \text{AdvH}_{4,k,4}|$.
+
+*Proof.* In $H_{4,k,5}$, VER rejects a forgery if $Z_{1-(k \bmod 2)}^* \notin \{G^{\mathbf{RF}_k(\mu_j|k)}\}_{j=1}^{q_\sigma}$ instead of using $Z_{k \bmod 2}^*$. In these games, Equation (1) holds and we can switch $\text{crs}_1$ to be binding and argue that $Z_{k \bmod 2}^* = Z_{1-(k \bmod 2)}^*$ by $z_2^* \neq x_2$ and the perfect soundness of GS for language $\mathcal{L}_1$. More formally, we prove that via the game sequence in Figure 7. As shown in Lemma 3.21, $\text{ins}_1$ is always in $\mathcal{L}_1$ and we can construct a
\ No newline at end of file
diff --git a/samples/texts/2864204/page_17.md b/samples/texts/2864204/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..2140d0e4e00266a47c3620d58c0a748f45342755
--- /dev/null
+++ b/samples/texts/2864204/page_17.md
@@ -0,0 +1,36 @@
+**Figure 7:** Games $H'_1-H'_3$ for the proof of Lemma 3.22.
+
+straightforward reduction to show that there exists an adversary $\mathcal{B}$ against CRS indistinguishability of GS with
+
+$$ \mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{B}) \geq |\mathrm{Adv}H'_1 - \mathrm{Adv}H_{4,k,4}|. $$
+
+Since $\text{crs}_1$ is binding in both $H'_1$ and $H'_2$, by the perfect soundness of GS and Equation (1), $Z_{k \bmod 2}^* = Z_{1-(k \bmod 2)}^*$ holds if $\rho_1^*$ is verified. Hence, the changes between $H'_1$ and $H'_2$ are only conceptual, and thus $\mathrm{Adv}H'_2 = \mathrm{Adv}H'_1$. By the CRS indistinguishability of GS, we have $\mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{B}) \geq |\mathrm{Adv}H'_3 - \mathrm{Adv}H'_2|$. It is clear that $\mathrm{Adv}H'_3 = \mathrm{Adv}H_{4,k,5}$. $\blacksquare$
+
+**Lemma 3.23 ($H_{4,k,5}$ to $H_{4,k,6}$).** There exists an adversary $\mathcal{B}$ against IND-mCPA security of PKE with running time $T(\mathcal{B}) \approx T(\mathcal{A})$ and $\mathrm{Adv}_{\mathrm{PKE}}^{\mathrm{mcpa}}(\mathcal{B}) \geq |\mathrm{Adv}H_{4,k,6} - \mathrm{Adv}H_{4,k,5}|$.
+
+*Proof.* In $H_{4,k,6}$, $z_{0,i} = z_{1,i}$ is used as $w_1$. It holds that $(z_{0,i} - z_{1,i})(x_2 - z_{2,i}) = 0$ and $\mathrm{ins}_1 \in \mathcal{L}_1$, as in $H_{4,k,5}$. In the signing oracle of $H_{4,k,6}$, $\mathrm{ct}_0$ encrypts $Z_{0,i} = G^{\mathrm{RF}_{k+1}(\mu_i|k+1)}$ instead of $Z_{0,i} = G^{\mathrm{RF}_k(\mu_i|k)}$. Observe that $\mathrm{sk}_0$ is used only in making $k_0$ and $\rho_1$ with $\text{crs}_1$ generated by $\mathrm{HG}(\mathrm{par})$ in both games. We can thus construct a straightforward reduction to bound the difference between $H_{4,k,5}$ and $H_{4,k,6}$ by IND-mCPA security using the zero-knowledge simulator $\mathrm{Sim}$ for making $\rho_1$ and the relevant commitments. ■
+
+**Lemma 3.24 ($H_{4,k,6}$ to $H_{4,k,7}$).** $\mathrm{Adv}H_{4,k,6} \leq \mathrm{Adv}H_{4,k,7} + \frac{q_s}{p}$.
+
+*Proof.* According to Equation (2), the difference between $H_{4,k,6}$ and $H_{4,k,7}$ is whether VER accepts a forgery with $Z_{1-(k \bmod 2)}^*$ in either:
+
+$$
+\begin{align*}
+\mathcal{Z}_6 &:= \{G^{\mathbf{RF}_k(\mu_j|k)}\}_{j=1}^{q_s} \\
+&= \underbrace{\{G^{\mathbf{RF}_k(\mu_j|k)} : \mu_j[k+1] = \beta\}_{j=1}^{q_s}}_{=:S_1} \cup \{G^{\mathbf{RF}_k(\mu_j|k)} : \mu_j[k+1] = 1-\beta\}_{j=1}^{q_s} \quad (\text{in } H_{4,k,6})
+\end{align*}
+$$
+
+or
+
+$$ \mathcal{Z}_7 := \{G^{\mathbf{RF}_{k+1}(\mu_j|k+1)}\}_{j=1}^{q_s} = S_1 \cup \{G^{\mathbf{RF}'_k(\mu_j|k)} : \mu_j[k+1] = 1-\beta\}_{j=1}^{q_s} \quad (\text{in } H_{4,k,7}). $$
+
+We note that, for those messages $M$ with $\mu[k+1] = 1-\beta$ and $\mu|k \in CM := \{\mu_j|k : \mu_j[k+1] = \beta\}_{j=1}^{q_s}$, the value $G^{\mathbf{RF}_k(\mu|k)}$ already lies in $S_1$. Namely,
+
+$$
+\begin{align*}
+S' &:= S_1 \cap \{G^{\mathbf{RF}_k(\mu_j|k)} : \mu_j[k+1] = 1-\beta\}_{j=1}^{q_s} \\
+ &= \{G^{\mathbf{RF}_k(\mu_j|k)} : \mu_j[k+1] = 1-\beta \land \mu_j|k \in CM\}_{j=1}^{q_s}.
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/2864204/page_18.md b/samples/texts/2864204/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..749a072851725587b0c3443f2735e2a5a0fe12d4
--- /dev/null
+++ b/samples/texts/2864204/page_18.md
@@ -0,0 +1,53 @@
+We note that $S'$ may be non-empty, since each element $G^{\mathbf{RF}_k(\mu_j|k)}$ depends only on the $k$-bit prefix of $\mu_j$. Thus, we can rewrite
+
+$$ \mathcal{Z}_6 = S_1 \cup \underbrace{\{G^{\mathbf{RF}_k(\mu_j|k)} : \mu_j[k+1] = 1-\beta \land \mu_j|k \notin CM\}_{j=1}^{q_s}}_{=:S_2}. $$
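This set rewriting can be spot-checked on a toy instance (hypothetical small parameters; `rf` lazily samples $\mathbf{RF}_k$): every value whose $k$-bit prefix lies in $CM$ already appears in $S_1$, so dropping those elements from the second part leaves the union unchanged.

```python
import random

p, k, beta, q_s = 97, 3, 0, 8
table = {}
def rf(prefix: str) -> int:          # lazily sampled RF_k on k-bit prefixes
    return table.setdefault(prefix, random.randrange(p))

# q_s random (k+1)-bit prefixes mu_j; the last character is mu_j[k+1].
mus = ["".join(random.choice("01") for _ in range(k + 1)) for _ in range(q_s)]

Z6 = {rf(m[:k]) for m in mus}
S1 = {rf(m[:k]) for m in mus if m[k] == str(beta)}
CM = {m[:k] for m in mus if m[k] == str(beta)}
S2 = {rf(m[:k]) for m in mus if m[k] != str(beta) and m[:k] not in CM}

# Values whose prefix lies in CM are already contributed by S1, so the
# decomposition Z6 = S1 ∪ S2 holds as an identity of sets.
assert Z6 == S1 | S2
```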
+
+We define the following game $H_{4,k,6'}$ between $H_{4,k,6}$ and $H_{4,k,7}$. $H_{4,k,6'}$ simulates INIT and SIGN as in $H_{4,k,6}$, but differs in simulating VER, where it only accepts forgery with $Z_{1-(k \bmod 2)}^* \in S_1$. Precisely, $H_{4,k,6'}$ simulates VER as follows:
+
+• Parse $\sigma^* := ((ct_j^*)_{0\le j\le 2}, \rho_0^*, \rho_1^*)$.
+
+• $Z_2^* \leftarrow \text{Dec}(sk_2, ct_2^*)$. If $Z_2^* \neq G^\beta$ then return 0.
+
+• $Z_{1-(k \bmod 2)}^* \leftarrow \text{Dec}(sk_{1-(k \bmod 2)}, ct_{1-(k \bmod 2)}^*)$. If $Z_{1-(k \bmod 2)}^* \notin S_1$ then return 0.
+
+• Return $(M^* \notin Q_M) \land (\text{Ver}(pk, M^*, \sigma^*) = 1)$.
+
+We note that the value $\mathbf{RF}_k(\mu|k)$ is perfectly hidden from $\mathcal{A}$ for $\mu[k+1]=1-\beta$ and $\mu|k \notin CM$, since $\mathcal{A}$ only learns $\mathbf{RF}'_k(\mu|k)$ from SIGN by Equation (2), and $\mathbf{RF}_k$ and $\mathbf{RF}'_k$ are two independent random functions. Thus, even an unbounded adversary $\mathcal{A}$ can output a value in $S_2$ with probability at most $q_s/p$, and the following holds:
+
+$$ \text{Adv}_{H_{4,k,6}} - \text{Adv}_{H_{4,k,6'}} \le \frac{q_s}{p}. $$
+
+Compared to $H_{4,k,6'}$, there are more valid forgeries in $H_{4,k,7}$ and we have
+
+$$ \text{Adv}_{H_{4,k,6'}} \le \text{Adv}_{H_{4,k,7}}. $$
+
+Thus, $\text{Adv}_{H_{4,k,6}} - \text{Adv}_{H_{4,k,7}} \le \frac{q_s}{p}$ and we conclude the lemma. $\blacksquare$
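The core of the $q_s/p$ bound, namely that a guess independent of at most $q_s$ hidden uniform values lands among them with probability at most $q_s/p$, can be checked empirically. The sketch below uses toy parameters, not the actual group order:

```python
import random

def hit_probability(p: int, q_s: int, trials: int = 100_000) -> float:
    """The hidden values are q_s fresh uniform elements of Z_p; the
    adversary's output is independent of them, so a single guess lands
    in the hidden set with probability at most q_s / p."""
    hits = 0
    for _ in range(trials):
        hidden = {random.randrange(p) for _ in range(q_s)}
        guess = random.randrange(p)      # the best an unbounded A can do
        hits += guess in hidden
    return hits / trials

p, q_s = 997, 10
est = hit_probability(p, q_s)
assert est <= q_s / p + 0.01
```

With $p$ cryptographically large and $q_s$ polynomial, this probability is negligible, which is exactly how the statistical term $q_s/p$ enters the bound.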
+
+**Lemma 3.25 ($H_{4,k,7}$ to $H_{4,k,8}$).** $\text{Adv}_{H_{4,k,8}} = 2\text{Adv}_{H_{4,k,7}}.$
+
+*Proof.* $H_{4,k,8}$ accepts a forgery regardless of whether ABORT occurred. By the same argument as in Lemma 3.20, this doubles the advantage of $\mathcal{A}$. $\blacksquare$
+
+Note that we have stopped using $sk_2$ in $H_{4,k,8}$. In $H_{4,k,9}$, $ct_2$ encrypts $Z_{2,i} = G^0$ instead of $Z_{2,i} = G^{\mu_i[k+1]}$. By the same argument as Lemma 3.19, we have
+
+**Lemma 3.26 ($H_{4,k,8}$ to $H_{4,k,9}$).** There exists an adversary $\mathcal{B}$ against IND-mCPA security of PKE with running time $T(\mathcal{B}) \approx T(\mathcal{A})$ and $\text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{B}) \ge |\text{Adv}_{H_{4,k,9}} - \text{Adv}_{H_{4,k,8}}|$.
+
+**Lemma 3.27 ($H_{4,k,9}$ to $H_{4,k,10}$).** $\text{Adv}_{H_{4,k,10}} = \text{Adv}_{H_{4,k,9}}.$
+
+*Proof.* In $H_{4,k,10}$, $x_2$ is switched from $1-\beta$ to 0 and $\rho_1$ is generated by using $P$ instead of Sim. Since $\text{crs}_1$ is generated by $\text{HG}(\text{par})$, $\mathbf{c}_2 \stackrel{\$}{\leftarrow} \text{Com}(\text{crs}_1, x_2)$ is distributed the same in both $H_{4,k,9}$ and $H_{4,k,10}$. So is $\rho_1$ by the perfect zero-knowledge property. Thus, $\text{Adv}_{H_{4,k,10}} = \text{Adv}_{H_{4,k,9}}$. $\blacksquare$
+
+**Lemma 3.28 ($H_{4,k,10}$ to $H_{4,k+1}$).** $\text{Adv}_{H_{4,k+1}} = \text{Adv}_{H_{4,k,10}}.$
+
+*Proof.* $H_{4,k,10}$ simulates INIT and VER the same as in $H_{4,k+1}$, and $z_{0,i} = z_{1,i} = \mathbf{RF}_{k+1}(\mu_i|k+1)$. Thus, $\text{Adv}_{H_{4,k,10}} = \text{Adv}_{H_{4,k+1}}$. $\blacksquare$
+
+From Lemmata 3.18 to 3.23, we have
+
+$$ \text{Adv}_{H_{4,k}} - 2\text{Adv}_{H_{4,k,6}} \le |\text{Adv}_{H_{4,k}} - 2\text{Adv}_{H_{4,k,6}}| \le 4\text{Adv}_{\text{GS}}^{\text{crsind}}(\mathcal{B}_1) + 5\text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{B}_2). $$
+
+From Lemmata 3.25 to 3.28, we have
+
+$$ 2\text{Adv}_{H_{4,k,7}} - \text{Adv}_{H_{4,k+1}} \le |2\text{Adv}_{H_{4,k,7}} - \text{Adv}_{H_{4,k+1}}| \le \text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{B}_2). $$
+
+As $2\text{Adv}_{H_{4,k,6}} \le 2\text{Adv}_{H_{4,k,7}} + \frac{2q_s}{p}$ (Lemma 3.24), we conclude Lemma 3.17 as
+
+$$ \text{Adv}_{H_{4,k}} - \text{Adv}_{H_{4,k+1}} \le 4\text{Adv}_{\text{GS}}^{\text{crsind}}(\mathcal{B}_1) + 6\text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{B}_2) + 2q_s/p. $$
+
+$\blacksquare$
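The bookkeeping that combines the three displayed inequalities into the final bound is a telescoping sum. The sketch below is a numeric spot-check with arbitrary nonnegative stand-ins for the advantage and loss terms, not a proof:

```python
import random

def check_once() -> None:
    """Sample values consistent with the three lemma inequalities and
    verify that the combined bound of Lemma 3.17 follows."""
    crs, mcpa, qp = (random.random() for _ in range(3))  # loss terms
    B7 = random.random()                       # Adv H_{4,k,7}
    B6 = B7 + random.uniform(0, qp)            # Lemma 3.24: B6 <= B7 + qp
    A = 2 * B6 + random.uniform(0, 4 * crs + 5 * mcpa)   # Lemmata 3.18-3.23
    C = 2 * B7 - random.uniform(0, mcpa)       # Lemmata 3.25-3.28: 2B7 - C <= mcpa
    # Telescoping: A - C = (A - 2*B6) + 2*(B6 - B7) + (2*B7 - C)
    assert A - C <= 4 * crs + 6 * mcpa + 2 * qp + 1e-9

for _ in range(1000):
    check_once()
```

The $2(B_6 - B_7)$ term is where the doubled statistical loss $2q_s/p$ comes from: the halving in Lemma 3.20 is undone in Lemma 3.25, which doubles every intermediate difference.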
\ No newline at end of file
diff --git a/samples/texts/2864204/page_19.md b/samples/texts/2864204/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..05a853380a3daefa244d63d2d202796b58abfad6
--- /dev/null
+++ b/samples/texts/2864204/page_19.md
@@ -0,0 +1,15 @@
+We syntactically define $\mathbf{F}(\mathbf{M}_i) := \mathbf{RF}_L(\mu_i)$ in $G_1$, since the binary representation of a group element is unique, and we have
+
+**Lemma 3.29 ($H_{4,L}$ to $G_1$).** There exists an adversary $\mathcal{B}$ against CRS indistinguishability of GS with running time $\mathbf{T}(\mathcal{B}) \approx \mathbf{T}(\mathcal{A})$ and $\text{Adv}_{\text{GS}}^{\text{crsind}}(\mathcal{B}) \ge |\text{Adv}_{G_1} - \text{Adv}_{H_{4,L}}|$.
+
+*Proof.* We note that $L$ is the smallest even integer equal to or larger than the bit size of $p$ (in particular, $L \bmod 2 = 0$). The only difference between $G_1$ and $H_{4,L}$ is the simulation of $\text{crs}_1$, which is generated by either BG (in $G_1$) or HG (in $H_{4,L}$), since $\mathbf{F}(\mathbf{M}_i) = \mathbf{RF}_L(\mu_i)$. From that, we obtain a straightforward reduction to the CRS indistinguishability of GS. $\blacksquare$
+
+Combining Lemmata 3.12 to 3.17 and Lemma 3.29, we have $\text{Adv}_{G_0} \le \text{Adv}_{G_1} + 3\text{Adv}_{\text{GS}}^{\text{crsind}}(\mathcal{B}_1) + L \cdot (4\text{Adv}_{\text{GS}}^{\text{crsind}}(\mathcal{B}_1) + 6\text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{B}_2) + \frac{2q_s}{p})$ and conclude Lemma 3.8.
+
+### 3.3.2 From $G_2$ to $G_3$: Proof of Lemma 3.10
+
+The proof of Lemma 3.10 is essentially the same as Lemma 3.8, but the game sequence is defined in the reverse order. For completeness, we define the game sequence in Figure 8. For the game sequence $S_{0,k}$, the index $k$ starts with $L$ and ends with 0. We use $\text{Adv}_{S_i}$ to denote the advantage of $\mathcal{A}$ in Game $S_i$.
+
+**Figure 8:** Games $S_{0,L}$–$S_{0,0}$ and $S_1$–$S_3$ for the proof of Lemma 3.10. $\mathbf{RF}_k : \{0,1\}^k \to \mathbb{Z}_p$ is a truly random function, and $\mu_i$ is a random binary encoding of $\mathbf{M}_i$.
+
+By defining $\mathbf{RF}_L(\mu) := \mathbf{F}(\mathbf{M})$ (where $\mu$ is a random binary encoding of $\mathbf{M}$) and the same argument as in Lemma 3.16, we have
\ No newline at end of file
diff --git a/samples/texts/2864204/page_2.md b/samples/texts/2864204/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..7bd1a8f1a11361cdbf32122a35ab4b41252ed06c
--- /dev/null
+++ b/samples/texts/2864204/page_2.md
@@ -0,0 +1,14 @@
+| Reference | $|M|$ | $|\sigma|$ | $|pk|$ | Sec. Loss | Assumptions |
+|---|---|---|---|---|---|
+| HJ [37] | 1 | 10d + 6 | 13 | 8 | DLIN |
+| ACDKNO [3] | $(n_1, 0)$ | (7, 4) | $(5, n_1 + 12)$ | $O(q_s)$ | SXDH, XDLIN$_1$ |
+| ACDKNO [3] | $(n_1, n_2)$ | (8, 6) | $(n_2 + 6, n_1 + 13)$ | $O(q_s)$ | SXDH, XDLIN$_1$ |
+| LPY [45] | $(n_1, 0)$ | (10, 1) | $(16, 2n_1 + 5)$ | $O(q_s)$ | SXDH, XDLIN$_2$ |
+| KPW [41] | $(n_1, 0)$ | (6, 1) | $(0, n_1 + 6)$ | $O(q_s^2)$ | SXDH |
+| KPW [41] | $(n_1, n_2)$ | (7, 3) | $(n_2 + 1, n_1 + 7)$ | $O(q_s^2)$ | SXDH |
+| JR [39] | $(n_1, 0)$ | (5, 1) | $(0, n_1 + 6)$ | $O(q_s \log q_s)$ | SXDH |
+| Ours (Sect. 4.2) | $(n_1, 0)$ | (13, 12) | $(18, n_1 + 11)$ | $O(\lambda)$ | SXDH |
+| Ours (Sect. 4.3) | $(n_1, n_2)$ | (14, 14) | $(n_2 + 19, n_1 + 12)$ | $O(\lambda)$ | SXDH |
+
+**Table 1:** Object sizes and security loss among structure-preserving signature schemes with standard-model assumptions. The smallest possible parameters are set for parameterized assumptions. Notation $(x, y)$ means $x$ and $y$ elements in $\mathbb{G}_1$ and $\mathbb{G}_2$, respectively. The $|M|$, $|\sigma|$, and $|pk|$ columns give the number of messages, the number of group elements in a signature, and the number of group elements in a public key, respectively. The “Sec. Loss” column gives the reduction cost, and the “Assumptions” column the underlying assumptions for proving security. For HJ, parameter $d$ limits the number of signatures to $2^d$. Parameters $q_s$ and $\lambda$ denote the number of signing queries and the security parameter, respectively.
+
+The only tightly secure SPS under compact assumptions is that by Hofheinz and Jager [37]. Their tree-based construction, however, yields unacceptably large signatures consisting of hundreds of group elements. For the other SPS schemes under compact assumptions, security is proven via a hybrid argument that repeats the reduction over the $q_s$ signing queries. Thus, their security loss is $O(q_s)$ [3, 45] or even $O(q_s^2)$ [41], as shown in Table 1.
+
+| Reference | $|M|$ | #(s.mult) in signing | #(PPEs) | #(Pairings), Plain | #(Pairings), Batched |
+|---|---|---|---|---|---|
+| KPW [41] | $(n_1, 0)$ | (6, 1) | 3 | $n_1 + 11$ | $n_1 + 10$ |
+| JR [39] | $(n_1, 0)$ | (6, 1) | 2 | $n_1 + 8$ | $n_1 + 6$ |
+| Ours (Sect. 4.2) | $(n_1, 0)$ | (15, 15) | 15 | $n_1 + 57$ | $n_1 + 16$ |
+| KPW [41] | $(n_1, n_2)$ | (8, 3.5) | 4 | $n_1 + n_2 + 15$ | $n_1 + n_2 + 14$ |
+| Ours (Sect. 4.3) | $(n_1, n_2)$ | (17.5, 16) | 16 | $n_1 + n_2 + 61$ | $n_1 + n_2 + 18$ |
+
+**Table 2:** Comparison of factors relevant to computational efficiency against the SPS schemes with the smallest signature sizes. The third column gives the number of scalar multiplications in $\mathbb{G}_1$ and $\mathbb{G}_2$ for signing; a multi-scalar multiplication is counted as 1.5. For JR, a constant pairing is included. The “Batched” column shows the number of pairings in verification when the pairing product equations are merged into one using a batch verification technique [14].
+
+The non-tightness of security reductions does not necessarily mean the existence of a forger with reduced complexity, but the security guarantees given by non-tight reductions are quantitatively weaker than those given by tight reductions. Recovering from the security loss by increasing the security parameter is not a trivial solution when bilinear groups are involved. The security in source and target groups should be balanced, and computational efficiency is influenced by the choice of curves, pairings, and parameters such as embedding degrees, and by the presence of dedicated techniques. In practice, an optimal setting for a targeted security parameter is determined by actual benchmarks, e.g., [30, 6, 33, 25], and only standard security parameters such as 128, 192, and 256 have been investigated. One would thus have to hop to the next standard security level to offset the security loss in reality. Besides, we stress that increasing the security parameter for a building block in structure-preserving cryptography is more costly than usual, as it results in losing efficiency in all other building blocks using the same bilinear groups. Thus, the demand for tight security is stronger in structure-preserving cryptography.
+
+Even in ordinary (i.e. non-structure-preserving) signature schemes, most of the constructions satisfying tight security are either in the random oracle model, e.g. [9, 40, 23, 4], rely on q-type or strong RSA assumptions, e.g., [16, 46], or lead to large signatures and/or keys, e.g., [23, 43]. Hofheinz presented the first tightly secure construction with compact signatures and keys under a standard compact assumption
\ No newline at end of file
diff --git a/samples/texts/2864204/page_20.md b/samples/texts/2864204/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..223efe19e783073beafa2e88bf85355d9448c729
--- /dev/null
+++ b/samples/texts/2864204/page_20.md
@@ -0,0 +1,15 @@
+**Lemma 3.30 ($G_2$ to $S_{0,L}$).** There exists an adversary $\mathcal{B}$ against CRS indistinguishability of GS with running time $T(\mathcal{A}) \approx T(\mathcal{B})$ and $\mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{B}) \ge |\mathrm{Adv}S_{0,L} - \mathrm{Adv}G_2|$.
+
+**Lemma 3.31 ($S_{0,k}$ to $S_{0,k-1}$).** There exist adversaries $\mathcal{B}_1$ against CRS indistinguishability of GS and $\mathcal{B}_2$ against IND-mCPA security of PKE with $T(\mathcal{A}) \approx T(\mathcal{B}_1) \approx T(\mathcal{B}_2)$ and $\mathrm{Adv}S_{0,k} - \mathrm{Adv}S_{0,k-1} \le 4\mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{B}_1) + 6\mathrm{Adv}_{\mathrm{PKE}}^{\mathrm{mcpa}}(\mathcal{B}_2) + \frac{2q_s}{p}$.
+
+*Proof.* The proof of Lemma 3.31 is essentially the same as that of Lemma 3.17, but here we derandomize $z_{0,i}$ and $z_{1,i}$ from $\mathbf{RF}_k(\mu_i|k)$ to $\mathbf{RF}_{k-1}(\mu_i|k-1)$ instead of randomizing $z_{0,i}$ and $z_{1,i}$ from $\mathbf{RF}_{k-1}(\mu_i|k-1)$ to $\mathbf{RF}_k(\mu_i|k)$. We define the detailed games in Figure 9 and sketch the proof as follows.
+
+**Figure 9:** Games $S_{0,k-1}$–$S_{0,k,10}$ for the proof of Lemma 3.31. $\mu[k]$ is the $k$-th bit of $\mu$ and $\mu|k$ is the first $k$ bits of $\mu$. $\mathbf{RF}_{k-1} : \{0,1\}^{k-1} \to \mathbb{Z}_p$ is a truly random function (defined by Equation (3)).
+
+By the same arguments as in Lemmata 3.18 to 3.20, we have the following Lemmata.
+
+**Lemma 3.32 ($S_{0,k}$ to $S_{0,k,1}$).** $\mathrm{Adv}S_{0,k,1} = \mathrm{Adv}S_{0,k}$.
+
+**Lemma 3.33 ($S_{0,k,1}$ to $S_{0,k,2}$).** There exists an adversary $\mathcal{B}$ against IND-mCPA security of PKE with $T(\mathcal{A}) \approx T(\mathcal{B})$ and $\mathrm{Adv}_{\mathrm{PKE}}^{\mathrm{mcpa}}(\mathcal{B}) \ge |\mathrm{Adv}S_{0,k,2} - \mathrm{Adv}S_{0,k,1}|$.
+
+**Lemma 3.34 ($S_{0,k,2}$ to $S_{0,k,3}$).** $\mathrm{Adv}S_{0,k,3} = \frac{1}{2}\mathrm{Adv}S_{0,k,2}$.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_21.md b/samples/texts/2864204/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..489e9971cd54a81ae274ccdcb1dc6f1b550868ad
--- /dev/null
+++ b/samples/texts/2864204/page_21.md
@@ -0,0 +1,57 @@
+In the following games, we define the random function:
+
+$$
+\mathbf{RF}_{k-1}(\mu|_{k-1}) := \mathbf{RF}_k(\mu|_{k-1}, \beta), \qquad (3)
+$$
+
+where $\beta$ is a random guess of $z_2^*$ in $\text{ct}_2^*$. We note that $\mathbf{RF}_{k-1} : \{0,1\}^{k-1} \to \mathbb{Z}_p$ is a random function by Equation (3), since $\mathbf{RF}_k$ is a random function.
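A minimal sketch of Equation (3) (toy modulus and lazily-sampled $\mathbf{RF}_k$; both helpers are our own illustration, not from the paper):

```python
import random

p, beta = 101, 0
table = {}
def RF_k(x: str) -> int:            # lazily sampled RF_k : {0,1}^k -> Z_p
    return table.setdefault(x, random.randrange(p))

def RF_km1(prefix: str) -> int:
    """Equation (3): RF_{k-1}(mu|k-1) := RF_k(mu|k-1, beta), i.e. query
    RF_k with the guess bit beta appended.  Distinct (k-1)-bit inputs
    hit distinct RF_k entries, so RF_{k-1} is again a random function."""
    return RF_k(prefix + str(beta))

assert RF_km1("010") == RF_k("0100")
assert RF_km1("010") == RF_km1("010")
```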
+
+**Lemma 3.35 ($S_{0,k,3}$ to $S_{0,k,4}$).** There exists an adversary $\mathcal{B}$ against IND-mCPA security of PKE with $T(\mathcal{A}) \approx T(\mathcal{B})$ and $\mathrm{Adv}_{\mathrm{PKE}}^{\mathrm{mcpa}}(\mathcal{B}) \ge |\mathrm{Adv}S_{0,k,4} - \mathrm{Adv}S_{0,k,3}|$.
+
+*Proof.* The proof is similar to that of Lemma 3.21. First observe that, in $S_{0,k,4}$, $x_2 = z_{2,i}$ is used as $w_1$ if $\mu_i[k] \neq \beta$; otherwise $z_{0,i} = z_{1,i}$. If $\mu_i[k] = \beta$, then $z_{0,i} = z_{1,i} = \mathbf{RF}_k(\mu_i|k-1, \beta) + m_i$; otherwise, $x_2 = z_{2,i} = 1-\beta$ by Equation (3). Thus, $(z_{0,i}-z_{1,i})(x_2-z_{2,i}) = 0$ holds and $\text{ins}_1 \in \mathcal{L}_1$ in either case. Then, the difference between $S_{0,k,4}$ and $S_{0,k,3}$ is that $\text{ct}_1$ is a ciphertext either of $Z_{1,i} = G_1^{\mathbf{RF}_{k-1}(\mu_i|k-1)} \cdot M_i$ (in $S_{0,k,4}$) or $Z_{1,i} = G_1^{\mathbf{RF}_k(\mu_i|k)} \cdot M_i$ (in $S_{0,k,3}$). Since $sk_1$ is used only for making $k_1$ and $\rho_1$ with respect to $\text{crs}_1$ generated by HG(par) in both games, we can construct a straightforward reduction to bound this difference by the IND-mCPA security of PKE using the zero-knowledge simulator Sim to make $\rho_1$ and the relevant commitments. The lemma is concluded. $\blacksquare$
+
+By the same arguments as in Lemmata 3.22 to 3.23, we have
+
+**Lemma 3.36 ($S_{0,k,4}$ to $S_{0,k,5}$).** There exists an adversary $\mathcal{B}$ against CRS indistinguishability of GS with $T(\mathcal{A}) \approx T(\mathcal{B})$ and $2\mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{B}) \ge |\mathrm{Adv}S_{0,k,5} - \mathrm{Adv}S_{0,k,4}|$.
+
+**Lemma 3.37 ($S_{0,k,5}$ to $S_{0,k,6}$).** There exists an adversary $\mathcal{B}$ against IND-mCPA security of PKE with $T(\mathcal{A}) \approx T(\mathcal{B})$ and $\mathrm{Adv}_{\mathrm{PKE}}^{\mathrm{mcpa}}(\mathcal{B}) \ge |\mathrm{Adv}S_{0,k,6} - \mathrm{Adv}S_{0,k,5}|$.
+
+**Lemma 3.38 ($S_{0,k,6}$ to $S_{0,k,7}$).** $\mathrm{Adv}S_{0,k,6} - \mathrm{Adv}S_{0,k,7} \le \frac{q_s}{p}$.
+
+*Proof.* Similar to Lemma 3.24, the difference between $S_{0,k,6}$ and $S_{0,k,7}$ is whether the accepted forgery has $Z_{1-(k \bmod 2)}^*$ in either:
+
+$$
+\begin{align*}
+Z_6 &:= \left\{ G_1^{\mathbf{RF}_k (\mu_j | k)} M_j \right\}_{j=1}^{q_s} \\
+&= \underbrace{\left\{ G_1^{\mathbf{RF}_k (\mu_j | k-1, \beta)} M_j : \mu_j[k] = \beta \right\}_{j=1}^{q_s}}_{=:S_1} \cup \underbrace{\left\{ G_1^{\mathbf{RF}_k (\mu_j | k-1, 1-\beta)} M_j : \mu_j[k] = 1-\beta \right\}_{j=1}^{q_s}}_{=:S_2} \quad (\text{in } S_{0,k,6})
+\end{align*}
+$$
+
+or
+
+$$
+\begin{align*}
+Z_7 &:= \left\{ G_1^{\mathbf{RF}_{k-1}(\mu_j|_{k-1})} M_j \right\}_{j=1}^{q_s} \\
+&= S_1 \cup \underbrace{\left\{ G_1^{\mathbf{RF}_k(\mu_j|_{k-1}, \beta)} M_j : \mu_j[k] = 1-\beta \right\}_{j=1}^{q_s}}_{=:S_3} \quad (\text{in } S_{0,k,7}),
+\end{align*}
+$$
+
+according to Equation (3).
+
+We define the following game $S_{0,k,6'}$ between $S_{0,k,6}$ and $S_{0,k,7}$. $S_{0,k,6'}$ simulates INIT and SIGN as in $S_{0,k,6}$, but differs in simulating VER, where it only accepts forgery with $Z_{1-(k \bmod 2)}^* \in S_1$. Precisely, $S_{0,k,6'}$ simulates VER as follows:
+
+• Parse $\sigma^* := ((\text{ct}_j^*)_{0 \le j \le 2}, \rho_0^*, \rho_1^*)$.
+
+• $Z_2^* \leftarrow \text{Dec}(sk_2, \text{ct}_2^*)$. If $Z_2^* \neq G_1^\beta$ then return 0.
+
+• $Z_{1-(k \bmod 2)}^* \leftarrow \text{Dec}(sk_{1-(k \bmod 2)}, \text{ct}_{1-(k \bmod 2)}^*)$. If $Z_{1-(k \bmod 2)}^* \notin S_1$ then return 0.
+
+• Return $(M^* \notin Q_M) \land (\text{Ver}(pk, M^*, \sigma^*) = 1)$.
+
+From the answers of SIGN, the adversary $\mathcal{A}$ only learns the values $\mathbf{RF}_k(\mu_j|_{k-1}, \beta)$ for the signed messages $M_j$. Thus, since $\mathbf{RF}_k: \{0,1\}^k \rightarrow \mathbb{Z}_p$ is a random function, the values $\mathbf{RF}_k(\mu_j|_{k-1}, 1-\beta)$ are perfectly hidden from $\mathcal{A}$ until VER is queried. Hence, even an unbounded adversary $\mathcal{A}$ can output a value in $S_2$ with probability at most $\frac{q_s}{p}$, and the following holds:
+
+$$
+\mathrm{Adv}_{S_{0,k,6}} - \mathrm{Adv}_{S_{0,k,6'}} \leq \frac{q_s}{p}.
+$$
\ No newline at end of file
diff --git a/samples/texts/2864204/page_22.md b/samples/texts/2864204/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..43abd8111f43bfcc49f9805ea327428d46c729e8
--- /dev/null
+++ b/samples/texts/2864204/page_22.md
@@ -0,0 +1,45 @@
+Compared to $S_{0,k,6'}$, there are more valid forgeries in $S_{0,k,7}$ and we have
+
+$$ \mathrm{Adv}_{S_{0,k,6'}} \leq \mathrm{Adv}_{S_{0,k,7}}. $$
+
+Thus, $\mathrm{Adv}_{S_{0,k,6}} - \mathrm{Adv}_{S_{0,k,7}} \leq \frac{q_s}{p}$ and we conclude the lemma. ■
+
+**Lemma 3.39 ($S_{0,k,7}$ to $S_{0,k,8}$).** $\mathrm{Adv}_{S_{0,k,8}} = 2\mathrm{Adv}_{S_{0,k,7}}.$
+
+**Lemma 3.40 ($S_{0,k,8}$ to $S_{0,k,9}$).** There exists an adversary $\mathcal{B}$ against IND-mCPA security of PKE with $T(\mathcal{A}) \approx T(\mathcal{B})$ and $\mathrm{Adv}_{\mathrm{PKE}}^{\mathrm{mcpa}}(\mathcal{B}) \geq |\mathrm{Adv}_{S_{0,k,9}} - \mathrm{Adv}_{S_{0,k,8}}|$.
+
+**Lemma 3.41 ($S_{0,k,9}$ to $S_{0,k,10}$).** $\mathrm{Adv}_{S_{0,k,10}} = \mathrm{Adv}_{S_{0,k,9}}.$
+
+**Lemma 3.42 ($S_{0,k,10}$ to $S_{0,k-1}$).** $\mathrm{Adv}_{S_{0,k-1}} = \mathrm{Adv}_{S_{0,k,10}}.$
+
+Summarizing the above lemmata, we have $\mathrm{Adv}_{S_{0,k}} - \mathrm{Adv}_{S_{0,k-1}} \leq 4\mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{B}_1) + 6\mathrm{Adv}_{\mathrm{PKE}}^{\mathrm{mcpa}}(\mathcal{B}_2) + 2\frac{q_s}{p}$ and conclude Lemma 3.31. ■
+
+By defining $\mathbf{RF}_0(\epsilon) := x_0 \stackrel{\$}{\leftarrow} \mathbb{Z}_p$, similar to Lemma 3.16, we have
+
+**Lemma 3.43 ($S_{0,0}$ to $S_1$).** There exists an adversary $\mathcal{B}$ against CRS indistinguishability with running time $T(\mathcal{A}) \approx T(\mathcal{B})$ and $\mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{B}) \geq |\mathrm{Adv}_{S_1} - \mathrm{Adv}_{S_{0,0}}|$.
+
+Similar to Lemmata 3.14 and 3.15, we have
+
+**Lemma 3.44 ($S_1$ to $S_2$).** $\mathrm{Adv}_{S_2} = \mathrm{Adv}_{S_1}.$
+
+**Lemma 3.45 ($S_2$ to $S_3$).** There exists an adversary $\mathcal{B}$ against CRS indistinguishability with running time $T(\mathcal{A}) \approx T(\mathcal{B})$ and $\mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{B}) \geq |\mathrm{Adv}_{S_3} - \mathrm{Adv}_{S_2}|$.
+
+Observing that $\mathbf{crs}_0 \stackrel{\$}{\leftarrow} \mathbf{BG(par)}$, $z_{0,i} = z_{1,i} = x_0 + m_i$ and $\rho_0 \stackrel{\$}{\leftarrow} \mathbf{P(\mathbf{crs}_0, ins_0, w_0)}$, we have $G_3 = S_3$ and
+
+**Lemma 3.46 ($S_3$ to $G_3$).** $\mathrm{Adv}_{G_3} = \mathrm{Adv}_{S_3}.$
+
+Summarizing Lemmata 3.30, 3.31 and 3.43 to 3.46, we have $\mathrm{Adv}_{G_2} \leq \mathrm{Adv}_{G_3} + (4L+3)\mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{B}_1) + 6L \cdot \mathrm{Adv}_{\mathrm{PKE}}^{\mathrm{mcpa}}(\mathcal{B}_2) + \frac{2Lq_s}{p}$.
+
+We omit high-level outlines of the game transitions in Lemmata 3.10 and 3.31 since they are very similar to those in Figures 3 and 6 for Lemmata 3.8 and 3.17.
+
+# 4 Instantiation
+
+We instantiate our generic construction in Type-III bilinear groups under the SXDH assumption. Throughout this section, we denote group elements in $\mathbb{G}_1$ with plain upper-case letters, such as $X$, and elements in $\mathbb{G}_2$ with upper-case letters carrying a tilde, such as $\tilde{X}$. Scalar values in $\mathbb{Z}_p$ are denoted with lower-case letters. We may also put a tilde on scalar values or other objects when they relate to group elements in $\mathbb{G}_2$ in a way that is clear from the context.
+
+We begin with optimizations in Section 4.1 made on top of the generic construction. We then present a concrete scheme for signing unilateral messages in Section 4.2 and for bilateral messages in Section 4.3 followed by full details of the Groth-Sahai proofs in Section 4.4.
+
+## 4.1 ElGamal Encryption with Common Randomness
+
+Observe that relation $(z_0 - z_1)(x_2 - z_2) = 0$ in $\mathcal{L}_1$ is a quadratic equation and it can be proved efficiently by a GS proof if $z_0$ and $z_1$ are committed in the same group and $z_2$ is committed in the other group. Relevant encryptions should follow the deployment of groups. We thus build the first two ciphertexts, $ct_0$ and $ct_1$ in $\mathbb{G}_1$, and $ct_2$ in $\mathbb{G}_2$.
+
+To gain efficiency, we consider using the same randomness for making $ct_0$ and $ct_1$. For this to be done without spoiling the security proof, it is sufficient that one of the ciphertexts, say $ct_b$, can be perfectly simulated given the other ciphertext $ct_{1-b}$. Formally, we assume that there exists a function, say SimEnc, such that, for
\ No newline at end of file
diff --git a/samples/texts/2864204/page_23.md b/samples/texts/2864204/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..b792c5823ab2ba2f7a5a4bf121f6106c27f2dd5f
--- /dev/null
+++ b/samples/texts/2864204/page_23.md
@@ -0,0 +1,44 @@
+any key pairs $(pk, sk) \stackrel{\$}{\leftarrow} \text{Gen}_P(\text{par})$ and $(pk', sk') \stackrel{\$}{\leftarrow} \text{Gen}_P(\text{par})$, any messages $m$ and $m'$ in the legitimate message space, and any randomness $s$, it holds that $\text{Enc}(pk', m'; s) = \text{SimEnc}(sk', m', \text{Enc}(pk, m; s))$. In [11], Bellare et al. formally defined such a property as *reproducibility*. Given a reproducible PKE and its ciphertext $ct_b \stackrel{\$}{\leftarrow} \text{Enc}(pk_b, G_1^{z_b}; s)$, we can compute another ciphertext $ct_{1-b} \leftarrow \text{SimEnc}(sk_{1-b}, G_1^{z_{1-b}}, ct_b)$ without knowing $sk_b$ or $s$. All reduction steps with respect to the CPA security of PKE go through using $\text{SimEnc}$ and simulated GS proofs. Precisely, we use $\text{SimEnc}$ in Lemma 3.21 to compute $ct_0$ from a given $ct_1$. Similar adjustments apply to Lemmata 3.23, 3.35, and 3.37.
+
+As shown in [11], ElGamal encryption (EG) is reproducible. Let $(y, G_1^y)$ and $(y', G_1^{y'}) \in \mathbb{Z}_p \times \mathbb{G}_1$ be two key pairs of ElGamal encryption. Given ciphertext $(M \cdot (G_1^y)^s, G_1^s)$ of message $M$ with randomness $s$ under public key $G_1^y$, one can compute $(M' \cdot (G_1^s)^{y'}, G_1^s)$ for any $M'$ using secret key $y'$. It is exactly the ciphertext obtained by regular encryption of $M'$ under $G_1^{y'}$ with the common randomness $s$. We thus encrypt $z_0$ and $z_1$ with ElGamal encryption in $\mathbb{G}_1$ using the same randomness and remove the redundant copy of $G_1^s$. For encrypting $z_2$, we also use ElGamal but in $\mathbb{G}_2$. Bellare et al. show that the multi-message chosen-plaintext security of each encryption holds under the DDH assumption in the respective group, which is directly implied by the SXDH assumption [10]. We thus have:
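
The reproducibility identity can be checked concretely. The following is a minimal sketch over a toy multiplicative group (illustrative parameters only, not a secure instantiation; `P`, `G`, and the helper names are our own):

```python
import random

# Toy group Z_P^* (illustration only -- not secure parameters).
P = 2**61 - 1   # a Mersenne prime
G = 3           # illustrative generator

def keygen(rng):
    y = rng.randrange(2, P - 1)
    return y, pow(G, y, P)                           # (sk, pk) = (y, G^y)

def enc(pk, M, s):
    return ((M * pow(pk, s, P)) % P, pow(G, s, P))   # (M * pk^s, G^s)

def sim_enc(sk2, M2, ct):
    c1, c2 = ct
    return ((M2 * pow(c2, sk2, P)) % P, c2)          # reuse G^s taken from ct

rng = random.Random(0)
y1, pk1 = keygen(rng)
y2, pk2 = keygen(rng)
s = rng.randrange(2, P - 1)
M1, M2 = pow(G, 111, P), pow(G, 222, P)

# SimEnc reproduces Enc(pk2, M2; s) from Enc(pk1, M1; s), without s or sk1.
assert sim_enc(y2, M2, enc(pk1, M1, s)) == enc(pk2, M2, s)
```

Note that `sim_enc` needs only the *other* secret key and the shared `G^s` component, which is exactly why the redundant copy of $G_1^s$ can be dropped.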
+
+**Theorem 4.1.** For all adversaries $\mathcal{A}$ against IND-mCPA security of EG, there exists an adversary $\mathcal{C}$ against the SXDH assumption with running time $T(\mathcal{C}) \approx T(\mathcal{A})$ and $\text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{A}) \le 2 \text{Adv}_{\text{PGGen}}^{\text{sxdh}}(\mathcal{C}) + \frac{1}{p}$.
+
+## 4.2 Concrete Scheme for Unilateral Messages
+
+We present a concrete scheme, SPSu1, for signing messages in $\mathbb{G}_1$. We use a structure-preserving one-time signature scheme, POSu1, taken from the results of Abe et al. [3], and the SXDH-based instantiation of GS proof system. The description of POSu1 is blended into the description of SPSu1. For the GS proofs, however, we only show concrete relations in this section and present details of computation in Section 4.4.
+
+We use notations $[x]_i$ and $[\tilde{x}]_1$ as shorthands for $\text{Com}(\text{crs}_i, x)$ and $\text{Com}(\tilde{\text{crs}}_1, x)$, respectively. We abuse these notations to present witnesses in a relation. It is indeed useful for keeping track of which CRS and which source group are used to commit to a witness. This notational convention is used in the rest of the paper.
+
+**Scheme SPSu1:** Let par := $(p, \mathbb{G}_1, \mathbb{G}_2, \mathbb{G}_T, e, G, \tilde{G})$ be a description of Type-III bilinear groups generated by PGGen$(1^\lambda)$.
+
+**SPSu1.Gen(par).** Generates $\text{crs}_0$ and $(\text{crs}_1, \tilde{\text{crs}}_1)$ as shown in (18). Picks $x_0 \stackrel{\$}{\leftarrow} \mathbb{Z}_p$ and sets $x_1 = x_2 := 0$. Generates three ElGamal keys $\tilde{Y}_0 := \tilde{G}^{y_0}$, $\tilde{Y}_1 := \tilde{G}^{y_1}$, and $Y_2 := G^{y_2}$, where $y_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p$ for $i = 0, 1, 2$. Then computes commitments
+
+$$
+\begin{align*}
+[x_0]_0 &:= \text{Com}(\text{crs}_0, x_0; r_{x_0}), \\
+[y_0]_0 &:= \text{Com}(\text{crs}_0, y_0; r_{y_0}), \\
+[y_0]_1 &:= \text{Com}(\text{crs}_1, y_0; r_{y_0}), \\
+[\tilde{y}_2]_1 &:= \text{Com}(\tilde{\text{crs}}_1, y_2; r_{y_2})
+\end{align*}
+$$
+
+$$
+\begin{align*}
+[x_1]_0 &:= \text{Com}(\text{crs}_0, x_1; r_{x_1}), \\
+[\tilde{x}_2]_1 &:= \text{Com}(\tilde{\text{crs}}_1, x_2; r_{x_2}), \\
+[y_1]_1 &:= \text{Com}(\text{crs}_1, y_1; r_{y_1}),
+\end{align*}
+$$
+
+as shown in Equation (19). Generates a persistent key pair of POSu1 by $w \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$, $\gamma_i \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$, $\tilde{G}_r := \tilde{G}^w$, and $\tilde{G}_i := \tilde{G}_r^{\gamma_i}$ for $i = 1, \dots, n_1$. Outputs pk and sk defined as $pk := (G, \tilde{G}, \text{crs}_0, \text{crs}_1, \tilde{\text{crs}}_1, \tilde{Y}_0, \tilde{Y}_1, Y_2, [x_0]_0$, $[x_1]_0$, $[\tilde{x}_2]_1$, $[y_0]_0$, $[y_0]_1$, $[y_1]_1$, $[\tilde{y}_2]_1$, $\tilde{G}_r, \tilde{G}_1, \dots, \tilde{G}_{n_1})$, and $sk := (x_0, y_0, y_1, y_2, r_{x_0}, r_{x_1}, r_{x_2}, r_{y_0}, r_{y_1}, r_{y_2}, w, \gamma_1, \dots, \gamma_{n_1})$, where par and pk are implicitly included in pk and sk, respectively.
+
+**SPSu1.Sign(sk, M).** Given sk as defined above and $M := (M_1, \dots, M_{n_1}) \in \mathbb{G}_1^{n_1}$, proceeds as follows.
+
+- Generate a one-time POSu1 key pair $\alpha \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$ and $\tilde{A} := \tilde{G}^\alpha$, and compute a one-time signature, $(Z, R)$, by
+
+$$ Z := G^{\alpha - \rho w} \quad \text{and} \quad R := G^\rho \prod_{i=1}^{n_1} M_i^{-\gamma_i}, \qquad (4) $$
+
+where $w, \gamma_1, \dots, \gamma_{n_1}$ are taken from sk, and $\rho$ is chosen uniformly from $\mathbb{Z}_p$.
+
+- Encrypt $z_0 = z_1 := x_0$ and $z_2 := 0$ as $(\tilde{E}_{z_0}, \tilde{E}_{z_1}, \tilde{E}_s) := (\tilde{G}^{z_0}\tilde{Y}_0^s, \tilde{G}^{z_1}\tilde{Y}_1^s, \tilde{G}^s)$ and $(E_{z_2}, E_t) := (G^{z_2}Y_2^t, G^t)$, where $s, t \stackrel{\$}{\leftarrow} \mathbb{Z}_p$.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_24.md b/samples/texts/2864204/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..0e5ca81da95612e33ec7b7b9dba70b3ee444720c
--- /dev/null
+++ b/samples/texts/2864204/page_24.md
@@ -0,0 +1,37 @@
+- Commit to $z_0$, $z_1$, and $z_2$ by $[z_0]_0$, $[z_0]_1$, $[z_1]_1$, and $[\tilde{z}_2]_1$, as described in equation (19).
+
+- Using $\mathbf{crs}_0$, commitments $[x_0]_0$, $[x_1]_0$, and $[y_0]_0$ in pk, and default commitment $[1]_0$ computed with randomness $0 \in \mathbb{Z}_p$, as shown in equation (20), compute GS proofs $\rho_{0,0}$ and $\rho_{0,1}$ for relations
+
+$$\rho_{0,0} : \tilde{G}^{[z_0]_0} (\tilde{G}^{-1})^{[x_0]_0} (\tilde{A}^{-1})^{[x_1]_0} = 1, \quad \text{(linear MSE in } \mathbb{G}_2\text{)} \qquad (5)$$
+
+$$\rho_{0,1} : \tilde{E}_{z_0}^{[1]_0} (\tilde{G}^{-1})^{[z_0]_0} (\tilde{E}_s^{-1})^{[y_0]_0} = 1 \quad \text{(linear MSE in } \mathbb{G}_2\text{)} \qquad (6)$$
+
+that correspond to clauses $\tilde{G}^{z_0} = \tilde{G}^{x_0} \cdot \tilde{M}^{x_1}$ for $\tilde{M} := \tilde{A}$ and $(\tilde{E}_{z_0}, \tilde{E}_s) \in \text{Enc}(\tilde{Y}_0, \tilde{G}^{z_0})$ in $\mathcal{L}_0$, respectively.
+
+- Similarly, using $(\mathbf{crs}_1, \tilde{\mathbf{crs}}_1)$ and default commitments $[1]_1$ and $[\tilde{1}]_1$, computes GS proofs $\rho_{1,0}, \rho_{1,1}, \rho_{1,2}$, and $\rho_{1,3}$ for relations
+
+$$\rho_{1,0} : ([\tilde{x}_2]_1 - [\tilde{z}_2]_1)([z_0]_1 - [z_1]_1) = 0, \quad \text{(non-linear QE)} \qquad (7)$$
+
+$$\rho_{1,1} : \tilde{E}_{z_0}^{[1]_1} (\tilde{G}^{-1})^{[z_0]_1} (\tilde{E}_s^{-1})^{[y_0]_1} = 1, \quad \text{(linear MSE in } \mathbb{G}_2\text{)} \qquad (8)$$
+
+$$\rho_{1,2} : \tilde{E}_{z_1}^{[1]_1} (\tilde{G}^{-1})^{[z_1]_1} (\tilde{E}_s^{-1})^{[y_1]_1} = 1, \quad \text{(linear MSE in } \mathbb{G}_2\text{)} \qquad (9)$$
+
+$$\rho_{1,3} : E_{z_2}^{[\tilde{1}]_1} (G^{-1})^{[\tilde{z}_2]_1} (E_t^{-1})^{[\tilde{y}_2]_1} = 1, \quad \text{(linear MSE in } \mathbb{G}_1\text{)} \qquad (10)$$
+
+that correspond to clauses in $\mathcal{L}_1$.
+
+- Output a signature $\sigma := (\tilde{A}, Z, R, \tilde{E}_{z_0}, \tilde{E}_{z_1}, \tilde{E}_s, E_{z_2}, E_t, [z_0]_0, [z_0]_1, [z_1]_1, [\tilde{z}_2]_1, \rho_{0,0}, \rho_{0,1}, \rho_{1,0}, \rho_{1,1}, \rho_{1,2}, \rho_{1,3})$.
+
+**SPSu1.Ver(pk, M, \sigma).** Return 1 if all the following verifications are passed. Return 0, otherwise.
+
+- Verify signature $(Z, R)$ of POSu1 for $M = (M_1, \dots, M_{n_1})$ with one-time key $\tilde{A}$ by
+
+$$e(G, \tilde{A}) = e(Z, \tilde{G}) e(R, \tilde{G}_r) \prod_{i=1}^{n_1} e(M_i, \tilde{G}_i). \qquad (11)$$
+
+- Verify all GS proofs $\rho_{0,0}, \rho_{0,1}, \rho_{1,0}, \rho_{1,1}, \rho_{1,2}, \rho_{1,3}$ with commitments $[z_0]_0, [z_0]_1, [z_1]_1, [\tilde{z}_2]_1$ and ciphertexts $\tilde{E}_{z_0}, \tilde{E}_{z_1}, \tilde{E}_s, E_{z_2}, E_t$ in $\sigma$, using $[x_0]_0, [x_1]_0, [y_0]_0, [\tilde{x}_2]_1, [y_0]_1, [y_1]_1, [\tilde{y}_2]_1$ in pk, as expressed in equations (23) and (25). Default commitments $[1]_1$ and $[\tilde{1}]_1$ are built on-the-fly following equation (20).
+
+This completes the description of SPSu1.
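
As a sanity check on the one-time part, the correctness of Equations (4) and (11) can be verified in the exponent: writing $M_i = G^{m_i}$ and modeling each pairing $e(G^a, \tilde{G}^b)$ by the product $ab \bmod p$, both sides of Equation (11) reduce to $\alpha$. A small sketch of this check (toy prime and variable names of our own choosing):

```python
import random

p = 2**61 - 1            # toy prime order (illustration only)
rng = random.Random(1)
n1 = 3

# Persistent POSu1 key material: G~_r = G~^w and G~_i = G~_r^{gamma_i}.
w = rng.randrange(1, p)
gam = [rng.randrange(1, p) for _ in range(n1)]

# Message exponents m_i (M_i = G^{m_i}) and one-time key A~ = G~^alpha.
m = [rng.randrange(p) for _ in range(n1)]
alpha = rng.randrange(1, p)
rho = rng.randrange(p)

# Equation (4): Z = G^{alpha - rho*w}, R = G^{rho - sum_i m_i*gamma_i}.
z = (alpha - rho * w) % p
r = (rho - sum(mi * gi for mi, gi in zip(m, gam))) % p

# Equation (11) in the exponent:
# e(G, A~) vs e(Z, G~) * e(R, G~_r) * prod_i e(M_i, G~_i).
lhs = alpha % p
rhs = (z + r * w + sum(mi * (w * gi) for mi, gi in zip(m, gam))) % p
assert lhs == rhs
```

The $\rho w$ and $w\sum_i m_i \gamma_i$ terms cancel pairwise, which is exactly the blinding structure of $(Z, R)$.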
+
+**PERFORMANCE.** Keeping in mind that generators $G$ and $\tilde{G}$ are used commonly in the components, we assess the size of public-keys and signatures. By $(a, b)$, we denote $a$ and $b$ elements in $\mathbb{G}_1$ and $\mathbb{G}_2$, respectively. A public-key consists of common reference strings $(\mathbf{crs}_0, (\mathbf{crs}_1, \tilde{\mathbf{crs}}_1))$ consisting of (7,4) elements, commitments $([x_0]_0, [x_1]_0, [y_0]_0, [\tilde{x}_2]_1, [\tilde{y}_2]_1, [y_0]_1, [y_1]_1)$ consisting of (10,4) elements, three ElGamal public-keys $(\tilde{Y}_0, \tilde{Y}_1, Y_2)$ consisting of (1,2) elements, and a public-key $(\tilde{G}_r, \tilde{G}_1, \dots, \tilde{G}_{n_1})$ for POSu1 that contains $(0, n_1+1)$ elements. In total, a public-key consists of $(18, n_1+11)$ elements. A signature consists of commitments $[z_0]_0, [z_0]_1, [z_1]_1, [\tilde{z}_2]_1$ containing (6,2) elements, four proofs, $\rho_{0,0}, \rho_{0,1}, \rho_{1,1}$, and $\rho_{1,2}$, for linear MSEs in $\mathbb{G}_2$ costing $(0, 1) \times 4$, proof $\rho_{1,0}$ of a non-linear QE consisting of (2, 2) elements, proof $\rho_{1,3}$ for a linear MSE in $\mathbb{G}_1$ that costs (1, 0), three ElGamal ciphertexts (two of which share a randomness) consisting of (2, 3) elements, and a one-time public-key and signature of POSu1 consisting of (0, 1) and (2, 0) elements, respectively. Summing up, a signature consists of (13, 12) group elements.
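
The component counts above can be tallied mechanically; the following bookkeeping sketch (our own notation, with pairs denoting ($\mathbb{G}_1$, $\mathbb{G}_2$) elements) reproduces the (13, 12) signature total:

```python
# Signature components of SPSu1 as (#G1, #G2) element counts.
parts = {
    "commitments [z0]_0, [z0]_1, [z1]_1, [z2~]_1": (6, 2),
    "4 linear-MSE proofs in G2":                   (0, 4),  # (0,1) x 4
    "non-linear QE proof rho_{1,0}":               (2, 2),
    "linear-MSE proof rho_{1,3} in G1":            (1, 0),
    "ElGamal ciphertexts (shared randomness)":     (2, 3),
    "POSu1 one-time pk and signature":             (2, 1),  # (0,1) + (2,0)
}
g1 = sum(a for a, _ in parts.values())
g2 = sum(b for _, b in parts.values())
assert (g1, g2) == (13, 12)
```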
+
+Since computational cost largely depends on available resources and implementation, we only show basic parameters that can be dominant factors in computation. First, for signature generation, the number of elements in a signature roughly equals the number of scalar multiplications. To be slightly more accurate, we count multi-scalar multiplications separately and weigh each as 1.5 scalar multiplications. Element $R$ in POSu1 and all elements in proofs $\rho_{0,0}, \rho_{0,1}, \rho_{1,1}, \rho_{1,2}, \rho_{1,0}, \rho_{1,3}$, which sum up to (4,6) elements in total, are computed through multi-scalar multiplications. The remaining (9,6) elements in a signature are those in commitments and ElGamal encryptions for binary values and are counted as scalar multiplications.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_25.md b/samples/texts/2864204/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..d578d5896204bb94b20bccd06f74933779cec3a5
--- /dev/null
+++ b/samples/texts/2864204/page_25.md
@@ -0,0 +1,37 @@
+Accordingly, we estimate the signing cost as $15 (= 4 \times 1.5 + 9)$ and $15 (= 6 \times 1.5 + 6)$ scalar multiplications in $\mathbb{G}_1$ and $\mathbb{G}_2$, respectively. The computational workload for verification is much more implementation dependent. The numbers of equations and pairings are 15 and $n_1 + 57$, respectively, from simple counting in the description. With the most aggressive batch verification that wraps all equations into one, we merge pairings with respect to the default generators $G$ and $\tilde{G}$, and the CRSes. It reduces the number of pairings down to $n_1 + 16$ in exchange for an increased number of multi-scalar multiplications (ignored in Table 2) for randomizing each element.
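
The arithmetic behind this estimate is just the 1.5-weighting of multi-scalar multiplications; a one-line check (our own bookkeeping):

```python
# (G1, G2) element counts from the performance paragraph: multi-scalar
# multiplications weigh 1.5, plain scalar multiplications weigh 1.
msm = (4, 6)     # elements computed via multi-scalar multiplication
plain = (9, 6)   # commitment and ElGamal-encryption elements
cost = tuple(1.5 * a + b for a, b in zip(msm, plain))
assert cost == (15.0, 15.0)   # 15 scalar mults in each of G1 and G2
```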
+
+**SECURITY.** Regarding POSu1 used in the above construction, the following statement is proven in [3].
+
+**Theorem 4.2 ([3]).** POSu1 is OT-nCMA secure if the DDH$_2$ assumption holds with respect to PGGen. In particular, for all polynomial-time algorithms $\mathcal{A}$ there exists a polynomial-time algorithm $\mathcal{B}$ with $T(\mathcal{A}) \approx T(\mathcal{B})$ and $\text{Adv}_{\text{POSu1}}^{\text{ncma}}(\mathcal{A}) \le \text{Adv}_{\text{PGGen}}^{\text{ddh}_2}(\mathcal{B}) + 1/p$.
+
+With asymmetric pairing groups, CRS indistinguishability of GS proof system is tightly reduced from the SXDH assumption. Namely, the following theorem holds.
+
+**Theorem 4.3 ([34]).** For all adversaries $\mathcal{A}$ against CRS indistinguishability of GS, there exists an adversary $\mathcal{B}$ with running time $T(\mathcal{B}) \approx T(\mathcal{A})$ and $\text{Adv}_{\text{GS}}^{\text{crsind}}(\mathcal{A}) \le 2 \cdot \text{Adv}_{\text{PGGen}}^{\text{sxdh}}(\mathcal{B})$.
+
+Combining Theorems 2.6, 3.6, 4.1, 4.2, and 4.3, we have the following theorem.
+
+**Theorem 4.4.** SPSu1 is UF-CMA if the SXDH assumption holds with respect to PGGen. In particular, for any polynomial-time algorithm $\mathcal{A}$, there exists a polynomial-time algorithm $\mathcal{B}$ that runs in almost the same time as $\mathcal{A}$ and
+
+$$ \text{Adv}_{\text{SPSu1}}^{\text{uf-cma}}(\mathcal{A}) \le (40L + 13) \cdot \text{Adv}_{\text{PGGen}}^{\text{sxdh}}(\mathcal{B}) + \frac{4L(q_s + 3) + 1}{p}. \quad (12) $$
+
+If we have $L = \log_2 p = 256$ for the targeted 128-bit security level, for instance, the security loss of SPSu1 is approximately 13 bits ($2^{13.3}$).
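
The 13-bit figure follows directly from the dominant multiplicative loss in Equation (12):

```python
import math

L = 256                    # L = log2(p) for the 128-bit security target
loss = 40 * L + 13         # dominant multiplicative loss in Equation (12)
assert loss == 10253
assert round(math.log2(loss), 1) == 13.3   # roughly 13 bits of loss
```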
+
+## 4.3 Concrete Scheme for Bilateral Messages
+
+To sign bilateral messages $(M_1, M_2) \in \mathbb{G}_1^{n_1} \times \mathbb{G}_2^{n_2}$, we use SPSu1 from the previous section to sign $M_1 \in \mathbb{G}_1^{n_1}$ and combine it with another POS, say POSu2, that signs $M_2 \in \mathbb{G}_2^{n_2}$. Since a one-time public key of POSu2 is in $\mathbb{G}_1$, it can be appended to $M_1$ and authenticated by SPSu1 by extending the message space to $\mathbb{G}_1^{n_1+1}$. We give the details below.
+
+### Scheme SPSb:
+
+**SPSb.Gen(par).** Given par, proceeds with the same steps, except for generating keys for POSb instead of POSu1.
+
+- Chooses $w, \mu$ randomly from $\mathbb{Z}_p^*$ and computes $\tilde{G}_r := \tilde{G}^w$ and $G_r := G^\mu$. For $i = 1, \dots, n_1 + 1$, uniformly chooses $\gamma_i$ from $\mathbb{Z}_p$ and computes $\tilde{G}_i := \tilde{G}_r^{\gamma_i}$. For $j = 1, \dots, n_2$, uniformly chooses $\psi_j$ from $\mathbb{Z}_p$ and computes $G_j := G_r^{\psi_j}$.
+
+- Outputs pk and sk defined as $pk := (G, \tilde{G}, \text{crs}_0, \text{crs}_1, \tilde{\text{crs}}_1, \tilde{Y}_0, \tilde{Y}_1, Y_2, [x_0]_0, [x_1]_0, [\tilde{x}_2]_1, [y_0]_0, [y_0]_1, [y_1]_1, [\tilde{y}_2]_1, \tilde{G}_r, \tilde{G}_1, \dots, \tilde{G}_{n_1+1}, G_r, G_1, \dots, G_{n_2})$ and $sk := (x_0, y_0, y_1, y_2, r_{x_0}, r_{x_1}, r_{x_2}, r_{y_0}, r_{y_1}, r_{y_2}, w, \gamma_1, \dots, \gamma_{n_1+1}, \mu, \psi_1, \dots, \psi_{n_2})$, where par and pk are implicitly included in pk and sk, respectively.
+
+**SPSb.Sign(sk, M).** Given sk as defined above and M = $(M_1, \dots, M_{n_1}, \tilde{M}_1, \dots, \tilde{M}_{n_2}) \in \mathbb{G}_1^{n_1} \times \mathbb{G}_2^{n_2}$, proceeds as follows.
+
+- Generates a POSu2 one-time key pair $\zeta \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$ and $B := G^\zeta$, and a one-time signature $(\tilde{Z}, \tilde{R})$ by
+
+$$ \tilde{Z} := \tilde{G}^{\zeta - \delta\mu} \quad \text{and} \quad \tilde{R} := \tilde{G}^{\delta} \prod_{j=1}^{n_2} \tilde{M}_j^{-\psi_j} \qquad (13) $$
+
+where $\mu, \psi_1, \dots, \psi_{n_2}$ are taken from sk, and $\delta$ is chosen uniformly from $\mathbb{Z}_p^*$.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_26.md b/samples/texts/2864204/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..06a040b85bed4ceb960031b259050cb3e5164e7c
--- /dev/null
+++ b/samples/texts/2864204/page_26.md
@@ -0,0 +1,39 @@
+- Sets $M_{n_1+1} := B$.
+
+- Generates a one-time POSu1 key pair $\alpha \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$ and $\tilde{A} := \tilde{G}^\alpha$, and a one-time signature $(Z, R)$ by
+
+$$Z := G^{\alpha-\rho w}, \quad \text{and} \quad R := G^\rho \prod_{i=1}^{n_1+1} M_i^{-\gamma_i} \qquad (14)$$
+
+where $w, \gamma_1, \dots, \gamma_{n_1+1}$ are taken from $sk$, and $\rho$ is chosen uniformly from $\mathbb{Z}_p$.
+
+- Then creates ElGamal ciphertexts and GS proofs as well as those in SPSu1.
+
+- Outputs a signature $\sigma := (B, \tilde{Z}, \tilde{R}, \tilde{A}, Z, R, \tilde{E}_{z_0}, \tilde{E}_{z_1}, \tilde{E}_s, E_{z_2}, E_t, [z_0]_0, [z_0]_1, [z_1]_1, [\tilde{z}_2]_1, \rho_{0,0}, \rho_{0,1}, \rho_{1,0}, \rho_{1,1}, \rho_{1,2}, \rho_{1,3})$.
+
+**SPSb.Ver(pk, M, \sigma).**
+
+Returns 1 if all the following verifications are passed. Returns 0 otherwise.
+
+- Parses M into $M = (M_1, \dots, M_{n_1}, \tilde{M}_1, \dots, \tilde{M}_{n_2}) \in \mathbb{G}_1^{n_1} \times \mathbb{G}_2^{n_2}$.
+
+- Verifies signature $(\tilde{Z}, \tilde{R})$ of POSu2 for $(\tilde{M}_1, \dots, \tilde{M}_{n_2})$ with one-time key $B$ by
+
+$$e(B, \tilde{G}) = e(G, \tilde{Z}) e(G_r, \tilde{R}) \prod_{j=1}^{n_2} e(G_j, \tilde{M}_j). \qquad (15)$$
+
+- Verifies signature $(Z, R)$ of POSu1 for $(M_1, \dots, M_{n_1})$ and $M_{n_1+1} := B$ with one-time key $\tilde{A}$ by
+
+$$e(G, \tilde{A}) = e(Z, \tilde{G}) e(R, \tilde{G}_r) \prod_{i=1}^{n_1+1} e(M_i, \tilde{G}_i). \qquad (16)$$
+
+- Verifies GS proofs as well as SPSu1.
+
+**PERFORMANCE.** The only difference compared to SPSu1 is the extra POSu2. It adds $(n_2 + 1, 1)$ and $(1, 2)$ elements, resulting in $(n_2 + 19, n_1 + 12)$ and $(14, 14)$ elements in a public-key and a signature, respectively. Among the $(1, 2)$ elements newly added to a signature, only the one in $\mathbb{G}_2$ is computed by multi-scalar multiplication. Hence, the cost for signature generation increases by 1 and 2.5 scalar multiplications in $\mathbb{G}_1$ and $\mathbb{G}_2$, respectively. In verification, the additional POS requires one more equation and $n_2 + 4$ pairings, resulting in 16 equations and $n_1 + n_2 + 61$ pairings. Since two of the new pairings involve $G$ and $\tilde{G}$, they are merged with the pairings with respect to those elements, and the remaining $n_2 + 2$ pairings are counted as additional cost in the case of batch verification. Hence, we have $n_1 + n_2 + 18$ pairings.
+
+**SECURITY.** Theorem 4.2 holds for POSu2 under the DDH$_1$ assumption. Combining it with Theorem 4.4, we obtain the following.
+
+**Theorem 4.5.** SPSb is UF-CMA if the SXDH assumption holds with respect to PGGen. In particular, for any polynomial-time algorithm $\mathcal{A}$, there exists an algorithm $\mathcal{B}$ with $T(\mathcal{B}) \approx T(\mathcal{A})$ and
+
+$$\mathrm{Adv}_{\mathrm{SPSb}}^{\mathrm{uf-cma}}(\mathcal{A}) \le (40L+14) \cdot \mathrm{Adv}_{\mathrm{PGGen}}^{\mathrm{sxdh}}(\mathcal{B}) + \frac{4L(q_s+3)+2}{p}. \qquad (17)$$
+
+## 4.4 Specific Groth-Sahai Proofs under SXDH
+
+Among the wide variety of relations provable with GS proofs, our instantiation involves only three types: linear multi-scalar multiplication equations (MSEs) in $\mathbb{G}_1$ and $\mathbb{G}_2$, and non-linear quadratic equations (QEs). Witnesses are committed in either $\mathbb{G}_1$ or $\mathbb{G}_2$ depending on the relation to prove. We summarize the space and computation complexity in Table 3 and give details in the sequel.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_27.md b/samples/texts/2864204/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..2d421e77fb7c40b961f707bf0e77c9016b6d5c40
--- /dev/null
+++ b/samples/texts/2864204/page_27.md
@@ -0,0 +1,45 @@
+| Object | #(elements) | #(s.mult) | #(equations) | #(pairings) |
+|---|---|---|---|---|
+| CRS in $\mathbb{G}_1$ | (3, 0) | (3, 0) | - | - |
+| CRS in $\mathbb{G}_2$ | (0, 3) | (0, 3) | - | - |
+| Commitment $[w]$ for $w \in \mathbb{Z}_p$ | (2, 0) | (3, 0) | - | - |
+| Commitment $[\tilde{w}]$ for $w \in \mathbb{Z}_p$ | (0, 2) | (0, 3) | - | - |
+| Commitment $[b]$ for $b \in \{0, 1\}$ | (2, 0) | (2, 0) | - | - |
+| Commitment $[\tilde{b}]$ for $b \in \{0, 1\}$ | (0, 2) | (0, 2) | - | - |
+| Proof of linear MSE in $\mathbb{G}_1$ | (1, 0) | (1.5, 0) | 2 | 4 |
+| Proof of linear MSE in $\mathbb{G}_2$ | (0, 1) | (0, 1.5) | 2 | 4 |
+| Proof of non-linear QE | (2, 2) | (3, 3) | 4 | 16 |
+
+**Table 3:** Sizes and computational costs for GS proofs in the SXDH setting for relations used in our construction. Default generators $G$ and $\tilde{G}$ are not included in CRS. Columns #(equations) and #(pairings) refer to verification. Column #(s.mult) indicates the number of scalar multiplications in $\mathbb{G}_1$ and $\mathbb{G}_2$ for generating the object, counting a multi-scalar multiplication as 1.5. Linear MSE and non-linear QE are specific to the relations in Equations (5) to (10).
+
+**CRS Generation:** Our construction includes three independent common reference strings, $\mathbf{crs}_0$ and $(\mathbf{crs}_1, \tilde{\mathbf{crs}}_1)$, generated in the binding mode as
+
+$$ \textbf{crs}_0 := \begin{pmatrix} G & Q_0 \\ U_0 & V_0 \end{pmatrix}, \quad \textbf{crs}_1 := \begin{pmatrix} G & Q_1 \\ U_1 & V_1 \end{pmatrix}, \quad \tilde{\textbf{crs}}_1 := \begin{pmatrix} \tilde{G} & \tilde{Q}_1 \\ \tilde{U}_1 & \tilde{V}_1 \end{pmatrix}, \qquad (18) $$
+
+where, for $\chi_0, \xi_0, \chi_1, \xi_1, \tilde{\chi}_1, \tilde{\xi}_1 \stackrel{\$}{\leftarrow} \mathbb{Z}_p^*$, $Q_i := G^{\chi_i}$, $U_i := G^{\xi_i}$, $V_i := G^{\chi_i \xi_i}$ for $i = 0, 1$, and $\tilde{Q}_1 := \tilde{G}^{\tilde{\chi}_1}$, $\tilde{U}_1 := \tilde{G}^{\tilde{\xi}_1}$, $\tilde{V}_1 := \tilde{G}^{\tilde{\chi}_1 \tilde{\xi}_1}$.
+
+**Scalar Commitments:** To commit to $x \in \mathbb{Z}_p$ under $\textbf{crs}_i$, compute
+
+$$ [x]_i := \operatorname{Com}(\textbf{crs}_i, x; r) := (U_i^x G^r, (V_i G)^x Q_i^r), \qquad (19) $$
+
+where $r \in \mathbb{Z}_p$ is a fresh randomness. A default commitment of $1 \in \mathbb{Z}_p$ uses $0 \in \mathbb{Z}_p$ as randomness, namely,
+
+$$ [1]_i := \operatorname{Com}(\textbf{crs}_i, 1; 0) := (U_i, V_i G). \qquad (20) $$
+
+When $x$ is committed by using $\tilde{\textbf{crs}}_1$, we denote it by $[\tilde{x}]_1$ and compute as
+
+$$ [\tilde{x}]_1 = \operatorname{Com}(\tilde{\textbf{crs}}_1, x; r) := (\tilde{U}_1^x \tilde{G}^r, (\tilde{V}_1 \tilde{G})^x \tilde{Q}_1^r). \qquad (21) $$
+
+**Proof of Scalar MSE:** Proof $\rho_{0,0}$ for relation (5), a linear MSE in $\mathbb{G}_2$, consists of a single element $\pi_{0,0} \in \mathbb{G}_2$ computed as
+
+$$ \pi_{0,0} := \tilde{G}^{r_{z_0}} (\tilde{G}^{-1})^{r_{x_0}} (\tilde{A}^{-1})^{r_{x_1}}, \qquad (22) $$
+
+where $r_{z_0}, r_{x_0}$, and $r_{x_1}$ are the random coins used to commit to $z_0, x_0, x_1$ by $[z_0]_0, [x_0]_0, [x_1]_0$, respectively. It is verified by evaluating
+
+$$ e(C_{z_0,1}, \tilde{G}) e(C_{x_0,1}, \tilde{G}^{-1}) e(C_{x_1,1}, \tilde{A}^{-1}) = e(G, \pi_{0,0}), \quad \text{and} $$
+
+$$ e(C_{z_0,2}, \tilde{G}) e(C_{x_0,2}, \tilde{G}^{-1}) e(C_{x_1,2}, \tilde{A}^{-1}) = e(Q_0, \pi_{0,0}), \qquad (23) $$
+
+where $(C_{x,1}, C_{x,2}) := [x]_0$ for $x \in \{z_0, x_0, x_1\}$, and $G$ and $Q_0$ are taken from $\textbf{crs}_0$.
+
+Proofs $\rho_{0,1}$, $\rho_{1,1}$, and $\rho_{1,2}$ are for linear MSEs in exactly the same form as equation (5). They are generated and verified in the same manner as above.
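
The consistency of Equations (22) and (23) can be checked symbolically in the exponent: writing each commitment component as its discrete log and modeling $e(G^a, \tilde{G}^b)$ as $ab \bmod p$, both verification equations hold whenever $z_0 = x_0 + \alpha x_1$. A sketch of this check with toy parameters and the CRS trapdoor $(\chi, \xi)$ (variable names are ours):

```python
import random

# Toy prime and crs_0 trapdoor (illustration only); pairings e(G^a, G~^b)
# are modeled by the exponent product a*b mod p.
p = 2**61 - 1
rng = random.Random(2)
chi, xi = rng.randrange(1, p), rng.randrange(1, p)

def com(x, r):
    # Com(crs_0, x; r) = (U_0^x G^r, (V_0 G)^x Q_0^r) in the exponent,
    # with U_0 = G^xi, Q_0 = G^chi, V_0 = G^(chi*xi).
    return ((xi * x + r) % p, ((chi * xi + 1) * x + chi * r) % p)

alpha = rng.randrange(1, p)            # one-time key A~ = G~^alpha
x0, x1 = rng.randrange(p), rng.randrange(p)
z0 = (x0 + alpha * x1) % p             # clause G~^z0 = G~^x0 * A~^x1
r_z0, r_x0, r_x1 = (rng.randrange(p) for _ in range(3))
Cz, Cx0, Cx1 = com(z0, r_z0), com(x0, r_x0), com(x1, r_x1)

pi = (r_z0 - r_x0 - alpha * r_x1) % p  # pi_{0,0} from Equation (22)

# Equation (23): both verification equations, in the exponent.
assert (Cz[0] - Cx0[0] - alpha * Cx1[0]) % p == pi              # vs e(G, pi)
assert (Cz[1] - Cx0[1] - alpha * Cx1[1]) % p == (chi * pi) % p  # vs e(Q_0, pi)
```

The witness part cancels because $z_0 - x_0 - \alpha x_1 \equiv 0 \pmod p$, leaving exactly the randomness combination that $\pi_{0,0}$ carries.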
+
+**Proof of Non-Linear QE:** Proof $\rho_{1,0}$ for non-linear QE (7) consists of $(\theta_{1,0,1}, \theta_{1,0,2}, \pi_{1,0,1}, \pi_{1,0,2}) \in \mathbb{G}_1^2 \times \mathbb{G}_2^2$ computed, for $\psi \stackrel{\$}{\leftarrow} \mathbb{Z}_p$, as
+
+$$ \begin{align}
+\theta_{1,0,1} &:= U_1^{z_0(r_{x_2}-r_{z_2})-z_1(r_{x_2}-r_{z_2})} G^{(x_2-z_2)(z_0-z_1)-\psi}, \\
+\theta_{1,0,2} &:= (V_1 G)^{z_0(r_{x_2}-r_{z_2})-z_1(r_{x_2}-r_{z_2})} Q_1^{(x_2-z_2)(z_0-z_1)-\psi}, \tag{24} \\
+\pi_{1,0,1} &:= \tilde{U}_1^{x_2(r_{z_0}-r_{z_1})-z_2(r_{z_0}-r_{z_1})} \tilde{G}^{\psi}, &&\text{and} \\
+\pi_{1,0,2} &:= (\tilde{V}_1 \tilde{G})^{x_2(r_{z_0}-r_{z_1})-z_2(r_{z_0}-r_{z_1})} \tilde{Q}_1^{\psi},
+\end{align} $$
\ No newline at end of file
diff --git a/samples/texts/2864204/page_28.md b/samples/texts/2864204/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..f94833befc626608c3933bc773c73334dce65eb8
--- /dev/null
+++ b/samples/texts/2864204/page_28.md
@@ -0,0 +1,36 @@
+where $r_x$ denotes the random coin used to commit to $x$. The verification evaluates
+
+$$
+\begin{align}
+e(C_{z_0,1} C_{z_1,1}^{-1}, \tilde{D}_{x_2,1}) e(C_{z_0,1} C_{z_1,1}^{-1}, \tilde{D}_{z_2,1}^{-1}) &= e(G, \pi_{1,0,1}) e(\theta_{1,0,1}, \tilde{G}), \\
+e(C_{z_0,2} C_{z_1,2}^{-1}, \tilde{D}_{x_2,1}) e(C_{z_0,2} C_{z_1,2}^{-1}, \tilde{D}_{z_2,1}^{-1}) &= e(Q_1, \pi_{1,0,1}) e(\theta_{1,0,2}, \tilde{G}), \\
+e(C_{z_0,1} C_{z_1,1}^{-1}, \tilde{D}_{x_2,2}) e(C_{z_0,1} C_{z_1,1}^{-1}, \tilde{D}_{z_2,2}^{-1}) &= e(G, \pi_{1,0,2}) e(\theta_{1,0,1}, \tilde{Q}_1), \text{ and} \\
+e(C_{z_0,2} C_{z_1,2}^{-1}, \tilde{D}_{x_2,2}) e(C_{z_0,2} C_{z_1,2}^{-1}, \tilde{D}_{z_2,2}^{-1}) &= e(Q_1, \pi_{1,0,2}) e(\theta_{1,0,2}, \tilde{Q}_1),
+\end{align}
+\tag{25} $$
+
+where $(C_{x,1}, C_{x,2}) := [x]_1$ for $x \in \{z_0, z_1\}$, $(\tilde{D}_{y,1}, \tilde{D}_{y,2}) := [\tilde{y}]_1$ for $y \in \{x_2, z_2\}$, and other group elements are taken from $(\mathbf{crs}_1, \tilde{\mathbf{crs}}_1)$.
+
+**Batch Verification:** The number of pairing computations in equations (23) and (25) can be reduced when verifying proofs $\rho_{0,0}, \rho_{0,1}, \rho_{1,0}, \rho_{1,1}, \rho_{1,2}$, and $\rho_{1,3}$ at once by batch verification. By merging pairings with respect to $G$, $\tilde{G}$, $Q_0$, $Q_1$, $\tilde{Q}_1$, $\tilde{A}$, $\tilde{E}_{z_0}$, $\tilde{E}_s$, $\tilde{D}_{x_2,1}$, $\tilde{D}_{x_2,2}$, $\tilde{D}_{z_2,1}$, $\tilde{D}_{z_2,2}$, $\tilde{E}_{z_1}$, $E_{z_2}$, and $E_t$, we obtain a single pairing product equation consisting of 15 pairings. This equation can be merged further with the verification equations for the POS part, which include pairings involving $G$ and $\tilde{G}$. For SPSu1, the batch verification equation consists of $n_1 + 16$ pairings, of which $n_1 + 1$ pairings are from POSu1. For SPSb, it consists of $n_1 + n_2 + 18$ pairings, of which $n_1 + n_2 + 3$ pairings are from POSb.
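The merging above is an instance of the generic small-exponent batching idea: several pairing product equations are checked as one random linear combination, so pairings sharing a base collapse into a single pairing. A minimal sketch of why the combined check is sound, modelling $G_T$ by its exponents modulo a toy prime (all values here are illustrative, not the scheme's actual elements):

```python
import random

# Small-exponent batching: instead of checking each equation L_i = R_i in G_T
# separately, check one random linear combination.  If a single L_i != R_i,
# the combined check fails (for one bad equation, with certainty; in general,
# except with probability 1/p).  G_T is modelled by its exponents mod p.

p = 1_000_003  # toy prime standing in for the group order

def batch_verify(equations):
    """equations: list of (L, R) pairs of G_T exponents."""
    acc = 0
    for L, R in equations:
        r = random.randrange(1, p)      # fresh random weight per equation
        acc = (acc + r * (L - R)) % p   # pairings with a common base merge here
    return acc == 0

good = [(5, 5), (17, 17), (42, 42)]
bad  = [(5, 5), (17, 18), (42, 42)]
assert batch_verify(good)
assert not batch_verify(bad)
```

The scheme's batch verification additionally exploits that many of the merged pairings share one of the fixed bases listed above, which is what brings the count down to 15.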
+
+## Acknowledgments
+
+We thank Mehdi Tibouchi and Taechan Kim for their valuable discussions on parameter settings for bilinear groups.
+
+## References
+
+[1] Michel Abdalla, Pierre-Alain Fouque, Vadim Lyubashevsky, and Mehdi Tibouchi. Tightly-secure signatures from lossy identification schemes. In David Pointcheval and Thomas Johansson, editors, *EUROCRYPT 2012*, volume 7237 of LNCS, pages 572–590. Springer, Heidelberg, April 2012. (Cited on page 2.)
+
+[2] Michel Abdalla and Tanja Lange, editors. *PAIRING 2012*, volume 7708 of LNCS. Springer, Heidelberg, May 2013. (Cited on page 28.)
+
+[3] Masayuki Abe, Melissa Chase, Bernardo David, Markulf Kohlweiss, Ryo Nishimaki, and Miyako Ohkubo. Constant-size structure-preserving signatures: Generic constructions and simple assumptions. *Journal of Cryptology*, 29(4):833–878, October 2016. (Cited on page 1, 2, 6, 7, 10, 23, 25.)
+
+[4] Masayuki Abe, Georg Fuchsbauer, Jens Groth, Kristiyan Haralambiev, and Miyako Ohkubo. Structure-preserving signatures and commitments to group elements. *Journal of Cryptology*, 29(2):363–421, April 2016. (Cited on page 1, 3.)
+
+[5] Masayuki Abe, Jens Groth, Kristiyan Haralambiev, and Miyako Ohkubo. Optimal structure-preserving signatures in asymmetric bilinear groups. In Phillip Rogaway, editor, *CRYPTO 2011*, volume 6841 of LNCS, pages 649–666. Springer, Heidelberg, August 2011. (Cited on page 1.)
+
+[6] Tolga Acar, Kristin Lauter, Michael Naehrig, and Daniel Shumow. Affine pairings on ARM. In Abdalla and Lange [2], pages 203–209. (Cited on page 3.)
+
+[7] Diego F. Aranha, Laura Fuentes-Castañeda, Edward Knapp, Alfred Menezes, and Francisco Rodríguez-Henríquez. Implementing pairings at the 192-bit security level. In Abdalla and Lange [2], pages 177–195. (Cited on page 2.)
+
+[8] Nuttapong Attrapadung, Goichiro Hanaoka, and Shota Yamada. A framework for identity-based encryption with almost tight security. In Iwata and Cheon [38], pages 521–549. (Cited on page 4.)
\ No newline at end of file
diff --git a/samples/texts/2864204/page_29.md b/samples/texts/2864204/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..235c4cb6e5463caec771eb11711cf67fe3e76869
--- /dev/null
+++ b/samples/texts/2864204/page_29.md
@@ -0,0 +1,29 @@
+[9] Paulo S. L. M. Barreto, Craig Costello, Rafael Misoczki, Michael Naehrig, Geovandro C. C. F. Pereira, and Gustavo Zanon. Subgroup security in pairing-based cryptography. In Kristin E. Lauter and Francisco Rodríguez-Henríquez, editors, *LATINCRYPT 2015*, volume 9230 of LNCS, pages 245–265. Springer, Heidelberg, August 2015. (Cited on page 3.)
+
+[10] Mihir Bellare, Alexandra Boldyreva, and Silvio Micali. Public-key encryption in a multi-user setting: Security proofs and improvements. In Bart Preneel, editor, *EUROCRYPT 2000*, volume 1807 of LNCS, pages 259–274. Springer, Heidelberg, May 2000. (Cited on page 7, 23.)
+
+[11] Mihir Bellare, Alexandra Boldyreva, and Jessica Staddon. Randomness re-use in multi-recipient encryption schemes. In *Public Key Cryptography - PKC 2003, 6th International Workshop on Theory and Practice in Public Key Cryptography, Miami, FL, USA, January 6-8, 2003, Proceedings*, volume 2567 of *Lecture Notes in Computer Science*, pages 85–99. Springer, 2003. (Cited on page 23.)
+
+[12] Mihir Bellare and Phillip Rogaway. The exact security of digital signatures: How to sign with RSA and Rabin. In Ueli M. Maurer, editor, *EUROCRYPT'96*, volume 1070 of LNCS, pages 399–416. Springer, Heidelberg, May 1996. (Cited on page 2.)
+
+[13] Mihir Bellare and Sarah Shoup. Two-tier signatures, strongly unforgeable signatures, and Fiat-Shamir without random oracles. In Tatsuaki Okamoto and Xiaoyun Wang, editors, *PKC 2007*, volume 4450 of LNCS, pages 201–216. Springer, Heidelberg, April 2007. (Cited on page 6.)
+
+[14] Olivier Blazy, Georg Fuchsbauer, Malika Izabachène, Amandine Jambert, Hervé Sibert, and Damien Vergnaud. Batch Groth-Sahai. In Jianying Zhou and Moti Yung, editors, *ACNS 10*, volume 6123 of LNCS, pages 218–235. Springer, Heidelberg, June 2010. (Cited on page 2.)
+
+[15] Olivier Blazy, Eike Kiltz, and Jiaxin Pan. (Hierarchical) identity-based encryption from affine message authentication. In Juan A. Garay and Rosario Gennaro, editors, *CRYPTO 2014, Part I*, volume 8616 of LNCS, pages 408–425. Springer, Heidelberg, August 2014. (Cited on page 4.)
+
+[16] Dan Boneh and Xavier Boyen. Secure identity based encryption without random oracles. In Franklin [28], pages 443–459. (Cited on page 2.)
+
+[17] Dan Boneh, Xavier Boyen, and Hovav Shacham. Short group signatures. In Franklin [28], pages 41–55. (Cited on page 7.)
+
+[18] Jan Camenisch, Maria Dubovitskaya, and Kristiyan Haralambiev. Efficient structure-preserving signature scheme from standard assumptions. In Visconti and Prisco [49], pages 76–94. (Cited on page 1.)
+
+[19] Jan Camenisch, Maria Dubovitskaya, Kristiyan Haralambiev, and Markulf Kohlweiss. Composable and modular anonymous credentials: Definitions and practical constructions. In Tetsu Iwata and Jung Hee Cheon, editors, *ASIACRYPT 2015, Part II*, volume 9453 of LNCS, pages 262–288. Springer, Heidelberg, November / December 2015. (Cited on page 3.)
+
+[20] Julien Cathalo, Benoît Libert, and Moti Yung. Group encryption: Non-interactive realization in the standard model. In Mitsuru Matsui, editor, *ASIACRYPT 2009*, volume 5912 of LNCS, pages 179–196. Springer, Heidelberg, December 2009. (Cited on page 1.)
+
+[21] Melissa Chase and Markulf Kohlweiss. A new hash-and-sign approach and structure-preserving signatures from DLIN. In Visconti and Prisco [49], pages 131–148. (Cited on page 1.)
+
+[22] Sanjit Chatterjee, Neal Koblitz, Alfred Menezes, and Palash Sarkar. Another look at tightness II: Practical issues in cryptography. Cryptology ePrint Archive, Report 2016/360, 2016. http://eprint.iacr.org/2016/360. (Cited on page 1.)
+
+[23] Jie Chen and Hoeteck Wee. Fully, (almost) tightly secure IBE and dual system groups. In Ran Canetti and Juan A. Garay, editors, *CRYPTO 2013, Part II*, volume 8043 of LNCS, pages 435–460. Springer, Heidelberg, August 2013. (Cited on page 2, 3, 4.)
\ No newline at end of file
diff --git a/samples/texts/2864204/page_3.md b/samples/texts/2864204/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..0a2c07e2e5d9b0772d9d29376a806f04b3b9e43a
--- /dev/null
+++ b/samples/texts/2864204/page_3.md
@@ -0,0 +1,16 @@
+over bilinear groups [35]. However, his construction can only be used to sign integer messages (and not
+group elements or, e.g., its own public key), so it is not structure-preserving.
+
+**OUR CONTRIBUTIONS.** We propose the first (almost) tightly secure SPS schemes with a constant number of group elements in signatures. Our schemes are proven secure based on standard assumptions (e.g., the symmetric external Diffie-Hellman (SXDH) assumption). Concretely, we first present a generic construction of an almost tightly secure SPS scheme from a structure-preserving public-key encryption scheme secure against chosen-plaintext attacks and the GS proof system. With ElGamal encryption and GS proofs over asymmetric pairing groups, we obtain concrete SPS schemes with compact signature size whose unforgeability against adaptive chosen-message attacks (UF-CMA) is reduced from the SXDH assumption with a security loss of $O(\lambda)$, which is independent of $q_s$.
+
+The primary benefit of our tightly secure SPS schemes is their availability in structure-preserving cryptography under the current standard security level. For a system modularly built with structure-preserving building blocks, a compact and tightly secure SPS scheme has been a missing piece, since other useful building blocks, such as one-time signatures and commitments, are known to be tightly secure. Plugging in our scheme, one can increase the proven security in applications of structure-preserving cryptography such as blind signatures [4], group signatures [45], and unlinkable redactable signatures [19] used in anonymous credential systems.
+
+The second benefit of our result is the removal of $q_s$ from the security bound, which simplifies system design. With previous schemes, there are trade-offs among security, efficiency, and usability: if one desires stronger security guarantees without sacrificing efficiency, a rigid limit has to be put on the number of signatures per public key; if more flexibility on the number of possible signatures is important in the considered applications, one has to accept the risk of weaker security guarantees or lower efficiency. With our schemes, one no longer needs to fix $q_s$ in advance and can focus on the desired security and permissible efficiency for the targeted system.
+
+Nevertheless, the performance as a stand-alone signature scheme is a concern. We summarise several parameters that dominate the space and computation costs in Tables 1 and 2. The bare numbers in the tables imply that our schemes are outperformed by those in the literature if they are used at the same security level. Taking the security loss into consideration, however, the tightness of our schemes offsets the difference in terms of computational complexity. We elaborate on this point in the following. Though concrete complexity varies widely depending on platforms and implementations, it is safe to say that computing a pairing at the 192-bit security level is slower by a factor of $\delta := 6$ to $7$ on ordinary personal computers [9, 26] and $\delta := 9$ to $12$ on processors for embedded systems [6, 32, 48] compared to the 128-bit security level. According to the number of pairings in Table 2, our scheme for bilateral messages at the 128-bit security level verifies a signature with batch verification $4.6 < \delta(n_1 + n_2 + 14)/(n_1 + n_2 + 18) < 9.3$ times faster than the KPW scheme at the 192-bit security setting for offsetting its security loss of 60 bits. Applying the same argument to the case of unilateral messages, ours at the 128-bit security level will be $2.2 < \delta(n_1 + 6)/(n_1 + 16) < 4.5$ times faster than the JR scheme at the 192-bit security level. Even with plain verification, i.e., without the batch technique, the advantage remains, depending on the platform and the size of messages.
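The speedup estimate for the bilateral case can be sanity-checked numerically. The sketch below evaluates $\delta(n_1+n_2+14)/(n_1+n_2+18)$ for sample values, taking the pairing counts from the comparison above and $\delta$ from the cited benchmark range; the concrete bounds stated in the text additionally depend on the admissible ranges of $\delta$ and the message lengths:

```python
# Verification-speed ratio: our bilateral scheme at 128-bit security
# (n1+n2+18 pairings, unit cost each) versus KPW at 192-bit security
# (n1+n2+14 pairings, each slower by a factor delta).

def speedup(delta, n):
    """n = n1 + n2, the total message length in group elements."""
    return delta * (n + 14) / (n + 18)

# Example: delta = 6 and n = 2 gives 6 * 16/20 = 4.8.
assert abs(speedup(6, 2) - 4.8) < 1e-9
# The ratio is always below delta and approaches delta as n grows.
for delta in (6, 7, 9, 12):
    for n in (2, 10, 100):
        assert 0 < speedup(delta, n) < delta
```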
+
+We note that the above simple argument ignores dedicated techniques for computing pairing products, e.g., [47], and the costs of subtle computations. It may not be fair to ignore the concrete security loss in our schemes, which can be as large as 13 bits at the 128-bit security level, as mentioned in Section 4. Nevertheless, taking into account that the performance gap between security levels will become even larger than in the previously published benchmarks above [42] (i.e., the slowdown factor $\delta$ in the above argument will be much larger), even this simple estimation demonstrates the practical significance of tightly secure schemes.
+
+**TECHNICAL OVERVIEW.** Eliminating any representation-dependent computation in the construction is a crucial technical challenge. Towards this goal, we adapt the “adaptive partitioning” technique of Hofheinz [36] (which in turn builds upon [23]) to the setting of structure-preserving signatures. Thus, in our security proof, we gradually transform the conditions necessary for a successful forgery until a valid forgery is impossible. This will require $O(\lambda)$ game hops, thus leading to a security loss independent of the number of adversarial signing queries.
+
+Concretely, in the scheme itself, we require that every valid signature must carry an (encrypted) “authentication tag” $Z = X$, where $X \in G$ is a fixed group element. We will gradually transform this requirement $Z = X$ into the following combination of requirements on the authentication tag $Z^*$ from a
\ No newline at end of file
diff --git a/samples/texts/2864204/page_30.md b/samples/texts/2864204/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..096fda11a1defe45d0ddb1f1b11b2d80b3e46dac
--- /dev/null
+++ b/samples/texts/2864204/page_30.md
@@ -0,0 +1,31 @@
+[24] Benoît Chevallier-Mames. An efficient CDH-based signature scheme with a tight security reduction. In Victor Shoup, editor, *CRYPTO 2005*, volume 3621 of LNCS, pages 511–526. Springer, Heidelberg, August 2005. (Cited on page 2.)
+
+[25] Taher ElGamal. A public key cryptosystem and a signature scheme based on discrete logarithms. In G. R. Blakley and David Chaum, editors, *CRYPTO'84*, volume 196 of LNCS, pages 10–18. Springer, Heidelberg, August 1984. (Cited on page 7.)
+
+[26] Andreas Enge and Jérôme Milan. Implementing cryptographic pairings at standard security levels. In Rajat Subhra Chakraborty, Vashek Matyas, and Patrick Schaumont, editors, *Security, Privacy, and Applied Cryptography Engineering - 4th International Conference, SPACE 2014, Pune, India, October 18-22, 2014. Proceedings*, volume 8804 of Lecture Notes in Computer Science, pages 28–46. Springer, 2014. (Cited on page 2, 3.)
+
+[27] Alex Escala and Jens Groth. Fine-tuning Groth-Sahai proofs. In Hugo Krawczyk, editor, *PKC 2014*, volume 8383 of LNCS, pages 630–649. Springer, Heidelberg, March 2014. (Cited on page 4, 7, 8.)
+
+[28] Matthew Franklin, editor. *CRYPTO 2004*, volume 3152 of LNCS. Springer, Heidelberg, August 2004. (Cited on page 29.)
+
+[29] Romain Gay, Dennis Hofheinz, Eike Kiltz, and Hoeteck Wee. Tightly CCA-secure encryption without pairings. In Marc Fischlin and Jean-Sébastien Coron, editors, *EUROCRYPT 2016, Part I*, volume 9665 of LNCS, pages 1–27. Springer, Heidelberg, May 2016. (Cited on page 4.)
+
+[30] Rosario Gennaro and Matthew J. B. Robshaw, editors. *CRYPTO 2015, Part II*, volume 9216 of LNCS. Springer, Heidelberg, August 2015. (Cited on page 31.)
+
+[31] Robert Granger, Dan Page, and Nigel P. Smart. High security pairing-based cryptography revisited. In Florian Hess, Sebastian Pauli, and Michael E. Pohst, editors, *Algorithmic Number Theory, 7th International Symposium, ANTS-VII, Berlin, Germany, July 23-28, 2006, Proceedings*, volume 4076 of Lecture Notes in Computer Science, pages 480–494. Springer, 2006. (Cited on page 2.)
+
+[32] Gurleen Grewal, Reza Azarderakhsh, Patrick Longa, Shi Hu, and David Jao. Efficient implementation of bilinear pairings on ARM processors. In Lars R. Knudsen and Huapeng Wu, editors, *SAC 2012*, volume 7707 of LNCS, pages 149–165. Springer, Heidelberg, August 2013. (Cited on page 3.)
+
+[33] Jens Groth. Simulation-sound NIZK proofs for a practical language and constant size group signatures. In Xuejia Lai and Kefei Chen, editors, *ASIACRYPT 2006*, volume 4284 of LNCS, pages 444–459. Springer, Heidelberg, December 2006. (Cited on page 1.)
+
+[34] Jens Groth and Amit Sahai. Efficient non-interactive proof systems for bilinear groups. *SIAM J. Comput.*, 41(5):1193–1232, 2012. (Cited on page 1, 25.)
+
+[35] Dennis Hofheinz. Algebraic partitioning: Fully compact and (almost) tightly secure cryptography. In Eyal Kushilevitz and Tal Malkin, editors, *TCC 2016-A, Part I*, volume 9562 of LNCS, pages 251–281. Springer, Heidelberg, January 2016. (Cited on page 3, 4.)
+
+[36] Dennis Hofheinz. Adaptive partitioning. In Jean-Sébastien Coron and Jesper Buus Nielsen, editors, *Advances in Cryptology – EUROCRYPT 2017: 36th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Paris, France, April 30 – May 4, 2017, Proceedings*, Part III, pages 489–518. Springer, 2017. (Cited on page 3, 4.)
+
+[37] Dennis Hofheinz and Tibor Jager. Tightly secure signatures and public-key encryption. *Des. Codes Cryptography*, 80(1):29–61, 2016. (Cited on page 2, 7.)
+
+[38] Tetsu Iwata and Jung Hee Cheon, editors. *ASIACRYPT 2015, Part I*, volume 9452 of LNCS. Springer, Heidelberg, November / December 2015. (Cited on page 28, 31.)
+
+[39] Charanjit S. Jutla and Arnab Roy. Improved structure preserving signatures under standard bilinear assumptions. Cryptology ePrint Archive, Report 2017/025, 2017. http://eprint.iacr.org/2017/025. (Cited on page 1, 2.)
\ No newline at end of file
diff --git a/samples/texts/2864204/page_31.md b/samples/texts/2864204/page_31.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9ec79710de420a3083cdb58bb4a02b07cf5a974
--- /dev/null
+++ b/samples/texts/2864204/page_31.md
@@ -0,0 +1,19 @@
+[40] Jonathan Katz and Nan Wang. Efficiency improvements for signature schemes with tight security reductions. In Sushil Jajodia, Vijayalakshmi Atluri, and Trent Jaeger, editors, *ACM CCS 03*, pages 155–164. ACM Press, October 2003. (Cited on page 2.)
+
+[41] Eike Kiltz, Jiaxin Pan, and Hoeteck Wee. Structure-preserving signatures from standard assumptions, revisited. In Gennaro and Robshaw [30], pages 275–295. (Cited on page 1, 2, 6.)
+
+[42] Taechan Kim and Razvan Barbulescu. Extended tower number field sieve: A new complexity for the medium prime case. In Matthew Robshaw and Jonathan Katz, editors, *CRYPTO 2016, Part I*, volume 9814 of LNCS, pages 543–571. Springer, Heidelberg, August 2016. (Cited on page 3.)
+
+[43] Benoît Libert, Marc Joye, Moti Yung, and Thomas Peters. Concise multi-challenge CCA-secure encryption and signatures with almost tight security. In Palash Sarkar and Tetsu Iwata, editors, *ASIACRYPT 2014, Part II*, volume 8874 of LNCS, pages 1–21. Springer, Heidelberg, December 2014. (Cited on page 2.)
+
+[44] Benoît Libert, Thomas Peters, Marc Joye, and Moti Yung. Compactly hiding linear spans - tightly secure constant-size simulation-sound QA-NIZK proofs and applications. In Iwata and Cheon [38], pages 681–707. (Cited on page 4.)
+
+[45] Benoît Libert, Thomas Peters, and Moti Yung. Short group signatures via structure-preserving signatures: Standard model security from simple assumptions. In Gennaro and Robshaw [30], pages 296–316. (Cited on page 1, 2, 3.)
+
+[46] Sven Schäge. Tight proofs for signature schemes without random oracles. In Kenneth G. Paterson, editor, *EUROCRYPT 2011*, volume 6632 of LNCS, pages 189–206. Springer, Heidelberg, May 2011. (Cited on page 2.)
+
+[47] Michael Scott. On the efficient implementation of pairing-based protocols. In Liqun Chen, editor, *13th IMA International Conference on Cryptography and Coding*, volume 7089 of LNCS, pages 296–308. Springer, Heidelberg, December 2011. (Cited on page 3.)
+
+[48] Rajeev Verma. *Efficient Implementations of Pairing-Based Cryptography on Embedded Systems*. PhD thesis, Rochester Institute of Technology, New York, USA, 2015. (Cited on page 3.)
+
+[49] Ivan Visconti and Roberto De Prisco, editors. *SCN 12*, volume 7485 of LNCS. Springer, Heidelberg, September 2012. (Cited on page 29.)
\ No newline at end of file
diff --git a/samples/texts/2864204/page_4.md b/samples/texts/2864204/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..33d4ecf65256ccd97331178fbf5b4a4ac086e2ab
--- /dev/null
+++ b/samples/texts/2864204/page_4.md
@@ -0,0 +1,23 @@
+valid forgery:
+
+(a) We must have $Z^* = X \cdot M^*$, where $X \in G$ is a fixed random group element, and $M^* \in G$ is the signed message in the forgery.
+
+(b) Also, we must have $Z^* = X \cdot M_i$ for some previously signed message $M_i$. Since we may assume $M^* \notin \{M_i\}$ in the (non-strong) existential unforgeability experiment, any attempted forgery will thus be invalid.
+
+The key technique for establishing these modified requirements is a “partitioning argument” similar to the one from [36]. That is, in the proof, we will enforce more and more dependencies of the authentication tag $Z$ on the *bit representation* of $M$. Note that this bit representation is not used in the real scheme; this would in fact be problematic in the context of structure-preserving constructions. For instance, to establish a dependence of $Z$ on the $k$-th bit $b_M$ of the bit representation of $M$, we proceed as follows:
+
+1. First, we “partition” the set of all messages into two subsets, depending on $b_M$. This means that signatures issued by the experiment now carry (an encryption of) $b_M$ in a special component. The reason for this partitioning is that we can now, depending on the encrypted $b_M$, use different verification rules.
+
+2. We guess the encrypted bit $b^*$ from the forgery, and change the encrypted $Z$ in issued signatures for all $b_M \neq b^*$. (This change can be justified by setting up things such that $Z$ can only be retrieved from a signature if the encrypted bit $b$ is equal to $b^*$. If $b \neq b^*$, then $Z$ is hidden, and can hence be modified in issued signatures.) This introduces a dependence of $Z$ in issued signatures on $b_M$.
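As a toy illustration of step 1, messages (modelled here by integer encodings of their bit representations) are partitioned into two sets by their $k$-th bit; in the proof, each issued signature would additionally carry an encryption of this bit:

```python
# Partitioning the message space by the k-th bit of the bit representation.
# In the real proof the bit b_M travels encrypted inside the signature; here
# we only show the partitioning rule itself.

def bit(m, k):
    """k-th bit of the bit representation of message encoding m."""
    return (m >> k) & 1

messages = [0b1011, 0b0110, 0b1110, 0b0001]
k = 1
partition = {b: [m for m in messages if bit(m, k) == b] for b in (0, 1)}
assert partition[0] == [0b0001]
assert partition[1] == [0b1011, 0b0110, 0b1110]
```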
+
+However, the encrypted bit $b^*$ from the forgery is not necessarily identical to $b_{M^*}$ (since this property cannot be easily enforced in a structure-preserving way). As a consequence, we cannot force the adversary to respect the additional dependencies in his forgery. Yet, we will show that we can force the adversary to reuse one $Z = X \cdot M_i$ from a signing query. This leads to requirement (b) when verifying forgeries, and requirement (a) is finally enforced by a regular GS proof in signatures (that GS proof is simulated in all intermediate steps).
+
+This line of reasoning borrows from Chen and Wee's [23] general idea of establishing tight security through a repeated partitioning of the message space (resp. identity space in an identity-based encryption scheme) into two sets, each time adjusting signatures for messages from one of the two sets in the process. However, their approach, as well as other follow-up approaches (e.g., [15, 44, 8, 35, 29]) embeds the partitioning already in the scheme (in the sense that the scheme must already contain all potentially possible “partitioning rules,” for instance according to each message bit). Since these rules in the mentioned schemes are based on the message bits (or an algebraic predicate on the discrete logarithm of the message [35]), this would not lead to a structure-preserving scheme.
+
+Instead, we adapt the “adaptive partitioning” (AP) technique of Hofheinz [36], in which the partitioning is performed dynamically, through an *encrypted* partitioning bit embedded in signatures. This allows us to separate partitioning from the way messages are bound to signatures in the scheme. We thus bind a message through an authentication tag, as mentioned above, that is more algebraic and admits structure-preserving GS proofs. The encrypted partitioning bit is fixed to a constant in the real scheme and turned into a variable only in the security proof where non-generic computations are allowed.
+
+In adapting AP to our setting, we face two difficulties, however: the partitioning used in AP is bit-based (which is incompatible with our requirement of a structure-preserving scheme), and its complexity leads to comparatively complex schemes. More specifically, AP leads to several expensive “OR”-proofs in ciphertexts, resp. signatures. As a consequence, the (encryption) schemes in [36] are not competitive in complexity to non-tightly secure schemes, even when taking into account a potentially larger security level for non-tightly secure schemes. On the other hand, our signature schemes are carefully designed so that GS proofs in signatures are done only for less costly linear relations (except for one crucial “OR”-proof). We further use optimization techniques of Escala and Groth [27] to reduce the size of GS proofs in our instantiation.
+
+Moreover, AP crucially relies on the bit representation of messages (resp. encryption tags that are hash values in [36]). In particular, the encryption scheme from [36] is not structure-preserving. For our purposes, we thus have to modify this technique to work with group elements instead of hash values. This leads to a very simple and clean structure-preserving signature scheme whose security proof still crucially uses the bit representation of group elements. We find this property surprising and conceptually interesting.
+
+**OPEN PROBLEMS.** While being compact and tightly secure, our concrete SPS schemes contain a moderate
\ No newline at end of file
diff --git a/samples/texts/2864204/page_5.md b/samples/texts/2864204/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..fb6f854450496d7b960a023025f723f4da731988
--- /dev/null
+++ b/samples/texts/2864204/page_5.md
@@ -0,0 +1,35 @@
+number of group elements in a signature. We leave as an open problem to design more compact SPSs with an even smaller number of group elements. Another interesting open problem is to decrease the security loss from $\mathcal{O}(\lambda)$ to $\mathcal{O}(1)$.
+
+**ORGANIZATION.** The rest of the paper is organized as follows. After introducing notations, security definitions, and building blocks in Section 2, we present our generic construction and its security proof in Section 3. We discuss an instantiation over asymmetric bilinear groups in Section 4.
+
+# 2 Preliminaries
+
+## 2.1 Notations
+
+For an integer $p$, define $\mathbb{Z}_p$ as the residual ring $\mathbb{Z}/p\mathbb{Z}$. If $\mathcal{B}$ is a set, then $x \stackrel{\$}{\leftarrow} \mathcal{B}$ denotes the process of sampling an element $x$ from set $\mathcal{B}$ uniformly at random. All our algorithms are probabilistic polynomial time (p.p.t. for short) unless stated otherwise. If $\mathcal{A}$ is an algorithm, then $a \stackrel{\$}{\leftarrow} \mathcal{A}(b)$ denotes the random variable defined as the output of $\mathcal{A}$ on input $b$. To make the randomness explicit, we use the notation $a \leftarrow \mathcal{A}(b; r)$, meaning that the algorithm is executed on input $b$ and randomness $r$. Note that $\mathcal{A}$'s execution is then deterministic.
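The explicit-randomness notation can be illustrated in code: fixing the coins $r$ turns a probabilistic algorithm into a deterministic function of $(b, r)$. A minimal sketch (the algorithm $\mathcal{A}$ below is an arbitrary placeholder):

```python
import random

# a <- A(b; r): with the random coins r made explicit, the execution of the
# algorithm A is fully determined by the pair (b, r).

def A(b, r):
    rng = random.Random(r)        # the coins r seed all internal randomness
    return b + rng.randrange(100)

# Deterministic once r is fixed: two runs with the same (b, r) agree.
assert A(7, r=42) == A(7, r=42)
```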
+
+We say that a function $\nu$ is negligible in security parameter $\lambda$ if, for all constants $c > 0$ and all sufficiently large $\lambda$, $\nu(\lambda) < \lambda^{-c}$ holds.
+
+## 2.2 Pairing Groups and Diffie-Hellman Assumptions
+
+Let PGGen be an algorithm that on input security parameter $\lambda$ returns a description par = $(p, \mathbb{G}_1, \mathbb{G}_2, \mathbb{G}_T, e, G_1, G_2)$ of pairing groups, where $p$ is a poly($\lambda$)-bit prime, $\mathbb{G}_1, \mathbb{G}_2, \mathbb{G}_T$ are cyclic groups of order $p$, $G_1$ and $G_2$ are generators of $\mathbb{G}_1$ and $\mathbb{G}_2$, respectively, and $e: \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$ is an efficiently computable non-degenerate bilinear map. Pairing group par is said to be a Type-III asymmetric pairing group if $\mathbb{G}_1 \neq \mathbb{G}_2$ and there does not exist an efficiently computable isomorphism between $\mathbb{G}_1$ and $\mathbb{G}_2$. When the distinction between source groups is not important, we use $\mathbb{G}$ and $G$ to represent $\mathbb{G}_1$ and/or $\mathbb{G}_2$ and their default generator, respectively. When a group element is given to an algorithm as input, its membership in the intended group must be tested, but we leave this implicit throughout the paper for conciseness of the description.
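As a mental model of the bilinear map, one can simulate the pairing "in the exponent": representing elements of $\mathbb{G}_1, \mathbb{G}_2, \mathbb{G}_T$ by their discrete logarithms, $e(G_1^a, G_2^b) = G_T^{ab}$ becomes multiplication modulo $p$. This toy, which unlike a real pairing group exposes discrete logs, merely illustrates bilinearity and non-degeneracy:

```python
# Toy "pairing in the exponent": group elements are represented by their
# discrete logs mod a small prime p, so e(G1^a, G2^b) = GT^(a*b) is just
# multiplication mod p.  Real pairings are built on elliptic curves and give
# no access to discrete logs.

p = 101  # toy group order (a real p is a poly(lambda)-bit prime)

def e(a, b):
    """Pairing on exponent representatives: e(G1^a, G2^b) = GT^(a*b)."""
    return (a * b) % p

# Bilinearity: e(G1^(a+a2), G2^b) = e(G1^a, G2^b) * e(G1^a2, G2^b) in G_T,
# i.e. exponents add on the G_T side.
a, a2, b = 5, 7, 9
assert e(a + a2, b) == (e(a, b) + e(a2, b)) % p
# Non-degeneracy: e(G1, G2) = GT^1 generates G_T (nonzero exponent).
assert e(1, 1) == 1
```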
+
+Our instantiation in Section 4 is based on the following standard assumption over asymmetric pairing groups.
+
+**Definition 2.1 (Decisional Diffie-Hellman assumption).** The decisional Diffie-Hellman assumption (DDH$_s$) holds relative to PGGen in group $\mathbb{G}_s$ ($s \in \{1, 2, T\}$) if, for all p.p.t. adversaries $\mathcal{A}$, advantage function
+
+$$ \mathrm{Adv}_{\mathrm{PGGen}}^{\mathrm{ddh}_s}(\mathcal{A}) := |\Pr[\mathcal{A}(\mathrm{par}, G_s^a, G_s^b, G_s^{ab}) = 1] - \Pr[\mathcal{A}(\mathrm{par}, G_s^a, G_s^b, G_s^c) = 1]| $$
+
+is negligible in security parameter $\lambda$, where the probability is taken over par $\stackrel{\$}{\leftarrow}$ PGGen$(1^\lambda)$, $a, b, c \stackrel{\$}{\leftarrow} \mathbb{Z}_p$. The SXDH assumption holds relative to PGGen if for all p.p.t. adversaries $\mathcal{A}$, advantage function $\mathrm{Adv}_{\mathrm{PGGen}}^{\mathrm{sxdh}}(\mathcal{A}) := \max(\mathrm{Adv}_{\mathrm{PGGen}}^{\mathrm{ddh}_1}(\mathcal{A}), \mathrm{Adv}_{\mathrm{PGGen}}^{\mathrm{ddh}_2}(\mathcal{A}))$ is negligible.
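A toy version of the DDH$_s$ experiment, sampling real and random tuples in a small prime-order subgroup of $\mathbb{Z}_{23}^*$ (parameters here are illustrative; real instantiations use the pairing source groups, where the brute-force discrete log below is infeasible):

```python
import random

# DDH challenge sampler in the order-11 subgroup of Z_23^* generated by g=4.
# The advantage Adv^{ddh_s} measures how well an adversary distinguishes
# (g^a, g^b, g^ab) from (g^a, g^b, g^c) for random a, b, c.

p, q, g = 23, 11, 4   # g has order q modulo p

def ddh_challenge(real):
    a, b = random.randrange(1, q), random.randrange(1, q)
    c = (a * b) % q if real else random.randrange(1, q)
    return pow(g, a, p), pow(g, b, p), pow(g, c, p)

def dlog(h):
    """Brute-force discrete log -- feasible only in this toy group."""
    return next(x for x in range(q) if pow(g, x, p) == h)

# A real tuple satisfies the multiplicative relation in the exponents.
ga, gb, gc = ddh_challenge(real=True)
assert dlog(gc) == (dlog(ga) * dlog(gb)) % q
```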
+
+## 2.3 Structure-preserving Signatures
+
+**Definition 2.2 (Structure-preserving signature scheme).** An SPS scheme SPS with respect to PGGen is a tuple of algorithms SPS = (Gen, Sign, Ver):
+
+* The key generation algorithm Gen(par) takes par $\stackrel{\$}{\leftarrow}$ PGGen$(1^\lambda)$ as input and returns a public/secret key pair, (pk, sk), where pk ∈ $\mathbb{G}^{n_{pk}}$ for some $n_{pk} \in \text{poly}(\lambda)$. Message space $\mathcal{M} := \mathbb{G}^n$ for some $n \in \text{poly}(\lambda)$ is implicitly determined by pk.
+
+* The signing algorithm Sign(sk, M) returns a signature $\sigma \in \mathbb{G}^{n_\sigma}$ for some $n_\sigma \in \text{poly}(\lambda)$.
+
+* The deterministic verification algorithm Ver(pk, M, σ) solely evaluates pairing product equations and returns 1 (accept) or 0 (reject).
+
+(Perfect correctness.) For all (pk, sk) $\stackrel{\$}{\leftarrow}$ Gen(par), all messages $M \in \mathcal{M}$, and all $\sigma \stackrel{\$}{\leftarrow}$ Sign(sk, M), Ver(pk, M, σ) = 1 holds.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_6.md b/samples/texts/2864204/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..5d548f5c2c264e9e8b1a595b6d8dcb2f3623fd44
--- /dev/null
+++ b/samples/texts/2864204/page_6.md
@@ -0,0 +1,41 @@
+Though our final goal is to achieve security against adaptive chosen-message attacks, we use the following slightly relaxed notion in the generic construction.
+
+**Definition 2.3 (UF-XCMA Security).** A signature scheme SPS is unforgeable against auxiliary chosen-message attacks (UF-XCMA-secure) for relation $\mathcal{R}$ if, for all p.p.t. adversaries $\mathcal{A}$, advantage function
+
+$$ \mathrm{Adv}_{\mathrm{SPS}}^{\mathrm{uf-xcma}}(\mathcal{A}) := \Pr \left[ \mathrm{VER}(\mathbf{M}^*, \sigma^*) = 1 \mid \begin{array}{l} \mathrm{par} \stackrel{\$}{\leftarrow} \mathrm{PGGen}(1^\lambda); \\ (\mathbf{M}^*, \sigma^*) \stackrel{\$}{\leftarrow} \mathcal{A}^{\mathrm{INIT}, \mathrm{SIGN}(\cdot)}(\mathrm{par}) \end{array} \right] $$
+
+is negligible in security parameter $\lambda$ where
+
+• INIT runs $(pk, sk) \stackrel{\$}{\leftarrow} \mathrm{Gen}(\mathrm{par})$, initializes $Q_M$ with $\emptyset$, and returns pk to $\mathcal{A}$,
+
+• $\mathrm{SIGN}(\mathbf{M}, m)$ checks if $\mathcal{R}(\mathbf{M}, m) = 1$, runs $\sigma \stackrel{\$}{\leftarrow} \mathrm{Sign}(\mathrm{sk}, \mathbf{M})$, adds $\mathbf{M}$ to $Q_M$, and returns $\sigma$ to $\mathcal{A}$, and
+
+• $\mathrm{VER}(\mathbf{M}^*, \sigma^*)$ returns 1 if $\mathbf{M}^* \notin Q_M$ and $1 = \mathrm{Ver}(\mathrm{pk}, \mathbf{M}^*, \sigma^*)$, and returns 0 otherwise.
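The experiment can be sketched as a small harness around an abstract scheme; the scheme object and relation below are placeholders, and only the oracle logic follows the definition:

```python
# UF-XCMA experiment harness.  `scheme` exposes Gen/Sign/Ver and `R` is the
# relation checked by the SIGN oracle; both are abstract stand-ins here.

class UFXCMAGame:
    def __init__(self, scheme, R, par):
        self.scheme, self.R = scheme, R
        self.Q_M = set()                       # INIT: Q_M := empty set
        self.pk, self.sk = scheme.Gen(par)

    def SIGN(self, M, m):
        if not self.R(M, m):                   # only messages with R(M, m) = 1
            return None
        self.Q_M.add(M)
        return self.scheme.Sign(self.sk, M)

    def VER(self, M_star, sigma_star):
        return int(M_star not in self.Q_M
                   and self.scheme.Ver(self.pk, M_star, sigma_star) == 1)

class ToyScheme:                               # placeholder, not a real SPS
    def Gen(self, par): return "pk", "sk"
    def Sign(self, sk, M): return ("sig", M)
    def Ver(self, pk, M, sigma): return 1 if sigma == ("sig", M) else 0

game = UFXCMAGame(ToyScheme(), lambda M, m: True, par=None)
sigma = game.SIGN("M1", 0)
assert game.VER("M1", sigma) == 0              # queried message: not a forgery
assert game.VER("M2", ("sig", "M2")) == 1      # valid unqueried pair wins
```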
+
+As we are concerned with structure-preserving schemes, we fix $\mathcal{R}(\mathbf{M}, m)$ to the relation that returns 1 iff $\mathbf{M} = G^m$ where $G$ is a group generator. This relation is sufficient for our purpose, that is, combining with the partial one-time signature scheme described below. By letting $\mathcal{R}$ be the constant function $\mathcal{R} = 1$, we obtain the standard notion of *unforgeability against chosen-message attacks* (UF-CMA security) and denote its advantage function by $\mathrm{Adv}_{\mathrm{SPS}}^{\mathrm{uf-cma}}(\mathcal{A})$. UF-XCMA is slightly stronger than unforgeability against extended random message attacks (UF-XRMA) introduced by Abe et al. [3]. While in UF-XRMA the messages to be signed are chosen by an algorithm fixed in advance, in UF-XCMA it is the adversary that selects them. Thus, UF-XCMA implies UF-XRMA.
+
+**FROM UF-XCMA TO UF-CMA.** In this paper, we focus on constructing UF-XCMA secure structure-preserving signature and then transform it to a UF-CMA secure SPS by using a partial one-time signature (POS) scheme [13, 3] in the standard way [3, 41]. POS is also known as two-tier signature schemes and is a variation of one-time signatures where parts of keys are updated after every signing. Here we recall useful definitions of POS and the transform.
+
+**Definition 2.4 (Partial One-Time Signature Scheme [13]).** A partial one-time signature scheme POS with respect to $\mathrm{PGGen}$ is a set of polynomial-time algorithms $(\mathcal{G}, \mathrm{Update}, \mathcal{S}, \mathcal{V})$ that, for $\mathrm{par} \stackrel{\$}{\leftarrow} \mathrm{PGGen}(1^\lambda)$:
+
+• $\mathcal{G}(\mathrm{par})$ generates a long-term public key $\mathrm{pk}$ and secret key $\mathrm{sk}$, and implicitly defines the associated message space $\mathcal{M}_o$ and the one-time public key space $\mathcal{K}_{\mathrm{opk}}$.
+
+• $\mathrm{Update}(\mathrm{par})$ takes $\mathrm{par}$ as input, and outputs a one-time key pair $(\mathrm{opk}, \mathrm{osk})$.
+
+• $\mathrm{S}(\mathrm{sk}, \mathrm{osk}, \mathbf{M})$ outputs a signature $\sigma$ on message $\mathbf{M}$ based on $\mathrm{sk}$ and $\mathrm{osk}$.
+
+• $\mathrm{V}(\mathrm{pk}, \mathrm{opk}, \mathbf{M}, \sigma)$ outputs 1 for acceptance or 0 for rejection.
+
+*(Perfect correctness.)* For all $(\mathrm{pk}, \mathrm{sk}) \stackrel{\$}{\leftarrow} \mathcal{G}(\mathrm{par})$, all $(\mathrm{opk}, \mathrm{osk}) \stackrel{\$}{\leftarrow} \mathrm{Update}(\mathrm{par})$, all messages $\mathbf{M} \in \mathcal{M}$, and all $\sigma \stackrel{\$}{\leftarrow} \mathcal{S}(\mathrm{sk}, \mathrm{osk}, \mathbf{M})$, $\mathrm{V}(\mathrm{pk}, \mathrm{opk}, \mathbf{M}, \sigma) = 1$ holds.
+
+POS is structure-preserving if $\mathrm{pk}, \mathrm{opk}, \mathbf{M}$, and $\sigma$ consist only of elements of $\mathbb{G}$, and $\mathcal{V}$ evaluates only group membership tests and pairing product equations.
+
+We require POS to be *unforgeable against one-time non-adaptive chosen-message attacks (OT-nCMA)*, which is defined as follows.
+
+**Definition 2.5 (OT-nCMA Security).** A POS scheme is unforgeable against one-time non-adaptive chosen-message attacks (OT-nCMA) if for any algorithm $\mathcal{A}$, the following advantage function $\mathrm{Adv}_{\mathrm{POS}}^{\mathrm{ncma}}(\mathcal{A})$ is negligible in $\lambda$,
+
+$$ \mathrm{Adv}_{\mathrm{POS}}^{\mathrm{ncma}}(\mathcal{A}) := \Pr \left[ \mathrm{VER}(\mathrm{opk}^*, \mathbf{M}^*, \sigma^*) = 1 \mid \begin{array}{l} \text{par } \leftarrow^{\$} \text{PGGen}(1^{\lambda}); \\ (\mathrm{opk}^*, \sigma^*, \mathbf{M}^*) \leftarrow^{\$} \mathcal{A}^{\text{INIT}, \text{SIGN}(\cdot)}(\text{par}) \end{array} \right] $$
+
+where
+
+• INIT runs $(pk, sk) \stackrel{\$}{\leftarrow} G(\mathrm{par})$, initializes $Q_M$ with $\emptyset$, and returns pk to $\mathcal{A}$.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_7.md b/samples/texts/2864204/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..f30b29670a6e3a1791591cce08ef33a1d2566a6e
--- /dev/null
+++ b/samples/texts/2864204/page_7.md
@@ -0,0 +1,35 @@
+* SIGN(M) runs $(opk, osk) \leftarrow^{\$} Update(par)$ and $\sigma \leftarrow^{\$} S(sk, osk, M)$, and then returns $(opk, \sigma)$ to $A$, and records $(opk, M, \sigma)$ to the list $Q_M$.
+
+* VER$(opk^*, \sigma^*, M^*)$ returns 1 if there exists $(opk^*, M, \sigma) \in Q_M$ and $M^* \neq M$ and $1 = V(pk, opk^*, M^*, \sigma^*)$, or returns 0, otherwise.
+
+Let POS := (G, Update, S, V) be a structure-preserving partially one-time signature scheme with message space $\mathcal{M}$ and one-time public key space $\mathcal{K}_{opk}$, and xSPS := $(\text{Gen}', \text{Sign}', \text{Ver}')$ be a structure-preserving signature scheme with message space $\mathcal{K}_{opk}$. The transformed UF-CMA secure SPS scheme, SPS := $(\text{Gen}, \text{Sign}, \text{Ver})$, is defined as follows.
+
+| Gen(par): | Sign(sk, M): | Ver(pk, M, σ): |
+|---|---|---|
+| (pk₁, sk₁) ←$ G(par) | (opk, osk) ←$ Update(par) | Parse σ = (opk, σ₁, σ₂) |
+| (pk₂, sk₂) ←$ Gen'(par) | σ₁ ←$ S(sk₁, osk, M) | If V(pk₁, opk, M, σ₁) = 1 |
+| pk := (pk₁, pk₂) | σ₂ ←$ Sign'(sk₂, opk) | ∧ Ver'(pk₂, opk, σ₂) = 1 |
+| sk := (sk₁, sk₂) | Return (opk, σ₁, σ₂) | then return 1 |
+| Return (pk, sk) | | Else return 0 |
+
+The correctness and structure-preserving property of SPS are implied by those of POS and xSPS in a straightforward way. The following theorem ([3, Theorem 3]) states UF-CMA security of SPS.
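The Gen/Sign/Ver composition above can be sketched as follows. The `ToyPOS` and `ToyXSPS` classes are hash-based, completely insecure stand-ins invented here only so that the data flow of the combiner is executable; only the combiner itself mirrors the scheme.

```python
import hashlib
import secrets

def toy_sig(key, msg):                      # stand-in "signature" = MAC-like tag
    return hashlib.sha256(f"{key}|{msg}".encode()).hexdigest()

class ToyPOS:                               # stand-in for POS = (G, Update, S, V)
    def gen(self):
        sk = secrets.token_hex(16); return ("pos-pk:" + sk, sk)
    def update(self):
        osk = secrets.token_hex(16); return ("opk:" + osk, osk)
    def sign(self, sk, osk, M):  return toy_sig(sk + osk, M)
    def verify(self, pk, opk, M, sig):
        return sig == toy_sig(pk[len("pos-pk:"):] + opk[len("opk:"):], M)

class ToyXSPS:                              # stand-in for xSPS = (Gen', Sign', Ver')
    def gen(self):
        sk = secrets.token_hex(16); return ("xsps-pk:" + sk, sk)
    def sign(self, sk, M):        return toy_sig(sk, M)
    def verify(self, pk, M, sig): return sig == toy_sig(pk[len("xsps-pk:"):], M)

def Gen(pos, xsps):
    (pk1, sk1), (pk2, sk2) = pos.gen(), xsps.gen()
    return (pk1, pk2), (sk1, sk2)

def Sign(pos, xsps, sk, M):
    sk1, sk2 = sk
    opk, osk = pos.update()                 # fresh one-time key pair per signature
    return (opk, pos.sign(sk1, osk, M), xsps.sign(sk2, opk))

def Ver(pos, xsps, pk, M, sigma):
    pk1, pk2 = pk
    opk, s1, s2 = sigma
    return 1 if pos.verify(pk1, opk, M, s1) and xsps.verify(pk2, opk, s2) else 0
```

Note that xSPS only ever signs one-time public keys `opk`, never the message itself, which is exactly why its security against restricted message distributions suffices.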
+
+**Theorem 2.6.** *If* POS *is* OT-nCMA *secure and* xSPS *is* UF-XCMA *secure, then* SPS *defined as above is* UF-CMA *secure*. In particular, for all adversaries $\mathcal{A}$ against UF-CMA security of SPS, there exist adversaries $\mathcal{B}$ against OT-nCMA security of POS and $\mathcal{C}$ against UF-XCMA security of xSPS with running times $T(\mathcal{A}) \approx T(\mathcal{B}) \approx T(\mathcal{C})$ and $\text{Adv}_{\text{SPS}}^{\text{uf-cma}}(\mathcal{A}) \le \text{Adv}_{\text{POS}}^{\text{ncma}}(\mathcal{B}) + \text{Adv}_{\text{xSPS}}^{\text{uf-xcma}}(\mathcal{C})$.
+
+## 2.4 Public-Key Encryption Schemes
+
+**Definition 2.7 (Public-key encryption).** A Public-Key Encryption scheme (PKE) consists of algorithms PKE := $(\text{Gen}_P, \text{Enc}, \text{Dec})$:
+
+* The key generation algorithm Gen$_P$(par) takes par $←^{\$}$ PGGen$(1^\lambda)$ as input and generates a pair of public and secret keys (pk, sk). Message space $\mathcal{M}$ is implicitly defined by pk.
+
+* The encryption algorithm Enc(pk, M) returns a ciphertext ct.
+
+* The deterministic decryption algorithm Dec(sk, ct) returns a message M.
+
+(Perfect correctness.) For all par $←^{\$}$ PGGen$(1^\lambda)$, (pk, sk) $←^{\$}$ Gen$_P$(par), messages $M \in \mathcal{M}$, and ct $←^{\$}$ Enc(pk, M), Dec(sk, ct) = M holds.
+
+**Definition 2.8 (IND-mCPA Security [10]).** A PKE scheme PKE is indistinguishable against multi-instance chosen-plaintext attacks (IND-mCPA-secure) if for any $q_e \ge 0$ and for all p.p.t. adversaries $\mathcal{A}$ with access to oracle ENC at most $q_e$ times, the following advantage function $\text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{A})$ is negligible,
+
+$$ \text{Adv}_{\text{PKE}}^{\text{mcpa}}(\mathcal{A}) := \left| \Pr \left[ b' = b \middle| \begin{array}{l} \text{par} \leftarrow^{\$} \text{PGGen}(1^{\lambda}); (\mathrm{pk}, \mathrm{sk}) \leftarrow^{\$} \text{Gen}_{\text{P}}(\text{par}); \\ b \leftarrow^{\$} \{0, 1\}; b' \leftarrow^{\$} \mathcal{A}^{\text{ENC}(\cdot,\cdot)}(\mathrm{pk}) \end{array} \right] - \frac{1}{2} \right| $$
+
+where $\text{ENC}(M_0, M_1)$ runs $ct^* \leftarrow^{\$} \text{Enc}(pk, M_b)$, and returns $ct^*$ to $\mathcal{A}$.
+
+Some public-key encryption schemes, e.g., ElGamal encryption [25] and Linear encryption [17], are structure-preserving and satisfy IND-mCPA security with tight reductions to compact assumptions such as DDH and the Decision Linear assumption [17], respectively (cf. [37]).
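As a concrete, toy-sized example of such a structure-preserving PKE, here is textbook ElGamal over the order-$q$ subgroup of $\mathbb{Z}_p^*$, with messages taken as group elements; the tiny parameters are for illustration only.

```python
import secrets

# Hedged sketch: textbook ElGamal over the order-q subgroup of Z_p^*.
# Structure-preserving in spirit: messages and ciphertexts are group elements.
q = 1019                     # prime order of the subgroup
p = 2 * q + 1                # p = 2039 is also prime, so QR(p) has order q
g = 4                        # 4 = 2^2 is a quadratic residue, hence has order q

def gen():
    sk = secrets.randbelow(q - 1) + 1
    return pow(g, sk, p), sk             # pk = g^sk

def enc(pk, M):                          # M must be a subgroup element
    r = secrets.randbelow(q - 1) + 1
    return pow(g, r, p), (M * pow(pk, r, p)) % p

def dec(sk, ct):
    c1, c2 = ct
    # c2 / c1^sk, computed via c1^(q-sk) = c1^{-sk} in the order-q subgroup
    return (c2 * pow(c1, q - sk, p)) % p
```

Decryption returns the group element $M = \mathbb{G}^m$ directly, which is the behaviour the construction in Section 3 relies on.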
+
+## 2.5 The Groth-Sahai Proof System
+
+We recall the Groth-Sahai proof system and its properties as a commit-and-prove scheme. We follow definitions by Escala and Groth in [27] in a simplified form that is sufficient for our purpose. For a given pairing group par $←^{\$} PGGen(1^\lambda)$, the GS-proof system is a non-interactive zero-knowledge proof (NIZK) system for satisfiability of a set of equations over par. Let $\mathcal{L}_{par}$ be a family of NP languages defined over par. For a language $\mathcal{L} \in \mathcal{L}_{par}$, let $R_\mathcal{L} := \{(x, \omega) : x \in \mathcal{L} \text{ and } \omega \in W(x)\}$ be a witness relation, where $W(x)$ is the set of witnesses for $x \in \mathcal{L}$. As our construction fixes the language in advance, it is sufficient for our purpose to define the proof system to be specific to $\mathcal{L}$ as follows.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_8.md b/samples/texts/2864204/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..112bf920d38073d6bb6db3fcd9c689e7eae025b8
--- /dev/null
+++ b/samples/texts/2864204/page_8.md
@@ -0,0 +1,50 @@
+**Definition 2.9 (The Groth-Sahai Proof System).** The Groth-Sahai commit-and-prove system for par $←^s$ PGGen$(1^\lambda)$ and $\mathcal{L} \in \mathcal{L}_{\text{par}}$ consists of p.p.t. algorithms GS := (BG, Com, P, V) that:
+
+* BG(par) is a binding common reference string generation algorithm that outputs crs.
+
+* $\text{Com}(crs, \omega; r)$ is a commitment algorithm that outputs a commitment c for given witness $\omega$ with randomness $r \leftarrow \mathcal{R}_c$ and crs.
+
+* $P(\text{crs}, (x, c), (\omega, r))$ is a prover algorithm that returns a proof $\rho$ on $(x, \omega) \in \mathbb{R}_{\mathcal{L}} \wedge c = \text{Com}(\text{crs}, \omega; r)$.
+
+* $V(\text{crs}, x, c, \rho)$ is a deterministic verification algorithm that returns 0 (reject) or 1 (accept).
+
+(Perfect correctness.) For all par $←^s$ PGGen$(1^\lambda)$, crs $←^s$ BG(par), $(x, \omega) \in \mathbb{R}_\mathcal{L}$, and $r \in \mathcal{R}_c$, $V(\text{crs}, x, c, P(\text{crs}, (x, c), (\omega, r))) = 1$ holds, where $c \leftarrow \text{Com}(\text{crs}, \omega; r)$.
+
+When witness $\omega$ consists of several objects and only part of them are committed to in $c$, commitments for the remaining part of the witness are prepared by P and included in the proof.
+
+The following properties of the GS-proof system are used in this paper. For fully formal treatment, we refer to [27].
+
+**Definition 2.10 (Security properties of the Groth-Sahai proof system).** The following properties hold for all par $←^s$ PGGen$(1^\lambda)$,
+
+* Perfect Soundness. For all crs $\in$ BG(par), all $x \notin \mathcal{L}$, all c, and all $\rho$, we have $V(\text{crs}, x, c, \rho) = 0$.
+
+* CRS Indistinguishability. There exists an algorithm HG, called the hiding common reference string generator, such that for all adversaries $\mathcal{A}$, the following advantage function is negligible,
+
+$$ \mathrm{Adv}_{\mathrm{GS}}^{\mathrm{crsind}}(\mathcal{A}) := \left| \Pr \left[ b' = b \; \middle| \; \begin{array}{@{}l@{}} \text{par} \leftarrow^s \text{PGGen}(1^\lambda); \\ \text{crs}_0 \leftarrow^s \text{BG(par)}; (\text{crs}_1, \text{trap}) \leftarrow^s \text{HG(par)}; \\ b \leftarrow^s \{0, 1\}; b' \leftarrow^s \mathcal{A}(\text{crs}_b) \end{array} \right] - \frac{1}{2} \right| $$
+
+* Dual-mode Commitment. For all crs $\in$ BG(par), Com is perfectly binding. Namely, for all $w_0 \neq w_1$, we have $\{c_0 \leftarrow \text{Com}(crs, w_0; r_0)\} \cap \{c_1 \leftarrow \text{Com}(crs, w_1; r_1)\} = \emptyset$ (where the sets are taken over $r_0, r_1 \in \mathcal{R}_c$).
+
+For all (crs, trap) $\in$ HG(par), Com is perfectly hiding. Namely, for all $\omega_0 \neq \omega_1$, the following two distributions are identical: $\{c_0 \leftarrow \text{Com}(crs, \omega_0; r_0)\}$ and $\{c_1 \leftarrow \text{Com}(crs, \omega_1; r_1)\}$, where $r_0, r_1 \in \mathcal{R}_c$.
+
+* Perfect Zero-knowledge. There exists a simulator Sim := (SimCom, SimP) such that, for all (crs, trap) $\in$ HG(par), and $(x, \omega) \in \mathbb{R}_\mathcal{L}$, the following two distributions are identical:
+
+$$
+\begin{align*}
+& \{(c, \rho) | r \leftarrow^s \mathcal{R}_c; c \leftarrow \text{Com}(crs, \omega; r); \rho \leftarrow^s P(\text{crs}, (x, c), (\omega, r))\}, \text{and} \\
+& \{(c', \rho') | (c', \gamma) \leftarrow^s \text{SimCom}(crs, trap); \rho' \leftarrow^s \text{SimP}(crs, trap, \gamma)\}.
+\end{align*}
+$$
+
+Since the above distributions are identical, the property also holds for reused commitments and for multiple adaptively chosen statements $x$ that involve the same witness and commitment.
+
+The GS-proof system is structure-preserving for proving satisfiability of linear multi-scalar multiplication equations (MSEs) and a non-linear quadratic equation (QE). Regarding security, it is known that its CRS indistinguishability is tightly reduced to the SXDH assumption (cf. Theorem 4.3).
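The dual-mode behaviour can be illustrated, not with the actual Groth-Sahai commitments, but with a standard toy analogue: an ElGamal-style commitment is perfectly binding, while a Pedersen-style commitment under a trapdoor CRS is perfectly hiding (and equivocable). All parameters below are illustrative.

```python
import secrets

# Toy analogue (NOT the actual Groth-Sahai construction) of a dual-mode
# commitment:
#  - binding CRS: ElGamal-style c = (g^r, g^w h^r)  -> perfectly binding
#  - hiding  CRS: Pedersen-style c = g^w h^r with trapdoor x s.t. h = g^x
#    -> perfectly hiding: any c can be opened to any w
q, p, g = 1019, 2039, 4          # prime-order-q subgroup of Z_p^*

def binding_com(h, w, r):
    return pow(g, r, p), (pow(g, w, p) * pow(h, r, p)) % p

def hiding_com(h, w, r):
    return (pow(g, w, p) * pow(h, r, p)) % p

x = secrets.randbelow(q - 1) + 1 # trapdoor, playing the role of "trap"
h = pow(g, x, p)

# binding mode: the trapdoor holder can always extract g^w from c
c1, c2 = binding_com(h, 9, 3)
assert (c2 * pow(c1, (q - x) % q, p)) % p == pow(g, 9, p)

# hiding mode: the same commitment opens to w0 or to w1 (equivocation)
w0, w1, r0 = 7, 12, 5
r1 = (r0 + (w0 - w1) * pow(x, -1, q)) % q
assert hiding_com(h, w0, r0) == hiding_com(h, w1, r1)
```

The equivocation in the last two lines is precisely what SimCom/SimP exploit under a hiding CRS, while the extraction in binding mode mirrors perfect soundness.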
+
+# 3 Generic Construction
+
+In this section, we focus on a generic construction of a UF-XCMA-secure SPS scheme, xSPS. By coupling it with an off-the-shelf structure-preserving POS scheme, we obtain a UF-CMA-secure SPS scheme via Theorem 2.6.
+
+## 3.1 Scheme Description
+
+Let par $←^s$ PGGen$(1^\lambda)$ be a set of system parameters. We represent a source group and its generator by $\mathbb{G}$ and $\mathcal{G}$, respectively. Let PKE := (Gen$_{\mathbb{P}}$, Enc, Dec) be a PKE scheme, and GS := (BG, Com, P, V) be the Groth-Sahai proof system for languages $\mathcal{L}_0$ and $\mathcal{L}_1$ defined below. Our SPS scheme xSPS := (Gen, Sign, Ver) is defined in Figure 1.
+
+The correctness of xSPS is implied by that of the Groth-Sahai proof system, and the structure-preserving property is implied by that of the PKE scheme and the Groth-Sahai proof system.
\ No newline at end of file
diff --git a/samples/texts/2864204/page_9.md b/samples/texts/2864204/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..19cb839ae99a4bef75af92ccbc2f71a6743388a2
--- /dev/null
+++ b/samples/texts/2864204/page_9.md
@@ -0,0 +1,40 @@
+**Gen(par):**
+
+crs₀, crs₁ ⇐ BG(par); For i = 0, 1, 2 : (pkᵢ, skᵢ) ⇐ Gen_P(par)
+x₀ ⇐ Z_p; x₁ := x₂ := 0 ∈ Z_p; r₀, r₁, r₂, t₀, t₁, t₂, t₃ ⇐ R_c
+c₀ ← Com(crs₀, x₀; r₀); c₁ ← Com(crs₀, x₁; r₁); c₂ ← Com(crs₁, x₂; r₂)
+k₀ ← Com(crs₁, sk₀; t₀); k₁ ← Com(crs₁, sk₁; t₁); k₂ ← Com(crs₁, sk₂; t₂); k₃ ← Com(crs₀, sk₀; t₃)
+pk := (crs₀, crs₁, (cᵢ)₀≤ᵢ≤₂, (kᵢ)₀≤ᵢ≤₃); sk := ((skᵢ, xᵢ, rᵢ)₀≤ᵢ≤₂, (tᵢ)₀≤ᵢ≤₃)
+Return (pk, sk)
+
+**Sign(sk, M ∈ G):**
+
+z₀ := z₁ := x₀; z₂ := 0; For i = 0, 1, 2 : ctᵢ ⇐ Enc(pkᵢ, G^zᵢ)
+ins₀ := (ct₀, M); cv₀ := (c₀, c₁, k₃); w₀ := (x₀, x₁, sk₀); R₀ := (r₀, r₁, t₃)
+ins₁ := (ctᵢ)₀≤i≤₂; cv₁ := (c₂, (kᵢ)₀≤i≤₂); w₁ := (x₂, (skᵢ)₀≤i≤₂); R₁ := (r₂, (tᵢ)₀≤i≤₂)
+ρ₀ ⇐ P(crs₀, (ins₀, cv₀), (w₀, R₀)) //Prove that ins₀ ∈ L₀ and w₀ is committed in cv₀
+ρ₁ ⇐ P(crs₁, (ins₁, cv₁), (w₁, R₁)) //Prove that ins₁ ∈ L₁ and w₁ is committed in cv₁
+Return σ := (ct₀, ct₁, ct₂, ρ₀, ρ₁)
+
+**Ver(pk, M, σ):**
+
+Parse σ := ((ctᵢ)₀≤i≤₂, ρ₀, ρ₁)
+ins₀ := (ct₀, M); cv₀ := (c₀, c₁, k₃); ins₁ := (ctᵢ)₀≤i≤₂; cv₁ := (c₂, k₀, k₁, k₂)
+Return (V(crs₀, ins₀, cv₀, ρ₀) ∧ V(crs₁, ins₁, cv₁, ρ₁))
+
+**Languages:**
+
+$L_0 := \{ (ct_0, M) \mid \exists x_0, x_1 \in \mathbb{Z}_p, sk_0 \in SK \text{ s.t.} \\ G^{z_0} = G^{x_0}M^{x_1} \land G^{z_0} = \text{Dec}(sk_0, ct_0) \}$
+$L_1 := \{ (ct_i)_{0 \le i \le 2} \mid \exists x_2 \in \mathbb{Z}_p, sk_0, sk_1, sk_2 \in SK \text{ s.t.} \\ ((z_0 - z_1)(x_2 - z_2) = 0) \land \bigwedge_{i=0}^{2} (G^{z_i} = \text{Dec}(sk_i, ct_i)) \}$
+
+**Figure 1:** Our signature scheme xSPS.
+
+*Remark 3.1* (Role of proof *ρ*₀). The main role is to bind a message into a signature. In the real scheme, it is just a proof of the signing key *x*₀ in *ct*₀ (and *c*₀) since *x*₁ is fixed to 0. Yet the proof is bound to message *M* through randomness *r*₁ used for committing to *x*₁. In the security proof, it can be seen as an encrypted one-time message authentication code (MAC) of *M* and forces the adversary to reuse given signatures since, intuitively, the adversary cannot generate a new MAC for hidden keys *x*₀ and *x*₁.
+
+*Remark 3.2* (Role of proof *ρ*₁). *ρ*₁ is used for partitioning. It proves that two ciphertexts *ct*₀ and *ct*₁ are consistent (namely, the same plaintext is encrypted) or the plaintext in the ciphertext *ct*₂ is committed to in *c*₂. In the real scheme, *ρ*₁ proves the consistency of double encryption *ct*₀ and *ct*₁. In the security proof, *ρ*₁ enables us to achieve two (seemingly incompatible) functionalities under a binding mode CRS. One is forcing the adversary to use consistent ciphertexts in its forgery. A simulator guesses *z*₂\* in the forgery and makes *x*₂ ≠ *z*₂\* hold. The other is letting the simulator use inconsistent ciphertexts in a special situation achieved using a partitioning technique (see Section 3.2 for more details). In that situation, the simulator can make *x*₂ = *z*₂ hold and use a real witness of *ρ*₀.
+
+*Remark 3.3* (On the range of $z_2$). The range of $z_2$ is $\mathbb{Z}_p$ since $z_2$ is the plaintext of $ct_2$. Readers might think we should restrict $z_2$ to $\{0, 1\}$ by using a Groth-Sahai proof, since the simulator in the security proof guesses $z_2^*$ in the forgery as explained in the previous remark. This is not the case. In fact, even if an adversary uses $z_2^*$ such that $z_2^* \notin \{0, 1\}$, it has no advantage because the simulator uses $x_2$ such that $x_2 \in \{0, 1\}$ in the security proof. Value $z_2$ affects $ρ_1$. However, to make a valid forgery by using $x_2 = z_2^*$ as a witness in $ρ_1$, adversaries have no choice but to use $z_2^* \in \{0, 1\}$ as long as $x_2 \in \{0, 1\}$. Accordingly, we do not need to restrict $z_2$ to $\{0, 1\}$. This intuition is implemented formally in the proof of Lemma 3.20.
+
+*Remark 3.4* (On verifying correctness of *pk*). Verifying correctness of commitment $k_i$ with respect to $sk_i$ is not necessary for achieving UF-CMA security, where keys are generated honestly by definition. But it may have to be verified (once and for all, at the time of publishing *pk*) if the scheme is used in an application where signers can be corrupted at the time of key generation.
+
+*Remark 3.5* (On XCMA and CMA security of xSPS). We prove that xSPS is UF-XCMA-secure for efficiency reasons, though, in fact, we can prove that xSPS is UF-CMA-secure. When we prove UF-CMA, a simulator does not have exponents of queried messages, but the simulator must generate proofs $ρ_0$ for $x_1 \neq 0$ under the binding
\ No newline at end of file
diff --git a/samples/texts/2974107/page_1.md b/samples/texts/2974107/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..8aac69e5ef6a016c11a8842d05fb238c3f58508a
--- /dev/null
+++ b/samples/texts/2974107/page_1.md
@@ -0,0 +1,29 @@
+Aligned interpolation and application to drift kinetic
+semi-Lagrangian simulations with oblique magnetic field
+in cylindrical geometry
+
+Guillaume Latu, Michel Mehrenberger, M Ottaviani, Eric Sonnendrücker
+
+► To cite this version:
+
+Guillaume Latu, Michel Mehrenberger, M Ottaviani, Eric Sonnendrücker. Aligned interpolation and application to drift kinetic semi-Lagrangian simulations with oblique magnetic field in cylindrical geometry. [Research Report] IRMA. 2014. hal-01098373
+
+HAL Id: hal-01098373
+
+https://hal.inria.fr/hal-01098373
+
+Submitted on 5 Jan 2015
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
\ No newline at end of file
diff --git a/samples/texts/2974107/page_10.md b/samples/texts/2974107/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..1872678159bef7a02cb498db59fbdbff61827445
--- /dev/null
+++ b/samples/texts/2974107/page_10.md
@@ -0,0 +1,45 @@
+## Appendix B: Dispersion equation
+
+We make the following expansions:
+
+$$f = f_0 + \varepsilon f_1 + \mathcal{O}(\varepsilon^2), \quad \phi = \phi_0 + \varepsilon\phi_1 + \mathcal{O}(\varepsilon^2)$$
+
+with
+
+$$f_0(r, v) = f_{eq}(r, v) = \frac{n_0(r) \exp\left(-\frac{v^2}{2T_i(r)}\right)}{(2\pi T_i(r))^{1/2}}, \quad \phi_0 = 0.$$
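As a quick numerical sanity check of the equilibrium above, the velocity integral of $f_0$ should recover the density $n_0$; the profile values below are arbitrary samples at a fixed radius.

```python
import numpy as np

# Check that f0(r, v) = n0 exp(-v^2/(2 Ti)) / (2 pi Ti)^(1/2) integrates
# to n0 over v (sample values of the radial profiles, chosen arbitrarily).
n0, Ti = 1.3, 0.8
v = np.linspace(-12.0, 12.0, 20001)     # +-12 covers many thermal widths
f0 = n0 * np.exp(-v**2 / (2.0 * Ti)) / np.sqrt(2.0 * np.pi * Ti)
density = f0.sum() * (v[1] - v[0])      # simple Riemann sum on a fine grid
assert abs(density - n0) < 1e-10
```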
+
+We obtain
+
+$$\partial_t f_1 - \frac{\partial_\theta \phi_1}{r B_0} \partial_r f_0 + v b_z \partial_z f_1 + v \frac{b_\theta}{r} \partial_\theta f_1 - \left( b_\theta \frac{\partial_\theta \phi_1}{r} + b_z \partial_z \phi_1 \right) \partial_v f_0 = \mathcal{O}(\varepsilon).$$
+
+and
+
+$$-\left(\partial_r^2 \phi_1 + \left(\frac{1}{r} + \frac{\partial_r n_0}{n_0}\right) \partial_r \phi_1 + \frac{1}{r^2} \partial_\theta^2 \phi_1\right) + \frac{1}{T_e} \phi_1 = \frac{1}{n_0} \int f_1 dv + \mathcal{O}(\varepsilon).$$
+
+We assume that the solutions have the form:
+
+$$f_1 = f_{m,n,\omega}(r,v)e^{i(m\theta+kz-\omega t)}, \quad \phi_1 = \phi_{m,n,\omega}(r)e^{i(m\theta+kz-\omega t)}$$
+
+with $k = \frac{n}{R}$. Then, we obtain
+
+$$(-\omega + kvb_z + v\frac{mb_\theta}{r})f_{m,n,\omega} = \left(\frac{m}{rB_0}\partial_r f_0 + \left(b_\theta\frac{m}{r} + b_z k\right)\partial_v f_0\right)\phi_{m,n,\omega}$$
+
+and
+
+$$-\left(\partial_r^2 \phi_{m,n,\omega} + \left(\frac{1}{r} + \frac{\partial_r n_0}{n_0}\right) \partial_r \phi_{m,n,\omega} - \frac{m^2}{r^2} \phi_{m,n,\omega}\right) + \frac{1}{T_e} \phi_{m,n,\omega} = \frac{1}{n_0} \int f_{m,n,\omega} dv,$$
+
+We get, with $k_\parallel = \left(b_\theta \frac{m}{r} + b_z k\right)$,
+
+$$\begin{aligned} & -\left(\partial_r^2 \phi_{m,n,\omega} + \left(\frac{1}{r} + \frac{\partial_r n_0}{n_0}\right) \partial_r \phi_{m,n,\omega} - \frac{m^2}{r^2} \phi_{m,n,\omega}\right) + \frac{1}{T_e} \phi_{m,n,\omega} \\ &= \frac{1}{n_0} \phi_{m,n,\omega} \int \frac{\frac{m}{rB_0} \partial_r f_0 + k_\parallel \partial_v f_0}{vk_\parallel - \omega} dv \end{aligned}$$
+
+By using the expression of $f_0$, we have
+
+$$I = \int \frac{-\frac{v}{T_i} + \frac{m}{k_\parallel r B_0} \left( \frac{\partial_r n_0}{n_0} - \frac{\partial_r T_i}{2T_i} + \frac{v^2 \partial_r T_i}{2T_i^2} \right)}{v - \frac{\omega}{k_\parallel}} f_0 dv.$$
+
+We introduce, for $\ell \in \mathbb{N}$:
+
+$$I_\ell(u) = \frac{1}{n_0} \int \frac{v^\ell f_0}{v-u} \, dv,$$
+
+so that
+
+$$\frac{I}{n_0} = -\frac{1}{T_i} I_1\left(\frac{\omega}{k_\parallel}\right) + \frac{m}{k_\parallel r B_0} \left[ \left(\frac{\partial_r n_0}{n_0} - \frac{\partial_r T_i}{2T_i}\right) I_0\left(\frac{\omega}{k_\parallel}\right) + \frac{\partial_r T_i}{2T_i^2} I_2\left(\frac{\omega}{k_\parallel}\right) \right].$$
\ No newline at end of file
diff --git a/samples/texts/2974107/page_11.md b/samples/texts/2974107/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..891e86ef985148f13f433ae0f410165ffabec4c6
--- /dev/null
+++ b/samples/texts/2974107/page_11.md
@@ -0,0 +1,33 @@
+We use the relations:
+
+$$I_0 = \frac{1}{(2T_i)^{1/2}} Z\left(\frac{u}{(2T_i)^{1/2}}\right), \quad I_1 = 1 + uI_0, \quad I_2 = u(1+uI_0),$$
+
+with
+
+$$Z(u) = \frac{1}{\sqrt{\pi}} \int \frac{\exp(-x^2)}{x-u} dx = i\sqrt{\pi} \exp(-u^2)(1 - \operatorname{erf}(-iu)),$$
+
+$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} \exp(-t^2) dt.$$
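For $\mathrm{Im}(u) > 0$, the function $Z$ above equals $i\sqrt{\pi}$ times the Faddeeva function $w(u) = e^{-u^2}\operatorname{erfc}(-iu)$, so it can be evaluated with `scipy.special.wofz` and checked against direct quadrature of the defining integral:

```python
import numpy as np
from scipy.special import wofz

# Plasma dispersion function via the Faddeeva function (valid for Im(u) >= 0).
def Z(u):
    return 1j * np.sqrt(np.pi) * wofz(u)

# Cross-check against (1/sqrt(pi)) * integral of exp(-x^2)/(x - u) over R,
# for a sample point u in the upper half-plane where the integral is regular.
u = 0.4 + 0.9j
x = np.linspace(-30.0, 30.0, 400001)
quad = (np.exp(-x**2) / (x - u)).sum() * (x[1] - x[0]) / np.sqrt(np.pi)
assert abs(Z(u) - quad) < 1e-8
```

This is a convenient way to evaluate the dispersion relation numerically for the growth-rate comparisons.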
+
+The dispersion relation is, putting $\phi = \phi_{m,n,\omega}$, for convenience,
+
+$$A = -\partial_r^2 \phi - \left(\frac{1}{r} + \frac{\partial_r n_0}{n_0}\right) \partial_r \phi + \frac{m^2}{r^2} \phi + \frac{1}{T_e} \phi \\ = \left[ -\frac{1}{T_i} (1+zZ(z)) + \frac{m}{k^* r B_0} \left( Z(z) \left( \frac{\partial_r n_0}{n_0} - \frac{\partial_r T_i}{2T_i} \right) + z(1+zZ(z)) \frac{\partial_r T_i}{T_i} \right) \right] \phi,$$
+
+with $k^* = (2T_i)^{1/2}k_\|$ and $z = \frac{\omega}{k^*}$, and recalling that $k_\| = (b_\theta \frac{m}{r} + b_z k)$. Note that the dispersion relation depends on $m$ and $k_\|$ and not directly on $n$. This means that taking different values of $\iota$ and $n$ but with the same $m$ and $k_\|$ will lead to the same dispersion relation.
+
+References
+
+[1] D. Coulette, N. Besse, *Numerical comparisons of gyrokinetic multi-water-bag models*, J. Comput. Phys. 248 (2013), pp. 1-32.
+
+[2] J. P. Braeunig, N. Crouseilles, M. Mehrenberger, E. Sonnendrücker, *Guiding-center simulations on curvilinear meshes*, Discrete Contin. Dyn. Syst. Ser. S, 5 (3), June 2012.
+
+[3] N. Crouseilles, P. Glanc, S. A. Hirstoaga, E. Madaule, M. Mehrenberger, J. Pétri, *A new fully two-dimensional conservative semi-Lagrangian method: applications on polar grids, from diocotron instability to ITG turbulence*, Eur. Phys. J. D (2014) 68: 252, topical issue of Vlasovia 2013.
+
+[4] X. Garbet, Y. Idomura, L. Villard, T. H. Watanabe, *Gyrokinetic simulations of turbulent transport*, Nucl. Fusion 50, 043002 (2010).
+
+[5] T. Görler, X. Lapillonne, S. Brunner, T. Dannert, F. Jenko, F. Merz, D. Told, *The global version of the gyrokinetic turbulence code GENE*, J. Comput. Phys. 230 (18), pp. 7053-7071 (2011).
+
+[6] V. Grandgirard, M. Brunetti, P. Bertrand, N. Besse, X. Garbet, P. Ghendrih, G. Manfredi, Y. Sarazin, O. Sauter, E. Sonnendrücker, J. Vaclavik, L. Villard, *A drift-kinetic semi-Lagrangian 4D code for ion turbulence simulation*, J. Comput. Phys. 217 (2006), pp. 395-423.
+
+[7] F. Hariri, M. Ottaviani, *A flux-coordinate independent field-aligned approach to plasma turbulence simulations*, Comput. Phys. Commun. 184 (2013), pp. 2419-2429.
+
+[8] J. M. Kwon, D. Yi, X. Piao, P. Kim, *Development of semi-Lagrangian gyrokinetic code for full-f turbulence simulation in general tokamak geometry*, J. Comput. Phys., available online 15 December 2014.
\ No newline at end of file
diff --git a/samples/texts/2974107/page_12.md b/samples/texts/2974107/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..7dade9497d60f6e6bf00aead4137991b6725e165
--- /dev/null
+++ b/samples/texts/2974107/page_12.md
@@ -0,0 +1,3 @@
+[9] E. Sánchez, R. Kleiber, R. Hatzky, A. Soba, X. Sáez, F. Castejón, J. M. Cela, *Linear and nonlinear simulations using the EUTERPE gyro kinetic code*, IEEE transactions on plasma science, Vol. 38 (9), September 2010, pp. 2119-2128.
+
+[10] SELALIB, http://selalib.gforge.inria.fr/
\ No newline at end of file
diff --git a/samples/texts/2974107/page_13.md b/samples/texts/2974107/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..3f60de404e1362ab05110dc7c7ab70835f964b4e
--- /dev/null
+++ b/samples/texts/2974107/page_13.md
@@ -0,0 +1 @@
+Figure 5: $L^{\infty}$ error vs $N_{\varphi}/n$ for the standard method (old) and vs $N_{\varphi}/|k_{||}|$ for the aligned method (new). Parameters are $q = \sqrt{2}$, $N_{\theta} = 400$. Top: LAG3; bottom: LAG5.
\ No newline at end of file
diff --git a/samples/texts/2974107/page_14.md b/samples/texts/2974107/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..a25345d54c3383f4991889483441d9a4cfd6fcfc
--- /dev/null
+++ b/samples/texts/2974107/page_14.md
@@ -0,0 +1,3 @@
+Figure 6: $L^{\infty}$ error vs $N_{\varphi}/n$ for the standard method (old) and vs $N_{\varphi}/|k_{\parallel}|$ for the aligned method (new). Parameters are $q = \sqrt{2}$, $N_{\theta} = 400$ and LAG17.
+
+Figure 7: $L^{\infty}$ error vs $N_{\varphi}/n$ using $N_{\theta} = 200$ (or $N_{\theta} = 200$, when $n = 23$), with parameters $q = \sqrt{2}$, $m = 34$, LAG9 and $n = 5, 12, 23, 30$ which corresponds to $k_{\parallel} \sim -19, -12, -1, 6$.
\ No newline at end of file
diff --git a/samples/texts/2974107/page_15.md b/samples/texts/2974107/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..c867bb8a213373a6b2b9cdd052ab2b0c2604c424
--- /dev/null
+++ b/samples/texts/2974107/page_15.md
@@ -0,0 +1 @@
+Figure 8: Poloidal cut (left) and $\theta-z$ cut (right) of distribution function at time $t = 4000$. Top: $\iota = 0$, $n = 1$, $m = 15$ on $256 \times 512 \times 32 \times 128$ grid. Middle: $\iota = 0.8$, $n = -11$, $m = 15$ on $256 \times 512 \times 32 \times 128$ grid. Bottom: $\iota = 0.8$, $n = -11$, $m = 15$ on $256 \times 512 \times 64 \times 128$ grid.
\ No newline at end of file
diff --git a/samples/texts/2974107/page_16.md b/samples/texts/2974107/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..59804c40d6cef45894024d969525732edffa2201
--- /dev/null
+++ b/samples/texts/2974107/page_16.md
@@ -0,0 +1 @@
+Figure 9: Poloidal cut (left) and $\theta-z$ cut (right) of distribution function at time $t = 6000$. Top: $\iota = 0.8$, $n = -11$, $m = 15$ on $256 \times 512 \times 32 \times 128$ grid. Bottom: $\iota = 0.8$, $n = -11$, $m = 15$ on $256 \times 512 \times 64 \times 128$ grid.
\ No newline at end of file
diff --git a/samples/texts/2974107/page_2.md b/samples/texts/2974107/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..90d1e673f317974f5a61d68bf6f11685fc30bb66
--- /dev/null
+++ b/samples/texts/2974107/page_2.md
@@ -0,0 +1,35 @@
+Aligned interpolation and application to drift kinetic
+semi-Lagrangian simulations with oblique magnetic field in
+cylindrical geometry
+
+G. Latu, M. Mehrenberger, M. Ottaviani, E. Sonnendrücker
+
+December 24, 2014
+
+Abstract
+
+We introduce field-aligned interpolation for semi-Lagrangian schemes, adapting a method developed by Hariri-Ottaviani [7] to the semi-Lagrangian context. This approach is validated on the constant oblique advection equation and on a 4D drift kinetic model with oblique magnetic field in cylindrical geometry. The strength of this method is that one can reduce the number of points in the longitudinal direction. More precisely, we observe that we gain a factor $\frac{|n|}{|n+m\iota|}$ (where $\iota$ is the inverse safety factor), with respect to the classical approach, for the typical function $\sin(m\theta + n\varphi)$.
+
+# 1 Introduction
+
+In gyrokinetic simulations, it is observed that solution structures follow the field lines of the (strong) magnetic field, and numerical methods have to be adapted to benefit from this fact. Different strategies exist for dealing with field alignment in gyrokinetic codes (see [9], [5] for example). We explore here an idea developed recently in [7] and adapt it to the context of a semi-Lagrangian code.
+
+Our example of validation will be the following 4D drift-kinetic equation in cylindrical geometry, with oblique magnetic field. We look for $f = f(t, r, \theta, z, v)$ satisfying
+
+$$ \partial_t f + [\phi, f] + v \nabla_{\parallel} f - \nabla_{\parallel} \phi \partial_v f = 0, $$
+
+with
+
+$$ [\phi, f] = -\frac{\partial_{\theta}\phi}{rB_0}\partial_r f + \frac{\partial_r\phi}{rB_0}\partial_{\theta}f, \quad \nabla_{\parallel} = \mathbf{b} \cdot \nabla, $$
+
+so that
+
+$$ \partial_t f - \frac{\partial_\theta \phi}{r B_0} \partial_r f + \left( \frac{\partial_r \phi}{r B_0} + v \frac{b_\theta}{r} \right) \partial_\theta f + v b_z \partial_z f - \left( b_\theta \frac{\partial_\theta \phi}{r} + b_z \partial_z \phi \right) \partial_v f = 0, \quad (1) $$
+
+for $(r, \theta, z, v) \in [r_{\min}, r_{\max}] \times [0, 2\pi] \times [0, 2\pi R] \times [-v_{\max}, v_{\max}]$. The self-consistent potential $\phi = \phi(r, \theta, z)$ solves the quasi-neutral equation without zonal flow
+
+$$ -\left(\partial_r^2 \phi + \left(\frac{1}{r} + \frac{\partial_r n_0}{n_0}\right) \partial_r \phi + \frac{1}{r^2} \partial_\theta^2 \phi\right) + \frac{1}{T_e} \phi = \frac{1}{n_0} \int (f - f_{eq}) \, dv. $$
+
+Here the oblique magnetic field $\mathbf{B}$, whose norm $B$ can depend on $r$, is written as
+
+$$ \mathbf{B} = B\mathbf{b}, \quad \mathbf{b} = b_z\hat{\mathbf{z}} + b_\theta\hat{\theta}, \quad b_\theta = \frac{c}{\sqrt{1+c^2}}, \quad b_z = \frac{1}{\sqrt{1+c^2}}, \quad c = \frac{\iota r}{R}, $$
\ No newline at end of file
diff --git a/samples/texts/2974107/page_3.md b/samples/texts/2974107/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..ea8291b6a6c1739ef0a11b101321c8a437dca1f2
--- /dev/null
+++ b/samples/texts/2974107/page_3.md
@@ -0,0 +1,41 @@
+and is parametrized by $B_0 := Bb_z$ and the rotational transform $\iota$, which satisfies
+
+$$ \iota = \frac{b_{\theta}/r}{b_z/R} = \frac{1}{q}, $$
+
+where *q* is called the safety factor. When $\iota = 0$, we get the classical drift kinetic model given in [6, 3] for example. A similar model has been simulated in [9], with $\iota = 0.8$ as an example, using a Particle in Cell method.
+
+Equation (1) can be derived from the gyrokinetic equations. Note that some terms are dropped (w.r.t. [9] for example), which permits retaining the oblique feature while keeping the same structure of equation as in the case $\iota = 0$.
+
+The strategy that we adopt is to solve the constant oblique advection equation
+
+$$ \partial_t f + v \mathbf{b} \cdot \nabla f = 0, \quad (2) $$
+
+that appears in equation (1), using an interpolation that is aligned along the direction of the magnetic field **b**. This strategy makes it possible to reduce the number of points in the $z$-direction. Note that another possible strategy would be to adapt the grid to the magnetic field lines (see [2] for example); one strength of the present approach is that the grid points do not need to be changed, which allows an easier implementation: for example, the Poisson solver (here for the quasi-neutral equation) does not need to be modified.
+
+The numerical scheme is developed in Section 2, and numerical results are shown in Section 3: first, results on the 2D oblique constant advection equation, where numerical errors can be measured, and then results on the 4D drift kinetic model with oblique magnetic field. In Section 4, we give the conclusion and the perspectives of this work. In Appendix A, we detail the derivation of (1) from the gyrokinetic equations, and we give the dispersion relation in Appendix B.
+
+# 2 Numerical scheme
+
+## 2.1 Constant oblique advection
+
+Writing $\varphi = \frac{z}{R}$, we have to solve for $g := g(t, \theta, \varphi) = f(t, r, \theta, R\varphi, v)$, the constant oblique advection equation (2). We have
+
+$$ \partial_t f + v \nabla_{\|} f = \partial_t f + v \frac{b_\theta}{r} \partial_\theta f + v b_z \partial_z f = 0, $$
+
+which leads to
+
+$$ \partial_t g + v \frac{b_\theta}{r} \partial_\theta g + v \frac{b_z}{R} \partial_\varphi g = \partial_t g + \tilde{v} (\iota \partial_\theta g + \partial_\varphi g) = 0, \quad (3) $$
+
+with $\tilde{v} = \frac{vb_z}{R}$, and $g$ is $2\pi$-periodic in $\theta$ and $\varphi$. Let $\Delta t \in \mathbb{R}^+$, and $t_\ell = \ell \Delta t$, $\ell \in \mathbb{N}$. We have the relation
+
+$$ g(t_{\ell} + \Delta t, \theta, \varphi) = g(t_{\ell}, \theta - \iota \tilde{v} \Delta t, \varphi - \tilde{v} \Delta t). $$
+
+Let $N_\theta, N_\varphi \in \mathbb{N}^*$, and $\theta_i = \frac{2\pi i}{N_\theta}$, $\varphi_j = \frac{2\pi j}{N_\varphi}$, which can be defined for $i, j \in \mathbb{R}$. We suppose that we know values $g_{\ell,i,j} \approx g(t_\ell, \theta_i, \varphi_j)$, for $i=0, \dots, N_\theta - 1$, $j=0, \dots, N_\varphi - 1$. By periodicity, we can suppose that $i, j \in \mathbb{Z}$.
+
+We fix two integers $r \le 0 \le s$. For $j=0, \dots, N_\varphi - 1$, there exists $j_0 \in \mathbb{Z}$ and $0 \le \beta < 1$ such that
+
+$$ \varphi_j - \tilde{v}\Delta t = \varphi_{j_0+\beta}. $$
+
+We then define
+
+$$ \varphi_j - \tilde{v} \Delta t_k = \varphi_{j_0+k}, \quad k=r, \dots, s. $$
\ No newline at end of file
diff --git a/samples/texts/2974107/page_4.md b/samples/texts/2974107/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..48de49f588150fb49ceb8cba8ad580ac53bd6981
--- /dev/null
+++ b/samples/texts/2974107/page_4.md
@@ -0,0 +1,27 @@
+Figure 1: Schematic view of the oblique interpolation. Values at green points at time $t_{\ell+1}$ are updated through values at red points at time $t_\ell$. These values are obtained by computing first values at blue points, using an interpolation in $\theta$ (vertical direction) from values at black points (constant advection), at time $t_\ell$. Then values at red points are obtained from values at blue points using Lagrange interpolation along the oblique parallel direction (here LAG5).
+
+For each $k = r, \dots, s$, from the values
+
+$$g_{\ell,i,j_0+k} \approx g(t_\ell, \theta_i, \varphi_j - \tilde{v}\Delta t_k), \quad i=0, \dots, N_\theta - 1,$$
+
+we compute
+
+$$\tilde{g}_{i,k} \approx g(t_\ell, \theta_i - \iota \tilde{v} \Delta t_k, \varphi_j - \tilde{v} \Delta t_k) = g(t_\ell + \Delta t_k, \theta_i, \varphi_j), \quad i = 0, \dots, N_\theta - 1,$$
+
+by using an interpolation in $\theta$.
+
+For each $i = 0, \dots, N_\theta - 1$, from the values
+
+$$\tilde{g}_{i,k} \approx g(t_\ell, \theta_i - \iota \tilde{v} \Delta t_k, \varphi_j - \tilde{v} \Delta t_k), \quad k = r, \dots, s$$
+
+we finally compute
+
+$$g_{\ell+1,i,j} \approx g(t_\ell, \theta_i - \iota \tilde{v} \Delta t, \varphi_j - \tilde{v} \Delta t),$$
+
+using an interpolation along the parallel direction: we reconstruct a value $\tilde{g}_{i,\beta}$, from the values $\tilde{g}_{i,k}, k=r,\dots,s$ and take $g_{\ell+1,i,j} = \tilde{g}_{i,\beta}$.
+
+In the following, we will use Lagrange interpolation of degree $2d+1$, denoted LAG(2d+1), for the interpolation in the parallel direction, and take $r=-d$, $s=d+1$.
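
The two-stage interpolation described above can be sketched in a few lines. The following Python fragment is our illustrative reconstruction, not the implementation discussed in Section 2.2: it uses linear interpolation in $\theta$ (the paper uses cubic splines) and LAG(2d+1) along the parallel direction, and all function names are ours.

```python
import numpy as np

def lagrange_weights(beta, d):
    """Lagrange weights at offset beta for the stencil k = -d, ..., d+1 (LAG(2d+1))."""
    ks = np.arange(-d, d + 2)
    w = np.ones(ks.size)
    for a, ka in enumerate(ks):
        for kb in ks:
            if kb != ka:
                w[a] *= (beta - kb) / (ka - kb)
    return ks, w

def aligned_step(g, iota, vtilde, dt, d=2):
    """One step of (3): g(t+dt, theta, phi) = g(t, theta - iota*vtilde*dt, phi - vtilde*dt),
    on a uniform periodic (theta, phi) grid g[i, j]."""
    n_theta, n_phi = g.shape
    dth, dph = 2 * np.pi / n_theta, 2 * np.pi / n_phi
    alpha = vtilde * dt / dph              # phi displacement in cells
    m0 = int(np.floor(-alpha))             # phi_j - vtilde*dt = phi_{j + m0 + beta}, 0 <= beta < 1
    beta = -alpha - m0
    ks, w = lagrange_weights(beta, d)
    g_new = np.zeros_like(g)
    for k, wk in zip(ks, w):
        # stage 1: shift in theta by iota*vtilde*dt_k, where vtilde*dt_k = -(m0 + k)*dph
        t = iota * (m0 + k) * dph / dth    # theta shift in cells (linear interpolation here)
        p0 = int(np.floor(t))
        b = t - p0
        g_th = (1 - b) * np.roll(g, -p0, axis=0) + b * np.roll(g, -(p0 + 1), axis=0)
        # stage 2: accumulate the Lagrange combination along the parallel direction
        g_new += wk * np.roll(g_th, -(m0 + k), axis=1)
    return g_new
```

For the well-prepared data $g_0 = \sin(m\theta + n\varphi)$, one step should approach the exact shift $\sin(m(\theta - \iota\tilde{v}\Delta t) + n(\varphi - \tilde{v}\Delta t))$.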
+
+## 2.2 Drift kinetic model with oblique magnetic field
+
+We use a classical backward semi-Lagrangian (BSL) scheme as in the case where $\iota = 0$ [3]. The model is implemented in SELALIB [10] and uses a parallelization in $r$. Advection in $z$ is replaced
\ No newline at end of file
diff --git a/samples/texts/2974107/page_5.md b/samples/texts/2974107/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..3280b3b176e4dd99554160ec5c7389292174977a
--- /dev/null
+++ b/samples/texts/2974107/page_5.md
@@ -0,0 +1,60 @@
+by an advection along the parallel direction (2). The term $\mathbf{b} \cdot \nabla \phi$ is computed along the parallel
+direction, using a finite difference formula.
+
+# 3 Numerical results
+
+## 3.1 Constant oblique advection
+
+In the case of constant oblique advection, we have to solve (3).
+
+We consider an initial function with a well-defined helicity, $g = g_0(m\theta + n\varphi)$, so that
+
+$$
+\mathbf{b} \cdot \nabla f = \left( m \frac{b_{\theta}}{r} + n \frac{b_z}{R} \right) g'_{0}(m\theta + n\varphi) = k_{\parallel} g'_{0}(m\theta + n\varphi),
+$$
+
+where
+
+$$
+k_{\parallel} := \frac{b_z}{R} (n + \iota m) = \frac{b_z}{qR} (m + qn).
+$$
+
+In order to have $\mathbf{b} \cdot \nabla f$ bounded, we look for situations where
+
+$|m+qn| \le 1,$
+
+as in real tokamaks, it is assumed that $k_\|$ will typically lie in the range $[-\frac{1}{qR}, \frac{1}{qR}]$. We will
+use in the sequel $g_0 = \sin$. The displacement due to the advection equation has $\Delta t$ as its main
+parameter. An extended set of values of $\Delta t$ will be investigated, because the error of a numerical
+scheme depends strongly on it. We choose a safety factor corresponding to a non-rational surface,
+$q = \sqrt{2}$. We look at four configurations for the initial functions
+
+$$
+\begin{align*}
+A: (n=5,\ m=-7) \qquad & k_{\parallel} \approx 0.07 \,\frac{1}{qR} \\
+B: (n=24,\ m=-34) \qquad & k_{\parallel} \approx -0.06 \,\frac{1}{qR} \\
+C: (n=5,\ m=-6) \qquad & k_{\parallel} \approx 1.07 \,\frac{1}{qR} \\
+D: (n=24,\ m=-33) \qquad & k_{\parallel} \approx 0.94 \,\frac{1}{qR}
+\end{align*}
+\tag{4}
+$$
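
These values can be checked directly. The short sketch below (Python, our illustration) evaluates $m + qn$, i.e. $k_\parallel$ in units of $\frac{b_z}{qR}$, for the four configurations:

```python
import numpy as np

# k_par = (b_z/(q R)) * (m + q n): evaluate m + q*n for configurations (4)
q = np.sqrt(2.0)
cases = {"A": (5, -7), "B": (24, -34), "C": (5, -6), "D": (24, -33)}
k_par = {name: m + q * n for name, (n, m) in cases.items()}
# A and B are nearly resonant (|m + qn| << 1); C and D are of order one
```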
+
+In Figures 2 and 3, the abscissa of the plots is $N_\varphi$, and the ordinate is the $L_\infty$ norm,
+which is the maximum of the difference between the function computed after one time step and the
+analytical solution, which is here
+
+$$
+f(\theta, \varphi, \Delta t) = \sin(m(\theta - \Delta t) + n(\varphi - q\Delta t)).
+$$
+
+The maximum is taken over the grid points and over several values of $\Delta t$. We show the results of the
+aligned versus the standard (non-aligned) scheme. Some parameters are fixed for this study:
+$N_\theta = 400$, $q = \sqrt{2}$. Note that several time steps $\Delta t$ are evaluated in order to approximate
+well the maximal error one can reach for the set of parameters used (the error varies strongly
+with the time step).
+
+For the left-hand side plots of these two Figures, $k_\|$ is near 0 and the aligned method is very accurate, even if one takes a low value for $N_\varphi$. If one considers now the right-hand side plots, $k_\|$ is close to $\frac{1}{qR}$. Even if the aligned method gives a lower error than the standard method, the error is bigger at small $N_\varphi$ compared to the low-$k_\|$ case.
+
+Fig. 2 considers a lower frequency than Fig. 3. The behaviour in the two cases is similar, except that there is a shift of the curves along the $N_\varphi$ axis. This is expected, and what is interesting is that the aligned method behaves well (low error) even with low values of $N_\varphi$.
+
+In Fig. 4, the discretisation along $\theta$ has been refined ($N_\theta = 2000$) compared to Fig. 3. The asymptotic error (at large $N_\varphi$) is lowered. The aligned method for $k_\|$ close to zero (left-hand side plot) is very accurate with an $L_\infty$ error lower than $10^{-6}$.
\ No newline at end of file
diff --git a/samples/texts/2974107/page_6.md b/samples/texts/2974107/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..7cf3fef2fc3e1670653812a9faed5dd22bf8705b
--- /dev/null
+++ b/samples/texts/2974107/page_6.md
@@ -0,0 +1,20 @@
+Figure 2: Error in $L_\infty$-norm compared to the analytical solution for advection $N_\theta = 400$, $q = \sqrt{2}, n = 5$ and $m = -7$ (left), $m = -6$ (right)
+
+Figure 3: Error in $L_\infty$-norm compared to the analytical solution for advection $N_\theta = 400$, $q = \sqrt{2}, n = 24$ and $m = -34$ (left), $m = -33$ (right)
+
+A more detailed study is given in Figures 5, 6 and 7. We first use the same configurations, but
+vary the degree of interpolation in the Lagrange reconstruction: LAG3 on Figure 5 (top),
+LAG5 on Figure 5 (bottom) and LAG17 on Figure 6. For the standard method, we use Lagrange
+interpolation in the $\varphi$ direction and stick to cubic splines in the $\theta$ direction, as in the case of oblique
+interpolation. We give the $L^\infty$ error w.r.t. $N_\varphi/n$ for the standard method and w.r.t. $N_\varphi/|\hat{k}_\parallel|$
+for the aligned method, where we have defined
+
+$$
+\hat{k}_{\parallel} = n + m\iota.
+$$
+
+Note that $k_{\parallel}$ and $\hat{k}_{\parallel}$ only differ by the constant multiplicative factor $\frac{b_z}{R}$, which enters as a factor in the advection. The error does not really depend on this factor here, as we look for the maximal error over one time step of different sizes. In the following, we have taken $\Delta t \in \{0.1/s\}$, $s = 1, \dots, 100$, and solved the equation
+
+$$
+\partial_t g + A_1 \partial_\theta g + A_2 \partial_\varphi g = 0,
+$$
\ No newline at end of file
diff --git a/samples/texts/2974107/page_7.md b/samples/texts/2974107/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..ffd001c80d1ad42ae839e02a954f4ae885e7346e
--- /dev/null
+++ b/samples/texts/2974107/page_7.md
@@ -0,0 +1,15 @@
+Figure 4: Error in $L_\infty$-norm compared to the analytical solution for advection $N_\theta = 2000$, $q = \sqrt{2}$, $n = 24$ and $m = -34$ (left), $m = -33$ (right)
+
+on $[0, 2\pi]^2$ with $A_1 = 1$ and $A_2 = q$. So $r, R, v$ can be chosen so that $\tilde{v} = q$, that is
+
+$$ \frac{vb_z}{qR} = 1, \quad b_z = \left(1 + \left(\frac{r}{qR}\right)^2\right)^{-1/2}. $$
+
+In this presentation, we remark that there are three zones: a first zone, where the error does not decrease; a second zone, where the error decreases according to the order of interpolation; and a third zone, where the error no longer decreases and depends on $N_\theta/|m|$. Note that for a fixed degree and a given $N_\theta/|m|$, the errors of the standard and aligned methods lie on the *same* curve. But for a given $N_\varphi$, there is a shift in abscissa which corresponds to the distance between $N_\varphi/|n|$ and $N_\varphi/|\hat{k}_\parallel|$. So when the error of the standard method is on the left of the curve, the error of the new method lies on the right of it. For example, in order to reach an error around the discretization error in $\theta$ (when the error begins to saturate: beginning of the third zone), as we are on the same curve, the aligned method typically needs fewer points in $\varphi$ than the standard method, in the ratio of $|\hat{k}_\parallel|$ to $|n|$. The factor of gain is thus
+
+$$ \frac{|n|}{|\hat{k}_{\parallel}|} = \frac{|n|}{|n + m\iota|}. $$
+
+On Figure 7, we fix $m = -34$ and change the values of $n$. Now we consider as abscissa only $N_\varphi/|m|$. We then see the same effect. We can remark that when $n$ is smaller than $|\hat{k}_\parallel|$ (here $n=5$ and $\hat{k}_\parallel \simeq -19$), the classical method is more accurate. For $n$ and $\hat{k}_\parallel$ of similar size, both methods have the same accuracy ($n=12$ and $\hat{k}_\parallel \simeq -12$). Then, increasing $n$, $|\hat{k}_\parallel|$ diminishes ($n=23$ and $\hat{k}_\parallel \simeq -1$) and the aligned method becomes more and more efficient. For $\hat{k}_\parallel \simeq 0$, we have the best result (this corresponds to the previous curve, with $n=24$). Increasing $n$ again, the results are still better, but less and less so ($n=30$ and $\hat{k}_\parallel \simeq 6$). For $n=100$ (not shown), we approach again the curve corresponding to $n=12$. We have considered here LAG9, $N_\theta = 200$, except for $n=23$, where $N_\theta = 400$. This is to show that the saturation error diminishes with increasing $N_\theta$ (on the previous plots, we had seen that the error increases with $m$).
+
+On Figures 5, 6 and 7, we have considered the following values for $N_\varphi$:
+
+$$ N_\varphi \in A \cup B, \quad A = \{2, 3, 4, 5, 6, 8, 9, 11, 13, 16, 19, 22, 26, 32, 38, 45, 53, 64, 76, 90, 107, 128\}, \\ B = \{152, 181, 215, 256, 304, 362, 430, 512, 608, 724, 861, 1024\}. $$
\ No newline at end of file
diff --git a/samples/texts/2974107/page_8.md b/samples/texts/2974107/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..a89f9c8d96908003c2d468f01f9e88293a5bc2b3
--- /dev/null
+++ b/samples/texts/2974107/page_8.md
@@ -0,0 +1,31 @@
+## 3.2 Drift kinetic model with oblique magnetic field
+
+The initial function is given by
+
+$$f(t = 0, r, \theta, z, v) = f_{\text{eq}}(r, v) \left[ 1 + \epsilon \exp \left( -\frac{(r-r_p)^2}{\delta r} \right) \cos \left( m\theta + \frac{n}{R}z \right) \right],$$
+
+where the equilibrium function is
+
+$$f_{\text{eq}}(r,v) = \frac{n_0(r) \exp(-\frac{v^2}{2T_i(r)})}{(2\pi T_i(r))^{1/2}}.$$
+
+The radial profiles {$T_i, T_e, n_0$} have the analytical expressions
+
+$$\mathcal{P}(r) = C_P \exp \left( -\kappa_P \delta r_P \tanh \left( \frac{r-r_P}{\delta r_P} \right) \right), \quad \mathcal{P} \in \{T_i, T_e, n_0\},$$
+
+where the constants are
+
+$$C_{T_i} = C_{T_e} = 1, \quad C_{n_0} = \frac{r_{\max} - r_{\min}}{\int_{r_{\min}}^{r_{\max}} \exp(-\kappa_{n_0} \delta r_{n_0} \tanh(\frac{r-r_p}{\delta r_{n_0}})) dr}.$$
+
+Finally, we consider the parameters of [1] (MEDIUM case)
+
+$$r_{\min} = 0.1, r_{\max} = 14.5, \kappa_{n_0} = 0.055, \kappa_{T_i} = \kappa_{T_e} = 0.27586, \delta r_{T_i} = \delta r_{T_e} = \frac{\delta r_{n_0}}{2} = 1.45, \\ \epsilon = 10^{-6}, R = 239.8081535, r_p = \frac{r_{\min} + r_{\max}}{2}, \delta r = \frac{4\delta r_{n_0}}{\delta r_{T_i}}.$$
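
As a sanity check on these definitions, the profiles and the normalization constant $C_{n_0}$ (chosen so that the radial average of $n_0$ over $[r_{\min}, r_{\max}]$ equals 1) can be sketched as follows. This is our illustration of the formulas above, not code from the implementation:

```python
import numpy as np

# MEDIUM-case parameters from the text
r_min, r_max = 0.1, 14.5
r_p = 0.5 * (r_min + r_max)
kappa_n0, kappa_Ti = 0.055, 0.27586
dr_Ti = 1.45
dr_n0 = 2.0 * dr_Ti

r = np.linspace(r_min, r_max, 4001)

def trapezoid(y, x):
    """Composite trapezoid rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def profile(C, kappa, dr):
    """P(r) = C exp(-kappa * dr * tanh((r - r_p)/dr))."""
    return C * np.exp(-kappa * dr * np.tanh((r - r_p) / dr))

# C_{n0} makes the radial average of n0 over [r_min, r_max] equal to one
C_n0 = (r_max - r_min) / trapezoid(profile(1.0, kappa_n0, dr_n0), r)
n0 = profile(C_n0, kappa_n0, dr_n0)
Ti = profile(1.0, kappa_Ti, dr_Ti)   # C_{Ti} = C_{Te} = 1, so Ti(r_p) = 1
```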
+
+We take $B_0 = -1$. We consider the case $\iota = 0$, $n = 1$, $m = 15$, which leads to $k_\parallel = \frac{1}{R}$. We then consider a case $\iota = 0.8$, $n = -11$, $m = 15$, which leads to $k_\parallel = \frac{b_z}{R} (-11 + 0.8 \cdot 15) = \frac{b_z}{R}$. Note that $b_z = \frac{1}{\sqrt{1+c^2}}$, with $0 \le c \le \iota \frac{r_{\max}}{R} \le 0.05$, so that $|b_z - 1| \le 1.25 \cdot 10^{-3}$, and thus the dispersion relation, which depends on $m$ and $k_\parallel$ and not directly on $n$ (see Appendix B), will give almost the same result, which means that both simulations should lead to similar results in the poloidal plane, at least in the linear phase (as is also observed in [9]).
+
+We take LAG5 for the interpolation along the parallel direction and cubic splines for the interpolation along $\theta$. Finite differences of order 6 are used for the derivative computation along the parallel direction and cubic splines are used otherwise.
+
+When $\iota = 0$, we use the classical method with cubic splines for the interpolation along the $z$ direction. The expected behavior is observed on Figures 8 and 9. We see that the poloidal cuts $f(t,r,\theta,z=0,v=0)$ are similar in the linear phase (Figure 8, left) and the corresponding excited modes are clearly visible in the $\theta$-$z$ cut $f(t,r_p,\theta,z,v=0)$ (Figure 9, right). After a while, we see the effect of the nonlinear phase. Note that we have not excited the most unstable mode (which is here $m=10$, $k_\parallel = \frac{3}{R}$). We still observe an alignment of the structures (Figure 9, bottom middle/right), and we see that the poloidal cuts are very similar for $N_z = 32$ or $N_z = 64$. Note that in these figures we use raw data for the visualisation. Since the number of points $N_z$ is purposely low, the corresponding plots in the $\theta$-$z$ plane (Figure 8, middle/right and Figure 9, bottom/right) are necessarily coarse. This is not an indication of numerical problems. Indeed, a better visualisation in this plane can be achieved by reconstructing the distribution function on a finer mesh using the field-aligned interpolation.
+
+# 4 Conclusion and perspectives
+
+We have given some first numerical evidence that the strategy of Hariri-Ottaviani [7] works in the context of semi-Lagrangian gyrokinetic simulations. Validation has been performed on the
\ No newline at end of file
diff --git a/samples/texts/2974107/page_9.md b/samples/texts/2974107/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..b4d380f0b3a02b1c08da2cc2dd022afc26a38637
--- /dev/null
+++ b/samples/texts/2974107/page_9.md
@@ -0,0 +1,61 @@
+analytical test of the constant oblique advection and then on a drift kinetic equation with oblique magnetic field. Extension to the tokamak configuration in toroidal geometry is the next step of this study. Just before submitting the first preprint, we became aware of the paper [8], where first results in this direction are already given.
+
+## Appendix A: Derivation of the model
+
+Denoting by
+
+$$ \mathbf{B}^* = \mathbf{B} + v_{\parallel} \nabla \times \mathbf{b}, \quad B_{\parallel}^* = \mathbf{b} \cdot \mathbf{B}^* = B + v_{\parallel} \nabla \times \mathbf{b} \cdot \mathbf{b}, $$
+
+the gyrokinetic Vlasov equation in cartesian coordinates used in classical simulations is recalled in the review paper by Garbet et al. [4]. In the electrostatic case, it reduces to
+
+$$ \frac{\partial f}{\partial t} + \frac{d\mathbf{X}}{dt} \cdot \nabla_x f + \frac{dV_{\parallel}}{dt} \frac{\partial f}{\partial v_{\parallel}} = 0, $$
+
+with
+
+$$ B_{\parallel}^* \frac{d\mathbf{X}}{dt} = V_{\parallel} \mathbf{B}^* + \mathbf{b} \times (\mu \nabla B + \nabla \langle \phi \rangle), $$
+
+$$ B_{\parallel}^{*} \frac{dV_{\parallel}}{dt} = -\mathbf{B}^{*} \cdot (\mu \nabla B + \nabla \langle \phi \rangle), $$
+
+where $\langle \phi \rangle$ is the gyro-average operator applied to the electrostatic potential $\phi$. We then have, supposing that $\iota = \iota(r)$,
+
+$$ B_{\parallel}^{*} = B + \frac{2c - \iota' R c^2}{1+c^2} v_{\parallel} = B + \frac{c+rc'}{1+c^2} v_{\parallel}. $$
+
+We consider for the moment $\mu = 0$, so that $\langle \phi \rangle = \phi$. We then get
+
+$$
+\begin{aligned}
+B_{\parallel}^{*} \frac{d\mathbf{X}}{dt} &= -\left( (1+c^2)^{1/2} \frac{\partial_{\theta}\phi}{r} + c\,\mathbf{b} \cdot \nabla\phi \right) \hat{\mathbf{r}} + \left( (1+c^2)^{1/2} \partial_r \phi - \frac{c^2}{(1+c^2)^{1/2}} v_{\parallel}^2 \right) \hat{\theta} \\
+&\quad +(B_{\parallel}^{*} v_{\parallel} + \frac{c^3}{1+c^2} v_{\parallel}^2 - c \partial_r \phi) \mathbf{b}
+\end{aligned}
+$$
+
+$$ B_{\parallel}^{*} \frac{dV_{\parallel}}{dt} = -(B_{\parallel}^{*} + \frac{c^3}{1+c^2} v_{\parallel}) \mathbf{b} \cdot \nabla \phi - \frac{c^2 v_{\parallel}}{(1+c^2)^{1/2}} \frac{\partial_{\theta}\phi}{r} $$
+
+We then consider the following intermediate model
+
+$$
+\begin{aligned}
+B_{\parallel}^{*} \frac{d\mathbf{X}}{dt} &= -(1+c^2)^{1/2} \frac{\partial_{\theta}\phi}{r} \hat{\mathbf{r}} + (1+c^2)^{1/2} \partial_r \phi \hat{\theta} + v_{\parallel} B_{\parallel}^{*} \mathbf{b} \\
+\frac{dV_{\parallel}}{dt} &= -\mathbf{b} \cdot \nabla \phi,
+\end{aligned}
+$$
+
+which we further simplify into
+
+$$
+\begin{aligned}
+B \frac{d\mathbf{X}}{dt} &= -\frac{1}{(1+c^2)^{1/2}} \frac{\partial_\theta \phi}{r} \hat{\mathbf{r}} + \frac{1}{(1+c^2)^{1/2}} \partial_r \phi \hat{\theta} + v_\parallel B \mathbf{b} \\
+\frac{dV_\parallel}{dt} &= -\mathbf{b} \cdot \nabla \phi.
+\end{aligned}
+$$
+
+As we have the relation $B = B_0(1+c^2)^{1/2}$, we get the following model
+
+$$
+\begin{aligned}
+\frac{d\mathbf{X}}{dt} &= -\frac{\partial_\theta \phi}{r B_0} \hat{\mathbf{r}} + \frac{\partial_r \phi}{B_0} \hat{\theta} + v_\parallel \mathbf{b} \\
+\frac{dV_\parallel}{dt} &= -\mathbf{b} \cdot \nabla \phi,
+\end{aligned}
+$$
+
+which corresponds to (1), by writing $v_\parallel$ instead of $v$, in order to use a shorter notation.
\ No newline at end of file
diff --git a/samples/texts/348597/page_217.md b/samples/texts/348597/page_217.md
new file mode 100644
index 0000000000000000000000000000000000000000..3cbaccac353fd86daabf4eefb86125e60b38e010
--- /dev/null
+++ b/samples/texts/348597/page_217.md
@@ -0,0 +1,32 @@
+and whose magnitude is the area of the base parallelogram. From the definition of the scalar product, dotting this vector with $\vec{u}$ gives a scalar equal to the area of the parallelepiped's base multiplied by its height, whose magnitude is exactly the volume of the parallelepiped.
+
+The circular permutation property of Eq. (7.52) then has a very simple geometric interpretation: in computing the volume of a parallelepiped, it does not matter which surface we call base.
+
+## MATLAB Representation
+
+The cross product of the vectors $\vec{u} = (u_1, u_2, u_3)$ and $\vec{v} = (v_1, v_2, v_3)$ is found using the `cross(u,v)` command.
+
+The triple scalar product of the vectors $\vec{u}, \vec{v}$, and $\vec{w}$ is found through the `det([u;v;w])` command. Make sure that the vectors passed as arguments to these functions are defined as 3-D vectors, so that the commands work and the results make sense.
+
+### Example 7.8
+
+Given the vectors $\vec{u} = (2, 1, 0)$, $\vec{v} = (0, 3, 0)$, $\vec{w} = (1, 2, 3)$, find the cross product of the separate pairs of these vectors, and the volume of the parallelepiped formed by the three vectors.
+
+**Solution:** Type, execute, and interpret at each step, each of the following commands, using the above definitions:
+
+```
+u=[2 1 0]
+v=[0 3 0]
+w=[1 2 3]
+ucrossv=cross(u,v)
+ucrossw=cross(u,w)
+vcrossw=cross(v,w)
+paralvol=abs(det([u;v;w]))
+paralvol2=abs(cross(u,v)*w')
+```
+
+**Question:** Verify that the last command is an alternate way of writing the volume of the parallelepiped expression.
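
For readers working outside MATLAB, the same session can be reproduced with NumPy. This equivalent is our addition, not part of the text:

```python
import numpy as np

u = np.array([2, 1, 0])
v = np.array([0, 3, 0])
w = np.array([1, 2, 3])

ucrossv = np.cross(u, v)                                   # cross products of the pairs
ucrossw = np.cross(u, w)
vcrossw = np.cross(v, w)
paralvol = abs(np.linalg.det(np.array([u, v, w], float)))  # |det([u; v; w])|: the volume
paralvol2 = abs(np.dot(np.cross(u, v), w))                 # alternate triple-product form
```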
+
+## In-Class Exercises
+
+Pb. 7.10 Compute the shortest distance from New York to London. (Hint: (1) A great circle is the shortest path between two points on a sphere; (2) the angle between the radial unit vectors passing through each of the cities can be obtained from their respective latitude and longitude.)
\ No newline at end of file
diff --git a/samples/texts/348597/page_220.md b/samples/texts/348597/page_220.md
new file mode 100644
index 0000000000000000000000000000000000000000..fe853f72cc9148553c8e44b4d9cbcf2820a96281
--- /dev/null
+++ b/samples/texts/348597/page_220.md
@@ -0,0 +1,25 @@
+$$ \frac{d\vec{R}(t)}{dt} = \frac{dx(t)}{dt}\hat{e}_1 + \frac{dy(t)}{dt}\hat{e}_2 + \frac{dz(t)}{dt}\hat{e}_3 \quad (7.54) $$
+
+and the unit vector tangent to the curve is given by:
+
+$$ \hat{T}(t) = \frac{\frac{d\vec{R}(t)}{dt}}{\left\|\frac{d\vec{R}(t)}{dt}\right\|} \qquad (7.55) $$
+
+This is, of course, the unit vector that is always in the direction of the velocity
+of the particle.
+
+LEMMA
+
+If a vector-valued function $\vec{V}(t)$ has constant length, then its derivative $\frac{d\vec{V}(t)}{dt}$ is
+orthogonal to it.
+
+PROOF The proof of this lemma is straightforward. If the length of the vector is constant, then its dot product with itself is a constant; that is, $\vec{V}(t) \cdot \vec{V}(t) = C$.
+
+Differentiating both sides of this equation gives $\frac{d\vec{V}(t)}{dt} \cdot \vec{V}(t) = 0$, and the orthogonality between the two vectors is thus established.
+
+The tangential unit vector $\hat{T}(t)$ is, by definition, constructed to have unit length. We construct the normal to the curve by taking the unit vector in the direction of the time-derivative of the tangential vector; that is,
+
+$$ \hat{N}(t) = \frac{\frac{d\hat{T}(t)}{dt}}{\left\|\frac{d\hat{T}(t)}{dt}\right\|} \qquad (7.56) $$
+
+The curvature of the curve is
+
+$$ \kappa = \frac{\left| \frac{d\hat{T}(t)}{dt} \right|}{\left| \frac{d\vec{R}(t)}{dt} \right|} \qquad (7.57) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_221.md b/samples/texts/348597/page_221.md
new file mode 100644
index 0000000000000000000000000000000000000000..48c7ad6c2e1a157a170b4b2892ad4ae0cac8a4d9
--- /dev/null
+++ b/samples/texts/348597/page_221.md
@@ -0,0 +1,46 @@
+**Example 7.9**
+
+Find the tangent, normal, and curvature of the trajectory of a particle moving in uniform circular motion of radius *a* and with angular frequency *ω*.
+
+**Solution:** The parametric equation of motion is
+
+$$
+\vec{R}(t) = a \cos(\omega t)\hat{e}_1 + a \sin(\omega t)\hat{e}_2
+$$
+
+The velocity vector is
+
+$$
+\frac{d\vec{R}(t)}{dt} = -a\omega \sin(\omega t)\hat{e}_1 + a\omega \cos(\omega t)\hat{e}_2
+$$
+
+and its magnitude is $a\omega$.
+
+The tangent vector is therefore:
+
+$$
+\hat{T}(t) = -\sin(\omega t)\hat{e}_1 + \cos(\omega t)\hat{e}_2
+$$
+
+The normal vector is
+
+$$
+\hat{N}(t) = -\cos(\omega t)\hat{e}_1 - \sin(\omega t)\hat{e}_2
+$$
+
+The curvature is
+
+$$
+\kappa(t) = \frac{\left|\frac{d\hat{T}(t)}{dt}\right|}{\left|\frac{d\vec{R}(t)}{dt}\right|} = \frac{\left\| -\omega \cos(\omega t)\hat{e}_1 - \omega \sin(\omega t)\hat{e}_2 \right\|}{\left\| -a\omega \sin(\omega t)\hat{e}_1 + a\omega \cos(\omega t)\hat{e}_2 \right\|} = \frac{\omega}{a\omega} = \frac{1}{a} = \text{constant}
+$$
+
+**Homework Problems**
+
+Pb. 7.23 Show that in 2-D the curvature can be written as:
+
+$$
+\kappa = \frac{|x'y'' - y'x''|}{((x')^2 + (y')^2)^{3/2}}
+$$
+
+where the prime refers to the first derivative with respect to time, and the
+double-prime refers to the second derivative with respect to time.
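
A quick numerical check of this formula (our sketch, not part of the exercise) is to apply it to the uniform circular motion of Example 7.9, where the curvature must come out as $1/a$:

```python
import numpy as np

a, omega, t = 2.0, 3.0, 0.7   # arbitrary radius, angular frequency, and instant
# x = a cos(wt), y = a sin(wt) and their first and second time derivatives
xp, yp = -a * omega * np.sin(omega * t), a * omega * np.cos(omega * t)
xpp, ypp = -a * omega**2 * np.cos(omega * t), -a * omega**2 * np.sin(omega * t)

kappa = abs(xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5   # the Pb. 7.23 formula
# kappa equals 1/a, independently of t and omega
```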
\ No newline at end of file
diff --git a/samples/texts/348597/page_223.md b/samples/texts/348597/page_223.md
new file mode 100644
index 0000000000000000000000000000000000000000..cb6cf54cab34eb7626d4323e73ded83ce0871761
--- /dev/null
+++ b/samples/texts/348597/page_223.md
@@ -0,0 +1,33 @@
+$$ \Delta W = P(t) \frac{dx}{dt} \Delta t + Q(t) \frac{dy}{dt} \Delta t = \left( P(t) \frac{dx}{dt} + Q(t) \frac{dy}{dt} \right) \Delta t \quad (7.62) $$
+
+and the total work can be written as an integral over the single variable $t$:
+
+$$ W = \int_{t_0}^{t_1} \left( P(t) \frac{dx}{dt} + Q(t) \frac{dy}{dt} \right) dt \qquad (7.63) $$
+
+## Homework Problems
+
+**Pb. 7.25** How much work is done in moving the particle from the point (0, 0) to the point (3, 9) in the presence of the force $\vec{F}$ along the following two different paths?
+
+a. The parabola $y = x^2$.
+
+b. The line $y = 3x$.
+
+The force is given by:
+
+$$ \vec{F} = xy\hat{e}_x + (x^2 + y^2)\hat{e}_y $$
+
+**Pb. 7.26** Let $\vec{F} = y\hat{e}_x + x\hat{e}_y$. Calculate the work moving from (0, 0) to (1, 1) along each of the following curves:
+
+a. The straight line $y = x$.
+
+b. The parabola $y = x^2$.
+
+c. The curve C described by the parametric equations:
+
+$$ x(t) = t^{3/2} \quad \text{and} \quad y(t) = t^5 $$
+
+A vector field such as the present one, whose line integral is independent of the path chosen between fixed initial and final points, is said to be conservative. In your vector calculus course, you will establish the necessary and sufficient conditions for a vector field to be conservative. The importance of conservative fields lies in the fact that they can be derived from a scalar potential. More will be said about this topic in electromagnetics courses.
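
As an illustration of Eq. (7.63), the work of $\vec{F} = y\hat{e}_x + x\hat{e}_y$ can be evaluated numerically along each of the three paths of Pb. 7.26. This is our sketch in Python rather than MATLAB; since this field is conservative ($\vec{F} = \nabla(xy)$), all three integrals agree:

```python
import numpy as np

def work(x, y, dxdt, dydt, t):
    """W = int (P dx/dt + Q dy/dt) dt for F = y e_x + x e_y, trapezoid rule (Eq. 7.63)."""
    integrand = y(t) * dxdt(t) + x(t) * dydt(t)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))

t = np.linspace(0.0, 1.0, 200001)
W_line = work(lambda t: t, lambda t: t, lambda t: np.ones_like(t), lambda t: np.ones_like(t), t)
W_parab = work(lambda t: t, lambda t: t**2, lambda t: np.ones_like(t), lambda t: 2 * t, t)
W_curve = work(lambda t: t**1.5, lambda t: t**5, lambda t: 1.5 * t**0.5, lambda t: 5 * t**4, t)
# all three equal the potential difference xy|_(1,1) - xy|_(0,0) = 1
```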
+
+## 7.8 Infinite Dimensional Vector Spaces*
+
+This chapter section introduces some preliminary ideas on infinite-dimensional vector spaces. We assume that the components of this vector space are
\ No newline at end of file
diff --git a/samples/texts/348597/page_225.md b/samples/texts/348597/page_225.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c4be9d74a35290131ece82e67b158a90796e98b
--- /dev/null
+++ b/samples/texts/348597/page_225.md
@@ -0,0 +1,27 @@
+* Orthogonality. Two vectors are orthogonal if:
+
+$$ \langle \psi | \varphi \rangle = \int_{t_{\min}}^{t_{\max}} \bar{\psi}(t) \varphi(t) dt = 0 \qquad (7.66) $$
+
+* Basis vectors. Any function in Hilbert space can be expanded in a linear combination of the basis vectors {$u_n$}, such that:
+
+$$ |\varphi\rangle = \sum_n c_n |u_n\rangle \qquad (7.67) $$
+
+and such that the elements of the basis vectors obey the orthonormality relations:
+
+$$ \langle u_m | u_n \rangle = \delta_{m,n} \qquad (7.68) $$
+
+* Decomposition rule. To find the $c_n$'s, we follow the same procedure adopted for finite-dimensional vector spaces; that is, take the inner product of the expansion in Eq. (7.67) with the bra $\langle u_m |$. We obtain, using the orthonormality relations [Eq. (7.68)], the following:
+
+$$ \langle u_m | \varphi \rangle = \sum_n c_n \langle u_m | u_n \rangle = \sum_n c_n \delta_{m,n} = c_m \qquad (7.69) $$
+
+Said differently, $c_m$ is the projection of the ket $|\varphi\rangle$ onto the bra $\langle u_m|$.
+
+* The norm as a function of the components. The norm of a vector can be expressed as a function of its components. Using Eqs. (7.67) and (7.68), we obtain:
+
+$$ \| \varphi \|^{2} = \langle \varphi | \varphi \rangle = \sum_{n} \sum_{m} \bar{c}_{n} c_{m} \langle u_{n} | u_{m} \rangle = \sum_{n} \sum_{m} \bar{c}_{n} c_{m} \delta_{n,m} = \sum_{n} |c_{n}|^{2} \qquad (7.70) $$
+
+Said differently, the norm square of a vector is equal to the sum of the magnitude square of the components.
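
These relations can be illustrated numerically. The sketch below (our illustration, in Python) uses the Fourier basis $u_n(t) = \exp(j2\pi nt)$ on $[0, 1]$ discussed next, projects a test function onto it as in Eq. (7.69), and verifies the norm identity of Eq. (7.70):

```python
import numpy as np

N = 4096
t = np.arange(N) / N
# test vector with known components: c_1 = 2 and c_{-2} = 1 - 3j
phi = 2.0 * np.exp(2j * np.pi * t) + (1.0 - 3.0j) * np.exp(-4j * np.pi * t)

def c(m):
    """c_m = <u_m | phi> = int_0^1 conj(u_m(t)) phi(t) dt (Riemann sum, Eq. 7.69)."""
    return np.mean(np.conj(np.exp(2j * np.pi * m * t)) * phi)

norm2 = np.mean(np.abs(phi) ** 2)                       # <phi | phi>
sum_c2 = sum(abs(c(m)) ** 2 for m in range(-3, 4))      # sum over a few components
# Eq. (7.70): norm2 == sum_c2 == |2|^2 + |1 - 3j|^2 = 14
```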
+
+## Application 1: The Fourier Series
+
+The theory of Fourier series, as covered in your calculus course, states that a function that is periodic, with period equal to 1, in some normalized units can be expanded as a linear combination of the sequence {$\exp(j2\pi nt)$}, where n is an integer that goes from minus infinity to plus infinity. The purpose here is to recast the familiar Fourier series results within the language and notations of the above formalism.
\ No newline at end of file
diff --git a/samples/texts/348597/page_231.md b/samples/texts/348597/page_231.md
new file mode 100644
index 0000000000000000000000000000000000000000..560af133be1b273356921afdbf4278b601c64a52
--- /dev/null
+++ b/samples/texts/348597/page_231.md
@@ -0,0 +1,29 @@
+$$ \int_{-1}^{1} P_m(x) \left\{ \frac{d}{dx} \left[ (1-x^2) \frac{dP_l(x)}{dx} \right] + l(l+1)P_l(x) \right\} dx = 0 \quad (7.100) $$
+
+Integrating the first term by parts, we obtain:
+
+$$ \int_{-1}^{1} \left\{ (x^2 - 1) \frac{dP_m(x)}{dx} \frac{dP_l(x)}{dx} + l(l+1)P_m(x)P_l(x) \right\} dx = 0 \quad (7.101) $$
+
+Similarly, we can write the ODE for $P_m(x)$, and multiply on the left by $P_l(x)$; this results in the equation:
+
+$$ \int_{-1}^{1} \left\{ (x^2 - 1) \frac{dP_l(x)}{dx} \frac{dP_m(x)}{dx} + m(m+1)P_l(x)P_m(x) \right\} dx = 0 \quad (7.102) $$
+
+Now, subtracting Eq. (7.102) from Eq. (7.101), we obtain:
+
+$$ [m(m+1)-l(l+1)] \int_{-1}^{1} P_l(x) P_m(x) dx = 0 \quad (7.103) $$
+
+But because $l \neq m$, this can only be satisfied if the integral is zero, which is the result that we are after.
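
The orthogonality just obtained (and the normalization computed next) can be checked numerically. The following sketch, our illustration, uses NumPy's Legendre module:

```python
from numpy.polynomial.legendre import Legendre

def legendre_inner(l, m):
    """int_{-1}^{1} P_l(x) P_m(x) dx, via the antiderivative of the product polynomial."""
    F = (Legendre.basis(l) * Legendre.basis(m)).integ()
    return F(1.0) - F(-1.0)

# zero for l != m, and N_l^2 = 2/(2l + 1) for l = m
```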
+
+6. Finally, we compute the normalization of the basis functions; that is, compute:
+
+$$ \int_{-1}^{1} P_l(x) P_l(x) dx = N_l^2 \quad (7.104) $$
+
+From Eq. (7.90), we can write:
+
+$$ l P_l(x) - (2l-1)x P_{l-1}(x) + (l-1)P_{l-2}(x) = 0 \quad (7.105) $$
+
+If we multiply this equation by $(2l+1)P_l(x)$ and subtract from it Eq. (7.90), which we multiplied by $(2l+1)P_{l-1}(x)$, we obtain:
+
+$$ \begin{aligned} & l(2l+1)P_l^2(x) + (2l-1)(l-1)P_{l-1}(x)P_{l-2}(x) \\ & - (l+1)(2l-1)P_{l-1}(x)P_{l+1}(x) - l(2l-1)P_{l-1}^2(x) = 0 \end{aligned} \quad (7.106) $$
+
+Now, integrating over the interval $[-1, 1]$ and using Eq. (7.103), we obtain, for $l = 2, 3, \ldots$:
\ No newline at end of file
diff --git a/samples/texts/348597/page_233.md b/samples/texts/348597/page_233.md
new file mode 100644
index 0000000000000000000000000000000000000000..2bfff6762a4055716518e525f3af452ae41df1a5
--- /dev/null
+++ b/samples/texts/348597/page_233.md
@@ -0,0 +1,16 @@
+**Solution:** The conditions for the above theorem are satisfied, and
+
+$$c_l = \left(l + \frac{1}{2}\right) \int_a^1 P_l(x) dx \quad (7.113)$$
+
+From Eq. (7.96), and noting that $P_l(1) = 1$, we find that:
+
+$$c_0 = \frac{1}{2}(1-a) \quad (7.114)$$
+
+and
+
+$$c_l = -\frac{1}{2}[P_{l+1}(a) - P_{l-1}(a)] \quad (7.115)$$
+
+We show in Figure 7.4 the sum of the truncated decomposition for Example 7.10 for different values of $l_{max}$.
+
+FIGURE 7.4
+The plot of the truncated Legendre polynomials expansion of the discontinuous function given by Eq. (7.112), for a = 0.25. Top panel: $l_{max}$ = 4. Middle panel: $l_{max}$ = 8. Bottom panel: $l_{max}$ = 16.
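Readers who want to reproduce the curves of Figure 7.4 without MATLAB can use the following Python sketch (the helper names `legendre`, `c`, and `f_approx` are ours, not the book's): it evaluates $P_l$ by the standard recurrence and sums the truncated expansion with the coefficients of Eqs. (7.114) and (7.115), for a = 0.25:

```python
def legendre(l, x):
    # Bonnet recurrence: (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)
    p_prev, p = 1.0, x
    if l == 0:
        return p_prev
    for n in range(1, l):
        p_prev, p = p, ((2*n + 1) * x * p - n * p_prev) / (n + 1)
    return p

a = 0.25

def c(l):
    # Expansion coefficients from Eqs. (7.114) and (7.115)
    if l == 0:
        return 0.5 * (1 - a)
    return -0.5 * (legendre(l + 1, a) - legendre(l - 1, a))

def f_approx(x, lmax):
    # Truncated Legendre expansion of the discontinuous function
    return sum(c(l) * legendre(l, x) for l in range(lmax + 1))
```

Plotting `f_approx` for increasing `lmax` reproduces the sharpening step, with the Gibbs oscillations near x = a visible in each panel.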
\ No newline at end of file
diff --git a/samples/texts/348597/page_235.md b/samples/texts/348597/page_235.md
new file mode 100644
index 0000000000000000000000000000000000000000..2008e85fa5aec7bf39b7d04ea0f3f9a737d5f582
--- /dev/null
+++ b/samples/texts/348597/page_235.md
@@ -0,0 +1,29 @@
+## 8
+
+### Matrices
+
+#### 8.1 Setting up Matrices
+
+**DEFINITION** A matrix is a collection of numbers arranged in a two-dimensional (2-D) array structure. Each element of the matrix, call it $M_{i,j}$, occupies the *i*-th row and *j*-th column.
+
+$$ M = \begin{bmatrix} M_{11} & M_{12} & M_{13} & \cdots & M_{1n} \\ M_{21} & M_{22} & M_{23} & \cdots & M_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ M_{m1} & M_{m2} & M_{m3} & \cdots & M_{mn} \end{bmatrix} \quad (8.1) $$
+
+We say that **M** is an (*m* × *n*) matrix, which means that it has *m* rows and *n* columns. If *m* = *n*, we call the matrix square. If *m* = 1, the matrix is a row vector; and if *n* = 1, the matrix is a column vector.
+
+##### 8.1.1 Creating Matrices in MATLAB
+
+##### 8.1.1.1 Entering the Elements
+
+In this method, the different elements of the matrix are keyed in; for example:
+
+`M = [1 3 5 7 11; 13 17 19 23 29; 31 37 41 47 53]`
+
+gives
+
+```
+M =
+     1     3     5     7    11
+    13    17    19    23    29
+    31    37    41    47    53
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_236.md b/samples/texts/348597/page_236.md
new file mode 100644
index 0000000000000000000000000000000000000000..3e3d1b20de154727bc4bbeb62085c3f18864c171
--- /dev/null
+++ b/samples/texts/348597/page_236.md
@@ -0,0 +1,49 @@
+To find the size of the matrix (i.e., the number of rows and columns), enter:
+
+`size(M)`
+
+gives
+
+```
+ans =
+ 3 5
+```
+
+To view a particular element, for example, the (2, 4) element, enter:
+
+`M(2,4)`
+
+gives
+
+```
+ans =
+ 23
+```
+
+To view a particular row such as the 3rd row, enter:
+
+`M(3,:)`
+
+gives
+
+```
+ans =
+31 37 41 47 53
+```
+
+To view a particular column such as the 4th column, enter:
+
+`M(:,4)`
+
+gives
+
+```
+ans =
+ 7
+ 23
+ 47
+```
+
+If we wanted to construct a submatrix of the original matrix, for example, one that includes the block from the 2nd to 3rd row (included) and from the 2nd column to the 4th column (included), enter:
+
+`M(2:3,2:4)`
\ No newline at end of file
diff --git a/samples/texts/348597/page_237.md b/samples/texts/348597/page_237.md
new file mode 100644
index 0000000000000000000000000000000000000000..114ed06b19364b251c10051444e05f5357143104
--- /dev/null
+++ b/samples/texts/348597/page_237.md
@@ -0,0 +1,46 @@
+gives
+
+```
+ans =
+ 17 19 23
+ 37 41 47
+```
+
+### 8.1.1.2 Retrieving Special Matrices from the MATLAB Library
+
+MATLAB has some commonly used specialized matrices in its library that can be called as needed. For example:
+
+* The matrix of size ($m \otimes n$) with all elements being zero is `M = zeros(m,n)`;
+
+For example:
+
+`M = zeros(3,4)`
+
+gives
+
+```
+M =
+ 0 0 0 0
+ 0 0 0 0
+ 0 0 0 0
+```
+
+* The matrix of size ($m \otimes n$) with all elements equal to 1 is `N = ones(m,n)`:
+
+For example:
+
+`N = ones(4,3)`
+
+produces
+
+```
+N =
+ 1 1 1
+ 1 1 1
+ 1 1 1
+ 1 1 1
+```
+
+* The matrix of size ($n \otimes n$) with only the diagonal elements equal to one, otherwise zero, is `P = eye(n,n)`:
+
+For example:
\ No newline at end of file
diff --git a/samples/texts/348597/page_238.md b/samples/texts/348597/page_238.md
new file mode 100644
index 0000000000000000000000000000000000000000..2d6041583bf888f0e740acc700a07f6f7d3a9721
--- /dev/null
+++ b/samples/texts/348597/page_238.md
@@ -0,0 +1,50 @@
+`P=eye(4,4)`
+
+gives
+
+`P =`
+
+```
+1 0 0 0
+0 1 0 0
+0 0 1 0
+0 0 0 1
+```
+
+* The matrix of size ($n \otimes n$) with elements randomly chosen from the interval $[0, 1]$ is generated by `rand(n,n)`; for example:
+
+`Q=rand(4,4)`
+
+gives, in one instance:
+
+`Q =`
+
+```
+0.9708 0.4983 0.9601 0.2679
+0.9901 0.2140 0.7266 0.4399
+0.7889 0.6435 0.4120 0.9334
+0.4387 0.3200 0.7446 0.6833
+```
+
+* We can select to extract the upper triangular part of the Q matrix, but assign to all the lower triangle elements the value zero:
+
+`upQ=triu(Q)`
+
+produces
+
+`upQ =`
+
+```
+0.9708 0.4983 0.9601 0.2679
+0 0.2140 0.7266 0.4399
+0 0 0.4120 0.9334
+0 0 0 0.6833
+```
+
+or extract the lower triangular part of the Q matrix, but assign to all the upper triangle elements the value zero:
+
+`loQ=tril(Q)`
+
+produces
+
+`loQ =`
\ No newline at end of file
diff --git a/samples/texts/348597/page_239.md b/samples/texts/348597/page_239.md
new file mode 100644
index 0000000000000000000000000000000000000000..a02f3ce378382f6d9768300ddb96d01ba306459b
--- /dev/null
+++ b/samples/texts/348597/page_239.md
@@ -0,0 +1,34 @@
+```
+loQ =
+0.9708         0         0         0
+0.9901    0.2140         0         0
+0.7889    0.6435    0.4120         0
+0.4387    0.3200    0.7446    0.6833
+```
+
+* The single quotation mark (') after the name of a matrix changes the matrix rows into becoming its columns, and vice versa, if the elements are all real. If the matrix has complex numbers as elements, it also takes their complex conjugate in addition to the transposition.
+
+* Other specialized matrices, including the whole family of sparse matrices, are also included in the MATLAB library. You can find more information about them in the **help** documentation.
+
+### 8.1.1.3 Functional Construction of Matrices
+
+The third method for generating matrices is to give, if it exists, an algorithm that generates each element of the matrix. For example, suppose we want to generate the Hilbert matrix of size ($n \otimes n$), where $n = 4$ and the elements have the functional form $M_{mn} = \frac{1}{m+n}$. The routine for generating this matrix is as follows:
+
+```matlab
+M=zeros(4,4);
+for m=1:4
+ for n=1:4
+ M(m,n)=1/(m+n);
+ end
+end
+M
+```
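The same element-by-element construction can be sketched in plain Python (an illustrative aside, not the book's MATLAB):

```python
n = 4
# build the same matrix with a nested comprehension: row index m, column index k
M = [[1.0 / (m + k) for k in range(1, n + 1)] for m in range(1, n + 1)]
```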
+
+* We can also create new matrices by appending known matrices. For example:
+
+Let the matrices **A** and **B** be given by:
+
+```matlab
+A=[1 2 3 4];
+B=[5 6 7 8];
+```
+
+We want to expand the matrix **A** by the matrix **B** along the horizontal (this is allowed only if both matrices have the same number of rows). Enter:
+
+$$ \mathbf{C} = [\mathbf{A} \ \mathbf{B}] $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_24.md b/samples/texts/348597/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..3c5f499a132237044f2e9d67d60beed83b1d21b1
--- /dev/null
+++ b/samples/texts/348597/page_24.md
@@ -0,0 +1,39 @@
+**elseif** expression 2
+
+Commands 2 evaluated if expression 2 is True
+
+**else if** expression 3
+
+Commands 3 evaluated if expression 3 is True
+
+...
+
+**else**
+
+Commands evaluated if no other expression is True
+
+**end**
+
+In this form, only the commands associated with the first True expression encountered are evaluated; ensuing relational expressions are not tested.
+
+### 1.5.2.1 Alternative Syntax to the if Statement
+
+As an alternative to the `if` syntax, we can use, in certain instances, Boolean expressions to specify an expression in different domains. For example, `(x>=1)` has the value 1 if `x` is larger than or equal to 1 and zero otherwise; and `(x<=h)` is equal to 1 when `x` is smaller than or equal to `h`, and zero otherwise.
+
+The relational operations allowed inside the parentheses are: `==`, `<=`, `>=`, `~=`, `<`, `>`.
+
+#### *Homework Problem*
+
+**Pb. 1.2** For the values of integer *a* going from 1 to 10, using separately the methods of the **if** syntax and the Boolean alternative expressions, find the values of *C* if:
+
+$$C = a^2 \quad \text{for } a < 3$$
+
+$$C = a + 5 \quad \text{for } 3 \le a < 7$$
+
+$$C = a \quad \text{for } a \ge 7$$
+
+Use the `stem` command to graphically show C.
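As an aside, the same Boolean trick works in plain Python, where comparisons evaluate to 0/1 under arithmetic; a sketch of the Pb. 1.2 piecewise function (the helper name `C` is ours):

```python
def C(a):
    # Boolean factors (a < 3) etc. act as 0/1 multipliers, so no if-branches
    return a**2 * (a < 3) + (a + 5) * (3 <= a < 7) + a * (a >= 7)

values = [C(a) for a in range(1, 11)]
```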
+
+## 1.6 Array Operations
+
+In the above examples, we used `for` loops repeatedly. However, this kind of loop-programming is very inefficient and must be avoided as much as possible.
\ No newline at end of file
diff --git a/samples/texts/348597/page_240.md b/samples/texts/348597/page_240.md
new file mode 100644
index 0000000000000000000000000000000000000000..fda549076f07e2d156685e4af8b49bc2114dd72d
--- /dev/null
+++ b/samples/texts/348597/page_240.md
@@ -0,0 +1,59 @@
+gives
+
+$$
+C = \begin{pmatrix}
+1 & 2 & 3 & 4 & 5 & 6 & 7 & 8
+\end{pmatrix}
+$$
+
+Or, we may want to expand **A** by stacking it on top of **B** (this is allowed only if both matrices have the same number of columns). Enter:
+
+$$
+D = [A;B]
+$$
+
+produces
+
+$$
+D = \begin{pmatrix}
+1 & 2 & 3 & 4 \\
+5 & 6 & 7 & 8
+\end{pmatrix}
+$$
+
+We illustrate the appending operations for larger matrices: define **E** as the (2 ⊗ 3) matrix with all elements equal to one, and we desire to append it horizontally to **D**. This is allowed because both have the same number of rows (= 2). Enter:
+
+$$
+E=\text{ones}(2,3)
+$$
+
+produces
+
+$$
+E = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}
+$$
+
+Enter:
+
+$$
+F = [D E]
+$$
+
+produces
+
+$$
+F = \begin{pmatrix}
+1 & 2 & 3 & 4 & 1 & 1 & 1 \\
+5 & 6 & 7 & 8 & 1 & 1 & 1
+\end{pmatrix}
+$$
+
+Or, we may want to stack two matrices in a vertical configuration. This
+requires that the two matrices have the same number of columns. Enter:
+
+$$
+G=\text{ones}(2,4)
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_241.md b/samples/texts/348597/page_241.md
new file mode 100644
index 0000000000000000000000000000000000000000..74bde130328ec4de8f621b7e9f9e008653d218b3
--- /dev/null
+++ b/samples/texts/348597/page_241.md
@@ -0,0 +1,30 @@
+gives
+
+$$G = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}$$
+
+Enter
+
+$$H = [D; G]$$
+
+produces
+
+$$H = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}$$
+
+Finally, the command `sum` applied to a matrix gives a row vector whose *m*th element is the sum of all the elements of the *m*th column of the original matrix.
+For example, entering:
+
+$$sum(H)$$
+
+produces
+
+$$ans = \begin{bmatrix} 8 & 10 & 12 & 14 \end{bmatrix}$$
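The appending and column-sum operations above have direct analogues with plain Python lists; a small illustrative sketch (not part of the original MATLAB material):

```python
D = [[1, 2, 3, 4], [5, 6, 7, 8]]
G = [[1, 1, 1, 1], [1, 1, 1, 1]]

H = D + G                                  # like [D; G]: stack vertically (same column count)
E = [[1, 1, 1], [1, 1, 1]]
F = [d + e for d, e in zip(D, E)]          # like [D E]: append horizontally (same row count)
col_sums = [sum(col) for col in zip(*H)]   # like sum(H): column-wise sums
```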
+
+## 8.2 Adding Matrices
+
+Adding two matrices is only possible if they have equal numbers of rows and equal numbers of columns; or, said differently, they both have the same size.
+
+The addition operation is the obvious one. That is, the (m, n) element of the sum (A+B) is the sum of the (m, n) elements of respectively A and B:
+
+$$(A + B)_{mn} = A_{mn} + B_{mn} \qquad (8.2)$$
+
+Entering
\ No newline at end of file
diff --git a/samples/texts/348597/page_242.md b/samples/texts/348597/page_242.md
new file mode 100644
index 0000000000000000000000000000000000000000..91a5a72cfade8b34cfcba0dcc44924916174d1a5
--- /dev/null
+++ b/samples/texts/348597/page_242.md
@@ -0,0 +1,42 @@
+A = [1 2 3 4];
+
+B = [5 6 7 8];
+
+A+B
+
+produces
+
+ans =
+
+6 8 10 12
+
+If we had subtraction of two matrices, it would be the same syntax as above
+but using the minus sign between the matrices.
+
+## 8.3 Multiplying a Matrix by a Scalar
+
+If we multiply a matrix by a number, each element of the matrix is multiplied by that number.
+
+Entering:
+
+3*A
+
+produces
+
+ans =
+
+3 6 9 12
+
+Entering:
+
+3*(A+B)
+
+produces
+
+ans =
+
+18 24 30 36
+
+## 8.4 Multiplying Matrices
+
+Two matrices $A(m \otimes n)$ and $B(r \otimes s)$ can be multiplied only if $n = r$. The size of the product matrix is $(m \otimes s)$. An element of the product matrix is obtained from those of the constituent matrices through the following rule:
\ No newline at end of file
diff --git a/samples/texts/348597/page_243.md b/samples/texts/348597/page_243.md
new file mode 100644
index 0000000000000000000000000000000000000000..bd32b18c0f9efde50e8bb50d612de9e27d18bb6f
--- /dev/null
+++ b/samples/texts/348597/page_243.md
@@ -0,0 +1,38 @@
+$$(\mathbf{A}\mathbf{B})_{kl} = \sum_{h} A_{kh}B_{hl} \qquad (8.3)$$
+
+This result can be also interpreted by observing that the (k, l) element of the product is the dot product of the k-row of **A** and the l-column of **B**.
+
+In MATLAB, we denote the product of the matrices **A** and **B** by **A**\***B**.
+
+### Example 8.1
+
+Write the different routines for performing the matrix multiplication from the different definitions of the matrix product.
+
+**Solution:** Edit and execute the following script M-file:
+
+```matlab
+D=[1 2 3; 4 5 6];
+E=[3 6 9 12; 4 8 12 16; 5 10 15 20];
+
+F=D*E
+
+F1=zeros(2,4);
+for i=1:2
+ for j=1:4
+ for k=1:3
+ F1(i,j)=F1(i,j)+D(i,k)*E(k,j);
+ end
+ end
+end
+
+F2= zeros(2,4);
+for i=1:2
+ for j=1:4
+ F2(i,j)=D(i,:)*E(:,j);
+ end
+end
+
+F2
+```
+
+The result **F** is the one obtained using the MATLAB built-in matrix multiplication; the result **F1** is that obtained from Eq. (8.3) and **F2** is the answer obtained by performing, for each element of the matrix product, the dot product of the appropriate row from the first matrix with the appropriate col-
\ No newline at end of file
diff --git a/samples/texts/348597/page_244.md b/samples/texts/348597/page_244.md
new file mode 100644
index 0000000000000000000000000000000000000000..43d62a75270db12da8a185e4d76858eb74c7d8c4
--- /dev/null
+++ b/samples/texts/348597/page_244.md
@@ -0,0 +1,36 @@
+umn from the second matrix. Of course, all three results should give the same
+answer, which they do.
+
+## 8.5 Inverse of a Matrix
+
+In this section, we assume that we are dealing with square matrices (n ⊗ n)
+because these are the only class of matrices for which we can define an
+inverse.
+
+**DEFINITION** A matrix **M**⁻¹ is called the inverse of matrix **M** if the following conditions are satisfied:
+
+$$
+\mathbf{M}\mathbf{M}^{-1} = \mathbf{M}^{-1}\mathbf{M} = \mathbf{I} \qquad (8.4)
+$$
+
+(The identity matrix is the (n ⊗ n) matrix with ones on the diagonal and zero everywhere else; the matrix eye(n, n) in MATLAB.)
+
+**EXISTENCE** The existence of an inverse of a matrix hinges on the condition that the determinant of this matrix is non-zero [det(M) in MATLAB]. We leave the proof of this theorem to future courses in linear algebra. For now, the formula for generating the value of the determinant is given here.
+
+* The determinant of a square matrix **M**, of size (n ⊗ n), is a number equal to:
+
+$$
+\det(\mathbf{M}) = \sum_{P} (-1)^{P} M_{1k_1} M_{2k_2} M_{3k_3} \dots M_{nk_n} \quad (8.5)
+$$
+
+where the sum runs over all *n*! permutations *P* of the first *n* integers. The sign in front of each term is positive if the number of transpositions relating
+
+(1,2,3,...,n) and (k₁,k₂,k₃,...,kₙ)
+
+is even, and negative otherwise.
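Equation (8.5) can be implemented literally, one term per permutation; a Python sketch (exponentially slow, so only suitable for small matrices; the helper name `det` is ours):

```python
from itertools import permutations

def det(M):
    # Permutation expansion of Eq. (8.5): one product per permutation,
    # signed by its parity (here computed as the inversion count)
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        term = (-1) ** inversions
        for row, col in enumerate(perm):
            term *= M[row][col]
        total += term
    return total
```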
+
+### Example 8.2
+
+Using the definition for a determinant, as given in Eq. (8.5), find the expression for the determinant of a (2 ⊗ 2) and a (3 ⊗ 3) matrix.
\ No newline at end of file
diff --git a/samples/texts/348597/page_245.md b/samples/texts/348597/page_245.md
new file mode 100644
index 0000000000000000000000000000000000000000..94768d749089a2ea1f1fce7c9f63844dc74f300a
--- /dev/null
+++ b/samples/texts/348597/page_245.md
@@ -0,0 +1,41 @@
+**Solution:**
+
+a. If $n = 2$, there are only two possibilities for permuting these two numbers, giving the following: (1, 2) and (2, 1). In the first permutation, no transposition was necessary; that is, the multiplying factor in Eq. (8.5) is 1. In the second term, one transposition is needed; that is, the multiplying factor in Eq. (8.5) is -1, giving for the determinant the value:
+
+$$ \Delta = M_{11}M_{22}-M_{12}M_{21} \quad (8.6) $$
+
+b. If $n = 3$, there are only six permutations for the sequence (1, 2, 3): namely, (1, 2, 3), (2, 3, 1), and (3, 1, 2), each of which is an even permutation and (3, 2, 1), (2, 1, 3), and (1, 3, 2), which are odd permutations, thereby giving for the determinant the value:
+
+$$ \begin{aligned} \Delta = & M_{11}M_{22}M_{33} + M_{12}M_{23}M_{31} + M_{13}M_{21}M_{32} \\ & - (M_{13}M_{22}M_{31} + M_{12}M_{21}M_{33} + M_{11}M_{23}M_{32}) \end{aligned} \quad (8.7) $$
+
+**MATLAB Representation**
+
+Compute the determinant and the inverse of the matrices **M** and **N**, as keyed below:
+
+```matlab
+M=[1 3 5; 7 11 13; 17 19 23]
+detM=det(M)
+invM=inv(M)
+```
+
+gives
+
+```
+detM =
+   -84
+
+invM =
+   -0.0714   -0.3095    0.1905
+   -0.7143    0.7381   -0.2619
+    0.6429   -0.3810    0.1190
+```
+
+and
+
+```matlab
+N=[2 4 6; 3 5 7; 5 9 13]
+detN=det(N)
+invN=inv(N)
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_246.md b/samples/texts/348597/page_246.md
new file mode 100644
index 0000000000000000000000000000000000000000..ebd3d59e897288d9e956f560df0efae15b092da8
--- /dev/null
+++ b/samples/texts/348597/page_246.md
@@ -0,0 +1,50 @@
+produces
+
+```
+detN =
+     0
+
+invN =
+```
+
+Warning: Matrix is close to singular or badly scaled.
+
+## *Homework Problems*
+
+**Pb. 8.1** As earlier defined, a square matrix in which all elements above (below) the diagonal are zeros is called a lower (upper) triangular matrix. Show that the determinant of a triangular $n \otimes n$ matrix is
+
+$$
+\det(\mathbf{T}) = T_{11}T_{22}T_{33} \dots T_{nn}
+$$
+
+**Pb. 8.2** If M is an *n* ⊗ *n* matrix and *k* is a constant, show that:
+
+$$
+\det(kM) = k^n \det(M)
+$$
+
+**Pb. 8.3** Assuming the following result, which will be proven to you in linear algebra courses:
+
+$$
+\det(\mathbf{MN}) = \det(\mathbf{M}) \times \det(\mathbf{N})
+$$
+
+Prove that if the inverse of the matrix *M* exists, then:
+
+$$
+\det(\mathbf{M}^{-1}) = \frac{1}{\det(\mathbf{M})}
+$$
+
+## **8.6 Solving a System of Linear Equations**
+
+Let us assume that we have a system of *n* linear equations in *n* unknowns that we want to solve:
+
+$$
+\begin{align}
+& M_{11} x_1 + M_{12} x_2 + M_{13} x_3 + \dots + M_{1n} x_n = b_1 \notag \\
+& M_{21} x_1 + M_{22} x_2 + M_{23} x_3 + \dots + M_{2n} x_n = b_2 \tag{8.8} \\
+& \vdots \notag \\
+& M_{n1} x_1 + M_{n2} x_2 + M_{n3} x_3 + \dots + M_{nn} x_n = b_n \notag
+\end{align}
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_247.md b/samples/texts/348597/page_247.md
new file mode 100644
index 0000000000000000000000000000000000000000..e542bcecc2d6754a81b3ac5a1b458fb2f59215a9
--- /dev/null
+++ b/samples/texts/348597/page_247.md
@@ -0,0 +1,63 @@
+The above equations can be readily written in matrix notation:
+
+$$
+\begin{bmatrix}
+M_{11} & M_{12} & M_{13} & \cdots & M_{1n} \\
+M_{21} & M_{22} & M_{23} & \cdots & M_{2n} \\
+\vdots & \vdots & \vdots & \ddots & \vdots \\
+\vdots & \vdots & \vdots & \cdots & \vdots \\
+M_{n1} & M_{n2} & M_{n3} & \cdots & M_{nn}
+\end{bmatrix}
+\begin{bmatrix}
+x_1 \\ x_2 \\ \vdots \\ x_n
+\end{bmatrix}
+=
+\begin{bmatrix}
+b_1 \\ b_2 \\ \vdots \\ b_n
+\end{bmatrix}
+\quad (8.9)
+$$
+
+or
+
+$$
+MX = B \tag{8.10}
+$$
+
+where the columns of *b*'s and *x*'s are denoted by **B** and **X**. Multiplying, on the left, both sides of this matrix equation by **M**⁻¹, we find that:
+
+$$
+X = M^{-1}B \tag{8.11}
+$$
+
+As pointed out previously, remember that the condition for the existence of solutions is a non-zero value for the determinant of M.
+
+**Example 8.3**
+
+Use MATLAB to solve the system of equations given by:
+
+$$
+\begin{align*}
+x_1 + 3x_2 + 5x_3 &= 22 \\
+7x_1 + 11x_2 - 13x_3 &= -10 \\
+17x_1 + 19x_2 - 23x_3 &= -14
+\end{align*}
+$$
+
+Solution: Edit and execute the following script M-file:
+
+```matlab
+M=[1 3 5; 7 11 -13; 17 19 -23];
+B=[22;-10;-14];
+detM=det(M);
+invM=inv(M);
+X=inv(M)*B
+```
+
+Verify that the vector X could also have been obtained using the backslash (left-division) notation: X=M\B.
+
+NOTE In this and the immediately preceding chapter sections, we said very little about the algorithm used for computing essentially the inverse of a matrix. This is a subject that will be amply covered in your linear algebra courses. What the interested reader needs to know at this stage is that the
\ No newline at end of file
diff --git a/samples/texts/348597/page_248.md b/samples/texts/348597/page_248.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f2de9415c3156cf68349ab452a8c5cd8ac53e23
--- /dev/null
+++ b/samples/texts/348597/page_248.md
@@ -0,0 +1,57 @@
+Gaussian elimination technique (and its different refinements) is essentially the numerical method of choice for the built-in algorithms of numerical software, including MATLAB. The following two examples are essential building blocks in such constructions.
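To make the technique concrete, here is a Python sketch (our own naming, not MATLAB's built-in routine) of Gaussian elimination with partial pivoting, applied to the system of Example 8.3:

```python
def gauss_solve(M, b):
    # Gaussian elimination with partial pivoting, then back substitution
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]          # augmented matrix [M | b]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, n):
            factor = A[r][k] / A[k][k]
            for c in range(k, n + 1):
                A[r][c] -= factor * A[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (A[k][n] - sum(A[k][j] * x[j] for j in range(k + 1, n))) / A[k][k]
    return x

x = gauss_solve([[1, 3, 5], [7, 11, -13], [17, 19, -23]], [22, -10, -14])
```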
+
+**Example 8.4**
+
+Without using the MATLAB inverse command, solve the system of equations:
+
+$$
+LX = B \tag{8.12}
+$$
+
+where L is a lower triangular matrix.
+
+Solution: In matrix form, the system of equations to be solved is
+
+$$
+\begin{bmatrix}
+L_{11} & 0 & 0 & \cdots & 0 \\
+L_{21} & L_{22} & 0 & \cdots & 0 \\
+\vdots & \vdots & \vdots & \ddots & \vdots \\
+\vdots & \vdots & \vdots & \ddots & \vdots \\
+L_{n1} & L_{n2} & L_{n3} & \cdots & L_{nn}
+\end{bmatrix}
+\begin{bmatrix}
+x_1 \\ x_2 \\ \vdots \\ x_n
+\end{bmatrix}
+=
+\begin{bmatrix}
+b_1 \\ b_2 \\ \vdots \\ b_n
+\end{bmatrix}
+\quad (8.13)
+$$
+
+The solution of this system can be directly obtained if we proceed iteratively.
+That is, we find in the following order: x₁, x₂, ..., xₙ, obtaining:
+
+$$
+\begin{align*}
+x_1 &= \frac{b_1}{L_{11}} \\
+x_2 &= \frac{(b_2 - L_{21}x_1)}{L_{22}} \\
+&\vdots \\
+x_k &= \frac{\left( b_k - \sum_{j=1}^{k-1} L_{kj}x_j \right)}{L_{kk}}
+\end{align*}
+\tag{8.14}
+$$
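Equation (8.14) translates almost line for line into code; a Python sketch (the helper name `forward_sub` is ours, and the test matrix is an arbitrary illustration):

```python
def forward_sub(L, b):
    # Eq. (8.14): solve L x = b for a lower-triangular L, in the order x_1, ..., x_n
    n = len(b)
    x = [0.0] * n
    for k in range(n):
        partial = sum(L[k][j] * x[j] for j in range(k))
        x[k] = (b[k] - partial) / L[k][k]
    return x

x = forward_sub([[1, 0, 0], [2, 1, 0], [3, 4, 1]], [1, 4, 14])
```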
+
+The above solution can be implemented by executing the following script
+M-file:
+
+```matlab
+L=[ ];    % enter the L matrix
+b=[ ];    % enter the B column
+n=length(b);
+x=zeros(n,1);
+```
+
+© 2001 by CRC Press LLC
\ No newline at end of file
diff --git a/samples/texts/348597/page_249.md b/samples/texts/348597/page_249.md
new file mode 100644
index 0000000000000000000000000000000000000000..2d7d0ef4f7530f3d58109d49be4b44bd06cea62f
--- /dev/null
+++ b/samples/texts/348597/page_249.md
@@ -0,0 +1,59 @@
+```matlab
+x(1)=b(1)/L(1,1);
+for k=2:n
+    x(k)=(b(k)-L(k,1:k-1)*x(1:k-1))/L(k,k);
+end
+x
+```
+
+**Example 8.5**
+
+Solve the system of equations: **UX** = **B**, where **U** is an upper triangular matrix.
+
+*Solution:* The matrix form of the problem becomes:
+
+$$
+\begin{bmatrix}
+U_{11} & U_{12} & U_{13} & \cdots & U_{1n} \\
+0 & U_{22} & U_{23} & \cdots & U_{2n} \\
+\vdots & \vdots & \vdots & \ddots & \vdots \\
+0 & 0 & \cdots & U_{n-1,n-1} & U_{n-1,n} \\
+0 & 0 & \cdots & \cdots & U_{nn}
+\end{bmatrix}
+\begin{bmatrix}
+x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n
+\end{bmatrix}
+=
+\begin{bmatrix}
+b_1 \\ b_2 \\ \vdots \\ b_{n-1} \\ b_n
+\end{bmatrix}
+\quad (8.15)
+$$
+
+In this case, the solution of this system can also be directly obtained if we proceed iteratively, but this time in the backward order $x_n, x_{n-1}, \ldots, x_1$, obtaining:
+
+$$
+\begin{align*}
+x_n &= \frac{b_n}{U_{nn}} \\
+x_{n-1} &= \frac{(b_{n-1} - U_{n-1,n} x_n)}{U_{n-1,n-1}} \\
+&\vdots \\
+x_k &= \frac{\left( b_k - \sum_{j=k+1}^{n} U_{kj} x_j \right)}{U_{kk}}
+\end{align*}
+\tag{8.16}
+$$
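Equation (8.16) likewise translates directly; a Python sketch (the helper name `back_sub` is ours, and the test matrix is an arbitrary illustration):

```python
def back_sub(U, b):
    # Eq. (8.16): solve U x = b for an upper-triangular U, in the order x_n, ..., x_1
    n = len(b)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        partial = sum(U[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - partial) / U[k][k]
    return x

x = back_sub([[1, 2, 3], [0, 1, 4], [0, 0, 1]], [14, 14, 3])
```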
+
+The corresponding *script M-file* is
+
+```matlab
+U = []; % enter the U matrix
+b = []; % enter the B column
+n = length(b);
+x = zeros(n, 1);
+x(n) = b(n) / U(n,n);
+for k=n-1:-1:1
+```
+
\ No newline at end of file
diff --git a/samples/texts/348597/page_25.md b/samples/texts/348597/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..7d31cad665df397b35d828c53ce70acbfee96fa7
--- /dev/null
+++ b/samples/texts/348597/page_25.md
@@ -0,0 +1,43 @@
+ble in MATLAB. In fact, ideally, a good MATLAB program will always minimize the use of loops because MATLAB is an interpreted language, not a compiled one. As a result, any looping process is very inefficient. Nevertheless, at times we use the **for** loops, when necessitated by pedagogical reasons. To understand array operations more clearly, consider the following:
+
+a=1:3 % a starts at 1, goes to 3 in increments of 1.
+
+If the increment is not 1, you must specify the increment; for example:
+
+b=2:2:6 % b starts at 2, goes to 6 in increments of 2
+
+To distinguish arrays operations from either operations on scalars or on
+matrices, the symbol for multiplication becomes `.*`, that of division `./`, and
+that of exponentiation `.^`. Thus, for example:
+
+c=a.*b % takes every element of a and multiplies
+ % it by the element of b in the same array location
+
+Similarly, for exponentiation and division:
+
+d=a.^b
+
+e=a./b
+
+If you try to use the regular scalar operations symbols, you will get an error
+message.
+
+Note that array operations such as the above require that the two arrays have the same length (i.e., the same number of elements). To verify that two arrays have the same number of elements (dimension), use the `length` command. Thus, to find the length of `a` and `b`, enter:
+
+length(a)
+
+length(b)
+
+**NOTE** The expression `x=linspace(0,10,200)` is also the generator for an x-array with first element equal to 0, a last element equal to 10, and having 200 equally spaced points between 0 and 10. Here, the number of points rather than the increment is specified; that is, `length(x)=200`.
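For comparison, the same element-wise operations and the `linspace` construction can be sketched in plain Python (illustrative only; MATLAB's vectorized forms remain the point of this section):

```python
a = [1, 2, 3]
b = [2, 4, 6]
c = [x * y for x, y in zip(a, b)]   # element-wise product, like a.*b
e = [x / y for x, y in zip(a, b)]   # element-wise division, like a./b

def linspace(start, stop, num):
    # num equally spaced points from start to stop, endpoints included
    step = (stop - start) / (num - 1)
    return [start + k * step for k in range(num)]

x = linspace(0, 10, 200)
```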
+
+## 1.7 Curve and Surface Plotting
+
+Review the sections of the Supplement pertaining to lines, quadratic functions, and trigonometric functions before proceeding further.
\ No newline at end of file
diff --git a/samples/texts/348597/page_250.md b/samples/texts/348597/page_250.md
new file mode 100644
index 0000000000000000000000000000000000000000..59657436c215c1a8cd73dca947b8432ab5fa01cf
--- /dev/null
+++ b/samples/texts/348597/page_250.md
@@ -0,0 +1,28 @@
+```matlab
+    x(k)=(b(k)-U(k,k+1:n)*x(k+1:n))/U(k,k);
+end
+x
+```
+
+## 8.7 Application of Matrix Methods
+
+This section provides seven representative applications that illustrate the immense power that matrix formulation and tools can provide to diverse problems of common interest in electrical engineering.
+
+### 8.7.1 dc Circuit Analysis
+
+#### Example 8.6
+
+Find the voltages and currents for the circuit given in Figure 8.1.
+
+**FIGURE 8.1**
+Circuit of Example 8.6.
+
+**Solution:** Using Kirchhoff's current and voltage laws and Ohm's law, we can write the following equations for the voltages and currents in the circuit, assuming that $R_L = 2\Omega$:
+
+$$V_1 = 5$$
+
+$$V_1 - V_2 = 50I_1$$
+
+$$V_2 - V_3 = 100I_2$$
+
+$$V_2 = 300I_3$$
+
+$$V_3 = 2I_2$$
+
+$$I_1 = I_2 + I_3$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_251.md b/samples/texts/348597/page_251.md
new file mode 100644
index 0000000000000000000000000000000000000000..8db7a6feb75fa2735ca0a40db23474ba8d61ba76
--- /dev/null
+++ b/samples/texts/348597/page_251.md
@@ -0,0 +1,27 @@
+NOTE These equations can be greatly simplified if we use the method of elimination of variables. This is essentially the method of nodes analysis covered in circuit theory courses. At this time, our purpose is to show a direct numerical method for obtaining the solutions.
+
+If we form the column vector **VI**, whose top three components are the voltages V₁, V₂, V₃ and whose bottom three components are the currents I₁, I₂, I₃, then the following script M-file provides the solution to the above circuit:
+
+```matlab
+M=[1 0 0 0 0 0;1 -1 0 -50 0 0;0 1 -1 0 -100 0;...
+   0 1 0 0 0 -300;0 0 1 0 -2 0;0 0 0 1 -1 -1];
+Vs=[5;0;0;0;0;0];
+VI=M\Vs
+```
+
+**In-Class Exercise**
+
+**Pb. 8.4** Use the same technique as shown in Example 8.6 to solve for the potentials and currents in the circuit given in Figure 8.2.
+
+FIGURE 8.2
+Circuit of Pb. 8.4.
+
+### 8.7.2 dc Circuit Design
+
+In design problems, we are usually faced with the reverse problem of the direct analysis problem, such as the one solved in Section 8.7.1.
+
+#### Example 8.7
+
+Find the value of the lamp resistor in Figure 8.1 such that the current flowing through it equals a value specified *a priori*.
+
+**Solution:** We approach this problem by defining a function file for the relevant current. In this case, it is
\ No newline at end of file
diff --git a/samples/texts/348597/page_252.md b/samples/texts/348597/page_252.md
new file mode 100644
index 0000000000000000000000000000000000000000..36a9fbad019c18cda65be8025baf23e4139b0659
--- /dev/null
+++ b/samples/texts/348597/page_252.md
@@ -0,0 +1,28 @@
+```matlab
+function ilamp=circuit872(RL)
+M=[1 0 0 0 0 0;1 -1 0 -50 0 0;0 1 -1 0 -100 0;...
+   0 1 0 0 0 -300;0 0 1 0 -RL 0;0 0 0 1 -1 -1];
+Vs=[5;0;0;0;0;0];
+VI=M\Vs;
+ilamp=VI(5);
+```
+
+Then, from the command window, we proceed by calling this function and plotting the current in the lamp as a function of the resistance. We then graphically read off the value of $R_L$ that gives the desired current value.
+
+## In-Class Exercise
+
+**Pb. 8.5** For the circuit of Figure 8.1, find $R_L$ that gives a 22-mA current in the lamp. (*Hint:* Plot the current as a function of the load resistor.)
+
+### 8.7.3 ac Circuit Analysis
+
+Conceptually, there is no difference between performing an ac steady-state analysis of a circuit with purely resistive elements, as was done in Subsection 8.7.1, and performing the analysis for a circuit that includes capacitors and inductors, if we adopt the tool of impedance introduced in Section 6.8, and we write the circuit equations instead with phasors. The only modification from an all-resistors circuit is that matrices now have complex numbers as elements, and the impedances have frequency dependence. For convenience, we illustrate again the relationships of the voltage-current phasors across resistors, inductors, and capacitors:
+
+$$ \tilde{V}_R = \tilde{I}R \quad (8.17) $$
+
+$$ \tilde{V}_L = \tilde{I}(j\omega L) \quad (8.18) $$
+
+$$ \tilde{V}_C = \frac{\tilde{I}}{(j\omega C)} \quad (8.19) $$
+
+and restate Kirchhoff's laws:
+
+* Kirchhoff's voltage law: The sum of all voltage drops around a closed loop is balanced by the sum of all voltage sources around the same loop.
\ No newline at end of file
diff --git a/samples/texts/348597/page_253.md b/samples/texts/348597/page_253.md
new file mode 100644
index 0000000000000000000000000000000000000000..8881105f115a86e20206f629c4f8803159a05b55
--- /dev/null
+++ b/samples/texts/348597/page_253.md
@@ -0,0 +1,24 @@
+* Kirchhoff's current law: The algebraic sum of all currents entering (exiting) a circuit node must be zero.
+
+**In-Class Exercise**
+
+Pb. 8.6 In a bridged-T filter, the voltage $V_s(t)$ is the input voltage, and the output voltage is that across the load resistor $R_L$. The circuit is given in Figure 8.3.
+
+FIGURE 8.3
+Bridged-T filter. Circuit of Pb. 8.6.
+
+Assuming that $R_1 = R_2 = 3 \, \Omega$, $R_L = 2 \, \Omega$, $C = 0.25 \, F$, and $L = 1 \, H$:
+
+a. Write the equations for the phasors of the voltages and currents.
+
+b. Form the matrix representation for the equations found in part (a).
+
+c. Plot the magnitude and phase of $\frac{\tilde{V}_{\text{out}}}{\tilde{V}_s}$ as a function of the frequency.
+
+d. Compare the results obtained in part (c) with the analytical results of the problem, given by:
+
+$$ \frac{\tilde{V}_{\text{out}}}{\tilde{V}_s} = \frac{N(\omega)}{D(\omega)} $$
+
+$$ N(\omega) = R_2 R_L (R_1 + R_2) + j\omega R_2^2 (L + C R_1 R_L) $$
+
+$$ D(\omega) = R_2 [R_1 R_L + R_2 R_L - \omega^2 L C R_1 (R_2 + R_L)] + j\omega[L(R_1 R_2 + R_1 R_L + R_2 R_L) + C R_1 R_2^2 R_L] $$
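As a quick sanity check on the analytical expressions quoted in part (d), the following Python sketch (the helper name `H` is ours) evaluates $N(\omega)/D(\omega)$ with the given component values. At $\omega = 0$ both expressions reduce to $R_2 R_L (R_1 + R_2)$, so the filter should pass DC unattenuated:

```python
# component values from Pb. 8.6
R1, R2, RL, C, L = 3.0, 3.0, 2.0, 0.25, 1.0

def H(w):
    # transfer function N(w)/D(w) exactly as quoted in part (d)
    N = R2 * RL * (R1 + R2) + 1j * w * R2**2 * (L + C * R1 * RL)
    D = (R2 * (R1 * RL + R2 * RL - w**2 * L * C * R1 * (R2 + RL))
         + 1j * w * (L * (R1 * R2 + R1 * RL + R2 * RL) + C * R1 * R2**2 * RL))
    return N / D

h0 = H(0.0)   # both N(0) and D(0) equal R2*RL*(R1+R2) = 36
```

Sampling `H(w)` over a grid of frequencies gives the magnitude and phase curves requested in part (c).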
\ No newline at end of file
diff --git a/samples/texts/348597/page_254.md b/samples/texts/348597/page_254.md
new file mode 100644
index 0000000000000000000000000000000000000000..f749c540be4f0e3ddd9cac0b208f1ba46415052c
--- /dev/null
+++ b/samples/texts/348597/page_254.md
@@ -0,0 +1,32 @@
+### 8.7.4 Accuracy of a Truncated Taylor Series
+
+In this subsection and Subsection 8.7.5, we illustrate the use of matrices as a convenient constructional tool for stating and manipulating problems with two indices. In this application, we want to verify the accuracy of the truncated
+
+Taylor series $S = \sum_{n=0}^{N} \frac{x^n}{n!}$ as an approximation to the function $y = \exp(x)$, over the interval $0 \le x < 1$.
+
+Because this application's purpose is to illustrate a constructional scheme, we write the code lines as we are proceeding with the different computational steps:
+
+1. We start by dividing the (0, 1) interval into equally spaced segments. This array is given by:
+
+$$x=[0:0.01:1];$$
+
+$$M=\text{length}(x);$$
+
+2. Assume that we are truncating the series at the value $N = 10$:
+
+$$N=10;$$
+
+3. Construct the matrix W having the following form:
+
+$$W = \begin{bmatrix}
+1 & x_1 & \frac{x_1^2}{2!} & \frac{x_1^3}{3!} & \cdots & \frac{x_1^N}{N!} \\
+1 & x_2 & \frac{x_2^2}{2!} & \frac{x_2^3}{3!} & \cdots & \frac{x_2^N}{N!} \\
+1 & x_3 & \frac{x_3^2}{2!} & \frac{x_3^3}{3!} & \cdots & \frac{x_3^N}{N!} \\
+\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
+\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
+1 & x_M & \frac{x_M^2}{2!} & \frac{x_M^3}{3!} & \cdots & \frac{x_M^N}{N!}
+\end{bmatrix} \quad (8.20)$$
+
+Specify the size of W, and then give the induction rule to go from one column to the next:
+
+$$W(i, j) = x(i) * \frac{W(i, j-1)}{j-1} \qquad (8.21)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_255.md b/samples/texts/348597/page_255.md
new file mode 100644
index 0000000000000000000000000000000000000000..a30a80c1ec447f27036b59f7192d01914a194925
--- /dev/null
+++ b/samples/texts/348597/page_255.md
@@ -0,0 +1,28 @@
+This is implemented in the code as follows:
+
+```matlab
+W=ones(M,N+1);          % columns correspond to powers 0 through N of Eq. (8.20)
+for i=1:M
+  for j=2:N+1
+    W(i,j)=x(i)*W(i,j-1)/(j-1);
+  end
+end
+```
+
+4. The value of the truncated series at a specific point is the sum of the row elements corresponding to its index; however, since the MATLAB command `sum` acting on a matrix adds the column elements, we take the sum of the adjoint (the matrix obtained, for real elements, by changing the rows to columns and vice versa) of **W** to obtain our result. Consequently, add to the code:
+
+```matlab
+serexp=sum(W');
+```
+
+5. Finally, compare the values of the truncated series with those of the exponential function:
+
+```matlab
+y=exp(x);
+plot(x,serexp,x,y,'--')
+```
+
+In examining the plot resulting from executing the above instructions, we observe that the truncated series gives a very good approximation to the exponential over the whole interval.
+
+If you would also like to check the error of the approximation as a function of *x*, enter:
+
+```matlab
+dy=abs(y-serexp);
+semilogy(x,dy)
+```
+
+Examining the output graph, you will find, as expected, that the error increases with an increase in the value of *x*. However, the approximation of the exponential by the partial sum of the first ten elements of the truncated Taylor series is accurate over the whole domain considered, to an accuracy of better than one part per million.
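The "better than one part per million" claim can be checked directly. The following Python sketch (a cross-language stand-in for the matrix construction above) builds each partial sum with the same term recursion as Eq. (8.21) and measures the worst-case error over the interval:

```python
import math

def taylor_exp(x, N=10):
    # partial sum sum_{n=0}^{N} x^n/n!, using the same column
    # recursion as Eq. (8.21): term_n = x * term_{n-1} / n
    term, total = 1.0, 1.0
    for n in range(1, N + 1):
        term *= x / n
        total += term
    return total

xs = [i / 100 for i in range(101)]                 # x = 0:0.01:1
errors = [abs(math.exp(x) - taylor_exp(x, 10)) for x in xs]
max_err = max(errors)                              # worst case, at x = 1
```

As expected, the error grows with `x` and stays comfortably below $10^{-6}$ for $N = 10$.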
+
+**Question:** Could you have estimated the maximum error in the above computed value of *dy* by evaluating the first neglected term in the Taylor's series at *x* = 1?
\ No newline at end of file
diff --git a/samples/texts/348597/page_256.md b/samples/texts/348597/page_256.md
new file mode 100644
index 0000000000000000000000000000000000000000..4d084f3d917ae57959fefcc267bf3b4873f18be6
--- /dev/null
+++ b/samples/texts/348597/page_256.md
@@ -0,0 +1,41 @@
+*In-Class Exercise*
+
+**Pb. 8.7** Verify the accuracy of truncating at the fifth element the following Taylor series, in a domain that you need to specify, so the error is everywhere less than one part in 10,000:
+
+$$\text{a.}\quad \ln(1+x) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{x^n}{n}$$
+
+$$\text{b.}\quad \sin(x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}$$
+
+$$\text{c.}\quad \cos(x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!}$$
+
+### 8.7.5 Reconstructing a Function from Its Fourier Components
+
+From the results of Section 7.9, where we discussed the Fourier series, it is a
+simple matter to show that any even periodic function with period 2π can be
+written in the form of a cosine series, and that an odd periodic function can
+be written in the form of a sine series of the fundamental frequency and its
+higher harmonics.
+
+Knowing the coefficients of its Fourier series, we would like to plot the
+function over a period. The purpose of the following example is two-fold:
+
+1. On the mechanistic side, to illustrate again the setting up of a two indices problem in a matrix form.
+
+2. On the mathematical contents side, examining the effects of truncating a Fourier series on the resulting curve.
+
+**Example 8.8**
+
+Plot $y(x) = \sum_{k=1}^{M} C_k \cos(kx)$, if $C_k = \frac{(-1)^k}{k^2 + 1}$. Choose successively for $M$ the values 5, 20, and 40.
+
+Solution: Edit and execute the following script M-file:
+
+```matlab
+M= ;        % choose M = 5, 20, or 40
+p=500;
+k=1:M;
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_257.md b/samples/texts/348597/page_257.md
new file mode 100644
index 0000000000000000000000000000000000000000..7685e52104046b6910a49b12d30c2f88563ea553
--- /dev/null
+++ b/samples/texts/348597/page_257.md
@@ -0,0 +1,31 @@
+```matlab
+n=0:p;
+x=(2*pi/p)*n;
+a=cos((2*pi/p)*n'*k);
+c=((-1).^k)./(k.^2+1);
+y=a*c';
+plot(x,y)
+axis([0 2*pi -1 1.2])
+```
+
+Draw in your notebook the approximate shape of the resulting curve for different values of M.
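For comparison, the partial sum of Example 8.8 can be sketched in Python with an explicit sum rather than the matrix product (a cross-language sketch; `fourier_partial` is our own helper name):

```python
import math

def fourier_partial(x, M):
    # y(x) = sum_{k=1}^{M} C_k cos(kx), with C_k = (-1)^k / (k^2 + 1)
    return sum(((-1) ** k / (k ** 2 + 1)) * math.cos(k * x)
               for k in range(1, M + 1))

# sample one period exactly as the script above does (p = 500 points)
p = 500
xs = [2 * math.pi * n / p for n in range(p + 1)]
y40 = [fourier_partial(x, 40) for x in xs]        # the M = 40 curve
```

The MATLAB script computes the same sums for all sample points at once via the matrix-vector product `a*c'`; the loop form here trades that efficiency for transparency.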
+
+## In-Class Exercises
+
+**Pb. 8.8** For different values of the cutoff, plot the resulting curves for the functions given by the following Fourier series:
+
+$$y_1(x) = \frac{8}{\pi^2} \sum_{k=1}^{\infty} \left( \frac{1}{(2k-1)^2} \right) \cos((2k-1)x)$$
+
+$$y_2(x) = \frac{4}{\pi} \sum_{k=1}^{\infty} \left( \frac{(-1)^{k-1}}{(2k-1)} \right) \cos((2k-1)x)$$
+
+$$y_3(x) = \frac{2}{\pi} \sum_{k=1}^{\infty} \frac{1}{(2k-1)} \sin((2k-1)x)$$
+
+**Pb. 8.9** The purpose of this problem is to explore the Gibbs phenomenon. This phenomenon occurs as a result of truncating the Fourier series of a discontinuous function. Examine, for example, this phenomenon in detail for the function $y_3(x)$ given in **Pb. 8.8**.
+
+The function under consideration is given analytically by:
+
+$$y_3(x) = \begin{cases} 0.5 & \text{for } 0 < x < \pi \\ -0.5 & \text{for } \pi < x < 2\pi \end{cases}$$
+
+a. Find the value where the truncated Fourier series overshoots the value of 0.5. (Answer: The limiting value of this first maximum is 0.58949).
+
+b. Find the limiting value of the first local minimum. (Answer: The limiting value of this first minimum is 0.45142).
\ No newline at end of file
diff --git a/samples/texts/348597/page_258.md b/samples/texts/348597/page_258.md
new file mode 100644
index 0000000000000000000000000000000000000000..9cf4ec25d7343df1b50218250301b73e97ff6902
--- /dev/null
+++ b/samples/texts/348597/page_258.md
@@ -0,0 +1,30 @@
+c. Derive, from first principles, the answers to parts (a) and (b). (Hint: Look up in a standard integral table the sine integral function.)
+
+**NOTE** An important goal of filter theory is to find methods to smooth these kinds of oscillations.
+
+### 8.7.6 Interpolating the Coefficients of an (n - 1)-degree Polynomial from n Points
+
+The problem at hand can be posed as follows:
+
+Given the coordinates of n points: $(x_1, y_1)$, $(x_2, y_2)$, ..., $(x_n, y_n)$, we want to find the polynomial of degree $(n-1)$, denoted by $p_{n-1}(x)$, whose curve passes through these points.
+
+Let us assume that the polynomial has the following form:
+
+$$p_{n-1}(x) = a_1 + a_2 x + a_3 x^2 + \dots + a_n x^{n-1} \quad (8.22)$$
+
+From a knowledge of the column vectors **X** and **Y**, we can formulate this problem in the standard linear system form. In particular, in matrix form, we can write:
+
+$$\mathbf{V} * \mathbf{A} = \begin{bmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ \vdots \\ a_n \end{bmatrix} = \mathbf{Y} \quad (8.23)$$
+
+Knowing the matrix **V** and the column **Y**, it is then a trivial matter to deduce the column **A**:
+
+$$\mathbf{A} = \mathbf{V}^{-1} * \mathbf{Y} \quad (8.24)$$
+
+What remains to be done is to generate in an efficient manner the matrix **V** using the column vector **X** as input. We note the following recursion relation for the elements of **V**:
+
+$$V(k, j) = x(k) * V(k, j-1) \quad (8.25)$$
+
+Furthermore, the first column of **V** has all its elements equal to 1.
+The following routine computes **A**:
+
+© 2001 by CRC Press LLC
\ No newline at end of file
diff --git a/samples/texts/348597/page_259.md b/samples/texts/348597/page_259.md
new file mode 100644
index 0000000000000000000000000000000000000000..38c2b0e53aad2a2d5c3590d08ee44f7b7346a154
--- /dev/null
+++ b/samples/texts/348597/page_259.md
@@ -0,0 +1,27 @@
+```matlab
+X = [x1; x2; x3; ...; xn];
+Y = [y1; y2; y3; ...; yn];
+n = length(X);
+V = ones(n,n);
+for j=2:n
+  V(:,j) = X.*V(:,j-1);
+end
+A = V \ Y
+```
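The same routine can be sketched in pure Python, with a small Gaussian-elimination helper standing in for MATLAB's backslash operator (the data points here are hypothetical, taken from $y = 1 + x + x^2$):

```python
def solve(M, y):
    # naive Gaussian elimination with partial pivoting for M x = y
    n = len(M)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

X = [0.0, 1.0, 2.0]          # hypothetical points on y = 1 + x + x^2
Y = [1.0, 3.0, 7.0]
n = len(X)
# build V column by column via the recursion of Eq. (8.25)
V = [[1.0] * n for _ in range(n)]
for j in range(1, n):
    for k in range(n):
        V[k][j] = X[k] * V[k][j - 1]
A_coeffs = solve(V, Y)       # polynomial coefficients a_1, a_2, a_3
```

For the points above the recovered coefficients are all 1, as they must be for $p_2(x) = 1 + x + x^2$.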
+
+*In-Class Exercises*
+
+Find the polynomials that are defined through:
+
+Pb. 8.10 The points (1, 5), (2, 11), and (3, 19).
+
+Pb. 8.11 The points (1, 8), (2, 39), (3, 130), (4, 341), and (5, 756).
+
+### 8.7.7 Least Square Fit of Data
+
+In Section 8.7.6, we found the polynomial of degree ($n-1$) that was uniquely determined by the coordinates of $n$ points on its curve. However, when data fitting is the tool used by experimentalists to verify a theoretical prediction, many more points than the minimum are measured in order to minimize the effects of random errors generated in the acquisition of the data. But this over-determination in the system parameters faces us with the dilemma of what confidence level one gives to the accuracy of specific data points, and which data points to accept or reject. *A priori*, one takes all data points, and resorts to a determination of the vector $\mathbf{A}$ whose corresponding polynomial comes closest to all the experimental points. Closeness is defined through the Euclidean distance between the experimental points and the predicted curve. This method for minimizing the sum of the square of the Euclidean distance between the optimal curve and the experimental points is referred to as the least-square fit of the data.
+
+To have a geometrical understanding of what we are attempting to do, consider the conceptually analogous problem in 3-D of having to find the plane with the least total square distance from five given data points. So what do we do? Using the projection procedure derived in Chapter 7, we deduce each point's distance from the plane; then we go ahead and adjust the parameters of the plane equation to obtain the smallest total square distance between the points and the plane. In linear algebra courses, using generalized optimiza-
\ No newline at end of file
diff --git a/samples/texts/348597/page_26.md b/samples/texts/348597/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..8a80c16d4c34be31f8eecd782a2d3a22880a94c1
--- /dev/null
+++ b/samples/texts/348597/page_26.md
@@ -0,0 +1,35 @@
+### 1.7.1 x-y Parametric Plot
+
+Now edit another M-file called `myline.m` as follows and execute it.
+
+```matlab
+N=10;
+for m=1:N
+ x(m)=m;
+ y(m)=2*m+3;
+end
+plot(x,y)
+```
+
+After executing the *M-file* using `myline`, you should see a straight line connecting the points (1, 5) and (10, 23). This demonstration shows the basic construct for creating two arrays and plotting the points with their *x*-coordinate from a particular location in one array and their *y*-coordinate from the same location in the second array. We say that the **plot** command here plotted the *y*-array vs. the *x*-array.
+
+We note that the points are connected by a continuous line making a smooth curve; we say that the program graphically interpolated the discrete points into a continuous curve. If we desire to see additionally the individual points corresponding to the values of the arrays, the last command should be changed to:
+
+`plot(x,y,x,y,'o')`
+
+#### Example 1.10
+
+Plot the two curves $y_1 = 2x + 3$ and $y_2 = 4x + 3$ on the same graph.
+
+*Solution:* Edit and execute the following *script M-file*:
+
+```matlab
+for m=1:10
+  x(m)=m;
+  y1(m)=2*m+3;
+  y2(m)=4*m+3;
+end
+plot(x,y1,x,y2)
+```
+
+or, better, the vectorized form:
+
+```matlab
+m=1:10;
+x=m;
+y1=2*m+3;
+y2=4*m+3;
+plot(x,y1,x,y2)
+```
+
+Finally, note that you can separate graphs in one figure window. This is done using the `subplot` function in MATLAB. The arguments of the subplot function are `subplot(m,n,p)`, where m is the number of rows partitioning the graph, n is the number of columns, and p is the particular subgraph chosen (enumerated through the left to right, top to bottom convention).
\ No newline at end of file
diff --git a/samples/texts/348597/page_260.md b/samples/texts/348597/page_260.md
new file mode 100644
index 0000000000000000000000000000000000000000..2044a9bfa7e4865b5efb27c3b2756953d34a6038
--- /dev/null
+++ b/samples/texts/348597/page_260.md
@@ -0,0 +1,46 @@
+tion techniques, you will be shown that the best fit to **A** (i.e., the one called the least-square fit) is given (using the notation of the previous subsection) by:
+
+$$
+A_N = (V^T V)^{-1} V^T Y \quad (8.26)
+$$
+
+A MATLAB routine to fit a number of (n) points to a polynomial of order
+(m - 1) now reads:
+
+```matlab
+X=[x1;x2;x3;...;xn];
+Y=[y1;y2;y3;...;yn];
+n = length(X);
+m= ;       % (m-1) is the degree of the polynomial
+V=ones(n,m);
+for j=2:m
+  V(:,j)=X.*V(:,j-1);
+end
+AN=inv(V'*V)*(V'*Y)
+```
+
+MATLAB also has a built-in command for the least-square fit of data. Look up the **polyfit** function in the help documentation, learn its use, and note how its notation differs from that of the above routine.
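For the special case of a straight-line fit ($m = 2$), the normal equations of Eq. (8.26) reduce to a 2 × 2 system that can be solved in closed form. The sketch below (hypothetical data lying near $y = 2 + 3x$) mirrors the routine above:

```python
# hypothetical measurements scattered around the line y = 2 + 3x
X = [0.0, 1.0, 2.0, 3.0, 4.0]
Y = [2.1, 4.9, 8.1, 11.0, 13.9]
n = len(X)

# V has columns (1, x); the normal equations (V'V) A = V'Y become
# [ n    sum(x)  ] [a1]   [ sum(y)  ]
# [sum(x) sum(x^2)] [a2] = [ sum(xy) ]
sx  = sum(X)
sxx = sum(x * x for x in X)
vy  = sum(Y)
vxy = sum(x * y for x, y in zip(X, Y))
det = n * sxx - sx * sx          # determinant of V'V
a1 = (vy * sxx - sx * vxy) / det     # intercept
a2 = (n * vxy - sx * vy) / det       # slope
```

With the data above the fit gives intercept 2.06 and slope 2.97, close to the underlying 2 and 3; `polyfit(X,Y,1)` in MATLAB returns the same pair (in high-to-low power order).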
+
+**In-Class Exercise**
+
+**Pb. 8.12** Find the second-degree polynomial that best fits the data points: (1, 8.1), (2, 24.8), (3, 52.5), (4, 88.5), (5, 135.8), and (6, 193.4).
+
+## 8.8 Eigenvalues and Eigenvectors*
+
+*DEFINITION* If $\mathbf{M}$ is a square $n \times n$ matrix, then a vector $|\mathbf{v}\rangle$ is called an eigenvector and $\lambda$, a scalar, is called an eigenvalue, if they satisfy the relation:
+
+$$
+\mathbf{M}|\mathbf{v}\rangle = \lambda|\mathbf{v}\rangle \tag{8.27}
+$$
+
+that is, the vector M|v⟩ is a scalar multiplied by the vector |v⟩.
\ No newline at end of file
diff --git a/samples/texts/348597/page_261.md b/samples/texts/348597/page_261.md
new file mode 100644
index 0000000000000000000000000000000000000000..0bac5f119568f6dfadbb0798de44fa343380468a
--- /dev/null
+++ b/samples/texts/348597/page_261.md
@@ -0,0 +1,31 @@
+### 8.8.1 Finding the Eigenvalues of a Matrix
+
+To find the eigenvalues, note that the above definition of eigenvectors and eigenvalues can be rewritten in the following form:
+
+$$ (M - \lambda I)|v\rangle = 0 \quad (8.28) $$
+
+where $I$ is the $n \times n$ identity matrix. The above set of homogeneous equations admits a nontrivial solution only if the determinant of the matrix multiplying the vector $|v\rangle$ is zero. Therefore, the eigenvalues are the roots of the polynomial $p(\lambda)$, defined as follows:
+
+$$ p(\lambda) = \det(M - \lambda I) \quad (8.29) $$
+
+This equation is called the characteristic equation of the matrix $M$. It is of degree $n$ in $\lambda$. (This last assertion can be proven by noting that the contribution to the determinant of $(M - \lambda I)$, coming from the product of the diagonal elements of this matrix, contributes a factor of $\lambda^n$ to the expression of the determinant.)
+
+**Example 8.9**
+
+Find the eigenvalues and the eigenvectors of the matrix $M$, defined as follows:
+
+$$ M = \begin{pmatrix} 2 & 4 \\ 1/2 & 3 \end{pmatrix} $$
+
+**Solution:** The characteristic polynomial for this matrix is given by:
+
+$$ p(\lambda) = (2-\lambda)(3-\lambda) - (4)(1/2) = \lambda^2 - 5\lambda + 4 $$
+
+The roots of this polynomial (i.e., the eigenvalues of the matrix) are, respectively,
+
+$$ \lambda_1 = 1 \quad \text{and} \quad \lambda_2 = 4 $$
+
+To find the eigenvectors corresponding to the above eigenvalues, which we shall denote respectively by $|v_1\rangle$ and $|v_2\rangle$, we must satisfy the following two equations separately:
+
+$$ \begin{pmatrix} 2 & 4 \\ 1/2 & 3 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = 1 \begin{pmatrix} a \\ b \end{pmatrix} $$
+
+and
\ No newline at end of file
diff --git a/samples/texts/348597/page_262.md b/samples/texts/348597/page_262.md
new file mode 100644
index 0000000000000000000000000000000000000000..88877dc0aee9115d0ee4f4c33879ab9c2e9c211a
--- /dev/null
+++ b/samples/texts/348597/page_262.md
@@ -0,0 +1,44 @@
+$$
+\begin{pmatrix} 2 & 4 \\ 1/2 & 3 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = 4 \begin{pmatrix} c \\ d \end{pmatrix}
+$$
+
+From the first set of equations, we deduce that: b = -a/4; and from the second
+set of equations that d = c/2, thus giving for the eigenvectors |v₁⟩ and |v₂⟩, the
+following expressions:
+
+$$
+|v_1\rangle = a \begin{pmatrix} -1 \\ 1/4 \end{pmatrix}
+$$
+
+$$
+|v_2\rangle = c \begin{pmatrix} -1 \\ -1/2 \end{pmatrix}
+$$
+
+It is common to give the eigenvectors in the normalized form (that is, fix *a* and *c* to make $\langle v_1 | v_1 \rangle = \langle v_2 | v_2 \rangle = 1$), thus giving for $|v_1\rangle$ and $|v_2\rangle$ the normalized values:
+
+$$
+|v_1\rangle = \sqrt{\frac{16}{17}} \begin{pmatrix} -1 \\ 1/4 \end{pmatrix} = \begin{pmatrix} -0.9701 \\ 0.2425 \end{pmatrix}
+$$
+
+$$
+|v_2\rangle = \sqrt{\frac{4}{5}} \begin{pmatrix} -1 \\ -1/2 \end{pmatrix} = \begin{pmatrix} -0.8944 \\ -0.4472 \end{pmatrix}
+$$
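For a 2 × 2 matrix the whole calculation of Example 8.9 fits in a few lines: the characteristic polynomial is a quadratic, and (assuming the off-diagonal entry $b \neq 0$) an eigenvector for $\lambda$ is $(b, \lambda - a)$. A Python sketch, with helper names of our own choosing:

```python
import math

# M = [[2, 4], [1/2, 3]] from Example 8.9
a, b, c, d = 2.0, 4.0, 0.5, 3.0

# characteristic polynomial: lam^2 - (a+d) lam + (ad - bc) = 0
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2     # 1 and 4

def eigvec(lam):
    # from (a - lam) v1 + b v2 = 0: direction (b, lam - a), normalized;
    # eigenvectors are only defined up to an overall sign
    v = (b, lam - a)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)

v1, v2 = eigvec(lam1), eigvec(lam2)
```

Up to an overall sign, `v1` and `v2` reproduce the normalized columns (−0.9701, 0.2425) and (−0.8944, −0.4472) found above.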
+
+### 8.8.2 Finding the Eigenvalues and Eigenvectors Using MATLAB
+
+Given a matrix M, the MATLAB command to find the eigenvectors and
+eigenvalues is given by [V,D]=eig(M); the columns of V are the eigen-
+vectors and D is a diagonal matrix whose elements are the eigenvalues. Enter-
+ing the matrix M and the eigensystem commands gives:
+
+V =
+-0.9701 -0.8944
+ 0.2425 -0.4472
+
+D =
+ 1 0
+ 0 4
+
+Finding the matrices V and D is referred to as diagonalizing the matrix M. It
+should be noted that this is not always possible. For example, the matrix is
+not diagonalizable when one or more of the roots of the characteristic poly-
\ No newline at end of file
diff --git a/samples/texts/348597/page_263.md b/samples/texts/348597/page_263.md
new file mode 100644
index 0000000000000000000000000000000000000000..6fea77e56f527710cefbd9f67bf733153e2a83bc
--- /dev/null
+++ b/samples/texts/348597/page_263.md
@@ -0,0 +1,49 @@
+nomial is degenerate (i.e., repeated). In courses of linear algebra, you will study the necessary and
+sufficient conditions for **M** to be diagonalizable.
+
+**In-Class Exercises**
+
+**Pb. 8.13** Show that if **M**|**v**⟩ = λ|**v**⟩, then **M**ⁿ|**v**⟩ = λⁿ|**v**⟩. That is, the eigenvalues of **M**ⁿ are λⁿ; however, the eigenvectors |**v**⟩'s remain the same as those of **M**.
+Verify this theorem using the choice in Example 8.9 for the matrix **M**.
+
+**Pb. 8.14** Find the eigenvalues of the upper triangular matrix:
+
+$$
+T = \begin{pmatrix}
+1/4 & 0 & 0 \\
+-1 & 1/2 & 0 \\
+2 & -3 & 1
+\end{pmatrix}
+$$
+
+Generalize your result to prove analytically that the eigenvalues of any trian-
+gular matrix are its diagonal elements. (Hint: Use the previously derived
+result in Pb. 8.1 for the expression of the determinant of a triangular matrix.)
+
+**Pb. 8.15** A general theorem, which will be proven to you in linear algebra courses, states that if a matrix is diagonalizable, then, using the above notation:
+
+$$
+VDV^{-1} = M
+$$
+
+Verify this theorem for the matrix $\mathbf{M}$ of Example 8.9.
+
+a. Using this theorem, show that:
+
+$$
+\det(\mathbf{M}) = \det(\mathbf{D}) = \prod_{i=1}^{n} \lambda_i
+$$
+
+b. Also show that:
+
+$$
+VD^n V^{-1} = M^n
+$$
+
+c. Apply this theorem to compute the matrix $M^5$, for the matrix $M$ of Example 8.9.
+
+**Pb. 8.16** Find the non-zero eigenvalues of the 2 × 2 matrix **A** that satisfies the equation:
+
+$$
+A = A^3
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_264.md b/samples/texts/348597/page_264.md
new file mode 100644
index 0000000000000000000000000000000000000000..53a128c59f348f8a6661410638bf72136015d8c5
--- /dev/null
+++ b/samples/texts/348597/page_264.md
@@ -0,0 +1,39 @@
+## Homework Problems
+
+The function of a matrix can formally be defined through a Taylor series expansion. For example, the exponential of a matrix $\mathbf{M}$ can be defined through:
+
+$$ \exp(\mathbf{M}) = \sum_{n=0}^{\infty} \frac{\mathbf{M}^n}{n!} $$
+
+**Pb. 8.17** Use the results from **Pb. 8.15** to show that:
+
+$$ \exp(\mathbf{M}) = \mathbf{V} \exp(\mathbf{D}) \mathbf{V}^{-1} $$
+
+where, for any diagonal matrix:
+
+$$ \exp\begin{pmatrix}
+\lambda_1 & 0 & \cdots & 0 \\
+0 & \lambda_2 & \cdots & 0 \\
+\vdots & \vdots & \ddots & \vdots \\
+0 & 0 & \cdots & \lambda_n
+\end{pmatrix}
+=
+\begin{pmatrix}
+\exp(\lambda_1) & 0 & \cdots & 0 \\
+0 & \exp(\lambda_2) & \cdots & 0 \\
+\vdots & \vdots & \ddots & \vdots \\
+0 & 0 & \cdots & \exp(\lambda_n)
+\end{pmatrix} $$
+
+**Pb. 8.18** Using the results from **Pb. 8.17**, we deduce a direct technique for solving the initial value problem for any system of coupled linear ODEs with constant coefficients.
+
+Find and plot the solutions in the interval $0 \le t \le 1$ for the following set of ODEs:
+
+$$ \frac{dx_1}{dt} = x_1 + 2x_2 $$
+
+$$ \frac{dx_2}{dt} = 2x_1 - 2x_2 $$
+
+with the initial conditions: $x_1(0) = 1$ and $x_2(0) = 3$. (Hint: The solution of $\frac{dX}{dt} = AX$ is $X(t) = e^{At}X(0)$, where $X$ is a time-dependent vector and $A$ is a time-independent matrix.)
+
+**Pb. 8.19** MATLAB has a shortcut for computing the exponential of a matrix. While the command `exp(M)` takes the exponential of each element of the matrix, the command `expm(M)` computes the matrix exponential. Verify your results for **Pb. 8.18** using this built-in function.
\ No newline at end of file
diff --git a/samples/texts/348597/page_265.md b/samples/texts/348597/page_265.md
new file mode 100644
index 0000000000000000000000000000000000000000..2aacc4ffadf1b3be3bbf54ceeb81ec77785964e7
--- /dev/null
+++ b/samples/texts/348597/page_265.md
@@ -0,0 +1,35 @@
+## 8.9 The Cayley-Hamilton and Other Analytical Techniques*
+
+In Section 8.8, we presented the general techniques for computing the eigenvalues and eigenvectors of square matrices, and showed their power in solving systems of coupled linear differential equations. In this section, we add to our analytical tools arsenal some techniques that are particularly powerful when elegant solutions are desired in low-dimensional problems. We start with the Cayley-Hamilton theorem.
+
+### 8.9.1 Cayley-Hamilton Theorem
+
+The matrix $\mathbf{M}$ satisfies its own characteristic equation.
+
+PROOF As per Eq. (8.29), the characteristic equation for a matrix is given by:
+
+$$p(\lambda) = \det(\mathbf{M} - \lambda\mathbf{I}) = 0 \quad (8.30)$$
+
+Let us now form the polynomial of the matrix $\mathbf{M}$ having the same coefficients as that of the characteristic equation, $p(\mathbf{M})$. Using the result from **Pb. 8.15**, and assuming that the matrix is diagonalizable, we can write for this polynomial:
+
+$$p(\mathbf{M}) = \mathbf{V}p(\mathbf{D})\mathbf{V}^{-1} \quad (8.31)$$
+
+where
+
+$$p(\mathbf{D}) = \begin{pmatrix}
+p(\lambda_1) & 0 & \cdots & \cdots & 0 \\
+0 & p(\lambda_2) & & & 0 \\
+\vdots & \vdots & \ddots & & \vdots \\
+\vdots & \vdots & & p(\lambda_{n-1}) & 0 \\
+0 & 0 & \cdots & 0 & p(\lambda_n)
+\end{pmatrix} \quad (8.32)$$
+
+However, we know that $\lambda_1, \lambda_2, \dots, \lambda_{n-1}, \lambda_n$ are all roots of the characteristic equation. Therefore,
+
+$$p(\lambda_1) = p(\lambda_2) = \dots = p(\lambda_{n-1}) = p(\lambda_n) = 0 \quad (8.33)$$
+
+thus giving:
+
+$$p(\mathbf{D}) = 0 \quad (8.34)$$
+
+$$\Rightarrow p(\mathbf{M}) = 0 \quad (8.35)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_266.md b/samples/texts/348597/page_266.md
new file mode 100644
index 0000000000000000000000000000000000000000..9fc1ca420ffc1730d852af0731196380ed54b975
--- /dev/null
+++ b/samples/texts/348597/page_266.md
@@ -0,0 +1,36 @@
+**Example 8.10**
+
+Using the Cayley-Hamilton theorem, find the inverse of the matrix $\mathbf{M}$ given in Example 8.9.
+
+*Solution:* The characteristic equation for this matrix is given by:
+
+$$p(\mathbf{M}) = \mathbf{M}^2 - 5\mathbf{M} + 4\mathbf{I} = 0$$
+
+Now multiply this equation by $\mathbf{M}^{-1}$ to obtain:
+
+$$\mathbf{M} - 5\mathbf{I} + 4\mathbf{M}^{-1} = 0$$
+
+and
+
+$$\Rightarrow \mathbf{M}^{-1} = 0.25(5\mathbf{I} - \mathbf{M}) = \begin{pmatrix} \frac{3}{4} & -1 \\ -\frac{1}{8} & \frac{1}{2} \end{pmatrix}$$
+
+**Example 8.11**
+
+Reduce the following fourth-order polynomial in $\mathbf{M}$, where $\mathbf{M}$ is given in Example 8.9, to a first-order polynomial in $\mathbf{M}$:
+
+$$P(\mathbf{M}) = \mathbf{M}^4 + \mathbf{M}^3 + \mathbf{M}^2 + \mathbf{M} + \mathbf{I}$$
+
+*Solution:* From the results of Example 8.10, we have:
+
+$$\begin{align*}
+\mathbf{M}^2 &= 5\mathbf{M} - 4\mathbf{I} \\
+\mathbf{M}^3 &= 5\mathbf{M}^2 - 4\mathbf{M} = 5(5\mathbf{M} - 4\mathbf{I}) - 4\mathbf{M} = 21\mathbf{M} - 20\mathbf{I} \\
+\mathbf{M}^4 &= 21\mathbf{M}^2 - 20\mathbf{M} = 21(5\mathbf{M} - 4\mathbf{I}) - 20\mathbf{M} = 85\mathbf{M} - 84\mathbf{I} \\
+\therefore P(\mathbf{M}) &= 112\mathbf{M} - 107\mathbf{I}
+\end{align*}$$
+
+Verify the answer numerically using MATLAB.
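The verification can also be sketched outside MATLAB. The following Python fragment (the 2 × 2 matrix helpers are our own) checks both that **M** satisfies its characteristic equation and that $P(\mathbf{M}) = 112\mathbf{M} - 107\mathbf{I}$:

```python
# 2x2 matrix helpers on plain nested lists
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(s, A):
    return [[s * A[i][j] for j in range(2)] for i in range(2)]

M = [[2.0, 4.0], [0.5, 3.0]]
I = [[1.0, 0.0], [0.0, 1.0]]

# Cayley-Hamilton: p(M) = M^2 - 5M + 4I must be the zero matrix
pM = add(add(matmul(M, M), scale(-5, M)), scale(4, I))

# Example 8.11: M^4 + M^3 + M^2 + M + I reduces to 112M - 107I
M2 = matmul(M, M)
M3 = matmul(M2, M)
M4 = matmul(M3, M)
P = add(add(add(add(M4, M3), M2), M), I)
R = add(scale(112, M), scale(-107, I))
```

Both checks come out exactly: `pM` is the zero matrix and `P` equals `R` entry by entry.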
+
+### 8.9.2 Solution of Equations of the Form $\frac{dX}{dt} = AX$
+
+We sketched a technique in Pb. 8.17 that uses the eigenvectors matrix and solves this equation. In Example 8.12, we solve the same problem using the Cayley-Hamilton technique.
\ No newline at end of file
diff --git a/samples/texts/348597/page_267.md b/samples/texts/348597/page_267.md
new file mode 100644
index 0000000000000000000000000000000000000000..a48bb6c7089332fc1167f6da7125d6f161b2968d
--- /dev/null
+++ b/samples/texts/348597/page_267.md
@@ -0,0 +1,50 @@
+**Example 8.12**
+
+Using the Cayley-Hamilton technique, solve the system of equations:
+
+$$
+\begin{align*}
+\frac{dx_1}{dt} &= x_1 + 2x_2 \\
+\frac{dx_2}{dt} &= 2x_1 - 2x_2
+\end{align*}
+$$
+
+with the initial conditions: $x_1(0) = 1$ and $x_2(0) = 3$
+
+*Solution:* The matrix **A** for this system is given by:
+
+$$
+\mathbf{A} = \begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix}
+$$
+
+and the solution of this system is given by:
+
+$$
+X(t) = e^{At} X(0)
+$$
+
+Given that $\mathbf{A}$ is a $2 \times 2$ matrix, we know from the Cayley-Hamilton result that the exponential function of $\mathbf{A}$ can be written as a first-order polynomial in $\mathbf{A}$; thus:
+
+$$
+P(\mathbf{A}) = e^{\mathbf{A}t} = a\mathbf{I} + b\mathbf{A}
+$$
+
+To determine *a* and *b*, we note that the polynomial equation holds as well for
+the eigenvalues of **A**, which are equal to −3 and 2; therefore:
+
+$$
+e^{-3t} = a - 3b \\
+e^{2t} = a + 2b
+$$
+
+giving:
+
+$$
+a = \frac{2}{5} e^{-3t} + \frac{3}{5} e^{2t}
+$$
+
+$$
+b = \frac{1}{5} e^{2t} - \frac{1}{5} e^{-3t}
+$$
+
+and
\ No newline at end of file
diff --git a/samples/texts/348597/page_268.md b/samples/texts/348597/page_268.md
new file mode 100644
index 0000000000000000000000000000000000000000..2db48b1ebe4ebe884448f245c8e6363500f62029
--- /dev/null
+++ b/samples/texts/348597/page_268.md
@@ -0,0 +1,29 @@
+$$e^{At} = \begin{pmatrix} \frac{1}{5}e^{-3t} + \frac{4}{5}e^{2t} & \frac{2}{5}e^{2t} - \frac{2}{5}e^{-3t} \\ \frac{2}{5}e^{2t} - \frac{2}{5}e^{-3t} & \frac{4}{5}e^{-3t} + \frac{1}{5}e^{2t} \end{pmatrix}$$
+
+Therefore, the solution of the system of equations is
+
+$$X(t) = \begin{pmatrix} 2e^{2t} - e^{-3t} \\ e^{2t} + 2e^{-3t} \end{pmatrix}$$
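The closed-form solution above can be spot-checked numerically: it must satisfy the initial condition and the differential system itself. A Python sketch (helper names ours), using a central finite difference for the derivative:

```python
import math

def X(t):
    # closed-form solution of Example 8.12
    return (2 * math.exp(2 * t) - math.exp(-3 * t),
            math.exp(2 * t) + 2 * math.exp(-3 * t))

def rhs(t):
    # right-hand side A X of the system
    x1, x2 = X(t)
    return (x1 + 2 * x2, 2 * x1 - 2 * x2)

# approximate dX/dt by a central difference and compare with A X
h = 1e-6
for t in (0.0, 0.3, 0.7):
    xa, xb = X(t + h), X(t - h)
    num = ((xa[0] - xb[0]) / (2 * h), (xa[1] - xb[1]) / (2 * h))
    ax = rhs(t)
```

At each sample point the finite-difference derivative agrees with $\mathbf{A}\mathbf{X}$ to the accuracy of the difference scheme, and $X(0) = (1, 3)$ exactly.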
+
+### 8.9.3 Solution of Equations of the Form $\frac{d\mathbf{X}}{dt} = A\mathbf{X} + B(t)$
+
+Multiplying this equation on the left by $e^{-At}$, we obtain:
+
+$$e^{-At} \frac{d\mathbf{X}}{dt} = e^{-At} A \mathbf{X} + e^{-At} B(t) \quad (8.36)$$
+
+Rearranging terms, we write this equation as:
+
+$$e^{-At} \frac{d\mathbf{X}}{dt} - e^{-At} A\mathbf{X} = e^{-At} B(t) \quad (8.37)$$
+
+We note that the LHS of this equation is the derivative of $e^{-At}\mathbf{X}$. Therefore, we can now write Eq. (8.37) as:
+
+$$\frac{d}{dt}[e^{-At}\mathbf{X}(t)] = e^{-At}\mathbf{B}(t) \quad (8.38)$$
+
+This can be directly integrated to give:
+
+$$[e^{-At}\mathbf{X}(t)]^t_0 = \int_0^t e^{-A\tau}\mathbf{B}(\tau)d\tau \quad (8.39)$$
+
+or, written differently as:
+
+$$e^{-At}\mathbf{X}(t) - \mathbf{X}(0) = \int_0^t e^{-A\tau}\mathbf{B}(\tau)d\tau \quad (8.40a)$$
+
+which leads to the standard form of the solution:
\ No newline at end of file
diff --git a/samples/texts/348597/page_269.md b/samples/texts/348597/page_269.md
new file mode 100644
index 0000000000000000000000000000000000000000..9b771c74a686ed8fa79411b3aa6891e9e42a5678
--- /dev/null
+++ b/samples/texts/348597/page_269.md
@@ -0,0 +1,29 @@
+$$ \mathbf{X}(t) = e^{\mathbf{A}t}\mathbf{X}(0) + \int_0^t e^{\mathbf{A}(t-\tau)}\mathbf{B}(\tau)d\tau \quad (8.40b) $$
+
+We illustrate the use of this solution in finding the classical motion of an electron in the presence of both an electric field and a magnetic flux density.
+
+**Example 8.13**
+
+Find the motion of an electron in the presence of a constant electric field and a constant magnetic flux density that are parallel.
+
+*Solution:* Let the electric field and the magnetic flux density be given by:
+
+$$ \vec{E} = E_0 \hat{e}_3 $$
+
+$$ \vec{B} = B_0 \hat{e}_3 $$
+
+Newton's equation of motion in the presence of both an electric field and a magnetic flux density is written as:
+
+$$ m \frac{d\vec{v}}{dt} = q(\vec{E} + \vec{v} \times \vec{B}) $$
+
+where $\vec{v}$ is the velocity of the electron, and $m$ and $q$ are its mass and charge, respectively. Writing this equation in component form, it reduces to the following matrix equation:
+
+$$ \frac{d}{dt} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \alpha \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} + \beta \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} $$
+
+where $\alpha = \frac{qB_0}{m}$ and $\beta = \frac{qE_0}{m}$.
+
+This equation can be put in the above standard form for an inhomogeneous first-order equation if we make the following identifications:
+
+$$ A = \alpha \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \text{and} \quad B = \beta \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} $$
+
+First, we note that the matrix **A** is block diagonal; that is, all off-diagonal elements with 3 as either the row or column index are zero, and therefore
\ No newline at end of file
diff --git a/samples/texts/348597/page_27.md b/samples/texts/348597/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..06ada023195f8cd0088d29a5044d228ff077b248
--- /dev/null
+++ b/samples/texts/348597/page_27.md
@@ -0,0 +1,47 @@
+### 1.7.1.1 Demonstration: Plotting Multiple Figures within a Figure Window
+
+Using the data obtained in the previous example, observe the difference in
+the partition of the page in the following two sets of commands:
+
+```matlab
+subplot(2,1,1)
+plot(x,y1)
+subplot(2,1,2)
+plot(x,y2)
+```
+
+and
+
+```matlab
+clf
+subplot(1,2,1)
+plot(x,y1)
+subplot(1,2,2)
+plot(x,y2)
+```
+
+## 1.7.2 More on Parametric Plots in 2-D
+
+In the preceding subsection, we generated the x- and y-arrays by first writing
+the x-variable as a linear function of a parameter, and then expressing the
+dependent variable y as a function of that same parameter. In effect, instead
+of thinking of a function as a relation between an independent variable x and
+a dependent variable y, we treated both x and y as dependent functions of a
+third, independent parameter. This method of curve representation, known as
+the parametric representation, is described by (x(t), y(t)), where the parameter
+t varies over some finite domain (tmin, tmax). Note, however, that in the general
+case, unlike the examples in the previous subsection, the independent variable
+x need not be linear in the parameter, nor is the parametrization unique.
+
+**Example 1.11**
+
+Plot the trigonometric circle.
+
+*Solution:* Recalling that the *x*-coordinate of any point on the trigonometric circle has the cosine as *x*-component and the sine as *y*-component, the generation of the trigonometric circle is immediate:
+
+```matlab
+th=linspace(0,2*pi,101);
+x=cos(th);
+y=sin(th);
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_270.md b/samples/texts/348597/page_270.md
new file mode 100644
index 0000000000000000000000000000000000000000..ab5876184577101a085452cdaad197d7e0bc5afb
--- /dev/null
+++ b/samples/texts/348597/page_270.md
@@ -0,0 +1,23 @@
+we can separately do the exponentiation of the third component giving $e^0 = 1$; the exponentiation of the top block can be performed along the same steps, using the Cayley-Hamilton techniques from Example 8.12, giving finally:
+
+$$e^{At} = \begin{pmatrix} \cos(\alpha t) & \sin(\alpha t) & 0 \\ -\sin(\alpha t) & \cos(\alpha t) & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
+
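+This closed form can be checked numerically against MATLAB's built-in matrix exponential `expm` (a minimal sketch; the values of $\alpha$ and $t$ below are illustrative assumptions):
+
+```matlab
+% Compare the Cayley-Hamilton closed form of exp(A*t) with expm(A*t).
+alpha=2; t=0.7;
+A=alpha*[0 1 0; -1 0 0; 0 0 0];
+Ec=[cos(alpha*t) sin(alpha*t) 0; -sin(alpha*t) cos(alpha*t) 0; 0 0 1];
+max(max(abs(expm(A*t)-Ec)))     % difference is at the round-off level
+```
+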
+Therefore, we can write the solutions for the electron's velocity components as follows:
+
+$$\begin{pmatrix} v_1(t) \\ v_2(t) \\ v_3(t) \end{pmatrix} = \begin{pmatrix} \cos(\alpha t) & \sin(\alpha t) & 0 \\ -\sin(\alpha t) & \cos(\alpha t) & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} v_1(0) \\ v_2(0) \\ v_3(0) \end{pmatrix} + \beta \begin{pmatrix} 0 \\ 0 \\ t \end{pmatrix}$$
+
+or equivalently:
+
+$$v_1(t) = v_1(0)\cos(\alpha t) + v_2(0)\sin(\alpha t)$$
+
+$$v_2(t) = -v_1(0)\sin(\alpha t) + v_2(0)\cos(\alpha t)$$
+
+$$v_3(t) = v_3(0) + \beta t$$
+
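+The closed-form solution above can be visualized directly; the following sketch traces the tip of the velocity vector (all numerical values here are illustrative assumptions, not problem data):
+
+```matlab
+% Tip of the velocity vector for parallel E and B: a helix along e_3.
+alpha=1; beta=0.5; v0=[1;0;0];
+t=linspace(0,4*pi,300);
+v1=v0(1)*cos(alpha*t)+v0(2)*sin(alpha*t);
+v2=-v0(1)*sin(alpha*t)+v0(2)*cos(alpha*t);
+v3=v0(3)+beta*t;
+plot3(v1,v2,v3)
+```
+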
+## In-Class Exercises
+
+**Pb. 8.20** Plot the 3-D curve, with time as parameter, for the tip of the velocity vector of an electron with an initial velocity $\vec{v}(0) = v_0\hat{e}_1$, where $v_0 = 10^5$ m/s, entering a region of space where a constant electric field and a constant magnetic flux density are present and are described by: $\vec{E} = E_0\hat{e}_3$, where $E_0 = -10^4$ V/m, and $\vec{B} = B_0\hat{e}_3$, where $B_0 = 10^{-2}$ Wb/m$^2$. The mass of the electron is $m_e = 9.1094 \times 10^{-31}$ kg, and the magnitude of the electron charge is $e = 1.6022 \times 10^{-19}$ C.
+
+**Pb. 8.21** Integrate the expression of the velocity vector in Pb. 8.20 to find the parametric equations of the electron position vector for the preceding problem configuration, and plot its 3-D curve. Let the origin of the axis be fixed to where the electron enters the region of the electric and magnetic fields.
+
+**Pb. 8.22** Find the parametric equations for the electron velocity if the electric field and the magnetic flux density are still parallel, the magnetic flux density is still constant, but the electric field is now described by $\vec{E} = E_0 \cos(\omega t) \hat{e}_3$.
\ No newline at end of file
diff --git a/samples/texts/348597/page_271.md b/samples/texts/348597/page_271.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a832eb162d0d4b1e2d5d53ba122ba5b198cc492
--- /dev/null
+++ b/samples/texts/348597/page_271.md
@@ -0,0 +1,42 @@
+**Example 8.14**
+
+Find the motion of an electron in the presence of a constant electric field and
+a constant magnetic flux density perpendicular to it.
+
+*Solution:* Let the electric field and the magnetic flux density be given by:
+
+$$
+\vec{E} = E_0 \hat{e}_3
+$$
+
+$$
+\vec{B} = B_0 \hat{e}_1
+$$
+
+The matrix **A** is given in this instance by:
+
+$$
+\mathbf{A} = \alpha \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}
+$$
+
+while the vector **B** is still given by:
+
+$$
+\mathbf{B} = \beta \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
+$$
+
+The matrix $e^{At}$ is now given by:
+
+$$
+e^{At} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\alpha t) & \sin(\alpha t) \\ 0 & -\sin(\alpha t) & \cos(\alpha t) \end{pmatrix}
+$$
+
+and the solution for the velocity vector is for this configuration given, using
+Eq. (8.40), by:
+
+$$
+\begin{align*}
+\begin{pmatrix} v_1(t) \\ v_2(t) \\ v_3(t) \end{pmatrix} = {} & \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\alpha t) & \sin(\alpha t) \\ 0 & -\sin(\alpha t) & \cos(\alpha t) \end{pmatrix} \begin{pmatrix} v_1(0) \\ v_2(0) \\ v_3(0) \end{pmatrix} \\
+& + \int_0^t \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos[\alpha(t-\tau)] & \sin[\alpha(t-\tau)] \\ 0 & -\sin[\alpha(t-\tau)] & \cos[\alpha(t-\tau)] \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ \beta \end{pmatrix} d\tau
+\end{align*}
+$$
+
+leading to the following parametric representation for the velocity vector:
\ No newline at end of file
diff --git a/samples/texts/348597/page_272.md b/samples/texts/348597/page_272.md
new file mode 100644
index 0000000000000000000000000000000000000000..a7a2ae5d25ffa773903d7fafb9f23682deee1d4f
--- /dev/null
+++ b/samples/texts/348597/page_272.md
@@ -0,0 +1,21 @@
+$$v_1(t) = v_1(0)$$
+
+$$v_2(t) = v_2(0)\cos(\alpha t) + v_3(0)\sin(\alpha t) + \frac{\beta}{\alpha}[1 - \cos(\alpha t)]$$
+
+$$v_3(t) = -v_2(0)\sin(\alpha t) + v_3(0)\cos(\alpha t) + \frac{\beta}{\alpha}\sin(\alpha t)$$
+
+## Homework Problems
+
+**Pb. 8.23** Plot the 3-D curve, with time as parameter, for the tip of the velocity vector of an electron with an initial velocity $\vec{v}(0) = \frac{v_0}{\sqrt{3}}(\hat{e}_1 + \hat{e}_2 + \hat{e}_3)$, where $v_0 = 10^5$ m/s, entering a region of space where the electric field and the magnetic flux density are constant and described by $\vec{E} = E_0\hat{e}_3$, where $E_0 = -10^4$ V/m; and $\vec{B} = B_0\hat{e}_1$, where $B_0 = 10^{-2}$ Wb/m².
+
+**Pb. 8.24** Find the parametric equations for the position vector for Pb. 8.23, assuming that the origin of the axis is where the electron enters the region of the force fields. Plot the 3-D curve that describes the position of the electron.
+
+### 8.9.4 Pauli Spinors
+
+We have shown thus far in this section the power of the Cayley-Hamilton theorem in helping us avoid the explicit computation of the eigenvectors while still analytically solving a number of problems of linear algebra where the dimension of the matrices was essentially 2 × 2, or in some special cases 3 × 3. In this subsection, we discuss another analytical technique for matrix manipulation, one that is based on a generalized underlying abstract algebraic structure: the Pauli spin matrices. This is the prototype and precursor to more advanced computational techniques from a field of mathematics called Group Theory. The Pauli matrices are 2 × 2 matrices given by:
+
+$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad (8.41a)$$
+
+$$\sigma_2 = j \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \qquad (8.41b)$$
+
+$$\sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad (8.41c)$$
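+
+The Pauli matrices are easily entered and experimented with in MATLAB, where `1i` plays the role of the book's $j$ (a sketch):
+
+```matlab
+% The three Pauli matrices of Eqs. (8.41a) through (8.41c).
+s1=[0 1; 1 0];
+s2=1i*[0 -1; 1 0];
+s3=[1 0; 0 -1];
+s1^2            % equals eye(2): each Pauli matrix squares to the identity
+s1*s2-1i*s3     % equals the zero matrix: sigma1*sigma2 = j*sigma3
+```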
\ No newline at end of file
diff --git a/samples/texts/348597/page_273.md b/samples/texts/348597/page_273.md
new file mode 100644
index 0000000000000000000000000000000000000000..b9141afb757dce658e188a0bbe5df54e04180146
--- /dev/null
+++ b/samples/texts/348597/page_273.md
@@ -0,0 +1,55 @@
+These matrices have the following properties, which can be easily verified by
+inspection:
+
+$$
+\text{Property 1:} \qquad \sigma_1^2 = \sigma_2^2 = \sigma_3^2 = I \tag*{(8.42)}
+$$
+
+where *I* is the 2 × 2 identity matrix.
+
+$$
+\text{Property 2:} \quad \sigma_1\sigma_2 + \sigma_2\sigma_1 = \sigma_1\sigma_3 + \sigma_3\sigma_1 = \sigma_2\sigma_3 + \sigma_3\sigma_2 = 0 \quad (8.43)
+$$
+
+$$
+\text{Property 3:} \qquad \sigma_1\sigma_2 = j\sigma_3; \quad \sigma_2\sigma_3 = j\sigma_1; \quad \sigma_3\sigma_1 = j\sigma_2 \tag{8.44}
+$$
+
+If we define the quantity $\vec{\sigma} \cdot \vec{v}$ to mean:
+
+$$
+\vec{\sigma} \cdot \vec{v} = \sigma_{1}v_{1} + \sigma_{2}v_{2} + \sigma_{3}v_{3} \tag{8.45}
+$$
+
+that is, $\vec{v} = (v_1, v_2, v_3)$, where the parameters $v_1, v_2, v_3$ are represented as the components of a vector, the following theorem is valid.
+
+THEOREM
+
+$$
+(\vec{\sigma} \cdot \vec{v})(\vec{\sigma} \cdot \vec{w}) = (\vec{v} \cdot \vec{w})\mathbf{I} + j\vec{\sigma} \cdot (\vec{v} \times \vec{w}) \quad (8.46)
+$$
+
+where the vectors' dot and cross products have the standard definition.
+
+PROOF The left side of this equation can be expanded as follows:
+
+$$
+\begin{align*}
+(\vec{\sigma} \cdot \vec{v})(\vec{\sigma} \cdot \vec{w}) &= (\sigma_1 v_1 + \sigma_2 v_2 + \sigma_3 v_3)(\sigma_1 w_1 + \sigma_2 w_2 + \sigma_3 w_3) \\
+&= (\sigma_1^2 v_1 w_1 + \sigma_2^2 v_2 w_2 + \sigma_3^2 v_3 w_3) + (\sigma_1 \sigma_2 v_1 w_2 + \sigma_2 \sigma_1 v_2 w_1) + \\
+&\quad + (\sigma_1 \sigma_3 v_1 w_3 + \sigma_3 \sigma_1 v_3 w_1) + (\sigma_2 \sigma_3 v_2 w_3 + \sigma_3 \sigma_2 v_3 w_2) \quad (8.47)
+\end{align*}
+$$
+
+Using Property 1 of the Pauli matrices, the first parenthesis on the RHS of Eq. (8.47) can be written as:
+
+$$
+(\sigma_1^2 v_1 w_1 + \sigma_2^2 v_2 w_2 + \sigma_3^2 v_3 w_3) = (v_1 w_1 + v_2 w_2 + v_3 w_3) I = (\vec{v} \cdot \vec{w}) I \quad (8.48)
+$$
+
+Using Properties 2 and 3 of the Pauli matrices, the second, third, and
+fourth parentheses on the RHS of Eq. (8.47) can respectively be written as:
+
+$$
+(\sigma_{1}\sigma_{2}v_{1}w_{2} + \sigma_{2}\sigma_{1}v_{2}w_{1}) = j\sigma_{3}(v_{1}w_{2} - v_{2}w_{1}) \quad (8.49)
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_274.md b/samples/texts/348597/page_274.md
new file mode 100644
index 0000000000000000000000000000000000000000..32a1d26d8e12c822569e15f63b87835845a11b7b
--- /dev/null
+++ b/samples/texts/348597/page_274.md
@@ -0,0 +1,53 @@
+$$
+(\sigma_1 \sigma_3 v_1 w_3 + \sigma_3 \sigma_1 v_3 w_1) = j\sigma_2 (-v_1 w_3 + v_3 w_1) \quad (8.50)
+$$
+
+$$
+(\sigma_2 \sigma_3 v_2 w_3 + \sigma_3 \sigma_2 v_3 w_2) = j\sigma_1 (v_2 w_3 - v_3 w_2) \quad (8.51)
+$$
+
+Recalling that the cross product of two vectors ($\vec{v} \times \vec{w}$) can be written from Eq. (7.49) in components form as:
+
+$$
+(\vec{v} \times \vec{w}) = (v_2 w_3 - v_3 w_2, -v_1 w_3 + v_3 w_1, v_1 w_2 - v_2 w_1)
+$$
+
+the second, third, and fourth parentheses on the RHS of Eq. (8.47) can be com-
+bined to give $j\vec{\sigma} \cdot (\vec{v} \times \vec{w})$, thus completing the proof of the theorem.
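+
+The theorem is easy to check numerically for particular vectors (a sketch; the vectors $\vec{v}$ and $\vec{w}$ below are arbitrary choices):
+
+```matlab
+% Verify Eq. (8.46): (sigma.v)(sigma.w) = (v.w) I + j sigma.(v x w).
+s1=[0 1;1 0]; s2=1i*[0 -1;1 0]; s3=[1 0;0 -1];
+v=[1;2;3]; w=[-1;0.5;2];
+LHS=(v(1)*s1+v(2)*s2+v(3)*s3)*(w(1)*s1+w(2)*s2+w(3)*s3);
+c=cross(v,w);
+RHS=dot(v,w)*eye(2)+1i*(c(1)*s1+c(2)*s2+c(3)*s3);
+max(abs(LHS(:)-RHS(:)))     % zero, up to round-off
+```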
+
+COROLLARY
+
+If $\hat{e}$ is a unit vector, then:
+
+$$
+(\vec{\sigma} \cdot \hat{e})^2 = I \qquad (8.52)
+$$
+
+PROOF Using Eq. (8.46), we have:
+
+$$
+(\vec{\sigma} \cdot \hat{e})^2 = (\hat{e} \cdot \hat{e}) \mathbf{I} + j\vec{\sigma} \cdot (\hat{e} \times \hat{e}) = \mathbf{I}
+$$
+
+where, in the last step, we used the fact that the norm of a unit vector is one
+and that the cross product of any vector with itself is zero.
+A direct result of this corollary is that:
+
+$$
+(\vec{\sigma} \cdot \hat{e})^{2m} = I
+\quad (8.53)
+$$
+
+and
+
+$$(\vec{\sigma} \cdot \hat{e})^{2m+1} = (\vec{\sigma} \cdot \hat{e}) \quad (8.54)$$
+
+From the above results, we are led to the theorem:
+
+THEOREM
+
+$$
+\exp(j\vec{\sigma} \cdot \hat{e}\phi) = \cos(\phi) I + j\vec{\sigma} \cdot \hat{e}\sin(\phi) \quad (8.55)
+$$
+
+PROOF If we Taylor expand the exponential function, we obtain:
\ No newline at end of file
diff --git a/samples/texts/348597/page_275.md b/samples/texts/348597/page_275.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ce1e8daa09deae44b578c238858613e3a39bce3
--- /dev/null
+++ b/samples/texts/348597/page_275.md
@@ -0,0 +1,23 @@
+$$ \exp(j\vec{\sigma} \cdot \hat{e}\phi) = \sum_{m=0}^{\infty} \frac{[j\phi(\vec{\sigma} \cdot \hat{e})]^m}{m!} \qquad (8.56) $$
+
+Now separating the even power and odd power terms, using the just derived result for the odd and even powers of $(\vec{\sigma} \cdot \hat{e})$, and Taylor expansions of the cosine and sine functions, we obtain the desired result.
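+
+Equation (8.55) can also be confirmed numerically with `expm` (a sketch; the unit vector and angle below are arbitrary choices):
+
+```matlab
+% Check exp(j*phi*sigma.e) = cos(phi) I + j sin(phi) sigma.e.
+s1=[0 1;1 0]; s2=1i*[0 -1;1 0]; s3=[1 0;0 -1];
+e=[1;2;2]/3;                % a unit vector
+phi=0.8;
+se=e(1)*s1+e(2)*s2+e(3)*s3;
+max(max(abs(expm(1i*se*phi)-(cos(phi)*eye(2)+1i*se*sin(phi)))))
+```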
+
+**Example 8.15**
+
+Find the time development of the spin state of an electron in a constant magnetic flux density.
+
+*Solution:* [Readers not interested in the physical background of this problem can jump directly to the paragraph following Eq. (8.59).]
+
+*Physical Background:* In addition to the spatio-temporal dynamics, the electron and all other elementary particles of nature also have internal degrees of freedom, which means that even if the particle has no translational motion, its state may still evolve in time. The spin of a particle is such an internal degree of freedom. The electron spin internal degree of freedom requires a two-dimensional vector for its representation; that is, two fundamental states are possible. As may be familiar to you from your elementary chemistry courses, the up and down states of the electron are required to satisfactorily describe the number of electrons in the different orbitals of the atoms. For the up state, the eigenvalue of the spin matrix is positive, while for the down state, the eigenvalue is negative (respectively $\hbar/2$ and $-\hbar/2$, where $\hbar = 1.0546 \times 10^{-34}$ J·s $= h/(2\pi)$, and $h$ is Planck's constant).
+
+Due to spin, the quantum mechanical dynamics of an electron in a magnetic flux density does not only include quantum mechanically the time development equivalent to the classical motion that we described in Examples 8.13 and 8.14; it also includes precession of the spin around the external magnetic flux density, similar to that experienced by a small magnet dipole in the presence of a magnetic flux density.
+
+The magnetic dipole moment due to the spin internal degree of freedom of an electron is proportional to the Pauli's spin matrix; specifically:
+
+$$ \vec{\mu} = -\mu_B \vec{\sigma} \qquad (8.57) $$
+
+where $\mu_B = 0.927 \times 10^{-23}$ J/Tesla.
+
+In the same notation, the electron spin angular momentum is given by:
+
+$$ \vec{S} = \frac{\hbar}{2} \vec{\sigma} \qquad (8.58) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_276.md b/samples/texts/348597/page_276.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e305207c8854cb5e13a655620290b92944059bf
--- /dev/null
+++ b/samples/texts/348597/page_276.md
@@ -0,0 +1,43 @@
+The interaction of the electron's spin magnetic dipole with the magnetic flux
+density is described by the potential:
+
+$$
+\mathbf{V} = \mu_B \vec{\sigma} \cdot \vec{B} \tag{8.59}
+$$
+
+and the dynamics of the electron spin state in the magnetic flux density is
+described by Schrödinger's equation:
+
+$$
+j\hbar \frac{d}{dt} |\psi\rangle = \mu_B \vec{\sigma} \cdot \vec{B} |\psi\rangle \quad (8.60)
+$$
+
+where, as previously mentioned, the Dirac ket-vector is two-dimensional.
+
+**Mathematical Problem:** To put the problem in purely mathematical form, we are asked to find the time development of the two-dimensional vector $|\psi\rangle$ if this vector obeys the system of equations:
+
+$$
+\frac{d}{dt} \begin{pmatrix} a(t) \\ b(t) \end{pmatrix} = -j \frac{\Omega}{2} (\vec{\sigma} \cdot \hat{e}) \begin{pmatrix} a(t) \\ b(t) \end{pmatrix} \quad (8.61)
+$$
+
+where $\frac{\Omega}{2} = \frac{\mu_B B_0}{\hbar}$ is called the Larmor frequency, and the magnetic flux
+density is given by $\vec{B} = B_0 \hat{e}$. The solution of Eq. (8.61) can be immediately
+written because the magnetic flux density is constant. The solution at an arbitrary
+time is related to the state at the origin of time through:
+
+$$
+\begin{pmatrix} a(t) \\ b(t) \end{pmatrix} = \exp \left[ -j \frac{\Omega}{2} (\vec{\sigma} \cdot \hat{e}) t \right] \begin{pmatrix} a(0) \\ b(0) \end{pmatrix} \quad (8.62)
+$$
+
+which from Eq. (8.55) can be simplified to read:
+
+$$
+\begin{pmatrix} a(t) \\ b(t) \end{pmatrix} = \left[ \cos\left(\frac{\Omega}{2}t\right) I - j(\vec{\sigma} \cdot \hat{e}) \sin\left(\frac{\Omega}{2}t\right) \right] \begin{pmatrix} a(0) \\ b(0) \end{pmatrix} \quad (8.63)
+$$
+
+If we choose the magnetic flux density to point in the z-direction, then the
+solution takes the very simple form:
+
+$$
+\begin{pmatrix} a(t) \\ b(t) \end{pmatrix} = \begin{pmatrix} e^{-j\frac{\Omega t}{2}} a(0) \\ e^{j\frac{\Omega t}{2}} b(0) \end{pmatrix} \qquad (8.64)
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_277.md b/samples/texts/348597/page_277.md
new file mode 100644
index 0000000000000000000000000000000000000000..95bc6ed6f5a1ebefd6a99eec1f57a389feaec986
--- /dev/null
+++ b/samples/texts/348597/page_277.md
@@ -0,0 +1,23 @@
+Physically, the above result can be interpreted as the precession of the electron around the direction of the magnetic flux density. To understand this statement, let us find the eigenvectors of the $\sigma_x$ and $\sigma_y$ matrices. These are given by:
+
+$$ \alpha_x = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} \quad \text{and} \quad \beta_x = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix} \qquad (8.65a) $$
+
+$$ \alpha_y = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ j \end{pmatrix} \quad \text{and} \quad \beta_y = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -j \end{pmatrix} \qquad (8.65b) $$
+
+The eigenvalues of $\sigma_x$ and $\sigma_y$ corresponding to the eigenvectors $\alpha$ are equal to 1, while those corresponding to the eigenvectors $\beta$ are equal to -1.
+
+Now, assume that the electron was initially in the state $\alpha_x$:
+
+$$ \begin{pmatrix} a(0) \\ b(0) \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \alpha_x \qquad (8.66) $$
+
+By substitution in Eq. (8.64), we can compute the electron spin state at different times. Thus, for each time indicated below, the electron spin state is given by the expression on the right of the arrow:
+
+$$ t = \frac{\pi}{2\Omega} \Rightarrow |\psi\rangle = e^{-j\pi/4}\alpha_y \qquad (8.67) $$
+
+$$ t = \frac{\pi}{\Omega} \Rightarrow |\psi\rangle = e^{-j\pi/2}\beta_x \qquad (8.68) $$
+
+$$ t = \frac{3\pi}{2\Omega} \Rightarrow |\psi\rangle = e^{-j3\pi/4}\beta_y \qquad (8.69) $$
+
+$$ t = \frac{2\pi}{\Omega} \Rightarrow |\psi\rangle = e^{-j\pi}\alpha_x \qquad (8.70) $$
+
+In examining the above results, we note that, up to an overall phase, the electron spin state returns to its original state following a cycle. During this cycle, the electron "pointed" successively in the positive x-axis, the positive y-axis, the negative x-axis, and the negative y-axis before returning again to the positive x-axis, thus mimicking the hand of a clock moving in the counterclockwise direction. It is this "motion" that is referred to as the electron spin precession around the direction of the magnetic flux density.
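+
+The precession sequence of Eqs. (8.67) through (8.70) can be reproduced directly from Eq. (8.62) (a sketch; the value of $\Omega$ is an arbitrary choice):
+
+```matlab
+% Spin state at selected times for B along z, starting from alpha_x.
+Omega=1; s3=[1 0;0 -1];
+psi0=[1;1]/sqrt(2);                       % the initial state alpha_x
+psi=@(t) expm(-1i*(Omega/2)*s3*t)*psi0;   % Eq. (8.62) with e along z
+psi(pi/(2*Omega))    % equals exp(-j*pi/4)*[1;1i]/sqrt(2), i.e. alpha_y up to a phase
+```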
\ No newline at end of file
diff --git a/samples/texts/348597/page_278.md b/samples/texts/348597/page_278.md
new file mode 100644
index 0000000000000000000000000000000000000000..60fd9e38e95b190c6c86a734cb36628eef601c62
--- /dev/null
+++ b/samples/texts/348597/page_278.md
@@ -0,0 +1,21 @@
+## In-Class Exercises
+
+**Pb. 8.25** Find the Larmor frequency for an electron in a magnetic flux density of 100 Gauss ($10^{-2}$ Tesla).
+
+**Pb. 8.26** Similar to the electron, the proton and the neutron also have spin as one of their internal degrees of freedom, and similarly attached to this spin, both the proton and the neutron each have a magnetic moment. The magnetic moments attached to the neutron and the proton have, respectively, the values $\mu_n = -1.91 \mu_N$ and $\mu_p = 2.79 \mu_N$, where $\mu_N$ is called the nuclear magneton and is equal to $\mu_N = 0.505 \times 10^{-26}$ Joule/Tesla.
+
+Find the precession frequency of the proton spin if the proton is in the presence of a magnetic flux density of strength 1 Tesla.
+
+## Homework Problem
+
+**Pb. 8.27** Magnetic resonance imaging (MRI) is one of the most accurate techniques in biomedical imaging. Its principle of operation is as follows. A strong dc magnetic flux density aligns in one of two possible orientations the spins of the protons of the hydrogen nuclei in the water of the tissues (we say that it polarizes them). The other molecules in the system have zero magnetic moments and are therefore not affected. In thermal equilibrium and at room temperature, there are slightly more protons aligned parallel to the magnetic flux density because this is the lowest energy level in this case. A weaker rotating ac transverse flux density attempts to flip these aligned spins. The energy of the transverse field absorbed by the biological system, which is proportional to the number of spin flips, is the quantity measured in an MRI scan. It is a function of the density of the polarized particles present in that specific region of the image, and of the frequency of the ac transverse flux density.
+
+In this problem, we want to find the frequency of the transverse field that will induce the maximum number of spin flips.
+
+The ODE describing the spin system dynamics in this case is given by:
+
+$$ \frac{d}{dt} |\psi\rangle = j[\Omega_{\perp} \cos(\omega t)\sigma_1 + \Omega_{\perp} \sin(\omega t)\sigma_2 + \Omega\sigma_3] |\psi\rangle $$
+
+where $\Omega = \frac{\mu_p B_0}{\hbar}$, $\Omega_{\perp} = \frac{\mu_p B_{\perp}}{\hbar}$, $\mu_p$ is given in Pb. 8.26, and the magnetic flux density is given by
+
+$$ \vec{B} = B_{\perp} \cos(\omega t) \hat{e}_1 + B_{\perp} \sin(\omega t) \hat{e}_2 + B_0 \hat{e}_3 $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_279.md b/samples/texts/348597/page_279.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a496ee531edf81dbceba86bd65309c6f5bf07ec
--- /dev/null
+++ b/samples/texts/348597/page_279.md
@@ -0,0 +1,25 @@
+Assume for simplicity the initial state $|\psi(t=0)\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, and denote the state of the system at time $t$ by $|\psi(t)\rangle = \begin{pmatrix} a(t) \\ b(t) \end{pmatrix}$:
+
+a. Find numerically at which frequency $\omega$ the magnitude of $b(t)$ is maximum.
+
+b. Once you have determined the optimal $\omega$, go back and examine what strategy you should adopt in the choice of $\Omega_{\perp}$ to ensure maximum resolution.
+
+c. Verify your numerical answers with the analytical solution of this problem, which is given by:
+
+$$|b(t)|^2 = \frac{\Omega_{\perp}^2}{\tilde{\omega}^2} \sin^2(\tilde{\omega} t)$$
+
+where $\tilde{\omega}^2 = (\Omega - \omega/2)^2 + \Omega_{\perp}^2$.
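+
+A numerical approach to part (a) is to integrate the ODE with `ode45` over a grid of driving frequencies and record the largest $|b(t)|$ reached (a sketch only; all parameter values below are illustrative assumptions):
+
+```matlab
+% Scan omega and record the peak of |b(t)|; a resonance appears near omega=2*Omega.
+s1=[0 1;1 0]; s2=1i*[0 -1;1 0]; s3=[1 0;0 -1];
+Om=1; Omp=0.05;                          % assumed Omega and Omega_perp
+omegas=linspace(1.5,2.5,41); peak=zeros(size(omegas));
+for k=1:length(omegas)
+  w=omegas(k);
+  rhs=@(t,psi) 1i*(Omp*cos(w*t)*s1+Omp*sin(w*t)*s2+Om*s3)*psi;
+  [~,psi]=ode45(rhs,linspace(0,200,2001),[1;0]);
+  peak(k)=max(abs(psi(:,2)));
+end
+plot(omegas,peak)
+```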
+
+## 8.10 Special Classes of Matrices*
+
+### 8.10.1 Hermitian Matrices
+
+Hermitian matrices of finite or infinite dimensions (operators) play a key role in quantum mechanics, the primary tool for understanding and solving physical problems at the atomic and subatomic scales. In this section, we define these matrices and find key properties of their eigenvalues and eigenvectors.
+
+**DEFINITION** The Hermitian adjoint of a matrix **M**, denoted by **M**† is equal to the complex conjugate of its transpose:
+
+$$\mathbf{M}^{\dagger} = \overline{\mathbf{M}}^{T} \qquad (8.71)$$
+
+For example, in complex vector spaces, the bra-vector will be the Hermitian adjoint of the corresponding ket-vector:
+
+$$\langle v | = (|v\rangle)^{\dagger} \qquad (8.72)$$
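+
+In MATLAB, the Hermitian adjoint of Eq. (8.71) is the ctranspose operator `'` (a sketch):
+
+```matlab
+% The ' operator is the complex-conjugate transpose; .' is the plain transpose.
+M=[1+2i 3; -1i 4];
+isequal(M',conj(M.'))   % returns logical 1
+```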
\ No newline at end of file
diff --git a/samples/texts/348597/page_28.md b/samples/texts/348597/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..1bfdaad6fbc791a3b394140214599d81e662de1a
--- /dev/null
+++ b/samples/texts/348597/page_28.md
@@ -0,0 +1,37 @@
+`plot(x,y)`
+
+`axis square`
+
+The parametric representation of many common curves is our next topic of interest. The parametric representation is defined such that if *x* and *y* are continuous functions of *t* over the interval *I*, we can describe a curve in the *x*-*y* plane by specifying:
+
+$$C: x = x(t), y = y(t), \text{ and } t \in I$$
+
+**More Examples:**
+
+In the following examples, we want to identify the curves $f(x, y) = 0$ corresponding to each of the given parametrizations.
+
+**Example 1.12**
+
+C: $x = 2t - 1$, $y = t + 1$, and $0 < t < 2$. The initial point is at $x = -1$, $y = 1$, and the final point is at $x = 3$, $y = 3$.
+
+*Solution:* The curve $f(x, y) = 0$ form can be obtained by noting that:
+
+$$2t - 1 = x \Rightarrow t = (x + 1)/2$$
+
+Substitution into the expression for *y* results in:
+
+$$y = \frac{x}{2} + \frac{3}{2}$$
+
+This describes a line with slope 1/2 crossing the *x*-axis at $x = -3$.
+
+*Question:* Where does this line cross the *y*-axis?
+
+**Example 1.13**
+
+C: $x = 3 + 3\cos(t)$, $y = 2 + 2\sin(t)$, and $0 < t < 2\pi$. The initial point is at $x = 6$, $y = 2$, and the final point is at $x = 6$, $y = 2$.
+
+*Solution:* The curve $f(x, y) = 0$ can be obtained by noting that:
+
+$$\sin(t) = \frac{y-2}{2} \quad \text{and} \quad \cos(t) = \frac{x-3}{3}$$
+
+Using the trigonometric identity $\cos^2(t) + \sin^2(t) = 1$, we deduce the following equation:
\ No newline at end of file
diff --git a/samples/texts/348597/page_280.md b/samples/texts/348597/page_280.md
new file mode 100644
index 0000000000000000000000000000000000000000..62b060a0f68df6e8b105d89db745c5f9ecb4802b
--- /dev/null
+++ b/samples/texts/348597/page_280.md
@@ -0,0 +1,49 @@
+LEMMA
+
+$$
+(\mathbf{A}\mathbf{B})^{\dagger} = \mathbf{B}^{\dagger}\mathbf{A}^{\dagger} \quad (8.73)
+$$
+
+PROOF From the definition of matrix multiplication and Hermitian adjoint, we have:
+
+$$
+\begin{align*}
+[(\mathbf{A}\mathbf{B})^\dagger]_{ij} &= \overline{(\mathbf{A}\mathbf{B})}_{ji} \\
+&= \sum_k \bar{\mathbf{A}}_{jk} \bar{\mathbf{B}}_{ki} = \sum_k (\mathbf{A}^\dagger)_{kj} (\mathbf{B}^\dagger)_{ik} \\
+&= \sum_k (\mathbf{B}^\dagger)_{ik} (\mathbf{A}^\dagger)_{kj} = (\mathbf{B}^\dagger \mathbf{A}^\dagger)_{ij}
+\end{align*}
+$$
+
+**DEFINITION** A matrix is Hermitian if it is equal to its Hermitian adjoint; that is
+
+$$
+\mathbf{H}^{\dagger} = \mathbf{H} \tag{8.74}
+$$
+
+THEOREM 1
+
+The eigenvalues of a Hermitian matrix are real.
+
+PROOF Let $\lambda_m$ be an eigenvalue of $\mathbf{H}$ and let $|v_m\rangle$ be the corresponding eigenvector; then:
+
+$$
+\mathbf{H} |v_m\rangle = \lambda_m |v_m\rangle \qquad (8.75)
+$$
+
+Taking the Hermitian adjoints of both sides, using the above lemma, and
+remembering that **H** is Hermitian, we successively obtain:
+
+$$
+(H|v_m\rangle)^{\dagger} = \langle v_m | H^{\dagger} = \langle v_m | H = \langle v_m | \bar{\lambda}_m \quad (8.76)
+$$
+
+Now multiply (in an inner-product sense) Eq. (8.75) on the left with the bra
+$\langle v_m|$ and Eq. (8.76) on the right by the ket-vector $|v_m\rangle$, we obtain:
+
+$$
+\langle v_m | \mathbf{H} | v_m \rangle = \lambda_m \langle v_m | v_m \rangle = \bar{\lambda}_m \langle v_m | v_m \rangle \Rightarrow \lambda_m = \bar{\lambda}_m \quad (8.77)
+$$
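+
+Theorem 1 is easy to illustrate numerically (a sketch; the matrix is randomly generated):
+
+```matlab
+% Eigenvalues of a Hermitian matrix are real to within round-off.
+A=rand(4)+1i*rand(4);
+H=A+A';                 % Hermitian by construction
+max(abs(imag(eig(H))))  % vanishingly small
+```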
+
+THEOREM 2
+
+The eigenvectors of a Hermitian matrix corresponding to different eigenvalues are orthogonal; that is, given that:
\ No newline at end of file
diff --git a/samples/texts/348597/page_281.md b/samples/texts/348597/page_281.md
new file mode 100644
index 0000000000000000000000000000000000000000..6c7815d176c6bafff3718b31cc3cb260bc10306a
--- /dev/null
+++ b/samples/texts/348597/page_281.md
@@ -0,0 +1,50 @@
+$$
+\mathbf{H} |v_m\rangle = \lambda_m |v_m\rangle \quad (8.78)
+$$
+
+$$
+\mathbf{H} |\boldsymbol{v}_n\rangle = \boldsymbol{\lambda}_n |\boldsymbol{v}_n\rangle \qquad (8.79)
+$$
+
+and
+
+$$
+\lambda_m \neq \lambda_n
+\quad (8.80)
+$$
+
+then:
+
+$$
+\langle v_n | v_m \rangle = \langle v_m | v_n \rangle = 0 \tag{8.81}
+$$
+
+PROOF Because the eigenvalues are real, taking the Hermitian adjoint of Eq. (8.79) gives:
+
+$$
+\langle v_n | \mathbf{H} = \lambda_n \langle v_n | \quad (8.82)
+$$
+
+Dot this quantity on the right by the ket $|v_m\rangle$ to obtain:
+
+$$
+\langle v_n | \mathbf{H} | v_m \rangle = \lambda_n \langle v_n | v_m \rangle \quad (8.83)
+$$
+
+On the other hand, if we dot Eq. (8.78) on the left with the bra-vector $\langle v_n |$, we obtain:
+
+$$
+\langle v_n | \mathbf{H} | v_m \rangle = \langle v_n | \lambda_m | v_m \rangle = \lambda_m \langle v_n | v_m \rangle \quad (8.84)
+$$
+
+Now compare Eqs. (8.83) and (8.84). Their left-hand sides are identical, and therefore:
+
+$$
+\lambda_m \langle v_n | v_m \rangle = \lambda_n \langle v_n | v_m \rangle \tag{8.85}
+$$
+
+However, because $\lambda_m \neq \lambda_n$, this equality can only be satisfied if $\langle v_n | v_m \rangle = 0$, which is the desired result.
+
+**In-Class Exercises**
+
+**Pb. 8.28** Show that any Hermitian 2 × 2 matrix has a unique decomposition into the Pauli spin matrices and the identity matrix.
\ No newline at end of file
diff --git a/samples/texts/348597/page_282.md b/samples/texts/348597/page_282.md
new file mode 100644
index 0000000000000000000000000000000000000000..3f809717e5f08da9fa2961232f2e9f7d16488c9f
--- /dev/null
+++ b/samples/texts/348597/page_282.md
@@ -0,0 +1,37 @@
+**Pb. 8.29** Find the multiplication rule for two $2 \times 2$ Hermitian matrices that have been decomposed into the Pauli spin matrices and the identity matrix; that is
+
+If:
+
+$$\mathbf{M} = a_0 \mathbf{I} + a_1 \mathbf{\sigma}_1 + a_2 \mathbf{\sigma}_2 + a_3 \mathbf{\sigma}_3$$
+
+and
+
+$$\mathbf{N} = b_0 \mathbf{I} + b_1 \mathbf{\sigma}_1 + b_2 \mathbf{\sigma}_2 + b_3 \mathbf{\sigma}_3$$
+
+Find: the *p*-components in: $\mathbf{P} = \mathbf{MN} = p_0 \mathbf{I} + p_1 \mathbf{\sigma}_1 + p_2 \mathbf{\sigma}_2 + p_3 \mathbf{\sigma}_3$
+
+## *Homework Problem*
+
+**Pb. 8.30** The Calogero and Perelomov matrices of dimensions $n \times n$ are given by:
+
+$$M_{lk} = (1 - \delta_{lk}) \left\{ 1 + j \cot \left[ \frac{(l-k)\pi}{n} \right] \right\}$$
+
+a. Verify that their eigenvalues are given by:
+
+$$\lambda_s = 2s - n - 1$$
+
+where $s = 1, 2, 3, ..., n$.
+
+b. Verify that their eigenvector matrix is given by:
+
+$$V_{ls} = \exp\left(-j \frac{2\pi ls}{n}\right)$$
+
+c. Use the above results to derive the Diophantine summation rule:
+
+$$\sum_{l=1}^{n-1} \cot\left(\frac{l\pi}{n}\right) \sin\left(\frac{2sl\pi}{n}\right) = n - 2s$$
+
+where $s = 1, 2, 3, ..., n-1$.
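+
+Part (a) can be explored numerically before attempting the analytical verification (a sketch; the value of $n$ is an arbitrary choice):
+
+```matlab
+% Build the n x n Calogero-Perelomov matrix and inspect its eigenvalues.
+n=7; M=zeros(n);
+for l=1:n
+  for k=1:n
+    if l~=k
+      M(l,k)=1+1i*cot((l-k)*pi/n);
+    end
+  end
+end
+sort(real(eig(M)))   % compare with 2*s-n-1 for s=1,...,n
+```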
+
+### 8.10.2 Unitary Matrices
+
+**DEFINITION** A unitary matrix has the property that its Hermitian adjoint is equal to its inverse:
\ No newline at end of file
diff --git a/samples/texts/348597/page_283.md b/samples/texts/348597/page_283.md
new file mode 100644
index 0000000000000000000000000000000000000000..3acddf11dc656362816fdc3c6d08cbd9d9c0607a
--- /dev/null
+++ b/samples/texts/348597/page_283.md
@@ -0,0 +1,51 @@
+$$
+\mathbf{U}^{\dagger} = \mathbf{U}^{-1} \tag*{(8.86)}
+$$
+
+An example of a unitary matrix is the matrix $e^{j\mathbf{H}t}$, where $\mathbf{H}$ is Hermitian.
+
+**THEOREM 1**
+
+The eigenvalues of a unitary matrix all have magnitude one.
+
+**PROOF** The eigenvalues and eigenvectors of the unitary matrix satisfy the usual equations for these quantities; that is:
+
+$$
+\mathbf{U} |v_n\rangle = \lambda_n |v_n\rangle \quad (8.87)
+$$
+
+Taking the Hermitian conjugate of this equation, we obtain:
+
+$$
+\langle v_n | \mathbf{U}^\dagger = \langle v_n | \mathbf{U}^{-1} = \langle v_n | \bar{\lambda}_n \quad (8.88)
+$$
+
+Multiplying Eq. (8.87) on the left by Eq. (8.88), we obtain:
+
+$$
+\langle v_n | \mathbf{U}^{-1} \mathbf{U} | v_n \rangle = \langle v_n | v_n \rangle = |\lambda_n|^2 \langle v_n | v_n \rangle \quad (8.89)
+$$
+
+from which we deduce the desired result that: $|\lambda_n|^2 = 1$.
+
+A direct corollary of the above theorem is that $|\det(\mathbf{U})| = 1$. This can be proven directly if we remember the result of **Pb. 8.15**, which states that the determinant of any diagonalizable matrix is the product of its eigenvalues, and the above theorem that proved that each of these eigenvalues has unit magnitude.
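+
+Both results are easy to confirm numerically. A sketch, starting from an arbitrary Hermitian matrix (the particular entries below are just test data):
+
+```matlab
+H=[2 1-1j;1+1j 3];       % Hermitian: H'=H
+U=expm(1j*H);            % therefore U is unitary
+abs(eig(U))              % each eigenvalue magnitude should be 1
+abs(det(U))              % and so should |det(U)|
+```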
+
+**THEOREM 2**
+
+A transformation represented by a unitary matrix keeps invariant the scalar (dot, or inner) product of two vectors.
+
+**PROOF** The matrix $\mathbf{U}$ acting on the vectors $|\phi\rangle$ and $|\psi\rangle$ results in two new vectors, denoted by $|\phi'\rangle$ and $|\psi'\rangle$, such that:
+
+$$
+|\phi'\rangle = \mathbf{U}|\phi\rangle \tag{8.90}
+$$
+
+$$
+|\psi'\rangle = \mathbf{U}|\psi\rangle \tag{8.91}
+$$
+
+Taking the Hermitian adjoint of Eq. (8.90), we obtain:
+
+$$
+\langle\phi'| = \langle\phi|\mathbf{U}^{\dagger} = \langle\phi|\mathbf{U}^{-1} \qquad (8.92)
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_284.md b/samples/texts/348597/page_284.md
new file mode 100644
index 0000000000000000000000000000000000000000..e91679fc6171741fd1f43ff295ba212e7e36565c
--- /dev/null
+++ b/samples/texts/348597/page_284.md
@@ -0,0 +1,45 @@
+Multiplying Eq. (8.91) on the left by Eq. (8.92), we obtain:
+
+$$
+\langle \phi' | \psi' \rangle = \langle \phi | \mathbf{U}^{-1} \mathbf{U} | \psi \rangle = \langle \phi | \psi \rangle \quad (8.93)
+$$
+
+which is the result that we are after. In particular, note that the norm of the
+vector under this matrix multiplication remains invariant. We will have the
+opportunity to study a number of examples of such transformations in
+Chapter 9.
+
+**8.10.3 Unimodular Matrices**
+
+**DEFINITION** A unimodular matrix has the defining property that its determinant is equal to one. In the remainder of this section, we restrict our discussion to 2 ⊗ 2 unimodular matrices, as these form the tools for the matrix formulation of ray optics and Gaussian optics, which are two of the major sub-fields of photonics engineering.
+
+**Example 8.16**
+
+Find the eigenvalues and eigenvectors of the 2 ⊗ 2 unimodular matrix.
+
+Solution: Let the matrix **M** be given by the following expression:
+
+$$
+\mathbf{M} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \tag{8.94}
+$$
+
+The unimodularity condition is then written as:
+
+$$
+\det(\mathbf{M}) = ad - bc = 1 \tag{8.95}
+$$
+
+Using Eq. (8.95), the eigenvalues of this matrix are given by:
+
+$$
+\lambda_{\pm} = \frac{1}{2} [ (a+d) \pm \sqrt{(a+d)^2 - 4} ] \qquad (8.96)
+$$
+
+Depending on the value of $(a + d)$, these eigenvalues can be parameterized by a simple expression. We choose here the range $-2 \le (a + d) \le 2$ for illustrative purposes. Under this constraint, the following parameterization is convenient:
+
+$$
+\cos(\theta) = \frac{1}{2}(a+d) \tag{8.97}
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_285.md b/samples/texts/348597/page_285.md
new file mode 100644
index 0000000000000000000000000000000000000000..3d430a58fcf9222d912dc7b4e231a2763bea43ac
--- /dev/null
+++ b/samples/texts/348597/page_285.md
@@ -0,0 +1,44 @@
+(For the ranges below -2 and above 2, the hyperbolic cosine function will be
+more appropriate and similar steps to the ones that we will follow can be
+repeated.)
+
+Having found the eigenvalues, which can now be expressed in the simple
+form:
+
+$$
+\lambda_{\pm} = e^{\pm j\theta} \tag{8.98}
+$$
+
+let us proceed to find the matrix V, defined as:
+
+$$
+M = VDV^{-1} \quad \text{or} \quad MV = VD \tag{8.99}
+$$
+
+and where **D** is the diagonal matrix of the eigenvalues. By direct substitution,
+in the matrix equation defining **V**, Eq. (8.99), the following relations can be
+directly obtained:
+
+$$
+\frac{V_{11}}{V_{21}} = \frac{\lambda_{+} - d}{c} \qquad (8.100)
+$$
+
+and
+
+$$
+\frac{V_{12}}{V_{22}} = \frac{\lambda_{-} - d}{c} \tag{8.101}
+$$
+
+If we choose for convenience $V_{21} = V_{22} = c$ (which is always possible because one component of each eigenvector can be arbitrarily chosen, with the other components then expressed as functions of it), the matrix $\mathbf{V}$ can be written as:
+
+$$
+V = \begin{pmatrix} e^{j\theta} - d & e^{-j\theta} - d \\ c & c \end{pmatrix} \tag{8.102}
+$$
+
+and the matrix **M** can be then written as:
+
+$$
+\mathbf{M} = \frac{\begin{pmatrix} e^{j\theta} - d & e^{-j\theta} - d \\ c & c \end{pmatrix} \begin{pmatrix} e^{j\theta} & 0 \\ 0 & e^{-j\theta} \end{pmatrix} \begin{pmatrix} c & d - e^{-j\theta} \\ -c & e^{j\theta} - d \end{pmatrix}}{2jc \sin(\theta)} \tag{8.103}
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_286.md b/samples/texts/348597/page_286.md
new file mode 100644
index 0000000000000000000000000000000000000000..169f829d76341052a5575e95495522eab154c2c9
--- /dev/null
+++ b/samples/texts/348597/page_286.md
@@ -0,0 +1,21 @@
+## Homework Problem
+
+**Pb. 8.31** Use the decomposition given by Eq. (8.103) and the results of **Pb. 8.15** to prove the Sylvester theorem for the unimodular matrix, which states that:
+
+$$ \mathbf{M}^n = \begin{pmatrix} a & b \\ c & d \end{pmatrix}^n = \begin{pmatrix} \frac{\sin[(n+1)\theta] - d\sin(n\theta)}{\sin(\theta)} & \frac{b\sin(n\theta)}{\sin(\theta)} \\ \frac{c\sin(n\theta)}{\sin(\theta)} & \frac{d\sin(n\theta) - \sin[(n-1)\theta]}{\sin(\theta)} \end{pmatrix} $$
+
+where $\theta$ is defined in Equation 8.97.
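+
+Before attempting the proof, the statement can be checked numerically; a sketch with arbitrary test values satisfying $ad - bc = 1$ and $-2 < a + d < 2$:
+
+```matlab
+a=0.5; d=0.9; b=0.7; c=(a*d-1)/b;    % enforce unimodularity
+M=[a b;c d]; n=5;
+theta=acos((a+d)/2);
+Mn=[sin((n+1)*theta)-d*sin(n*theta)  b*sin(n*theta); ...
+    c*sin(n*theta)  d*sin(n*theta)-sin((n-1)*theta)]/sin(theta);
+norm(M^n-Mn)                         % should be at round-off level
+```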
+
+## Application: Dynamics of the Trapping of an Optical Ray in an Optical Fiber
+
+Optical fibers, the main waveguides of land-based optical broadband networks, are hair-thin glass fibers that transmit light pulses over very long distances with very small losses. Their waveguiding property is due to a quadratic index of refraction radial profile built into the fiber. This profile is implemented in the fiber manufacturing process, through doping the glass with different concentrations of impurities at different radial distances.
+
+The purpose of this application is to explain how waveguiding can be achieved if the index of refraction inside the fiber has the following profile:
+
+$$ n = n_0 \left( 1 - \frac{n_2^2}{2} r^2 \right) \qquad (8.104) $$
+
+where *r* is the radial distance from the fiber axis and $n_2^2 r^2$ is a number smaller than 0.01 everywhere inside the fiber.
+
+This problem can, of course, be solved by finding the solution of Maxwell equations, or the differential equation of geometrical optics for ray propagation in a non-uniform medium. However, we will not do this in this application. Here, we use only Snell's law of refraction (see Figure 8.4), which states that at the boundary between two transparent materials with two different indices of refraction, light refracts such that the product of the index of refraction of each medium multiplied by the sine of the angle that the ray makes with the normal to the interface in each medium is constant, and Sylvester's theorem derived in **Pb. 8.31**.
+
+Let us describe a light ray going through the fiber at any point *z* along its length, by the distance *r* that the ray is displaced from the fiber axis, and by the small angle $\alpha$ that the ray's direction makes with the fiber axis. Now consider two points on the fiber axis separated by the small distance $\delta z$. We want
\ No newline at end of file
diff --git a/samples/texts/348597/page_287.md b/samples/texts/348597/page_287.md
new file mode 100644
index 0000000000000000000000000000000000000000..fc8dee74a02bcb0f0783a521eeb887166dd74522
--- /dev/null
+++ b/samples/texts/348597/page_287.md
@@ -0,0 +1,16 @@
+FIGURE 8.4
+Parameters of Snell's law of refraction.
+
+to find $r(z + \delta z)$ and $\alpha(z + \delta z)$, knowing $r(z)$ and $\alpha(z)$. We are looking for the iteration relation whose successive application will permit us to find the ray displacement $r$ and slope $\alpha$ at any point inside the fiber if we know their values at the fiber entrance plane.
+
+We solve the problem in two steps. We first assume that the ray is not bent, and find the ray's transverse displacement after it advances a small distance along the axis. This is straightforward from the definition of the slope of the ray:
+
+$$ \delta r = \alpha(z)\delta z \quad (8.105) $$
+
+Because the angle $\alpha$ is small, we approximated the tangent of the angle by the value of the angle in radians.
+
+Therefore, if we represent the position and slope of the ray as a column matrix, Eq. (8.105) can be written in the following matrix form:
+
+$$ \begin{pmatrix} r(z + \delta z) \\ \alpha(z + \delta z) \end{pmatrix} = \begin{pmatrix} 1 & \delta z \\ 0 & 1 \end{pmatrix} \begin{pmatrix} r(z) \\ \alpha(z) \end{pmatrix} \quad (8.106) $$
+
+Next, we want to find the bending experienced by the ray in advancing through the distance $\delta z$. Because the angles that should be used in Snell's law are the complementary angles to those that the ray forms with the axis of the fiber, and recalling that the glass index of refraction is changing only in the radial direction, we deduce from Snell's law that:
\ No newline at end of file
diff --git a/samples/texts/348597/page_288.md b/samples/texts/348597/page_288.md
new file mode 100644
index 0000000000000000000000000000000000000000..957d8dee4fdf20309aa5f0b8598892b574e867b4
--- /dev/null
+++ b/samples/texts/348597/page_288.md
@@ -0,0 +1,48 @@
+$$
+n(r + \delta r) \cos(\alpha + \delta\alpha) = n(r) \cos(\alpha) \quad (8.107)
+$$
+
+Now, taking the leading terms of a Taylor expansion of the LHS of this equation leads us to:
+
+$$
+\left[ n(r) + \frac{dn(r)}{dr} \delta r \right] \left[ 1 - \frac{(\alpha + \delta\alpha)^2}{2} \right] \approx n(r) \left( 1 - \frac{\alpha^2}{2} \right) \quad (8.108)
+$$
+
+Further simplification of this equation gives to first order in the variations:
+
+$$
+\delta\alpha \approx \frac{1}{\alpha n(r)} \frac{dn(r)}{dr} \delta r \approx \frac{1}{n_0} (-n_0 n_2^2 r) \delta z = -(n_2^2 \delta z) r \quad (8.109)
+$$
+
+which can be expressed in matrix form as:
+
+$$
+\begin{pmatrix} r(z + \delta z) \\ \alpha(z + \delta z) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -n_2^2 \delta z & 1 \end{pmatrix} \begin{pmatrix} r(z) \\ \alpha(z) \end{pmatrix} \qquad (8.110)
+$$
+
+The total variation in the values of the position and slope of the ray can be
+obtained by taking the product of the two matrices in Eqs. (8.106) and (8.110),
+giving:
+
+$$
+\begin{pmatrix} r(z + \delta z) \\ \alpha(z + \delta z) \end{pmatrix} = \begin{pmatrix} 1 - (n_2 \delta z)^2 & \delta z \\ -n_2^2 \delta z & 1 \end{pmatrix} \begin{pmatrix} r(z) \\ \alpha(z) \end{pmatrix} \quad (8.111)
+$$
+
+Equation (8.111) provides us with the required recursion relation to numerically iterate the progress of the ray inside the fiber. Thus, the ray's distance from the fiber axis and the angle that it makes with this axis can be computed at any z in the fiber if we know the values of the ray's transverse coordinate and its slope at the entrance plane.
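+
+A minimal sketch of this iteration (the step size and fiber parameters are illustrative, in the spirit of Pb. 8.32):
+
+```matlab
+n2=1e3; dz=1e-5;                    % n2 in 1/m; dz chosen much smaller than 1/n2
+A=[1-(n2*dz)^2 dz; -n2^2*dz 1];     % recursion matrix of Eq. (8.111)
+v=[10e-6; 0];                       % ray enters parallel to the axis, 10 microns off-axis
+N=3000; r=zeros(1,N);
+for k=1:N
+  v=A*v; r(k)=v(1);
+end
+plot((1:N)*dz,r)                    % oscillation with period 2*pi/n2, per Eq. (8.112)
+```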
+
+The problem can also be solved analytically if we note that the determinant
+of this matrix is 1 (the matrix is unimodular). Sylvester's theorem provides
+the means to obtain the following result:
+
+$$
+\begin{pmatrix} r(z) \\ \alpha(z) \end{pmatrix} = \begin{pmatrix} \cos(n_2 z) & \frac{\sin(n_2 z)}{n_2} \\ -n_2 \sin(n_2 z) & \cos(n_2 z) \end{pmatrix} \begin{pmatrix} r(0) \\ \alpha(0) \end{pmatrix} \quad (8.112)
+$$
+
+**Homework Problems**
+
+**Pb. 8.32** Consider an optical fiber of radius $a = 30\,\mu\text{m}$, $n_0 = 4/3$, and $n_2 = 10^3\,\text{m}^{-1}$. Three rays enter this fiber parallel to the fiber axis at distances of $5\,\mu\text{m}$, $10\,\mu\text{m}$, and $15\,\mu\text{m}$ from the fiber's axis.
\ No newline at end of file
diff --git a/samples/texts/348597/page_289.md b/samples/texts/348597/page_289.md
new file mode 100644
index 0000000000000000000000000000000000000000..93e8bb639679582a2aafd3a9c0d89692c340519c
--- /dev/null
+++ b/samples/texts/348597/page_289.md
@@ -0,0 +1,17 @@
+a. Write a MATLAB program to follow the progress of the rays through the fiber, properly choosing the $\delta z$ increment.
+
+b. Trace these rays going through the fiber.
+
+Figure 8.5 shows the answer that you should obtain for a fiber length of 3 cm.
+
+FIGURE 8.5
+Traces of rays, originally parallel to the fiber's axis, when propagating inside an optical fiber.
+
+**Pb. 8.33** Using Sylvester's theorem, derive Eq. (8.112). (Hint: Define the angle $\theta$ such that $\sin\left(\frac{\theta}{2}\right) = \frac{n_2 \delta z}{2}$, and recall that while $\delta z$ goes to zero, its product with the number of iterations remains finite and equal to the distance of propagation inside the fiber.)
+
+**Pb. 8.34** Find the maximum angle that an incoming ray can have so that it does not escape from the fiber. (Remember to include the refraction at the entrance of the fiber.)
+
+## 8.11 MATLAB Commands Review
+
+* **det**: Computes the determinant of a matrix.
+* **expm**: Computes the matrix exponential.
\ No newline at end of file
diff --git a/samples/texts/348597/page_29.md b/samples/texts/348597/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..cb2441a8d98df41598ba883f77adb0561e0b2fe3
--- /dev/null
+++ b/samples/texts/348597/page_29.md
@@ -0,0 +1,31 @@
+$$ \frac{(y-2)^2}{2^2} + \frac{(x-3)^2}{3^2} = 1 $$
+
+This is the equation of an ellipse centered at x = 3, y = 2 and having major and minor radii equal to 3 and 2, respectively.
+
+**Question 1:** What are the coordinates of the foci of this ellipse?
+
+**Question 2:** Compare the above curve with the curve defined through:
+
+$x = 3 + 3 \cos(2t), y = 2 + 2 \sin(2t), \text{ and } 0 < t < 2\pi$
+
+What conclusions can you draw from your answer?
+
+## In-Class Exercises
+
+Pb. 1.3 Show that the following parametric equations:
+
+$x = h + a \sec(t)$, $y = k + b \tan(t)$, and $-\pi/2 < t < \pi/2$
+
+are those of the hyperbola also represented by the equation:
+
+$$ \frac{(x-h)^2}{a^2} - \frac{(y-k)^2}{b^2} = 1 $$
+
+Pb. 1.4 Plot the hyperbola represented by the parametric equations of Pb. 1.3, with $h = 2, k = 2, a = 1, b = 2$. Find the coordinates of the vertices and the foci. (Hint: One branch of the hyperbola is traced for $-\pi/2 < t < \pi/2$, while the other branch is traced when $\pi/2 < t < 3\pi/2$.)
+
+Pb. 1.5 The parametric equations of the cycloid are given by:
+
+$x = R\omega t + R \sin(\omega t)$, $y = R + R \cos(\omega t)$, and $0 < t$
+
+Show how this parametric equation can be obtained by following the kinematics of a point attached to the outer rim of a wheel that is uniformly rolling, without slippage, on a flat surface. Relate the above parameters to the linear speed and the radius of the wheel.
+
+Pb. 1.6 Sketch the curve C defined through the following parametric equations:
\ No newline at end of file
diff --git a/samples/texts/348597/page_290.md b/samples/texts/348597/page_290.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8db234be3d5779da2191cea263e1c20985374c0
--- /dev/null
+++ b/samples/texts/348597/page_290.md
@@ -0,0 +1,15 @@
+**eye**: Identity matrix.
+
+**inv**: Finds the inverse of a matrix.
+
+**ones**: Matrix with all elements equal to 1.
+
+**polyfit**: Fits a polynomial to data.
+
+**triu**: Extracts the upper triangle of a matrix.
+
+**tril**: Extracts the lower triangle of a matrix.
+
+**zeros**: Matrix with all elements equal to zero.
+
+**[V,D]=eig(M)**: Finds the eigenvalues and eigenvectors of a matrix.
\ No newline at end of file
diff --git a/samples/texts/348597/page_291.md b/samples/texts/348597/page_291.md
new file mode 100644
index 0000000000000000000000000000000000000000..172cd81d816a8bb25146fb00c144ceff1028413f
--- /dev/null
+++ b/samples/texts/348597/page_291.md
@@ -0,0 +1,21 @@
+# 9
+
+## Transformations
+
+The theory of transformations concerns itself with changes in the coordinates and shapes of objects upon the action of geometrical operations, dynamical boosts, or other operators. In this chapter, we deal only with linear transformations, using examples from both plane geometry and relativistic dynamics (space-time geometry). We also show how transformation techniques play an important role in image processing. We formulate both the problems and their solutions in the language of matrices. Matrices are still denoted by bold-face type and matrix multiplication by an asterisk.
+
+### 9.1 Two-Dimensional (2-D) Geometric Transformations
+
+We first concern ourselves with the operations of inversion about the origin of axes, reflection about the coordinate axes, rotation around the origin, scaling, and translation. But prior to going into the details of these transformations, we need to learn how to draw closed polygonal figures in MATLAB so that we can implement and graph the different cases.
+
+#### 9.1.1 Polygonal Figures Construction
+
+Consider a polygonal figure whose vertices are located at the points:
+
+$$ (x_1, y_1), (x_2, y_2), \dots, (x_n, y_n) $$
+
+The polygonal figure can then be thought of as line segments (edges) connecting the vertices in a given order, including the edge connecting the last point to the initial point, to ensure that we obtain a closed figure. The implementation of the steps leading to the drawing of the figure follows:
+
+1. Label all vertex points.
+
+2. Label the path you follow.
\ No newline at end of file
diff --git a/samples/texts/348597/page_292.md b/samples/texts/348597/page_292.md
new file mode 100644
index 0000000000000000000000000000000000000000..a01f7eb9c70744a827f32466944f35b6fedebf0d
--- /dev/null
+++ b/samples/texts/348597/page_292.md
@@ -0,0 +1,43 @@
+3. Construct a $(2 \otimes (n+1))$ matrix, the **G** matrix, where the elements of the first row consist of the ordered $(n+1)$-tuplet, $(x_1, x_2, x_3, \ldots, x_n, x_1)$, and those of the second row consist of the corresponding $y$-coordinate $(n+1)$-tuplet.
+
+4. Plot the second row of **G** as function of its first row.
+
+**Example 9.1**
+
+Plot the trapezoid whose vertices are located at the points (2, 1), (6, 1), (5, 3), and (3, 3).
+
+*Solution:* Enter and execute the following commands:
+
+```matlab
+G=[2 6 5 3 2; 1 1 3 3 1];
+plot(G(1,:),G(2,:))
+```
+
+To ensure that the exact geometrical shape is properly reproduced, remember
+to instruct your computer to choose the axes such that you have equal
+$x$-range and $y$-range and an aspect ratio of 1. If you would like to add any text
+anywhere in the figure, use the command `gtext`.
+
+**9.1.2 Inversion about the Origin and Reflection about the Coordinate Axes**
+
+We concern ourselves here with inversion with respect to the origin and with
+reflection about the *x*- or *y*-axis. Inversion about other points or reflection
+about other than the coordinate axes can be deduced from a composition of
+the present transformations and those discussed later.
+
+* The inversion about the origin changes the coordinates as follows:
+
+$$
+\begin{align}
+x' &= -x \notag \\
+y' &= -y \tag{9.1}
+\end{align}
+$$
+
+In matrix form, this transformation can be represented by:
+
+$$
+\mathbf{P} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \tag{9.2}
+$$
+
+* For the reflection about the *x*-axis, denoted by $\mathbf{P}_x$, and the reflection about the *y*-axis, denoted by $\mathbf{P}_y$, the transformation matrices are given by:
\ No newline at end of file
diff --git a/samples/texts/348597/page_293.md b/samples/texts/348597/page_293.md
new file mode 100644
index 0000000000000000000000000000000000000000..ead0ee788cd4f562b1552462f1d350fc3c9a7bb7
--- /dev/null
+++ b/samples/texts/348597/page_293.md
@@ -0,0 +1,25 @@
+$$ \mathbf{P}_x = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad (9.3) $$
+
+$$ \mathbf{P}_y = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \qquad (9.4) $$
+
+**In-Class Exercise**
+
+**Pb. 9.1** Using the trapezoid of Example 9.1, obtain all the transformed G's as a result of the action of each of the three transformations defined in Eqs. (9.2) through (9.4), and plot the transformed figures on the same graph.
+
+**Pb. 9.2** In drawing the original trapezoid, we followed the counterclockwise direction in the sequencing of the different vertices. What is the sequencing of the respective points in each of the transformed G's?
+
+**Pb. 9.3** Show that the quantity ($x^2 + y^2$) is invariant under separately the action of $\mathbf{P}_x$, $\mathbf{P}_y$, or $\mathbf{P}$.
+
+### 9.1.3 Rotation around the Origin
+
+The new coordinates of a point in the x-y plane rotated by an angle $\theta$ around the z-axis can be directly derived through some elementary trigonometry. Here, instead, we derive the new coordinates using results from the complex numbers chapter (Chapter 6). Recall that every point in a 2-D plane represents a complex number, and multiplication by a complex number of modulus 1 and argument $\theta$ results in a rotation of angle $\theta$ of the original point. Therefore:
+
+$$ z' = ze^{j\theta} $$
+
+$$ \begin{align} x' + jy' &= (x + jy)(\cos(\theta) + j\sin(\theta)) \tag{9.5} \\ &= (x \cos(\theta) - y \sin(\theta)) + j(x \sin(\theta) + y \cos(\theta)) \nonumber \end{align} $$
+
+Equating separately the real parts and the imaginary parts, we deduce the action of rotation on the coordinates of a point:
+
+$$ x' = x \cos(\theta) - y \sin(\theta) \qquad (9.6) $$
+
+$$ y' = x \sin(\theta) + y \cos(\theta) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_294.md b/samples/texts/348597/page_294.md
new file mode 100644
index 0000000000000000000000000000000000000000..6db3de74d3a0e4c392febc1b67735457a0b48a4f
--- /dev/null
+++ b/samples/texts/348597/page_294.md
@@ -0,0 +1,42 @@
+The above transformation can also be written in matrix form. That is, if the
+point is represented by a size 2 column vector, then the new vector is related
+to the old one through the following transformation:
+
+$$
+\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \mathbf{R}(\theta) \begin{bmatrix} x \\ y \end{bmatrix} \quad (9.7)
+$$
+
+The convention for the sign of the angle is the same as that used in Chapter 6,
+namely that it is measured positive when in the counterclockwise direction.
+
+**Preparatory Exercises**
+
+Using the above form for the rotation matrix, verify the following properties:
+
+**Pb. 9.4** Its determinant is equal to 1.
+
+**Pb. 9.5**
+
+$$
+\mathbf{R}(-\theta) = [\mathbf{R}(\theta)]^{-1} = [\mathbf{R}(\theta)]^T
+$$
+
+**Pb. 9.6** $\mathbf{R}(\theta_1) * \mathbf{R}(\theta_2) = \mathbf{R}(\theta_1 + \theta_2) = \mathbf{R}(\theta_2) * \mathbf{R}(\theta_1)$
+
+**Pb. 9.7** $(x')^2 + (y')^2 = x^2 + y^2$
+
+**Pb. 9.8** Show that $\mathbf{P} = \mathbf{R}(\theta = \pi)$. Also show that there is no rotation that can reproduce $\mathbf{P}_x$ or $\mathbf{P}_y$.
+
+**In-Class Exercises**
+
+**Pb. 9.9** Find the coordinates of the image of the point $(x, y)$ obtained by reflection about the line $y = x$. Test your results using MATLAB.
+
+**Pb. 9.10** Find the transformation matrix corresponding to a rotation of $-\pi/3$, followed by an inversion around the origin. Solve the problem in two different ways.
+
+**Pb. 9.11** By what angle should you rotate the trapezoid so that point (6, 1) of the trapezoid of Example 9.1 ends up on the y-axis?
+
+**9.1.4 Scaling**
+
+If the x-coordinate of each point in the plane is multiplied by a positive constant $s_x$, then the effect of this transformation is to expand or compress each plane figure in the x-direction. If $0 < s_x < 1$, the result is a compression; and if $s_x > 1$, the result is an expansion. The same can also be done along the y-axis. This class of transformations is called scaling.
\ No newline at end of file
diff --git a/samples/texts/348597/page_295.md b/samples/texts/348597/page_295.md
new file mode 100644
index 0000000000000000000000000000000000000000..d440e718d46debb7f675ca3c09f6aef4e66ff788
--- /dev/null
+++ b/samples/texts/348597/page_295.md
@@ -0,0 +1,44 @@
+The matrices corresponding to these transformations, in 2-D, are
+respectively:
+
+$$
+S_x = \begin{bmatrix} s_x & 0 \\ 0 & 1 \end{bmatrix} \tag{9.8}
+$$
+
+$$
+S_y = \begin{bmatrix} 1 & 0 \\ 0 & s_y \end{bmatrix} \tag{9.9}
+$$
+
+**In-Class Exercises**
+
+**Pb. 9.12** Find the transformation matrix for simultaneously compressing the x-coordinate by a factor of 2, while expanding the y-coordinate by a factor of 2. Apply this transformation to the trapezoid of Example 9.1 and plot the result.
+
+**Pb. 9.13** Find the inverse matrices for S_x and S_y.
+
+**9.1.5 Translation**
+
+A translation is defined by a vector $\vec{T} = (t_x, t_y)$, and the transformation of the coordinates is given simply by:
+
+$$
+\begin{align}
+x' &= x + t_x \\
+y' &= y + t_y
+\end{align}
+\tag{9.10}
+$$
+
+or, written in matrix form as:
+
+$$
+\begin{equation}
+\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix} \tag{9.11}
+\end{equation}
+$$
+
+The effect of a translation on the matrix $\mathbf{G}$ is described by the relation:
+
+$$
+\mathbf{G}_T = \mathbf{G} + \mathbf{T} * \mathbf{ones}(1, n+1) \quad (9.12)
+$$
+
+where *n* is the number of points being translated.
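+
+For instance, applied to the trapezoid of Example 9.1 (the translation vector here is arbitrary):
+
+```matlab
+G=[2 6 5 3 2; 1 1 3 3 1];        % trapezoid of Example 9.1, n = 4
+T=[3; -2];                       % arbitrary translation vector
+GT=G+T*ones(1,5);                % Eq. (9.12)
+plot(G(1,:),G(2,:),GT(1,:),GT(2,:))
+axis equal
+```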
\ No newline at end of file
diff --git a/samples/texts/348597/page_296.md b/samples/texts/348597/page_296.md
new file mode 100644
index 0000000000000000000000000000000000000000..e85a2ef5ac43ea147b9c645276da34570357999b
--- /dev/null
+++ b/samples/texts/348597/page_296.md
@@ -0,0 +1,29 @@
+**In-Class Exercise**
+
+Pb. 9.14 Translate the trapezoid of Example 9.1 by a vector of length 5 that is making an angle of 30° with the x-axis.
+
+## 9.2 Homogeneous Coordinates
+
+As we have seen in Section 9.1, inversion about the origin, reflection about the coordinate axes, rotation, and scaling are operations that can be represented by a multiplicative matrix, and therefore the composite operation of acting successively on a figure by one or more of these operations can be described by a product of matrices. The translation operation, on the other hand, is represented by an addition, and thus cannot be incorporated, as yet, into the matrix multiplication scheme; and consequently, the expression for composite operations becomes less tractable. We illustrate this situation with the following example:
+
+**Example 9.2**
+
+Find the new G that results from rotating the trapezoid of Example 9.1 by a $\pi/4$ angle around the point Q (-5, 5).
+
+**Solution:** Because we have thus far defined the rotation matrix only around the origin, our task here is to generalize this result. We solve the problem by reducing it to a combination of elementary operations thus far defined. The strategy for solving the problem goes as follows:
+
+1. Perform a translation to place Q at the origin of a new coordinate system.
+
+2. Perform a $\pi/4$ rotation around the new origin, using the above form for rotation.
+
+3. Translate back the origin to its initial location.
+
+Written in matrix form, the above operations can be written sequentially as follows:
+
+$$1. \qquad \mathbf{G}_1 = \mathbf{G} + \mathbf{T} * \text{ones}(1, n+1) \qquad (9.13)$$
+
+where
+
+$$\mathbf{T} = \begin{bmatrix} 5 \\ -5 \end{bmatrix} \qquad (9.14)$$
+
+and $n = 4$.
\ No newline at end of file
diff --git a/samples/texts/348597/page_297.md b/samples/texts/348597/page_297.md
new file mode 100644
index 0000000000000000000000000000000000000000..719b4ea193b6e8e5938191ab8cf739dc364b38a8
--- /dev/null
+++ b/samples/texts/348597/page_297.md
@@ -0,0 +1,32 @@
+$$2. \qquad G_2 = R(\pi/4) * G_1 \tag{9.15}$$
+
+$$3. \qquad G_3 = G_2 - T * \text{ones}(1, n+1) \tag{9.16}$$
+
+and the final result can be written as:
+
+$$G_3 = \mathbf{R}(\pi/4) * \mathbf{G} + [(\mathbf{R}(\pi/4) - \mathbf{I}) * \mathbf{T}] * \text{ones}(1, n+1) \tag{9.17}$$
+
+We can implement the above sequence of transformations through the following script M-file:
+
+```matlab
+plot(-5,5,'*')
+hold on
+G=[2 6 5 3 2; 1 1 3 3 1];
+plot(G(1,:),G(2,:),'b')
+T=[5;-5];
+G1=G+T*ones(1,5);
+plot(G1(1,:),G1(2,:),'r')
+R=[cos(pi/4) -sin(pi/4);sin(pi/4) cos(pi/4)];
+G2=R*G1;
+plot(G2(1,:),G2(2,:),'g')
+G3=G2-T*ones(1,5);
+plot(G3(1,:),G3(2,:),'k')
+axis([-12 12 -12 12])
+axis square
+```
+
+Although the above formulation of the problem is absolutely correct, the number of terms in the final expression for the image can wind up, in more involved problems, being large and cumbersome because of the existence of sums and products in the intermediate steps. Thus, the question becomes: can we incorporate all the transformations discussed thus far into only multiplicative matrices?
+
+The answer comes from an old trick that mapmakers have used successfully; namely, the technique of homogeneous coordinates. In this technique, as applied to the present case, we append to any column vector the row with value 1, that is, the point $(x_m, y_m)$ is now represented by the column vector:
+
+$$\begin{bmatrix} x_m \\ y_m \\ 1 \end{bmatrix} \tag{9.18}$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_298.md b/samples/texts/348597/page_298.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f76f446f641067da90f21bd79a5d795350f9591
--- /dev/null
+++ b/samples/texts/348597/page_298.md
@@ -0,0 +1,23 @@
+Similarly in the definition of $\mathbf{G}$, we should append to the old definition, a row with all elements being 1.
+
+In this coordinate representation, the different transformations thus far discussed are now multiplicative and take the following forms:
+
+$$ \mathbf{P} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (9.19) $$
+
+$$ \mathbf{P}_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (9.20) $$
+
+$$ \mathbf{P}_y = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (9.21) $$
+
+$$ \mathbf{S} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (9.22) $$
+
+$$ \mathbf{R}(\theta) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (9.23) $$
+
+$$ \mathbf{T} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix} \qquad (9.24) $$
+
+The composite matrix of any two transformations can now be written as the product of the matrices representing the constituent transformations. Of course, this economizes on the writing of expressions and makes the calculations less prone to trivial errors originating in the expansion of products of sums.
+
+**Example 9.3**
+
+Repeat Example 9.2, but now use the homogeneous coordinates.
+
+*Solution:* The following *script M-file* implements the required task:
\ No newline at end of file
diff --git a/samples/texts/348597/page_299.md b/samples/texts/348597/page_299.md
new file mode 100644
index 0000000000000000000000000000000000000000..c476752140aa91f4032b2ab7a63f0108bbd70104
--- /dev/null
+++ b/samples/texts/348597/page_299.md
@@ -0,0 +1,28 @@
+```matlab
+plot(-5,5,'*')
+hold on
+G=[2 6 5 3 2; 1 1 3 3 1; 1 1 1 1 1];
+plot(G(1,:),G(2,:),'b')
+T=[1 0 5;0 1 -5;0 0 1];
+G1=T*G;
+plot(G1(1,:),G1(2,:),'r')
+R=[cos(pi/4) -sin(pi/4) 0;sin(pi/4) cos(pi/4) 0;...
+   0 0 1];
+G2=R*G1;
+plot(G2(1,:),G2(2,:),'g')
+G3=inv(T)*G2;
+plot(G3(1,:),G3(2,:),'k')
+axis([-12 12 -12 12])
+axis square
+hold off
+```
+
+## 9.3 Manipulation of 2-D Images
+
+Currently more and more images are being stored or transmitted in digital form. What does this mean?
+
+To simplify the discussion, consider a black and white image and assume that it has a square boundary. The digital image is constructed by the optics of the detecting system (i.e., the camera), which forms the image on a plane containing a 2-D array of detectors instead of the traditional photographic film. Each of these detectors, called a pixel (picture element), measures the intensity of light falling on it. The image is then represented by a matrix having the same size as the detectors' 2-D array structure, and such that the value of each of the matrix elements is proportional to the intensity of the light falling on the associated detector element. Of course, the resolution of the picture increases as the number of detector elements in the array increases.
+
+### 9.3.1 Geometrical Manipulation of Images
+
+Having the image represented by a matrix, it is now possible to perform all kinds of manipulations on it in MATLAB. For example, we could flip it in the left/right directions (`fliplr`), or in the up/down direction (`flipud`), or rotate it by 90° (`rot90`), or for that matter transform it by any matrix transformation. In the remainder of this section, we explore some of the
\ No newline at end of file
diff --git a/samples/texts/348597/page_3.md b/samples/texts/348597/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..dc069364d2182f78edcadffbf98dafd045743bd0
--- /dev/null
+++ b/samples/texts/348597/page_3.md
@@ -0,0 +1,10 @@
+ELEMENTARY
+MATHEMATICAL and
+COMPUTATIONAL TOOLS
+for ELECTRICAL and
+COMPUTER ENGINEERS
+USING MATLAB®
+
+Jamal T. Manassah
+
+City College of New York
\ No newline at end of file
diff --git a/samples/texts/348597/page_30.md b/samples/texts/348597/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..154312ce4def0344a6c9531265796c1092a1aef1
--- /dev/null
+++ b/samples/texts/348597/page_30.md
@@ -0,0 +1,27 @@
+$$x(t) = \begin{cases} t+2 & \text{for } -3 \le t \le -1 \\ +1-\frac{1}{\sqrt{3}}\tan\left(\frac{\pi}{3}(1-t^2)\right) & \text{for } -1 < t < 0 \\ -1+\frac{1}{\sqrt{3}}\tan\left(\frac{\pi}{3}(1-t^2)\right) & \text{for } 0 < t < 1 \end{cases}$$
+
+$$y(t) = \begin{cases} 0 & \text{for } -3 \le t \le -1 \\ \frac{1}{\sqrt{3}}\tan\left(\frac{\pi}{3}(1-t^2)\right) & \text{for } -1 < t < 0 \\ \frac{1}{\sqrt{3}}\tan\left(\frac{\pi}{3}(1-t^2)\right) & \text{for } 0 < t < 1 \end{cases}$$
+
+## Homework Problems
+
+The following set of problems provides the mathematical basis for understanding the graphical display on the screen of an oscilloscope, when in the x-y mode.
+
+**Pb. 1.7** To put the quadratic expression $Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$ in standard form (i.e., to eliminate the x-y mixed term), make the transformation
+
+$$x = x' \cos(\theta) - y' \sin(\theta)$$
+
+$$y = x' \sin(\theta) + y' \cos(\theta)$$
+
+Show that the mixed term is eliminated if $\cot(2\theta) = \frac{(A-C)}{B}$.
+
+**Pb. 1.8** Consider the parametric equations:
+
+$$C: x = a \cos(t), y = b \sin(t + \varphi), \text{ and } 0 < t < 2\pi$$
+
+where the initial point is at $x = a$, $y = b \sin(\varphi)$, and the final point is at $x = a$, $y = b \sin(\varphi)$.
+
+a. Obtain the equation of the curve in the form $f(x, y) = 0$.
+
+b. Using the results of **Pb. 1.7**, prove that the ellipse inclination angle is given by:
+
+$$\cot(2\theta) = \frac{(a^2 - b^2)}{2ab \sin(\varphi)}$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_300.md b/samples/texts/348597/page_300.md
new file mode 100644
index 0000000000000000000000000000000000000000..6dde716bd07a22f9a2ee72b3bf34bd07932d1633
--- /dev/null
+++ b/samples/texts/348597/page_300.md
@@ -0,0 +1,32 @@
+techniques commonly employed in the handling and manipulation of digital images.
+
+Let us explore and observe the structure of a matrix subjected to the above elementary transformations. For this purpose, execute and observe the outputs from each of the following commands:
+
+M=(1/25)*[1 2 3 4 5;6 7 8 9 10;11 12 13 14 15;16 17 18 19 20;21 22 23 24 25]
+
+lrM=fliplr(M)
+
+udM=flipud(M)
+
+Mr90=rot90(M)
+
+A careful examination of the resulting matrix elements will indicate the general features of each of these transformations. You can also see in a visually more suggestive form how each of the transformations changed the image of the original matrix, if we render the image of M and its transform in false colors, that is, we assign a color to each number.
+
+To perform this task, choose the `colormap(hot)` command to obtain the images. In this mapping, the program assigns a color to each pixel, varying from black-red-yellow-white, depending on the magnitude of the intensity at the corresponding detector.
+
+Enter, in the following sequence, each of the following commands and at each step note the color distributions of the image:
+
+colormap(hot)
+imagesc(M, [0 1])
+imagesc(lrM, [0 1])
+imagesc(udM, [0 1])
+imagesc(Mr90, [0 1])
+
+The command `imagesc` produces an intensity image of a data matrix that spans a given range of values.
+
+### 9.3.2 Digital Image Processing
+
+A typical problem in digital image processing involves the analysis of the raw data of an image that was subject, during acquisition, to a blur due to the movement of the camera or to other sources of noise. An example of this situation occurs in the analysis of aerial images; the images are blurred due, *inter alia*, to the motion of the plane while the camera shutter is open. The question is, can we do anything to obtain a crisper image from the raw data if we know the speed and altitude of the plane when it took the photograph?
+
+The answer is affirmative. We consider for our example the photograph of a rectangular board. Construct this image by entering:
\ No newline at end of file
diff --git a/samples/texts/348597/page_301.md b/samples/texts/348597/page_301.md
new file mode 100644
index 0000000000000000000000000000000000000000..97ce6dffd79c6297d33d9e0af1f137c96a453203
--- /dev/null
+++ b/samples/texts/348597/page_301.md
@@ -0,0 +1,27 @@
+FIGURE 9.1
+
+The raw and processed images of a rectangular board photographed from a moving plane.
+Top panel: Raw (blurred) image. Bottom panel: Processed image.
+
+```matlab
+N=64;
+A=zeros(N,N);
+A(15:35,15:45)=1;
+colormap(gray);
+imagesc(A,[0 1])
+```
+
+where (N×N) is the size of the image (here, N = 64).
+
+Now assume that the camera that took the image had moved while the
+shutter was open by a distance that would correspond in the image plane to
+L pixels. What will the image look like now? (See Figure 9.1.)
+
+The blurring operation was modeled here by the matrix B. The blurred
+image is simulated through the matrix product:
+
+$$A1 = A * B \qquad (9.25)$$
+
+where B, the blurring matrix, is given by the following Toeplitz matrix:
+
+`L=9;`
+
+`B=toeplitz([ones(L,1);zeros(N-L,1)],[1;zeros(N-1,1)])/L;`
\ No newline at end of file
diff --git a/samples/texts/348597/page_302.md b/samples/texts/348597/page_302.md
new file mode 100644
index 0000000000000000000000000000000000000000..3e49874c014169f5ef9e7139465be6af922376b8
--- /dev/null
+++ b/samples/texts/348597/page_302.md
@@ -0,0 +1,31 @@
+Here, the blur length was L = 9, and the blurred image **A1** was obtained by executing the following commands:
+
+`A1=A*B;`
+
+`imagesc(A1,[0 1])`
+
+To bring back the unblurred picture, simply multiply the matrix **A1** on the right by `inv(B)` and obtain the original image.
+
+In practice, one is given the blurred image and asked to reconstruct it while correcting for the blur. What to do?
+
+1. Compute the blur length from the plane speed and height.
+
+2. Construct the Toeplitz matrix, and take its inverse.
+
+3. Apply the inverse of the Toeplitz matrix to the blurred image matrix, obtaining the processed image.
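The three steps above can be sketched compactly; the following Python/NumPy version (with small illustrative sizes N = 8 and L = 3, rather than the text's N = 64 and L = 9) builds the Toeplitz blur, applies it, and recovers the image with the inverse:

```python
import numpy as np

N, L = 8, 3                      # small demo sizes (the text uses N = 64, L = 9)
A = np.zeros((N, N))
A[2:5, 2:6] = 1.0                # a bright rectangle, as in the text's test image

# Toeplitz blurring matrix of Eq. (9.25): B[i, j] = 1/L when 0 <= i - j < L,
# i.e. first column [1 ... 1 0 ... 0]/L and first row [1 0 ... 0]/L
i, j = np.indices((N, N))
B = np.where((i - j >= 0) & (i - j < L), 1.0 / L, 0.0)

A1 = A @ B                       # simulate the blurred image
A_rec = A1 @ np.linalg.inv(B)    # step 3: apply inv(B) to recover the original
print(np.allclose(A_rec, A))     # -> True
```

Because B is lower triangular with nonzero diagonal entries 1/L, it is always invertible, which is what makes the deblurring step well defined.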
+
+### 9.3.3 Encrypting an Image
+
+If for any reason, two individuals desire to exchange an image but want to keep its contents only to themselves, they may agree beforehand on a scrambling matrix that the first individual applies to scramble the sent image, while the second individual applies the inverse of the scramble matrix to unscramble the received image.
+
+Given that an average quality image currently has a minimum size of about (1000×1000) pixels, reconstructing the scrambling matrix, if chosen cleverly, would be inaccessible except to the most powerful and specialized computers.
+
+The purpose of the following problems is to illustrate an efficient method for building a scrambling matrix.
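As a sketch of the idea (in Python/NumPy, with an arbitrary random test image), a permutation matrix of the kind prescribed in the exercises below scrambles the rows of an image, and its inverse restores them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
S = np.eye(n)[rng.permutation(n)]   # one 1 in every row and column, no two rows equal

image = rng.random((n, n))          # stand-in for a (10x10) image
scrambled = S @ image               # sender applies the scrambling matrix
restored = np.linalg.inv(S) @ scrambled   # receiver applies its inverse

print(np.allclose(restored, image))        # -> True
# For a permutation matrix, the inverse is simply the transpose:
print(np.allclose(np.linalg.inv(S), S.T))  # -> True
```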
+
+#### In-Class Exercises
+
+Assume for simplicity that the 2-D array size is (10×10), and that the scrambling matrix is chosen such that each row has one element equal to 1, while the others are 0, and no two rows are equal.
+
+**Pb. 9.15** For the (10×10) matrix dimension, how many possible scrambling matrices S, constructed as per the above prescription, are there? If the matrix size is (1000×1000), how many such scrambling matrices will there be?
+
+**Pb. 9.16** An original figure was scrambled by the scrambling matrix S to obtain the image shown in Figure 9.2. The matrix S is (10×10) and has all its elements equal to zero, except $S(1, 6) = S(2, 3) = S(3, 2) = S(4, 1) = S(5, 9) = S(6, 4) = S(7, 10) = S(8, 7) = S(9, 8) = S(10, 5) = 1$. Find the original image.
\ No newline at end of file
diff --git a/samples/texts/348597/page_303.md b/samples/texts/348597/page_303.md
new file mode 100644
index 0000000000000000000000000000000000000000..be0d7df038921e4df28d7f5fcfa42353c2d4d6bf
--- /dev/null
+++ b/samples/texts/348597/page_303.md
@@ -0,0 +1,10 @@
+FIGURE 9.2
+Scrambled image of Pb. 9.16.
+
+## 9.4 Lorentz Transformation*
+
+### 9.4.1 Space-Time Coordinates
+
+Einstein's theory of special relativity studies the relationship of the dynamics of a system, if described in two coordinate systems moving with constant speed one from the other. The theory of special relativity does not assume, as classical mechanics does, that there exists an absolute time common to all coordinate systems. It associates with each coordinate system a four-dimensional space (three space coordinates and one time coordinate). The theory of special relativity associates a space-time transformation to go between two coordinate systems moving uniformly with respect to each other. Each real point event (e.g., the arrival of a light flash on a screen) will be measured in both systems. If we distinguish by primes the data of the second observer from those of the first, then the first observer will ascribe to the event the coordinates $(x, y, z, t)$, while the second observer will ascribe to it the coordinates $(x', y', z', t')$; that is, there is no absolute time. The Lorentz transformation gives the rules for going from one coordinate system to the other.
+
+Assuming that the velocity $v$ between the two systems has the same direction as the positive x-axis and where the x-axis direction continuously coin-
\ No newline at end of file
diff --git a/samples/texts/348597/page_304.md b/samples/texts/348597/page_304.md
new file mode 100644
index 0000000000000000000000000000000000000000..23575a201da1cbacf7432ef2025620608a350704
--- /dev/null
+++ b/samples/texts/348597/page_304.md
@@ -0,0 +1,30 @@
+cides with that of the x'-axis; and furthermore, that the origin of the spatial
+coordinates of one system at time $t = 0$ coincides with the origin of the other
+system at time $t' = 0$, Einstein, on the basis of two postulates, derived the fol-
+lowing transformation relating the coordinates of the two systems:
+
+$$x' = \frac{x - vt}{\sqrt{1 - \frac{v^2}{c^2}}}, \quad y' = y, \quad z' = z, \quad t' = \frac{t - \frac{v}{c^2}x}{\sqrt{1 - \frac{v^2}{c^2}}} \quad (9.26)$$
+
+where *c* is the velocity of light in vacuum. The derivation of these formulae
+are detailed for you in electromagnetic theory or modern physics courses and
+are not the subject of discussions here. Our purpose here is to show that
+knowing the above transformations, we can deduce many interesting physi-
+cal observations as a result thereof.
+
+**Preparatory Exercise**
+
+**Pb. 9.17** Show that, upon a Lorentz transformation, we have the equality:
+
+$$x'^2 + y'^2 + z'^2 - c^2 t'^2 = x^2 + y^2 + z^2 - c^2 t^2$$
+
+This is referred to as the Lorentz invariance of the norm of the space-time four-vectors. What is the equivalent invariant in 3-D Euclidean geometry?
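Before proving the identity algebraically, it can be spot-checked numerically; a Python sketch with c = 1 and arbitrary values for v and the event coordinates:

```python
import numpy as np

c = 1.0
v = 0.6 * c                               # arbitrary boost speed
gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)

x, y, z, t = 2.0, 3.0, 4.0, 5.0           # arbitrary event coordinates
xp = gamma * (x - v * t)                  # Eq. (9.26)
tp = gamma * (t - v * x / c**2)
yp, zp = y, z

norm_unprimed = x**2 + y**2 + z**2 - c**2 * t**2
norm_primed = xp**2 + yp**2 + zp**2 - c**2 * tp**2
print(np.isclose(norm_unprimed, norm_primed))   # -> True
```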
+
+If we rename our coordinates such that:
+
+$$x_1 = x, \quad x_2 = y, \quad x_3 = z, \quad x_4 = jct \tag{9.27}$$
+
+the Lorentz transformation takes the following matricial form:
+
+$$L_{\beta} = \begin{bmatrix} \frac{1}{\sqrt{1-\beta^2}} & 0 & 0 & \frac{j\beta}{\sqrt{1-\beta^2}} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -\frac{j\beta}{\sqrt{1-\beta^2}} & 0 & 0 & \frac{1}{\sqrt{1-\beta^2}} \end{bmatrix} \qquad (9.28)$$
+
+where $\beta = \frac{v}{c}$, and the relations that were given earlier relating the primed and unprimed coordinates can be summarized by:
\ No newline at end of file
diff --git a/samples/texts/348597/page_305.md b/samples/texts/348597/page_305.md
new file mode 100644
index 0000000000000000000000000000000000000000..296b6a37f42faf0a4bdcebe2fea34aafb4823941
--- /dev/null
+++ b/samples/texts/348597/page_305.md
@@ -0,0 +1,40 @@
+$$
+\begin{bmatrix} x'_{1} \\ x'_{2} \\ x'_{3} \\ x'_{4} \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{1-\beta^2}} & 0 & 0 & \frac{j\beta}{\sqrt{1-\beta^2}} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -\frac{j\beta}{\sqrt{1-\beta^2}} & 0 & 0 & \frac{1}{\sqrt{1-\beta^2}} \end{bmatrix} * \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} \quad (9.29)
+$$
+
+**In-Class Exercises**
+
+**Pb. 9.18** Write the above transformation for the case where the two coordinate systems are moving away from each other at half the speed of light, and find (x', y', z', t') if
+
+$x = 2, \quad y = 3, \quad z = 4, \quad ct = 3$
+
+**Pb. 9.19** Find the determinant of $L_\beta$.
+
+**Pb. 9.20** Find the multiplicative inverse of $L_\beta$, and compare it to the transpose.
+
+**Pb. 9.21** Find the approximate expression of $L_\beta$ for $\beta \ll 1$. Give a physical interpretation to your result using Newtonian mechanics.
+
+### 9.4.2 Addition Theorem for Velocities
+
+The physical problem of interest here is: assuming that a point mass is moving in the primed system in the x'-y' plane with uniform speed u' and its trajectory is making an angle θ' with the x'-axis, what is the speed of this particle, as viewed in the unprimed system, and what is the angle that its trajectory makes with the x-axis, as observed in the unprimed system?
+
+In the unprimed and primed systems, respectively, the parametric equa-
+tions for the point particle motion are given by:
+
+$$
+x = ut \cos(\theta), \quad y = ut \sin(\theta) \tag{9.30}
+$$
+
+$$
+x' = u't' \cos(\theta'), \quad y' = u't' \sin(\theta') \tag{9.31}
+$$
+
+where *u* and *u'* are the speeds of the particle in the unprimed and primed systems, respectively. Note that if the prime system moves with velocity *v* with respect to the unprimed system, then the unprimed system moves with a velocity –*v* with respect to the primed system, and using the Lorentz transformation, we can write the following equalities:
\ No newline at end of file
diff --git a/samples/texts/348597/page_306.md b/samples/texts/348597/page_306.md
new file mode 100644
index 0000000000000000000000000000000000000000..12388b5c01ae033e420309344cdd5410412f5f63
--- /dev/null
+++ b/samples/texts/348597/page_306.md
@@ -0,0 +1,27 @@
+$$ut \cos(\theta) = \frac{(u' \cos(\theta') + v)}{\sqrt{1 - \beta^2}} t' \quad (9.32)$$
+
+$$ut \sin(\theta) = u't' \sin(\theta') \quad (9.33)$$
+
+$$t = \frac{[1 + (u'v / c^2) \cos(\theta')] t'}{\sqrt{1 - \beta^2}} \quad (9.34)$$
+
+Dividing Eqs. (9.32) and (9.33) by Eq. (9.34), we obtain:
+
+$$u \cos(\theta) = \frac{(u' \cos(\theta') + v)}{[1 + (u'v / c^2) \cos(\theta')]} \quad (9.35)$$
+
+$$u \sin(\theta) = \frac{u' \sin(\theta') \sqrt{1 - \beta^2}}{[1 + (u'v/c^2) \cos(\theta')]} \quad (9.36)$$
+
+From this we can deduce the magnitude and direction of the velocity of the particle, as measured in the unprimed system:
+
+$$u^2 = \frac{u'^2 + v^2 + 2u'v \cos(\theta') - (u'^2 v^2 / c^2) \sin^2(\theta')}{[1 + (u'v/c^2) \cos(\theta')]^2} \quad (9.37)$$
+
+$$\tan(\theta) = \frac{u' \sin(\theta') \sqrt{1 - \beta^2}}{u' \cos(\theta') + v} \quad (9.38)$$
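As a consistency check, squaring and adding Eqs. (9.35) and (9.36) must reproduce Eq. (9.37), and their ratio must reproduce Eq. (9.38); a Python sketch with arbitrary test values for v, u', and θ':

```python
import numpy as np

c = 1.0
v = 0.5 * c                    # boost speed (arbitrary test value)
beta = v / c
up, thp = 0.8 * c, 0.7         # u' and theta' (arbitrary test values)

denom = 1 + (up * v / c**2) * np.cos(thp)
ux = (up * np.cos(thp) + v) / denom                     # Eq. (9.35)
uy = up * np.sin(thp) * np.sqrt(1 - beta**2) / denom    # Eq. (9.36)

u_sq = (up**2 + v**2 + 2 * up * v * np.cos(thp)
        - (up**2 * v**2 / c**2) * np.sin(thp)**2) / denom**2   # Eq. (9.37)
tan_th = up * np.sin(thp) * np.sqrt(1 - beta**2) / (up * np.cos(thp) + v)  # Eq. (9.38)

print(np.isclose(ux**2 + uy**2, u_sq))   # -> True
print(np.isclose(uy / ux, tan_th))       # -> True
```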
+
+## Preparatory Exercises
+
+**Pb. 9.22** Find the velocity of a photon (the quantum of light) in the unprimed system if its velocity in the primed system is $u' = c$.
+
+(Note the constancy of the velocity of light, if measured from either the primed or the unprimed system. As previously mentioned, this constituted one of only two postulates in Einstein's formulation of the theory of special relativity, which determined uniquely the form of the dynamical boost transformation.)
+
+**Pb. 9.23** Show that if $u'$ is parallel to the $x'$-axis, then the velocity addition formula takes the following simple form:
+
+$$u = \frac{u' + v}{1 + \frac{u'v}{c^2}}$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_307.md b/samples/texts/348597/page_307.md
new file mode 100644
index 0000000000000000000000000000000000000000..537fe9df22d684b2b9db2f78fd655c7e0cdf862a
--- /dev/null
+++ b/samples/texts/348597/page_307.md
@@ -0,0 +1,27 @@
+**Pb. 9.24** Find the approximate form of the above expression for *u* when $\beta$ << 1, and show that it reduces to the expression of velocity addition in Newtonian mechanics.
+
+## In-Class Exercises
+
+**Pb. 9.25** Find the angle $\theta$, if $\theta' = \frac{\pi}{2}$ and $u' = v = \frac{c}{2}$.
+
+**Pb. 9.26** Plot the angle $\theta$ as a function of $\theta'$ when $v/c = 0.99$ and $u'/c = 1$.
+
+**Pb. 9.27** Let the variable $\phi$ be defined such that $\tanh(\phi) = \beta$. Write the Lorentz transformation matrix as function of $\phi$. Can you give the Lorentz transformation a geometric interpretation in non-Euclidean geometry?
+
+**Pb. 9.28** Using the result of Pb. 9.27, write the resultant transformation from a boost with parameter $\phi_1$, followed by another boost with parameter $\phi_2$. Does this rule for composition of Lorentz transformations remind you of a similar transformation that you studied previously in this chapter?
+
+## 9.5 MATLAB Commands Review
+
+**colormap** Control the color mix of an image.
+
+**fliplr** Flip a matrix left to right.
+
+**flipud** Flip a matrix in the up-to-down direction.
+
+**imagesc** Create a pixel intensity map from data stored in a matrix.
+
+**load** Import data files from outside MATLAB.
+
+**rot90** Rotate a matrix by 90°.
+
+**toeplitz** Specialized matrix constructor that describes, *inter alia*, the operation of a blur in an image.
\ No newline at end of file
diff --git a/samples/texts/348597/page_308.md b/samples/texts/348597/page_308.md
new file mode 100644
index 0000000000000000000000000000000000000000..a8fec9add175f6c447b65d9fbe821d58cffe6243
--- /dev/null
+++ b/samples/texts/348597/page_308.md
@@ -0,0 +1,15 @@
+# 10
+
+## A Taste of Probability Theory*
+
+### 10.1 Introduction
+
+In addition to its everyday use in all aspects of our public, personal, and leisure lives, probability plays an important role in electrical engineering practice in at least three important aspects. It is the mathematical tool to deal with three broad areas:
+
+1. *The problems associated with the inherent uncertainty in the input of certain systems.* The random arrival time of certain inputs to a system cannot be predetermined; for example, the log-on and the log-off times of terminals and workstations connected to a computer network, or the data packets' arrival time to a computer network node.
+
+2. *The problems associated with the distortion of a signal due to noise.* The effects of noise have to be dealt with satisfactorily at each stage of a communication system from the generation, to the transmission, to the detection phases. The source of this noise may be due to either fluctuations inherent in the physics of the problem (e.g., quantum effects and thermal effects) or due to random distortions due to externally generated uncontrollable parameters (e.g., weather, geography, etc.).
+
+3. *The problems associated with inherent human and computing machine limitations while solving very complex systems.* Individual treatment of the dynamics of very large number of molecules in a material, in which more than $10^{22}$ molecules may exist in a quart-size container, is not possible at this time, and we have to rely on statistical averages when describing the behavior of such systems. This is the field of statistical physics and thermodynamics.
+
+Furthermore, probability theory provides the necessary mathematical tools for error analysis in all experimental sciences. It permits estimation of the
\ No newline at end of file
diff --git a/samples/texts/348597/page_309.md b/samples/texts/348597/page_309.md
new file mode 100644
index 0000000000000000000000000000000000000000..c506b4f6b4a1894af3d053a2db69fc605b35155d
--- /dev/null
+++ b/samples/texts/348597/page_309.md
@@ -0,0 +1,23 @@
+error bars and the confidence level for any experimentally obtained result, through a methodical analysis and reduction of the raw data.
+
+In future courses in probability, random variables, stochastic processes (which is random-variable theory with time as a parameter), information theory, and statistical physics, you will study techniques and solutions to the different types of problems from the above list. In this very brief introduction to the subject, we introduce only the most fundamental ideas and results, which are where more advanced courses almost always start.
+
+## 10.2 Basics
+
+Probability theory is best developed mathematically based on a set of axioms from which a well-defined deductive theory can be constructed. This is referred to as the axiomatic approach. We concentrate, in this section, on developing the basics of probability theory, using a physical description of the underlying concepts of probability and related simple examples, to lead us intuitively to what is usually the starting point of the set theoretic axiomatic approach.
+
+Assume that we conduct $n$ independent trials under identical conditions, in each of which, depending on chance, a particular event $A$ of particular interest either occurs or does not occur. Let $n(A)$ be the number of experiments in which $A$ occurs. Then, the ratio $n(A)/n$, called the relative frequency of the event $A$ to occur in a series of experiments, clusters for $n \to \infty$ about some constant. This constant is called the probability of the event $A$, and is denoted by:
+
+$$P(A) = \lim_{n \to \infty} \frac{n(A)}{n} \qquad (10.1)$$
+
+From this definition, we know specifically what is meant by the statement that the probability for obtaining a head in the flip of a fair coin is 1/2.
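The clustering of the relative frequency described by Eq. (10.1) is easy to observe in a simulation; a Python sketch for the fair-coin case:

```python
import random

random.seed(1)                      # fixed seed so the run is reproducible
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)             # relative frequency n(A)/n drifts toward 0.5
```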
+
+Let us consider the rolling of a single die as our prototype experiment:
+
+1. The possible outcomes of this experiment are elements belonging to the set:
+
+$$S = \{1, 2, 3, 4, 5, 6\} \qquad (10.2)$$
+
+If the die is fair, the probability for each of the elementary elements of this set to occur in the roll of a die is equal to:
+
+$$P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = \frac{1}{6} \qquad (10.3)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_31.md b/samples/texts/348597/page_31.md
new file mode 100644
index 0000000000000000000000000000000000000000..2381e717adce9ba1a07c85825098604de619ee60
--- /dev/null
+++ b/samples/texts/348597/page_31.md
@@ -0,0 +1,35 @@
+**Pb. 1.9** If the parametric equations of a curve are given by:
+
+$$C: x = \cos(t), y = \sin(2t), \text{ and } 0 < t < 2\pi$$
+
+where the initial point is at $x = 1, y = 0$, and the final point is at $x = 1, y = 0$.
+
+The curve so obtained is called a Lissajous figure. It has the shape of a figure 8 with two nodes in the x-direction and only one node in the y-direction.
+
+What do you think the parametric equations should be if we wanted *m* nodes on the x-axis and *n* nodes on the y-axis? Test your hypothesis by plotting the results.
+
+### 1.7.3 Plotting a 3-D Curve
+
+Our next area of exploration is plotting 3-D curves.
+
+#### Example 1.14
+
+Plot the helix.
+
+**Solution:** To plot a helical curve, we can imagine initially that a point is revolving at a uniform speed around the perimeter of a circle. Now imagine that as the circular motion is continuing, the point is moving away from the x-y plane at some constant linear speed. The parametric representation of this motion can be implemented in MATLAB through the following:
+
+```matlab
+for m=1:201
+ th(m)=2*pi*.01*(m-1);
+ x(m)=cos(th(m));
+ y(m)=sin(th(m));
+ z(m)=th(m);
+end
+plot3(x,y,z)
+```
+
+#### In-Class Exercises
+
+**Pb. 1.10** In the helix of Example 1.14, what is the vertical distance (the pitch) between two consecutive helical turns? How can you control this distance? Find two methods of implementation.
+
+**Pb. 1.11** If instead of a circle in 2-D, as in the helix, the particle describes in 2-D a Lissajous pattern having two nodes in the y-direction and three nodes
\ No newline at end of file
diff --git a/samples/texts/348597/page_310.md b/samples/texts/348597/page_310.md
new file mode 100644
index 0000000000000000000000000000000000000000..18f91e86c88550ed9b940c6871403a7a29174a5c
--- /dev/null
+++ b/samples/texts/348597/page_310.md
@@ -0,0 +1,29 @@
+2. The observer may be interested not only in the elementary elements occurrence, but in finding the probability of a certain event which may consist of a set of elementary outcomes; for example:
+
+a. An event may consist of "obtaining an even number of spots on the upward face of a randomly rolled die." This event then consists of all successful trials having as experimental outcomes any member of the set:
+
+$$E = \{2, 4, 6\} \tag{10.4}$$
+
+b. Another event may consist of "obtaining three or more spots" (hence, we will use this form of abbreviated statement, and not keep repeating: on the upward face of a randomly rolled die). Then, this event consists of all successful trials having experimental outcomes any member of the set:
+
+$$B = \{3, 4, 5, 6\} \tag{10.5}$$
+
+Note that, in general, events may have overlapping elementary elements.
+
+For a fair die, using the definition of the probability as the limit of a relative frequency, it is possible to conclude, based on experimental trials, that:
+
+$$P(E) = P(2) + P(4) + P(6) = \frac{1}{2} \tag{10.6}$$
+
+while
+
+$$P(B) = P(3) + P(4) + P(5) + P(6) = \frac{2}{3} \tag{10.7}$$
+
+and
+
+$$P(S) = 1 \tag{10.8}$$
+
+The last equation [Eq. (10.8)] is the mathematical expression for the statement that the probability of the event that includes all possible elementary outcomes is 1 (i.e., certainty).
+
+It should be noted that if we define the events *O* and *C* to mean the events of “obtaining an odd number” and “obtaining a number smaller than 3,” respectively, we can obtain these events’ probabilities by enumerating the elements of the subsets of *S* that represent these events; namely:
+
+$$P(O) = P(1) + P(3) + P(5) = \frac{1}{2} \tag{10.9}$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_311.md b/samples/texts/348597/page_311.md
new file mode 100644
index 0000000000000000000000000000000000000000..671ba23774500bba28f72e21ebfd573003c913fb
--- /dev/null
+++ b/samples/texts/348597/page_311.md
@@ -0,0 +1,25 @@
+$$P(C) = P(1) + P(2) = \frac{1}{3} \qquad (10.10)$$
+
+However, we also could have obtained these same results by noting that the events E and O (B and C) are disjoint and that their union spanned the set S. Therefore, the probabilities for events O and C could have been deduced, as well, through the relations:
+
+$$P(O) = 1 - P(E) \qquad (10.11)$$
+
+$$P(C) = 1 - P(B) \qquad (10.12)$$
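Both relations can be checked by direct enumeration of the die's sample space; a Python sketch using exact fractions:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}                      # sample space, Eq. (10.2)
P = {s: Fraction(1, 6) for s in S}          # fair die, Eq. (10.3)

def prob(event):
    """Probability of an event = sum over its elementary outcomes."""
    return sum(P[s] for s in event)

E, O = {2, 4, 6}, {1, 3, 5}                 # even / odd
B, C = {3, 4, 5, 6}, {1, 2}                 # "3 or more" / "smaller than 3"

print(prob(O) == 1 - prob(E))               # -> True, Eq. (10.11)
print(prob(C) == 1 - prob(B))               # -> True, Eq. (10.12)
```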
+
+From the above and similar observations, it would be a satisfactory representation of the physical world if the above results were codified and elevated to the status of axioms for a formal theory of probability. However, the question becomes how many of these basic results (the axioms) one really needs to assume, such that it will be possible to derive all other results of the theory from this seed. This is the starting point for the formal approach to the probability theory.
+
+The following axioms were proven to be a satisfactory starting point. Assign to each event A, consisting of elementary occurrences from the set S, a number P(A), which is designated as the probability of the event A, and such that:
+
+$$1. \qquad 0 \le P(A) \qquad (10.13)$$
+
+$$2. \qquad P(S) = 1 \qquad (10.14)$$
+
+$$3. \quad \begin{array}{l} \text{If: } A \cap B = \emptyset, \text{ where } \emptyset \text{ is the empty set} \\ \text{Then: } P(A \cup B) = P(A) + P(B) \end{array} \qquad (10.15)$$
+
+In the following examples, we illustrate some common techniques for finding the probabilities for certain events. Look around, and you will find plenty more.
+
+**Example 10.1**
+
+Find the probability for getting three sixes in a roll of three dice.
+
+**Solution:** First, compute the number of elements in the total sample space. We can describe each roll of the dice by a 3-tuplet (a, b, c), where a, b, and c can take the values 1, 2, 3, 4, 5, 6. There are $6^3 = 216$ possible 3-tuplets. The event that we are seeking is realized only in the single elementary occurrence when the 3-tuplet (6, 6, 6) is obtained; therefore, the probability for this event, for fair dice, is
\ No newline at end of file
diff --git a/samples/texts/348597/page_312.md b/samples/texts/348597/page_312.md
new file mode 100644
index 0000000000000000000000000000000000000000..c5ea993fde6fd4751a66892b9a8c82940868451f
--- /dev/null
+++ b/samples/texts/348597/page_312.md
@@ -0,0 +1,31 @@
+$$P(A) = \frac{1}{216}$$
+
+**Example 10.2**
+
+Find the probability of getting only two sixes in a roll of three dice.
+
+*Solution:* The event in this case consists of all elementary occurrences having the following forms:
+
+$(a, 6, 6), \ (6, b, 6), \ (6, 6, c)$
+
+where $a = 1, ..., 5$; $b = 1, ..., 5$; and $c = 1, ..., 5$. Therefore, the event A consists of elements corresponding to 15 elementary occurrences, and its probability is
+
+$$P(A) = \frac{15}{216}$$
+
+**Example 10.3**
+
+Find the probability that, if three individuals are asked to guess a number from 1 to 10, their guesses will be different numbers.
+
+*Solution:* There are 1000 distinct equiprobable 3-tuplets $(a, b, c)$, where each component of the 3-tuplet can have any value from 1 to 10. The event A occurs when all components have unequal values. Therefore, while $a$ can have any of 10 possible values, $b$ can have only 9, and $c$ can have only 8. Therefore, $n(A) = 8 \times 9 \times 10$, and the probability for the event A is
+
+$$P(A) = \frac{8 \times 9 \times 10}{1000} = 0.72$$
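The same probability can be confirmed by brute-force enumeration of the 1000 equiprobable 3-tuplets; a Python sketch:

```python
from itertools import product

# Count the 3-tuplets (a, b, c), each component from 1 to 10, with all values distinct
favorable = sum(1 for t in product(range(1, 11), repeat=3)
                if len(set(t)) == 3)
print(favorable, favorable / 1000)   # -> 720 0.72
```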
+
+**Example 10.4**
+
+An inspector checks a batch of 100 microprocessors, 5 of which are defective. He examines ten items selected at random. If none of the ten items is defective, he accepts the batch. What is the probability that he will accept the batch?
+
+*Solution:* The number of ways of selecting 10 items from a batch of 100 items is:
+
+$$N = \frac{100!}{10!(100-10)!} = \frac{100!}{10!90!} = C_{10}^{100}$$
+
+where $C_k^n$ is the binomial coefficient and represents the number of combinations of $n$ objects taken $k$ at a time without regard to order. It is equal to $\frac{n!}{k!(n-k)!}$. All these combinations are equally probable.
\ No newline at end of file
diff --git a/samples/texts/348597/page_313.md b/samples/texts/348597/page_313.md
new file mode 100644
index 0000000000000000000000000000000000000000..ed5a402ba95acf634b5352379de414f6b0420bd3
--- /dev/null
+++ b/samples/texts/348597/page_313.md
@@ -0,0 +1,41 @@
+If the event A is that where the batch is accepted by the inspector, then A
+occurs when all ten items selected belong to the set of acceptable quality
+units. The number of elements in A is
+
+$$
+N(A) = \frac{95!}{10!85!} = C_{10}^{95}
+$$
+
+and the probability for the event A is
+
+$$
+P(A) = \frac{C_{10}^{95}}{C_{10}^{100}} = \frac{86 \times 87 \times 88 \times 89 \times 90}{96 \times 97 \times 98 \times 99 \times 100} = 0.5837
+$$
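This acceptance probability can be verified numerically; a minimal sketch in Python using the same binomial coefficients (the small discrepancy with the value quoted above is rounding):

```python
from math import comb

# Example 10.4 check: the batch is accepted when all 10 sampled items
# come from the 95 good units among the 100.
p_accept = comb(95, 10) / comb(100, 10)
print(round(p_accept, 4))  # 0.5838
```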
+
+## In-Class Exercises
+
+**Pb. 10.1** A cube whose faces are colored is split into 125 smaller cubes of equal size.
+
+a. Find the probability that a cube drawn at random from the batch of randomly mixed smaller cubes will have three colored faces.
+
+b. Find the probability that a cube drawn from this batch will have two colored faces.
+
+**Pb. 10.2** An urn has three blue balls and six red balls. One ball is drawn at random from the urn, and then a second one, which turns out to be blue. What is the probability that the first ball drawn was blue?
+
+**Pb. 10.3** Find the probability that the last two digits of the cube of a random integer are 1. Solve the problem analytically, and then compare your result to a numerical experiment that you will conduct and where you compute the cubes of all numbers from 1 to 1000.
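The numerical experiment suggested in Pb. 10.3 can be set up as follows (a sketch in Python rather than the book's MATLAB, reading "the last two digits are 1" as the cube ending in the digits 11):

```python
# Count the integers n in 1..1000 whose cube ends in 11, and report
# the observed frequency for comparison with the analytical answer.
hits = [n for n in range(1, 1001) if n**3 % 100 == 11]
print(len(hits) / 1000)  # 0.01
```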
+
+**Pb. 10.4** From a lot of *n* resistors, *p* are defective. Find the probability that *k* resistors out of a sample of *m* selected at random are found defective.
+
+**Pb. 10.5** Three cards are drawn from a deck of cards.
+
+a. Find the probability that these cards *are* the Ace, the King, and the Queen of Hearts.
+
+b. Would the answer change if the statement of the problem was "an Ace, a King, and a Queen"?
+
+**Pb. 10.6** Show that:
+
+$$
+P(\bar{A}) = 1 - P(A)
+$$
+
+where $\bar{A}$, the complement of A, consists of all the elements of S that are not in A.
\ No newline at end of file
diff --git a/samples/texts/348597/page_314.md b/samples/texts/348597/page_314.md
new file mode 100644
index 0000000000000000000000000000000000000000..64925582ddf8379fe7ef916b2f5cf53e9d41131b
--- /dev/null
+++ b/samples/texts/348597/page_314.md
@@ -0,0 +1,35 @@
+NOTE In solving a certain category of probability problems, it is often convenient to find $P(A)$ by computing the probability of its complement and then applying the above relation.
+
+**Pb. 10.7** Show that if $A_1, A_2, ..., A_n$ are mutually exclusive events, then:
+
+$$P(A_1 \cup A_2 \cup ... \cup A_n) = P(A_1) + P(A_2) + ... + P(A_n)$$
+
+(Hint: Use mathematical induction and Eq. (10.15).)
+
+## 10.3 Addition Laws for Probabilities
+
+We start by reminding the reader of the key results of elementary set theory:
+
+* The Commutative law states that:
+
+$$A \cap B = B \cap A \qquad (10.16)$$
+
+$$A \cup B = B \cup A \qquad (10.17)$$
+
+* The Distributive laws are written as:
+
+$$A \cap (B \cup C) = (A \cap B) \cup (A \cap C) \qquad (10.18)$$
+
+$$A \cup (B \cap C) = (A \cup B) \cap (A \cup C) \qquad (10.19)$$
+
+* The Associative laws are written as:
+
+$$(A \cup B) \cup C = A \cup (B \cup C) = A \cup B \cup C \qquad (10.20)$$
+
+$$(A \cap B) \cap C = A \cap (B \cap C) = A \cap B \cap C \qquad (10.21)$$
+
+* De Morgan's laws are
+
+$$\overline{(A \cup B)} = \bar{A} \cap \bar{B} \qquad (10.22)$$
+
+$$\overline{(A \cap B)} = \bar{A} \cup \bar{B} \qquad (10.23)$$
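These identities are easy to sanity-check on a small finite set; a sketch in Python, with $S$ taken as the die sample space and the complement of $X$ written `S - X`:

```python
# Finite-set check of the distributive law (10.18) and De Morgan's
# laws (10.22)-(10.23).
S = {1, 2, 3, 4, 5, 6}
A, B, C = {2, 4, 6}, {3, 4, 5, 6}, {1, 2}

assert A & (B | C) == (A & B) | (A & C)   # Eq. (10.18)
assert S - (A | B) == (S - A) & (S - B)   # Eq. (10.22)
assert S - (A & B) == (S - A) | (S - B)   # Eq. (10.23)
print("identities hold")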
\ No newline at end of file
diff --git a/samples/texts/348597/page_315.md b/samples/texts/348597/page_315.md
new file mode 100644
index 0000000000000000000000000000000000000000..3f14a5e35ca7c256fe23bd53afe4b3febf8d630f
--- /dev/null
+++ b/samples/texts/348597/page_315.md
@@ -0,0 +1,35 @@
+* The Duality principle states that: If in an identity, we replace unions by intersections, intersections by unions, $S$ by $\emptyset$, and $\emptyset$ by $S$, then the identity is preserved.
+
+**THEOREM 1**
+
+If we define the difference of two events $A_1 - A_2$ to mean the events in which $A_1$ occurs but not $A_2$, the following equalities are valid:
+
+$$P(A_1 - A_2) = P(A_1) - P(A_1 \cap A_2) \quad (10.24)$$
+
+$$P(A_2 - A_1) = P(A_2) - P(A_1 \cap A_2) \quad (10.25)$$
+
+$$P(A_1 \cup A_2) = P(A_1) + P(A_2) - P(A_1 \cap A_2) \quad (10.26)$$
+
+PROOF From the basic set theory algebra results, we can deduce the following equalities:
+
+$$A_1 = (A_1 - A_2) \cup (A_1 \cap A_2) \quad (10.27)$$
+
+$$A_2 = (A_2 - A_1) \cup (A_1 \cap A_2) \quad (10.28)$$
+
+$$A_1 \cup A_2 = (A_1 - A_2) \cup (A_2 - A_1) \cup (A_1 \cap A_2) \quad (10.29)$$
+
+Further note that the events $(A_1 - A_2)$, $(A_2 - A_1)$, and $(A_1 \cap A_2)$ are mutually exclusive. Using the results from **Pb. 10.7**, Eqs. (10.27) and (10.28), and the preceding comment, we can write:
+
+$$P(A_1) = P(A_1 - A_2) + P(A_1 \cap A_2) \quad (10.30)$$
+
+$$P(A_2) = P(A_2 - A_1) + P(A_1 \cap A_2) \quad (10.31)$$
+
+which establish Eqs. (10.24) and (10.25). Next, consider Eq. (10.29); because of the mutual exclusivity of the events represented by each of the parentheses on its RHS, we can use the results of **Pb. 10.7** to write:
+
+$$P(A_1 \cup A_2) = P(A_1 - A_2) + P(A_2 - A_1) + P(A_1 \cap A_2) \quad (10.32)$$
+
+Using Eqs. (10.30) and (10.31), this can be reduced to Eq. (10.26).
+
+**THEOREM 2**
+
+Given any *n* events $A_1, A_2, ..., A_n$ and defining $P_1, P_2, P_3, ..., P_n$ to mean:
\ No newline at end of file
diff --git a/samples/texts/348597/page_316.md b/samples/texts/348597/page_316.md
new file mode 100644
index 0000000000000000000000000000000000000000..b7d750f01659299d0a29422a994491c76b8cf999
--- /dev/null
+++ b/samples/texts/348597/page_316.md
@@ -0,0 +1,35 @@
+$$P_1 = \sum_{i=1}^{n} P(A_i) \qquad (10.33)$$
+
+$$P_2 = \sum_{1 \le i < j \le n} P(A_i \cap A_j) \qquad (10.34)$$
+
+$$P_3 = \sum_{1 \le i < j < k \le n} P(A_i \cap A_j \cap A_k) \qquad (10.35)$$
+
+etc., ..., then:
+
+$$P\left(\bigcup_{k=1}^{n} A_k\right) = P_1 - P_2 + P_3 - P_4 + \dots + (-1)^{n-1} P_n \qquad (10.36)$$
+
+This theorem can be proven by mathematical induction (we do not give the details of this proof here).
+
+**Example 10.5**
+
+Using the events *E*, *O*, *B*, *C* as defined in Section 10.1, use Eq. (10.36) to show that: $P(E \cup O \cup B \cup C) = 1$.
+
+**Solution:** Using Eq. (10.36), we can write:
+
+$$
+\begin{align*}
+P(E \cup O \cup B \cup C) &= P(E) + P(O) + P(B) + P(C) \\
+&\quad - [P(E \cap O) + P(E \cap B) + P(E \cap C) + P(O \cap B) + P(O \cap C) + P(B \cap C)] \\
+&\quad + [P(E \cap O \cap B) + P(E \cap O \cap C) + P(E \cap B \cap C) + P(O \cap B \cap C)] \\
+&\quad - P(E \cap O \cap B \cap C) \\
+&= \left[\frac{1}{2} + \frac{1}{2} + \frac{2}{3} + \frac{1}{3}\right] - \left[0 + \frac{2}{6} + \frac{1}{6} + \frac{2}{6} + \frac{1}{6} + 0\right] + [0 + 0 + 0 + 0] - [0] = 1
+\end{align*}
+$$
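The same inclusion-exclusion sum can be generated programmatically; a sketch in Python with exact fractions, taking E, O, B, C to be the even, odd, $\ge 3$, and $\le 2$ die events as in the text:

```python
from fractions import Fraction
from itertools import combinations

# Example 10.5 re-done by machine: apply Eq. (10.36) to the four events.
S = {1, 2, 3, 4, 5, 6}
events = [{2, 4, 6}, {1, 3, 5}, {3, 4, 5, 6}, {1, 2}]   # E, O, B, C

def P(A):
    return Fraction(len(A), len(S))

total = Fraction(0)
for r in range(1, len(events) + 1):
    for combo in combinations(events, r):
        total += (-1) ** (r - 1) * P(set.intersection(*combo))
print(total)  # 1
```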
+
+**Example 10.6**
+
+Show that for any *n* events $A_1, A_2, ..., A_n$, the following inequality holds:
+
+$$P\left(\bigcup_{k=1}^{n} A_k\right) \leq \sum_{k=1}^{n} P(A_k)$$
+
+**Solution:** We prove this result by mathematical induction:
\ No newline at end of file
diff --git a/samples/texts/348597/page_317.md b/samples/texts/348597/page_317.md
new file mode 100644
index 0000000000000000000000000000000000000000..9bae74e7608429e94c5a368ef4c1aca2784e7a9e
--- /dev/null
+++ b/samples/texts/348597/page_317.md
@@ -0,0 +1,38 @@
+* For $n = 2$, the result holds because by Eq. (10.26) we have:
+
+$$P(A_1 \cup A_2) = P(A_1) + P(A_2) - P(A_1 \cap A_2)$$
+
+and since any probability is a non-negative number, this leads to
+the inequality:
+
+$$P(A_1 \cup A_2) \leq P(A_1) + P(A_2)$$
+
+* Assume that the theorem is true for ($n-1$) events, then we can write:
+
+$$P\left(\bigcup_{k=2}^{n} A_k\right) \leq \sum_{k=2}^{n} P(A_k)$$
+
+* Using associativity, Eq. (10.26), the result for (n-1) events, and the non-negativity of the probability, we can write:
+
+$$
+\begin{align*}
+P\left(\bigcup_{k=1}^{n} A_k\right) &= P\left(A_1 \cup \left(\bigcup_{k=2}^{n} A_k\right)\right) \\
+&= P(A_1) + P\left(\bigcup_{k=2}^{n} A_k\right) - P\left(A_1 \cap \left(\bigcup_{k=2}^{n} A_k\right)\right) \\
+&\leq P(A_1) + \sum_{k=2}^{n} P(A_k) - P\left(A_1 \cap \left(\bigcup_{k=2}^{n} A_k\right)\right) \leq \sum_{k=1}^{n} P(A_k)
+\end{align*}
+$$
+
+which is the desired result.
+
+## In-Class Exercises
+
+**Pb. 10.8** Show that if the events $A_1, A_2, ..., A_n$ are such that:
+
+$$A_1 \subsetneq A_2 \subsetneq \dots \subsetneq A_n$$
+
+then:
+
+$$P\left(\bigcup_{k=1}^{n} A_k\right) = P(A_n)$$
+
+**Pb. 10.9** Show that if the events $A_1, A_2, ..., A_n$ are such that:
+
+$$A_1 \supsetneq A_2 \supsetneq \dots \supsetneq A_n$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_318.md b/samples/texts/348597/page_318.md
new file mode 100644
index 0000000000000000000000000000000000000000..acb905d5a05015a6820a090823149e830b606eda
--- /dev/null
+++ b/samples/texts/348597/page_318.md
@@ -0,0 +1,33 @@
+then:
+
+$$P\left(\bigcap_{k=1}^{n} A_k\right) = P(A_n)$$
+
+**Pb. 10.10** Find the probability that a positive integer randomly selected will be non-divisible by:
+
+a. 2 and 3.
+
+b. 2 or 3.
+
+**Pb. 10.11** Show that the expression for Eq. (10.36) simplifies to:
+
+$$P(A_1 \cup A_2 \cup \dots \cup A_n) = C_1^n P(A_1) - C_2^n P(A_1 \cap A_2) + C_3^n P(A_1 \cap A_2 \cap A_3) - \dots + (-1)^{n-1} P(A_1 \cap A_2 \cap \dots \cap A_n)$$
+
+when the probability for the intersection of any number of events is independent of the indices.
+
+**Pb. 10.12** A filing stack has $n$ drawers, and a secretary randomly files $m$ letters in these drawers.
+
+a. Assuming that $m > n$, find the probability that there will be at least one letter in each drawer.
+
+b. Plot this probability for $n = 12$, and $15 \le m \le 50$.
+
+*(Hint: Take the event $A_i$ to mean that no letter is filed in the $i$th drawer and use the result of Pb. 10.11.)*
+
+## 10.4 Conditional Probability
+
+The conditional probability of an event $A$ assuming $C$ and denoted by $P(A|C)$ is, by definition, the ratio:
+
+$$P(A|C) = \frac{P(A \cap C)}{P(C)} \qquad (10.37)$$
+
+### Example 10.7
+
+Considering the events *E*, *O*, *B*, *C* as defined in Section 10.2 and the above definition for conditional probability, find the probability that the number of spots showing on the die is even, assuming that it is equal to or greater than 3.
\ No newline at end of file
diff --git a/samples/texts/348597/page_319.md b/samples/texts/348597/page_319.md
new file mode 100644
index 0000000000000000000000000000000000000000..e12ef759db52fdf5212456a5e3425533524c3499
--- /dev/null
+++ b/samples/texts/348597/page_319.md
@@ -0,0 +1,33 @@
+**Solution:** In the above notation, we are asked to find the quantity $P(E|B)$. Using Eq. (10.37), this is equal to:
+
+$$P(E|B) = \frac{P(E \cap B)}{P(B)} = \frac{P(\{4,6\})}{P(\{3,4,5,6\})} = \frac{\left(\frac{2}{6}\right)}{\left(\frac{4}{6}\right)} = \frac{1}{2}$$
+
+In this case, $P(E|B) = P(E)$. When this happens, we say that the two events *E* and *B* are independent.
+
+### Example 10.8
+
+Find the probability that the number of spots showing on the die is even, assuming that it is larger than 3.
+
+**Solution:** Call *D* the event of having the number of spots larger than 3. Using Eq. (10.37), $P(E|D)$ is equal to:
+
+$$P(E|D) = \frac{P(E \cap D)}{P(D)} = \frac{P(\{4,6\})}{P(\{4,5,6\})} = \frac{\left(\frac{2}{6}\right)}{\left(\frac{3}{6}\right)} = \frac{2}{3}$$
+
+In this case, $P(E|D) \neq P(E)$; and thus the two events *E* and *D* are not independent.
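Both conditional probabilities can be checked by direct counting; a sketch in Python applying Eq. (10.37) to the die events used above:

```python
from fractions import Fraction

# Examples 10.7 and 10.8: E = even, B = {3,4,5,6}, D = {4,5,6}.
S = {1, 2, 3, 4, 5, 6}
E, B, D = {2, 4, 6}, {3, 4, 5, 6}, {4, 5, 6}

def P(A):
    return Fraction(len(A), len(S))

print(P(E & B) / P(B))  # 1/2, equal to P(E): E and B are independent
print(P(E & D) / P(D))  # 2/3, different from P(E): E and D are not
```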
+
+### Example 10.9
+
+Find the probability of picking a blue ball first, then a red ball from an urn that contains five red balls and four blue balls.
+
+**Solution:** From the definition of conditional probability [Eq. (10.37)], we can write:
+
+$$
+\begin{align*}
+&P(\text{Blue ball first and Red ball second}) = \\
+& P(\text{Red ball second}|\text{Blue ball first}) \times P(\text{Blue ball first})
+\end{align*} $$
+
+The probability of picking a blue ball first is
+
+$$P(\text{Blue ball first}) = \frac{\text{Original number of Blue balls}}{\text{Total number of balls}} = \frac{4}{9}$$
+
+The conditional probability is given by:
\ No newline at end of file
diff --git a/samples/texts/348597/page_32.md b/samples/texts/348597/page_32.md
new file mode 100644
index 0000000000000000000000000000000000000000..6de23706563527211a81d5349c3bfd1a2cd5d424
--- /dev/null
+++ b/samples/texts/348597/page_32.md
@@ -0,0 +1,35 @@
+in the x-direction, assuming that the z-parametric equation remains the same,
+show the resulting 3-D trajectory.
+
+**Pb. 1.12** What if $z(t)$ is periodic in $t$? For example, $z(t) = \cos(t)$ or $z(t) = \cos(2t)$, while the 2-D motion is still circular. Show the 3-D trajectory.
+
+In Example 1.14, we used the **for** loop to generate the dependent arrays for
+the helix; but as pointed out previously, a more efficient method to program
+the helix is in the array notation, as follows:
+
+```matlab
+th=[0:.01:2]*2*pi;
+x=cos(th);
+y=sin(th);
+z=th;
+plot3(x,y,z)
+```
+
+### 1.7.4 Plotting a 3-D Surface
+
+We now explore the two different techniques for rendering, in MATLAB, 3-D
+surface graphics: the mesh and the contour representations.
+
+* A function of two variables $z = f(x, y)$ represents a surface in 3-D geometry; for example:
+
+$$z = ax + by + c$$
+
+represents a plane that crosses the vertical axis (z-axis) at c.
+
+* There are essentially two main techniques in MATLAB for viewing surfaces: the **mesh** function and the **contour** function.
+
+* In both techniques, we must first create a 2-D array structure (like a checkerboard) with the appropriate *x-* and *y*-values. To implement this, we use the MATLAB **meshgrid** function.
+
+* The *z*-component is then expressed in the variables assigned to implement the **meshgrid** command.
+
+* We then plot the function with either the **mesh** command or the **contour** command. The **mesh** command gives a 3-D rendering of the surface, while the **contour** command gives contour lines, wherein each contour represents the locus of points on the surface having the same height above the *x*-*y* plane. This last rendering technique is that used by mapmakers to represent the topography of a terrain.
\ No newline at end of file
diff --git a/samples/texts/348597/page_320.md b/samples/texts/348597/page_320.md
new file mode 100644
index 0000000000000000000000000000000000000000..540e1c6f12a41b142259b3fd33454cf4e17cf6e7
--- /dev/null
+++ b/samples/texts/348597/page_320.md
@@ -0,0 +1,36 @@
+$$P(\text{Red ball second} | \text{Blue ball first}) = \frac{\text{Number of Red balls}}{\text{Number of balls remaining after first pick}} = \frac{5}{8}$$
+
+giving:
+
+$$P(\text{Blue ball first and Red ball second}) = \frac{4}{9} \times \frac{5}{8} = \frac{5}{18}$$
+
+### 10.4.1 Total Probability and Bayes Theorems
+
+#### TOTAL PROBABILITY THEOREM
+
+If {$A_1, A_2, ..., A_n$} is a partition of the total elementary occurrences set S, that is,
+
+$$\bigcup_{i=1}^{n} A_i = S \quad \text{and} \quad A_i \cap A_j = \emptyset \quad \text{for } i \neq j$$
+
+and B is an arbitrary event, then:
+
+$$P(B) = P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + ... + P(B|A_n)P(A_n) \quad (10.38)$$
+
+PROOF. From the algebra of sets, and the definition of a partition, we can write the following equalities:
+
+$$
+\begin{align}
+B &= B \cap S = B \cap (A_1 \cup A_2 \cup \dots \cup A_n) \tag{10.39} \\
+&= (B \cap A_1) \cup (B \cap A_2) \cup \dots \cup (B \cap A_n)
+\end{align}
+$$
+
+Since the events ($B \cap A_i$) and ($B \cap A_j$) are mutually exclusive for $i \neq j$, then using the results of **Pb. 10.7**, we can deduce that:
+
+$$P(B) = P(B \cap A_1) + P(B \cap A_2) + \dots + P(B \cap A_n) \quad (10.40)$$
+
+Now, using the conditional probability definition [Eq. (10.37)], Eq. (10.40) can be written as:
+
+$$P(B) = P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + \dots + P(B|A_n)P(A_n) \quad (10.41)$$
+
+This result is known as the Total Probability theorem.
\ No newline at end of file
diff --git a/samples/texts/348597/page_321.md b/samples/texts/348597/page_321.md
new file mode 100644
index 0000000000000000000000000000000000000000..b0faea0310a7c6dd37b8445a02864408057d6ba6
--- /dev/null
+++ b/samples/texts/348597/page_321.md
@@ -0,0 +1,44 @@
+#### BAYES THEOREM
+
+$$
+P(A_i|B) = \frac{P(B|A_i)P(A_i)}{P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + \dots + P(B|A_n)P(A_n)} \quad (10.42)
+$$
+
+PROOF From the definition of the conditional probability [Eq. (10.37)], we can write:
+
+$$
+P(B \cap A_i) = P(A_i | B)P(B) \tag{10.43}
+$$
+
+Again, using Eqs. (10.37) and (10.43), we have:
+
+$$
+P(A_i|B) = \frac{P(B|A_i)P(A_i)}{P(B)} \qquad (10.44)
+$$
+
+Now, substituting Eq. (10.41) in the denominator of Eq. (10.44), we obtain Eq.
+(10.42).
+
+**Example 10.10**
+
+A digital communication channel transmits the signal as a collection of ones
+(1s) and zeros (0s). Assume (statistically) that 40% of the 1s and 33% of the
+0s are changed upon transmission. Suppose that, in a message, the ratio of
+transmitted 1s to transmitted 0s was 5/3. What is the probability that the
+received signal is the same as the transmitted signal if:
+
+a. The received signal was a 1?
+
+b. The received signal was a 0?
+
+Solution: Let O be the event that 1 was received, and Z be the event that 0 was received. If H₁ is the hypothesis that 1 was transmitted and H₀ is the hypothesis that 0 was transmitted, then from the statement of the problem, we know that:
+
+$$
+\frac{P(H_1)}{P(H_0)} = \frac{5}{3} \quad \text{and} \quad P(H_1) + P(H_0) = 1
+$$
+
+giving:
+
+$P(H_1) = \frac{5}{8}$ and $P(H_0) = \frac{3}{8}$
+
+Furthermore, from the text of the problem, we know that:
\ No newline at end of file
diff --git a/samples/texts/348597/page_322.md b/samples/texts/348597/page_322.md
new file mode 100644
index 0000000000000000000000000000000000000000..3ff3bed86b50c468a70be18f446a2d8b787b42de
--- /dev/null
+++ b/samples/texts/348597/page_322.md
@@ -0,0 +1,31 @@
+$$P(O|H_1) = \frac{3}{5} \quad \text{and} \quad P(Z|H_1) = \frac{2}{5}$$
+
+$$P(O|H_0) = \frac{1}{3} \quad \text{and} \quad P(Z|H_0) = \frac{2}{3}$$
+
+From the total probability result [Eq. (10.41)], we obtain:
+
+$$\begin{align*}
+P(O) &= P(O|H_1)P(H_1) + P(O|H_0)P(H_0) \\
+&= \frac{3}{5} \times \frac{5}{8} + \frac{1}{3} \times \frac{3}{8} = \frac{1}{2}
+\end{align*}$$
+
+and
+
+$$\begin{align*}
+P(Z) &= P(Z|H_1)P(H_1) + P(Z|H_0)P(H_0) \\
+&= \frac{2}{5} \times \frac{5}{8} + \frac{2}{3} \times \frac{3}{8} = \frac{1}{2}
+\end{align*}$$
+
+The probability that the transmitted signal was 1 if the received signal is 1 follows from Bayes theorem:
+
+$$P(H_1|O) = \frac{P(H_1)P(O|H_1)}{P(O)} = \frac{\frac{5}{8} \times \frac{3}{5}}{\frac{1}{2}} = \frac{3}{4}$$
+
+Similarly, we can obtain the probability that the transmitted signal was 0 if the received signal is 0:
+
+$$P(H_0|Z) = \frac{P(H_0)P(Z|H_0)}{P(Z)} = \frac{\frac{3}{8} \times \frac{2}{3}}{\frac{1}{2}} = \frac{1}{2}$$
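The whole computation can be checked with exact arithmetic; a sketch in Python combining the total-probability and Bayes formulas, Eqs. (10.41) and (10.44):

```python
from fractions import Fraction

# Example 10.10: priors and channel transition probabilities as given.
P_H1, P_H0 = Fraction(5, 8), Fraction(3, 8)
P_O_H1, P_Z_H1 = Fraction(3, 5), Fraction(2, 5)   # received 1 / 0 given H1
P_O_H0, P_Z_H0 = Fraction(1, 3), Fraction(2, 3)   # received 1 / 0 given H0

P_O = P_O_H1 * P_H1 + P_O_H0 * P_H0   # total probability, Eq. (10.41)
P_Z = P_Z_H1 * P_H1 + P_Z_H0 * P_H0
print(P_O, P_Z)                        # 1/2 1/2
print(P_O_H1 * P_H1 / P_O)             # P(H1|O) = 3/4
print(P_Z_H0 * P_H0 / P_Z)             # P(H0|Z) = 1/2
```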
+
+## In-Class Exercises
+
+**Pb. 10.13** Show that when two events A and B are independent, the addition law for probability becomes:
+
+$$P(A \cup B) = P(A) + P(B) - P(A)P(B)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_323.md b/samples/texts/348597/page_323.md
new file mode 100644
index 0000000000000000000000000000000000000000..f2892d557cd0da8a4323a464f8c71bd178d49bb9
--- /dev/null
+++ b/samples/texts/348597/page_323.md
@@ -0,0 +1,27 @@
+**Pb. 10.14** Consider four boxes, each containing 1000 resistors. Box 1 contains 100 defective items; Box 2 contains 400 defective items; Box 3 contains 50 defective items; and Box 4 contains 80 defective items.
+
+a. What is the probability that a resistor chosen at random from any of the boxes is defective?
+
+b. What is the probability that if the resistor is found defective, it came from Box 2?
+
+(Hint: The randomness in the selection of the box means that: $P(B_1) = P(B_2) = P(B_3) = P(B_4) = 0.25$.)
+
+## 10.5 Repeated Trials
+
+Bernoulli trials refer to identical, successive, and independent trials, in which an elementary event A can occur with probability:
+
+$$p = P(A) \qquad (10.45)$$
+
+or fail to occur with probability:
+
+$$q = 1 - p \qquad (10.46)$$
+
+In the case of *n* consecutive Bernoulli trials, each elementary event can be described by a sequence of 0s and 1s, such as in the following:
+
+$$\omega = \underbrace{1001...01}_{n \text{ digits, } k \text{ ones}} \qquad (10.47)$$
+
+where *n* is the number of trials, *k* is the number of successes, and ($n-k$) is the number of failures. Because the trials are independent, the probability for the above single occurrence is:
+
+$$P(\omega) = p^k q^{n-k} \qquad (10.48)$$
+
+The total probability for the event with *k* successes in *n* trials is going to be the probability of the single event multiplied by the number of configurations with a given number of digits and a given number of 1s. The number of such configurations is given by the binomial coefficient $\binom{n}{k}$. Therefore:
\ No newline at end of file
diff --git a/samples/texts/348597/page_324.md b/samples/texts/348597/page_324.md
new file mode 100644
index 0000000000000000000000000000000000000000..adf48236a55d00a20781ff08aff3f8189425dc95
--- /dev/null
+++ b/samples/texts/348597/page_324.md
@@ -0,0 +1,37 @@
+$$P(k \text{ successes in } n \text{ trials}) = C_k^n p^k q^{n-k} \quad (10.49)$$
+
+**Example 10.11**
+
+Find the probability that the number 3 will appear twice in five independent rolls of a die.
+
+*Solution:* In a single trial, the probability of success (i.e., 3 showing up) is
+
+$$p = \frac{1}{6}$$
+
+Therefore, the probability that it appears twice in five independent rolls will be
+
+$$P(2 \text{ successes in } 5 \text{ trials}) = C_2^5 p^2 q^3 = \frac{5!}{2!3!} \left(\frac{1}{6}\right)^2 \left(\frac{5}{6}\right)^3 = 0.16075$$
+
+**Example 10.12**
+
+Find the probability that in a roll of two dice, three occurrences of snake-eyes (one spot on each die) are obtained in ten rolls of the two dice.
+
+*Solution:* The space S of the roll of two dice consists of 36 elementary elements (6 × 6), only one of which results in a snake-eyes configuration; therefore:
+
+$$p = 1/36; \quad k = 3; \quad n = 10$$
+
+and
+
+$$P(3 \text{ successes in } 10 \text{ trials}) = C_3^{10} p^3 q^7 = \frac{10!}{3!7!} \left(\frac{1}{36}\right)^3 \left(\frac{35}{36}\right)^7 = 0.00211$$
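Both examples are one-line applications of Eq. (10.49); a sketch in Python:

```python
from math import comb

# Eq. (10.49) as a function, evaluated for Examples 10.11 and 10.12.
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

print(round(binom_pmf(2, 5, 1/6), 5))    # 0.16075
print(round(binom_pmf(3, 10, 1/36), 5))  # 0.00211
```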
+
+## In-Class Exercises
+
+**Pb. 10.15** Assuming that a batch of manufactured components has an 80% chance of passing an inspection, what is the chance that at least 16 batches in a lot of 20 would pass the inspection?
+
+**Pb. 10.16** In an experiment, we keep rolling a fair die until it comes up showing three spots. What are the probabilities that this will take:
+
+a. Exactly four rolls?
+
+b. At least four rolls?
+
+c. At most four rolls?
\ No newline at end of file
diff --git a/samples/texts/348597/page_325.md b/samples/texts/348597/page_325.md
new file mode 100644
index 0000000000000000000000000000000000000000..fe9563a5acbe8eb813d0dfd2138632f9020cd303
--- /dev/null
+++ b/samples/texts/348597/page_325.md
@@ -0,0 +1,29 @@
+**Pb. 10.17** Let X be the number of successes in a Bernoulli trials experiment with n trials and the probability of success p in each trial. If the mean number of successes m, also called average value $\bar{X}$ and expectation value E(X), is defined as:
+
+$$m \equiv \bar{X} \equiv E(X) = \sum XP(X)$$
+
+and the variance is defined as:
+
+$$V(X) = E((X - \bar{X})^2)$$
+
+show that:
+
+$$\bar{X} = np \quad \text{and} \quad V(X) = np(1-p)$$
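The two formulas of Pb. 10.17 can be confirmed for a small case before proving them; a sketch in Python with exact fractions (n = 10 and p = 3/10 are illustrative choices, not from the text):

```python
from fractions import Fraction
from math import comb

# Mean and variance of the binomial distribution computed from their
# definitions; they should equal np and np(1 - p) exactly.
n, p = 10, Fraction(3, 10)
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))
print(mean, var)  # np = 3 and np(1-p) = 21/10
```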
+
+### 10.5.1 Generalization of Bernoulli Trials
+
+In the above Bernoulli trials, we considered the case of whether or not a single event A was successful (i.e., two choices). This was the simplest partition of the set S.
+
+In cases where we partition the set S in r subsets: $S = \{A_1, A_2, ..., A_r\}$, and the probabilities for these single events are, respectively: $\{p_1, p_2, ..., p_r\}$, where $p_1 + p_2 + ... + p_r = 1$, it can be easily proven that the probability in n independent trials for the event $A_1$ to occur $k_1$ times, the event $A_2$ to occur $k_2$ times, etc., is given by:
+
+$$P(k_1, k_2, \dots, k_r; n) = \frac{n!}{k_1! k_2! \dots k_r!} p_1^{k_1} p_2^{k_2} \dots p_r^{k_r} \quad (10.50)$$
+
+where $k_1 + k_2 + \dots + k_r = n$
+
+**Example 10.13**
+
+Consider the sum of the spots in a roll of two dice. We partition the set of outcomes $\{2, 3, ..., 11, 12\}$ into the three events $A_1 = \{2, 3, 4, 5\}$, $A_2 = \{6, 7\}$, $A_3 = \{8, 9, 10, 11, 12\}$. Find P(1, 7, 2; 10).
+
+**Solution:** The probabilities for each of the events are, respectively:
+
+$$p_1 = \frac{10}{36}, \quad p_2 = \frac{11}{36}, \quad p_3 = \frac{15}{36}$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_326.md b/samples/texts/348597/page_326.md
new file mode 100644
index 0000000000000000000000000000000000000000..682ef3fc9885718e8490f196e6fa2c1826a4d46f
--- /dev/null
+++ b/samples/texts/348597/page_326.md
@@ -0,0 +1,29 @@
+and
+
+$$P(1,7,2;10) = \frac{10!}{1!7!2!} \left(\frac{10}{36}\right)^1 \left(\frac{11}{36}\right)^7 \left(\frac{15}{36}\right)^2 = 0.00431$$
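The multinomial formula, Eq. (10.50), is just as easy to evaluate programmatically; a sketch in Python applied to Example 10.13:

```python
from math import factorial

# Eq. (10.50): multinomial coefficient times the product of p_i^{k_i}.
def multinomial_pmf(ks, ps):
    n = sum(ks)
    coeff = factorial(n)
    for k in ks:
        coeff //= factorial(k)
    prob = float(coeff)
    for k, p in zip(ks, ps):
        prob *= p**k
    return prob

print(round(multinomial_pmf([1, 7, 2], [10/36, 11/36, 15/36]), 4))  # 0.0043
```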
+
+## 10.6 The Poisson and the Normal Distributions
+
+In this section, we obtain approximate expressions for the binomial distribution in different limits. We start by considering the expression for the probability of *k* successes in *n* Bernoulli trials with two choices for outputs; that is, Eq. (10.49).
+
+### 10.6.1 The Poisson Distribution
+
+Consider the limit when $p << 1$, but $np \equiv a \approx O(1)$. Then:
+
+$$P(k=0) = \frac{n!}{0!n!} p^0 (1-p)^n = \left(1-\frac{a}{n}\right)^n \quad (10.51)$$
+
+But in the limit $n \to \infty$,
+
+$$\left(1 - \frac{a}{n}\right)^n = e^{-a} \qquad (10.52)$$
+
+giving:
+
+$$P(k=0) = e^{-a} \qquad (10.53)$$
+
+Now consider $P(k = 1)$; it is equal to:
+
+$$\lim_{n \to \infty} P(k=1) = \frac{n!}{1!(n-1)!} p^1 (1-p)^{n-1} \approx a \left(1 - \frac{a}{n}\right)^n \approx ae^{-a} \quad (10.54)$$
+
+For $P(k = 2)$, we obtain:
+
+$$\lim_{n \to \infty} P(k=2) = \frac{n!}{2!(n-2)!} p^2 (1-p)^{n-2} \approx \frac{a^2}{2!} \left(1 - \frac{a}{n}\right)^n \approx \frac{a^2}{2!} e^{-a} \quad (10.55)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_327.md b/samples/texts/348597/page_327.md
new file mode 100644
index 0000000000000000000000000000000000000000..05671aee3d324275c46a6ccf3502eba08c08da86
--- /dev/null
+++ b/samples/texts/348597/page_327.md
@@ -0,0 +1,20 @@
+Similarly,
+
+$$ \lim_{n \to \infty} P(k) \approx \frac{a^k}{k!} e^{-a} \quad (10.56) $$
+
+We compare in Figure 10.1 the exact with the approximate expression for the probability distribution, in the region of validity of the Poisson approximation.
+
+FIGURE 10.1
+The Poisson distribution.
+
+**Example 10.14**
+
+A massive parallel computer system contains 1000 processors. Each processor fails independently of all others and the probability of its failure is 0.002 over a year. Find the probability that the system has no failures during one year of operation.
+
+**Solution:** This is a problem of Bernoulli trials with $n = 1000$ and $p = 0.002$:
+
+$$ P(k=0) = C_{0}^{1000} p^{0} (1-p)^{1000} = (0.998)^{1000} = 0.13506 $$
+
+or, using the Poisson approximate formula, with $a = np = 2$:
+
+$$ P(k=0) \approx e^{-a} = e^{-2} \approx 0.13533 $$
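The closeness of the two values can be reproduced directly; a sketch in Python comparing the exact binomial with the Poisson approximation of Eq. (10.56):

```python
from math import comb, exp

# Example 10.14: n = 1000 processors, failure probability p = 0.002,
# so a = np = 2.
n, p = 1000, 0.002
a = n * p
exact = comb(n, 0) * (1 - p) ** n      # binomial, k = 0
approx = a**0 * exp(-a)                # Poisson, k = 0
print(round(exact, 4), round(approx, 4))  # 0.1351 0.1353
```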
\ No newline at end of file
diff --git a/samples/texts/348597/page_328.md b/samples/texts/348597/page_328.md
new file mode 100644
index 0000000000000000000000000000000000000000..24a4693f927a4e14b107a2ee979b9631a45ce31e
--- /dev/null
+++ b/samples/texts/348597/page_328.md
@@ -0,0 +1,28 @@
+**Example 10.15**
+
+Due to the random vibrations affecting its supporting platform, a recording head introduces glitches on the recording medium at the rate of $n = 100$ glitches per minute. What is the probability that $k = 3$ glitches are introduced in the recording over any interval of time $\Delta t = 1$s?
+
+*Solution:* If we choose an interval of time equal to 1 minute, the probability for an elementary event to occur in the subinterval $\Delta t$ in this 1 minute interval is
+
+$$p = \frac{1}{60}$$
+
+The problem reduces to finding the probability of $k = 3$ in $n = 100$ trials.
+The Poisson formula gives this probability as:
+
+$$P(3) = \frac{1}{3!} \left(\frac{100}{60}\right)^3 \exp\left(-\frac{100}{60}\right) = 0.14573$$
+
+where $a = 100/60$. (For comparison purposes, the exact value for this probability, obtained using the binomial distribution expression, is 0.1466.)
+
+## *Homework Problem*
+
+**Pb. 10.18** Let $A_1, A_2, ..., A_{m+1}$ be a partition of the set $S$, and let $p_1, p_2, ..., p_{m+1}$ be the probabilities associated with each of these events. Assuming that $n$ Bernoulli trials are repeated, show, using Eq. (10.50), that the probability that the event $A_1$ occurs $k_1$ times, the event $A_2$ occurs $k_2$ times, etc., is given in the limit $n \to \infty$ by:
+
+$$\lim_{n \to \infty} P(k_1, k_2, \dots, k_{m+1}; n) = \frac{(a_1)^{k_1} e^{-a_1}}{k_1!} \frac{(a_2)^{k_2} e^{-a_2}}{k_2!} \dots \frac{(a_m)^{k_m} e^{-a_m}}{k_m!}$$
+
+where $a_i = np_i$.
+
+### 10.6.2 The Normal Distribution
+
+Prior to considering the derivation of the normal distribution, let us recall Stirling's formula, which approximates $n!$ when $n \to \infty$:
+
+$$\lim_{n \to \infty} n! \approx \sqrt{2\pi n} n^n e^{-n} \quad (10.57)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_329.md b/samples/texts/348597/page_329.md
new file mode 100644
index 0000000000000000000000000000000000000000..0337a7f7f1903d6f548958baa22fd79a15abe3b3
--- /dev/null
+++ b/samples/texts/348597/page_329.md
@@ -0,0 +1,35 @@
+We seek the approximate form of the binomial distribution in the limit of very large *n* and *npq* >> 1. Using Eq. (10.57), the expression for the probability given in Eq. (10.49) reduces to:
+
+$$
+P(k \text{ successes in } n \text{ trials}) = \frac{1}{\sqrt{2\pi}} \sqrt{\frac{n}{k(n-k)}} \left(\frac{np}{k}\right)^k \left(\frac{nq}{(n-k)}\right)^{n-k} \quad (10.58)
+$$
+
+Now examine this expression in the neighborhood of the mean (see **Pb. 10.17**). We define the distance from this mean, normalized to the square root of the variance, as:
+
+$$
+x = \frac{k - np}{\sqrt{npq}} \tag{10.59}
+$$
+
+Using the leading two terms of the power expansion $\ln(1 + \varepsilon) = \varepsilon - \varepsilon^2/2 + \dots$, the natural logarithm of the two parentheses on the RHS of Eq. (10.58) can be approximated by:
+
+$$
+\ln\left(\frac{k}{np}\right)^{-k} \approx -(np + \sqrt{npq} x) \left(\sqrt{\frac{q}{np}} x - \frac{1}{2} \frac{q}{np} x^2\right) \quad (10.60)
+$$
+
+$$
+\ln\left(\frac{n-k}{nq}\right)^{-(n-k)} \approx -(nq - \sqrt{npq} x) \left(-\sqrt{\frac{p}{nq}} x - \frac{1}{2} \frac{p}{nq} x^2\right) \quad (10.61)
+$$
+
+Adding Eqs. (10.60) and (10.61), we deduce that:
+
+$$
+\lim_{n \to \infty} \left( \frac{np}{k} \right)^k \left( \frac{nq}{n-k} \right)^{n-k} = e^{-x^2/2} \quad (10.62)
+$$
+
+Furthermore, we can approximate the square root term on the RHS of Eq. (10.58) by its value at the mean; that is
+
+$$
+\sqrt{\frac{n}{k(n-k)}} \approx \frac{1}{\sqrt{npq}} \tag{10.63}
+$$
+
+Combining Eqs. (10.62) and (10.63), we can approximate Eq. (10.58), in this limit, by the Gaussian distribution:
\ No newline at end of file
diff --git a/samples/texts/348597/page_33.md b/samples/texts/348597/page_33.md
new file mode 100644
index 0000000000000000000000000000000000000000..927270ae694a49c92df96081d443b29db741f1ce
--- /dev/null
+++ b/samples/texts/348597/page_33.md
@@ -0,0 +1,32 @@
+### 1.7.4.1 Surface Rendering
+
+#### Example 1.15
+
+Plot the sinc function whose equation is given by:
+
+$$z = \frac{\sin(\sqrt{x^2 + y^2})}{\sqrt{x^2 + y^2}}$$
+
+over the domain $-8 < x < 8$ and $-8 < y < 8$.
+
+**Solution:** The implementation of the mesh rendering follows:
+
+```matlab
+x=[-8:.1:8];
+y=[-8:.1:8];
+[X,Y]=meshgrid(x,y);
+R=sqrt(X.^2+Y.^2)+eps;
+Z=sin(R)./R;
+mesh(X,Y,Z)
+```
+
+The variable `eps` is the machine floating-point relative accuracy ($2^{-52}$); adding it to `R` avoids a numerical division of zero by zero at the origin. Note also the element-wise division `./`, which is required because `R` is an array.
+
+To generate a contour plot, we replace the last command in the above by:
+
+`contour(X,Y,Z,50) % The fourth argument specifies the number of contour lines to be shown`
+
+If we are interested only in a particular contour level, for example, the one with elevation $Z_0$, we use the `contour` function with an option, as follows:
+
+`contour(X,Y,Z,[Z0 Z0])`
+
+Occasionally, we might be interested in displaying simultaneously the mesh and contour rendering of a surface. This is possible through the use of the command `meshc`. It is the same as the `mesh` command except that a contour plot is drawn beneath the mesh.
+
+**Preparatory Activity:** Look in your calculus book for some surfaces equations, such as those of the hyperbolic paraboloid and the elliptic paraboloid and others of your choice for the purpose of completing **Pb. 1.16** of the next in-class activity.
\ No newline at end of file
diff --git a/samples/texts/348597/page_330.md b/samples/texts/348597/page_330.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a9b3e5d432f776f2970334de41a3b75764ccd6f
--- /dev/null
+++ b/samples/texts/348597/page_330.md
@@ -0,0 +1,25 @@
+$$P(k \text{ successes in } n \text{ trials}) = \frac{1}{\sqrt{2\pi npq}} \exp\left[-\frac{(k-np)^2}{2npq}\right] \quad (10.64)$$
+
+This result is known as the De Moivre-Laplace theorem. We compare in Figure 10.2 the binomial distribution and its Gaussian approximation in the region of validity of the approximation.
+
+FIGURE 10.2
+The normal (Gaussian) distribution.
+
+**Example 10.16**
+
+A fair die is rolled 400 times. Find the probability that an even number of spots show up 200 times, 210 times, 220 times, and 230 times.
+
+*Solution:* In this case, $n = 400$; $p = 0.5$; $np = 200$; and $\sqrt{npq} = 10$.
+
+Using Eq. (10.64), we get:
+
+$$
+\begin{cases}
+P(200 \text{ even}) = 0.03989; & P(210 \text{ even}) = 0.02419 \\
+P(220 \text{ even}) = 0.00540; & P(230 \text{ even}) = 4.43 \times 10^{-4}
+\end{cases}
+$$
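As an independent check of these numbers (outside MATLAB), the Gaussian approximation of Eq. (10.64) can be evaluated in a few lines of Python; the function name here is ours:

```python
import math

def de_moivre_laplace(k, n, p):
    """Gaussian approximation to P(k successes in n trials), Eq. (10.64)."""
    q = 1.0 - p
    return math.exp(-(k - n * p) ** 2 / (2 * n * p * q)) / math.sqrt(2 * math.pi * n * p * q)

# n = 400 rolls, p = 1/2 for an even number of spots
for k in (200, 210, 220, 230):
    print(k, round(de_moivre_laplace(k, 400, 0.5), 5))
```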
+
+**Homework Problems**
+
+**Pb. 10.19** Using the results of **Pb. 4.34**, relate, in the region of validity of the Gaussian approximation, the quantity:
\ No newline at end of file
diff --git a/samples/texts/348597/page_331.md b/samples/texts/348597/page_331.md
new file mode 100644
index 0000000000000000000000000000000000000000..f532cfb9bdcb3ddf3a520fcbd959fe5bf4188b2c
--- /dev/null
+++ b/samples/texts/348597/page_331.md
@@ -0,0 +1,7 @@
+$$ \sum_{k=k_1}^{k_2} P(k \text{ successes in } n \text{ trials}) $$
+
+to the Gaussian integral, specifying each of the parameters appearing in your expression. (Hint: First show that in this limit, the summation can be approximated by an integration.)
+
+**Pb. 10.20** Let $A_1, A_2, ..., A_r$ be a partition of the set $S$, and let $p_1, p_2, ..., p_r$ be the probabilities associated with each of these events. Assuming *n* Bernoulli trials are repeated, show that, in the limit $n \to \infty$ and where $k_i$ are in the vicinity of $np_i \gg 1$, the following approximation is valid:
+
+$$ P(k_1, k_2, \dots, k_r; n) = \frac{\exp\left\{-\frac{1}{2}\left[\frac{(k_1 - np_1)^2}{np_1} + \dots + \frac{(k_r - np_r)^2}{np_r}\right]\right\}}{\sqrt{(2\pi n)^{r-1} p_1 \dots p_r}} $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_332.md b/samples/texts/348597/page_332.md
new file mode 100644
index 0000000000000000000000000000000000000000..362065be26dd5232bacdd084606bcad40b1dfa5c
--- /dev/null
+++ b/samples/texts/348597/page_332.md
@@ -0,0 +1,31 @@
+# Supplement: Review of Elementary Functions
+
+In this supplement, we review the basic features and characteristics of the simple elementary functions.
+
+## S.1 Affine Functions
+
+By an affine function, we mean an expression of the form
+
+$$y(x) = ax + b \quad (S.1)$$
+
+In the special case where $b = 0$, we say that $y$ is a linear function of $x$. We can interpret the parameters in the above function as representing the slope-intercept form of a straight line. Here, $a$ is the slope, which is a measure of the steepness of a line; and $b$ is the $y$-intercept (i.e., the line intersects the $y$-axis at the point $(0, b)$).
+
+The following cases illustrate the different possibilities:
+
+1. $a = 0$: this specifies a horizontal line at a height $b$ above the x-axis and that has zero slope.
+
+2. $a > 0$: the height of a point on the line (i.e., the $y$-value) increases as the value of $x$ increases.
+
+3. $a < 0$: the height of the line decreases as the value of $x$ increases.
+
+4. $b > 0$: the line $y$-intercept is positive.
+
+5. $b < 0$: the line $y$-intercept is negative.
+
+6. $x = k$: this equation represents a vertical line passing through the point $(k, 0)$; note that it does not define $y$ as a function of $x$.
+
+It should be noted that:
+
+* If two lines have the same slope, they are parallel.
+
+* Two nonvertical lines are perpendicular if and only if their slopes are negative reciprocals of each other. (It is easy to deduce this
\ No newline at end of file
diff --git a/samples/texts/348597/page_333.md b/samples/texts/348597/page_333.md
new file mode 100644
index 0000000000000000000000000000000000000000..791ad78ab6a92caf233eda36cded503f3e9a656d
--- /dev/null
+++ b/samples/texts/348597/page_333.md
@@ -0,0 +1,22 @@
+property if you remember the relationship that you learned in
+trigonometry relating the sine and cosine of two angles that differ
+by $\pi/2$.) See Section S.4 for more details.
+
+**FIGURE S.1**
+Graph of the line $y = ax + b$ ($a = 2, b = 5$).
+
+## S.2 Quadratic Functions
+
+### Parabola
+
+A quadratic parabolic function is an expression of the form:
+
+$$y(x) = ax^2 + bx + c \quad \text{where} \quad a \neq 0 \qquad (S.2)$$
+
+Any $x$ for which $ax^2 + bx + c = 0$ is called a root or a zero of the quadratic function. The graphs of quadratic functions are called parabolas.
+
+If we plot these parabolas, we note the following characteristics:
+
+1. For $a > 0$, the parabola opens up (convex curve) as shown in Figure S.2.
+
+2. For $a < 0$, the parabola opens down (concave curve) as shown in Figure S.2.
\ No newline at end of file
diff --git a/samples/texts/348597/page_334.md b/samples/texts/348597/page_334.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ae05a33ea85ae61f58ebc380f6c52d830ca46b9
--- /dev/null
+++ b/samples/texts/348597/page_334.md
@@ -0,0 +1,15 @@
+FIGURE S.2
+
+Graph of a quadratic parabolic (second-order polynomial) function with 0 or 2 roots.
+
+3. The parabola does not always intersect the x-axis; but where it does, this point's abscissa is a real root of the quadratic equation.
+
+A parabola can cross the x-axis in either 0 or 2 points, or the x-axis can be tangent to it at one point. If the vertex of the parabola is above the x-axis and the parabola opens up, there is no intersection, and hence, no real roots. If, on the other hand, the parabola opens down, the curve will intersect at two values of x equidistant from the vertex position. If the vertex is below the x-axis, we reverse the convexity conditions for the existence of two real roots. We recall that the roots of a quadratic equation are given by:
+
+$$x_{\pm} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \qquad (S.3)$$
+
+When $b^2 - 4ac < 0$, the parabola does not intersect the x-axis. There are no real roots; the roots are said to be complex conjugates. When $b^2 - 4ac = 0$, the x-axis is tangent to the parabola and we have one double root.
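A short Python sketch of Eq. (S.3), using a complex square root so that the $b^2 - 4ac < 0$ case yields the complex-conjugate pair automatically (the function name is ours):

```python
import cmath  # complex square root handles the b**2 - 4*a*c < 0 case

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via Eq. (S.3)."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))   # two real roots: 2 and 1
print(quadratic_roots(1, -2, 1))   # double root at 1
print(quadratic_roots(1, 0, 1))    # complex-conjugate pair +j, -j
```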
+
+**Geometrical Description of a Parabola**
+
+The parabola can also be described through the following geometric construction: a parabola is the locus of all points P in a plane that are equidistant from a fixed line (called the directrix) and a fixed point (called the focus) not situated on the line.
\ No newline at end of file
diff --git a/samples/texts/348597/page_335.md b/samples/texts/348597/page_335.md
new file mode 100644
index 0000000000000000000000000000000000000000..e30b64ff70d8ffa5c602903e0a2936ec8f1d61ba
--- /dev/null
+++ b/samples/texts/348597/page_335.md
@@ -0,0 +1,17 @@
+FIGURE S.3
+
+Graph of a parabola defined through geometric parameters. (Parameter values: h = 2, k = 2, p = 1.)
+
+$$d_1 = d_2 \tag{S.4}$$
+
+The algebraic expression for the parabola, using the above geometric parameters, can be obtained by specifically writing and equating the expressions for the distances of a point on the parabola from the focus and from the directrix:
+
+$$\sqrt{(x-h)^2 + (y-(k+p))^2} = |y-(k-p)| \tag{S.5}$$
+
+Squaring both sides of this equation, this equality reduces to:
+
+$$(x-h)^2 = 4p(y-k) \tag{S.6}$$
+
+or in standard form, it can be written:
+
+$$y = \frac{x^2}{4p} - \frac{h}{2p}x + \left(\frac{h^2 + 4pk}{4p}\right) \tag{S.7}$$
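Using the figure's parameter values, one can verify numerically that the geometric form (S.6) and the standard form (S.7) describe the same curve; a Python sketch:

```python
# Check that (S.6) and (S.7) agree, with the parameter values of
# Figure S.3: h = 2, k = 2, p = 1.
h, k, p = 2.0, 2.0, 1.0

def y_geometric(x):      # solve (x - h)^2 = 4p(y - k) for y
    return (x - h) ** 2 / (4 * p) + k

def y_standard(x):       # Eq. (S.7)
    return x * x / (4 * p) - h * x / (2 * p) + (h * h + 4 * p * k) / (4 * p)

assert all(abs(y_geometric(x) - y_standard(x)) < 1e-12 for x in (-3, 0, 0.5, 2, 7))
```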
\ No newline at end of file
diff --git a/samples/texts/348597/page_336.md b/samples/texts/348597/page_336.md
new file mode 100644
index 0000000000000000000000000000000000000000..12c82b987f23715903f795afa3284924237c9b48
--- /dev/null
+++ b/samples/texts/348597/page_336.md
@@ -0,0 +1,25 @@
+## Ellipse
+
+The standard form of the equation describing an ellipse is given by:
+
+$$ \frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2} = 1 \qquad (S.8) $$
+
+The ellipse's center is located at $(h, k)$, and assuming $a > b$, the major axis length is equal to $2a$, the minor axis length is equal to $2b$, the foci are located at $(h-c, k)$ and $(h+c, k)$, and the vertices at $(h-a, k)$ and $(h+a, k)$, where
+
+$$ c^2 = a^2 - b^2 \qquad (S.9) $$
+
+## Geometric Definition of an Ellipse
+
+An ellipse is the locus of all points P such that the sum of the distances from P to two fixed points (called the foci) is constant and greater than the distance between the two foci.
+
+$$ d_1 + d_2 = 2a \qquad (S.10) $$
+
+The center of the ellipse is the midpoint between foci, and the two points of intersection of the line through the foci and the ellipse are called the vertices.
+
+The eccentricity of an ellipse is the ratio of the distance between the center and a focus over the distance between the center and a vertex; that is:
+
+$$ \varepsilon = c/a \qquad (S.11) $$
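The focal property (S.10) can be verified numerically for the ellipse of Figure S.4; the parametric point $(h + a\cos t,\; k + b\sin t)$ used below is a standard parametrization, not from the text:

```python
import math

# Focal property d1 + d2 = 2a (Eq. S.10), with the parameter values of
# Figure S.4: h = 2, k = 2, a = 3, b = 2, and c from Eq. (S.9).
h, k, a, b = 2.0, 2.0, 3.0, 2.0
c = math.sqrt(a * a - b * b)
f1, f2 = (h - c, k), (h + c, k)

for t in (0.0, 0.7, 2.0, 3.14):
    x, y = h + a * math.cos(t), k + b * math.sin(t)   # point on the ellipse
    d1 = math.hypot(x - f1[0], y - f1[1])
    d2 = math.hypot(x - f2[0], y - f2[1])
    assert abs(d1 + d2 - 2 * a) < 1e-12
```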
+
+**FIGURE S.4**
+
+Graph of an ellipse defined through geometric parameters. (Parameter values: h = 2, k = 2, a = 3, b = 2.)
\ No newline at end of file
diff --git a/samples/texts/348597/page_337.md b/samples/texts/348597/page_337.md
new file mode 100644
index 0000000000000000000000000000000000000000..5553d33a1a934a02c808602cd84b044c38758ab8
--- /dev/null
+++ b/samples/texts/348597/page_337.md
@@ -0,0 +1,19 @@
+## Hyperbola
+
+The standard form of the equation describing a hyperbola is given by:
+
+$$ \frac{(x-h)^2}{a^2} - \frac{(y-k)^2}{b^2} = 1 \qquad (S.12) $$
+
+The center of the hyperbola is located at $(h, k)$, the transverse axis length is equal to $2a$, the conjugate axis length is equal to $2b$, the foci are located at $(h-c, k)$ and $(h+c, k)$, and the vertices at $(h-a, k)$ and $(h+a, k)$. In this case, $c > a > 0$ and $c > b > 0$ and
+
+$$ c^2 = a^2 + b^2 \qquad (S.13) $$
+
+## Geometric Definition of a Hyperbola
+
+A hyperbola is the locus of all points P in a plane such that the absolute value of the difference of the distances between P and the two foci is constant and is less than the distance between the two foci; that is:
+
+$$ |d_1 - d_2| = 2a \qquad (S.14) $$
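Similarly, the defining property (S.14) can be checked numerically on the right branch, using the standard parametrization $(h + a\cosh t,\; k + b\sinh t)$ (our choice, not from the text):

```python
import math

# Check |d1 - d2| = 2a (Eq. S.14), with the parameter values of
# Figure S.5: h = 2, k = 2, a = 1, b = 3, and c from Eq. (S.13).
h, k, a, b = 2.0, 2.0, 1.0, 3.0
c = math.sqrt(a * a + b * b)
f1, f2 = (h - c, k), (h + c, k)

for t in (0.0, 0.5, 1.3):
    x, y = h + a * math.cosh(t), k + b * math.sinh(t)  # right-branch point
    d1 = math.hypot(x - f1[0], y - f1[1])
    d2 = math.hypot(x - f2[0], y - f2[1])
    assert abs(abs(d1 - d2) - 2 * a) < 1e-9
```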
+
+**FIGURE S.5**
+
+Graph of a hyperbola defined through geometric parameters. (Parameter values: h = 2, k = 2, a = 1, b = 3.)
\ No newline at end of file
diff --git a/samples/texts/348597/page_338.md b/samples/texts/348597/page_338.md
new file mode 100644
index 0000000000000000000000000000000000000000..9cc599206c8473fa765ce9d881b2e729ddcbe2bb
--- /dev/null
+++ b/samples/texts/348597/page_338.md
@@ -0,0 +1,31 @@
+The center of the hyperbola is the midpoint between foci, and the two points of intersection of the line through the foci and the hyperbola are called the vertices.
+
+## S.3 Polynomial Functions
+
+A polynomial function is an expression of the form:
+
+$$p(x) = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0 \quad (S.15)$$
+
+where $a_n \neq 0$ for an $n$th degree polynomial.
+
+The Fundamental Theorem of Algebra states that the above polynomial has exactly $n$ complex roots (counted with multiplicity); furthermore, if all the polynomial coefficients are real, then the non-real roots always come in pairs consisting of a complex number and its complex conjugate.
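For instance, the real cubic $p(x) = x^3 - x^2 + x - 1 = (x-1)(x^2+1)$ has the three roots $1$, $j$, and $-j$, the last two forming a conjugate pair; a quick Python check:

```python
# For a polynomial with real coefficients, if z is a root then so is its
# conjugate, since p(conj(z)) = conj(p(z)).
def p(z):
    return z ** 3 - z ** 2 + z - 1

for root in (1, 1j, -1j):
    assert abs(p(root)) < 1e-12
    assert abs(p(root.conjugate())) < 1e-12   # the conjugate is a root too
```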
+
+## S.4 Trigonometric Functions
+
+The trigonometric circle is defined as the circle with center at the origin of the coordinates axes and having radius 1.
+
+The trigonometric functions are defined as functions of the components of a point P on the trigonometric circle. Specifically, if we define the angle $\theta$ as the angle between the x-axis and the line OP, then:
+
+* $\cos(\theta)$ is the x-component of the point P.
+
+* $\sin(\theta)$ is the y-component of the point P.
+
+Using the Pythagorean theorem in the right angle triangle OQP, one deduces that:
+
+$$\sin^2(\theta) + \cos^2(\theta) = 1 \quad (S.16)$$
+
+Using the above definitions for the sine and cosine functions and elementary geometry, it is easy to note the following properties for the trigonometric functions:
+
+$$\sin(-\theta) = -\sin(\theta) \quad \text{and} \quad \cos(-\theta) = \cos(\theta) \quad (S.17)$$
+
+$$\sin(\theta + \pi) = -\sin(\theta) \quad \text{and} \quad \cos(\theta + \pi) = -\cos(\theta) \quad (S.18)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_339.md b/samples/texts/348597/page_339.md
new file mode 100644
index 0000000000000000000000000000000000000000..4efa558122c7db21037b62335b3ac7c2ec79bac4
--- /dev/null
+++ b/samples/texts/348597/page_339.md
@@ -0,0 +1,18 @@
+FIGURE S.6
+The trigonometric circle.
+
+$$ \sin(\theta + \pi/2) = \cos(\theta) \quad \text{and} \quad \cos(\theta + \pi/2) = -\sin(\theta) \quad (S.19) $$
+
+$$ \sin(\pi/2 - \theta) = \cos(\theta) \quad \text{and} \quad \cos(\pi/2 - \theta) = \sin(\theta) \quad (S.20) $$
+
+The tangent and cotangent functions are defined as:
+
+$$ \tan(\theta) = \frac{\sin(\theta)}{\cos(\theta)} \quad \text{and} \quad \cot(\theta) = \frac{1}{\tan(\theta)} \quad (S.21) $$
+
+Other important trigonometric relations relate the angles and sides of a triangle. These are the so-called Law of Cosines and Law of Sines in a triangle:
+
+$$ c^2 = a^2 + b^2 - 2ab \cos(\gamma) \quad (S.22) $$
+
+$$ \frac{\sin(\alpha)}{a} = \frac{\sin(\beta)}{b} = \frac{\sin(\gamma)}{c} \quad (S.23) $$
+
+where the sides of the triangle are *a*, *b*, *c*, and the angles opposite, respectively, of each of these sides are denoted by α, β, γ.
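Both laws can be spot-checked on the familiar 3-4-5 right triangle; a Python sketch:

```python
import math

# Sides a = 3, b = 4 with included angle gamma = pi/2, so Eq. (S.22)
# should give c = 5.
a, b, gamma = 3.0, 4.0, math.pi / 2
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))   # Law of Cosines
assert abs(c - 5.0) < 1e-12

# Law of Sines (S.23): recover the remaining angles and check the ratio.
alpha = math.asin(a * math.sin(gamma) / c)
beta = math.asin(b * math.sin(gamma) / c)
assert abs(math.sin(alpha) / a - math.sin(gamma) / c) < 1e-12
assert abs(alpha + beta + gamma - math.pi) < 1e-12
```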
\ No newline at end of file
diff --git a/samples/texts/348597/page_34.md b/samples/texts/348597/page_34.md
new file mode 100644
index 0000000000000000000000000000000000000000..6b8ce03e8634d06e79fe9b537ef7a0d6c459edd4
--- /dev/null
+++ b/samples/texts/348597/page_34.md
@@ -0,0 +1,31 @@
+## In-Class Exercises
+
+**Pb. 1.13** Use the **contour** function to graphically find the locus of points on the above sinc surface that are 1/2 units above the x-y plane (i.e., the surface intersection with the z = 1/2 plane).
+
+**Pb. 1.14** Find the x-y plane intersection with the following two surfaces:
+
+$$z_1 = 3 + x + y$$
+
+$$z_2 = 4 - 2x - 4y$$
+
+**Pb. 1.15** Verify your answers to **Pb. 1.14** with that which you would obtain analytically for the shape of the intersection curves of the surfaces with the x-y plane. Also, compute the coordinates of the point of intersection of the two obtained curves. Verify your results graphically.
+
+**Pb. 1.16** Plot the surfaces that you have selected in your preparatory activity. Look in the help folder for the **view** command to learn how to view these surfaces from different angles.
+
+## 1.8 Polar Plots
+
+MATLAB can also display polar plots. In the first example, we draw an ellipse of the form $r = 1 + \epsilon \cos(\theta)$ in a polar plot; other shapes are given in the other examples.
+
+### Example 1.16
+
+Plot the ellipse in a polar plot.
+
+*Solution:* The following sequence of commands plot the polar plot of an ellipse with $\epsilon = 0.2$:
+
+```
+th=0:2*pi/100:2*pi;
+rho=1+.2*cos(th);
+polar(th,rho)
+```
+
+The shape you obtain may be unfamiliar; but to verify that this is indeed an ellipse, view the curve in a Cartesian graph. For that, you can use the MATLAB polar to Cartesian converter `pol2cart`, as follows:
\ No newline at end of file
diff --git a/samples/texts/348597/page_340.md b/samples/texts/348597/page_340.md
new file mode 100644
index 0000000000000000000000000000000000000000..f98071dd7549d71169b59057250fc6cc14a65e87
--- /dev/null
+++ b/samples/texts/348597/page_340.md
@@ -0,0 +1,25 @@
+## S.5 Inverse Trigonometric Functions
+
+The inverse of a function $y = f(x)$ is a function, denoted by $x = f^{-1}(y)$, having the property that $y = f(f^{-1}(y))$. It is important to note that a function $f(x)$ that is single-valued (i.e., to each element $x$ in its domain, there corresponds one, and only one, element $y$ in its range) may have an inverse that is multi-valued (i.e., many $x$ values may correspond to the same $y$). Typical examples of multi-valued inverse functions are the inverse trigonometric functions. In such instances, a single-valued inverse function can be defined if the range of the inverse function is restricted to a more limited interval. For example, the $\cos^{-1}$ function (called the arccosine) is single-valued if $0 \le x \le \pi$.
+
+Note that the above notation for the inverse of a function should not be confused with the negative-one power of the function $f(x)$, which should be written as:
+
+$$ (f(x))^{-1} \text{ or } 1/f(x) $$
+
+Also note that because the inverse function reverses the role of the x- and y-coordinates, the graphs of $y = f(x)$ and $y = f^{-1}(x)$ are symmetric with respect to the line $y = x$ (i.e., the first bisector of the coordinate axes).
+
+## S.6 The Natural Logarithmic Function
+
+The natural logarithmic function is defined by the following integral:
+
+$$ \ln(x) = \int_{1}^{x} \frac{1}{t} dt \qquad (S.24) $$
+
+The following properties of the logarithm can be directly deduced from the above definition:
+
+$$ \ln(ab) = \ln(a) + \ln(b) \qquad (S.25) $$
+
+$$ \ln(a^r) = r \ln(a) \qquad (S.26) $$
+
+$$ \ln\left(\frac{1}{a}\right) = -\ln(a) \qquad (S.27) $$
+
+$$ \ln\left(\frac{a}{b}\right) = \ln(a) - \ln(b) \qquad (S.28) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_341.md b/samples/texts/348597/page_341.md
new file mode 100644
index 0000000000000000000000000000000000000000..f191fbaf1526f79e5be1b9da07bc1962bd47b160
--- /dev/null
+++ b/samples/texts/348597/page_341.md
@@ -0,0 +1,55 @@
+To illustrate the technique for deriving any of the above relations, let us
+consider the first of them:
+
+$$
+\ln(ab) = \int_{1}^{ab} \frac{1}{t} dt = \int_{1}^{a} \frac{1}{t} dt + \int_{a}^{ab} \frac{1}{t} dt \quad (S.29)
+$$
+
+The first term on the RHS is $\ln(a)$, while the second term, through the substitution $u = t/a$, reduces to the definition of $\ln(b)$.
+
+Note that:
+
+$$
+\ln(1) = 0 \tag{S.30}
+$$
+
+$$
+\ln(e) = 1 \tag{S.31}
+$$
+
+where $e \approx 2.71828$.
+
+**S.7 The Exponential Function**
+
+The exponential function is defined as the inverse function of the natural logarithmic function; that is
+
+$$
+\exp(\ln(x)) = x \quad \text{for all } x > 0 \qquad (S.32)
+$$
+
+$$
+\ln(\exp(y)) = y \quad \text{for all } y \tag{S.33}
+$$
+
+The following properties of the exponential function hold for all real numbers:
+
+$$
+\exp(a)\exp(b) = \exp(a+b) \tag{S.34}
+$$
+
+$$
+(\exp(a))^b = \exp(ab) \tag{S.35}
+$$
+
+$$
+\exp(-a) = \frac{1}{\exp(a)} \tag{S.36}
+$$
+
+$$
+\frac{\exp(a)}{\exp(b)} = \exp(a-b) \qquad (S.37)
+$$
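These four identities can likewise be spot-checked numerically:

```python
import math

# Spot-check the exponential identities (S.34)-(S.37).
a, b = 0.8, -1.3
assert abs(math.exp(a) * math.exp(b) - math.exp(a + b)) < 1e-12   # (S.34)
assert abs(math.exp(a) ** b - math.exp(a * b)) < 1e-12            # (S.35)
assert abs(math.exp(-a) - 1 / math.exp(a)) < 1e-12                # (S.36)
assert abs(math.exp(a) / math.exp(b) - math.exp(a - b)) < 1e-12   # (S.37)
```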
+
+It should be pointed out that any of the above properties can be directly
+obtained from the definition of the exponential function and the properties of
\ No newline at end of file
diff --git a/samples/texts/348597/page_342.md b/samples/texts/348597/page_342.md
new file mode 100644
index 0000000000000000000000000000000000000000..be611158ce5be10f0eb9a40fdf8aa69fd9a6d518
--- /dev/null
+++ b/samples/texts/348597/page_342.md
@@ -0,0 +1,50 @@
+the logarithmic function. For example, the first of these relations can be
+derived as follows:
+
+$$
+\ln(\exp(a)\exp(b)) = \ln(\exp(a)) + \ln(\exp(b)) = a+b \quad (S.38)
+$$
+
+Taking the exponential of both sides of this equation, we obtain:
+
+$$
+\exp(\ln(\exp(a) \exp(b))) = \exp(a) \exp(b) = \exp(a+b) \quad (\text{S.39})
+$$
+
+which is the desired result.
+
+**Useful Features of the Exponential Function**
+
+If the exponential function is written in the form:
+
+$$
+y(x) = \exp(-bx) \tag{S.40}
+$$
+
+the following features are apparent:
+
+1. If $b > 0$, then the function converges at $+\infty$ and decays to zero there.
+
+2. If $b < 0$, then the function blows up at $+\infty$.
+
+3. If $b = 0$, then the function is everywhere equal to the constant $y = 1$.
+
+4. The exponential function is monotonically increasing for $b < 0$, and monotonically decreasing for $b > 0$.
+
+5. If $b_1 > b_2 > 0$, then everywhere on the positive x-axis, $y_1(x) < y_2(x)$.
+
+6. The exponential function has no roots.
+
+7. For $b > 0$, the product of the exponential function with any polynomial goes to zero at (+ infinity).
+
+We plot in Figures S.7 and S.8 examples of the exponential function for different values of the parameters. The first six properties above are clearly exhibited in these figures.
+
+**S.8 The Hyperbolic Functions**
+
+The hyperbolic cosine function is defined by:
+
+$$
+\cosh(x) = \frac{\exp(x) + \exp(-x)}{2} \qquad (S.41)
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_343.md b/samples/texts/348597/page_343.md
new file mode 100644
index 0000000000000000000000000000000000000000..87b7d003dbdea66cc23c496227ac3220ede78d30
--- /dev/null
+++ b/samples/texts/348597/page_343.md
@@ -0,0 +1,5 @@
+FIGURE S.7
+The graph of the function $y = \exp(-bx)$, for different positive values of b.
+
+FIGURE S.8
+The graph of the function $y = \exp(-bx)$, for different negative values of b.
\ No newline at end of file
diff --git a/samples/texts/348597/page_344.md b/samples/texts/348597/page_344.md
new file mode 100644
index 0000000000000000000000000000000000000000..8ccb86dd3c2877628d8b0cb1b71a05cc04d5bb82
--- /dev/null
+++ b/samples/texts/348597/page_344.md
@@ -0,0 +1,33 @@
+and the hyperbolic sine function is defined by:
+
+$$ \sinh(x) = \frac{\exp(x) - \exp(-x)}{2} \qquad (S.42) $$
+
+Using the above definitions, it is straightforward to derive the following relations:
+
+$$ \cosh^2(x) - \sinh^2(x) = 1 \qquad (S.43) $$
+
+$$ 1 - \tanh^2(x) = \mathrm{sech}^2(x) \qquad (S.44) $$
+
+## S.9 The Inverse Hyperbolic Functions
+
+$$ y = \sinh^{-1}(x) \quad \text{if} \quad x = \sinh(y) \qquad (S.45) $$
+
+Using the definition of the hyperbolic functions, we can write the inverse hyperbolic functions in terms of logarithmic functions. For example, considering the inverse hyperbolic sine function from above, we obtain:
+
+$$ e^y - 2x - e^{-y} = 0 \qquad (S.46) $$
+
+Multiplying through by $e^y$, we obtain a second-degree equation in $e^y$:
+
+$$ e^{2y} - 2xe^y - 1 = 0 \qquad (S.47) $$
+
+Solving this quadratic equation, and choosing the plus term in front of the discriminant, since $e^y$ is everywhere positive, we obtain:
+
+$$ e^y = x + \sqrt{x^2 + 1} \qquad (S.48) $$
+
+giving, for the inverse hyperbolic sine function, the expression:
+
+$$ y = \sinh^{-1}(x) = \ln(x + \sqrt{x^2 + 1}) \qquad (S.49) $$
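Eq. (S.49) can be checked against a library implementation of the inverse hyperbolic sine:

```python
import math

# Closed form (S.49) versus the library inverse hyperbolic sine.
for x in (-5.0, -0.3, 0.0, 1.0, 10.0):
    closed_form = math.log(x + math.sqrt(x * x + 1.0))
    assert abs(closed_form - math.asinh(x)) < 1e-9
```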
+
+In a similar manner, one can show the following other identities:
+
+$$ \cosh^{-1}(x) = \ln(x + \sqrt{x^2 - 1}) \qquad (S.50) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_345.md b/samples/texts/348597/page_345.md
new file mode 100644
index 0000000000000000000000000000000000000000..ba4d22ade6d5ef5a2343c72d29b71bab672f1604
--- /dev/null
+++ b/samples/texts/348597/page_345.md
@@ -0,0 +1,3 @@
+$$ \tanh^{-1}(x) = \frac{1}{2} \ln \left( \frac{1+x}{1-x} \right) \qquad (S.51) $$
+
+$$ \mathrm{sech}^{-1}(x) = \frac{1}{2} \ln \left( \frac{1 + \sqrt{1 - x^2}}{x} \right) \qquad (S.52) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_346.md b/samples/texts/348597/page_346.md
new file mode 100644
index 0000000000000000000000000000000000000000..f3a12158949abe57275e0974251ca87201c59f76
--- /dev/null
+++ b/samples/texts/348597/page_346.md
@@ -0,0 +1,19 @@
+# Appendix: Some Useful Formulae
+
+## Sum of Integers and Their Powers
+
+$$ \sum_{k=1}^{n} k = \frac{n(n+1)}{2} $$
+
+$$ \sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6} $$
+
+$$ \sum_{k=1}^{n} k^3 = \left[ \frac{n(n+1)}{2} \right]^2 $$
+
+$$ \sum_{k=1}^{n} k^4 = \frac{n(n+1)(2n+1)(3n^2 + 3n - 1)}{30} $$
+
+$$ \sum_{k=1}^{n} (2k-1) = n^2 $$
+
+$$ \sum_{k=1}^{n} (2k-1)^2 = \frac{n(4n^2-1)}{3} $$
+
+$$ \sum_{k=1}^{n} (2k-1)^3 = n^2(2n^2-1) $$
+
+$$ \sum_{k=1}^{n} k(k+1)^2 = \frac{n(n+1)(n+2)(3n+5)}{12} $$
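All of these closed forms can be verified by brute-force summation; note in particular that the sum of cubes equals the square of $n(n+1)/2$. A quick Python check:

```python
# Brute-force check of the closed-form sums against direct summation.
def check(n):
    ks = range(1, n + 1)
    assert sum(ks) == n * (n + 1) // 2
    assert sum(k ** 2 for k in ks) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(k ** 3 for k in ks) == (n * (n + 1) // 2) ** 2
    assert sum(k ** 4 for k in ks) * 30 == n * (n + 1) * (2 * n + 1) * (3 * n * n + 3 * n - 1)
    assert sum(2 * k - 1 for k in ks) == n * n
    assert sum((2 * k - 1) ** 2 for k in ks) * 3 == n * (4 * n * n - 1)
    assert sum((2 * k - 1) ** 3 for k in ks) == n * n * (2 * n * n - 1)
    assert sum(k * (k + 1) ** 2 for k in ks) * 12 == n * (n + 1) * (n + 2) * (3 * n + 5)

for n in (1, 2, 10, 100):
    check(n)
```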
\ No newline at end of file
diff --git a/samples/texts/348597/page_347.md b/samples/texts/348597/page_347.md
new file mode 100644
index 0000000000000000000000000000000000000000..bb5b868135ad03b94ffec2b9d0983df55fb8a898
--- /dev/null
+++ b/samples/texts/348597/page_347.md
@@ -0,0 +1,17 @@
+## Arithmetic Series
+
+$$ \sum_{k=0}^{n-1} (a + kr) = \frac{n}{2}[2a + (n-1)r] $$
+
+## Geometric Series
+
+$$ \sum_{k=1}^{n} aq^{k-1} = \frac{a(q^n - 1)}{q - 1} \quad q \neq 1 $$
+
+## Arithmo-Geometric Series
+
+$$ \sum_{k=0}^{n-1} (a+kr)q^k = \frac{a - [a + (n-1)r]q^n}{(1-q)} + \frac{rq(1-q^{n-1})}{(1-q)^2}, \quad q \neq 1 $$
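All three closed forms can be confirmed by direct summation; a Python sketch (parameter values arbitrary):

```python
# Direct-summation check of the arithmetic, geometric, and
# arithmo-geometric closed forms above.
def arith(a, r, n):
    return sum(a + k * r for k in range(n))

def geom(a, q, n):
    return sum(a * q ** (k - 1) for k in range(1, n + 1))

def arith_geom(a, r, q, n):
    return sum((a + k * r) * q ** k for k in range(n))

a, r, q, n = 2.0, 0.5, 1.7, 8
assert abs(arith(a, r, n) - n / 2 * (2 * a + (n - 1) * r)) < 1e-9
assert abs(geom(a, q, n) - a * (q ** n - 1) / (q - 1)) < 1e-9
closed = (a - (a + (n - 1) * r) * q ** n) / (1 - q) + r * q * (1 - q ** (n - 1)) / (1 - q) ** 2
assert abs(arith_geom(a, r, q, n) - closed) < 1e-9
```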
+
+## Taylor's Series
+
+$$ f(x+a) = \sum_{k=0}^{\infty} f^{(k)}(x) \frac{a^k}{k!} $$
+
+$$ f(x+a, y+b) = f(x, y) + a \frac{\partial f}{\partial x} + b \frac{\partial f}{\partial y} + \frac{1}{2!} \left[ a^2 \frac{\partial^2 f}{\partial x^2} + b^2 \frac{\partial^2 f}{\partial y^2} + 2ab \frac{\partial^2 f}{\partial x \partial y} \right] + \dots $$
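As a quick illustration of the one-dimensional Taylor series, take $f = \exp$, for which every derivative $f^{(k)}(x)$ equals $\exp(x)$; a truncated sum then reproduces $e^{x+a}$:

```python
import math

# Partial sum of the Taylor series of exp about x, evaluated at x + a.
x, a = 0.3, 0.5
partial = sum(math.exp(x) * a ** k / math.factorial(k) for k in range(20))
assert abs(partial - math.exp(x + a)) < 1e-12
```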
\ No newline at end of file
diff --git a/samples/texts/348597/page_348.md b/samples/texts/348597/page_348.md
new file mode 100644
index 0000000000000000000000000000000000000000..600b4852fe8e3c23e4f9aa7a09e0aabc9c5fe5a4
--- /dev/null
+++ b/samples/texts/348597/page_348.md
@@ -0,0 +1,31 @@
+## Trigonometric Functional Relations
+
+$$ \sin(x) \pm \sin(y) = 2 \sin\left[\frac{1}{2}(x \pm y)\right] \cos\left[\frac{1}{2}(x \mp y)\right] $$
+
+$$ \cos(x) + \cos(y) = 2 \cos\left[\frac{1}{2}(x+y)\right] \cos\left[\frac{1}{2}(x-y)\right] $$
+
+$$ \cos(x) - \cos(y) = 2 \sin\left[\frac{1}{2}(x+y)\right] \sin\left[\frac{1}{2}(y-x)\right] $$
+
+$$ \sin\left(\frac{1}{2}x\right) = \pm\sqrt{\frac{1}{2}(1 - \cos(x))} $$
+
+$$ \cos\left(\frac{1}{2}x\right) = \pm\sqrt{\frac{1}{2}(1 + \cos(x))} $$
+
+$$ \sin(2x) = 2 \sin(x) \cos(x) $$
+
+$$ \sin(3x) = 3 \sin(x) - 4 \sin^3(x) $$
+
+$$ \sin(4x) = \cos(x)[4 \sin(x) - 8 \sin^3(x)] $$
+
+$$ \cos(2x) = 2 \cos^2(x) - 1 $$
+
+$$ \cos(3x) = 4 \cos^3(x) - 3 \cos(x) $$
+
+$$ \cos(4x) = 8 \cos^4(x) - 8 \cos^2(x) + 1 $$
+
+## Relation of Trigonometric and Hyperbolic Functions
+
+$$ \sin(x) = -j \sinh(jx) $$
+
+$$ \cos(x) = \cosh(jx) $$
+
+$$ \tan(x) = \frac{1}{j} \tanh(jx) $$
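With $j$ the imaginary unit ($1j$ in Python), these three relations can be verified with complex arithmetic:

```python
import cmath

# Check sin(x) = -j sinh(jx), cos(x) = cosh(jx), tan(x) = tanh(jx)/j.
for x in (-1.2, 0.0, 0.7, 2.0):
    assert abs(cmath.sin(x) - (-1j) * cmath.sinh(1j * x)) < 1e-12
    assert abs(cmath.cos(x) - cmath.cosh(1j * x)) < 1e-12
    assert abs(cmath.tan(x) - cmath.tanh(1j * x) / 1j) < 1e-12
```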
\ No newline at end of file
diff --git a/samples/texts/348597/page_349.md b/samples/texts/348597/page_349.md
new file mode 100644
index 0000000000000000000000000000000000000000..2922071b621f47a2f2c1afc75ccefa6fed4e3fd3
--- /dev/null
+++ b/samples/texts/348597/page_349.md
@@ -0,0 +1,11 @@
+## Expansion of Elementary Functions in Power Series
+
+$$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}$$
+
+$$\sin(x) = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{(2k+1)!}$$
+
+$$\cos(x) = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k}}{(2k)!}$$
+
+$$\sinh(x) = \sum_{k=0}^{\infty} \frac{x^{2k+1}}{(2k+1)!}$$
+
+$$\cosh(x) = \sum_{k=0}^{\infty} \frac{x^{2k}}{(2k)!}$$
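Truncating each series after enough terms reproduces the corresponding function value; a Python check:

```python
import math

# Truncated sums of the series above, compared with the library functions.
def partial(coeff, x, terms=30):
    return sum(coeff(k, x) for k in range(terms))

x = 1.3
assert abs(partial(lambda k, x: x ** k / math.factorial(k), x) - math.exp(x)) < 1e-12
assert abs(partial(lambda k, x: (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1), x) - math.sin(x)) < 1e-12
assert abs(partial(lambda k, x: (-1) ** k * x ** (2 * k) / math.factorial(2 * k), x) - math.cos(x)) < 1e-12
assert abs(partial(lambda k, x: x ** (2 * k + 1) / math.factorial(2 * k + 1), x) - math.sinh(x)) < 1e-12
assert abs(partial(lambda k, x: x ** (2 * k) / math.factorial(2 * k), x) - math.cosh(x)) < 1e-12
```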
\ No newline at end of file
diff --git a/samples/texts/348597/page_35.md b/samples/texts/348597/page_35.md
new file mode 100644
index 0000000000000000000000000000000000000000..5d96e099c8d5afa8cf708dc59af5aefae52cff41
--- /dev/null
+++ b/samples/texts/348597/page_35.md
@@ -0,0 +1,48 @@
+```matlab
+[x,y]=pol2cart(th,rho);
+plot(x,y)
+axis equal
+```
+
+**Example 1.17**
+
+Graph the polar plot of a spiral.
+
+*Solution:* The equation of the spiral is given by:
+
+$$
+r = a\theta
+$$
+
+Its polar plot can be viewed by executing the following script M-file (a = 3):
+
+```matlab
+th=0:2*pi/100:2*pi;
+rho=3*th;
+polar(th,rho)
+```
+
+**In-Class Exercises**
+
+**Pb. 1.17** Prove that the polar equation $r = 1 + \epsilon \cos(\theta)$, where $\epsilon$ is always between -1 and 1, results in an ellipse. (Hint: Relate $\epsilon$ to the ratio between the semi-major and semi-minor axis.) It is worth noting that the planetary orbits are usually described in this manner in most astronomy books.
+
+**Pb. 1.18** Plot the three curves described by the following polar equations:
+
+$$
+r = 2 - 2\sin(\theta), \quad r = 1 - \sqrt{2}\sin(\theta), \quad r = \sqrt{2}\sin(2\theta)
+$$
+
+**Pb. 1.19** Plot:
+
+$$
+r = \sin(2\theta) \cos(2\theta)
+$$
+
+The above gives a flower-type curve with eight petals. How would you make
+a flower with 16 petals?
+
+**Pb. 1.20** Plot:
+
+$$
+r = \sin^2(\theta)
+$$
+
+This two-lobed structure shows the power distribution of a simple dipole
+antenna. Note the directed nature of the radiation. Can you increase the
+directivity further?
\ No newline at end of file
diff --git a/samples/texts/348597/page_36.md b/samples/texts/348597/page_36.md
new file mode 100644
index 0000000000000000000000000000000000000000..41803aeb995bb1212abcdd9cd455431eb4fd148d
--- /dev/null
+++ b/samples/texts/348597/page_36.md
@@ -0,0 +1,34 @@
+**Pb. 1.21** Acquaint yourself with the polar plots of the following curves:
+(choose first $a=1$, then experiment with other values).
+
+a. Straight lines: $r = \frac{1}{\cos(\theta) + a\sin(\theta)}$ for $0 \le \theta \le \frac{\pi}{2}$
+
+b. Cissoid of Diocles: $r = a \frac{\sin^2(\theta)}{\cos(\theta)}$ for $-\frac{\pi}{3} \le \theta \le \frac{\pi}{3}$
+
+c. Strophoid: $r = \frac{a \cos(2\theta)}{\cos(\theta)}$ for $-\frac{\pi}{3} \le \theta \le \frac{\pi}{3}$
+
+d. Folium of Descartes: $r = \frac{3a \sin(\theta)\cos(\theta)}{\sin^3(\theta) + \cos^3(\theta)}$ for $-\frac{\pi}{6} \le \theta \le \frac{\pi}{2}$
+
+## 1.9 Animation
+
+A very powerful feature of MATLAB is its ability to render an animation. For example, suppose that we want to visualize the oscillations of an ordinary spring. What are the necessary steps to implement this objective?
+
+1. Determine the parametric equations that describe the curve at a fixed time. In this instance, it is the helix parametric equations as given earlier in Example 1.14.
+
+2. Introduce the time dependence in the appropriate curve parameters. In this instance, make the helix pitch oscillatory in time.
+
+3. Generate 3-D plots of the curve at different times. Make sure that your axis definition includes all cases.
+
+4. Use the `movie` commands to display consecutively the different frames obtained in step 3.
+
+The following *script M-file* implements the above workplan:
+
+```matlab
+th=0:pi/60:32*pi;
+a=1;
+A=0.25;
+w=2*pi/15;
+M=moviein(16);
+for t=1:16;
+x=a*cos(th);
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_37.md b/samples/texts/348597/page_37.md
new file mode 100644
index 0000000000000000000000000000000000000000..9429f0d27a8a81b2e584154c6589a1bbb0c758b1
--- /dev/null
+++ b/samples/texts/348597/page_37.md
@@ -0,0 +1,27 @@
+```matlab
+y=a*sin(th);
+z=(1+A*cos(w*(t-1)))*th;
+plot3(x,y,z,'r');
+axis([-2 2 -2 2 0 40*pi]);
+M(:,t)=getframe;
+end
+movie(M,15)
+```
+
+The statement `M=moviein(16)` creates the 2-D structure that stores in each column the data corresponding to a frame at a specific time. The frames themselves are generated within the **for** loop. The **getframe** function returns a pixel image of the current frame. The last command plays the movie *n* times (15, in this instance).
+
+## 1.10 Histograms
+
+The most convenient representation for data collected from experiments is in the form of histograms. Typically, you collect data and want to sort it out in different bins; the MATLAB command for this operation is **hist**. But prior to getting to this point, let us introduce some array-related definitions and learn the use of the MATLAB commands that compute them.
+
+Let {$y_n$} be a data set; it can be represented in MATLAB by an array. The largest element of this array is obtained through the command **max(y)**, and the smallest element is obtained through the command **min(y)**.
+
+The mean value of the elements of the array is obtained through the command **mean(y)**, and the standard deviation is obtained through the command **std(y)**.
+
+The definitions of the mean and of the standard deviation are, respectively, given by:
+
+$$ \bar{y} = \frac{\sum_{i=1}^{N} y(i)}{N} $$
+
+$$ \sigma_y = \sqrt{\frac{N\sum_{i=1}^{N} (y(i))^2 - \left(\sum_{i=1}^{N} y(i)\right)^2}{N(N-1)}} $$
+
+where *N* is the dimension of the array.
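+
+As a quick check of these definitions on a toy data set, take $y = [1\ 2\ 3]$ (so $N = 3$):
+
+$$ \bar{y} = \frac{1+2+3}{3} = 2, \qquad \sigma_y = \sqrt{\frac{3(1^2+2^2+3^2) - (1+2+3)^2}{3 \cdot 2}} = \sqrt{\frac{42 - 36}{6}} = 1 $$
+
+in agreement with what `mean([1 2 3])` and `std([1 2 3])` return.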
\ No newline at end of file
diff --git a/samples/texts/348597/page_38.md b/samples/texts/348597/page_38.md
new file mode 100644
index 0000000000000000000000000000000000000000..bbfe5e5ea522f297627743c734255a96ea179909
--- /dev/null
+++ b/samples/texts/348597/page_38.md
@@ -0,0 +1,32 @@
+The data (i.e., the array) can be organized into a number of bins ($n_b$) and exhibited through the command `[n,x]=hist(y,nb)`; the array `n` in the output will be the number of elements in each of the bins, and `x` the bin locations.
+
+**Example 1.18**
+
+Find the mean and the standard deviation and draw the histogram, with 20 bins, for an array whose 10,000 elements are chosen from the MATLAB built-in normal distribution with zero mean and standard deviation 1.
+
+**Solution:** Edit and execute the following script M-file:
+
+```matlab
+y=randn(1,10000);
+meany=mean(y)
+stdy=std(y)
+nb=20;
+hist(y,nb)
+```
+
+You will notice that the results obtained for the mean and the standard deviation vary slightly from the theoretical results. This is due to the finite number of elements chosen for the array and the intrinsic limit in the built-in algorithm used for generating random numbers.
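+
+A rough estimate of the expected size of this deviation for the mean comes from the standard error of the mean: for $N$ independent samples drawn from a distribution of standard deviation $\sigma$,
+
+$$ \sigma_{\bar{y}} = \frac{\sigma}{\sqrt{N}} = \frac{1}{\sqrt{10{,}000}} = 0.01 $$
+
+so the computed mean will typically differ from zero by a few hundredths.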
+
+**NOTE** The MATLAB command for generating an N-elements array of random numbers generated uniformly from the interval [0, 1] is `rand(1,N)`.
+
+## 1.11 Printing and Saving Work in MATLAB
+
+*Printing a figure:* Use the MATLAB `print` function to print a displayed figure directly to your printer. Notice that the printed figure does not take up the entire page. This is because the default orientation of the graph is in portrait mode. To change these settings, try the following commands on an already generated graphic window:
+
+```matlab
+orient('landscape') %full horizontal layout
+orient('tall') %full vertical layout
+```
+
+*Printing a program file (script M-file):* For both the Mac and PC, open the M-file that you want to print. Go to the **File** pull-down menu, and select **Print**.
+
+*Saving and loading variables (data):* You can use the MATLAB `save` function to either save a particular variable or the entire MATLAB workspace. To do this, follow the following example:
\ No newline at end of file
diff --git a/samples/texts/348597/page_39.md b/samples/texts/348597/page_39.md
new file mode 100644
index 0000000000000000000000000000000000000000..510d41363196dfc404b567b695420aee6533f3ce
--- /dev/null
+++ b/samples/texts/348597/page_39.md
@@ -0,0 +1,97 @@
+```matlab
+x=1;y=2;
+save 'user volume:x'
+save 'user volume:workspace'
+```
+
+The first **save** command saved the variable x into a file **x.mat**. You can
+change the name of the **.mat** file so it does not match the variable name, but
+that would be confusing. The second command saves all variables (x and y)
+in the workspace into **workspace.mat**.
+
+To load **x.mat** and **workspace.mat**, enter MATLAB and use the MATLAB
+**load** function; note what you obtain if you enter the following commands:
+
+```matlab
+load 'user volume:x'
+x
+load 'user volume:workspace'
+y
+```
+
+After loading the variables, you can see a list of all the variables in your
+workspace if you enter the MATLAB **who** command.
+
+What would you obtain if you had typed and entered the **who** command at
+this point?
+
+Now, to clear the workspace of some or all variables, use the MATLAB
+**clear** function.
+
+```matlab
+clear x %clears variable x from the workspace
+clear   %clears all variables from workspace
+```
+
+## 1.12 MATLAB Commands Review
+
+
+**axis** Sets the axis limits for both 2-D and 3-D plots. Axis supports the arguments equal and square, which makes the current graph's aspect ratio 1.
+
+**contour** Plots contour lines of a surface.
+
+**clear** Clears all variables from the workspace.
+
+**clf** Clears figure.
+
+**for** Runs a sequence of commands a given number of times.
+
+**getframe** Returns the pixel image of a movie frame.
+
+**help** Online help.
+
+**hold on(off)** Holds the plot axis with existing graphics on, so that multiple figures can be plotted on the same graph (release the hold of the axes).
\ No newline at end of file
diff --git a/samples/texts/348597/page_4.md b/samples/texts/348597/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..ff2cc880668d4b1b79ebd6cba3f43957b28210ee
--- /dev/null
+++ b/samples/texts/348597/page_4.md
@@ -0,0 +1,38 @@
+**Library of Congress Cataloging-in-Publication Data**
+
+Manassah, Jamal T.
+
+Elementary mathematical and computational tools for electrical and computer engineers using MATLAB/Jamal T. Manassah.
+
+p. cm.
+
+Includes bibliographical references and index.
+
+ISBN 0-8493-1080-6
+
+1. Electrical engineering—Mathematics. 2. Computer science—Mathematics. 3. MATLAB. I. Title.
+
+TK153 .M362 2001
+510'.24'62—dc21
+
+2001016138
+
+This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.
+
+Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher.
+
+The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying.
+
+Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.
+
+**Trademark Notice:** Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.
+
+Visit the CRC Press Web site at www.crcpress.com
+
+© 2001 by CRC Press LLC
+
+No claim to original U.S. Government works
+International Standard Book Number 0-8493-1080-6
+Library of Congress Card Number 2001016138
+Printed in the United States of America 1 2 3 4 5 6 7 8 9 0
+Printed on acid-free paper
\ No newline at end of file
diff --git a/samples/texts/348597/page_40.md b/samples/texts/348597/page_40.md
new file mode 100644
index 0000000000000000000000000000000000000000..657fbaa9f9a3704d148f7869f0e652769c17642a
--- /dev/null
+++ b/samples/texts/348597/page_40.md
@@ -0,0 +1,57 @@
+**if** Conditional evaluation.
+
+**length** Gives the length of an array.
+
+**load** Loads data or variable values from previous sessions into current MATLAB session.
+
+**linspace** Generates an array with a specified number of points between two values.
+
+**meshgrid** Makes a 2-D array of coordinate squares suitable for plotting surface meshes.
+
+**mesh** Plots a mesh surface of a surface stored in a matrix.
+
+**meshc** The same as mesh, but also plots in the same figure the contour plot.
+
+**min** Finds the smallest element of an array.
+
+**max** Finds the largest element of an array.
+
+**mean** Finds the mean of the elements of an array.
+
+**moviein** Creates the matrix that contains the frames of an animation.
+
+**movie** Plays the movie described by a matrix M.
+
+**orient** Orients the current graph to your needs.
+
+**plot** Plots points or pairs of arrays on a 2-D graph.
+
+**plot3** Plots points or array triples on a 3-D graph.
+
+**polar** Plots a polar plot on a polar grid.
+
+**pol2cart** Polar to Cartesian conversion.
+
+**print** Prints a figure to the default printer.
+
+**quit or exit** Leave MATLAB program.
+
+**rand** Generates an array with elements randomly chosen from the uniform distribution over the interval [0, 1].
+
+**randn** Generates an array with elements randomly chosen from the normal distribution function with zero mean and standard deviation 1.
+
+**subplot** Partitions the graphics window into sub-windows.
+
+**save** Saves MATLAB variables.
+
+**std** Finds the standard deviation of the elements of an array.
+
+**stem** Plots the data sequence as stems from the x-axis terminated with circles for the data value.
+
+**view** Views 3-D graphics from different perspectives.
+
+**who** Lists all variables in the workspace.
+
+**xlabel, ylabel, zlabel, title** Label the x-axis, y-axis, and z-axis, and title the graph, respectively.
+
+**(x>=x1)** Boolean function that is equal to 1 when the condition inside the parenthesis is satisfied, and zero otherwise.
\ No newline at end of file
diff --git a/samples/texts/348597/page_41.md b/samples/texts/348597/page_41.md
new file mode 100644
index 0000000000000000000000000000000000000000..ee9b15c808b707088637634792789926ff759227
--- /dev/null
+++ b/samples/texts/348597/page_41.md
@@ -0,0 +1,25 @@
+# 2
+
+## Difference Equations
+
+This chapter introduces difference equations and examines some simple but important cases of their applications. We develop simple algorithms for their numerical solutions and apply these techniques to the solution of some problems of interest to the engineering professional. In particular, we illustrate each type of difference equation that is of widespread interest.
+
+### 2.1 Simple Linear Forms
+
+The following components are needed to define and solve a difference equation:
+
+1. An ordered array defining an index for the sequence of elements
+
+2. An equation connecting the value of an element having a certain index with the values of some of the elements having lower indices (the order of the equation being defined by the number of lower indices terms appearing in the difference equation)
+
+3. A sufficient number of the values of the elements at the lowest indices to act as seeds in the recursive generation of the higher indexed elements.
+
+For example, the Fibonacci numbers are defined as follows:
+
+1. The ordered array is the set of positive integers
+
+2. The defining difference equation is of second order and is given by:
+
+$$F(k + 2) = F(k + 1) + F(k) \quad (2.1)$$
+
+3. The initial conditions are $F(1) = F(2) = 1$ (note that the required number of initial conditions should be the same as the order of the equation).
\ No newline at end of file
diff --git a/samples/texts/348597/page_42.md b/samples/texts/348597/page_42.md
new file mode 100644
index 0000000000000000000000000000000000000000..a9c1a6cd3221150acc98f36bbfc4eb3371dccbf7
--- /dev/null
+++ b/samples/texts/348597/page_42.md
@@ -0,0 +1,37 @@
+From the above, it is then straightforward to compute the first few Fibonacci numbers:
+
+1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
+
+**Example 2.1**
+
+Write a program for finding the first 20 Fibonacci numbers.
+
+*Solution:* The following program fulfills this task:
+
+```matlab
+N=18;
+F(1)=1;
+F(2)=1;
+for k=1:N
+ F(k+2)=F(k)+F(k+1);
+end
+F
+```
+
+It should be noted that the value of the different elements of the sequence depends on the values of the initial conditions, as illustrated in **Pb. 2.1**, which follows.
+
+**In-Class Exercises**
+
+**Pb. 2.1** Find the first 20 elements of the sequence that obeys the same recursion relation as that of the Fibonacci numbers, but with the following initial conditions:
+
+$$F(1) = 0.5 \quad \text{and} \quad F(2) = 1$$
+
+**Pb. 2.2** Find the first 20 elements of the sequence generated by the following difference equation:
+
+$$F(k + 3) = F(k) + F(k + 1) + F(k + 2)$$
+
+with the following boundary conditions:
+
+$$F(1) = 1, \quad F(2) = 2, \quad \text{and} \quad F(3) = 3$$
+
+Why do we need to specify three initial conditions?
\ No newline at end of file
diff --git a/samples/texts/348597/page_43.md b/samples/texts/348597/page_43.md
new file mode 100644
index 0000000000000000000000000000000000000000..cbf15faafdd204531a67731fdd89d82f2a96a53c
--- /dev/null
+++ b/samples/texts/348597/page_43.md
@@ -0,0 +1,32 @@
+## 2.2 Amortization
+
+In this application of difference equations, we examine simple problems of finance that are of major importance to every engineer, on both the personal and professional levels. When the purchase of any capital equipment or real estate is made on credit, the assumed debt is normally paid for by means of a process known as amortization. Under this plan, a debt is repaid in a sequence of periodic payments where a portion of each payment reduces the outstanding principal, while the remaining portion is for interest on the loan.
+
+Suppose that the original debt to be paid is C and that interest charges are compounded at the rate r per payment period. Let $y(k)$ be the outstanding principal after the $k$th payment, and $u(k)$ the amount of the $k$th payment.
+
+After the $k$th payment period, the outstanding debt is increased by the interest due on the previous principal $y(k-1)$ and decreased by the amount of the payment $u(k)$. This relation can be written in the following difference equation form:
+
+$$y(k) = (1 + r)y(k-1) - u(k) \quad (2.2)$$
+
+We can simplify the problem and assume here that the bank wants its money back in equal amounts over N periods (this can be in days, weeks, months, or years; note, however, that whatever unit is used here should be the same as used for the assignment of the value of the interest rate r). Therefore, let
+
+$$u(k) = p \quad \text{for } k = 1, 2, 3, \dots, N \quad (2.3)$$
+
+Now, using Eq. (2.2), let us iterate the first few terms of the difference equation:
+
+$$y(1) = (1 + r)y(0) - p = (1 + r)C - p \quad (2.4)$$
+
+where $y(0) = C$ is the original capital borrowed.
+
+At $k=2$, using Eq. (2.2) and Eq. (2.4), we obtain:
+
+$$y(2) = (1 + r)y(1) - p = (1 + r)^2 C - p(1 + r) - p \quad (2.5)$$
+
+At $k=3$, using Eq. (2.2), (2.4), and (2.5), we obtain:
+
+$$\begin{align}
+& y(3) = (1 + r)y(2) - p = (1 + r)^3 C - p(1 + r)^2 - p(1 + r) - p \tag{2.6} \\
+& \text{etc....} \nonumber
+\end{align}$$
+
+and for an arbitrary $k$, we can write, by induction, the general expression:
\ No newline at end of file
diff --git a/samples/texts/348597/page_44.md b/samples/texts/348597/page_44.md
new file mode 100644
index 0000000000000000000000000000000000000000..99b52df53dbf91a58d04b70e48773eeb2319c6f0
--- /dev/null
+++ b/samples/texts/348597/page_44.md
@@ -0,0 +1,23 @@
+$$y(k) = (1+r)^k C - p \sum_{i=0}^{k-1} (1+r)^i \quad (2.7)$$
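+
+For $r \neq 0$, the geometric-series sum appearing here evaluates to:
+
+$$ \sum_{i=0}^{k-1} (1+r)^i = \frac{(1+r)^k - 1}{r} $$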
+
+Using the expression for the sum of a geometric series, from the appendix, the expression for $y(k)$ then reduces to:
+
+$$y(k) = (1+r)^k C - p \left[ \frac{(1+r)^k - 1}{r} \right] \quad (2.8)$$
+
+At $k = N$, the debt is paid off and the bank is owed no further payment; therefore:
+
+$$y(N) = 0 = (1+r)^N C - p \left[ \frac{(1+r)^N - 1}{r} \right] \quad (2.9)$$
+
+From this equation, we can determine the amount of each of the (equal) payments:
+
+$$p = \frac{r(1+r)^N}{(1+r)^N - 1} C \quad (2.10)$$
+
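+As a numerical illustration (the loan figures here are chosen arbitrarily): for $C = \$1000$, a monthly interest rate $r = 0.01$, and $N = 12$ monthly payments, Eq. (2.10) gives
+
+$$ p = \frac{0.01\,(1.01)^{12}}{(1.01)^{12} - 1} \times 1000 \approx \$88.85 $$
+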
+**Question:** What percentage of the first payment is going into retiring the principal?
+
+## In-Class Exercises
+
+**Pb. 2.3** Given the principal, the number of periods and the interest rate, use Eq. (2.10) to write a MATLAB program to find the amount of payment per period, assuming the payment per period is the same for all periods.
+
+**Pb. 2.4** Use the same reasoning as for the amortization problem to write the difference equation for an individual's savings plan. Let $y(k)$ be the savings balance on the first day of the $k$th year and $u(k)$ the amount of deposit made in the $k$th year.
+
+Write a MATLAB program to compute $y(k)$ if the sequence $u(k)$ and the interest rate $r$ are given. Specialize to the case where you deposit an amount that increases by the rate of inflation $i$. Compute and plot the total value of the savings as a function of $k$ if the deposit in the first year is \$1000, the yearly interest rate is 6%, and the yearly rate of inflation is 3%. (Hint: For simplicity, assume that the deposits are made on December 31 of each year, and that the balance statement is issued on January 1 of each year.)
\ No newline at end of file
diff --git a/samples/texts/348597/page_45.md b/samples/texts/348597/page_45.md
new file mode 100644
index 0000000000000000000000000000000000000000..346a52e2b7dfdee1210c09cf3d431b1165311c8b
--- /dev/null
+++ b/samples/texts/348597/page_45.md
@@ -0,0 +1,17 @@
+FIGURE 2.1
+The first few steps in the construction of the Koch curve.
+
+## 2.3 An Iterative Geometric Construct: The Koch Curve
+
+In your previous studies of 2-D geometry, you encountered classical geometric objects such as the circle, the triangle, the square, different polygons, etc. These shapes only approximate the shapes that you observe in nature (e.g., the shapes of clouds, mountain ranges, rivers, coastlines, etc.). In a successful effort to address the limitations of classical geometry, mathematicians have developed, over the last century and more intensely over the last three decades, a new geometry called fractal geometry. This geometry defines the geometrical object through an iterative transformation applied an infinite number of times on an initial simple geometrical object. We illustrate this new concept in geometry by considering the Koch curve (see Figure 2.1).
+
+The Koch curve has the following simple geometrical construction. Begin with a straight line of length L. This initial object is called the initiator. Now partition it into three equal parts. Then replace the middle line segment by an equilateral triangle (the segment you removed is its base). This completes the basic construction, which transformed the line segment into four noncollinear smaller line segments. This constructional prescription is called the generator. We now repeat the transformation, taking each of the resulting line segments, partitioning them into three equal parts, removing the middle section, etc.
\ No newline at end of file
diff --git a/samples/texts/348597/page_46.md b/samples/texts/348597/page_46.md
new file mode 100644
index 0000000000000000000000000000000000000000..72b2dcec042c0a514d227e51d398ef36bed22471
--- /dev/null
+++ b/samples/texts/348597/page_46.md
@@ -0,0 +1,25 @@
+This process is repeated indefinitely. Figure 2.1 shows the first two steps of this construction. It is interesting to observe that the Koch curve is an example of a curve where there is no way to fit a tangent to any of its points. In a sense, it is an example of a curve that is made out of corners everywhere.
+
+The detailed study of these objects is covered in courses in fractal geometry, chaos, dynamic systems, etc. We limit ourselves here to the simple problems of determining the number of segments, the length of each segment, the length of the curve, and the area bounded by the curve and the horizontal axis, following the *k*^th step:
+
+1. After the first step, we are left with a curve made up of four line segments of equal length; after the second step, we have (4 × 4) segments; and the number of segments after *k* steps, is
+
+$$n(k) = 4^k \qquad (2.11)$$
+
+2. If the initiator had length *L*, the length of each segment is $L/3$ after the first step, $L/(3)^2$ after the second step, and after *k* steps:
+
+$$s(k) = L/(3)^k \qquad (2.12)$$
+
+3. Combining the results of Eqs. (2.11) and (2.12), we deduce that the length of the curve after *k* steps:
+
+$$P(k) = L \times \left(\frac{4}{3}\right)^k \qquad (2.13)$$
+
+4. The number of vertices in this curve, denoted by *u*(*k*), is equal to the number of segments plus one:
+
+$$u(k) = 4^k + 1 \qquad (2.14)$$
+
+5. The area enclosed by the Koch curve and the horizontal line can be deduced from solving a difference equation: the area enclosed after the *k*-th step is equal to the area enclosed in the $(k-1)$-th step plus the number of the added triangles multiplied by their individual area:
+
+$$\text{Number of new triangles} = \left( \frac{u(k) - u(k-1)}{3} \right) \qquad (2.15)$$
+
+$$\text{Area of the new equilateral triangle} = \frac{\sqrt{3}}{4}s^2(k) = \frac{\sqrt{3}}{4} \left(\frac{1}{3}\right)^{2k} L^2 \quad (2.16)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_47.md b/samples/texts/348597/page_47.md
new file mode 100644
index 0000000000000000000000000000000000000000..8142403ec2603b93204df62df83c8931ff196f79
--- /dev/null
+++ b/samples/texts/348597/page_47.md
@@ -0,0 +1,40 @@
+from which the difference equation for the area can be deduced:
+
+$$
+\begin{align}
+A(k) &= A(k-1) + \left[ \frac{u(k)-u(k-1)}{3} \right] \frac{\sqrt{3}}{4} \frac{L^2}{3^{2k}} \tag{2.17} \\
+&= A(k-1) + \frac{\sqrt{3}}{24} \left( \frac{2}{3} \right)^{2k-1} L^2
+\end{align}
+$$
+
+The initial condition for this difference equation is:
+
+$$
+A(1) = \frac{\sqrt{3}}{4} \frac{L^2}{9} \tag{2.18}
+$$
+
+Clearly, the solution of the above difference equation is the sum of a geometric series, and can therefore be written analytically. For $k \to \infty$, this area has the limit:
+
+$$
+A(k \to \infty) = \frac{\sqrt{3}}{20} L^2 \quad (2.19)
+$$
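+
+This limit follows directly from summing the geometric series generated by Eq. (2.17):
+
+$$ A(k \to \infty) = \frac{\sqrt{3}}{24} L^2 \sum_{k=1}^{\infty} \left(\frac{2}{3}\right)^{2k-1} = \frac{\sqrt{3}}{16} L^2 \sum_{k=1}^{\infty} \left(\frac{4}{9}\right)^{k} = \frac{\sqrt{3}}{16} \cdot \frac{4/9}{1 - 4/9} L^2 = \frac{\sqrt{3}}{20} L^2 $$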
+
+However, if you did not notice the relationship of the above difference equation with the sum of a geometric series, you can still solve this equation numerically, using the following routine and assuming $L = 1$:
+
+```matlab
+N=25;
+A=zeros(N,1); %preallocating size of array speeds
+              % computation
+m=1:N;
+A(1)=(sqrt(3)/24)*(2/3);
+for k=2:N
+A(k)=A(k-1)+(sqrt(3)/24)*((2/3)^(2*k-1));
+end
+stem(m,A,'*')
+```
+
+The above plot shows the value of the area over the first 25 iterations, and as can be verified, the numerical limit of this area has the same value as the analytical expression given in Eq. (2.19).
+
+Before leaving the Koch curve, we note that although the area of the curve goes to a finite limit as the index increases, the value of the length of the curve [Eq. (2.13)] continues to increase. This is a feature not encountered in the classical geometric objects with which you are most familiar.
\ No newline at end of file
diff --git a/samples/texts/348597/page_48.md b/samples/texts/348597/page_48.md
new file mode 100644
index 0000000000000000000000000000000000000000..57c118743641fb7680487c2753fc6d54cde3b82b
--- /dev/null
+++ b/samples/texts/348597/page_48.md
@@ -0,0 +1,23 @@
+**In-Class Exercise**
+
+**Pb. 2.5** Write a program to draw the Koch curve at the $k^{th}$ step. (Hint: Starting with the farthest left vertex and going clockwise, write a difference equation relating the coordinates of a vertex with those of the preceding vertex, the length of the segment, and the angle that the line connecting the two consecutive vertices makes with the x-axis.)
+
+## 2.4 Solution of Linear Constant Coefficients Difference Equations
+
+In Section 2.1, we explored the general numerical techniques for solving difference equations. In this section, we consider some special techniques for obtaining the analytical solutions for the class of linear constant coefficients difference equations. The related physical problem is to determine, for a linear system, the output $y(k)$, $k > 0$, given a specific input $u(k)$ and a specific set of initial conditions. We discuss, at this stage, the so-called direct method.
+
+The general expression for this class of difference equation is given by:
+
+$$ \sum_{j=0}^{N} a_j y(k-j) = \sum_{m=0}^{M} b_m u(k-m) \quad (2.20) $$
+
+The direct method assumes that the total solution of a linear difference equation is the sum of two parts — the homogeneous solution and the particular solution:
+
+$$ y(k) = y_{\text{homog.}}(k) + y_{\text{partic.}}(k) \quad (2.21) $$
+
+The homogeneous solution is independent of the input $u(k)$, and the RHS of the difference equation is equated to zero; that is,
+
+$$ \sum_{j=0}^{N} a_j y(k-j) = 0 \quad (2.22) $$
+
+### 2.4.1 Homogeneous Solution
+
+Assume that the solution is of the form:
\ No newline at end of file
diff --git a/samples/texts/348597/page_49.md b/samples/texts/348597/page_49.md
new file mode 100644
index 0000000000000000000000000000000000000000..85516168c0425bfe967de752681c17d24da6bc9b
--- /dev/null
+++ b/samples/texts/348597/page_49.md
@@ -0,0 +1,33 @@
+$$y_{\text{homog.}}(k) = \lambda^k \quad (2.23)$$
+
+Substituting in the homogeneous equation, we obtain the following algebraic equation:
+
+$$\sum_{j=0}^{N} a_j \lambda^{k-j} = 0 \quad (2.24)$$
+
+or
+
+$$\lambda^{k-N} (a_0 \lambda^N + a_1 \lambda^{N-1} + a_2 \lambda^{N-2} + \dots + a_{N-1} \lambda + a_N) = 0 \quad (2.25)$$
+
+The polynomial in parentheses is called the characteristic polynomial of the system. The roots can be obtained analytically for all polynomials up to order 4; otherwise, they are obtained numerically. In MATLAB, they can be obtained graphically when they are all real, or through the `roots` command in the most general case. We introduce this command in Chapter 5. In all the following examples in this chapter, we restrict ourselves to cases for which the roots can be obtained analytically.
+
+If we assume that the roots are all distinct, the general solution to the homogeneous difference equation is:
+
+$$y_{\text{homog.}}(k) = C_1 \lambda_1^k + C_2 \lambda_2^k + \dots + C_N \lambda_N^k \quad (2.26)$$
+
+where $\lambda_1, \lambda_2, \lambda_3, \dots, \lambda_N$ are the roots of the characteristic polynomial.
+
+**Example 2.2**
+
+Find the homogeneous solution of the difference equation
+
+$$y(k) - 3y(k-1) - 4y(k-2) = 0$$
+
+**Solution:** The characteristic polynomial associated with this equation leads to the quadratic equation:
+
+$$\lambda^2 - 3\lambda - 4 = 0$$
+
+The roots of this equation are -1 and 4, respectively. Therefore, the solution of the homogeneous equation is:
+
+$$y_{\text{homog.}}(k) = C_1(-1)^k + C_2(4)^k$$
+
+The constants $C_1$ and $C_2$ are determined from the initial conditions $y(1)$ and $y(2)$. Substituting, we obtain:
\ No newline at end of file
diff --git a/samples/texts/348597/page_5.md b/samples/texts/348597/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..44e90544c239cb854b1bcbc9982a73e41f6a53dd
--- /dev/null
+++ b/samples/texts/348597/page_5.md
@@ -0,0 +1,3 @@
+About the Author
+
+**Jamal T. Manassah**, has been Professor of Electrical Engineering at the City College of New York since 1981. He received his B.Sc. degree in Physics from the American University of Beirut, and his M.A. and Ph.D. in Theoretical Physics from Columbia University. Dr. Manassah was a Member of the Institute for Advanced Study. His current research interests are in theoretical and computational quantum and nonlinear optics, and in photonics.
\ No newline at end of file
diff --git a/samples/texts/348597/page_50.md b/samples/texts/348597/page_50.md
new file mode 100644
index 0000000000000000000000000000000000000000..26f59e2a8dda6e7b1be9bee3c2945bc8fb802cea
--- /dev/null
+++ b/samples/texts/348597/page_50.md
@@ -0,0 +1,27 @@
+$$C_1 = -\frac{4}{5}y(1) + \frac{y(2)}{5} \quad \text{and} \quad C_2 = \frac{y(1)+y(2)}{20}$$
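+
+These values follow from solving the pair of linear equations obtained by substituting $k = 1$ and $k = 2$ into the homogeneous solution:
+
+$$ y(1) = -C_1 + 4C_2, \qquad y(2) = C_1 + 16C_2 $$
+
+Adding the two equations gives $20C_2 = y(1) + y(2)$, from which $C_1$ follows.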
+
+NOTE If the characteristic polynomial has roots of multiplicity *m*, then the portion of the homogeneous solution corresponding to that root can be written, instead of $C_1\lambda^k$, as:
+
+$$C_1^{(1)}\lambda^k + C_1^{(2)}k\lambda^k + \dots + C_1^{(m)}k^{m-1}\lambda^k$$
+
+**In-Class Exercises**
+
+**Pb. 2.6** Find the homogeneous solution of the following second-order difference equation:
+
+$$y(k) = 3y(k-1) - 2y(k-2)$$
+
+with the initial conditions: $y(0) = 1$ and $y(1) = 2$. Then check your results numerically.
+
+**Pb. 2.7** Find the homogeneous solution of the following second-order difference equation:
+
+$$y(k) = [2 \cos(\theta)]y(k-1) - y(k-2)$$
+
+with the initial conditions: $y(-2) = 0$ and $y(-1) = 1$. Check your results numerically.
+
+### 2.4.2 Particular Solution
+
+The particular solution depends on the form of the input signal. The following table summarizes the form of the particular solution of a linear equation for some simple input functions:
+
+| Input Signal | Particular Solution |
+|---|---|
+| $A$ (constant) | $B$ (constant) |
+| $AM^k$ | $BM^k$ |
+| $Ak^M$ | $B_0 k^M + B_1 k^{M-1} + \dots + B_M$ |
+| $A\cos(\omega_0 k)$, $A\sin(\omega_0 k)$ | $B_1 \cos(\omega_0 k) + B_2 \sin(\omega_0 k)$ |
+
+For more complicated input signals, the z-transform technique provides the simplest solution method. This technique is discussed in great detail in courses on linear systems.
\ No newline at end of file
diff --git a/samples/texts/348597/page_51.md b/samples/texts/348597/page_51.md
new file mode 100644
index 0000000000000000000000000000000000000000..22c0ca4c281edcf0588de1f4fc3b5a1a17446475
--- /dev/null
+++ b/samples/texts/348597/page_51.md
@@ -0,0 +1,33 @@
+**In-Class Exercise**
+
+**Pb. 2.8** Find the particular solution of the following second-order difference equation:
+
+$$y(k) - 3y(k-1) + 2y(k-2) = (3)^k \quad \text{for } k > 0$$
+
+**2.4.3 General Solution**
+
+The general solution of a linear difference equation is the sum of its homogeneous solution and its particular solution, with the constants adjusted, so as to satisfy the initial conditions. We illustrate this general prescription with an example.
+
+**Example 2.3**
+
+Find the complete solution of the first-order difference equation:
+
+$$y(k + 1) + y(k) = k$$
+
+with the initial condition $y(0) = 0$.
+
+*Solution:* First, solve the homogeneous equation $y(k+1) + y(k) = 0$. The characteristic polynomial is $\lambda + 1 = 0$; therefore,
+
+$$y_{\text{homog.}} = C(-1)^k$$
+
+The particular solution can be obtained from the above table. Noting that the input signal has the functional form $k^M$, with $M = 1$, then the particular solution is of the form:
+
+$$y_{\text{partic.}} = B_0 k + B_1 \qquad (2.27)$$
+
+Substituting back into the original equation, and grouping the different powers of $k$, we deduce that:
+
+$$B_0 = 1/2 \quad \text{and} \quad B_1 = -1/4$$
+
+The complete solution of the difference equation is then:
+
+$$y(k) = C(-1)^k + \frac{2k-1}{4}$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_52.md b/samples/texts/348597/page_52.md
new file mode 100644
index 0000000000000000000000000000000000000000..477e6f59278278678908f0cede74ed54cd96b4c2
--- /dev/null
+++ b/samples/texts/348597/page_52.md
@@ -0,0 +1,36 @@
+The constant C is determined from the initial condition:
+
+$$y(0) = 0 = C(-1)^0 + \frac{(-1)}{4}$$
+
+giving for the constant C the value 1/4.
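+
+As a check, substituting the complete solution back into the difference equation confirms it:
+
+$$ y(k+1) + y(k) = \frac{1}{4}(-1)^{k+1} + \frac{2k+1}{4} + \frac{1}{4}(-1)^{k} + \frac{2k-1}{4} = k $$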
+
+## In-Class Exercises
+
+**Pb. 2.9** Use the following program to model Example 2.3:
+
+```matlab
+N=19;
+y(1)=0;
+for k=1:N
+ y(k+1)=k-y(k);
+end
+y
+```
+
+Verify the closed-form answer.
+
+**Pb. 2.10** Find, for $k \ge 2$, the general solution of the second-order difference equation:
+
+$$y(k) - 3y(k-1) - 4y(k-2) = 4^k + 2 \times 4^{k-1}$$
+
+with the initial conditions $y(0) = 1$ and $y(1) = 9$. (Hint: When the functional form of the homogeneous and particular solutions are the same, use the same functional form for the solutions as in the case of multiple roots for the characteristic polynomial.)
+
+Answer: $y(k) = -\frac{1}{25}(-1)^k + \frac{26}{25}(4)^k + \frac{6}{5}k(4)^k$
+
+## Homework Problems
+
+**Pb. 2.11** Given the general geometric series $y(k)$, where:
+
+$$y(k) = 1 + a + a^2 + \dots + a^k$$
+
+show that $y(k)$ obeys the first-order equation:
\ No newline at end of file
diff --git a/samples/texts/348597/page_53.md b/samples/texts/348597/page_53.md
new file mode 100644
index 0000000000000000000000000000000000000000..abe0bd2a366707450115e16f893278a098750591
--- /dev/null
+++ b/samples/texts/348597/page_53.md
@@ -0,0 +1,39 @@
+$$y(k) = y(k - 1) + a^k$$
+
+**Pb. 2.12** Show that the response of the system:
+
+$$y(k) = (1 - a)u(k) + a y(k - 1)$$
+
+to a step signal of amplitude c; that is, $u(k) = c$ for all positive k, is given by:
+
+$$y(k) = c(1 - a^{k+1}) \quad \text{for } k = 0, 1, 2, \dots$$
+
+where the initial condition $y(-1) = 0$.
+
+**Pb. 2.13** Given the first-order difference equation:
+
+$$y(k) = u(k) + y(k - 1) \quad \text{for } k = 0, 1, 2, \dots$$
+
+with the input signal $u(k) = k$, and the initial condition $y(-1) = 0$. Verify that its solution also satisfies the second-order difference equation:
+
+$$y(k) = 2y(k-1) - y(k-2) + 1$$
+
+with the initial conditions $y(0) = 0$ and $y(-1) = 0$.
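+
+The equivalence of the two formulations can be verified in a few lines (Python sketch):
+
```python
# Pb. 2.13 check: the solution of y(k) = u(k) + y(k-1), u(k) = k, y(-1) = 0
# also satisfies the second-order equation y(k) = 2y(k-1) - y(k-2) + 1.
y = {-1: 0}
for k in range(0, 25):
    y[k] = k + y[k - 1]                          # first-order equation
for k in range(1, 25):
    assert y[k] == 2 * y[k - 1] - y[k - 2] + 1   # second-order equation
```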
+
+**Pb. 2.14** Verify that the response of the system governed by the first-order difference equation:
+
+$$y(k) = bu(k) + ay(k-1)$$
+
+to the alternating input: $u(k) = (-1)^k$ for $k = 0, 1, 2, 3, \dots$ is given by:
+
+$$y(k) = \frac{b}{1+a} [(-1)^k + a^{k+1}] \quad \text{for } k = 0, 1, 2, 3, \dots$$
+
+if the initial condition is: $y(-1) = 0$.
+
+**Pb. 2.15** The impulse response of a system is the output from this system when excited by an input signal $\delta(k)$ that is zero everywhere, except at $k=0$, where it is equal to 1. Using this definition and the general form of the solution of a difference equation, write the output of a linear system described by:
+
+$$y(k) - 3y(k - 1) - 4y(k - 2) = \delta(k) + 2\delta(k - 1)$$
+
+The initial conditions are: $y(-2) = y(-1) = 0$.
+
+Answer: $y(k) = -\frac{1}{5}(-1)^k + \frac{6}{5}(4)^k \quad \text{for } k \ge 0$
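+
+The stated answer can be confirmed by direct iteration (a Python sketch):
+
```python
# Pb. 2.15 check: iterate y(k) = 3y(k-1) + 4y(k-2) + delta(k) + 2*delta(k-1)
# with y(-2) = y(-1) = 0, and compare with -(1/5)(-1)^k + (6/5)4^k.
d = lambda k: 1 if k == 0 else 0
y = {-2: 0, -1: 0}
for k in range(0, 12):
    y[k] = 3 * y[k - 1] + 4 * y[k - 2] + d(k) + 2 * d(k - 1)
for k in range(0, 12):
    closed = -((-1) ** k) / 5 + (6 / 5) * 4 ** k
    assert abs(y[k] - closed) < 1e-6
```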
\ No newline at end of file
diff --git a/samples/texts/348597/page_54.md b/samples/texts/348597/page_54.md
new file mode 100644
index 0000000000000000000000000000000000000000..7659cd32c5e1b40b0b721b6f0c17d6cf83d4d435
--- /dev/null
+++ b/samples/texts/348597/page_54.md
@@ -0,0 +1,31 @@
+**Pb. 2.16** The expression for the National Income is given by:
+
+$$y(k) = c(k) + i(k) + g(k)$$
+
+where *c* is consumer expenditure, *i* is the induced private investment, *g* is the government expenditure, and *k* is the accounting period, typically corresponding to a particular quarter. Samuelson theory, introduced to many engineers in Cadzow's classic *Discrete Time Systems* (see reference list), assumes the following properties for the above three components of the National Income:
+
+1. Consumer expenditure in any period *k* is proportional to the National Income at the previous period:
+
+$$c(k) = ay(k-1)$$
+
+2. Induced private investment in any period *k* is proportional to the increase in consumer expenditure from the preceding period:
+
+$$i(k) = b[c(k) - c(k-1)] = ab[y(k-1) - y(k-2)]$$
+
+3. Government expenditure is the same for all accounting periods:
+
+$$g(k) = g$$
+
+Combining the above equations, the National Income obeys the second-order difference equation:
+
+$$y(k) = g + a(1 + b)y(k-1) - aby(k-2) \quad \text{for } k = 1, 2, 3, \dots$$
+
+The initial conditions $y(-1)$ and $y(0)$ are to be specified.
+
+Plot the National Income for the first 40 quarters of a new national entity, assuming that: $a = 1/6$, $b = 1$, $g = \$10,000,000$, $y(-1) = \$20,000,000$, $y(0) = \$30,000,000$.
+
+How would the National Income curve change if the marginal propensity to consume (i.e., the constant *a*) is decreased to 1/8?
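+
+A minimal simulation skeleton (sketched here in Python; the plotting itself is left to the reader) shows the oscillations dying out as the income settles toward the fixed point $g/(1-a)$ of the recursion:
+
```python
# Pb. 2.16 sketch: Samuelson model y(k) = g + a(1+b)y(k-1) - ab*y(k-2),
# with the stated values a = 1/6, b = 1, g = 10^7, y(-1) = 2e7, y(0) = 3e7.
a, b, g = 1 / 6, 1.0, 10_000_000.0
y = {-1: 20_000_000.0, 0: 30_000_000.0}
for k in range(1, 41):
    y[k] = g + a * (1 + b) * y[k - 1] - a * b * y[k - 2]
# the transient decays like sqrt(ab)^k, so after 40 quarters the income
# has settled at the fixed point g/(1-a) = 12,000,000
assert abs(y[40] - g / (1 - a)) < 1e-3
```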
+
+## 2.5 Convolution-Summation of a First-Order System with Constant Coefficients
+
+The amortization problem in Section 2.2 was solved by obtaining the present output, $y(k)$, as a linear combination of the present and all past inputs, $u(k)$,
\ No newline at end of file
diff --git a/samples/texts/348597/page_55.md b/samples/texts/348597/page_55.md
new file mode 100644
index 0000000000000000000000000000000000000000..824deaad88dc7b02a56159647a1e63e85b4fdb7f
--- /dev/null
+++ b/samples/texts/348597/page_55.md
@@ -0,0 +1,27 @@
+$u(k-1), u(k-2), \dots$. This solution technique is referred to as the convolution-summation representation:
+
+$$y(k) = \sum_{i=0}^{\infty} w(i) u(k-i) \quad (2.28)$$
+
+where the $w(i)$ are the weighting coefficients (or weights). The infinite sum usually reduces to a finite sum because inputs with negative indices are assumed to be zero.
+
+On the other hand, in the difference equation formulation of this class of problems, the present output $y(k)$ is expressed as a linear combination of the present and $m$ most recent inputs and of the $n$ most recent outputs, specifically:
+
+$$y(k) = b_0 u(k) + b_1 u(k-1) + \dots + b_m u(k-m) \\ - a_1 y(k-1) - a_2 y(k-2) - \dots - a_n y(k-n) \quad (2.29)$$
+
+where, of course, $n$ is the order of the difference equation. Elementary techniques for solving this class of equations were introduced in Section 2.4. However, the most powerful technique to directly solve the linear difference equation with constant coefficients is, as pointed out earlier, the z-transform technique.
+
+Each of the above formulations of the input-output problem has distinct advantages in different circumstances. The direct difference equation formulation is the most amenable to numerical computations because of lower computer memory requirements, while the convolution-summation technique has the advantage of being suitable for developing mathematical proofs and finding general features for the difference equation.
+
+Relating the parameters of the two formulations of this problem is usually cumbersome without the z-transform technique. However, for first-order difference equations, this task is rather simple.
+
+**Example 2.4**
+
+Relate, for a first-order difference equation with constant coefficients, the sets $\{a_n\}$ and $\{b_n\}$ with $\{w_n\}$.
+
+**Solution:** The first-order difference equation is given by:
+
+$$y(k) = b_0 u(k) + b_1 u(k-1) - a_1 y(k-1)$$
+
+where $u(k) = 0$ for all $k$ negative. From the difference equation and the initial conditions, we can directly write:
+
+$$y(0) = b_0 u(0)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_56.md b/samples/texts/348597/page_56.md
new file mode 100644
index 0000000000000000000000000000000000000000..52a01ada5fdc727e9295dedd8133df76eec71a56
--- /dev/null
+++ b/samples/texts/348597/page_56.md
@@ -0,0 +1,33 @@
+$$ \text{for } k = 1, \begin{cases} y(1) = b_0 u(1) + b_1 u(0) - a_1 y(0) \\ = b_0 u(1) + b_1 u(0) - a_1 b_0 u(0) \\ = b_0 u(1) + (b_1 - a_1 b_0) u(0) \end{cases} $$
+
+Similarly,
+
+$$ y(2) = b_0 u(2) + (b_1 - a_1 b_0) u(1) - a_1 (b_1 - a_1 b_0) u(0) $$
+
+$$ y(3) = b_0 u(3) + (b_1 - a_1 b_0)u(2) - a_1(b_1 - a_1 b_0)u(1) + a_1^2(b_1 - a_1 b_0)u(0) $$
+
+or, more generally, if:
+
+$$ y(k) = w(0)u(k) + w(1)u(k-1) + \dots + w(k)u(0) $$
+
+then,
+
+$$ w(0) = b_0 $$
+
+$$ w(i) = (-a_1)^{i-1}(b_1 - a_1 b_0) \quad \text{for } i=1,2,3,... $$
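+
+These weights can be checked against the recursion for an arbitrary input (a Python sketch; the coefficient and input values below are arbitrary test values):
+
```python
# Example 2.4 check: the convolution-summation with w(0) = b0 and
# w(i) = (-a1)^(i-1)(b1 - a1*b0) reproduces y(k) = b0 u(k) + b1 u(k-1) - a1 y(k-1).
b0, b1, a1 = 2.0, 0.5, -0.3
u = [1.0, -2.0, 0.7, 3.1, 0.0, 1.5]     # arbitrary input, u(k) = 0 for k < 0
w = lambda i: b0 if i == 0 else (-a1) ** (i - 1) * (b1 - a1 * b0)
y_prev = 0.0                            # y(-1) = 0
for k in range(len(u)):
    y_prev = b0 * u[k] + b1 * (u[k - 1] if k > 0 else 0.0) - a1 * y_prev
    conv = sum(w(i) * u[k - i] for i in range(k + 1))
    assert abs(y_prev - conv) < 1e-12
```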
+
+## In-Class Exercises
+
+**Pb. 2.17** Using the convolution-summation technique, find the closed form solution for:
+
+$$ y(k) = u(k) - \frac{1}{3}u(k-1) + \frac{1}{2}y(k-1) $$
+
+and the input function given by: $\begin{cases} u(k) = 0 & \text{for } k \text{ negative} \\ u(k) = 1 & \text{otherwise} \end{cases}$
+
+Compare your analytical answer with the numerical solution.
+
+**Pb. 2.18** Show that the resultant weight functions for two systems are, respectively:
+
+$$ w(k) = w_1(k) + w_2(k) \quad \text{if connected in parallel} $$
+
+$$ w(k) = \sum_{i=0}^{k} w_2(i)w_1(k-i) \quad \text{if connected in cascade} $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_57.md b/samples/texts/348597/page_57.md
new file mode 100644
index 0000000000000000000000000000000000000000..39fd7eeddc3122826116a9700203f345c5d2c94f
--- /dev/null
+++ b/samples/texts/348597/page_57.md
@@ -0,0 +1,34 @@
+## 2.6 General First-Order Linear Difference Equations*
+
+Thus far, we have considered difference equations with constant coefficients. Now we consider first-order difference equations with arbitrary functions as coefficients:
+
+$$y(k + 1) + A(k)y(k) = B(k) \tag{2.30}$$
+
+The homogeneous equation corresponding to this form satisfies the following equation:
+
+$$l(k + 1) + A(k)l(k) = 0 \tag{2.31}$$
+
+Its expression can be easily found:
+
+$$\begin{align*}
+l(k+1) &= -A(k)l(k) = A(k)A(k-1)l(k-1) = \dots \\
+&= (-1)^{k+1} A(k)A(k-1)\dots A(0)l(0) = \left\{ \prod_{i=0}^{k} [-A(i)] \right\} l(0) \tag{2.32}
+\end{align*}$$
+
+Assuming that the general solution is of the form:
+
+$$y(k) = l(k)v(k) \tag{2.33}$$
+
+let us find $v(k)$. Substituting the above trial solution in the difference equation, we obtain:
+
+$$l(k + 1)v(k + 1) + A(k)l(k)v(k) = B(k) \tag{2.34}$$
+
+Further, assuming that:
+
+$$v(k + 1) = v(k) + \Delta v(k) \tag{2.35}$$
+
+substituting in the difference equation, and recalling that $l(k)$ is the solution of the homogeneous equation, we obtain:
+
+$$\Delta v(k) = \frac{B(k)}{l(k+1)} \tag{2.36}$$
+
+Summing this over the variable $k$ from 0 to $k$, we deduce that:
\ No newline at end of file
diff --git a/samples/texts/348597/page_58.md b/samples/texts/348597/page_58.md
new file mode 100644
index 0000000000000000000000000000000000000000..51f2718f1472cb735cc834f4fd1277efd8f8ebe6
--- /dev/null
+++ b/samples/texts/348597/page_58.md
@@ -0,0 +1,41 @@
+$$v(k + 1) = \sum_{j=0}^{k} \frac{B(j)}{l(j + 1)} + C \quad (2.37)$$
+
+where C is a constant.
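+
+Equations (2.32), (2.36), and (2.37) translate directly into a short routine; a Python sketch (the sample equation below, $y(k+1) - 2y(k) = 1$, is an arbitrary illustration, not from the text):
+
```python
# Solve y(k+1) + A(k)y(k) = B(k) via y(k) = l(k)v(k), Eqs. (2.32)-(2.37).
def solve_linear(A, B, y0, K):
    l = [1.0]                                   # l(0) = 1 (any nonzero value works)
    for k in range(K):
        l.append(-A(k) * l[k])                  # Eq. (2.32): l(k+1) = -A(k) l(k)
    v = [y0 / l[0]]                             # constant C fixed by y(0)
    for k in range(K):
        v.append(v[k] + B(k) / l[k + 1])        # Eq. (2.36): delta v(k) = B(k)/l(k+1)
    return [l[k] * v[k] for k in range(K + 1)]  # Eq. (2.33): y(k) = l(k) v(k)

A = lambda k: -2.0                              # sample: y(k+1) - 2y(k) = 1
B = lambda k: 1.0
y = solve_linear(A, B, 1.0, 10)
yd = [1.0]                                      # direct iteration, for comparison
for k in range(10):
    yd.append(B(k) - A(k) * yd[k])
assert all(abs(p - q) < 1e-9 for p, q in zip(y, yd))
```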
+
+**Example 2.5**
+
+Find the general solution of the following first-order difference equation:
+
+$$y(k + 1) - k^2 y(k) = 0$$
+
+with $y(1) = 1$.
+
+**Solution:**
+
+$$
+\begin{align*}
+y(k+1) &= k^2 y(k) = k^2 (k-1)^2 y(k-1) = k^2 (k-1)^2 (k-2)^2 y(k-2) \\
+&= k^2 (k-1)^2 (k-2)^2 (k-3)^2 y(k-3) = \dots \\
+&= k^2 (k-1)^2 (k-2)^2 (k-3)^2 \dots (2)^2 (1)^2 y(1) = (k!)^2
+\end{align*}
+$$
+
+**Example 2.6**
+
+Find the general solution of the following first-order difference equation:
+
+$$ (k + 1)y(k + 1) - ky(k) = k^2 $$
+
+with $y(1) = 1$.
+
+**Solution:** Reducing this equation to the standard form, we have:
+
+$$A(k) = -\frac{k}{k+1} \quad \text{and} \quad B(k) = \frac{k^2}{k+1}$$
+
+The homogeneous solution is given by:
+
+$$l(k + 1) = \frac{k!}{(k + 1)!} = \frac{1}{k + 1}$$
+
+The particular solution is given by:
+
+$$v(k + 1) = \sum_{j=1}^{k} \frac{j^2}{(j + 1)} (j + 1) + C = \sum_{j=1}^{k} j^2 + C = \frac{(k + 1)(2k + 1)k}{6} + C$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_59.md b/samples/texts/348597/page_59.md
new file mode 100644
index 0000000000000000000000000000000000000000..aa53cf4c86eba8438557fa95c86c6c4c3c90cc1a
--- /dev/null
+++ b/samples/texts/348597/page_59.md
@@ -0,0 +1,27 @@
+where we used the expression for the sum of the square of integers (see Appendix).
+
+The general solution is then:
+
+$$y(k + 1) = \frac{(2k + 1)k}{6} + \frac{C}{(k + 1)}$$
+
+From the initial condition $y(1) = 1$, we deduce that: $C = 1$.
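+
+A quick numerical check of Example 2.6 (a Python sketch):
+
```python
# Iterate (k+1)y(k+1) - k y(k) = k^2 with y(1) = 1, and compare with
# the closed form y(k+1) = k(2k+1)/6 + 1/(k+1).
y = {1: 1.0}
for k in range(1, 20):
    y[k + 1] = (k * y[k] + k ** 2) / (k + 1)
for k in range(1, 20):
    closed = k * (2 * k + 1) / 6 + 1 / (k + 1)
    assert abs(y[k + 1] - closed) < 1e-9
```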
+
+**In-Class Exercise**
+
+**Pb. 2.19** Find the general solutions for the following difference equations, assuming that $y(1) = 1$.
+
+a. $y(k + 1) - 3ky(k) = 3^k$.
+
+b. $y(k + 1) - ky(k) = k$.
+
+## 2.7 Nonlinear Difference Equations
+
+In this and the following section, we explore a number of nonlinear difference equations that exhibit some general features typical of certain classes of solutions, and we observe other instances with novel qualitative features. Our exploration is purely experimental, in the sense that we restrict our treatment to guided computer runs. The underlying theories of most of the models presented are the subject of more advanced courses; however, many educators, including this author, believe that there is virtue in exposing students qualitatively early on to these fascinating and generally new developments in mathematics.
+
+### 2.7.1 Computing Irrational Numbers
+
+In this model, we want to exhibit an example of a nonlinear difference equation whose solution is a sequence that approaches a specific limit, irrespective, within reasonable constraints, of the initial condition imposed on it. This type of difference equation has been used to compute a class of irrational numbers. For example, a well-known iterative scheme for computing $\sqrt{A}$ is the feedback process:
+
+$$y(k + 1) = \frac{1}{2} \left[ y(k) + \frac{A}{y(k)} \right] \quad (2.38)$$
+
+This equation's main features are explored in the following exercise.
\ No newline at end of file
diff --git a/samples/texts/348597/page_6.md b/samples/texts/348597/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ac0516cd615d9876611804b168165fe80ea14b3
--- /dev/null
+++ b/samples/texts/348597/page_6.md
@@ -0,0 +1,19 @@
+# Introduction
+
+This book is mostly based on a series of notes for a primer course in electrical and computer engineering that I taught at the City College of New York School of Engineering. Each week, the class met for an hour of lecture and a three-hour computer laboratory session where students were divided into small groups of 12 to 15 students each. The students met in an informal learning community setting, a computer laboratory, where each student had the exclusive use of a PC. The small size of the groups permitted a great deal of individualized instruction, which was a key ingredient to cater successfully to the needs of students with heterogeneous high school backgrounds.
+
+A student usually takes this course in the second semester of his or her freshman year. Typically, the student would have completed one semester of college calculus, and would be enrolled in the second course of the college calculus sequence and in the first course of the physics sequence for students in the physical sciences and engineering.
+
+My purpose in developing this book is to help bring the beginner engineering student's analytical and computational skills to a level of competency that would permit him or her to participate, enjoy, and succeed in subsequent electrical and computer engineering courses. My experience indicates that the lack of mastery of fundamental quantitative tools is the main impediment to a student's progress in engineering studies.
+
+The specific goals of this book are:
+
+1. To make you more comfortable applying the mathematics and physics that you learned in high school or in college courses, through interactive activities.
+
+2. To introduce you, through examples, to many new practical tools of mathematics, including discrete-variable material that is essential to your success in future electrical engineering courses.
+
+3. To instruct you in the use of a powerful computer program, MATLAB®*, which was designed to be simultaneously user-friendly and powerful in tackling efficiently the most demanding problems of engineering and sciences.
+
+4. To give you, through the applications and examples covered, glimpses of some of the fascinating problems that an electrical or
+
+* MATLAB® is a registered trademark of the MathWorks, Inc., 3 Apple Hill Drive, Natick, MA, 01760-2098, USA. Tel: 508-647-7000, Fax: 508-647-7101, e-mail: info@mathworks.com, Web: www.mathworks.com.
\ No newline at end of file
diff --git a/samples/texts/348597/page_60.md b/samples/texts/348597/page_60.md
new file mode 100644
index 0000000000000000000000000000000000000000..18e36488d4ace0793b59b295401012a4daa4435a
--- /dev/null
+++ b/samples/texts/348597/page_60.md
@@ -0,0 +1,25 @@
+**In-Class Exercise**
+
+**Pb. 2.20** Using the difference equation given by Eq. (2.38):
+
+a. Write down a routine to compute $\sqrt{2}$. As an initial guess, take the initial value to be successively: 1, 1.5, 2; even consider 5, 10, and 20. What is the limit of each of the obtained sequences?
+
+b. How many iterations are required to obtain $\sqrt{2}$ accurate to four digits for each of the above initial conditions?
+
+c. Would any of the above properties be different for a different choice of $A$?
+
+Now, having established that the above sequence goes to a limit, let us prove that this limit is indeed $\sqrt{A}$. Let this limit be denoted by $y_{\text{lim}}$; that is, for large $k$, both $y(k)$ and $y(k + 1) \to y_{\text{lim}}$, and the difference equation goes in the limit to:
+
+$$y_{\text{lim}} = \frac{1}{2} \left[ y_{\text{lim}} + \frac{A}{y_{\text{lim}}} \right] \quad (2.39)$$
+
+Solving this equation, we obtain:
+
+$$y_{\text{lim}} = \sqrt{A} \qquad (2.40)$$
+
+It should be noted that the above derivation is meaningful only when a limit exists and is in the domain of definition of the sequence (in this case, the real numbers). In Section 2.7.2, we encounter a sequence where, for some values of the parameters, there is no limit.
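+
+The whole scheme fits in a few lines; a Python sketch for $A = 2$, starting from a deliberately poor guess:
+
```python
# Eq. (2.38): y(k+1) = (y(k) + A/y(k))/2 converges to sqrt(A)
# from any positive initial guess; convergence is very fast.
A, y = 2.0, 5.0
for _ in range(10):
    y = 0.5 * (y + A / y)
assert abs(y - 2 ** 0.5) < 1e-10
```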
+
+## 2.7.2 The Logistic Equation
+
+Section 2.7.1 illustrated the case in which the solution of a nonlinear difference equation converges to a single limit for large values of the iteration index. In this subsection, we consider the case in which a succession of iterates (called orbits) bifurcate, yielding orbits of period length 2, 4, 8, 16, *ad infinitum*, ending in what is called a “chaotic” orbit of infinite period length. We illustrate the prototype for this class of difference equations by exploring the logistic difference equation.
+
+The logistic equation was introduced by Verhulst to model the growth of populations limited by finite resources (the name logistic was coined by the French army under Napoleon when this equation was used for the planning of “logement” of troops in camps). In more modern settings of ecology, the
\ No newline at end of file
diff --git a/samples/texts/348597/page_61.md b/samples/texts/348597/page_61.md
new file mode 100644
index 0000000000000000000000000000000000000000..aa8bb0243ed00810077928518fdccd03b9e0e798
--- /dev/null
+++ b/samples/texts/348597/page_61.md
@@ -0,0 +1,35 @@
+above model is used to simulate population growth. Specifically, in an ecological or growth process, the normalized measure $y(k+1)$ of the next generation of a species (the number of animals, for example) is a linear function of the present measure $y(k)$; that is,
+
+$$y(k + 1) = ry(k) \tag{2.41}$$
+
+where *r* is the growth parameter. If unchecked, the growth of the species follows a geometric series, which for $r > 1$ grows to infinity. But growth is often limited by finite resources. In other words, the larger $y(k)$, the smaller the growth factor. The simplest way to model this decline in the growth factor is to replace *r* by $r(1 - y(k))$, so that as $y(k)$ approaches the theoretical limit (1 in this case), the effective growth factor goes to zero. The difference equation then becomes:
+
+$$y(k + 1) = r(1 - y(k))y(k) \tag{2.42}$$
+
+which is the standard form for the logistic equation.
+
+In the next series of exercises, we explore the solution of Eq. (2.42) as we vary the value of *r*. We find that qualitatively different classes of solutions may appear for different values of *r*.
+
+We start by writing the simple subroutine that models Eq. (2.42):
+
+```matlab
+N=127; r=1; y(1)=1;
+m=1:N+1;
+for k=1:N
+    y(k+1)=r*(1-y(k))*y(k);
+end
+plot(m,y,'*-')
+```
+
+The values of *r* and $y(1)$ are to be keyed in for each of the specific cases under consideration.
+
+## In-Class Exercises
+
+In the following two problems, we take in the logistic equation $r > 1$ and $y(1) < 1$.
+
+**Pb. 2.21** Consider the case that $1 < r < 3$ and $y(1) = 0.5$.
+
+a. Show, by running the above program for different values of *r* and $y(1)$, that the iteration of the logistic equation leads to the limit
+
+$$y(N \gg 1) = \left(\frac{r-1}{r}\right).$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_62.md b/samples/texts/348597/page_62.md
new file mode 100644
index 0000000000000000000000000000000000000000..29b8993b16fd5f21a0c96d482f4e3846bcb3bce8
--- /dev/null
+++ b/samples/texts/348597/page_62.md
@@ -0,0 +1,17 @@
+b. Does the value of this limit change if the value of y(1) is modified, while r is kept fixed?
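+
+The limit stated in Pb. 2.21a is easy to confirm numerically (a Python sketch):
+
```python
# For 1 < r < 3 the logistic iterates settle at (r-1)/r regardless of y(1).
for r in (1.5, 2.0, 2.5):
    for y1 in (0.2, 0.5, 0.7):
        y = y1
        for _ in range(2000):
            y = r * (1 - y) * y
        assert abs(y - (r - 1) / r) < 1e-6
```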
+
+**Pb. 2.22** Find the iterates of the logistic equation for the following values of r: 3.1, 3.236068, 3.3, 3.498561699, 3.566667, and 3.569946, assuming the following three initial conditions:
+
+$$y(1) = 0.2, \quad y(1) = 0.5, \quad y(1) = 0.7$$
+
+In particular, specify for each case:
+
+a. The period of the orbit for large *N*, and the values of each of the iterates.
+
+b. Whether the orbit is super-stable (i.e., the periodicity is present for all values of *N*).
+
+This section provided a quick glimpse of two types of nonlinear difference equations, one of which does not necessarily converge to a single value. We discovered that a great number of classes of solutions may exist for different values of the equation's parameters. Section 2.8 generalizes this study to two dimensions and illustrates nonlinear difference equations in 2-D geometry. The study of these equations has led in the last few decades to various mathematical discoveries in the branches of mathematics called Symbolic Dynamical theory, Fractal Geometry, and Chaos theory, which have far-reaching implications in many fields of engineering. The interested reader is encouraged to consult the References section of this book for a deeper understanding of this subject.
+
+## 2.8 Fractals and Computer Art
+
+In Section 2.4, we introduced a fractal type having *a priori* well-defined and apparent spatial symmetries, namely, the Koch curve. In Section 2.7, we discovered that a certain type of 1-D nonlinear difference equation may lead, for a certain range of parameters, to a sequence that may have different orbits. Section 2.8.1 explores examples of 2-D fractals, generated by coupled difference equations, whose solution morphology can also be quite distinct due solely to a minor change in one of the parameters of the difference equations. Section 2.8.2 illustrates another possible feature observed in some types of fractals. We show how the 2-D orbit representing the solution of a particular nonlinear difference equation can also be substantially changed through a minor variation in the initial conditions of the equation.
\ No newline at end of file
diff --git a/samples/texts/348597/page_63.md b/samples/texts/348597/page_63.md
new file mode 100644
index 0000000000000000000000000000000000000000..3f000fc9916c294fc1a4864d3ca0428ab3979c7d
--- /dev/null
+++ b/samples/texts/348597/page_63.md
@@ -0,0 +1,25 @@
+FIGURE 2.2
+Plot of the Mira curve for a = -0.99. The starting point coordinates are (4, 0). Top panel: b = 1; bottom panel: b = 0.98.
+
+**2.8.1 Mira's Model**
+
+The coordinates of the points on the Mira curve are generated iteratively
+through the following system of nonlinear difference equations:
+
+$$
+\begin{align}
+x(k + 1) &= by(k) + F(x(k)) \notag \\
+y(k + 1) &= -x(k) + F(x(k + 1))
+\end{align}
+\tag{2.43}
+$$
+
+where
+
+$$
+F(x) = ax + \frac{2(1-a)x^2}{1+x^2} \tag{2.44}
+$$
+
+We illustrate the different morphologies of the solutions in two different
+cases, and leave other cases as exercises for your fun and exploration.
\ No newline at end of file
diff --git a/samples/texts/348597/page_64.md b/samples/texts/348597/page_64.md
new file mode 100644
index 0000000000000000000000000000000000000000..a8cfbb68a45e276d78bedbd1f46991eca40ba988
--- /dev/null
+++ b/samples/texts/348597/page_64.md
@@ -0,0 +1,25 @@
+**Case 1** Here, $a = -0.99$, and we consider the cases $b = 1$ and $b = 0.98$. The starting point coordinates are (4, 0). See Figure 2.2. This case can be viewed by editing and executing the following script M-file:
+
+```matlab
+for n=1:12000
+ a=-0.99;b1=1;b2=0.98;
+ x1(1)=4;y1(1)=0;x2(1)=4;y2(1)=0;
+ x1(n+1)=b1*y1(n)+a*x1(n)+2*(1-a)*(x1(n))^2/(1+(x1(n)^2));
+ y1(n+1)=-x1(n)+a*x1(n+1)+2*(1-a)*(x1(n+1)^2)/(1+(x1(n+1)^2));
+ x2(n+1)=b2*y2(n)+a*x2(n)+2*(1-a)*(x2(n))^2/(1+(x2(n)^2));
+ y2(n+1)=-x2(n)+a*x2(n+1)+2*(1-a)*(x2(n+1)^2)/(1+(x2(n+1)^2));
+end
+
+subplot(2,1,1); plot(x1,y1,'.')
+title('a=-0.99 b=1')
+subplot(2,1,2); plot(x2,y2,'.')
+title('a=-0.99 b=0.98')
+```
+
+**Case 2** Here, $a = 0.7$, and we consider the cases $b = 1$ and $b = 0.9998$. The starting point coordinates are (0, 12.1). See Figure 2.3.
+
+## In-Class Exercise
+
+**Pb. 2.23** Manifest the computer artist inside yourself. Generate new geometrical morphologies, in Mira's model, by new choices of the parameters ($-1 < a < 1$ and $b \approx 1$) and of the starting point. You can start with:
+
+| a | b1 | b2 | (x1, y1) |
+|---|---|---|---|
+| -0.48 | 1 | 0.93 | (4, 0) |
+| -0.25 | 1 | 0.99 | (3, 0) |
+| 0.1 | 1 | 0.99 | (3, 0) |
+| 0.5 | 1 | 0.9998 | (3, 0) |
+| 0.99 | 1 | 0.9998 | (0, 12) |
\ No newline at end of file
diff --git a/samples/texts/348597/page_65.md b/samples/texts/348597/page_65.md
new file mode 100644
index 0000000000000000000000000000000000000000..75b834798b6b9eeaba47e792ce06d740c28a3d88
--- /dev/null
+++ b/samples/texts/348597/page_65.md
@@ -0,0 +1,21 @@
+FIGURE 2.3
+Plot of the Mira curve for a = 0.7. The starting point coordinates are (0, 12.1). Top panel: b = 1, bottom panel: b = 0.9998.
+
+**2.8.2 Hénon's Model**
+
+The coordinates of the Hénon's orbits are generated iteratively through the following system of nonlinear difference equations:
+
+$$
+\begin{aligned}
+x(k+1) &= ax(k) - b(y(k) - (x(k))^2) \\
+y(k+1) &= bx(k) + a(y(k) - (x(k))^2)
+\end{aligned}
+\qquad (2.45) $$
+
+where $|a| \le 1$ and $b = \sqrt{1-a^2}$.
+
+Executing the following *script M-file* illustrates the possibility of generating two distinct orbits when the starting points of the iteration are slightly different from each other (here, *a* = 0.24). The initial point coordinates of the two cases are given, respectively, by (0.5696, 0.1622) and (0.5650, 0.1650). See Figure 2.4.
+
+```matlab
+a=0.24;
+b=0.9708;
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_66.md b/samples/texts/348597/page_66.md
new file mode 100644
index 0000000000000000000000000000000000000000..f4b2ade18c689daa5b1b3eceba7d2d0a88f01d7c
--- /dev/null
+++ b/samples/texts/348597/page_66.md
@@ -0,0 +1,26 @@
+FIGURE 2.4
+
+Plot of two Hénon orbits having the same a = 0.24 but different starting points. (o) corresponds to the orbit with starting point (0.5696, 0.1622), (x) corresponds to the orbit with starting point (0.5650, 0.1650).
+
+```matlab
+x1(1)=0.5696;y1(1)=0.1622;
+x2(1)=0.5650;y2(1)=0.1650;
+for n=1:120
+ x1(n+1)=a*x1(n)-b*(y1(n)-(x1(n))^2);
+ y1(n+1)=b*x1(n)+a*(y1(n)-(x1(n))^2);
+ x2(n+1)=a*x2(n)-b*(y2(n)-(x2(n))^2);
+ y2(n+1)=b*x2(n)+a*(y2(n)-(x2(n))^2);
+end
+plot(x1,y1,'ro',x2,y2,'bx')
+```
+
+### 2.8.2.1 Demonstration
+
+Different orbits for Hénon's model can be plotted if different starting points are randomly chosen. Executing the following *script M-file* illustrates the $a = 0.24$ case, with random initial conditions. See Figure 2.5.
+
+```matlab
+a=0.24;
+b=sqrt(1-a^2);
+rx=rand(1,40);
+ry=rand(1,40);
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_67.md b/samples/texts/348597/page_67.md
new file mode 100644
index 0000000000000000000000000000000000000000..cc113005bde38fff711dc570a9ea6f28442d45d4
--- /dev/null
+++ b/samples/texts/348597/page_67.md
@@ -0,0 +1,18 @@
+FIGURE 2.5
+Plot of multiple Hénon orbits having the same a = 0.24 but random starting points.
+
+```matlab
+for n=1:1500
+    for m=1:40
+        x(1,m)=-0.99+2*rx(m);
+        y(1,m)=-0.99+2*ry(m);
+        x(n+1,m)=a*x(n,m)-b*(y(n,m)-(x(n,m))^2);
+        y(n+1,m)=b*x(n,m)+a*(y(n,m)-(x(n,m))^2);
+    end
+end
+plot(x,y,'r.')
+axis([-1 1 -1 1])
+axis square
+```
+
+## 2.9 Generation of Special Functions from Their Recursion Relations*
+
+In this section, we go back to more classical mathematics. We consider the case of the special functions of mathematical physics. In this case, we need to
\ No newline at end of file
diff --git a/samples/texts/348597/page_68.md b/samples/texts/348597/page_68.md
new file mode 100644
index 0000000000000000000000000000000000000000..cdadc84e888a413d220b0b9f308feba846a58ae1
--- /dev/null
+++ b/samples/texts/348597/page_68.md
@@ -0,0 +1,34 @@
+define the iterated quantities by two indices: the order of the function and the value of the argument of the function.
+
+In many electrical engineering problems, it is convenient to use a class of polynomials called the orthogonal polynomials. For example, in filter design, the set of Chebyshev polynomials is of particular interest.
+
+The Chebyshev polynomials can be defined through recursion relations, which are similar to difference equations and relate the value of a polynomial of a certain order at a particular point to the values of the polynomials of lower orders at the same point. These are defined through the following recursion relation:
+
+$$T_k(x) = 2xT_{k-1}(x) - T_{k-2}(x) \quad (2.46)$$
+
+Now, instead of giving two values for the initial conditions as we would have in difference equations, we need to give the explicit functions for two of the lower-order polynomials. For example, the first- and second-order Chebyshev polynomials are
+
+$$T_1(x) = x \quad (2.47)$$
+
+$$T_2(x) = 2x^2 - 1 \quad (2.48)$$
+
+**Example 2.7**
+
+Plot over the interval $0 \le x \le 1$, the fifth-order Chebyshev polynomial.
+
+*Solution:* The strategy to solve this problem is to build an array to represent the *x*-interval, and then use the difference equation routine to find the value of the Chebyshev polynomial at each value of the array, remembering that the indexing should always be a positive integer.
+
+The following program implements the above strategy:
+
+```matlab
+N=5;
+x1=1:101;
+x=(x1-1)/100;
+T(1,x1)=x;
+T(2,x1)=2*x.^2-1;
+for k=3:N
+ T(k,x1)=2.*x.*T(k-1,x1)-T(k-2,x1);
+end
+y=T(N,x1);
+plot(x,y)
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_69.md b/samples/texts/348597/page_69.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e088d64fb94015c1a1ee93275b9a1312eca8337
--- /dev/null
+++ b/samples/texts/348597/page_69.md
@@ -0,0 +1,43 @@
+**In-Class Exercise**
+
+**Pb. 2.24** By comparing their plots, verify that the above definition for the Chebyshev polynomial gives the same graph as that obtained from the closed-form expression:
+
+$$T_N(x) = \cos(N \cos^{-1}(x)) \quad \text{for } 0 \le x \le 1$$
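+
+For readers who want a quick non-graphical check as well, the recursion and the closed form can be compared pointwise (a Python sketch):
+
```python
import math

# T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x), seeded with T_1 = x and T_2 = 2x^2 - 1,
# compared against the closed form T_N(x) = cos(N arccos x) on [0, 1].
def cheb(N, x):
    t1, t2 = x, 2 * x ** 2 - 1
    if N == 1:
        return t1
    for _ in range(N - 2):
        t1, t2 = t2, 2 * x * t2 - t1
    return t2

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    for N in range(1, 8):
        assert abs(cheb(N, x) - math.cos(N * math.acos(x))) < 1e-9
```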
+
+In addition to the Chebyshev polynomials, you will encounter other orthogonal polynomials in your engineering studies. In particular, the solutions of a number of problems in electromagnetic theory and in quantum mechanics (QM) call on the Legendre, Hermite, and Laguerre polynomials, among others. In the following exercises, we explore, in a preliminary manner, some of these polynomials. We also explore another important type of special function: the spherical Bessel function.
+
+**Homework Problems**
+
+**Pb. 2.25** Plot the function $y$ defined, in each case:
+
+a. Legendre polynomials:
+
+$$(m+2)P_{m+2}(x) = (2m+3)xP_{m+1}(x) - (m+1)P_m(x)$$
+
+$$P_1(x) = x \quad \text{and} \quad P_2(x) = \frac{1}{2}(3x^2 - 1)$$
+
+For $0 \le x \le 1$, plot $y = P_5(x)$.
+
+These polynomials describe the electric field distribution from a nonspherical charge distribution.
+
+b. Hermite polynomials:
+
+$$H_{m+2}(x) = 2xH_{m+1}(x) - 2(m+1)H_m(x)$$
+
+$$H_1(x) = 2x \quad \text{and} \quad H_2(x) = 4x^2 - 2$$
+
+For $0 \le x \le 6$, plot $y = A_5 H_5(x) \exp(-x^2/2)$, where $A_m = (2^m m! \sqrt{\pi})^{-1/2}$.
+
+The function $y$ describes the QM wave-function of the harmonic oscillator.
+
+c. Laguerre polynomials:
+
+$$L_{m+2}(x) = \frac{(2m + 3 - x)L_{m+1}(x) - (m+1)L_m(x)}{m+2}$$
+
+$$L_1(x) = 1 - x \quad \text{and} \quad L_2(x) = 1 - 2x + \frac{x^2}{2}$$
+
+For $0 \le x \le 6$, plot $y = \exp(-x/2)L_5(x)$.
+
+The Laguerre polynomials figure in the solutions of the QM problem of
+atoms and molecules.
\ No newline at end of file
diff --git a/samples/texts/348597/page_7.md b/samples/texts/348597/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..7b5e6931d01b1037f507b7dcb907999e5ce6e6a0
--- /dev/null
+++ b/samples/texts/348597/page_7.md
@@ -0,0 +1,17 @@
+computer engineer solves in the course of completing many of his or her design projects.
+
+My experience indicates that you can achieve the above goals through the following work habits that I usually recommend to my own students:
+
+* Read carefully the material from this book that is assigned to you by your instructor for the upcoming week, and make sure to solve the suggested preparatory exercises in advance of the weekly lecture.
+
+* Attend the lecture and follow closely the material presented, in particular the solutions to the more difficult preparatory exercises and the demonstrations.
+
+* Following the lecture, make a list of questions on the preparatory material to which you still seek answers, and ask your instructor for help and clarification on these questions, preferably in the first 30 minutes of your computer lab session.
+
+* Complete the in-class exercises during the computer lab session. If you have not finished solving all in-class exercises, make sure you complete them on your own, when the lab is open, or at home if you own a computer, and certainly before the next class session, along with the problems designated in the book as homework problems and assigned to you by your instructor.
+
+In managing this course, I found it helpful for both students and instructors to require each student to solve all problems in a bound notebook. The advantage to the student is to have easy access to his or her previous work, personal notes, and reminders that he or she made as the course progressed. The advantage to the instructor is to enhance his or her ability to assess, more easily and readily, an individual student's progress as the semester progresses.
+
+This book may be used for self-study by readers with perhaps a little more mathematical maturity acquired through a second semester of college calculus. The advanced reader of this book who is familiar with numerical methods will note that, in some instances, I did not follow the canonical order for the sequence of presentation of certain algorithms, thus sacrificing some optimality in the structure of some of the elementary programs included. This was necessitated by the goal I set for this book, which is to introduce both analytical and computational tools simultaneously.
+
+The sections of this book that are marked with asterisks include material that I assigned as projects to students with either strong theoretical interest or more mathematical maturity than a typical second semester freshman student. Although incorporated in the text, they can be skipped in a first reading. I hope that, by their inclusion, I will facilitate to the interested reader a smooth transition to some new mathematical concepts and computational tools that are of particular interest to electrical engineers.
\ No newline at end of file
diff --git a/samples/texts/348597/page_70.md b/samples/texts/348597/page_70.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e8453adf08897bf71e7c3b4cd12e475aeecd58e
--- /dev/null
+++ b/samples/texts/348597/page_70.md
@@ -0,0 +1,9 @@
+**Pb. 2.26** The recursion relations can, in addition to defining orthogonal polynomials, also define some special functions of mathematical physics. For example, the spherical Bessel functions that play an important role in defining the modes of spherical cavities in electrodynamics and scattering amplitudes in both classical and quantum physics are defined through the following recursion relation:
+
+$$j_{m+2}(x) = \left(\frac{3+2m}{x}\right) j_{m+1}(x) - j_m(x)$$
+
+With
+
+$$j_1(x) = \frac{\sin(x)}{x^2} - \frac{\cos(x)}{x} \quad \text{and} \quad j_2(x) = \left[\frac{3}{x^3} - \frac{1}{x}\right] \sin(x) - \frac{3\cos(x)}{x^2}$$
+
+Plot $j_5(x)$ over the interval $0 < x < 15$.
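+
+The recursion-marching pattern used for the orthogonal polynomials applies here as well; a sketch (starting the domain slightly above zero, our choice, avoids the division by zero at $x = 0$):
+
+```matlab
+x=0.05:.01:15;                             % avoid x=0
+j1=sin(x)./x.^2-cos(x)./x;                 % j_1(x)
+j2=(3./x.^3-1./x).*sin(x)-3*cos(x)./x.^2;  % j_2(x)
+for m=1:3                                  % generates j_3, j_4, j_5
+    j3=((2*m+3)./x).*j2-j1;
+    j1=j2; j2=j3;
+end
+plot(x,j3)                                 % j_5(x)
+```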
\ No newline at end of file
diff --git a/samples/texts/348597/page_71.md b/samples/texts/348597/page_71.md
new file mode 100644
index 0000000000000000000000000000000000000000..d98456a8662f3eceeb8cc3e4868a4b8320166bce
--- /dev/null
+++ b/samples/texts/348597/page_71.md
@@ -0,0 +1,28 @@
+# 3
+
+## Elementary Functions and Some of Their Uses
+
+The purpose of this chapter is to illustrate and build some practice in the use of elementary functions in selected basic electrical engineering problems. We also construct some simple signal functions that you will encounter in future engineering analysis and design problems.
+
+**NOTE** It is essential to review the Supplement at the end of this book in case you want to refresh your memory on the particular elementary functions covered in the different chapter sections.
+
+### 3.1 Function Files
+
+To analyze and graph functions using MATLAB, we have to be able to construct functions that can be called from within the MATLAB environment. In MATLAB, functions are made and stored in *function M-files*. We already used one kind of *M-file* (script file) to store various executable commands in a routine. *Function M-files* differ from *script M-files* in that they have designated input(s) and output(s).
+
+The following is an example of a function. Type and save the following function in a file named `aline.m`:
+
+```matlab
+function y=aline(x)
+% (x,y) is a point on a line that has slope 3
+% and y-intercept -5
+y=3*x-5;
+```
+
+**NOTES**
+
+1. The word **function** at the beginning of the file makes it a function rather than a script file.
+
+2. The function name, `aline`, that appears in the first line of this file should match the name that we assign to this file name when saving it (i.e., `aline.m`).
+
+Having created a *function M-file* in your user volume, move to the command window to learn how to call this function. There are two basic ways to use a function file:
\ No newline at end of file
diff --git a/samples/texts/348597/page_72.md b/samples/texts/348597/page_72.md
new file mode 100644
index 0000000000000000000000000000000000000000..3dfbe6eee8b25bd283584a69a2e8cbc50dfd8c08
--- /dev/null
+++ b/samples/texts/348597/page_72.md
@@ -0,0 +1,32 @@
+1. To evaluate the function for a specified value x=x1, enter
+ `aline(x1)` to get the function value at this point; that is, $y_1 = 3x_1 - 5$.
+
+2. To plot $y_1 = 3x_1 - 5$ for a range of $x$ values, say [-2, 7], enter:
+
+`fplot('aline', [-2,7])`
+
+**NOTE** The above example illustrates a function with one input and one output. The construction of a *function M-file* of a function having *n* inputs and *m* outputs starts with:
+
+`function [y1,y2,...,ym]=funname(x1,x2,...,xn)`
+
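+For instance, a hypothetical two-input, two-output function (the name `rect` and its contents are ours, purely for illustration) could be stored in `rect.m` as:
+
+```matlab
+function [p,a]=rect(l,w)
+% returns the perimeter p and the area a
+% of a rectangle with sides l and w
+p=2*(l+w);
+a=l*w;
+```
+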
+Above, using a *function M-file*, we showed a method to plot the defined function `aline` on the interval (-2, 7) using the `fplot` command. An alternative method is, of course, to use arrays, in the manner specified in Chapter 1. Specifically, we could have plotted the `'aline'` function in the following alternate method:
+
+```
+x=-2:.01:7;
+y=3*x-5;
+plot(x,y)
+```
+
+To compare the two methods, we note that:
+
+1. **plot** requires a user-supplied *x*-array (abscissa points) and a constructed *y*-array (ordinate points), while **fplot** only requires the name of the function file, defined previously and stored in a *function M-file* and the endpoints of the interval.
+
+2. The `fplot` command automatically creates a sampled domain that is used to plot the function, taking into account the type of function being plotted and using enough points to make the display appear continuous. On the other hand, `plot` requires that you choose the array length yourself.
+
+Both methods, therefore, have their own advantages and it depends on the particular problem whether to use `plot` or `fplot`.
+
+We are now in a position to explore the use of some of the most familiar functions.
+
+## 3.2 Examples with Affine Functions
+
+The equation of an affine function is given by:
\ No newline at end of file
diff --git a/samples/texts/348597/page_73.md b/samples/texts/348597/page_73.md
new file mode 100644
index 0000000000000000000000000000000000000000..084bbef899076920ce470ec59fb3bdb89e15ed24
--- /dev/null
+++ b/samples/texts/348597/page_73.md
@@ -0,0 +1,27 @@
+$$y(x) = ax + b \tag{3.1}$$
+
+## In-Class Exercises
+
+**Pb. 3.1** Generate four *function M-files* for the following four functions:
+
+$$y_1(x) = 3x + 2; \quad y_2(x) = 3x + 5; \quad y_3(x) = -\frac{x}{3} + 3; \quad y_4(x) = -\frac{x}{3} + 4$$
+
+**Pb. 3.2** Sketch the functions of **Pb. 3.1** on the interval $-5 < x < 5$. What can you say about the angle between each pair of lines? (Did you remember to make your aspect ratio $= 1$?)
+
+**Pb. 3.3** Read off the graphs the coordinates of the points of intersection of the lines in **Pb. 3.1**. (Become familiar with the use and syntax of the `zoom` and `ginput` commands for a more accurate reading of the coordinates of a point.)
+
+**Pb. 3.4** Write a *function M-file* for the line passing through a given point and intersecting another given line at a given angle.
+
+$$\left( \text{Hint: } \tan(a+b) = \frac{\tan(a) + \tan(b)}{1 - \tan(a)\tan(b)} \right)$$
+
+## Application to a Simple Circuit
+
+The purpose of this application is to show that:
+
+1. The solution to a simple circuit problem can be viewed as the simultaneous solution of two affine equations, or, equivalently, as the intersection of two straight lines.
+
+2. The variations in the circuit performance can be studied through a knowledge of the affine functions, relating the voltages and the current.
+
+Consider the simple circuit shown in Figure 3.1. In the terminology of the circuit engineer, the voltage source $V_S$ is called the input to the circuit, and the current $I$ and the voltage $V$ are called the circuit outputs. Thus, this is an example of a system with one input and two outputs. As you may have studied in high school physics courses, all of circuit analysis with resistors as elements can be accomplished using Kirchhoff's current law, Kirchhoff's voltage law, and Ohm's law.
+
+* Kirchhoff's voltage law: The sum of all voltage drops around a closed loop is balanced by the sum of all voltage sources around the same loop.
\ No newline at end of file
diff --git a/samples/texts/348597/page_74.md b/samples/texts/348597/page_74.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8653777db92e8a7354a23f9ced04ae565c90fd0
--- /dev/null
+++ b/samples/texts/348597/page_74.md
@@ -0,0 +1,23 @@
+FIGURE 3.1
+A simple resistor circuit.
+
+* Kirchhoff's current law: The algebraic sum of all currents entering (exiting) a circuit node must be zero. (Assign the + sign to those currents that are entering the node, and the – sign to those currents exiting the node.)
+
+* Ohm's law: The ratio of the voltage drop across a resistor to the current passing through the resistor is a constant, defined as the resistance of the element; that is, $R = \frac{V}{I}$
+
+The quantities we are looking for include (1) the current *I* through the circuit, and (2) the voltage *V* across the load resistor *R*.
+
+Using Kirchhoff's voltage law and Ohm's law for resistance R₁, we obtain:
+
+$$V_s = V + V_1 = V + IR_1 \quad (3.2)$$
+
+while applying Ohm's law for the load resistor gives:
+
+$$V = IR \quad (3.3)$$
+
+These two equations can be rewritten in the form of affine functions of *I* as
+functions of *V*:
+
+$$L_1: I = \frac{(V_s - V)}{R_1} \qquad (3.4)$$
+
+$$L_2: I = \frac{V}{R} \qquad (3.5)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_75.md b/samples/texts/348597/page_75.md
new file mode 100644
index 0000000000000000000000000000000000000000..a02aa6c15ab55d23042de4ba7feab01b713c180e
--- /dev/null
+++ b/samples/texts/348597/page_75.md
@@ -0,0 +1,34 @@
+If we know the value of $V_s$, $R$, and $R_1$, then Eqs. (3.4) and (3.5) can be represented as lines drawn on a plane with ordinate $I$ and abscissa $V$.
+
+Suppose we are interested in finding the value of the current $I$ and the voltage $V$ when $R_1 = 100\Omega$, $R = 100\Omega$, and $V_s = 5 V$. To solve this problem graphically, we plot each of the $L_1$ and $L_2$ functions on the same graph and find their point of intersection.
+
+The functions $L_1$ and $L_2$ are programmed as follows:
+
+```matlab
+function I=L1(V)
+ R1=100;
+ R=100;
+ Vs=5;
+ I=(Vs-V)/R1;
+
+function I=L2(V)
+ R1=100;
+ R=100;
+ Vs=5;
+ I=V/R;
+```
+
+Because the voltage $V$ is smaller than the source potential, due to losses in the resistor, a suitable domain for $V$ would be [0, 5]. We now plot the two lines on the same graph:
+
+```matlab
+fplot('L1', [0,5])
+hold on
+fplot('L2', [0,5])
+hold off
+```
+
+## In-Class Exercise
+
+**Pb. 3.5** Verify that the two lines $L_1$ and $L_2$ intersect at the point: ($I = 0.025$, $V = 2.5$).
+
+In the above analysis, we had to declare the numerical values of the parameters $R_1$ and $R$ in the definition of each of the two functions. This can, at best, be tedious if you are dealing with more than two *function M-files* or two parameters; or worse, can lead to errors if you overlook changing the values of the parameters in any of the relevant *function M-files* when you decide to modify them. To avoid these types of problems, it is good practice to call all
\ No newline at end of file
diff --git a/samples/texts/348597/page_76.md b/samples/texts/348597/page_76.md
new file mode 100644
index 0000000000000000000000000000000000000000..983680b77a5c31338d74046f2e4bfe2a2c7e7c3b
--- /dev/null
+++ b/samples/texts/348597/page_76.md
@@ -0,0 +1,33 @@
+functions from a single *script M-file* and link the parameters' values together so that you only need to edit the calling *script M-file*. To link the values of parameters to all functions in use, you can use the MATLAB **global** command. To see how this works, rewrite the above *function M-files* as follows:
+
+```matlab
+function I=L1(V)
+ global R1 R % global statement
+ Vs=5;
+ I=(Vs-V)/R1;
+
+function I=L2(V)
+ global R1 R % global statement
+ Vs=5;
+ I=V/R;
+```
+
+The calling *script M-file* now reads:
+
+```
+global R1 R %global statement
+R1=100; %set global resistance values
+R=100;
+V=0:.01:5; %set the voltage range
+I1=L1(V); %evaluate I1
+I2=L2(V); %evaluate I2
+plot(V,I1,V,I2,'-.') %plot the two curves
+```
+
+## In-Class Exercise
+
+**Pb. 3.6** In the above *script M-file*, we used arrays and the `plot` command. Rewrite this script file such that you make use of the `fplot` command.
+
+## Further Consideration of Figure 3.1
+
+Calculating the circuit values for fixed resistor values is important, but we can also ask about the behavior of the circuit as we vary the resistor values. Suppose we keep $R_1 = 100\Omega$ and $V_s = 5V$ fixed, but vary the value that $R$ can take. To this end, an analytic solution would be useful because it would give us the circuit responses for a range of values of the circuit parameters $R_1, R, V_s$. However, a plot of the lines $L_1$ and $L_2$ for different values of $R$ can also provide a great deal of qualitative information regarding how the simultaneous solution to $L_1$ and $L_2$ changes as the value of $R$ changes.
\ No newline at end of file
diff --git a/samples/texts/348597/page_77.md b/samples/texts/348597/page_77.md
new file mode 100644
index 0000000000000000000000000000000000000000..e33fef159786c0e446b4dc644aced0427e05cf75
--- /dev/null
+++ b/samples/texts/348597/page_77.md
@@ -0,0 +1,28 @@
+The following problem serves to give you a better qualitative idea as to
+how the circuit outputs vary as different values are chosen for the resistor R.
+
+**In-Class Exercise**
+
+**Pb. 3.7** This problem still refers to the circuit of Figure 3.1.
+
+a. Redraw the lines $L_1$ and $L_2$, using the previous values for the circuit parameters.
+
+b. Holding the graph for the case $R = 100\Omega$, sketch $L_1$ and $L_2$ again for $R = 50\Omega$ and $R = 500\Omega$. How do the values of the voltage and the current change as $R$ increases and decreases?
+
+c. Determine the largest values of the current and voltage that can exist in this circuit when $R$ varies over non-negative values.
+
+d. The usual nomenclature for the circuit conditions is as follows: the circuit is called an open circuit when $R = \infty$, while it is called a short circuit when $R = 0$. What are the $(V, I)$ solutions for these two cases? Can you generalize your statement?
+
+Now, to validate the qualitative results obtained in **Pb. 3.7**, let us solve analytically the $L_1$ and $L_2$ system. Solving this system of two linear equations in two unknowns gives, for the current and the voltage, the following expressions:
+
+$$V(R) = \left( \frac{R}{R + R_1} \right) V_s \qquad (3.6)$$
+
+$$I(R) = \left( \frac{1}{R + R_1} \right) V_s \qquad (3.7)$$
+
+Note that the above analytic expressions for *V* and *I* are neither linear nor affine functions in the value of the resistance.
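+
+Eqs. (3.6) and (3.7) are easy to explore numerically; a sketch (our own, reusing $R_1 = 100\ \Omega$ and $V_s = 5$ V from the earlier example):
+
+```matlab
+R1=100; Vs=5;            % fixed circuit parameters
+R=0:1:1000;              % load resistance range, ohms
+V=(R./(R+R1))*Vs;        % Eq. (3.6)
+I=(1./(R+R1))*Vs;        % Eq. (3.7)
+subplot(2,1,1), plot(R,V), ylabel('V')
+subplot(2,1,2), plot(R,I), ylabel('I'), xlabel('R')
+```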
+
+**In-Class Exercise**
+
+**Pb. 3.8** This problem still refers to the circuit of Figure 3.1.
+
+a. Keeping the values of $V_s$ and $R_1$ fixed, sketch the functions $V(R)$ and $I(R)$ for this circuit, and verify that the solutions you found previously in Pb. 3.7, for the various values of $R$, agree with those found here.
\ No newline at end of file
diff --git a/samples/texts/348597/page_78.md b/samples/texts/348597/page_78.md
new file mode 100644
index 0000000000000000000000000000000000000000..dfe2c1d31feb2401cedf6ebf82f17a49665eb786
--- /dev/null
+++ b/samples/texts/348597/page_78.md
@@ -0,0 +1,27 @@
+b. Given that the power lost in a resistive element is the product of the voltage across the resistor multiplied by the current through the resistor, plot the power through the variable resistor as a function of $R$.
+
+c. Determine the value of $R$ such that the power lost in this resistor is maximized.
+
+d. Find, in general, the relation between $R$ and $R_1$ that ensures that the power lost in the load resistance is maximized. (This general result is known as the maximum power transfer theorem.)
+
+### 3.3 Examples with Quadratic Functions
+
+A quadratic function is of the form:
+
+$$y(x) = ax^2 + bx + c \quad (3.8)$$
+
+#### Preparatory Exercises
+
+**Pb. 3.9** Find the coordinates of the vertex of the parabola described by Eq. (3.8) as functions of the *a*, *b*, *c* parameters.
+
+**Pb. 3.10** If $a = 1$, show that the quadratic Eq. (3.8) can be factored as:
+
+$$y(x) = (x - x_{+})(x - x_{-})$$
+
+where $x_{\pm}$ are the roots of the quadratic equation. Further, show that, for arbitrary $a$, the product of the roots is $\frac{c}{a}$, and their sum is $\frac{-b}{a}$.
+
+#### In-Class Exercises
+
+**Pb. 3.11** Develop a function M-file that inputs the two real roots of a second-degree equation and returns the value of this function for an arbitrary $x$. Is this function unique?
+
+**Pb. 3.12** In your elementary mechanics course, you learned that the trajectory of a projectile in a gravitational field (oriented in the –y direction) with
\ No newline at end of file
diff --git a/samples/texts/348597/page_79.md b/samples/texts/348597/page_79.md
new file mode 100644
index 0000000000000000000000000000000000000000..1af0a565899abb98b5a0a83ea7eb6cad68322ab2
--- /dev/null
+++ b/samples/texts/348597/page_79.md
@@ -0,0 +1,41 @@
+an initial velocity $v_{0,x}$ in the x-direction and $v_{0,y}$ in the y-direction satisfies the
+following parametric equations:
+
+$$x = v_{0,x}t \quad \text{and} \quad y = -\frac{1}{2}gt^2 + v_{0,y}t$$
+
+where *t* is time, the origin of the axes was chosen to correspond to the position of the particle at *t* = 0, and *g* = 9.8 m/s² is the gravitational acceleration.
+
+a. By eliminating the time t, show that the projectile trajectory y(x) is a parabola.
+
+b. Noting that the components of the initial velocity can be written as functions of the projectile's initial speed and its angle of inclination:
+
+$$v_{0,y} = v_0 \sin(\phi) \quad \text{and} \quad v_{0,x} = v_0 \cos(\phi)$$
+
+show that, for a given initial speed, the maximum range for the projectile is achieved when the inclination angle of the initial velocity is 45°.
+
+c. Plot the range for a fixed inclination angle as a function of the initial speed.
+
+## 3.4 Examples with Polynomial Functions
+
+As pointed out in the Supplement, a polynomial function is an expression of the form:
+
+$$p(x) = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0 \quad (3.9)$$
+
+where $a_n \neq 0$ for an $n$-th-degree polynomial. In MATLAB, we can represent the
+polynomial function as an array:
+
+$$p = [a_n a_{n-1} \dots a_0] \tag{3.10}$$
+
+**Example 3.1**
+
+You are given the array of coefficients of the polynomial. Write a function M-file for this polynomial using array operations. Let p = [1 3 2 1 0 3]:
+
+Solution:
+
+```matlab
+function y=polfct(x)
+p=[1 3 2 1 0 3];
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_8.md b/samples/texts/348597/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..de2fec64720c811331a5218574fcfdedb15b14e1
--- /dev/null
+++ b/samples/texts/348597/page_8.md
@@ -0,0 +1,11 @@
+This text greatly benefited from course material previously prepared by my colleagues in the departments of electrical engineering and computer science at City College of the City University of New York, in particular, P. Combes, I. Gladkova, B. Gross, and F. Thau. They provided either the starting point for my subsequent efforts in this course, or the peer critique for the early versions of this manuscript. I owe them many thanks and, of course, do not hold them responsible for any of the remaining imperfections in the text.
+
+The preparation of this book also owes a lot to my students. Their questions and interest in the material contributed to many modifications in the order and in the presentation of the different chapters. Their desire for working out more applications led me to expand the scope of the examples and exercises included in the text. To all of them, I am grateful.
+
+I am also grateful to Erwin Cohen, who introduced me to the fine team at CRC Press, and to Jerry Papke whose stewardship of the project from start to end at CRC Press was most supportive and pleasant. The editorial and production teams at CRC in particular, Samar Haddad, the project editor, deserve credit for the quality of the final product rendering. Naomi Fernandes and her colleagues at The MathWorks Inc. kindly provided me with a copy of the new release of MATLAB for which I am grateful.
+
+I dedicate this book to Azza, Tala, and Nigh whose support and love always made difficult tasks a lot easier.
+
+**Jamal T. Manassah**
+
+New York, January 2001
\ No newline at end of file
diff --git a/samples/texts/348597/page_80.md b/samples/texts/348597/page_80.md
new file mode 100644
index 0000000000000000000000000000000000000000..e091f77ff9d37588789de2531c7fee827b998949
--- /dev/null
+++ b/samples/texts/348597/page_80.md
@@ -0,0 +1,31 @@
+```matlab
+L = length(p);
+v = x.^[(L-1):-1:0];
+y = sum(p.*v);
+```
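+
+Alternatively, MATLAB's built-in `polyval` evaluates a polynomial directly from its coefficient array:
+
+```matlab
+p=[1 3 2 1 0 3];
+y=polyval(p,2)    % value of this polynomial at x=2, i.e. 103
+```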
+
+## In-Class Exercises
+
+**Pb. 3.13** Show that, for the polynomial *p* defined by Eq. (3.9), the product of the roots is $(-1)^n \frac{a_0}{a_n}$, and the sum of the roots is $-\frac{a_{n-1}}{a_n}$.
+
+**Pb. 3.14** Find graphically the real roots of the polynomial $p = [1 \ 3 \ 2 \ 1 \ 0 \ 3]$.
+
+## 3.5 Examples with the Trigonometric Functions
+
+A time-dependent cosine function of the form:
+
+$$x = a \cos(\omega t + \phi) \tag{3.11}$$
+
+appears often in many applications of electrical engineering: *a* is called the amplitude, ω the angular frequency, and φ the phase. Note that we do not have to have a separate discussion of the sine function because the sine function, as shown in the Supplement, differs from the cosine function by a constant phase. Therefore, by suitably changing only the value of the phase parameter, it is possible to transform the sine function into a cosine function.
+
+In the following example, we examine the period of the different powers of the cosine function; your preparatory task is to predict analytically the relationship between the periods of the two curves given in Example 3.2 and then verify your answer numerically.
+
+### Example 3.2
+
+Plot simultaneously $x_1(t) = \cos^3(t)$ and $x_2(t) = \cos(t)$ for $t \in [0, 6\pi]$.
+
+*Solution:* To implement this task, edit and execute the following *script M-file*:
+
+```m
+t=0:.2:6*pi; % t-array
+a=1;w=1; % desired parameters
+x1=a*(cos(w*t)).^3; % x1-array constructed (note the element-wise power .^)
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_81.md b/samples/texts/348597/page_81.md
new file mode 100644
index 0000000000000000000000000000000000000000..56296e9226d2f119d461112245cbc8109bb04903
--- /dev/null
+++ b/samples/texts/348597/page_81.md
@@ -0,0 +1,28 @@
+```m
+x2=a*cos(w*t); % x2-array constructed
+plot(t,x1,t,x2,'--')
+```
+
+## In-Class Exercises
+
+**Pb. 3.15** Determine the phase relation between the sine and cosine functions of the same argument.
+
+**Pb. 3.16** The meaning of amplitude, angular frequency, and phase can be better understood using MATLAB to obtain graphs of the cosine function for a family of $a$ values, $\omega$ values, and $\phi$ values.
+
+a. With $\omega = 1$ and $\phi = \pi/3$, plot the cosine curves corresponding to $a = 1:0.1:2$.
+
+b. With $a = 1$ and $\omega = 1$, plot the cosine curves corresponding to $\phi = 0:\pi/10:\pi$.
+
+c. With $a = 1$ and $\phi = \pi/4$, plot the cosine curves corresponding to $\omega = 1:0.1:2$.
+
+## *Homework Problem*
+
+**Pb. 3.17** Find the period of the function obtained by summing the following three cosine functions:
+
+$$x_1 = 3\cos(t / 3 + \pi / 3), x_2 = \cos(t + \pi), x_3 = \frac{1}{3}\cos\left(\frac{3}{2}(t + \pi)\right)$$
+
+Verify your result graphically.
+
+## **3.6 Examples with the Logarithmic Function**
+
+### **3.6.1 Ideal Coaxial Capacitor**
+
+An ideal capacitor can be loosely defined as two metallic plates separated by an insulator. If a potential is established between the plates, for example through the means of connecting the two plates to the different terminals of a battery, the plates will be charged by equal and opposite charges, with the battery serving as a pump to move the charges around. The capacitance of a
\ No newline at end of file
diff --git a/samples/texts/348597/page_82.md b/samples/texts/348597/page_82.md
new file mode 100644
index 0000000000000000000000000000000000000000..deae68dc84b67b2742ec21ed3e4da581564c4b4d
--- /dev/null
+++ b/samples/texts/348597/page_82.md
@@ -0,0 +1,21 @@
+capacitor is defined as the ratio of the magnitude of the charge accumulated on either of the plates divided by the potential difference across the plates.
+
+Using the Gauss law of electrostatics, it can be shown that the capacitance per unit length of an infinitely long coaxial cable is:
+
+$$ \frac{C}{l} = \frac{2\pi\epsilon}{\ln(b/a)} \qquad (3.12) $$
+
+where *a* and *b* are the radii of the internal and external conductors, respectively, and *ε* is the permittivity of the dielectric material sandwiched between the conductors. (The permittivity of vacuum is approximately ε₀ = 8.85 × 10⁻¹² F/m, while the relative permittivities of oil, polystyrene, glass, quartz, bakelite, and mica are, respectively, 2.1, 2.6, 4.5–10, 3.8–5, 5, and 5.4–6.)
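+
+As a quick numerical check of Eq. (3.12), the radii below are our own illustrative values; note that MATLAB's `log` is the natural logarithm, as the formula requires:
+
+```matlab
+eps0=8.85e-12;              % permittivity of vacuum, F/m
+a=1e-3; b=5e-3;             % illustrative inner and outer radii, m
+CperL=2*pi*eps0/log(b/a)    % capacitance per unit length, F/m
+```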
+
+**In-Class Exercise**
+
+**Pb. 3.18** Find the ratio of the capacitance of two coaxial cables with the same dielectric material for, respectively: *b*/*a* = 5 and 50.
+
+### 3.6.2 The Decibel Scale
+
+In the SI units used by electrical engineers, the unit of power is the Watt. However, in a number of applications, it is convenient to express the power as a ratio of its value to a reference value. Because the value of this ratio can vary over several orders of magnitude, it is often more convenient to represent this ratio on a logarithmic scale, called the decibel scale:
+
+$$ G[\text{dB}] = 10 \log \left( \frac{P}{P_{\text{ref}}} \right) \qquad (3.13) $$
+
+where the function log is the logarithm to base 10. The table below converts the power ratio to its value in decibels (dB):
+
+| $P/P_{\text{ref}}$ (= $10^n$) | dB value (= $10n$) |
+|---|---|
+| 4 | 6 |
+| 2 | 3 |
+| 1 | 0 |
+| 0.5 | −3 |
+| 0.25 | −6 |
+| 0.1 | −10 |
+| 10⁻³ | −30 |
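+
+Eq. (3.13) translates directly into MATLAB, where `log10` is the base-10 logarithm:
+
+```matlab
+GdB=10*log10(4)    % a power ratio of 4 gives about 6 dB
+```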
\ No newline at end of file
diff --git a/samples/texts/348597/page_83.md b/samples/texts/348597/page_83.md
new file mode 100644
index 0000000000000000000000000000000000000000..4bd0fdd6100645dc2206b1aed567792433e58600
--- /dev/null
+++ b/samples/texts/348597/page_83.md
@@ -0,0 +1,28 @@
+**In-Class Exercise**
+
+**Pb. 3.19** In a measurement of two power values, $P_1$ and $P_2$, it was determined that:
+
+$$G_1 = 9 \text{ dB} \quad \text{and} \quad G_2 = -11 \text{ dB}$$
+
+Using the above table, determine the value of the ratio $P_1/P_2$.
+
+### 3.6.3 Entropy
+
+Given a random variable X (such as the number of spots on the face of a thrown die) whose possible outcomes are $x_1, x_2, x_3, ...$, and such that the probability for each outcome is, respectively, $p(x_1), p(x_2), p(x_3), ...$, then the entropy for this system, described by the outcome of one random variable, is defined by:
+
+$$H(X) = -\sum_{i=1}^{N} p(x_i) \log_2(p(x_i)) \quad (3.14)$$
+
+where N is the number of possible outcomes, and the logarithm is to base 2.
+The entropy is a measure of the uncertainty in the value of the random variable. In Information Theory, it will be shown that the entropy, so defined, is the number of bits, on average, required to describe the random variable X.
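+
+Eq. (3.14) is a one-line computation in MATLAB; a sketch with a sample distribution of our own choosing:
+
+```matlab
+p=[1/2 1/4 1/4];        % a sample probability distribution
+H=-sum(p.*log2(p))      % entropy in bits (here 1.5)
+```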
+
+**In-Class Exercises**
+
+**Pb. 3.20** In each of the following cases, find the entropy:
+
+a. $N = 32$ and $p(x_i) = \frac{1}{32}$ for all $i$
+
+b. $N = 8$ and $p = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} & \frac{1}{64} & \frac{1}{64} & \frac{1}{64} & \frac{1}{64} \end{bmatrix}$
+
+c. $N = 4$ and $p = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{8} & \frac{1}{8} \end{bmatrix}$
+
+d. $N = 4$ and $p = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} & \frac{1}{4} & 0 \end{bmatrix}$
\ No newline at end of file
diff --git a/samples/texts/348597/page_84.md b/samples/texts/348597/page_84.md
new file mode 100644
index 0000000000000000000000000000000000000000..275cafb7af401ea65f0d7d90c8b3ab0cccbcca2a
--- /dev/null
+++ b/samples/texts/348597/page_84.md
@@ -0,0 +1,27 @@
+**Pb. 3.21** Assume that you have two dice, one red and the other blue. Tabulate all possible outcomes that you can obtain by throwing these dice together. Now assume that all you care about is the sum of spots on the two dice. Find the entropy of the outcome.
+
+## *Homework Problem*
+
+**Pb. 3.22** A so-called A-law compander (compressor followed by an expander) uses a compressor that relates output to input voltages by:
+
+$$y = \pm \frac{A|x|}{1 + \log(A)} \quad \text{for } |x| \le 1/A$$
+
+$$y = \pm \frac{1 + \log(A|x|)}{1 + \log(A)} \quad \text{for } \frac{1}{A} \le |x| \le 1$$
+
+Here, the + sign applies when $x$ is positive and the – sign when $x$ is negative. $x = v_i/V$ and $y = v_o/V$, where $v_i$ and $v_o$ are the input and output voltages. The range of allowable voltages is $-V$ to $V$. The parameter $A$ determines the degree of compression.
+
+For a value of $A = 87.6$, plot $y$ vs. $x$ in the interval $[-1, 1]$.
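+
+A sketch of one way to generate the compressor curve (the vectorized arrangement is ours; we take `log` as the natural logarithm, the usual A-law convention, and `sign` supplies the ± rule):
+
+```matlab
+A=87.6;
+x=-1:.001:1;
+y=zeros(size(x));
+s=abs(x)<=1/A;                   % region |x| <= 1/A
+y(s)=A*x(s)/(1+log(A));
+y(~s)=sign(x(~s)).*(1+log(A*abs(x(~s))))/(1+log(A));
+plot(x,y), xlabel('x'), ylabel('y')
+```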
+
+## **3.7 Examples with the Exponential Function**
+
+Take a few minutes to review the section on the exponential function in the Supplement before proceeding further.
+
+(Recall that $\exp(1) = e$.)
+
+### *In-Class Exercises*
+
+**Pb. 3.23** Plot the function $y(x) = (x^{13} + x^9 + x^5 + x^2 + 1) \exp(-4x)$ over the interval $[0,10]$.
+
+**Pb. 3.24** Plot the function $y(x) = \cos(5x) \exp(-x/2)$ over the interval $[0, 10]$.
+
+**Pb. 3.25** From the results of Pbs. 3.23 and 3.24, what can you deduce about the behavior of a function at infinity if one of its factors is an exponentially decreasing function of $x$, while the other factor is a polynomial or trigonometric
\ No newline at end of file
diff --git a/samples/texts/348597/page_85.md b/samples/texts/348597/page_85.md
new file mode 100644
index 0000000000000000000000000000000000000000..2d4684a330532e47cf39f392a93848725f6125f0
--- /dev/null
+++ b/samples/texts/348597/page_85.md
@@ -0,0 +1,33 @@
+function of $x$? What modification to the curve is observed if the degree of the polynomial is increased?
+
+## Application to a Simple RC Circuit
+
+The solution giving the voltage across the capacitor in Figure 3.2 following the closing of the switch can be written in the following form:
+
+$$
+V_c(t) = V_c(0) \exp \left[ -\frac{t}{RC} \right] + V_s \left[ 1 - \exp \left[ -\frac{t}{RC} \right] \right] \quad (3.15)
+$$
+
+$V_c(t)$ is called the time response of the RC circuit, or the circuit output result-
+ing from the constant input $V_s$. The time constant RC of the circuit has the
+units of seconds and, as you will observe in the present analysis and other
+problems in subsequent chapters, its ratio to the characteristic time of a given
+input potential determines qualitatively the output of the system.
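As a sketch of what Eq. (3.15) predicts (Python here for illustration; the exercise below asks for a MATLAB M-file), one can evaluate the charging curve and invert it for the time at which a target voltage is reached; the parameter values $V_c(0) = 3$, $V_s = 10$, $RC = 1$ match Pb. 3.26:

```python
import numpy as np

def v_c(t, v0=3.0, vs=10.0, rc=1.0):
    # Eq. (3.15): zero-input response plus zero-state response
    return v0 * np.exp(-t / rc) + vs * (1 - np.exp(-t / rc))

def time_to_reach(v_t, v0=3.0, vs=10.0, rc=1.0):
    # Solve v_c(t) = v_t for t: v_c(t) = vs + (v0 - vs) exp(-t/rc)
    return -rc * np.log((v_t - vs) / (v0 - vs))

t9 = time_to_reach(9.0)  # with these numbers, ln(7) seconds
```

The asymptotic value is $V_s$; with these numbers the capacitor voltage reaches 9 volts at $t = \ln 7 \approx 1.95$ s.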
+
+FIGURE 3.2
+The circuit used in charging a capacitor.
+
+In-Class Exercise
+
+Pb. 3.26 A circuit designer can produce outputs of various shapes by select-
+ing specific values for the circuit time constant RC. In the following simula-
+tions, you can examine the influence of this time constant on the response of
+the circuit of Figure 3.2.
+
+Using $V_c(0) = 3$ volts, $V_s = 10$ volts (capacitor charging process), and $RC = 1$ s:
+
+a. Sketch a graph of $V_c(t)$. What is the asymptotic value of the solution? How long does it take the capacitor voltage to reach the value of 9 volts?
+
+b. Produce an *M-file* that will plot several curves of $V_c(t)$ corresponding to:
\ No newline at end of file
diff --git a/samples/texts/348597/page_86.md b/samples/texts/348597/page_86.md
new file mode 100644
index 0000000000000000000000000000000000000000..19022d2363a9d470f1ba745c0a2d433e10c92511
--- /dev/null
+++ b/samples/texts/348597/page_86.md
@@ -0,0 +1,41 @@
+(i) $RC = 1$
+
+(ii) $RC = 5$
+
+(iii) $RC = 10$
+
+Which of these time constants results in the fastest approach of $V_c(t)$ toward $V_s$?
+
+c. Repeat the above simulations for the case $V_s = 0$ (capacitor discharge)?
+
+d. What would you expect to occur if $V_c(0) = V_s$?
+
+## *Homework Problem*
+
+**Pb. 3.27** The Fermi-Dirac distribution, which gives the average population of electrons in a state with energy $\epsilon$, neglecting the electron spin for the moment, is given by:
+
+$$f(\epsilon) = \frac{1}{\exp[(\epsilon - \mu)/\Theta] + 1}$$
+
+where $\mu$ is the Fermi (or chemical) potential and $\Theta$ is proportional to the absolute (or Kelvin) temperature.
+
+a. Plot the function $f(\epsilon)$ as function of $\epsilon$, for the following cases:
+
+(i) $\mu = 1$ and $\Theta = 0.002$
+
+(ii) $\mu = 0.03$ and $\Theta = 0.025$
+
+(iii) $\mu = 0.01$ and $\Theta = 0.025$
+
+(iv) $\mu = 0.001$ and $\Theta = 0.001$
+
+b. What is the value of $f(\epsilon)$ when $\epsilon = \mu$?
+
+c. Determine the condition under which we can approximate the Fermi-Dirac distribution function by:
+
+$$f(\epsilon) \approx \exp[(\mu - \epsilon)/\Theta]$$
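A small Python sketch (illustrative) of the distribution and its exponential tail approximation, which becomes accurate when $(\epsilon - \mu)/\Theta \gg 1$ so that the 1 in the denominator is negligible:

```python
import numpy as np

def fermi_dirac(eps, mu, theta):
    # Average occupation of a state at energy eps
    return 1.0 / (np.exp((eps - mu) / theta) + 1.0)

def boltzmann_tail(eps, mu, theta):
    # Approximation valid when (eps - mu)/theta >> 1
    return np.exp((mu - eps) / theta)
```

At $\epsilon = \mu$ the exact distribution gives exactly $1/2$, while far above the Fermi level the two expressions agree to within a relative error of order $e^{-(\epsilon-\mu)/\Theta}$.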
+
+## **3.8 Examples with the Hyperbolic Functions and Their Inverses**
+
+### **3.8.1 Capacitance of Two Parallel Wires**
+
+The capacitance per unit length of two parallel wires, each of radius $a$ and having their axis separated by distance $D$, is given by:
\ No newline at end of file
diff --git a/samples/texts/348597/page_87.md b/samples/texts/348597/page_87.md
new file mode 100644
index 0000000000000000000000000000000000000000..cdda986700e932cb05a66c859aa00bcf880bf6c9
--- /dev/null
+++ b/samples/texts/348597/page_87.md
@@ -0,0 +1,25 @@
+$$ \frac{C}{l} = \frac{\pi \varepsilon_0}{\cosh^{-1}\left(\frac{D}{2a}\right)} \qquad (3.16) $$
+
+where $\varepsilon_0$ is the permittivity of air (taken to be that of vacuum) $= 8.854 \times 10^{-12}$ Farad/m.
+
+**Question:** Write this expression in a different form using the logarithmic function.
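One way to check Eq. (3.16) numerically uses the identity $\cosh^{-1}(z) = \ln\!\left(z + \sqrt{z^2 - 1}\right)$, valid for $z \ge 1$; the following Python sketch (illustrative) evaluates the capacitance both ways:

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def cap_per_length(D, a):
    # Eq. (3.16): C/l = pi * eps0 / acosh(D / (2a))
    return math.pi * EPS0 / math.acosh(D / (2 * a))

def cap_per_length_log(D, a):
    # Same value via the logarithmic form of acosh
    z = D / (2 * a)
    return math.pi * EPS0 / math.log(z + math.sqrt(z * z - 1))
```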
+
+## In-Class Exercises
+
+**Pb. 3.28** Find the capacitance per unit length of two wires of radii 1 cm separated by a distance of 1 m. Express your answer using the most appropriate of the following sub-units:
+
+mF = $10^{-3}$ F (milli-Farad); $\mu F = 10^{-6}$ F (micro-Farad);
+
+nF = $10^{-9}$ F (nano-Farad); $pF = 10^{-12}$ F (pico-Farad);
+
+fF = $10^{-15}$ F (femto-Farad); $aF = 10^{-18}$ F (atto-Farad);
+
+**Pb. 3.29** Assume that you have two capacitors, one consisting of a coaxial cable (radii $a$ and $b$) and the other of two parallel wires, separated by the distance $D$. Further assume that the radius of the wires is equal to the radius of the inner cylinder of the coaxial cable. Plot the ratio $\frac{D}{a}$ as a function of $\frac{b}{a}$, if we desire the two geometrical configurations for the capacitor to end up having the same value for the capacitance. ($\text{Take } \frac{\varepsilon}{\varepsilon_0} = 2.6$.)
+
+## 3.9 Commonly Used Signal Processing Functions
+
+In studying signals and systems, you will also encounter, *inter alia*, the following functions (or variation thereof), in addition to the functions discussed previously in this chapter:
+
+* Unit step function
+
+* Unit slope ramp function
\ No newline at end of file
diff --git a/samples/texts/348597/page_88.md b/samples/texts/348597/page_88.md
new file mode 100644
index 0000000000000000000000000000000000000000..c9e8da5de63474461c73cdb72048005f6bc16fc1
--- /dev/null
+++ b/samples/texts/348597/page_88.md
@@ -0,0 +1,24 @@
+FIGURE 3.3
+Various useful signal processing functions.
+
+* Unit area rectangle pulse
+
+* Unit slope right angle triangle function
+
+* Equilateral triangle function
+
+* Periodic traces
+
+These functions are plotted in Figure 3.3, and the corresponding function M-files are ($x$ is everywhere a scalar):
+
+A. Unit Step function
+
+```matlab
+function y=stepf(x)
+global astep
+% half-sine over one period: sin(s) on (0, pi], zero on (pi, 2*pi]
+if s>0 & s<=pi
+y=sin(s);
+elseif s>pi & s<=2*pi
+y=0;
+else
+y=0;
+end
+```
+
+## In-Class Exercises
+
+**Pb. 3.30** In the above definition of all the special shape functions, we used the `if-else-end` form. Write each of the *function M-files* to define these same functions using only Boolean expressions.
+
+**Pb. 3.31** An adder is a device that adds the input signals to give an output signal equal to the sum of the inputs. Using the functions previously obtained in this section, write the *function M-file* for the signal in Figure 3.4.
+
+**Pb. 3.32** A multiplier is a device that multiplies two inputs. Find the product of the inputs given in Figures 3.5 and 3.6.
+
+## Homework Problems
+
+The first three problems in this set are a brief introduction to the different analog modulation schemes of communication theory.
\ No newline at end of file
diff --git a/samples/texts/348597/page_91.md b/samples/texts/348597/page_91.md
new file mode 100644
index 0000000000000000000000000000000000000000..e742fa639da32a32cf3b9e5037a84763a99aee7b
--- /dev/null
+++ b/samples/texts/348597/page_91.md
@@ -0,0 +1,15 @@
+FIGURE 3.4
+Profile of the signal of Pb. 3.31.
+
+FIGURE 3.5
+Profile of the first input to Pb. 3.32.
+
+**Pb. 3.33** In DSB-AM (double-sideband amplitude modulation), the amplitude of the modulated signal is proportional to the message signal, which means that the time domain representation of the modulated signal is given by:
+
+$$u_{\text{DSB}}(t) = A_c m(t) \cos(2\pi f_c t)$$
+
+where the carrier-wave shape is
+
+$$c(t) = A_c \cos(2\pi f_c t)$$
+
+and the message signal is $m(t)$.
\ No newline at end of file
diff --git a/samples/texts/348597/page_92.md b/samples/texts/348597/page_92.md
new file mode 100644
index 0000000000000000000000000000000000000000..ed0ac02bdc0fd1575071fa6ed243bc3a4902d25a
--- /dev/null
+++ b/samples/texts/348597/page_92.md
@@ -0,0 +1,20 @@
+FIGURE 3.6
+Profile of the second input to Pb. 3.32.
+
+For a message signal given by:
+
+$$m(t) = \begin{cases} 1 & 0 \le t \le t_0 / 3 \\ -3 & t_0 / 3 < t \le 2t_0 / 3 \\ 0 & \text{otherwise} \end{cases}$$
+
+a. Write the expression for the modulated signal using the unit area rectangle and the trigonometric functions.
+
+b. Plot the modulated signal as function of time. (Let $f_c = 200$ and $t_0 = 0.01$.)
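A hedged Python sketch of one possible construction (the exercise asks for the unit area rectangle functions of Section 3.9; plain Boolean masks are used here instead, and the carrier amplitude $A_c = 1$ is an assumption):

```python
import numpy as np

fc, t0, Ac = 200.0, 0.01, 1.0  # Ac = 1 assumed for illustration

def m(t):
    # Piecewise message built from Boolean masks (vectorized)
    t = np.asarray(t, dtype=float)
    return 1.0 * ((t >= 0) & (t <= t0 / 3)) - 3.0 * ((t > t0 / 3) & (t <= 2 * t0 / 3))

def u_dsb(t):
    # DSB-AM: modulated amplitude proportional to the message
    return Ac * m(t) * np.cos(2 * np.pi * fc * t)

t = np.linspace(0, t0, 1000)
u = u_dsb(t)
```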
+
+**Pb. 3.34** In conventional AM, $m(t)$ in the DSB-AM expression for the modulated signal is replaced by $[1 + a m_n(t)]$, where $m_n(t)$ is the normalized message signal (i.e., $m_n(t) = m(t)/\max(m(t))$) and $a$ is the index of modulation ($0 \le a \le 1$). The modulated signal expression is then given by:
+
+$$u_{AM}(t) = A_c [1 + am_n(t)] \cos(2\pi f_c t)$$
+
+For the same message as that of **Pb. 3.33** and the same carrier frequency, and assuming the modulation index $a = 0.85$:
+
+a. Write the expression for the modulated signal.
+
+b. Plot the modulated signal.
\ No newline at end of file
diff --git a/samples/texts/348597/page_93.md b/samples/texts/348597/page_93.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b11865c6e276ce17bcd40b357c626fa2dc441cd
--- /dev/null
+++ b/samples/texts/348597/page_93.md
@@ -0,0 +1,37 @@
+**Pb. 3.35** The angle modulation scheme, which includes frequency modulation (FM) and phase modulation (PM), has the modulated signal given by:
+
+$$u_{\text{PM}}(t) = A_c \cos(2\pi f_c t + k_p m(t))$$
+
+$$u_{\text{FM}}(t) = A_c \cos\left(2\pi f_c t + 2\pi k_f \int_{-\infty}^{t} m(\tau) d\tau\right)$$
+
+Assuming the same message as in **Pb. 3.33**:
+
+a. Write the expression for the modulated signal in both schemes.
+
+b. Plot the modulated signal in both schemes. Let $k_p = k_f = 100$.
+
+**Pb. 3.36** If $f(x) = f(-x)$ for all $x$, then the graph of $f(x)$ is symmetric with respect to the $y$-axis, and the function $f(x)$ is called an even function. If $f(x) = -f(-x)$ for all $x$, the graph of $f(x)$ is anti-symmetric with respect to the origin, and we call such a function an odd function.
+
+a. Show that any function can be written as the sum of an odd function plus an even function. List as many even and odd functions as you can.
+
+b. State what conditions must be true for a polynomial to be even, or to be odd.
+
+c. Show that the product of two even functions is even; the product of two odd functions is even; and the product of an odd and even function is odd.
+
+d. Replace in c above the word product by either quotient or power and deduce the parity of the resulting function.
+
+e. Deduce from the above results that the sign/parity of a function follows algebraic rules.
+
+f. Find the even and odd parts of the following functions:
+
+(i) $f(x) = x^7 + 3x^4 + 6x + 2$
+
+(ii) $f(x) = (\sin(x) + 3) \sinh^2(x) \exp(-x^2)$
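The decomposition claimed in part (a) can be sketched numerically (Python, illustrative): $f_e(x) = \frac{f(x) + f(-x)}{2}$ and $f_o(x) = \frac{f(x) - f(-x)}{2}$ are even and odd respectively, and they sum to $f$:

```python
import numpy as np

def even_part(f, x):
    # f_e(x) = (f(x) + f(-x)) / 2, symmetric under x -> -x
    return 0.5 * (f(x) + f(-x))

def odd_part(f, x):
    # f_o(x) = (f(x) - f(-x)) / 2, antisymmetric; f = f_e + f_o
    return 0.5 * (f(x) - f(-x))
```

For example, with $f = \exp$ this recovers the familiar even/odd pair $\cosh$ and $\sinh$.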
+
+**Pb. 3.37** Decompose the signal shown in Figure 3.7 into its even and odd parts:
+
+**Pb. 3.38** Plot the function $y$ defined through:
+
+$$y(x) = \begin{cases} x^2 + 4x + 4 & \text{for } -2 \le x < -1 \\ 0.16x^2 - 0.48x & \text{for } -1 < x < 1.5 \\ 0 & \text{elsewhere} \end{cases}$$
+
+and find its even and odd parts.
\ No newline at end of file
diff --git a/samples/texts/348597/page_94.md b/samples/texts/348597/page_94.md
new file mode 100644
index 0000000000000000000000000000000000000000..e14f2c0c373d2f9fb6a1f2078cfbecc2e4ea4c3e
--- /dev/null
+++ b/samples/texts/348597/page_94.md
@@ -0,0 +1,20 @@
+FIGURE 3.7
+Profile of the signal of Pb. 3.37.
+
+## 3.10 Animation of a Moving Rectangular Pulse
+
+You might often want to plot the time development of one of the above signal processing functions if its defining parameters are changing in time. Take, for example, a theatrical spotlight of constant intensity density across its cross-section, but assume that its position varies with time. The light spot size can be represented by a rectangular pulse (e.g., of width 2 m and height 1 m) that is moving to the right with a constant speed of 1 m/s. Assume that the center of the spot is originally at $x = 1$ m, and that its final position is at $x = 8$ m. We want to write a program that will illustrate its time development, and then play the resulting movie.
+
+To illustrate the use of other commands not often utilized in this chapter, we can, instead of the `if-else-end` syntax used in the previous section, use the Boolean syntax, and define the array by the `linspace` command.
+
+Edit and execute the following *script M-file*:
+
+```matlab
+lrect=0;hrect=2;
+x=linspace(0,10,200);
+t=linspace(0,8,40);
+M=moviein(40);
+for m=1:40
+ y=(x>=lrect+t(m)).*(x<=hrect+t(m));
+ plot(x,y,'r')
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_95.md b/samples/texts/348597/page_95.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b14587878a3c2c607cea39160f2403e853aa16e
--- /dev/null
+++ b/samples/texts/348597/page_95.md
@@ -0,0 +1,16 @@
+```matlab
+axis([-2 12 0 1.2]);
+M(:,m)=getframe;
+end
+movie(M,3)
+```
+
+*Question:* How would you modify the above program if the speed of the light beam is not 1?
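One way (sketched in Python rather than the MATLAB movie machinery above) is to introduce a speed parameter $v$ and shift both pulse edges by $v\,t$ in each frame:

```python
import numpy as np

def pulse(x, t, v=1.0, lrect=0.0, hrect=2.0):
    # Rectangular pulse whose edges move right at speed v (Boolean product, as in the M-file)
    return ((x >= lrect + v * t) & (x <= hrect + v * t)).astype(float)

x = np.linspace(0, 10, 200)
frames = [pulse(x, t, v=2.0) for t in np.linspace(0, 4, 40)]
```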
+
+### 3.11 MATLAB Commands Review
+
+**fplot** Plots a specified function over a specified interval.
+
+**ginput** Mouse-controlling command to read off coordinates of a point in a graph.
+
+**global** Allows variables to share their values in multiple programs.
+
+**zoom** Zooms in and out on a 2-D plot.
\ No newline at end of file
diff --git a/samples/texts/348597/page_96.md b/samples/texts/348597/page_96.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b0e22ed96dbbc2e6f611274fe270d82b912fab3
--- /dev/null
+++ b/samples/texts/348597/page_96.md
@@ -0,0 +1,21 @@
+# 4
+
+## *Numerical Differentiation, Integration, and Solutions of Ordinary Differential Equations*
+
+This chapter discusses the basic methods for numerically finding the value of the limit of an indeterminate form, the value of a derivative, the value of a convergent infinite sum, and the value of a definite integral. Using an improved form of the differentiator, we also present first-order iterator techniques for solving ordinary first-order and second-order linear differential equations. The Runge-Kutta technique for solving ordinary differential equations (ODE) is briefly discussed. The mode of use of some of the MATLAB packages to perform each of the previous tasks is also described in each instance of interest.
+
+### 4.1 Limits of Indeterminate Forms
+
+**DEFINITION** If $\lim_{x \to x_0} u(x) = \lim_{x \to x_0} v(x) = 0$, the quotient $u(x)/v(x)$ is said to have an indeterminate form of the 0/0 kind.
+
+* If $\lim_{x \to x_0} u(x) = \lim_{x \to x_0} v(x) = \infty$, the quotient $u(x)/v(x)$ is said to have an indeterminate form of the $\infty/\infty$ kind.
+
+In your elementary calculus course, you learned that the standard technique for solving this kind of problem is through the use of *L'Hopital's Rule*, which states that:
+
+if:
+
+$$ \lim_{x \to x_0} \frac{u'(x)}{v'(x)} = C \quad (4.1) $$
+
+then:
+
+$$ \lim_{x \to x_0} \frac{u(x)}{v(x)} = C \quad (4.2) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_97.md b/samples/texts/348597/page_97.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1992db0c06759a17c84b42559b50e96c0c77d0a
--- /dev/null
+++ b/samples/texts/348597/page_97.md
@@ -0,0 +1,42 @@
+In this section, we discuss a simple algorithm to obtain this limit using MATLAB. The method consists of the following steps:
+
+1. Construct a sequence of points whose limit is $x_0$. In the examples below, consider the sequence $\left\{ x_n = x_0 - \left(\frac{1}{2}\right)^n \right\}$. Recall in this regard that as $n \to \infty$, the $n$th power of any number whose magnitude is smaller than one goes to zero.
+
+2. Construct the sequence of function values corresponding to the x-sequence, and find its limit.
+
+**Example 4.1**
+
+Compute numerically the limit $\lim_{x \to 0} \frac{\sin(x)}{x}$.
+
+*Solution:* Enter the following instructions in your MATLAB command window:
+
+```matlab
+N=20; n=1:N;
+x0=0;
+dxn=-(1/2).^n;
+xn=x0+dxn;
+yn=sin(xn)./xn;
+plot(xn,yn)
+```
+
+The limit of the **yn** sequence is clearly equal to 1. The deviation of the
+**yn** sequence from the value of the limit can be obtained by entering:
+
+```matlab
+dyn=yn-1;
+semilogy(n,dyn)
+```
+
+The last command plots the curve with the ordinate *y* expressed logarithmically. This mode of display is the most convenient in this case because the ordinate spans many decades of values.
+
+**In-Class Exercises**
+
+Find the limits of the following functions at the indicated points:
+
+$$
+\textbf{Pb. 4.1} \quad \frac{(x^2 - 2x - 3)}{(x-3)} \quad \text{at } x \to 3
+$$
+
+$$
+\textbf{Pb. 4.2} \quad \left( \frac{1 + \sin(x)}{x} - \frac{1}{\sin(x)} \right) \quad \text{at } x \to 0
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_98.md b/samples/texts/348597/page_98.md
new file mode 100644
index 0000000000000000000000000000000000000000..228c64bee50c7b129a7e5b46c00746f8ea236393
--- /dev/null
+++ b/samples/texts/348597/page_98.md
@@ -0,0 +1,34 @@
+Pb. 4.3 $(x \cot(x))$ at $x \to 0$
+
+Pb. 4.4 $\frac{(1 - \cos(2x))}{x^2}$ at $x \to 0$
+
+Pb. 4.5 $\sin(2x) \cot(3x)$ at $x \to 0$
+
+## 4.2 Derivative of a Function
+
+**DEFINITION** The derivative of a certain function at a particular point is defined as:
+
+$$f'(x_0) = \lim_{x \to x_0} \frac{f(x) - f(x_0)}{x - x_0} \quad (4.3)$$
+
+Numerically, the derivative is computed at the point $x_0$ as follows:
+
+1. Construct an *x*-sequence that approaches $x_0$.
+
+2. Compute a sequence of the function values corresponding to the *x*-sequence.
+
+3. Evaluate the sequence of the ratio, appearing in the definition of the derivative in Eq. (4.3).
+
+4. Read off the limit of this ratio sequence. This will be the value of the derivative at the point $x_0$.
+
+### Example 4.2
+
+Find numerically the derivative of the function $\ln(1+x)$ at $x=0$.
+
+**Solution:** Edit and execute the following *script M-file*:
+
+N=20;n=1:N;
+x0=0;
+dxn=(1/2).^n;
+xn=x0+dxn;
+yn=log(1+xn);
+dyn=yn-log(1+x0);
\ No newline at end of file
diff --git a/samples/texts/348597/page_99.md b/samples/texts/348597/page_99.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1a29bb60c635e35ceda2dc496d6f5561ff4dd95
--- /dev/null
+++ b/samples/texts/348597/page_99.md
@@ -0,0 +1,36 @@
+`deryn=dyn./dxn;`
+
+`plot(n,deryn)`
+
+The limit of the deryn sequence is clearly equal to 1, the value of the derivative of this function at 0.
+
+NOTE The choice of N should always be such that dxn is larger than the machine precision; that is, $N < 53$, since $(1/2)^{53} \approx 10^{-16}$.
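The bound in the note can be checked directly (Python sketch; MATLAB's `eps` plays the same role as `np.finfo(float).eps`):

```python
import numpy as np

eps = np.finfo(float).eps  # double-precision machine epsilon, about 2.22e-16
dx_53 = 0.5 ** 53          # the smallest increment allowed by the note's bound

print(eps, dx_53)
```

An increment at or below this scale is simply lost in double precision: `1.0 + 0.5**53` evaluates to exactly `1.0`.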
+
+## In-Class Exercises
+
+Find numerically, to one part per 10,000 accuracy, the derivatives of the following functions at the indicated points:
+
+Pb. 4.6 $x^4(\cos^3(x) - \sin(2x))$ at $x \to \pi$
+
+Pb. 4.7 $\frac{\exp(x^2 + 3)}{(2 + \cos^2(x))}$ at $x \to 0$
+
+Pb. 4.8 $\frac{(1 + \sin^2(x))}{(2 - \cos^3(x))}$ at $x \to \pi / 2$
+
+Pb. 4.9 $\ln\left(\frac{x-1/2}{x+1}\right)$ at $x \to 1$
+
+Pb. 4.10 $\tan^{-1}(x^2 + 3)$ at $x \to 0$
+
+## Example 4.3
+
+Plot the derivative of the function $x^2 \sin(x)$ over the interval $0 \le x \le 2\pi$.
+
+Solution: Edit and execute the following script M-file:
+
+`dx=10^(-4);`
+`x=0:dx:2*pi+dx;`
+`df=diff(sin(x).*x.^2)/dx;`
+`plot(0:dx:2*pi,df)`
+
+where `diff` is a MATLAB command, which when acting on an array X, gives the new array [X(2)-X(1), X(3)-X(2), ..., X(n)-X(n-1)], whose length is one unit shorter than the array X.
+
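NumPy's `np.diff` behaves the same way as MATLAB's `diff`; the following sketch (illustrative) reproduces Example 4.3 numerically and compares the result against the exact derivative $2x\sin(x) + x^2\cos(x)$:

```python
import numpy as np

dx = 1e-4
x = np.arange(0.0, 2 * np.pi + dx, dx)
f = np.sin(x) * x**2
df = np.diff(f) / dx  # forward-difference derivative, one sample shorter than x
exact = 2 * x * np.sin(x) + x**2 * np.cos(x)
```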
+The accuracy of the above algorithm depends on the choice of `dx`. Ideally, the smaller it is, the more accurate the result. However, using any computer, we should always choose a `dx` that is larger than the machine precision, while
\ No newline at end of file
diff --git a/samples/texts/4320847/page_1.md b/samples/texts/4320847/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..a8362cad0031fe50482693158bf701a416b1899c
--- /dev/null
+++ b/samples/texts/4320847/page_1.md
@@ -0,0 +1,32 @@
+# Bayesian inference for spectral projectors of the covariance matrix
+
+Igor Silin¹,³,⁴,⁵ and Vladimir Spokoiny²,³,⁴,⁵
+
+¹Moscow Institute of Physics and Technology,
+141701, Dolgoprudny, RF.
+e-mail: siliniv@gmail.com
+
+²Weierstrass Institute and Humboldt University,
+Mohrenstr. 39, 10117 Berlin, Germany.
+e-mail: spokoiny@wias-berlin.de
+
+³National Research University Higher School of Economics,
+20 Myasnitskaya ulitsa, 101000, Moscow, RF.
+
+⁴Skolkovo Institute of Science and Technology (Skoltech),
+143026, Moscow, RF.
+
+⁵Institute for Information Transmission Problems RAS,
+Bolshoy Karetny per. 19, 127051, Moscow, RF.
+
+**Abstract:** Let $X_1, \dots, X_n$ be an i.i.d. sample in $\mathbb{R}^p$ with zero mean and the covariance matrix $\Sigma^*$. The classical PCA approach recovers the projector $P_J^*$ onto the principal eigenspace of $\Sigma^*$ by its empirical counterpart $\hat{P}_J$. Recent paper [24] investigated the asymptotic distribution of the Frobenius distance between the projectors $\|\hat{P}_J - P_J^*\|_2$, while [27] offered a bootstrap procedure to measure uncertainty in recovering this subspace $P_J^*$ even in a finite sample setup. The present paper considers this problem from a Bayesian perspective and suggests to use the credible sets of the pseudo-posterior distribution on the space of covariance matrices induced by the conjugated Inverse Wishart prior as sharp confidence sets. This yields a numerically efficient procedure. Moreover, we theoretically justify this method and derive finite sample bounds on the corresponding coverage probability. Contrary to [24, 27], the obtained results are valid for non-Gaussian data: the main assumption that we impose is the concentration of the sample covariance $\hat{\Sigma}$ in a vicinity of $\Sigma^*$. Numerical simulations illustrate good performance of the proposed procedure even on non-Gaussian data in a rather challenging regime.
+
+MSC 2010 subject classifications: Primary 62F15, 62H25, 62G20; secondary 62F25.
+
+Keywords and phrases: Covariance matrix, spectral projector, principal component analysis, Bernstein-von Mises theorem.
+
+Received March 2018.
+
+## 1. Introduction
+
+Let the observed data $X^n = (X_1, \dots, X_n)$ be a collection of independent identically distributed zero-mean random vectors in $\mathbb{R}^p$ and let $X$ be a generic
\ No newline at end of file
diff --git a/samples/texts/4320847/page_10.md b/samples/texts/4320847/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..36b8d1e213df1959a265b8b9e0dd4a7d898bd0cf
--- /dev/null
+++ b/samples/texts/4320847/page_10.md
@@ -0,0 +1,40 @@
+**2.4. Gaussian approximation and frequentist uncertainty quantification for spectral projectors**
+
+For the Gaussian data, Theorem 4.3 of [27] provides the explicit error bound
+(1.1) with the error term $\bar{\Delta}$ of the following form:
+
+$$
+\begin{align*}
+& \sup_{x \in \mathbb{R}} \left| \mathbb{P}(n \| \hat{\mathbf{P}}_r - \mathbf{P}_r^* \|_2^2 \le x) - \mathbb{P}(\| \xi \|_1^2 \le x) \right| \lesssim \bar{\Delta}, \\
+& \bar{\Delta} = \bar{\Delta}(n, p, \mathbf{\Sigma}^*) \stackrel{\text{def}}{=} \frac{\sqrt{m_r^*} \operatorname{Tr}(\Gamma_r^*)}{\sqrt{\lambda_1(\Gamma_r^*) \lambda_2(\Gamma_r^*)}} \left( \sqrt{\frac{\log(n)}{n}} + \sqrt{\frac{\log(p)}{n}} \right) \\
+& \qquad + \frac{m_r^*}{g_r^{*3}} \frac{\operatorname{Tr}^3(\mathbf{\Sigma}^*)}{\sqrt{\lambda_1(\Gamma_r^*) \lambda_2(\Gamma_r^*)}} \sqrt{\frac{\log^3(n)}{n}}. \tag{2.5}
+\end{align*}
+$$
+
+The goal of this section is to extend this result to include the case of a generalized
+spectral cluster and of non-Gaussian data. Before formulating the result, let us
+introduce the following auxiliary matrices
+
+$$
+\begin{align*}
+U_J^* &\stackrel{\text{def}}{=} \left\{ \frac{u_k^{*\top}}{\sqrt{\mu_r^{*}}} \right\}_{\substack{r \in J \\ k \in \Delta_r^*}} \in \mathbb{R}^{m_J^* \times p}, \\
+V_J^* &\stackrel{\text{def}}{=} \left\{ \frac{u_l^{*\top}}{\sqrt{\mu_s^{*}}} \right\}_{\substack{s \notin J \\ l \in \Delta_s^*}} \in \mathbb{R}^{(p-m_J^*) \times p}.
+\end{align*}
+$$
+
+Then the following theorem holds.
+
+**Theorem 2.2.** *Assume the distribution of the data $\mathbf{X}^n = (X_1, \dots, X_n)$ fulfills the sample covariance concentration property (2.3) with some $\hat{\delta}_n$. Suppose additionally that the projections $\mathbf{P}_{\mathcal{J}}^* X$ and $(\mathbf{I}_p - \mathbf{P}_{\mathcal{J}}^*) X$ are independent and the following third moments are finite:*
+
+$$
+\mathbb{E}\|\mathbf{U}_{\mathcal{J}}^* X\|^3 < +\infty, \quad \mathbb{E}\|\mathbf{V}_{\mathcal{J}}^* X\|^3 < +\infty.
+$$
+
+Let $\xi \sim N(0, \Gamma_J^*)$ with $\Gamma_J^*$ defined by (2.2). Then
+
+$$
+\sup_{x \in \mathbb{R}} |P(n || \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* ||_2^2 \le x) - P(||\xi||_1^2 \le x)| \lesssim \bar{\Delta},
+$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_11.md b/samples/texts/4320847/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..55b8f11694e9e9416aab728f8de61d97cacd40f0
--- /dev/null
+++ b/samples/texts/4320847/page_11.md
@@ -0,0 +1,39 @@
+where
+
+$$
+\begin{align}
+\bar{\diamond} &= \bar{\diamond}(n, p, \mathbf{\Sigma}^*) \stackrel{\text{def}}{=} \mathbb{E}\|\mathbf{U}_{\mathcal{J}}^* X\|^3 \mathbb{E}\|\mathbf{V}_{\mathcal{J}}^* X\|^3 \frac{p^{1/4}}{\sqrt{n}} \tag{2.6} \\
+&\qquad + \frac{\bar{\Delta}}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2} (\|\Gamma_{\mathcal{J}}^*\|_2^2 - \|\Gamma_{\mathcal{J}}^*\|_\infty^2)^{1/4}}, \nonumber \\
+\bar{\Delta} &\stackrel{\text{def}}{=} nm_{\mathcal{J}}^* \left(1 + \frac{l_{\mathcal{J}}^*}{g_{\mathcal{J}}^*}\right) \left\{ \left(1 + \frac{l_{\mathcal{J}}^*}{g_{\mathcal{J}}^*}\right) \frac{\hat{\delta}_n^4}{g_{\mathcal{J}}^{*4}} \lor \frac{|\mathcal{J}| \hat{\delta}_n^3}{g_{\mathcal{J}}^{*3}} \right\}. \nonumber
+\end{align}
+$$
+
+**Remark 2.2.** The condition on independence of $P_J^* X$ and $(I_p - P_J^*)X$ is not very restrictive; in fact, it has a natural interpretation: while we are interested in the “signal” $P_J^* X$, the orthogonal part $(I_p - P_J^*)X$ can be considered as “noise”, and it is plausible to assume that the “noise” is independent of the “signal”. It is also worth mentioning that this condition was not required in our main result above about the behavior of the pseudo-posterior.
+
+**Remark 2.3.** The components of $U_J^* X$ are the $m_J^*$-dimensional coordinates of $X$ after projecting onto the eigenspace of interest and proper scaling. Similarly, the components of $V_J^* X$ are the $(p - m_J^*)$-dimensional coordinates of $X$ after projecting onto the orthogonal complement and proper scaling. In general, the factors $\mathbb{E}\|U_J^* X\|^3$ and $\mathbb{E}\|V_J^* X\|^3$ from the error bound $\bar{\diamond}$ depend on how heavy the tails of the distribution of $X$ are. However, it is easy to show that in case of a sub-Gaussian random vector $X$ the behaviour is as follows:
+
+$$
+\begin{align*}
+\mathbb{E}\|\boldsymbol{U}_{\mathcal{J}}^{*} X\|^3 &\lesssim (m_{\mathcal{J}}^{*})^{3/2}, \\
+\mathbb{E}\|\boldsymbol{V}_{\mathcal{J}}^{*} X\|^3 &\lesssim (p - m_{\mathcal{J}}^{*})^{3/2}.
+\end{align*}
+$$
+
+Coupled with Theorem A.1, (ii), this allows us to bound the error term $\bar{\diamond}$ in the
+sub-Gaussian case as
+
+$$
+\bar{\diamond} \lesssim \sqrt{\frac{(m_{\mathcal{J}}^*)^3 p^{3.5}}{n}} + m_{\mathcal{J}}^* |\mathcal{J}| \sqrt{\frac{p^3 + \log^3(n)}{n}},
+$$
+
+where, for simplicity, the characteristics of $\Sigma^*$ are hidden in the constants.
+
+The proof of this result is presented in Appendix B. The obtained bound is
+worse than (2.5) when the full dimension *p* is large. This is the price paid for
+the Gaussian approximation, which is needed for non-Gaussian data. Our result
+makes use of the Gaussian approximation technique from [2]. Some recent devel-
+opments in Gaussian approximation for a probability of a ball indicate that the
+bound (2.6) can be improved even further; see [30]. Comparison of the results
+of Theorem 2.1 and Theorem 2.2 reveals that the pseudo-posterior distribution
+of $n\|\boldsymbol{P}_{\mathcal{J}} - \hat{\boldsymbol{P}}_{\mathcal{J}}\|_2^2$ given the data perfectly mimics the distribution of
+$n\|\hat{\boldsymbol{P}}_{\mathcal{J}} - \boldsymbol{P}_{\mathcal{J}}^*\|_2^2$, and, therefore, can be applied to building of elliptic confidence
\ No newline at end of file
diff --git a/samples/texts/4320847/page_12.md b/samples/texts/4320847/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..87984da6c250ba03320bf7cafef35876c64a2491
--- /dev/null
+++ b/samples/texts/4320847/page_12.md
@@ -0,0 +1,21 @@
+sets for the true projector. Specifically, for any significance level $\alpha \in (0; 1)$ (or
+confidence level $1 - \alpha$) we can estimate the true quantile
+
+$$ \gamma_{\alpha} \stackrel{\text{def}}{=} \inf \left\{ \gamma > 0 : \mathbb{P} \left( n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 > \gamma \right) \leq \alpha \right\} $$
+
+by the following counterpart which can be numerically assessed using Bayesian
+credible sets:
+
+$$ \gamma_{\alpha}^{\circ} \stackrel{\text{def}}{=} \inf \left\{ \gamma > 0 : \Pi \left( n \| \mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} \|_2^2 > \gamma \mid \mathbf{X}^n \right) \leq \alpha \right\}. $$
+
+Then, the main results presented above imply the following corollary.
+
+**Corollary 2.3.** *Assume that all conditions of Theorem 2.1 and Theorem 2.2 are fulfilled. Then*
+
+$$ \sup_{\alpha \in (0; 1)} \left| \alpha - \mathbb{P} \left( n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 > \gamma_{\alpha}^{\circ} \right) \right| \lesssim \diamond + \bar{\diamond}, $$
+
+where $\diamond = \diamond(n, p, \Sigma^*)$, $\bar{\diamond} = \bar{\diamond}(n, p, \Sigma^*)$ are defined by (2.4), (2.6), respectively.
+
+**3. Numerical experiments**
+
+This section shows by means of artificial data that the proposed pseudo-Bayesian approach works quite well even for large data dimension and limited sample size. We also want to track how the quality depends on the sample size $n$ and the dimension $p$. In our experiments we first fix some true covariance matrix $\Sigma^*$ of size $p \times p$. Without loss of generality we consider only diagonal matrices $\Sigma^*$, so $\Sigma^*$ is defined by the distinct eigenvalues $\mu_r^*$ and the multiplicities $m_r^*$. We also specify the desired subspace that we want to investigate by fixing $\mathcal{J}$. Further, for different sample sizes $n$ we repeat the following two-step procedure. The first step is to determine the quantiles of $n\|\hat{\mathbf{P}}_\mathcal{J} - \mathbf{P}_\mathcal{J}^*\|_2^2$. For that we generate 3000 samples $\mathbf{X}^n$, compute the corresponding $\hat{\mathbf{P}}_\mathcal{J}$ and then just take $\alpha$-quantiles of the obtained realizations $n\|\hat{\mathbf{P}}_\mathcal{J} - \mathbf{P}_\mathcal{J}^*\|_2^2$ for $\alpha$ from 0.001 to 0.999 with step 0.001. The second step is to estimate the quantiles of the pseudo-posterior distribution of $n\|\mathbf{P}_\mathcal{J} - \hat{\mathbf{P}}_\mathcal{J}\|_2^2$. We generate 50 samples $\mathbf{X}^n$ and for each realization we generate 3000 pseudo-posterior covariance matrices $\Sigma$ from the Inverse Wishart distribution with $G = I_p$, $b = 1$. Then we compute the corresponding $\mathbf{P}_\mathcal{J}$ and take the $\alpha$-quantiles of $n\|\mathbf{P}_\mathcal{J} - \hat{\mathbf{P}}_\mathcal{J}\|_2^2$ just as in the first step. Namely, for each $\alpha$ we get 50 quantile estimates $\gamma_\alpha^{(j)}$, $j \in \{1, \dots, 50\}$ (suppose we order them in ascending order) and take the median of them.
+For the true quantiles from the first step and the medians of the quantile estimates from the second step we build a QQ-plot, which consists of points with coordinates $(\gamma_\alpha, \gamma_\alpha^{(25)})$ for various $\alpha$. We expect that the constructed
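A minimal Python/SciPy sketch of the second step, at a much smaller scale than the experiment in the text; the conjugate Inverse Wishart update (posterior scale $G + n\hat{\Sigma}$, degrees of freedom growing by $n$) and the prior degrees-of-freedom convention in scipy's parametrization are assumptions here, not the paper's exact specification, and the helper names (`top_projector`, etc.) are hypothetical:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# Small illustrative setup (the experiment in the text uses p = 100 and 3000 draws)
p, n, n_draws = 10, 200, 500
J_dim = 1  # dimension of the eigenspace of interest

Sigma_star = np.diag(np.concatenate([[25.0], rng.uniform(0.5, 3.0, p - 1)]))
X = rng.multivariate_normal(np.zeros(p), Sigma_star, size=n)
Sigma_hat = X.T @ X / n  # zero-mean data, so no centering

def top_projector(S, k):
    # Projector onto the span of the k leading eigenvectors of S
    w, v = np.linalg.eigh(S)
    U = v[:, np.argsort(w)[::-1][:k]]
    return U @ U.T

P_hat = top_projector(Sigma_hat, J_dim)

# Pseudo-posterior draws via the conjugate Inverse Wishart update
post = invwishart(df=p + 2 + n, scale=np.eye(p) + n * Sigma_hat)
stats = np.array([
    n * np.linalg.norm(top_projector(post.rvs(random_state=rng), J_dim) - P_hat, 'fro')**2
    for _ in range(n_draws)
])

gamma_95 = np.quantile(stats, 0.95)  # credible-set radius at confidence level 0.95
```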
\ No newline at end of file
diff --git a/samples/texts/4320847/page_13.md b/samples/texts/4320847/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..9df78d69d060b2604550ac5700ce61ec15e05a7b
--- /dev/null
+++ b/samples/texts/4320847/page_13.md
@@ -0,0 +1,50 @@
+QQ-plot is close to the identity line indicating that these two distributions are
+close to each other. Also we present a table with median coverage probabilities
+
+$$
+\mathbb{P} \left( n \|\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 \le \gamma_{\alpha}^{\circ (25)} \right)
+$$
+
+and interquartile ranges
+
+$$
+\mathbb{P} \left( n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^{*} \|_{2}^{2} \leq \gamma_{\alpha}^{\circ (38)} \right) - \mathbb{P} \left( n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^{*} \|_{2}^{2} \leq \gamma_{\alpha}^{\circ (12)} \right)
+$$
+
+of this coverage probability for the desired confidence levels $1 - \alpha$ from the list
+
+$\{0.99, 0.95, 0.90, 0.85, 0.80, 0.75\}.$
+
+In the first experiment we work with Gaussian data. The parameters of the experiment are as follows:
+
+• $p = 100$, $m_r^* = 1$ for all $r \in \{1, \dots, 100\}$.
+
+• $\mu_1^* = 25.698$, $\mu_2^* = 15.7688$, $\mu_3^* = 10.0907$, $\mu_4^* = 5.9214$, $\mu_5^* = 3.4321$ and the rest of the eigenvalues $\mu_6^*, \dots, \mu_{100}^*$ are from the Marchenko-Pastur law with support $[0.71; 1.34]$.
+
+• $\mathcal{J} = \{1\}$, so we investigate the one-dimensional principal subspace given by $\mathbf{P}_1^*$.
+
+The QQ-plots are depicted in Figure 1, while the coverage probabilities and the interquartile ranges are presented in Table 1.
+
+The setup of this experiment is exactly the same as the second example of [27],
+so the performance of our pseudo-Bayesian method can be directly compared
+with the performance of the Bootstrap approach (cf. Figure 2 and Table 2 of
+[27]). The accuracy of these two procedures is approximately the same.
+
+In the second experiment we check how our method performs on non-Gaussian
+data. We generate each component of the vectors $X_j$ independently, yielding a
+diagonal covariance matrix. In addition to the Gaussian distribution, we also consider
+the following three options: the uniform distribution on the interval $[-a; a]$,
+the Laplace distribution with scale parameter $a$ and the discrete uniform
+distribution on the three values $\{-a, 0, a\}$. In each case the parameter $a$ is chosen
+so that the variance equals the corresponding diagonal entry of the covariance
+matrix fixed earlier. So, the parameters of the experiment are as follows:
+
+• $p = 100$, $m_1^* = 3$, $m_2^* = 3$, $m_3^* = 3$ and the rest of the multiplicities $m_4^*, \dots, m_{91}^*$ are one.
+
+• $\mu_1^* = 25$, $\mu_2^* = 20$, $\mu_3^* = 15$, $\mu_4^* = 10$, $\mu_5^* = 7.5$, $\mu_6^* = 5$ and the rest of the eigenvalues $\mu_7^*, \dots, \mu_{100}^*$ are from the uniform distribution on $[0; 3]$.
+
+• The first nine components were generated according to: uniform, Laplace, discrete, Gaussian, Laplace, discrete, Laplace, Laplace, uniform distributions, respectively. The rest of the components are Gaussian.
+
+• $\mathcal{J} = \{1, 2, 3\}$, so we investigate the nine-dimensional subspace given by
+$\mathbf{P}_1^* + \mathbf{P}_2^* + \mathbf{P}_3^*$.
+
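For the variance matching described above, the scale parameter $a$ solves $\mathrm{Var} = a^2/3$ for the uniform law, $2a^2$ for the Laplace law and $2a^2/3$ for the discrete uniform law on $\{-a, 0, a\}$ (standard variance formulas). A minimal sketch of such a variance-matched component sampler (the function name is ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_component(dist, v, size):
    """Draw i.i.d. values with mean 0 and variance v from the named family."""
    if dist == "gaussian":
        return rng.normal(0.0, np.sqrt(v), size)
    if dist == "uniform":                  # Var(U[-a, a]) = a^2 / 3
        a = np.sqrt(3 * v)
        return rng.uniform(-a, a, size)
    if dist == "laplace":                  # Var(Laplace(scale a)) = 2 a^2
        a = np.sqrt(v / 2)
        return rng.laplace(0.0, a, size)
    if dist == "discrete":                 # uniform on {-a, 0, a}: Var = 2 a^2 / 3
        a = np.sqrt(1.5 * v)
        return rng.choice([-a, 0.0, a], size)
    raise ValueError(dist)
```

Stacking such columns (one distribution per component) reproduces a diagonal covariance matrix with the prescribed eigenvalues.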
diff --git a/samples/texts/4320847/page_14.md b/samples/texts/4320847/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..97f85dd38d733687b6bd51680be3d66f9a0c05a0
--- /dev/null
+++ b/samples/texts/4320847/page_14.md
@@ -0,0 +1,4 @@
+FIG 1. QQ-plots of the proposed pseudo-Bayesian procedure for the first experiment (Gaussian data).
+
+The QQ-plots are depicted in Figure 2, while the coverage probabilities and the
+interquartile ranges are presented in Table 2.
\ No newline at end of file
diff --git a/samples/texts/4320847/page_15.md b/samples/texts/4320847/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..da9603a091c13cd36fafac3fdcb144011b382135
--- /dev/null
+++ b/samples/texts/4320847/page_15.md
@@ -0,0 +1,3 @@
+FIG 2. QQ-plots of the proposed pseudo-Bayesian procedure for the second experiment (non-Gaussian data).
+
+The performance of the proposed procedure is very good, except when the sample size is of the same order as the dimension. However, this
\ No newline at end of file
diff --git a/samples/texts/4320847/page_16.md b/samples/texts/4320847/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e8a3ffcc9533178f2bb60b31ab8b76625ad580d
--- /dev/null
+++ b/samples/texts/4320847/page_16.md
@@ -0,0 +1,86 @@
+TABLE 1
+Coverage probabilities and interquartile ranges of the proposed pseudo-Bayesian procedure for the first experiment (Gaussian data).
+
+Each cell contains the coverage probability with the interquartile range in parentheses; the columns correspond to the confidence levels $1 - \alpha$.
+
+| n | 0.99 | 0.95 | 0.90 | 0.85 | 0.80 | 0.75 |
+|---|---|---|---|---|---|---|
+| 100 | 0.993 (0.023) | 0.968 (0.061) | 0.929 (0.103) | 0.893 (0.145) | 0.854 (0.176) | 0.809 (0.204) |
+| 300 | 0.988 (0.026) | 0.952 (0.085) | 0.906 (0.143) | 0.851 (0.184) | 0.805 (0.199) | 0.762 (0.216) |
+| 500 | 0.993 (0.022) | 0.955 (0.072) | 0.909 (0.099) | 0.865 (0.108) | 0.812 (0.123) | 0.771 (0.126) |
+| 1000 | 0.990 (0.014) | 0.956 (0.049) | 0.908 (0.066) | 0.859 (0.067) | 0.817 (0.091) | 0.767 (0.104) |
+| 2000 | 0.992 (0.009) | 0.952 (0.030) | 0.898 (0.058) | 0.847 (0.064) | 0.793 (0.067) | 0.747 (0.065) |
+| 3000 | 0.992 (0.005) | 0.959 (0.024) | 0.908 (0.036) | 0.849 (0.054) | 0.802 (0.046) | 0.750 (0.063) |
+
+TABLE 2
+Coverage probabilities and interquartile ranges of the proposed pseudo-Bayesian procedure for the second experiment (non-Gaussian data)
+
+Each cell contains the coverage probability with the interquartile range in parentheses; the columns correspond to the confidence levels $1 - \alpha$.
+
+| n | 0.99 | 0.95 | 0.90 | 0.85 | 0.80 | 0.75 |
+|---|---|---|---|---|---|---|
+| 100 | 0.997 (0.009) | 0.979 (0.044) | 0.954 (0.070) | 0.927 (0.085) | 0.895 (0.094) | 0.870 (0.099) |
+| 300 | 0.993 (0.008) | 0.964 (0.047) | 0.935 (0.077) | 0.903 (0.108) | 0.868 (0.144) | 0.836 (0.176) |
+| 500 | 0.996 (0.014) | 0.972 (0.053) | 0.944 (0.099) | 0.917 (0.139) | 0.874 (0.166) | 0.832 (0.194) |
+| 1000 | 0.990 (0.011) | 0.957 (0.050) | 0.920 (0.098) | 0.882 (0.131) | 0.841 (0.171) | 0.796 (0.188) |
+| 2000 | 0.991 (0.019) | 0.951 (0.048) | 0.904 (0.088) | 0.850 (0.113) | 0.803 (0.124) | 0.755 (0.146) |
+| 3000 | 0.994 (0.007) | 0.958 (0.033) | 0.913 | | | |
+
+
+regime lies beyond the scope of our results. If we have enough data, the method demonstrates very good results even in such challenging situations as recovering a direct sum of several subspaces from non-Gaussian (not even sub-Gaussian) data.
+
+**4. Main proofs**
+
+This section collects the proofs of the main results. Some additional technical statements are postponed to the Appendix.
+
+**4.1.** *Proof of Theorem 2.1*
+
+The Inverse Wishart prior $IW_p(G, p+b-1)$ is conjugate to the multivariate Gaussian distribution, so our pseudo-posterior $\Pi(\Sigma | X^n)$ is
\ No newline at end of file
diff --git a/samples/texts/4320847/page_17.md b/samples/texts/4320847/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f9f0320dcee05b6159e2113c6485d4b8624d337
--- /dev/null
+++ b/samples/texts/4320847/page_17.md
@@ -0,0 +1,47 @@
+$IW_p(G + n\hat{\Sigma}, n+p+b-1)$. We will repeatedly use the following well-known property of the Wishart distribution:
+
+$$
+\Sigma^{-1} | \mathbf{X}^n \stackrel{d}{=} \sum_{j=1}^{n+p+b-1} W_j W_j^\top | \mathbf{X}^n,
+$$
+
+where $W_j \mid \mathbf{X}^n \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, (\mathbf{G} + n\hat{\mathbf{\Sigma}})^{-1})$.
+
+For brevity, in this section we use the notation $n_p \stackrel{\text{def}}{=} n+p+b-1$ and assume that $b \lesssim p$. As we will see, this assumption simplifies the bounds, while the case $b \gtrsim p$ does not bring any gain. Moreover, define
+
+$$
+\Sigma_{n,p} \stackrel{\text{def}}{=} \frac{1}{n_p} G + \frac{n}{n_p} \hat{\Sigma}
+$$
+
+and
+
+$$
+\boldsymbol{E}_{n,p} \stackrel{\text{def}}{=} \frac{1}{n_p} \sum_{j=1}^{n_p} Z_j Z_j^{\top} - \boldsymbol{I}_p,
+$$
+
+where $Z_j | \mathbf{X}^n \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_p)$. Then $\boldsymbol{\Sigma}^{-1} | \mathbf{X}^n$ can be represented as
+
+$$
+\Sigma^{-1} | \mathbf{X}^n \stackrel{d}{=} \Sigma_{n,p}^{-1/2} (\mathbf{E}_{n,p} + \mathbf{I}_p) \Sigma_{n,p}^{-1/2}.
+$$
+
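Once the $Z_j$ are fixed, this representation is an exact algebraic identity: with $W_j = (\mathbf{G} + n\hat{\mathbf{\Sigma}})^{-1/2} Z_j$ one has $\sum_j W_j W_j^\top = \Sigma_{n,p}^{-1/2} (\mathbf{E}_{n,p} + \mathbf{I}_p) \Sigma_{n,p}^{-1/2}$. A quick numerical sanity check (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_inv_sqrt(A):
    """Symmetric inverse square root of a positive-definite matrix."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

p, n, b = 4, 30, 1
n_p = n + p + b - 1
G = np.eye(p)
X = rng.standard_normal((n, p))
Sigma_hat = X.T @ X / n

A = G + n * Sigma_hat                      # scale matrix of the Wishart draw
Z = rng.standard_normal((n_p, p))          # rows: Z_j ~ N(0, I_p), i.i.d.
W = Z @ sym_inv_sqrt(A)                    # rows: W_j ~ N(0, A^{-1})

Sigma_inv = W.T @ W                        # sum_j W_j W_j^T
Sigma_np = G / n_p + (n / n_p) * Sigma_hat
E_np = Z.T @ Z / n_p - np.eye(p)

rhs = sym_inv_sqrt(Sigma_np) @ (E_np + np.eye(p)) @ sym_inv_sqrt(Sigma_np)
assert np.allclose(Sigma_inv, rhs)         # identity holds up to float error
```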
+We may think that in the “posterior” world all randomness comes from $\mathbf{E}_{n,p}$.
+Moreover, due to Theorem A.1, (i), there is a random set $\Upsilon$ such that on this
+set
+
+$$
+\|\mathbf{E}_{n,p}\|_{\infty} \lesssim \sqrt{\frac{\log(n_p) + p}{n_p}} \le \sqrt{\frac{\log(n) + p}{n}},
+$$
+
+and its pseudo-posterior measure
+
+$$
+\Pi(\Upsilon | \mathbf{X}^n) \geq 1 - \frac{1}{n}.
+$$
+
+**Step 1.** First, we need the following lemma.
+
+**Lemma 4.1.** *The following holds on the random set $\Upsilon$:*
+
+$$
+\|\Sigma - \hat{\Sigma}\|_{\infty} \lesssim \|\hat{\Sigma}\|_{\infty} \sqrt{\frac{\log(n) + p}{n}} + \frac{\|\mathbf{G}\|_{\infty}}{n}. \quad (4.1)
+$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_18.md b/samples/texts/4320847/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..a13bdba920bd59927b7b5ab1baf53678d4528da1
--- /dev/null
+++ b/samples/texts/4320847/page_18.md
@@ -0,0 +1,54 @@
+*Proof.* Since $\Sigma^{-1} | \mathbf{X}^n \stackrel{d}{=} \Sigma_{n,p}^{-1/2} (\mathbf{E}_{n,p} + \mathbf{I}_p) \Sigma_{n,p}^{-1/2}$, we have
+
+$$
+\begin{align*}
+\Sigma - \hat{\Sigma} &= \Sigma_{n,p}^{1/2} (\mathbf{E}_{n,p} + \mathbf{I}_p)^{-1} \Sigma_{n,p}^{1/2} - \hat{\Sigma} \\
+&= \Sigma_{n,p}^{1/2} [(\mathbf{E}_{n,p} + \mathbf{I}_p)^{-1} - \mathbf{I}_p] \Sigma_{n,p}^{1/2} + \Sigma_{n,p} - \hat{\Sigma}.
+\end{align*}
+$$
+
+Note that
+
+$$
+\|(\boldsymbol{E}_{n,p} + \boldsymbol{I}_p)^{-1} - \boldsymbol{I}_p\|_{\infty} = \left\| \sum_{s=1}^{\infty} (-\boldsymbol{E}_{n,p})^s \right\|_{\infty} \\
+\leq \sum_{s=1}^{\infty} \| \boldsymbol{E}_{n,p} \|_{\infty}^{s} = \frac{\| \boldsymbol{E}_{n,p} \|_{\infty}}{1 - \| \boldsymbol{E}_{n,p} \|_{\infty}} \lesssim \| \boldsymbol{E}_{n,p} \|_{\infty}.
+$$
+
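The Neumann-series bound $\|(\mathbf{E}_{n,p} + \mathbf{I}_p)^{-1} - \mathbf{I}_p\|_\infty \le \|\mathbf{E}_{n,p}\|_\infty / (1 - \|\mathbf{E}_{n,p}\|_\infty)$ can be checked numerically on a random symmetric contraction (a toy illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 6
B = rng.standard_normal((p, p))
E = 0.3 * (B + B.T) / np.linalg.norm(B + B.T, 2)   # symmetric, spectral norm 0.3

lhs = np.linalg.norm(np.linalg.inv(np.eye(p) + E) - np.eye(p), 2)
e = np.linalg.norm(E, 2)
assert lhs <= e / (1 - e) + 1e-12          # geometric-series bound
```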
+Hence,
+
+$$
+\|\boldsymbol{\Sigma} - \hat{\boldsymbol{\Sigma}}\|_{\infty} \lesssim \|\boldsymbol{\Sigma}_{n,p}\|_{\infty} \|E_{n,p}\|_{\infty} + \|\boldsymbol{\Sigma}_{n,p} - \hat{\boldsymbol{\Sigma}}\|_{\infty}.
+$$
+
+Finally, the observations that
+
+$$
+\begin{align*}
+\|\boldsymbol{\Sigma}_{n,p}\|_{\infty} &\le \frac{\|\mathbf{G}\|_{\infty}}{n} + \|\hat{\boldsymbol{\Sigma}}\|_{\infty}, \\
+\|\boldsymbol{\Sigma}_{n,p} - \hat{\boldsymbol{\Sigma}}\|_{\infty} &\le \frac{\|\mathbf{G}\|_{\infty}}{n} + \frac{n_p - n}{n} \|\hat{\boldsymbol{\Sigma}}\|_{\infty},
+\end{align*}
+$$
+
+finish the proof.
+
+The condition on the significant spectral gap for $\mathbf{\Sigma}^*$ and the bound (2.3) on the operator norm $\|\hat{\mathbf{\Sigma}} - \mathbf{\Sigma}^*\|$ imply a significant spectral gap for the empirical covariance $\hat{\mathbf{\Sigma}}$. The crucial Lemma A.2, applied with the empirical projector $\hat{\mathbf{P}}_\mathcal{J}$ in place of $\mathbf{P}_\mathcal{J}^*$, allows us to bound how close the linear operator
+
+$$
+\hat{L}_{\mathcal{J}}(\boldsymbol{\Sigma} - \hat{\boldsymbol{\Sigma}}) \stackrel{\text{def}}{=} \sum_{k \in \mathcal{I}_{\mathcal{J}}} \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{\hat{u}_k \hat{u}_k^\top (\boldsymbol{\Sigma} - \hat{\boldsymbol{\Sigma}}) \hat{u}_l \hat{u}_l^\top + \hat{u}_l \hat{u}_l^\top (\boldsymbol{\Sigma} - \hat{\boldsymbol{\Sigma}}) \hat{u}_k \hat{u}_k^\top}{\hat{\sigma}_k - \hat{\sigma}_l}
+$$
+
+is to $\mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}}$.
+
+**Lemma 4.2.** The following holds on the random set $\Upsilon$:
+
+$$
+\sqrt{n}\|\mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} - \hat{L}_{\mathcal{J}}(\mathbf{\Sigma} - \hat{\mathbf{\Sigma}})\|_2 \lesssim \hat{\Delta}_0,
+$$
+
+where
+
+$$
+\hat{\Delta}_0 \stackrel{\text{def}}{=} \sqrt{\frac{m_{\mathcal{J}}^*}{n}} \left( 1 + \frac{\hat{l}_{\mathcal{J}}}{\hat{g}_{\mathcal{J}}} \right) \frac{(\log(n) + p) \| \hat{\mathbf{\Sigma}} \|_{\infty}^2 + \| \mathbf{G} \|_{\infty}^2 / n}{\hat{g}_{\mathcal{J}}^2},
+$$
+
+and $\hat{l}_{\mathcal{J}}, \hat{g}_{\mathcal{J}}$ are empirical versions of $l_{\mathcal{J}}^*, g_{\mathcal{J}}^*$.
\ No newline at end of file
diff --git a/samples/texts/4320847/page_19.md b/samples/texts/4320847/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..acc5b9ed7ac6a1482af163b856ad4a223f4ae024
--- /dev/null
+++ b/samples/texts/4320847/page_19.md
@@ -0,0 +1,55 @@
+*Proof.* It follows from (A.2) of Lemma A.2 that
+
+$$
+\|\mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} - \hat{L}_{\mathcal{J}}(\mathbf{\Sigma} - \hat{\mathbf{\Sigma}})\|_{\infty} \lesssim \left(1 + \frac{\hat{l}_{\mathcal{J}}}{\hat{g}_{\mathcal{J}}} \right) \frac{\|\mathbf{\Sigma} - \hat{\mathbf{\Sigma}}\|_{\infty}^2}{\hat{g}_{\mathcal{J}}^2}.
+$$
+
+It is easy to see that the rank of $\hat{L}_{\mathcal{J}}(\boldsymbol{\Sigma} - \hat{\boldsymbol{\Sigma}})$ is at most $2m_{\mathcal{J}}^*$, and thus the rank of $\boldsymbol{P}_{\mathcal{J}} - \hat{\boldsymbol{P}}_{\mathcal{J}} - \hat{L}_{\mathcal{J}}(\boldsymbol{\Sigma} - \hat{\boldsymbol{\Sigma}})$ is at most $4m_{\mathcal{J}}^*$. Hence, taking into account the relation between the Frobenius and the spectral norm of a matrix via rank and (4.1) from Lemma 4.1, we obtain the desired statement. $\square$
+
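The rank argument above relies on the standard inequality $\|A\|_2 \le \sqrt{\operatorname{rank}(A)}\, \|A\|_\infty$ between the Frobenius and spectral norms. A toy numerical check on a random low-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
p, r = 50, 4
U = rng.standard_normal((p, r))
A = U @ U.T                                # random symmetric matrix of rank r

fro = np.linalg.norm(A, 'fro')             # Frobenius norm ||A||_2 in the paper
spec = np.linalg.norm(A, 2)                # spectral norm ||A||_inf in the paper
rank = np.linalg.matrix_rank(A)
assert rank == r
assert fro <= np.sqrt(rank) * spec + 1e-9  # ||A||_F <= sqrt(rank) * ||A||
```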
+The representation
+
+$$
+\Sigma^{-1} |X^n \stackrel{d}{=} \Sigma_{n,p}^{-1/2} (E_{n,p} + I_p) \Sigma_{n,p}^{-1/2}.
+$$
+
+helps to obtain the next result showing that $\hat{L}_J(\Sigma - \hat{\Sigma})$ can be approximated
+by $\hat{S}_J = \hat{L}_J(-\hat{\Sigma}^{1/2} E_{n,p} \hat{\Sigma}^{1/2})$.
+
+**Lemma 4.3.** *It holds*
+
+$$
+\hat{L}_{\mathcal{J}}(\boldsymbol{\Sigma} - \hat{\boldsymbol{\Sigma}}) = \hat{L}_{\mathcal{J}}\left(-\hat{\boldsymbol{\Sigma}}^{1/2} \boldsymbol{E}_{n,p} \hat{\boldsymbol{\Sigma}}^{1/2}\right) + \mathcal{R}_{\mathcal{J}} = \hat{S}_{\mathcal{J}} + \mathcal{R}_{\mathcal{J}},
+$$
+
+where the remainder $\mathcal{R}_{\mathcal{J}}$ satisfies, on the random set $\Upsilon$,
+
+$$
+\sqrt{n}\|\mathcal{R}_{\mathcal{J}}\|_2 \lesssim \hat{\Delta}_1 \stackrel{\text{def}}{=} \frac{m_{\mathcal{J}}^*}{\sqrt{n}} \cdot \frac{(\log(n)+p)\|\hat{\Sigma}\|_\infty + \|\mathbf{G}\|_\infty}{\hat{g}_{\mathcal{J}}}.
+$$
+
+*Proof.* Define $\mathbf{R}_{n,p}$ by
+
+$$
+\mathbf{R}_{n,p} \stackrel{\text{def}}{=} (\mathbf{I}_p + \mathbf{E}_{n,p})^{-1} - \mathbf{I}_p + \mathbf{E}_{n,p}.
+$$
+
+Its spectral norm can be bounded as
+
+$$
+\|\mathbf{R}_{n,p}\|_{\infty} = \left\| \sum_{s=2}^{\infty} (-\mathbf{E}_{n,p})^s \right\|_{\infty} \leq \sum_{s=2}^{\infty} \|\mathbf{E}_{n,p}\|_{\infty}^{s} = \frac{\|\mathbf{E}_{n,p}\|_{\infty}^{2}}{1 - \|\mathbf{E}_{n,p}\|_{\infty}} \lesssim \|\mathbf{E}_{n,p}\|_{\infty}^{2}.
+$$
+
+So
+
+$$
+\Sigma = \Sigma_{n,p}^{1/2} (\mathbf{E}_{n,p} + \mathbf{I}_p)^{-1} \Sigma_{n,p}^{1/2} = \Sigma_{n,p}^{1/2} (\mathbf{I}_p - \mathbf{E}_{n,p} + \mathbf{R}_{n,p}) \Sigma_{n,p}^{1/2}.
+$$
+
+Therefore for $\boldsymbol{\Sigma} - \hat{\boldsymbol{\Sigma}}$ we have
+
+$$
+\begin{align*}
+\boldsymbol{\Sigma} - \hat{\boldsymbol{\Sigma}} &= \boldsymbol{\Sigma}_{n,p}^{1/2} (\boldsymbol{I}_p - \boldsymbol{E}_{n,p} + \boldsymbol{R}_{n,p}) \boldsymbol{\Sigma}_{n,p}^{1/2} - \hat{\boldsymbol{\Sigma}} \\
+&= -\boldsymbol{\Sigma}_{n,p}^{1/2} \boldsymbol{E}_{n,p} \boldsymbol{\Sigma}_{n,p}^{1/2} + \boldsymbol{\Sigma}_{n,p}^{1/2} \boldsymbol{R}_{n,p} \boldsymbol{\Sigma}_{n,p}^{1/2} + \boldsymbol{\Sigma}_{n,p} - \hat{\boldsymbol{\Sigma}}.
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_2.md b/samples/texts/4320847/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..3299ef151abd4b5cb5c445fe697c8c3c6e36eaf0
--- /dev/null
+++ b/samples/texts/4320847/page_2.md
@@ -0,0 +1,17 @@
+random vector with the same distribution. Denote by $\Sigma^*$ its covariance matrix:
+
+$$ \Sigma^* \stackrel{\text{def}}{=} \mathbb{E}(XX^\top). $$
+
+Usually one estimates the true unknown covariance by the sample covariance matrix, given by
+
+$$ \hat{\Sigma} \stackrel{\text{def}}{=} \frac{1}{n} \sum_{j=1}^{n} X_j X_j^\top . $$
+
+Quantifying the quality of approximation of $\Sigma^*$ by $\hat{\Sigma}$ is one of the most classical problems in statistics. Surprisingly, a number of deep and strong results in this area appeared quite recently. The progress is mainly due to Bernstein-type results on the spectral norm $\|\hat{\Sigma} - \Sigma^*\|_\infty$ in random matrix theory; see, for instance, [22, 29, 31, 33, 1]. It appears that the quality of approximation is of order $n^{-1/2}$, while the dimensionality $p$ enters the error bound only logarithmically. This allows one to apply the results even in the case of very high data dimension.
+
+Functionals of the covariance matrix also arise frequently in applications. For instance, eigenvalues are well studied in different regimes; see [26, 9, 19, 34] and many more references therein. The Frobenius norm and other $l_r$-norms of the covariance matrix are of great interest in financial applications; see, e.g., [10].
+
+Much less is known about the quality of estimation of a spectral projector, which is a nonlinear functional of the covariance matrix. However, such objects arise in dimension reduction methods, manifold learning and spectral methods in community detection; see [11] and references therein for an overview of problems where spectral projectors play a crucial role. Special attention should be paid to Principal Component Analysis (PCA), probably the most famous dimension reduction method. Nowadays PCA-based methods are actively used in deep network architectures [17] and finance [12], along with other applications. Over the past decade huge progress has been achieved in theoretical guarantees for sparse PCA in high dimensions; see [18, 5, 3, 6, 13].
+
+Suppose we fix some set $\mathcal{J}$ of eigenspaces of $\Sigma^*$ and consider the direct sum of these eigenspaces and the associated true projector $P_{\mathcal{J}}^*$. Its empirical counterpart $\hat{P}_{\mathcal{J}}$ is computed from the sample covariance $\hat{\Sigma}$. The recent paper [28] presents new bounds on the so-called excess risk of PCA, defined as $\text{Tr}[\Sigma^*(P_{\mathcal{J}}^* - \hat{P}_{\mathcal{J}})]$.
+
+This paper focuses on quantifying the uncertainty in recovering the spectral projector $P_{\mathcal{J}}^*$ from its empirical counterpart $\hat{P}_{\mathcal{J}}$. More precisely, the random quantity of interest is the squared Frobenius distance between the true projector and the sample one, $\|\hat{P}_{\mathcal{J}} - P_{\mathcal{J}}^*\|_2^2$. Even though the projector $P_{\mathcal{J}}^*$ is a complex nonlinear mapping of $\Sigma^*$, a recent technique from [21] allows one to approximate $(\hat{P}_{\mathcal{J}} - P_{\mathcal{J}}^*)$ by a linear functional of $(\hat{\Sigma} - \Sigma^*)$ with root-$n$ accuracy. Several results about the distribution of this random variable are
\ No newline at end of file
diff --git a/samples/texts/4320847/page_20.md b/samples/texts/4320847/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..091fd986e0bd78b37ab809debb920c108acee993
--- /dev/null
+++ b/samples/texts/4320847/page_20.md
@@ -0,0 +1,55 @@
+From $\Sigma_{n,p}^{1/2} E_{n,p} \Sigma_{n,p}^{1/2}$ we pass to $\hat{\Sigma}^{1/2} E_{n,p} \hat{\Sigma}^{1/2}$:
+
+$$
+\begin{align*}
+\Sigma - \hat{\Sigma} &= -\hat{\Sigma}^{1/2} E_{n,p} \hat{\Sigma}^{1/2} + (\hat{\Sigma}^{1/2} E_{n,p} \hat{\Sigma}^{1/2} - \Sigma_{n,p}^{1/2} E_{n,p} \Sigma_{n,p}^{1/2}) \\
+&\quad + \Sigma_{n,p}^{1/2} R_{n,p} \Sigma_{n,p}^{1/2} + \Sigma_{n,p} - \hat{\Sigma} \\
+&= -\hat{\Sigma}^{1/2} E_{n,p} \hat{\Sigma}^{1/2} + R_1 + R_2 + R_3,
+\end{align*}
+$$
+
+where we introduce the remainder terms
+
+$$
+\begin{align*}
+\mathbf{R}_1 &\stackrel{\text{def}}{=} \hat{\mathbf{\Sigma}}^{1/2} \mathbf{E}_{n,p} \hat{\mathbf{\Sigma}}^{1/2} - \mathbf{\Sigma}_{n,p}^{1/2} \mathbf{E}_{n,p} \mathbf{\Sigma}_{n,p}^{1/2}, \\
+\mathbf{R}_2 &\stackrel{\text{def}}{=} \mathbf{\Sigma}_{n,p}^{1/2} \mathbf{R}_{n,p} \mathbf{\Sigma}_{n,p}^{1/2}, \\
+\mathbf{R}_3 &\stackrel{\text{def}}{=} \mathbf{\Sigma}_{n,p} - \hat{\mathbf{\Sigma}}.
+\end{align*}
+$$
+
+They can be bounded as
+
+$$
+\begin{align*}
+\|\mathbf{R}_1\|_\infty &\leq \|\mathbf{E}_{n,p}\|_\infty \|\hat{\mathbf{\Sigma}} - \mathbf{\Sigma}_{n,p}\|_\infty^{1/2} (\|\mathbf{\Sigma}_{n,p}\|_\infty^{1/2} + \|\hat{\mathbf{\Sigma}}\|_\infty^{1/2}), \\
+\|\mathbf{R}_2\|_\infty &\leq \|\mathbf{R}_{n,p}\|_\infty \|\mathbf{\Sigma}_{n,p}\|_\infty \lesssim \|\mathbf{E}_{n,p}\|_\infty^2 \|\mathbf{\Sigma}_{n,p}\|_\infty, \\
+\|\mathbf{R}_3\|_\infty &\lesssim \frac{\|\mathbf{G}\|_\infty + (n_p - n) \|\hat{\mathbf{\Sigma}}\|_\infty}{n_p}.
+\end{align*}
+$$
+
+Hence, omitting higher order terms, on $\Upsilon$ we have
+
+$$
+\begin{align*}
+\|\mathbf{R}_1\|_{\infty} &\lesssim \|\hat{\mathbf{\Sigma}}\|_{\infty}^{1/2} \left( \|G\|_{\infty} + p\|\hat{\mathbf{\Sigma}}\|_{\infty} \right)^{1/2} \frac{\sqrt{\log(n)+p}}{n}, \\
+\|\mathbf{R}_2\|_{\infty} &\lesssim \|\hat{\mathbf{\Sigma}}\|_{\infty} \frac{\log(n)+p}{n}, \\
+\|\mathbf{R}_3\|_{\infty} &\lesssim \frac{\|G\|_{\infty} + p\|\hat{\mathbf{\Sigma}}\|_{\infty}}{n}.
+\end{align*}
+$$
+
+Now we summarize
+
+$$
+\hat{L}_{J}(\boldsymbol{\Sigma}-\hat{\boldsymbol{\Sigma}})=\hat{S}_{J}+\mathcal{R}_{J}
+$$
+
+with
+
+$$
+\hat{S}_{\mathcal{J}} \stackrel{\text{def}}{=} -\sum_{k \in \mathcal{I}_{\mathcal{J}}} \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{\hat{\sigma}_{k}^{1/2} \hat{\sigma}_{l}^{1/2}\left(\hat{u}_{k} \hat{u}_{k}^{\top} \mathbf{E}_{n,p} \hat{u}_{l} \hat{u}_{l}^{\top} + \hat{u}_{l} \hat{u}_{l}^{\top} \mathbf{E}_{n,p} \hat{u}_{k} \hat{u}_{k}^{\top}\right)}{\hat{\sigma}_{k} - \hat{\sigma}_{l}},
+$$
+
+$$
+\mathcal{R}_{\mathcal{J}} \stackrel{\text{def}}{=} \sum_{k \in \mathcal{I}_{\mathcal{J}}} \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{\hat{u}_k \hat{u}_k^\top (\mathbf{R}_1 + \mathbf{R}_2 + \mathbf{R}_3) \hat{u}_l \hat{u}_l^\top + \hat{u}_l \hat{u}_l^\top (\mathbf{R}_1 + \mathbf{R}_2 + \mathbf{R}_3) \hat{u}_k \hat{u}_k^\top}{\hat{\sigma}_k - \hat{\sigma}_l}.
+$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_21.md b/samples/texts/4320847/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..4615b082a9742f420f7dac187465dd97ef94e228
--- /dev/null
+++ b/samples/texts/4320847/page_21.md
@@ -0,0 +1,51 @@
+Moreover,
+
+$$
+\begin{align*}
+\|\mathcal{R}_{\mathcal{J}}\|_2 & \le 2 \left\| \sum_{k \in \mathcal{I}_{\mathcal{J}}} \hat{u}_k \hat{u}_k^\top \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{(\mathbf{R}_1 + \mathbf{R}_2 + \mathbf{R}_3) \hat{u}_l \hat{u}_l^\top}{\hat{\sigma}_k - \hat{\sigma}_l} \right\|_2 \\
+& \le 2 \sum_{k \in \mathcal{I}_{\mathcal{J}}} \|\hat{u}_k \hat{u}_k^\top\|_2 \left\| \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{(\mathbf{R}_1 + \mathbf{R}_2 + \mathbf{R}_3) \hat{u}_l \hat{u}_l^\top}{\hat{\sigma}_k - \hat{\sigma}_l} \right\|_\infty \\
+& \le 2 \sum_{k \in \mathcal{I}_{\mathcal{J}}} \|\mathbf{R}_1 + \mathbf{R}_2 + \mathbf{R}_3\|_\infty \left\| \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{\hat{u}_l \hat{u}_l^\top}{\hat{\sigma}_k - \hat{\sigma}_l} \right\|_\infty \\
+& \le \frac{2 m_{\mathcal{J}}^*}{\hat{g}_{\mathcal{J}}} (\|\mathbf{R}_1\|_\infty + \|\mathbf{R}_2\|_\infty + \|\mathbf{R}_3\|_\infty),
+\end{align*}
+$$
+
+which provides the desired bound. Similarly, we have
+
+$$
+\|\hat{S}_{\mathcal{J}}\|_2 \leq \frac{2 m_{\mathcal{J}}^* \| \hat{\Sigma} \|_{\infty}}{\hat{g}_{\mathcal{J}}} \| E_{n,p} \|_{\infty} \lesssim \frac{m_{\mathcal{J}}^* \| \hat{\Sigma} \|_{\infty}}{\hat{g}_{\mathcal{J}}} \sqrt{\frac{\log(n) + p}{n}},
+$$
+
+where the last inequality holds on $\Upsilon$.
+
+The results of Lemmas 4.2 and 4.3 yield on the random set $\Upsilon$
+
+$$
+\sqrt{n}\|\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}} - \hat{\mathbf{S}}_{\mathcal{J}}\|_2 \lesssim \hat{\Delta}_0 + \hat{\Delta}_1.
+$$
+
+In addition,
+
+$$
+\begin{align*}
+& |n\|\mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}}\|_2^2 - n\|\hat{\mathbf{S}}_{\mathcal{J}}\|_2^2| \\
+&= |n\|\mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} - \hat{\mathbf{S}}_{\mathcal{J}}\|_2^2 + 2 \langle \sqrt{n}(\mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} - \hat{\mathbf{S}}_{\mathcal{J}}), \sqrt{n}\hat{\mathbf{S}}_{\mathcal{J}} \rangle_2| \\
+&\le n\|\mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} - \hat{\mathbf{S}}_{\mathcal{J}}\|_2^2 + 2\sqrt{n}\|\mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} - \hat{\mathbf{S}}_{\mathcal{J}}\|_2 \cdot \sqrt{n}\|\hat{\mathbf{S}}_{\mathcal{J}}\|_2.
+\end{align*}
+$$
+
+Thus, taking into account the bound for $\|\hat{\mathbf{S}}_{\mathcal{J}}\|_2$ and neglecting higher order terms, on $\Upsilon$ we obtain
+
+$$
+\left| n\|\mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}}\|_2^2 - n\|\hat{\mathbf{S}}_{\mathcal{J}}\|_2^2 \right| \lesssim \hat{\Delta}_2, \quad (4.2)
+$$
+
+where
+
+$$
+\hat{\Delta}_2 \stackrel{\text{def}}{=} \left\{ (\log(n) + p) \left( \left(1 + \frac{\hat{l}_{\mathcal{J}}}{\hat{g}_{\mathcal{J}}}\right) \frac{\sqrt{m_{\mathcal{J}}^*}\,\|\hat{\boldsymbol{\Sigma}}\|_\infty}{\hat{g}_{\mathcal{J}}} + m_{\mathcal{J}}^* \right) \|\hat{\boldsymbol{\Sigma}}\|_\infty + m_{\mathcal{J}}^* \|\boldsymbol{G}\|_\infty \right\} \times \frac{m_{\mathcal{J}}^* \|\hat{\boldsymbol{\Sigma}}\|_\infty}{\hat{g}_{\mathcal{J}}^2} \sqrt{\frac{\log(n)+p}{n}}.
+$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_22.md b/samples/texts/4320847/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..ddb8ebb4ab1d12d5ae68986826eae129938d8a88
--- /dev/null
+++ b/samples/texts/4320847/page_22.md
@@ -0,0 +1,52 @@
+**Step 2.** The norm $n\|\hat{S}_{\mathcal{J}}\|_2^2$ can be decomposed as follows:
+
+$$
+\begin{align*}
+n\|\hat{S}_{\mathcal{J}}\|_2^2 &= 2n \sum_{k'=1}^{p} \sum_{l'=1}^{p} \sum_{k \in \mathcal{I}_{\mathcal{J}}} \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} (\hat{u}_{k'}^\top \hat{u}_k \hat{u}_k^\top \mathbf{E}_{n,p} \hat{u}_l \hat{u}_l^\top \hat{u}_{l'})^2 \\
+&= 2n \sum_{k \in \mathcal{I}_{\mathcal{J}}} \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} (\hat{u}_k^\top \mathbf{E}_{n,p} \hat{u}_l)^2.
+\end{align*}
+$$
+
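This decomposition can be verified numerically: assembling $\hat{S}_{\mathcal{J}}$ entrywise and comparing $n\|\hat{S}_{\mathcal{J}}\|_2^2$ with the double sum gives an exact match, since the matrices $\hat{u}_k\hat{u}_l^\top + \hat{u}_l\hat{u}_k^\top$ are pairwise orthogonal in the Frobenius inner product. A small-dimensional sketch (index conventions are ours; we assume distinct empirical eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, n_p = 6, 40, 46
X = rng.standard_normal((n, p)) * np.sqrt(np.arange(1, p + 1))
Sigma_hat = X.T @ X / n
sig, U = np.linalg.eigh(Sigma_hat)
sig, U = sig[::-1], U[:, ::-1]             # decreasing eigenvalue order

Z = rng.standard_normal((n_p, p))
E = Z.T @ Z / n_p - np.eye(p)              # E_{n,p}

J = [0, 1]                                 # I_J: indices of the top eigenvalues
notJ = [k for k in range(p) if k not in J]

# S_hat = L_hat_J(-Sigma_hat^{1/2} E Sigma_hat^{1/2}) assembled entrywise
S = np.zeros((p, p))
for k in J:
    for l in notJ:
        w = np.sqrt(sig[k] * sig[l]) / (sig[k] - sig[l])
        c = U[:, k] @ E @ U[:, l]
        S -= w * c * (np.outer(U[:, k], U[:, l]) + np.outer(U[:, l], U[:, k]))

lhs = n * np.sum(S ** 2)                   # n * ||S_hat||_F^2
rhs = 2 * n * sum(sig[k] * sig[l] / (sig[k] - sig[l]) ** 2
                  * (U[:, k] @ E @ U[:, l]) ** 2
                  for k in J for l in notJ)
assert np.isclose(lhs, rhs)
```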
+Introduce a vector $\hat{\xi}_{\mathcal{J}} \in \mathbb{R}^{m_{\mathcal{J}}^*(p-m_{\mathcal{J}}^*)}$ with components
+
+$$
+\hat{\xi}_{k,l} = \sqrt{2n} \frac{\hat{\sigma}_k^{1/2} \hat{\sigma}_l^{1/2}}{\hat{\sigma}_k - \hat{\sigma}_l} \hat{u}_k^\top \mathbf{E}_{n,p} \hat{u}_l,
+$$
+
+for $k \in \mathcal{I}_J, l \notin \mathcal{I}_J$, ordered in some particular way that will become clear later.
+Note that $n\|\hat{S}_J\|_2^2 = \|\hat{\xi}_J\|^2$. Clearly, for each $k \le p$ and $j \le n_p$
+
+$$
+\eta_{k,j} \stackrel{\text{def}}{=} \hat{u}_k^\top Z_j, \qquad \eta_{k,j} \mid \mathbf{X}^n \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1).
+$$
+
+Then the components can be rewritten as
+
+$$
+\hat{\xi}_{k,l} = \frac{\sqrt{2} \hat{\sigma}_k^{1/2} \hat{\sigma}_l^{1/2}}{\hat{\sigma}_k - \hat{\sigma}_l} \frac{\sqrt{n}}{n_p} \sum_{j=1}^{n_p} \eta_{k,j} \eta_{l,j},
+$$
+
+for $k \in \mathcal{I}_J, l \notin \mathcal{I}_J$. To understand the covariance structure of $\hat{\xi}_J$, consider one
+more pair $(k', l')$ and investigate the covariance:
+
+$$
+\begin{align*}
+\hat{\Gamma}_{(k,l),(k',l')} &\stackrel{\text{def}}{=} \operatorname{Cov}(\hat{\xi}_{k,l}, \hat{\xi}_{k',l'} | \mathbf{X}^n) \\
+&= \frac{2n}{n_p^2} \sum_{j,j'=1}^{n_p} \frac{\hat{\sigma}_k^{1/2} \hat{\sigma}_l^{1/2} \hat{\sigma}_{k'}^{1/2} \hat{\sigma}_{l'}^{1/2}}{(\hat{\sigma}_k - \hat{\sigma}_l)(\hat{\sigma}_{k'} - \hat{\sigma}_{l'})} \mathrm{E}(\eta_{k,j} \eta_{l,j} \eta_{k',j'} \eta_{l',j'} \mid \mathbf{X}^n) \\
+&= \frac{2n}{n_p} \delta_{k,k'} \delta_{l,l'} \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2}
+\end{align*}
+$$
+
+with $\delta_{k,k'} = 1(k = k')$. Therefore, the covariance matrix of $\hat{\xi}_{\mathcal{J}}$ is diagonal:
+
+$$
+\hat{\Gamma}_{\mathcal{J}} = \frac{2n}{n_p} \operatorname{diag} \left( \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} \right)_{k \in \mathcal{I}_{\mathcal{J}},\, l \notin \mathcal{I}_{\mathcal{J}}}.
+$$
+
+This matrix $\hat{\Gamma}_{\mathcal{J}}$ can be compared with the matrix $\Gamma_{\mathcal{J}}^*$ defined in (2.2).
+
+**Lemma 4.4.** On the event where $\|\hat{\Sigma} - \Sigma^*\|_\infty \le g_{\mathcal{J}}^*/4$ it holds
+
+$$
+\|\hat{\Gamma}_{\mathcal{J}} - \Gamma_{\mathcal{J}}^*\|_1 \lesssim \hat{\Delta}_3
+$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_23.md b/samples/texts/4320847/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..95df5c5378545cc1ea44de166bcd9c33d4b1ce45
--- /dev/null
+++ b/samples/texts/4320847/page_23.md
@@ -0,0 +1,39 @@
+with
+
+$$
+\hat{\Delta}_3 \stackrel{\text{def}}{=} \frac{p \left(m_{\mathcal{J}}^* \|\boldsymbol{\Sigma}^*\|_{\infty}^2 \wedge \operatorname{Tr}(\boldsymbol{\Sigma}^{*2})\right)}{g_{\mathcal{J}}^{*3}} \left( \|\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*\|_{\infty} + \frac{p}{n} \|\boldsymbol{\Sigma}^*\|_{\infty} \right).
+$$
+
+*Proof.* As both matrices $\hat{\Gamma}_{\mathcal{J}}$ and $\Gamma_{\mathcal{J}}^*$ are diagonal, it holds
+
+$$
+\|\hat{\Gamma}_{\mathcal{J}} - \Gamma_{\mathcal{J}}^*\|_1 \le 2 \sum_{k \in \mathcal{I}_{\mathcal{J}}} \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \left| \frac{n}{n_p} \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} - \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right|.
+$$
+
+Let us fix arbitrary $k \in \mathcal{I}_{\mathcal{J}}$, $l \notin \mathcal{I}_{\mathcal{J}}$ and bound the corresponding term of the sum from above. We will extensively use $|\hat{\sigma}_k - \sigma_k^*| \le \|\hat{\Sigma} - \Sigma^*\|_\infty$ and $|\hat{\sigma}_l - \sigma_l^*| \le \|\hat{\Sigma} - \Sigma^*\|_\infty$, which hold due to Weyl's inequality.
+
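Weyl's inequality guarantees $\max_k |\hat{\sigma}_k - \sigma_k^*| \le \|\hat{\Sigma} - \Sigma^*\|_\infty$ for any symmetric perturbation. A toy numerical check (sizes and perturbation scale are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
p = 8
Sigma_star = np.diag(np.arange(p, 0, -1.0))
B = rng.standard_normal((p, p))
Pert = 0.1 * (B + B.T)                     # symmetric perturbation
Sigma_hat = Sigma_star + Pert

ev_star = np.sort(np.linalg.eigvalsh(Sigma_star))[::-1]
ev_hat = np.sort(np.linalg.eigvalsh(Sigma_hat))[::-1]
spec = np.linalg.norm(Pert, 2)             # ||Sigma_hat - Sigma_star|| in operator norm
assert np.max(np.abs(ev_hat - ev_star)) <= spec + 1e-9
```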
+So, we have
+
+$$
+\begin{align*}
+\left| \frac{n}{n_p} \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} - \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right| &\leq \frac{n}{n_p} \left| \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} - \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right| \\
+&\quad + \left| \left( \frac{n}{n_p} - 1 \right) \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right|.
+\end{align*}
+$$
+
+Since $n/n_p \le 1$, the first term is controlled by
+
+$$
+\begin{align*}
+\left| \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} - \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right| &= \left| \frac{\hat{\sigma}_k \hat{\sigma}_l (\sigma_k^* - \sigma_l^*)^2 - \sigma_k^* \sigma_l^* (\hat{\sigma}_k - \hat{\sigma}_l)^2}{(\sigma_k^* - \sigma_l^*)^2 (\hat{\sigma}_k - \hat{\sigma}_l)^2} \right| \\
+&= \left| \frac{(\sigma_k^* + \varepsilon_k)(\sigma_l^* + \varepsilon_l)(\sigma_k^* - \sigma_l^*)^2 - \sigma_k^* \sigma_l^* (\sigma_k^* - \sigma_l^* + \varepsilon_k - \varepsilon_l)^2}{(\sigma_k^* - \sigma_l^*)^2 (\hat{\sigma}_k - \hat{\sigma}_l)^2} \right|,
+\end{align*}
+$$
+
+where we introduced $\varepsilon_k = \hat{\sigma}_k - \sigma_k^*$ and $\varepsilon_l = \hat{\sigma}_l - \sigma_l^*$. Then, cancelling the term $\sigma_k^* \sigma_l^* (\sigma_k^* - \sigma_l^*)^2$ in the numerator, we obtain
+
+$$
+\begin{align*}
+&\left| \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} - \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right| \\
+&\le \left| \frac{(\sigma_k^* \varepsilon_l + \sigma_l^* \varepsilon_k + \varepsilon_k \varepsilon_l)(\sigma_k^* - \sigma_l^*)^2 - \sigma_k^* \sigma_l^* \left(2(\sigma_k^* - \sigma_l^*)(\varepsilon_k - \varepsilon_l) + (\varepsilon_k - \varepsilon_l)^2\right)}{(\sigma_k^* - \sigma_l^*)^2 (\hat{\sigma}_k - \hat{\sigma}_l)^2} \right|.
+\end{align*}
+$$
+
+On the event where $\|\hat{\Sigma} - \Sigma^*\|_{\infty} \le g_{\mathcal{J}}^{*}/4$ we have $|\varepsilon_k|, |\varepsilon_l| \le |\sigma_k^* - \sigma_l^*|/4,$
+therefore we can omit the terms $\varepsilon_k\varepsilon_l$ and $(\varepsilon_k - \varepsilon_l)^2$ paying a constant factor
\ No newline at end of file
diff --git a/samples/texts/4320847/page_24.md b/samples/texts/4320847/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..5bf7e402598f325bb70308e0c3b583a4f5d96e5a
--- /dev/null
+++ b/samples/texts/4320847/page_24.md
@@ -0,0 +1,46 @@
+for that:
+
+$$
+\begin{align*}
+& \left| \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} - \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right| \\
+& \lesssim \left| \frac{(\sigma_k^* \varepsilon_l + \sigma_l^* \varepsilon_k) (\sigma_k^* - \sigma_l^*)^2 - 2 \sigma_k^* \sigma_l^* (\sigma_k^* - \sigma_l^*) (\varepsilon_k - \varepsilon_l)}{(\sigma_k^* - \sigma_l^*)^2 (\hat{\sigma}_k - \hat{\sigma}_l)^2} \right| \\
+& = \left| \frac{(\sigma_k^* \varepsilon_l + \sigma_l^* \varepsilon_k) (\sigma_k^* - \sigma_l^*) - 2 \sigma_k^* \sigma_l^* (\varepsilon_k - \varepsilon_l)}{(\sigma_k^* - \sigma_l^*) (\hat{\sigma}_k - \hat{\sigma}_l)^2} \right| \\
+& = \left| \frac{(\sigma_k^* + \sigma_l^*)(\sigma_k^* \varepsilon_l - \sigma_l^* \varepsilon_k)}{(\sigma_k^* - \sigma_l^*) (\hat{\sigma}_k - \hat{\sigma}_l)^2} \right|.
+\end{align*}
+$$
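
The numerator simplification in the last step can be sanity-checked numerically; a minimal sketch with arbitrary values (the names `s_k`, `e_k`, etc. are ad hoc stand-ins for $\sigma_k^*$, $\varepsilon_k$):

```python
# Check the identity behind the last equality:
# (s_k*e_l + s_l*e_k)*(s_k - s_l) - 2*s_k*s_l*(e_k - e_l)
#     == (s_k + s_l)*(s_k*e_l - s_l*e_k)
s_k, s_l, e_k, e_l = 3.0, 1.5, 0.2, -0.1  # arbitrary test values
lhs = (s_k * e_l + s_l * e_k) * (s_k - s_l) - 2.0 * s_k * s_l * (e_k - e_l)
rhs = (s_k + s_l) * (s_k * e_l - s_l * e_k)
assert abs(lhs - rhs) < 1e-12
```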
+
+Since $\sigma_k^*\sigma_l^* \le (\sigma_k^{*2} + \sigma_l^{*2})/2$, we get
+
+$$
+\left| \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} - \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right| \lesssim \frac{\sigma_k^{*2} + \sigma_l^{*2}}{(\sigma_k^* - \sigma_l^*) (\hat{\sigma}_k - \hat{\sigma}_l)^2} \| \hat{\Sigma} - \Sigma^* \|_{\infty}.
+$$
+
+As to the denominator, again considering the event where $\|\hat{\Sigma} - \Sigma^*\|_\infty \le g_{\mathcal{J}}^*/4$,
+we have $|\varepsilon_k|, |\varepsilon_l| \le |\sigma_k^* - \sigma_l^*|/4$, which results in
+
+$$ |\hat{\sigma}_k - \hat{\sigma}_l| = |\sigma_k^* - \sigma_l^* + \varepsilon_k - \varepsilon_l| \geq |\sigma_k^* - \sigma_l^*| - |\varepsilon_k| - |\varepsilon_l| \geq |\sigma_k^* - \sigma_l^*|/2. $$
+
+Hence,
+
+$$
+\left| \frac{\hat{\sigma}_k \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)^2} - \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right| \lesssim \frac{\sigma_k^{*2} + \sigma_l^{*2}}{(\sigma_k^* - \sigma_l^*)^3} \| \hat{\Sigma} - \Sigma^* \|_{\infty}.
+$$
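
The lower bound $|\hat{\sigma}_k - \hat{\sigma}_l| \ge |\sigma_k^* - \sigma_l^*|/2$ behind this denominator can be illustrated with a quick randomized check (a sketch; the sampling setup is ad hoc):

```python
import random

random.seed(0)
for _ in range(10_000):
    gap = random.uniform(0.1, 5.0)            # plays the role of |sigma_k* - sigma_l*|
    e_k = random.uniform(-gap / 4, gap / 4)   # |eps_k| <= gap/4 on the event
    e_l = random.uniform(-gap / 4, gap / 4)   # |eps_l| <= gap/4 on the event
    # the perturbed gap stays at least half of the true gap
    assert abs(gap + e_k - e_l) >= gap / 2 - 1e-12
```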
+
+As to the term $\left| \left( \frac{n}{n_p} - 1 \right) \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right|$, it is simply bounded as
+
+$$
+\left| \left( \frac{n}{n_p} - 1 \right) \frac{\sigma_k^* \sigma_l^*}{(\sigma_k^* - \sigma_l^*)^2} \right| \lesssim \frac{p}{n} \cdot \frac{\sigma_k^{*2} + \sigma_l^{*2}}{(\sigma_k^* - \sigma_l^*)^2} \leq \frac{p}{n} \cdot \frac{\sigma_k^{*2} + \sigma_l^{*2}}{(\sigma_k^* - \sigma_l^*)^3} \| \Sigma^* \|_{\infty}.
+$$
+
+The last inequality uses the fact that $|\sigma_k^* - \sigma_l^*| \le \| \Sigma^* \|_\infty$, which is rather
+crude; however, it allows us to write the final bound in a convenient form and
+does not worsen the result.
+
+Putting this all together, we get
+
+$$
+\|\hat{\Gamma}_{\mathcal{J}} - \Gamma_{\mathcal{J}}^{*}\|_{1} \lesssim \sum_{k \in I_{\mathcal{J}}} \sum_{l \notin I_{\mathcal{J}}} \frac{\sigma_{k}^{*2} + \sigma_{l}^{*2}}{(\sigma_{k}^{*} - \sigma_{l}^{*})^{3}} \left(\|\hat{\Sigma} - \Sigma^{*}\|_{\infty} + \frac{p}{n} \|\Sigma^{*}\|_{\infty}\right),
+$$
+
+which provides the desired result once we notice that $\sum_{k \in I_{\mathcal{J}}} \sum_{l \notin I_{\mathcal{J}}} (\sigma_k^{*2} + \sigma_l^{*2})$ can
+be bounded by both $2p \text{Tr}(\Sigma^{*2})$ and $2pm_{\mathcal{J}}^* \|\Sigma^*\|_\infty^2$. $\square$
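
The final counting bound can be verified numerically on a toy spectrum (a sketch; note that $\|\Sigma^*\|_\infty$ enters squared, since the summands are squared eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 8, 3
sig = np.sort(rng.uniform(0.5, 4.0, size=p))[::-1]  # eigenvalues of Sigma*
I = set(range(m))                                   # indices in I_J
total = sum(sig[k] ** 2 + sig[l] ** 2
            for k in I for l in range(p) if l not in I)
assert total <= 2 * p * np.sum(sig ** 2)            # 2p * Tr(Sigma*^2)
assert total <= 2 * p * m * sig.max() ** 2          # 2p * m_J* * ||Sigma*||_inf^2
```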
\ No newline at end of file
diff --git a/samples/texts/4320847/page_25.md b/samples/texts/4320847/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ecfe40fdb71a5c4fe3df0fe92b5e3149c1ed038
--- /dev/null
+++ b/samples/texts/4320847/page_25.md
@@ -0,0 +1,36 @@
+Unfortunately, the entries $\hat{\xi}_{k,l}$ of $\hat{\xi}_J$ are not Gaussian because of the product $\eta_{k,j} \eta_{l,j}$, which does not allow us to apply the Gaussian comparison Lemma A.4. To get around this issue, we condition on $\hat{P}_J Z$. Namely, in the “posterior” world the random vectors $\hat{P}_J Z_j$ and $(I_p - \hat{P}_J)Z_j$ are Gaussian and uncorrelated, and therefore independent, so we can condition on $Z_J \stackrel{\text{def}}{=} (\hat{P}_J Z_1, \dots, \hat{P}_J Z_{n_p})$ to get that, conditionally on $\mathbf{X}^n$ and $Z_J$, $\hat{S}_J$ is a Gaussian random vector with covariance matrix
+
+$$ \tilde{\Gamma}_J \stackrel{\text{def}}{=} \operatorname{Cov}(\hat{\xi}_J | \mathbf{X}^n, Z_J). $$
+
+Similarly to the above, it holds that
+
+$$
+\begin{align*}
+\tilde{\Gamma}_{(k,l),(k',l')} &\stackrel{\text{def}}{=} \operatorname{Cov}(\hat{\xi}_{k,l}, \hat{\xi}_{k',l'} | \mathbf{X}^n, Z_J) \\
+&= \frac{2n}{n_p^2} \sum_{j,j'=1}^{n_p} \frac{\hat{\sigma}_k^{1/2} \hat{\sigma}_l^{1/2} \hat{\sigma}_{k'}^{1/2} \hat{\sigma}_{l'}^{1/2}}{(\hat{\sigma}_k - \hat{\sigma}_l)(\hat{\sigma}_{k'} - \hat{\sigma}_{l'})} \operatorname{E}(\eta_{k,j} \eta_{l,j} \eta_{k',j'} \eta_{l',j'} | \mathbf{X}^n, Z_J) \\
+&= \frac{2n}{n_p} \tilde{\delta}_{k,k'} \delta_{l,l'} \frac{\hat{\sigma}_k^{1/2} \hat{\sigma}_{k'}^{1/2} \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)(\hat{\sigma}_{k'} - \hat{\sigma}_l)}
+\end{align*}
+$$
+
+with
+
+$$ \tilde{\delta}_{k,k'} \stackrel{\text{def}}{=} \frac{1}{n_p} \sum_{j=1}^{n_p} \eta_{k,j} \eta_{k',j}. $$
+
+**Lemma 4.5.** On a random set of pseudo-posterior measure $1 - \frac{1}{n}$ it holds that
+
+$$ \max_{k,k' \in I_J} |\tilde{\delta}_{k,k'} - \delta_{k,k'}| \lesssim \sqrt{\frac{\log(n_p + m_J^*)}{n_p}}, $$
+
+and on this set
+
+$$ \| \tilde{\Gamma}_J - \hat{\Gamma}_J \|_1 \lesssim \hat{\Delta}_4 \stackrel{\text{def}}{=} \frac{(m_J^*)^{3/2} \| \hat{\Sigma} \|_\infty \operatorname{Tr}(\hat{\Sigma})}{\hat{g}_J^2} \sqrt{\frac{\log(n_p + m_J^*)}{n_p}}. \quad (4.4) $$
+
+*Proof.* The first result of the lemma follows easily from standard concentration inequalities for sub-exponential random variables and a union bound over the at most $|\mathcal{I}_J|^2 = (m_J^*)^2$ pairs of $k, k'$.
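
The first claim can be illustrated by a small Monte Carlo experiment with standard Gaussian coordinates standing in for $\eta_{k,j}$ (a sketch; the constant in the assertion is generous):

```python
import numpy as np

rng = np.random.default_rng(1)
n_p, m = 2000, 5
eta = rng.standard_normal((m, n_p))      # rows play the role of eta_k
delta_tilde = eta @ eta.T / n_p          # empirical version of delta_{k,k'}
err = np.max(np.abs(delta_tilde - np.eye(m)))
# rate sqrt(log(n_p + m)/n_p) from the lemma, with a generous constant
assert err <= 10 * np.sqrt(np.log(n_p + m) / n_p)
```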
+
+To obtain the second inequality we represent $\tilde{\Gamma}_J$ and $\hat{\Gamma}_J$ as
+
+$$
+\begin{align*}
+\tilde{\Gamma}_J &= \operatorname{diag}\left(\tilde{\Gamma}_J^{(l)}\right)_{l \notin I_J}, \\
+\hat{\Gamma}_J &= \operatorname{diag}\left(\hat{\Gamma}_J^{(l)}\right)_{l \notin I_J}.
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_26.md b/samples/texts/4320847/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..3f611e5a9d525950f94bd3029b4d90aa90fe8b6f
--- /dev/null
+++ b/samples/texts/4320847/page_26.md
@@ -0,0 +1,60 @@
+Due to this block structure we have
+
+$$
+\|\tilde{\Gamma}_{\mathcal{J}} - \hat{\Gamma}_{\mathcal{J}}\|_1 = \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \|\tilde{\Gamma}_{\mathcal{J}}^{(l)} - \hat{\Gamma}_{\mathcal{J}}^{(l)}\|_1.
+$$
+
+Let us fix $l \notin \mathcal{I}_J$ and focus on the corresponding block of size $m_J^* \times m_J^*$.
+It is easy to observe that for each $k, k' \in \mathcal{I}_J$
+
+$$
+\tilde{\Gamma}_{(k,l),(k',l)} - \hat{\Gamma}_{(k,l),(k',l)} = \frac{2n}{n_p} \frac{\hat{\sigma}_k^{1/2} \hat{\sigma}_{k'}^{1/2} \hat{\sigma}_l}{(\hat{\sigma}_k - \hat{\sigma}_l)(\hat{\sigma}_{k'} - \hat{\sigma}_l)} \cdot (\tilde{\delta}_{k,k'} - \delta_{k,k'})
+$$
+
+and, therefore,
+
+$$
+\max_{k,k' \in \mathcal{I}_J} |\tilde{\Gamma}_{(k,l),(k',l)} - \hat{\Gamma}_{(k,l),(k',l)}| \leq \frac{2\|\hat{\Sigma}\|_{\infty} \hat{\sigma}_l}{\hat{g}_{\mathcal{J}}^2} \max_{k,k' \in \mathcal{I}_J} |\tilde{\delta}_{k,k'} - \delta_{k,k'}|.
+$$
+
+Finally, since
+
+$$
+\begin{align*}
+\|\tilde{\Gamma}_{\mathcal{J}}^{(l)} - \hat{\Gamma}_{\mathcal{J}}^{(l)}\|_1 &\le \sqrt{m_{\mathcal{J}}^*} \|\tilde{\Gamma}_{\mathcal{J}}^{(l)} - \hat{\Gamma}_{\mathcal{J}}^{(l)}\|_2 \\
+&\le (m_{\mathcal{J}}^*)^{3/2} \max_{k,k' \in \mathcal{I}_{\mathcal{J}}} |\tilde{\Gamma}_{(k,l),(k',l)} - \hat{\Gamma}_{(k,l),(k',l)}|,
+\end{align*}
+$$
+
+the obtained inequalities provide the result of the lemma. $\square$
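
The chain of norm inequalities used above ($\|A\|_1 \le \sqrt{m}\|A\|_2 \le m^{3/2}\max_{i,j}|a_{ij}|$ for an $m \times m$ block) can be checked on a random symmetric matrix (a sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
A = rng.standard_normal((m, m))
A = (A + A.T) / 2                         # a symmetric m x m block
s = np.linalg.svd(A, compute_uv=False)    # singular values
nuc = s.sum()                             # Schatten-1 (nuclear) norm
fro = np.sqrt((s ** 2).sum())             # Schatten-2 (Frobenius) norm
assert nuc <= np.sqrt(m) * fro + 1e-9     # ||A||_1 <= sqrt(m) ||A||_2
assert fro <= m * np.abs(A).max() + 1e-9  # ||A||_2 <= m * max entry
```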
+
+Putting together (4.3) and (4.4) yields the bound
+
+$$
+\|\tilde{\Gamma}_{\mathcal{J}} - \Gamma_{\mathcal{J}}^*\|_1 \lesssim \hat{\Delta}_3 + \hat{\Delta}_4.
+$$
+
+The Gaussian comparison Lemma A.4 can be used to compare the conditional
+distribution of $\|\hat{\xi}_{\mathcal{J}}\|$ given $\mathbf{X}^n$, $\hat{\mathbf{P}}_{\mathcal{J}}Z$ and the unconditional distribution of
+$\|\xi_{\mathcal{J}}\|$: on a random set of pseudo-posterior measure $1 - \frac{1}{n}$
+
+$$
+\sup_{x \in \mathbb{R}} \left| \Pi \left( \| \hat{\xi}_{\mathcal{J}} \|^{2} \le x \mid \mathbf{X}^{n}, Z_{\mathcal{J}} \right) - P(\| \xi_{\mathcal{J}} \|^{2} \le x) \right|
+$$
+
+$$
+\lesssim \frac{\hat{\Delta}_3 + \hat{\Delta}_4}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2} (\|\Gamma_{\mathcal{J}}^*\|_2^2 - \|\Gamma_{\mathcal{J}}^*\|_{\infty}^2)^{1/4}} .
+$$
+
+Of course, integrating w.r.t. $\hat{\mathbf{P}}_\mathcal{J}Z$ ensures a similar result when conditioning on
+the data $\mathbf{X}^n$ only:
+
+$$
+\sup_{x \in \mathbb{R}} \left| \Pi \left( \| \hat{\xi}_J \|^{2} \le x \mid \mathbf{X}^n \right) - P(\| \xi_J \|^{2} \le x) \right|
+$$
+
+$$
+\lesssim \frac{\hat{\Delta}_3 + \hat{\Delta}_4}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2} (\|\Gamma_{\mathcal{J}}^*\|_2^2 - \|\Gamma_{\mathcal{J}}^*\|_{\infty}^2)^{1/4}} + \frac{1}{n} \quad (4.5)
+$$
+
+with probability one.
\ No newline at end of file
diff --git a/samples/texts/4320847/page_27.md b/samples/texts/4320847/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e17073cb99c9aafc090c672ec425c00686a1aa6
--- /dev/null
+++ b/samples/texts/4320847/page_27.md
@@ -0,0 +1,42 @@
+**Step 3** So far our bounds $\hat{\Delta}_2, \hat{\Delta}_3, \hat{\Delta}_4$ were obtained in the “posterior” world, and they are random in the $X^n$-world since they depend on $\hat{\Sigma}$. We want to bound them by deterministic counterparts $\Delta_2, \Delta_3, \Delta_4$ with probability $1 - 1/n$. To do so, we basically need to bound $\|\hat{\Sigma}\|_{\infty}$, $\text{Tr}(\hat{\Sigma})$, $\hat{l}_{\mathcal{J}}$ from above and $\hat{g}_{\mathcal{J}}$ from below. This can be done as follows:
+
+$$
+\begin{align*}
+\|\hat{\Sigma}\|_{\infty} &\leq \|\Sigma^*\|_{\infty} + \|\hat{\Sigma} - \Sigma^*\|_{\infty} \leq \|\Sigma^*\|_{\infty}(1 + \hat{\delta}_n), \\
+\text{Tr}(\hat{\Sigma}) &\leq \text{Tr}(\Sigma^*) + p\|\hat{\Sigma} - \Sigma^*\|_{\infty} \leq \text{Tr}(\Sigma^*) \left(1 + \frac{p\hat{\delta}_n \|\Sigma^*\|_{\infty}}{\text{Tr}(\Sigma^*)}\right).
+\end{align*}
+$$
+
+with probability $1 - 1/n$. Due to the definition of the spectral gap, there is an index $j$ such that $\hat{g}_{\mathcal{J}} = \hat{\sigma}_j - \hat{\sigma}_{j+1}$. Then, due to Weyl's inequality, we have
+
+$$
+\begin{align*}
+\hat{g}_{\mathcal{J}} &= \hat{\sigma}_j - \hat{\sigma}_{j+1} \geq \sigma_j^* - \sigma_{j+1}^* - |\hat{\sigma}_j - \sigma_j^*| - |\hat{\sigma}_{j+1} - \sigma_{j+1}^*| \\
+&\geq g_{\mathcal{J}}^* - 2\|\hat{\Sigma} - \Sigma^*\|_{\infty} \geq g_{\mathcal{J}}^* - 2\hat{\delta}_n \|\Sigma^*\|_{\infty}
+\end{align*}
+$$
+
+with probability $1 - 1/n$. Similarly, we can bound $\hat{l}_{\mathcal{J}}$ from above. Further, we can plug the obtained inequalities directly into $\hat{\Delta}_2, \hat{\Delta}_3, \hat{\Delta}_4$ in order to get $\Delta_2, \Delta_3, \Delta_4$ without any assumptions on $\hat{\delta}_n$. Another option is to use our assumption
+
+$$
+\hat{\delta}_n \leq \frac{g_{\mathcal{J}}^*}{4 \| \boldsymbol{\Sigma}^* \|_{\infty}} \wedge \frac{r(\boldsymbol{\Sigma}^*)}{p},
+$$
+
+that ensures $\|\hat{\Sigma}\|_{\infty} \lesssim \|\Sigma^*\|_{\infty}$, $\text{Tr}(\hat{\Sigma}) \lesssim \text{Tr}(\Sigma^*)$, $\hat{g}_{\mathcal{J}} \gtrsim g_{\mathcal{J}}^*$, $\hat{l}_{\mathcal{J}} \lesssim l_{\mathcal{J}}^*$. This allows us to obtain more transparent bounds on $\Delta_2, \Delta_3, \Delta_4$. Also note that this assumption guarantees that the event from Lemma 4.4 is of probability at least $1 - 1/n$. We conclude that
+
+$$
+\begin{align*}
+\hat{\Delta}_2 &\lesssim \Delta_2 \stackrel{\text{def}}{=} \left( (\log(n) + p) \left( \left(1 + \frac{l_{\mathcal{J}}^*}{g_{\mathcal{J}}^*}\right) \frac{\sqrt{m_{\mathcal{J}}^*} \|\boldsymbol{\Sigma}^*\|_\infty}{g_{\mathcal{J}}^*} + 1 \right) \|\boldsymbol{\Sigma}^*\|_\infty + \|\mathbf{G}\|_\infty \right) \\
+&\qquad \times \frac{m_{\mathcal{J}}^* \|\boldsymbol{\Sigma}^*\|_\infty}{g_{\mathcal{J}}^{*2}} \sqrt{\frac{\log(n)+p}{n}}, \\
+\hat{\Delta}_3 &\lesssim \Delta_3 \stackrel{\text{def}}{=} \frac{\|\boldsymbol{\Sigma}^*\|_\infty (m_{\mathcal{J}}^* \|\boldsymbol{\Sigma}^*\|_\infty^2 \wedge \text{Tr}(\boldsymbol{\Sigma}^{*2}))}{g_{\mathcal{J}}^{*3}} \, p \left(\hat{\delta}_n + \frac{p}{n}\right), \\
+\hat{\Delta}_4 &\lesssim \Delta_4 \stackrel{\text{def}}{=} \frac{(m_{\mathcal{J}}^*)^{3/2} \|\boldsymbol{\Sigma}^*\|_\infty \text{Tr}(\boldsymbol{\Sigma}^*)}{g_{\mathcal{J}}^{*2}} \sqrt{\frac{\log(n)}{n}}
+\end{align*}
+$$
+
+with probability $1 - 1/n$ in the $X^n$-world.
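
The Weyl-inequality step of Step 3 ($|\hat{\sigma}_j - \sigma_j^*| \le \|\hat{\Sigma} - \Sigma^*\|_\infty$) is easy to confirm numerically (a sketch with a random symmetric perturbation):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 10
B = rng.standard_normal((p, p))
S = B @ B.T                               # a covariance-like matrix
E = rng.standard_normal((p, p))
E = (E + E.T) / 2 * 0.1                   # small symmetric perturbation
ev_S = np.linalg.eigvalsh(S)              # sorted eigenvalues
ev_SE = np.linalg.eigvalsh(S + E)
op = np.linalg.norm(E, 2)                 # operator norm of the perturbation
# Weyl: each eigenvalue moves by at most the operator norm of the perturbation
assert np.max(np.abs(ev_SE - ev_S)) <= op + 1e-9
```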
\ No newline at end of file
diff --git a/samples/texts/4320847/page_28.md b/samples/texts/4320847/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..a404fa85c3fa613545cb975a4d6b1405333adf64
--- /dev/null
+++ b/samples/texts/4320847/page_28.md
@@ -0,0 +1,46 @@
+Now we combine the obtained bounds. For $\Delta_2$ defined above and arbitrary $x \in \mathbb{R}$ it holds
+
+$$
+\begin{aligned}
+& \Pi (n \| \mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} \|_2^2 \le x \mid \mathbf{X}^n) \\
+& \le \Pi (n \| \hat{\mathcal{S}}_{\mathcal{J}} \|_2^2 \le x + \Delta_2 \mid \mathbf{X}^n) \\
+& \quad + \Pi (n \| \mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} \|_2^2 - n \| \hat{\mathcal{S}}_{\mathcal{J}} \|_2^2 \le -\Delta_2 \mid \mathbf{X}^n).
+\end{aligned}
+$$
+
+Since $n\|\hat{\mathcal{S}}_{\mathcal{J}}\|_2^2 |\mathbf{X}^n \stackrel{d}{=} \|\hat{\xi}_{\mathcal{J}}\|^2 |\mathbf{X}^n$, $\hat{\Delta}_2 \lesssim \Delta_2$ with probability $1-\frac{1}{n}$, and taking (4.2) into account, we deduce
+
+$$ \Pi (n \| \mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} \|_2^2 \leq x \mid \mathbf{X}^n ) \leq \Pi ( \| \xi_{\mathcal{J}} \|^2 \leq x + \Delta_2 \mid \mathbf{X}^n ) + \Pi (\Upsilon^c \mid \mathbf{X}^n ) $$
+
+with probability $1 - \frac{1}{n}$. Subtracting $\mathbb{P}(\|\xi_{\mathcal{J}}\|^2 \le x)$ and taking the supremum over $x$ on both sides, we get
+
+$$
+\begin{aligned}
+& \sup_{x \in \mathbb{R}} \left\{ \Pi (n \| \mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} \|_2^2 \le x | \mathbf{X}^n) - \mathbb{P} (\| \xi_{\mathcal{J}} \|^2 \le x) \right\} \\
+& \le \sup_{x \in \mathbb{R}} \left\{ \Pi (\| \hat{\xi}_{\mathcal{J}} \|_2^2 \le x + \Delta_2 | \mathbf{X}^n) - \mathbb{P} (\| \xi_{\mathcal{J}} \|_2^2 \le x + \Delta_2) \right\} \\
+& \quad + \sup_{x \in \mathbb{R}} \left\{ \mathbb{P} (\| \xi_{\mathcal{J}} \|_2^2 \le x + \Delta_2) - \mathbb{P} (\| \xi_{\mathcal{J}} \|_2^2 \le x) \right\} + \Pi (\Upsilon^c \mid \mathbf{X}^n).
+\end{aligned}
+$$
+
+The first term in the right-hand side is bounded by $\frac{\Delta_3+\Delta_4}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2}(\|\Gamma_{\mathcal{J}}^*\|_2^2-\|\Gamma_{\mathcal{J}}^*\|_\infty^2)^{1/4}} +$
+$\frac{1}{n}$ with probability $1 - \frac{1}{n}$ due to (4.5). The second term does not exceed
+$\frac{\Delta_2}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2}(\|\Gamma_{\mathcal{J}}^*\|_2^2-\|\Gamma_{\mathcal{J}}^*\|_\infty^2)^{1/4}}$ according to the Gaussian anti-concentration Lemma
+A.3. The last term is at most $\frac{1}{n}$ by definition of $\Upsilon$. Therefore,
+
+$$
+\begin{aligned}
+& \sup_{x \in \mathbb{R}} \left\{ \Pi (n \| \boldsymbol{\mathrm{P}}_\mathcal{J} - \hat{\boldsymbol{\mathrm{P}}}_\mathcal{J} \|_2^2 \le x | \boldsymbol{\mathrm{X}}^n) - \mathbb{P} (\| \boldsymbol{\xi}_\mathcal{J} \|^2 \le x) \right\} \\
+& \lesssim \frac{\Delta_2 + \Delta_3 + \Delta_4}{\|\Gamma_\mathcal{J}^*\|_2^{1/2} (\|\Gamma_\mathcal{J}^*\|_2^2 - \|\Gamma_\mathcal{J}^*\|_\infty^2)^{1/4}} + \frac{1}{n}
+\end{aligned}
+$$
+
+with probability $1 - 1/n$. Similarly, one derives
+
+$$
+\begin{aligned}
+& \sup_{x \in \mathbb{R}} \left\{ \mathbb{P} (\|\xi_{\mathcal{J}}\|^2 \le x) - \Pi (n \| \boldsymbol{\mathrm{P}}_{\mathcal{J}} - \hat{\boldsymbol{\mathrm{P}}}_{\mathcal{J}} \|_2^2 \le x | \boldsymbol{\mathrm{X}}^n) \right\} \\
+& \lesssim \frac{\Delta_2 + \Delta_3 + \Delta_4}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2} (\|\Gamma_{\mathcal{J}}^*\|_2^2 - \|\Gamma_{\mathcal{J}}^*\|_\infty^2)^{1/4}} + \frac{1}{n}
+\end{aligned}
+$$
+
+with probability $1 - 1/n$. The previous two inequalities yield the desired result.
\ No newline at end of file
diff --git a/samples/texts/4320847/page_29.md b/samples/texts/4320847/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..5acdb275d0c2647e0b60edbfb2097e8a618a7185
--- /dev/null
+++ b/samples/texts/4320847/page_29.md
@@ -0,0 +1,44 @@
+### 4.2. Proof of Corollary 2.3
+
+Let $\xi_J \sim \mathcal{N}(0, \Gamma_J^*)$. Due to Theorem 2.2 we have
+
+$$ \sup_{x \in \mathbb{R}} \left| \mathbb{P} \left( n \| \hat{\mathbf{P}}_J - \mathbf{P}_J^* \|_2^2 > x \right) - \mathbb{P} (\| \xi_J \|_2^2 > x) \right| \lesssim \bar{\Delta}. $$
+
+Fix an arbitrary significance level $\alpha \in (0, 1)$ (or confidence level $1-\alpha$). Recall that by $\gamma_\alpha$ we denote the $\alpha$-quantile of $n\|\hat{\mathbf{P}}_J - \mathbf{P}_J^*\|_2^2$. Let us fix an event $\Theta$ such that
+
+$$ \sup_{x \in \mathbb{R}} \left| \Pi \left( n \| \mathbf{P}_J - \hat{\mathbf{P}}_J \|_2^2 > x \mid \mathbf{X}^n \right) - \mathbb{P} (\| \xi_J \|_2^2 > x) \right| \lesssim \diamond. $$
+
+According to Theorem 2.1 its probability is at least $1 - 1/n$. Hence, by the triangle inequality it holds on $\Theta$
+
+$$ \sup_{x \in \mathbb{R}} \left| \Pi \left( n \| \mathbf{P}_J - \hat{\mathbf{P}}_J \|_2^2 > x \mid \mathbf{X}^n \right) - \mathbb{P} \left( n \| \hat{\mathbf{P}}_J - \mathbf{P}_J^* \|_2^2 > x \right) \right| \leq \diamond' \asymp \bar{\Delta} + \diamond. $$
+
+Therefore, taking $x = \gamma_{\alpha-\diamond'}$ and $x = \gamma_{\alpha+\diamond'}$, we get on $\Theta$
+
+$$
+\begin{aligned}
+& \left| \Pi \left( n \| \mathbf{P}_J - \hat{\mathbf{P}}_J \|_2^2 > \gamma_{\alpha-\diamond'} \mid \mathbf{X}^n \right) - (\alpha - \diamond') \right| \leq \diamond', \\
+& \left| \Pi \left( n \| \mathbf{P}_J - \hat{\mathbf{P}}_J \|_2^2 > \gamma_{\alpha+\diamond'} \mid \mathbf{X}^n \right) - (\alpha + \diamond') \right| \leq \diamond'.
+\end{aligned}
+$$
+
+Thus,
+
+$$
+\begin{aligned}
+\Pi (n \| \mathbf{P}_J - \hat{\mathbf{P}}_J \|_2^2 > \gamma_{\alpha-\diamond'} | \mathbf{X}^n) &\le (\alpha - \diamond') + \diamond' = \alpha, \\
+\Pi (n \| \mathbf{P}_J - \hat{\mathbf{P}}_J \|_2^2 > \gamma_{\alpha+\diamond'} | \mathbf{X}^n) &\ge (\alpha + \diamond') - \diamond' = \alpha.
+\end{aligned}
+$$
+
+By definition of $\gamma_\alpha^\circ$ the previous two inequalities yield
+
+$$ \gamma_{\alpha+\diamond'} \leq \gamma_\alpha^\circ \leq \gamma_{\alpha-\diamond'} \quad \text{on } \Theta. $$
+
+Hence,
+
+$$
+\begin{align*}
+\mathbb{P}(\gamma_{\alpha}^{\circ} < \gamma_{\alpha+\diamond'}) &\leq \mathbb{P}(\Theta^{c}) \leq \frac{1}{n}, \\
+\mathbb{P}(\gamma_{\alpha}^{\circ} > \gamma_{\alpha-\diamond'}) &\leq \mathbb{P}(\Theta^{c}) \leq \frac{1}{n}.
+\end{align*}
+$$
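
The quantile sandwich $\gamma_{\alpha+\diamond'} \le \gamma_\alpha^\circ \le \gamma_{\alpha-\diamond'}$ rests on the tail quantile $\gamma_\alpha$ being nonincreasing in $\alpha$; a quick numerical illustration on a generic sample (a sketch, using a chi-square stand-in for the statistic):

```python
import numpy as np

rng = np.random.default_rng(4)
sample = rng.chisquare(df=5, size=100_000)   # stand-in for n||P_hat - P*||^2

def gamma(alpha):
    # gamma_alpha: the level with P(X > gamma_alpha) ~= alpha
    return np.quantile(sample, 1 - alpha)

a, d = 0.1, 0.02
# tail quantiles are nonincreasing in the level alpha
assert gamma(a + d) <= gamma(a) <= gamma(a - d)
```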
\ No newline at end of file
diff --git a/samples/texts/4320847/page_3.md b/samples/texts/4320847/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..64659e42bf81bc0758c1dd3f1d3ae27952eff1cc
--- /dev/null
+++ b/samples/texts/4320847/page_3.md
@@ -0,0 +1,11 @@
+available for Gaussian observations: $X_1, \dots, X_n \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, \Sigma^*)$. For the case when $\mathcal{J}$ corresponds to a single eigenvalue in the spectrum and consists of one eigenspace, the normal approximation of $n\|\hat{\mathbf{P}}_\mathcal{J} - \mathbf{P}_\mathcal{J}^*\|_2^2$ was shown in [24] with a tight bound on
+
+$$ \sup_{x \in \mathbb{R}} \left| \mathbb{P}\left( \frac{n\|\hat{\mathbf{P}}_\mathcal{J} - \mathbf{P}_\mathcal{J}^*\|_2^2 - \mathrm{E}\left(n\|\hat{\mathbf{P}}_\mathcal{J} - \mathbf{P}_\mathcal{J}^*\|_2^2\right)}{\mathrm{Var}^{1/2}\left(n\|\hat{\mathbf{P}}_\mathcal{J} - \mathbf{P}_\mathcal{J}^*\|_2^2\right)} \le x \right) - \Phi(x) \right|, $$
+
+where $\Phi(x)$ is the standard normal distribution function. However, the distribution of $n\|\hat{\mathbf{P}}_\mathcal{J} - \mathbf{P}_\mathcal{J}^*\|_2^2$ depends on the unknown covariance matrix via its first two moments. Due to this fact, it is difficult to use this result directly for constructing confidence sets for the true projector $\mathbf{P}_\mathcal{J}^*$. The paper [23] demonstrates convergence to a Cauchy-type limit independent of the true covariance operator in a “high-complexity” setting where $\frac{\mathrm{Tr}(\Sigma^*)}{\|\Sigma^*\|_\infty} \to \infty$. Alternatively, a bootstrap approach can be used to overcome the described problem; see [27]. The bootstrap validity result is based on the approximation of the distribution of $n\|\hat{\mathbf{P}}_\mathcal{J} - \mathbf{P}_\mathcal{J}^*\|_2^2$ by the distribution of a Gaussian quadratic form $\|\xi\|^2$. Namely, for Gaussian data, Theorem 4.3 of [27] provides the following statement:
+
+$$ \sup_{x \in \mathbb{R}} \left| \mathbb{P}(n\|\hat{\mathbf{P}}_\mathcal{J} - \mathbf{P}_\mathcal{J}^*\|_2^2 \le x) - \mathbb{P}(\|\xi\|^2 \le x) \right| \le \bar{\Delta}, \quad (1.1) $$
+
+where $\xi$ is a zero-mean Gaussian vector with a specific covariance structure and $\bar{\Delta}$ is an explicit error term. A similar approximation is obtained in the bootstrap world, which reduces the original problem to a question about Gaussian comparison and Gaussian anti-concentration for large balls.
+
+This paper proposes to look at this problem from a Bayesian point of view. The standard approach for a nonparametric analysis of the posterior distribution is based on the prominent Bernstein–von Mises (BvM) phenomenon. The BvM result states some pivotal (Gaussian) behavior of the posterior. The paper [8] developed a general framework for a functional BvM theorem, while [35] used similar ideas to demonstrate asymptotic normality of approximately linear functionals of covariance and precision matrices. In particular, it can be used to justify the use of Bayesian credible sets as frequentist confidence sets for the target parameter; see [25, 32, 15, 20, 4, 7] among others. In this work, we aim to address a similar question specifically for spectral projectors of the covariance matrix. It appears that the general BvM technique can be significantly improved and refined for the problem at hand. The use of the classical conjugate Wishart prior helps not only to build a numerically efficient procedure but also to establish precise finite-sample results for the posterior credible sets under mild and general assumptions on the data distribution. The key observation here is that, similarly to the bootstrap approach of [27], the credible level sets for the posterior are nearly elliptic, and the corresponding posterior probability can be approximated by a generalized chi-squared-type distribution. This
\ No newline at end of file
diff --git a/samples/texts/4320847/page_30.md b/samples/texts/4320847/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..73c7428c0487718194233b99565fede28b74a298
--- /dev/null
+++ b/samples/texts/4320847/page_30.md
@@ -0,0 +1,42 @@
+Now we can write the following chain of inequalities:
+
+$$
+\begin{align*}
+& \mathbb{P} (n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 > \gamma_{\alpha}^\circ) \\
+& \leq \mathbb{P} \left( \left\{ n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 > \gamma_{\alpha+\diamond'} \right\} \cup \left\{ \gamma_{\alpha}^\circ < \gamma_{\alpha+\diamond'} \right\} \right) \\
+& \leq \mathbb{P} (n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 > \gamma_{\alpha+\diamond'}) + \mathbb{P} (\gamma_{\alpha}^\circ < \gamma_{\alpha+\diamond'}) \leq \alpha + \diamond' + \frac{1}{n}
+\end{align*}
+$$
+
+and
+
+$$
+\begin{align*}
+\mathbb{P} (n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 > \gamma_{\alpha}^{\circ}) &= 1 - \mathbb{P} (n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 \leq \gamma_{\alpha}^{\circ}) \\
+&\geq 1 - \mathbb{P} \left( \left\{ n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 \leq \gamma_{\alpha-\diamond'} \right\} \cup \left\{ \gamma_{\alpha}^{\circ} > \gamma_{\alpha-\diamond'} \right\} \right) \\
+&\geq 1 - \mathbb{P} (n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 \leq \gamma_{\alpha-\diamond'}) - \mathbb{P} (\gamma_{\alpha}^{\circ} > \gamma_{\alpha-\diamond'}) \\
+&= \mathbb{P} (n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 > \gamma_{\alpha-\diamond'}) - \mathbb{P} (\gamma_{\alpha}^{\circ} > \gamma_{\alpha-\diamond'}) \geq \alpha - \diamond' - \frac{1}{n}.
+\end{align*}
+$$
+
+Finally, these inequalities imply the following bound
+
+$$
+|\alpha - \mathbb{P}(n \|\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^*\|_2^2 > \gamma_\alpha^\circ)| \leq \diamond' + \frac{1}{n},
+$$
+
+which concludes the proof.
+
+**Acknowledgements**
+
+We thank an anonymous Referee for very valuable comments and suggestions which led to a significant improvement of the paper.
+
+This work has been funded by the Russian Academic Excellence Project ‘5-100’. The results of Section 2 have been obtained under the support of the RSF grant No. 18-11-00132. Financial support by the German Research Foundation (DFG) through the Collaborative Research Center 1294 is gratefully acknowledged.
+
+**Appendix A: Auxiliary results**
+
+Here we formulate some well-known results that were used throughout the paper.
+
+The following theorem gathers several crucial results on concentration of sample covariance.
+
+**Theorem A.1.** Let $X_1, \dots, X_n$ be i.i.d. zero-mean random vectors in $\mathbb{R}^p$. Denote the true covariance matrix as $\Sigma^* \stackrel{\text{def}}{=} \operatorname{E}(X_i X_i^\top)$ and the sample covariance as $\hat{\Sigma} = \frac{1}{n} \sum_{i=1}^n X_i X_i^\top$. Suppose the data are obtained from:
\ No newline at end of file
diff --git a/samples/texts/4320847/page_31.md b/samples/texts/4320847/page_31.md
new file mode 100644
index 0000000000000000000000000000000000000000..1e003528954f41bfea707929aa1c9dfb8ba5208a
--- /dev/null
+++ b/samples/texts/4320847/page_31.md
@@ -0,0 +1,33 @@
+(i) Gaussian distribution $\mathcal{N}(0, \boldsymbol{\Sigma}^*)$. In this case, define $\hat{\delta}_n$ as
+
+$$\hat{\delta}_n \asymp \sqrt{\frac{r(\boldsymbol{\Sigma}^*) + \log(n)}{n}};$$
+
+(ii) Sub-Gaussian distribution. In this case, define $\hat{\delta}_n$ as
+
+$$\hat{\delta}_n \asymp \sqrt{\frac{p + \log(n)}{n}};$$
+
+(iii) a distribution supported in some centered Euclidean ball of radius $R$. In this case, define $\hat{\delta}_n$ as
+
+$$\hat{\delta}_n \asymp \frac{R}{\sqrt{\|\boldsymbol{\Sigma}^*\|}} \sqrt{\frac{\log(n)}{n}};$$
+
+(iv) log-concave probability measure. In this case, define $\hat{\delta}_n$ as
+
+$$\hat{\delta}_n \asymp \sqrt{\frac{\log^6(n)}{np}}.$$
+
+Then in all the cases above the following concentration result for $\hat{\boldsymbol{\Sigma}}$ holds with the corresponding $\hat{\delta}_n$:
+
+$$\|\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*\|_{\infty} \leq \hat{\delta}_n \|\boldsymbol{\Sigma}^*\|_{\infty}$$
+
+with probability at least $1 - \frac{1}{n}$.
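
For case (ii), the stated rate is easy to observe empirically (a sketch with $\Sigma^* = I_p$ and a generous absolute constant):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 5000, 10
X = rng.standard_normal((n, p))                  # Sigma* = I_p
Sigma_hat = X.T @ X / n
err = np.linalg.norm(Sigma_hat - np.eye(p), 2)   # operator-norm error
delta = np.sqrt((p + np.log(n)) / n)             # rate from case (ii)
assert err <= 5 * delta                          # generous constant
```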
+
+*Proof.* (i) See [22], Corollary 2. (ii) This is a well-known simple result presented in a range of papers and lecture notes. See, e.g. [29], Theorem 4.6. (iii) See [33], Corollary 5.52. Usually the radius $R$ is taken such that $\frac{R}{\sqrt{\|\boldsymbol{\Sigma}^*\|}} \asymp \frac{\sqrt{\mathrm{Tr}(\boldsymbol{\Sigma}^*)}}{\sqrt{\|\boldsymbol{\Sigma}^*\|}} = \sqrt{r(\boldsymbol{\Sigma}^*)}$. (iv) See [1], Theorem 4.1. $\square$
+
+The following lemma is a crucial tool when working with spectral projectors.
+
+**Lemma A.2.** The following bound holds for all $\mathcal{J} = \{r^{-}, r^{-}+1, \dots, r^{+}\}$ with $1 \le r^{-} \le r^{+} \le q$:
+
+$$\|\tilde{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^{*}\|_{\infty} \le 4 \left(1 + \frac{2 l_{\mathcal{J}}^{*}}{\pi g_{\mathcal{J}}^{*}}\right) \frac{\|\tilde{\mathbf{\Sigma}} - \mathbf{\Sigma}^{*}\|_{\infty}}{g_{\mathcal{J}}^{*}}.$$
+
+Moreover, the following representation holds:
+
+$$\tilde{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* = L_{\mathcal{J}}(\tilde{\mathbf{\Sigma}} - \mathbf{\Sigma}^*) + R_{\mathcal{J}}(\tilde{\mathbf{\Sigma}} - \mathbf{\Sigma}^*), \quad (\text{A.1})$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_32.md b/samples/texts/4320847/page_32.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a4b5b6161b310989e5b462e0495d4df6982c048
--- /dev/null
+++ b/samples/texts/4320847/page_32.md
@@ -0,0 +1,33 @@
+where
+
+$$L_J(\tilde{\Sigma} - \Sigma^*) \stackrel{\text{def}}{=} \sum_{r \in \mathcal{J}} \sum_{s \notin \mathcal{J}} \frac{\mathbf{P}_r^*(\tilde{\Sigma} - \Sigma^*) \mathbf{P}_s^* + \mathbf{P}_s^*(\tilde{\Sigma} - \Sigma^*) \mathbf{P}_r^*}{\mu_r^* - \mu_s^*}$$
+
+and
+
+$$\|R_J(\tilde{\Sigma} - \Sigma^*)\|_{\infty} \le 15 \left(1 + \frac{2 l_J^*}{\pi g_J^*}\right) \left(\frac{\|\tilde{\Sigma} - \Sigma^*\|_{\infty}}{g_J^*}\right)^2 . \quad (A.2)$$
+
+*Proof.* Apply Lemma 2 from [21]. □
+
+This lemma shows that $\tilde{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^*$ can be approximated by the linear operator $L_{\mathcal{J}}(\tilde{\Sigma} - \Sigma^*)$.
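
A small $2 \times 2$ example illustrates that the remainder after subtracting the linear term is quadratic in the perturbation, as (A.2) states (a sketch; the projector here is onto the top eigenvector, and the gap equals $2$ by construction):

```python
import numpy as np

def top_proj(S):
    _, V = np.linalg.eigh(S)
    v = V[:, -1]                 # eigenvector of the largest eigenvalue
    return np.outer(v, v)        # rank-one spectral projector

S = np.diag([3.0, 1.0])          # Sigma* with spectral gap g = 2
E = np.array([[0.0, 1.0], [1.0, 0.0]])
P1, P2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
for t in (1e-2, 1e-3):
    P_t = top_proj(S + t * E)
    # the linear term L_J(tE) for this rank-one case, gap = 2
    L = (P1 @ (t * E) @ P2 + P2 @ (t * E) @ P1) / 2.0
    resid = np.linalg.norm(P_t - P1 - L, 2)
    assert resid <= t ** 2       # remainder is O(||tE||^2)
```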
+
+The next lemma from [16] provides an upper bound for the $\Delta$-band of the squared norm of a Gaussian element.
+
+**Lemma A.3 (Gaussian anti-concentration).** Let $\xi$ be a Gaussian element in a Hilbert space $\mathbb{H}$ with zero mean and covariance operator $\Sigma_\xi$. Then for arbitrary $\Delta > 0$ one has
+
+$$\mathbb{P}(x < \| \xi \|^{2} < x + \Delta) \leq \frac{\Delta}{\|\Sigma_{\xi}\|_{2}^{1/2} (\|\Sigma_{\xi}\|_{2}^{2} - \|\Sigma_{\xi}\|_{\infty}^{2})^{1/4}}.$$
+
+*Proof.* See [16], Theorem 2.7. □
+
+One more lemma from [16] describes how close the distributions of the norms of two Gaussian elements are in terms of their covariance operators. Note that the bound is dimension-free.
+
+**Lemma A.4 (Gaussian comparison).** Let $\xi$ and $\eta$ be Gaussian elements in a Hilbert space $\mathbb{H}$ with zero mean and covariance operators $\Sigma_\xi$ and $\Sigma_\eta$, respectively. The following inequality holds:
+
+$$
+\begin{aligned}
+\sup_{x \in \mathbb{R}} |\mathbb{P}(\|\xi\|^2 \ge x) - \mathbb{P}(\|\eta\|^2 \ge x)| \lesssim{} & \|\Sigma_\xi - \Sigma_\eta\|_1 \\
+& \times \left( \frac{1}{\|\Sigma_\xi\|_2^{1/2} (\|\Sigma_\xi\|_2^2 - \|\Sigma_\xi\|_\infty^2)^{1/4}} + \frac{1}{\|\Sigma_\eta\|_2^{1/2} (\|\Sigma_\eta\|_2^2 - \|\Sigma_\eta\|_\infty^2)^{1/4}} \right).
+\end{aligned}
+$$
+
+*Proof.* See [16], Theorem 2.1. □
+
+# Appendix B: Auxiliary proofs
+
+## B.1. *Proof of Theorem 2.2*
+
+The proof consists of three steps.
\ No newline at end of file
diff --git a/samples/texts/4320847/page_33.md b/samples/texts/4320847/page_33.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8e0fa9d3d34e61a528a446716fc7ea223093297
--- /dev/null
+++ b/samples/texts/4320847/page_33.md
@@ -0,0 +1,26 @@
+**Step 1** Apply the representation (A.1) from Lemma A.2 to $\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^*$:
+
+$$ \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* = L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*) + R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*). $$
+
+Then, for $n\|\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^*\|_2^2$ one has
+
+$$ \begin{aligned} n\|\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^*\|_2^2 = n\|L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2 &+ n\|R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2 \\ &+ 2n\langle L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*), R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*) \rangle_2. \end{aligned} $$
+
+Let us estimate how good $n\|L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2$ approximates $n\|\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^*\|_2^2$:
+clearly, we have
+
+$$ \begin{aligned} & |n\|\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^*\|_2^2 - n\|L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2| \\ & \le n\|R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2 + 2n\|L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2 \|R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2. \end{aligned} $$
+
+Let us elaborate on the right-hand side. First, since
+
+$$ R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*) = \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* - \sum_{r \in \mathcal{J}} \sum_{s \notin \mathcal{J}} \frac{\mathbf{P}_r^*(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*) \mathbf{P}_s^* + \mathbf{P}_s^*(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*) \mathbf{P}_r^*}{\mu_r^* - \mu_s^*} $$
+
+and $\hat{\mathbf{P}}_{\mathcal{J}}$, $\mathbf{P}_{\mathcal{J}}^*$ and $\sum_{r \in \mathcal{J}} \sum_{s \notin \mathcal{J}} \mathbf{P}_r^*(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*) \mathbf{P}_s^*$ each have rank at most $m_{\mathcal{J}}^*$, the
+rank of $R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)$ is at most $4m_{\mathcal{J}}^*$. Hence, by the relation between the
+Frobenius and the operator norms via the rank, we have
+
+$$ \|R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2 \leq \sqrt{4m_{\mathcal{J}}^*} \|R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_{\infty}. $$
+
+The bound (A.2) from Lemma A.2 gives
+
+$$ \|R_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2 \leq \sqrt{4m_{\mathcal{J}}^*} \cdot 15 \left(1 + \frac{2 l_{\mathcal{J}}^*}{\pi g_{\mathcal{J}}^*}\right) \frac{\|\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*\|_{\infty}^2}{g_{\mathcal{J}}^{*2}}. $$
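The rank-based relation between the Frobenius and operator norms invoked here, $\|A\|_2 \le \sqrt{\operatorname{rank}(A)}\,\|A\|_\infty$, is easy to sanity-check numerically (an illustrative sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 8, 3                                    # ambient dimension, target rank
# a random matrix of rank at most k
A = rng.standard_normal((p, k)) @ rng.standard_normal((k, p))
fro = np.linalg.norm(A, 'fro')                 # ||A||_2 (Frobenius norm)
op = np.linalg.norm(A, 2)                      # ||A||_inf (spectral norm)
rank = np.linalg.matrix_rank(A)
assert fro <= np.sqrt(rank) * op + 1e-10       # the rank-norm inequality
```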
\ No newline at end of file
diff --git a/samples/texts/4320847/page_34.md b/samples/texts/4320847/page_34.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a4f617652744945d4b89e8a6de95c869a56ffce
--- /dev/null
+++ b/samples/texts/4320847/page_34.md
@@ -0,0 +1,37 @@
+Now let us bound $\|L_J(\hat{\Sigma} - \Sigma^*)\|_\infty$:
+
+$$
+\begin{align*}
+\| L_J(\hat{\Sigma} - \Sigma^*) \|_{\infty} &= \left\| \sum_{r \in \mathcal{J}} \sum_{s \notin \mathcal{J}} \frac{\mathbf{P}_r^*(\hat{\Sigma} - \Sigma^*) \mathbf{P}_s^* + \mathbf{P}_s^*(\hat{\Sigma} - \Sigma^*) \mathbf{P}_r^*}{\mu_r^* - \mu_s^*} \right\|_{\infty} \\
+&\le 2 \left\| \sum_{r \in \mathcal{J}} \sum_{s \notin \mathcal{J}} \frac{\mathbf{P}_r^*(\hat{\Sigma} - \Sigma^*) \mathbf{P}_s^*}{\mu_r^* - \mu_s^*} \right\|_{\infty} = 2 \left\| \sum_{r \in \mathcal{J}} \mathbf{P}_r^* \sum_{s \notin \mathcal{J}} \frac{(\hat{\Sigma} - \Sigma^*) \mathbf{P}_s^*}{\mu_r^* - \mu_s^*} \right\|_{\infty} \\
+&\le 2 \sum_{r \in \mathcal{J}} \|\mathbf{P}_r^*\|_{\infty} \left\| \sum_{s \notin \mathcal{J}} \frac{\mathbf{P}_s^*}{\mu_r^* - \mu_s^*} \right\|_{\infty} \|\hat{\Sigma} - \Sigma^*\|_{\infty} \\
+&\le \frac{2 |\mathcal{J}|\, \|\hat{\Sigma} - \Sigma^*\|_{\infty}}{\min_{r \in \mathcal{J}, s \notin \mathcal{J}} |\mu_r^* - \mu_s^*|} \le \frac{2 |\mathcal{J}|\, \|\hat{\Sigma} - \Sigma^*\|_{\infty}}{g_{\mathcal{J}}^*}.
+\end{align*}
+$$
+
+Then, for $\|L_J(\hat{\Sigma} - \Sigma^*)\|_2$ we have
+
+$$
+\|L_J(\hat{\Sigma} - \Sigma^*)\|_2 \leq \sqrt{2m_J^*}\, \|L_J(\hat{\Sigma} - \Sigma^*)\|_\infty \leq \sqrt{2m_J^*}\, 2|\mathcal{J}| \frac{\|\hat{\Sigma} - \Sigma^*\|_\infty}{g_J^*}.
+$$
+
+Putting this all together, we obtain
+
+$$
+\begin{aligned}
+& |n \| \hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^* \|_2^2 - n \| L_{\mathcal{J}} (\hat{\Sigma} - \boldsymbol{\Sigma}^*) \|_2^2| \\
+& \lesssim n m_{\mathcal{J}}^* \left(1 + \frac{l_{\mathcal{J}}^*}{g_{\mathcal{J}}^*}\right)^2 \frac{\|\hat{\Sigma} - \boldsymbol{\Sigma}^*\|_{\infty}^4}{g_{\mathcal{J}}^{*4}} + n m_{\mathcal{J}}^* |\mathcal{J}| \left(1 + \frac{l_{\mathcal{J}}^*}{g_{\mathcal{J}}^*}\right) \frac{\|\hat{\Sigma} - \boldsymbol{\Sigma}^*\|_{\infty}^3}{g_{\mathcal{J}}^{*3}}.
+\end{aligned}
+$$
+
+The concentration condition for the sample covariance (2.3) provides
+
+$$
+|n \|\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^*\|_2^2 - n \|L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2| \lesssim \bar{\Delta},
+$$
+
+$$
+\bar{\Delta} = nm_J^* \left(1 + \frac{l_J^*}{g_J^*}\right) \left( \left(1 + \frac{l_J^*}{g_J^*}\right) \frac{\hat{\delta}_n^4}{g_J^{*4}} \lor \frac{|\mathcal{J}|\, \hat{\delta}_n^3}{g_J^{*3}} \right) \quad (B.1)
+$$
+
+with probability $1 - 1/n$.
\ No newline at end of file
diff --git a/samples/texts/4320847/page_35.md b/samples/texts/4320847/page_35.md
new file mode 100644
index 0000000000000000000000000000000000000000..70e215c5f2bcf97ce22e8518fe7d0d5b052d6317
--- /dev/null
+++ b/samples/texts/4320847/page_35.md
@@ -0,0 +1,33 @@
+**Step 2** Following [27], we can choose $\{u_j^*\}_{j=1}^p$ as an orthonormal basis in $\mathbb{R}^p$ and represent $n\|L_J(\hat{\Sigma} - \Sigma^*)\|_2^2$ as
+
+$$
+\begin{align*}
+n \| L_J(\hat{\Sigma} - \Sigma^*) \|_2^2 &= n \left\| \sum_{r \in \mathcal{J}} \sum_{s \notin \mathcal{J}} \frac{P_r^*(\hat{\Sigma} - \Sigma^*) P_s^* + P_s^*(\hat{\Sigma} - \Sigma^*) P_r^*}{\mu_r^* - \mu_s^*} \right\|_2^2 \\
+&= n \sum_{l,k=1}^p \left( u_k^{*T} \sum_{r \in \mathcal{J}} \sum_{s \notin \mathcal{J}} \frac{P_r^*(\hat{\Sigma} - \Sigma^*) P_s^* + P_s^*(\hat{\Sigma} - \Sigma^*) P_r^*}{\mu_r^* - \mu_s^*} u_l^* \right)^2 \\
+&= n \sum_{l,k=1}^p \left( u_k^{*T} \sum_{r_1 \in \mathcal{J}} \sum_{s_1 \notin \mathcal{J}} \frac{P_{r_1}^*(\hat{\Sigma} - \Sigma^*) P_{s_1}^* + P_{s_1}^*(\hat{\Sigma} - \Sigma^*) P_{r_1}^*}{\mu_{r_1}^* - \mu_{s_1}^*} u_l^* \right) \\
+&\qquad \times \left( u_k^{*T} \sum_{r_2 \in \mathcal{J}} \sum_{s_2 \notin \mathcal{J}} \frac{P_{r_2}^*(\hat{\Sigma} - \Sigma^*) P_{s_2}^* + P_{s_2}^*(\hat{\Sigma} - \Sigma^*) P_{r_2}^*}{\mu_{r_2}^* - \mu_{s_2}^*} u_l^* \right) \\
+&= n \sum_{l,k=1}^p \sum_{r_1 \in \mathcal{J}, r_2 \in \mathcal{J}} \sum_{s_1 \notin \mathcal{J}, s_2 \notin \mathcal{J}} \left( u_k^{*T} \frac{P_{r_1}^*(\hat{\Sigma} - \Sigma^*) P_{s_1}^* + P_{s_1}^*(\hat{\Sigma} - \Sigma^*) P_{r_1}^*}{\mu_{r_1}^* - \mu_{s_1}^*} u_l^* \right) \\
+&\qquad \times \left( u_k^{*T} \frac{P_{r_2}^*(\hat{\Sigma} - \Sigma^*) P_{s_2}^* + P_{s_2}^*(\hat{\Sigma} - \Sigma^*) P_{r_2}^*}{\mu_{r_2}^* - \mu_{s_2}^*} u_l^* \right).
+\end{align*}
+$$
+
+As we can see, the only terms that survive in this sum are those with $r_1 = r_2 = r \in \mathcal{J}$, $s_1 = s_2 = s \notin \mathcal{J}$, $k \in \Delta_r^*$, $l \in \Delta_s^*$, and due to symmetry a factor of 2 appears. So, we derive
+
+$$
+\begin{align*}
+n \| L_J(\hat{\Sigma} - \Sigma^*) \|_2^2 &= 2n \sum_{k \in \Delta_r^*, r \in J} \sum_{l \in \Delta_s^*, s \notin J} \left( u_k^{*T} \frac{P_r^*(\hat{\Sigma} - \Sigma^*) P_s^*}{\mu_r^* - \mu_s^*} u_l^* \right)^2 \\
+&= 2n \sum_{k \in \Delta_r^*, r \in J} \sum_{l \in \Delta_s^*, s \notin J} \left( \frac{u_k^{*T} (\hat{\Sigma} - \Sigma^*) u_l^*}{\mu_r^* - \mu_s^*} \right)^2 .
+\end{align*}
+$$
+
+Now let us define for all $k \in \mathcal{I}_{\mathcal{J}}$ and $l \notin \mathcal{I}_{\mathcal{J}}$ (where $k \in \Delta_r^*$ and $l \in \Delta_s^*$)
+
+$$
+S_J(u_k^*, u_l^*) = \sqrt{2n} \frac{u_k^{*\top}(\hat{\Sigma} - \Sigma^*) u_l^*}{\mu_r^* - \mu_s^*}.
+$$
+
+This set of quantities can be considered as a matrix
+
+$$
+\{S_J(u_k^*, u_l^*)\}_{\substack{k \in I_J \\ l \notin I_J}} \in \mathbb{R}^{m_J^* \times (p-m_J^*)},
+$$
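The entries $S_{\mathcal{J}}(u_k^*, u_l^*)$ square-sum exactly to $n\|L_{\mathcal{J}}(\hat{\Sigma} - \Sigma^*)\|_2^2$, which can be verified numerically on a toy example (all names and dimensions below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 6, 100
mu = np.array([3.0, 3.0, 1.0, 1.0, 1.0, 1.0])     # two distinct eigenvalues
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))  # orthonormal eigenbasis u_k
E = rng.standard_normal((p, p))
E = (E + E.T) / 20                                # plays hatSigma - Sigma*
in_J, out_J = [0, 1], [2, 3, 4, 5]                # I_J and its complement

# L_J(E) via the double-sum representation in the eigenbasis
L = np.zeros((p, p))
for k in in_J:
    for l in out_J:
        c = (Q[:, k] @ E @ Q[:, l]) / (mu[k] - mu[l])
        L += c * (np.outer(Q[:, k], Q[:, l]) + np.outer(Q[:, l], Q[:, k]))

# entries S_J(u_k, u_l) = sqrt(2n) u_k' E u_l / (mu_r - mu_s)
S = np.array([np.sqrt(2 * n) * (Q[:, k] @ E @ Q[:, l]) / (mu[k] - mu[l])
              for k in in_J for l in out_J])

assert np.isclose(n * np.sum(L ** 2), np.sum(S ** 2))
```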
\ No newline at end of file
diff --git a/samples/texts/4320847/page_36.md b/samples/texts/4320847/page_36.md
new file mode 100644
index 0000000000000000000000000000000000000000..08a2f270528bc1291a4dae99f636ac4c5a203219
--- /dev/null
+++ b/samples/texts/4320847/page_36.md
@@ -0,0 +1,53 @@
+or we can arrange these quantities into a vector $S_{\mathcal{J}} \in \mathbb{R}^{m_J^*(p-m_J^*)}$ with components $S_{\mathcal{J}}(u_k^*, u_l^*)$
+ordered in some fixed way. Let us notice that
+
+$$
+n \|L_{\mathcal{J}}(\hat{\Sigma} - \Sigma^*)\|_2^2 = \|S_{\mathcal{J}}\|^2.
+$$
+
+**Step 3** Now our goal is to show that $S_{\mathcal{J}}$ is approximately $\mathcal{N}(0, \Gamma_{\mathcal{J}}^*)$ using a version of the Berry-Esseen theorem given by [2]. Represent $S_{\mathcal{J}}$ as
+
+$$
+S_J = \frac{1}{\sqrt{n}} \sum_{j=1}^{n} S^{(j)},
+$$
+
+where $S^{(j)}$ is a random vector with components
+
+$$
+S^{(j)}(u_k^*, u_l^*) = \frac{\sqrt{2}}{\mu_r^* - \mu_s^*} (u_k^{\top} X_j) \cdot (u_l^{\top} X_j)
+$$
+
+for all $k \in \mathcal{I}_\mathcal{J}$ and $l \notin \mathcal{I}_\mathcal{J}$.
+
+It is straightforward to verify that the covariance matrix of $S^{(j)}$ (and hence
+of $S_{\mathcal{J}}$) is $\Gamma_{\mathcal{J}}^*$ from (2.2) under the condition that $\mathbf{P}_{\mathcal{J}}^* X_j$ and $(\mathbf{I}_p - \mathbf{P}_{\mathcal{J}}^*) X_j$
+are independent. Consider an entry of the covariance matrix of $S^{(j)}$ indexed
+by $(k,l)$ and $(k', l')$, where $k \in \Delta_r^*$, $k' \in \Delta_{r'}^*$, $r,r' \in \mathcal{J}$ and $l \in \Delta_s^*$, $l' \in \Delta_{s'}^*$, $s,s' \notin \mathcal{J}$:
+
+$$
+\begin{align*}
+\operatorname{Cov} (S^{(j)})_{(k,l) \atop (k',l')}
+&= \mathbb{E}\left[ S^{(j)}(u_k^*, u_l^*) \cdot S^{(j)}(u_{k'}^*, u_{l'}^*) \right] \\
+&= \frac{2 \operatorname{E} [(u_k^{\top} X_j) \cdot (u_l^{\top} X_j) \cdot (u_{k'}^{\top} X_j) \cdot (u_{l'}^{\top} X_j)]}{(\mu_r^* - \mu_s^*)(\mu_{r'}^* - \mu_{s'}^*)}.
+\end{align*}
+$$
+
+Now, the independence of $\mathbf{P}_{\mathcal{J}}^* X_j$ and $(\mathbf{I}_p - \mathbf{P}_{\mathcal{J}}^*) X_j$ implies the independence
+of $(u_k^{*\top} \mathbf{P}_{\mathcal{J}}^* X_j,\, u_{k'}^{*\top} \mathbf{P}_{\mathcal{J}}^* X_j)^\top$ and $(u_l^{*\top} (\mathbf{I}_p - \mathbf{P}_{\mathcal{J}}^*) X_j,\, u_{l'}^{*\top} (\mathbf{I}_p - \mathbf{P}_{\mathcal{J}}^*) X_j)^\top$,
+which can be rewritten as the
+independence of $(u_k^{\top} X_j, u_{k'}^\top X_j)^\top$ and $(u_l^{\top} X_j, u_{l'}^\top X_j)^\top$. This means that
+the expectation in the expression for the covariance entry can be split as
+
+$$
+\mathrm{Cov} (S^{(j)})_{(k,l) \atop (k',l')} = \frac{2 \mathbb{E} \left[ (u_k^{\top} X_j)(u_{k'}^{\top} X_j) \right] \cdot \mathbb{E} \left[ (u_l^{\top} X_j)(u_{l'}^{\top} X_j) \right]}{(\mu_r^* - \mu_s^*)(\mu_{r'}^* - \mu_{s'}^*)}.
+$$
+
+The observation that $u_k^{\top} \boldsymbol{\Sigma}^* u_{k'} = \mu_r^* \cdot 1\{k=k'\}$ and $u_l^{\top} \boldsymbol{\Sigma}^* u_{l'} = \mu_s^* \cdot 1\{l=l'\}$ establishes the fact that $\mathrm{Cov}(S^{(j)}) = \Gamma_\mathcal{J}^*$.
+
+To apply Theorem 1.1 from [2], we need to bound $\mathbb{E}\|\Gamma_{\mathcal{J}}^{*-1/2} S^{(j)}\|^3$. First,
+let us notice that
+
+$$
+[\Gamma_{\mathcal{J}}^{*-1/2} S^{(j)}](u_k^*, u_l^*) = \frac{u_k^{*\top} X_j}{\sqrt{\mu_r^*}} \cdot \frac{u_l^{*\top} X_j}{\sqrt{\mu_s^*}}.
+$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_37.md b/samples/texts/4320847/page_37.md
new file mode 100644
index 0000000000000000000000000000000000000000..a356feedc1a2cd614a2bf62faed35501a5655165
--- /dev/null
+++ b/samples/texts/4320847/page_37.md
@@ -0,0 +1,52 @@
+Further, recalling the auxiliary matrices
+
+$$
+\begin{align*}
+\boldsymbol{U}_{\mathcal{J}}^* & \stackrel{\text{def}}{=} \left\{ \frac{u_k^{\top}}{\sqrt{\mu_r^*}} \right\}_{k \in \mathcal{I}_{\mathcal{J}}} & \in \mathbb{R}^{m_{\mathcal{J}}^* \times p}, \\
+\boldsymbol{V}_{\mathcal{J}}^* & \stackrel{\text{def}}{=} \left\{ \frac{u_l^{\top}}{\sqrt{\mu_s^*}} \right\}_{l \notin \mathcal{I}_{\mathcal{J}}} & \in \mathbb{R}^{(p-m_{\mathcal{J}}^*) \times p},
+\end{align*}
+$$
+
+we have
+
+$$
+\begin{align*}
+\|\Gamma_{\mathcal{J}}^{*-1/2} S^{(j)}\|^2 &= \sum_{k \in \mathcal{I}_{\mathcal{J}}} \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{(u_k^{\top} X_j)^2}{\mu_r^*} \cdot \frac{(u_l^{\top} X_j)^2}{\mu_s^*} \\
+&= \left\{ \sum_{k \in \mathcal{I}_{\mathcal{J}}} \frac{(u_k^{\top} X_j)^2}{\mu_r^*} \right\} \cdot \left\{ \sum_{l \notin \mathcal{I}_{\mathcal{J}}} \frac{(u_l^{\top} X_j)^2}{\mu_s^*} \right\} = \|U_{\mathcal{J}}^* X_j\|^2 \|V_{\mathcal{J}}^* X_j\|^2.
+\end{align*}
+$$
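The factorization of the double sum into a product of two single sums, used in the last display, can be sanity-checked numerically (a trivial illustrative example, with the squared projections replaced by arbitrary nonnegative numbers):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.random(3)   # stands for (u_k' X)^2 / mu_r over k in I_J
b = rng.random(4)   # stands for (u_l' X)^2 / mu_s over l not in I_J
double_sum = sum(ai * bj for ai in a for bj in b)
assert np.isclose(double_sum, a.sum() * b.sum())
```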
+
+Then,
+
+$$
+\mathbb{E}\|\Gamma_{\mathcal{J}}^{*-1/2} S^{(j)}\|^3 = \mathbb{E}\left(\|\mathbf{U}_{\mathcal{J}}^* X_j\|^3 \|\mathbf{V}_{\mathcal{J}}^* X_j\|^3\right) = \mathbb{E}\|\mathbf{U}_{\mathcal{J}}^* X_j\|^3 \cdot \mathbb{E}\|\mathbf{V}_{\mathcal{J}}^* X_j\|^3,
+$$
+
+where we again used the fact that the independence of $\mathbf{P}_{\mathcal{J}}^* X_j$ and $(I_p - \mathbf{P}_{\mathcal{J}}^*) X_j$
+implies the independence of $\mathbf{U}_{\mathcal{J}}^*\mathbf{P}_{\mathcal{J}}^* X_j = \mathbf{U}_{\mathcal{J}}^* X_j$ and $\mathbf{V}_{\mathcal{J}}^*(I_p - \mathbf{P}_{\mathcal{J}}^*) X_j =$
+$\mathbf{V}_{\mathcal{J}}^* X_j$.
+
+Therefore, Theorem 1.1 from [2] yields
+
+$$
+\sup_{x \in \mathbb{R}} |\mathbb{P}(\|S_J\|^2 \le x) - \mathbb{P}(\|\xi\|^2 \le x)| \lesssim \mathbb{E}\|\mathbf{U}_J^* X\|^3 \cdot \mathbb{E}\|\mathbf{V}_J^* X\|^3 \cdot \frac{p^{1/4}}{\sqrt{n}},
+$$
+
+or, recalling that $\|S_J\|^2 = n \|L_J(\hat{\Sigma} - \Sigma^*)\|_2^2$,
+
+$$
+\begin{align*}
+& \sup_{x \in \mathbb{R}} \left| \mathbb{P} \left( n \| L_J ( \hat{\Sigma} - \boldsymbol{\Sigma}^* ) \|_2^2 \le x \right) - \mathbb{P} ( \| \boldsymbol{\xi} \|_2^2 \le x ) \right| \\
+& \lesssim \mathbb{E} \| U_{\mathcal{J}}^* X \|^3 \cdot \mathbb{E} \| V_{\mathcal{J}}^* X \|^3 \cdot \frac{p^{1/4}}{\sqrt{n}} .
+\end{align*}
+$$
+
+**Step 4** Next, for $\bar{\Delta}$ defined by (B.1) from Step 1 we may write for any $x \in \mathbb{R}$
+
+$$
+\begin{align*}
+& \mathbb{P}\left(n\|\hat{\boldsymbol{P}}_{\mathcal{J}} - \boldsymbol{P}_{\mathcal{J}}^*\|_2^2 \ge x\right) \\
+& \le \mathbb{P}\left(n\|L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2 \ge x - \bar{\Delta}\right) \\
+& + \mathbb{P}\left(n\|\hat{\boldsymbol{P}}_{\mathcal{J}} - \boldsymbol{P}_{\mathcal{J}}^*\|_2^2 - n\|L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2 \ge \bar{\Delta}\right).
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/4320847/page_38.md b/samples/texts/4320847/page_38.md
new file mode 100644
index 0000000000000000000000000000000000000000..d5a3b71ae201c4dbabfafb70cce23605b958d70d
--- /dev/null
+++ b/samples/texts/4320847/page_38.md
@@ -0,0 +1,47 @@
+Hence,
+
+$$
+\begin{align*}
+& \sup_{x \in \mathbb{R}} \left\{ \mathbb{P} \left( n \| \hat{\boldsymbol{P}}_{\mathcal{J}} - \boldsymbol{P}_{\mathcal{J}}^* \|_2^2 \ge x \right) - \mathbb{P} (\| \boldsymbol{\xi} \|^2 \ge x) \right\} \\
+& \le \sup_{x \in \mathbb{R}} \left\{ \mathbb{P} \left( n\|L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2 \ge x - \bar{\Delta} \right) - \mathbb{P} (\| \boldsymbol{\xi} \|^2 \ge x - \bar{\Delta}) \right\} \\
+& \quad + \sup_{x \in \mathbb{R}} \left\{ \mathbb{P} \left( \| \boldsymbol{\xi} \|^2 \ge x - \bar{\Delta} \right) - \mathbb{P} (\| \boldsymbol{\xi} \|^2 \ge x) \right\} \\
+& \quad + \mathbb{P} \left( n \| \hat{\boldsymbol{P}}_{\mathcal{J}} - \boldsymbol{P}_{\mathcal{J}}^* \|_2^2 - n \|L_{\mathcal{J}}(\hat{\boldsymbol{\Sigma}} - \boldsymbol{\Sigma}^*)\|_2^2 \ge \bar{\Delta} \right).
+\end{align*}
+$$
+
+The first term on the right-hand side was bounded in Step 3 by
+
+$$
+\mathbb{E}\|\mathbf{U}_{\mathcal{J}}^{*}\mathbf{X}\|^3 \cdot \mathbb{E}\|\mathbf{V}_{\mathcal{J}}^{*}\mathbf{X}\|^3 \cdot \frac{p^{1/4}}{\sqrt{n}}.
+$$
+
+The second term is bounded by $\frac{\bar{\Delta}}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2} (\|\Gamma_{\mathcal{J}}^*\|_2^2 - \|\Gamma_{\mathcal{J}}^*\|_\infty^2)^{1/4}}$ according to the Anti-concentration Lemma A.3. The last term is less than $1/n$ in view of (B.1) from Step 1. Therefore,
+
+$$
+\begin{align*}
+& \sup_{x \in \mathbb{R}} \left\{ \mathbb{P} \left( n \| \hat{\boldsymbol{P}}_{\mathcal{J}} - \boldsymbol{P}_{\mathcal{J}}^* \|_2^2 \ge x \right) - \mathbb{P} (\| \boldsymbol{\xi} \|^2 \ge x) \right\} \\
+&\lesssim \mathbb{E} \| \boldsymbol{U}_{\mathcal{J}}^* X \|^3 \cdot \mathbb{E} \| \boldsymbol{V}_{\mathcal{J}}^* X \|^3 \cdot \frac{p^{1/4}}{\sqrt{n}} + \frac{\bar{\Delta}}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2} (\|\Gamma_{\mathcal{J}}^*\|_2^2 - \|\Gamma_{\mathcal{J}}^*\|_\infty^2)^{1/4}} + \frac{1}{n}.
+\end{align*}
+$$
+
+Similarly, one can verify that
+
+$$
+\begin{align*}
+& \sup_{x \in \mathbb{R}} \left\{ \mathbb{P} (\|\boldsymbol{\xi}\|^2 \ge x) - \mathbb{P} (n\|\hat{\boldsymbol{P}}_{\mathcal{J}} - \boldsymbol{P}_{\mathcal{J}}^*\|_2^2 \ge x) \right\} \\
+&\lesssim \mathbb{E}\|\boldsymbol{U}_{\mathcal{J}}^*\boldsymbol{X}\|^3 \cdot \mathbb{E}\|\boldsymbol{V}_{\mathcal{J}}^*\boldsymbol{X}\|^3 \cdot \frac{p^{1/4}}{\sqrt{n}} + \frac{\bar{\Delta}}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2} (\|\Gamma_{\mathcal{J}}^*\|_2^2 - \|\Gamma_{\mathcal{J}}^*\|_\infty^2)^{1/4}} + \frac{1}{n}.
+\end{align*}
+$$
+
+Putting together the previous two bounds, we derive the final result:
+
+$$
+\begin{align*}
+& \sup_{x \in \mathbb{R}} | \mathbb{P} (n \| \hat{\boldsymbol{P}}_{\mathcal{J}} - \boldsymbol{P}_{\mathcal{J}}^* \|_2^2 \ge x) - \mathbb{P} (\| \boldsymbol{\xi} \|^2 \ge x) | \\
+&\lesssim \mathbb{E} \| \boldsymbol{U}_{\mathcal{J}}^* X \|^3 \cdot \mathbb{E} \| \boldsymbol{V}_{\mathcal{J}}^* X \|^3 \cdot \frac{p^{1/4}}{\sqrt{n}} + \frac{\bar{\Delta}}{\| \Gamma_{\mathcal{J}}^* \|_2^{1/2} (\| \Gamma_{\mathcal{J}}^* \|_2^2 - \| \Gamma_{\mathcal{J}}^* \|_\infty^2)^{1/4}} + \frac{1}{n}.
+\end{align*}
+$$
+
+**References**
+
+[1] ADAMCZAK, R., LITVAK, A. E., PAJOR, A. and TOMCZAK-JAEGERMANN, N. (2010). Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles. *J. Amer. Math. Soc.*, **23**, 535–561. MR2601042
\ No newline at end of file
diff --git a/samples/texts/4320847/page_39.md b/samples/texts/4320847/page_39.md
new file mode 100644
index 0000000000000000000000000000000000000000..3e960b8b1c24edc81dd18262a02b5e41337b4634
--- /dev/null
+++ b/samples/texts/4320847/page_39.md
@@ -0,0 +1,35 @@
+[2] BENTKUS, V. (2005). A Lyapunov-type bound in $\mathbb{R}^d$. *Theory Probab. Appl.*, **49**, 2, 311–323. MR2144310
+
+[3] BERTHET, Q. and RIGOLLET, P. (2013). Optimal detection of sparse principal components in high dimension. *Ann. Statist.*, **41**, 4, 1780–1815. MR3127849
+
+[4] BICKEL, P. J. and KLEIJN, B. J. K. (2012). The semiparametric Bernstein-von Mises theorem. *Ann. Statist.*, **40**, 206–237. MR3013185
+
+[5] BIRNBAUM, A., JOHNSTONE, I. M., NADLER, B. and PAUL, D. (2013). Minimax bounds for sparse PCA with noisy high-dimensional data. *Ann. Statist.*, **41**, 3, 1055–1084. MR3113803
+
+[6] CAI, T. T., MA, Z. and WU, Y. (2013). Sparse PCA: optimal rates and adaptive estimation. *Ann. Statist.*, **41**, 6, 3074–3110. MR3161458
+
+[7] CASTILLO, I. and NICKL, R. (2013). Nonparametric Bernstein-von Mises theorems in Gaussian white noise. *Ann. Statist.*, **41**, 4, 1999–2028. MR3127856
+
+[8] CASTILLO, I. and ROUSSEAU, J. (2015). A Bernstein-von Mises theorem for smooth functionals in semiparametric models. *Ann. Statist.*, **43**, 6, 2353–2383. MR3405597
+
+[9] EL KAROUI, N. (2007). Tracy-Widom limit for the largest eigenvalue of a large class of complex sample covariance matrices. *Ann. Probab.*, **35**, 2, 663–714. MR2308592
+
+[10] FAN, J., RIGOLLET, P. and WANG, W. (2015). Estimation of functionals of sparse covariance matrices. *Ann. Statist.*, **43**, 6, 2706–2737. MR3405609
+
+[11] FAN, J., SUN, Q., ZHOU, W-X. and ZHU, Z. (2018). Principal component analysis for big data. *arXiv:1801.01602*.
+
+[12] HOLTZ, M. (2010). *Sparse grid quadrature in high dimensions with applications in finance and insurance*. Lecture notes in computational science and engineering, **77**, Springer, Berlin. MR2743492
+
+[13] GAO, C. and ZHOU, H. H. (2015). Rate-optimal posterior contraction for sparse PCA. *Ann. Statist.*, **43**, 2, 785–818. MR3325710
+
+[14] GHOSH, J. K. and BASU, A. (2017). General robust Bayes pseudo-posterior: exponential convergence results with applications. *arXiv:1708.09692*.
+
+[15] GHOSH, J. K. and RAMAMOORTHI, R. V. (2003). *Bayesian nonparametrics*. Springer Verlag, New York. MR1992245
+
+[16] GOETZE, F., NAUMOV, A., SPOKOINY, V. and ULYANOV, V. (2018). Large ball probabilities, Gaussian comparison and anti-concentration. *arXiv:1708.08663*.
+
+[17] GOODFELLOW, I., BENGIO, Y. and COURVILLE, A. (2016). *Deep learning*, MIT Press. MR3617773
+
+[18] JOHNSTONE, I. M. and LU, A. Y. (2009). On consistency and sparsity for principal components analysis in high dimensions. *J. Amer. Statist. Assoc.*, **104**, 682–693. MR2751448
+
+[19] JOHNSTONE, I. M. (2010). High dimensional statistical inference and ran-
\ No newline at end of file
diff --git a/samples/texts/4320847/page_4.md b/samples/texts/4320847/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..98f948102d9af114429f8c7623277d3afa1ade05
--- /dev/null
+++ b/samples/texts/4320847/page_4.md
@@ -0,0 +1,17 @@
+allows us to apply the recent "large ball probability" bounds on Gaussian comparison and Gaussian anti-concentration from [16]. Moreover, in contrast to the latter paper [27], we do not require a Gaussian distribution of the data. Our results claim that the posterior credible sets can be used as frequentist confidence regions even under possible model misspecification, when the true data-generating measure is not Gaussian. We still work with the Gaussian likelihood; in this sense our procedure is *pseudo-Bayesian*, and the constructed credible sets should be referred to as *pseudo-posterior* credible sets; see [14]. In our study we allow the dimension $p$ to grow with the sample size $n$; however, we need to assume that "$p^3/n$ is small" in order to make our results meaningful.
+
+The main contributions of this paper are as follows:
+
+* We offer a new procedure for building elliptic confidence sets for the true projector based on Bayesian simulation from the Inverse Wishart prior. The procedure is fully data-driven and numerically efficient: its complexity is proportional to the squared dimension and independent of the sample size. Numerical simulations confirm good performance of the proposed method on artificial data, both Gaussian and non-Gaussian (not even sub-Gaussian).
+
+* We establish novel results on the coverage properties of pseudo-posterior credible sets for the complicated non-linear problem of recovering eigenspaces of the covariance matrix. The results apply under mild conditions on the data distribution. In particular, we do not require Gaussianity of the observations.
+
+The rest of the paper is structured as follows. Some notations are introduced in Section 2.1. Section 2.2 discusses the model. Our pseudo-Bayesian framework and the main result of the paper about the pseudo-posterior credible sets are described in Section 2.3. The use of such sets as frequentist confidence sets is discussed in Section 2.4. Some numerical results on simulated data are demonstrated in Section 3. Section 4 contains the proofs of the main theorems. Some auxiliary results from the literature and the rest of the proofs are collected in Appendix A and Appendix B, respectively.
+
+## 2. Problem and main results
+
+This section explains our setup and states the main results.
+
+### 2.1. Notations
+
+We will use the following notations throughout the paper. The space of real-valued $p \times p$ matrices is denoted by $\mathbb{R}^{p \times p}$, while $\mathbb{S}_+^p$ means the set of positive-semidefinite matrices. We write $I_d$ for the identity matrix of size $d \times d$, rank($A$) and Tr($B$) stand for the rank of a matrix $A$ and the trace of a square matrix $B$. Further, $\|A\|_\infty$ stands for the spectral norm of a matrix $A$, while $\|A\|_1$ means the nuclear norm. The Frobenius scalar product of two matrices $A$ and $B$ of
\ No newline at end of file
diff --git a/samples/texts/4320847/page_40.md b/samples/texts/4320847/page_40.md
new file mode 100644
index 0000000000000000000000000000000000000000..620da31969b566608a33b6ff1500833d415cdfac
--- /dev/null
+++ b/samples/texts/4320847/page_40.md
@@ -0,0 +1,33 @@
+dom matrices. In *International Congress of Mathematicians*, I, 307–333, Eur. Math. Soc., Zurich. MR2334195
+
+[20] JOHNSTONE, I. M. (2010). High dimensional Bernstein-von Mises: simple examples. *Inst. Math. Stat. Collect.*, **6**, 87–98. MR2798513
+
+[21] KOLTCHINSKII, V. and LOUNICI, K. (2016). Asymptotics and concentration bounds for bilinear forms of spectral projectors of sample covariance. *Ann. Inst. H. Poincaré Probab. Statist.*, **52**, 4, 1976–2013. MR3573302
+
+[22] KOLTCHINSKII, V. and LOUNICI, K. (2017). Concentration inequalities and moment bounds for sample covariance operators. *Bernoulli*, **23**, 1, 110–133. MR3556768
+
+[23] KOLTCHINSKII, V. and LOUNICI, K. (2017). New asymptotic results in Principal Component Analysis. *Sankhya A.*, **79**, 2, 254–297. MR3707422
+
+[24] KOLTCHINSKII, V. and LOUNICI, K. (2017). Normal approximation and concentration of spectral projectors of sample covariance. *Ann. Statist.*, **45**, 1, 121–157. MR3611488
+
+[25] LE CAM, L. and YANG, G. L. (1990). *Asymptotics in statistics: some basic concepts*. Springer, New York. MR1066869
+
+[26] MARCHENKO, V. A. and PASTUR, L. A. (1967). Distribution of eigenvalues in certain sets of random matrices. *Mat. Sb. (N.S.)*, **72** (**114**), 4, 507–536. MR0208649
+
+[27] NAUMOV, A., SPOKOINY, V. and ULYANOV, V. (2017). Bootstrap confidence sets for spectral projectors of sample covariance. arXiv:1703.00871.
+
+[28] REISS, M. and WAHL, M. (2018). Non-asymptotic upper bounds for the reconstruction error of PCA. arXiv:1609.03779.
+
+[29] RIGOLLET, P. (2015). *Lecture notes on High-dimensional statistics*.
+
+[30] SPOKOINY, V. Gaussian approximation for a large ball probability. Manuscript.
+
+[31] TROPP, J. (2012). User-Friendly Tail Bounds for Sums of Random Matrices. *Found. Comput. Math.*, **12**, 4, 389–434. MR2946459
+
+[32] VAN DER VAART, A. W. (2000). *Asymptotic statistics*. Cambridge series in statistical and probabilistic mathematics **3**, Cambridge University Press, Cambridge. MR1652247
+
+[33] VERSHYNIN, R. (2016). Introduction to the non-asymptotic analysis of random matrices. In *Compressed sensing*, 210–268, Cambridge University Press, Cambridge. MR2963170
+
+[34] WANG, W. and FAN, J. (2017). Asymptotics of empirical eigenstructure for high dimensional spiked covariance. *Ann. Statist.*, **45**, 3, 1342–1374. MR3662457
+
+[35] ZHOU, H. H. and GAO, C. (2016). Bernstein-von Mises theorems for functionals of the covariance matrix. *Electron. J. Statist.*, **10**, 2, 1751–1806. MR3522660
\ No newline at end of file
diff --git a/samples/texts/4320847/page_5.md b/samples/texts/4320847/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..da4181ada54b26cbded250abc8c3e51d17a0ab4f
--- /dev/null
+++ b/samples/texts/4320847/page_5.md
@@ -0,0 +1,17 @@
+the same size is $\langle A, B \rangle_2 \stackrel{\text{def}}{=} \operatorname{Tr}(A^\top B)$, while the Frobenius norm is denoted by $\|A\|_2$. When applied to a vector, $\|\cdot\|$ means its Euclidean norm. The effective rank of a square matrix $B$ is defined by $r(B) \stackrel{\text{def}}{=} \frac{\operatorname{Tr}(B)}{\|B\|_\infty}$. The relation $a \lesssim b$ means that there exists an absolute constant $C$, possibly different from line to line, such that $a \le Cb$, while $a \asymp b$ means that $a \lesssim b$ and $b \lesssim a$. By $a \vee b$ and $a \wedge b$ we mean the maximum and minimum of $a$ and $b$, respectively. In the sequel we will often consider intersections of events, each of probability at least $1 - 1/n$. Without loss of generality, we will write that the probability of such an intersection is $1 - 1/n$, since this can easily be achieved by adjusting constants. Throughout the paper we assume that $p < n$.
+
+## 2.2. Setup and problem
+
+Let $X_1, \dots, X_n$ be i.i.d. zero-mean random vectors with $\operatorname{Var}(X) = \Sigma^*$. Without loss of generality, we can assume that $\Sigma^* \in S_+^p$ is invertible; otherwise one can easily transform the data so that the covariance matrix of the transformed data is invertible. Let $\sigma_1^* \ge \dots \ge \sigma_p^*$ be the ordered eigenvalues of $\Sigma^*$. Suppose that among them there are $q$ distinct values $\mu_1^* > \dots > \mu_q^*$. Introduce the groups of indices $\Delta_r^* = \{j : \mu_r^* = \sigma_j^*\}$ and denote by $m_r^*$ the multiplicity (dimension) $|\Delta_r^*|$ for all $r \in \{1, \dots, q\}$. The corresponding eigenvectors are denoted by $u_1^*, \dots, u_p^*$. We will use the projector on the $r$-th eigenspace of dimension $m_r^*$:
+
+$$ P_r^* = \sum_{j \in \Delta_r^*} u_j^* u_j^{*\top} $$
+
+and the eigendecomposition
+
+$$ \Sigma^* = \sum_{j=1}^{p} \sigma_j^* u_j^* u_j^{*\top} = \sum_{r=1}^{q} \mu_r^* \left( \sum_{j \in \Delta_r^*} u_j^* u_j^{*\top} \right) = \sum_{r=1}^{q} \mu_r^* P_r^*. $$
+
+We also introduce the spectral gaps $g_r^*$:
+
+$$ g_r^* = \begin{cases} \mu_1^* - \mu_2^*, & r=1, \\ (\mu_{r-1}^* - \mu_r^*) \wedge (\mu_r^* - \mu_{r+1}^*), & r \in \{2, \dots, q-1\}, \\ \mu_{q-1}^* - \mu_q^*, & r=q. \end{cases} $$
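A direct numerical transcription of this definition (a small illustrative helper, not from the paper; it requires $q \ge 2$):

```python
import numpy as np

def spectral_gaps(mu):
    """Spectral gaps g_r for distinct eigenvalues mu_1 > ... > mu_q."""
    mu = np.asarray(mu, dtype=float)
    q = len(mu)
    g = np.empty(q)
    g[0] = mu[0] - mu[1]                       # boundary case r = 1
    g[q - 1] = mu[q - 2] - mu[q - 1]           # boundary case r = q
    for r in range(1, q - 1):                  # interior: min of both gaps
        g[r] = min(mu[r - 1] - mu[r], mu[r] - mu[r + 1])
    return g

gaps = spectral_gaps([5.0, 3.0, 2.5, 1.0])     # gaps: 2.0, 0.5, 0.5, 1.5
```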
+
+Suppose that $\hat{\Sigma}$ has $p$ eigenvalues $\hat{\sigma}_1 > \dots > \hat{\sigma}_p$ (distinct with probability one). The corresponding eigenvectors are denoted as $\hat{u}_1, \dots, \hat{u}_p$. Suppose that $\|\hat{\Sigma} - \Sigma^*\|_\infty \le \frac{1}{4} \min_{r \in \{1, \dots, q\}} g_r^*$. Then, as shown in [21], we can identify clusters of the eigenvalues of $\hat{\Sigma}$ corresponding to each eigenvalue of $\Sigma^*$ and therefore
\ No newline at end of file
diff --git a/samples/texts/4320847/page_6.md b/samples/texts/4320847/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..1521ce88458f41a14163f9d216dbbec21f1907f1
--- /dev/null
+++ b/samples/texts/4320847/page_6.md
@@ -0,0 +1,31 @@
+determine $\Delta_r^*$ and $m_r^*$ for all $r \in \{1, \dots, q\}$. Then we can define the sample projector on the $r$-th eigenspace of dimension $m_r^*$:
+
+$$ \hat{\mathbf{P}}_r = \sum_{j \in \Delta_r^*} \hat{u}_j \hat{u}_j^\top. $$
+
+Under the condition that the spectral gap is sufficiently large, [27] approximated the distribution of $n\|\hat{\mathbf{P}}_r - \mathbf{P}_r^*\|_2^2$ by the distribution of a Gaussian quadratic form $\|\xi\|^2$ with $\xi \sim N(0, \Gamma_r^*)$, where $\Gamma_r^*$ is a block matrix of the form
+
+$$ \Gamma_r^* \stackrel{\text{def}}{=} \begin{bmatrix} \Gamma_{r1}^* & O & \dots & O \\ O & \Gamma_{r2}^* & \dots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \dots & \Gamma_{rq}^* \end{bmatrix} \quad (2.1) $$
+
+with $(q-1)$ diagonal blocks of sizes $m_r^* m_s^* \times m_r^* m_s^*$:
+
+$$ \Gamma_{rs}^* \stackrel{\text{def}}{=} \frac{2\mu_r^*\mu_s^*}{(\mu_r^* - \mu_s^*)^2} \cdot \mathbf{I}_{m_r^* m_s^*}, \quad s \neq r. $$
+
+Below we extend these results in two directions. First, our approach allows one to pick a block of eigenspaces corresponding to an interval $\mathcal{J}$ in $\{1, \dots, q\}$ from $r^{-}$ to $r^{+}$. Second, we relax the assumption of Gaussianity of the data.
+
+Let
+
+$$ \mathcal{J} = \{r^{-}, r^{-} + 1, \dots, r^{+}\}. $$
+
+Define also the subset of indices
+
+$$ \mathcal{I}_{\mathcal{J}} \stackrel{\text{def}}{=} \{k: k \in \Delta_r^*, r \in \mathcal{J}\}, $$
+
+and introduce the projector onto the direct sum of the eigenspaces associated with $\mathbf{P}_r^*$ for all $r \in \mathcal{J}$:
+
+$$ \mathbf{P}_{\mathcal{J}}^* \stackrel{\text{def}}{=} \sum_{r \in \mathcal{J}} \mathbf{P}_r^* = \sum_{k \in \mathcal{I}_{\mathcal{J}}} u_k^* u_k^{\top}. $$
+
+Its empirical counterpart is given by
+
+$$ \hat{\mathbf{P}}_{\mathcal{J}} \stackrel{\text{def}}{=} \sum_{r \in \mathcal{J}} \hat{\mathbf{P}}_r = \sum_{k \in \mathcal{I}_{\mathcal{J}}} \hat{u}_k \hat{u}_k^{\top}. $$
+
+For instance, when $\mathcal{J} = \{1, \dots, q_{\text{eff}}\}$ for some $q_{\text{eff}} < q$, then $\hat{\mathbf{P}}_{\mathcal{J}}$ is exactly what is recovered by PCA. Below we focus on $n\|\hat{\mathbf{P}}_{\mathcal{J}} - \mathbf{P}_{\mathcal{J}}^*\|_2^2$ rather than $n\|\hat{\mathbf{P}}_r - \mathbf{P}_r^*\|_2^2$.
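As a toy illustration of this recovery (dimensions, seed and sample size below are arbitrary), one can compute $\hat{\mathbf{P}}_{\mathcal{J}}$ for the top eigenspace from simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 5, 5000
Sigma = np.diag([4.0, 4.0, 1.0, 1.0, 1.0])     # top eigenspace of dimension 2
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Sigma_hat = X.T @ X / n                        # sample covariance

w, V = np.linalg.eigh(Sigma_hat)               # eigenvalues in ascending order
U_hat = V[:, ::-1][:, :2]                      # top-2 sample eigenvectors
P_hat = U_hat @ U_hat.T                        # \hat P_J for the top group
P_star = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])    # true projector P_J^*

loss = n * np.sum((P_hat - P_star) ** 2)       # n ||\hat P_J - P_J^*||_2^2
```

Note that $\hat{\mathbf{P}}_{\mathcal{J}}$ is well defined even though the individual top eigenvectors are not, because the true top eigenvalue has multiplicity 2.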
\ No newline at end of file
diff --git a/samples/texts/4320847/page_7.md b/samples/texts/4320847/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..2fbfe6e251f894347e1b3bf670c017a4dfcf6790
--- /dev/null
+++ b/samples/texts/4320847/page_7.md
@@ -0,0 +1,33 @@
+The projector dimension for $\mathcal{J}$ is given by $m_{\mathcal{J}}^* = \sum_{r \in \mathcal{J}} m_r^*$. Its spectral gap can be defined as
+
+$$ g_{\mathcal{J}}^* \stackrel{\text{def}}{=} \begin{cases} \mu_{r^+}^* - \mu_{r^++1}^*, & \text{if } r^- = 1; \\ \mu_{r^--1}^* - \mu_{r^-}^*, & \text{if } r^+ = q; \\ (\mu_{r^--1}^* - \mu_{r^-}^*) \wedge (\mu_{r^+}^* - \mu_{r^++1}^*), & \text{otherwise.} \end{cases} $$
+
+Define also for $\mathcal{J} = \{r^{-}, r^{-} + 1, \dots, r^{+}\}$
+
+$$ l_{\mathcal{J}}^{*} = \mu_{r^{-}}^{*} - \mu_{r^{+}}^{*}. $$
+
+To describe the distribution of the projector $\hat{\mathbf{P}}_{\mathcal{J}}$, introduce the following matrix $\Gamma_{\mathcal{J}}^{*}$ of size $m_{\mathcal{J}}^{*}(p-m_{\mathcal{J}}^{*}) \times m_{\mathcal{J}}^{*}(p-m_{\mathcal{J}}^{*})$:
+
+$$
+\begin{align}
+\Gamma_{\mathcal{J}}^{*} &\stackrel{\text{def}}{=} \operatorname{diag}(\Gamma_{\mathcal{J}}^{r})_{r \in \mathcal{J}}, \tag{2.2} \\
+\Gamma_{\mathcal{J}}^{r} &\stackrel{\text{def}}{=} \operatorname{diag}(\Gamma^{r,s})_{s \notin \mathcal{J}}, \\
+\Gamma^{r,s} &\stackrel{\text{def}}{=} \frac{2\mu_r^*\mu_s^*}{(\mu_r^* - \mu_s^*)^2} \cdot \mathbf{I}_{m_r^* m_s^*}, \quad r \in \mathcal{J}, s \notin \mathcal{J}.
+\end{align}
+$$
+
+It is easy to see that when $\mathcal{J} = \{r\}$, this definition coincides with (2.1).
+
+Our results apply under one rather mild and natural condition on the distribution of $\mathbf{X}^n = (X_1, \dots, X_n)$, namely, that the sample covariance matrix $\hat{\Sigma}$ concentrates around the true covariance $\Sigma^*$:
+
+$$ \|\hat{\Sigma} - \Sigma^*\|_{\infty} \leq \hat{\delta}_n \|\Sigma^*\|_{\infty} \quad (2.3) $$
+
+with probability $1 - 1/n$. In this result we do not use, say, independence of the $X_i$'s or a zero-mean property; everything is done conditionally on the data $\mathbf{X}^n$. The value $\hat{\delta}_n$ clearly depends on the underlying data distribution, but it allows us to work with much wider classes of probability measures than just Gaussian or sub-Gaussian ones. For the Gaussian case one may take
+
+$$ \hat{\delta}_n \asymp \sqrt{\frac{r(\Sigma^*) + \log(n)}{n}}. $$
+
+Several more examples of possible distributions and the corresponding $\hat{\delta}_n$ for them are provided in Appendix A, see Theorem A.1. So, throughout the rest of the paper we assume that the data satisfy condition (2.3).
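For intuition, the Gaussian rate for $\hat{\delta}_n$ can be checked by simulation, where $r(\Sigma^*) = \operatorname{Tr}(\Sigma^*)/\|\Sigma^*\|_{\infty}$ denotes the effective rank. The covariance, sample sizes, and replication count in the sketch below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Monte Carlo check of the Gaussian rate
#   delta_n ~ sqrt((r(Sigma*) + log n) / n).
p = 8
Sigma_star = np.diag(np.linspace(4.0, 0.5, p))
op_norm = np.linalg.norm(Sigma_star, 2)    # operator norm ||Sigma*||_inf
eff_rank = np.trace(Sigma_star) / op_norm  # effective rank r(Sigma*)

meds, rates = [], []
for n in (200, 800, 3200):
    errs = []
    for _ in range(50):
        X = rng.multivariate_normal(np.zeros(p), Sigma_star, size=n)
        Sigma_hat = X.T @ X / n
        errs.append(np.linalg.norm(Sigma_hat - Sigma_star, 2) / op_norm)
    meds.append(np.median(errs))
    rates.append(np.sqrt((eff_rank + np.log(n)) / n))

print(meds)   # median relative operator-norm error per sample size
print(rates)  # the theoretical rate, up to a constant factor
```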
+
+## 2.3. Pseudo-Bayesian framework and credible level sets
+
+Let $\Pi$ be a prior distribution on the set of considered covariance matrices $\Sigma$. Even though our data are not necessarily Gaussian, we can consider the Gaussian
\ No newline at end of file
diff --git a/samples/texts/4320847/page_8.md b/samples/texts/4320847/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac18ff6d3e6124f35bf690dc3e8d7aedcc6d1faf
--- /dev/null
+++ b/samples/texts/4320847/page_8.md
@@ -0,0 +1,51 @@
+log-likelihood:
+
+$$
+\ell_n(\boldsymbol{\Sigma}) = -\frac{n}{2} \log \det(\boldsymbol{\Sigma}) - \frac{n}{2} \mathrm{Tr}(\boldsymbol{\Sigma}^{-1} \hat{\boldsymbol{\Sigma}}) - \frac{np}{2} \log (2\pi).
+$$
+
+In case of Gaussian data, the posterior measure of a set $\mathcal{A} \subset S_+^p$ can be expressed as
+
+$$
+\Pi(\mathcal{A} \mid \mathbf{X}^n) = \frac{\int_{\mathcal{A}} \exp(\ell_n(\Sigma)) \, d\Pi(\Sigma)}{\int_{S_+^p} \exp(\ell_n(\Sigma)) \, d\Pi(\Sigma)}.
+$$
+
+However, we can study this random measure for non-Gaussian data as well. As the Gaussian log-likelihood $\ell_n(\Sigma)$ does not necessarily correspond to the true distribution of our data, we call the random measure $\Pi(\cdot | \mathbf{X}^n)$ a pseudo-posterior [14]. Once a prior is fixed, we can easily sample matrices $\Sigma$ from this pseudo-posterior distribution. Denote the eigenvalues of $\Sigma$ by $\sigma_1 > \dots > \sigma_p$ (assume they are distinct with probability one) and the eigenvectors by $u_1, \dots, u_p$. The corresponding projector onto the $r$-th eigenspace of dimension $m_r^*$ is
+
+$$
+\mathbf{P}_r = \sum_{k \in \Delta_r^*} u_k u_k^\top.
+$$
+
+and the projector onto the direct sum of the eigenspaces associated with $\mathbf{P}_r$ for $r \in \mathcal{J}$ is
+
+$$
+\mathbf{P}_{\mathcal{J}} = \sum_{r \in \mathcal{J}} \mathbf{P}_r = \sum_{k \in \mathcal{I}_{\mathcal{J}}} u_k u_k^{\top}.
+$$
+
+In this work we focus on the conjugate prior to the multivariate Gaussian distribution, that is, the Inverse Wishart distribution $\mathcal{IW}_p(G, p + b - 1)$ with $G \in S_+^p$, $0 < b \lesssim p$. Its density is given by
+
+$$
+\frac{d\Pi(\boldsymbol{\Sigma})}{d\boldsymbol{\Sigma}} \propto \exp \left( -\frac{2p+b}{2} \log \det(\boldsymbol{\Sigma}) - \frac{1}{2} \operatorname{Tr}(\boldsymbol{G}\boldsymbol{\Sigma}^{-1}) \right).
+$$
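Sampling from the resulting pseudo-posterior is straightforward: by conjugacy of the Inverse Wishart prior with the Gaussian log-likelihood, the pseudo-posterior of $\Sigma$ is again Inverse Wishart with scale $G + n\hat{\Sigma}$ and degrees of freedom increased by $n$. The sketch below (with illustrative $n$, $p$, $b$, $G$, and covariance) draws projectors $\mathbf{P}_{\mathcal{J}}$ from it:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)

# Illustrative data: top eigenvalue 4 with multiplicity 2, so m_J = 2.
n, p, b = 300, 4, 2
Sigma_star = np.diag([4.0, 4.0, 1.0, 1.0])
X = rng.multivariate_normal(np.zeros(p), Sigma_star, size=n)
Sigma_hat = X.T @ X / n

# Prior IW_p(G, nu) with nu = p + b - 1; conjugacy with the Gaussian
# log-likelihood gives the pseudo-posterior IW_p(G + n * Sigma_hat, nu + n).
G = np.eye(p)
nu = p + b - 1
posterior = invwishart(df=nu + n, scale=G + n * Sigma_hat)

# Draw Sigma from the pseudo-posterior and form the projector P_J
# onto its top-2 eigenspace.
draws = []
for _ in range(200):
    Sigma = posterior.rvs(random_state=rng)
    w, U = np.linalg.eigh(Sigma)
    top = np.argsort(w)[::-1][:2]
    P_J = U[:, top] @ U[:, top].T
    draws.append(P_J)

P_mean = np.mean(draws, axis=0)
print(np.round(P_mean, 2))
```

The average of the sampled projectors concentrates near the projector onto the span of the first two coordinates, mirroring the credible-set constructions used in the next section.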
+
+Some nice properties of the Inverse Wishart prior distribution allow us to obtain the following result, which we will use for the uncertainty quantification statements in the next section instead of the Bernstein-von Mises Theorem.
+
+**Theorem 2.1.** Assume that the distribution of the data $\mathbf{X}^n = (X_1, \dots, X_n)$ fulfills the sample covariance concentration property (2.3) with $\hat{\delta}_n$ satisfying
+
+$$
+\hat{\delta}_n \leq \frac{g_{\mathcal{J}}^*}{4 \| \Sigma^* \|_\infty} \wedge \frac{r(\Sigma^*)}{p}.
+$$
+
+Consider the prior $\Pi(\Sigma)$ given by the Inverse Wishart distribution $\mathcal{IW}_p(G, p + b - 1)$. Let $\xi \sim \mathcal{N}(0, \Gamma_{\mathcal{J}}^*)$ with $\Gamma_{\mathcal{J}}^*$ defined by (2.2).
\ No newline at end of file
diff --git a/samples/texts/4320847/page_9.md b/samples/texts/4320847/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf269ac7451b35fb148e6397979020cbb4d4f148
--- /dev/null
+++ b/samples/texts/4320847/page_9.md
@@ -0,0 +1,48 @@
+Then with probability $1 - \frac{1}{n}$
+
+$$
+\sup_{x \in \mathbb{R}} \left| \Pi \left( n \| \mathbf{P}_{\mathcal{J}} - \hat{\mathbf{P}}_{\mathcal{J}} \|_2^2 \le x \middle| \mathbf{X}^n \right) - \mathbb{P}(\|\xi\|^2 \le x) \right| \lesssim \diamond,
+$$
+
+where
+
+$$
+\diamond = \diamond(n, p, \boldsymbol{\Sigma}^*) \stackrel{\text{def}}{=} \frac{\diamond_1 + \diamond_2 + \diamond_3}{\|\Gamma_{\mathcal{J}}^*\|_2^{1/2} (\|\Gamma_{\mathcal{J}}^*\|_2^2 - \|\Gamma_{\mathcal{J}}^*\|_{\infty}^2)^{1/4}} + \frac{1}{n}. \quad (2.4)
+$$
+
+The terms $\diamond_1$ through $\diamond_3$ can be described as
+
+$$
+\begin{align*}
+\diamond_1 &\asymp \left\{ (\log(n)+p) \left( \left(1+\frac{l_\mathcal{J}^*}{g_\mathcal{J}^*}\right) \frac{\sqrt{m_\mathcal{J}^*} \|\boldsymbol{\Sigma}^*\|_\infty}{g_\mathcal{J}^*} + m_\mathcal{J}^* \right) \|\boldsymbol{\Sigma}^*\|_\infty + m_\mathcal{J}^* \|\mathbf{G}\|_\infty \right\} \\
+&\qquad \times \frac{m_\mathcal{J}^* \|\boldsymbol{\Sigma}^*\|_\infty}{(g_\mathcal{J}^*)^2} \sqrt{\frac{\log(n)+p}{n}}, \\
+\diamond_2 &\asymp \frac{\|\boldsymbol{\Sigma}^*\|_\infty (m_\mathcal{J}^* \|\boldsymbol{\Sigma}^*\|_\infty^2 \wedge \operatorname{Tr}(\boldsymbol{\Sigma}^{*2}))}{(g_\mathcal{J}^*)^3} p\left(\hat{\delta}_n + \frac{p}{n}\right), \\
+\diamond_3 &\asymp \frac{(m_\mathcal{J}^*)^{3/2} \|\boldsymbol{\Sigma}^*\|_\infty \operatorname{Tr}(\boldsymbol{\Sigma}^*)}{(g_\mathcal{J}^*)^2} \sqrt{\frac{\log(n)}{n}}.
+\end{align*}
+$$
+
+**Remark 2.1.** The bound (2.4) can be made more transparent if we fix $\Sigma^*$ and focus only on the dependence on $p$, $n$, $\hat{\delta}_n$, and the desired subspace dimension $m_{\mathcal{J}}^*$ (freezing the eigenvalues, the spectral gaps, and the multiplicities of the eigenvalues):
+
+$$
+\diamond \asymp \sqrt{\frac{(m_{\mathcal{J}}^{*})^{4}(p^{3} + \log^{3}(n))}{n}} \vee m_{\mathcal{J}}^{*} p \hat{\delta}_{n},
+$$
+
+or, in the sub-Gaussian case,
+
+$$
+\diamond \asymp \sqrt{\frac{(m_{\mathcal{J}}^*)^4 (p^3 + \log^3(n))}{n}}.
+$$
+
+Moreover, in the case of spiked covariance model we expect $\|\Gamma_{\mathcal{J}}^*\|_2$ to behave as $\sqrt{pm_{\mathcal{J}}^*}$ which improves the previous bounds to
+
+$$
+\diamond \asymp \sqrt{\frac{(m_{\mathcal{J}}^*)^3 (p^2 + \log^3(n)/p)}{n}} \vee \sqrt{m_{\mathcal{J}}^* p} \hat{\delta}_n,
+$$
+
+and
+
+$$
+\diamond \asymp \sqrt{\frac{(m_{\mathcal{J}}^{*})^{3} (p^{2} + \log^{3}(n)/p)}{n}},
+$$
+
+respectively.
\ No newline at end of file
diff --git a/samples/texts/470846/page_1.md b/samples/texts/470846/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b460797869707bd1d440cda6f2b1c142142de9e
--- /dev/null
+++ b/samples/texts/470846/page_1.md
@@ -0,0 +1,21 @@
+# Mixed-integer Bilevel Optimization for Capacity Planning with Rational Markets
+
+Pablo Garcia-Herrerosa, Lei Zhangb, Pratik Misrac, Sanjay Mehtac, and Ignacio E. Grossmanna
+
+aDepartment of Chemical Engineering, Carnegie Mellon University, Pittsburgh, USA
+
+bDepartment of Chemical Engineering, Tsinghua University, Beijing, China
+
+cAir Products and Chemicals, Inc. Allentown, USA
+
+April 30, 2015
+
+## Abstract
+
+We formulate capacity expansion planning as a bilevel optimization problem to model the hierarchical decision structure involving industrial producers and consumers. The formulation is a mixed-integer bilevel linear program in which the upper level maximizes the profit of a producer and the lower level minimizes the cost paid by markets. The upper-level problem includes mixed-integer variables that establish the expansion plan; the lower-level problem is an LP that decides demand assignments. We reformulate the bilevel optimization as a single-level problem using two different approaches: a KKT reformulation and a duality-based reformulation. We analyze the performance of these reformulations and compare their results with the expansion plans obtained from the traditional single-level formulation. For the solution of large-scale problems, we propose improvements on the duality-based reformulation that allow reducing the number of variables and constraints. The formulations and the solution methods are illustrated with examples from the air separation industry.
+
+**Keywords:** Capacity planning; Bilevel optimization; Rational markets.
+
+## 1 Introduction
+
+Capacity expansion is one of the most important strategic decisions for industrial gas companies. In this industry, most of the markets are served by local producers because of the competitive advantage given by the location of the production facilities. The dynamics of the industrial gas markets imply that companies must anticipate demand increases in order to plan their capacity expansion, maintain supply availability, and avoid regional incursion of new producers. The selection of the right investment and distribution plan plays a critical role for companies in this environment. A rigorous approach based on mathematical modeling and optimization offers the possibility to find the investment and distribution plan that yields the greatest economic benefit.
\ No newline at end of file
diff --git a/samples/texts/470846/page_10.md b/samples/texts/470846/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f3631bfac453330b81673b6948ea2f52bc88bbe
--- /dev/null
+++ b/samples/texts/470846/page_10.md
@@ -0,0 +1,58 @@
+Similarly, the Big-M reformulation of constraint set (29) is presented in Eqn. (37).
+
+$$
+\begin{align}
+& y_{t,i,j,k} \le M z_{t,i,k}^2 \notag \\
+& \gamma_{t,i,j,k} \le M (1 - z_{t,i,k}^2) \tag{37} \\
+& z_{t,i,k}^2 \in \{0, 1\} \notag
+\end{align}
+$$
+
+The result of replacing constraints (23), (24), (27), (28) and (29) by (34), (35), (36), and (37) is a single-level MILP formulation that is equivalent to the bilevel formulation presented in Section 4.
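As a sanity check on this construction, the Big-M pattern in (34)-(37) enforces complementarity between a primal slack and its multiplier. The brute-force sketch below (with an arbitrary $M$ and grid of values) verifies that every point satisfying the Big-M constraints has a zero product:

```python
import numpy as np

# Big-M encoding of a complementarity pair (y >= 0, gamma >= 0, y * gamma = 0):
#   y <= M z,   gamma <= M (1 - z),   z binary.
M = 100.0

def big_m_feasible(y, gamma, z):
    return 0 <= y <= M * z and 0 <= gamma <= M * (1 - z)

# Every feasible point satisfies complementarity: z = 0 forces y = 0,
# and z = 1 forces gamma = 0.
for z in (0, 1):
    for y in np.linspace(0.0, M, 11):
        for gamma in np.linspace(0.0, M, 11):
            if big_m_feasible(y, gamma, z):
                assert y * gamma == 0.0

print("complementarity holds at every Big-M feasible point")
```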
+
+## 5.2 Duality-based Reformulation
+
+The alternative reformulation for the bilevel capacity planning problem is obtained by introducing constraints that guarantee the satisfaction of strong duality [22, 14]. This is achieved by replacing the lower-level problem described by Eqns. (12) - (16) with its primal and dual constraints, and equating their objective functions. The dual formulation corresponding to the lower-level LP is presented by Eqns. (38) - (41).
+
+$$
+\max \quad \sum_{t \in T} \sum_{k \in K} \left[ \sum_{j \in J} D_{t,j,k} \lambda_{t,j,k} - \sum_{i \in I^1} c_{t,i,k} \mu_{t,i,k} - \sum_{i \in I^2} C_{i,k}^0 \mu_{t,i,k} \right] \qquad (38)
+$$
+
+$$
+\text{s.t.} \quad \lambda_{t,j,k} - \mu_{t,i,k} \le \frac{1}{(1+R)^t} P_{t,i,j,k} \quad \forall t \in T, i \in I, j \in J, k \in K \quad (39)
+$$
+
+$$
+\mu_{t,i,k} \in \mathbb{R}^{+} \qquad \forall t \in T, i \in I, k \in K \qquad (40)
+$$
+
+$$
+\lambda_{t,j,k} \in \mathbb{R}, \qquad \forall t \in T, j \in J, k \in K \tag{41}
+$$
+
+The resulting duality-based reformulation is presented in Eqns. (42) - (54).
+
+$$
+\begin{align}
+\max & \quad \sum_{t \in T} \sum_{i \in I^1} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} P_{t,i,j,k} y_{t,i,j,k} \nonumber \\
+& - \sum_{t \in T} \sum_{i \in I^1} \frac{1}{(1+R)^t} (A_{t,i} v_{t,i} + B_{t,i} w_{t,i}) \nonumber \\
+& - \sum_{t \in T} \sum_{i \in I^1} \sum_{k \in K} \frac{1}{(1+R)^t} \left( E_{t,i,k} x_{t,i,k} + F_{t,i,k} \sum_{j \in J} y_{t,i,j,k} \right) \nonumber \\
+& - \sum_{t \in T} \sum_{i \in I^1} \sum_{j \in I} \sum_{k \in K} \frac{1}{(1+R)^t} G_{t,i,j,k} y_{t,i,j,k} \tag{42}
+\end{align}
+$$
+
+$$
+\text{s.t.} \quad w_{t,i} = V_i^0 + \sum_{t' \in T'} v_{t',i} && \forall t \in T, i \in I^1 && (43)
+$$
+
+$$
+x_{t,i,k} \le w_{t,i} && \forall t \in T, i \in I^1, k \in K && (44)
+$$
+
+$$
+c_{t,i,k} = C_{i,k}^{0} + \sum_{t' \in T_t'} H_{i,k} x_{t',i,k} && \forall t \in T, i \in I^1, k \in K && (45)
+$$
\ No newline at end of file
diff --git a/samples/texts/470846/page_11.md b/samples/texts/470846/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..b0bcac3a1033a782bda6a8224743b51b0e1bb399
--- /dev/null
+++ b/samples/texts/470846/page_11.md
@@ -0,0 +1,40 @@
+$$
+\begin{align}
+& \sum_{t \in T} \sum_{i \in I} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} P_{t,i,j,k} y_{t,i,j,k} \nonumber \\
+&= \sum_{t \in T} \sum_{k \in K} \left[ \sum_{j \in J} D_{t,j,k} \lambda_{t,j,k} - \sum_{i \in I^1} c_{t,i,k} \mu_{t,i,k} - \sum_{i \in I^2} C_{i,k}^0 \mu_{t,i,k} \right] \tag{46} \\
+& \sum_{j \in J} y_{t,i,j,k} \le c_{t,i,k} && \quad \forall t \in T, i \in I^1, k \in K \tag{47} \\
+& \sum_{j \in J} y_{t,i,j,k} \le C_{i,k}^0 && \quad \forall t \in T, i \in I^2, k \in K \tag{48} \\
+& \sum_{i \in I} y_{t,i,j,k} = D_{t,j,k} && \quad \forall t \in T, j \in J, k \in K \tag{49} \\
+& \lambda_{t,j,k} - \mu_{t,i,k} \le \frac{1}{(1+R)^t} P_{t,i,j,k} && \quad \forall t \in T, i \in I, j \in J, k \in K \tag{50} \\
+& y_{t,i,j,k}, \mu_{t,i,k} \in \mathbb{R}^+ && \quad \forall t \in T, i \in I, j \in J, k \in K \tag{51} \\
+& \lambda_{t,j,k} \in \mathbb{R}, && \quad \forall t \in T, j \in J, k \in K \tag{52} \\
+& c_{t,i,k} \in \mathbb{R}^+ && \quad \forall t \in T, i \in I^1, k \in K \tag{53} \\
+& v_{t,i}, w_{t,i}, x_{t,i,k} \in \{0,1\} && \quad \forall t \in T, i \in I^1, k \in K \tag{54}
+\end{align}
+$$
+
+The upper-level problem represented by Eqns. (42) - (45) remains unchanged in the duality-based reformulation. Strong duality is enforced by equating the primal and dual objective functions as presented in Eqn. (46). Lower-level primal constraints (47) - (49) are kept in the formulation to guarantee primal feasibility. Dual feasibility of the lower level is ensured with constraints (50).
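The strong-duality condition in Eqn. (46) can be checked numerically on a toy follower LP. The sketch below (made-up prices, capacities, and demands; two facilities, three markets) solves the primal assignment problem and its dual with `scipy.optimize.linprog` and confirms that the optimal objective values coincide:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up follower data: 2 facilities, 3 markets.
price = np.array([[4.0, 6.0, 9.0],
                  [7.0, 5.0, 3.0]])  # p_{i,j}
cap = np.array([50.0, 40.0])         # capacities
dem = np.array([20.0, 30.0, 25.0])   # demands

I, J = price.shape

# Primal: min sum_ij p_ij y_ij  s.t.  sum_j y_ij <= cap_i,
#         sum_i y_ij = dem_j,  y >= 0  (y raveled row-major).
A_ub = np.kron(np.eye(I), np.ones(J))
A_eq = np.kron(np.ones(I), np.eye(J))
primal = linprog(price.ravel(), A_ub=A_ub, b_ub=cap,
                 A_eq=A_eq, b_eq=dem, method="highs")

# Dual: max sum_j dem_j lam_j - sum_i cap_i mu_i
#       s.t. lam_j - mu_i <= p_ij,  mu >= 0,  lam free.
# linprog minimizes, so the dual objective is negated.
c_dual = np.concatenate([-dem, cap])     # variables (lam, mu)
A_dual = np.zeros((I * J, J + I))
for i in range(I):
    for j in range(J):
        A_dual[i * J + j, j] = 1.0       # + lam_j
        A_dual[i * J + j, J + i] = -1.0  # - mu_i
bounds = [(None, None)] * J + [(0.0, None)] * I
dual = linprog(c_dual, A_ub=A_dual, b_ub=price.ravel(),
               bounds=bounds, method="highs")

print(primal.fun, -dual.fun)  # strong duality: the two values coincide
```

Equating the two objective values, as in Eqn. (46), is therefore exactly the condition that the demand assignment is optimal for the follower.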
+
+It must be noted that this reformulation yields a Mixed-Integer Nonlinear Program (MINLP). The nonlinearity arises from the dual objective function in the right hand side of Eqn. (46), because of the product of upper-level variable $c_{t,i,k}$ and lower-level dual variable $\mu_{t,i,k}$. Fortunately, the problem can be posed as a MILP because the variable $c_{t,i,k}$ only takes values in discrete increments as indicated by Eqn. (45). The linearization procedure is based on eliminating variable $c_{t,i,k}$ from the formulation by replacing it according to Eqn. (45). The resulting bilinear terms are products of continuous variables ($\mu_{t,i,k}$) and binary variables ($x_{t',i,k}$). Therefore, they can be modeled with a set of mixed-integer constraints by including a continuous variable ($u_{t',t,i,k}$) for each bilinear term.
+
+The resulting linearized MILP formulation is obtained after replacing Eqn. (46) with Eqn. (55),
+
+$$
+\begin{equation}
+\begin{split}
+& \sum_{t \in T} \sum_{i \in I} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} P_{t,i,j,k} y_{t,i,j,k} \\
+& = \sum_{t \in T} \sum_{k \in K} \left[ \sum_{j \in J} D_{t,j,k} \lambda_{t,j,k} - \sum_{i \in I} C_{i,k}^0 \mu_{t,i,k} - \sum_{i \in I^1} \sum_{t' \in T'_i} H_{i,k} u_{t,t',i,k} \right]
+\end{split}
+\tag{55}
+\end{equation}
+$$
+
+and introducing the mixed-integer constraints in Eqns. (56) - (57).
+
+$$
+\begin{align}
+& u_{t,t',i,k} \geq \mu_{t,i,k} - M(1 - x_{t',i,k}) && t \in T, t' \in T'_i, i \in I^1, k \in K \tag{56} \\[1ex]
+& u_{t,t',i,k} \in \mathbb{R}^{+} && t \in T, t' \in T'_i, i \in I^1, k \in K \tag{57}
+\end{align}
+$$
\ No newline at end of file
diff --git a/samples/texts/470846/page_12.md b/samples/texts/470846/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..97c82ec3cc16f9425339f41a87c0b56cf54b17fd
--- /dev/null
+++ b/samples/texts/470846/page_12.md
@@ -0,0 +1,15 @@
+It is important to note that only the two terms presented in Eqns. (56) and (57) are necessary to linearize the bilinear terms because they are sufficient to bound the values of $u_{t,t',i,k}$ in the improving direction of the objective function.
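The one-sided nature of this linearization can be seen on a single bilinear term: when the objective improves as $u_{t,t',i,k}$ decreases, the two lower bounds pin $u$ to the product $\mu x$ at the optimum. A minimal numeric sketch with arbitrary $M$ and test values:

```python
# One-sided linearization of u = mu * x with x binary and 0 <= mu <= M:
#   u >= mu - M * (1 - x),   u >= 0.
M = 10.0

def u_at_optimum(mu, x):
    # The smallest u satisfying both lower bounds -- the value an optimizer
    # picks when decreasing u improves the objective.
    return max(mu - M * (1 - x), 0.0)

for x in (0, 1):
    for mu in (0.0, 2.5, 7.0, 10.0):
        assert u_at_optimum(mu, x) == mu * x  # recovers the bilinear product

print("linearization is exact in the improving direction")
```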
+
+# 6 Illustrative Example
+
+Both MILP reformulations of the bilevel capacity planning problem are implemented to solve a small case study from the air separation industry. The illustrative example considers two existing facilities of the leader, one candidate location for a new facility of the leader, and a single facility of the competition. Facilities controlled by the leader and the competitor must satisfy the demand of 15 markets for a single commodity. The problem has a time horizon of 3 years divided into 12 time periods (quarters of a year). In this time horizon, the leader is allowed to execute investment decisions in time periods 1, 5, and 12. Capacity expansion is achieved by installing additional production lines with a capacity of 9,000 ton/period. The complete dataset for this illustrative example is presented in Appendix A. All examples use a discount rate ($R$) of 3% per time period.
+
+The computational statistics of the single-level capacity planning with captive markets and the reformulations of the capacity planning in a competitive environment are presented in Table 1. All MILP problems were implemented in GAMS 24.4.1 and solved using GUROBI 6.0.0 on an Intel Core i7 CPU 2.93 GHz processor with 4 GB of RAM.
+
+Table 1: Model statistics for the illustrative example.
+
+| Model statistic | Single-level with captive markets | KKT reformulation | Duality-based reformulation |
+|---|---|---|---|
+| Number of constraints: | 225 | 1,473 | 682 |
+| Number of continuous variables: | 420 | 996 | 636 |
+| Number of binary variables: | 48 | 480 | 48 |
+| LP relaxation at root node: | 110 | 110 | 101 |
+| Final incumbent value: | 110 | 97 | 97 |
+| Final optimality gap: | 0.00% | 0.00% | 0.00% |
+| Number of B&B nodes: | 1 | 262 | 1 |
+| Solution time: | 0.01 s | 0.63 s | 0.19 s |
+
+Table 1 shows the number of constraints and variables for the proposed formulations. It can be observed that the KKT reformulation is significantly larger than the duality-based reformulation; in particular, it requires 10 times more binary variables because of the complementarity constraints. The growth in the number of binary variables does not have much impact on the solution time of this small example, but it is likely to complicate the solution of larger instances.
+
+The solutions obtained from the optimization problems establish the investment plan for the leader. The plan obtained from the formulation with captive markets does not expand any facilities in the time horizon. The optimal investment plan obtained from the bilevel formulation (both reformulations) expands facility 1 in the first time period. The bilevel optimal demand assignments
\ No newline at end of file
diff --git a/samples/texts/470846/page_13.md b/samples/texts/470846/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..6b4bbdf9bdf5fae8fa12df02c6b68e9e1b5d18b1
--- /dev/null
+++ b/samples/texts/470846/page_13.md
@@ -0,0 +1,145 @@
+in the first time period of this illustrative example are presented in Fig. 2; it can be observed that some markets have dual sourcing because of the capacity limitations of production facilities. Table 2 compares the income, investment costs, and operating costs for the single-level and bilevel expansion plans. In order to quantify the potential regret of implementing an expansion plan that ignores the decision criterion of markets, the expansion plan obtained from the single-level formulation is also evaluated in an environment of rational markets.
+
+Figure 2: Optimal demand assignments obtained in the first time period of the illustrative example using the bilevel formulation.
+
+Table 2: Results of the single-level and bilevel expansion plans for the illustrative example.
+
+| Term in objective function | Single-level with captive markets | Single-level with rational markets | Bilevel with rational markets |
+|---|---|---|---|
+| Income from sales (MM$): | 354 | 345 | 398 |
+| Investment in new facilities (MM$): | 0 | 0 | 0 |
+| Expansion cost (MM$): | 0 | 0 | 29 |
+| Maintenance cost (MM$): | 31 | 31 | 31 |
+| Production cost (MM$): | 139 | 139 | 162 |
+| Transportation cost (MM$): | 74 | 118 | 79 |
+| NPV (MM$): | 110 | 57 | 97 |
+| Market cost (MM$): | 523 | 510 | 508 |
+
+Table 2 shows the benefits of the expansion plan obtained from the bilevel formulation when markets are considered rational. The single-level formulation with captive markets predicts a level of income that is not attainable with rational markets. The bilevel formulation offers the lowest cost for the markets with a small deterioration of the leader’s NPV in comparison to what could be obtained with captive markets. When markets are considered rational, the NPV obtained with the bilevel expansion plan is MM$40 higher than the one obtained by the single-level expansion plan; this measure of regret accounts for 41% of the potential NPV.
\ No newline at end of file
diff --git a/samples/texts/470846/page_14.md b/samples/texts/470846/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..34d461d48f287e2a2eeabef23c6c321985c7cf07
--- /dev/null
+++ b/samples/texts/470846/page_14.md
@@ -0,0 +1,13 @@
+## 7 Middle-size Instances
+
+From the illustrative example presented in Section 6, we observe that the KKT and the duality-based reformulations yield exactly the same results. Despite the difference in formulation sizes shown in Table 1, both reformulations solve the illustrative example in approximately the same time. In order to predict the performance of the reformulations on large-scale instances, we use a middle-size example of the capacity planning problem.
+
+The example comprises the production and distribution of one product to 15 markets. Initially, the leader has three production facilities with capacities equal to 27,000 ton/period, 13,500 ton/period, and 31,500 ton/period. The leader also considers the possibility of opening a new facility at a candidate location. We evaluate the investment decisions over a time horizon of 5 years divided into 20 time periods.
+
+We analyze two instances that share the same data but allow different timing for the investment decisions. In the first instance (Middle-size 1), the leader is allowed to open the new facility and expand capacities in every fourth time period. In the second instance (Middle-size 2), the leader is allowed to execute the investments only every eighth time period. In both cases, capacity must be expanded in discrete increments of 9,000 ton/period. The investment costs associated with opening the new facility and expanding production capacity grow in time according to inflation; the maintenance cost of open facilities also increases with time.
+
+Market demands in each time period vary during the time horizon. Fig. 3 shows the trajectory of the demands for the middle-size example. The selling prices offered by the leader to the markets are presented in Fig. 4; each market is offered a different price based on its proximity to the production facilities of the leader. Unit production costs at the facilities controlled by the leader are presented in Fig. 5; they show the characteristic seasonal variation caused by the electricity cost. Other cost coefficients of the example are not revealed for confidentiality reasons, but they have the same magnitudes as those presented in Appendix A.
+
+Figure 3: Evolution of market demands in the middle-size instances.
+
+The computational statistics for the two middle-size instances of the capacity planning with rational markets are presented in Table 3. The KKT and the duality-based reformulations were
\ No newline at end of file
diff --git a/samples/texts/470846/page_15.md b/samples/texts/470846/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..20f714301aaa963dad006147f33a95b9561ec646
--- /dev/null
+++ b/samples/texts/470846/page_15.md
@@ -0,0 +1,11 @@
+Figure 4: Evolution of selling prices in the middle-size instances.
+
+Figure 5: Evolution of production costs in the middle-size instances.
+
+implemented in GAMS 24.4.1 and solved using GUROBI 6.0.0.
+
+Table 3 demonstrates the benefits of the duality-based reformulation in comparison to the KKT reformulation. The time required to solve both instances using the duality-based reformulation is less than 1 second, whereas the KKT reformulation requires a few minutes for each instance. Interestingly, the KKT reformulation takes longer to solve the second middle-size instance, which has fewer investment options. The reason behind this counter-intuitive behavior is that the solver takes longer to find a feasible solution to the problem.
+
+The significant difference in solution time for the two reformulations is explained by the number of constraints and variables in the problem. In both instances, the KKT reformulation requires 2,240 additional binary variables to model the complementarity conditions. The growth in the number of binary variables has a severe impact on the solution time of the problem.
+
+Table 4 compares the income, investment costs, and operating costs of the proposed expansion plans. It shows that the expansion plan obtained for the first instance produces a slightly higher NPV when compared with the plan obtained for the second instance. This result can be anticipated because the feasible region of the first instance contains the feasible region of the second instance completely. However, the additional restrictions for the execution of investment decisions in the
\ No newline at end of file
diff --git a/samples/texts/470846/page_16.md b/samples/texts/470846/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..0eaa9a874601108ca323aabd3bf77df9f8afe3e3
--- /dev/null
+++ b/samples/texts/470846/page_16.md
@@ -0,0 +1,7 @@
+Table 3: Model statistics for middle-size instances.
+
+| Model statistic | Middle-size 1, KKT reformulation | Middle-size 1, duality-based reformulation | Middle-size 2, KKT reformulation | Middle-size 2, duality-based reformulation |
+|---|---|---|---|---|
+| Number of constraints: | 7,200 | 2,961 | 7,192 | 2,857 |
+| Number of continuous variables: | 4,860 | 2,965 | 4,860 | 2,763 |
+| Number of binary variables: | 2,345 | 105 | 2,335 | 95 |
+| LP relaxation at root node: | 372 | 346 | 346 | 324 |
+| Final incumbent value: | 316 | 316 | 308 | 308 |
+| Final optimality gap: | 0.01% | 0.00% | 0.01% | 0.00% |
+| Number of B&B nodes: | 1 | 11,367 | 16,786 | 1 |
+| Solution time: | 157 s | 0.83 s | 282 s | 0.73 s |
+
+Table 4: Results of the bilevel expansion plans for the middle-size instances.
+
+| Term in objective function | Middle-size 1 | Middle-size 2 |
+|---|---|---|
+| Income from sales (MM$): | 895 | 885 |
+| Investment in new facilities (MM$): | 0 | 0 |
+| Expansion cost (MM$): | 85 | 82 |
+| Maintenance cost (MM$): | 94 | 94 |
+| Production cost (MM$): | 315 | 313 |
+| Transportation cost (MM$): | 85 | 88 |
+| NPV (MM$): | 316 | 308 |
+| Market cost (MM$): | 1,319 | 1,319 |
\ No newline at end of file
diff --git a/samples/texts/470846/page_17.md b/samples/texts/470846/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..2b5fdaa62240445d4ed1346e711af007f56468fd
--- /dev/null
+++ b/samples/texts/470846/page_17.md
@@ -0,0 +1,9 @@
+second instance only produces a decrease of 1.1% in its NPV.
+
+The investment plans obtained from the bilevel formulation do not open the new facility in either instance. In the first instance, the plan expands facilities 2 and 3 in the first time period, and facility 3 in the fifth time period. The optimal capacities and production levels at the facilities controlled by the leader in the first instance are presented in Fig. 6; arrows indicate the time periods in which capacity is expanded. We can observe in Fig. 6 that all production facilities have high utilization. The expanded capacities in facilities 2 and 3 are used as soon as they are available; facility 1 experiences a temporary decrease in its production because of the capacity increase at facility 3, but it returns to full utilization with demand growth. The bilevel expansion plan obtained for the second instance is very similar to the plan obtained for the first instance; it expands facilities 2 and 3 in the first time period, and delays the second expansion of facility 3 until the ninth time period. In both instances, investment and maintenance costs are equal for all facilities controlled by the leader; therefore, the expansion trends observed are good indicators of the competitiveness of facilities with respect to production and transportation costs.
+
+Figure 6: Capacity and production of the leader in the first instance of the middle-size example.
+
+# 8 Solution Strategies for Large-scale problems
+
+The implementation of the bilevel formulation for capacity planning problems in industrial instances requires developing a solution strategy for large-scale problems. The results obtained from the
\ No newline at end of file
diff --git a/samples/texts/470846/page_18.md b/samples/texts/470846/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..8dab7a81c84cc30671dc18190f32c4ec82824cd5
--- /dev/null
+++ b/samples/texts/470846/page_18.md
@@ -0,0 +1,45 @@
+middle-size instance suggest that the KKT reformulation is not appropriate to solve large instances. Additionally, we can expect the duality-based reformulation to struggle solving large-scale instances given the relative weakness of its LP relaxation. Therefore, we propose an improved duality-based reformulation and a domain reduction scheme; these solution strategies are evaluated in Section 9 with an industrial example.
+
+## 8.1 Strengthened Duality-based Reformulation
+
+The LP relaxation of the duality-based reformulation can be strengthened by enforcing strong duality independently for each commodity in every time period. The justification for this modification comes from the observation that once the leader has fixed its capacity, the optimization problem of the follower can be decomposed by time period and commodity. Consequently, we can replace Eqn. (55) by its disaggregated version presented in Eqn. (58).
+
+$$
+\begin{align}
+& \sum_{i \in I} \sum_{j \in J} \frac{1}{(1+R)^t} P_{t,i,j,k} y_{t,i,j,k} \nonumber \\
+& \qquad = \sum_{j \in J} D_{t,j,k} \lambda_{t,j,k} - \sum_{i \in I} C_{i,k}^0 \mu_{t,i,k} - \sum_{i \in I^1} \sum_{t' \in T'_i} H_{i,k} u_{t,t',i,k} \quad \forall t \in T, k \in K \tag{58}
+\end{align}
+$$
+
+Replacing Eqn. (55) by Eqn. (58) yields a modest improvement in the LP relaxation of the duality-based reformulation. In the first instance of the middle-size example presented in Section 7, the value of the LP relaxation is reduced from MM$346 to MM$343 (9.49% to 8.54% initial gap).
+
+## 8.2 Domain Reduction for the Duality-based Reformulation
+
+A clever strategy to reduce the size of the capacity planning problem with rational markets derives
+from the analysis of the feasible region of the bilevel optimization problem. In the bilevel optimiza-
+tion literature, the bilevel feasible region is called the inducible region [2]. In essence, the inducible
+region is the set of upper-level feasible solutions and their corresponding rational reactions in the
+lower-level problem. In order to describe the inducible region mathematically, we define the set of
+upper-level feasible solutions as the capacity expansion plans that satisfy upper-level constraints.
+This set of upper-level feasible solutions is represented in Eqn. (59),
+
+$$
+(v, w, x, c) \in X \tag{59}
+$$
+
+where $X$ denotes the polyhedron described by upper-level constraints (9)-(11) and upper-level domains (17)-(18).
+
+The rational reaction set for the follower is defined by expression (60) as a function of the
+upper-level variables,
+
+$$
+\Psi(v, w, x, c) = \arg\min_{y \in Y} \left[ \sum_{t \in T} \sum_{i \in I} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} P_{t,i,j,k} y_{t,i,j,k} \right] \quad (60)
+$$
\ No newline at end of file
diff --git a/samples/texts/470846/page_19.md b/samples/texts/470846/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..30986862f15d2ee60a7cb8cbb9ff30f69c77854c
--- /dev/null
+++ b/samples/texts/470846/page_19.md
@@ -0,0 +1,19 @@
+where $Y$ denotes the polyhedron described by lower-level constraints (13)-(16).
+
+According to expressions (59) and (60), the inducible region of the bilevel capacity expansion problem is defined by expression (61).
+
+$$IR = \{(v, w, x, c, y) : (v, w, x, c) \in X, y \in \Psi(v, w, x, c)\} \quad (61)$$
+
+We know from our original assumptions that any expansion plan satisfying Eqn. (59) has a nonempty rational reaction set ($\Psi(v, w, x, c)$). However, not all demand assignments satisfying the lower-level constraints ($y \in Y$) are bilevel feasible because some of them might be suboptimal for all expansion plans of the leader. Hence, it is possible to reduce the dimension of the bilevel formulation by excluding from its domain those demand assignments ($y \in Y$) that are never optimal in the lower level.
+
+The first step to identify demand assignments that are bilevel infeasible is to solve the lower-level problem with the production capacities of the leader fixed to their upper bounds. Once we know the optimal demand assignments in the lower-level problem with maximum capacity, we can infer which demands are never assigned to the leader. The intuition for this inference is that only the demands ($D_{t,j,k}$) that are assigned to the leader when its capacity is at its upper bound can be assigned to the leader when its capacity is more constrained.
+
+The idea behind the domain reduction is that demand assignments that are nonbasic in the optimal solution of the LP with maximum capacity must remain nonbasic when capacity is reduced. Proposition 1 formalizes this idea. Its proof can be found in Appendix B.
+
+**Proposition 1.** A demand assignment $(y_{t,i,j,k})$ with positive reduced cost in the optimal solution of the lower-level problem with maximum capacity also has a positive reduced cost when capacities are reduced.
+
+For the implementation of the domain reduction strategy, it is important to remember that nonbasic variables are associated with positive reduced costs in the minimization problem. In order to identify nonbasic variables, we denote by $\mu^U_{t,i,k}$ and $\lambda^U_{t,j,k}$ the optimal dual solution of the lower-level problem with the capacities of the leader fixed at their upper bounds ($C^U_{t,i,k}, \forall i \in I^1$). Then, according to Proposition 1, Eqn. (62) establishes valid upper bounds for the demand assignments in the bilevel capacity expansion problem.
+
+$$y_{t,i,j,k} \le \begin{cases} 0 & \text{if } \frac{1}{(1+R)^t} P_{t,i,j,k} + \mu^U_{t,i,k} - \lambda^U_{t,j,k} > 0 \\ D_{t,j,k} & \text{otherwise} \end{cases} \quad \forall t \in T, i \in I^1, j \in J, k \in K \quad (62)$$
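+The fixing rule of Eqn. (62) can be sketched as follows on a small hypothetical instance (random data, not the industrial example): solve the lower-level LP at maximum capacity, recover the duals, and set the upper bound of every assignment with a strictly positive reduced cost to zero. SciPy reports duals with its own sign convention, so the reduced cost is computed generically as $c - A^{\top} y$ rather than through the explicit $\mu^U - \lambda^U$ form of Eqn. (62).

```python
# Hedged sketch of reduced-cost fixing (Proposition 1 / Eqn. 62);
# hypothetical random data for illustration only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_fac, n_mkt = 4, 6
price = rng.uniform(1.0, 10.0, size=(n_fac, n_mkt))
demand = rng.uniform(1.0, 5.0, size=n_mkt)
cap_max = np.full(n_fac, demand.sum())      # leader capacities at upper bounds C^U

c = price.ravel()                            # y raveled as (i, j)
A_eq = np.tile(np.eye(n_mkt), n_fac)         # demand satisfaction: sum_i y_ij = D_j
A_ub = np.kron(np.eye(n_fac), np.ones(n_mkt))  # capacity: sum_j y_ij <= C^U_i
res = linprog(c, A_ub=A_ub, b_ub=cap_max, A_eq=A_eq, b_eq=demand,
              bounds=(0, None), method="highs")
assert res.status == 0

lam = res.eqlin.marginals                    # plays the role of lambda^U
mu = res.ineqlin.marginals                   # plays the role of mu^U
reduced = c - A_eq.T @ lam - A_ub.T @ mu     # reduced costs of the assignments
assert reduced.min() > -1e-6                 # dual feasibility

# Eqn. (62): positive reduced cost -> fix to zero; otherwise keep bound D_j.
upper = np.where(reduced > 1e-9, 0.0, np.tile(demand, n_fac))
n_fixed = int((upper == 0.0).sum())

# Sanity check: the optimal assignment is not cut off by the tightened bounds.
assert np.all(res.x <= upper + 1e-9)
```

By complementary slackness, every assignment that is positive in the optimal solution has a zero reduced cost, so the tightened bounds never exclude it.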
+
+The range reduction proposed in expression (62) can have a significant impact on the size of the bilevel formulation because many assignment variables can be fixed if we determine that zero is their only bilevel feasible value. However, this is not the only advantage of the domain reduction strategy when we use the duality-based reformulation. If we analyze the lower-level LP in light
\ No newline at end of file
diff --git a/samples/texts/470846/page_2.md b/samples/texts/470846/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..17895a406f59615c6e489710c59658075bb25fc2
--- /dev/null
+++ b/samples/texts/470846/page_2.md
@@ -0,0 +1,11 @@
+A rather large body of literature has been published on capacity planning problems in several industries [20]. Since the late 1950s, capacity expansion planning has been studied to develop models and solution approaches for diverse applications in the process industries [30], communication networks [7], electric power services [24], and water resource systems [25]. Sahinidis et al. [31] proposed a comprehensive MILP model for long range planning of process networks. Van den Heever and Grossmann [35] used disjunctive programming to extend this methodology to multi-period design and planning of nonlinear chemical processes. An MILP formulation that integrates scheduling with capacity planning for product development was presented by Maravelias and Grossmann [21]. Sundaramoorthy et al. [34] proposed a two-stage stochastic programming formulation for the integration of capacity and operations planning. In summary, capacity planning is considered a central problem for enterprise-wide optimization, a topic for which comprehensive reviews are available [15, 16].
+
+Despite the importance of capacity expansion in industry, the study of the problem in a competitive environment has not received much attention. Soyster and Murphy [33] formulated a capacity planning problem for a perfectly competitive market. However, perfect competition is a strong assumption. A more realistic hypothesis is to assume an oligopolistic market as presented by Murphy and Smeers [23]. Game theory models have also been used [37] for supply chain planning in cooperative and competitive environments.
+
+The competition between two players whose decisions are made sequentially can be modeled as a Stackelberg game [36]. A Stackelberg competition is an extensive game with perfect information in which the leader chooses his actions before the follower has the opportunity to play. It is known that the most interesting equilibria of such games correspond to the solution of a bilevel optimization problem [26].
+
+Bilevel optimization problems are mathematical programs with optimization problems in the constraints [4]. They are suitable to model problems in which two independent decision makers try to optimize their own objective functions [6, 2]. We present a mixed-integer linear bilevel formulation for the capacity expansion planning of an industrial gas company operating in a competitive environment. The purpose of the upper-level problem is to determine the investment and distribution plan that maximizes the Net Present Value (NPV). The response of markets that can choose among different producers is modeled in the lower-level as a linear programming (LP) problem. The lower-level objective function is selected to represent the rational behavior of the markets.
+
+Solution approaches for bilevel optimization problems with lower-level LPs leverage the fact that optimal solutions occur at vertices of the region described by upper and lower level constraints. They rely on vertex enumeration, directional derivatives, penalty terms, or optimality conditions [29]. The most direct approach is to reformulate the bilevel optimization as a single-level problem using the optimality conditions of the lower-level LP. The classic reformulation using Karush-Kuhn-Tucker (KKT) conditions maintains linearity of the problem except for the introduction of complementarity constraints [12, 1, 3]. An equivalent reformulation replaces the lower level problem by its primal and dual constraints, and guarantees optimality by enforcing strong duality [22, 14].
+
+Strategic investment planning for electric power networks has been the most prolific application
\ No newline at end of file
diff --git a/samples/texts/470846/page_20.md b/samples/texts/470846/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..298a77259eb8c2d12d44d40a800e3f39ea1593ff
--- /dev/null
+++ b/samples/texts/470846/page_20.md
@@ -0,0 +1,13 @@
+of complementary slackness, we can conclude that expression (62) also implies that some dual constraints (50) are never active. In particular, those dual constraints (50) corresponding to the variables $y_{t,i,j,k}$ that can be fixed to zero are irrelevant in the duality-based formulation. Therefore, the domain reduction strategy proposed for the bilevel capacity expansion planning offers the double benefit of reducing the number of continuous variables and the number of constraints in the duality-based reformulation.
+
+## 9 Industrial Example
+
+The solution strategies proposed for large-scale instances are tested with a capacity planning problem for an air separation company. This large-scale example includes 3 existing facilities of the leader, 2 candidate facilities of the leader, and 5 facilities of competitors. Demands of 20 markets for 2 different commodities are considered in a time horizon of 20 years divided into 80 time periods. Two instances allowing different timing for the investment decisions are analyzed: the first instance allows investments every fourth time period and the second instance allows investments every eighth time period.
+
+According to the formulation, the leader maximizes the NPV obtained during the 20-year time horizon. Markets select their providers by controlling the demand assignments with the objective of minimizing the discounted cost they pay. A discount rate ($R$) of 3% per time period is used in both objective functions. Cost coefficients and all other parameters are omitted for confidentiality reasons.
+
+The computational statistics for the original duality-based reformulation and the large-scale duality-based reformulation are presented in Table 5; the large-scale reformulation enforces strong duality for each commodity in every time period and implements the domain reduction strategy to fix variables and eliminate constraints. Table 5 shows that both instances of the industrial example have a significant number of constraints, continuous variables, and discrete variables. However, if we compare the original and the large-scale duality-based reformulations, we observe a reduction between 13% and 17% in the number of continuous variables and constraints.
+
+The performance of both reformulations is also presented in Table 5; we observe a significant difference in the performance of the original and the large-scale duality-based reformulations. A major advantage of the large-scale reformulation is related to its LP relaxation at the root node. This improvement derives partially from disaggregating strong duality, and more importantly from excluding demand assignments that are bilevel infeasible. In the first industrial instance, the LP relaxation gap is reduced from 34.9% to 3.9%, whereas in the second industrial instance the reduction is from 34.4% to 3.7%.
+
+Even after implementing the proposed strategies for large-scale problems, the industrial instances are still difficult to solve using GUROBI 6.0.0. For our industrial example, only the second instance was solved to the desired optimality gap of 1% with the large-scale duality-based reformulation. However, if we compare the best solutions obtained for both industrial instances, we observe that allowing more frequent expansions in the first instance produces an NPV that is MM$45 higher,
\ No newline at end of file
diff --git a/samples/texts/470846/page_21.md b/samples/texts/470846/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..5d5bda2c216ee606f43bffb8cbf0a66756106811
--- /dev/null
+++ b/samples/texts/470846/page_21.md
@@ -0,0 +1,17 @@
+Table 5: Model statistics for industrial instances.
+
+| Model statistic | Industrial 1, original duality-based | Industrial 1, large-scale duality-based | Industrial 2, original duality-based | Industrial 2, large-scale duality-based |
+|---|---|---|---|---|
+| Number of constraints: | 46,601 | 40,025 | 42,501 | 35,925 |
+| Number of continuous variables: | 46,000 | 39,905 | 42,520 | 35,265 |
+| Number of binary variables: | 640 | 640 | 520 | 520 |
+| LP relaxation at root node: | 4,289 | 2,906 | 4,002 | 2,851 |
+| Final incumbent value: | 2,662 | 2,791 | 2,689 | 2,746 |
+| Final optimality gap: | 33.2% | 1.27% | 26.4% | 0.98% |
+| Solution time: | 60 min* | 60 min* | 60 min* | 5 min |
+
+* Time limit reached
+
+Table 6: Results of the bilevel expansion plans for the industrial instances.
+
+| Term in objective function | Industrial 1 | Industrial 2 |
+|---|---|---|
+| Income from sales (MM$): | 5,984 | 5,888 |
+| Investment in new facilities (MM$): | 0 | 0 |
+| Expansion cost (MM$): | 439 | 411 |
+| Maintenance cost (MM$): | 215 | 215 |
+| Production cost (MM$): | 2,100 | 2,072 |
+| Transportation cost (MM$): | 439 | 444 |
+| NPV (MM$): | 2,791 | 2,746 |
+| Market cost (MM$): | 10,545 | 10,546 |
+
+which accounts for 1.6% of the potential profit. Table 6 presents in detail the terms in the objective function for the best solutions obtained; the table shows that allowing more frequent expansions in the first instance generates a more dynamic expansion plan that can capture a higher market share. However, the optimal number of expansions is the same for both instances and none of them includes investments in new facilities.
+
+The optimal capacity and production levels at facilities controlled by the leader in the first industrial instance are presented in Figs. 7 and 8 for commodities 1 and 2, respectively; the figures show that utilization of the production capacities is high for all the facilities being expanded. The only capacity that is not expanded in the entire time horizon is the production capacity of commodity 1 at facility 1; the utilization of this production capacity fluctuates according to the available capacity at facilities 2 and 3. The observed expansion trends closely follow the competitiveness of the facilities, which is mainly determined by their production and distribution costs.
+
+# 10 Conclusions
+
+We have developed a novel formulation for capacity planning problems that considers markets as rational decision makers. The formulation is based on bilevel optimization, a framework that
\ No newline at end of file
diff --git a/samples/texts/470846/page_22.md b/samples/texts/470846/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..872f20359aa8d52a052a23746b833708fee51cf7
--- /dev/null
+++ b/samples/texts/470846/page_22.md
@@ -0,0 +1,7 @@
+Figure 7: Capacity and production of commodity 1 at the facilities controlled by the leader in the first instance of the industrial example.
+
+allows modeling the conflicting interests of producers and consumers. The expansion plans obtained from the bilevel formulation produce greater economic benefits when the producers operate in a competitive environment. In particular, the single-level formulation tends to overestimate the market share that can be obtained and might generate expensive investment plans that are less profitable.
+
+The bilevel formulation for capacity planning is a challenging optimization problem. We have proposed two different approaches to reformulate it as a single-level MILP. The first approach ensures optimality of the lower-level problem through its KKT conditions. The second approach uses strong duality of LPs for the reformulation. In the middle-size instances, we have shown that the duality-based reformulation offers superior performance compared to the KKT reformulation; this result is explained by the large number of binary variables required in the KKT approach to linearize the complementarity constraints. The duality-based reformulation does not require the addition of binary variables but the strong duality condition gives rise to nonlinearities; for the case in which all upper-level variables are discrete, the nonlinearities can be avoided with the introduction of continuous variables and linear constraints.
+
+Despite the relative advantage of the duality-based reformulation, the solution of large-scale instances of the bilevel capacity planning problem is still computationally demanding. We proposed two strategies to improve the duality-based reformulation. The first strategy leverages separability of the lower-level problem by disaggregating the strong duality constraint. The second strategy uses
\ No newline at end of file
diff --git a/samples/texts/470846/page_23.md b/samples/texts/470846/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..cd9837d7d94762e1fa56e3d13ec54fcf6b013ad1
--- /dev/null
+++ b/samples/texts/470846/page_23.md
@@ -0,0 +1,13 @@
+Figure 8: Capacity and production of commodity 2 at the facilities controlled by the leader in the first instance of the industrial example.
+
+the topology of the bilevel feasible region to reduce the number of variables and constraints in the duality-based reformulation. The implementation of these strategies yields a significant reduction in the solution time of large-scale problems.
+
+The bilevel formulation for capacity planning has proven useful for developing capacity expansion plans that consider markets as rational decision makers. This novel approach is more realistic than the traditional formulation because it models the dynamic nature of industrial markets. Furthermore, we have proposed an effective strategy to solve large-scale instances that allows using the bilevel capacity planning formulation in industrial applications.
+
+## References
+
+[1] J. F. Bard and J. E. Falk. An explicit solution to the multi-level programming problem. *Computers & Operations Research*, 9:77-100, 1982.
+
+[2] J. F. Bard and J. T. Moore. An algorithm for the discrete bilevel programming problem. *Naval Research Logistics (NRL)*, 39(3):419-435, 1992.
+
+[3] W. F. Bialas and M. H. Karwan. On two-level optimization. *Automatic Control, IEEE Transactions on*, 27:211-214, 1982.
\ No newline at end of file
diff --git a/samples/texts/470846/page_24.md b/samples/texts/470846/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..716ca798c96a504cf6b35c3d4e4686a9516210cf
--- /dev/null
+++ b/samples/texts/470846/page_24.md
@@ -0,0 +1,33 @@
+[4] J. Bracken and J. T. McGill. Mathematical programs with optimization problems in the constraints. *Operations Research*, 21:37-44, 1973.
+
+[5] Anthony P. Burgard and Costas D. Maranas. Optimization-based framework for inferring and testing hypothesized metabolic objective functions. *Biotechnology and Bioengineering*, 82:670-677, 2003.
+
+[6] W. Candler and R. Townsley. A linear two-level programming problem. *Computers & Operations Research*, 9(1):59-76, 1982.
+
+[7] S-G. Chang and B. Gavish. Telecommunications network topological design and capacity expansion: Formulations and algorithms. *Telecommunication Systems*, 1(1):99-131, 1993.
+
+[8] Y. Chu and F. You. Integrated Scheduling and Dynamic Optimization by Stackelberg Game: Bilevel Model Formulation and Efficient Solution Algorithm. *Industrial & Engineering Chemistry Research*, 53(13):5564-5581, 2014.
+
+[9] P. A. Clark and A. W. Westerberg. Bilevel programming for steady-state chemical process design: I. fundamentals and algorithms. *Computers & Chemical Engineering*, 14:87-97, 1990.
+
+[10] B. Colson, P. Marcotte, and G. Savard. An overview of bilevel optimization. *Annals of Operations Research*, 153(1):235-256, 2007.
+
+[11] K. Fischer. Sequential discrete p-facility models for competitive location planning. *Annals of Operations Research*, 111:253-270, 2002.
+
+[12] J. Fortuny-Amat and B. McCarl. A representation and economic interpretation of a two-level programming problem. *The Journal of the Operational Research Society*, 32:783-792, 1981.
+
+[13] D. Gale, H. W. Kuhn, and A. W. Tucker. *Activity Analysis of Production and Allocation*, chapter Linear programming and the theory of games, pages 317-329. New York: Wiley, 1951.
+
+[14] L. P. Garces, A. J. Conejo, R. Garcia-Bertrand, and R. Romero. A bilevel approach to transmission expansion planning within a market environment. *Power Systems, IEEE Transactions on*, 24:1513-1522, 2009.
+
+[15] I. E. Grossmann. Enterprise-wide optimization: A new frontier in process systems engineering. *AIChE Journal*, 51:1846-1857, 2005.
+
+[16] I. E. Grossmann. Advances in mathematical programming models for enterprise-wide optimization. *Computers & Chemical Engineering*, 47:2-18, 2012.
+
+[17] I. E. Grossmann and C. A. Floudas. Active constraint strategy for flexibility analysis in chemical processes. *Computers & Chemical Engineering*, 11:675-693, 1987.
+
+[18] D. Huppmann and J. Egerer. National-Strategic Investment in European Power Transmission Capacity. *DIW Berlin Discussion Paper No. 1379*, pages 1-23, 2014.
+
+[19] P. Loridan and J. Morgan. Weak via strong Stackelberg problem: New results. *Journal of Global Optimization*, 8:263-287, 1996.
+
+[20] H. Luss. Operations research and capacity expansion problems: A survey. *Operations Research*, 30(5):907-947, 1982.
\ No newline at end of file
diff --git a/samples/texts/470846/page_25.md b/samples/texts/470846/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..55e4f190415014662d848856484307546035d6d6
--- /dev/null
+++ b/samples/texts/470846/page_25.md
@@ -0,0 +1,33 @@
+[21] C. T. Maravelias and I. E. Grossmann. Simultaneous planning for new product development and batch manufacturing facilities. *Industrial & Engineering Chemistry Research*, 40(26):6147-6164, 2001.
+
+[22] A. L. Motto, J. M. Arroyo, and F. D. Galiana. A Mixed-Integer LP Procedure for the Analysis of Electric Grid Security Under Disruptive Threat. *Power Systems, IEEE Transactions on*, 20:1357-1365, 2005.
+
+[23] F. H. Murphy and Y. Smeers. Generation capacity expansion in imperfectly competitive restructured electricity markets. *Operations Research*, 53(4):646-661, 2005.
+
+[24] F. H. Murphy, S. Sen, and A. L. Soyster. Electric utility capacity expansion planning with uncertain load forecasts. *AIIE Transactions*, 14(1):52-59, 1982.
+
+[25] W. S. Nainis and Y. Y. Haimes. A multilevel approach to planning for capacity expansion in water resource systems. *Systems, Man and Cybernetics, IEEE Transactions on*, SMC-5(1):53-63, 1975.
+
+[26] M. J. Osborne and A. Rubinstein. *A course in game theory*, chapter 6: Extensive games with perfect information, pages 89-115. Cambridge (MA): The MIT Press, 1994.
+
+[27] C. Ruiz, A. J. Conejo, and Y. Smeers. Equilibria in an oligopolistic electricity pool with stepwise offer curves. *IEEE Transactions on Power Systems*, 27:752-761, 2012.
+
+[28] J-H. Ryu, V. Dua, and E. N. Pistikopoulos. A bilevel programming framework for enterprise-wide process networks under uncertainty. *Computers & Chemical Engineering*, 28:1121-1129, 2004.
+
+[29] G. K. D. Saharidis, A. J. Conejo, and G. Kozanidis. Exact solution methodologies for linear and (mixed) integer bilevel programming. In El-Ghazali Talbi, editor, *Metaheuristics for Bi-level Optimization*, volume 482 of *Studies in Computational Intelligence*, pages 221-245. Springer Berlin Heidelberg, 2013.
+
+[30] N. V. Sahinidis and I. E. Grossmann. Reformulation of the Multiperiod MILP Model for Capacity Expansion of Chemical Processes. *Operations Research*, 40:S127-S144, 1992.
+
+[31] N. V. Sahinidis, I. E. Grossmann, R. E. Fornari, and M. Chathrathi. Optimization model for long range planning in the chemical industry. *Computers & Chemical Engineering*, 13(9):1049-1063, 1989.
+
+[32] J. Salmeron, K. Wood, and R. Baldick. Analysis of electric grid security under terrorist threat. *Power Systems, IEEE Transactions on*, 19:905-912, 2004.
+
+[33] A. L. Soyster and F. H. Murphy. *Economic Behaviour of Electric Utilities*. Prentice-Hall, 1989.
+
+[34] A. Sundaramoorthy, J. M. B. Evans, and P. I. Barton. Capacity planning under clinical trials uncertainty in continuous pharmaceutical manufacturing, 1: Mathematical framework. *Industrial & Engineering Chemistry Research*, 51(42):13692-13702, 2012.
+
+[35] S. A. Van den Heever and I. E. Grossmann. Disjunctive multiperiod optimization methods for design and planning of chemical process systems. *Computers & Chemical Engineering*, 23(8):1075-1095, 1999.
+
+[36] H. von Stackelberg. *Market Structure and Equilibrium*. Springer, 2011.
+
+[37] M. A. Zamarripa, A. M. Aguirre, C. A. Mendez, and A. Espuna. Improving supply chain planning in a competitive environment. *Computers & Chemical Engineering*, 42:178-188, 2012.
\ No newline at end of file
diff --git a/samples/texts/470846/page_26.md b/samples/texts/470846/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..d55a37f5c0100b5af1c840f2e12f1d60839388e5
--- /dev/null
+++ b/samples/texts/470846/page_26.md
@@ -0,0 +1,19 @@
+# Appendix A: data for illustrative example
+
+Table A.1 shows the cardinality of the datasets used for the three examples presented in the paper.
+
+Table A.1: Summary of the datasets used in examples.
+
+| | Illustrative example | Middle-size instance | Industrial instance |
+|---|---|---|---|
+| Existing facilities of the leader: | 2 | 3 | 3 |
+| Candidate facilities of the leader: | 1 | 1 | 2 |
+| Facilities of the competitors: | 1 | 3 | 5 |
+| Markets: | 8 | 15 | 20 |
+| Commodities: | 1 | 1 | 2 |
+| Time periods: | 12 | 20 | 80 |
+
+The complete dataset for the illustrative example is presented in Tables A.2 - A.10. The initial production capacity of the facilities is presented in Table A.2.
+
+Table A.2: Initial capacities for facilities in the illustrative example.
+
+| Facility | Commodity 1 [ton/period] |
+|---|---|
+| Leader 1 | 22,500 |
+| Leader 2 | 36,000 |
+| Leader 3 | 0 |
+| Competitor 1 | 36,000 |
+
+Market demands for all time periods are presented in Table A.3.
+
+Table A.3: Market demands ($D_{t,j,k}$) in the illustrative example.
+
+| Time period | $D_1$ | $D_2$ | $D_3$ | $D_4$ | $D_5$ | $D_6$ | $D_7$ | $D_8$ |
+|---|---|---|---|---|---|---|---|---|
+| 1 | 15,300 | 8,100 | 4,500 | 4,500 | 5,400 | 11,700 | 3,600 | 27,000 |
+| 2 | 15,500 | 8,200 | 4,600 | 4,600 | 5,500 | 11,900 | 3,700 | 27,600 |
+| 3 | 15,700 | 8,300 | 4,600 | 4,700 | 5,500 | 12,200 | 3,800 | 27,900 |
+| 4 | 15,800 | 8,400 | 4,700 | 4,700 | 5,600 | 12,400 | 3,800 | 28,000 |
+| 5 | 15,900 | 8,400 | 4,800 | 4,800 | 5,600 | 12,600 | 3,900 | 28,200 |
+| 6 | 15,900 | 8,400 | 4,800 | 4,900 | 5,600 | 12,700 | 3,900 | 28,100 |
+| 7 | 16,000 | 8,500 | 4,900 | 5,000 | 5,700 | 13,000 | 4,000 | 28,600 |
+| 8 | 16,100 | 8,500 | 5,000 | 5,000 | 5,700 | 13,300 | 4,100 | 29,100 |
+| 9 | 16,200 | 8,600 | 5,100 | 5,100 | 5,800 | 13,500 | 4,200 | 29,800 |
+| 10 | 16,200 | 8,600 | 5,200 | 5,100 | 5,800 | 13,700 | 4,200 | 29,900 |
+| 11 | 16,100 | 8,500 | 5,300 | 5,200 | 5,800 | 13,600 | 4,200 | 29,700 |
+| 12 | 16,200 | 8,600 | 5,300 | 5,200 | 5,800 | 13,600 | 4,200 | 29,800 |
+
+All demands are in ton/period.
\ No newline at end of file
diff --git a/samples/texts/470846/page_27.md b/samples/texts/470846/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..afafe6e5f8bddd8b58309a7b416e6d03fd8aed4c
--- /dev/null
+++ b/samples/texts/470846/page_27.md
@@ -0,0 +1,19 @@
+Tables A.4 - A.10 present the cost coefficients for the objective function of the illustrative example. Table A.4 shows the cost ($A_{t,3}$) of opening the candidate production facility in different time periods. In the illustrative example, the new facility may only be opened in time periods 1, 5, and 9.
+
+Table A.4: Investment cost ($A_{t,3}$) for the leader to open facility 3 in the illustrative example.
+
+| Time period | Investment cost [MM$] |
+|---|---|
+| 1 | 20.00 |
+| 5 | 20.40 |
+| 9 | 20.86 |
+
+Table A.5 presents the maintenance cost per time period ($B_{t,i}$) incurred by open facilities.
+
+Table A.5: Maintenance cost ($B_{t,i}$) in the illustrative example.
+
+| Time period | Leader 1 [MM$/period] | Leader 2 [MM$/period] | Leader 3 [MM$/period] |
+|---|---|---|---|
+| 1 | 1.000 | 2.000 | 3.000 |
+| 2 | 1.005 | 2.010 | 3.015 |
+| 3 | 1.010 | 2.020 | 3.030 |
+| 4 | 1.013 | 2.026 | 3.039 |
+| 5 | 1.020 | 2.040 | 3.060 |
+| 6 | 1.029 | 2.058 | 3.087 |
+| 7 | 1.032 | 2.064 | 3.096 |
+| 8 | 1.035 | 2.070 | 3.105 |
+| 9 | 1.043 | 2.086 | 3.129 |
+| 10 | 1.049 | 2.098 | 3.147 |
+| 11 | 1.054 | 2.108 | 3.162 |
+| 12 | 1.058 | 2.116 | 3.174 |
+
+Table A.6 presents the investment cost ($E_{t,i,1}$) associated with the expansion of production capacity by 9,000 ton/period. In the illustrative example, all facilities are assumed to have the same expansion cost and expansions are allowed only in time periods 1, 5, and 9.
+
+Table A.6: Expansion cost ($E_{t,i,1}$) in the illustrative example.
+
+| Time period | Leader 1 [MM$/9,000 ton] | Leader 2 [MM$/9,000 ton] | Leader 3 [MM$/9,000 ton] |
+|---|---|---|---|
+| 1 | 30.00 | 30.00 | 30.00 |
+| 5 | 30.60 | 30.60 | 30.60 |
+| 9 | 31.29 | 31.29 | 31.29 |
+
+The production costs of the facilities ($F_{t,i,1}$) in the illustrative example are presented in Table A.7.
\ No newline at end of file
diff --git a/samples/texts/470846/page_28.md b/samples/texts/470846/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..83c34d77370601bfb2c5ebc02fe32aa9fd8cdcff
--- /dev/null
+++ b/samples/texts/470846/page_28.md
@@ -0,0 +1,15 @@
+Table A.7: Production cost ($F_{t,i,1}$) in the illustrative example.
+
+| Time period | Leader 1 [$/ton] | Leader 2 [$/ton] | Leader 3 [$/ton] |
+|---|---|---|---|
+| 1 | 250 | 220 | 180 |
+| 2 | 257 | 226 | 185 |
+| 3 | 246 | 217 | 177 |
+| 4 | 246 | 216 | 177 |
+| 5 | 254 | 223 | 183 |
+| 6 | 263 | 231 | 189 |
+| 7 | 253 | 222 | 182 |
+| 8 | 255 | 225 | 184 |
+| 9 | 262 | 230 | 188 |
+| 10 | 284 | 250 | 204 |
+| 11 | 271 | 239 | 195 |
+| 12 | 269 | 237 | 194 |
+
+The transportation costs from facilities to markets in each time period are calculated from the transportation costs at the initial time period and their growth rates, according to Eqn. A.1. Initial transportation costs ($G_{i,j,k}^0$) are presented in Table A.8; their growth rates ($G_t^{Rt}$) are presented in Table A.10.
+
+$$G_{t,i,j,k} = G_{i,j,k}^0 G_t^{Rt} \quad (A.1)$$
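+As a worked check of Eqn. (A.1), consider the cost from Leader 1 to market 1 at time period 3, using the values from Tables A.8 and A.10:

```python
# Worked check of Eqn. (A.1) with values taken from Tables A.8 and A.10.
G0 = 26            # initial cost G^0 for Leader 1 to market 1 [$/ton] (Table A.8)
growth_t3 = 1.03   # transportation growth rate at time period 3 (Table A.10)
G_t3 = G0 * growth_t3
assert abs(G_t3 - 26.78) < 1e-9   # 26 * 1.03 = 26.78 $/ton
```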
+
+Table A.8: Initial transportation costs ($G_{i,j,1}^0$) in the illustrative example.
+
+| Market | Leader 1 [$/ton] | Leader 2 [$/ton] | Leader 3 [$/ton] |
+|---|---|---|---|
+| 1 | 26 | 325 | 234 |
+| 2 | 13 | 299 | 260 |
+| 3 | 65 | 195 | 325 |
+| 4 | 104 | 130 | 156 |
+| 5 | 78 | 260 | 221 |
+| 6 | 208 | 195 | 46 |
+| 7 | 195 | 169 | 59 |
+| 8 | 234 | 169 | 0.4 |
+
+Selling prices offered by facilities to markets are calculated from the selling prices at the initial time period and their growth rate according to Eqn. A.2. Initial selling prices ($P_{i,j,k}^0$) are presented in Table A.9; their growth rates ($P_t^{Rt}$) are presented in Table A.10.
+
+$$P_{t,i,j,k} = P_{i,j,k}^0 P_t^{Rt} \quad (A.2)$$
\ No newline at end of file
diff --git a/samples/texts/470846/page_29.md b/samples/texts/470846/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..4679bd098b2d83ab312eba4820136549dce2ad51
--- /dev/null
+++ b/samples/texts/470846/page_29.md
@@ -0,0 +1,7 @@
+Table A.9: Initial selling prices ($P_{i,j,k}^0$) from facilities to markets in the illustrative example.
+
+| Market | Leaders 1, 2 & 3 [$/ton] | Competitor 1 [$/ton] |
+|---|---|---|
+| 1 | 586 | 615 |
+| 2 | 573 | 726 |
+| 3 | 625 | 785 |
+| 4 | 664 | 633 |
+| 5 | 638 | 794 |
+| 6 | 606 | 619 |
+| 7 | 619 | 606 |
+| 8 | 560 | 580 |
+
+Table A.10: Growth rates for transportation costs ($G_t^{Rt}$) and selling prices ($P_t^{Rt}$) in the illustrative example.
+
+| Time period | Growth rate for transportation costs | Growth rate for selling prices |
+|---|---|---|
+| 1 | 1.00 | 1 |
+| 2 | 1.00 | 1 |
+| 3 | 1.03 | 1.001 |
+| 4 | 1.05 | 1.002 |
+| 5 | 1.09 | 1.013 |
+| 6 | 1.09 | 1.013 |
+| 7 | 1.12 | 1.015 |
+| 8 | 1.12 | 1.015 |
+| 9 | 1.12 | 1.047 |
+| 10 | 1.14 | 1.048 |
+| 11 | 1.14 | 1.048 |
+| 12 | 1.16 | 1.049 |
\ No newline at end of file
diff --git a/samples/texts/470846/page_3.md b/samples/texts/470846/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..b423dc6a523c50da51f9481a4d2a30bb4d9f61fd
--- /dev/null
+++ b/samples/texts/470846/page_3.md
@@ -0,0 +1,7 @@
+of bilevel optimization models. Motto et al. [22] implemented the duality-based reformulation for the analysis of electric grid security under disruptive threat. This bilevel problem was originally formulated by Salmeron et al. [32] with the purpose of identifying the interdictions that maximize network disruptions. A bilevel formulation for the expansion of transmission networks was developed by Garces et al. [14] to maximize the average social welfare over a set of lower-level problems representing different market clearing scenarios; they also implemented the duality-based reformulation. Ruiz et al. [27] modeled electricity markets as an Equilibrium Problem with Equilibrium Constraints (EPEC) in which competing producers maximize their profit in the upper level and a market operator maximizes social welfare in the lower level; they use the duality-based reformulation to guarantee optimality of the lower level problem and obtain an equilibrium solution by jointly formulating the KKT conditions of all producers. A similar strategy that includes the combination of duality-based and KKT reformulations was used by Huppmann and Egerer [18] to solve a three-level optimization problem that models the roles of independent system operators, regional planners, and supra-national coordination in the European energy system.
+
+Another interesting application of bilevel optimization is the facility location problem in a duopolistic environment. The model presented by Fischer [11] selects facilities among a set of candidate locations and considers selling prices as optimization variables, which leads to a nonlinear bilevel formulation. The problem is simplified to a linear discrete bilevel formulation under the assumption that Nash equilibrium is reached for the prices. The solution to the discrete bilevel optimization problem is obtained using a heuristic algorithm.
+
+Bilevel optimization models have also found application in chemical engineering. Clark and Westerberg [9] presented a nonlinear bilevel programming approach for the design of chemical processes and proposed algorithms to solve them. In their formulation, the upper level optimizes the process design and the lower level models thermodynamic equilibrium by minimizing Gibbs free energy. Burgard and Maranas [5] used bilevel optimization to test the consistency of experimental data obtained from metabolic networks with hypothesized objective functions. In the upper level, the problem minimizes the square deviation of the fluxes predicted by the metabolic model with respect to experimental data, whereas the lower level quantifies the individual importance of the fluxes. A bilevel programming model for supply chain optimization under uncertainty was developed by Ryu et al. [28]; the conflicting interests of production and distribution operations in a supply chain are modeled using separate objective functions. They reformulate the bilevel problem in single-level after finding the solution of the lower-level problem as parametric functions of the upper-level variables and the uncertain parameters. Chu and You [8] presented an integrated scheduling and dynamic optimization problem for batch processes. The scheduling problem, formulated in the upper level, is subject to the processing times and costs determined by the nonlinear dynamic lower-level problem. The bilevel formulation is transformed to a single level problem by replacing the lower-level with piece-wise linear response functions. They assert that the bilevel formulation can be used as a distributed optimization approach whose solutions can easily adapt to variation in the problem's parameters.
+
+The novelty of our research resides in the application of bilevel optimization to capacity ex-
\ No newline at end of file
diff --git a/samples/texts/470846/page_30.md b/samples/texts/470846/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..8a8b5ce1d48a0c229d4d501df769b1473fc73d8e
--- /dev/null
+++ b/samples/texts/470846/page_30.md
@@ -0,0 +1,33 @@
+# Appendix B: proof of Proposition 1
+
+**Proposition 1.** A demand assignment $(y_{t,i,j,k})$ with positive reduced cost in the optimal solution of the lower-level problem with maximum capacity also has a positive reduced cost when capacities are reduced.
+
+*Proof.* We want to prove that the optimal reduced cost of the leader's assignment variables cannot decrease when capacities are reduced from their maximum feasible value $(C_{t,i,k}^U)$. For this analysis, we decompose the lower-level problems by time periods $(t \in T)$ and by commodities $(k \in K)$; the problem minimizing the cost paid by markets is decomposable since all terms in the objective function and constraints are indexed by $(t, k)$. Intuitively, this means that we can solve independent problems to minimize the cost paid at time period $t$ for commodity $k$. The lower-level problem resulting from this decomposition is presented in Eqns. (B.1)-(B.5).
+
+$$ \min \frac{1}{(1+R)^t} \sum_{i \in I} \sum_{j \in J} P_{i,j} y_{i,j} \qquad (\text{B.1}) $$
+
+$$ \text{s.t.} \quad \sum_{j \in J} y_{i,j} \le C_i \qquad \forall i \in I^1 \qquad (\text{B.2}) $$
+
+$$ \sum_{j \in J} y_{i,j} \le C_i^0 \qquad \forall i \in I^2 \qquad (\text{B.3}) $$
+
+$$ \sum_{i \in I} y_{i,j} = D_j \qquad \forall j \in J \qquad (\text{B.4}) $$
+
+$$ y_{i,j} \in \mathbb{R}^+ \qquad \forall i \in I, j \in J \qquad (\text{B.5}) $$
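The decomposed problem (B.1)-(B.5) is a small transportation-type LP, so its behavior can be checked numerically. A sketch with SciPy on a toy instance (two facilities, two markets; all data hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of the decomposed lower-level problem (B.1)-(B.5):
# facility 0 is the leader, facility 1 the competitor; data assumed.
R, t = 0.1, 1
P = np.array([[10.0, 12.0],   # prices offered by facility 0 to markets 0, 1
              [11.0, 11.0]])  # prices offered by facility 1
C = np.array([5.0, 10.0])     # capacities (B.2)-(B.3)
D = np.array([4.0, 6.0])      # demands (B.4)

c = (P / (1 + R) ** t).ravel()                # objective (B.1), y ordered (i, j)
A_cap = np.kron(np.eye(2), np.ones((1, 2)))   # rows: sum_j y_ij <= C_i
A_dem = np.kron(np.ones((1, 2)), np.eye(2))   # rows: sum_i y_ij  = D_j
res = linprog(c, A_ub=A_cap, b_ub=C, A_eq=A_dem, b_eq=D, method="highs")
print(res.fun)  # each market is served by its cheapest facility
```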
+
+Similarly, the dual lower-level problem disaggregated by time periods and commodities is presented in Eqns. (B.6) - (B.10).
+
+$$ \max \sum_{j \in J} D_j \lambda_j - \sum_{i \in I^1} C_i \mu_i - \sum_{i \in I^2} C_i^0 \mu_i \qquad (\text{B.6}) $$
+
+$$ \text{s.t.} \quad \lambda_j - \mu_i \le \frac{1}{(1+R)^t} P_{i,j} \qquad \forall i \in I^1, j \in J \qquad (\text{B.7}) $$
+
+$$ \lambda_j - \mu_i \le \frac{1}{(1+R)^t} P_{i,j} \qquad \forall i \in I^2, j \in J \qquad (\text{B.8}) $$
+
+$$ \mu_i \in \mathbb{R}^+ \qquad \forall i \in I \qquad (\text{B.9}) $$
+
+$$ \lambda_j \in \mathbb{R} \qquad \forall j \in J \qquad (\text{B.10}) $$
+
+We assume that the dual lower-level problem is bounded (and the primal lower-level problem is feasible). The condition that guarantees a finite solution for the dual of the lower-level problem is presented in Eqn. (B.11).
+
+$$ \sum_{j \in J} D_j \le \sum_{i \in I^1} C_i + \sum_{i \in I^2} C_i^0 \qquad (\text{B.11}) $$
+
+An important observation regarding the dual variables $\mu_i$ ($i \in I^1$) is that they all have the same optimal value. This is the case because constraints (B.7) are identical for all facilities of the leader (the leader's facilities offer the same price to each market) and the coefficients of all $\mu_i$ have the same sign in the objective function.
\ No newline at end of file
diff --git a/samples/texts/470846/page_31.md b/samples/texts/470846/page_31.md
new file mode 100644
index 0000000000000000000000000000000000000000..9f07d8910c354e845bda8edb889e42400480c1cd
--- /dev/null
+++ b/samples/texts/470846/page_31.md
@@ -0,0 +1,31 @@
+We also note that the condition presented in Eqn. (B.12) must be satisfied by the optimal solution of the dual lower-level problem in order to get the largest values of $\lambda_j$ allowed by dual constraints (B.7) - (B.8).
+
+$$ \lambda_j = \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i \right) \quad \forall j \in J \qquad (B.12) $$
+
+Using Eqn. (B.12), we can rewrite the dual lower-level problem as presented by Eqn. (B.13).
+
+$$ \max_{\mu_i \ge 0} \left\{ \sum_{j \in J} D_j \left[ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i \right) \right] - \sum_{i \in I^1} C_i \mu_i - \sum_{i \in I^2} C_i^0 \mu_i \right\} \qquad (B.13) $$
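For intuition, the objective in Eqn. (B.13) is a piecewise-linear concave function of the dual variables. The sketch below evaluates it for a single leader facility against one competitor whose dual variable is fixed at zero; all numbers are assumed:

```python
import numpy as np

# Objective of Eqn. (B.13) for one leader facility (dual mu) against one
# competitor whose dual is fixed at 0; D, C_L and the prices are assumed.
D, C_L, P_L, P_C = 10.0, 6.0, 5.0, 8.0

def obj(mu):
    lam = min(P_L + mu, P_C)   # Eqn. (B.12): cheapest effective offer
    return D * lam - C_L * mu  # Eqn. (B.13), specialized to this instance

grid = np.linspace(0.0, 5.0, 501)
best = max(grid, key=obj)
print(best, obj(best))  # the maximizer sits at the kink mu = P_C - P_L
```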
+
+In order to prove that the optimal reduced costs of the leader's assignment variables cannot decrease when capacities are reduced, we divide the proof into four steps.
+
+**Step 1:** optimal values of $\mu_i$ ($i \in I^{1}$) cannot be less than their optimal values obtained with maximum capacity.
+
+We assume that $C_i^U$ is the upper bound of the coefficient of dual variable $\mu_i$ in Eqn. (B.6), and we denote by $(\mu_i^U, \lambda_j^U)$ the corresponding optimal solution of the dual lower-level problem. Now, let us assume that the coefficients of $\mu_i$ are reduced by $\Delta C_i$, and let us denote by $(\mu_i^*, \lambda_j^*)$ the optimal dual solution corresponding to capacities $C_i^* = C_i^U - \Delta C_i$. If we consider that Eqn. (B.13) is a maximization problem, we can establish the sequence of inequalities (B.14)-(B.17).
+
+$$ \sum_{j \in J} D_j \left[ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^* \right) \right] - \sum_{i \in I^1} C_i^U \mu_i^* - \sum_{i \in I^2} C_i^0 \mu_i^* \qquad (B.14) $$
+
+$$ \leq \sum_{j \in J} D_j \left[ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U \right) \right] - \sum_{i \in I^1} C_i^U \mu_i^U - \sum_{i \in I^2} C_i^0 \mu_i^U \qquad (B.15) $$
+
+$$ \leq \sum_{j \in J} D_j \left[ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U \right) \right] - \sum_{i \in I^1} (C_i^U - \Delta C_i) \mu_i^U - \sum_{i \in I^2} C_i^0 \mu_i^U \qquad (B.16) $$
+
+$$ \leq \sum_{j \in J} D_j \left[ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^* \right) \right] - \sum_{i \in I^1} (C_i^U - \Delta C_i) \mu_i^* - \sum_{i \in I^2} C_i^0 \mu_i^* \qquad (B.17) $$
+
+We note that $\sum_{i \in I^1} \Delta C_i \mu_i^*$ is the difference between expressions (B.17) and (B.14). Similarly, the difference between expressions (B.16) and (B.15) is $\sum_{i \in I^1} \Delta C_i \mu_i^U$. Hence, we can infer that $\sum_{i \in I^1} \Delta C_i \mu_i^* \geq \sum_{i \in I^1} \Delta C_i \mu_i^U$. Since dual variables $\mu_i$ must have the same optimal value for all $i \in I^1$, then $\mu_i^* \geq \mu_i^U$ for all $i \in I^1$.
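Step 1 can be observed numerically. The sketch below solves a one-market instance with SciPy's HiGHS backend at full and at reduced leader capacity, and recovers $\mu_i$ from the LP shadow prices; note that for a minimization with $A_{ub}\,y \le b_{ub}$, `res.ineqlin.marginals` reports $\partial(\text{objective})/\partial b$, so the paper's nonnegative $\mu_i$ is its negation. All data are assumed:

```python
import numpy as np
from scipy.optimize import linprog

# One market (demand 10), one leader facility (price 5, capacity varied)
# and one competitor (price 8, ample capacity); R = 0. All data assumed.
c = np.array([5.0, 8.0])                  # costs for y = (y_leader, y_comp)
A_ub, A_eq = np.eye(2), np.ones((1, 2))
d = np.array([10.0])

def leader_mu(cap):
    res = linprog(c, A_ub=A_ub, b_ub=np.array([cap, 100.0]),
                  A_eq=A_eq, b_eq=d, method="highs")
    # marginals give d(objective)/d(rhs); the dual mu of (B.2) is minus that
    return -res.ineqlin.marginals[0]

mu_U = leader_mu(12.0)    # ample capacity: constraint slack, mu = 0
mu_star = leader_mu(6.0)  # reduced capacity: mu rises to 8 - 5 = 3
print(mu_U, mu_star)
```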
+
+**Step 2:** optimal values of $\mu_i$ ($i \in I^{2}$) cannot be less than their optimal values obtained with maximum capacity.
+
+In order to continue with the argument, let us define $\epsilon_i$ according to Eqn. (B.18).
+
+$$ \epsilon_i = \mu_i^* - \mu_i^U \qquad (B.18) $$
\ No newline at end of file
diff --git a/samples/texts/470846/page_32.md b/samples/texts/470846/page_32.md
new file mode 100644
index 0000000000000000000000000000000000000000..d2d0c37444106b9a29abada763c9ba9be4679cfa
--- /dev/null
+++ b/samples/texts/470846/page_32.md
@@ -0,0 +1,29 @@
+By optimality of Eqn. (B.15), we know that any deviation of $\mu_i^U$ from their optimal values cannot increase the objective value, as presented in Eqns. (B.19) - (B.20).
+
+$$ \sum_{j \in J} D_j \left[ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U + \min_{i'} [\epsilon_{i'}] \right) \right] - \sum_{i \in I^1} C_i^U (\mu_i^U + \min_{i'} [\epsilon_{i'}]) - \sum_{i \in I^2} C_i^0 (\mu_i^U + \min_{i'} [\epsilon_{i'}]) \quad (B.19) $$
+
+$$ \leq \sum_{j \in J} D_j \left[ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U \right) \right] - \sum_{i \in I^1} C_i^U \mu_i^U - \sum_{i \in I^2} C_i^0 \mu_i^U \quad (B.20) $$
+
+Subtracting (B.20) from (B.19), we obtain inequality (B.21),
+
+$$ -\sum_{j \in J} D_j \min_{i'} [\epsilon_{i'}] + \sum_{i \in I^1} C_i^U \min_{i'} [\epsilon_{i'}] + \sum_{i \in I^2} C_i^0 \min_{i'} [\epsilon_{i'}] \geq 0 \quad (B.21) $$
+
+which implies $\min_{i \in I} [\epsilon_i] \geq 0$ according to inequality (B.11).
+
+**Step 3:** if capacities of the leader are reduced, optimal values of $\mu_i$ ($i \in I^2$) cannot increase faster than the values of $\mu_i$ ($i \in I^1$).
+
+We want to show that $\max_{i \in I}[\epsilon_i] = \max_{i \in I^1}[\epsilon_i]$. Since all dual variables $\mu_i$ have the same optimal value for all $i \in I^1$, we denote by $\mu_1^U$ their optimal value in the problem with maximum capacity and by $\epsilon_1$ their optimal deviation when capacities of the leader are reduced by $\Delta C_i$.
+
+By optimality of Eqn. (B.15), we can deduce inequality (B.22).
+
+$$ \begin{aligned} & \sum_{j \in J} D_j \left\{ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U + \epsilon_i - \epsilon_1 \right) \right\} - \sum_{i \in I^1} C_i^U (\mu_i^U + \epsilon_i - \epsilon_1) - \sum_{i \in I^2} C_i^0 (\mu_i^U + \epsilon_i - \epsilon_1) \\ & \leq \sum_{j \in J} D_j \left\{ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U \right) \right\} - \sum_{i \in I^1} C_i^U \mu_i^U - \sum_{i \in I^2} C_i^0 \mu_i^U \end{aligned} \quad (B.22) $$
+
+which implies inequality (B.23),
+
+$$ \begin{aligned} & \sum_{j \in J} D_j \left\{ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U + \epsilon_i \right) \right\} - \sum_{i \in I^1} C_i^U (\mu_i^U + \epsilon_i) - \sum_{i \in I^2} C_i^0 (\mu_i^U + \epsilon_i) \\ & \leq \sum_{j \in J} D_j \left\{ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U + \epsilon_1 \right) \right\} - \sum_{i \in I^1} C_i^U (\mu_i^U + \epsilon_1) - \sum_{i \in I^2} C_i^0 (\mu_i^U + \epsilon_1) \end{aligned} \quad (B.23) $$
+
+By optimality, we also know that inequality (B.24) must be satisfied.
+
+$$ \begin{aligned} & \sum_{j \in J} D_j \left\{ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U + \epsilon_1 \right) \right\} - \sum_{i \in I^1} (C_i^U - \Delta C_i)(\mu_i^U + \epsilon_1) - \sum_{i \in I^2} C_i^0 (\mu_i^U + \epsilon_1) \\ & \leq \sum_{j \in J} D_j \left\{ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U + \epsilon_i \right) \right\} - \sum_{i \in I^1} (C_i^U - \Delta C_i)(\mu_i^U + \epsilon_i) - \sum_{i \in I^2} C_i^0 (\mu_i^U + \epsilon_i) \end{aligned} \quad (B.24) $$
+
+Furthermore, an upper bound on the right-hand side of inequality (B.24) is presented in Eqn. (B.25).
\ No newline at end of file
diff --git a/samples/texts/470846/page_33.md b/samples/texts/470846/page_33.md
new file mode 100644
index 0000000000000000000000000000000000000000..dfec7679ed5eabff192305c310a0eff12b3982c4
--- /dev/null
+++ b/samples/texts/470846/page_33.md
@@ -0,0 +1,39 @@
+$$
+\begin{align}
+& \sum_{j \in J} D_j \left\{ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U + \epsilon_i \right) \right\} - \sum_{i \in I^1} (C_i^U - \Delta C_i) (\mu_i^U + \epsilon_i) - \sum_{i \in I^2} C_i^0 (\mu_i^U + \epsilon_i) \nonumber \\
+& \le \sum_{j \in J} D_j \left\{ \min_{i \in I} \left( \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U \right) + \max_i[\epsilon_i] \right\} - \sum_{i \in I^1} (C_i^U - \Delta C_i) (\mu_i^U + \epsilon_i) - \sum_{i \in I^2} C_i^0 (\mu_i^U + \epsilon_i) \tag{B.25}
+\end{align}
+$$
+
+If we subtract the left-hand side of (B.24) from the right-hand side of (B.25), we can infer inequality
+(B.26),
+
+$$
+\sum_{j \in J} D_j \left\{ \max_i [\epsilon_i] - \epsilon_1 \right\} - \sum_{i \in I^2} C_i^0 (\epsilon_i - \epsilon_1) \ge 0 \quad (B.26)
+$$
+
+Now, let us assume that $\max_{i \in I}[\epsilon_i] > \epsilon_1$. Then, for $i' = \arg\max_{i}[\epsilon_i]$, inequality (B.27) must be satisfied.
+
+$$
C_{i'}^{0} \leq \frac{\sum_{j \in J} D_j \{\max_{i}[\epsilon_i] - \epsilon_1\} - \sum_{i \in I^2 \setminus \{i'\}} C_i^0 (\epsilon_i - \epsilon_1)}{(\epsilon_{i'} - \epsilon_1)} \quad (B.27)
+$$
+
+But we have not imposed any restriction on the capacities of the competitors, so $C_{i'}^0$ can be arbitrarily large and inequality (B.27) cannot hold in general; the assumption $\max_{i \in I}[\epsilon_i] > \epsilon_1$ leads to a contradiction. Therefore, $\epsilon_1 = \max_i[\epsilon_i]$.
+
+**Step 4:** reduced costs of assignment variables for the leader cannot decrease when its capacities are reduced.
+
+A necessary condition for optimality of a minimization linear program is that the reduced costs of the nonbasic variables must be nonnegative. Therefore, optimal demand assignments to the leader that are nonbasic ($y_{i,j}^U = 0$, $i \in I^1$) in the problem with maximum capacity must have nonnegative reduced costs, as presented by inequality (B.28).
+
+$$
+r_{i,j}^{U} = \frac{1}{(1+R)^t} P_{i,j} + \mu_{i}^{U} - \lambda_{j}^{U} \geq 0 \quad \forall (i,j) \in \{(i,j): i \in I^1, j \in J, y_{i,j}^{U} = 0\} \quad (B.28)
+$$
+
+Using Eqn. (B.12), we can rewrite the reduced cost ($r_{i,j}$) for nonbasic variables $y_{i,j}$ only in terms of dual variables $\mu_i$,
+
+$$
r_{i,j}^{U} = \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U - \min_{i' \in I} \left( \frac{1}{(1+R)^t} P_{i',j} + \mu_{i'}^U \right) \ge 0 \quad \forall (i,j) \in \{(i,j): i \in I^1, j \in J, y_{i,j}^U = 0\} \tag{B.29}
+$$
+
+Recall that the lower-level problem is degenerate because the leader offers a single price to each market from all of its facilities. This degeneracy implies that some assignment variables are nonbasic although their reduced costs are exactly zero. In order to keep the degenerate assignments in the bilevel problem, we restrict the domain reduction to variables with strictly positive reduced costs in the lower-level problem with maximum capacity.
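A tiny numeric illustration of this degeneracy (assumed data, $R = 0$): two leader facilities quote the same price to one market, so both leader assignments have zero reduced cost even though at most one is basic, while the competitor's assignment has a strictly positive reduced cost:

```python
# Prices offered to a single market and optimal duals with ample capacity
# (assumed data, R = 0). Both leader facilities quote the same price.
P = {"L1": 5.0, "L2": 5.0, "Co": 8.0}
mu = {"L1": 0.0, "L2": 0.0, "Co": 0.0}
lam = min(P[i] + mu[i] for i in P)       # Eqn. (B.12)
r = {i: P[i] + mu[i] - lam for i in P}   # reduced costs, Eqn. (B.29)
print(r)  # {'L1': 0.0, 'L2': 0.0, 'Co': 3.0}
```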
+
+In Step 3, we established that dual variables $\mu_i$ ($i \in I^2$) cannot increase more than dual variables $\mu_i$ ($i \in I^1$) when production capacities of the leader are reduced from $C_i^U$ to $C_i^U - \Delta C_i$. Then, according to inequality (B.30), the reduced cost of the variables of the leader cannot decrease when capacities are reduced.
\ No newline at end of file
diff --git a/samples/texts/470846/page_34.md b/samples/texts/470846/page_34.md
new file mode 100644
index 0000000000000000000000000000000000000000..b58d7b8b53ea7951a867984e426a682789a476db
--- /dev/null
+++ b/samples/texts/470846/page_34.md
@@ -0,0 +1,10 @@
+$$
+\begin{align*}
& \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U - \min_{i' \in I} \left( \frac{1}{(1+R)^t} P_{i',j} + \mu_{i'}^U \right) \\
& \le \frac{1}{(1+R)^t} P_{i,j} + \mu_i^U + \epsilon_i - \min_{i' \in I} \left( \frac{1}{(1+R)^t} P_{i',j} + \mu_{i'}^U + \epsilon_{i'} \right) \quad \forall (i,j) \in \{(i,j) : i \in I^1, j \in J\} \tag{B.30}
+\end{align*}
+$$
+
+Therefore, variables $y_{i,j}$ ($i \in I^1$) with positive reduced cost in the lower-level problem with maximum capacity have positive reduced cost regardless of the leader's expansion strategy.
+
+☐
\ No newline at end of file
diff --git a/samples/texts/470846/page_4.md b/samples/texts/470846/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..bab236249416ecf299f730a22aab8ea201269a83
--- /dev/null
+++ b/samples/texts/470846/page_4.md
@@ -0,0 +1,11 @@
+pansion planning in a competitive environment. Bilevel programming for this kind of problem can be seen as a risk mitigation strategy, given the significant influence of external decision makers on the economic success of investment plans. In particular, we propose a mathematical model that includes rational market behavior beyond the classic game-theoretical models. The investment plans obtained from this approach are found to be less sensitive to changes in the business environment in comparison to those from single-level formulations.
+
+In order to solve the challenging bilevel formulation, we test the KKT and the duality-based reformulations with an illustrative example, a middle-size example, and an industrial example. The results show the advantages of the duality-based reformulation in terms of computational effort. Despite the efficiency obtained with this reformulation, we found it necessary to implement two additional improvement strategies to solve large-scale instances.
+
+The remainder of the paper is organized as follows. In Section 2, we describe the problem. In Section 3, we present the single-level capacity planning formulation. Section 4 presents the bilevel capacity planning problem with rational markets. In Section 5, we develop two reformulations that allow solving the bilevel optimization problem. Section 6 presents a small example that illustrates the proposed formulations. Subsequently, in Section 7 we evaluate the performance of the proposed reformulations with a middle-size example. In Section 8, we elaborate on solution approaches for large-scale bilevel capacity planning problems. Section 9 presents an industrial example. Finally, in Section 10 we present our analysis and conclusions.
+
+## 2 Problem statement
+
+A company that produces and commercializes industrial products in a given geographic region is interested in developing an investment plan to expand its capacity in anticipation of future demand increase. The company operates some facilities with limited production capacity. Existing facilities are eligible for capacity expansion and other locations are candidates to open new facilities. The construction and expansion of facilities requires the investment of capital to develop the project and install new production lines. The potential increases in production capacity are assumed to be discrete and the corresponding investments are given by fixed costs. Based on the available capacity in the facilities, the company allocates production to market demands. Figure 1 shows a schematic representation of a region with several industrial producers and gas markets.
+
+The company obtains revenue from selling its products at fixed prices in each market. The goal of the company is to find the investment plan that maximizes the Net Present Value (NPV) of its profit during a finite time horizon. The NPV is calculated by applying the appropriate discount factor to the income received from sales and the expenses related to investment, production, maintenance, and transportation costs.
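The discounting itself is the standard present-value factor $1/(1+R)^t$; a minimal sketch (the interest rate value is assumed):

```python
# Present-value factor used throughout the NPV; R is an assumed value.
def discount(cashflow, t, R=0.08):
    """Discount a cashflow received at the end of period t."""
    return cashflow / (1 + R) ** t

print(round(discount(108.0, 1), 6))  # 100.0
```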
\ No newline at end of file
diff --git a/samples/texts/470846/page_5.md b/samples/texts/470846/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..ce1d8d4b29d01bf7b72bf0af4fe3ed5bc4df7110
--- /dev/null
+++ b/samples/texts/470846/page_5.md
@@ -0,0 +1,44 @@
+Figure 1: Superstructure of regional gas markets.
+
+# 3 Single-level Capacity Planning with Captive Markets
+
+The basic model to plan the capacity expansion of a company serving industrial markets assumes that all markets are willing to buy the products at the offered prices. In this context, markets are regarded as captive. The capacity expansion planning problem with captive markets can be formulated as the single-level Mixed-Integer Linear Program (MILP) presented in Eqns. (1) - (7).
+
+$$
+\begin{equation}
+\begin{aligned}
+\max \quad & \sum_{t \in T} \sum_{i \in I^1} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} P_{t,i,j,k} y_{t,i,j,k} \\
+& - \sum_{t \in T} \sum_{i \in I^1} \frac{1}{(1+R)^t} (A_{t,i} v_{t,i} + B_{t,i} w_{t,i}) \\
+& - \sum_{t \in T} \sum_{i \in I^1} \sum_{k \in K} \frac{1}{(1+R)^t} \left( E_{t,i,k} x_{t,i,k} + F_{t,i,k} \sum_{j \in J} y_{t,i,j,k} \right) \\
& - \sum_{t \in T} \sum_{i \in I^1} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} G_{t,i,j,k} y_{t,i,j,k}
+\end{aligned}
+\tag{1}
+\end{equation}
+$$
+
+$$
\text{s.t.} \quad w_{t,i} = V_i^0 + \sum_{t' \in T_t'} v_{t',i} \qquad \forall t \in T, i \in I^1 \quad (2)
+$$
+
+$$
+x_{t,i,k} \le w_{t,i} \qquad \forall t \in T, i \in I^1, k \in K \tag{3}
+$$
+
+$$
c_{t,i,k} = C_{i,k}^0 + \sum_{t' \in T_t'} H_{i,k} x_{t',i,k} \qquad \forall t \in T, i \in I^1, k \in K \quad (4)
+$$
+
+$$
+\sum_{j \in J} y_{t,i,j,k} \le c_{t,i,k} \qquad \forall t \in T, i \in I^1, k \in K \quad (5)
+$$
+
+$$
\sum_{i \in I^1} y_{t,i,j,k} \le D_{t,j,k} \qquad \forall t \in T, j \in J, k \in K \quad (6)
+$$
+
+$$
v_{t,i}, w_{t,i}, x_{t,i,k} \in \{0,1\}; \quad c_{t,i,k}, y_{t,i,j,k} \in \mathbb{R}^{+} \qquad \forall t \in T, i \in I^1, j \in J, k \in K \quad (7)
+$$
+
+where $T$, $I^1$, $J$, and $K$ are, respectively, the index sets for time periods ($t$), production facilities of the decision maker ($i$), markets ($j$), and products ($k$). We also define $T'$ as the subset of time periods in $T$ in which expansions are allowed, and $T'_t$ as the subset of allowed expansion periods no later than $t$. Formally, $T'_t = \{t' : t' \in T', t' \le t\}$.
+
diff --git a/samples/texts/470846/page_6.md b/samples/texts/470846/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..5f93aeaec54045a571a2680c5f71ce7b09a22101
--- /dev/null
+++ b/samples/texts/470846/page_6.md
@@ -0,0 +1,9 @@
+The first term in expression (1) represents the income obtained from sales. Income is proportional to demand assignments $(y_{t,i,j,k})$ according to the price paid by the markets $(P_{t,i,j,k})$. The second term includes the cost of opening new facilities and the maintenance cost of open facilities. The binary variable deciding if a new facility is opened at location $i$ at time period $t$ is $v_{t,i}$; parameter $A_{t,i}$ determines the fixed cost to build a new facility. The binary variable $w_{t,i}$ indicates if facility $i$ is open at time period $t$; if the facility is open at period $t$, a fixed cost $B_{t,i}$ must be paid for maintenance. The third term includes expansion and production costs. The expansion of production capacity for product $k$ in facility $i$ at period $t$ is decided with binary variable $x_{t,i,k}$; the cost of expansions is given by parameter $E_{t,i,k}$. Production costs are proportional to demand assignments $(y_{t,i,j,k})$ according to their unit production cost ($F_{t,i,k}$). Finally, the last term represents the transportation cost from production facilities to markets. Transportation cost is proportional to demand assignments $(y_{t,i,j,k})$ according to the unit transportation cost ($G_{t,i,j,k}$). All terms are discounted in every time period with an interest rate $(R)$.
+
+Constraint set (2) is used to model the maintenance cost of facilities during the time periods when they are open; the binary parameter $V_i^0$ indicates the facilities that are initially open. Constraint set (3) requires capacity expansions to take place only at open facilities. Constraint set (4) determines the production capacity of facilities according to the expansion decisions; parameters $C_{i,k}^0$ indicate the initial capacities and $H_{i,k}$ is the magnitude of the potential capacity expansion. Constraint set (5) bounds the demand assignments according to the production capacities. Finally, constraint set (6) bounds demand assignments according to market demands. The domains of the variables are given by expressions (7).
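The cumulative bookkeeping in constraint set (4) can be sketched as follows (one facility, one product; $C^0$, $H$, and the expansion schedule are assumed values):

```python
# One facility, one product; C0, H and the expansion schedule are assumed.
C0, H = 100.0, 50.0
x = {1: 0, 2: 1, 3: 0, 4: 1}  # expansion decision x_t per period

def capacity(t):
    """Constraint (4): c_t = C0 + H * (expansions up to and including t)."""
    return C0 + H * sum(x[tp] for tp in x if tp <= t)

print([capacity(t) for t in sorted(x)])  # [100.0, 150.0, 150.0, 200.0]
```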
+
+# 4 Bilevel Capacity Planning with Rational Markets
+
+The most intuitive way to model a competitive environment is to assume that the markets can select their providers according to their own interests. The rational behavior of the markets can be modeled with a mathematical program that optimizes their objective function. The behavior of the markets is included in the constraints of the capacity planning problem, yielding a bilevel optimization formulation. In this formulation, the upper-level problem finds the optimal capacity expansion plan by selecting the investments that maximize the NPV of the leader. The lower-level problem represents the response of markets that select production facilities as providers with the sole interest of satisfying their demands at the lowest cost.
+
+The formulation presented in Section 3 is modified to ensure that market demands are completely satisfied. This is achieved by transforming constraint set (6) into equality constraints. This change is necessary to prevent the market cost from dropping to zero by leaving all demands unsatisfied. Additionally, the set of potential providers is expanded to include facilities from independent producers. We assume that the initial capacity of all production facilities is large enough to satisfy all market demands regardless of the expansion plan of the leader. This assumption is also useful to avoid unprofitable investments in capacity expansions driven by the need to maintain feasibility of the problem.
\ No newline at end of file
diff --git a/samples/texts/470846/page_7.md b/samples/texts/470846/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f1c92d9b597f67ab00c49a4a474930daaccdcd7
--- /dev/null
+++ b/samples/texts/470846/page_7.md
@@ -0,0 +1,8 @@
+The products offered by the competing producers are considered homogeneous, and the markets have no preference among producers other than price. Cases in which the markets have no preference between two or more facilities are resolved by the upper level according to the interest of the leader; this modeling framework is known as the *optimistic approach* [19]. In our model, the *optimistic approach* is a key assumption because all facilities controlled by the leader offer the same price to each market. Therefore, the optimization problem of the markets is degenerate. However, the markets are only concerned with selecting the producer that offers the lowest price and are indifferent to the facility from which they are served; consequently, the leader is free to choose the facilities it uses to satisfy the demands assigned to it.
+
+The bilevel optimization problem for capacity expansion planning in a competitive environment is presented in Eqns. (8) - (18).
+
+$$ \begin{array}{ll} \max_{v,w,x} & \displaystyle \sum_{t \in T} \sum_{i \in I^1} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} P_{t,i,j,k} y_{t,i,j,k} \\ & - \sum_{t \in T} \sum_{i \in I^1} \frac{1}{(1+R)^t} (A_{t,i} v_{t,i} + B_{t,i} w_{t,i}) \\ & - \sum_{t \in T} \sum_{i \in I^1} \sum_{k \in K} \frac{1}{(1+R)^t} \left( E_{t,i,k} x_{t,i,k} + F_{t,i,k} \sum_{j \in J} y_{t,i,j,k} \right) \\ & - \sum_{t \in T} \sum_{i \in I^1} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} G_{t,i,j,k} y_{t,i,j,k} \quad (8) \\ & \text{s.t.} \quad w_{t,i} = V_i^0 + \sum_{t' \in T_t'} v_{t',i} & \forall t \in T, i \in I^1 & (9) \\ & x_{t,i,k} \le w_{t,i} & \forall t \in T, i \in I^1, k \in K & (10) \\ & c_{t,i,k} = C_{i,k}^0 + \sum_{t' \in T_t'} H_{i,k} x_{t',i,k} & \forall t \in T, i \in I^1, k \in K & (11) \\ \min_y & \displaystyle \sum_{t \in T} \sum_{i \in I} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} P_{t,i,j,k} y_{t,i,j,k} & (12) \\ \text{s.t.} & \displaystyle \sum_{j \in J} y_{t,i,j,k} \le c_{t,i,k} & \forall t \in T, i \in I^1, k \in K & (13) \\ & \displaystyle \sum_{j \in J} y_{t,i,j,k} \le C_{i,k}^0 & \forall t \in T, i \in I^2, k \in K & (14) \\ & \displaystyle \sum_{i \in I} y_{t,i,j,k} = D_{t,j,k} & \forall t \in T, j \in J, k \in K & (15) \\ & y_{t,i,j,k} \in \mathbb{R}^+ & \forall t \in T, i \in I, j \in J, k \in K & (16) \\ & c_{t,i,k} \in \mathbb{R}^+ & \forall t \in T, i \in I^1, j \in J, k \in K & (17) \\ & v_{t,i}, w_{t,i}, x_{t,i,k} \in \{0, 1\} & \forall t \in T, i \in I^1, k \in K & (18)
\end{array} $$
+
+where $I$ is the set of all production facilities, $I^1 \subset I$ is the subset of facilities controlled by the leader, and $I^2 \subset I$ is the subset of facilities controlled by the competitors. It should be noted that Eqns. (8) - (11) are identical to Eqns. (1) - (4) in the single-level formulation. However, in the bilevel formulation the upper-level decision maker only controls variables $v_{t,i}$, $w_{t,i}$, $x_{t,i,k}$, and
\ No newline at end of file
diff --git a/samples/texts/470846/page_8.md b/samples/texts/470846/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..c1bea577296029d531a719dbc2fe4e2b2f7cd812
--- /dev/null
+++ b/samples/texts/470846/page_8.md
@@ -0,0 +1,19 @@
+$c_{t,i,k}$. Demand assignment decisions $(y_{t,i,j,k})$ are controlled by the lower level with the objective of minimizing the cost paid by the markets according to Eqn. (12). Eqns. (13) and (14) constrain the production capacity of the facilities; Eqn. (15) enforces demand satisfaction in every time period. The domains of the variables are presented in Eqns. (16) - (18). It is important to note that upper-level variables only take discrete values and all lower-level variables are continuous. This attribute of the model is crucial for the reformulations that we propose.
+
+# 5 Reformulation as a Single-level Optimization Problem
+
+An optimistic bilevel program with a convex and regular lower-level problem can be transformed into a single-level optimization problem using its optimality conditions [10]. The key property of convex programs is that their KKT conditions are necessary and sufficient to characterize their global optimal solutions. In the case of linear programs, the KKT optimality conditions are equivalent to the simultaneous satisfaction of primal feasibility, dual feasibility, and strong duality [13]. Based on this equivalence, we derive two single-level reformulations for the capacity planning problem in a competitive environment.
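
The LP equivalence invoked here is easy to illustrate numerically: for a linear program, a primal-dual feasible pair with equal objective values is optimal. A minimal check on a small textbook LP (not the capacity planning model), using scipy's `linprog`:

```python
# Strong duality on a small textbook LP (not the capacity planning model):
#   primal: max 3*x1 + 5*x2  s.t.  x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])

# linprog minimizes, so negate c for the maximization problem.
primal = linprog(c=-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: min b^T y  s.t.  A^T y >= c, y >= 0  (signs flipped for linprog's <= form).
dual = linprog(c=b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

# Strong duality: the optimal primal and dual objective values coincide.
print(-primal.fun, dual.fun)
```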
+
+## 5.1 KKT Reformulation
+
+The classic reformulation for bilevel programs with a lower-level LP is to replace the lower-level problem by its KKT conditions. In the case of capacity planning in a competitive environment, the KKT reformulation is obtained by introducing constraints that guarantee the stationarity conditions, primal feasibility, dual feasibility, and complementary slackness for the cost minimization problem modeling market behavior. The resulting reformulation is presented in Eqns. (19) - (33).
+
+$$ \begin{aligned} \max & \quad \sum_{t \in T} \sum_{i \in I^1} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} P_{t,i,j,k} y_{t,i,j,k} \\ & - \sum_{t \in T} \sum_{i \in I^1} \frac{1}{(1+R)^t} (A_{t,i} v_{t,i} + B_{t,i} w_{t,i}) \\ & - \sum_{t \in T} \sum_{i \in I^1} \sum_{k \in K} \frac{1}{(1+R)^t} \left( E_{t,i,k} x_{t,i,k} + F_{t,i,k} \sum_{j \in J} y_{t,i,j,k} \right) \\ & - \sum_{t \in T} \sum_{i \in I^1} \sum_{j \in J} \sum_{k \in K} \frac{1}{(1+R)^t} G_{t,i,j,k} y_{t,i,j,k} \end{aligned} \tag{19} $$
+
+$$ w_{t,i} = V_i^0 + \sum_{t' \in T_t'} v_{t',i} \qquad \forall t \in T, i \in I^1 \tag{20} $$
+
+$$ x_{t,i,k} \le w_{t,i} \qquad \forall t \in T, i \in I^1, k \in K \tag{21} $$
+
+$$ c_{t,i,k} = C_{i,k}^0 + \sum_{t' \in T_t'} H_{i,k} x_{t',i,k} \qquad \forall t \in T, i \in I^1, k \in K \tag{22} $$
+
+$$ \sum_{j \in J} y_{t,i,j,k} \le c_{t,i,k} \qquad \forall t \in T, i \in I^1, k \in K \tag{23} $$
\ No newline at end of file
diff --git a/samples/texts/470846/page_9.md b/samples/texts/470846/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..5cd61d5833e1d57fdd09e440613be91183ef6e04
--- /dev/null
+++ b/samples/texts/470846/page_9.md
@@ -0,0 +1,35 @@
+$$ \sum_{j \in J} y_{t,i,j,k} \le C_{i,k}^0 \qquad \forall t \in T, i \in I^2, k \in K \tag{24} $$
+
+$$ \sum_{i \in I} y_{t,i,j,k} = D_{t,j,k} \qquad \forall t \in T, j \in J, k \in K \tag{25} $$
+
+$$ \frac{1}{(1+R)^t} P_{t,i,j,k} + \lambda_{t,j,k} + \mu_{t,i,k} - \gamma_{t,i,j,k} = 0 \qquad \forall t \in T, i \in I, j \in J, k \in K \tag{26} $$
+
+$$ \mu_{t,i,k} \left( \sum_j y_{t,i,j,k} - c_{t,i,k} \right) = 0 \qquad \forall t \in T, i \in I^1, k \in K \tag{27} $$
+
+$$ \mu_{t,i,k} \left( \sum_j y_{t,i,j,k} - C_{i,k}^0 \right) = 0 \qquad \forall t \in T, i \in I^2, k \in K \tag{28} $$
+
+$$ \gamma_{t,i,j,k} y_{t,i,j,k} = 0 \qquad \forall t \in T, i \in I, j \in J, k \in K \tag{29} $$
+
+$$ y_{t,i,j,k}, \mu_{t,i,k}, \gamma_{t,i,j,k} \in \mathbb{R}^+ \qquad \forall t \in T, i \in I, j \in J, k \in K \tag{30} $$
+
+$$ \lambda_{t,j,k} \in \mathbb{R} \qquad \forall t \in T, j \in J, k \in K \tag{31} $$
+
+$$ c_{t,i,k} \in \mathbb{R}^+ \qquad \forall t \in T, i \in I^1, k \in K \tag{32} $$
+
+$$ v_{t,i}, w_{t,i}, x_{t,i,k} \in \{0,1\} \qquad \forall t \in T, i \in I^1, k \in K \tag{33} $$
+
+where $\mu_{t,i,k}$, $\lambda_{t,j,k}$, and $\gamma_{t,i,j,k}$ are the Lagrange multipliers of the lower-level constraints presented in Eqns. (13) - (14), (15), and (16), respectively. The upper-level problem is kept unchanged, as shown in Eqns. (19) - (22). Constraints (23) - (25) ensure primal feasibility of the lower level; the constraints presented in (26) are the stationarity conditions for the lower level; Eqns. (27) and (28) represent the complementarity conditions corresponding to inequalities (13) and (14); the constraints (29) are the complementarity conditions corresponding to the domain of the lower-level variables presented in Eqn. (16). The domains are presented in Eqns. (30) - (33).
+
+The main disadvantage associated with this reformulation is the introduction of non-linear complementarity constraints. To avoid solving a nonconvex Mixed-Integer Non-Linear Program (MINLP), the complementarity constraints can be formulated as disjunctions that are transformed into mixed-integer constraints [17]. In particular, we rewrite Eqns. (23) and (24) as equality constraints by introducing the slack variables $s_{t,i,k}$,
+
+$$ \sum_{j \in J} y_{t,i,j,k} + s_{t,i,k} = c_{t,i,k} \quad \forall t \in T, i \in I^1, k \in K \tag{34} $$
+
+$$ \sum_{j \in J} y_{t,i,j,k} + s_{t,i,k} = C_{i,k}^0 \quad \forall t \in T, i \in I^2, k \in K \tag{35} $$
+
+and use the Big-M reformulation to express that either the capacity constraints hold with zero slack or the corresponding multipliers ($\mu_{t,i,k}$) are zero. The Big-M constraints modeling this disjunction are presented in Eqn. (36) using the binary variable $z_{t,i,k}^{1}$.
+
+$$ s_{t,i,k} \le M z_{t,i,k}^{1} $$
+
+$$ \mu_{t,i,k} \le M (1 - z_{t,i,k}^{1}) \qquad \forall t \in T, i \in I, k \in K \tag{36} $$
+
+$$ z_{t,i,k}^{1} \in \{0,1\} $$
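
The logic of Eqn. (36) can be sanity-checked by brute force: for binary $z^1$, the pair of Big-M bounds forces either the slack or the multiplier to vanish, so the complementarity product $s \cdot \mu$ is always zero. A small enumeration, with an arbitrary $M = 10$ and a coarse value grid chosen purely for illustration:

```python
# Brute-force check of the Big-M disjunction in Eqn. (36): for z in {0, 1},
#   s <= M*z  and  mu <= M*(1 - z)  (with s, mu >= 0)
# force either the slack s or the multiplier mu to zero, i.e. s*mu = 0.
M = 10.0                          # illustrative bound; must dominate s and mu
vals = [0.0, 0.5 * M, M]          # coarse candidate grid for s and mu
feasible = [
    (s, mu, z)
    for s in vals for mu in vals for z in (0, 1)
    if s <= M * z and mu <= M * (1 - z)
]
# Every feasible point satisfies complementarity.
assert all(s * mu == 0.0 for s, mu, z in feasible)
print(len(feasible))  # number of feasible grid points
```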
\ No newline at end of file
diff --git a/samples/texts/5048783/page_1.md b/samples/texts/5048783/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..042689ed3e938454b8be9571e58d863637e8699f
--- /dev/null
+++ b/samples/texts/5048783/page_1.md
@@ -0,0 +1,24 @@
+# Stagnation point flow of a second grade fluid with uniform suction or blowing and heat generation
+
+Hazem Ali Attia
+
+Department of Mathematics, College of Science, Al-Qasseem University, P.O. Box 237, Buraidah 81999,
+KINGDOM OF SAUDI ARABIA
+
+On leave from: Department of Engineering Mathematics and Physics, Faculty of Engineering, El-Fayoum University, El-Fayoum, EGYPT
+
+e-mail: ah1113@yahoo.com
+
+## SUMMARY
+
+The steady laminar flow of an incompressible non-Newtonian second grade fluid impinging on a permeable flat plate with heat generation is investigated. A uniform suction or blowing is applied normal to the plate which is maintained at a constant temperature. Numerical solution for the governing nonlinear momentum and energy equations is obtained. The effect of the uniform suction or blowing and the characteristics of the non-Newtonian fluid on both the flow and heat transfer is presented and discussed.
+
+**Key words:** stagnation point flow, non-Newtonian fluid, suction, steady laminar flow, heat generation.
+
+## 1. INTRODUCTION
+
+The two-dimensional flow of a fluid near a stagnation point is a classical problem in fluid mechanics. It was first examined by Hiemenz [1], who demonstrated that the Navier-Stokes equations governing the flow can be reduced to an ordinary differential equation of third order using a similarity transformation. Owing to the nonlinearities in the reduced differential equation, no analytical solution is available, and the nonlinear equation is usually solved numerically subject to two-point boundary conditions, one of which is prescribed at infinity.
+
+Later the problem of stagnation point flow was extended in numerous ways to include various physical effects. The axisymmetric three-dimensional stagnation point flow was studied by Homann [2]. The results of these studies are of great technical importance, for example in the prediction of skin-friction as well as heat/mass transfer near stagnation regions of bodies in high speed flows and also in the design of thrust bearings and radial diffusers, drag
+
+reduction, transpiration cooling and thermal oil recovery. In either the two- or three-dimensional case, the Navier-Stokes equations governing the flow are reduced to an ordinary differential equation of third order using a similarity transformation. The effect of suction on the Hiemenz problem has been considered in the literature. Schlichting and Bussmann [3] first gave numerical results. More detailed solutions were later presented by Preston [4]. An approximate solution to the problem of uniform suction is given by Ariel [5]. The effect of uniform suction on the Homann problem where the flat plate is oscillating in its own plane is considered by Weidman and Mahalingam [6]. In hydromagnetics, the problem of Hiemenz flow was chosen by Na [7] to illustrate the solution of a third-order boundary value problem using the technique of finite differences. An approximate solution of the same problem has been provided by Ariel [8]. The effect of an externally applied uniform magnetic field on the two- or three-dimensional stagnation point flow was given, respectively, by Attia in Refs. [9] and [10] in the presence of uniform suction or injection.
\ No newline at end of file
diff --git a/samples/texts/5048783/page_2.md b/samples/texts/5048783/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..5fea496e19245f604eb3b2a7552faaa547a006ac
--- /dev/null
+++ b/samples/texts/5048783/page_2.md
@@ -0,0 +1,37 @@
+The study of heat transfer in boundary layer flows is of importance in many engineering applications such as the design of thrust bearings and radial diffusers, transpiration cooling, drag reduction, thermal recovery of oil, etc. Massoudi and Ramezan [11] used a perturbation technique to solve for the stagnation point flow and heat transfer of a non-Newtonian fluid of second grade. Their analysis is valid only for small values of the parameter that determines the behaviour of the non-Newtonian fluid. Later, Massoudi and Ramezan [12] extended the problem to a nonisothermal surface. Garg [13] improved the solution obtained by Massoudi and Ramezan [12] by computing numerically the flow characteristics for any value of the non-Newtonian parameter using a pseudo-similarity solution.
+
+Non-Newtonian fluids were considered by many researchers. Among the non-Newtonian fluids, the solution of the stagnation point flow for viscoelastic fluids has been given by Rajeshwari and Rathna [14], Beard and Walters [15], Teipel [16], Ariel [17], and others; for power-law fluids by Djukić [18]; and for micropolar fluids by Nath [19], Kelson et al. [20], Desseaux [21] and Nazar et al. [22]. Stagnation point flow of a non-Newtonian second grade fluid was studied by Teipel [23] and Ariel [24] in the hydrodynamic case. In hydromagnetics, Attia [25] introduced the influence of a magnetic field on the flow of a second grade fluid.
+
+The purpose of the present paper is to study the steady laminar flow of an incompressible non-Newtonian second grade fluid at a two-dimensional stagnation point with heat generation. A uniform suction or blowing directed normal to the plane of the wall is applied. The wall and stream temperatures are assumed to be constants. A numerical solution is obtained for the governing momentum and energy equations using finite difference approximations which take into account the asymptotic boundary conditions. The numerical solution is used to determine the flow and heat characteristics for the whole range of the non-Newtonian fluid characteristics, the suction or blowing parameter, and the Prandtl number.
+
+## 2. FORMULATION OF THE PROBLEM
+
+Consider the two-dimensional stagnation point flow of an incompressible non-Newtonian Rivlin-Ericksen fluid impinging perpendicularly on a permeable wall and flowing away along the x-axis. This is an example of a plane potential flow that arrives from the y-axis, impinges on a flat wall placed at $y=0$, divides into two streams on the wall, and leaves in both directions. The viscous flow must adhere to the wall, whereas the potential flow slides along it. Here, $(u,v)$ are the components of velocity at any point $(x,y)$ for the viscous flow, whereas $(U,V)$ are the velocity components for
+
+the potential flow. A uniform suction or blowing is applied at the plate with a transpiration velocity at the boundary of the plate given by $-v_0$, where $v_0>0$ for suction. The velocity distribution in the frictionless flow in the neighborhood of the stagnation point is given by:
+
+$$U(x) = ax, \quad V(y) = -ay$$
+
+where the constant $a > 0$ is proportional to the free stream velocity far away from the surface. A second grade fluid is defined such that the Cauchy stress tensor is related to the fluid motion in the following manner [23]:
+
+$$T = -pI + \mu A_1 + \alpha_1 A_2 + \alpha_2 A_1^2 \quad (1)$$
+
+where $p$ denotes the hydrostatic pressure, $I$ is the identity tensor, $\mu$ is the viscosity of the fluid, $\alpha_1$ and $\alpha_2$ are scalar constants known as normal stress moduli, and $A_1$ and $A_2$ are the first two Rivlin-Ericksen tensors. For $\alpha_1=\alpha_2=0$, Eq. (1) describes a common Newtonian fluid, and $A_1$ represents the usual deformation tensor. All the stress components have to be introduced into the equations of motion. Here, we consider the case $\alpha_2=0$, i.e. the case of a reduced Rivlin-Ericksen fluid. Then, for two-dimensional steady-state flows, the continuity and momentum equations, using the usual boundary layer approximations [24] and introducing the stress components, reduce to:
+
+$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0, \quad (2)$$
+
+$$\rho \left( u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y} \right) = \rho U \frac{dU}{dx} + \mu \frac{\partial^2 u}{\partial y^2} + \alpha_1 \left( \frac{\partial u}{\partial y} \frac{\partial^2 v}{\partial y^2} + v \frac{\partial^3 u}{\partial y^3} + \frac{\partial}{\partial x} \left( u \frac{\partial^2 u}{\partial y^2} \right) \right) \quad (3)$$
+
+where $\rho$ is the density of the fluid, and $U(x)$ is the potential flow velocity over the body surface.
+
+Using the boundary layer approximations and neglecting the dissipation, the equation of energy for temperature $T$ with heat generation or absorption is given [11,12]:
+
+$$\rho c_p \left( u \frac{\partial T}{\partial x} + v \frac{\partial T}{\partial y} \right) = k \frac{\partial^2 T}{\partial y^2} + Q(T - T_{\infty}) \quad (4)$$
+
+where $c_p$ is the specific heat capacity at constant pressure of the fluid, $k$ is the thermal conductivity of the fluid, and $Q$ is the volumetric rate of heat generation/absorption. A similarity solution exists if the wall and stream temperatures, $T_w$ and $T_\infty$ are constants – a realistic approximation in typical stagnation point heat transfer problems [11,12].
+
+The boundary conditions are:
+
+$$y = 0: u = 0, v = -v_o, \quad (5a)$$
+
+$$y \to \infty: u \to ax, \quad (5b)$$
\ No newline at end of file
diff --git a/samples/texts/5048783/page_3.md b/samples/texts/5048783/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..152f67d1be28c12affda6f6d5ed015b055d839c0
--- /dev/null
+++ b/samples/texts/5048783/page_3.md
@@ -0,0 +1,51 @@
+$$y = 0 : T = T_w, \quad (6a)$$
+
+$$y \to \infty : T \to T_\infty \quad (6b)$$
+
+A little inspection shows that boundary-layer Eqs. (2) to (4) admit a similarity solution:
+
+$$u(x,y) = a x f'(z), \quad v(x,y) = -\sqrt{a\nu}\, f(z), \quad z = \sqrt{a/\nu}\, y \quad (7)$$
+
+where the prime denotes differentiation with respect to $z$ and $\nu=\mu/\rho$. By introducing the non-dimensional variable:
+
+$$\theta = \frac{T - T_{\infty}}{T_w - T_{\infty}}$$
+
+and using Eq. (7), we find that Eq. (2) is identically satisfied and Eqs. (3)-(6) reduce to:
+
+$$K(f f^{iv} - 2 f' f''' + f''^2) - f''' - f f'' + f'^2 - 1 = 0 \quad (8)$$
+
+$$\theta'' + Pr f \theta' + Pr B \theta = 0 \quad (9)$$
+
+$$f(0) = A, \quad f'(0) = 0, \quad f'(\infty) = 1 \quad (10)$$
+
+$$\theta(0) = 1, \quad \theta(\infty) = 0 \quad (11)$$
+
+where $A$ is the suction parameter, $A = v_o / \sqrt{a\nu}$; $Pr$ is the Prandtl number, $Pr = \mu c_p / k$; $K$ is the dimensionless normal stress modulus, $K = \alpha_1 a / \mu$; and $B = Q / a \rho c_p$ is the dimensionless heat generation/absorption coefficient. The heat transfer from the surface to the fluid is computed by application of Fourier's law:
+
+$$q = -k \left( \frac{\partial T}{\partial y} \right)_{y=0}$$
+
+Introducing the transformed variables, the expression for $q$ becomes:
+
+$$q = -k(T_w - T_\infty) \sqrt{a/\nu} \theta'(0) \quad (12)$$
+
+The heat transfer coefficient in terms of the Nusselt number $Nu$ can be expressed as:
+
+$$Nu = \frac{q}{k(T_w - T_\infty)\sqrt{a/\nu}} \quad (13)$$
+
+where $\sqrt{a/\nu}$ plays the role of a characteristic length. Using Eq. (12), Eq. (13) becomes:
+
+$$Nu = -\theta'(0) \quad (14)$$
+
+The equations to be solved are Eqs. (8)-(11). The flow Eqs. (8) and (10) are decoupled from the energy Eqs. (9) and (11), and need to be solved before the latter. The flow Eq. (8) constitutes a non-linear, non-homogeneous boundary value problem (BVP). Since no analytical solution is available, a numerical one is required. The flow Eqs. (8) and (10) are solved numerically using finite difference approximations. A quasi-linearization technique is first applied to replace the non-linear terms by linear approximations,
+
+with the corrections incorporated in subsequent iterative steps until convergence. The quasi-linearized form of Eq. (8) is:
+
+$$\begin{aligned} K\big(f_n f_{n+1}^{iv} + f_n^{iv} f_{n+1} - f_n f_n^{iv} & - 2(f'_n f'''_{n+1} + f'''_n f'_{n+1} - f'_n f'''_n) + 2 f''_n f''_{n+1} - f''^2_n\big) \\ & - f'''_{n+1} - f_n f''_{n+1} - f''_n f_{n+1} + f_n f''_n + 2 f'_n f'_{n+1} - f'^2_n - 1 = 0 \end{aligned}$$
+
+where the subscript $n$ or $n+1$ represents the $n^{th}$ or $(n+1)^{th}$ approximation to the solution. Then, the Crank-Nicolson method is used to replace the different terms by their second-order central difference approximations. An iterative scheme is used to solve the quasi-linearized system of difference equations. The solution for the Newtonian case is chosen as an initial guess and the iterations are continued until convergence within a prescribed accuracy. Finally, the resulting block tri-diagonal system is solved using the generalized Thomas algorithm.
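
As an independent cross-check of such a scheme (not the authors' code), the Newtonian limit $K=0$, $A=0$ of Eqs. (8) and (10) reduces to the classical third-order Hiemenz problem, which an off-the-shelf BVP solver handles directly; the wall shear should reproduce $f''(0) \approx 1.2326$, the $K=0$, $A=0$ entry of Table 1. A sketch with scipy:

```python
# Newtonian limit (K = 0, A = 0) of Eqs. (8) and (10): f''' + f*f'' - f'^2 + 1 = 0,
# f(0) = 0, f'(0) = 0, f'(inf) = 1, with the far boundary placed at z = 10.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(z, y):
    # y = [f, f', f'']
    f, fp, fpp = y
    return np.vstack([fp, fpp, fp**2 - f * fpp - 1.0])

def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[1] - 1.0])

z = np.linspace(0.0, 10.0, 101)
y0 = np.zeros((3, z.size))
y0[0] = z - 1.0 + np.exp(-z)      # crude initial guess with f' -> 1 far away
y0[1] = 1.0 - np.exp(-z)
y0[2] = np.exp(-z)
sol = solve_bvp(rhs, bc, z, y0, tol=1e-6)
print(round(float(sol.sol(0.0)[2]), 4))  # wall shear f''(0), approx. 1.2326
```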
+
+The energy Eq. (9) is a linear second-order ordinary differential equation with a variable coefficient, $f(z)$, which is known from the solution of the flow Eqs. (8) and (10); the Prandtl number $Pr$ is assumed constant. Equation (9) is solved numerically under the boundary conditions (11) using central differences for the derivatives and the Thomas algorithm for the solution of the set of discretized equations. The resulting system of equations has to be solved in the infinite domain $0 < z < \infty$. A finite domain in the $z$-direction can be used instead, with $z_\infty$ chosen large enough to ensure that the solutions are not affected by imposing the asymptotic conditions at a finite distance. Grid-independence studies show that the computational domain $0 < z < z_\infty$ can be divided into intervals of uniform step size equal to 0.02 without sacrificing accuracy. The value $z_\infty = 10$ was found to be adequate for all the ranges of parameters studied here. Convergence is assumed when the ratio of every one of $f, f', f''$ or $f'''$ for the last two approximations differs from unity by less than $10^{-5}$ at all values of $z$ in $0 < z < z_\infty$.
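
The Thomas algorithm mentioned above is the standard $O(n)$ forward-elimination/back-substitution solver for tridiagonal systems. A minimal sketch (not the authors' implementation), applied to the trivial constant-coefficient analogue $\theta'' = 0$ with $\theta(0)=1$, $\theta(1)=0$, whose central-difference discretization gives the classic $(1, -2, 1)$ stencil and a linear exact solution:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# theta'' = 0 on [0, 1], theta(0) = 1, theta(1) = 0, 9 interior nodes:
# central differences give theta_{i-1} - 2*theta_i + theta_{i+1} = 0.
n = 9
a = np.ones(n); b = -2.0 * np.ones(n); c = np.ones(n)
d = np.zeros(n); d[0] -= 1.0   # boundary value theta(0) = 1 moved to the rhs
x = thomas(a, b, c, d)
print(np.round(x, 3))          # the linear profile 0.9, 0.8, ..., 0.1
```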
+
+## 3. RESULTS AND DISCUSSION
+
+Figures 1 and 2 present the profiles of $f$ and $f'$, respectively, for various values of the non-Newtonian parameter $K$ and the suction parameter $A$. The figures show that increasing the parameter $K$ decreases both $f$ and $f'$, but increasing $A$ increases them. The figures indicate also that the effect of $K$ on $f$ and $f'$ is more pronounced for higher values of $A$ (case of suction). However, the effect of $A$ on $f$ and $f'$ becomes more pronounced for higher values of $K$. Also, increasing $K$ increases the velocity boundary layer thickness while increasing $A$ decreases it.
\ No newline at end of file
diff --git a/samples/texts/5048783/page_4.md b/samples/texts/5048783/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c5ddf66fa26a084aa8f1875c62a5ad29e35c584
--- /dev/null
+++ b/samples/texts/5048783/page_4.md
@@ -0,0 +1,13 @@
+Fig. 1 Profiles of f for different values of the non-Newtonian parameter K and the suction parameter A
+
+Fig. 2 Profiles of f' for different values of the non-Newtonian parameter K and the suction parameter A
+
+Fig. 3 Profiles of θ for different values of the non-Newtonian parameter K and the suction parameter A (Pr=0.5)
+
+Figures 4 and 5 present the temperature profiles for various values of the parameters *K* and *Pr*, for *A*=-0.5 and *A*=0.5, respectively, and for *B*=0. The figures bring out clearly the effect of the Prandtl number on the thermal boundary layer thickness. For the suction case (*A*=0.5), as shown in Figure 5, increasing *Pr* decreases the thermal boundary layer thickness for all *K*. For the blowing and Newtonian case (*A*=-0.5, *K*=0), as is clear in Figure 4, increasing *Pr* decreases *θ*. For the non-Newtonian case (*K*=1), however, increasing *Pr* initially increases *θ*, while a further increase in *Pr* decreases *θ* over some distance. The effect of *K* on *θ* is more pronounced for smaller values of *Pr* in the blowing case (see Figure 4).
+
+Figure 3 presents the profile of temperature $\theta$ for various values of the non-Newtonian parameter $K$ and the suction parameter $A$, for $Pr=0.5$ and $B=0$. It is clear that increasing $K$ increases $\theta$, and its effect on $\theta$ becomes more apparent for higher values of $A$ (suction case). The figure indicates that the thermal boundary layer thickness increases when $K$ increases. Increasing $A$ decreases $\theta$ for all $K$, which emphasizes the influence of the injected flow in the cooling process. The action of fluid injection ($A<0$) is to fill the space immediately adjacent to the wall with fluid having nearly the same temperature as that of the wall. As the injection becomes stronger, the blanket extends to greater distances from the surface. As shown in Figure 3, these effects are manifested by the progressive flattening of the temperature profile adjacent to the wall. Thus, the injected flow forms an effective insulating layer, decreasing the heat transfer from the wall. Suction, on the other hand, brings large quantities of ambient fluid into the immediate neighborhood of the surface of the wall. As a consequence of the increased heat-consuming ability of this augmented flow, the temperature drops quickly as we proceed away from the wall. The presence of fluid at near-ambient temperature close to the surface increases the heat transfer.
+
+Fig. 4 Profiles of θ for different values of the non-Newtonian parameter K and the Prandtl number Pr (A=-0.5)
+
+Fig. 5 Profiles of θ for different values of the non-Newtonian parameter K and the Prandtl number Pr (A=0.5)
\ No newline at end of file
diff --git a/samples/texts/5048783/page_5.md b/samples/texts/5048783/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..657f7be006189af2b77ca0653b1741ed744e8110
--- /dev/null
+++ b/samples/texts/5048783/page_5.md
@@ -0,0 +1,19 @@
+Tables 1 and 2 present the variation of the wall shear stress $f''(0)$ and the heat transfer rate at the wall $-\theta'(0)$, respectively, for various values of K and A and for $Pr=0.7$ and $B=0$. For $A \ge 0$, increasing K decreases $f''(0)$. However, for $A < 0$, the effect of K on $f''(0)$ depends on the value of K. Also, increasing the suction velocity ($A > 0$) increases $f''(0)$ for all K, whereas the variation of $f''(0)$ with the blowing velocity depends on K. Table 2 shows that for suction, increasing K decreases $-\theta'(0)$; the effect of K on $-\theta'(0)$ in the blowing case depends on K. Increasing A increases $-\theta'(0)$ for all K.
+
+Table 3 presents the effect of the parameters A and B on $-\theta'(0)$ for $K=1$ and $Pr=0.7$. Increasing A increases $-\theta'(0)$ for all B, while increasing B decreases $-\theta'(0)$ for all A. This is expected since the temperature increase resulting from heat generation decreases the heat transfer rate. Table 4 shows the variation of $-\theta'(0)$ for various values of Pr and B and for $K=1$ and $A=0$. Increasing Pr increases $-\theta'(0)$ for all B.
+
+Table 1 Variation of the wall shear stress $f''(0)$ with K and A ($Pr=0.7, B=0$)
+
+| A | K=0 | K=0.5 | K=1 | K=1.5 | K=2 |
+|---|---|---|---|---|---|
+| -2 | 0.4758 | 5.6708 | 3.3592 | 3.1893 | 2.9899 |
+| -1 | 0.7566 | 10.7083 | 5.9994 | 5.9150 | 5.5018 |
+| 0 | 1.2326 | 0.9025 | 0.7528 | 0.6733 | 0.5967 |
+| 1 | 1.8892 | 1.0805 | 0.8469 | 0.7219 | 0.6405 |
+| 2 | 2.6699 | 1.1658 | 0.8857 | 0.7453 | 0.6566 |
+
+Table 2 Variation of the wall heat transfer rate $-\theta'(0)$ with K and A ($Pr=0.7, B=0$)
+
+| A | K=0 | K=0.5 | K=1 | K=1.5 | K=2 |
+|---|---|---|---|---|---|
+| -2 | 0.0167 | 0.1033 | 0.0749 | 0.0761 | 0.0743 |
+| -1 | 0.1456 | 0.3418 | 0.2862 | 0.2931 | 0.2887 |
+| 0 | 0.4959 | 0.4584 | 0.4374 | 0.4270 | 0.4114 |
+| 1 | 1.0162 | 0.9587 | 0.9346 | 0.9193 | 0.9080 |
+| 2 | 1.6217 | 1.5550 | 1.5343 | 1.5219 | 1.5132 |
+
+Table 3 Variation of the wall heat transfer rate $-\theta'(0)$ with A and B ($K=1, Pr=0.7$)
+
+| B | A=-2 | A=-1 | A=0 | A=1 | A=2 |
+|---|---|---|---|---|---|
+| -0.1 | 0.1206 | 0.3352 | 0.4969 | 0.9823 | 1.5709 |
+| 0 | 0.0749 | 0.2862 | 0.4374 | 0.9346 | 1.5343 |
+| 0.1 | 0.0261 | 0.2341 | 0.3729 | 0.8844 | 1.4965 |
+
+Table 4 Variation of the wall heat transfer rate $-\theta'(0)$ with Pr and B ($K=1, A=0$)
+
+| B | Pr=0.05 | Pr=0.1 | Pr=0.5 | Pr=1 | Pr=2 |
+|---|---|---|---|---|---|
+| -0.1 | 0.1688 | 0.2217 | 0.4333 | 0.5742 | 0.7593 |
+| 0 | 0.1567 | 0.2027 | 0.3846 | 0.5004 | 0.6465 |
+| 0.1 | 0.1440 | 0.1825 | 0.3321 | 0.4199 | 0.5217 |
\ No newline at end of file
diff --git a/samples/texts/5048783/page_6.md b/samples/texts/5048783/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..e574b6439b8071a042cd8aac040acde54c74c49a
--- /dev/null
+++ b/samples/texts/5048783/page_6.md
@@ -0,0 +1,53 @@
+## 4. CONCLUSIONS
+
+The two-dimensional stagnation point flow of a viscous incompressible non-Newtonian second grade fluid with heat transfer is studied in the presence of uniform suction or blowing. A numerical solution for the governing equations is obtained which allows the computation of the flow and heat transfer characteristics for various values of the non-Newtonian parameter *K*, the suction parameter *A*, the heat generation/absorption parameter *B*, and the Prandtl number *Pr*. The results indicate that increasing the parameter *K* increases both the velocity and thermal boundary layer thickness while increasing *A* decreases the thickness of both layers. The effect of the parameter *K* on the velocity and temperature is more apparent for suction than blowing. The effect of the blowing velocity on the shear stress at the wall depends on the value of the non-Newtonian parameter *K*.
+
+## 5. REFERENCES
+
+[1] K. Hiemenz, Die Grenzschicht in einem in dem gleichformingen Flussigkeitsstrom eingetauchten gerade Krieszylinder, *Dingler Polytechnic Journal*, Vol. 326, pp. 321-410, 1911.
+
+[2] F. Homann, Der Einfluss grosser Zahigkeit bei der Stromung um den Zylinder und um die Kugel *Z. Angew. Math. Mech.*, Vol. 16, No. 3, pp. 153-164, 1936.
+
+[3] H. Schlichting and K. Bussmann, Exakte Lösungen für die Laminare Grenzchicht mit Absaugung und Ausblasen, *Schriften der Deutschen Akademie der Luftfahrtforschung, Series B*, Vol. 7, No. 2, pp. 25-69, 1943.
+
+[4] J.H. Preston, The boundary layer flow over a permeable surface through which suction is applied, *Reports and Memoranda*, Aeronautical Research Council, London, No. 2244, 1946.
+
+[5] P.D. Ariel, Stagnation point flow with suction: An approximate solution, *J. Applied Mechanics*, Vol. 61, No. 4, pp. 976-978, 1994.
+
+[6] P.D. Weidman and S. Mahalingam, Axisymmetric stagnation-point flow impinging on a transversely oscillating plate with suction, *J. Engineering Mathematics*, Vol. 31, No. 4, pp. 305-318, 1997.
+
+[7] T.Y. Na, *Computational Methods in Engineering Boundary Value Problem*, Academic Press, New York, p. 107-121, 1979.
+
+[8] P.D. Ariel, Hiemenz flow in hydromagnetics, *Acta Mechanica*, Vol. 103, No. 1-4, pp. 31-43, 1994.
+
+[9] H.A. Attia, Hydromagnetic stagnation point flow with heat transfer over a permeable surface, *Arabian Journal for Science and Engineering*, Vol. 28, Issue 1B, pp. 107-112, 2003.
+
+[10] H.A. Attia, Homann magnetic flow and heat transfer with uniform suction or injection, *Canadian Journal of Physics*, Vol. 81, No. 10, pp. 1223-1230, 2003.
+
+[11] M. Massoudi and M. Ramezan, Boundary layer heat transfer analysis of a viscoelastic fluid at a stagnation point, *ASME Heat Transfer Division*, Vol. 130, pp. 81-86, 1990.
+
+[12] M. Massoudi and M. Ramezan, Heat transfer analysis of a viscoelastic fluid at a stagnation point, *Mechanics Research Communication*, Vol. 19, No. 2, pp. 129-134, 1992.
+
+[13] V.K. Garg, Heat transfer due to stagnation point flow of a non-Newtonian fluid, *Acta Mechanica*, Vol. 104, No. 3-4, pp. 159-171, 1994.
+
+[14] G.K. Rajeshwari and S.L. Rathna, Flow of a particular class of non-Newtonian visco-elastic and visco-inelastic fluids near a stagnation point, *Z. Angew. Math. Phys.*, Vol. 13, No. 1, pp. 43-57, 1962.
+
+[15] D.W. Beard and K. Walters, Elastico-viscous boundary-layer flows - Part 1: Two-dimensional flow near a stagnation point, *Proc. Cambridge Philos. Soc.*, Vol. 60, pp. 667-674, 1964.
+
+[16] I. Teipel, Die raumliche Staupunktstromung für ein viskoelastisches fluid, *Rheologica Acta*, Vol. 25, No. 2, pp. 75-79, 1986.
+
+[17] P.D. Ariel, Hybrid method for computing the flow of viscoelastic fluids, *Int. J. Numerical Methods in Fluids*, Vol. 14, No. 7, pp. 757-774, 1992.
+
+[18] D.S. Djukic, Hiemenz magnetic flow of power-law fluids, *J. Applied Mechanics - Trans. ASME*, Vol. 41, Series E, No. 3, pp. 822-823, 1974.
+
+[19] G. Nath, Similar solutions for the incompressible laminar boundary layer with pressure gradient in micropolar fluids, *Rheologica Acta*, Vol. 14, No. 9, pp. 850-857, 1975.
+
+[20] N.A. Kelson and T.W. Farrell, Micropolar flow over a porous stretching sheet with strong suction or injection, *Int. Commun. Heat Mass Transfer*, Vol. 28, pp. 479-488, 2001.
+
+[21] A. Desseaux and M. Bellalij, Improved solutions to a micropolar fluid driven by a continuous porous plate, *Int. J. Num. Methods Heat & Fluid Flow*, Vol. 9, No. 7, pp. 730-742, 1999.
+
+[22] R. Nazar, N. Amin, D. Filip and I. Pop, Stagnation point flow of a micropolar fluid towards a stretching sheet, *Int. J. Non-Linear Mechanics*, Vol. 39, No. 7, pp. 1227-1235, 2004.
+
+[23] I. Teipel, Stagnation point flow of a non-Newtonian second order fluid, *Trans. Canadian Soc. Mech. Engng.*, Vol. 12, No. 2, pp. 57-61, 1988.
+
+[24] P.D. Ariel, A numerical algorithm for computing the stagnation point flow of a second grade fluid with/without suction, *J. Comput. & Appl. Math.*, Vol. 59, No. 1, pp. 9-24, 1995.
\ No newline at end of file
diff --git a/samples/texts/5048783/page_7.md b/samples/texts/5048783/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..29a8f740a1520f8bd70b0d80feb0baf720c15eb9
--- /dev/null
+++ b/samples/texts/5048783/page_7.md
@@ -0,0 +1,13 @@
+[25] H.A. Attia, Hiemenz magnetic flow of a non-Newtonian fluid of second grade with heat transfer, *Canadian Journal of Physics*, Vol. 78, No. 9, pp. 875-882, 2000.
+
+[26] R.S. Rivlin, The hydrodynamics of non-Newtonian fluids, *Proc. Royal Soc.*, Vol. A 193, pp. 260-281, 1948.
+
+[27] F.M. White, *Viscous Fluid Flow*, McGraw-Hill, New York, 1991.
+
+# STAGNATION POINT FLOW OF A SECOND GRADE FLUID WITH UNIFORM SUCTION OR BLOWING AND HEAT GENERATION
+
+## SUMMARY (translated from the Croatian SAŽETAK)
+
+This paper investigates the laminar flow of an incompressible non-Newtonian second grade fluid impinging on a permeable flat plate with heat generation. A uniform suction or blowing is applied normal to the plate, which is maintained at a constant temperature. A numerical solution is obtained for the governing nonlinear momentum and energy equations. The effect of the uniform suction or blowing and of the characteristics of the non-Newtonian fluid on both the flow and the heat transfer is presented and discussed.
+
+**Key words:** stagnation point flow, non-Newtonian fluid, suction, steady laminar flow, heat generation.
\ No newline at end of file
diff --git a/samples/texts/5115107/page_1.md b/samples/texts/5115107/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..b25fe539a18257471fb97e543a94a208ca748845
--- /dev/null
+++ b/samples/texts/5115107/page_1.md
@@ -0,0 +1,23 @@
+**Supplementary Information**
+
+**Table S1.** The relative position-specific propensities of each amino acid at each position.
+
+| A | C | D | E | F | G | H |
+|---|---|---|---|---|---|---|
+| 1.0608 | 0.6400 | 0.9923 | 0.9987 | 0.8570 | 1.0526 | 1.2277 |
+
+| I | K | L | M | N | P | Q |
+|---|---|---|---|---|---|---|
+| 1.0897 | 1.2497 | 0.9639 | 0.9355 | 1.1295 | 0.9997 | 0.9530 |
+
+| R | S | T | V | W | Y | |
+|---|---|---|---|---|---|---|
+| 1.1203 | 0.9166 | 1.0205 | 1.0007 | 0.7402 | 0.9247 | |
+
+**Table S2.** Jackknife results for different weight parameters using BPB + Ecomposition + Scomposition.
+
+| W1 | Sn (%) | Sp (%) | Acc (%) | MCC |
+|---|---|---|---|---|
+| 1 | 22.19 | 92.60 | 69.13 | 0.2121 |
+| 1.5 | 50.51 | 77.30 | 68.37 | 0.2811 |
+| 2 | 65.31 | 65.63 | 65.52 | 0.2933 |
+| 2.5 | 74.23 | 54.91 | 61.35 | 0.2761 |
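The columns of Tables S2–S5 are the standard sensitivity, specificity, accuracy and Matthews correlation coefficient. As a quick reference, a minimal sketch of how these jackknife statistics are computed from confusion-matrix counts (the function name and the example counts are illustrative, not taken from the supplement):

```python
import math

def jackknife_metrics(tp, fn, tn, fp):
    """Sensitivity (Sn), specificity (Sp), accuracy (Acc) and Matthews
    correlation coefficient (MCC) from confusion-matrix counts."""
    sn = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sn, sp, acc, mcc
```

Raising the weight parameter W1 trades specificity for sensitivity, which is why MCC (a balanced single-number summary) is the natural column to optimize.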
+
+**Table S3.** Jackknife results for different weight parameters using BRABSB + Ecomposition + Scomposition.
+
+| W1 | Sn (%) | Sp (%) | Acc (%) | MCC |
+|---|---|---|---|---|
+| 1 | 30.36 | 88.58 | 69.18 | 0.2338 |
+| 1.5 | 52.42 | 74.36 | 67.05 | 0.2655 |
+| 2 | 67.09 | 63.97 | 65.01 | 0.2936 |
+| 2.5 | 73.09 | 58.16 | 63.14 | 0.2949 |
+| 3 | 77.55 | 52.17 | 60.63 | 0.2836 |
+
+**Table S4.** Jackknife results for different weight parameters using ANBPB + Ecomposition + Scomposition.
+
+| W1 | Sn (%) | Sp (%) | Acc (%) | MCC |
+|---|---|---|---|---|
+| 1 | 30.87 | 88.33 | 69.18 | 0.2352 |
+| 1.5 | 51.91 | 74.17 | 66.75 | 0.2586 |
+| 2 | 67.60 | 64.29 | 65.39 | 0.3014 |
+| 2.5 | 73.47 | 58.42 | 63.44 | 0.3009 |
+
+**Table S5.** Jackknife results for different weight parameters using RANS + Ecomposition + Scomposition.
+
+| W1 | Sn (%) | Sp (%) | Acc (%) | MCC |
+|---|---|---|---|---|
+| 1 | 34.82 | 85.52 | 68.62 | 0.2344 |
+| 1.5 | 48.90 | 73.28 | 64.88 | 0.2128 |
+| 2 | 58.55 | 66.45 | 63.82 | 0.2389 |
+| 2.5 | 63.90 | 61.42 | 62.24 | 0.2391 |
+| 3 | 65.82 | 56.57 | 59.65 | 0.2111 |
+
+© 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
\ No newline at end of file
diff --git a/samples/texts/5134879/page_1.md b/samples/texts/5134879/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..253e147a878bd0be34444b52574614de30bddd67
--- /dev/null
+++ b/samples/texts/5134879/page_1.md
@@ -0,0 +1,23 @@
+# Packing Two Disks into a Polygonal Environment*
+
+Prosenjit Bose¹, Pat Morin², and Antoine Vigneron³
+
+¹ Carleton University. jit@scs.carleton.ca
+
+² McGill University. morin@cs.mcgill.ca
+
+³ Hong Kong University of Science and Technology. antoine@cs.ust.hk
+
+**Abstract.** We consider the following problem. Given a polygon $P$, possibly with holes, and having $n$ vertices, compute a pair of equal radius disks that do not intersect each other, are contained in $P$, and whose radius is maximized. Our main result is a simple randomized algorithm whose expected running time, on worst case input, is $O(n \log n)$. This is optimal in the algebraic decision tree model of computation.
+
+## 1 Introduction
+
+Let $P$ be a polygon, possibly with holes, and having $n$ vertices. We consider the following problem, which we call 2-DISK: Find a pair of disks with radius $r^*$ that do not intersect each other, are contained in $P$, and such that $r^*$ is maximized. Biedl et al. [5] gave an $O(n^2)$ time algorithm to solve this problem.
+
+Special cases of 2-DISK have been studied previously. When $P$ is a convex polygon, Bose et al. [6] describe a linear time algorithm and Kim and Shin [10] describe an $O(n \log n)$ time algorithm. For simple polygons (i.e. polygons without holes), Bespamyatnikh [4] gives an $O(n \log^3 n)$ time algorithm based on the parametric search paradigm [11].
+
+Another special case occurs when the holes of $P$ degenerate to points. This is known as the *maximin 2-site facility location* problem [3, 9]. In this formulation we can think of the centers of the two disks as obnoxious facilities such as smokestacks, or nuclear power plants, and the points as population centers. The goal is to maximize the distance between each facility and the nearest population center. Katz et al. [9] give an $O(n \log n)$ time algorithm for the decision version of the 2-site facility location problem in which one is given a distance $d$ and asked if there exists a placement of 2 non-intersecting disks of radius $d$, each contained in $P$ such that no point is included in either of the disks.
+
+In this paper we present a simple randomized algorithm for the general case in which $P$ is not necessarily convex and may contain holes. Our algorithm runs in $O(n \log n)$ expected time. It can also be used to solve the optimization version of the 2-site maximin facility location problem in $O(n \log n)$ time. Finally we observe that, when we allow polygons with holes, $\Omega(n \log n)$ is a lower bound for 2-DISK by a simple reduction from MAX-GAP.
+
+* This research was supported by the Natural Sciences and Engineering Research Council of Canada and by the Hong Kong Research Grant Council CERG grant HKUST6137/98E.
\ No newline at end of file
diff --git a/samples/texts/5134879/page_2.md b/samples/texts/5134879/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd90d1689df699c40a820147e34bdcded61fc0e6
--- /dev/null
+++ b/samples/texts/5134879/page_2.md
@@ -0,0 +1,15 @@
+The remainder of the paper is organized as follows: Section 2 reviews definitions and previous results regarding the medial-axis. Section 3 describes our algorithm. Section 4 summarizes and concludes with an open problem.
+
+## 2 The Medial-Axis
+
+For the remainder of this paper, $P$ will be a polygon, possibly with holes, and having $n$ vertices. The *medial-axis* $M(P)$ of $P$ is the locus of all points $p$ for which there exists a disk centered at $p$, contained in $P$, and which intersects the boundary of $P$ in two or more points. See Fig. 1 for an example. Alternatively, $M(P)$ is a portion of the *Voronoi diagram* of the open line segments and vertices defined by the edges of $P$. To be more precise, we need to remove the Voronoi edges that are outside $P$ and those associated with an edge and one of its endpoints. It is well known that the medial-axis consists of $O(n)$ straight line segments and parabolic arcs.
+
+**Fig. 1.** The medial-axis of a polygon with a triangular hole.
+
+Algorithmically, the medial-axis is well understood. There exists an $O(n)$ time algorithm [7] for computing the medial-axis of a polygon without holes and $O(n \log n)$ time algorithms for computing the medial-axis of a polygon with holes [2]. Furthermore, these algorithms can compute a representation in which each segment or arc is represented as a segment or arc in $\mathbb{R}^3$, where the third dimension gives the radius of the disk that touches two or more points on the boundary of $P$.
+
+We say that a point $p \in P$ *supports* a disk of radius $r$ if the disk of radius $r$ centered at $p$ is contained in $P$. We call a vertex, parabolic arc or line segment $x$ of $M(P)$ an *elementary object* if the radius of the largest disk supported by $p \in x$ is monotone as $p$ moves from one endpoint of $x$ to the other. Each edge of $M(P)$ can be split into two elementary objects. Thus, $M(P)$ can be split into $O(n)$ elementary objects whose union is $M(P)$.
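The monotonicity test behind this splitting can be sketched numerically. Here `r_at` is an assumed callable (not from the paper) giving the clearance, i.e. the radius of the largest supported disk, at parameter $t$ along one edge of $M(P)$; an interior extremum of the clearance means the edge must be split into two monotone pieces:

```python
def split_elementary(r_at, t0=0.0, t1=1.0, samples=1000):
    """Split one edge of M(P) into at most two monotone ('elementary')
    pieces by sampling the clearance function r_at(t) and locating an
    interior extremum, if any. A numeric sketch, not an exact solver."""
    ts = [t0 + (t1 - t0) * k / samples for k in range(samples + 1)]
    vals = [r_at(t) for t in ts]
    i_min = vals.index(min(vals))
    i_max = vals.index(max(vals))
    for i in (i_min, i_max):
        if 0 < i < samples:                  # interior extremum: not monotone
            return [(t0, ts[i]), (ts[i], t1)]
    return [(t0, t1)]                        # clearance already monotone
```

In the exact setting the clearance along a segment or parabolic arc is a simple algebraic function, so the split point can be found in closed form rather than by sampling.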
+
+## 3 The Algorithm
+
+Next we describe a randomized algorithm for 2-DISK with $O(n \log n)$ expected running time. We begin by restating 2-DISK as a problem of computing the diameter of a set of
\ No newline at end of file
diff --git a/samples/texts/5134879/page_3.md b/samples/texts/5134879/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..4ce2d82891e44bfa527b2871fef462833baa9e47
--- /dev/null
+++ b/samples/texts/5134879/page_3.md
@@ -0,0 +1,17 @@
+elementary objects under a rather unusual distance function. We then use an algorithm
+based on the work of Clarkson and Shor [8] to solve this problem in the stated time.
+
+The following lemma, of which similar versions appear in Bose *et al.* [6] and Biedl
+*et al.* [5], tells us that we can restrict our search to disks whose centers lie on $M(P)$.
+
+**Lemma 1.** Let $D_1$ and $D_2$ be a solution to 2-DISK which maximizes the distance between $D_1$ and $D_2$ and let $p_1$ and $p_2$ be the centers of $D_1$ and $D_2$, respectively. Then $D_1$ and $D_2$ each intersect the boundary of $P$ in at least two points and hence $p_1$ and $p_2$ are points of $M(P)$.
+
+*Proof*. Refer to Fig. 2. Suppose that one of the disks, say $D_1$, intersects the boundary of $P$ in at most one point. Let $o_1$ be this point, or if $D_1$ does not intersect the boundary of $P$ at all then let $o_1$ be any point on the boundary of $D_1$. Note that there is some value of $\epsilon > 0$ such that $D_1$ is free to move by a distance of $\epsilon$ in either of the two directions perpendicular to the direction $\vec{p_1o_1}$ while keeping $D_1$ in the interior of $P$. However, movement in at least one of these directions will increase the distance $|p_1p_2|$, which is a contradiction since this distance was chosen to be maximal over all possible solutions to 2-DISK.
+
+Fig. 2. The proof of Lemma 1
+
+Let $x_1$ and $x_2$ be two elementary objects of $M(P)$. We define the distance between $x_1$ and $x_2$, denoted $d(x_1, x_2)$ as $2r$, where $r$ is the radius of the largest pair of equal-radius non-intersecting disks $d_1$ and $d_2$, contained in $P$ and with $d_i$ centered on $x_i$, for $i = 1, 2$. There are two points to note about this definition of distance: (1) if the distance between two elementary objects is $2r$, then we can place two non-intersecting disks of radius $r$ in $P$, and (2) the distance from an elementary object to itself is not necessarily 0. Given two elementary objects it is possible, in constant time, to compute the distance between them as well as the locations of 2 disks that produce this distance [5].
+
+Let $E$ be the set of elementary objects obtained by taking the union of the following three sets of elementary objects:
+
+1. the set of vertices of $M(P)$,
\ No newline at end of file
diff --git a/samples/texts/5134879/page_4.md b/samples/texts/5134879/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..c1f28672eb4c3ed8096e0b745aaf30db09ab43ef
--- /dev/null
+++ b/samples/texts/5134879/page_4.md
@@ -0,0 +1,51 @@
+2. the set of elementary line segments obtained by splitting each straight line segment of $M(P)$ into at most two elementary objects.
+
+3. the set of elementary parabolic arcs obtained by splitting each parabolic arc of $M(P)$ into at most two elementary objects.
+
+We call the *diameter* of $E$ the maximum distance between any pair $x, y \in E$, where
+distance is defined as above. Now, it should be clear from Lemma 1 that 2-DISK can be
+solved by finding a pair of elements in $E$ whose distance is equal to the diameter of $E$.¹
+
+Thus, all that remains is to devise an algorithm for finding the diameter of $E$. Let $m$
+denote the cardinality of $E$ and note that, initially, $m = O(n)$. Motivated by Clarkson
+and Shor [8], we compute the diameter using the following algorithm. We begin by
+selecting a random element $x$ from $E$ and finding the element $x' \in E$ whose distance
+from $x$ is maximal, along with the corresponding radius $r$. This can be done in $O(m)$
+time, since each distance computation between two elementary objects can be done
+in constant time. Note that $r$ is a lower bound on $r^*$. We use this lower bound to do
+trimming and pruning on the elements of $E$.
+
+We trim each element $y \in E$ by partitioning $y$ into two subarcs,² each of which
+may be empty. The subarc $y_{\ge}$ is the part of $y$ supporting disks of radius greater than or
+equal to $r$. The subarc $y_{<}$ is the remainder of $y$. We then *trim* $y_{<}$ from $y$ by removing
+$y$ from $E$ and replacing it with $y_{\ge}$. During the trimming step we also remove from $E$
+any element that does not support a disk of radius greater than $r$. Each such trimming
+operation can be done in constant time, resulting in an $O(m)$ running time for this step.
+
+Next, we prune $E$. For any arc $y \in E$, the lowest point of $y$ is its closest point
+to the boundary of $P$. In the case of ties, we take a point which is closest to one of
+the endpoints of $y$. By the definition of elementary objects, the lowest point of $y$ is
+therefore an endpoint of $y$. The closed disk with radius $r$ centered on the lowest point
+of $y$ is denoted by $D(y)$. We discard all the elements $y \in E$ such that $D(y) \cap D(x) \neq \emptyset$
+for all $x \in E$.
+
+Pruning can be performed in $O(m \log m)$ time by computing, for each lowest
+endpoint $p$, a matching lowest endpoint $q$ whose distance from $p$ is maximal and then
+discarding $p$ if $|pq| \le 2r$. This computation is known as *all-pairs furthest neighbors*
+and can be completed in $O(m \log m)$ time [1].
+
+Once all trimming and pruning is done, we have a new set of elementary objects
+$E'$ on which we recurse. The recursion completes when $|E'| \le 2$, at which point we
+compute the diameter of $E'$ in constant time using a brute-force algorithm. We output
+the largest pair of equal-radius non-overlapping disks found during any iteration of the
+algorithm.
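Abstracting away the medial-axis geometry, the round structure above (random pivot, farthest partner, then discarding everything that can no longer improve the answer) can be sketched on a generic finite metric. The naive pruning test below costs $O(m^2)$ per round rather than the $O(m \log m)$ all-pairs-furthest-neighbors step, so this illustrates correctness, not the running time:

```python
import random

def randomized_diameter(objects, dist):
    """Clarkson-Shor style diameter: pick a random element, find its
    farthest partner, then prune every element whose own farthest
    distance cannot beat the current best (cf. Lemma 3)."""
    E = list(objects)
    best = 0.0
    while len(E) > 2:
        x = random.choice(E)
        r = max(dist(x, y) for y in E)   # farthest partner of x, O(m)
        best = max(best, r)
        # prune (naively here): keep only elements that may still define
        # a distance strictly larger than r
        E = [y for y in E if max(dist(y, z) for z in E) > r]
    # brute force on the at most two survivors; a "pair" may be an
    # element with itself, as in footnote 1
    for i in range(len(E)):
        for j in range(i, len(E)):
            best = max(best, dist(E[i], E[j]))
    return best
```

Because an element defining the true diameter is pruned only after `best` has already reached that value, the returned answer is exact regardless of the random choices.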
+
+To prove that this algorithm is correct we consider a pair of non-intersecting disks
+$D_1$ and $D_2$, each contained in $P$ and having radius $r^*$, centered at $p_1$ and $p_2$,
+respectively, such that the Euclidean distance $|p_1p_2|$ is maximal. The following lemma shows
+that $p_1$ and $p_2$ are not discarded from consideration until an equally good solution is
+found.
+
+¹ Here we use the term “pair” loosely, since the diameter may be defined by the distance between an elementary object and itself.
+
+² We use the term subarc to mean both parts of segments and parts of parabolic arcs.
\ No newline at end of file
diff --git a/samples/texts/5134879/page_5.md b/samples/texts/5134879/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..afe596c6f4f653a00be3724b75939d86a4e8804a
--- /dev/null
+++ b/samples/texts/5134879/page_5.md
@@ -0,0 +1,15 @@
+**Lemma 2.** If, during the execution of one round, $\{p_1, p_2\} \subset \bigcup E$ and $r < r^*$, then $\{p_1, p_2\} \subset \bigcup E'$ at the end of the round.
+
+*Proof.* We need to show that at the end of the round, there exist elementary objects $y_1, y_2 \in E'$ such that $p_1 \in y_1$ and $p_2 \in y_2$. More specifically, we need to show there exist $y_1, y_2 \in E$ such that $p_1$, respectively $p_2$, is not trimmed from $y_1$, respectively $y_2$, and $y_1$ and $y_2$ are not pruned.
+
+To see that $p_1$ and $p_2$ are not trimmed from any elementary object that contains them we simply note that $p_1$ and $p_2$ both support disks of radius $r^* > r$ and are therefore not trimmed.
+
+To prove that $y_1$ and $y_2$ are not pruned we subdivide the plane into two open halfspaces $H_1$ and $H_2$ such that all points in $H_1$ are closer to $p_1$ than to $p_2$ and vice-versa. We denote by $L$ the line separating these two halfspaces.
+
+Recall that, after trimming, an elementary object $x$ is only pruned if $D(x) \cap D(y) \neq \emptyset$ for all $y \in E$. We will show that $D(y_1) \subseteq H_1$ and $D(y_2) \subseteq H_2$, therefore $D(y_1) \cap D(y_2) = \emptyset$ and neither $y_1$ nor $y_2$ are pruned. It suffices to prove that $D(y_1) \subseteq H_1$ since a symmetric argument shows that $D(y_2) \subseteq H_2$. We consider three separate cases depending on the location of $p_1$ on $M(P)$.
+
+**Case 1:** $p_1$ is a vertex of $M(P)$. In this case we choose $y_1$ to be the singleton elementary object $\{p_1\}$. Thus, $D(y_1)$ is centered at $p_1$. Furthermore, the distance between $p_1$ and $L$ is at least $r^* > r$. Therefore, one point of $D(y_1)$ is contained in $H_1$ and $D(y_1)$ does not intersect the boundary of $H_1$, so it must be that $D(y_1) \subseteq H_1$.
+
+Fig. 3. The proof of Lemma 2 Case 2
+
+**Case 2:** $p_1$ lies in the interior of a straight line segment of $M(P)$. Let $p'_1$ be the lower endpoint of $y_1$. Let $\theta$ be the angle $(p_2p_1, p_1p'_1)$ (see Fig. 3). If $\theta \in [-\pi/2, \pi/2]$ then we can move $p_1$ slightly in the direction opposite to $p'_1$ while keeping $D_1$ inside $P$, thus contradicting the assumption that $|p_1p_2|$ is maximal. Therefore $\theta \in (\pi/2, 3\pi/2)$, which implies that $D(y_1)$ lies in $H_1$.
\ No newline at end of file
diff --git a/samples/texts/5134879/page_6.md b/samples/texts/5134879/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..a72c9f746b07692b8dafa5a15f4fca197c848925
--- /dev/null
+++ b/samples/texts/5134879/page_6.md
@@ -0,0 +1,15 @@
+**Case 3:** $p_1$ lies in the interior of a parabolic arc of $M(P)$. In this case $D_1$ is tangent to an edge $e_1$ of $P$ and touches one of its vertices $v$. $p'_1$ still denotes the lower endpoint of $y_1$. Without loss of generality, assume that $e_1$ is parallel to the x-axis and $x(p'_1) < x(p_1)$ (see Fig. 4). Let $L'$ be the line parallel to $L$ that crosses the segment $[p_1, p_2]$ and that is tangent to $D_1$. We denote by $o_1$ the point where $e_1$ is tangent to $D_1$, and we denote by $o'_1$ the point such that $(o_1, o'_1)$ is a diameter of $D_1$. Finally, the convex hull of $D_1$ and $D(y_1)$ is denoted by $\mathcal{C}$.
+
+**Fig. 4.** The proof of Lemma 2 Case 3.
+
+It must be that $x(p_2) > x(p_1)$, otherwise $p_1$ and $D_1$ could be moved in the positive x direction while keeping $D_1$ in $P$. This would increase the distance $|p_1p_2|$ which is defined as maximal. It follows that $L'$ is tangent to $D_1$ along the counterclockwise arc $(o'_1, o_1)$. Then $L'$ is tangent to $\mathcal{C}$, so by convexity $\mathcal{C}$ lies on the same side of $L'$ as $p_1$ which implies that $D_1$ is contained in $H_1$.
+
+Let $d_i$ denote the distance of the furthest element in $E$ from $x_i$, and suppose for the sake of analysis that the elements of $E$ are labeled $x_1, \dots, x_m$ so that $d_i \le d_{i+1}$. The following lemma helps to establish the running time of the algorithm.
+
+**Lemma 3.** If we select $x = x_i$ as the random element, then we discard all $x_j \in E$ such that $j \le i$ from $E$.
+
+*Proof.* For any $j \le i$, either $x_j$ does not support a disk of radius greater than $d_i$, or every point on $x_j$ that supports a disk of radius $d_i$ is of distance at most $d_i$ from any other point of $M(P)$ that supports a disk of radius $d_i$.
+
+In the first case, $x_j$ is removed from $E$ by trimming. In the second case, $D(x_j) \cap D(x_k) \neq \emptyset$ for all $x_k \in E$ and $x_j$ is removed by pruning.
+
+Finally, we state and prove our main theorem.
\ No newline at end of file
diff --git a/samples/texts/5134879/page_7.md b/samples/texts/5134879/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..29be38f488656a25e27805a2414a4032b5c31e8e
--- /dev/null
+++ b/samples/texts/5134879/page_7.md
@@ -0,0 +1,19 @@
+**Theorem 1.** The above algorithm solves 2-DISK in $O(n \log n)$ expected time.
+
+*Proof.* The algorithm is correct because, by Lemma 2, it discards neither $p_1$ nor $p_2$ until it has found a solution with $r = r^*$, at which point it has already found an optimal solution that will be reported when the algorithm terminates.
+
+To prove the running time of the algorithm, we use the following facts. Each round of the algorithm can be completed in $O(m \log m)$ time where $m$ is the cardinality of $E$ at the beginning of the round. By Lemma 3, when we select $x_i$ as our random element, all elements $x_j$ with $j \le i$ disappear from $E$. Therefore, the expected running time of the algorithm is given by the recurrence
+
+$$T(m) \le \frac{1}{m} \sum_{i=1}^{m} T(m-i) + O(m \log m),$$
+
+which readily solves to $O(m \log m)$. Since $m \in O(n)$, this completes the proof.
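The claim that the recurrence solves to $O(m \log m)$ can be checked numerically. The sketch below tabulates $T(m) = \frac{1}{m}\sum_{i=1}^{m} T(m-i) + m\log_2 m$ with $T(m)=1$ for $m \le 2$ (the base cost and $\log$ base are illustrative choices) and confirms that $T(m)/(m \log_2 m)$ stays bounded; at the fixed point $c = c/2 + 1$, so the ratio approaches 2:

```python
import math

def expected_cost(n):
    """Iteratively evaluate T(m) = (1/m) * sum_{i=1}^{m} T(m-i) + m*log2(m),
    with T(0) = T(1) = T(2) = 1, using sum(T[0..m-1]) = sum_{i=1}^{m} T(m-i)."""
    T = [1.0, 1.0, 1.0]
    for m in range(3, n + 1):
        T.append(sum(T) / m + m * math.log2(m))
    return T[n]
```

An inductive argument gives the same bound exactly: if $T(k) \le 2k\log_2 k$ for all $2 \le k < m$, then the average term is at most $m\log_2 m$, so $T(m) \le 2m\log_2 m$.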
+
+# 4 Conclusions
+
+We have given a randomized algorithm for 2-DISK that runs in $O(n \log n)$ expected time. The algorithm is considerably simpler than the $O(n \log^3 n)$ algorithm of Bespamyatnikh [4] and has the additional advantage of solving the more general problem of polygons with holes. Although we have described our algorithm as performing computations with distances, these can be replaced with squared distances to yield an algorithm that uses only algebraic computations.
+
+In the algebraic decision tree model of computation, one can also prove an $\Omega(n \log n)$ lower bound on any algorithm for 2-DISK through a reduction from MAX-GAP [12]. Suppose that the input to MAX-GAP is $y_1, \dots, y_n$. Without loss of generality one can assume that $y_1 = \min\{y_i : 1 \le i \le n\}$ and $y_n = \max\{y_i : 1 \le i \le n\}$. We then construct a rectangle with top and bottom sides at $y_1$ and $y_n$, respectively, and with width $2(y_n - y_1)$. The interior of this rectangle is then partitioned into rectangles with horizontal line segments having $y$ coordinates $y_1, \dots, y_n$. See Fig. 5 for an example.
+
+Fig. 5. Reducing MAX-GAP to 2-DISK.
+
+It should then be clear that the solution to 2-DISK for this problem corresponds to placing two disks in the rectangle corresponding to the gap between $y_i$ and $y_{i+1}$
\ No newline at end of file
diff --git a/samples/texts/5134879/page_8.md b/samples/texts/5134879/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..b3168001827034125186091a9eebbc75eb2abcec
--- /dev/null
+++ b/samples/texts/5134879/page_8.md
@@ -0,0 +1,29 @@
+which is maximal, i.e., it gives a solution to the original MAX-GAP problem. Since this reduction can be easily accomplished in linear time and MAX-GAP has an $\Omega(n \log n)$ lower bound, this yields an $\Omega(n \log n)$ lower bound on 2-DISK.
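A sketch of the reduction's arithmetic, under the rectangle construction above: two non-intersecting disks of radius $g/2$ fit side by side in the band corresponding to a gap $g$, since the rectangle's width $2(y_n - y_1)$ is at least $2g$, so the 2-DISK optimum recovers the maximum gap as $2r^*$ (the naive `max_gap` here sorts; the lower bound concerns MAX-GAP itself):

```python
def max_gap(ys):
    """Largest gap between consecutive input values (the MAX-GAP problem)."""
    ys = sorted(ys)
    return max(b - a for a, b in zip(ys, ys[1:]))

def two_disk_radius(ys):
    """Optimal 2-DISK radius in the rectangle construction: half the widest gap,
    since the band for gap g has height g and width at least 2g."""
    return max_gap(ys) / 2
```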
+
+The above reduction only works because we allow polygons with holes. An interesting open problem is that of determining the complexity of 2-DISK when restricted to simple polygons. Is there a linear time algorithm?
+
+## References
+
+1. P. K. Agarwal, J. Matoušek, and S. Suri. Farthest neighbors, maximum spanning trees, and related problems in higher dimensions. *Comput. Geom.: Theory & Appl.*, 4:189–201, 1992.
+
+2. Helmut Alt and Otfried Schwarzkopf. The Voronoi diagram of curved objects. In *Proc. 11th Annu. ACM Sympos. Comput. Geom.*, pages 89–97, 1995.
+
+3. Boaz Ben-Moshe, Matthew J. Katz, and Michael Segal. Obnoxious facility location: Complete service with minimal harm. *International Journal of Computational Geometry and Applications*, 10:581–592, 2000.
+
+4. S. Bespamyatnikh. Draft: Efficient algorithm for finding two largest empty circles. In *Proceedings of the 15th European Workshop on Computational Geometry (EuroCG'99)*, pages 37–38, 1999.
+
+5. T. C. Biedl, E. D. Demaine, M. L. Demaine, A. Lubiw, and G. T. Toussaint. Hiding disks in folded polygons. In *Proceedings of the 10th Canadian Conference on Computational Geometry (CCCG'98)*, 1998.
+
+6. P. Bose, J. Czyzowicz, E. Kranakis, and A. Maheshwari. Algorithms for packing two circles in a convex polygon. In *Proceedings of Japan Conference on Discrete and Computational Geometry (JCDCG'98)*, pages 93–103, 1998.
+
+7. F. Chin, J. Snoeyink, and C. A. Wang. Finding the medial axis of a simple polygon in linear time. *Discrete and Computational Geometry*, 21, 1999.
+
+8. K. L. Clarkson and P. W. Shor. Algorithms for diametral pairs and convex hulls that are optimal, randomized, and incremental. In *Proceedings of the Fourth Annual Symposium on Computational Geometry (SoCG'88)*, pages 12–17, 1988.
+
+9. Matthew J. Katz, Klara Kedem, and Michael Segal. Improved algorithms for placing undesirable facilities. In *Proceedings of the 11th Canadian Conference on Computational Geometry (CCCG'99)*, pages 65–67, 1999.
+
+10. S. K. Kim and C.-S. Shin. Placing two disks in a convex polygon. *Information Processing Letters*, 73, 2000.
+
+11. N. Megiddo. Applying parallel computation algorithms to the design of serial algorithms. *Journal of the ACM*, 30:852–865, 1983.
+
+12. F. P. Preparata and M. I. Shamos. *Computational Geometry*. Springer-Verlag, New York, 1985.
\ No newline at end of file
diff --git a/samples/texts/5157779/page_1.md b/samples/texts/5157779/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..27d86f02802ca2341353f19ccdb5090baf51ca9d
--- /dev/null
+++ b/samples/texts/5157779/page_1.md
@@ -0,0 +1,57 @@
+On the distance between $\langle X \rangle$ and $L^\infty$ in the space
+of continuous BMO-martingales
+
+by
+
+LITAN YAN (Shanghai) and NORIHIKO KAZAMAKI (Toyama)
+
+**Abstract.** Let $X = (X_t, \mathcal{F}_t)$ be a continuous BMO-martingale, that is,
+
+$$
+\|X\|_{\text{BMO}} = \sup_T \|E[|X_\infty - X_T| | \mathcal{F}_T]\|_\infty < \infty,
+$$
+
+where the supremum is taken over all stopping times $T$. Define the critical exponent $b(X)$ by
+
+$$
+b(X) = \{b > 0 : \sup_T \| E[\exp(b^2(\langle X \rangle_\infty - \langle X \rangle_T)) | \mathcal{F}_T] \|_\infty < \infty \},
+$$
+
+where the supremum is taken over all stopping times $T$. Consider the continuous
+martingale $q(X)$ defined by
+
+$$
+q(X)_t = E[\langle X \rangle_{\infty} | \mathcal{F}_t] - E[\langle X \rangle_{\infty} | \mathcal{F}_0].
+$$
+
+We use $q(X)$ to characterize the distance between $\langle X \rangle$ and the class $L^\infty$ of all bounded martingales in the space of continuous BMO-martingales, and we show that the inequalities
+
+$$
+\frac{1}{4d_1(q(X), L^\infty)} \le b(X) \le \frac{4}{d_1(q(X), L^\infty)}
+$$
+
+hold for every continuous BMO-martingale $X$.
+
+**1. Introduction and preliminaries.** Throughout this paper, we fix a filtered complete probability space $(\Omega, \mathcal{F}, P, (\mathcal{F}_t))$ with the usual conditions, and we assume that every martingale is uniformly integrable and continuous.
+
+Recall that a uniformly integrable martingale $X = (X_t, \mathcal{F}_t)$ is said to be
+in $BMO_p$ ($p \ge 1$) if
+
+$$
+(1.1) \quad \|X\|_{BMO_p} \equiv \sup_T \|E[|X_\infty - X_T|^p | \mathcal{F}_T]^{1/p}\|_\infty < \infty,
+$$
+
+where the supremum is taken over all stopping times $T$. In particular,
+
+$$
+\|X\|_{\text{BMO}_2} = \sup_T \| E[\langle X \rangle_{\infty} - \langle X \rangle_T | \mathcal{F}_T]^{1/2} \|_{\infty}.
+$$
+
+Then, as is well known, $\|\cdot\|_{BMO_p}$ is a norm for all $p \ge 1$ and
+
+$$
+\|X\|_{\text{BMO}_1} \le \|X\|_{\text{BMO}_p} \le C_p \|X\|_{\text{BMO}_1},
+$$
+
+2000 Mathematics Subject Classification: 60G44, 60G46.
+Key words and phrases: continuous martingales, BMO.
\ No newline at end of file
diff --git a/samples/texts/5157779/page_2.md b/samples/texts/5157779/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..427e1d922235c113d1a86a0cef0d9aafb5571fec
--- /dev/null
+++ b/samples/texts/5157779/page_2.md
@@ -0,0 +1,69 @@
+where $C_p > 0$ is a constant depending only on $p$. For these, see, for example,
+[3, p. 28].
+
+Now, let BMO be the class of all uniformly integrable martingales $X$
+such that $\|X\|_{\text{BMO}_1} < \infty$. Then BMO is a Banach space with the norm
+$\|\cdot\|_{\text{BMO}_1}$, and we call the martingale $X$ in BMO a BMO-martingale. There
+exist two important subclasses of BMO, namely, the class $L^\infty$ of all bounded
+martingales and the class $H^\infty$ of all martingales $X$ such that $\langle X \rangle$ is bounded.
+For $X \in \text{BMO}$, let $a(X)$ be the supremum of the set of $a > 0$ for which
+
+$$
+\sup_T \|E[\exp(a|X_\infty - X_T|) | \mathcal{F}_T]\|_\infty < \infty,
+$$
+
+where the supremum is taken over all stopping times $T$, and for $M, N \in$
+BMO we set
+
+$$
+d_p(M, N) = \|M - N\|_{\text{BMO}_p} \quad (p \ge 1).
+$$
+
+Then there is a beautiful relationship between $a(X)$ and $d_1(\cdot, \cdot)$:
+
+$$
+(1.2) \qquad \frac{1}{4d_1(X, L^\infty)} \le a(X) \le \frac{4}{d_1(X, L^\infty)}
+$$
+
+for every $X \in \text{BMO}$. This is the Garnett-Jones theorem. For the proof,
+see [1], [3], [4].
+
+Let now $b(X)$ denote the supremum of the set of $b > 0$ for which
+
+$$
+\sup_T \| E[\exp(b^2(\langle X \rangle_\infty - \langle X \rangle_T)) | \mathcal{F}_T] \|_\infty < \infty
+$$
+
+for $X \in \text{BMO}$, where $T$ runs through all stopping times. Then we have
+(see [3])
+
+$$
+(1.3) \qquad \frac{1}{\sqrt{2} d_2(X, H^\infty)} \le b(X) \quad (X \in \text{BMO}).
+$$
+
+Furthermore, we shall see in Section 2 that $\sqrt{2} a(X) \ge b(X)$ for every $X \in$
+BMO.
+
+In this paper, we consider the continuous martingale $q(X)$ defined by
+
+$$
+q(X)_t = E[\langle X \rangle_{\infty} | \mathcal{F}_t] - E[\langle X \rangle_{\infty} | \mathcal{F}_0],
+$$
+
+where $X$ is a continuous martingale. We use $q(X)$ to characterize the
+distance between $\langle X \rangle$ and $L^\infty$ in the space of continuous BMO-martingales.
+
+**2. Results and proofs.** In this section, we give the characterization of
+the distance between $\langle X \rangle$ and $L^\infty$ in the space of BMO-martingales.
+
+**Lemma 1.** Let $X, Y \in \text{BMO}$. Assume that $q(X)$ and $q(Y)$ are defined as in Section 1. Then
+
+$$
+\|q(X) - q(Y)\|_{\text{BMO}_1} \le 2(\|X\|_{\text{BMO}_2} + \|Y\|_{\text{BMO}_2}) \|X - Y\|_{\text{BMO}_2}.
+$$
+
+*Proof.* Observing that
+
+$$
+\langle X \rangle - \langle Y \rangle = \langle X - Y, X \rangle + \langle X - Y, Y \rangle,
+$$
\ No newline at end of file
diff --git a/samples/texts/5157779/page_3.md b/samples/texts/5157779/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..5298960d0ca7cefd3f96c458f7eee4e91e3955b0
--- /dev/null
+++ b/samples/texts/5157779/page_3.md
@@ -0,0 +1,57 @@
+we find
+
+$$
+\begin{align*}
+& q(X)_{\infty} - q(Y)_{\infty} - E[q(X)_{\infty} - q(Y)_{\infty} | \mathcal{F}_T] \\
+&= \langle X \rangle_{\infty} - \langle Y \rangle_{\infty} - E[\langle X \rangle_{\infty} - \langle Y \rangle_{\infty} | \mathcal{F}_T] \\
+&= (\langle X - Y, X \rangle_{\infty} - \langle X - Y, X \rangle_T) - E[\langle X - Y, X \rangle_{\infty} - \langle X - Y, X \rangle_T | \mathcal{F}_T] \\
+&\quad + (\langle X - Y, Y \rangle_{\infty} - \langle X - Y, Y \rangle_T) - E[\langle X - Y, Y \rangle_{\infty} - \langle X - Y, Y \rangle_T | \mathcal{F}_T].
+\end{align*}
+$$
+
+It follows from the Schwarz inequality that
+
+$$
+\begin{align*}
+E[|q(X)_\infty - q(Y)_\infty - E[q(X)_\infty - q(Y)_\infty | \mathcal{F}_T]| |\mathcal{F}_T|] &\\
+&\le 2E[|\langle X - Y, X \rangle_\infty - \langle X - Y, X \rangle_T| |\mathcal{F}_T|] &\\
+&\quad + 2E[|\langle X - Y, Y \rangle_\infty - \langle X - Y, Y \rangle_T| |\mathcal{F}_T|] &\\
+&\le 2E[|\langle X - Y \rangle_\infty - \langle X - Y \rangle_T| |\mathcal{F}_T|^{1/2} E[\langle X \rangle_\infty - \langle X \rangle_T | \mathcal{F}_T]^{1/2}] &\\
+&\quad + 2E[|\langle X - Y \rangle_\infty - \langle X - Y \rangle_T| |\mathcal{F}_T|^{1/2} E[\langle Y \rangle_\infty - \langle Y \rangle_T | \mathcal{F}_T]^{1/2}] &\\
+&\le 2(\|X\|_{\text{BMO}_2} + \|Y\|_{\text{BMO}_2}) \|X - Y\|_{\text{BMO}_2}.
+\end{align*}
+$$
+
+This completes the proof. $\blacksquare$
+
+As a consequence of the lemma, we see that $X \in \text{BMO}$ implies $q(X) \in \text{BMO}$. Furthermore, we have
+
+**THEOREM 1.** Let $X$ be a uniformly integrable continuous martingale and let $q(X)$ be defined as in Section 1. If $X \in \text{BMO}$, then
+
+$$
+(2.1) \qquad \frac{1}{4d_1(q(X), L^\infty)} \le b(X) \le \frac{4}{d_1(q(X), L^\infty)},
+$$
+
+and furthermore, we have $\sqrt{2} a(X) \ge b(X)$ for all $X \in \text{BMO}$.
+
+*Proof.* Let $X \in \text{BMO}$. Then for any $\lambda > 0$ we have
+
+$$
+\begin{align*}
+&E[\exp(\lambda(\langle X \rangle_{\infty} - \langle X \rangle_T)) | \mathcal{F}_T] \\
+&= E[\exp(\lambda E[\langle X \rangle_{\infty} - \langle X \rangle_T | \mathcal{F}_T]) \exp(\lambda(\langle X \rangle_{\infty} - E[\langle X \rangle_{\infty} | \mathcal{F}_T])) | \mathcal{F}_T] \\
+&\le e^{\lambda \|X\|_{\text{BMO}_2}^2} E[\exp(\lambda|\langle X \rangle_{\infty} - E[\langle X \rangle_{\infty} | \mathcal{F}_T]|) | \mathcal{F}_T] \\
+&\le e^{\lambda \|X\|_{\text{BMO}_2}^2} E[\exp(\lambda|q(X)_{\infty} - q(X)_{T}|) | \mathcal{F}_T]
+\end{align*}
+$$
+
+and
+
+$$
+\begin{align*}
+&E[\exp(\lambda|q(X)_{\infty} - q(X)_{T}|) | \mathcal{F}_{T}] \\
+&= E[\exp(\lambda|\langle X \rangle_{\infty} - E[\langle X \rangle_{\infty} | \mathcal{F}_{T}]|) | \mathcal{F}_{T}] \\
+&\le E[\exp(\lambda(\langle X \rangle_{\infty} - \langle X \rangle_{T})) \exp(\lambda E[\langle X \rangle_{\infty} - \langle X \rangle_{T} | \mathcal{F}_{T}]) | \mathcal{F}_{T}] \\
+&\le e^{\lambda \|X\|_{\text{BMO}_2}^2} E[\exp(\lambda(\langle X \rangle_{\infty} - \langle X \rangle_{T})) | \mathcal{F}_{T}],
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/5157779/page_4.md b/samples/texts/5157779/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..b1bc4e764bb36769b307c10e34637135cd7448b3
--- /dev/null
+++ b/samples/texts/5157779/page_4.md
@@ -0,0 +1,35 @@
+which shows that $b(X) = a(q(X))$. Thus, inequalities (2.1) follow from inequalities (1.2).
+
+On the other hand, it is not difficult to show that the inequality
+
+$$ (2.2) \quad E[\exp(\lambda|X_\infty - X_T|) | \mathcal{F}_T] \le 2E[\exp(2\lambda^2(\langle X \rangle_\infty - \langle X \rangle_T)) | \mathcal{F}_T]^{1/2} $$
+
+holds for all $\lambda > 0$. Indeed, by using the Schwarz inequality and noting that for $X \in \text{BMO}$ the continuous exponential martingale $\mathcal{E}(X)$ defined by
+
+$$ \mathcal{E}(X) = \exp\left(X - \frac{1}{2}\langle X \rangle\right) $$
+
+is uniformly integrable (see Theorem 2.3 in [3, p. 31]), for every real $\lambda > 0$ we have
+
+$$
+\begin{align*}
+& E[\exp(\lambda(X_{\infty} - X_T)) | \mathcal{F}_T] \\
+&= E\left[ \left( \frac{\mathcal{E}(2\lambda X)_{\infty}}{\mathcal{E}(2\lambda X)_T} \right)^{1/2} \exp(\lambda^2(\langle X \rangle_{\infty} - \langle X \rangle_{T})) \,\middle|\, \mathcal{F}_T \right] \\
+&\le E\left[ \frac{\mathcal{E}(2\lambda X)_{\infty}}{\mathcal{E}(2\lambda X)_T} \,\middle|\, \mathcal{F}_T \right]^{1/2} E[\exp(2\lambda^2(\langle X \rangle_{\infty} - \langle X \rangle_{T})) | \mathcal{F}_T]^{1/2} \\
+&\le E[\exp(2\lambda^2(\langle X \rangle_{\infty} - \langle X \rangle_{T})) | \mathcal{F}_T]^{1/2}.
+\end{align*}
+$$
+
+The same argument works if $X$ is replaced by $-X$. Thus, we obtain (2.2). This shows that $\sqrt{2}a(X) \ge b(X)$. ■
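+
+The factor $2$ in (2.2) comes from combining the estimates for $X$ and $-X$ through the elementary bound
+
+$$
+\exp(\lambda|X_\infty - X_T|) \le \exp(\lambda(X_\infty - X_T)) + \exp(-\lambda(X_\infty - X_T)).
+$$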
+
+COROLLARY 1. If $X \in \text{BMO}$, then $q(X) \in \overline{L^\infty}$ if and only if $b(X) = \infty$, where $\overline{L^\infty}$ stands for the BMO-closure of $L^\infty$.
+
+Recall that the continuous exponential martingale $\mathcal{E}(X)$ is said to satisfy the $(A_p)$-condition $(1 < p < \infty)$. Noting that $\langle \alpha X + \beta Y \rangle \le 2\alpha^2 \langle X \rangle + 2\beta^2 \langle Y \rangle$ and applying the Schwarz inequality, for every $\lambda > 0$ we have
+
+$$
+\begin{align*}
+&E[\exp(\lambda(\langle \alpha X + \beta Y \rangle_{\infty} - \langle \alpha X + \beta Y \rangle_{T})) | \mathcal{F}_{T}] \\
+&\le E[\exp(4\alpha^2\lambda(\langle X \rangle_{\infty} - \langle X \rangle_{T})) | \mathcal{F}_{T}]^{1/2} E[\exp(4\beta^2\lambda(\langle Y \rangle_{\infty} - \langle Y \rangle_{T})) | \mathcal{F}_{T}]^{1/2},
+\end{align*}
+$$
+
+which shows that $b(\alpha X + \beta Y) = \infty$ for $b(X) = \infty$, $b(Y) = \infty$. Thus, $X, Y \in \mathcal{H}$ implies that $\alpha X + \beta Y \in \mathcal{H}$. This completes the proof. ■
+
+Now, it is natural to ask if the relationship $\overline{H^\infty} = \mathcal{H}$ holds. But we have not been able to settle this question so far.
+
+**Acknowledgments.** The authors wish to thank an anonymous earnest referee for a careful reading of the manuscript and many helpful comments.
+
+**References**
+
+[1] M. Emery, *Le théorème de Garnett-Jones, d'après Varopoulos*, in: Séminaire de Probabilités XV, Lecture Notes in Math. 850, Springer, Berlin, 1981, 278–284.
+
+[2] N. Kazamaki, *A new aspect of $L^\infty$ in the space of BMO-martingales*, Probab. Theory Related Fields 78 (1987), 113–126.
+
+[3] —, *Continuous Exponential Martingales and BMO*, Lecture Notes in Math. 1579, Springer, Berlin, 1994.
\ No newline at end of file
diff --git a/samples/texts/5157779/page_6.md b/samples/texts/5157779/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..ad983e6dbbb8878a67f367129a4900b0f0619cd0
--- /dev/null
+++ b/samples/texts/5157779/page_6.md
@@ -0,0 +1,19 @@
+[4] N. Th. Varopoulos, *A probabilistic proof of the Garnett–Jones theorem on BMO*, Pacific J. Math. 90 (1980), 201–221.
+
+Department of Mathematics
+College of Science
+Donghua University
+1882 West Yan'an Rd.
+Shanghai 200051, P.R. China
+E-mail: litanyan@dhu.edu.cn
+
+Department of Mathematics
+Toyama University
+3190 Gofuku, Toyama 930-8555, Japan
+E-mail: kaz@sci.toyama-u.ac.jp
+
+Received December 30, 2003
+
+Revised version December 23, 2004
+
+(5343)
\ No newline at end of file
diff --git a/samples/texts/568081/page_1.md b/samples/texts/568081/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..00d9e25d2a40261be109aff2102a4f4d476e47df
--- /dev/null
+++ b/samples/texts/568081/page_1.md
@@ -0,0 +1,36 @@
+# Statistical Query Lower Bounds for List-Decodable Linear Regression
+
+Ilias Diakonikolas*
+
+University of Wisconsin-Madison
+ilias@cs.wisc.edu
+
+Daniel M. Kane†
+
+University of California, San Diego
+dakane@cs.ucsd.edu
+
+Ankit Pensia‡
+
+University of Wisconsin-Madison
+ankitp@cs.wisc.edu
+
+Thanasis Pittas§
+
+University of Wisconsin-Madison
+pittas@wisc.edu
+
+Alistair Stewart
+
+Web 3 Foundation
+stewart.al@gmail.com
+
+June 18, 2021
+
+## Abstract
+
+We study the problem of list-decodable linear regression, where an adversary can corrupt a majority of the examples. Specifically, we are given a set $T$ of labeled examples $(x, y) \in \mathbb{R}^d \times \mathbb{R}$ and a parameter $0 < \alpha < 1/2$ such that an $\alpha$-fraction of the points in $T$ are i.i.d. samples from a linear regression model with Gaussian covariates, and the remaining $(1-\alpha)$-fraction of the points are drawn from an arbitrary noise distribution. The goal is to output a small list of hypothesis vectors such that at least one of them is close to the target regression vector. Our main result is a Statistical Query (SQ) lower bound of $d^{\text{poly}(1/\alpha)}$ for this problem. Our SQ lower bound qualitatively matches the performance of previously developed algorithms, providing evidence that current upper bounds for this task are nearly best possible.
+
+*Supported by NSF Award CCF-1652862 (CAREER), a Sloan Research Fellowship, and a DARPA Learning with Less Labels (LwLL) grant.
+†Supported by NSF Award CCF-1553288 (CAREER) and a Sloan Research Fellowship.
+‡Supported by NSF Award DMS-1749857.
+§Supported in part by NSF Award DMS-2023239 (TRIPODS).
\ No newline at end of file
diff --git a/samples/texts/568081/page_10.md b/samples/texts/568081/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..fb213948e3c0ea62d658aa0f4a2342946242af4b
--- /dev/null
+++ b/samples/texts/568081/page_10.md
@@ -0,0 +1,37 @@
+$$
+\le \frac{|T'_i| + |T'_j|}{2} \left( \frac{|T'_i \cap \mathcal{E}|}{|T'_i|} + \frac{|T'_j \cap \mathcal{E}^c|}{|T'_j|} \right) = \frac{|T'_i| + |T'_j|}{2} \left( \underset{(X,y) \sim_u T'_i}{\mathbf{Pr}}[\mathcal{E}] + \underset{(X,y) \sim_u T'_j}{\mathbf{Pr}}[\mathcal{E}^c] \right).
+$$
+
+As $\beta_j \in \mathcal{H}_{t,\gamma}$, we have that $\mathbf{Pr}_{(X,y) \sim_u T'_j}[\mathcal{E}^c] \le \alpha/20$ by condition (2). We now bound the first term:
+
+$$
+\underset{(X,y) \sim_u T'_i}{\mathbf{Pr}}[\mathcal{E}] = \underset{(X,y) \sim_u T'_i}{\mathbf{Pr}}[|y - X^T\beta_i - \gamma'v^T X| \leq \sigma t],
+$$
+
+which is less than $\alpha/20$ by condition (3). This completes the proof of the claim. $\square$
+
+We use this to show that there cannot exist a $\gamma$-packing of size $k \ge 4/\alpha$. To see this, assume that $k = 4/\alpha$; then
+
+$$
+|T| \geq \sum_{i=1}^{k} |T'_{i}| - \sum_{1 \leq i < j \leq k} |T'_{i} \cap T'_{j}| \geq \left(1 - \frac{\alpha}{20}(k-1)\right) \sum_{i=1}^{k} |T'_{i}| \geq \frac{4}{5} k \alpha \frac{|T|}{2} > |T|.
+$$
+
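+For concreteness, the arithmetic behind the last two steps: with $k = 4/\alpha$ and $0 < \alpha < 1/2$,
+
+$$
+\frac{\alpha}{20}(k-1) < \frac{\alpha}{20} \cdot \frac{4}{\alpha} = \frac{1}{5}, \qquad \left(1 - \frac{\alpha}{20}(k-1)\right) \frac{k\alpha}{2} > \frac{4}{5} \cdot \frac{4}{\alpha} \cdot \frac{\alpha}{2} = \frac{8}{5} > 1.
+$$
+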
+This yields a contradiction, completing the proof of Theorem 2.1. $\square$
+
+## 2.2 Information-Theoretic Lower Bound on Error
+
+We establish the following lower bound on the error of any list-decoding algorithm for linear regression.
+
+**Theorem 2.4.** Let $0 < \alpha < 1/2$, $\sigma > 0$, $k > 1$ such that $k = O(1/(\alpha^2 \log(1/\alpha)))$, and $d \in \mathbb{Z}_+$ such that $d > (\log(1/\alpha^k))^C$, where $C$ is a sufficiently large constant. Any list-decodable algorithm that receives a $(1-\alpha)$-corrupted version of $D_\beta$ (defined in Definition 1.2) for some unknown $\beta \in \mathbb{R}^d$, and returns a list $\mathcal{L}$ of size $|\mathcal{L}| = O((1/\alpha)^k)$ has error bound $\Omega\left(\frac{\sigma}{\alpha\sqrt{k\log(1/\alpha)}}\right)$ with high probability.
+
+*Proof.* Let $\rho > 0$ be a parameter to be decided later. We will take $\beta$ to be of the form $\rho v$ for some unit vector $v$. Abusing notation, let $D_v(x,y)$ be the joint distribution on $(X,y)$ from the linear model $X \sim N(0, I_d)$, $y = \beta^T X + \eta$, where $\eta \sim N(0, \sigma^2)$ independently of $X$ and $\beta = \rho v$. As $d$ is large enough, let $S'$ be a subset of the set $S$ of nearly orthogonal unit vectors of $\mathbb{R}^d$ from Lemma 1.15 with $|S'| = \lceil 0.5(1/\alpha)^k \rceil$ for $k > 1$. Consider the set of distributions $\{D_v\}_{v \in S'}$ and note that for every distinct pair $u, v \in S'$ we have that $\|\rho u - \rho v\|_2 \ge c\rho$ for some $c > 0$. We want to show that after adding a $(1 - \alpha)$-fraction of outliers these distributions become indistinguishable, i.e., there exists some distribution that is pointwise greater than $\alpha D_v$ for every $v \in S'$. This will lead to a lower bound on the error of the form $\Omega(\rho)$. Let $P$ be the joint pseudo-distribution on $(X,y)$ such that $P(x,y) = \max_{v \in S'} D_v(x,y)$ and denote by $\|P\|_1$ the normalizing factor $\int_{\mathbb{R}} \int_{\mathbb{R}^d} P(x,y)dxdy$. We will show that $P/\|P\|_1 \ge \alpha D_v$ pointwise. To this end, it suffices to show that $\|P\|_1 \le 1/\alpha$. Denote $z := v^T x$. Noting that $D_v$'s marginal on $x$ is $N(0, I_d)$ and the conditional $D_v(y|x)$ is $N(\rho z, \sigma^2)$, we can write
+
+$$
+\begin{align*}
+D_v(x, y) &= \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{|y - \rho z|^2}{2\sigma^2}\right) \frac{1}{(\sqrt{2\pi})^d} \exp\left(-\frac{\|x\|^2}{2}\right) \\
+&= \frac{1}{(\sqrt{2\pi})^{d+1}\sigma} \exp\left(-\frac{|y - \rho z|^2}{2\sigma^2} - \frac{\|x\|^2}{2}\right).
+\end{align*}
+$$
+
+For some $\sigma_1$ to be defined later, take $R$ to be the reference distribution where $X \sim N(0, I_d)$ and $y \sim N(0, \sigma_1^2)$ independently. We now calculate the ratio of the density of $R$ to that of $D_v$ at an arbitrary point $(x, y)$:
\ No newline at end of file
diff --git a/samples/texts/568081/page_11.md b/samples/texts/568081/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9969522a81c8d304f88e70c8388452611ac6261
--- /dev/null
+++ b/samples/texts/568081/page_11.md
@@ -0,0 +1,41 @@
+$$
+\begin{align*}
+\frac{R(x, y)}{D_v(x, y)} &= \frac{R(y)R(x|y)}{D_v(y)D_v(x|y)} \\
+&= \frac{\frac{1}{(\sqrt{2\pi})^{d+1}\sigma_1} \exp(-0.5 \|x\|^2 - 0.5y^2/\sigma_1^2)}{\frac{1}{(\sqrt{2\pi})^{d+1}\sigma} \exp(-0.5 \|x\|^2 - 0.5 \frac{\rho^2}{\sigma^2} (z - \frac{y}{\rho})^2)} \\
+&= \frac{\sigma}{\sigma_1} \exp\left(-\frac{y^2}{2\sigma_1^2} + \frac{\rho^2}{2\sigma^2} \left(z - \frac{y}{\rho}\right)^2\right) \\
+&\geq \frac{\sigma}{\sigma_1} \exp\left(-\frac{y^2}{2\sigma_1^2}\right).
+\end{align*}
+$$
+
+As we will show later, it suffices to show that this expression is greater than $2\alpha$ with high probability under $D_v$. As $y \sim \mathcal{N}(0, \sigma_y^2)$ under $D_v$, with probability $1 - \alpha^{k-1}$, $|y| \le 10\sqrt{k}\sigma_y\sqrt{\log(1/\alpha)}$. Setting $\sigma_1 = 10\sqrt{k}\sigma_y\sqrt{\log(1/\alpha)}$, we get that with the same probability,
+
+$$
+\frac{R(x, y)}{D_v(x, y)} \geq \frac{1}{100\sqrt{k}} \frac{\sigma}{\sigma_y \sqrt{\log(1/\alpha)}}.
+$$
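+
+Both steps here are standard Gaussian computations. The tail bound used above follows from $\mathbf{Pr}[|y| > t\sigma_y] \le 2e^{-t^2/2}$ for $y \sim \mathcal{N}(0, \sigma_y^2)$:
+
+$$
+\mathbf{Pr}\left[|y| > 10\sqrt{k}\,\sigma_y\sqrt{\log(1/\alpha)}\right] \le 2e^{-50k\log(1/\alpha)} = 2\alpha^{50k} \le \alpha^{k-1},
+$$
+
+and on the complementary event $|y| \le \sigma_1$, hence $e^{-y^2/(2\sigma_1^2)} \ge e^{-1/2}$ and $R/D_v \ge e^{-1/2}\sigma/\sigma_1 \ge \frac{1}{100\sqrt{k}} \frac{\sigma}{\sigma_y \sqrt{\log(1/\alpha)}}$, since $e^{-1/2}/10 \ge 1/100$.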
+
+We can now maximize $\rho$ (and thus $\sigma_y$) so that the expression on the right-hand side is greater than $2\alpha$. This holds as long as $\rho$ satisfies the following, for a sufficiently large constant $C' > 0$:
+
+$$
+\sigma_y^2 = \sigma^2 + \rho^2 \leq \frac{\sigma^2}{C'k\alpha^2 \log(1/\alpha)},
+$$
+
+As $k = O(1/(\alpha^2 \log(1/\alpha)))$, the condition above shows that $\rho$ can be as large as $\Theta((\sigma/(\sqrt{k}\alpha))/\sqrt{\log(1/\alpha)})$. Finally we show that $\|P\|_1$ is less than $1/\alpha$ as follows:
+
+$$
+\begin{align*}
+\|P\|_{1} &= \int_{\mathbb{R}} \int_{\mathbb{R}^d} P(x,y)dxdy \\
+&= \int_{\mathbb{R}} \int_{\mathbb{R}^d} P(x,y)\mathbf{1}(|y| \le 10\sqrt{k}\sigma_y \sqrt{\log(1/\alpha)})dxdy + \int_{\mathbb{R}} \int_{\mathbb{R}^d} P(x,y)\mathbf{1}(|y| > 10\sqrt{k}\sigma_y \sqrt{\log(1/\alpha)})dxdy \\
+&\le \frac{1}{2\alpha} \int_{\mathbb{R}} \int_{\mathbb{R}^d} R(x,y)dxdy + \int_{\mathbb{R}} \int_{\mathbb{R}^d} P(x,y)\mathbf{1}(|y| > 10\sqrt{k}\sigma_y \sqrt{\log(1/\alpha)})dxdy \\
+&\le \frac{1}{2\alpha} + \sum_{v \in S'} \mathbf{Pr}_{(X,y) \sim D_v} [|y| > 10\sqrt{k}\sigma_y \sqrt{\log(1/\alpha)}] \\
+&\le \frac{1}{2\alpha} + |S'| \alpha^{k-1} \le 1/\alpha,
+\end{align*}
+$$
+
+where the last inequality follows by noting that $|S'| \le 0.5(1/\alpha)^k$. $\square$
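+
+Spelled out, the final step is
+
+$$
+|S'|\,\alpha^{k-1} \le \frac{1}{2}\left(\frac{1}{\alpha}\right)^k \alpha^{k-1} = \frac{1}{2\alpha}, \qquad \frac{1}{2\alpha} + \frac{1}{2\alpha} = \frac{1}{\alpha}.
+$$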
+
+## 3 Main Result: Proof of Theorem 1.5
+
+In this section, we present the main result of this paper: SQ hardness of list-decodable linear regression (Definitions **1.1** and **1.2**). We consider the setting where $\beta$ has norm less than 1, i.e., $\beta = \rho v$ for $v \in S^{d-1}$ and $\rho \in (0, 1)$.¹ Note that the marginal distribution of the labels is $\mathcal{N}(0, \sigma_y^2)$,
+
+¹This is a standard assumption and considered by existing works [KKK19, RY20a] (cf. Remark 3.11).
\ No newline at end of file
diff --git a/samples/texts/568081/page_12.md b/samples/texts/568081/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e6c9d6ba41b07d2f9dea1ae9295193924b889ed
--- /dev/null
+++ b/samples/texts/568081/page_12.md
@@ -0,0 +1,31 @@
+where $\sigma_y^2 = \rho^2 + \sigma^2$. We ensure that the labels $y$ have unit variance by using $\sigma^2 = 1 - \rho^2$. Specifically, the choice of parameters will be such that obtaining a $\rho/2$-additive approximation of the regressor $\beta$ is possible information-theoretically with poly($d/\alpha$) samples (cf. Section 2.1), but the complexity of any SQ algorithm for the task must necessarily be at least $d^{\text{poly}(1/\alpha)}/\sigma$. We show the following more detailed statement of Theorem 1.5.
+
+**Theorem 3.1** (SQ Lower Bound). Let $c \in (0, 1/2)$, $d \in \mathbb{Z}_+$ with $d = 2^{\Omega(1/(1/2-c))}$, $\alpha \in (0, 1/2)$, $\rho \in (0, 1)$, $\sigma^2 = 1 - \rho^2$, and $m \in \mathbb{Z}_+$ with $m \le c_1/\sqrt{\alpha}$ for some sufficiently small constant $c_1 > 0$. Any list-decoding algorithm that, given statistical query access to a $(1-\alpha)$-corrupted version of the distribution described by the model of Definition 1.2 with $\beta = \rho v$ for $v \in S^{d-1}$, returns a list $\mathcal{L}$ of hypotheses vectors that contains a $\hat{\beta}$ such that $\|\hat{\beta} - \beta\|_2 \le \rho/2$, does one of the following:
+
+* it uses at least one query to $\text{STAT}(\Omega(d)^{-(2m+1)(1/4-c/2)}e^{O(m)}/\sigma)$,
+
+* it makes $2^{\Omega(d^c)} d^{-(2m+1)(1/2-c)}$ many queries, or
+
+* it returns a list $\mathcal{L}$ of size $2^{\Omega(d^c)}$.
+
+In the rest of this section, we will explain the hard-to-learn construction for our SQ lower bound, i.e., a set of distributions with large statistical dimension. The proof would then follow from Lemma 1.12. We begin by describing additional notation that we will use.
+
+**Notation:** As $\beta = \rho v$ for a fixed $\rho$, we will slightly abuse notation by using $D_v(x, y)$ to denote the joint distribution of the inliers and we use $E_v(x, y)$ to denote the $(1-\alpha)$-corrupted version of $D_v(x, y)$. To avoid using multiple subscripts, we use $D_v(x|y)$ to denote the conditional distribution of $X|y$ according to the distribution $D_v$ and similarly for the other distributions. In addition, we use $D_v(y)$ to denote the marginal distribution of $y$ under $D_v$ and similarly for other distributions.
+
+Following the general construction of [DKS17], we will specify a reference joint distribution $R(x, y)$ where $X$ and $y$ are independent, and $X \sim \mathcal{N}(0, I_d)$. We will find a marginal distribution $R(y)$ and a way to add the outliers so that the following hold for each $E_v$ (where $m = \Theta(1/\sqrt{\alpha})$):
+
+(I) $E_v$ is indeed a valid distribution of $(X, y)$ in our corruption model (i.e., can be written as a mixture $\alpha D_v(x,y) + (1-\alpha)N_v(x,y)$ for some noise distribution $N_v$). Moreover, the marginal of $E_v$ on the labels, $E_v(y)$, coincides with $R(y)$.
+
+(II) For every $y \in \mathbb{R}$, the conditional distribution $E_v(x|y)$ is of the form $P_{A_y,v}$ of Definition 1.13, with $A_y$ being a distribution that matches the first $2m$ moments with $\mathcal{N}(0, 1)$.²
+
+(III) For $A_y$ defined above, $\mathbf{E}_{y\sim R(y)}[\chi^2(A_y, \mathcal{N}(0, 1))]$ is bounded.
+
+We first briefly explain why a construction satisfying the above properties suffices to prove our main theorem (postponing a formal proof for the end of this section). We start by noting the following decomposition.
+
+**Lemma 3.2.** For $u, v \in S^{d-1}$, if $E_u$ and $E_v$ have the same marginals $R(y)$ on the labels, they satisfy $\chi_{R(x,y)}(E_v(x,y), E_u(x,y)) = \mathbf{E}_{y\sim R(y)}[\chi_{\mathcal{N}(0,I_d)}(E_v(x|y), E_u(x|y))]$.
+
+*Proof.* Let $\phi$ denote the density of $\mathcal{N}(0, 1)$. Using the fact that $E_v$ and $E_u$ have the same marginal $R(y)$ we have that
+
+$$ \chi_{R(x,y)}(E_v(x,y), E_u(x,y)) + 1 = \int_{\mathbb{R}} \int_{\mathbb{R}^d} \frac{E_v(x,y)E_u(x,y)}{\phi(x)R(y)} dx dy $$
+
+²We use an even number of moments for simplicity; the analysis would differ slightly for an odd number.
\ No newline at end of file
diff --git a/samples/texts/568081/page_13.md b/samples/texts/568081/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..ee42023c413c7b22811e06ac52a487c6531ce678
--- /dev/null
+++ b/samples/texts/568081/page_13.md
@@ -0,0 +1,40 @@
+$$
+\begin{align*}
+&= \int_{\mathbb{R}} \int_{\mathbb{R}^d} \frac{E_v(x|y)E_u(x|y)}{\phi(x)} R(y) \, dxdy \\
+&= \int_{\mathbb{R}} (1 + \chi_{\mathcal{N}(0, I_d)}(E_v(x|y), E_u(x|y))) R(y) \, dy \\
+&= 1 + \underset{y \sim R(y)}{\mathbf{E}} [\chi_{\mathcal{N}(0, I_d)}(E_v(x|y), E_u(x|y))] . \quad \square
+\end{align*}
+$$
+
+Using the decomposition in Lemma 3.2 for $E_u$ and $E_v$ satisfying Property (II), Lemma 1.14 implies that $|\chi_{R(x,y)}(E_v(x,y), E_u(x,y))| \le |u^T v|^{2m+1} \mathbf{E}_{y \sim R(y)}[\chi^2(A_y, N(0,1))]$. Letting $\mathcal{D} = \{E_v : v \in S\}$, where $S$ is the set of nearly orthogonal unit vectors from Lemma 1.15, we get that $\mathcal{D}$ is $(\gamma, b)$-correlated relative to $R$, for $b = \mathbf{E}_{y \sim R(y)}[\chi^2(A_y, \mathcal{N}(0,1))]$ and $\gamma \le d^{-\Omega(m)} b$. As $|S| = 2^{\Omega(d^c)}$, $b$ is bounded by Property (III), and the list size is much smaller than $|S|$, we can show that the statistical dimension of list-decodable linear regression is large.
+
+Thus, in the rest of the section we focus on showing that such a construction exists. We first note that according to our linear model of Definition 1.2, the conditional distribution of X given y for the inliers is Gaussian with unit variance in all but one direction.
+
+**Fact 3.3.** Fix $\rho > 0$, $v \in S^{d-1}$, and consider the regression model of Definition 1.2 with $\beta = \rho v$. Then the conditional distribution $X|y$ of the inliers is $\mathcal{N}(y\rho v, I_d - \rho^2 vv^T)$, i.e., independent standard Gaussian in all directions perpendicular to $v$ and $\mathcal{N}(\rho y, 1 - \rho^2)$ in the direction of $v$.
+
+*Proof.* This is due to the following fact for the conditional distribution of the Gaussian distribution.
+
+**Fact 3.4.** If $\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \sim \mathcal{N}\left(\begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}\right)$, then $y_1|y_2 \sim \mathcal{N}(\bar{\mu}, \bar{\Sigma})$, with $\bar{\mu} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(y_2 - \mu_2)$ and $\bar{\Sigma} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$.
+
+We apply this fact for the pair $(X, y)$ by setting $y_1 = X, y_2 = y, \mu_1 = \mu_2 = 0$ and $\Sigma_{11} = I_d, \Sigma_{12} = \beta, \Sigma_{21} = \beta^T, \Sigma_{22} = \sigma^2 + \|\beta\|^2$. $\square$
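+
+Explicitly, substituting these choices into Fact 3.4 (and using $\sigma^2 = 1 - \rho^2$, so that $\Sigma_{22} = \sigma^2 + \|\beta\|_2^2 = 1$) gives
+
+$$
+\bar{\mu} = \beta \Sigma_{22}^{-1} y = y\rho v, \qquad \bar{\Sigma} = I_d - \beta \Sigma_{22}^{-1} \beta^T = I_d - \rho^2 vv^T,
+$$
+
+which is exactly the statement of Fact 3.3.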
+
+Since Fact 3.3 states that $D_v(x|y)$ is already of the desired form (standard normal in all directions perpendicular to $v$ and $\mathcal{N}(y\rho, 1-\rho^2)$ in the direction of $v$), the problem becomes one-dimensional. More specifically, for every $y \in \mathbb{R}$, we need to find a one-dimensional distribution $Q_y$ and appropriate values $\alpha_y \in [0, 1]$ such that the mixture $A_y = \alpha_y \mathcal{N}(y\rho, 1-\rho^2) + (1-\alpha_y)Q_y$ matches the first $2m$ moments with $\mathcal{N}(0, 1)$. Then, multiplying by $\phi_{\perp v}$ (which denotes the contribution of the space orthogonal to $v$ to the density of the standard Gaussian, as defined in Definition 1.13) yields the $d$-dimensional mixture distribution $\alpha_y D_v(x|y) + (1-\alpha_y)Q_y(v^T x)\phi_{\perp v}(x)$. We show that an appropriate selection of $\alpha_y$ can ensure that this is a valid distribution for our contamination model.
+
+**Lemma 3.5.** Let $R$ be a distribution on pairs $(x,y) \in \mathbb{R}^{d+1}$ such that $\alpha_y := \alpha D_v(y)/R(y) \in [0,1]$ for all $y \in \mathbb{R}$. Suppose that for every $y \in \mathbb{R}$ there exists a univariate distribution $Q_y$ such that $A_y := \alpha_y \mathcal{N}(y\rho, 1-\rho^2) + (1-\alpha_y)Q_y$ matches the first $2m$ moments with $\mathcal{N}(0,1)$. If the distribution of the outliers is $N_v(x,y) = ((1-\alpha_y)/(1-\alpha))Q_y(v^Tx)\phi_{\perp v}(x)R(y)$, Properties (I) and (II) hold.
+
+*Proof.* First note that the noise distribution $N_v$ is indeed a valid distribution since it is non-negative everywhere because of the assumption $\alpha_y \in [0, 1]$ and it integrates to one:
+
+$$
+\begin{align*}
+& \frac{1}{1-\alpha} \int_{\mathbb{R}} \int_{\mathbb{R}^d} (1-\alpha_y) Q_y(v^T x) \phi_{\perp v}(x) R(y) dx dy \\
+&= \frac{1}{1-\alpha} \left( \int_{\mathbb{R}} \int_{\mathbb{R}^d} R(y) Q_y(v^T x) \phi_{\perp v}(x) dx dy - \alpha \int_{\mathbb{R}} \int_{\mathbb{R}^d} D_v(y) Q_y(v^T x) \phi_{\perp v}(x) dx dy \right) = 1.
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/568081/page_14.md b/samples/texts/568081/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..0cb804b053aac19d7694d927def8e3416ed4ebf7
--- /dev/null
+++ b/samples/texts/568081/page_14.md
@@ -0,0 +1,39 @@
+The joint distribution $E_v(x,y)$ can be written as
+
+$$
+\begin{align*}
+E_v(x, y) &= \alpha D_v(x, y) + (1-\alpha)N_v(x, y) \\
+&= \alpha D_v(x, y) + (1-\alpha) \frac{1-\alpha_y}{1-\alpha} Q_y(v^T x) \phi_{\perp v}(x) R(y) \\
+&= (\alpha_y D_v(x|y) + (1-\alpha_y)Q_y(v^T x) \phi_{\perp v}(x)) R(y).
+\end{align*}
+$$
+
+This means that the marginal of $y$ under $E_v$ is $R(y)$, which establishes Property (I), and the conditional distribution of $X|y$ under $E_v$ is $E_v(x|y) = \alpha_y D_v(x|y) + (1-\alpha_y)Q_y(v^T x)\phi_{\perp v}(x)$.
+
+The moment matching part of Property (II) holds by assumption. For the other part of Property (II), we note that $E_v(x|y)$ is standard Gaussian in directions perpendicular to $v$ because of Fact 3.3 and the form of the term $Q_y(v^T x)\phi_{\perp v}(x)$ that corresponds to the outliers. $\square$
+
+We will choose the reference distribution $R(x, y)$ to have $X \sim \mathcal{N}(0, I_d)$ and $y \sim \mathcal{N}(0, 1/\alpha)$ independently, which makes the corresponding value of $\alpha_y$ equal to $\alpha_y = \alpha D_v(y)/R(y) = \sqrt{\alpha} \exp(-y^2(1-\alpha)/2)$ (a direct ratio of the $\mathcal{N}(0,1)$ and $\mathcal{N}(0,1/\alpha)$ label densities). This satisfies the condition in Lemma 3.5 that $\alpha_y \in [0, 1]$. Our choice of $R(y)$ being $\mathcal{N}(0, 1/\alpha)$ is informed by Properties (II) and (III), and will be used later on in the proofs of Theorem 3.6 and Lemma 3.7 (also see the last paragraph of Section 1.3 for more intuition). Going back to our goal, i.e., making $A_y = \alpha_y \mathcal{N}(y\rho, 1-\rho^2) + (1-\alpha_y)Q_y$ match moments with $\mathcal{N}(0, 1)$, we will argue that it suffices to look only for $Q_y$ of the specific form $U_\rho F_y$, where $U_\rho$ is the Ornstein-Uhlenbeck operator. This suffices because $U_\rho \delta_y = \mathcal{N}(y\rho, 1-\rho^2)$ and the operator $U_\rho$ preserves the moments of a distribution if they match with $\mathcal{N}(0, 1)$ (see Lemma 3.7 (i) below). Letting $A_y = U_\rho(\alpha_y \delta_y + (1-\alpha_y)F_y)$, the new goal is to show that the argument of $U_\rho$ matches moments with $\mathcal{N}(0, 1)$. We show the following structural result:
+
+**Theorem 3.6.** Let $y \in \mathbb{R}$, $B \in \mathbb{R}$, $\alpha \in (0, 1/2)$, and define $\alpha_y := \sqrt{\alpha} \exp(-y^2(1-\alpha)/2)$. For any $m \in \mathbb{Z}_+$ such that $m \le C_1/\sqrt{\alpha}$ and $B \ge C_2\sqrt{m}$, with $C_1 > 0$ being a sufficiently small constant and $C_2$ being a sufficiently large constant, there exists a distribution $F_y$ that satisfies the following:
+
+1. The mixture distribution $\alpha_y \delta_y + (1 - \alpha_y) F_y$ matches the first 2m moments with $\mathcal{N}(0, 1)$.
+
+2. $F_y$ is a discrete distribution supported on at most $2m+1$ points, all of which lie in $[-B, B]$.
+
+The proof of Theorem 3.6 is the bulk of the technical work of this paper and is deferred to
+Section 4. As mentioned before, applying $U_\rho$ preserves the required moment-matching property.
+More crucially, it allows us to bound the $\chi^2$-divergence: the following result bounds $\chi^2(A_y, \mathcal{N}(0, 1))$
+using contraction properties of $U_\rho$, tail bounds of Hermite polynomials, and the discreteness of $F_y$.
+
+**Lemma 3.7.** In the setting of Theorem **3.6**, let $\rho > 0$ and $Q_y = U_\rho F_y$. Then the following hold for the mixture $A_y = \alpha_y \mathcal{N}(y\rho, 1-\rho^2) + (1-\alpha_y)Q_y$: (i) $A_y$ matches the first 2m moments with $\mathcal{N}(0, 1)$, and (ii) $\chi^2(A_y, \mathcal{N}(0, 1)) \le \left( \alpha O(e^{y^2(\alpha - 1/2)}) + O(e^{B^2/2}) \right) \rho^{2(2m+1)}/(1-\rho^2)$.
+
+*Proof.* The first property follows by noting that $A_y = \alpha_y \mathcal{N}(y\rho, 1-\rho^2) + (1-\alpha_y)Q_y = U_\rho(\alpha_y\delta_y + (1-\alpha_y)F_y)$ and using the eigenvalue property of Hermite polynomials (Fact 1.7). This implies that for all $i \le 2m$ we have that
+
+$$
+\underset{X \sim U_{\rho}(\alpha_{y}\delta_{y}+(1-\alpha_{y})F_{y})}{\mathbf{E}}[h_{i}(X)] = \rho^{i} \underset{X \sim \alpha_{y}\delta_{y}+(1-\alpha_{y})F_{y}}{\mathbf{E}}[h_{i}(X)] = \rho^{i} \underset{X \sim N(0,1)}{\mathbf{E}}[h_{i}(X)] = \underset{X \sim N(0,1)}{\mathbf{E}}[h_{i}(X)],
+$$
+
+where the last equality uses that $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[h_i(X)] = 0$ for $i > 0$ and $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[h_0(X)] = 1$. Since $\{h_i : i \in [2m]\}$ form a basis of $\mathcal{P}(2m)$, the space of all polynomials of degree at most $2m$, it follows that $A_y$ matches the first $2m$ moments with $\mathcal{N}(0, 1)$.
\ No newline at end of file
diff --git a/samples/texts/568081/page_15.md b/samples/texts/568081/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..9da126c87d2d3aa5e886fbeb08e730ed4ef368a3
--- /dev/null
+++ b/samples/texts/568081/page_15.md
@@ -0,0 +1,48 @@
+The $\chi^2$ bound is due to the bounded support in $[-B, B]$ and the Gaussian smoothing operation
+and can be shown as follows. First, we need the following fact whose proof is included in Appendix A
+for completeness.
+
+**Fact 3.8.** For any one-dimensional distribution $P$ that matches the first $m$ moments with $\mathcal{N}(0, 1)$
+and has $\chi^2(P, \mathcal{N}(0, 1)) < \infty$ the following identity is true: $\chi^2(P, \mathcal{N}(0, 1)) = \sum_{i=m+1}^{\infty} (\mathbf{E}_{X\sim P}[h_i(X)])^2$.
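+
+A quick way to see Fact 3.8: it is the Parseval identity for the Hermite expansion. Writing $P(x)/\phi(x) - 1 = \sum_{i \ge 1} c_i h_i(x)$ with $c_i = \mathbf{E}_{X \sim P}[h_i(X)]$, orthonormality of the $h_i$ with respect to $\phi$ gives
+
+$$
+\chi^2(P, \mathcal{N}(0,1)) = \int_{\mathbb{R}} \left(\frac{P(x)}{\phi(x)} - 1\right)^2 \phi(x)\,dx = \sum_{i=1}^{\infty} c_i^2 = \sum_{i=m+1}^{\infty} \left(\mathbf{E}_{X \sim P}[h_i(X)]\right)^2,
+$$
+
+where the moment-matching assumption forces $c_i = 0$ for $1 \le i \le m$.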
+
+Let $M_y$ denote the distribution $\alpha_y\delta_y + (1-\alpha_y)F_y$, i.e., the mixture before applying the Ornstein-Uhlenbeck operator. In order to apply Fact 3.8 to $M_y$, we need to argue that its $\chi^2$-divergence is finite. As $F_y$ is a discrete distribution, the $U_\rho$ operator will transform it to a finite sum of Gaussians with variances strictly less than 2. We defer the proof of the following claim to Appendix A.
+
+**Claim 3.9.** If $P = \sum_{i=1}^k \lambda_i N(\mu_i, \sigma_i^2)$ with $\mu_i \in \mathbb{R}$, $\sigma_i < \sqrt{2}$ and $\lambda_i \ge 0$ such that $\sum_{i=1}^k \lambda_i = 1$, we have that $\chi^2(P, \mathcal{N}(0, 1)) < \infty$.
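+
+In the present application, writing $F_y = \sum_i \lambda_i \delta_{\mu_i}$ for its at most $2m+1$ atoms (Theorem 3.6) and using $U_\rho \delta_\mu = \mathcal{N}(\rho\mu, 1-\rho^2)$, we get
+
+$$
+U_\rho M_y = \alpha_y \mathcal{N}(\rho y, 1-\rho^2) + (1-\alpha_y) \sum_i \lambda_i \mathcal{N}(\rho\mu_i, 1-\rho^2),
+$$
+
+a finite mixture of Gaussians with common variance $1-\rho^2 < 2$, so Claim 3.9 yields $\chi^2(U_\rho M_y, \mathcal{N}(0,1)) < \infty$ and Fact 3.8 applies.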
+
+Using the formula of Fact 3.8 and Fact 1.7 for the individual terms, we get that
+
+$$
+\begin{align*}
+\chi^2(A_y, \mathcal{N}(0,1)) &= \sum_{i=2m+1}^{\infty} \left( \mathbf{E}_{X \sim U_\rho M_y} [h_i(X)] \right)^2 \\
+&= \sum_{i=2m+1}^{\infty} \rho^{2i} \left( \mathbf{E}_{X \sim M_y} [h_i(X)] \right)^2 \\
+&= \sum_{i=2m+1}^{\infty} \rho^{2i} \left( \alpha_y h_i(y) + (1-\alpha_y) \mathbf{E}_{X \sim F_y} [h_i(X)] \right)^2 \\
+&\le 2\alpha_y^2 \sum_{i=2m+1}^{\infty} \rho^{2i} h_i^2(y) + 2(1-\alpha_y)^2 \sum_{i=2m+1}^{\infty} \rho^{2i} \left( \mathbf{E}_{X \sim F_y} [h_i(X)] \right)^2, \quad (4)
+\end{align*}
+$$
+
+where the inequality uses that $(a+b)^2 \le 2(a^2+b^2)$ for all $a,b \in \mathbb{R}$. To bound this expression from above we will use the following tail bound for Hermite polynomials.
+
+**Lemma 3.10 ([Kra04]).** Let $h_k$ be the $k$-th normalized probabilist's Hermite polynomial. Then $\max_{x \in \mathbb{R}} h_k^2(x) e^{-x^2/2} = O(k^{-1/6})$.
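+
+In the form used below, this reads: for every $k \ge 1$ and every $x \in \mathbb{R}$,
+
+$$
+h_k^2(x) \le O(k^{-1/6})\, e^{x^2/2} \le O(e^{x^2/2}).
+$$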
+
+More details on how Lemma 3.10 follows from the result of [Kra04] can be found in Section 4.3.
+For the first term of Equation (4), we have that
+
+$$
+\begin{align*}
+\sum_{i=2m+1}^{\infty} \rho^{2i} \alpha_y^2 h_i^2(y) &\leq \sum_{i=2m+1}^{\infty} \rho^{2i} \alpha e^{-y^2+\alpha y^2} O(e^{y^2/2}) \\
+&\leq \alpha O(e^{y^2(\alpha-1/2)}) \sum_{i=2m+1}^{\infty} \rho^{2i} \\
+&\leq \alpha O(e^{y^2(\alpha-1/2)}) \rho^{2(2m+1)} / (1-\rho^2),
+\end{align*}
+$$
+
+where the first inequality uses Lemma **3.10** and the definition of $\alpha_y$. For the second term, we use
+the bounded support of $F_y$ in $[-B, B]$ along with the bound of Lemma **3.10** to obtain
+
+$$
+\sum_{i=2m+1}^{\infty} \rho^{2i} \left( \mathbf{E}_{X \sim F_y} [h_i(X)] \right)^2 \leq \sum_{i=2m+1}^{\infty} \rho^{2i} O(e^{B^2/2}) \leq O(e^{B^2/2}) \sum_{i=2m+1}^{\infty} \rho^{2i} \leq O(e^{B^2/2}) \frac{\rho^{2(2m+1)}}{1-\rho^2}.
+$$
+
+This completes the proof of Lemma **3.7**.
+
+Putting everything together, we now prove our main theorem.
\ No newline at end of file
diff --git a/samples/texts/568081/page_16.md b/samples/texts/568081/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f8f1adabae545251dff02c788f5673ef55ba75b
--- /dev/null
+++ b/samples/texts/568081/page_16.md
@@ -0,0 +1,23 @@
+*Proof of Theorem 3.1.* We will show that the following search problem $\mathcal{Z}$ has large statistical dimension: $\mathcal{D}$ is the set of distributions of the form $E_v(x, y) = \alpha D_v(x, y) + (1-\alpha)N_v(x, y)$ for every $v \in S^{d-1}$ and noise distribution $N_v$ as in Lemma 3.5. The reference distribution $R$ is $R = \mathcal{N}(0, I_d) \times \mathcal{N}(0, 1/\alpha)$. Let $\beta(v) = \rho v$ denote the regression vector corresponding to $E_v$. The set of solutions $\mathcal{F}$ is the set of all lists of size $l$ containing vectors of norm $\rho$ in $\mathbb{R}^d$ and the solution set $\mathcal{Z}(E_v)$ for the distribution $E_v$ is exactly the set of lists from $\mathcal{F}$ having at least one element $u$ at distance $\|u - \beta(v)\|_2 \le \rho/2$. The appropriate subset of $\mathcal{D}$ that we will consider is the one corresponding to the set $S$ of nearly orthogonal vectors of Lemma 1.15, $\mathcal{D}_R = \{E_v\}_{v \in S}$.
+
+Note that for any $u \in \mathcal{F}$, there exists at most one element $E_v$ in $\mathcal{D}_R$ that satisfies $\|u - \beta(v)\|_2 \le \rho/2$: if there existed another $v'$ with $\|u - \beta(v')\|_2 \le \rho/2$, then by the triangle inequality $\|\beta(v) - \beta(v')\|_2 \le \rho$. However, this cannot happen, because $|v^\top v'| \le O(d^{c-1/2})$ for all $v, v' \in S$ together with $d = 2^{\Omega(1/(1/2-c))}$ implies that $\|\beta(v) - \beta(v')\|_2 = \rho\sqrt{2(1-v^\top v')} > \rho$. This implies that for any solution list $L$, $|\mathcal{D}_R \setminus \mathcal{Z}^{-1}(L)| \ge |\mathcal{D}_R| - l$. We choose $l = |\mathcal{D}_R|/2$. We now calculate the pairwise correlation of the set $\mathcal{D}_R$. Fix a pair of vectors $u, v \in S$.
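The distance computation above rests on the identity $\|\rho v - \rho v'\|_2 = \rho\sqrt{2(1 - v^\top v')}$ for unit vectors; a small numerical sketch with random unit vectors (the dimension, $\rho$, and seed are arbitrary illustrative choices):

```python
import math, random

def unit(d):
    """A uniformly random unit vector in R^d (Gaussian direction, normalized)."""
    v = [random.gauss(0, 1) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

random.seed(0)
d, rho = 50, 0.8
v, w = unit(d), unit(d)
dot = sum(a * b for a, b in zip(v, w))
dist = math.sqrt(sum((rho * a - rho * b) ** 2 for a, b in zip(v, w)))
identity_rhs = rho * math.sqrt(2 * (1 - dot))   # rho * sqrt(2 (1 - <v, w>))
```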
+
+$$
+\begin{aligned}
+\chi_{R(x,y)}(E_v(x,y), E_u(x,y)) &= \underset{y \sim R(y)}{\mathbf{E}} [\chi_{\mathcal{N}(0,I_d)}(E_v(x|y), E_u(x|y))] \\
+&\le |u^T v|^{2m+1} \underset{y \sim R(y)}{\mathbf{E}} [\chi^2(A_y, \mathcal{N}(0,1))] \\
+&\le |u^T v|^{2m+1} \left( O(e^{B^2/2})/(1-\rho^2) + \int_{\mathbb{R}} \alpha O(e^{y^2(\alpha-1/2)}) \sqrt{\alpha e^{-y^2\alpha}} dy \right) \\
+&\le |u^T v|^{2m+1} O(e^{B^2/2})/(1-\rho^2) \\
+&\le \Omega(d)^{-(2m+1)(1/2-c)} O(e^{B^2/2})/(1-\rho^2),
+\end{aligned}
+$$
+
+where the first line is due to Lemma 3.2, the second line is from Lemma 1.14 along with the observation that $E_v(x|y)$ is of the form $P_{A_y,v}$, the third line comes from the second part of Lemma 3.7, and the last one uses Lemma 1.15. Thus, by recalling that we can choose $B = C_2\sqrt{m}$ for a sufficiently large constant $C_2$, the set $\mathcal{D}_R$ is $(\gamma, b)$-correlated with respect to $R$, where $\gamma := \Omega(d)^{-(2m+1)(1/2-c)} e^{O(m)}/(1-\rho^2)$ and $b := e^{O(m)}/(1-\rho^2)$. The proof is concluded by applying Lemma 1.12 with $\gamma' = \gamma$. $\square$
+
+We conclude this section with a note on the model and existing algorithmic results (extending the relevant discussion of Section 1.1).
+
+**Remark 3.11** (Comparison of SQ Lower Bound to Existing Upper Bounds). We remark that the model used in Theorem 1.5 (i.e., having a regressor with norm at most one and additive noise with small variance) is considered in both recent works [KKK19, RY20a] that provided list-decoding algorithms for the problem. In particular, these works give the following upper bounds:
+
+• [KKK19] considers the model where $\|\beta\|_2 \le 1$ and gives an algorithm that for every $\epsilon > 0$, runs in time $(d/(\alpha\epsilon))^{O(1/\alpha^8)}$ and outputs a list of size $O(1/\alpha)$ containing a $\hat{\beta}$ such that $\|\hat{\beta} - \beta\|_2 \le O(\sigma/\alpha) + \epsilon$. Note that this guarantee is better than the trivial upper bound of 1 only if $\sigma = O(\alpha)$. To achieve error 1/4, this algorithm runs in time $(d/\alpha)^{O(1/\alpha^8)}$. On the other hand, our lower bound for the complexity of any SQ algorithm becomes $\alpha d^{\Omega(1/\sqrt{\alpha})}$.
+
+• [RY20a] does not impose any constraint on $\|\beta\|_2$ and gives an algorithm that runs in time $(\|\beta\|_2/\sigma)^{\log(1/\alpha)} d^{O(1/\alpha^4)}$ and outputs a list of size $O((\|\beta\|_2/\sigma)^{\log(1/\alpha)})$ including a $\hat{\beta}$ with the guarantee that $\|\hat{\beta} - \beta\|_2 \le O(\sigma/\alpha^{3/2})$. For the special case where $\|\beta\|_2 \le 1$ (and $\sigma = O(\alpha^{3/2})$
\ No newline at end of file
diff --git a/samples/texts/568081/page_17.md b/samples/texts/568081/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..e6bd5d6370c38422e6527113f4bb184264335b4f
--- /dev/null
+++ b/samples/texts/568081/page_17.md
@@ -0,0 +1,29 @@
+in order for the error guarantee to be meaningful), this algorithm can achieve error 1/4 in time $(1/\alpha^{3/2})^{\log(1/\alpha)} d^{O(1/\alpha^4)}$. In comparison, our lower bound becomes $\alpha^{3/2} d^{\Omega(1/\sqrt{\alpha})}$.
+
+# 4 Duality for Moment Matching: Proof of Theorem 3.6
+
+We now prove the existence of a bounded distribution $F_y$ such that the mixture $\alpha_y \delta_y + (1-\alpha_y)F_y$ matches the first $2m$ moments with $\mathcal{N}(0, 1)$. The proof follows a non-constructive argument based on the duality between the space of moments and the space of non-negative polynomials.
+
+Let $B > 0$ and $m \in \mathbb{Z}_+$. Let $\mathcal{P}(m)$ denote the class of all polynomials $p : \mathbb{R} \to \mathbb{R}$ with degree at most $m$. Let $\mathcal{P}^{\ge 0}(2m, B)$ be the class of polynomials that can be represented in either the form $p(t) = (\sum_{i=0}^m a_i t^i)^2$ or the form $p(t) = (B^2 - t^2)(\sum_{i=0}^{m-1} b_i t^i)^2$. The intuition for $\mathcal{P}^{\ge 0}(2m, B)$ is that every polynomial of degree at most $2m$ that is non-negative in $[-B, B]$ can be written as a finite sum of polynomials from $\mathcal{P}^{\ge 0}(2m, B)$. By slightly abusing notation, for a polynomial $p(t) = \sum_{i=0}^m p_i t^i$, we also use $p$ to denote the vector in $\mathbb{R}^{m+1}$ consisting of the coefficients $(p_0, \dots, p_m)$. The following classical result characterizes when a vector is realizable as the moment sequence of a distribution with support in $[-B, B]$ (for simplicity, we focus on matching an even number of moments in the rest of this section).
+
+**Theorem 4.1 (Theorem 16.1 of [KS53])**. Let $B > 0$, $k \in \mathbb{Z}_+$, and $x = (x_0, x_1, \dots, x_{2k}) \in \mathbb{R}^{2k+1}$ with $x_0 = 1$. There exists a distribution with support in $[-B, B]$ having as its first $2k$ moments the sequence $(x_1, \dots, x_{2k})$ if and only if for all $p \in \mathcal{P}^{\ge 0}(2k, B)$ it holds that $\sum_{i=0}^{2k} x_i p_i \ge 0$.
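The "only if" direction of Theorem 4.1 is the easy one: every $p \in \mathcal{P}^{\ge 0}(2k, B)$ is pointwise non-negative on $[-B, B]$, so its expectation under any distribution supported there is non-negative. An illustrative sketch with a hypothetical four-point distribution (support points and weights chosen arbitrarily):

```python
import random

random.seed(1)
B = 2.0
points = [-1.5, -0.2, 0.7, 1.9]   # hypothetical support, contained in [-B, B]
weights = [0.1, 0.4, 0.3, 0.2]    # probabilities summing to 1

def expect(f):
    """E[f(X)] under the discrete distribution above."""
    return sum(w * f(t) for t, w in zip(points, weights))

def poly(coeffs):
    return lambda t: sum(c * t ** i for i, c in enumerate(coeffs))

vals = []
for _ in range(200):
    a = [random.uniform(-1, 1) for _ in range(4)]
    q, r = poly(a), poly(a[:3])
    vals.append(expect(lambda t: q(t) ** 2))                    # p(t) = (sum_i a_i t^i)^2
    vals.append(expect(lambda t: (B * B - t * t) * r(t) ** 2))  # p(t) = (B^2 - t^2)(...)^2
min_val = min(vals)
```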
+
+As we require the distribution to be discrete, we prove the following result using Theorem 4.1:
+
+**Proposition 4.2.** Fix $y \in \mathbb{R}$, $\alpha_y \in (0, 1)$, $B > 0$, and $m \in \mathbb{Z}_+$. There exists a discrete distribution $F_y$ supported on at most $2m + 1$ points in $[-B, B]$ such that $\alpha_y \delta_y + (1-\alpha_y)F_y$ matches the first $2m$ moments with $\mathcal{N}(0, 1)$ if and only if $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[p(X)] \ge \alpha_y p(y)$ for all $p \in \mathcal{P}^{\ge 0}(2m, B)$.
+
+The proof of Proposition 4.2 is deferred to Section 4.1. To prove Theorem 3.6, we need to establish the condition of Proposition 4.2. To this end, we first need the following two technical lemmas, whose proofs are given in Sections 4.2 and 4.3.
+
+**Lemma 4.3.** Let $m \in \mathbb{Z}_+$. If $B \ge C\sqrt{m}$ for some sufficiently large constant $C > 0$, then for every $q \in \mathcal{P}(m)$, it holds that $B^2 \mathbf{E}_{X \sim \mathcal{N}(0,1)}[q^2(X)] \ge 2 \mathbf{E}_{X \sim \mathcal{N}(0,1)}[X^2 q^2(X)]$.
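Lemma 4.3 can be spot-checked with exact Gaussian moments ($\mathbf{E}[X^i] = (i-1)!!$ for even $i$, zero for odd $i$), expanding $q^2$ and $X^2 q^2$ by coefficient convolution. The degree and the constant $C = 4$ below are ad hoc choices for illustration:

```python
import random

def gaussian_moment(i):
    """E[X^i] for X ~ N(0,1): zero for odd i, (i-1)!! for even i."""
    if i % 2 == 1:
        return 0
    m = 1
    for j in range(1, i, 2):
        m *= j
    return m

def conv(p, q):
    """Coefficient list of the product of two polynomials."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def expect_poly(coeffs):
    """E[p(X)] for X ~ N(0,1), p given by its coefficient list."""
    return sum(c * gaussian_moment(i) for i, c in enumerate(coeffs))

random.seed(2)
deg = 8
B = 4 * deg ** 0.5            # B = C sqrt(m) with the ad hoc choice C = 4
ok = True
for _ in range(200):
    q = [random.uniform(-1, 1) for _ in range(deg + 1)]   # random q of degree <= m
    q2 = conv(q, q)
    x2q2 = [0.0, 0.0] + q2                                # coefficients of x^2 q^2(x)
    if B * B * expect_poly(q2) < 2 * expect_poly(x2q2):
        ok = False
```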
+
+**Lemma 4.4.** Let $y \in \mathbb{R}$, $\alpha \in (0, 1/2)$, $m \in \mathbb{Z}_+$, and $\alpha_y = \sqrt{\alpha} \exp(-y^2(1-\alpha)/2)$. Suppose $m \le C/\sqrt{\alpha}$ for some sufficiently small constant $C > 0$. Then for all $r \in \mathcal{P}(m)$, $r \ne 0$: $r^2(y)/\mathbf{E}_{X \sim \mathcal{N}(0,1)}[r^2(X)] \le 1/(2\alpha_y)$.
+
+*Proof of Theorem 3.6.* By Proposition 4.2, it remains to show that if $B \ge C_2\sqrt{m}$, then the condition $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[p(X)] \ge \alpha_y p(y)$ holds for all $p \in \mathcal{P}^{\ge 0}(2m, B)$. Thus, it suffices to ensure that the following two inequalities hold for $X \sim \mathcal{N}(0, 1)$:
+
+$$
+\sup_{r \in \mathcal{P}(m), r \ne 0} \frac{r^2(y)}{\mathbf{E}[r^2(X)]} \le \frac{1}{\alpha_y} \quad \text{and} \quad
+\sup_{q \in \mathcal{P}(m-1), q \ne 0} \frac{(B^2 - y^2)q^2(y)}{\mathbf{E}[(B^2 - X^2)q^2(X)]} \le \frac{1}{\alpha_y},
+\quad (5)
+$$
+
+where we use Lemma 4.3 to show that $\mathbf{E}[(B^2 - X^2)q^2(X)] > 0$ for all non-zero polynomials $q \in \mathcal{P}(m-1)$. The first expression can be bounded using Lemma 4.4 when $m \le C_1/\sqrt{\alpha}$. We now focus
\ No newline at end of file
diff --git a/samples/texts/568081/page_18.md b/samples/texts/568081/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..76011716138142eb7b8e544b95e33f98b3f5dfbc
--- /dev/null
+++ b/samples/texts/568081/page_18.md
@@ -0,0 +1,32 @@
+on the second expression. By Lemma 4.3, $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[(B^2 - X^2)q^2(X)] \geq 0.5 \mathbf{E}_{X \sim \mathcal{N}(0,1)}[B^2q^2(X)]$. Therefore, we have that
+
+$$
+\begin{align*}
+\sup_{q \in \mathcal{P}(m-1), q \neq 0} \frac{(B^2 - y^2)q^2(y)}{\mathbf{E}_{X \sim \mathcal{N}(0,1)}[(B^2 - X^2)q^2(X)]} &\le \sup_{q \in \mathcal{P}(m-1), q \neq 0} \frac{B^2q^2(y)}{\mathbf{E}_{X \sim \mathcal{N}(0,1)}[(B^2 - X^2)q^2(X)]} \\
+&\le \sup_{q \in \mathcal{P}(m-1), q \neq 0} \frac{B^2q^2(y)}{\mathbf{E}_{X \sim \mathcal{N}(0,1)}[0.5B^2q^2(X)]} = 2 \sup_{q \in \mathcal{P}(m-1), q \neq 0} \frac{q^2(y)}{\mathbf{E}_{X \sim \mathcal{N}(0,1)}[q^2(X)]},
+\end{align*}
+$$
+
+where the first inequality uses that the denominator is positive and $y^2q^2(y) \ge 0$ and the second inequality uses that $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[(B^2 - X^2)q^2(X)] \ge 0.5 \mathbf{E}_{X \sim \mathcal{N}(0,1)}[B^2q^2(X)]$. The expression above is of the same form as the first expression in Equation (5), and thus is also bounded above by $1/\alpha_y$ when $m \le C_1/\sqrt{\alpha}$ using Lemma 4.4. This completes the proof of Theorem 3.6. $\square$
+
+## 4.1 Proof of Proposition 4.2
+
+We require the following result stating that for every distribution $Q$ with bounded support, there exists a discrete distribution $P$ with bounded support that matches the low-degree moments of $Q$.
+
+**Lemma 4.5.** Let $B > 0$, $k \in \mathbb{Z}_+$, and $Q$ be any distribution with support in $[-B, B]$. Then there exists a discrete distribution $P$ with the following properties: (i) the support of $P$ is contained in $[-B, B]$, (ii) the first $k$ moments of $P$ agree with the first $k$ moments of $Q$, and (iii) $P$ is supported on at most $k+1$ points.
+
+*Proof.* Let $\mathcal{Q}$ be the set of distributions on $\mathbb{R}$ that are supported in $[-B, B]$ and let $\mathcal{Q}' \subset \mathcal{Q}$ be the set of Dirac delta distributions supported in $[-B, B]$, i.e., $\mathcal{Q}' = \{\delta_y : y \in [-B, B]\}$. Let $C \subset \mathbb{R}^k$ and $C' \subset \mathbb{R}^k$ be the sets of all vectors $(x_1, \dots, x_k)$ whose coordinates $x_i$ are the moments of a distribution in $\mathcal{Q}$ and $\mathcal{Q}'$ respectively, i.e.,
+
+$$ C := \{x \in \mathbb{R}^k : \exists Q \in \mathcal{Q} : \forall i \in [k], x_i = \mathbf{E}_{X \sim Q}[X^i]\}, $$
+
+$$ C' := \{x \in \mathbb{R}^k : \exists Q' \in \mathcal{Q}' : \forall i \in [k], x_i = \mathbf{E}_{X \sim Q'}[X^i]\}. $$
+
+Note that there is a bijection between $C'$ and $Q'$. We now recall the following classical result stating convexity properties of $C$ and its relation with $C'$. We say a set $M$ is a convex hull of a set $M'$ if every $x \in M$ can be written as $x = \sum_{i=1}^{j} \lambda_i y_i$, where $j \in \mathbb{Z}_+$, $\sum_{i=1}^{j} \lambda_i = 1$, and for all $i \in [j]: \lambda_i \ge 0$, $y_i \in M'$.
+
+**Lemma 4.6 (Theorems 7.2 and 7.3 of [KS53]).** $C$ is convex, closed, and bounded. Moreover, $C$ is a convex hull of $C'$.
+
+Let $x^* = (x_1^*, \dots, x_k^*)$ be the first $k$ moments of $Q$. Since $x^* \in C$, Carathéodory's theorem and Lemma 4.6 imply that $x^*$ can be written as a convex combination of at most $k+1$ elements of $C'$. This means that there is a distribution, given by a convex combination of at most $k+1$ Dirac delta distributions in $\mathcal{Q}'$, whose first $k$ moments are exactly $x^*$. This completes the proof. $\square$
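A concrete instance of Lemma 4.5 (an illustrative example, not taken from the paper): the uniform distribution on $[-1, 1]$ has its first five moments matched by the three-point Gauss–Legendre distribution supported on $0, \pm\sqrt{3/5}$ with probabilities $8/18, 5/18, 5/18$:

```python
import math

# 3-point Gauss-Legendre nodes and (normalized) weights on [-1, 1]
nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
probs = [5.0 / 18.0, 8.0 / 18.0, 5.0 / 18.0]

def discrete_moment(i):
    return sum(p * x ** i for p, x in zip(probs, nodes))

def uniform_moment(i):
    """E[X^i] for X ~ Uniform[-1, 1]."""
    return 0.0 if i % 2 == 1 else 1.0 / (i + 1)
```

The first five moments agree exactly, while the sixth already differs, as expected for a three-point (i.e., $k+1 = 6$ would be needed for $k = 5$) construction.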
+
+We can now prove the main result of this section.
+
+*Proof of Proposition 4.2.* Let $X \sim \mathcal{N}(0, 1)$. We note that $F_y$ should have the moment sequence $x = (x_1, \dots, x_{2m})$, where $x_i = (\mathbf{E}_{X \sim \mathcal{N}(0,1)}[X^i] - \alpha_y y^i)/(1-\alpha_y)$ for $i \in [2m]$. Theorem 4.1 implies that this happens if and only if for all $p = (p_0, \dots, p_{2m}) \in \mathcal{P}^{\ge 0}(2m, B)$, we have that $\sum_{i=0}^{2m} x_i p_i \ge 0$. The desired expression follows by noting that $\sum_{i=0}^{2m} x_i p_i = \left(\sum_{i=0}^{2m} p_i \mathbf{E}_{X \sim \mathcal{N}(0,1)}[X^i] - \alpha_y p_i y^i\right)/(1-\alpha_y) = (\mathbf{E}_{X \sim \mathcal{N}(0,1)}[p(X)] - \alpha_y p(y))/(1-\alpha_y)$. The fact that $F_y$ can be taken to be discrete (supported on at most $2m+1$ points) follows from Lemma 4.5. $\square$
\ No newline at end of file
diff --git a/samples/texts/568081/page_19.md b/samples/texts/568081/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..8ae9e03060376284fb45374cf86a0a992da338bc
--- /dev/null
+++ b/samples/texts/568081/page_19.md
@@ -0,0 +1,40 @@
+## 4.2 Proof of Lemma 4.3
+
+The proof of Lemma 4.3 is a relatively straightforward application of Hölder's inequality and the Gaussian Hypercontractivity Theorem (stated below). For $p \in (0, \infty)$, we define the $L^p$-norm of a random variable $X$ to be $\|X\|_{L^p} := (\mathbf{E}[|X|^p])^{1/p}$.
+
+**Fact 4.7 (Gaussian Hypercontractivity [Bog98, Nel73]).** Let $X \sim \mathcal{N}(0, 1)$. If $p \in \mathcal{P}(d)$ and $t \ge 2$, then $\|p(X)\|_{L^t} \le (t-1)^{d/2} \|p(X)\|_{L^2}$.
+
+*Proof of Lemma 4.3.* Let $X \sim \mathcal{N}(0, 1)$. We may assume that $q$ is a non-zero polynomial. It then suffices to show that the following expression is at most $B/\sqrt{2}$:
+
+$$
+\begin{aligned}
+\sup_{q \in \mathcal{P}(m), q \neq 0} \sqrt{\frac{\mathbf{E}[X^2 q^2(X)]}{\mathbf{E}[q^2(X)]}} &\le \sup_{q \in \mathcal{P}(m), q \neq 0} \sqrt{\frac{(\mathbf{E}[(X^2)^{m+1}])^{1/(m+1)} (\mathbf{E}[(q^2(X))^{\frac{m+1}{m}}])^{\frac{m}{m+1}}}{\mathbf{E}[q^2(X)]}} \\
+&= \sup_{q \in \mathcal{P}(m), q \neq 0} \frac{\|X\|_{L^{2m+2}} \|q(X)\|_{L^{(2m+2)/m}}}{\|q(X)\|_{L^2}},
+\end{aligned}
+$$
+
+where the first step uses Hölder's inequality. Using standard concentration bounds for the standard Gaussian (or Fact 4.7 with $p(x) = x$), we get that $\|X\|_{L^{2m+2}} = O(\sqrt{m})$. Gaussian Hypercontractivity (Fact 4.7) implies that for any polynomial of degree at most $m$ and $r > 2$, $\|q(X)\|_{L^r} \le (r-1)^{m/2} \|q(X)\|_{L^2}$. For $r = (2m+2)/m$, we get that
+
+$$ \frac{\|q(X)\|_{L^{(2m+2)/m}}}{\|q(X)\|_{L^2}} \le \left( \frac{2m+2}{m} - 1 \right)^{\frac{m}{2}} = \left( 1 + \frac{2}{m} \right)^{\frac{m}{2}} \le \exp(1). $$
+
+Therefore, $B \ge C\sqrt{m}$ suffices for a sufficiently large constant $C$. $\square$
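The final numerical step, $(1 + 2/m)^{m/2} \le \exp(1)$, follows from $1 + x \le e^x$ and is easy to confirm:

```python
import math

# (1 + 2/m)^(m/2) increases monotonically toward e as m grows
vals = [(1 + 2.0 / m) ** (m / 2.0) for m in range(1, 200)]
```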
+
+## 4.3 Proof of Lemma 4.4
+
+We first recall the result on the tails of Hermite polynomials.
+
+**Lemma 4.8 ([Kra04]).** Let $h_k$ be the k-th normalized probabilist's Hermite polynomial. Then $\max_{x \in \mathbb{R}} h_k^2(x) e^{-x^2/2} = O(k^{-1/6})$.
+
+For completeness, we give an explicit calculation that translates the result of [Kra04] in our setting.
+
+*Proof of Lemma 4.8.* We split the analysis into two cases. First, suppose that $k < 6$. As $h_k(\cdot)$ is a polynomial of constant degree, $\max_{x \in \mathbb{R}} h_k^2(x) \exp(-x^2/2)$ is a constant. For the rest of the proof, we assume that $k \ge 6$.
+
+For brevity, we only consider the case where $k$ is even; the case of odd $k$ is similar. Let $H_k(\cdot)$ be the physicist's Hermite polynomial. Recall that $h_k(\cdot)$ and $H_k(\cdot)$ are related via the change of variable $H_k(x) = \sqrt{2^k k!}\, h_k(\sqrt{2}x)$.
+
+[Kra04, Theorem 1] implies the following:
+
+$$ \max_{x \in \mathbb{R}} ((H_k(x))^2 e^{-x^2}) = O\left(\sqrt{k} k^{-1/6} \binom{k}{0.5k} k!\right). \quad (6) $$
+
+From Equation (6) we obtain:
+
+$$ \max_{x \in \mathbb{R}} 2^k k! h_k^2(\sqrt{2}x) e^{-x^2} = \max_{x \in \mathbb{R}} 2^k k! h_k^2(x) e^{-x^2/2} = O\left(\sqrt{k} k^{-1/6} \binom{k}{0.5k} k!\right). $$
\ No newline at end of file
diff --git a/samples/texts/568081/page_2.md b/samples/texts/568081/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..0288aed38f2e8979704c29f9625e97218f464ae3
--- /dev/null
+++ b/samples/texts/568081/page_2.md
@@ -0,0 +1,15 @@
+# 1 Introduction
+
+## 1.1 Background and Motivation
+
+Linear regression is one of the oldest and most fundamental statistical tasks, with numerous applications in the sciences [RL87, Die01, McD09]. In the standard setup, the data are labeled examples $(x^{(i)}, y^{(i)})$, where the examples (covariates) $x^{(i)}$ are i.i.d. samples from a distribution $D_x$ on $\mathbb{R}^d$ and the labels $y^{(i)}$ are noisy evaluations of a linear function. More specifically, each label is of the form $y^{(i)} = \beta \cdot x^{(i)} + \eta^{(i)}$ for an unknown target regression vector $\beta \in \mathbb{R}^d$, where $\eta^{(i)}$ is the observation noise. The objective is to approximately recover the hidden regression vector. In this basic setting, linear regression is well-understood. For example, under the Gaussian distribution, the least-squares estimator is known to be statistically and computationally efficient.
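In the uncorrupted setting just described, ordinary least squares indeed recovers $\beta$; a minimal pure-Python sketch for $d = 2$ (the parameter values are hypothetical), solving the normal equations $(X^\top X)\hat{\beta} = X^\top y$ by Cramer's rule:

```python
import random

random.seed(4)
beta = [0.6, -0.3]            # hypothetical target regression vector, d = 2
n = 20000
xtx = [[0.0, 0.0], [0.0, 0.0]]   # running X^T X
xty = [0.0, 0.0]                 # running X^T y
for _ in range(n):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    y = beta[0] * x[0] + beta[1] * x[1] + random.gauss(0, 0.1)  # noisy label
    for i in range(2):
        xty[i] += x[i] * y
        for j in range(2):
            xtx[i][j] += x[i] * x[j]
# solve the 2x2 normal equations by Cramer's rule
det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
beta_hat = [(xty[0] * xtx[1][1] - xty[1] * xtx[0][1]) / det,
            (xtx[0][0] * xty[1] - xtx[1][0] * xty[0]) / det]
```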
+
+Unfortunately, classical efficient estimators inherently fail in the presence of even a very small fraction of adversarially corrupted data. In several applications of modern data analysis, including machine learning security [BNJT10, BNL12, SKL17, DKK$^{+}$19] and exploratory data analysis, e.g., in biology [RPW$^{+}$02, PLJD10, LAT$^{+}$08], typical datasets contain arbitrary or adversarial outliers. Hence, it is important to understand the algorithmic possibilities and fundamental limits of learning and inference in such settings. Robust statistics focuses on designing estimators tolerant to a small amount of contamination, where the outliers are the minority of the dataset. Classical work in this field [HRRS86, HR09] developed robust estimators for various basic tasks, alas with exponential runtime. More recently, a line of work in computer science, starting with [DKK$^{+}$16, LRV16], developed the first computationally efficient robust learning algorithms for various high-dimensional tasks. Subsequently, there has been significant progress in algorithmic robust statistics by several communities, see [DK19] for a survey on the topic.
+
+In this paper, we study high-dimensional robust linear regression in the presence of a majority of adversarial outliers. As we explain below, in several applications, asking for a minority of outliers is too strong of an assumption. It is thus natural to ask what notion of learning can capture the regime when the clean data points (inliers) constitute the minority of the dataset. While outputting a single accurate hypothesis in this regime is information-theoretically impossible, one may be able to compute a small list of hypotheses with the guarantee that at least one of them is accurate. This relaxed notion is known as list-decodable learning [BBV08, CSV17], formally defined below.
+
+**Definition 1.1 (List-Decodable Learning).** Given a parameter $0 < \alpha < 1/2$ and a distribution family $\mathcal{D}$ on $\mathbb{R}^d$, the algorithm specifies $n \in \mathbb{Z}_+$ and observes $n$ i.i.d. samples from a distribution $E = \alpha D + (1-\alpha)N$, where $D$ is an unknown distribution in $\mathcal{D}$ and $N$ is arbitrary. We say $D$ is the distribution of inliers, $N$ is the distribution of outliers, and $E$ is a $(1-\alpha)$-corrupted version of $D$. Given sample access to a $(1-\alpha)$-corrupted version of $D$, the goal is to output a “small” list of hypotheses $\mathcal{L}$ at least one of which is (with high probability) close to the target parameter of $D$.
+
+We note that a list of size $O(1/\alpha)$ typically suffices; an algorithm whose list size is poly$(1/\alpha)$, or even a worse function of $1/\alpha$ (but independent of the dimension $d$), is also considered acceptable.
+
+Natural applications of list-decodable learning include crowdsourcing, where a majority of participants could be unreliable [SVC16, MV18], and semi-random community detection in stochastic block models [CSV17]. List-decoding is also useful in the context of semi-verified learning [CSV17, MV18], where a learner can audit a very small amount of trusted data. If the trusted dataset is too small to directly learn from, using a list-decodable learning procedure, one can pinpoint a candidate hypothesis consistent with the verified data. Importantly, list-decodable learning generalizes the task of learning mixture models, see, e.g., [DeV89, JJ94, ZJD16, LL18, KC20, CLS20, DK20] for the case of linear regression studied here. Roughly speaking, by running a list-decodable estimation
\ No newline at end of file
diff --git a/samples/texts/568081/page_20.md b/samples/texts/568081/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..5821908d6c15becc65b557f3c98df0ce705499f9
--- /dev/null
+++ b/samples/texts/568081/page_20.md
@@ -0,0 +1,46 @@
+This implies the following:
+
+$$
+\max_{x \in \mathbb{R}} h_k^2(x) e^{-x^2/2} = O \left( k^{-1/6} \sqrt{k} \binom{k}{0.5k} 2^{-k} \right) = O(k^{-1/6}),
+$$
+
+where we use that $\binom{k}{0.5k} 2^{-k} \sqrt{k} = O(1)$.
+
+□
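The last step uses the Stirling-type estimate $\binom{k}{k/2} 2^{-k} = \Theta(1/\sqrt{k})$; indeed $\sqrt{k}\binom{k}{k/2}2^{-k}$ increases toward $\sqrt{2/\pi} \approx 0.798$, as a quick numerical check confirms:

```python
import math

# sqrt(k) * C(k, k/2) * 2^{-k} for even k, approaching sqrt(2/pi) from below
vals = {k: math.sqrt(k) * math.comb(k, k // 2) * 2.0 ** (-k)
        for k in range(2, 200, 2)}
```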
+
+*Proof of Lemma 4.4.* Let $h_i$ be the $i$-th normalized probabilist’s Hermite polynomial. Since $r$ is a polynomial of degree at most $m$ and $\{h_i, i \in [m]\}$ form a basis for $\mathcal{P}(m)$, we can represent $r(x) = \sum_{i=1}^{m} a_i h_i(x)$ for some $a_i \in \mathbb{R}$. Using orthonormality of $h_i$ under the Gaussian measure, we get that $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[r^2(X)] = \sum_{i=1}^{m} a_i^2$. Since $r$ is a non-zero polynomial, we have that $\sum_{i=1}^{m} a_i^2 > 0$. We thus have that
+
+$$
+\begin{align*}
+\sup_{r \in \mathcal{P}(m), r \neq 0} \frac{r^2(y)}{\mathbf{E}_{X \sim \mathcal{N}(0,1)}[r^2(X)]} &= \sup_{a_1, \dots, a_m \in \mathbb{R}, \sum_{i=1}^m a_i^2 > 0} \frac{\left(\sum_{i=1}^m a_i h_i(y)\right)^2}{\sum_{i=1}^m a_i^2} \\
+&\le \sup_{a_1, \dots, a_m \in \mathbb{R}, \sum_{i=1}^m a_i^2 > 0} \frac{\left(\sum_{i=1}^m a_i^2\right) \left(\sum_{i=1}^m h_i^2(y)\right)}{\sum_{i=1}^m a_i^2} \\
+&= \sum_{i=1}^m h_i^2(y),
+\end{align*}
+$$
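The supremum above is the standard fact $\sup_{a \neq 0} \left(\sum_i a_i h_i(y)\right)^2 / \sum_i a_i^2 = \sum_i h_i^2(y)$, by Cauchy–Schwarz with equality at $a_i = h_i(y)$; a numerical sketch via the Hermite recurrence (the degree and evaluation point are arbitrary illustrative choices):

```python
import math, random

def h_values(m, y):
    """h_1(y), ..., h_m(y) via He_{n+1}(y) = y He_n(y) - n He_{n-1}(y), h_n = He_n / sqrt(n!)."""
    he = [1.0, y]
    for n in range(1, m):
        he.append(y * he[-1] - n * he[-2])
    return [he[i] / math.sqrt(math.factorial(i)) for i in range(1, m + 1)]

random.seed(3)
m, y = 6, 1.3
h = h_values(m, y)
bound = sum(v * v for v in h)          # sum_i h_i(y)^2
ratios = []
for _ in range(500):
    a = [random.gauss(0, 1) for _ in range(m)]
    num = sum(ai * hi for ai, hi in zip(a, h)) ** 2
    ratios.append(num / sum(ai * ai for ai in a))
attained = sum(hi * hi for hi in h) ** 2 / sum(hi * hi for hi in h)  # choice a_i = h_i(y)
```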
+
+Therefore, we need to show that, for all $y \in \mathbb{R}$, $\sum_{i=1}^{m} \alpha_y h_i^2(y) \le 1/2$ whenever $m \le C/\sqrt{\alpha}$ for a sufficiently small constant $C > 0$. We will now split the analysis into two cases:
+
+**Case 1:** $|y| \le 1/\sqrt{\alpha}$. Using Lemma 4.8 and the assumption that $|y|^2\alpha \le 1$, we can bound the desired expression as follows:
+
+$$
+\begin{align*}
+\max_{|y| \le 1/\sqrt{\alpha}} \alpha_y h_i^2(y) &= \max_{|y| \le 1/\sqrt{\alpha}} \sqrt{\alpha} \exp(y^2\alpha/2) \exp(-y^2/2) h_i^2(y) \\
+&\le \sqrt{\alpha e} \sup_{y \in \mathbb{R}} \exp(-y^2/2) h_i^2(y) \\
+&= O(\sqrt{\alpha}\, i^{-1/6}).
+\end{align*}
+$$
+
+Therefore, summing over $i$, we get the following bound on $\sum_{i=1}^{m} \alpha_y h_i^2(y)$:
+
+$$
+\sum_{i=1}^{m} \alpha_y h_i^2(y) = O\left(\sqrt{\alpha} \sum_{i=1}^{m} i^{-1/6}\right) = O(\sqrt{\alpha}\, m^{5/6}).
+$$
+
+The last expression is less than $1/2$ when $m = O(1/\alpha^{3/5})$.
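The summation step uses $\sum_{i=1}^{m} i^{-1/6} = \Theta(m^{5/6})$ (compare $\int_0^m t^{-1/6}\,dt = \frac{6}{5}m^{5/6}$); numerically, the ratio of the partial sum to $m^{5/6}$ settles near $6/5$:

```python
partial = 0.0
checks = []
for i in range(1, 100001):
    partial += i ** (-1.0 / 6.0)
    if i % 10000 == 0:
        checks.append(partial / i ** (5.0 / 6.0))   # should hover near 6/5
```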
+
+**Case 2:** $|y| \ge 1/\sqrt{\alpha}$. We will use rather crude bounds here. We have the following explicit expression of $h_i(\cdot)$ (see, for example, [AAR99, Sze89]):
+
+$$
+|h_i(x)| = \left|\frac{He_i(x)}{\sqrt{i!}}\right| = \left|\sqrt{i!} \sum_{j=0}^{\lfloor i/2 \rfloor} \frac{(-1)^j}{j!\,(i-2j)!\,2^j}\, x^{i-2j}\right| \le \sqrt{i!}\, |x|^i \sum_{j=0}^{\lfloor i/2 \rfloor} \binom{i}{2j} |x|^{-2j} \\
+\le \sqrt{i!}\, |x|^i \left(1 + |x|^{-1}\right)^i \le i^{i/2}\, (2|x|)^i \quad \text{for all } |x| \ge 1,
+$$
+where the first inequality uses that $\frac{1}{j!\,(i-2j)!\,2^j} \le \binom{i}{2j}$, since $\frac{(2j)!}{j!\,2^j} = (2j-1)!! \le i!$ for $2j \le i$.
\ No newline at end of file
diff --git a/samples/texts/568081/page_21.md b/samples/texts/568081/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e4bbbd4cc1df17b8de2b9cd10da4386634d43de
--- /dev/null
+++ b/samples/texts/568081/page_21.md
@@ -0,0 +1,39 @@
+Therefore, we get the following relation for all $|y| > 1$, $\alpha < 0.5$, and $i \in \mathbb{N}$:
+
+$$
+\begin{aligned}
+\alpha_y h_i^2(y) &= \sqrt{\alpha} \exp(-y^2(1-\alpha)/2) h_i^2(y) \\
+&\leq \sqrt{\alpha} \exp(-y^2/4)\,(4i)^i y^{2i} \\
+&= \sqrt{\alpha} \exp(-y^2/4 + i \log(4iy^2)).
+\end{aligned}
+$$
+
+The expression above is at most $C'\sqrt{\alpha}$ for a constant $C' > 0$ for all $|y| \ge c'\sqrt{i \log i}$ for a constant $c' > 0$. The latter condition holds whenever $1/\sqrt{\alpha} \ge c'\sqrt{i \log i}$. It suffices that $i = O(1/\alpha^{0.9})$. Overall, we get the following bound when $m = O(1/\alpha^{0.9})$:
+
+$$ \sup_{|y|>1/\sqrt{\alpha}} \sum_{i=1}^{m} \alpha_y h_i^2(y) = O(\sqrt{\alpha}\, m). $$
+
+The last expression is less than $1/2$ when $m \le C/\sqrt{\alpha}$ for some constant $C > 0$. This completes the proof of Lemma 4.4. □
+
+# 5 Hypothesis Testing Version of List-Decodable Linear Regression
+
+**Organization.** We introduce Problem 5.2, which is a hypothesis testing problem related to the search problem we discussed in Section 3. We first show the SQ-hardness of Problem 5.2 in Theorem 5.3. In Section 5.2, we give an efficient reduction from Problem 5.2 to list-decodable linear regression, showing that Problem 5.2 is indeed not harder than list-decodable linear regression. In Section 6, we also show the hardness of Problem 5.2 against low-degree polynomial tests.
+
+We begin by formally defining a hypothesis testing problem.
+
+**Definition 5.1 (Hypothesis testing).** Let $D_0$ be a distribution and let $\mathcal{S} = \{D_u\}_{u \in S}$ be a set of distributions on $\mathbb{R}^d$ indexed by a set $S$. Let $\mu$ be a prior distribution on the index set $S$. We are given access (via i.i.d. samples or oracle) to an *underlying* distribution, where one of the two is true:
+
+• $H_0$: The underlying distribution is $D_0$.
+
+• $H_1$: First $u$ is drawn from $\mu$ and then the underlying distribution is set to be $D_u$.
+
+We say that a (randomized) algorithm solves the hypothesis testing problem if it succeeds with high constant probability (say, greater than 0.9).
+
+We now introduce the following hypothesis testing variant of the $(1-\alpha)$-contaminated linear regression problem:
+
+**Problem 5.2.** Let $\alpha \in (0, 1/2)$, $\rho \in (0, 1)$. Let $S$ be the set of $d$-dimensional nearly orthogonal vectors from Lemma 1.15. We are given access (via i.i.d. samples or oracle) to an *underlying* distribution where one of the two is true:
+
+• $H_0$: The underlying distribution is $R = N(0, I_d) \times N(0, 1/\alpha)$.
+
+• $H_1$: First, a vector $v$ is chosen uniformly at random from $S$. The underlying distribution is set to be $E_v$, i.e., the $(1-\alpha)$-additively corrupted linear model of Definition 1.2 with $\beta = \rho v$, $\sigma^2 = 1 - \rho^2$, and a fixed noise distribution $N_v$ as specified in Lemma 3.5.
+
+Using the reduction outlined in Lemma 5.9, it follows that $O(d/\alpha^3)$ samples suffice to solve Problem 5.2 when $\sigma \le O(\alpha/\sqrt{\log(1/\alpha)})$. On the other hand, the following result shows an SQ lower bound of $d^{\text{poly}(1/\alpha)}$.
\ No newline at end of file
diff --git a/samples/texts/568081/page_22.md b/samples/texts/568081/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..79310e6d4bb94a8c4b729e83f24acdaedee004ce
--- /dev/null
+++ b/samples/texts/568081/page_22.md
@@ -0,0 +1,27 @@
+**Theorem 5.3 (SQ Hardness of Problem 5.2).** Let $0 < c < 1/2$, $m \in \mathbb{Z}_+$ with $m \le c_1/\sqrt{\alpha}$ for some sufficiently small constant $c_1 > 0$, and $d = m^{\Omega(1/c)}$. Every SQ algorithm that solves Problem 5.2 either performs $2^{\Omega(d^{c/4})}$ queries or performs at least one query to $\text{STAT}(\Omega(d)^{-(2m+1)(1/4-c/2)}e^{O(m)}/\sigma)$.
+
+We note that the lower bound on the (appropriate) statistical dimension implies SQ hardness of the (corresponding) hypothesis testing problem. As Problem 5.2 differs slightly from the kind of hypothesis testing problems considered in [FGR$^{+}$17], we provide the proof of Theorem 5.3 in Section 5.1, where we introduce the relevant statistical dimension from [BBH$^{+}$20] (Definition 5.4 in this paper).
+
+## 5.1 Hardness of Problem 5.2 in the SQ Model
+
+We need the following variant of the statistical dimension from [BBH$^{+}$20], which is closely related to the hypothesis testing problems considered in this section. Since this is a slightly different definition from the statistical dimension (SD) used so far, we will assign the distinct notation (SDA) for it.
+
+**Notation** For $f: \mathbb{R} \to \mathbb{R}$, $g: \mathbb{R} \to \mathbb{R}$ and a distribution $D$, we define the inner product $\langle f, g \rangle_D = \mathbf{E}_{X \sim D}[f(X)g(X)]$ and the norm $\|f\|_D = \sqrt{\langle f, f \rangle_D}$.
+
+**Definition 5.4 (Statistical Dimension).** For the hypothesis testing problem of Definition 5.1, denote by $\bar{D}_u := D_u / D_0$ the relative density of $D_u$ with respect to $D_0$. We define the statistical dimension $\text{SDA}(\mathcal{S}, \mu, n)$ as follows:
+
+$$ \text{SDA}(\mathcal{S}, \mu, n) = \max \left\{ q \in \mathbb{N} : \mathbf{E}_{u,v \sim \mu} \left[ \left| \langle \bar{D}_u, \bar{D}_v \rangle_{D_0} - 1 \right| \,\middle|\, \mathcal{E} \right] \le \frac{1}{n} \text{ for all events } \mathcal{E} \text{ s.t. } \mathbf{Pr}_{u,v \sim \mu}[\mathcal{E}] \ge \frac{1}{q^2} \right\}. $$
+
+We will omit writing $\mu$ when it is clear from the context.
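To make the correlation quantity $\langle \bar{D}_u, \bar{D}_v \rangle_{D_0} - 1$ concrete, consider an illustrative one-dimensional example (not the distributions of this section): for $D_u = \mathcal{N}(\mu_u, 1)$ and $D_0 = \mathcal{N}(0, 1)$, a direct computation gives $\langle \bar{D}_u, \bar{D}_v \rangle_{D_0} = e^{\mu_u \mu_v}$, which numerical integration confirms:

```python
import math

def inner_product(mu_u, mu_v, lo=-30.0, hi=30.0, steps=40000):
    """Numerically integrate D_u(x) D_v(x) / D_0(x) for unit-variance Gaussians."""
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * dx
        du = math.exp(-0.5 * (x - mu_u) ** 2)
        dv = math.exp(-0.5 * (x - mu_v) ** 2)
        d0 = math.exp(-0.5 * x * x)
        total += du * dv / d0 * dx
    return total / math.sqrt(2 * math.pi)
```

Nearly orthogonal mean vectors thus give an inner product close to 1, i.e., a correlation $\langle \bar{D}_u, \bar{D}_v \rangle_{D_0} - 1$ close to 0.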
+
+**Theorem 5.5 (Theorem A.5 of [BBH$^{+}$20]).** Let $\mathcal{S} = \{D_u\}_{u \in S}$ vs. $D_0$ be a hypothesis testing problem with prior $\mu$ on $S$. If $\text{SDA}(\mathcal{S}, \mu, 3/t) > q$, then every SQ algorithm that solves the hypothesis testing problem either makes at least $q$ queries, or makes at least one query to $\text{STAT}(\sqrt{t})$.
+
+In order to prove Theorem 5.3, we will prove a lower bound on the SDA of Problem 5.2. As we will show later, Problem 5.2 is a special case of the following hypothesis testing problem:
+
+**Problem 5.6 (Non-Gaussian Component Hypothesis Testing).** Let $R$ be the joint distribution over pairs $(X, y) \in \mathbb{R}^{d+1}$ where $X \sim N(0, I_d)$ and $y \sim R(y)$ independently of $X$. Let $E_v$ be the joint distribution over pairs $(X, y) \in \mathbb{R}^{d+1}$ where the marginal on $y$ is again $R(y)$ but the conditional distribution $E_v(x|y)$ is of the form $P_{A_y,v}$ (with $P_{A_y,v}$ as in Definition 1.13). Define $\mathcal{S} = \{E_v\}_{v \in S}$ for $S$ being the set of $d$-dimensional nearly orthogonal vectors from Lemma 1.15, and let the hypothesis testing problem be distinguishing $R$ vs. $\mathcal{S}$ with prior $\mu$ being the uniform distribution on $S$.
+
+The following lemma translates the $(\gamma, \beta)$-correlation of $S$ to a lower bound for the statistical dimension of the hypothesis testing problem. The proof is very similar to that of Corollary 8.28 of [BBH$^{+}$20] but it is given below for completeness.
+
+**Lemma 5.7.** Let $0 < c < 1/2$ and $d, m \in \mathbb{Z}_+$ such that $d = m^{\Omega(1/c)}$. Consider the hypothesis testing problem of Problem 5.6 where for every $y \in \mathbb{R}$ the distribution $A_y$ matches the first $m$ moments with $\mathcal{N}(0, 1)$ and $\mathbf{E}_{y \sim R(y)}[\chi^2(A_y, \mathcal{N}(0, 1))] < \infty$. Then, for any $q \ge 1$,
+
+$$ \text{SDA} \left( \mathcal{S},\ \frac{\Omega(d)^{(m+1)(1/2-c)}}{\mathbf{E}_{y \sim R(y)} [\chi^2(A_y, \mathcal{N}(0,1))] \left(\frac{q^2}{2^{\Omega(d^{c/2})}} + 1\right)} \right) \ge q. $$
\ No newline at end of file
diff --git a/samples/texts/568081/page_23.md b/samples/texts/568081/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..b8e53e13d23b54f689c867047455f69e5d1610d8
--- /dev/null
+++ b/samples/texts/568081/page_23.md
@@ -0,0 +1,29 @@
+*Proof.* The first part is to calculate the correlation of the set $\mathcal{S}$ exactly as we did in the proof of Theorem 3.1. By Lemma 1.15, Lemma 1.14 and Lemma 3.2 we know that the set $\mathcal{S}$ is $(\gamma, \beta)$-correlated with $\gamma = \Omega(d)^{-(m+1)(1/2-c)} \mathbf{E}_{y\sim R(y)}[\chi^2(A_y, \mathcal{N}(0,1))]$ and $\beta = \mathbf{E}_{y\sim R(y)}[\chi^2(A_y, \mathcal{N}(0,1))]$.
+
+We next calculate the SDA according to Definition 5.4. We denote by $\bar{E}_v$ the ratio of the density of $E_v$ to the density of $R$. Note that the quantity $\langle \bar{E}_u, \bar{E}_v \rangle - 1$ used there is equal to $\langle \bar{E}_u - 1, \bar{E}_v - 1 \rangle$. Let $\mathcal{E}$ be an event with $\textbf{Pr}_{u,v\sim\mu}[\mathcal{E}] \geq 1/q^2$. For $d$ sufficiently large we have that
+
+$$
+\begin{align*}
+\underset{u,v \sim \mu}{\mathbf{E}} \left[|\langle \bar{E}_u, \bar{E}_v \rangle - 1| \mid \mathcal{E}\right] &\le \min \left(1, \frac{1}{|\mathcal{S}| \mathrm{Pr}[\mathcal{E}]}\right) \underset{y \sim R(y)}{\mathbf{E}} [\chi^2(A_y, \mathcal{N}(0,1))] \\
+&\quad + \max \left(0, 1 - \frac{1}{|\mathcal{S}| \mathrm{Pr}[\mathcal{E}]}\right) \frac{\underset{y \sim R(y)}{\mathbf{E}}[\chi^2(A_y, \mathcal{N}(0,1))]}{\Omega(d)^{(m+1)(1/2-c)}} \\
+&\le \underset{y \sim R(y)}{\mathbf{E}} [\chi^2(A_y, \mathcal{N}(0,1))] \left(\frac{q^2}{2^{\Omega(d^c)}} + \frac{1}{\Omega(d)^{(m+1)(1/2-c)}}\right) \\
+&= \underset{y \sim R(y)}{\mathbf{E}} [\chi^2(A_y, \mathcal{N}(0,1))] \frac{q^2 \Omega(d)^{(m+1)(1/2-c)} + 2^{\Omega(d^c)}}{2^{\Omega(d^c)} \Omega(d)^{(m+1)(1/2-c)}} \\
+&= \underset{y \sim R(y)}{\mathbf{E}} [\chi^2(A_y, \mathcal{N}(0,1))] \left(\frac{\Omega(d)^{(m+1)(1/2-c)}}{q^2 \Omega(d)^{(m+1)(1/2-c)}/2^{\Omega(d^c)} + 1}\right)^{-1} \\
+&= \underset{y \sim R(y)}{\mathbf{E}} [\chi^2(A_y, \mathcal{N}(0,1))] \left(\frac{\Omega(d)^{(m+1)(1/2-c)}}{q^2/2^{\Omega(d^{c/2})} + 1}\right)^{-1},
+\end{align*}
+$$
+
+where the first inequality uses that $\textbf{Pr}[u=v|\mathcal{E}] = \textbf{Pr}[u=v,\mathcal{E}]/\textbf{Pr}[\mathcal{E}]$ and bounds the numerator in two different ways: $\textbf{Pr}[u=v,\mathcal{E}]/\textbf{Pr}[\mathcal{E}] \leq \textbf{Pr}[u=v]/\textbf{Pr}[\mathcal{E}] = 1/(|\mathcal{S}|\textbf{Pr}[\mathcal{E}])$ and $\textbf{Pr}[u=v,\mathcal{E}]/\textbf{Pr}[\mathcal{E}] \leq \textbf{Pr}[\mathcal{E}]/\textbf{Pr}[\mathcal{E}] = 1$. $\square$
+
+We note that the lemma above and Theorem 5.5 show SQ hardness of Problem 5.6. In the remainder of this section, we will apply these results to Problem 5.2.
+
+**Corollary 5.8.** Let $0 < c < 1/2$, $m \in \mathbb{Z}_+$ with $m \le c_1/\sqrt{\alpha}$ for some sufficiently small constant $c_1 > 0$ and $d = m^{\Omega(1/c)}$. Consider the hypothesis testing problem of Problem 5.2. Then, for any $k < d^{c/4}$:
+
+$$
+\text{SDA} \left( D, \frac{\Omega(d)^{(2m+1)(1/2-c)}(1-\rho^2)}{e^{O(m)}} \right) \geq 100^k .
+$$
+
+*Proof.* We note that Problem 5.2 is a special case of Problem 5.6 (see Fact 3.3 and Lemma 3.5, which show that the conditional distributions are of the form $P_{A_y,v}$). In Lemma 5.7 we use $q = \sqrt{2^{\Omega(d^{c/2})}(n/n')}$ with $n' = n = \frac{\Omega(d)^{(2m+1)(1/2-c)}}{\mathbf{E}_{y\sim R(y)}[\chi^2(A_y, \mathcal{N}(0,1))]}$ to get that $\text{SDA}(D,n) > 100^k$ for $k < d^{c/4}$. The first part of Lemma 3.7 states that the distributions $A_y$ match the first $2m$ moments of $\mathcal{N}(0,1)$ for $m \le c_1/\sqrt{\alpha}$, and the second part implies that $\mathbf{E}_{y\sim R(y)}[\chi^2(A_y, \mathcal{N}(0,1))] = O(e^m)/(1-\rho^2)$. This completes the proof. $\square$
+
+We conclude by noting the hardness of Problem 5.6 and thus Problem 5.2 in the SQ model.
+The proof of Theorem 5.3 follows from Corollary 5.8 and Theorem 5.5.
\ No newline at end of file
diff --git a/samples/texts/568081/page_24.md b/samples/texts/568081/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..c8726ae736aeae3cd4722dd68032adb0eeb89af4
--- /dev/null
+++ b/samples/texts/568081/page_24.md
@@ -0,0 +1,30 @@
+## 5.2 Reduction of Hypothesis Testing to List-Decodable Linear Regression
+
+We now show that any list-decoding algorithm for robust linear regression can be efficiently used to solve Problem 5.2, that is, hypothesis testing efficiently reduces to list-decodable estimation. For a list $\mathcal{L}$ and $i \in [|\mathcal{L}|]$, we use $\mathcal{L}(i)$ to denote the $i$-th element of $\mathcal{L}$.
+
+**Lemma 5.9.** Let $d \in \mathbb{Z}_+$ with $d = 2^{\Omega(1/(1/2-c))}$. Consider the $(1-\alpha)$-corrupted linear regression model of Definition 1.2 with $\beta = \rho v$ for $v \in S^{d-1}$, $\rho \in (0,1)$, and $\sigma^2 = 1 - \rho^2$. There exists an algorithm `LIST_REGRESSION_TO_TESTING` that, given a list-decoding algorithm $\mathcal{A}$ with the guarantee of returning a list $\mathcal{L}$ of candidate vectors such that $\|\mathcal{L}(i) - \beta\|_2 \le \rho/4$ for some $i \in \{1, \dots, |\mathcal{L}|\}$, solves the hypothesis testing Problem 5.2 with probability at least $1 - |\mathcal{L}|^2 e^{-\Omega(d^{2c})}$. The running time of this reduction is quadratic in $|\mathcal{L}|$.
+
+*Proof.* The reduction is described in Algorithm 5.2. To see correctness, first assume that the
+
+**Algorithm 1 Reduction from Hypothesis Testing to List-Decodable Linear Regression.**
+
+$\mathcal{A}(\rho, (X_1, y_1), \dots, (X_n, y_n))$: List-decoding algorithm returning a list $\mathcal{L}$ such that $\|\mathcal{L}(i) - \beta\|_2 \le \rho/4$ for some $i \in \{1, \dots, |\mathcal{L}|\}$.
+
+1: **function** LIST_REGRESSION_TO_TESTING($\rho$, ($X_1, y_1$), . . . , ($X_{2n}, y_{2n}$))
+2: Split dataset into two equally sized parts {($X_i, y_i$)}$_{i=1}^n$, {($X'_i, y'_i$)}$_{i=1}^n$.
+3: Let $A$ be a random rotation matrix independent of data.
+4: $\mathcal{L}_1 \leftarrow \mathcal{A}(\rho, (X_1, y_1), \dots, (X_n, y_n))$.
+5: $\mathcal{L}_2 \leftarrow \mathcal{A}(\rho, (AX'_1, y'_1), \dots, (AX'_n, y'_n))$.
+6: **for** $i \leftarrow 1$ **to** $|\mathcal{L}_1|$ **do**
+7: **for** $j \leftarrow 1$ **to** $|\mathcal{L}_2|$ **do**
+8: **if** $\|\mathcal{L}_1(i)\|_2, \|\mathcal{L}_2(j)\|_2 \in [3\rho/4, 5\rho/4]$ and $\|\mathcal{L}_1(i) - A^T\mathcal{L}_2(j)\|_2 \le \rho/2$ **then**
+9: **return** $H_1$
+10: **return** $H_0$
+
+alternative hypothesis holds. We note that the rotated points $(AX'_1, y'_1), \dots, (AX'_n, y'_n)$ come from the Gaussian linear regression model of Definition 1.2 with $\beta' = A\beta$ as the regressor. Thus $\mathcal{A}$ finds lists $\mathcal{L}_1, \mathcal{L}_2$ such that there exist $i^* \in \{1, \dots, |\mathcal{L}_1|\}$ with $\|\mathcal{L}_1(i^*) - \beta\|_2 \le \rho/4$ and $j^* \in \{1, \dots, |\mathcal{L}_2|\}$ with $\|A^T\mathcal{L}_2(j^*) - \beta\|_2 \le \rho/4$, where we use that $A^T A = I$. Moreover, since we are considering the regression model with $\|\beta\|_2 = \rho$, the norms of $\mathcal{L}_1(i^*)$ and $A^T\mathcal{L}_2(j^*)$ must lie in $[3\rho/4, 5\rho/4]$. By the triangle inequality, $\|\mathcal{L}_1(i^*) - A^T\mathcal{L}_2(j^*)\|_2 \le \rho/2$, and thus the algorithm correctly outputs $H_1$.
+
+Now assume that the null hypothesis holds, where the marginal on points is $\mathcal{N}(0, I_d)$ and labels are independently distributed as $\mathcal{N}(0, 1/\alpha)$. Fix a pair $i \in [|\mathcal{L}_1|], j \in [|\mathcal{L}_2|]$ for which $\|\mathcal{L}_1(i)\|_2, \|\mathcal{L}_2(j)\|_2 \in [3\rho/4, 5\rho/4]$. Note that, by rotation invariance of the standard Gaussian distribution and the independence between covariates and response under the null distribution, the input {($AX'_i, y'_i$)}$_{i=1}^n$ for the second execution of the list-decoding algorithm is independent of $A$. Thus the list $\mathcal{L}_2$ is independent of $A$ (and also independent of $\mathcal{L}_1$). Therefore, $A^T\mathcal{L}_2(j)$ is a random vector distributed uniformly on the sphere of radius $\|\mathcal{L}_2(j)\|_2$ and independently of $\mathcal{L}_1(i)$. Recall that two independent random vectors in high dimension are almost orthogonal with high probability.
+
+**Lemma 5.10 (see, e.g., [CFJ13]).** Let $\theta$ be the angle between two random unit vectors uniformly distributed over $S^{d-1}$. Then we have that $\text{Pr}[|\cos\theta| \ge \Omega(d^{c-1/2})] \le e^{-\Omega(d^{2c})}$ for any $0 < c < 1/2$.
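As a quick numerical illustration of this concentration (a sketch for intuition only, not part of the argument; the dimensions, trial counts, and thresholds are arbitrary choices of ours), one can sample pairs of uniformly random unit vectors and inspect $|\cos\theta|$:

```python
import numpy as np

# Illustrative check of Lemma 5.10: two independent uniformly random unit
# vectors in R^d have |cos(theta)| concentrated at scale about 1/sqrt(d).
rng = np.random.default_rng(0)

def random_unit_vector(d, rng):
    """Uniform draw from the sphere S^{d-1} via a normalized Gaussian."""
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

for d in (100, 1000, 10000):
    cosines = [
        abs(float(np.dot(random_unit_vector(d, rng),
                         random_unit_vector(d, rng))))
        for _ in range(200)
    ]
    # The largest observed |cos(theta)| shrinks roughly like 1/sqrt(d).
    print(d, max(cosines), 1 / np.sqrt(d))
```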
+
+Taking a union bound over the $|\mathcal{L}_1| \cdot |\mathcal{L}_2|$ possible pairs of candidate vectors, we have that with
\ No newline at end of file
diff --git a/samples/texts/568081/page_25.md b/samples/texts/568081/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..e8127b4df7c8dd497793d2f00c573e63e381497d
--- /dev/null
+++ b/samples/texts/568081/page_25.md
@@ -0,0 +1,32 @@
+probability at least $1 - |\mathcal{L}_1| \cdot |\mathcal{L}_2|e^{-\Omega(d^{2c})}$, for all $i \in [|\mathcal{L}_1|], j \in [|\mathcal{L}_2|]$ we have that
+
+$$
+\begin{align*}
+\|\mathcal{L}_1(i) - A^T \mathcal{L}_2(j)\|_2 &= \sqrt{\|\mathcal{L}_1(i)\|_2^2 + \|A^T \mathcal{L}_2(j)\|_2^2 - 2(\mathcal{L}_1(i))^T (A^T \mathcal{L}_2(j))} \\
+&\geq \sqrt{2(3\rho/4)^2 (1 - \Omega(d^{c-1/2}))} > \rho,
+\end{align*}
+$$
+
+where in the last inequality we used that $d = 2^{\Omega(1/(1/2-c))}$. This concludes correctness for the case of the null hypothesis. $\square$
+
+We note that Algorithm 5.2 can be implemented in both models of computation that we consider: the SQ model and low-degree polynomial tests (Section 6). For the SQ model, we can simulate the queries on the rotated $X$ by modifying the queries to explicitly apply the rotation matrix $A$ to $X$. For the low-degree polynomial model, Remark 6.5 shows that this reduction can be implemented as a low-degree polynomial algorithm.
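To make the reduction concrete, here is a minimal Python sketch of Algorithm 5.2, assuming a black-box list-decoder `list_decode(rho, X, y)` as a stand-in for $\mathcal{A}$ (the function names are ours; the norm window and distance threshold follow the pseudocode):

```python
import numpy as np

def random_rotation(d, rng):
    """Haar-distributed orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))  # sign fix makes the distribution Haar

def list_regression_to_testing(list_decode, rho, X, y, rng):
    """Sketch of LIST_REGRESSION_TO_TESTING: run the decoder on one half
    of the data and on a randomly rotated second half, then return H1 iff
    some pair of candidates (after undoing the rotation) agree."""
    n = X.shape[0] // 2
    A = random_rotation(X.shape[1], rng)
    L1 = list_decode(rho, X[:n], y[:n])
    L2 = list_decode(rho, X[n:2 * n] @ A.T, y[n:2 * n])  # rotated covariates
    for u in L1:
        for w in L2:
            w_back = A.T @ w  # undo the rotation on the second candidate
            norms_ok = all(3 * rho / 4 <= np.linalg.norm(v) <= 5 * rho / 4
                           for v in (u, w_back))
            if norms_ok and np.linalg.norm(u - w_back) <= rho / 2:
                return "H1"
    return "H0"
```

With a decoder that approximately recovers the regressor (e.g., least squares on clean data), matching candidates survive the agreement check; under the null, candidates of the right norm are unlikely to agree across the random rotation.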
+
+# 6 Hardness Against Low-Degree Polynomial Algorithms
+
+In this section, we recall the recently established connection between the statistical query framework and low-degree polynomials, shown in [BBH$^{+}$20], and deduce hardness results in the latter model. Section 6.1 and Section 6.2 are dedicated to the hypothesis testing problem. In Section 6.3, we show that the reduction of Section 5.2 can be expressed as a low-degree polynomial test.
+
+## 6.1 Preliminaries: Low-Degree Method
+
+We begin by recording the necessary notation, definitions, and facts. This section mostly follows [BBH$^{+}$20].
+
+**Notation** For a distribution $D$, we denote by $D^{\otimes n}$ the joint distribution of $n$ independent samples from $D$. For $f : \mathbb{R} \to \mathbb{R}$, $g : \mathbb{R} \to \mathbb{R}$ and a distribution $D$, we define the inner product $\langle f, g \rangle_D = \mathbf{E}_{X \sim D}[f(X)g(X)]$ and the norm $\|f\|_D = \sqrt{\langle f, f \rangle_D}$. We will omit the subscripts when they are clear from the context.
+
+**Low-Degree Polynomials** A function $f : \mathbb{R}^a \to \mathbb{R}^b$ is a polynomial of degree at most $k$ if it can be written in the form
+
+$$f(x) = (f_1(x), f_2(x), \dots, f_b(x)),$$
+
+where each $f_i : \mathbb{R}^a \to \mathbb{R}$ is a polynomial of degree at most $k$. We allow polynomials to have random coefficients as long as they are independent of the input $x$. When considering *list-decodable estimation* problems, an algorithm in this model of computation is a polynomial $f : \mathbb{R}^{d_1 \times n} \to \mathbb{R}^{d_2 \times l}$, where $d_1$ is the dimension of each sample, $n$ is the number of samples, $d_2$ is the dimension of the output hypotheses, and $l$ is the number of hypotheses returned. On the other hand, [BBH$^{+}$20] focuses on *binary hypothesis testing* problems defined in Definition 5.1.
+
+A degree-$k$ polynomial test for Definition 5.1 is a degree-$k$ polynomial $f : \mathbb{R}^{d \times n} \to \mathbb{R}$ and a threshold $t \in \mathbb{R}$. The corresponding algorithm consists of evaluating $f$ on the input $x_1, \dots, x_n$ and returning $H_0$ if and only if $f(x_1, \dots, x_n) > t$.
+
+**Definition 6.1** (*n*-sample *ε*-distinguisher). We say that the polynomial $p : \mathbb{R}^{d \times n} \to \mathbb{R}$ is an *n*-sample *ε*-distinguisher for the hypothesis testing problem in Definition 5.1 if $\left|\mathbf{E}_{u \sim \mu} \mathbf{E}_{X \sim D_u^{\otimes n}}[p(X)] - \mathbf{E}_{X \sim D_0^{\otimes n}}[p(X)]\right| \ge \epsilon \sqrt{\mathrm{Var}_{X \sim D_0^{\otimes n}}[p(X)]}$. We call *ε* the *advantage* of the distinguisher.
\ No newline at end of file
diff --git a/samples/texts/568081/page_26.md b/samples/texts/568081/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..b43b04e6abf18ac700c159f304782ae5f894e56e
--- /dev/null
+++ b/samples/texts/568081/page_26.md
@@ -0,0 +1,27 @@
+Let $\mathcal{C}$ be the linear space of polynomials with degree at most $k$. The best possible advantage is given by the low-degree likelihood ratio
+
+$$ \max_{\substack{p \in \mathcal{C} \\ \|p\|_{D_0^{\otimes n}} = 1}} \left| \mathbf{E}_{u \sim \mu} \mathbf{E}_{X \sim D_u^{\otimes n}} [p(X)] - \mathbf{E}_{X \sim D_0^{\otimes n}} [p(X)] \right| = \left\| \mathbf{E}_{u \sim \mu} [(\bar{D}_u^{\otimes n})^{\le k}] - 1 \right\|_{D_0^{\otimes n}}, $$
+
+where we denote $\bar{D}_u = D_u/D_0$ and the notation $f^{\le k}$ denotes the orthogonal projection of $f$ to $\mathcal{C}$.
+
+Another piece of notation, for a finer notion of degree, is the following: we say that a polynomial $f(x_1, \ldots, x_n) : \mathbb{R}^{d \times n} \to \mathbb{R}$ has samplewise degree $(r, k)$ if each of its monomials uses at most $k$ different samples among $x_1, \ldots, x_n$ and has degree at most $r$ in each of them. In analogy with the best degree-$k$ distinguisher, the best distinguisher of samplewise degree $(r, k)$ achieves advantage $\|\mathbf{E}_{u \sim \mu}[(\bar{D}_u^{\otimes n})^{\le r,k}] - 1\|_{D_0^{\otimes n}}$, where the notation $f^{\le r,k}$ denotes the orthogonal projection of $f$ onto the space of all samplewise degree-$(r,k)$ polynomials.
+
+## 6.2 Hardness of Hypothesis Testing Against Low-Degree Polynomials
+
+In this section, we show the following result:
+
+**Theorem 6.2.** Let $0 < c < 1/2$ and $m \in \mathbb{Z}_+$ with $m \le c_1/\sqrt{\alpha}$ for some sufficiently small constant $c_1 > 0$. Consider the hypothesis testing problem of Problem 5.2. For $d \in \mathbb{Z}_+$ with $d = m^{\Omega(1/c)}$, any $n \le \Omega(d)^{(2m+1)(1/2-c)}e^{-O(m)}(1-\rho^2)$ and any even integer $k < d^{c/4}$, we have that
+
+$$ \left\| \underset{u \sim \mu}{\mathbf{E}} \left[ (\bar{E}_u^{\otimes n})^{\le \infty, \Omega(k)} \right] - 1 \right\|_{R^{\otimes n}}^2 \le 1. $$
+
+We prove Theorem 6.2 by using the lower bound on SDA in Corollary 5.8 and the relation between SDA and low-degree polynomials established in [BBH$^{+}$20]. In [BBH$^{+}$20], the following relation between SDA and low-degree likelihood ratio is established.
+
+**Theorem 6.3 (Theorem 4.1 of [BBH$^{+}$20]).** Let $\mathcal{D}$ be a hypothesis testing problem on $\mathbb{R}^d$ with respect to null hypothesis $D_0$. Let $n, k \in \mathbb{N}$ with $k$ even. Suppose that for all $0 \le n' \le n$, SDA$(S, n') \ge 100^k (n/n')^k$. Then, for all $r$, $\|\mathbf{E}_{u \sim \mu}[(\bar{D}_u^{\otimes n})^{\le r, \Omega(k)}] - 1\|_{D_0^{\otimes n}}^2 \le 1$.
+
+We first apply Theorem 6.3 to the more general Problem 5.6. In Lemma 5.7 we set $n = \frac{\Omega(d)^{(m+1)(1/2-c)}}{\mathbf{E}_{y \sim R(y)}[\chi^2(A_y, \mathcal{N}(0,1))]}$ and $q = \sqrt{2^{\Omega(d^{c/2})}(n/n')}$. Then, SDA$(S, n') \ge \sqrt{2^{\Omega(d^{c/2})}(n/n')} \ge 100^k (n/n')^k$ for $k < d^{c/4}$. Thus, we have shown the following.
+
+**Corollary 6.4.** Let $0 < c < 1/2$ and consider the hypothesis testing problem of Problem 5.6, where for every $y \in \mathbb{R}$ the distribution $A_y$ matches the first $m$ moments of $\mathcal{N}(0, 1)$. For any $d \in \mathbb{Z}_+$ with $d = m^{\Omega(1/c)}$, any $n \le \Omega(d)^{(m+1)(1/2-c)}/\mathbf{E}_{y \sim R(y)}[\chi^2(A_y, \mathcal{N}(0, 1))]$, and any even integer $k < d^{c/4}$, we have that
+
+$$ \left\| \underset{u \sim \mu}{\mathbf{E}} \left[ (\bar{E}_u^{\otimes n})^{\le \infty, \Omega(k)} \right] - 1 \right\|_{R^{\otimes n}}^2 \le 1. $$
+
+*Proof of Theorem 6.2.* We now apply Corollary 6.4 to Problem 5.2, which is a special case of Problem 5.6. The first part of Lemma 3.7 states that the distributions $A_y$ match the first $2m$ moments of $\mathcal{N}(0, 1)$ for $m \le c_1/\sqrt{\alpha}$, and the second part implies that $\mathbf{E}_{y \sim R(y)}[\chi^2(A_y, \mathcal{N}(0, 1))] = O(e^m)/(1-\rho^2)$. An application of Corollary 6.4 completes the proof. □
\ No newline at end of file
diff --git a/samples/texts/568081/page_27.md b/samples/texts/568081/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8578fa4a9e29fb5f4c434bc35739dea29bef0b1
--- /dev/null
+++ b/samples/texts/568081/page_27.md
@@ -0,0 +1,11 @@
+## 6.3 Low-Degree Polynomial Reduction to List-Decodable Regression
+
+**Remark 6.5.** We note that the reduction of Lemma 5.9 is an algorithm that can be expressed in the low-degree polynomial model. The modification of the algorithm is the following: First, note that the squared $\ell_2$-norm of a vector is a polynomial of degree two in its coordinates. Second, one can check whether there exists a pair $i \in [|\mathcal{L}_1|], j \in [|\mathcal{L}_2|]$ with $\|\mathcal{L}_1(i)\|_2, \|\mathcal{L}_2(j)\|_2 \in [3\rho/4, 5\rho/4]$ for which $\|\mathcal{L}_1(i) - A^T \mathcal{L}_2(j)\|_2 \le \rho/2$ using the condition
+
+$$ \sum_{i=1}^{|\mathcal{L}_1|} \sum_{j=1}^{|\mathcal{L}_2|} \mathbf{1}(\|\mathcal{L}_1(i)\|_2^2 \ge (3\rho/4)^2) \cdot \mathbf{1}(\|A^T \mathcal{L}_2(j)\|_2^2 \le (5\rho/4)^2) \cdot \mathbf{1}(\|\mathcal{L}_1(i) - A^T \mathcal{L}_2(j)\|_2^2 \le \rho^2/4) = 0, $$
+
+and use a polynomial approximation for the step function in order to express each term as a polynomial. The degree needed for a uniform $\epsilon$-approximation has been well-studied [GR08, Gan02, EY07].
+
+**Lemma 6.6 ([EY07]).** Let $f : \mathbb{R} \to \mathbb{R}$ be the step function defined as $f(x) = 1$ for all $x \ge 0$ and $f(x) = 0$ otherwise. The minimum $k \in \mathbb{Z}_+$ for which there exists a degree-$k$ polynomial $p : \mathbb{R} \to \mathbb{R}$ such that $\max_{x \in [-1,1]} |f(x) - p(x)| \le \epsilon$ is $k = \Theta(1/\epsilon^2)$.
+
+For our purpose, it suffices to approximate the step function up to error $\epsilon = \Theta(1/(|\mathcal{L}_1| \cdot |\mathcal{L}_2|))$, thus the resulting polynomial test has degree $\Theta(|\mathcal{L}_1|^2 \cdot |\mathcal{L}_2|^2)$.
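As an illustrative numerical sketch (not the extremal construction of [EY07]; the margin `delta`, the grid, and the degrees are arbitrary demo choices of ours), one can fit Chebyshev polynomials to the step function and observe the error away from the discontinuity shrink as the degree grows:

```python
import numpy as np
from numpy.polynomial import chebyshev

def step_poly_error(degree, delta=0.3, n_pts=2001):
    """Max error of a least-squares degree-`degree` Chebyshev fit to the
    step function on [-1, 1], measured only at points with |x| >= delta."""
    xs = np.linspace(-1.0, 1.0, n_pts)
    ys = (xs >= 0).astype(float)        # the step function f
    coeffs = chebyshev.chebfit(xs, ys, degree)
    mask = np.abs(xs) >= delta          # ignore the jump's neighborhood
    errs = np.abs(chebyshev.chebval(xs[mask], coeffs) - ys[mask])
    return float(np.max(errs))

for k in (4, 12, 40):
    print(k, step_poly_error(k))
```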
\ No newline at end of file
diff --git a/samples/texts/568081/page_28.md b/samples/texts/568081/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..b19ad041bf5dfea6e3ec9b37e5301681adf76ebb
--- /dev/null
+++ b/samples/texts/568081/page_28.md
@@ -0,0 +1,29 @@
+References
+
+[AAR99] G. E. Andrews, R. Askey, and R. Roy. *Special Functions*. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1999.
+
+[BBH$^{+}$20] M. Brennan, G. Bresler, S. B. Hopkins, J. Li, and T. Schramm. Statistical query algorithms and low-degree tests are almost equivalent. *arXiv preprint arXiv:2009.06107*, 2020. To appear in *34th Annual Conference on Learning Theory (COLT 2021)*.
+
+[BBV08] M. F. Balcan, A. Blum, and S. Vempala. A discriminative framework for clustering via similarity functions. In *Proceedings of the 40th Annual ACM Symposium on Theory of Computing*, pages 671–680, 2008.
+
+[BFJ$^{+}$94] A. Blum, M. Furst, J. Jackson, M. Kearns, Y. Mansour, and S. Rudich. Weakly learning DNF and characterizing statistical query learning using Fourier analysis. In *Proceedings of the Twenty-Sixth Annual Symposium on Theory of Computing*, pages 253–262, 1994.
+
+[BK21] A. Bakshi and P. Kothari. List-decodable subspace recovery: Dimension independent error in polynomial time. In *Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA)*, pages 1279–1297. SIAM, 2021.
+
+[BNJT10] M. Barreno, B. Nelson, A. D. Joseph, and J. D. Tygar. The security of machine learning. *Machine Learning*, 81(2):121–148, 2010.
+
+[BNL12] B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In *Proceedings of the 29th International Conference on Machine Learning*, ICML 2012, 2012.
+
+[Bog98] V. Bogachev. *Gaussian measures*. Mathematical surveys and monographs, vol. 62, 1998.
+
+[CFJ13] T. Cai, J. Fan, and T. Jiang. Distributions of angles in random packing on spheres. *Journal of Machine Learning Research*, 14(1):1837–1864, 2013.
+
+[CLS20] S. Chen, J. Li, and Z. Song. Learning mixtures of linear regressions in subexponential time via fourier moments. In *Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing*, pages 587–600, 2020.
+
+[CMY20] Y. Cherapanamjeri, S. Mohanty, and M. Yau. List decodable mean estimation in nearly linear time. In *2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS)*, pages 141–148, 2020.
+
+[CSV17] M. Charikar, J. Steinhardt, and G. Valiant. Learning from untrusted data. In *Proceedings of STOC 2017*, pages 47–60, 2017.
+
+[DeV89] R. D. DeVeaux. Mixtures of linear regressions. *Computational Statistics & Data Analysis*, 8(3):227–245, November 1989.
+
+[Die01] T. E. Dielman. *Applied Regression Analysis for Business and Economics*. Duxbury/Thomson Learning Pacific Grove, CA, 2001.
\ No newline at end of file
diff --git a/samples/texts/568081/page_29.md b/samples/texts/568081/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..35c4b9c4a57c8bc7cde2609181103c257d2e69ee
--- /dev/null
+++ b/samples/texts/568081/page_29.md
@@ -0,0 +1,27 @@
+[DK19] I. Diakonikolas and D. M. Kane. Recent advances in algorithmic high-dimensional robust statistics. *CoRR*, abs/1911.05911, 2019.
+
+[DK20] I. Diakonikolas and D. M. Kane. Small covers for near-zero sets of polynomials and learning latent variable models. In *Proceedings of the 61st Annual IEEE Symposium on Foundations of Computer Science (FOCS 2020)*, 2020.
+
+[DKK$^{+}$16] I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. Robust estimators in high dimensions without the computational intractability. In *Proceedings of FOCS'16*, pages 655–664, 2016.
+
+[DKK$^{+}$17] I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. Being robust (in high dimensions) can be practical. In *Proceedings of the 34th International Conference on Machine Learning, ICML 2017*, pages 999–1008, 2017.
+
+[DKK$^{+}$19] I. Diakonikolas, G. Kamath, D. Kane, J. Li, J. Steinhardt, and A. Stewart. Sever: A robust meta-algorithm for stochastic optimization. In *Proceedings of the 36th International Conference on Machine Learning, ICML 2019*, pages 1596–1606, 2019.
+
+[DKK20a] I. Diakonikolas, D. M. Kane, and D. Kongsgaard. List-decodable mean estimation via iterative multi-filtering. *Advances in Neural Information Processing Systems*, 33, 2020.
+
+[DKK$^{+}$20b] I. Diakonikolas, D. M. Kane, D. Kongsgaard, J. Li, and K. Tian. List-decodable mean estimation in nearly-PCA time. *CoRR*, abs/2011.09973, 2020.
+
+[DKK$^{+}$21] I. Diakonikolas, D. M. Kane, D. Kongsgaard, J. Li, and K. Tian. Clustering mixture models in almost-linear time via list-decodable mean estimation. *CoRR*, abs/2106.08537, 2021.
+
+[DKS17] I. Diakonikolas, D. M. Kane, and A. Stewart. Statistical query lower bounds for robust estimation of high-dimensional Gaussians and Gaussian mixtures. In *58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017*, pages 73–84, 2017. Full version at http://arxiv.org/abs/1611.03473.
+
+[DKS18] I. Diakonikolas, D. M. Kane, and A. Stewart. List-decodable robust mean estimation and learning mixtures of spherical Gaussians. In *Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018*, pages 1047–1060, 2018. Full version available at https://arxiv.org/abs/1711.07211.
+
+[DKS19] I. Diakonikolas, W. Kong, and A. Stewart. Efficient algorithms and lower bounds for robust linear regression. In *Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019*, pages 2745–2754, 2019.
+
+[EY07] A. Eremenko and P. Yuditskii. Uniform approximation of sgn x by polynomials and entire functions. *Journal d'Analyse Mathématique*, 101(1):313–324, 2007.
+
+[Fel16] V. Feldman. Statistical query learning. In *Encyclopedia of Algorithms*, pages 2090–2095. Springer New York, 2016.
+
+[Fel17] V. Feldman. A general characterization of the statistical query complexity. In *Proceedings of the 30th Conference on Learning Theory, COLT 2017*, pages 785–830. PMLR, 2017.
\ No newline at end of file
diff --git a/samples/texts/568081/page_3.md b/samples/texts/568081/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..9035701ac7e27cfbf61cea31c3e7cc39a78de74b
--- /dev/null
+++ b/samples/texts/568081/page_3.md
@@ -0,0 +1,25 @@
+procedure with the parameter $\alpha$ equal to the smallest mixing weight, each true cluster of points is an equally valid ground-truth distribution, so the output list must contain candidate parameters close to each of the true parameters.
+
+In list-decodable linear regression (the focus of this paper), $D$ is a distribution on pairs $(X,y)$, where $X$ is a standard Gaussian on $\mathbb{R}^d$, $y$ is approximately a linear function of $x$, and the algorithm is asked to approximate the hidden regressor. The following definition specifies the distribution family $\mathcal{D}$ of the inliers for the case of linear regression with Gaussian covariates.
+
+**Definition 1.2 (Gaussian Linear Regression).** Fix $\sigma > 0$. For $\beta \in \mathbb{R}^d$, let $D_\beta$ be the distribution over $(X,y)$, $X \in \mathbb{R}^d$, $y \in \mathbb{R}$, such that $X \sim \mathcal{N}(0, I_d)$ and $y = \beta^T X + \eta$, where $\eta \sim \mathcal{N}(0, \sigma^2)$ independently of $X$. We define $\mathcal{D}$ to be the set $\{D_\beta : \beta \in S'\}$ for some set $S' \subseteq \mathbb{R}^d$.
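A sampler for this model is immediate; the following minimal sketch (the function name is ours) draws $n$ pairs from $D_\beta$:

```python
import numpy as np

def sample_gaussian_linear_regression(beta, sigma, n, rng):
    """Draw n samples (X, y) from D_beta of Definition 1.2:
    X ~ N(0, I_d) and y = beta^T X + eta with eta ~ N(0, sigma^2)."""
    d = beta.shape[0]
    X = rng.standard_normal((n, d))       # covariates: standard Gaussian
    eta = sigma * rng.standard_normal(n)  # label noise, independent of X
    y = X @ beta + eta
    return X, y
```

Note that the label variance is $\|\beta\|_2^2 + \sigma^2$; in particular, with $\|\beta\|_2 = \rho$ and $\sigma^2 = 1 - \rho^2$ (as in Lemma 5.9), the marginal of $y$ is standard Gaussian.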
+
+Recent algorithmic progress [KKK19, RY20a] has been made on this problem using the SoS hierarchy. The guarantees in [KKK19, RY20a] are very far from the information-theoretic limit in terms of sample complexity. In particular, they require $d^{\text{poly}(1/\alpha)}$ samples and time to obtain non-trivial error guarantees (see Table 1): [KKK19] obtains an error guarantee of $O(\sigma/\alpha)$ with a list of size $O(1/\alpha)$, whereas [RY20a] obtains an error guarantee of $O(\sigma/\alpha^{3/2})$ with a list of size $(1/\alpha)^{O(\log(1/\alpha))}$.
+
+On the other hand, as shown in this paper (see Theorem 1.4), $\text{poly}(d/\alpha)$ samples information-theoretically suffice to obtain near-optimal error guarantees. This raises the following natural question:
+
+*What is the complexity of list-decodable linear regression?*
+
+*Are there efficient algorithms with significantly better sample-time tradeoffs?*
+
+We study the above question in a natural and well-studied restricted model of computation, known as the Statistical Query (SQ) model [Kea98]. As the main result of this paper, we prove strong SQ lower bounds for this problem. Via a recently established equivalence [BBH$^+$20], our SQ lower bound also implies low-degree testing lower bounds for this task. Our lower bounds can be viewed as evidence that current upper bounds for this problem may be qualitatively best possible.
+
+Before we state our contributions in detail, we give some background on SQ algorithms. SQ algorithms are a broad class of algorithms that are only allowed to query expectations of bounded functions of the distribution rather than directly access samples. Formally, an SQ algorithm has access to the following oracle.
+
+**Definition 1.3 (STAT Oracle).** Let $D$ be a distribution on $\mathbb{R}^d$. A statistical query is a bounded function $q: \mathbb{R}^d \to [-1, 1]$. For $\tau > 0$, the STAT($\tau$) oracle responds to the query $q$ with a value $v$ such that $|v - E_{X\sim D}[q(X)]| \le \tau$. We call $\tau$ the tolerance of the statistical query.
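As a minimal sketch (the function name and setup are ours), a STAT($\tau$) oracle can be simulated from i.i.d. samples: by Hoeffding's inequality, the empirical mean of a $[-1,1]$-valued query is within tolerance $\tau$ with high probability once the sample size is $\gg 1/\tau^2$.

```python
import numpy as np

def stat_oracle(samples, q, tau):
    """Simulate a STAT(tau) response for a query q: R^d -> [-1, 1] by an
    empirical mean. With n >> 1/tau^2 samples, the answer is within tau of
    E[q(X)] with high probability; note that a genuine oracle is allowed
    to return any value within tau of the true expectation."""
    vals = np.clip(np.array([q(x) for x in samples]), -1.0, 1.0)
    return float(np.mean(vals))
```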
+
+The SQ model was introduced by Kearns [Kea98] in the context of supervised learning as a natural restriction of the PAC model [Val84]. Subsequently, the SQ model has been extensively studied in a plethora of contexts (see, e.g., [Fel16] and references therein). The class of SQ algorithms is rather broad and captures a range of known supervised learning algorithms. More broadly, several known algorithmic techniques in machine learning are known to be implementable using SQs. These include spectral techniques, moment and tensor methods, local search (e.g., Expectation Maximization), and many others (see, e.g., [FGR$^+$17, FGV17]).
+
+## 1.2 Our Results
+
+We start by showing that $\text{poly}(d/\alpha)$ samples are sufficient to obtain a near-optimal error estimator, albeit with a computationally inefficient algorithm.
\ No newline at end of file
diff --git a/samples/texts/568081/page_30.md b/samples/texts/568081/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..054481ea21ec3812b455ccec91f31003cbb259f3
--- /dev/null
+++ b/samples/texts/568081/page_30.md
@@ -0,0 +1,31 @@
+[FGR$^{+}$17] V. Feldman, E. Grigorescu, L. Reyzin, S. Vempala, and Y. Xiao. Statistical algorithms and a lower bound for detecting planted cliques. *J. ACM*, 64(2):8:1–8:37, 2017.
+
+[FGV17] V. Feldman, C. Guzman, and S. S. Vempala. Statistical query algorithms for mean vector estimation and stochastic convex optimization. In *Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017*, pages 1265–1277. SIAM, 2017.
+
+[Gan02] M. I. Ganzburg. Limit theorems for polynomial approximation with hermite and freud weights. *Approximation Theory X: Abstract and Classical Analysis (CK Chui, et al, eds.)*, pages 211–221, 2002.
+
+[GR08] M. I. Ganzburg and J. Rognes. *Limit theorems of polynomial approximation with exponential weights*. American Mathematical Soc., 2008.
+
+[HKP$^{+}$17] S. B. Hopkins, P. K. Kothari, A. Potechin, P. Raghavendra, T. Schramm, and D. Steurer. The power of sum-of-squares for detecting hidden structures. In *58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017*, pages 720–731. IEEE Computer Society, 2017.
+
+[Hop18] S. B. Hopkins. *Statistical inference and the sum of squares method*. PhD thesis, Cornell University, 2018.
+
+[HR09] P.J. Huber and E. M. Ronchetti. *Robust statistics*. Wiley New York, 2009.
+
+[HRRS86] F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel. *Robust statistics*. *The approach based on influence functions*. Wiley New York, 1986.
+
+[HS17] S. B. Hopkins and D. Steurer. Efficient bayesian estimation from few samples: Community detection and related problems. In *58th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2017*, pages 379–390. IEEE Computer Society, 2017.
+
+[Hub64] P. J. Huber. Robust estimation of a location parameter. *Ann. Math. Statist.*, 35(1):73–101, 03 1964.
+
+[JJ94] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. *Neural Computation*, 6(2):181–214, 1994.
+
+[KC20] J. Kwon and C. Caramanis. EM converges for a mixture of many linear regressions. In *International Conference on Artificial Intelligence and Statistics*, pages 1727–1736. PMLR, 2020.
+
+[Kea98] M. J. Kearns. Efficient noise-tolerant learning from statistical queries. *Journal of the ACM*, 45(6):983–1006, 1998.
+
+[KKK19] S. Karmalkar, A. R. Klivans, and P. Kothari. List-decodable linear regression. In *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019*, pages 7423–7432, 2019.
+
+[Kra04] I. Krasikov. New bounds on the Hermite polynomials. arXiv preprint math/0401310, 2004.
+
+[KS53] S. Karlin and L. S. Shapley. *Geometry of moment spaces*, volume 12. American Mathematical Soc., 1953.
\ No newline at end of file
diff --git a/samples/texts/568081/page_31.md b/samples/texts/568081/page_31.md
new file mode 100644
index 0000000000000000000000000000000000000000..9f0a78f46349c18ca854d4435b15fcd37e1e9b8a
--- /dev/null
+++ b/samples/texts/568081/page_31.md
@@ -0,0 +1,31 @@
+[KSS18] P. K. Kothari, J. Steinhardt, and D. Steurer. Robust moment estimation and improved clustering via sum of squares. In *Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018*, pages 1035–1046, 2018.
+
+[LAT+08] J.Z. Li, D.M. Absher, H. Tang, A.M. Southwick, A.M. Casto, S. Ramachandran, H.M. Cann, G.S. Barsh, M. Feldman, L.L. Cavalli-Sforza, and R.M. Myers. Worldwide human relationships inferred from genome-wide patterns of variation. *Science*, 319:1100–1104, 2008.
+
+[LL18] Y. Li and Y. Liang. Learning mixtures of linear regressions with nearly optimal complexity. In *Conference On Learning Theory, COLT 2018*, pages 1125–1144. PMLR, 2018.
+
+[LRV16] K. A. Lai, A. B. Rao, and S. Vempala. Agnostic estimation of mean and covariance. In *Proceedings of FOCS'16*, 2016.
+
+[McD09] J. H. McDonald. *Handbook of Biological Statistics*, volume 2. Sparky House Publishing, Baltimore, MD, 2009.
+
+[MV18] M. Meister and G. Valiant. A data prism: Semi-verified learning in the small-alpha regime. In *Conference On Learning Theory, COLT 2018*, volume 75 of *Proceedings of Machine Learning Research*, pages 1530–1546. PMLR, 2018.
+
+[Nel73] E. Nelson. The free markoff field. *Journal of Functional Analysis*, 12(2):211–227, 1973.
+
+[O'D14] R. O'Donnell. *Analysis of Boolean Functions*. Cambridge University Press, 2014.
+
+[PLJD10] P. Paschou, J. Lewis, A. Javed, and P. Drineas. Ancestry informative markers for fine-scale individual assignment to worldwide populations. *Journal of Medical Genetics*, 47:835–847, 2010.
+
+[RL87] P. J. Rousseeuw and A. M. Leroy. *Robust Regression and Outlier Detection*. John Wiley & Sons, Inc., New York, NY, USA, 1987.
+
+[RPW+02] N. Rosenberg, J. Pritchard, J. Weber, H. Cann, K. Kidd, L.A. Zhivotovsky, and M.W. Feldman. Genetic structure of human populations. *Science*, 298:2381–2385, 2002.
+
+[RY20a] P. Raghavendra and M. Yau. List decodable learning via sum of squares. In *Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020*, pages 161–180. SIAM, 2020.
+
+[RY20b] P. Raghavendra and M. Yau. List decodable subspace recovery. In *Conference on Learning Theory, COLT 2020*, volume 125 of *Proceedings of Machine Learning Research*, pages 3206–3226. PMLR, 2020.
+
+[SKL17] J. Steinhardt, P. W. Koh, and P. S. Liang. Certified defenses for data poisoning attacks. In *Advances in Neural Information Processing Systems 30*, pages 3520–3532, 2017.
+
+[SVC16] J. Steinhardt, G. Valiant, and M. Charikar. Avoiding imposters and delinquents: Adversarial crowdsourcing and peer prediction. In *NIPS*, pages 4439–4447, 2016.
+
+[Sze89] G. Szegö. *Orthogonal Polynomials*, volume XXIII of *American Mathematical Society Colloquium Publications*. A.M.S, Providence, 1989.
\ No newline at end of file
diff --git a/samples/texts/568081/page_32.md b/samples/texts/568081/page_32.md
new file mode 100644
index 0000000000000000000000000000000000000000..7863bb9f27a8b1ed78c6683715837722fc2c8ad2
--- /dev/null
+++ b/samples/texts/568081/page_32.md
@@ -0,0 +1,7 @@
+[TLM18] B. Tran, J. Li, and A. Madry. Spectral signatures in backdoor attacks. In *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018*, pages 8011–8021, 2018.
+
+[Tuk75] J.W. Tukey. Mathematics and picturing of data. In *Proceedings of ICM*, volume 6, pages 523–531, 1975.
+
+[Val84] L. Valiant. A theory of the learnable. *Communications of the ACM*, 27(11):1134–1142, 1984.
+
+[ZJD16] K. Zhong, P. Jain, and I. S. Dhillon. Mixed linear regression with multiple components. In *Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016*, pages 2190–2198, 2016.
\ No newline at end of file
diff --git a/samples/texts/568081/page_33.md b/samples/texts/568081/page_33.md
new file mode 100644
index 0000000000000000000000000000000000000000..06826a54be4c083a10cd6a7d327f51621cf6befd
--- /dev/null
+++ b/samples/texts/568081/page_33.md
@@ -0,0 +1,31 @@
+# Appendix
+
+## A Additional Technical Facts
+
+Our bounds in Lemma 3.7 required the fact below. Here we provide its proof for completeness.
+
+**Fact A.1.** For any one-dimensional distribution $P$ that matches the first $m$ moments with $\mathcal{N}(0, 1)$ and has $\chi^2(P, \mathcal{N}(0, 1)) < \infty$ the following identity is true
+
+$$ \chi^2(P, \mathcal{N}(0, 1)) = \sum_{i=m+1}^{\infty} \left( \mathbf{E}_{X \sim P}[h_i(X)] \right)^2 . $$
+
+*Proof.* Let $\phi$ denote the pdf of the standard one-dimensional Gaussian. For this proof, we use a slightly different definition of the space $L^2(\mathbb{R}, \mathcal{N}(0,1))$. We define it as the space of functions for which $\int_{\mathbb{R}} f^2(x)/\phi(x)dx < \infty$ with the inner product $\langle f,g \rangle := \int_{\mathbb{R}} f(x)g(x)/\phi(x)dx$ (note the similarity with the definition of $\chi^2$-divergence). The Hermite functions (often called Hermite–Gauss functions) $h_i(x)\phi(x)$ for $i = 0, 1, \dots$ form a complete orthonormal basis of the space $L^2(\mathbb{R}, \mathcal{N}(0,1))$ with respect to that inner product. It is easy to check that this statement is equivalent to the statement that the Hermite polynomials $\{h_i\}_{i \in \mathbb{N}}$ form a complete orthonormal basis of the space of all functions $f: \mathbb{R} \to \mathbb{R}$ for which $\mathbf{E}_{x \sim \mathcal{N}(0,1)}[f^2(x)] < \infty$ (i.e., our old definition of $L^2(\mathbb{R}, \mathcal{N}(0,1))$). Since $\chi^2(P, \mathcal{N}(0,1)) < \infty$, we have $P \in L^2(\mathbb{R}, \mathcal{N}(0,1))$ and thus we can write $P(x) = \sum_{i=0}^{\infty} a_i h_i(x) \phi(x)$, where $a_i = \mathbf{E}_{X \sim P}[h_i(X)]$. Using the fact that $P$ agrees with the first $m$ moments of $\mathcal{N}(0,1)$ and the property of Hermite polynomials $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[h_i(X)] = \mathbf{1}(i=0)$, we get that $a_0 = \mathbf{E}_{X \sim \mathcal{N}(0,1)}[h_0(X)] = 1$ and $a_i = \mathbf{E}_{X \sim \mathcal{N}(0,1)}[h_i(X)] = 0$ for $0 < i \le m$. Thus
+
+$$ P(x) = \phi(x) + \sum_{i=m+1}^{\infty} a_i h_i(x) \phi(x) . $$
+
+The $\chi^2$-divergence can then be written as
+
+$$ \chi^2(P, \mathcal{N}(0, 1)) = \int_{\mathbb{R}} \frac{(P(x) - \phi(x))^2}{\phi(x)} dx = \int_{\mathbb{R}} \frac{1}{\phi(x)} \left( \sum_{i=m+1}^{\infty} a_i h_i(x) \phi(x) \right)^2 dx = \sum_{i=m+1}^{\infty} a_i^2, $$
+
+where the last equality uses the orthonormality of the functions $h_i(x)\phi(x)$. $\square$
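As a quick numerical sanity check of Fact A.1 (an illustration, not part of the proof), one can take $P = \mathcal{N}(\mu, 1)$, which matches no nontrivial moments ($m = 0$), and compare the direct $\chi^2$-divergence integral against the partial sum of squared Hermite coefficients; both should approach $e^{\mu^2} - 1$. A minimal sketch using NumPy/SciPy:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' He_i
from scipy.integrate import quad
from math import factorial

mu = 0.5  # P = N(mu, 1); here m = 0, so the sum starts at i = 1
phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
p = lambda x: np.exp(-(x - mu)**2 / 2) / np.sqrt(2 * np.pi)

# Left-hand side: chi^2(P, N(0,1)) = int (P - phi)^2 / phi dx
chi2, _ = quad(lambda x: (p(x) - phi(x))**2 / phi(x), -12, 12)

# Right-hand side: a_i = E_{X~P}[h_i(X)] with h_i = He_i / sqrt(i!)
def coeff(i):
    c = np.zeros(i + 1); c[i] = 1.0
    val, _ = quad(lambda x: hermeval(x, c) * p(x), -12, 12)
    return val / np.sqrt(factorial(i))

hermite_sum = sum(coeff(i)**2 for i in range(1, 16))
print(chi2, hermite_sum)  # both close to e^{mu^2} - 1 ~ 0.2840
```

Here the tail of the series beyond $i = 15$ is negligible since $a_i^2 = \mu^{2i}/i!$ for this particular $P$.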
+
+We now turn to Claim 3.9 which is restated below.
+
+**Claim A.2.** If $P = \sum_{i=1}^k \lambda_i N(\mu_i, \sigma_i^2)$ with $\mu_i \in \mathbb{R}$, $\sigma_i < \sqrt{2}$ and $\lambda_i \ge 0$ such that $\sum_{i=1}^k \lambda_i = 1$, we have that $\chi^2(P, \mathcal{N}(0, 1)) < \infty$.
+
+For that we need the following two facts about $\chi^2$-distance between Gaussians. Their proofs can be done by direct calculations.
+
+**Fact A.3.** Let $k \in \mathbb{Z}_+$, let $D$ and $P_i$, for $i \in [k]$, be distributions, and let $\lambda_i \ge 0$ be weights such that $\sum_{i=1}^k \lambda_i = 1$. We have that $\chi^2(\sum_{i=1}^k \lambda_i P_i, D) = \sum_{i=1}^k \sum_{j=1}^k \lambda_i \lambda_j \chi_D(P_i, P_j)$.
+
+*Proof.*
+
+$$ \chi^2\left(\sum_{i=1}^{k} \lambda_i P_i, D\right) + 1 = \int_{\mathbb{R}} \left(\sum_{i=1}^{k} \lambda_i P_i(x)\right)^2/D(x)dx = \sum_{i=1}^{k} \sum_{j=1}^{k} \lambda_i \lambda_j \int_{\mathbb{R}} P_i(x)P_j(x)/D(x)dx $$
\ No newline at end of file
diff --git a/samples/texts/568081/page_34.md b/samples/texts/568081/page_34.md
new file mode 100644
index 0000000000000000000000000000000000000000..7545eea5b2dac59da2f9322f92037f9b5e7b18d1
--- /dev/null
+++ b/samples/texts/568081/page_34.md
@@ -0,0 +1,14 @@
+$$
+\begin{aligned}
+&= \sum_{i=1}^{k} \sum_{j=1}^{k} \lambda_i \lambda_j (\chi_D(P_i, P_j) + 1) \\
+&= \sum_{i=1}^{k} \sum_{j=1}^{k} \lambda_i \lambda_j \chi_D(P_i, P_j) + \left( \sum_{i=1}^{k} \lambda_i \right)^2 \\
+&= \sum_{i=1}^{k} \sum_{j=1}^{k} \lambda_i \lambda_j \chi_D(P_i, P_j) + 1.
+\end{aligned}
+$$
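The identity in Fact A.3 is easy to verify numerically for a small example; the sketch below (our illustration, not part of the argument) compares both sides by quadrature for a two-component Gaussian mixture against the reference $D = \mathcal{N}(0,1)$:

```python
import numpy as np
from scipy.integrate import quad

def dens(mu, s):
    # pdf of N(mu, s^2)
    return lambda x: np.exp(-(x - mu)**2 / (2 * s * s)) / (s * np.sqrt(2 * np.pi))

def chi_D(P, Q, D):
    # pairwise correlation chi_D(P, Q) = int P(x) Q(x) / D(x) dx - 1
    val, _ = quad(lambda x: P(x) * Q(x) / D(x), -20, 20)
    return val - 1

D = dens(0, 1)
P1, P2 = dens(0.4, 0.9), dens(-0.7, 1.1)   # both sigma_i < sqrt(2)
lam = [0.3, 0.7]
mix = lambda x: lam[0] * P1(x) + lam[1] * P2(x)

lhs = chi_D(mix, mix, D)  # chi^2 of the mixture w.r.t. D
rhs = sum(lam[i] * lam[j] * chi_D(P, Q, D)
          for i, P in enumerate([P1, P2]) for j, Q in enumerate([P1, P2]))
print(lhs, rhs)  # the two sides should agree
```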
+
+**Fact A.4.** For $\mu_1, \mu_2 \in \mathbb{R}$ and $\sigma_1, \sigma_2 > 0$ with $\sigma_1^2 + \sigma_2^2 > \sigma_1^2\sigma_2^2$,
+
+$$ \chi_{\mathcal{N}(0,1)}(\mathcal{N}(\mu_1, \sigma_1^2), \mathcal{N}(\mu_2, \sigma_2^2)) = \frac{\exp\left(\frac{\mu_1^2(\sigma_2^2-1)+2\mu_1\mu_2+\mu_2^2(\sigma_1^2-1)}{2(\sigma_1^2+\sigma_2^2-\sigma_1^2\sigma_2^2)}\right)}{\sqrt{\sigma_1^2+\sigma_2^2-\sigma_1^2\sigma_2^2}} - 1. $$
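The closed form in Fact A.4 can be checked against direct numerical integration. The snippet below is such a sanity check (illustrative only); `chi_closed` implements the exponent with the denominator written as $2(\sigma_1^2+\sigma_2^2-\sigma_1^2\sigma_2^2)$, which is algebraically equivalent:

```python
import numpy as np
from scipy.integrate import quad

def chi_quad(mu1, s1, mu2, s2):
    # chi_{N(0,1)}(N(mu1, s1^2), N(mu2, s2^2)) by quadrature
    p = lambda x, m, s: np.exp(-(x - m)**2 / (2 * s * s)) / (s * np.sqrt(2 * np.pi))
    phi = lambda x: np.exp(-x * x / 2) / np.sqrt(2 * np.pi)
    val, _ = quad(lambda x: p(x, mu1, s1) * p(x, mu2, s2) / phi(x), -20, 20)
    return val - 1

def chi_closed(mu1, s1, mu2, s2):
    a, b = s1 * s1, s2 * s2
    denom = a + b - a * b  # positive whenever both sigma_i < sqrt(2)
    num = mu1 * mu1 * (b - 1) + 2 * mu1 * mu2 + mu2 * mu2 * (a - 1)
    return np.exp(num / (2 * denom)) / np.sqrt(denom) - 1

print(chi_quad(0.3, 0.8, -0.5, 1.1), chi_closed(0.3, 0.8, -0.5, 1.1))
```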
+
+The proof of Claim A.2 then consists of applying Fact A.3 and using Fact A.4 for each one of
+the generated terms.
\ No newline at end of file
diff --git a/samples/texts/568081/page_4.md b/samples/texts/568081/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..2f4c321792b87bd279a08cda18f6b122cfb2c422
--- /dev/null
+++ b/samples/texts/568081/page_4.md
@@ -0,0 +1,21 @@
+| Algorithmic Result | Sample Size | Running Time | List size |
+|---|---|---|---|
+| Karmalkar-Klivans-Kothari [KKK19] | $(d/\alpha)^{O(1/\alpha^4)}$ | $(d/\alpha)^{O(1/\alpha^8)}$ | $O(1/\alpha)$ |
+| Raghavendra and Yau [RY20a] | $d^{O(1/\alpha^4)}$ | $d^{O(1/\alpha^8)}(1/\alpha)^{\log(1/\alpha)}$ | $(1/\alpha)^{O(\log(1/\alpha))}$ |
+
+Table 1: The table summarizes the sample complexity, running time, and list size of the known list-decodable linear regression algorithms in order to obtain a 1/4-additive approximation to the hidden regression vector $\beta$ in the setting of Theorem 1.5, i.e., when $\|\beta\|_2 \le 1$ and $\sigma$ is sufficiently small as a function of $\alpha$: [KKK19] requires $\sigma = O(\alpha)$ and [RY20a] requires $\sigma = O(\alpha^{3/2})$.
+
+**Theorem 1.4 (Information-Theoretic Bound).** *There is a (computationally inefficient) list-decoding algorithm for Gaussian linear regression that uses $O(d/\alpha^3)$ samples, returns a list of $O(1/\alpha)$ many hypothesis vectors, and has $l_2$-error guarantee of $O((\sigma/\alpha)\sqrt{\log(1/\alpha)})$. Moreover, if the dimension $d$ is sufficiently large, any list-decoding algorithm that outputs a list of size poly$(1/\alpha)$ must have $l_2$-error at least $\Omega((\sigma/\alpha)/\sqrt{\log(1/\alpha)})$.*
+
+The proof of this result is given in Section 2 (see Theorems 2.1 and 2.4). Our main result is a strong SQ lower bound for the list-decodable Gaussian linear regression problem. We establish the following theorem (see Theorem 3.1 for a more detailed formal statement).
+
+**Theorem 1.5 (SQ Lower Bound).** *Assume that the dimension $d \in \mathbb{Z}_+$ is sufficiently large and consider the problem of list-decodable linear regression, where the fraction of inliers is $\alpha \in (0, 1/2)$, the regression vector $\beta \in \mathbb{R}^d$ has norm $\|\beta\|_2 \le 1$, and the additive noise has standard deviation $\sigma \le \alpha$. Then any SQ algorithm that returns a list $\mathcal{L}$ of candidate vectors containing a $\hat{\beta}$ such that $\|\hat{\beta} - \beta\|_2 \le 1/4$ does one of the following:*
+
+* it uses at least one query with tolerance at most $d^{-\Omega(1/\sqrt{\alpha})}/\sigma$,
+
+* it makes $2^{d^{\Omega(1)}}$ queries, or
+
+* it returns a list of size $|\mathcal{L}| = 2^{d^{\Omega(1)}}$.
+
+Informally speaking, Theorem 1.5 shows that no SQ algorithm can approximate $\beta$ to constant accuracy with a sub-exponential in $d^{\Omega(1)}$ size list and sub-exponential in $d^{\Omega(1)}$ many queries, unless using queries of very small tolerance – that would require at least $\sigma d^{\Omega(1/\sqrt{\alpha})}$ samples to simulate. For $\sigma$ not too small, e.g., $\sigma = \text{poly}(\alpha)$, in view of Theorem 1.4, this result can be viewed as an information-computation tradeoff for the problem, within the class of SQ algorithms.
+
+A conceptual implication of Theorem 1.5 is that list-decodable linear regression is harder (within the class of SQ algorithms) than the related problem of learning mixtures of linear regressions (MLR). Recent work [DK20] gave an algorithm (easily implementable in SQ) for learning MLR with $k$ equal weight separated components (under Gaussian covariates) with sample complexity and running time $k^{\text{polylog}(k)}$, i.e., quasi-polynomial in $k$. Recalling that one can reduce $k$-MLR (with well-separated components) to list-decodable linear regression for $\alpha = 1/k$, Theorem 1.5 implies that the aforementioned algorithmic result cannot be obtained via such a reduction.
+
+**Remark 1.6.** While the main focus of this work is on the SQ model, our result has immediate implications to a related popular restricted computational model — that of low-degree (polynomial) algorithms [HS17, HKP$^{+}$17, Hop18]. Recent work [BBH$^{+}$20] established that (under certain assumptions) an SQ lower bound also implies a qualitatively similar lower bound in the low-degree model. We leverage this connection to show a similar lower bound in this model (see Section 6).
\ No newline at end of file
diff --git a/samples/texts/568081/page_5.md b/samples/texts/568081/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..eda296a3daf1dab4e198ef171bfc322087da0220
--- /dev/null
+++ b/samples/texts/568081/page_5.md
@@ -0,0 +1,11 @@
+## 1.3 Overview of Techniques
+
+In this section, we provide a detailed overview of our SQ lower bound construction. We recall that there exists a general methodology for establishing SQ lower bounds via an appropriate complexity measure, known as SQ dimension. Several related notions of SQ dimension exist in the literature, see, e.g., [BFJ+94, FGR+17, Fel17]. Here we focus on the framework introduced in [FGR+17] for search problems over distributions, which is more natural in our setting. A lower bound on the SQ dimension of a search problem provides an unconditional lower bound on the SQ complexity of the problem. Roughly speaking, for a notion of correlation between distributions in our family $\mathcal{D}$ (Definition 1.9), establishing an SQ lower bound amounts to constructing a large cardinality sub-family $\mathcal{D}' \subseteq \mathcal{D}$ such that every pair of distributions in $\mathcal{D}'$ are nearly uncorrelated with respect to a given reference distribution $R$ (see Definition 1.11 and Lemma 1.12).
+
+A general framework for constructing SQ-hard families of distributions was introduced in [DKS17], which showed the following: Let the reference distribution $R$ be $\mathcal{N}(0, I)$ and $A$ be a univariate distribution whose low-degree moments match those of the standard Gaussian (and which satisfies an additional mild technical condition). Let $P_{A,v}$ be the distribution that is a copy of $A$ in the $v$-direction and standard Gaussian in the orthogonal complement (Definition 1.13). Then the distribution family $\{P_{A,v}\}_{v \in S}$, where $S$ is a set of nearly orthogonal unit vectors, satisfies the pairwise nearly uncorrelated property (Lemma 1.14), and is therefore SQ-hard to learn.
+
+Unfortunately, the [DKS17] framework does not suffice in the supervised setting of the current paper for the following reason: The joint distribution over labeled examples $(X, y)$ in our setting does not possess the symmetry properties required for moment-matching with the reference $R = \mathcal{N}(0, I)$ to be possible. Specifically, the behavior of $y$ will necessarily be somewhat different than the behavior of $X$. To circumvent this issue, we leverage an idea from [DKS19]. The high-level idea is to construct distributions $E_v$ on $(X, y)$ such that for any fixed value $y_0$ of $y$, the conditional distribution of $X \mid y = y_0$ under $E_v$ is of the form $P_{A,v}$ described above, where $A$ is replaced with some $A_{y_0}$.
+
+We further explain this modified construction. Note that $E_v$ should be of the form $\alpha D_v + (1-\alpha)N_v$, where $D_v$ is the inlier distribution (corresponding to the clean samples from the linear regression model) and $N_v$ is the outlier (noise) distribution. To understand what properties our distribution should satisfy, we start by looking at the inlier distribution $D$. By definition, for $(X, y) \sim D$, we have that $y = \beta^T X + \eta$, where $X \sim N(0, I)$ and $\eta \sim N(0, \sigma^2)$ is independent of $X$. A good place to start here is to understand the distribution of $X$ conditioned on $y = y_0$, for some $y_0$, under $D$. It is not hard to show (Fact 3.3) that this conditional distribution is already of the desired form $P_{A,\beta}$: it is a product of a $(d-1)$-dimensional standard Gaussian in directions orthogonal to $\beta$, while in the $\beta$-direction it is a much narrower Gaussian with mean proportional to $y_0$. To establish our SQ-hardness result, we would like to mix this conditional distribution with a carefully selected outlier distribution $N \mid y = y_0$, such that the resulting mixture $E \mid y = y_0$ matches many of its low-degree moments with the standard Gaussian in the $\beta$-direction, while being standard Gaussian in the orthogonal directions. In the setting of a minority of outliers, [DKS19] was able to provide an explicit formula for $N$ and match three moments to show an SQ lower bound of $\Omega(d^2)$. The main technical difficulty in our paper is that, in order to prove the desired SQ lower bound of $\Omega(d^{\text{poly}(1/\alpha)})$, we need to match poly$(1/\alpha)$ many moments. We explain how to achieve this below.
+
+Here we take a different approach and establish the existence of the desired outlier distribution $N|y=y_0$ in a non-constructive manner. We note that our problem is an instance of the moment-matching problem, where given a sequence of real numbers, the goal is to decide whether a distribution exists having that sequence as its low-degree moments. At a high-level, we leverage classical results that tackle this general question by formulating a linear program (LP) and using
\ No newline at end of file
diff --git a/samples/texts/568081/page_6.md b/samples/texts/568081/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b8b903a2e79f02dbe4e1d17960bb47eaa1e11a9
--- /dev/null
+++ b/samples/texts/568081/page_6.md
@@ -0,0 +1,11 @@
+LP-duality to derive necessary and sufficient feasibility conditions (see [KS53] and Theorem 4.1). This moment-matching via LP duality approach is fairly general, but stumbles upon two technical obstacles in our setting.
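To make the LP formulation concrete, the toy sketch below (our own illustration; the paper's actual construction imposes additional constraints) searches for a nonnegative measure supported on a finite grid whose first few moments equal those of $\mathcal{N}(0,1)$, phrased as an LP feasibility problem:

```python
import numpy as np
from scipy.optimize import linprog

m = 6                                   # number of moments to match
grid = np.linspace(-5, 5, 201)          # candidate finite support
gauss = [1, 0, 1, 0, 3, 0, 15]          # E[X^k] for N(0,1), k = 0..6

# Feasibility LP: find weights p >= 0 with sum_j p_j * grid_j^k = gauss[k]
A_eq = np.vstack([grid**k for k in range(m + 1)])
res = linprog(c=np.zeros(grid.size), A_eq=A_eq, b_eq=gauss,
              bounds=[(0, None)] * grid.size)
print(res.status)  # status 0: a moment-matching distribution on the grid exists
```

When the target moment vector lies outside the moment space of measures on the chosen support, the LP is infeasible, which is exactly the dichotomy that LP duality certifies.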
+
+The first technical issue is that our final distributions $E_v$ on $(X, y)$ need to have bounded $\chi^2$-divergence with respect to the reference distribution, since the pairwise correlations scale with this quantity (see Lemma 1.14). To guarantee this, we can ensure that the outlier distribution in the $\beta$-direction is in fact equal to the convolution of a distribution with bounded support with a narrow Gaussian: (i) The contraction property of this convolution operator means that it can only reduce the $\chi^2$-divergence, and (ii) the bounded support can be used in combination with tail-bounds on Hermite polynomials (Lemma 3.10) to bound from above the contribution to the $\chi^2$-divergence of each Hermite coefficient of our distribution (Lemma 3.7). These additional constraints necessitate a modification to the moment-matching problem, but it can still be readily analyzed (Theorem 3.6).
+
+The second and more complicated issue involves the fraction of outliers, i.e., the parameter "$1-\alpha$". Unfortunately, it is easy to see that the fraction of outliers necessary to make the conditional distributions match the desired number of moments must necessarily go to 1 as $|y|$ goes to infinity: As $|y|$ gets bigger, the conditional distribution of inliers moves further away from $\mathcal{N}(0, I)$ (Fact 3.3) and thus needs to be mixed more heavily with outliers to be corrected. This is a significant problem, since by definition we can only afford to use a $(1-\alpha)$-fraction of outliers overall. To handle this issue, we consider a reference distribution $R$ on $(X, y)$ that has much heavier tails in $y$ than the distribution of inliers has. This essentially means that as $|y|$ gets large, the conditional probability that a sample is an outlier gets larger and larger. This is balanced by having a slightly lower fraction of outliers for smaller values of $|y|$, in order to ensure that the total fraction of outliers is still at most $1-\alpha$. In doing so, we leverage the fact that the probability that a clean sample has a large value of $|y|$ is very small. Consequently, we can afford to make the error rates for such $y$ quite large without increasing the overall probability of error by very much.
+
+## 1.4 Preliminaries
+
+**Notation** We use $\mathbb{N}$ to denote natural numbers and $\mathbb{Z}_+$ to denote positive integers. For $n \in \mathbb{Z}_+$ we denote $[n] \stackrel{\text{def}}{=} \{1, \dots, n\}$ and use $S^{d-1}$ for the $d$-dimensional unit sphere. We denote by $\mathbf{1}(\mathcal{E})$ the indicator function of the event $\mathcal{E}$. We use $I_d$ to denote the $d \times d$ identity matrix. For a random variable $X$, we use $\mathbf{E}[X]$ for its expectation. For $m \in \mathbb{Z}_+$, the $m$-th moment of $X$ is defined as $\mathbf{E}[X^m]$. We use $\mathcal{N}(\mu, \Sigma)$ to denote the Gaussian distribution with mean $\mu$ and covariance matrix $\Sigma$. We let $\phi$ denote the pdf of the one-dimensional standard Gaussian. When $D$ is a distribution, we use $X \sim D$ to denote that the random variable $X$ is distributed according to $D$. For a vector $x \in \mathbb{R}^d$, we let $\|x\|_2$ denote its $l_2$-norm. For $y \in \mathbb{R}$, we denote by $\delta_y$ the Dirac delta distribution at $y$, i.e., the distribution that assigns probability mass 1 to the single point $y \in \mathbb{R}$ and zero elsewhere. When there is no confusion, we will use the same letters for distributions and their probability density functions.
+
+**Hermite Analysis** Hermite polynomials form a complete orthogonal basis of the vector space $L^2(\mathbb{R}, \mathcal{N}(0,1))$ of all functions $f : \mathbb{R} \to \mathbb{R}$ such that $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[f^2(X)] < \infty$. There are two commonly used types of Hermite polynomials. The physicist's Hermite polynomials, denoted by $H_k$ for $k \in \mathbb{N}$ satisfy the following orthogonality property with respect to the weight function $e^{-x^2}$: for all $k, m \in \mathbb{N}$, $\int_{\mathbb{R}} H_k(x) H_m(x) e^{-x^2} dx = \sqrt{\pi} 2^k k! \mathbf{1}(k=m)$. The probabilist's Hermite polynomials $H_{e_k}$ for $k \in \mathbb{N}$ satisfy $\int_{\mathbb{R}} H_{e_k}(x) H_{e_m}(x) e^{-x^2/2} dx = k! \sqrt{2\pi} \mathbf{1}(k=m)$ and are related to the physicist's polynomials through $H_{e_k}(x) = 2^{-k/2} H_k(x/\sqrt{2})$. We will mostly use the normalized
\ No newline at end of file
diff --git a/samples/texts/568081/page_7.md b/samples/texts/568081/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a5c2b3ef64098d767f3fa1fed87fd3f07a89767
--- /dev/null
+++ b/samples/texts/568081/page_7.md
@@ -0,0 +1,23 @@
+probabilist's Hermite polynomials, $h_k(x) = H_{e_k}(x)/\sqrt{k!}$, $k \in \mathbb{N}$, for which $\int_{\mathbb{R}} h_k(x)h_m(x)e^{-x^2/2}dx = \sqrt{2\pi}\,\mathbf{1}(k=m)$. These polynomials are the ones obtained by Gram-Schmidt orthonormalization of the basis $\{1, x, x^2, \dots\}$ with respect to the inner product $\langle f, g \rangle_{\mathcal{N}(0,1)} = \mathbf{E}_{X \sim \mathcal{N}(0,1)}[f(X)g(X)]$. Every function $f \in L^2(\mathbb{R}, \mathcal{N}(0,1))$ can be uniquely written as $f(x) = \sum_{i \in \mathbb{N}} a_i h_i(x)$, and we have $\lim_{n \to \infty} \mathbf{E}_{X \sim \mathcal{N}(0,1)}[(f(X) - \sum_{i=0}^n a_i h_i(X))^2] = 0$ (see, e.g., [AAR99]).
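The relation between the two conventions and the normalization of $h_k$ can be verified directly with NumPy's Hermite modules (a quick check, not from the paper):

```python
import numpy as np
from numpy.polynomial.hermite import hermval, hermgauss   # physicists' H_k
from numpy.polynomial.hermite_e import hermeval           # probabilists' He_k
from math import factorial

x = np.linspace(-3, 3, 7)
nodes, weights = hermgauss(40)  # Gauss-Hermite rule for weight e^{-x^2}
for k in range(6):
    c = np.zeros(k + 1); c[k] = 1.0
    # He_k(x) = 2^{-k/2} H_k(x / sqrt(2))
    assert np.allclose(hermeval(x, c), 2**(-k / 2) * hermval(x / np.sqrt(2), c))
    # E_{X ~ N(0,1)}[h_k(X)^2] = 1 with h_k = He_k / sqrt(k!)
    h = lambda t: hermeval(t, c) / np.sqrt(factorial(k))
    second_moment = np.sum(weights * h(np.sqrt(2) * nodes)**2) / np.sqrt(np.pi)
    assert abs(second_moment - 1) < 1e-8
print("ok")
```

The quadrature step uses the change of variables $\mathbf{E}_{X \sim \mathcal{N}(0,1)}[f(X)] = \pi^{-1/2}\int_{\mathbb{R}} e^{-t^2} f(\sqrt{2}\,t)\,dt$, which the Gauss-Hermite rule computes exactly for polynomials of this degree.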
+
+**Ornstein-Uhlenbeck Operator** For a $\rho > 0$, we define the Gaussian noise (or Ornstein-Uhlenbeck) operator $U_\rho$ as the operator that maps a distribution $F$ on $\mathbb{R}$ to the distribution of the random variable $\rho X + \sqrt{1-\rho^2}Z$, where $X \sim F$ and $Z \sim \mathcal{N}(0,1)$ independently of $X$. A well-known property of Ornstein-Uhlenbeck operator is that it operates diagonally with respect to Hermite polynomials.
+
+**Fact 1.7** (see, e.g., [O'D14]). For any Hermite polynomial $h_i$, any distribution $F$ on $\mathbb{R}$, and $\rho \in (0,1)$, it holds that $\mathbf{E}_{X \sim U_\rho F}[h_i(X)] = \rho^i \mathbf{E}_{X \sim F}[h_i(X)]$.
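Fact 1.7 can be illustrated with a quick Monte Carlo check (ours, for intuition): take $F$ to be uniform on $[-1, 2]$ and compare $\mathbf{E}_{Y \sim U_\rho F}[h_3(Y)]$ with $\rho^3\, \mathbf{E}_{X \sim F}[h_3(X)]$.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt

rng = np.random.default_rng(0)
rho, i, n = 0.6, 3, 2_000_000
x = rng.uniform(-1, 2, n)                                    # X ~ F
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)   # Y ~ U_rho F

c = np.zeros(i + 1); c[i] = 1.0
h = lambda t: hermeval(t, c) / sqrt(factorial(i))            # normalized h_i
lhs = h(y).mean()             # E_{U_rho F}[h_i]
rhs = rho**i * h(x).mean()    # rho^i * E_F[h_i]
print(lhs, rhs)               # both close to -0.25 * 0.6^3 / sqrt(6)
```

The exact value here is $\rho^3\,\mathbf{E}_F[He_3(X)]/\sqrt{3!}$ with $\mathbf{E}_F[He_3(X)] = \mathbf{E}[X^3 - 3X] = -0.25$ for the uniform distribution on $[-1,2]$.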
+
+**Background on the SQ Model** We provide the basic definitions and facts that we use.
+
+**Definition 1.8** (Search problems over distributions). Let $\mathcal{D}$ be a set of distributions over $\mathbb{R}^d$, $\mathcal{F}$ be a set called solutions, and $\mathcal{Z}: \mathcal{D} \to 2^\mathcal{F}$ be a map that assigns sets of solutions to distributions of $\mathcal{D}$. The distributional search problem $\mathcal{Z}$ over $\mathcal{D}$ and $\mathcal{F}$ is to find a valid solution $f \in \mathcal{Z}(D)$ given statistical query oracle access to an unknown $D \in \mathcal{D}$.
+
+The hardness of these problems is conveniently captured by the SQ dimension. For this, we first need to define the notion of correlation between distributions.
+
+**Definition 1.9** (Pairwise Correlation). The pairwise correlation of two distributions with probability density functions $D_1, D_2: \mathbb{R}^d \to \mathbb{R}_+$ with respect to a reference distribution with density $R: \mathbb{R}^d \to \mathbb{R}_+$, where the support of $R$ contains the supports of $D_1$ and $D_2$, is defined as $\chi_R(D_1, D_2) \stackrel{\text{def}}{=} \int_{\mathbb{R}^d} D_1(x)D_2(x)/R(x)dx - 1$. When $D_1 = D_2$, the pairwise correlation becomes the same as the $\chi^2$-divergence between $D_1$ and $R$, i.e., $\chi^2(D_1, R) \stackrel{\text{def}}{=} \int_{\mathbb{R}^d} D_1^2(x)/R(x)dx - 1$.
+
+**Definition 1.10.** For $\gamma, \beta > 0$, the set of distributions $\mathcal{D} = \{D_1, \dots, D_m\}$ is called $(\gamma, \beta)$-correlated relative to the distribution $R$ if $|\chi_R(D_i, D_j)| \le \gamma$ for all $i \ne j$, and $|\chi_R(D_i, D_j)| \le \beta$ for $i = j$.
+
+The statistical dimension of a search problem is based on the largest set of $(\gamma, \beta)$-correlated distributions assigned to each solution.
+
+**Definition 1.11** (Statistical Dimension). For $\gamma, \beta > 0$, a search problem $\mathcal{Z}$ over a set of solutions $\mathcal{F}$ and a class $\mathcal{D}$ of distributions over $X$, we define the statistical dimension of $\mathcal{Z}$, denoted by $\text{SD}(\mathcal{Z}, \gamma, \beta)$, to be the largest integer $m$ such that there exists a reference distribution $R$ over $X$ and a finite set of distributions $\mathcal{D}_R \subseteq \mathcal{D}$ such that for any solution $f \in \mathcal{F}$, the set $\mathcal{D}_f = \mathcal{D}_R \setminus \mathcal{Z}^{-1}(f)$ is $(\gamma, \beta)$-correlated relative to $R$ and $|\mathcal{D}_f| \ge m$.
+
+**Lemma 1.12** (Corollary 3.12 in [FGR+17]). Let $\mathcal{Z}$ be a search problem over a set of solutions $\mathcal{F}$ and a class of distributions $\mathcal{D}$ over $\mathbb{R}^d$. For $\gamma, \beta > 0$, let $s = \text{SD}(\mathcal{Z}, \gamma, \beta)$ be the statistical dimension of the problem. For any $\gamma' > 0$, any SQ algorithm for $\mathcal{Z}$ that only uses queries to the $\text{STAT}(\sqrt{\gamma + \gamma'})$ oracle requires at least $s\gamma' / (\beta - \gamma)$ queries.
+
+We continue by recalling the machinery from [DKS17] that will be used for our construction.
\ No newline at end of file
diff --git a/samples/texts/568081/page_8.md b/samples/texts/568081/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..26e7558adab7d58153a29d2218dd54a45ed9dfda
--- /dev/null
+++ b/samples/texts/568081/page_8.md
@@ -0,0 +1,25 @@
+**Definition 1.13 (High-Dimensional Hidden Direction Distribution).** For a unit vector $v \in \mathbb{R}^d$ and a distribution $A$ on the real line with probability density function $A(x)$, define $P_{A,v}$ to be a distribution over $\mathbb{R}^d$, where $P_{A,v}$ is the product distribution whose orthogonal projection onto the direction of $v$ is $A$, and onto the subspace perpendicular to $v$ is the standard $(d-1)$-dimensional normal distribution. That is, $P_{A,v}(x) := A(v^T x)\phi_{\perp v}(x)$, where $\phi_{\perp v}(x) = \exp(-\|x - (v^T x)v\|_2^2/2) / (2\pi)^{(d-1)/2}$.
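For intuition, $P_{A,v}$ is straightforward to sample from: draw a standard Gaussian, remove its component along $v$, and replace it with a draw from $A$. A minimal sketch (the helper names below are our own):

```python
import numpy as np

def sample_P_Av(n, v, sample_A, rng):
    """Draw n samples from P_{A,v}: A along the unit vector v, N(0, I) orthogonally."""
    v = v / np.linalg.norm(v)
    z = rng.standard_normal((n, v.size))
    a = sample_A(n, rng)                       # one-dimensional samples from A
    return z - np.outer(z @ v, v) + np.outer(a, v)

rng = np.random.default_rng(0)
v = np.array([1.0, 0, 0, 0, 0])
X = sample_P_Av(50_000, v, lambda n, r: r.uniform(-1, 1, n), rng)
print(X[:, 0].var(), X[:, 1].var())  # ~1/3 (uniform A) vs ~1 (Gaussian)
```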
+
+The distributions {$P_{A,v}$} defined above are shown to be nearly uncorrelated as long as the directions where $A$ is embedded are pairwise nearly orthogonal.
+
+**Lemma 1.14 (Lemma 3.4 in [DKS17]).** Let $m \in \mathbb{Z}_+$. Let $A$ be a distribution over $\mathbb{R}$ that agrees with the first $m$ moments of $\mathcal{N}(0, 1)$. For any $v$, let $P_{A,v}$ denote the distribution from Definition 1.13. For all $v, u \in \mathbb{R}^d$, we have that $\chi_{\mathcal{N}(0,I_d)}(P_{A,v}, P_{A,u}) \le |u^T v|^{m+1} \chi^2(A, \mathcal{N}(0, 1))$.
+
+The following result shows that there are exponentially many nearly-orthogonal unit vectors.
+
+**Lemma 1.15 (see, e.g., Lemma 3.7 in [DKS17]).** For any $0 < c < 1/2$, there is a set $S$, of at least $2^{\Omega(d^c)}$ unit vectors in $\mathbb{R}^d$, such that for each pair of distinct $v, v' \in S$, it holds $|v^T v'| \le O(d^{c-1/2})$.
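A standard way to obtain such a set is random sampling: independent uniform unit vectors in high dimension are nearly orthogonal with high probability. A quick demonstration (illustrative only; the lemma itself guarantees $2^{\Omega(d^c)}$ vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 2000, 200
V = rng.standard_normal((m, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # m random unit vectors
max_inner = np.abs(V @ V.T - np.eye(m)).max()   # largest |v^T v'| over pairs
print(max_inner)  # concentrates around sqrt(2 log m / d), far below 1
```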
+
+## 1.5 Prior and Related Work
+
+Early work in robust statistics, starting with the pioneering works of Huber and Tukey [Hub64, Tuk75], pinned down the sample complexity of high-dimensional robust estimation with a minority of outliers. In contrast, until relatively recently, even the most basic computational questions in this field were poorly understood. Two concurrent works [DKK$^{+}$16, LRV16] gave the first provably robust and efficiently computable estimators for robust mean and covariance estimation. Since the dissemination of these works, there has been a flurry of activity on algorithmic robust estimation in a variety of high-dimensional settings; see [DK19] for a recent survey on the topic. Notably, the robust estimators developed in [DKK$^{+}$16] are scalable in practice and yield a number of applications in exploratory data analysis [DKK$^{+}$17] and adversarial machine learning [TLM18, DKK$^{+}$19].
+
+The list-decodable learning setting studied in this paper was first considered in [CSV17] with a focus on mean estimation. [CSV17] gave a polynomial-time algorithm with near-optimal statistical guarantees for list-decodable mean estimation under a bounded covariance assumption on the clean data. Subsequent work has led to significantly faster algorithms for the bounded covariance setting [DKK20a, CMY20, DKK$^{+}$20b, DKK$^{+}$21] and polynomial-time algorithms with improved error guarantees under stronger distributional assumptions [DKS18, KSS18]. More recently, a line of work developed list-decodable learners for more challenging tasks, including linear regression [KKK19, RY20a] and subspace recovery [RY20b, BK21].
+
+# 2 Information-Theoretic Bounds
+
+## 2.1 Upper Bound on Sample Complexity
+
+In this section, we show that $n = \text{poly}(d, 1/\alpha)$ samples suffice for list-decodable linear regression.
+
+**Theorem 2.1.** There is a (computationally inefficient) algorithm that uses $O(d/\alpha^3)$ samples from a $(1-\alpha)$-corrupted version of a Gaussian linear regression model of Definition 1.2 with $S' = \mathbb{R}^d$, and returns a list $\mathcal{L}$ of $|\mathcal{L}| = O(1/\alpha)$ many hypotheses such that with high probability at least one of them is within $\ell_2$-distance $O((\sigma/\alpha)\sqrt{\log(1/\alpha)})$ from the regression vector.
+
+The proof strategy is similar to [DKS18]. When $S$ is a set, we use the notation $X \sim_u S$ to denote that $X$ is distributed according to the uniform distribution on $S$. We require the following theorem:
\ No newline at end of file
diff --git a/samples/texts/568081/page_9.md b/samples/texts/568081/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..db837d5d13b6461f869aa75d791f11f248a7118f
--- /dev/null
+++ b/samples/texts/568081/page_9.md
@@ -0,0 +1,27 @@
+**Fact 2.2 (VC Inequality).** Let $\mathcal{F}$ be a class of Boolean functions with finite VC dimension $\text{VC}(\mathcal{F})$ and let $D$ be a probability distribution over the domain of these functions. For a set $S$ of $n$ independent samples from $D$,
+
+$$ \sup_{f \in \mathcal{F}} |\mathbf{Pr}_{X \sim_u S}[f(X)] - \mathbf{Pr}_{X \sim D}[f(X)]| \lesssim \sqrt{\frac{\text{VC}(\mathcal{F})}{n}} + \sqrt{\frac{\log(1/\tau)}{n}}, $$
+
+with probability at least $1 - \tau$.
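As a concrete illustration of Fact 2.2 (a sketch under assumptions not in the text: standard Gaussian data and the VC-dimension-1 class of one-sided thresholds $f_t(x) = \mathbf{1}\{x \le t\}$), the uniform deviation shrinks at roughly the $1/\sqrt{n}$ rate:

```python
import math
import random

# Illustration of the VC inequality for the class of one-sided thresholds
# f_t(x) = 1{x <= t} on the real line, which has VC dimension 1.
# (Gaussian data and all constants here are illustrative assumptions.)

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sup_deviation(n, rng):
    """Sup over thresholds t of |empirical Pr[X <= t] - true Pr[X <= t]|."""
    sample = sorted(rng.gauss(0.0, 1.0) for _ in range(n))
    dev = 0.0
    for i, x in enumerate(sample):
        # For sorted data the supremum is attained at the sample points.
        F = normal_cdf(x)
        dev = max(dev, abs((i + 1) / n - F), abs(i / n - F))
    return dev

rng = random.Random(0)
d_small = sup_deviation(200, rng)
d_large = sup_deviation(20000, rng)
assert d_large < d_small  # deviation shrinks as n grows, roughly like 1/sqrt(n)
```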
+
+*Proof of Theorem 2.1.* Recall the notation in Definitions 1.1 and 1.2. Let $T$ be the set of points generated by the $(1-\alpha)$-corrupted version of $D_{\beta^*}$ for some unknown $\beta^* \in \mathbb{R}^d$. Let $S_1$ be the set of points that are sampled from $D_{\beta^*}$. Since inliers are sampled with probability $\alpha$, we have that $|S_1| \ge \alpha|T|/2$ with high probability. For $t, \gamma \ge 0$, define $\mathcal{H}_{t,\gamma}$ as follows:
+
+$$ \mathcal{H}_{t,\gamma} := \left\{ \beta \in \mathbb{R}^d : \exists T' \subset T, |T'| = \alpha|T|/2, \right. \tag{1} $$
+
+$$ \mathbf{Pr}_{(X,y) \sim_u T'} [|y - X^T \beta| > \sigma t] \leq \alpha/20, \tag{2} $$
+
+$$ \forall v \in S^{d-1}, \gamma' \geq \gamma : \mathbf{Pr}_{(X,y) \sim_u T'} [|y - X^T \beta - \gamma' v^T X| \leq \sigma t] \leq \alpha/20 . \tag{3} $$
+
+Recall that the distribution of inliers is $X \sim \mathcal{N}(0, I_d)$ and $y = X^T\beta^* + \eta$, where $\eta \sim \mathcal{N}(0, \sigma^2)$ is independent of $X$. If $|S_1| \ge C d / \alpha^2$ for a sufficiently large constant $C$, then we claim that $\beta^* \in \mathcal{H}_{t,\gamma}$ with $t = \Theta(\sqrt{\log(1/\alpha)})$ and $\gamma = 40\sigma t / \alpha = \Theta((\sigma/\alpha)\sqrt{\log(1/\alpha)})$. Let $S'$ be a set of i.i.d. points sampled from $D_{\beta^*}$ with $|S'| = |T|\alpha/2$. We first argue that conditions (2) and (3) hold under $(X, y) \sim D_{\beta^*}$, even after replacing $\alpha/20$ with $\alpha/40$ in conditions (2) and (3), with the claimed bounds on $t$ and $\gamma$, and then the required result on $(X, y) \sim_u S'$ will follow from the VC inequality. Since $y-X^T\beta^* \sim \mathcal{N}(0, \sigma^2)$ under $D_{\beta^*}$, we get that $\mathbf{Pr}[|y-X^T\beta^*| > \sigma t] \le \alpha/40$ because of Gaussian concentration. Let $G \sim \mathcal{N}(0, 1)$ be independent of $\eta$. For condition (3), the expression again reduces to concentration of a Gaussian distribution:
+
+$$ \underset{\eta \sim \mathcal{N}(0, \sigma^2),\, G \sim \mathcal{N}(0, 1)}{\mathbf{Pr}} [|\eta + \gamma' G| \leq \sigma t] = \underset{Z \sim \mathcal{N}(0, \sigma^2+\gamma'^2)}{\mathbf{Pr}} [|Z| \leq \sigma t] \leq \frac{\sigma t}{\gamma'}, $$
+
+which is less than $\alpha/40$ for $\gamma' \geq \gamma = 40t\sigma/\alpha$. The desired conclusion now follows by noting that conditions (2) and (3) follow from uniform concentration of linear threshold functions on $(X, y)$, which have VC dimension $O(d)$, together with the condition that $|S'| = \Omega(d/\alpha^2)$.
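The displayed Gaussian bound can be sanity-checked numerically; the sketch below (with illustrative values of $\sigma$, $\gamma'$, $t$, not taken from the proof) verifies $\mathbf{Pr}[|Z| \le \sigma t] \le \sigma t/\gamma'$:

```python
import math

# Sanity check of the displayed bound (the (s, g, t) triples are
# illustrative, not from the proof): for Z ~ N(0, s^2 + g^2),
#   Pr[|Z| <= s*t] = erf(s*t / (sqrt(2) * sqrt(s^2 + g^2))) <= s*t / g,
# since the N(0, v^2) density is at most 1/(v*sqrt(2*pi)) and v >= g.

def prob_in_band(s, g, t):
    v = math.sqrt(s * s + g * g)
    return math.erf(s * t / (math.sqrt(2.0) * v))

for s, g, t in [(1.0, 40.0, 2.0), (0.5, 10.0, 3.0), (2.0, 100.0, 1.5)]:
    assert prob_in_band(s, g, t) <= s * t / g
```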
+
+We then show that any $\gamma$-packing of the set $\mathcal{H}_{t,\gamma}$ has size $O(1/\alpha)$. Given this, there exists a $\gamma$-cover of size $O(1/\alpha)$, and the algorithm outputs any such cover as its list $\mathcal{L}$. The key claim for bounding the size of any $\gamma$-packing is that the pairwise intersections between the sets $T'$ from condition (1) are small.
+
+**Claim 2.3.** Let $\beta_1, \dots, \beta_k \in \mathcal{H}_{t,\gamma}$ such that $\|\beta_i - \beta_j\|_2 > \gamma$ for all $i, j \in [k]$ and $i \neq j$. Let $T'_i$ be the corresponding subsets of $T$ satisfying the condition (1). Then $|T'_i \cap T'_j| \leq \alpha(|T'_i| + |T'_j|)/20$.
+
+*Proof.* Fix $i \neq j$. Let $\beta_i - \beta_j = v\gamma'$, where $v \in S^{d-1}$ and $\gamma' \geq \gamma$. Let $\mathcal{E}$ be the event $\{(X,y) : |y - X^T\beta_j| \leq \sigma t\}$ and $\mathcal{E}^c$ be its complement. As $T'_i$ and $T'_j$ are sets of size $\alpha|T|/2$, we have that
+
+$$ |T'_i \cap T'_j| = \frac{|T'_i| + |T'_j|}{2} \left( \frac{|T'_i \cap T'_j \cap \mathcal{E}|}{|T'_i|} + \frac{|T'_i \cap T'_j \cap \mathcal{E}^c|}{|T'_j|} \right) $$
\ No newline at end of file
diff --git a/samples/texts/6110726/page_1.md b/samples/texts/6110726/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..074bbb4e236cc16e57e0fb8e49105c76b92b6821
--- /dev/null
+++ b/samples/texts/6110726/page_1.md
@@ -0,0 +1,40 @@
+Differentiating Natural Logarithms & Exponentials
+
+1. Given that $y = \ln(1 + x) - \frac{x}{1+x}$, find $\frac{dy}{dx}$ and show that $y$ is positive for all positive values of $x$.
+
+2. Differentiate $e^{2x}$ and $\frac{e^{2x}}{1+e^x}$ with respect to $x$.
+Hence find the coordinates of the stationary point and determine its nature. Sketch the curve.
+
+3. Find the co-ordinates of the stationary points of the curve $y = x^2 e^{-x}$, and determine their nature. Hence sketch $y = x^2 e^{-x}$.
+Explain, from your graph, why, if $0 < k < \frac{4}{e^2}$, the equation $ke^x = x^2$ has three real roots.
+
+4. Find the local maximum and minimum values of $y = xe^{-2x}$.
+Find where the curve crosses the axes.
+Sketch the curve (make it reasonably large).
+Find also the points of inflexion of the curve (and they do not occur where $\frac{dy}{dx} = 0$).
+Identify these inflexions on your sketch.
+
+5. Calculate the co-ordinates of the stationary point of the graph of $\frac{\ln x}{x}$ and prove whether it is a maximum, minimum or point of inflexion.
+Sketch the graph of the function for the domain $x > 0$.
+Given that $1 < a < b$, and $\frac{\ln a}{a} = \frac{\ln b}{b}$, show that there is a number $k$ such that $a < k < b$ whatever pair of $a$ and $b$ are chosen. Find this value of $k$.
+With the help of your sketch, or otherwise, find all the pairs of positive integers $(x,y)$ such that
+
+$$x^y = y^x.$$
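For the last part, a brute-force search (an illustration, not part of the exercise sheet) over small integers suggests what the sketch should reveal:

```python
# Brute-force search (an illustration, not part of the exercise): pairs of
# distinct positive integers below 50 with x^y = y^x.
pairs = [(x, y) for x in range(1, 50) for y in range(1, 50)
         if x != y and x**y == y**x]
assert pairs == [(2, 4), (4, 2)]  # the only non-trivial pair is {2, 4}
```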
+
+6. (a) If $y = e^{-x} \ln(1+x)$, show that
+
+$$\frac{dy}{dx} = e^{-x} \left( \frac{1}{1+x} - \ln(1+x) \right)$$
+
+and hence that
+$$ (1+x)\frac{d^2y}{dx^2} + (2x+3)\frac{dy}{dx} + (x+2)y = 0. $$
+
+(b) Using the same set of axes, sketch the graphs of $y = \ln(1+x)$ and $y = \frac{1}{1+x}$. Hence show that the function $y = e^{-x}\ln(1+x)$ has exactly one stationary point, which occurs for a positive value of $x$ and, from your sketch or by using (a), identify its nature.
+
+(c) Sketch the graph of $y = e^{-x} \ln(1+x)$.
+
+7. Find the stationary points of $y = \ln(x^3 - 3x + 3)$.
+Find where the curve meets the axes.
+Sketch the curve.
+
+8. Differentiate $x^n e^{-x}$.
+Denoting this function by $f_n(x)$, prove that $f_n(x)$ attains its maximum value in the range $x \ge 0$ when $x = n$.
\ No newline at end of file
diff --git a/samples/texts/6110726/page_2.md b/samples/texts/6110726/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..211d57298be0294c2d15171d01ffc803b4bf21d0
--- /dev/null
+++ b/samples/texts/6110726/page_2.md
@@ -0,0 +1,21 @@
+(a) By considering $f_n(n)$ and $f_n(n+1)$ prove that $(1+\frac{1}{n})^n < e$.
+
+By considering $f_{n+1}(n)$ and $f_{n+1}(n+1)$ prove that $(1+\frac{1}{n})^{n+1} > e$, and thus
+
+$$ \left(1 + \frac{1}{n}\right)^n > \frac{e}{1 + \frac{1}{n}} $$
+
+Hence show that $\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n = e$.
+
+(b) Similarly show that $\lim_{n \to \infty} \left(1 - \frac{1}{n}\right)^n = \frac{1}{e}$.
+
+(c) Using (a) and (b), suggest a value for $\lim_{n \to \infty} \left(1 - \frac{1}{n^2}\right)^n$, and confirm it with numerical approximations on your calculator.
+
+(d) Find the following:
+
+i. $\lim_{n \to \infty} \left(1 + \frac{1}{n^2}\right)^n;$
+
+ii. $\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^{\sqrt{n}};$
+
+iii. $\lim_{n \to \infty} \left(1 + \frac{1}{\sqrt{n}}\right)^n;$
+
+iv. $\lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n.$
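The limits in this question can be checked numerically, as part (c) suggests; the sketch below uses $n = 10^6$ (an arbitrary large choice; the exact answers follow from $(1 + x/n)^n \to e^x$):

```python
import math

# Numerical confirmation of the limits above (n = 10**6 is an arbitrary
# large choice; the exact limits follow from (1 + x/n)^n -> e^x).
n = 10**6
assert abs((1 + 1 / n) ** n - math.e) < 1e-4          # (a):      -> e
assert abs((1 - 1 / n) ** n - 1 / math.e) < 1e-4      # (b):      -> 1/e
assert abs((1 - 1 / n**2) ** n - 1.0) < 1e-4          # (c):      -> 1
assert abs((1 + 1 / n**2) ** n - 1.0) < 1e-4          # (d)(i):   -> 1
assert abs((1 + 1 / n) ** math.sqrt(n) - 1.0) < 1e-2  # (d)(ii):  -> 1
assert abs((1 + 2 / n) ** n - math.e**2) < 1e-3       # (d)(iv) with x = 2
```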
\ No newline at end of file
diff --git a/samples/texts/6768269/page_1.md b/samples/texts/6768269/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..a0a1eab8dc0b8784f7cd09b2df6932f8e231d9e0
--- /dev/null
+++ b/samples/texts/6768269/page_1.md
@@ -0,0 +1,28 @@
+LINEAR MAPS PRESERVING KY FAN NORMS AND SCHATTEN NORMS OF
+TENSOR PRODUCTS OF MATRICES
+
+AJDA FOŠNER*, ZEJUN HUANG†, CHI-KWONG LI‡, AND NUNG-SING SZE§
+
+**Abstract.** For a positive integer $n$, let $M_n$ be the set of $n \times n$ complex matrices. Suppose $\|\cdot\|$ is the Ky Fan $k$-norm with $1 \le k \le mn$ or the Schatten $p$-norm with $1 \le p \le \infty$ ($p \ne 2$) on $M_{mn}$, where $m, n \ge 2$ are positive integers. It is shown that a linear map $\phi : M_{mn} \to M_{mn}$ satisfies
+
+$$
+\|A \otimes B\| = \| \phi(A \otimes B) \| \quad \text{for all } A \in M_m \text{ and } B \in M_n
+$$
+
+if and only if there are unitary $U, V \in M_{mn}$ such that $\phi$ has the form $A \otimes B \mapsto U(\varphi_1(A) \otimes \varphi_2(B))V$, where $\varphi_s(X)$ is either the identity map $X \mapsto X$ or the transposition map $X \mapsto X^t$. The results are extended to tensor space $M_{n_1} \otimes \cdots \otimes M_{n_m}$ of higher level. The connection of the problem to quantum information science is mentioned.
+
+AMS subject classifications. 15A69, 15A86, 15A60, 15A18.
+
+**Key words.** Complex matrix, linear preserver, spectral norm, Ky Fan $k$-norm, Schatten $p$-norm, tensor product.
+
+**1. Introduction and preliminaries.** For a positive integer $n$, let $M_n$ be the set of $n \times n$ complex matrices. Now, suppose that $m, n \ge 2$ are positive integers. Then for $A \in M_m$ and $B \in M_n$, we denote by $A \otimes B \in M_{mn}$ their tensor product (a.k.a. the Kronecker product). In many applied and pure studies, one considers the tensor product of matrices; for example, see [2, 9, 18, 21]. Most noticeably, the tensor product is often used in quantum information science [19]. In a quantum system, quantum states are represented as density matrices (positive semi-definite matrices with trace one). Suppose $A \in M_m$ and $B \in M_n$ are two quantum states in two quantum systems. Then their tensor product $A \otimes B$ describes the joint state in the bipartite system, in which the general states are density matrices in $M_{mn}$. More generally, one may consider tensor states and general states in a multipartite system $M_{n_1} \otimes \cdots \otimes M_{n_m}$ identified with $M_N$ where $N = \prod_{i=1}^m n_i$.
+
+In general, it is relatively easy to construct and extract information from matrices in tensor product form. For instance, the eigenvalues (respectively, the singular values) of $A \otimes B$ have the form $a_i b_j$ with $1 \le i \le m$ and $1 \le j \le n$ if $A \in M_m$ and $B \in M_n$ have eigenvalues (respectively, singular values) $a_1, \dots, a_m$ and $b_1, \dots, b_n$, respectively. Thus, it is interesting to get information on the tensor space $M_{mn}$ by examining the properties of the small collection of matrices in tensor form $A \otimes B$. In particular, if we consider a linear map $\phi : M_{mn} \to M_{mn}$ and if one knows the images $\phi(A \otimes B)$ for $A \in M_m$ and $B \in M_n$, then the map $\phi$ can be completely characterized as every $C \in M_{mn}$ is a linear combination of matrices in tensor form $A \otimes B$. Nevertheless, the challenge is to use the limited information of the linear map $\phi$ on matrices in tensor form to determine the structure of $\phi$. In [5], we considered linear maps preserving the spectrum $\sigma(A \otimes B)$ and spectral
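The singular-value fact above is easy to verify numerically; the following sketch (with arbitrary random complex matrices) checks that the singular values of $A \otimes B$ are exactly the pairwise products $s_i(A)\,s_j(B)$:

```python
import numpy as np

# Check that the singular values of A kron B are the pairwise products
# s_i(A) * s_j(B) (random complex matrices; the sizes are arbitrary).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

sv_kron = np.sort(np.linalg.svd(np.kron(A, B), compute_uv=False))
sv_prod = np.sort(np.outer(np.linalg.svd(A, compute_uv=False),
                           np.linalg.svd(B, compute_uv=False)).ravel())
assert np.allclose(sv_kron, sv_prod)
```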
+
+* Faculty of Management, University of Primorska, Cankarjeva 5, SI-6104 Koper, Slovenia. (Email: ajda.fosner@fm-kp.si)
+
+† Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Hong Kong. (Email: huangzejun@yahoo.cn)
+
+‡ Department of Mathematics, College of William and Mary, Williamsburg, VA 23187, USA; Department of Mathematics, University of Hong Kong, Pokfulam, Hong Kong. (Email: ckli@math.wm.edu)
+
+§ Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Hong Kong. (Email: raymond.sze@polyu.edu.hk)
\ No newline at end of file
diff --git a/samples/texts/6768269/page_10.md b/samples/texts/6768269/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..1dcd5cb1e07b8f71c5a145b027543f8293e68298
--- /dev/null
+++ b/samples/texts/6768269/page_10.md
@@ -0,0 +1,48 @@
+If $i=r$ or $j=s$, we have
+
+$$
+\|\phi(E_{ii} \otimes E_{jj}) + \phi(E_{rr} \otimes E_{ss})\|_p^p + \|\phi(E_{ii} \otimes E_{jj}) - \phi(E_{rr} \otimes E_{ss})\|_p^p = 2 (\|\phi(E_{ii} \otimes E_{jj})\|_p^p + \|\phi(E_{rr} \otimes E_{ss})\|_p^p).
+$$
+
+Applying Lemma 2.7 and Remark 2.8, we get (2.5). If $i \neq r$ and $j \neq s$, we have
+
+$$
+\begin{align*}
+& \|\phi(E_{ii} \otimes (E_{jj} + E_{ss})) + \phi(E_{rr} \otimes (E_{jj} + E_{ss}))\|_p^p + \|\phi(E_{ii} \otimes (E_{jj} + E_{ss})) - \phi(E_{rr} \otimes (E_{jj} + E_{ss}))\|_p^p \\
+& \qquad = 2 (\|\phi(E_{ii} \otimes (E_{jj} + E_{ss}))\|_p^p + \|\phi(E_{rr} \otimes (E_{jj} + E_{ss}))\|_p^p).
+\end{align*}
+$$
+
+By Lemma 2.7, we have $\phi(E_{ii} \otimes (E_{jj} + E_{ss})) \perp \phi(E_{rr} \otimes (E_{jj} + E_{ss}))$. With the fact that $\phi(E_{ii} \otimes E_{jj}) \perp \phi(E_{ii} \otimes E_{ss})$ and $\phi(E_{rr} \otimes E_{jj}) \perp \phi(E_{rr} \otimes E_{ss})$, we conclude that (2.5) holds. This completes the proof. $\square$
+
+REMARK 2.10. For $p=2$, i.e., the Frobenius norm case, the statements (a) and (b) in Theorem 2.9 are not equivalent. One can consider the linear map $\psi(A) = [b_{ij}]$ for $A = [a_{ij}] \in M_{mn}$ such that $b_{1,mn} = a_{mn,1}$, $b_{mn,1} = a_{1,mn}$ and $b_{ij} = a_{ij}$ for $(i,j) \notin \{(1,mn), (mn,1)\}$. In fact, any linear map on $M_{mn}$ preserving the inner product $(A,B) = \text{tr}(AB^*)$ on $M_{mn}$ will preserve the Frobenius norm.
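A small numerical sketch of the entry-swap map $\psi$ from Remark 2.10 (taking $mn = 4$ for concreteness): it preserves the Frobenius norm of every matrix, yet is not, for example, a spectral-norm preserver, so it cannot have the form in statement (b):

```python
import numpy as np

# The entry-swap map psi of Remark 2.10 (taking mn = 4 for concreteness)
# permutes entries, so it preserves the Frobenius norm of every matrix...
def psi(A):
    B = A.copy()
    B[0, -1], B[-1, 0] = A[-1, 0], A[0, -1]
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.norm(A, 'fro'), np.linalg.norm(psi(A), 'fro'))

# ...yet it changes the spectral norm of E_{14} + E_{21}, so psi cannot
# be of the form X -> UXV or X -> UX^tV.
M = np.zeros((4, 4))
M[0, 3] = M[1, 0] = 1.0                      # singular values {1, 1}
spec = lambda X: np.linalg.norm(X, 2)
assert np.isclose(spec(M), 1.0)
assert np.isclose(spec(psi(M)), np.sqrt(2.0))
```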
+
+REMARK 2.11. Note that the maps in Theorems 2.1, 2.6, and 2.9 may not satisfy $\|\phi(C)\| = \|C\|$ for all $C \in M_{mn}$. For instance, if $\phi(A \otimes B) = A \otimes B^t$ and if
+
+$$
+C_r = r^2 E_{11} \otimes E_{11} + r(E_{12} \otimes E_{12} + E_{21} \otimes E_{21}) + E_{22} \otimes E_{22} \quad \text{for } r \ge 0,
+$$
+
+then $C_r$ has singular values $r^2+1, 0, \dots, 0$, and $\phi(C_r)$ has singular values $r^2, r, r, 1, 0, \dots, 0$. Then for any positive number $r \ne 1$, $\|\phi(C_r)\| \ne \|C_r\|$ unless $\|\cdot\|$ is the Frobenius norm. However, if we assume that $\phi$ satisfies $\|\phi(C)\| = \|C\|$ for $C$ of the tensor form $A \otimes B$, and also for $C = C_r$ for some positive number $r \ne 1$, then one easily deduces that both $\varphi_i$ mentioned in the theorems have to be of the same type, i.e., both are identity map, or both are the transposition map. It follows that there are unitary $U, V \in M_{mn}$ such that $\phi$ has the form
+
+$$
+X \mapsto U X V \quad \text{or} \quad X \mapsto U X^t V
+$$
+
+and hence, $\|\phi(X)\| = \|X\|$ for all $X \in M_{mn}$.
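The singular values claimed in Remark 2.11 can be verified numerically; the sketch below takes $m = n = 2$ and $r = 2$ (note that the Frobenius norms of $C_r$ and $\phi(C_r)$ do agree, consistent with the Frobenius-norm exception):

```python
import numpy as np

# Numerical check of the example in Remark 2.11, taking m = n = 2 and r = 2.
r = 2.0
E = lambda i, j: np.eye(2)[:, [i]] @ np.eye(2)[[j], :]  # E_{i+1, j+1}

C = (r**2 * np.kron(E(0, 0), E(0, 0))
     + r * (np.kron(E(0, 1), E(0, 1)) + np.kron(E(1, 0), E(1, 0)))
     + np.kron(E(1, 1), E(1, 1)))
# phi sends A kron B to A kron B^t:
phiC = (r**2 * np.kron(E(0, 0), E(0, 0).T)
        + r * (np.kron(E(0, 1), E(0, 1).T) + np.kron(E(1, 0), E(1, 0).T))
        + np.kron(E(1, 1), E(1, 1).T))

sC = np.linalg.svd(C, compute_uv=False)        # sorted descending
sphiC = np.linalg.svd(phiC, compute_uv=False)
assert np.allclose(sC, [r**2 + 1, 0, 0, 0])    # {r^2 + 1, 0, 0, 0}
assert np.allclose(sphiC, [r**2, r, r, 1])     # {r^2, r, r, 1}
# Spectral norms disagree, but Frobenius norms agree:
assert not np.isclose(sC.max(), sphiC.max())
assert np.isclose(np.linalg.norm(sC), np.linalg.norm(sphiC))
```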
+
+**3. Multipartite systems.** In this section we extend the previous results to multipartite systems $M_{n_1} \otimes \cdots \otimes M_{n_m}, m \ge 2$. Let $A_i \in M_{n_i}, i = 1, \ldots, m$. We denote $\bigotimes_{i=1}^m A_i = A_1 \otimes A_2 \otimes \cdots \otimes A_m$.
+
+THEOREM 3.1. Let $1 \le k \le \prod_{i=1}^m n_i$ and $\phi: M_{n_1 \cdots n_m} \to M_{n_1 \cdots n_m}$ be a linear map. The following are equivalent.
+
+(a) $\|\phi(A_1 \otimes \cdots \otimes A_m)\|_k = \|A_1 \otimes \cdots \otimes A_m\|_k$ for all $A_i \in M_{n_i}, i = 1, \dots, m$.
+
+(b) There are unitary matrices $U,V \in M_{n_1 \cdots n_m}$ such that
+
+$$
+\phi(A_1 \otimes \cdots \otimes A_m) = U(\varphi_1(A_1) \otimes \cdots \otimes \varphi_m(A_m))V \quad \text{for all } A_i \in M_{n_i}, i = 1, \dots, m, \tag{3.1}
+$$
+
+where $\varphi_s$ is the identity map or the transposition map $X \mapsto X^t$ for $s = 1, \dots, m$.
+
+*Proof.* The sufficiency part is clear. To prove the necessity part, we use induction on $m$. By Theorem 2.1 and Theorem 2.6, we already know that the statement of Theorem 3.1 is true for bipartite systems. So, assume that $m \ge 3$ and that the result holds for all $(m-1)$-partite systems. We need to prove that the same is true for $m$-partite systems.
\ No newline at end of file
diff --git a/samples/texts/6768269/page_11.md b/samples/texts/6768269/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..aa86807f6c87d584d68aa53d2dd51b5221c20f64
--- /dev/null
+++ b/samples/texts/6768269/page_11.md
@@ -0,0 +1,44 @@
+Denote $N = \prod_{i=1}^{m} n_i$. First we claim that there exist unitary $U, V \in M_N$ such that
+
+$$ \phi(E_{j_1 j_1} \otimes \cdots \otimes E_{j_m j_m}) = U (E_{j_1 j_1} \otimes \cdots \otimes E_{j_m j_m}) V \quad \text{for all } 1 \le j_i \le n_i. \qquad (3.2) $$
+
+When $k=1$, as in the proof of Theorem 2.1, we can successively consider
+
+$$ \begin{aligned} & \phi(E_{i_1 i_1} \otimes \cdots \otimes E_{i_{m-2} i_{m-2}} \otimes E_{i_{m-1} i_{m-1}} \otimes (E_{i_m i_m} + \gamma E_{j_m j_m})), \\ & \qquad \phi(E_{i_1 i_1} \otimes \cdots \otimes E_{i_{m-2} i_{m-2}} \otimes (E_{i_{m-1} i_{m-1}} + \gamma E_{j_{m-1} j_{m-1}}) \otimes D_m), \dots, \\ & \qquad \phi((E_{i_1 i_1} + \gamma E_{j_1 j_1}) \otimes D_2 \otimes \cdots \otimes D_{m-2} \otimes D_{m-1} \otimes D_m) \end{aligned} $$
+
+to obtain (3.2), where $D_2, \dots, D_m$ are arbitrary diagonal matrices.
+
+When $k \ge 2$, as in the proof of Theorem 2.6, it suffices to show
+
+$$ \phi(E_{i_1 i_1} \otimes \cdots \otimes E_{i_m i_m}) \perp \phi(E_{j_1 j_1} \otimes \cdots \otimes E_{j_m j_m}) \quad \text{for any } (i_1, \dots, i_m) \ne (j_1, \dots, j_m). \quad (3.3) $$
+
+To confirm (3.3), it suffices to verify that for any $1 \le r \le m$,
+
+$$ \phi\left(\bigotimes_{u=1}^{r-1} (E_{i_u i_u} + E_{j_u j_u}) \otimes E_{i_r i_r} \otimes \bigotimes_{u=r+1}^{m} E_{i_u i_u}\right) \perp \phi\left(\bigotimes_{u=1}^{r-1} (E_{i_u i_u} + E_{j_u j_u}) \otimes E_{j_r j_r} \otimes \bigotimes_{u=r+1}^{m} E_{i_u i_u}\right) \tag{3.4} $$
+
+for any distinct $\mathbf{i} = (i_1, \dots, i_m)$ and $\mathbf{j} = (j_1, \dots, j_m)$ with $i_u \neq j_u$, $1 \le u \le r$. Denote by $G_r = G_r(\mathbf{i}, \mathbf{j})$ and $\hat{G}_r = \hat{G}_r(\mathbf{i}, \mathbf{j})$ the two matrices in (3.4) accordingly. We consider two cases.
+
+Case 1. For $r \le \log_2 k$, we have $\|\alpha G_r + \beta \hat{G}_r\|_{(k)} = |\alpha|\,\|G_r\|_{(k)} + |\beta|\,\|\hat{G}_r\|_{(k)}$ for all complex $\alpha$ and $\beta$. Applying Lemma 2.2, we get
+
+$$ G_r \perp \hat{G}_r \quad \text{and} \quad \text{rank } G_r + \text{rank } \hat{G}_r \le k \quad \text{for all} \quad r \le \log_2 k. $$
+
+Now as $G_{s+1} = G_s + \hat{G}_s$ and $G_s \perp \hat{G}_s$ for all $s \le \log_2 k$,
+
+$$ \|G_{s+1}\| \leq \max\{\|G_s\|, \|\hat{G}_s\|\} \leq \max\{\|G_s(\mathbf{i},\mathbf{j})\| : i_u \neq j_u, 1 \leq u \leq s\}, $$
+
+where $\|\cdot\|$ is the spectral norm. Hence,
+
+$$ \max\{\|G_{s+1}(\mathbf{i},\mathbf{j})\| : i_u \neq j_u, 1 \le u \le s+1\} \leq \max\{\|G_s(\mathbf{i},\mathbf{j})\| : i_u \neq j_u, 1 \le u \le s\}. $$
+
+As the inequality holds for all $s \le \log_2 k$, it follows that
+
+$$
+\begin{align*}
+\|G_r\| &\leq \max\{\|G_1(\mathbf{i},\mathbf{j})\| : i_1 \neq j_1\} = \max\{\|\phi(E_{i_1 i_1} \otimes \cdots \otimes E_{i_m i_m})\| : (i_1, \ldots, i_m)\} \\
+&\leq \max\{\|\phi(E_{i_1 i_1} \otimes \cdots \otimes E_{i_m i_m})\|_{(k)} : (i_1, \ldots, i_m)\} = 1.
+\end{align*}
+$$
+
+Similarly, one concludes that $\|\hat{G}_r\| \le 1$.
+
+Case 2. For $r > \log_2 k$, we claim that $G_r$ and $\hat{G}_r$ are orthogonal and both of them have singular values 0 and 1 only. We prove the claim by induction on $r$.
\ No newline at end of file
diff --git a/samples/texts/6768269/page_12.md b/samples/texts/6768269/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a53431d026262f088f8b57b6759ecd10ac4e826
--- /dev/null
+++ b/samples/texts/6768269/page_12.md
@@ -0,0 +1,27 @@
+Suppose $\log_2 k < r \le 1 + \log_2 k$. Notice that $G_r = G_{r-1} + \hat{G}_{r-1}$. By Case 1,
+
+$$G_{r-1} \perp \hat{G}_{r-1}, \quad \|G_{r-1}\| \le 1, \quad \|\hat{G}_{r-1}\| \le 1, \quad \text{and } \operatorname{rank} G_{r-1} + \operatorname{rank} \hat{G}_{r-1} \le k.$$
+
+Therefore, $\|G_r\| \le 1$ and $\operatorname{rank} G_r \le k$. Since
+
+$$\|G_r + \alpha \hat{G}_r\|_{(k)} = k \quad \text{and} \quad \|2G_r + \beta \hat{G}_r\|_{(k)} = \|G_r\|_{(k)} + \|G_r + \beta \hat{G}_r\|_{(k)},$$
+
+for any complex unit $\alpha$, by Lemma 2.4, we obtain $G_r \perp \hat{G}_r$. Moreover, $G_r$ has singular values 0 and 1 only. Similarly, one can conclude that $\hat{G}_r$ has singular values 0 and 1 only. Now assume that the claim holds for some $r > \log_2 k$. We will show that the claim also holds for $r+1$. By the induction hypothesis and the fact that $G_{r+1} = G_r + \hat{G}_r$, we conclude that $G_{r+1}$ has singular values 0 and 1 only. The same conclusion holds for $\hat{G}_{r+1}$. By Lemma 2.5 and the fact that $\|G_{r+1} + \alpha \hat{G}_{r+1}\|_{(k)} = k$ for every complex unit $\alpha$, we get $G_{r+1} \perp \hat{G}_{r+1}$. Therefore the claim holds.
+
+Combining the above two cases, we see that (3.4) holds and hence the statement (3.3) follows. Therefore, the claim (3.2) holds for all $k$. Without loss of generality, we may assume $U = V = I_N$ in (3.2). Following a similar argument as in Theorems 2.1 and 2.6, one can conclude that
+
+$$\phi(E_{j_1 j_1} \otimes \cdots \otimes E_{j_{m-1} j_{m-1}} \otimes B) = E_{j_1 j_1} \otimes \cdots \otimes E_{j_{m-1} j_{m-1}} \otimes \varphi_{j_1, \ldots, j_{m-1}}(B)$$
+
+for all $1 \le j_i \le n_i$ with $1 \le i \le m-1$ and $B \in M_{n_m}$, where $\varphi_{j_1, \ldots, j_{m-1}}$ can be assumed to be either the identity map or the transposition map. Following the same argument, we can further conclude that for any $X = X_1 \otimes \cdots \otimes X_{m-1}$ with $X_i \in M_{n_i}$ being unitary for $1 \le i \le m-1$, there are unitary $U_X$ and $V_X$ such that
+
+$$\phi\left(\left(\bigotimes_{i=1}^{m-1} X_i E_{j_i j_i} X_i^*\right) \otimes B\right) = U_X \left(\left(\bigotimes_{i=1}^{m-1} X_i E_{j_i j_i} X_i^*\right) \otimes \varphi_{j_1, \dots, j_{m-1}, X}(B)\right) V_X$$
+
+for all $1 \le j_i \le n_i$ with $1 \le i \le m-1$ and $B \in M_{n_m}$, where $\varphi_{j_1, \ldots, j_{m-1}, X}$ can be assumed to be either the identity map or the transposition map, depending on $j_1, \ldots, j_{m-1}$ and $X$. By the fact that $\phi(I_N) = I_N$, we have $V_X^* = U_X$.
+
+Again, considering all symmetric $S \in M_{n_m}$ as in the proof of Theorem 2.1, we can show that there exists $W_X \in M_{n_1 \cdots n_{m-1}}$ such that
+
+$$\phi\left(\left(\bigotimes_{i=1}^{m-1} X_i E_{j_i j_i} X_i^*\right) \otimes B\right) = W_X \left(\left(\bigotimes_{i=1}^{m-1} X_i E_{j_i j_i} X_i^*\right) W_X^* \otimes \varphi_{j_1, \dots, j_{m-1}, X}(B)\right)$$
+
+for all $1 \le j_i \le n_i$ with $1 \le i \le m-1$ and $B \in M_{n_m}$. Consider the linear map (known as the partial trace in the quantum information science context) on $M_N$ defined by $X \otimes Y \mapsto (\operatorname{tr} X)Y$ for $X \in M_{n_1 \cdots n_{m-1}}$ and $Y \in M_{n_m}$. Applying it to the above equation, we see that every choice of $\bigotimes_{i=1}^{m-1} X_i E_{j_i j_i} X_i^*$ gives rise to a linear map $\varphi_{j_1, \ldots, j_{m-1}, X}$, which is either the identity map or the transposition map. Evidently, the map
+
+$$\bigotimes_{i=1}^{m-1} X_i E_{j_i j_i} X_i^* \mapsto \varphi_{j_1, \dots, j_{m-1}, X}$$
\ No newline at end of file
diff --git a/samples/texts/6768269/page_13.md b/samples/texts/6768269/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d34040f83e6871e5ad5c7fe89419f077f99241c
--- /dev/null
+++ b/samples/texts/6768269/page_13.md
@@ -0,0 +1,41 @@
+is linear and hence continuous; the set
+
+$$
+\left\{
+\bigotimes_{i=1}^{m-1} X_i E_{j_i j_i} X_i^* : 1 \le j_i \le n_i \text{ and } X_i^* X_i = I_{n_i}, \text{ for } i=1, \dots, m-1
+\right\}
+\\[1em]
+= \{ x_1 x_1^* \otimes \cdots \otimes x_{m-1} x_{m-1}^* : x_i \in \mathbb{C}^{n_i} \text{ with } 1 \le i \le m-1 \}
+$$
+
+is connected. Thus, all the maps $\varphi_{j_1, \dots, j_{m-1}, X}$ have to be the same. Assume that this common map is $\varphi_m$, which is either the identity map or the transposition map. By linearity, one can conclude that for any $A_1 \otimes \dots \otimes A_m \in M_{n_1} \otimes \dots \otimes M_{n_m}$, we have
+
+$$
+\phi(A_1 \otimes \cdots \otimes A_m) = \psi(A_1 \otimes \cdots \otimes A_{m-1}) \otimes \varphi_m(A_m)
+$$
+
+for some $\psi(A_1 \otimes \cdots \otimes A_{m-1}) \in M_{n_1 \cdots n_{m-1}}$. Note that $\psi: M_{n_1 \cdots n_{m-1}} \to M_{n_1 \cdots n_{m-1}}$ preserves the Ky Fan $k$-norm of all matrices in $M_{n_1} \otimes \cdots \otimes M_{n_{m-1}}$. By the induction hypothesis, we know there exist unitary $\tilde{U}, \tilde{V}$ such that
+
+$$
+\psi(A_1 \otimes \cdots \otimes A_{m-1}) = \tilde{U}(\varphi_1(A_1) \otimes \cdots \otimes \varphi_{m-1}(A_{m-1}))\tilde{V}
+$$
+
+with each $\varphi_j$ being either the identity map or the transposition map. Hence, $\phi$ has the desired form and the proof is completed. $\square$
+
+Using a similar argument and applying Lemma 2.7, we can extend Theorem 2.9 to multipartite systems as follows.
+
+**THEOREM 3.2.** Let $1 \le p < \infty$ and $p \ne 2$ and $\phi : M_{n_1 \cdots n_m} \to M_{n_1 \cdots n_m}$ be a linear map. The following are equivalent.
+
+(a) $\|\phi(A_1 \otimes \cdots \otimes A_m)\|_p = \|A_1 \otimes \cdots \otimes A_m\|_p$ for all $A_i \in M_{n_i}, i = 1, \dots, m$.
+
+(b) There are unitary matrices $U, V \in M_{n_1 \cdots n_m}$ such that
+
+$$
+\phi(A_1 \otimes \cdots \otimes A_m) = U (\varphi_1(A_1) \otimes \cdots \otimes \varphi_m(A_m)) V, \text{ for all } A_i \in M_{n_i}, i = 1, \dots, m,
+$$
+
+where $\varphi_s$ is the identity map or the transposition map $X \mapsto X^t$ for $s = 1, \dots, m$.
+
+**Acknowledgment.**
+
+This research was supported by a Hong Kong GRC grant PolyU 502411 with Sze as the PI. The grant also supported the post-doctoral fellowship of Huang and the visit of Fošner to the Hong Kong Polytechnic University in the summer of 2012. She gratefully acknowledged the support and kind hospitality from the host university. Li was supported by a GRC grant and a USA NSF grant; this research was done when he was a visiting professor of the University of Hong Kong in the spring of 2012; furthermore, he is an honorary professor of Taiyuan University of Technology (100 Talent Program scholar), and an honorary professor of the Shanghai University.
\ No newline at end of file
diff --git a/samples/texts/6768269/page_14.md b/samples/texts/6768269/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..a3d461ce12567d9338f2a1007dd9b91b5037ca3c
--- /dev/null
+++ b/samples/texts/6768269/page_14.md
@@ -0,0 +1,43 @@
+REFERENCES
+
+[1] J. Arazy, The isometries of $C_p$, Israel J. Math. 22 (1975), 247-256.
+
+[2] N. Bourbaki, Elements of mathematics, Algebra I, Springer-Verlag, New York, 1989.
+
+[3] J.T. Chan, C.K. Li, and N.S. Sze, Isometries for unitarily invariant norms, Linear Algebra Appl. 399 (2005), 53-70.
+
+[4] K. Chen and L.A. Wu, A matrix realignment method for recognizing entanglement, Quantum Inf. Comput. 3 (2003), 193.
+
+[5] A. Fošner, Z. Huang, C.K. Li, and N.S. Sze, Linear preservers and quantum information science, Linear and Multilinear Algebra, to appear. (Preprint available online http://dx.doi.org/10.1080/03081087.2012.740029)
+
+[6] S. Friedland, C.K. Li, Y.T. Poon, and N.S. Sze, The automorphism group of separable states in quantum information theory, J. Math. Phys., 52 (2011), 042203.
+
+[7] R. Grone and M. Marcus, Isometries of matrix algebras, J. Algebra 47 (1977), 180-189.
+
+[8] R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
+
+[9] A.K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, New Jersey, 1989.
+
+[10] N. Johnston, Characterizing Operations Preserving Separability Measures via Linear Preserver Problems, Linear and Multilinear Algebra 59 (2011), 1171-1187.
+
+[11] N. Johnston and D.W. Kribs, A family of norms with applications in quantum information theory, J. Math. Phys. 51 (2010), 082202.
+
+[12] C.K. Li, Matrices with some extremal properties, Linear Algebra Appl. 101 (1988), 255-267.
+
+[13] C.K. Li and S. Pierce, Linear preserver problems, Amer. Math. Monthly 108 (2001), 591-605.
+
+[14] C.K. Li, Y.T. Poon, and N.S. Sze, Linear preservers of tensor product of unitary orbits, and product numerical range, Linear Algebra Appl., to appear. (Preprint available online http://dx.doi.org/10.1016/j.laa.2011.07.039)
+
+[15] C.K. Li, Y.T. Poon, and N.S. Sze, Isometries for Ky Fan norms between matrix spaces, Proc. Amer. Math. Soc. 133 (2005) 369-377.
+
+[16] C.K. Li, P. Šemrl, and A.R. Sourour, Isometries for Ky-Fan norms on block triangular matrix algebras, Archiv Math. 81 (2003), 175-181.
+
+[17] C.A. McCarthy, $C_p$, Israel J. Math. 5 (1967), 249-271.
+
+[18] S. Mac Lane and B. Birkhoff, Algebra, AMS Chelsea, Providence, 1999.
+
+[19] M.A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000.
+
+[20] O. Rudolph, Further results on the cross norm criterion for separability, Quantum Inf. Process. 4 (2005), 219.
+
+[21] W.-H. Steeb and Y. Hardy, Matrix Calculus And Kronecker Product: A Practical Approach to Linear and Multilinear Algebra, 2nd Edition, World Scientific, Singapore, 2011.
\ No newline at end of file
diff --git a/samples/texts/6768269/page_2.md b/samples/texts/6768269/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..db8ac21d02485f672320376a0bb28852bc3179a9
--- /dev/null
+++ b/samples/texts/6768269/page_2.md
@@ -0,0 +1,29 @@
+radius $r(A \otimes B)$ of Hermitian matrices $A \in M_m$ and $B \in M_n$. In [10], the author considered linear maps $\phi : M_{mn} \to M_{mn}$ satisfying
+
+$$|||A \otimes B||| = |||\phi(A \otimes B)||| \quad \text{for all } A \in M_m \text{ and } B \in M_n,$$
+
+where $||| \cdot |||$ is a certain (separability) norm defined in [11]. This family of (separability) norms was shown to be related to the problem of detecting bounded entangled non-positive partial transpose states and characterizing k-positive linear maps in quantum information science.
+
+Suppose $X \in M_N$ has singular values $s_1(X) \ge \cdots \ge s_N(X)$. The Ky Fan k-norm of X is defined by
+
+$$\|X\|_{(k)} = s_1(X) + \cdots + s_k(X).$$
+
+The Ky Fan 1-norm reduces to the *spectral norm* and the Ky Fan *N*-norm is also called the *trace norm*. For $p \ge 1$, the *Schatten p-norm* of X is defined by
+
+$$\|X\|_p = \left( \sum_{i=1}^{N} s_i(X)^p \right)^{1/p}.$$
+
+The limiting case $p = \infty$ is just the spectral norm, $\|\cdot\|_1$ is the trace norm, and $\|\cdot\|_2$ is the *Frobenius norm*, i.e., $\|X\|_2 = (\text{tr}(XX^*))^{1/2}$. In bipartite quantum systems, a well-known criterion for separability of a state is the computable cross norm (CCNR) criterion [4, 20], which asserts that if a state (density matrix) $X$ in $M_{mn}$ is separable, the trace norm of the realignment of $X$ (see e.g. [4]) is at most 1.
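A minimal sketch of the two norm families just defined, computed directly from the singular values (illustrative code, not from the paper):

```python
import numpy as np

# Minimal sketch of the two norm families, computed from singular values.
def ky_fan(X, k):
    s = np.linalg.svd(X, compute_uv=False)  # sorted in descending order
    return s[:k].sum()

def schatten(X, p):
    s = np.linalg.svd(X, compute_uv=False)
    if np.isinf(p):
        return s.max()  # limiting case p = infinity: the spectral norm
    return (s ** p).sum() ** (1.0 / p)

X = np.diag([3.0, 4.0])                      # singular values {4, 3}
assert np.isclose(ky_fan(X, 1), 4.0)         # Ky Fan 1-norm = spectral norm
assert np.isclose(ky_fan(X, 2), 7.0)         # Ky Fan N-norm = trace norm
assert np.isclose(schatten(X, 1), 7.0)       # Schatten 1-norm = trace norm
assert np.isclose(schatten(X, 2), 5.0)       # Schatten 2-norm = Frobenius norm
assert np.isclose(schatten(X, np.inf), 4.0)  # Schatten infinity = spectral
```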
+
+The purpose of this paper is to study linear maps $\phi: M_{mn} \to M_{mn}$ satisfying
+
+$$||A \otimes B|| = ||\phi(A \otimes B)|| \quad \text{for all } A \in M_m \text{ and } B \in M_n,$$
+
+where $\|\cdot\|$ denotes the Ky Fan k-norm with $1 \le k \le mn$, or the Schatten p-norm with $1 \le p \le \infty$.
+
+Note that even if we know that $\phi: M_{mn} \to M_{mn}$ is linear and satisfies $\|A \otimes B\| = \|\phi(A \otimes B)\|$ for all $A \in M_m$ and $B \in M_n$, it does not ensure that $\|A_1 \otimes B_1 + A_2 \otimes B_2\| = \|\phi(A_1 \otimes B_1) + \phi(A_2 \otimes B_2)\|$ because $A_1 \otimes B_1 + A_2 \otimes B_2$ may not be of the form $A \otimes B$. Thus, the proofs of our main results (Theorems 2.6 and 2.9) are quite delicate as shown in the following discussion (in Section 2). We will also extend the results to multipartite systems $M_{n_1} \otimes \dots \otimes M_{n_m}$ in Section 3.
+
+One may see [1, 3, 7, 13] and their references for some background on linear preserver problems, and the preservers of the Ky Fan k-norms and Schatten p-norms (without the tensor structure). It was shown in these papers that such norm preservers (except for the Schatten 2-norm preservers) $\phi: M_n \to M_n$ have the form
+
+$$\phi(A) = UAV \quad \text{or} \quad \phi(A) = UA^tV$$
+
+for some unitary matrices $U, V \in M_n$, where $A^t$ is the transpose of $A$. One can also see [5, 6, 10, 14] and their references for some recent results on linear preserver problems on tensor spaces arising in quantum information science.
\ No newline at end of file
diff --git a/samples/texts/6768269/page_3.md b/samples/texts/6768269/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..a9befa510a4e9884293eb1c64ac313555b6b0620
--- /dev/null
+++ b/samples/texts/6768269/page_3.md
@@ -0,0 +1,31 @@
+In our discussion, we will use $X^t$ and $X^*$ to denote the transpose and the conjugate transpose of a square matrix $X$, respectively. For any $A \in M_m$ and $B \in M_n$, we denote by $A \oplus B$ their direct sum. The $n \times n$ identity matrix will be denoted by $I_n$. Denote by $E_{ij}$ the square matrix whose $(i,j)$-entry is equal to one and all other entries are equal to zero; the size of $E_{ij}$ should be clear from the context.
+
+## 2. Bipartite systems.
+
+**2.1. Spectral norm.** In what follows we denote by $\|\cdot\|$ the spectral norm. Recall that the spectral norm is the same as the Ky Fan 1-norm. We first present the result for the spectral norm.
+
+**THEOREM 2.1.** *The following are equivalent for a linear map $\phi : M_{mn} \rightarrow M_{mn}$.*
+
+(a) $\|\phi(A \otimes B)\| = \|A \otimes B\|$ for all $A \in M_m$ and $B \in M_n$.
+
+(b) There are unitary matrices $U, V \in M_{mn}$ such that
+
+$$ \phi(A \otimes B) = U(\varphi_1(A) \otimes \varphi_2(B))V \quad \text{for all } A \in M_m \text{ and } B \in M_n, $$
+
+where $\varphi_s$ is the identity map or the transposition map $X \mapsto X^t$ for $s=1,2$.
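The easy direction (b) ⇒ (a) rests on the fact that unitary multiplications and transposition leave singular values unchanged. A small NumPy sketch (not from the paper; sizes $m=2$, $n=3$ and the helper `haar_unitary` are our illustration):

```python
import numpy as np

def haar_unitary(d, rng):
    """QR of a complex Gaussian matrix yields a random unitary matrix."""
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix column phases

rng = np.random.default_rng(1)
m, n = 2, 3
U, V = haar_unitary(m * n, rng), haar_unitary(m * n, rng)
spectral = lambda X: np.linalg.svd(X, compute_uv=False)[0]

for _ in range(20):
    A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    # phi(A ⊗ B) = U (A ⊗ B^t) V has the same singular values as A ⊗ B
    assert np.isclose(spectral(U @ np.kron(A, B.T) @ V), spectral(np.kron(A, B)))
```

The delicate part of the theorem is the converse, proved below.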
+
+*Proof.* The implication (b) ⇒ (a) is obvious. Conversely, assume that $\|\phi(A \otimes B)\| = \|A \otimes B\|$ for all $A \in M_m$ and $B \in M_n$. In the following, we first show that $\phi$ maps the matrix $E_{ii} \otimes E_{jj}$ to $U(E_{ii} \otimes E_{jj})V$ for all $1 \le i \le m$ and $1 \le j \le n$, where $U$ and $V$ are some unitary matrices.
+
+Let $1 \le i \le m$ and $1 \le j, s \le n$ with $j \neq s$. Then $\phi(E_{ii} \otimes E_{jj})$ has norm one and the same is true for $\phi(E_{ii} \otimes E_{jj} + \gamma E_{ii} \otimes E_{ss})$ whenever $|\gamma| \le 1$. Suppose $x_j, y_j \in \mathbb{C}^{mn}$ are the left and right norm attaining unit vectors of $\phi(E_{ii} \otimes E_{jj})$, i.e., $x_j^*\phi(E_{ii} \otimes E_{jj})y_j = 1$. Then for any unitary matrices $X_j$ and $Y_j$ with $x_j$ and $y_j$ as their first columns, we have $X_j^*\phi(E_{ii} \otimes E_{jj})Y_j = [1] \oplus G_j$ for some $G_j \in M_{mn-1}$ with $\|G_j\| \le 1$. Since $\phi(E_{ii} \otimes E_{jj} + \gamma E_{ii} \otimes E_{ss})$ has norm one for all $|\gamma| \le 1$, the matrix $X_j^*\phi(E_{ii} \otimes E_{ss})Y_j$ must have the form $[0] \oplus G_s$ for some $G_s \in M_{mn-1}$. It follows that the left and right norm attaining vectors of $\phi(E_{ii} \otimes E_{ss})$, say $x_s$ and $y_s$, must be orthogonal to $x_j$ and $y_j$ respectively. As $j$ and $s$ are arbitrary, it follows that $\{x_1, \dots, x_n\}$ and $\{y_1, \dots, y_n\}$ are orthonormal sets.
+
+Let $U_i$ be an $mn \times mn$ unitary matrix with $x_1, \dots, x_n$ as its first $n$ columns and $V_i$ be an $mn \times mn$ unitary matrix with $y_1^*, \dots, y_n^*$ as its first $n$ rows. From the above discussion, one has
+
+$$ \phi(E_{ii} \otimes E_{jj}) = U_i(E_{jj} \oplus P_{ij})V_i \quad \text{for all } j = 1, \dots, n $$
+
+for some $P_{ij} \in M_{mn-n}$ and hence
+
+$$ \phi(E_{ii} \otimes D) = U_i(D \oplus P_{i,D})V_i \quad \text{for any diagonal matrix } D \in M_n $$
+
+for some $P_{i,D} \in M_{mn-n}$. Note also that if $1 \le i, r \le m$ with $i \neq r$, then $\phi(E_{ii} \otimes D + \gamma E_{rr} \otimes D)$ has norm one for any diagonal unitary matrix $D \in M_n$ and any scalar $\gamma$ with $|\gamma| \le 1$. Consequently, we see that the left and right norm attaining vectors of $\phi(E_{ii} \otimes D)$ and those of $\phi(E_{rr} \otimes D)$ are orthogonal. By a similar argument, there are unitary $U, V \in M_{mn}$ such that
+
+$$ \phi(E_{ii} \otimes D) = U(E_{ii} \otimes D)V \quad \text{for any } 1 \le i \le m \text{ and unitary diagonal } D \in M_n. $$
\ No newline at end of file
diff --git a/samples/texts/6768269/page_4.md b/samples/texts/6768269/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..c96c84e23b31fdb9663814f0d0e4436d91443dc5
--- /dev/null
+++ b/samples/texts/6768269/page_4.md
@@ -0,0 +1,35 @@
+In particular,
+
+$$ \phi(E_{ii} \otimes E_{jj}) = U(E_{ii} \otimes E_{jj})V \quad \text{for } 1 \le i \le m \text{ and } 1 \le j \le n. $$
+
+For the sake of simplicity, we assume that $U$ and $V$ are identity matrices. Next, we show that $\phi(E_{ii} \otimes B) = E_{ii} \otimes \varphi_i(B)$ for all $B \in M_n$, where $\varphi_i$ is a linear map on $M_n$. For any unitary $Y \in M_n$ and $1 \le i \le m$, $1 \le j \le n$, we can apply the argument in the preceding paragraphs to conclude that $\phi(E_{ii} \otimes Y E_{jj} Y^*)$ has rank one. Since
+
+$$ \| \phi(E_{ii} \otimes I_n) + \gamma \phi(E_{ii} \otimes Y E_{jj} Y^*) \| = \| \phi(E_{ii} \otimes I_n + \gamma E_{ii} \otimes Y E_{jj} Y^*) \| = 1 + \gamma $$
+
+for any positive scalar $\gamma$, the left and right norm attaining vectors of $\phi(E_{ii} \otimes Y E_{jj} Y^*)$ are also left and right norm attaining vectors of $\phi(E_{ii} \otimes I_n) = E_{ii} \otimes I_n$. Thus, $\phi(E_{ii} \otimes Y E_{jj} Y^*) = E_{ii} \otimes Z$ for some rank one $Z \in M_n$. Since this is true for any unitary $Y \in M_n$, by linearity of $\phi$, we conclude that there exists a linear map $\varphi_i : M_n \to M_n$ such that
+
+$$ \phi(E_{ii} \otimes B) = E_{ii} \otimes \varphi_i(B) \quad \text{for all } B \in M_n. $$
+
+Clearly, $\varphi_i$ preserves the spectral norm. Therefore, $\varphi_i$ has the form $X \mapsto W_i X \tilde{W}_i$ or $X \mapsto W_i X^t \tilde{W}_i$ for some unitary $W_i, \tilde{W}_i \in M_n$ (e.g., see [3] and its references). For simplicity, we may assume that $W_i = \tilde{W}_i = I_n$ for all $i = 1, \dots, m$.
+
+Let $X \in M_m$ be any unitary matrix. Repeating the same argument as above, one can show that
+
+$$ \phi(XE_{ii}X^* \otimes B) = U_X(E_{ii} \otimes \varphi_{i,X}(B))V_X $$
+
+for $1 \le i \le m$ and $B \in M_n$, where $U_X, V_X \in M_{mn}$ are unitary matrices depending on $X$ and $\varphi_{i,X}: M_n \to M_n$ is either the identity map or the transposition map depending on $i$ and $X$. Moreover, since $\phi(I_{mn}) = I_{mn}$, we have $V_X = U_X^*$.
+
+Now we show that all the maps $\varphi_{i,X}$ are the same. For any real symmetric $S \in M_n$ and any unitary $X \in M_m$ we have
+
+$$ \phi(I_m \otimes S) = \phi\left(\sum_{i=1}^{m} X E_{ii} X^* \otimes S\right) = U_X \left(\sum_{i=1}^{m} E_{ii} \otimes \varphi_{i,X}(S)\right) U_X^* = U_X (I_m \otimes S) U_X^*. $$
+
+In particular, when $X = I_m$, we have $\phi(I_m \otimes S) = I_m \otimes S$. Thus, $U_X(I_m \otimes S)U_X^* = I_m \otimes S$ and this yields that $U_X$ commutes with $I_m \otimes S$ for all real symmetric $S$. Hence, $U_X$ has the form $W_X \otimes I_n$ for some unitary $W_X \in M_m$ and
+
+$$ \phi(XE_{ii}X^* \otimes B) = (W_X E_{ii} W_X^*) \otimes \varphi_{i,X}(B) \quad \text{for } 1 \le i \le m \text{ and } B \in M_n. $$
+
+Now, consider the linear maps $\text{tr}_1: M_{mn} \to M_n$ and $\text{Tr}_1: M_{mn} \to M_n$ defined by
+
+$$ \text{tr}_1(A \otimes B) = (\text{tr}\,A)B \quad \text{and} \quad \text{Tr}_1(A \otimes B) = \text{tr}_1(\phi(A \otimes B)) $$
+
+for all $A \in M_m$ and $B \in M_n$. Notice that the map $\text{tr}_1$ is known as the partial trace function in quantum information science context. Then
+
+$$ \text{Tr}_1(XE_{ii}X^* \otimes B) = \varphi_{i,X}(B). $$
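As a sanity check on the defining identity $\text{tr}_1(A \otimes B) = (\text{tr}\,A)B$, here is a minimal NumPy partial trace; the reshape-based helper `tr1` is our illustration, not from the paper:

```python
import numpy as np

def tr1(M, m, n):
    """Partial trace over the first (m-dimensional) factor of M in M_m ⊗ M_n."""
    # reshape to a 4-index tensor T[i1, j1, i2, j2] and contract i1 with i2
    return np.trace(M.reshape(m, n, m, n), axis1=0, axis2=2)

rng = np.random.default_rng(2)
m, n = 2, 3
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
assert np.allclose(tr1(np.kron(A, B), m, n), np.trace(A) * B)
```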
\ No newline at end of file
diff --git a/samples/texts/6768269/page_5.md b/samples/texts/6768269/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..1732d650539577fba99ee0ba5a586e519a60da94
--- /dev/null
+++ b/samples/texts/6768269/page_5.md
@@ -0,0 +1,39 @@
+So, $\text{Tr}_1$ induces a map $XE_{ii}X^* \mapsto \varphi_{i,X}$, where $\varphi_{i,X}$ is either the identity map or the transpose map.
+Note that $\text{Tr}_1$ is linear and therefore continuous, and the set
+
+$$ \{XE_{ii}X^*: 1 \le i \le m, X^*X = I_m\} = \{xx^* \in M_m : x^*x = 1\} $$
+
+is connected. So, all the maps $\varphi_{i,X}$ have to be the same. Replacing $\phi$ by the map $A \otimes B \mapsto \phi(A \otimes B^t)$,
+if necessary, we may assume that this common map is the identity map. Next, using the linearity
+of $\phi$, one can conclude that for every $A \in M_m$ and $B \in M_n$ we have
+
+$$ \phi (A \otimes B) = \varphi_1(A) \otimes B, $$
+
+where $\varphi_1(A) \in M_m$ depends on $A$ only. Recall that $\varphi_1: M_m \to M_m$ is a linear map and $\|\varphi_1(A)\| = \|A\|$ for all $A \in M_m$. Hence, $\varphi_1$ has the form $A \mapsto UAV$ or $A \mapsto UA^tV$ for some unitary $U, V \in M_m$. This completes the proof. $\square$
+
+**2.2. Ky Fan k-norms.** We now turn to Ky Fan $k$-norms. Two matrices $A, B \in M_n$ are called *orthogonal* if $AB^* = A^*B = 0$ (see [16]). We write $A \perp B$ to indicate that $A$ and $B$ are orthogonal. It is shown in [16] that $A \perp B$ if and only if there are unitary matrices $U, V \in M_n$ such that $UAV = \text{diag}(a_1, \dots, a_n)$ and $UBV = \text{diag}(b_1, \dots, b_n)$ with $a_i, b_i \ge 0$ and $a_i b_i = 0$ for $i = 1, \dots, n$. The matrices $A_1, \dots, A_t$ are said to be *pairwise orthogonal* if $A_i^* A_j = A_i A_j^* = 0$ for any distinct $i, j \in \{1, \dots, t\}$. In this case, there are unitary matrices $U, V \in M_n$ such that $UA_i V = D_i$ for $i = 1, \dots, t$ with each $D_i$ being a nonnegative diagonal matrix and $D_i D_j = 0$ for any distinct $i, j \in \{1, \dots, t\}$.
+
+We have the following lemmas relating to orthogonality, which are useful in the proof of Ky Fan
+$k$-norm results (Theorems 2.6 and 3.1).
+
+LEMMA 2.2. [16] Let $A, B \in M_n$ be nonzero matrices. Then
+
+$$ \| \alpha A + \beta B \|_{(k)} = |\alpha| \| A \|_{(k)} + |\beta| \| B \|_{(k)} $$
+
+for every pair of complex numbers $\alpha$ and $\beta$ if and only if $A \perp B$ and $\text{rank } A + \text{rank } B \le k$.
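For intuition, orthogonal matrices in this sense occupy disjoint "singular supports", and the additivity in Lemma 2.2 is easy to check numerically. A NumPy sketch with an assumed example pair satisfying $\text{rank } A + \text{rank } B = 3 \le k$ (our choice of matrices, not from [16]):

```python
import numpy as np

def ky_fan(X, k):
    # sum of the k largest singular values
    return np.linalg.svd(X, compute_uv=False)[:k].sum()

n, k = 4, 3
# A ⊥ B via disjoint diagonal supports; rank A + rank B = 1 + 2 = 3 <= k
A = np.diag([2.0, 0.0, 0.0, 0.0])
B = np.diag([0.0, 1.5, 0.5, 0.0])
assert np.allclose(A @ B.conj().T, 0) and np.allclose(A.conj().T @ B, 0)

for alpha, beta in [(1, 1), (2, -3), (1j, 0.5)]:
    lhs = ky_fan(alpha * A + beta * B, k)
    rhs = abs(alpha) * ky_fan(A, k) + abs(beta) * ky_fan(B, k)
    assert np.isclose(lhs, rhs)
```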
+
+Denote by $\sigma(A)$ the spectrum of a matrix $A \in M_n$. Using the same arguments as in the proof of Lemma 2 in [16], we have the following result. (One can also see [8, p.468, Problem 3] for part (a) of Lemma 2.3.)
+
+LEMMA 2.3. Let $A \in M_n$ be positive semidefinite and let $B \in M_n$ be Hermitian.
+
+(a) $\sigma(AB) \subseteq \mathbb{R}$.
+
+(b) If $\sigma(AB) = \{0\}$, then there exists a unitary $U \in M_n$ such that
+
+$$ UAU^* = \begin{bmatrix} A_1 & 0 \\ 0 & 0 \end{bmatrix} \quad \text{and} \quad UBU^* = \begin{bmatrix} 0 & X \\ X^* & B_1 \end{bmatrix}, $$
+
+where $A_1 \in M_s$ is invertible and $B_1 \in M_{n-s}$ with $0 \le s \le n$.
+
+LEMMA 2.4. Let $1 \le k \le n$ and $A, B \in M_n$ with spectral norm at most 1. Suppose
+
+$$ \operatorname{rank} A \le k, \quad \|A + \alpha B\|_{(k)} = k \quad \text{and} \quad \|2A + \alpha B\|_{(k)} = \|A\|_{(k)} + \|A + \alpha B\|_{(k)} $$
\ No newline at end of file
diff --git a/samples/texts/6768269/page_6.md b/samples/texts/6768269/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..6665e01daaa532f6878590a41f38df876ca2885d
--- /dev/null
+++ b/samples/texts/6768269/page_6.md
@@ -0,0 +1,43 @@
+for any unit complex number $\alpha$. Then $A \perp B$ and $\sigma(A^*A) \subseteq \{0, 1\}$.
+
+*Proof.* First, let $\alpha = 1$ and suppose $2A + B$ has singular value decomposition $2A + B = U_1DV_1$ with $D = \text{diag}(d_1, \dots, d_n)$ and $d_1 \ge \dots \ge d_n \ge 0$. Then
+
+$$D = U_1^*(2A + B)V_1^* = U_1^*AV_1^* + U_1^*(A + B)V_1^*.$$
+
+Denote the diagonal entries of $U_1^*AV_1^*$ and $U_1^*BV_1^*$ by $a_1, \dots, a_n$ and $b_1, \dots, b_n$, respectively. Then we have
+
+$$\sum_{i=1}^{k} |a_i| \le \|A\|_{(k)}, \quad \sum_{i=1}^{k} |a_i + b_i| \le \|A + B\|_{(k)}, \quad \sum_{i=1}^{k} (2a_i + b_i) = \sum_{i=1}^{k} d_i = \|2A + B\|_{(k)}.$$
+
+It follows that
+
+$$\sum_{i=1}^{k} (a_i + (a_i + b_i)) \le \sum_{i=1}^{k} (|a_i| + |a_i + b_i|) \le \|A\|_{(k)} + \|A + B\|_{(k)} = \|2A + B\|_{(k)} = \sum_{i=1}^{k} (2a_i + b_i),$$
+
+and so all the above inequalities are indeed equalities, which ensure that $a_i = |a_i|$ and $a_i+b_i = |a_i+b_i|$ are nonnegative real numbers with
+
+$$\sum_{i=1}^{k} a_i = \sum_{i=1}^{k} |a_i| = \|A\|_{(k)} \quad \text{and} \quad \sum_{i=1}^{k} (a_i + b_i) = \sum_{i=1}^{k} |a_i + b_i| = \|A + B\|_{(k)}.$$
+
+By Theorem 3.1 of [12], we have
+
+$$U_1^*AV_1^* = A_1 \oplus A_2 \quad \text{and} \quad U_1^*(A+B)V_1^* = (A_1+B_1) \oplus (A_2+B_2),$$
+
+where $A_1$ and $A_1+B_1$ are $k \times k$ positive semidefinite matrices and
+
+$$\|A\|_{(k)} = \|A_1\|_{(k)} = \operatorname{tr} A_1, \quad \|A+B\|_{(k)} = \|A_1+B_1\|_{(k)} = \operatorname{tr}(A_1+B_1).$$
+
+Since rank $A \le k$, it follows that $A_2 = 0$ and we may assume that $B_2 = \text{diag}(d_{k+1}, \dots, d_n)$. Without loss of generality, we also assume that $U_1 = V_1 = I_n$ and
+
+$$A = A_1 \oplus 0, \quad B = B_1 \oplus \text{diag}(d_{k+1}, \dots, d_n).$$
+
+Now take a unit $\alpha_1 \ne \pm 1$. Suppose the singular value decomposition of $2A_1 + \alpha_1 B_1$ is $2A_1 + \alpha_1 B_1 = U_2 D_1 V_2$ with $D_1 = \text{diag}(\beta_1, \dots, \beta_k)$ and $\beta_1 \ge \dots \ge \beta_k \ge 0$. Let $U_3 = U_2^* \oplus I_{n-k}$ and $V_3 = V_2^* \oplus \alpha_1^{-1} I_{n-k}$. Set $A_3 = U_2^* A_1 V_2^*$ and $B_3 = \alpha_1 U_2^* B_1 V_2^*$. Then
+
+$$
+\begin{align*}
+\operatorname{diag}(\beta_1, \dots, \beta_k, d_{k+1}, \dots, d_n) &= U_3(2A + \alpha_1 B)V_3 \\
+&= U_2^* A_1 V_2^* \oplus 0_{n-k} + U_2^*(A_1 + \alpha_1 B_1)V_2^* \oplus \operatorname{diag}(d_{k+1}, \dots, d_n) \\
+&= A_3 \oplus 0_{n-k} + (A_3 + B_3) \oplus \operatorname{diag}(d_{k+1}, \dots, d_n).
+\end{align*}
+$$
+
+First, assume that $\|2A+\alpha_1 B\|_{(k)} = \sum_{i=1}^k \beta_i$. Using the same argument as above, we see that $A_3$ and $A_3+B_3$ are $k \times k$ positive semidefinite matrices with $\|A\|_{(k)} = \operatorname{tr} A_3$ and $\|A+\alpha_1 B\|_{(k)} = \operatorname{tr}(A_3+B_3)$. By Lemma 2.3(a), $\sigma(A_1B_1) \subseteq \mathbb{R}$ and $\sigma(A_3B_3) \subseteq \mathbb{R}$. Also
+
+$$\sigma(\alpha_1 A_3 B_3) = \sigma(\alpha_1 A_3 B_3^*) = \sigma(\alpha_1(U_2^* A_1 V_2^*)(\alpha_1 U_2^* B_1 V_2^*)^*) = \sigma(U_2^* A_1 B_1 U_2) = \sigma(A_1 B_1) \subseteq \mathbb{R}.$$
\ No newline at end of file
diff --git a/samples/texts/6768269/page_7.md b/samples/texts/6768269/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..b10c3e47b5553de48e25d54cc47f6c2de864c39c
--- /dev/null
+++ b/samples/texts/6768269/page_7.md
@@ -0,0 +1,31 @@
+This implies $\sigma(A_1 B_1) = \sigma(A_3 B_3) = \{0\}$. According to Lemma 2.3(b), we know that there exists a unitary $U_4$ such that
+
+$$U_4 A_3 U_4^* = A_4 \oplus 0, \quad U_4 B_3 U_4^* = \begin{bmatrix} 0 & X \\ X^* & B_4 \end{bmatrix},$$
+
+where $A_4 \in M_s$ is positive definite for some $0 \le s \le k$, $B_4 \in M_{k-s}$, and $\|A_3+B_3\|_{(k)} = \text{tr } A_4 + \text{tr } B_4$. Since both $A$ and $B$ have spectral norm at most 1, the diagonal entries of $A_4$ and $B_4$ are less than or equal to one. So $\text{tr}(A_3+B_3) = \|A_3+B_3\|_{(k)} = \|A+\alpha_1 B\|_{(k)} = k$ ensures that all the diagonal entries of $A_4$ and $B_4$ are equal to one and that the singular values of $A_3+B_3$ are all equal to one. It follows that $X=0$, $A_4=I_s$, and $B_4 = I_{k-s}$, which implies $A \perp B$ and $\sigma(A^*A) \subseteq \{0,1\}$.
+
+Now, assume that $\|2A + \alpha_1 B\|_{(k)} \neq \sum_{i=1}^k \beta_i$ and suppose that the largest $k$ singular values of $2A + \alpha_1 B$ are $\beta_1, \dots, \beta_r, d_{k+1}, \dots, d_{k+s}$ with $r+s=k$ and $r < k$. We have
+
+$$\|\alpha G + \beta H\|_{(k)} = |\alpha| \|G\|_{(k)} + |\beta| \|H\|_{(k)}$$
+
+for all complex numbers $\alpha, \beta$. Applying Lemma 2.2 again, we get $G \perp H$ and, hence, (2.2) follows.
+
+From the above, we have shown that (2.1) holds. For the sake of simplicity, we assume that $U$ and $V$ are identity matrices. Now, for any unitary $Y \in M_n$ and $1 \le i \le m, 1 \le j \le n$ we can apply the argument in the preceding paragraphs to conclude that $\phi(E_{ii} \otimes YE_{jj}Y^*)$ has rank one and it is orthogonal to $\phi(E_{rr} \otimes YE_{ss}Y^*)$ for any distinct pairs $(i,j)$ and $(r,s)$. It follows that
+
+$$\|\phi(E_{ii} \otimes I_n) + \gamma\phi(E_{ii} \otimes YE_{jj}Y^*)\| = \|\phi(E_{ii} \otimes I_n + \gamma E_{ii} \otimes YE_{jj}Y^*)\| = 1 + \gamma$$
\ No newline at end of file
diff --git a/samples/texts/6768269/page_9.md b/samples/texts/6768269/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..6e43131356543a3bed55925a9472c9f6a69a1b72
--- /dev/null
+++ b/samples/texts/6768269/page_9.md
@@ -0,0 +1,39 @@
+for any positive scalar $\gamma$. Thus, $\phi(E_{ii} \otimes Y E_{jj} Y^*) = E_{ii} \otimes Z$ for some rank one $Z \in M_n$. Since this is true for any $1 \le i \le m$ and unitary $Y \in M_n$, we conclude that there exists a linear map $\varphi_i : M_n \to M_n$ such that
+
+$$\phi(E_{ii} \otimes B) = E_{ii} \otimes \varphi_i(B)$$
+
+for all complex matrices $B \in M_n$. Clearly, $\varphi_i$ preserves the Ky Fan $k$-norm of all $B \in M_n$ and $\varphi_i(E_{jj}) = E_{jj}$ for all $1 \le j \le n$. (Here the Ky Fan $k$-norm reduces to the trace norm if $k \ge n$.) Hence, $\varphi_i$ has the form $X \mapsto W_i X \tilde{W}_i$ or $X \mapsto W_i X^t \tilde{W}_i$ for some unitary $W_i, \tilde{W}_i \in M_n$. Now, we can adapt the arguments in the last two paragraphs in the proof of Theorem 2.1 to obtain our conclusion. $\square$
+
+**2.3. Schatten p-norms.** We now study linear preservers of Schatten $p$-norms. We first present a key lemma for the result.
+
+LEMMA 2.7. [17] Let $T, S \in M_n$. Then
+
+(i) $2^{p-1}(\|T\|_p^p + \|S\|_p^p) \le \|T+S\|_p^p + \|T-S\|_p^p \le 2(\|T\|_p^p + \|S\|_p^p)$ if $1 \le p \le 2$,
+
+(ii) $2(\|T\|_p^p + \|S\|_p^p) \le \|T+S\|_p^p + \|T-S\|_p^p \le 2^{p-1}(\|T\|_p^p + \|S\|_p^p)$ if $2 \le p < \infty$.
+
+If $p=2$, all equalities always hold; if $p \ne 2$, any equality holds if and only if $T^*TS^*S = S^*ST^*T = 0$.
+
+REMARK 2.8. Suppose $p \ne 2$ and the singular values of $T$ and $S$ are $\{t_1, \dots, t_n\}$ and $\{s_1, \dots, s_n\}$, respectively. If any of the equalities in Lemma 2.7 holds, then $T^*TS^*S = S^*ST^*T = 0$ implies $T^*T \perp S^*S$. So there exists unitary $U \in M_n$ such that $U^*T^*TU = \text{diag}(t_1^2, \dots, t_n^2)$ and $U^*S^*SU = \text{diag}(s_1^2, \dots, s_n^2)$ with $s_i t_i = 0$ for $i = 1, \dots, n$. Replacing $T$ and $S$ with $T^*$ and $S^*$, we get $TT^*SS^* = 0$ and there exists unitary $V \in M_n$ such that $V^*TT^*V = \text{diag}(t_1^2, \dots, t_n^2)$ and $V^*SS^*V = \text{diag}(s_1^2, \dots, s_n^2)$. It follows that $V^*TU = \text{diag}(t_1, \dots, t_n)$ and $V^*SU = \text{diag}(s_1, \dots, s_n)$, which implies $T \perp S$.
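The two-sided bounds of Lemma 2.7 (Clarkson–McCarthy-type inequalities) can be spot-checked on random matrices; `sorted` below picks the correct side of each bound depending on whether $p \le 2$ or $p \ge 2$. A NumPy sketch (our illustration, not from [17]):

```python
import numpy as np

def schatten_p(X, p):
    """Returns ||X||_p^p, i.e. the sum of p-th powers of singular values."""
    s = np.linalg.svd(X, compute_uv=False)
    return (s ** p).sum()

rng = np.random.default_rng(3)
n = 4
for p in (1.0, 1.5, 3.0, 5.0):
    for _ in range(50):
        T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        middle = schatten_p(T + S, p) + schatten_p(T - S, p)
        total = schatten_p(T, p) + schatten_p(S, p)
        # for p <= 2 the factor 2^(p-1) gives the lower bound, for p >= 2 the upper
        lo, hi = sorted([2 ** (p - 1) * total, 2 * total])
        assert lo - 1e-9 <= middle <= hi + 1e-9
```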
+
+THEOREM 2.9. Let $1 \le p < \infty$ and $p \ne 2$ and $\phi : M_{mn} \to M_{mn}$ be a linear map. Then the following are equivalent.
+
+(a) $\|\phi(A \otimes B)\|_p = \|A \otimes B\|_p$ for all $A \in M_m$ and $B \in M_n$.
+
+(b) There are unitary matrices $U, V \in M_{mn}$ such that
+
+$$\phi(A \otimes B) = U(\varphi_1(A) \otimes \varphi_2(B))V \quad \text{for all } A \in M_m \text{ and } B \in M_n,$$
+
+where $\varphi_s$ is the identity map or the transposition map $X \mapsto X^t$ for $s=1,2$.
+
+*Proof.* The implication (b) $\Rightarrow$ (a) is obvious. Conversely, assume that $\|\phi(A \otimes B)\|_p = \|A \otimes B\|_p$ for all $A \in M_m$ and $B \in M_n$. We first show that there exist unitary $U, V \in M_{mn}$ such that
+
+$$\phi(E_{ii} \otimes E_{jj}) = U (E_{ii} \otimes E_{jj}) V \quad \text{for } 1 \le i \le m \text{ and } 1 \le j \le n. \qquad (2.4)$$
+
+Then we can adapt the arguments in the last three paragraphs in the proof of Theorem 2.1 to verify that $\phi$ has the form claimed in (b).
+
+As in the proof of Theorem 2.6, to prove (2.4), it suffices to show that
+
+$$\phi(E_{ii} \otimes E_{jj}) \perp \phi(E_{rr} \otimes E_{ss}) \qquad (2.5)$$
+
+for any distinct pairs $(i,j)$ and $(r,s)$ with $1 \le i, r \le m$ and $1 \le j, s \le n$.
\ No newline at end of file
diff --git a/samples/texts/6777986/page_1.md b/samples/texts/6777986/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..d1e24a04a53e9bdd4968369b8a57698b70221d27
--- /dev/null
+++ b/samples/texts/6777986/page_1.md
@@ -0,0 +1,122 @@
+For all questions, answer choice (E) NOTA means that none of the given answers is correct. Good Luck!
+
+1. Find the area defined by the intersection of the solutions of the two inequalities:
+
+$$
+\begin{cases}
+x^2 - 6x + y^2 > 0 \\
+|x| + |y| < 3
+\end{cases}
+ $$
+
+(A) $32 - 9\pi$
+
+(B) $18 - 9\pi$
+
+(C) $32 - \frac{9\pi}{4}$
+
+(D) $18 - \frac{9\pi}{4}$
+
+(E) NOTA
+
+2. Given that $f(x) = \frac{x^3 - x^2 - x + 1}{x^3 + 2x^2 - x - 2}$, how many asymptotes does the graph of $f(x)$ have?
+
+(A) 0
+
+(B) 1
+
+(C) 2
+
+(D) 3
+
+(E) NOTA
+
+3. Find the number of solutions to:
+
+$$ x - \frac{7}{x+5} = -5 - \frac{7}{x+5} $$
+
+(A) No solutions.
+
+(B) One integral solution.
+
+(C) Infinitely many solutions.
+
+(D) Two solutions.
+
+(E) NOTA
+
+4. Find the determinant of the matrix:
+
+$$ \begin{bmatrix} 5 & 6 & 7 \\ 1 & 2 & 3 \\ 0 & 5 & 0 \end{bmatrix} $$
+
+(A) -75
+
+(B) 35
+
+(C) 40
+
+(D) -40
+
+(E) NOTA
+
+5. How many zeroes are at the end of $28^4 * 360^7$?
+
+(A) 28
+
+(B) 7
+
+(C) 4
+
+(D) 5
+
+(E) NOTA
+
+6. Simplify the expression to remove any nested radicals: $\sqrt{5+\sqrt{24}}$.
+
+(A) $1 + \sqrt{5}$
+
+(B) $\sqrt{2} + \sqrt{3}$
+
+(C) $\sqrt{6} + 2$
+
+(D) Cannot be Reduced.
+
+(E) NOTA
+
+7. Identify the following conic:
+
+$$ 16x^2 - 96x - 9y^2 + 18y + 135 = 0 $$
+
+(A) Circle
+
+(B) Non-Circular Ellipse
+
+(C) Parabola
+
+(D) Hyperbola
+
+(E) NOTA
+
+8. Calculate the number of digits in $2^{1024}$ given that $\log_{10} 2 = 0.301$.
+
+(A) 300
+
+(B) 350
+
+(C) 1024
+
+(D) 512
+
+(E) NOTA
+
+9. What is the constant term of the expansion $(x - \frac{1}{x^3})^8$?
+
+(A) 28
+
+(B) 56
+
+(C) 128
+
+(D) 256
+
+(E) NOTA
\ No newline at end of file
diff --git a/samples/texts/6777986/page_2.md b/samples/texts/6777986/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..3f5d0677c036db9bf3d901085433a3b463f73cca
--- /dev/null
+++ b/samples/texts/6777986/page_2.md
@@ -0,0 +1,145 @@
+10. How many of the following numbers are transcendental: 0, $i$, $\pi$, $\sqrt{43}$, $\sqrt{2}$, $\sqrt{-2}$, $e$, 49, 36?
+
+(A) 3
+
+(B) 2
+
+(C) 1
+
+(D) 0
+
+(E) NOTA
+
+11. Given that $\log_2 3 = a$ and $\log_5 3 = b$, which of the following is an expression for $\log_{12} 10$?
+
+(A) $\frac{ab}{a+b}$
+
+(B) $\frac{a+b}{2a+1}$
+
+(C) $\frac{a+b}{2b+ab}$
+
+(D) $\frac{2a+1}{a+b}$
+
+(E) NOTA
+
+12. Solve for $x$: $(\frac{256}{2401})^{x-2} = (\frac{343}{64})^{5-2x}$.
+
+(A) $\frac{7}{2}$
+
+(B) $\frac{9}{2}$
+
+(C) $\frac{4}{3}$
+
+(D) $\frac{49}{16}$
+
+(E) NOTA
+
+13. Find the area of the conic: $3x^2 - 12x + 4y^2 - 8y + 16 = 0$.
+
+(A) $12\pi$
+
+(B) $24\pi$
+
+(C) $6\pi$
+
+(D) $3\pi$
+
+(E) NOTA
+
+14. Find the sum of the following series:
+
+$$ \sum_{k=1}^{\infty} \frac{1}{k^2 + 5k + 6} $$
+
+(A) 1
+
+(B) $\frac{1}{2}$
+
+(C) $\frac{1}{3}$
+
+(D) $\frac{1}{4}$
+
+(E) NOTA
+
+15. Find the number of distinct permutations of the letters in the word "NINETEEN".
+
+(A) 40320
+
+(B) 1120
+
+(C) 3360
+
+(D) (8!)(3!)(3!)
+
+(E) NOTA
+
+16. What is the remainder when $3x^3 - 5x^2 + 4x + 2$ is divided by $3x + 1$?
+
+(A) $3x - 6$
+
+(B) 0
+
+(C) 1
+
+(D) $2x + 4$
+
+(E) NOTA
+
+17. Let $g(x) = 8x^3 + 1$ be the inverse function of $f(x)$. Find the value of $f(28)$.
+
+(A) $\frac{27}{8}$
+
+(B) $\frac{3}{2}$
+
+(C) $\frac{9}{8}$
+
+(D) $\frac{3}{8}$
+
+(E) NOTA
+
+18. Two roots of the function $f(x)$ are 7 and 9. Which is a root of the function $f(x+17)$?
+
+(A) -10
+
+(B) 24
+
+(C) 26
+
+(D) -17
+
+(E) NOTA
+
+19. For $i = \sqrt{-1}$, $(i^{2017})^{2016} =$
+
+(A) $i$
+
+(B) 1
+
+(C) -1
+
+(D) -i
+
+(E) NOTA
+
+20. The roots of the equation $f(x) = x^3 - 3x^2 + x + 5$ are three complex numbers $z_1$, $z_2$, and $z_3$. What is the sum of the roots of $f(x)$ taken 2 at a time?
+
+(A) 1
+
+(B) -3
+
+(C) -1
+
+(D) 5
+
+(E) NOTA
+
+21. The polynomial $f(x) = x^3 + 2x^2 + 3x + 4$ has roots $a, b$, and $c$. The polynomial $g(x)$ has roots $a^2, b^2$, and $c^2$. If $g(0) = 8$, compute $g(2)$.
+
+(A) 7
+
+(B) 9
+
+(C) 14
+
+(D) 18
+
+(E) NOTA
\ No newline at end of file
diff --git a/samples/texts/6777986/page_3.md b/samples/texts/6777986/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..9c8366527645bfdcd21bebc2b5c9451577eac4e3
--- /dev/null
+++ b/samples/texts/6777986/page_3.md
@@ -0,0 +1,109 @@
+22. If $x - y = 2$ and $x^3 - y^3 = 20$, find $x^2 - y^2$.
+
+(A) $7\sqrt{3}$
+
+(B) $4\sqrt{3}$
+
+(C) $3\sqrt{7}$
+
+(D) 8
+
+(E) NOTA
+
+23. Express 0.252525... as a fraction.
+
+(A) $\frac{25}{9}$
+
+(B) $\frac{25}{90}$
+
+(C) $\frac{2}{9} + \frac{5}{9}$
+
+(D) $\frac{25}{99}$
+
+(E) NOTA
+
+24. Find the equation of the directrix of the parabola $y^2 + 6y + 8x + 25 = 0$.
+
+(A) $y=0$
+
+(B) $x=0$
+
+(C) $x=4$
+
+(D) $y=4$
+
+(E) NOTA
+
+25. Find the number of integer solutions to $||x - 3| - |x + 1|| = 2$.
+
+(A) 0
+
+(B) 1
+
+(C) 2
+
+(D) 4
+
+(E) NOTA
+
+26. If x is real, then find the maximum of $\frac{4x^2 + 4x + \frac{13}{4}}{4x^2 + 4x + \frac{5}{4}}$.
+
+(A) 9
+
+(B) 8
+
+(C) 5
+
+(D) 4
+
+(E) NOTA
+
+27. Find the value of y given that $\frac{\log_m x}{\log_n x} = \frac{37}{101}$ and $\frac{m}{n} = n^y$.
+
+(A) 2
+
+(B) $-\frac{64}{101}$
+
+(C) $\frac{27}{37}$
+
+(D) $\frac{64}{37}$
+
+(E) NOTA
+
+28. Find the complex number x such that $\frac{x}{2+x} = -3+2i$.
+
+(A) $-3+2i$
+
+(B) $\frac{-4+i}{5}$
+
+(C) $\frac{4+i}{5}$
+
+(D) $\frac{-8+i}{5}$
+
+(E) NOTA
+
+29. What is the graph of the following equation in the Argand plane, given the complex number $z$?
+
+$$|z + 8i| + |z - 4| = 2017$$
+
+(A) 2017-gon
+
+(B) Non-Circular Ellipse
+
+(C) Hyperbola
+
+(D) Cardioid
+
+(E) NOTA
+
+30. Sri shoots a lot of shots. However, he does not usually make them, only making them 20 percent of the time. What is the probability, when Sri shoots his shot 3 times, that he makes exactly 2 of the shots?
+
+(A) $\frac{1}{25}$
+
+(B) $\frac{4}{25}$
+
+(C) $\frac{1}{125}$
+
+(D) $\frac{12}{125}$
+
+(E) NOTA
\ No newline at end of file
diff --git a/samples/texts/6940952/page_1.md b/samples/texts/6940952/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..4df40bbfaa21a2b084a9f8498a44450f41a5baea
--- /dev/null
+++ b/samples/texts/6940952/page_1.md
@@ -0,0 +1,28 @@
+Spectral Efficiency of CDMA Downlink Cellular
+Networks with Matched Filter
+
+Nicolas Bonneau, Mérouane Debbah, Eitan Altman
+
+► To cite this version:
+
+Nicolas Bonneau, Mérouane Debbah, Eitan Altman. Spectral Efficiency of CDMA Downlink Cellular Networks with Matched Filter. EURASIP Journal on Wireless Communications and Networking, SpringerOpen, 2006, 2006 (1), pp.074081. hal-00784480
+
+HAL Id: hal-00784480
+
+https://hal.inria.fr/hal-00784480
+
+Submitted on 4 Feb 2013
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
\ No newline at end of file
diff --git a/samples/texts/6940952/page_10.md b/samples/texts/6940952/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..f71f198107c533944bce448eaec10dad7b9b3731
--- /dev/null
+++ b/samples/texts/6940952/page_10.md
@@ -0,0 +1,82 @@
+Therefore,
+
+$$
+I_2 \xrightarrow[d/N \to a]{N\to\infty} P_p(x_j) \frac{\alpha a}{W} \left( \int_{-W/2}^{W/2} |h_{pj}(f)|^4 \, df - \frac{1}{W} \left( \int_{-W/2}^{W/2} |h_{pj}(f)|^2 \, df \right)^2 \right). \quad (B.10)
+$$
+
+C. PROOF OF PROPOSITION 3
+
+Note that as $L_q \to \infty$,
+
+$$
+\sum_{\ell=0}^{L_q-1} |\eta_q(\ell)|^2 \longrightarrow \mathbb{E}\left[\sum_{\ell=0}^{L_q-1} |\eta_q(\ell)|^2\right] = \mathbb{E}[|h|^2], \quad (C.1)
+$$
+
+$$
+\begin{align*}
+& \sum_{\ell=0}^{L-1} \sum_{\ell' \neq \ell} |\eta(\ell)|^2 |\eta(\ell')|^2 \\
+& = \left(\sum_{\ell=0}^{L-1} |\eta(\ell)|^2\right)^2 + \sum_{\ell=0}^{L-1} \sum_{\ell' \neq \ell} |\eta(\ell)|^2 |\eta(\ell')|^2 \\
+& \quad - \left(\sum_{\ell=0}^{L-1} |\eta(\ell)|^2\right)^2 \longrightarrow \mathbb{E}[|h|^4] - (\mathbb{E}[|h|^2])^2, \tag{C.2}
+\end{align*}
+$$
+
+and $\sum_{\ell'\neq \ell} \eta(\ell)\eta(\ell')^*\eta_q(\ell)\eta_q(\ell') \to 0$.
+
+The rest of the proof is mainly an application of Proposition 1 where we consider a path loss of the form $Pe^{-\gamma|x-qa|}$ ($\gamma$ is a decaying factor) between the user $x$ ($x \in [-a/2, a/2]$) and base station $q$ ($q \in \mathbb{Z}$) with coordinates $m_q = qa$. In this case, the intercell interference has an explicit form:
+
+$$
+\begin{align*}
+\sum_{q \neq 0} P_q(x) &= P \sum_{q \neq 0} e^{-\gamma|x-qa|} \\
+&= Pe^{-\gamma x} \sum_{q=1}^{+\infty} e^{-\gamma qa} + Pe^{\gamma x} \sum_{q=1}^{+\infty} e^{-\gamma qa} \tag{C.3} \\
+&= \frac{2Pe^{-\gamma a}}{1 - e^{-\gamma a}} \cosh(\gamma x).
+\end{align*}
+$$
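The closed form in (C.3) is a pair of geometric series; a quick numerical check (the parameter values for P, γ, and a are illustrative, not from the paper):

```python
import math

P, gamma, a = 1.0, 0.3, 2.0  # illustrative values (assumptions)

def intercell(x, terms=2000):
    # brute-force sum of P * exp(-gamma * |x - q*a|) over base stations q != 0
    return sum(P * math.exp(-gamma * abs(x - q * a))
               for q in range(-terms, terms + 1) if q != 0)

def closed_form(x):
    # 2 P e^{-gamma a} / (1 - e^{-gamma a}) * cosh(gamma x), valid for |x| <= a/2
    return 2 * P * math.exp(-gamma * a) / (1 - math.exp(-gamma * a)) * math.cosh(gamma * x)

for x in (-0.9, 0.0, 0.4):
    assert abs(intercell(x) - closed_form(x)) < 1e-9
```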
+
+D. PROOF OF (27)
+
+Let $\gamma \to 0$ in (24). In this case,
+
+$$
+\lim_{\gamma \to 0} C(a) \\
+= \frac{\alpha}{a} \int_{-a/2}^{a/2} \log_2 \left( 1 + \frac{P(1 - \gamma|x| - (\gamma^2/2)|x|^2)(\mathbb{E}[|h|^2])^2}{I + \sigma^2 \mathbb{E}[|h|^2]} \right) dx \\
++ O(\gamma^3),
+\quad (D.1)
+$$
+
+where
+
+$$
+\begin{align}
+I &= \alpha a P (\mathbb{E}[|h|^2])^2 \frac{2e^{-\gamma a}}{1 - e^{-\gamma a}} + \alpha a P (\mathbb{E}[|h|^4] - (\mathbb{E}[|h|^2])^2) \nonumber \\
+ &= \alpha a P (\mathbb{E}[|h|^4] - 2(\mathbb{E}[|h|^2])^2) + \frac{2\alpha P}{\gamma} (\mathbb{E}[|h|^2])^2 \tag{D.2}
+\end{align}
+$$
+
+since $2e^{-\gamma a}/(1 - e^{-\gamma a}) = 2/(e^{\gamma a} - 1) = 2/(\gamma a) - 1 + O(\gamma a)$. We have therefore
+
+$$
+\begin{align}
+\lim_{\gamma \to 0} C(a) &= \frac{(\alpha / \ln 2) P (\mathbb{E}[|h|^2])^2 (1 - \gamma a / 4 - \gamma^2 a^2 / 24)}{(2 \alpha P / \gamma) (\mathbb{E}[|h|^2])^2 \left(1 + \gamma \left((a/2) \left(\mathbb{E}[|h|^4] / (\mathbb{E}[|h|^2])^2 - 2\right) + \sigma^2 / \left(2 \alpha P \mathbb{E}[|h|^2]\right)\right)\right)} + O(\gamma^3) \\
+&= \frac{\gamma}{2 \ln 2} \left(1 - \frac{\gamma a}{4} - \frac{\gamma^2 a^2}{24}\right) \left(1 - \gamma \left(\frac{a}{2} \left(\frac{\mathbb{E}[|h|^4]}{(\mathbb{E}[|h|^2])^2} - 2\right) + \frac{\sigma^2}{2 \alpha P \mathbb{E}[|h|^2]}\right)\right) + O(\gamma^3) \tag{D.3}
+\end{align}
+$$
+
+which gives
+
+$$
+\begin{align}
+\lim_{\gamma \to 0} C(a) &= \frac{\gamma}{2 \ln 2} \left( 1 - \gamma \left( \frac{a}{2} \left( \frac{\mathbb{E}[|h|^4]}{(\mathbb{E}[|h|^2])^2} - \frac{3}{2} \right) + \frac{\sigma^2}{2\alpha P \mathbb{E}[|h|^2]} \right) \right) + O(\gamma^3), \\
+\frac{\partial C}{\partial a} &= -\frac{\gamma^2}{4 \ln 2} \left( \frac{\mathbb{E}[|h|^4]}{(\mathbb{E}[|h|^2])^2} - \frac{3}{2} \right) + O(\gamma^3).
+\tag{D.4}
+\end{align}
+$$
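The first-order expansion used in (D.2), $2e^{-\gamma a}/(1-e^{-\gamma a}) \approx 2/(\gamma a) - 1$, can be verified numerically (the value of `a` is an arbitrary illustration):

```python
import math

a = 1.7  # illustrative cell size (assumption, not from the paper)
for gamma in (1e-2, 1e-3, 1e-4):
    exact = 2 * math.exp(-gamma * a) / (1 - math.exp(-gamma * a))
    approx = 2 / (gamma * a) - 1
    # the neglected term is of order gamma * a / 6
    assert abs(exact - approx) < gamma * a
```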
+
+REFERENCES
+
+[1] B. M. Zaidel, S. Shamai, and S. Verdu, "Multicell uplink spectral efficiency of coded DS-CDMA with random signatures," *IEEE Journal on Selected Areas in Communications*, vol. 19, no. 8, pp. 1556–1569, 2001.
+
+[2] A. Sendonaris and V. Veeravalli, “The capacity-coverage trade-off in CDMA systems with soft handoff,” in *Proceedings of the 31st Asilomar Conference on Signals, Systems & Computers*, vol. 1, pp. 625–629, Pacific Grove, Calif., USA, November 1997.
+
+[3] N. Kong and L. B. Milstein, “Error probability of multicell CDMA over frequency selective fading channels with power control error,” *IEEE Transactions on Communications*, vol. 47, no. 4, pp. 608–617, 1999.
\ No newline at end of file
diff --git a/samples/texts/6940952/page_11.md b/samples/texts/6940952/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..89f2a6308ccfd0e3e26bb38fac0c944fb0497aaa
--- /dev/null
+++ b/samples/texts/6940952/page_11.md
@@ -0,0 +1,41 @@
+[4] O. K. Tonguz and M. M. Wang, "Cellular CDMA networks impaired by Rayleigh fading: system performance with power control," *IEEE Transactions on Vehicular Technology*, vol. 43, no. 3, part 1, pp. 515-527, 1994.
+
+[5] K. S. Gilhousen, I. M. Jacobs, R. Padovani, A. J. Viterbi, L. A. Weaver Jr., and C. E. Wheatley III, "On the capacity of a cellular CDMA system," *IEEE Transactions on Vehicular Technology*, vol. 40, no. 2, pp. 303-312, 1991.
+
+[6] G. E. Corazza, G. De Maio, and F. Vatalaro, "CDMA cellular systems performance with fading, shadowing, and imperfect power control," *IEEE Transactions on Vehicular Technology*, vol. 47, no. 2, pp. 450-459, 1998.
+
+[7] D. K. Kim and F. Adachi, "Theoretical analysis of reverse link capacity for an SIR-based power-controlled cellular CDMA system in a multipath fading environment," *IEEE Transactions on Vehicular Technology*, vol. 50, no. 2, pp. 452-464, 2001.
+
+[8] J. Zhang and V. Aalo, "Performance analysis of a multicell DS-CDMA system with base station diversity," *IEE Proceedings-Communications*, vol. 148, no. 2, pp. 112-118, 2001.
+
+[9] Z. Li and M. Latva-Aho, "Performance of a multicell MC-CDMA system with power control errors in Nakagami fading channels," *IEICE Transactions on Communications*, vol. E86-B, no. 9, pp. 2795-2798, 2003.
+
+[10] F. Hiai and D. Petz, *The Semicircle Law, Free Random Variables and Entropy*, vol. 77 of *Mathematical Surveys and Monographs*, American Mathematical Society, Providence, RI, USA, 2000.
+
+[11] D. N. C. Tse and S. V. Hanly, "Linear multiuser receivers: effective interference, effective bandwidth and user capacity," *IEEE Transactions on Information Theory*, vol. 45, no. 2, pp. 641-657, 1999.
+
+[12] S. Shamai and S. Verdu, "The impact of frequency-flat fading on the spectral efficiency of CDMA," *IEEE Transactions on Information Theory*, vol. 47, no. 4, pp. 1302-1327, 2001.
+
+[13] M. Debbah, W. Hachem, P. Loubaton, and M. de Courville, "MMSE analysis of certain large isometric random precoded systems," *IEEE Transactions on Information Theory*, vol. 49, no. 5, pp. 1293-1311, 2003.
+
+[14] M. Debbah and R. R. Müller, "MIMO channel modeling and the principle of maximum entropy," *IEEE Transactions on Information Theory*, vol. 51, no. 5, pp. 1667-1690, 2005.
+
+[15] R. M. Gray, *Toeplitz and Circulant Matrices*, Stanford University, Palo Alto, Calif, USA, 1st edition, 1977.
+
+[16] M. Franceschetti, J. Bruck, and L. J. Schulman, "A random walk model of wave propagation," *IEEE Transactions on Antennas and Propagation*, vol. 52, no. 5, pp. 1304-1317, 2004.
+
+[17] S. Verdu and S. Shamai, "Spectral efficiency of CDMA with random spreading," *IEEE Transactions on Information Theory*, vol. 45, no. 2, pp. 622-640, 1999.
+
+[18] D. Guo, S. Verdu, and L. K. Rasmussen, "Asymptotic normality of linear multiuser receiver outputs," *IEEE Transactions on Information Theory*, vol. 48, no. 12, pp. 3080-3095, 2002.
+
+[19] N. Bonneau, M. Debbah, E. Altman, and G. Caire, "Spectral efficiency of CDMA uplink cellular networks," in *Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05)*, vol. 5, pp. 821-824, Philadelphia, Pa, USA, March 2005.
+
+**Nicolas Bonneau** graduated from École Polytechnique in 2002 and obtained an Engineer degree from École Nationale Supérieure des Télécommunications in 2004. He received the M.S. degree in algorithmics from Université Paris VI, Paris, in 2003. He is currently working towards Ph.D. degree in wireless communications. His research interests include information theory, ad hoc networks, as well as the applications of random matrix theory and game theory to the analysis of wireless communication systems.
+
+**Mérouane Debbah** was born in Madrid, Spain. He entered the École Normale Supérieure de Cachan (France) in 1996 where he received the M.S. and the Ph.D. degrees, respectively, in 1999 and 2002. From 1999 to 2002, he worked for Motorola Labs on Wireless Local Area Networks and prospective 4G systems. From October 2002, he was appointed Senior Researcher at the Vienna Research Center for Telecommunications (ftw.), Vienna, Austria working on MIMO wireless channel modeling issues. He is presently an Assistant Professor with the Department of Mobile Communications of the Institute Eurécom. His research interests are in information theory, signal processing, and wireless communications.
+
+**Eitan Altman** received the B.S. degree in electrical engineering (1984), the B.A. degree in physics (1984), and the Ph.D. degree in electrical engineering (1990), all from the Technion—Israel Institute of Technology, Haifa. In 1990, he also received a B.Mus. degree in music composition from Tel-Aviv University. Since 1990, he has been with INRIA (National Research Institute in Informatics and Control) in Sophia-Antipolis, France.
+His current research interests include performance evaluation and control of telecommunication networks and in particular congestion control, wireless communications, and networking games.
+He is on the editorial board of several scientific journals: Stochastic Models, JEDC, COMNET, SIAM SICON, and WINET.
+He has been the general chairman and the (co-)chairman of the program committee of several international conferences and workshops (on game theory, networking games, and mobile networks).
+More information can be found at http://www.inria.fr/mistral/personnel/Eitan.Altman/me.html.
\ No newline at end of file
diff --git a/samples/texts/6940952/page_2.md b/samples/texts/6940952/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..0398bd2b937bfe5e53954ef0e7c440758a6a6b3d
--- /dev/null
+++ b/samples/texts/6940952/page_2.md
@@ -0,0 +1,25 @@
+# Spectral Efficiency of CDMA Downlink Cellular Networks with Matched Filter
+
+Nicolas Bonneau,¹ Mérouane Debbah,² and Eitan Altman¹
+
+¹ MAESTRO, INRIA Sophia Antipolis, 2004 Route des Lucioles, B.P. 93, 06902 Sophia Antipolis, France
+
+² Mobile Communications Group, Institut Eurécom, 2229 Route des Crêtes, B.P. 193, 06904 Sophia Antipolis, France
+
+Received 20 May 2005; Revised 13 October 2005; Accepted 8 December 2005
+
+Recommended for Publication by Chia-Chin Chong
+
+In this contribution, the performance of a downlink code division multiple access (CDMA) system with orthogonal spreading and multicell interference is analyzed. A useful framework is provided in order to determine the optimal base station coverage for wireless frequency selective channels with dense networks where each user is equipped with a matched filter. Using asymptotic arguments, explicit expressions of the spectral efficiency are obtained and provide a simple expression of the network spectral efficiency based only on a few meaningful parameters. Contrary to a common misconception which asserts that to increase spectral efficiency in a CDMA network, one has to increase the number of cells, we show that, depending on the path loss and the fading channel statistics, the code orthogonality gain (due to the synchronization of all the users at the base station) can compensate for, and in some cases even outweigh, the drawbacks due to intercell interference. The results are especially realistic and useful for the design of dense networks.
+
+Copyright © 2006 Nicolas Bonneau et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
+
+## 1. INTRODUCTION
+
+An important problem that arises in the design of CDMA systems concerns the deployment of an efficient architecture to cover the users. Increasing the number of cells in a given area indeed yields better coverage, but at the same time increases intercell interference. The gain provided by a cellular network is not at all straightforward and depends on many parameters: path loss, type of codes used, receiving filter, and channel characteristics. Previous works have already studied the spectral efficiency of an uplink CDMA multicell network with Wyner's model [1] or with simple interference models [2–9]. However, none has explicitly taken into account the impact of the code structure (orthogonality) and the multipath channel characteristics.
+
+This contribution is a first step towards analyzing the complex problem of downlink CDMA multicell networks, using a new approach based on unitary random matrix theory. The purpose of this contribution is to determine, for a dense and infinite multicell network, the optimal distance between base stations. A downlink frequency selective fading CDMA scheme where each user is equipped with a linear matched filter is considered. The users are assumed to be uniformly distributed over the area. Only orthogonal access codes are considered, as the users are synchronized within each cell. The problem is analyzed in the asymptotic regime: very dense networks are considered where the spreading length $N$ tends to infinity, the number of users per meter $d$ tends to infinity, but the load per meter $d/N = \alpha$ is constant. The analysis is mainly based on asymptotic results of unitary random matrices [10]. One of the great features of this tool is that performance measures such as signal-to-interference-plus-noise ratio (SINR) [11] or spectral efficiency [12] have very simple forms in the large system limit, independent of the particular CDMA code structure. Moreover, the theoretical results were shown to be very accurate predictions of the system's behavior in the finite size case (spreading length $N = 256$ for the SINR [13] or 8 antennas for the mutual information [14]).
+
+This paper is structured as follows: in Section 2, the cellular CDMA model is introduced. In Section 3, the SINR expression is derived and an asymptotic analysis of the spectral efficiency with matched filter in case of downlink orthogonal CDMA is provided. Finally in Section 4, discussions as well as numerical simulations are provided in order to validate our analysis.
\ No newline at end of file
diff --git a/samples/texts/6940952/page_3.md b/samples/texts/6940952/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..4ce1696d2895f5c3a85386d37c484972d6d06be5
--- /dev/null
+++ b/samples/texts/6940952/page_3.md
@@ -0,0 +1,45 @@
+FIGURE 1: Representation of a CDMA cellular network.
+
+## 2. CDMA CELLULAR MODEL
+
+### 2.1. Cellular model
+
+Without loss of generality and in order to ease the understanding, we focus our analysis on a one-dimensional (1D) network. This scenario represents, for example, the case of the deployment of base stations along a motorway (users, i.e., cars, are supposed to move along the motorway). Discussions on the two-dimensional (2D) case are given in Section 3.3. An infinite length base station deployment is considered (see Figure 1). The base stations are assumed to be equidistant with interbase station distance $a$. The spreading length $N$ is fixed and is independent of the number of users. The number of users per cell is $K = da$ ($d$ is the density of the network). Note that as the size of the cell increases, each cell accommodates more users (with the constraint $da \le N$). However, the total power emitted by the network does not increase, since the same number of users is served by the network.
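
As a numerical illustration of these quantities (the figures below are ours, not from the paper): with a user density $d = 1$ user per meter and spreading length $N = 100$, the load per meter is $\alpha = d/N = 10^{-2}$; an intercell distance $a = 50$ m then gives

$$
K = da = 50 \ \text{users per cell}, \qquad \frac{K}{N} = \alpha a = 0.5,
$$

which satisfies the constraint $da \le N$ (equivalently $a \le 1/\alpha = 100$ m).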
+
+### 2.2. Downlink CDMA model
+
+In the following, upper case and lower case boldface symbols will be used for matrices and column vectors, respectively. $(\cdot)^T$ will denote the transpose operator, $(\cdot)^*$ conjugation, and $(\cdot)^H = ((\cdot)^T)^*$ hermitian transpose. $\mathbb{E}$ denotes the expectation operator. The general case of downlink wide-band CDMA is considered where the signal transmitted by the base station in cell $p$ to user $j$ has complex envelope
+
+$$x_{pj}(t) = \sum_{n} s_{pj}(n)v_{pj}(t - nT). \quad (1)$$
+
+In (1), $v_{pj}(t)$ is a weighted sum of elementary modulation pulses $\psi(t)$ which satisfy the Nyquist criterion with respect to the chip interval $T_c$ ($T = NT_c$):
+
+$$v_{pj}(t) = \sum_{\ell=1}^{N} v_{p\ell j} \psi(t - (\ell - 1)T_c). \quad (2)$$
+
+The signal is transmitted over a frequency selective channel with impulse response $c_{pj}(\tau)$. Under the assumption of slowly varying fading, the continuous time signal $y_j^p(t)$ received by user $j$ in cell $p$ has the form
+
+$$y_j^p(t) = \sum_q \sum_n \sum_{k=1}^K s_{qk}(n) \int \sqrt{P_q(x_j)}\, c_{qk}(\tau)\, v_{qk}(t - nT - \tau)\, d\tau + n(t), \quad (3)$$
+
+where $n(t)$ is the complex white Gaussian noise. In (3), the index $q$ stands for the cells, the index $n$ for the transmitted symbol, and the index $k$ for the users (in each cell there are $K$ users). User $j$ is determined by his position $x_j$. The signal (after pulse-matched filtering by $\psi^*(-t)$) is sampled at the chip rate to get a discrete-time received signal of user $j$ in cell $p$ of a downlink CDMA system that has the form
+
+$$y^p(x_j) = \sum_q \sqrt{P_q(x_j)} C_{qj} W_q s_q + n. \quad (4)$$
+
+$x_j$ are the coordinates of user $j$ in cell $p$. $y^p(x_j)$ is the $N \times 1$ received vector, $s_q = [s_q(1), \dots, s_q(K)]^T$ is the $K \times 1$ transmit vector of cell $q$, and $n = [n(1), \dots, n(N)]^T$ is an $N \times 1$ noise vector with zero mean and variance $\sigma^2$ Gaussian independent entries. $P_q(x_j)$ represents the path loss between base station $q$ and user $j$ whereas matrix $C_{qj}$ represents the $N \times N$ Toeplitz structured frequency selective channel between base station $q$ and user $j$. Each base station has an $N \times K$ code matrix $W_q = [\mathbf{w}_q^1, \dots, \mathbf{w}_q^K]$. User $j$ is subject to intra-cell interference from other users of cell $p$ as well as intercell interference from all the other cells.
+
+### 2.3. Assumptions
+
+The following assumptions are rather technical in order to simplify the analysis.
+
+#### 2.3.1. Code structure model
+
+In the downlink scenario, Walsh-Hadamard codes are usually used. However, in order to get interpretable expressions of the SINR, isometric matrices obtained by extracting $K < N$ columns from a Haar unitary matrix will be considered. An $N \times N$ random unitary matrix is said to be Haar distributed if its probability distribution is invariant by right (or equivalently left) multiplication by deterministic unitary matrices. In spite of the limited practical use, these random matrices represent a very useful analytical tool as simulations [13] and show that their use provides similar performances as Walsh-Hadamard codes. Note that each cell uses a different isometric code matrix.
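
Such an isometric code matrix is easy to sample numerically. The sketch below is our own illustration (not part of the paper): it uses the standard QR-based construction of a Haar unitary matrix, extracts $K < N$ columns, and checks the isometry $\mathbf{W}^H\mathbf{W} = \mathbf{I}_K$, i.e., that intracell codes are exactly orthonormal.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 32  # spreading length and users per cell (K < N)

# QR of a complex Gaussian matrix yields a Haar-distributed unitary,
# provided the phases of R's diagonal are absorbed into Q.
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(G)
d = np.diagonal(R)
U = Q * (d / np.abs(d))  # column-wise phase fix: U = Q @ diag(d/|d|)

W = U[:, :K]             # isometric code matrix: K columns of U

# Isometry check: W^H W = I_K (numerically zero error).
err = np.linalg.norm(W.conj().T @ W - np.eye(K))
print(err)
```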
+
+#### 2.3.2. Multipath channel
+
+We consider the case of a multipath channel. Under the assumption that the number of paths from base station $q$ to any user $j$ is given by $L_{qj}$, the model of the channel is given by
+
+$$c_{qj}(\tau) = \sum_{\ell=0}^{L_{qj}-1} \eta_{qj}(\ell) \psi(\tau - \tau_{qj}(\ell)), \quad (5)$$
+
+where we assume that the channel is invariant during the time considered. In order to compare channels at the same signal-to-noise ratio, we constrain the distribution of the i.i.d. fading coefficients $\eta_{qj}(\ell)$ such that
+
+$$\mathbb{E}[\eta_{qj}(\ell)] = 0, \quad \mathbb{E}[|\eta_{qj}(\ell)|^2] = \frac{\rho}{L_{qj}}. \quad (6)$$
\ No newline at end of file
diff --git a/samples/texts/6940952/page_4.md b/samples/texts/6940952/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..e7fcf2ece10a64b7ca046e494a87bfaa3892c856
--- /dev/null
+++ b/samples/texts/6940952/page_4.md
@@ -0,0 +1,104 @@
+Usually, fading coefficients $\eta_{qj}(\ell)$ are supposed to be independent with decreasing variance as the delay increases. In all cases, $\rho$ is the average power of the channel, so that $\mathbb{E}[|c(\tau)|^2] = \sum_{\ell=0}^{L_{qj}-1} \mathbb{E}[|\eta_{qj}(\ell)|^2] = \rho$. For each base station $q$, let $h_{qj}(i)$ be the discrete Fourier transform of the fading process $c_{qj}(\tau)$. The frequency response of the channel at the receiver is given by
+
+$$
+h_{qj}(f) = \sum_{\ell=0}^{L_{qj}-1} \eta_{qj}(\ell) e^{-j2\pi f \tau_{qj}(\ell)} |\Psi(f)|^2, \quad (7)
+$$
+
+where we assume that the transmit filter $\Psi(f)$ and the receive
+filter $\Psi^*(-f)$ are such that, given the bandwidth $W$,
+
+$$
+\Psi(f) = \begin{cases} 1 & \text{if } -\frac{W}{2} \le f \le \frac{W}{2}, \\ 0 & \text{otherwise.} \end{cases} \tag{8}
+$$
+
+Sampling at the frequencies $f_1 = -W/2$, $f_2 = -W/2 + (1/N)W, \dots, f_N = -W/2 + ((N-1)/N)W$, we obtain the coefficients $h_{qj}(i)$, $1 \le i \le N$, as
+
+$$
+h_{qj}(i) = h_{qj}(f_i) = \sum_{\ell=0}^{L_{qj}-1} \eta_{qj}(\ell) e^{-j2\pi(i/N)W\tau_{qj}(\ell)} e^{j\pi W\tau_{qj}(\ell)}. \quad (9)
+$$
+
+Note that $\mathbb{E}[|h_{qj}(i)|^2] = \rho$.
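
This normalization is easy to check by simulation. The sketch below is our illustration, assuming Rayleigh taps (the paper only constrains the first two moments in (6)) and, for concreteness, delays that are integer multiples of the chip time $1/W$ (a hypothetical choice).

```python
import numpy as np

rng = np.random.default_rng(1)
rho, L, N, trials = 2.0, 4, 64, 20000

# Hypothetical Rayleigh taps: the paper only fixes the first two moments,
# eq. (6): E[eta] = 0, E[|eta|^2] = rho / L.
eta = np.sqrt(rho / (2 * L)) * (
    rng.standard_normal((trials, L)) + 1j * rng.standard_normal((trials, L))
)

# Assumed delays tau(l) = l / W, so that W * tau(l) = l in eq. (9).
ell = np.arange(L)
i = np.arange(N)
phase = np.exp(-2j * np.pi * np.outer(i, ell) / N) * np.exp(1j * np.pi * ell)
h = eta @ phase.T  # shape (trials, N): frequency coefficients h(i) of eq. (9)

# The normalization (6) gives E[|h(i)|^2] = rho on every subcarrier.
avg_power = np.mean(np.abs(h) ** 2)
print(avg_power)  # close to rho
```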
+
+Notice that the assumption on the code structure model enables us to simplify the model (4) in the following way. Let $\mathbf{C}_{qj} = \mathbf{U}_{qj}\mathbf{H}_{qj}\mathbf{V}_{qj}$ denote the singular value decomposition of $\mathbf{C}_{qj}$. $\mathbf{U}_{qj}$ and $\mathbf{V}_{qj}$ are unitary, while $\mathbf{H}_{qj}$ is a diagonal matrix with elements $\{\tilde{h}_{qj}(1), \dots, \tilde{h}_{qj}(N)\}$.
+
+Since $\mathbf{U}_{qj}$ and $\mathbf{V}_{qj}$ are unitary, $\mathbf{U}_{qj}^H\mathbf{n}$ and $\mathbf{V}_{qj}\mathbf{W}_q$ have respectively the same distribution as $\mathbf{n}$ and $\mathbf{W}_q$ (the distribution of a Haar distributed matrix is left unchanged by left multiplication by a constant unitary matrix). As a consequence, model (4) is equivalent to
+
+$$
+\mathbf{y}^p(x_j) = \sum_q \sqrt{P_q(x_j)} \mathbf{H}_{qj} \mathbf{W}_q \mathbf{s}_q + \mathbf{n}, \quad (10)
+$$
+
+where $\mathbf{H}_{qj}$ is a diagonal matrix with diagonal elements $\{\tilde{h}_{qj}(i)\}_{i=1...N}$. Note that for any user $j$ in cell $p$, when $N \to \infty$, the coefficients $\tilde{h}_{qj}(i)$ tend to the discrete Fourier transform of the channel impulse response $h_{qj}(i)$ given by (9) (see [15]).
+
+#### 2.3.3. Path loss
+
+The general path loss $P_q(x_j)$ depends on a path loss factor which characterizes the type of attenuation. The greater the factor is, the more severe the attenuation is. In Section 3, we will derive expressions for an exponential path loss $P_q(x_j) = Pe^{-\gamma|x_j - m_q|}$ [16], where $m_q$ are the coordinates of base station $q$. Note that in the usual model, the attenuation is of the polynomial form $P_q(x_j) = P/|x_j - m_q|^{\beta}$. The polynomial path loss will be considered through simulations in Section 4 to validate our results. We use the exponential form for the sake of calculation simplicity; it places the framework in the most severe path loss scenario, in favor of the multicell approach.
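
The severity claim is easy to verify numerically. The sketch below is our illustration (parameter values are ours): it compares the two attenuation models at a few distances; beyond the crossover, the exponential model attenuates far more strongly than the polynomial one.

```python
import math

P, gamma, beta = 1.0, 1.0, 4.0

def exp_loss(x):
    """Exponential path loss P * exp(-gamma * |x|), as in [16]."""
    return P * math.exp(-gamma * abs(x))

def poly_loss(x):
    """Usual polynomial path loss P / |x|^beta."""
    return P / abs(x) ** beta

# At short range the polynomial model decays faster, but at larger
# distances the exponential model is by far the more severe one.
for x in (2.0, 5.0, 20.0):
    print(x, exp_loss(x), poly_loss(x))
```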
+
+## 3. PERFORMANCE ANALYSIS
+
+### 3.1. General SINR formula
+
+In all the following, without loss of generality, we will focus on user $j$ of cell $p$. We assume that the user knows neither the codes of the other cells nor the codes of the other users within the same cell. As a consequence, the user is equipped with the matched filter receiver $\mathbf{g}_{pj} = \mathbf{H}_{pj}\mathbf{w}_p^j$.
+
+The output of the matched filter is given by
+
+$$
+\begin{equation}
+\begin{split}
+\mathbf{g}_{pj}^H \mathbf{y}^p(x_j) ={}& \sqrt{P_p(x_j)} \mathbf{g}_{pj}^H \mathbf{H}_{pj} \mathbf{w}_p^j s_p(j) \\
+& + \sqrt{P_p(x_j)} \mathbf{g}_{pj}^H \mathbf{H}_{pj} \mathbf{W}_p^{(-j)} \begin{bmatrix} s_p(1) \\ \vdots \\ s_p(j-1) \\ s_p(j+1) \\ \vdots \\ s_p(K) \end{bmatrix}_{(K-1)\times 1} \\
+& + \sum_{q \neq p} \sqrt{P_q(x_j)} \mathbf{g}_{pj}^H \mathbf{H}_{qj} \mathbf{W}_q \mathbf{s}_q + \mathbf{g}_{pj}^H \mathbf{n},
+\end{split}
+\tag{11}
+\end{equation}
+$$
+
+where $\mathbf{W}_p^{(-j)} = [\mathbf{w}_p^1, \dots, \mathbf{w}_p^{j-1}, \mathbf{w}_p^{j+1}, \dots, \mathbf{w}_p^K]$. From (11), we obtain the expression for the output SINR of user $j$ in cell $p$ with coordinates $x_j$ and code $\mathbf{w}_p^j$:
+
+$$
+\text{SINR}(x_j, \mathbf{w}_p^j) = \frac{S^*(x_j)}{I_1(x_j) + I_2(x_j) + \sigma^2 \mathbf{w}_p^{jH} \mathbf{H}_{pj}^H \mathbf{H}_{pj} \mathbf{w}_p^j}, \quad (12)
+$$
+
+where
+
+$$
+S^*(x_j) = P_p(x_j) |\mathbf{w}_p^{jH} \mathbf{H}_{pj}^H \mathbf{H}_{pj} \mathbf{w}_p^j|^2, \quad (13)
+$$
+
+$$
+I_1(x_j) = \sum_{q \neq p} P_q(x_j) \mathbf{w}_p^{jH} \mathbf{H}_{pj}^H \mathbf{H}_{qj} \mathbf{W}_q \mathbf{W}_q^H \mathbf{H}_{qj}^H \mathbf{H}_{pj} \mathbf{w}_p^j, \quad (14)
+$$
+
+$$
+I_2(x_j) = P_p(x_j) \mathbf{w}_p^{jH} \mathbf{H}_{pj}^H \mathbf{H}_{pj} \mathbf{W}_p^{(-j)} \mathbf{W}_p^{(-j)H} \mathbf{H}_{pj}^H \mathbf{H}_{pj} \mathbf{w}_p^j. \quad (15)
+$$
+
+Note that the SINR is a random variable with respect to the channel model. For a fixed $d$ (or $K = da$) and $N$, it is extremely difficult to get some insight into expression (12). In order to provide a tractable expression, we will analyze (12) in the asymptotic regime ($N \to \infty$, $d \to \infty$, but $d/N \to \alpha$) and show in particular that $\text{SINR}(x_j, \mathbf{w}_p^j)$ converges almost surely to a random value $\text{SINR}_{\text{lim}}(x_j, p)$ independent of the code $\mathbf{w}_p^j$. Usual analyses based on random matrices use the ratio $K/N$ [17], also known as the load of the system. In our case, the ratio $K/N$ is equal to $\alpha a$.
+
+**Proposition 1.** When *N* grows towards infinity and *d*/*N* → *α*, the SINR of user *j* in cell *p* in downlink CDMA with orthogonal
\ No newline at end of file
diff --git a/samples/texts/6940952/page_5.md b/samples/texts/6940952/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..4556744e48a26f3ef512f0acea8e47801e3fc5a1
--- /dev/null
+++ b/samples/texts/6940952/page_5.md
@@ -0,0 +1,55 @@
+spreading codes and matched filter is given by
+
+$$
+\begin{align*}
+\text{SINR}_{\text{lim}}(x_j, p) &= \frac{P_p(x_j) \left( (1/W) \int_{-W/2}^{W/2} |h_{pj}(f)|^2 df \right)^2}{I_1(x_j) + I_2(x_j) + (\sigma^2/W) \int_{-W/2}^{W/2} |h_{pj}(f)|^2 df}, \\
+I_1(x_j) &= \frac{\alpha a}{W} \sum_{q \neq p} P_q(x_j) \int_{-W/2}^{W/2} |h_{pj}(f)|^2 |h_{qj}(f)|^2 df, \\
+I_2(x_j) &= \frac{\alpha a}{W} P_p(x_j) \left( \int_{-W/2}^{W/2} |h_{pj}(f)|^4 df \right. \\
+&\qquad \left. - \frac{1}{W} \left( \int_{-W/2}^{W/2} |h_{pj}(f)|^2 df \right)^2 \right). \tag{16}
+\end{align*}
+$$
+
+Proof. See Appendix B.
+
+### 3.2. Spectral efficiency
+
+We would like to quantify the number of bps/Hz the system is able to provide to all the users. It has been shown [18] that the interference plus noise at the output of the matched filter for randomly spread systems can be considered as Gaussian when K and N are large enough. In this case, the mean (with respect to the position of the users and the fading) spectral efficiency of cell *p* is given by
+
+$$C^p = \frac{1}{N} \mathbb{E}_{x,h} \left[ \sum_{j=1}^{K} \log_2 \left( 1 + \text{SINR} \left( x_j, w_p^j \right) \right) \right]. \quad (17)$$
+
+In the asymptotic case and due to invariance by translation, the spectral efficiency per cell is the same for all cells. As a consequence, the network spectral efficiency is infinite. Without loss of generality, we will consider a user in cell 0 ($x_j \in [-a/2, a/2]$) and the corresponding asymptotic SINR is denoted by $\text{SINR}_{\text{lim}}(x_j)$. Assuming the same distribution for all the users in cell 0, we drop the index $j$. The measure of performance in this case is the number of bits per second per hertz per meter (bps/Hz/meter) the system is able to deliver:
+
+$$
+\begin{align}
+C &= \frac{1}{a N} \mathbb{E}_{x,h} [\log_2 (1 + \text{SINR}_{\text{lim}}(x))] \tag{18} \\
+&= \alpha \mathbb{E}_{x,h} [\log_2 (1 + \text{SINR}_{\text{lim}}(x))]. \nonumber
+\end{align}
+$$
+
+The total spectral efficiency of the network thus scales linearly with its size, with slope $C$. If we suppose that in each cell, the statistics of the channels are the same, then denoting $P_0(x) = P(x)$, $h_0(f) = h(f)$, and $L_0 = L$, we obtain the following proposition from (16).
+
+**Proposition 2.** When the spreading length *N* grows towards infinity and *d/N* → *α*, the asymptotic spectral efficiency per meter of downlink CDMA with random orthogonal spreading codes, general path loss, and matched filter is given by
+
+$$
+\begin{align}
+C(a) &= \frac{\alpha}{a} \mathbb{E}_h \left[ \int_{-a/2}^{a/2} \log_2 \left( 1 + \frac{P(x)((1/W) \int_{-W/2}^{W/2} |h(f)|^2 df)^2}{I(x) + (\sigma^2/W) \int_{-W/2}^{W/2} |h(f)|^2 df} \right) dx \right], \notag \\
+I(x) &= \frac{\alpha a}{W} \sum_{q \neq 0} P_q(x) \int_{-W/2}^{W/2} |h(f)|^2 |h_q(f)|^2 df \notag \\
+&\quad + \frac{\alpha a}{W} P(x) \left( \int_{-W/2}^{W/2} |h(f)|^4 df - \frac{1}{W} \left( \int_{-W/2}^{W/2} |h(f)|^2 df \right)^2 \right) \tag{19}
+\end{align}
+$$
+
+with $a \in [0, 1/\alpha]$.
+
+### 3.3. 2D network
+
+In the case of a 2D network, the expression for the general SINR (16) from Proposition 1 is still valid if we admit that $x_j = (x_j^1, x_j^2)$ represents the coordinates of the user considered, *d* is the density of users *per square meter*, and *a* = $|C|$ is the surface of the cell *C*. The expression (19) for the spectral efficiency from Proposition 2 can be immediately rewritten with a double integration over the surface of the cell:
+
+$$
+\begin{align}
+C(a) &= \frac{\alpha}{a} \mathbb{E}_h \left[ \iint_C \log_2 \left( 1 + \frac{P(x^1, x^2) ((1/W) \int_{-W/2}^{W/2} |h(f)|^2 df)^2}{I(x^1, x^2) + (\sigma^2/W) \int_{-W/2}^{W/2} |h(f)|^2 df} \right) dx^1 dx^2 \right], \notag \\
+I(x^1, x^2) &= \frac{\alpha a}{W} \sum_{q \neq 0} P_q(x^1, x^2) \int_{-W/2}^{W/2} |h(f)|^2 |h_q(f)|^2 df \notag \\
+&\quad + \frac{\alpha a}{W} P(x^1, x^2) \left( \int_{-W/2}^{W/2} |h(f)|^4 df - \frac{1}{W} \left( \int_{-W/2}^{W/2} |h(f)|^2 df \right)^2 \right) \tag{20}
+\end{align}
+$$
+
+with $a \in [0, 1/\alpha]$.
\ No newline at end of file
diff --git a/samples/texts/6940952/page_6.md b/samples/texts/6940952/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f7ecb6badc8988efceeece95cb8a928c739a46a
--- /dev/null
+++ b/samples/texts/6940952/page_6.md
@@ -0,0 +1,101 @@
+### 3.4. Simplifying assumptions
+
+#### 3.4.1. Equi-spaced delays
+
+For ease of understanding of the impact of the number of paths on the orthogonality gain, we suppose that in each cell $q$, all the users have the same number of paths $L_q$ and that the delays are uniformly spaced according to the bandwidth:
+
+$$
+\tau_q(\ell) = \frac{\ell}{W}. \tag{21}
+$$
+
+Hence, replacing $h(f)$ and $h_q(f)$ by their expressions in terms of the temporal coefficients (7) and using (21), (19) from Proposition 2 reduces to
+
+$$
+C(a) = \frac{\alpha}{a} \mathbb{E}_{\eta} \left[ \int_{-a/2}^{a/2} \log_2 \left( 1 + \frac{P(x) \left( \sum_{\ell=0}^{L-1} |\eta(\ell)|^2 \right)^2}{I(x) + \sigma^2 \sum_{\ell=0}^{L-1} |\eta(\ell)|^2} \right) dx \right],
+$$
+
+$$
+I(x) = \alpha a \sum_{q \neq 0} P_q(x) \left( \left( \sum_{\ell=0}^{L-1} |\eta(\ell)|^2 \right) \left( \sum_{\ell=0}^{L_q-1} |\eta_q(\ell)|^2 \right) \right. \\
+\left. + \sum_{\ell=0}^{\min(L,L_q)-1} \sum_{\ell' \neq \ell} \eta(\ell)\eta(\ell')^* \eta_q(\ell)^* \eta_q(\ell') \right) \\
++ \alpha a P(x) \left( \sum_{\ell=0}^{L-1} \sum_{\ell' \neq \ell} |\eta(\ell)|^2 |\eta(\ell')|^2 \right). \tag{22}
+$$
+
+In the case of a single path (i.e., $L_q = 1$ for all $q$), the signal is only affected by flat fading, therefore orthogonality is preserved and intracell interference vanishes:
+
+$$
+C(a) = \frac{\alpha}{a} \mathbb{E}_{\eta} \left[ \int_{-a/2}^{a/2} \log_2 \left( 1 + \frac{P(x)|\eta|^2}{\alpha a \sum_{q \neq 0} P_q(x) |\eta_q|^2 + \sigma^2} \right) dx \right]. \quad (23)
+$$
+
+#### 3.4.2. Exponential path loss and ergodic case
+
+In the case of an exponential path loss, explicit expressions of the spectral efficiency can be derived when $L_q \to \infty$ for all $q$, referred to in the following as the ergodic case. Although $L_q$ grows large, we suppose $L_q$ to be negligible with respect to $N$ (see the appendix). The impact of frequency reuse is also considered. In other words, $r$ adjacent cells may use different frequencies to reduce the amount of interference. This point is a critical issue in determining the impact of frequency reuse on the spectral efficiency of downlink CDMA networks.
+
+**Proposition 3.** When the spreading length $N$ and the number of paths $L_q$ (for all $q$) grow towards infinity with $d/N \to \alpha$ and $L_q/N \to 0$, the asymptotic spectral efficiency per meter of downlink CDMA with random orthogonal spreading codes, exponential path loss, frequency reuse $r$, and matched filter is given by
+
+$$
+C(a) = \frac{\alpha}{ar} \int_{-a/2}^{a/2} \log_2 \left( 1 + \frac{Pe^{-\gamma|x|} (\mathbb{E}[|h|^2])^2}{I(x) + \sigma^2 \mathbb{E}[|h|^2]} \right) dx,
+$$
+
+$$
+I(x) = \alpha a P (\mathbb{E}[|h|^2])^2 \frac{2e^{-\gamma ar}}{1 - e^{-\gamma ar}} \cosh(\gamma x) \\
+\qquad + \alpha a P e^{-\gamma|x|} (\mathbb{E}[|h|^4] - (\mathbb{E}[|h|^2])^2) \tag{24}
+$$
+
+with $a \in [0, 1/\alpha]$.
+
+Proof. See Appendix C.
+
+In the case where $a \to 0$, the number of bps/Hz/meter depends only on the fading statistics, the path loss, and the factor $\alpha = d/N$ through
+
+$$
+C(0) = \alpha \log_2 \left( 1 + \frac{P\mathbb{E}[|h|^2]}{\sigma^2 + 2\alpha P\mathbb{E}[|h|^2]/\gamma} \right). \quad (25)
+$$
+
+For the proof, let $a \to 0$ in (24).
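
Both expressions are easy to check against each other numerically. The sketch below is our own illustration (not from the paper), assuming Rayleigh statistics so that $\mathbb{E}[|h|^2] = 1$, $\mathbb{E}[|h|^4] = 2$, reuse $r = 1$, and parameter values of our choosing: it evaluates (24) by midpoint integration and confirms that it approaches the small-cell limit (25) as $a \to 0$.

```python
import math

P, sigma2, alpha, gamma, r = 1.0, 1e-7, 1e-2, 1.0, 1
m2, m4 = 1.0, 2.0  # Rayleigh fading: E|h|^2 = 1, E|h|^4 = 2

def C(a, n=2000):
    """Ergodic spectral efficiency per meter, eq. (24), by midpoint rule."""
    geo = 2 * math.exp(-gamma * a * r) / (1 - math.exp(-gamma * a * r))
    dx = a / n
    total = 0.0
    for k in range(n):
        x = -a / 2 + (k + 0.5) * dx
        I = (alpha * a * P * m2**2 * geo * math.cosh(gamma * x)
             + alpha * a * P * math.exp(-gamma * abs(x)) * (m4 - m2**2))
        sinr = P * math.exp(-gamma * abs(x)) * m2**2 / (I + sigma2 * m2)
        total += math.log2(1 + sinr) * dx
    return alpha / (a * r) * total

# Small-cell limit, eq. (25).
C0 = alpha * math.log2(1 + P * m2 / (sigma2 + 2 * alpha * P * m2 / gamma))
print(C(1e-3), C0)  # nearly identical
```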
+
+## 4. DISCUSSION
+
+In all the following discussion, $P = 1$, $\sigma^2 = 10^{-7}$, $\alpha = 10^{-2}$, and $r = 1$ (unless specified otherwise).
+
+### 4.1. Path loss versus orthogonality
+
+We would like to quantify the impact of path loss on the overall performance of the system when considering downlink unfaded CDMA. In this case,
+
+$$
+C(a) = \frac{\alpha}{a} \int_{-a/2}^{a/2} \log_2 \left( 1 + \frac{P(x)}{I(x) + \sigma^2} \right) dx,
+$$
+
+$$
+I(x) = \alpha a \sum_{q \neq 0} P_q(x) \tag{26}
+$$
+
+with $a \in [0, 1/\alpha]$.
+
+In Figures 2 and 3, we have plotted the spectral efficiency per meter with respect to the intercell distance for an exponential ($\gamma = 1, 2, 3$) and polynomial ($\beta = 4$) path loss, without frequency selective fading. Remarkably, for each path loss factor, there is an optimum intercell distance which maximizes the users' spectral efficiency. This surprising result shows that there is no need to pack base stations without bound if one can completely remove the effect of frequency selective fading. It can be shown that the optimal spacing depends mainly on the path loss factor $\gamma$ and increases as the path loss factor decreases.
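
The interior optimum can be reproduced numerically from the unfaded expression above. The sketch below is our own (parameter values as in this section, exponential path loss with $\gamma = 1$): it evaluates the spectral efficiency per meter over a grid of intercell distances and locates the maximum.

```python
import math

P, sigma2, alpha, gamma = 1.0, 1e-7, 1e-2, 1.0

def C(a, n=200, Q=60):
    """Unfaded spectral efficiency per meter: midpoint rule over the cell,
    intercell interference sum truncated at |q| <= Q base stations."""
    dx = a / n
    total = 0.0
    for k in range(n):
        x = -a / 2 + (k + 0.5) * dx
        I = alpha * a * sum(P * math.exp(-gamma * abs(x - q * a))
                            for q in range(-Q, Q + 1) if q != 0)
        total += math.log2(1 + P * math.exp(-gamma * abs(x)) / (I + sigma2)) * dx
    return alpha / a * total

# Scan a in (0, 1/alpha]: the spectral efficiency first grows, reaches an
# interior maximum, then decays, reproducing the shape of Figure 2.
grid = [float(a) for a in range(1, 101, 3)]
best = max(grid, key=C)
print(best, C(best))
```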
+
+### 4.2. Ergodic fading versus orthogonality
+
+We would like to quantify the impact of the channel statistics on the intercell distance. In other words, in the case of limited path loss, should one increase or reduce the cell size? A neat framework can be formulated in the case of exponential path loss with vanishing values of the path loss factor $\gamma$ and ergodic fading. Although the spectral efficiency tends to zero,
diff --git a/samples/texts/6940952/page_7.md b/samples/texts/6940952/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..b52fb3f21e529b3732a249f2113abe94a734867d
--- /dev/null
+++ b/samples/texts/6940952/page_7.md
@@ -0,0 +1,27 @@
+FIGURE 2: Spectral efficiency versus intercell distance (in meters) in the case of exponential path loss and no fading: $\sigma^2 = 10^{-7}$, $P = 1$, and $\gamma = 1, 2, 3$.
+
+FIGURE 3: Spectral efficiency versus intercell distance (in meters) in the case of polynomial path loss ($\beta = 4$) and no fading: $\sigma^2 = 10^{-7}$, $P = 1$.
+
+one can infer the behavior of the derivative of the spectral efficiency, which is given by
+
+$$ \frac{\partial C}{\partial a} \propto \left( \frac{3}{2} - \frac{\mathbb{E}[|h|^4]}{(\mathbb{E}[|h|^2])^2} \right) \quad (27) $$
+
+with $a \in [0, 1/\alpha]$.
+
+For the proof, see Appendix D.
+
+This simplified case (exponential path loss with ergodic fading) is quite instructive on the impact of frequency selective fading on orthogonal downlink CDMA. In the ergodic
+
+FIGURE 4: Spectral efficiency versus intercell distance (in meters) in the case of exponential path loss and multipath fading: $\sigma^2 = 10^{-7}$, $P = 1$, $\gamma = 1$, and $L = 1, 2, 10$.
+
+case and with limited path loss, the optimum number of cells depends only on how “peaky” the channel is through the kurtosis $T = \mathbb{E}[|h|^4]/(\mathbb{E}[|h|^2])^2$. If $T > 3/2$, orthogonality is severely destroyed by the channel and one has to decrease the cell size whereas if $T \le 3/2$, one can increase the cell size.¹
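
The kurtosis criterion is easy to check numerically for a concrete fading law. The Monte-Carlo sketch below (our illustration, not from the paper) estimates $T$ for Rayleigh fading, where $h$ is circularly symmetric complex Gaussian and the theoretical value is $T = 2 > 3/2$, i.e., the "decrease the cell size" regime.

```python
import numpy as np

# Monte-Carlo estimate of the kurtosis T = E[|h|^4] / (E[|h|^2])^2 for
# Rayleigh fading: h circularly symmetric complex Gaussian, unit variance.
# Theory gives T = 2 (since |h|^2 is exponential), which exceeds 3/2,
# so fading destroys orthogonality and the cell size should shrink.
rng = np.random.default_rng(0)
n = 200_000
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
T = np.mean(np.abs(h) ** 4) / np.mean(np.abs(h) ** 2) ** 2  # close to 2
```
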
+
+### 4.3. Number of paths versus orthogonality
+
+We would like to quantify the impact of the number of multipaths on the overall performance of the system. In Figures 4 and 5, we have plotted the spectral efficiency per meter with respect to the intercell distance for an exponential ($\gamma = 1$) and polynomial ($\beta = 4$) path loss, in each case for numbers of multipaths $L = 1, L = 2$, and $L = 10$ (supposing an equal number of paths is generated by each cell) and Rayleigh fading. For $L = 1$, fading does not destroy orthogonality and as a consequence, an optimum intercell distance is obtained as in the nonfading case. However, for any value of $L > 1$, the optimum intercell distance is equal to 0.
+
+### 4.4. Impact of reuse factor
+
+In Figure 6, we consider a realistic case with ergodic Rayleigh frequency selective fading and $\gamma = 2$ and reuse factor $r = 1, 2, 3$. The spectral efficiency has been plotted for various values of the intercell distance. The curve shows that the users' rate decreases with increasing intercell distance, which is mainly due to frequency selective fading. Note that the best
+
+¹ The value 3/2 is mainly dependent on the type of path loss (exponential, polynomial, ...).
\ No newline at end of file
diff --git a/samples/texts/6940952/page_8.md b/samples/texts/6940952/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..f8211bd4f27e45edb1f8964c12e47b3876c41e38
--- /dev/null
+++ b/samples/texts/6940952/page_8.md
@@ -0,0 +1,17 @@
+FIGURE 5: Spectral efficiency versus intercell distance (in meters) in the case of polynomial path loss ($\beta = 4$) and multipath fading: $\sigma^2 = 10^{-7}$, $P = 1$, and $L = 1, 2, 10$.
+
+FIGURE 6: Effect of the reuse factor: spectral efficiency versus intercell distance (in meters) in the case of exponential path loss and fading: $\sigma^2 = 10^{-7}$, $\gamma = 2$, $P = 1$, and $r = 1, 2, 3$.
+
+spectral efficiency is achieved for a reuse factor of 1, meaning that all base stations should use all the available bandwidth.
+
+### 4.5. General discussion
+
+We would like to show that, in a cellular system, multipath fading is in fact more dramatic than path loss and restoring orthogonality through diversity (multiple antennas at the base station) and equalization techniques (MMSE, ...) pays off. To visually confirm this fact, Figure 7 plots for a path loss factor $\gamma = 2$ the spectral efficiency per meter with respect to the intercell distance in the ergodic Rayleigh frequency selective fading, nonfading, and intercell interference-free case (i.e., $(1/a) \int_{-a/2}^{a/2} \log_2(1 + P(x)/\sigma^2)dx$). The figure shows that one can more than triple the spectral efficiency per meter by restoring orthogonal multiple access for any intercell distance. Moreover, for small values of the intercell distance, greater gains can be achieved if one removes intercell interference (by exploiting the statistics of the intercell interference, e.g.). Note also that even with fading and intercell interference, the capacity gain with respect to the number of base stations is not linear and therefore, based on economic constraints, the optimal interbase station distance can be determined. Hence, based on the quality of service targets for each user, the optimum intercell distance can be straightforwardly derived.
+
+FIGURE 7: Spectral efficiency versus intercell distance (in meters) in the case of exponential path loss with fading, without fading, and interference-free case (one cell): $\sigma^2 = 10^{-7}$, $P = 1$, and $\gamma = 2$.
+
+## 5. CONCLUSION
+
+Using asymptotic arguments, an explicit expression of the spectral efficiency was derived and was shown to depend only on a few meaningful parameters. This contribution is also very instructive in terms of future research directions. In the “traditional point of view” of cellular systems, the general
+
+guidance to increase the cell size has always been related to an increase in the transmitted power to reduce path loss. However, these results show that path loss is only the second part of the story and the first obstacle is on the contrary frequency selective fading since path loss does not destroy orthogonal multiple access whereas frequency selective fading does. These considerations show therefore that all the effort must be focused on combating frequency selective fading through diversity and equalization techniques in order to
\ No newline at end of file
diff --git a/samples/texts/6940952/page_9.md b/samples/texts/6940952/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..9fbad200d0957c926a39f823594aa2f33d97a500
--- /dev/null
+++ b/samples/texts/6940952/page_9.md
@@ -0,0 +1,72 @@
+restore orthogonality. Finally, note that the results presented in this paper deal only with the downlink and any deployment strategy should take also into account the uplink traffic as in [19].
+
+# APPENDIX
+
+## A. PRELIMINARY RESULTS
+
+Let $X_p$ be an $N \times N$ random matrix with i.i.d. zero-mean unit variance Gaussian entries. The matrix $X_p(X_p^H X_p)^{-1/2}$ is unitary Haar distributed (see [13] for more details). The code matrix $W_p = [w_p^1, ..., w_p^K]$ is obtained by extracting $K$ orthogonal columns from $X_p(X_p^H X_p)^{-1/2}$. In this case, the entries of matrix $W_p$ verify [10] that
+
+$$ \mathbb{E}[|w_p^1(i)|^2] = \frac{1}{N}, \quad 1 \le i \le N, \qquad (\text{A.1}) $$
+
+$$ \mathbb{E}[|w_p^1(i)|^2 |w_p^k(i)|^2] = \frac{1}{N(N+1)}, \quad k > 1, \qquad (\text{A.2}) $$
+
+$$ \mathbb{E}[w_p^1(i)^* w_p^k(i) w_p^1(l) w_p^k(l)^*] = -\frac{1}{N(N^2-1)}, \quad k>1, i \neq l. \qquad (\text{A.3}) $$
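As a sanity check on (A.1) (our illustration, not part of the appendix), the moment can be verified by Monte Carlo. The unitary factor $X_p(X_p^H X_p)^{-1/2}$ is computed via the SVD $X = USV^H$, since the polar factor equals $UV^H$.

```python
import numpy as np

# Monte-Carlo sketch of (A.1): with X i.i.d. complex Gaussian, the unitary
# factor W = X (X^H X)^{-1/2} is Haar distributed, and each entry of a
# column satisfies E[|w(i)|^2] = 1/N. The polar factor is obtained from
# the SVD X = U S V^H, because X (X^H X)^{-1/2} = U V^H.
rng = np.random.default_rng(1)
N, trials = 8, 4000
second_moment = np.zeros(N)
for _ in range(trials):
    X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
    U, _, Vh = np.linalg.svd(X)
    W = U @ Vh                            # Haar-distributed unitary matrix
    second_moment += np.abs(W[:, 0]) ** 2  # entries of the first column w_p^1
second_moment /= trials                   # each entry should be close to 1/N
```

The same loop, extended to products of two columns, can be used to spot-check (A.2) and (A.3).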
+
+## B. PROOF OF PROPOSITION 1
+
+### Term S*
+
+Let us focus on the term $S^*$ of (13). As $N \to \infty$, (13) becomes
+
+$$ S^* = \mathbf{w}_p^{jH} \mathbf{H}_{pj}^H \mathbf{H}_{pj} \mathbf{w}_p^j = \sum_{i=1}^{N} |h_{pj}(i)|^2 |w_p^j(i)|^2. \qquad (\text{B.1}) $$
+
+Using (A.1), it is rather straightforward to show that
+
+$$ S^* \rightarrow \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} |h_{pj}(i)|^2 \qquad (\text{B.2}) $$
+
+in the mean-square sense. Therefore,
+
+$$ S^* \xrightarrow[N \to \infty]{} \frac{1}{W} \int_{-W/2}^{W/2} |h_{pj}(f)|^2 df. \qquad (\text{B.3}) $$
+
+Formula (B.3) stems from the fact that as $N \to \infty$, the eigenvalues $|h_{pj}(i)|^2$ of $H_{pj}^H H_{pj}$ correspond to the squared frequency response of the channel in the case of a Toeplitz structure of $H_{pj}$ (see [15]).
+
+### Term $I_1$
+
+Let us now derive the term $I_1$ of (14). It can be shown that (since $\mathbf{w}_p^j$ is independent of $\mathbf{W}_q$, see proof in [13])
+
+$$ \begin{aligned} & \mathbf{w}_p^{jH} \mathbf{H}_{pj}^H \mathbf{H}_{qj} \mathbf{W}_q \mathbf{W}_q^H \mathbf{H}_{qj}^H \mathbf{H}_{pj} \mathbf{w}_p^j \\ & - \frac{1}{N} \operatorname{trace}(\mathbf{W}_q \mathbf{W}_q^H \mathbf{H}_{qj}^H \mathbf{H}_{pj} \mathbf{H}_{pj}^H \mathbf{H}_{qj}) \xrightarrow[N \to \infty]{} 0. \end{aligned} \qquad (\text{B.4}) $$
+
+Therefore, each term $I_q^*$ in the sum in (14) can be calculated as
+
+$$ \begin{aligned} I_q^* &= \mathbf{w}_p^{jH} \mathbf{H}_{pj}^H \mathbf{H}_{qj} \mathbf{W}_q \mathbf{W}_q^H \mathbf{H}_{qj}^H \mathbf{H}_{pj} \mathbf{w}_p^j \\ &\approx \frac{1}{N} \operatorname{trace}(\mathbf{W}_q \mathbf{W}_q^H \mathbf{H}_{qj}^H \mathbf{H}_{pj} \mathbf{H}_{pj}^H \mathbf{H}_{qj}) \\ &= \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{N} |h_{pj}(i)|^2 |h_{qj}(i)|^2 |w_q^k(i)|^2. \end{aligned} \qquad (\text{B.5}) $$
+
+Using (A.1), it is rather straightforward to show that
+
+$$ I_q^* \rightarrow \lim_{N \to \infty} \frac{1}{N^2} \sum_{k=1}^{K} \sum_{i=1}^{N} |h_{pj}(i)|^2 |h_{qj}(i)|^2 \qquad (\text{B.6}) $$
+
+in the mean-square sense. Therefore,
+
+$$ I_q^* \xrightarrow[\substack{N \to \infty \\ K/N \to \alpha a}]{} \frac{\alpha a}{W} \int_{-W/2}^{W/2} |h_{pj}(f)|^2 |h_{qj}(f)|^2 df. \qquad (\text{B.7}) $$
+
+### Term $I_2$
+
+Finally, let us derive the asymptotic expression of $I_2$ in (15). The proof follows here a different procedure as $\mathbf{w}_p^j$ is not independent of $\mathbf{W}_p^{(-j)}$. However, one can show that
+
+$$ \begin{aligned} I_2 &= P_p(x_j)\, \mathbf{w}_p^{jH} \mathbf{H}_{pj}^H \mathbf{H}_{pj} \mathbf{W}_p^{(-j)} \mathbf{W}_p^{(-j)H} \mathbf{H}_{pj}^H \mathbf{H}_{pj} \mathbf{w}_p^j \\ &= P_p(x_j) \sum_{\substack{k=1 \\ k \neq j}}^{K} \left| \mathbf{w}_p^{jH} \mathbf{H}_{pj}^H \mathbf{H}_{pj} \mathbf{w}_p^k \right|^2 \\ &= P_p(x_j) \sum_{\substack{k=1 \\ k \neq j}}^{K} \left| \sum_{i=1}^{N} |h_{pj}(i)|^2 w_p^j(i)^* w_p^k(i) \right|^2 \\ &= P_p(x_j) \sum_{\substack{k=1 \\ k \neq j}}^{K} \sum_{i=1}^{N} \sum_{l=1}^{N} |h_{pj}(i)|^2 |h_{pj}(l)|^2 w_p^j(i)^* w_p^k(i) w_p^j(l) w_p^k(l)^*, \end{aligned} \qquad (\text{B.8}) $$
+
+and using (A.2) and (A.3), it can be shown that $I_2$ converges in the mean-square sense to
+
+$$ P_p(x_j) \lim_{N\to\infty} \left[ \frac{1}{N(N+1)} \sum_{\substack{k=1 \\ k\ne j}}^{K} \sum_{i=1}^{N} |h_{pj}(i)|^4 - \frac{1}{N(N^2-1)} \sum_{\substack{k=1 \\ k\ne j}}^{K} \sum_{i=1}^{N} \sum_{\substack{l=1 \\ l\ne i}}^{N} |h_{pj}(i)|^2 |h_{pj}(l)|^2 \right]. \qquad (\text{B.9}) $$
\ No newline at end of file
diff --git a/samples/texts/7197773/page_1.md b/samples/texts/7197773/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..68b7df7a9f9774ab65787a18affa3fb51566903a
--- /dev/null
+++ b/samples/texts/7197773/page_1.md
@@ -0,0 +1,141 @@
+**CHEMISTRY-11**
+
+Name:
+
+Class:
+
+ID:
+
+Date: / /
+
+Time Allowed: 40 Min.
+
+Marks Total: **25**
+
+Marks Obtained:
+
+Maximum Marks: 09
+
+(OBJECTIVE TYPE)
+
+Time Allowed: 10 Min.
+
+NOTE: Tick The Correct Option:
+
+1) The net heat change in a chemical reaction is same, whether it is brought about in two or more different ways in one or several steps. It is known as:
+
+(a) Henry's law
+
+(b) Joule's principle
+
+(c) Hess's law
+
+(d) Law of conservation of energy
+
+2) In endothermic reaction, ΔH is taken as:
+
+(a) Positive
+
+(b) Zero
+
+(c) Negative
+
+(d) May be any value
+
+3) For the reaction: NaOH + HCl → NaCl + H₂O, the change in enthalpy is called:
+
+(a) Heat of reaction
+
+(b) Heat of neutralization
+
+(c) Heat of formation
+
+(d) Heat of combustion
+
+4) In a Bomb Calorimeter, the reactions are carried out at constant:
+
+(a) Pressure
+
+(b) Temperature
+
+(c) Volume
+
+(d) Enthalpy
+
+5) Which one is always exothermic?
+
+(a) $\Delta H°_f$
+
+(b) $\Delta H°_{at}$
+
+(c) $\Delta H°_c$
+
+(d) $\Delta H°_{sol}$
+
+6) When one mole of H₂ reacts with half a mole of O₂(g) to form one mole of gaseous water H₂O(g), the enthalpy change is:
+
+(a) -242.2 kJ
+
+(b) -285.8 kJ
+
+(c) -484 kJ
+
+(d) +285.8 kJ
+
+7) The heat required to raise the temperature of one gram of a substance through 1°C or 1 K is called:
+
+(a) Specific heat
+
+(b) Heat capacity
+
+(c) Molar specific heat
+
+(d) All
+
+8) Born-Haber cycle helps us to calculate the ________ of binary ionic compounds.
+
+(a) Bond energies
+
+(b) Hydration energies
+
+(c) Lattice energies
+
+(d) Formation enthalpies
+
+9) Enthalpy of combustion of graphite is -393.51 kJ mol⁻¹ while that of diamond is -395.41 kJ mol⁻¹. Can you guess which one is more stable?
+
+(a) Diamond
+
+(b) Graphite
+
+(c) Both 'a' & 'b'
+
+(d) None
+
+Maximum Marks: 16
+
+(SUBJECTIVE TYPE)
+
+Time Allowed: 30 Min.
+
+SECTION-I
+
+Q.2: Give brief answers to the following questions: (12)
+
+i. What is a thermochemical equation? Give three examples. What information does it convey?
+
+ii. Why is it necessary to mention the physical states of reactants and products in a thermochemical reaction?
+
+iii. What do you mean by enthalpy of atomization?
+
+iv. Define standard enthalpy of solution. Give examples.
+
+v. How do we determine ΔH in the laboratory for food, fuel, etc.?
+
+vi. Differentiate between specific heat and heat capacity.
+
+SECTION-II
+
+NOTE: Attempt All Questions: (04)
+
+Q.3: State and explain Hess's law of constant heat summation with an example.
\ No newline at end of file
diff --git a/samples/texts/7581440/page_1.md b/samples/texts/7581440/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..278bf2fb1cc3588a6be0e8649189730a20fe3654
--- /dev/null
+++ b/samples/texts/7581440/page_1.md
@@ -0,0 +1,31 @@
+# Strong Convergence of Self-adaptive Inertial Algorithms for Solving Split Variational Inclusion Problems with Applications
+
+Bing Tan¹ Ⓟ · Xiaolong Qin²,³ Ⓟ · Jen-Chih Yao⁴,⁵
+
+Received: 25 October 2020 / Revised: 15 January 2021 / Accepted: 8 February 2021
+© The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature 2021
+
+**Abstract**
+
+In this paper, four self-adaptive iterative algorithms with inertial effects are introduced to solve a split variational inclusion problem in real Hilbert spaces. One of the advantages of the suggested algorithms is that they can work without knowing the prior information of the operator norm. Strong convergence theorems of these algorithms are established under mild and standard assumptions. As applications, the split feasibility problem and the split minimization problem in real Hilbert spaces are studied. Finally, several preliminary numerical experiments as well as an example in the field of compressed sensing are proposed to support the advantages and efficiency of the suggested methods over some existing ones.
+
+**Keywords** Split variational inclusion problem · Signal processing problem · Strong convergence · Inertial method · Mann method · Viscosity method
+
+**Mathematics Subject Classification** 65J15 · 68W10 · 65K15 · 47J20 · 90C25
+
+Xiaolong Qin
+qxlxajh@163.com
+Bing Tan
+bingtan72@gmail.com
+Jen-Chih Yao
+yaojc@mail.cmu.edu.tw
+
+¹ Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China
+
+² Department of Mathematics, Hangzhou Normal University, Hangzhou, Zhejiang, China
+
+³ Department of Mathematics, Zhejiang Normal University, Jinhua, Zhejiang, China
+
+⁴ Research Center for Interneural Computing, China Medical University Hospital, China Medical University, Taichung 40447, Taiwan
+
+⁵ Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
\ No newline at end of file
diff --git a/samples/texts/7581440/page_10.md b/samples/texts/7581440/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..16dd48270f128181860e6acaed6d9dd084a9db34
--- /dev/null
+++ b/samples/texts/7581440/page_10.md
@@ -0,0 +1,60 @@
+By the definition of $u_n$, we can write
+
+$$
+\|u_n - p\| \le \|x_n - p\| + \sigma_n \cdot \frac{\vartheta_n}{\sigma_n} \|x_n - x_{n-1}\|. \quad (3.12)
+$$
+
+According to Remark 3.1 (i), one has $\frac{\vartheta_n}{\sigma_n} \|x_n - x_{n-1}\| \to 0$. Therefore, there exists a constant $M_1 > 0$ such that
+
+$$
+\frac{\vartheta_n}{\sigma_n} \|x_n - x_{n-1}\| \le M_1, \quad \forall n \ge 1. \tag{3.13}
+$$
+
+Combining (3.11), (3.12) and (3.13), we obtain
+
+$$
+\|g_n - p\| \le \|u_n - p\| \le \|x_n - p\| + \sigma_n M_1, \quad \forall n \ge 1. \quad (3.14)
+$$
+
+Thus,
+
+$$
+\begin{align*}
+\|x_{n+1} - p\| &\le \sigma_n \|f(x_n) - f(p)\| + \sigma_n \|f(p) - p\| + (1-\sigma_n)\|g_n - p\| \\
+&\le [1-\sigma_n(1-\rho)]\|x_n - p\| + \sigma_n(1-\rho) \frac{\|f(p)-p\| + M_1}{1-\rho} \\
+&\le \max \left\{ \|x_n - p\|, \frac{\|f(p)-p\| + M_1}{1-\rho} \right\} \\
+&\le \dots \le \max \left\{ \|x_0 - p\|, \frac{\|f(p)-p\| + M_1}{1-\rho} \right\}.
+\end{align*}
+$$
+
+This implies that sequence {$x_n$} is bounded. So, sequences {$f(x_n)$}, {$u_n$}, {$q_n$} and {$g_n$} are also bounded.
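
The estimates above analyze a viscosity-type update of the form $x_{n+1} = \sigma_n f(x_n) + (1-\sigma_n) g_n$ with a $\rho$-contraction $f$. A minimal sketch of this fixed-point scheme (our illustration; the map $T$ below is a generic nonexpansive stand-in, not the paper's projection-and-contraction step):

```python
import numpy as np

# Minimal sketch of the viscosity-type update
#     x_{n+1} = sigma_n f(x_n) + (1 - sigma_n) T(x_n),
# with f a rho-contraction and T nonexpansive. T and f are illustrative
# stand-ins, not the operators of Algorithm 3.1.

def viscosity_iterate(T, f, x0, iters=500):
    x = np.asarray(x0, dtype=float).copy()
    for n in range(1, iters + 1):
        sigma = 1.0 / (n + 1)  # sigma_n -> 0 with divergent sum, as in (C3)
        x = sigma * f(x) + (1.0 - sigma) * T(x)
    return x

# Example: T = projection onto the unit ball (nonexpansive, Fix(T) = ball),
# f(x) = x/2 (a 1/2-contraction). The iterates are driven toward the point
# p solving p = P_{Fix(T)}(f(p)), which is p = 0 here.
T = lambda v: v / max(1.0, float(np.linalg.norm(v)))
f = lambda v: 0.5 * v
x = viscosity_iterate(T, f, [3.0, -4.0])
```
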
+
+**Claim 2.**
+
+$$
+(1 - \sigma_n) \frac{2 - \kappa}{\kappa} \|u_n - g_n\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \sigma_n M_4
+$$
+
+for some $M_4 > 0$. Indeed, from (3.14), one sees that
+
+$$
+\begin{align}
+\|u_n - p\|^2 &\le (\|x_n - p\| + \sigma_n M_1)^2 \\
+&= \|x_n - p\|^2 + \sigma_n (2M_1 \|x_n - p\| + \sigma_n M_1^2) \tag{3.15} \\
+&\le \|x_n - p\|^2 + \sigma_n M_2
+\end{align}
+$$
+
+for some $M_2 > 0$. Combining Lemma 3.2 and (3.15), we get
+
+$$
+\begin{align*}
+\|x_{n+1} - p\|^2 &\le \sigma_n \|f(x_n) - p\|^2 + (1-\sigma_n)\|g_n - p\|^2 \\
+&\le \sigma_n (\|f(x_n) - f(p)\| + \|f(p) - p\|^2) + (1-\sigma_n)\|g_n - p\|^2 \\
+&\le \sigma_n (\|x_n - p\| + \|f(p) - p\|^2) + (1-\sigma_n)\|g_n - p\|^2 \\
+&= \sigma_n \|x_n - p\|^2 + (1-\sigma_n)\|g_n - p\|^2 \\
+&\quad + \sigma_n (2\|x_n - p\| \cdot \|f(p) - p\| + \|f(p) - p\|^2) \\
+&\le \sigma_n \|x_n - p\|^2 + (1-\sigma_n)\|g_n - p\|^2 + \sigma_n M_3 \\
+&\le \|x_n - p\|^2 - (1-\sigma_n) \frac{2-\kappa}{\kappa} \|u_n - g_n\|^2 + \sigma_n M_4,
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_11.md b/samples/texts/7581440/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..f8f4e720210530c65e577906dea9e10d7a3453f4
--- /dev/null
+++ b/samples/texts/7581440/page_11.md
@@ -0,0 +1,69 @@
+where $M_4 := M_2 + M_3$. The desired result follows by a simple rearrangement.
+
+**Claim 3.**
+
+$$
+\begin{aligned}
+\|x_{n+1} - p\|^2 & \le (1 - (1-\rho)\sigma_n)\|x_n - p\|^2 + (1-\rho)\sigma_n \cdot \left[ \frac{3M}{1-\rho} \cdot \frac{\vartheta_n}{\sigma_n} \|x_n - x_{n-1}\| \right. \\
+& \qquad \left. + \frac{2}{1-\rho} \langle f(p) - p, x_{n+1} - p \rangle \right]
+\end{aligned}
+$$
+
+for some $M > 0$. Indeed, it follows from the definition of $u_n$ that
+
+$$
+\begin{align}
+\|u_n - p\|^2 &\le \|x_n - p\|^2 + 2\vartheta_n \|x_n - p\| \|x_n - x_{n-1}\| + \vartheta_n^2 \|x_n - x_{n-1}\|^2 \tag{3.16} \\
+&\le \|x_n - p\|^2 + 3M\vartheta_n \|x_n - x_{n-1}\|,
+\end{align}
+$$
+
+where $M := \sup_{n \in \mathbb{N}} \{ \|x_n - p\|, \vartheta \|x_n - x_{n-1}\| \} > 0$. Using (3.14) and (3.16), we have
+
+$$
+\begin{align*}
+\|x_{n+1} - p\|^2 &= \| \sigma_n (f(x_n) - f(p)) + (1-\sigma_n)(g_n - p) + \sigma_n (f(p) - p) \|^2 \\
+&\leq \| \sigma_n (f(x_n) - f(p)) + (1-\sigma_n)(g_n - p) \|^2 + 2\sigma_n \langle f(p) - p, x_{n+1} - p \rangle \\
+&\leq \sigma_n \|f(x_n) - f(p)\|^2 + (1-\sigma_n)\|g_n - p\|^2 + 2\sigma_n \langle f(p) - p, x_{n+1} - p \rangle \\
+&\leq \sigma_n \rho \|x_n - p\|^2 + (1-\sigma_n)\|u_n - p\|^2 + 2\sigma_n \langle f(p) - p, x_{n+1} - p \rangle \\
+&\leq (1-(1-\rho)\sigma_n)\|x_n - p\|^2 + (1-\rho)\sigma_n \cdot \left[ \frac{3M}{1-\rho} \cdot \frac{\vartheta_n}{\sigma_n} \|x_n - x_{n-1}\| \right. \\
+& \qquad \left. + \frac{2}{1-\rho} \langle f(p) - p, x_{n+1} - p \rangle \right].
+\end{align*}
+$$
+
+**Claim 4.** $\{\|x_n - p\|^2\}$ converges to zero. Indeed, by Lemma 2.4, it suffices to show that $\limsup_{k \to \infty} \langle f(p) - p, x_{n_k+1} - p \rangle \le 0$ for every subsequence $\{\|x_{n_k} - p\|\}$ of $\{\|x_n - p\|\}$ satisfying $\liminf_{k \to \infty} (\|x_{n_k+1} - p\| - \|x_{n_k} - p\|) \ge 0$.
+
+For this purpose, we assume that $\{\|x_{n_k} - p\|\}$ is a subsequence of $\{\|x_n - p\|\}$ such that $\liminf_{k \to \infty} (\|x_{n_k+1} - p\| - \|x_{n_k} - p\|) \ge 0$. Then,
+
+$$
+\begin{align*}
+& \liminf_{k \to \infty} (\|x_{n_k+1} - p\|^2 - \|x_{n_k} - p\|^2) \\
+&= \liminf_{k \to \infty} [(\|x_{n_k+1} - p\| - \|x_{n_k} - p\|)(\|x_{n_k+1} - p\| + \|x_{n_k} - p\|)] \ge 0.
+\end{align*}
+$$
+
+From Claim 2, one sees that
+
+$$
+\begin{aligned}
+\limsup_{k \to \infty} \left[ (1 - \sigma_{n_k}) \frac{2-\kappa}{\kappa} \|u_{n_k} - g_{n_k}\|^2 \right] &\leq \limsup_{k \to \infty} \left[ \|x_{n_k} - p\|^2 - \|x_{n_k+1} - p\|^2 + \sigma_{n_k} M_4 \right] \\
+&\leq 0,
+\end{aligned}
+$$
+
+which implies that
+
+$$
+\lim_{k \to \infty} \|g_{n_k} - u_{n_k}\| = 0. \tag{3.17}
+$$
+
+This together with Lemma 3.2 yields that $\limsup_{k \to \infty} \|q_{n_k} - u_{n_k}\| = 0$. Moreover, using Remark 3.1 (i) and Condition (C3), we have
+
+$$
+\|x_{n_k+1} - g_{n_k}\| = \sigma_{n_k} \|g_{n_k} - f(x_{n_k})\| \to 0, \quad (3.18)
+$$
+
+and
+
+$$
+\|x_{n_k} - u_{n_k}\| = \sigma_{n_k} \cdot \frac{\vartheta_{n_k}}{\sigma_{n_k}} \|x_{n_k} - x_{n_k-1}\| \to 0. \quad (3.19)
+$$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_12.md b/samples/texts/7581440/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..ad81b584f480542f58868210807b6cd07b0a4ae6
--- /dev/null
+++ b/samples/texts/7581440/page_12.md
@@ -0,0 +1,63 @@
+From (3.17), (3.18) and (3.19), we conclude that
+
+$$
+\|x_{n_k+1} - x_{n_k}\| \le \|x_{n_k+1} - g_{n_k}\| + \|g_{n_k} - u_{n_k}\| + \|u_{n_k} - x_{n_k}\| \to 0. \quad (3.20)
+$$
+
+Since {$x_{n_k}$} is bounded, there exists a subsequence {$x_{n_{k_j}}$} of {$x_{n_k}$} such that $x_{n_{k_j}} \to z$. Furthermore,
+
+$$
+\limsup_{k \to \infty} \langle f(p) - p, x_{n_k} - p \rangle = \lim_{j \to \infty} \langle f(p) - p, x_{n_{k_j}} - p \rangle = \langle f(p) - p, z - p \rangle \quad (3.21)
+$$
+
+We get $u_{n_k} \to z$ since $\|x_{n_k} - u_{n_k}\| \to 0$. This together with $\lim_{k \to \infty} \|u_{n_k} - q_{n_k}\| = 0$ and Lemma 3.3 implies that $z \in \Omega$. From the definition of $p$ and (3.21), we get
+
+$$
+\limsup_{k \to \infty} \langle f(p) - p, x_{n_k} - p \rangle = \langle f(p) - p, z - p \rangle \le 0. \quad (3.22)
+$$
+
+Combining (3.20) and (3.22), we obtain
+
+$$
+\limsup_{k \to \infty} \langle f(p) - p, x_{n_k+1} - p \rangle \leq \limsup_{k \to \infty} \langle f(p) - p, x_{n_k} - p \rangle \leq 0. \quad (3.23)
+$$
+
+Thus, from Remark 3.1 (i), (3.23), Claim 3 and Lemma 2.4, we conclude that $x_n \to p$. That is the desired result.
+$\square$
+
+**3.2 The Algorithm 3.2**
+
+In this subsection, we propose an inertial Mann-type projection and contraction algorithm to solve (SVIP). Before proposing our iterative scheme, we first assume that the algorithm satisfies conditions (C1)–(C3) and (C5).
+
+(C5) Assume that the real sequence {$\tau_n$} $\subset (0, 1)$ is such that {$\tau_n$} $\subset (a, b) \subset (0, 1 - \sigma_n)$ for some $a > 0$, $b > 0$.
+
+The Algorithm 3.2 is of the form:
+
+**Algorithm 3.2** The inertial Mann-type projection and contraction algorithm for (SVIP)
+
+**Initialization:** Set $\lambda > 0$, $\vartheta > 0$, $\zeta > 0$, $\chi \in (0, 1)$, $\delta \in (0, 1)$, $\kappa \in (0, 2)$ and let $x_0, x_1 \in \mathcal{H}$.
+
+**Iterative Steps:** Calculate the next iteration point $x_{n+1}$ as follows:
+
+$$
+\begin{equation}
+\begin{cases}
+u_n = x_n + \vartheta_n (x_n - x_{n-1}), \\
+q_n = J_{\lambda F_1}[u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n], \\
+g_n = u_n - \kappa \mu_n c_n, \\
+x_{n+1} = (1 - \sigma_n - \tau_n)u_n + \tau_n g_n,
+\end{cases}
+\end{equation}
+$$
+
+where {$\vartheta_n$}, {$\gamma_n$} and {$c_n$} are defined in (3.1), (3.2) and (3.3), respectively.
+
+**Theorem 3.2** Suppose that Conditions (C1)–(C3) and (C5) hold. Then the sequence {$x_n$} created by Algorithm 3.2 converges to $p \in \Omega$ in norm, where $\|p\| = \min\{\|z\| : z \in \Omega\}$.
+
+*Proof* We divide this proof into four steps.
+
+Claim 1. The sequence {$x_n$} is bounded. Indeed, thanks to Lemma 3.2, we have
+
+$$
+\|g_n - p\| \le \|u_n - p\|. \tag{3.24}
+$$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_13.md b/samples/texts/7581440/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..f1c86eefd1ad597a14f85d582f0f9a6e19e0444d
--- /dev/null
+++ b/samples/texts/7581440/page_13.md
@@ -0,0 +1,78 @@
+By the definition of $x_{n+1}$, one has
+
+$$
+\begin{align}
+\|x_{n+1} - p\| &= \| (1 - \sigma_n - \tau_n)(u_n - p) + \tau_n(g_n - p) - \sigma_n p \| \tag{3.25} \\
+&\leq \| (1 - \sigma_n - \tau_n)(u_n - p) + \tau_n(g_n - p) \| + \sigma_n \|p\|. \nonumber
+\end{align}
+$$
+
+It follows from (3.24) that
+
+$$
+\begin{align*}
+& \| (1 - \sigma_n - \tau_n)(u_n - p) + \tau_n(g_n - p) \|^2 \\
+&\le (1 - \sigma_n - \tau_n)^2 \|u_n - p\|^2 + 2(1 - \sigma_n - \tau_n)\tau_n \|g_n - p\| \|u_n - p\| + \tau_n^2 \|g_n - p\|^2 \\
+&\le (1 - \sigma_n - \tau_n)^2 \|u_n - p\|^2 + 2(1 - \sigma_n - \tau_n)\tau_n \|u_n - p\|^2 + \tau_n^2 \|u_n - p\|^2 \\
+&= (1 - \sigma_n)^2 \|u_n - p\|^2,
+\end{align*}
+$$
+
+which yields
+
+$$
+\Vert (1 - \sigma_n - \tau_n)(u_n - p) + \tau_n(g_n - p) \Vert \le (1 - \sigma_n) \Vert u_n - p \Vert. \quad (3.26)
+$$
+
+Combining (3.14), (3.25) and (3.26), we deduce that
+
+$$
+\begin{align*}
+\|x_{n+1} - p\| &\le (1-\sigma_n)\|u_n-p\| + \sigma_n\|p\| \\
+&\le (1-\sigma_n)\|x_n-p\| + \sigma_n(\|p\| + M_1) \\
+&\le \max\{\|x_n-p\|, \|p\| + M_1\} \\
+&\le \dots \le \max\{\|x_0-p\|, \|p\| + M_1\}.
+\end{align*}
+$$
+
+That is, {$x_n$} is bounded. So, the sequences {$g_n$} and {$u_n$} are also bounded.
+
+**Claim 2.**
+
+$$
+\tau_n \frac{2-\kappa}{\kappa} \|u_n-g_n\|^2 \le \|x_n-p\|^2 - \|x_{n+1}-p\|^2 + \sigma_n (\|p\|^2 + M_2).
+$$
+
+Indeed, using Lemma 3.2 and (3.15), we obtain
+
+$$
+\begin{align*}
+\|x_{n+1} - p\|^2 &\le (1 - \sigma_n - \tau_n)\|u_n - p\|^2 + \tau_n \|g_n - p\|^2 + \sigma_n \|p\|^2 \\
+&\le (1 - \sigma_n - \tau_n)\|u_n - p\|^2 + \tau_n \|u_n - p\|^2 - \tau_n \frac{2-\kappa}{\kappa} \|u_n - g_n\|^2 + \sigma_n \|p\|^2 \\
+&\le \|x_n - p\|^2 - \tau_n \frac{2-\kappa}{\kappa} \|u_n - g_n\|^2 + \sigma_n (\|p\|^2 + M_2).
+\end{align*}
+$$
+
+The desired result follows by a simple rearrangement.
+
+**Claim 3.**
+
+$$
+\|x_{n+1} - p\|^2 \le (1 - \sigma_n)\|x_n - p\|^2 + \sigma_n \left[ 2\tau_n \|u_n - g_n\| \|x_{n+1} - p\| + 2\langle p, p - x_{n+1} \rangle + \frac{3M\vartheta_n}{\sigma_n} \|x_n - x_{n-1}\| \right].
+$$
+
+Setting $t_n = (1 - \tau_n)u_n + \tau_n g_n$, one has
+
+$$
+\| t_n - u_n \| = \tau_n \| u_n - g_n \| . \tag{3.27}
+$$
+
+It follows from (3.24) that
+
+$$
+\begin{align*}
+\|t_n - p\| &= \| (1 - \tau_n)(u_n - p) + \tau_n(g_n - p) \| \\
+&\le (1 - \tau_n)\|u_n - p\| + \tau_n \|u_n - p\| \\
+&= \|u_n - p\|.
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_14.md b/samples/texts/7581440/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..4192a5eaca8a9506b82181a2d5bb644a5c25a8d8
--- /dev/null
+++ b/samples/texts/7581440/page_14.md
@@ -0,0 +1,70 @@
+From (3.16), (3.27) and (3.28), we have
+
+$$
+\begin{align*}
+\|x_{n+1} - p\|^2 &= \| (1-\sigma_n)(t_n-p) - \sigma_n(u_n-t_n) - \sigma_n p \|^2 \\
+&\leq (1-\sigma_n)^2 \|t_n-p\|^2 - 2\sigma_n \langle u_n - t_n + p, x_{n+1} - p \rangle \\
+&\leq (1-\sigma_n)\|t_n-p\|^2 + 2\sigma_n \|u_n-t_n\| \|x_{n+1}-p\| + 2\sigma_n \langle p, p-x_{n+1} \rangle \\
+&\leq (1-\sigma_n)\|x_n-p\|^2 + \sigma_n \left[ 2\tau_n \|u_n-g_n\| \|x_{n+1}-p\| \right. \\
+&\qquad \left. + 2\langle p, p-x_{n+1} \rangle + \frac{3M\vartheta_n}{\sigma_n} \|x_n-x_{n-1}\| \right].
+\end{align*}
+$$
+
+**Claim 4.** The sequence $\{\|x_n - p\|^2\}$ converges to zero. We assume that $\{\|x_{n_k} - p\|\}$ is a subsequence of $\{\|x_n - p\|\}$ such that $\liminf_{k \to \infty} (\|x_{n_k+1} - p\| - \|x_{n_k} - p\|) \ge 0$. By Claim 2 and Condition (C5), we have
+
+$$
+\begin{align*}
+\limsup_{k \to \infty} \tau_{n_k} \frac{2-\kappa}{\kappa} \|u_{n_k} - g_{n_k}\|^2 &\le \limsup_{k \to \infty} [\|x_{n_k} - p\|^2 - \|x_{n_k+1} - p\|^2] + \limsup_{k \to \infty} \sigma_{n_k} (\|p\|^2 + M_2) \\
+&\le 0,
+\end{align*}
+$$
+
+which indicates that
+
+$$
+\lim_{k \to \infty} \|g_{n_k} - u_{n_k}\| = 0. \tag{3.29}
+$$
+
+In view of Lemma 3.2, one observes that $\lim_{k \to \infty} \|q_{n_k} - u_{n_k}\| = 0$. From (3.29) and the boundedness of $\{x_n\}$, we can further obtain
+
+$$
+\lim_{k \to \infty} \tau_{n_k} \|u_{n_k} - g_{n_k}\| \|x_{n_k+1} - p\| = 0. \quad (3.30)
+$$
+
+Moreover, using (3.29), Condition (C5) and Remark 3.1 (i), we have
+
+$$
+\|x_{n_k+1} - u_{n_k}\| \le \sigma_{n_k} \|u_{n_k}\| + \tau_{n_k} \|g_{n_k} - u_{n_k}\| \to 0,
+$$
+
+and
+
+$$
+\|x_{n_k} - u_{n_k}\| = \sigma_{n_k} \cdot \frac{\vartheta_{n_k}}{\sigma_{n_k}} \|x_{n_k} - x_{n_k-1}\| \to 0.
+$$
+
+Thus, we conclude that
+
+$$
+\|x_{n_k+1} - x_{n_k}\| \le \|x_{n_k+1} - u_{n_k}\| + \|u_{n_k} - x_{n_k}\| \to 0. \quad (3.31)
+$$
+
+Since {$x_{n_k}$} is bounded, there exists a subsequence {$x_{n_{kj}}$} of {$x_{n_k}$} such that $x_{n_{kj}} \to z$. Moreover,
+
+$$
+\limsup_{k \to \infty} \langle p, p - x_{n_k} \rangle = \lim_{j \to \infty} \langle p, p - x_{n_{kj}} \rangle = \langle p, p-z \rangle. \quad (3.32)
+$$
+
+Since $\|x_{n_k} - u_{n_k}\| \to 0$, one has $u_{n_k} \to z$, which, together with $\limsup_{k \to \infty} \|u_{n_k} - q_{n_k}\| = 0$ and Lemma 3.3, gets that $z \in \Omega$. From the definition of $p$ and (3.32), we obtain
+
+$$
+\limsup_{k \to \infty} \langle p, p - x_{n_k} \rangle = \langle p, p-z \rangle \le 0. \quad (3.33)
+$$
+
+Combining (3.31) and (3.33), we have
+
+$$
+\limsup_{k \to \infty} \langle p, p - x_{n_k+1} \rangle \le \limsup_{k \to \infty} \langle p, p - x_{n_k} \rangle \le 0. \quad (3.34)
+$$
+
+Thus, from Remark 3.1 (i), (3.30), (3.34), Claim 3 and Lemma 2.4, we conclude that $x_n \to p$. The proof is completed. □
\ No newline at end of file
diff --git a/samples/texts/7581440/page_15.md b/samples/texts/7581440/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..f4ae6c9a0c0dc8a069c23526795f1712869e8b8f
--- /dev/null
+++ b/samples/texts/7581440/page_15.md
@@ -0,0 +1,62 @@
+**3.3 The Algorithm 3.3**
+
+In this subsection, an inertial Mann-type algorithm for solving (SVIP) will be given. It is worth noting that this method uses a new step size update criterion that does not require any line search process. More precisely, the approach is described as follows:
+
+**Algorithm 3.3** The self-adaptive inertial Mann-type algorithm for (SVIP)
+
+**Initialization:** Set $\lambda > 0$, $\vartheta > 0$, $\varphi_n \in (0, 2)$ and let $x_0, x_1 \in \mathcal{H}$.
+
+**Iterative Steps:** Calculate the next iteration point $x_{n+1}$ as follows:
+
+$$
+\begin{cases}
+u_n = x_n + \vartheta_n(x_n - x_{n-1}), \\
+g_n = J_{\lambda F_1}[u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n], \\
+x_{n+1} = (1 - \sigma_n - \tau_n)u_n + \tau_n g_n,
+\end{cases}
+$$
+
+where {$\vartheta_n$} is defined in (3.1) and the stepsize $\gamma_n$ is updated by the following:
+
+$$
+\gamma_n = \begin{cases} \dfrac{\varphi_n \| (I - J_{\lambda F_2}) Au_n \|^2}{\| A^* (I - J_{\lambda F_2}) Au_n \|^2}, & \text{if } \| A^* (I - J_{\lambda F_2}) Au_n \| \neq 0; \\ 0, & \text{otherwise.} \end{cases} \quad (3.35)
+$$
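
A minimal sketch of this scheme (our illustration, not the authors' code), assuming both resolvents $J_{\lambda F_1}$ and $J_{\lambda F_2}$ are the soft-thresholding operator $\mathrm{prox}_{\lambda\|\cdot\|_1}$ (as in the compressed-sensing application), with illustrative parameter sequences chosen to satisfy (C1)–(C3) and (C5):

```python
import numpy as np

# Sketch of Algorithm 3.3 with J_{lambda F_1} = J_{lambda F_2} = soft
# thresholding (F_1, F_2 subdifferentials of lambda*||.||_1). The sequences
# sigma_n, tau_n, theta_n below are illustrative choices, not the paper's.

def soft(v, lam):
    """prox of lam*||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def algorithm_3_3(A, x0, x1, lam=0.5, theta=0.4, phi=1.0, iters=200):
    x_prev, x = np.asarray(x0, float).copy(), np.asarray(x1, float).copy()
    for n in range(1, iters + 1):
        sigma = 1.0 / (n + 1)               # sigma_n -> 0, divergent sum
        tau = 0.5 * (1.0 - sigma)           # tau_n in (0, 1 - sigma_n), (C5)
        diff = np.linalg.norm(x - x_prev)
        theta_n = min(theta, 1.0 / (n * n * (diff + 1e-12)))  # inertial weight
        u = x + theta_n * (x - x_prev)
        Au = A @ u
        r = Au - soft(Au, lam)              # (I - J_{lambda F_2}) A u_n
        Ar = A.T @ r                        # A* applied to the residual
        denom = float(np.dot(Ar, Ar))
        gamma = phi * float(np.dot(r, r)) / denom if denom != 0.0 else 0.0  # (3.35)
        g = soft(u - gamma * Ar, lam)       # J_{lambda F_1}[u_n - gamma_n A* r]
        x_prev, x = x, (1.0 - sigma - tau) * u + tau * g
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 40))
x_final = algorithm_3_3(A, rng.standard_normal(40), rng.standard_normal(40))
```

With these resolvents, the solution set $\Omega$ reduces to the origin, so the iterates shrink toward the minimum-norm solution, consistent with Theorem 3.3.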
+
+The following two lemmas are very important for the convergence analysis of the algorithms.
+
+**Lemma 3.4** *The sequence {γn} formed by (3.35) is bounded.*
+
+*Proof* Indeed, if $\|A^*(I - J_{\lambda F_2})Au_n\| \neq 0$, then
+
+$$
+\inf \left\{ \frac{2 \| (I - J_{\lambda F_2}) Au_n \|^2}{\| A^* (I - J_{\lambda F_2}) Au_n \|^2} - \gamma_n \right\} > 0.
+$$
+
+On the other hand, from the fact that A is bounded and linear, we can show that
+
+$$
+\gamma_n = \frac{\varphi_n \| (I - J_{\lambda F_2}) Au_n \|^2}{\| A^* (I - J_{\lambda F_2}) Au_n \|^2} \geq \frac{\varphi_n \| (I - J_{\lambda F_2}) Au_n \|^2}{\| A \|^2 \| (I - J_{\lambda F_2}) Au_n \|^2} = \frac{\varphi_n}{\| A \|^2}.
+$$
+
+Therefore, $\sup_n \gamma_n < \infty$ and thus {$\gamma_n$} is bounded. $\square$
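
The lower bound used above is easy to verify numerically (our illustration): writing $r = (I - J_{\lambda F_2})Au_n$, the stepsize $\gamma_n = \varphi_n \|r\|^2 / \|A^* r\|^2$ satisfies $\gamma_n \ge \varphi_n / \|A\|^2$ because $\|A^* r\| \le \|A\| \|r\|$. The random vectors below merely stand in for the residual $r$.

```python
import numpy as np

# Numeric check of the bound behind Lemma 3.4: for any nonzero r,
# gamma = phi * ||r||^2 / ||A^T r||^2 >= phi / ||A||^2,
# since ||A^T r|| <= ||A|| * ||r|| (spectral norm).
rng = np.random.default_rng(3)
A = rng.standard_normal((15, 30))
op_norm = np.linalg.norm(A, 2)       # spectral norm ||A||
phi = 1.5
gammas = []
for _ in range(100):
    r = rng.standard_normal(15)      # stand-in for (I - J_{lambda F_2})Au_n
    Ar = A.T @ r                     # A* r (A* = A^T for a real matrix)
    gammas.append(phi * float(np.dot(r, r)) / float(np.dot(Ar, Ar)))
lower_bound = phi / op_norm ** 2     # every gamma above is at least this
```
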
+
+**Lemma 3.5** Suppose that Conditions (C1)–(C3) hold. Let the sequences {$u_n$} and {$g_n$} be made by Algorithm 3.3. Then
+
+$$
+\|g_n - p\|^2 \le \|u_n - p\|^2 - \gamma_n(2 - \varphi_n)\|(I - J_{\lambda F_2})Au_n\|^2.
+$$
+
+*Proof* From Lemma 2.1 (1), Lemma 2.2 (3) and the definition of $\gamma_n$, we have
+
+$$
+\begin{align*}
+\|g_n - p\|^2 &\le \|u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n - p\|^2 \\
+&= \|u_n - p\|^2 + \|\gamma_n A^*(I - J_{\lambda F_2})Au_n\|^2 - 2\gamma_n \langle u_n - p, A^*(I - J_{\lambda F_2})Au_n \rangle \\
+&\le \|u_n - p\|^2 + \gamma_n^2 \|A^*(I - J_{\lambda F_2})Au_n\|^2 - 2\gamma_n \|(I - J_{\lambda F_2})Au_n\|^2 \\
+&= \|u_n - p\|^2 - \gamma_n \bigl(2 \|(I - J_{\lambda F_2})Au_n\|^2 - \gamma_n \|A^*(I - J_{\lambda F_2})Au_n\|^2\bigr) \\
+&= \|u_n - p\|^2 - \gamma_n (2 - \varphi_n) \|(I - J_{\lambda F_2})Au_n\|^2.
+\end{align*}
+$$
+
+The proof of the lemma is now complete. □
\ No newline at end of file
diff --git a/samples/texts/7581440/page_16.md b/samples/texts/7581440/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..93c142000b67b419b1f491d4b6264d01a975c196
--- /dev/null
+++ b/samples/texts/7581440/page_16.md
@@ -0,0 +1,68 @@
+**Theorem 3.3** Suppose that Conditions (C1)–(C3) and (C5) hold. Then the sequence {$x_n$} formed by Algorithm 3.3 converges to $p \in \Omega$ in norm, where $\|p\| = \min\{\|z\| : z \in \Omega\}$.
+
+*Proof* This proof is divided into four claims.
+
+**Claim 1.** The sequence {$x_n$} is bounded. Indeed, from Lemma 3.5 and $\varphi_n \in (0, 2)$, we have
+
+$$
+\|g_n - p\| \le \|u_n - p\|, \quad \forall n \ge 1. \tag{3.36}
+$$
+
+From (3.14), (3.25), (3.26) and (3.36), we get
+
+$$
+\begin{align*}
+\|x_{n+1} - p\| &\le (1 - \sigma_n)\|u_n - p\| + \sigma_n \|p\| \\
+&\le (1 - \sigma_n)\|x_n - p\| + \sigma_n (\|p\| + M_1) \\
+&\le \max \{\|x_n - p\|, \|p\| + M_1\} \\
+&\le \dots \le \max \{\|x_0 - p\|, \|p\| + M_1\},
+\end{align*}
+$$
+
+where $M_1$ is defined in Claim 1 of Theorem 3.1. Thus, {$x_n$} is bounded. Consequently, {$u_n$} and {$g_n$} are also bounded.
+
+**Claim 2.**
+
+$$
+\begin{align*}
+& \tau_n \gamma_n (2 - \varphi_n) \| (I - J_{\lambda F_2}) A u_n \|^2 + \tau_n (1 - \sigma_n - \tau_n) \| u_n - g_n \|^2 \\
+& \le \| x_n - p \|^2 - \| x_{n+1} - p \|^2 + \sigma_n (\| p \|^2 + M_2) .
+\end{align*}
+$$
+
+Indeed, from Lemma 3.5 and (3.15), we get
+
+$$
+\begin{align*}
+\|x_{n+1} - p\|^2 &= \| (1 - \sigma_n - \tau_n)(u_n - p) + \tau_n(g_n - p) + \sigma_n(-p) \|^2 \\
+&\leq (1 - \sigma_n - \tau_n)\|u_n - p\|^2 + \tau_n\|g_n - p\|^2 + \sigma_n\|p\|^2 \\
+&\quad - \tau_n(1 - \sigma_n - \tau_n)\|u_n - g_n\|^2 \\
+&\leq \|x_n - p\|^2 - \tau_n\gamma_n(2 - \varphi_n)\|(I - J_{\lambda F_2})Au_n\|^2 \\
+&\quad + \sigma_n(\|p\|^2 + M_2) - \tau_n(1 - \sigma_n - \tau_n)\|u_n - g_n\|^2,
+\end{align*}
+$$
+
+where $M_2$ is defined in Claim 2 of Theorem 3.1. The desired result follows by a simple rearrangement.
+
+**Claim 3.**
+
+$$
+\|x_{n+1} - p\|^2 \le (1 - \sigma_n)\|x_n - p\|^2 + \sigma_n \Biggl[ 2\tau_n \|u_n - g_n\| \|x_{n+1} - p\| \\
+\phantom{\|x_{n+1} - p\|^2 \le (1 - \sigma_n)\|x_n - p\|^2 + } + 2\langle p, p - x_{n+1} \rangle + \frac{3M\vartheta_n}{\sigma_n} \|x_n - x_{n-1}\| \Biggr].
+$$
+
+This result can be obtained by using the same arguments as in Claim 3 of Theorem 3.2.
+
+**Claim 4.** The sequence $\{\|x_n - p\|^2\}$ converges to zero. We assume that $\{\|x_{n_k} - p\|\}$ is a subsequence of $\{\|x_n - p\|\}$ such that $\liminf_{k \to \infty} (\|x_{n_k+1} - p\| - \|x_{n_k} - p\|) \ge 0$. By Claim 2 and Conditions (C3) and (C5), we have
+
+$$
+\begin{align*}
+& \limsup_{k \to \infty} \bigl[ \tau_{n_k} \gamma_{n_k} (2 - \varphi_{n_k}) \| (I - J_{\lambda F_2}) A u_{n_k} \|^2 + \tau_{n_k} (1 - \sigma_{n_k} - \tau_{n_k}) \| u_{n_k} - g_{n_k} \|^2 \bigr] \\
+& \leq \limsup_{k \to \infty} [\| x_{n_k} - p \|^{2} - \| x_{n_k+1} - p \|^{2}] + \limsup_{k \to \infty} \sigma_{n_k} (\| p \|^{2} + M_2) \leq 0,
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_17.md b/samples/texts/7581440/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b8b181bc2518e5c0a00976f7bffbfc36cf91a02
--- /dev/null
+++ b/samples/texts/7581440/page_17.md
@@ -0,0 +1,54 @@
+which implies that $\lim_{k \to \infty} \| (I - J_{\lambda F_2}) A u_{n_k} \| = 0$ and $\lim_{k \to \infty} \| g_{n_k} - u_{n_k} \| = 0$. From (3.30)–(3.34), we can show that
+
+$$ \lim_{k \to \infty} \tau_{n_k} \|u_{n_k} - g_{n_k}\| \|x_{n_k+1} - p\| = 0, $$
+
+and
+
+$$ \limsup_{k \to \infty} \langle p, p - x_{n_k+1} \rangle \le 0. $$
+
+Combining these with Remark 3.1 (i), Claim 3 and Lemma 2.4, we deduce that $x_n \to p$. This completes the proof. □
+
+## 3.4 The Algorithm 3.4
+
+Finally, we introduce a modified version of Algorithm 3.3, which uses the viscosity-type method to ensure the strong convergence of the suggested iterative scheme. The method is stated as follows:
+
+**Algorithm 3.4** The self-adaptive inertial viscosity-type algorithm for (SVIP)
+
+**Initialization:** Set $\lambda > 0$, $\vartheta > 0$, $\varphi_n \in (0, 2)$ and let $x_0, x_1 \in \mathcal{H}$.
+
+**Iterative Steps:** Calculate the next iteration point $x_{n+1}$ as follows:
+
+$$
+\left\{
+\begin{aligned}
+u_n &= x_n + \vartheta_n (x_n - x_{n-1}), \\
+g_n &= J_{\lambda F_1} [u_n - \gamma_n A^* (I - J_{\lambda F_2}) A u_n], \\
+x_{n+1} &= \sigma_n f(x_n) + (1-\sigma_n) g_n,
+\end{aligned}
+\right.
+$$
+
+where $\{\vartheta_n\}$ and $\{\gamma_n\}$ are defined in (3.1) and (3.35), respectively.
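A toy run of the Algorithm 3.4 scheme can be sketched as follows. This is our own illustration, not the paper's experiment: we take $J_{\lambda F_1} = P_C$ and $J_{\lambda F_2} = P_Q$ (projections are resolvents of normal-cone operators, so this is a legitimate special case), and a simple decaying inertial parameter stands in for the rule (3.1), which is not reproduced here.

```python
import numpy as np

# H1 = H2 = R^2, C = nonnegative orthant, Q = {y : y <= 1} componentwise.
P_C = lambda z: np.maximum(z, 0.0)    # resolvent of the normal cone of C
P_Q = lambda z: np.minimum(z, 1.0)    # resolvent of the normal cone of Q
A = np.eye(2)                         # bounded linear operator (identity here)
f = lambda z: 0.5 * z                 # contraction used in the viscosity step
phi = 1.5                             # varphi_n in (0, 2), held constant

x_prev = x = np.array([2.0, 2.0])
for n in range(1, 20001):
    theta_n = 1.0 / (n + 1) ** 2      # stand-in inertial parameter
    sigma_n = 1.0 / (n + 1)
    u = x + theta_n * (x - x_prev)
    r = A @ u - P_Q(A @ u)            # (I - J_{lambda F2}) A u_n
    d = A.T @ r                       # A^* (I - J_{lambda F2}) A u_n
    nd = np.linalg.norm(d)
    gamma_n = phi * np.linalg.norm(r) ** 2 / nd ** 2 if nd != 0.0 else 0.0
    g = P_C(u - gamma_n * d)
    x_prev, x = x, sigma_n * f(x) + (1.0 - sigma_n) * g
# the iterates drift toward p = 0, which satisfies p = P_Omega(f(p)) here
```

Here $\Omega = [0,1]^2$ and the viscosity fixed point $p = P_\Omega \circ f(p)$ is the origin, so the iterates slowly shrink toward $0$.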
+
+Based on the proofs of Theorems 3.1 and 3.3, we will give the convergence analysis of Algorithm 3.4 in a compact way.
+
+**Theorem 3.4** Suppose that Conditions (C1)–(C3) and (C5) hold. Then the sequence $\{x_n\}$ created by Algorithm 3.4 converges to $p \in \Omega$ in norm, where $p = P_\Omega \circ f(p)$.
+
+*Proof* **Claim 1.** The sequence $\{x_n\}$ is bounded. Indeed, using (3.12)–(3.14) and (3.36), we have
+
+$$
+\begin{aligned}
+\|x_{n+1} - p\| &= \| \sigma_n (f(x_n) - p) + (1-\sigma_n)(g_n - p) \| \\
+&\le \max \left\{ \|x_0 - p\|, \frac{\|f(p) - p\| + M_1}{1-\rho} \right\}.
+\end{aligned}
+$$
+
+This means that $\{x_n\}$ is bounded. Hence, $\{f(x_n)\}$, $\{u_n\}$ and $\{g_n\}$ are also bounded.
+
+**Claim 2.**
+
+$$ (1 - \sigma_n)\gamma_n(2 - \varphi_n)\|(I - J_{\lambda F_2})Au_n\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \sigma_n M_4, $$
+
+and
+
+$$ (1 - \sigma_n)\|g_n - u_n\|^2 \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \sigma_n M_4 \\ + 2(1 - \sigma_n)\gamma_n \|g_n - p\| \|A^*(I - J_{\lambda F_2})Au_n\|. $$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_18.md b/samples/texts/7581440/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..f734327a5506f734f5eaf3466f38cf5cd5e9d7e7
--- /dev/null
+++ b/samples/texts/7581440/page_18.md
@@ -0,0 +1,54 @@
+Indeed, using (3.15) and Lemma 3.5, we get
+
+$$
+\begin{align*}
+\|x_{n+1} - p\|^2 & \leq \sigma_n \|f(x_n) - p\|^2 + (1 - \sigma_n)\|g_n - p\|^2 \\
+& \leq \sigma_n (\|f(x_n) - f(p)\| + \|f(p) - p\|)^2 + (1 - \sigma_n)\|g_n - p\|^2 \\
+& \leq \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)\|g_n - p\|^2 \\
+& \quad + \sigma_n(2\|x_n - p\| \cdot \|f(p) - p\| + \|f(p) - p\|^2) \\
+& \leq \sigma_n \|x_n - p\|^2 + (1 - \sigma_n)\|g_n - p\|^2 + \sigma_n M_3 \\
+& \leq \|x_n - p\|^2 - (1 - \sigma_n)\gamma_n(2 - \varphi_n)\|(I - J_{\lambda F_2})Au_n\|^2 + \sigma_n M_4,
+\end{align*}
+$$
+
+where $M_4 := M_2 + M_3$. The first desired result follows by a simple rearrangement.
+On the other hand, from the fact that $J_{\lambda F_1}$ is firmly nonexpansive, we get
+
+$$
+\begin{align*}
+2 \|g_n - p\|^2 &= 2 \|J_{\lambda F_1}(u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n) - J_{\lambda F_1}(p)\|^2 \\
+&\le 2 \langle g_n - p, u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n - p \rangle \\
+&= \|g_n - p\|^2 + \|u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n - p\|^2 \\
+&\quad - \|g_n - p - (u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n - p)\|^2 \\
+&= \|g_n - p\|^2 + \|u_n - p\|^2 + \gamma_n^2 \|A^*(I - J_{\lambda F_2})Au_n\|^2 \\
+&\quad - 2 \langle u_n - p, \gamma_n A^*(I - J_{\lambda F_2})Au_n \rangle - \|g_n - u_n\|^2 \\
+&\quad - \gamma_n^2 \|A^*(I - J_{\lambda F_2})Au_n\|^2 - 2 \langle g_n - u_n, \gamma_n A^*(I - J_{\lambda F_2})Au_n \rangle \\
+&= \|g_n - p\|^2 + \|u_n - p\|^2 - \|g_n - u_n\|^2 \\
+&\quad + 2 \langle g_n - p, \gamma_n A^*(J_{\lambda F_2} - I)Au_n \rangle,
+\end{align*}
+$$
+
+which implies that
+
+$$
+\|g_n - p\|^2 \leq \|u_n - p\|^2 - \|g_n - u_n\|^2 + 2\gamma_n \|g_n - p\| \|A^*(I - J_{\lambda F_2})Au_n\|.
+$$
+
+This together with (3.15) and (3.37) yields
+
+$$
+\begin{align*}
+\|x_{n+1} - p\|^2 & \leq \sigma_n \|x_n - p\|^2 + (1-\sigma_n)\|g_n - p\|^2 + \sigma_n M_3 \\
+& \leq \sigma_n \|x_n - p\|^2 + (1-\sigma_n)\|u_n - p\|^2 + \sigma_n M_3 \\
+& \quad - (1-\sigma_n)(\|g_n-u_n\|^2 - 2\gamma_n \|g_n-p\|\|A^*(I-J_{\lambda F_2})Au_n\|) \\
+& \leq \|x_n-p\|^2 - (1-\sigma_n)(\|g_n-u_n\|^2 - 2\gamma_n \|g_n-p\|\|A^*(I-J_{\lambda F_2})Au_n\|) + \sigma_n M_4.
+\end{align*}
+$$
+
+By a simple rearrangement, we get the second desired result.
+
+**Claim 3.**
+
+$$
+\|x_{n+1} - p\|^2 \le (1-(1-\rho)\sigma_n)\|x_n-p\|^2 + (1-\rho)\sigma_n \cdot \left[ \frac{3M}{1-\rho} \cdot \frac{\vartheta_n}{\sigma_n} \|x_n-x_{n-1}\| + \frac{2}{1-\rho} \langle f(p)-p, x_{n+1}-p \rangle \right],
+$$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_19.md b/samples/texts/7581440/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..60b150e37755972a07d1d6d9b40607190ebc0af3
--- /dev/null
+++ b/samples/texts/7581440/page_19.md
@@ -0,0 +1,31 @@
+This result can be obtained by using the same arguments as in Claim 3 of Theorem 3.1.
+
+**Claim 4.** $\{\|x_n - p\|^2\}$ converges to zero. We assume that $\{\|x_{n_k} - p\|\}$ is a subsequence of $\{\|x_n - p\|\}$ such that $\liminf_{k \to \infty} (\|x_{n_k+1} - p\| - \|x_{n_k} - p\|) \ge 0$. By Claim 2 and Condition (C3), we have
+
+$$
+\begin{align*}
+\limsup_{k \to \infty} (1 - \sigma_{n_k}) \gamma_{n_k} (2 - \varphi_{n_k}) \| (I - J_{\lambda F_2}) A u_{n_k} \|^2 &\le \limsup_{k \to \infty} [\|x_{n_k} - p\|^2 - \|x_{n_k+1} - p\|^2 + \sigma_{n_k} M_4] \\
+&\le 0,
+\end{align*}
+$$
+
+which implies that $\lim_{k \to \infty} \| (I - J_{\lambda F_2}) A u_{n_k} \| = 0$. This together with Claim 2 yields that
+$\lim_{k \to \infty} \| g_{n_k} - u_{n_k} \| = 0$. From (3.18)-(3.23), we observe that
+
+$$
+\limsup_{k \to \infty} \langle f(p) - p, x_{n_k+1} - p \rangle \le 0.
+$$
+
+This together with Remark 3.1 (i), Claim 3 and Lemma 2.4 concludes that $x_n \to p$. The proof of the theorem is now complete. $\square$
+
+**Remark 3.3** We note here that the proposed algorithms directly improve some known results in the literature. The details are as follows:
+
+(i) Our presented methods have strong convergence in real Hilbert spaces, which is more preferable than the weak convergence results of Byrne et al. [7], Chuang [17], Majee and Nahak [18] and Kesornprom and Cholamjiak [28]. Moreover, our Algorithms 3.1 and 3.4 use the viscosity-type method to ensure strong convergence, which makes them faster than the Halpern-type methods in the literature [7,8,34].
+
+(ii) The selection of the step size in the algorithms provided by [7–10,12,13,17,18,24–26] requires the prior information of the operator (matrix) norm, while our algorithms can adaptively update the step size of each iteration. On the one hand, it is not easy to estimate the operator (matrix) norm of the bounded linear operator $A$ in practical applications. On the other hand, it should be pointed out that Armijo-type search methods need to evaluate the value of the iterative sequences $\{u_n, q_n\}$ at operator $A$ and the resolvent mapping of $F_2$ multiple times in each iteration. The proposed Algorithms 3.3 and 3.4 use a method that does not involve any line search process. The method only needs to use known information for a simple calculation in each iteration to complete the step size update. Therefore, our self-adaptive iterative schemes (especially for Algorithms 3.3 and 3.4) are more preferable than the fixed-step methods and the Armijo-type methods [28].
+
+(iii) In [24, Algorithm 3.3], Thong et al. [24] calculated $g_n$ by $g_n = u_n - \mu_n c_n$; however, our Algorithms 3.1 and 3.2 calculate $g_n$ via $g_n = u_n - \kappa \mu_n c_n$, where $\kappa \in (0, 2)$. Obviously, our two methods for calculating $g_n$ are preferable to that of Thong et al. [24]. Furthermore, Algorithm 3.3 and Anh et al.'s Algorithm [26, Algorithm 4] update $x_{n+1}$ differently. To be more precise, we calculate $x_{n+1} = (1 - \sigma_n - \tau_n)u_n + \tau_n g_n$, while Anh et al. calculated $x_{n+1} = (1 - \sigma_n - \tau_n)x_n + \tau_n g_n$. Numerical experiments show that our iterative scheme is more efficient than Anh et al.'s algorithm (cf. Sect. 5).
+
+# **4 Applications**
+
+In this section, we apply the proposed Algorithms 3.1–3.4 to split feasibility problems and split minimization problems.
\ No newline at end of file
diff --git a/samples/texts/7581440/page_2.md b/samples/texts/7581440/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..7a0fcc6e708bbce7d70fb2517c8f13048de86747
--- /dev/null
+++ b/samples/texts/7581440/page_2.md
@@ -0,0 +1,15 @@
+# 1 Introduction
+
+Let $\mathcal{H}_1$ and $\mathcal{H}_2$ be real Hilbert spaces with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. In this paper, we focus on the following split variational inclusion problem (in short, SVIP):
+
+$$ \text{find } x^* \in \mathcal{H}_1 \text{ such that } 0 \in F_1(x^*) \text{ and } 0 \in F_2(Ax^*), \qquad (\text{SVIP}) $$
+
+where operators $F_1: \mathcal{H}_1 \to 2^{\mathcal{H}_1}$ and $F_2: \mathcal{H}_2 \to 2^{\mathcal{H}_2}$ are multi-valued maximal monotone, and $A: \mathcal{H}_1 \to \mathcal{H}_2$ is a bounded linear operator with its adjoint $A^*$. The solution set of (SVIP) is denoted by $\Omega := \{x^* \in \mathcal{H}_1 : 0 \in F_1(x^*) \text{ and } 0 \in F_2(Ax^*)\}$. The (SVIP) was first introduced by Moudafi [1]. It is worth noting that (SVIP) is a unified framework for many mathematical problems, including split minimization problems, split feasibility problems, fixed-point problems, linear inverse problems and variational inclusion problems; see, e.g., [2-4]. This formalism is also at the core of the modeling of many inverse problems arising in phase retrieval and other real-world settings, e.g., in sensor networks, computerized tomography and data compression; see [5,6] and the references therein. Thus, implementable and efficient solutions to this problem are of practical significance in many cases.
+
+The goal of this paper is to build some fast and efficient iterative algorithms to solve the (SVIP). In recent years, there has been tremendous interest in solving (SVIP) and many researchers have constructed a large number of methods to solve the problem; see, e.g., [7-13] and the references therein. Next, we first recall some known algorithms for solving (SVIP) in the literature and then propose our methods. Byrne et al. [7, Algorithm 3.1] introduced the following algorithm to find the solution for (SVIP): $x_{n+1} = J_{\lambda F_1}(x_n - \gamma A^*(I - J_{\lambda F_2})Ax_n)$, where $J_{\lambda F_1}$ and $J_{\lambda F_2}$ are the resolvent mappings of $F_1$ and $F_2$, respectively (see the definition in Sect. 2) and $I$ is the identity mapping. They proved that the iterative scheme converges weakly to a solution of (SVIP) provided that $\Omega \neq \emptyset$ and the stepsize $\gamma \in (0, 2/\|A^*A\|)$. An important method to solve the variational inclusion problem (i.e., find $x^* \in \mathcal{H}_1$ such that $0 \in F_1(x^*)$) is the proximal point method (in short, PPM): $x_{n+1} = J_{\lambda F_1}(x_n)$. In order to speed up the convergence of PPM, Alvarez and Attouch [14] considered the following iterative scheme: $x_{n+1} = J_{\lambda F_1}(x_n + \vartheta_n(x_n - x_{n-1}))$, where $F_1$ is a maximal monotone operator, $\lambda > 0$ and $\vartheta_n \in [0, 1)$. This iterative scheme is now called the inertial proximal point method (in short, IPPM). They proved that the iterative sequence generated by IPPM converges weakly to a zero of $F_1$ under the condition that $\sum_{n=1}^\infty \vartheta_n \|x_n - x_{n-1}\|^2 < \infty$. It should be noted that the inertial effect is induced by the term $\vartheta_n(x_n - x_{n-1})$, and it can be regarded as a procedure for accelerating convergence; see [15,16].
+Recently, the idea of inertia has been widely studied by many scholars in the optimization community as a technique for building fast iterative algorithms; see, e.g., [17-22] and the references therein. Inspired by the method proposed by Byrne et al. [7], the inertial method [14] and the projection and contraction method [23], Chuang [17] introduced the following hybrid inertial proximal algorithm to solve the (SVIP):
+
+$$ \left\{ \begin{array}{l} u_n = x_n + \vartheta_n(x_n - x_{n-1}), \\ q_n = J_{\lambda_n F_1}[u_n - \gamma_n A^*(I - J_{\lambda_n F_2})Au_n], \\ x_{n+1} = J_{\lambda_n F_1}(u_n - \mu_n c_n), \end{array} \right. \qquad (1.1) $$
+
+where $\lambda_n > 0$, $\{\vartheta_n\}$ is a sequence in $[0, \vartheta] \subset [0, 1)$ satisfying $\sum_{n=1}^\infty \vartheta_n \|x_n - x_{n-1}\|^2 < \infty$, sequences $\{\mu_n\}$ and $\{c_n\}$ are defined in (3.3), and $\{\gamma_n\}$ is a real sequence in $[\gamma, \delta/ \|A\|^2] \subset (0, \infty)$ satisfying
+
+$$ \gamma_n \|A^*(I - J_{\lambda_n F_2})Au_n - A^*(I - J_{\lambda_n F_2})Aq_n\| \leq \delta \|u_n - q_n\|, \quad 0 < \delta < 1. $$
\ No newline at end of file
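The proximal point schemes recalled above can be made concrete with the simplest maximal monotone operator: for $F_1 = \nabla(\frac{1}{2}\|\cdot\|^2) = I$, the resolvent is $J_{\lambda F_1}(x) = x/(1+\lambda)$. The short IPPM run below is our own toy sketch (not from [14]); it converges to the unique zero of $F_1$, the origin.

```python
import numpy as np

lam, theta = 1.0, 0.3                    # lambda > 0, inertial parameter in [0, 1)
resolvent = lambda z: z / (1.0 + lam)    # J_{lam F1} for F1 = I

x_prev = x = np.array([5.0, -3.0])
for _ in range(200):
    # inertial proximal point step of Alvarez and Attouch
    x_prev, x = x, resolvent(x + theta * (x - x_prev))
# x is now (numerically) the zero of F1, i.e. the origin
```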
diff --git a/samples/texts/7581440/page_20.md b/samples/texts/7581440/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..35bc9b44228be3439cca9cab4e17d0ef8e2f62d6
--- /dev/null
+++ b/samples/texts/7581440/page_20.md
@@ -0,0 +1,44 @@
+## 4.1 Application to Split Feasibility Problems
+
+Recall that the split feasibility problem (SFP) introduced by Censor and Elfving [2] is described as follows:
+
+$$ \text{find } x^* \in C \text{ such that } Ax^* \in Q, \qquad (\text{SFP}) $$
+
+where $C$ and $Q$ are closed convex subsets of real Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, and $A: \mathcal{H}_1 \to \mathcal{H}_2$ is a bounded linear operator with adjoint operator $A^*$. We denote by $\Omega$ the solution set of (SFP). Based on Algorithm 3.1, we obtain the following result.
+
+**Corollary 4.1** Let $\mathcal{H}_1$, $\mathcal{H}_2$, $C$, $Q$, $A$, $A^*$ and $\Omega$ be as defined above. Suppose that $\Omega \neq \emptyset$, $\vartheta > 0$, $\zeta > 0$, $\chi \in (0, 1)$, $\delta \in (0, 1)$, $\kappa \in (0, 2)$, and Conditions (C3) and (C4) hold. Let $x_0, x_1 \in \mathcal{H}$ and $\{x_n\}$ be a sequence generated by
+
+$$
+\begin{cases}
+u_n = x_n + \vartheta_n(x_n - x_{n-1}), \\
+q_n = P_C[u_n - \gamma_n A^*(I - P_Q)Au_n], \\
+g_n = u_n - \kappa \mu_n c_n, \\
+x_{n+1} = \sigma_n f(x_n) + (1-\sigma_n)g_n,
+\end{cases}
+$$
+
+where $\vartheta_n$ is defined in (3.1), $c_n$ and $\mu_n$ are defined as follows:
+
+$$ c_n = u_n - q_n - \gamma_n[A^*(I - P_Q)Au_n - A^*(I - P_Q)Aq_n], \quad \mu_n = \frac{\langle u_n - q_n, c_n \rangle}{\|c_n\|^2}, $$
+
+and $\gamma_n = \zeta \chi^{r_n}$ and $r_n$ is the smallest nonnegative integer such that
+
+$$ \gamma_n \|A^*(I - P_Q)Au_n - A^*(I - P_Q)Aq_n\| \leq \delta \|u_n - q_n\|. $$
+
+Then the iterative sequence $\{x_n\}$ provided above converges to $p \in \Omega$ in norm.
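The Armijo-type rule $\gamma_n = \zeta \chi^{r_n}$ in the corollary amounts to a standard backtracking loop. The sketch below is ours (the function names and the stand-in `G` for $A^*(I - P_Q)A$ are illustrative assumptions, not the paper's notation); the one-dimensional toy with $P_C = I$ and $P_Q = 0$, so that $G$ is the identity, serves as a usage example.

```python
import numpy as np

def armijo_gamma(u, P_C, G, zeta=2.0, chi=0.5, delta=0.5, max_r=60):
    """Backtracking search for gamma_n = zeta * chi**r_n.

    G(x) stands in for A^*(I - P_Q)A x. Returns the accepted stepsize
    gamma and the corresponding q = P_C(u - gamma * G(u)).
    """
    Gu = G(u)
    for r in range(max_r):
        gamma = zeta * chi ** r
        q = P_C(u - gamma * Gu)
        # search condition of Corollary 4.1
        if gamma * np.linalg.norm(Gu - G(q)) <= delta * np.linalg.norm(u - q):
            return gamma, q
    raise RuntimeError("no admissible stepsize found")

# one-dimensional toy: A = I and P_Q = 0 (so G = identity), P_C = identity
gamma, q = armijo_gamma(np.array([1.0]), lambda x: x, lambda x: x)
```

With $u = 1$ the trials $\gamma = 2$ and $\gamma = 1$ fail, and $\gamma = \zeta\chi^2 = 0.5$ is accepted.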
+
+In Algorithms 3.2–3.4, if $J_{\lambda F_1} = P_C$ and $J_{\lambda F_2} = P_Q$, we can also obtain some sub-results on the split feasibility problem. We omit them here.
+
+## 4.2 Application to Split Minimization Problems
+
+In this subsection, we explore the solution of the following split minimization problem (SMP):
+
+$$ \text{find } x^* \in \mathcal{H}_1 \text{ such that } x^* \in \underset{x \in \mathcal{H}_1}{\operatorname{argmin}} f(x) \text{ and } Ax^* \in \underset{y \in \mathcal{H}_2}{\operatorname{argmin}} g(y), \quad (\text{SMP}) $$
+
+where $\mathcal{H}_1$ and $\mathcal{H}_2$ represent two real Hilbert spaces, $A : \mathcal{H}_1 \to \mathcal{H}_2$ is a linear bounded operator with its adjoint $A^*$, and convex functions $f : \mathcal{H}_1 \to \mathbb{R}$ and $g : \mathcal{H}_2 \to \mathbb{R}$ are proper lower semicontinuous. For convenience, we also use $\Omega$ to represent the solution set of (SMP) and assume that $\Omega \neq \emptyset$. Let $\operatorname{prox}_{\lambda f}$ represent the proximal mapping of a proper convex and lower semicontinuous function $f : \mathcal{H}_1 \to \mathbb{R}$ with a parameter $\lambda > 0$, which is defined as follows:
+
+$$ \operatorname{prox}_{\lambda f}(x) := \operatorname*{argmin}_{y \in \mathcal{H}_1} \left\{ \lambda f(y) + \frac{1}{2} \|y - x\|^2 \right\}. $$
+
+It is well known that $\operatorname{prox}_{\lambda f}(x) = (I + \lambda \partial f)^{-1}(x) = J_{\lambda \partial f}(x)$, where $\partial f$ is the subdifferential of $f$ defined by
+
+$$ \partial f(x) := \{z \in \mathcal{H}_1 : f(y) - f(x) \geq \langle z, y - x \rangle, \forall y \in \mathcal{H}_1\}, \quad \forall x \in \operatorname{Dom}(f). $$
\ No newline at end of file
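For a concrete instance of this proximal mapping, take $f = \|\cdot\|_1$ on $\mathbb{R}^m$: the definition above then yields the well-known soft-thresholding operator $\operatorname{prox}_{\lambda\|\cdot\|_1}(x) = \operatorname{sign}(x)\max(|x| - \lambda, 0)$, applied componentwise. A minimal sketch:

```python
import numpy as np

def prox_l1(x, lam):
    """prox of lam * ||.||_1: componentwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

For example, `prox_l1` with $\lambda = 1$ maps $(3, -0.5, 1)$ to $(2, 0, 0)$.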
diff --git a/samples/texts/7581440/page_21.md b/samples/texts/7581440/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..62ac3a06931cfd96bb60edc54282a82114e47c9e
--- /dev/null
+++ b/samples/texts/7581440/page_21.md
@@ -0,0 +1,37 @@
+It is known that $\partial f$ is maximal monotone and $\text{prox}_{\lambda f}$ is firmly nonexpansive. Thus, the following corollary follows directly from Algorithm 3.3.
+
+**Corollary 4.2** Let $\mathcal{H}_1, \mathcal{H}_2$, $f, g, A, A^*$ and $\Omega$ be as defined above. Assume that $\Omega \neq \emptyset, \lambda > 0, \vartheta > 0, \varphi_n \in (0, 2)$, and Conditions (C3) and (C5) hold. Let $x_0, x_1 \in \mathcal{H}$ and $\{x_n\}$ be a sequence created by
+
+$$
+\left\{
+\begin{array}{l}
+u_n = x_n + \vartheta_n(x_n - x_{n-1}), \\
+g_n = \text{prox}_{\lambda f}[u_n - \gamma_n A^*(I - \text{prox}_{\lambda g})A u_n], \\
+x_{n+1} = (1 - \sigma_n - \tau_n)u_n + \tau_n g_n,
+\end{array}
+\right.
+$$
+
+where $\vartheta_n$ is defined in (3.1) and the stepsize $\gamma_n$ is updated by the following:
+
+$$ \gamma_n = \begin{cases} \frac{\varphi_n \| (I - \operatorname{prox}_{\lambda g}) A u_n \|^2}{\| A^* (I - \operatorname{prox}_{\lambda g}) A u_n \|^2}, & \text{if } \| A^* (I - \operatorname{prox}_{\lambda g}) A u_n \| \neq 0; \\ 0, & \text{otherwise.} \end{cases} $$
+
+Then the iterative sequence $\{x_n\}$ constructed above converges to $p \in \Omega$ in norm.
+
+In Algorithms 3.1, 3.2 and 3.4, if $J_{\lambda F_1} = \text{prox}_{\lambda f}$ and $J_{\lambda F_2} = \text{prox}_{\lambda g}$, we can also obtain some sub-results on the split minimization problem. We omit them here.
+
+# 5 Numerical Experiments
+
+In this section, we provide some numerical examples occurring in finite- and infinite-dimensional spaces to show the advantages of our algorithms, and compare them with some known strongly convergent algorithms, including Byrne et al.'s Algorithm 4.4 (shortly, BCGR Alg. 4.4) [7], Kazmi and Rizvi's Algorithm (3.1) (KR Alg. (3.1)) [8], Thong et al.'s Algorithm 3.3 (TDC Alg. 3.3) [24], Long et al.'s Algorithm (49) (LTD Alg. (49)) [25], Anh et al.'s Algorithm (4) (ATD Alg. 4) [26] and Suantai et al.'s Algorithm 3 (SKC Alg. 3) [34]. All the programs are implemented in MATLAB 2018a on an Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz computer with 8.00 GB of RAM. Before starting our numerical experiments, we first review the strongly convergent algorithms that need to be compared. These iterative schemes and their convergence conditions are described in Table 1.
+
+In the following numerical experiments, the parameters of all algorithms are set as follows:
+
+• In all algorithms, we set $\lambda = 1$, $\sigma_n = 1/(n+1)$, $\tau_n = 0.5(1-\sigma_n)$ and $f(x) = 0.5x$.
+
+• In BCGR Alg. 4.4, KR Alg. (3.1), TDC Alg. 3.3, LTD Alg. (49) and ATD Alg. 4, we choose stepsize $\gamma = 0.5/\|A\|^2$.
+
+• In TDC Alg. 3.3, LTD Alg. (49) and ATD Alg. 4, we update the inertia parameter $\vartheta_n$ through (3.1). In these three algorithms and our Algorithms 3.1–3.4, we take $\varpi_n = 1/(n+1)^2$ and $\vartheta = 0.5$.
+
+• In our Algorithms 3.1 and 3.2, we adopt $\zeta = 2$, $\chi = 0.5$, $\delta = 0.5$, $\kappa = 1$. Set $\varphi_n = 1.5$ in our Algorithms 3.3 and 3.4. In SKC Alg. 3, we take $u = x_0$, $\varphi_n = 3$ and $\iota_n = 1/n^3$.
+
+**Example 5.1** Assume that $A, A_1, A_2 : \mathbb{R}^m \to \mathbb{R}^m$ are created from a normal distribution with mean zero and unit variance. Let $F_1 : \mathbb{R}^m \to \mathbb{R}^m$ and $F_2 : \mathbb{R}^m \to \mathbb{R}^m$ be defined by $F_1(x) = A_1^* A_1 x$ and $F_2(y) = A_2^* A_2 y$, respectively. Consider the problem of finding a point $\bar{x} = (\bar{x}_1, \dots, \bar{x}_m)^T \in \mathbb{R}^m$ such that $F_1(\bar{x}) = (0, \dots, 0)^T$ and $F_2(A\bar{x}) = (0, \dots, 0)^T$. It is easy to see that the minimum norm solution of the problem mentioned above is $x^* = (0, \dots, 0)^T$.
\ No newline at end of file
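The setup of Example 5.1 is straightforward to reproduce. The sketch below (ours, with a smaller $m$ for brevity) also checks that $x^* = 0$ solves the problem and that $F_1$ is monotone, since $\langle F_1(x) - F_1(y), x - y\rangle = \|A_1(x - y)\|^2 \ge 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 20                              # smaller than in the experiments, for brevity
A  = rng.standard_normal((m, m))
A1 = rng.standard_normal((m, m))
A2 = rng.standard_normal((m, m))

F1 = lambda x: A1.T @ A1 @ x        # F1(x) = A1^* A1 x
F2 = lambda y: A2.T @ A2 @ y        # F2(y) = A2^* A2 y

x_star = np.zeros(m)                # the minimum-norm solution
assert np.allclose(F1(x_star), 0.0) and np.allclose(F2(A @ x_star), 0.0)

# monotonicity of F1: <F1(x) - F1(y), x - y> = ||A1 (x - y)||^2 >= 0
x, y = rng.standard_normal(m), rng.standard_normal(m)
assert (F1(x) - F1(y)) @ (x - y) >= 0.0
```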
diff --git a/samples/texts/7581440/page_22.md b/samples/texts/7581440/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..cd590591e255fb06b517104f42f63d2d149ed011
--- /dev/null
+++ b/samples/texts/7581440/page_22.md
@@ -0,0 +1,5 @@
+**Table 1** Strongly convergent algorithms and their convergence conditions
+
+| Algorithm | Iterative scheme | Convergence conditions ($L = \lVert A \rVert^2$) |
+|---|---|---|
+| BCGR Alg. 4.4 | $\begin{cases} g_n = J_{\lambda F_1}[x_n - \gamma A^*(I - J_{\lambda F_2})Ax_n], \\ x_{n+1} = \sigma_n x_0 + (1 - \sigma_n)g_n \end{cases}$ | $\lim_{n \to \infty} \sigma_n = 0$, $\sum_{n=1}^{\infty} \sigma_n = \infty$ |
+| KR Alg. (3.1) | $\begin{cases} g_n = J_{\lambda F_1}[x_n - \gamma A^*(I - J_{\lambda F_2})Ax_n], \\ x_{n+1} = \sigma_n f(x_n) + (1 - \sigma_n)g_n \end{cases}$ | $\lim_{n \to \infty} \sigma_n = 0$, $\sum_{n=1}^{\infty} \sigma_n = \infty$ |
+| TDC Alg. 3.3 | $\begin{cases} u_n = x_n + \vartheta_n(x_n - x_{n-1}), \\ q_n = J_{\lambda F_1}[u_n - \gamma A^*(I - J_{\lambda F_2})Au_n], \\ g_n = u_n - \mu_n c_n, \\ x_{n+1} = \sigma_n f(x_n) + (1 - \sigma_n)g_n \end{cases}$ | $\sum_{n=1}^{\infty} \sigma_n = \infty$, $\sum_{n=1}^{\infty} \lVert x_n - x_{n-1} \rVert < \infty$ |
+| LTD Alg. (49) | $\begin{cases} u_n = x_n + \vartheta_n(x_n - x_{n-1}), \\ g_n = J_{\lambda F_1}[u_n - \gamma A^*(I - J_{\lambda F_2})Au_n], \\ x_{n+1} = \sigma_n f(x_n) + (1 - \sigma_n)g_n \end{cases}$ | $\lim_{n \to \infty} \vartheta_n/\sigma_n = 0$, $\sum_{n=1}^{\infty} \sigma_n = \infty$ |
+| ATD Alg. 4 | $\begin{cases} u_n = x_n + \vartheta_n(x_n - x_{n-1}), \\ g_n = J_{\lambda F_1}[u_n - \gamma A^*(I - J_{\lambda F_2})Au_n], \\ x_{n+1} = (1 - \sigma_n - \tau_n)x_n + \tau_n g_n \end{cases}$ | $\lim_{n \to \infty} \vartheta_n/\sigma_n = 0$, $\sum_{n=1}^{\infty} \sigma_n = \infty$ |
+| SKC Alg. 3 | $\begin{cases} g_n = J_{\lambda F_1}[x_n - \gamma_n A^*(I - J_{\lambda F_2})Ax_n], \\ x_{n+1} = \sigma_n u + (1 - \sigma_n)g_n \end{cases}$ | $\lim_{n \to \infty} \sigma_n = 0$, $\sum_{n=1}^{\infty} \sigma_n = \infty$, $\inf_n \varphi_n (4 - \varphi_n) > 0$ |
+
+where, for SKC Alg. 3, $$ \gamma_n = \frac{0.5\varphi_n \|(I - J_{\lambda F_2})Ax_n\|^2}{\|A^*(I - J_{\lambda F_2})Ax_n\|^2 + t_n}, \quad \lim_{n \to \infty} t_n = 0. $$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_23.md b/samples/texts/7581440/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..88923e698556f1ec066214469c63d3ff6bad8dd2
--- /dev/null
+++ b/samples/texts/7581440/page_23.md
@@ -0,0 +1,22 @@
+**Table 2** The number of termination iterations and execution time of all algorithms with different stopping criteria ($m = 100$) in Example 5.1
+
+| Algorithms | Iter. ($\epsilon = 10^{-4}$) | Time (s) | Iter. ($\epsilon = 10^{-5}$) | Time (s) | Iter. ($\epsilon = 10^{-6}$) | Time (s) | Iter. ($\epsilon = 10^{-7}$) | Time (s) |
+|---|---|---|---|---|---|---|---|---|
+| Our Alg. 3.1 | 23 | 0.0345 | 32 | 0.0365 | 36 | 0.0336 | 39 | 0.0407 |
+| Our Alg. 3.2 | 22 | 0.0342 | 31 | 0.0353 | 39 | 0.0363 | 49 | 0.0543 |
+| Our Alg. 3.3 | 17 | 0.0185 | 19 | 0.0172 | 25 | 0.0170 | 29 | 0.0218 |
+| Our Alg. 3.4 | 11 | 0.0147 | 15 | 0.0135 | 18 | 0.0125 | 21 | 0.0142 |
+| BCGR Alg. 4.4 | 299 | 0.4177 | 299 | 0.4251 | 299 | 0.4180 | 299 | 0.4335 |
+| KR Alg. (3.1) | 80 | 0.1163 | 103 | 0.1449 | 126 | 0.1724 | 149 | 0.2090 |
+| TDC Alg. 3.3 | 39 | 0.0673 | 47 | 0.0718 | 55 | 0.0747 | 63 | 0.0879 |
+| LTD Alg. (49) | 41 | 0.0659 | 49 | 0.0734 | 57 | 0.0764 | 66 | 0.0909 |
+| ATD Alg. 4 | 71 | 0.1075 | 100 | 0.1471 | 131 | 0.1830 | 164 | 0.2346 |
+| SKC Alg. 3 | 299 | 0.2086 | 299 | 0.2015 | 299 | 0.2060 | 299 | 0.2112 |
+
+We use $E_n = \|x_n - x^*\|$ to measure the iteration error of all the algorithms. The stopping condition is either $E_n < \epsilon$ or a maximum of 299 iterations. First, we choose $\epsilon = 10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}$ and test the convergence behavior of all algorithms under the different stopping conditions. The numerical results are shown in Table 2 and Fig. 1. Second, Table 3 and Fig. 2 describe the numerical behavior of all algorithms in different dimensions with the same stopping criterion $\epsilon = 10^{-7}$.
+
+**Remark 5.1** From the numerical results of Example 5.1, we have the following observations:
+
+(1) The four iterative schemes proposed in this paper are efficient and easy to implement.
+The most important thing is that they converge quickly.
+
+(2) Our offered methods converge faster than some known algorithms in the literature in terms of the number of iterations and execution time, and these observations have no significant relationship with the dimensions of the problem and the selection of initial values (cf. Table 2, Table 3, Figs. 1, 2).
+
+**Example 5.2** Assume that $\mathcal{H}_1$ and $\mathcal{H}_2$ are real Hilbert spaces, and $A : \mathcal{H}_1 \to \mathcal{H}_2$ is a bounded linear operator with its adjoint $A^*$. Let $C$ and $Q$ be nonempty closed and convex subsets of $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively. We consider the split feasibility problem (SFP) in infinite-dimensional Hilbert spaces, which reads as
+
+$$ \text{find } x^* \in C \text{ such that } Ax^* \in Q. $$
+
+For any $x, y \in L^2([0, 1])$, we consider $\mathcal{H}_1 = \mathcal{H}_2 = L^2([0, 1])$ embedded with the inner product $\langle x, y \rangle := \int_0^1 x(t)y(t) dt$ and the induced norm $\|x\| := (\int_0^1 |x(t)|^2 dt)^{1/2}$. Consider the following nonempty closed and convex subsets $C$ and $Q$ in $L^2([0, 1])$:
+
+$$ C = \left\{x \in L^2([0, 1]) : \int_0^1 x(t) \, dt \le 1\right\}, $$
+
+$$ Q = \left\{x \in L^2([0, 1]) : \int_0^1 |x(t) - \sin(t)|^2 \, dt \le 16\right\}. $$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_24.md b/samples/texts/7581440/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..a538fa549fbe77d8471df473cab0fcbaf198f3a5
--- /dev/null
+++ b/samples/texts/7581440/page_24.md
@@ -0,0 +1,11 @@
+Fig. 1 Numerical behavior of all algorithms with different stopping criteria in Example 5.1
+
+Let $A: L^2([0, 1]) \rightarrow L^2([0, 1])$ be the Volterra integration operator, given by $(Ax)(t) = \int_0^t x(s) \, ds$ for all $t \in [0, 1]$ and $x \in L^2([0, 1])$. Then $A$ is a bounded linear operator (see [39, Exercise 20.16]) with operator norm $\|A\| = \frac{2}{\pi}$, and its adjoint $A^*$ is given by $(A^*x)(t) = \int_t^1 x(s) \, ds$. Note that $x(t) = 0$ is a solution of (SFP), so the solution set of the problem is nonempty. On the other hand, it is known that the projections onto the sets $C$ and $Q$ admit explicit formulas, that is,
+
+$$P_C(x) = \begin{cases} 1 - a + x, & a > 1; \\ x, & a \le 1. \end{cases} \quad \text{and} \quad P_Q(x) = \begin{cases} \sin(\cdot) + \frac{4(x-\sin(\cdot))}{\sqrt{b}}, & b > 16; \\ x, & b \le 16, \end{cases}$$
+
+where $a := \int_{0}^{1} x(t) \, dt$ and $b := \int_{0}^{1} |x(t) - \sin(t)|^{2} \, dt$.
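On a uniform grid, the two projection formulas can be sketched as follows. The trapezoidal rule stands in for the integrals defining $a$ and $b$; the discretization is our own illustrative choice, not part of the example.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def integral(f_vals):
    # trapezoidal rule on the uniform grid over [0, 1]
    return float(np.sum((f_vals[:-1] + f_vals[1:]) * 0.5) * dt)

def P_C(x):
    a = integral(x)                     # a = int_0^1 x(t) dt
    return x + (1.0 - a) if a > 1.0 else x

def P_Q(x):
    s = np.sin(t)
    b = integral((x - s) ** 2)          # b = int_0^1 |x(t) - sin t|^2 dt
    return s + 4.0 * (x - s) / np.sqrt(b) if b > 16.0 else x
```

For the constant function $x \equiv 2$ one gets $a = 2 > 1$ and hence $P_C(x) \equiv 1$; for $x = \sin(t) + 5$ one gets $b = 25 > 16$, and $P_Q(x)$ lies on the boundary of $Q$.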
+
+We use symbolic computation in MATLAB to implement these algorithms and use $E_n = \| (I - P_C)x_n \|_2^2 + \| A^*(I - P_Q)Ax_n \|_2^2 < 10^{-5}$ as the stopping criterion. We do not report the numerical results of BCGR Alg. 4.4 and SKC Alg. 3 here because they converge slowly. Table 4 and Fig. 3 show the numerical behavior of all the other algorithms with four different initial values $x_0 = x_1$.
+
+**Remark 5.2** It can be seen from Table 4 and Fig. 3 that the proposed approaches are easy to implement and efficient. In addition, our suggested methods (especially Alg. 3.3 and Alg. 3.4) require fewer iterations than some algorithms in the literature to achieve the same
\ No newline at end of file
diff --git a/samples/texts/7581440/page_25.md b/samples/texts/7581440/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..ab0802dbdafa2253246d5fffafa9c867ff0f6cf1
--- /dev/null
+++ b/samples/texts/7581440/page_25.md
@@ -0,0 +1,5 @@
+**Table 3** The number of termination iterations and execution time of all algorithms with different dimensions ($\epsilon_n = 10^{-7}$) in Example 5.1
+
+| Algorithms | Iter. ($m=200$) | Time (s) | Iter. ($m=400$) | Time (s) | Iter. ($m=600$) | Time (s) | Iter. ($m=800$) | Time (s) |
+|---|---|---|---|---|---|---|---|---|
+| Our Alg. 3.1 | 35 | 0.1523 | 33 | 0.5510 | 39 | 1.1640 | 36 | 2.4639 |
+| Our Alg. 3.2 | 38 | 0.1603 | 37 | 0.6209 | 44 | 1.3438 | 39 | 2.7218 |
+| Our Alg. 3.3 | 28 | 0.0881 | 27 | 0.3723 | 26 | 0.6524 | 28 | 1.2189 |
+| Our Alg. 3.4 | 20 | 0.0689 | 16 | 0.2252 | 17 | 0.4076 | 17 | 0.7902 |
+| BCGR Alg. 4.4 | 299 | 1.8065 | 299 | 8.7384 | 299 | 17.9115 | 299 | 37.5884 |
+| KR Alg. (3.1) | 120 | 0.7502 | 105 | 3.1241 | 124 | 7.4629 | 117 | 15.2309 |
+| TDC Alg. 3.3 | 40 | 0.2644 | 39 | 1.1668 | 46 | 2.7830 | 44 | 5.7761 |
+| LTD Alg. (49) | 44 | 0.2896 | 40 | 1.1718 | 51 | 3.3261 | 46 | 6.0827 |
+| ATD Alg. 4 | 129 | 0.8595 | 116 | 3.4534 | 140 | 8.7756 | 131 | 17.7027 |
+| SKC Alg. 3 | 299 | 0.9027 | 299 | 4.2046 | 299 | 6.6302 | 299 | 13.7081 |
+
+Fig. 2 Numerical behavior of all algorithms with different dimensions in Example 5.1
\ No newline at end of file
diff --git a/samples/texts/7581440/page_26.md b/samples/texts/7581440/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..136f953b5f546747ed6997cc864f1a9648e6b921
--- /dev/null
+++ b/samples/texts/7581440/page_26.md
@@ -0,0 +1,7 @@
+**Table 4** The number of termination iterations and execution time of all algorithms with different initial values ($x_0 = x_1$) in Example 5.2
+
+| Algorithms | Iter. ($x_1 = 500 \sin(t)$) | Time (s) | Iter. ($x_1 = 1000t^2$) | Time (s) | Iter. ($x_1 = 500(t^3 + 2t)$) | Time (s) | Iter. ($x_1 = 300 \log(t)$) | Time (s) |
+|---|---|---|---|---|---|---|---|---|
+| Our Alg. 3.1 | 15 | 37.529 | 23 | 29.429 | 25 | 69.7555 | 22 | 144.176 |
+| Our Alg. 3.2 | 8 | 14.119 | 13 | 15.578 | 16 | 42.096 | 13 | 75.2286 |
+| Our Alg. 3.3 | 11 | 7.4645 | 9 | 4.0796 | 12 | 10.9242 | 5 | 3.476 |
+| Our Alg. 3.4 | 9 | 7.3766 | 14 | 7.4795 | 17 | 16.4684 | 8 | 6.5769 |
+| KR Alg. (3.1) | 21 | 4.5369 | 31 | 5.3176 | 35 | 6.5202 | 28 | 5.7873 |
+| TDC Alg. 3.3 | 16 | 26.817 | 26 | 24.806 | 28 | 51.2577 | 25 | 160.579 |
+| LTD Alg. (49) | 21 | 18.055 | 31 | 15.019 | 34 | 31.9965 | 28 | 56.5573 |
+| ATD Alg. 4 | 9 | 6.7382 | 16 | 7.4592 | 20 | 16.6949 | 12 | 12.249 |
+
+Fig. 3 Numerical behavior of all algorithms with different initial values in Example 5.2
+
+error accuracy, and these results are independent of the selection of initial values. It is worth noting that our Algorithms 3.1 and 3.2 require fewer iterations but more execution time (because the Armijo-type line search criterion (3.2) takes more time to find a suitable stepsize). Moreover, it should be pointed out that the operator norm $\|A\|$ of this
\ No newline at end of file
diff --git a/samples/texts/7581440/page_27.md b/samples/texts/7581440/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..03da036852cc4061b9c81a70b3bbd0c12f411abb
--- /dev/null
+++ b/samples/texts/7581440/page_27.md
@@ -0,0 +1,9 @@
+Fig. 4 Structure of compressive sensing matrices
+
+**Table 5** The numerical results of all algorithms for solving (LASSO) in case $M = 256$, $N = 512$ and $k = 10$
+
+| Measurement result | Our Alg. 3.1 | Our Alg. 3.2 | Our Alg. 3.3 | Our Alg. 3.4 |
+|---|---|---|---|---|
+| MSE ($\times 10^{-4}$) | 0.1786 | 0.2084 | 0.1838 | 0.1730 |
+| Time (s) | 0.4887 | 0.5200 | 0.1202 | 0.1184 |
+
+| Measurement result | KR Alg. (3.1) | TDC Alg. 3.3 | LTD Alg. (49) | ATD Alg. 4 |
+|---|---|---|---|---|
+| MSE ($\times 10^{-4}$) | 0.1801 | 0.1786 | 0.1801 | 0.2153 |
+| Time (s) | 1.6703 | 0.1681 | 1.6658 | 1.6427 |
+
+Fig. 5 Original signal and contaminated signal
\ No newline at end of file
diff --git a/samples/texts/7581440/page_28.md b/samples/texts/7581440/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..f8709ce3b2db04b883ff2ff768e1ff7a122ca3e1
--- /dev/null
+++ b/samples/texts/7581440/page_28.md
@@ -0,0 +1,3 @@
+Fig. 6 The original signal and the signal recovered by our algorithms
+
+Fig. 7 The discrepancy of mean squared error (MSE) of all algorithms
\ No newline at end of file
diff --git a/samples/texts/7581440/page_29.md b/samples/texts/7581440/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..f263f4950302f57c6c49663163a76fb4d1220189
--- /dev/null
+++ b/samples/texts/7581440/page_29.md
@@ -0,0 +1,3 @@
+**Table 6** The numerical results of all algorithms for solving (LASSO) with different situations
+
+The four column pairs below correspond, from left to right, to $M = 256, N = 512, k = 20$; $M = 256, N = 512, k = 40$; $M = 512, N = 1024, k = 20$; and $M = 512, N = 1024, k = 40$.
+
+| Algorithms | MSE ($\times 10^{-4}$) | Time (s) | MSE ($\times 10^{-3}$) | Time (s) | MSE ($\times 10^{-4}$) | Time (s) | MSE ($\times 10^{-4}$) | Time (s) |
+|---|---|---|---|---|---|---|---|---|
+| Our Alg. 3.1 | 0.3317 | 0.4874 | 0.1003 | 0.4693 | 0.2082 | 1.4892 | 0.4342 | 1.5198 |
+| Our Alg. 3.2 | 0.384 | 0.4876 | 0.1248 | 0.4879 | 0.2411 | 1.4459 | 0.5001 | 1.5098 |
+| Our Alg. 3.3 | 0.3417 | 0.1225 | 0.1064 | 0.1485 | 0.214 | 0.3615 | 0.4481 | 0.3817 |
+| Our Alg. 3.4 | 0.3221 | 0.1157 | 0.0966 | 0.1156 | 0.2021 | 0.3207 | 0.4224 | 0.368 |
+| KR Alg. (3.1) | 0.3355 | 1.5696 | 0.1029 | 1.5717 | 0.2098 | 8.2411 | 0.4394 | 7.7349 |
+| TDC Alg. 3.3 | 0.3317 | 0.184 | 0.1003 | 0.1795 | 0.2082 | 0.8648 | 0.4342 | 0.597 |
+| LTD Alg. (49) | 0.3355 | 1.5704 | 0.1029 | 1.5271 | 0.2098 | 8.0863 | 0.4393 | 7.6975 |
+| ATD Alg. 4 | 0.402 | 1.5776 | 0.1416 | 1.5861 | 0.249 | 7.6964 | 0.5242 | 7.7073 |
\ No newline at end of file
diff --git a/samples/texts/7581440/page_3.md b/samples/texts/7581440/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..c821ac4a33046560b5d90ef373c25fb08e42a17b
--- /dev/null
+++ b/samples/texts/7581440/page_3.md
@@ -0,0 +1,11 @@
+Chuang proved that the iterative sequence generated by (1.1) converges weakly to a solution of (SVIP). Later, Majee and Nahak [18] revised the way of calculating $x_{n+1}$ in the third step of Algorithm (1.1). More precisely, they used $x_{n+1} = u_n - \kappa \mu_n c_n$ to calculate the value of $x_{n+1}$, where $\kappa \in (0, 2)$. It is worth noting that this method does not need to evaluate the resolvent mapping of $F_1$ again. They also established the weak convergence of the suggested method under some suitable conditions.
+
+Note that the methods suggested in [7,17,18] all achieve only weak convergence in infinite-dimensional spaces. Examples in CT reconstruction and machine learning show that strong convergence is preferable to weak convergence in an infinite-dimensional space. Therefore, a natural question is how to modify the method proposed by Byrne et al. [7, Algorithm 3.1] so that it achieves strong convergence in infinite-dimensional spaces. In fact, in the past few years, many scholars have presented various iterative schemes with strong convergence to solve the split variational inclusion problem in real Hilbert spaces; see, e.g., [7,8,10,24–26]. Byrne et al. [7, Algorithm 4.4] used the Halpern method to guarantee the strong convergence of the proposed algorithm. It is known that Halpern-type methods use the initial point $x_0$ in each iteration, which results in slow convergence. Kazmi and Rizvi [8] applied the viscosity-type method to accelerate the convergence of Byrne et al.'s algorithm [7, Algorithm 4.4] and obtained the strong convergence of the suggested method. Sitthithakerngkiet et al. [10] combined the viscosity method and the hybrid steepest descent method to ensure the strong convergence of the offered algorithm. Recently, based on the inertial idea, Byrne et al.'s method [7], Majee and Nahak's method [18], the Mann method and the viscosity method, Thong et al. [24], Long et al. [25] and Anh et al. [26] presented several new inertial strongly convergent algorithms, and their numerical experiments show that the new iterative schemes are efficient and easy to implement.
+
+On the other hand, the strongly convergent algorithms mentioned above share a common feature: their step size requires prior knowledge of the operator (matrix) norm $\|A\|$. Since $\|A\|$ may be difficult to estimate in general, this affects the implementation of fixed step size algorithms. To overcome this shortcoming, the construction of self-adaptive step size algorithms has attracted considerable interest among researchers. López et al. [27] introduced a relaxation algorithm whose iterative process is as follows: $x_{n+1} = P_C(x_n - \gamma_n A^*(I - P_Q)Ax_n)$, where the step size $\gamma_n$ is computed as
+
+$$ \gamma_n = \frac{\varphi_n \| (I - P_Q)Ax_n \|^2}{2 \| A^* (I - P_Q)Ax_n \|^2}, \quad 0 < \varphi_n < 4, \quad \inf_n \varphi_n (4 - \varphi_n) > 0. $$
+
+Here $P_C$ and $P_Q$ stand for the orthogonal projections onto the closed convex sets $C$ and $Q$, respectively. Recently, many algorithms have been proposed that do not require prior knowledge of the operator (matrix) norm to solve (SVIP) and other problems; see, e.g., [28–33].
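As a minimal finite-dimensional sketch of this relaxation scheme (a toy instance of our own, not from [27]: $C$ and $Q$ are Euclidean balls and $\varphi_n \equiv 2$):

```python
import numpy as np

def proj_ball(x, center, radius):
    # projection onto the Euclidean ball of the given center and radius
    d = np.linalg.norm(x - center)
    return x if d <= radius else center + radius * (x - center) / d

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
P_C = lambda x: proj_ball(x, np.zeros(6), 1.0)   # C: unit ball in R^6
P_Q = lambda y: proj_ball(y, np.zeros(4), 0.5)   # Q: ball of radius 0.5 in R^4

x = 5.0 * rng.standard_normal(6)
for _ in range(2000):
    r = A @ x - P_Q(A @ x)                        # (I - P_Q)Ax
    g = A.T @ r                                   # A*(I - P_Q)Ax
    gn = float(np.dot(g, g))
    if gn < 1e-20:                                # Ax already (numerically) in Q
        x = P_C(x)
        break
    gamma = 2.0 * (0.5 * np.dot(r, r)) / gn       # step size rule with phi_n = 2
    x = P_C(x - gamma * g)

assert np.linalg.norm(x) <= 1.0 + 1e-9            # x lies in C
assert np.linalg.norm(A @ x) <= 0.5 + 1e-3        # Ax lies (approximately) in Q
```

Note that the step size uses only the current residual, never $\|A\|$ itself, which is the point of the self-adaptive rule.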
+
+Motivated by the above work, in this paper, we propose four self-adaptive inertial algorithms with strong convergence to solve the split variational inclusion problem (SVIP) in real Hilbert spaces. The advantages of the suggested iterative algorithms are that (1) prior knowledge of the operator (matrix) norm is not required, (2) the strong convergence theorems of the suggested algorithms are established under some weaker conditions, and (3) the inertial term is embedded to accelerate the convergence speed of the algorithms. Furthermore, we also give several theoretical applications of the proposed algorithms. Finally, some numerical experiments are provided to show the advantages of the stated algorithms over the previously existing ones. The approaches obtained in this paper improve and extend some results in the literature [7,17,18,24–26,28,34].
\ No newline at end of file
diff --git a/samples/texts/7581440/page_30.md b/samples/texts/7581440/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..eae04a104fb057f8ba9f43bb73256fe4e736dcc1
--- /dev/null
+++ b/samples/texts/7581440/page_30.md
@@ -0,0 +1,20 @@
+Fig. 8 The original signal and the signal recovered by our Algorithm 3.4
+
+problem is not easy to obtain, which means that the algorithms of fixed step size will fail.
+However, the self-adaptive algorithms presented in this paper can work well.
+
+**Example 5.3** Compressed sensing is an effective method for recovering a clean signal from a polluted one. This requires us to solve the following underdetermined linear system:
+
+$$
+\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{\epsilon},
+$$
+
+where $\mathbf{y} \in \mathbb{R}^M$ is the observed noisy data, $\mathbf{A} \in \mathbb{R}^{M \times N}$ is a bounded linear observation operator, $\mathbf{x} \in \mathbb{R}^N$ with $k$ ($k \ll N$) non-zero elements is the original clean data that needs to be restored, and $\mathbf{\epsilon}$ is the noise encountered during data transmission. An important feature of this problem is that the signal $\mathbf{x}$ is sparse, that is, the number of non-zero elements in $\mathbf{x}$ is much smaller than its dimension. Figure 4 visually shows the matrix structure of compressed sensing.
+
+A successful model for the above problem is the following convex constrained minimization problem:
+
+$$
+\min_{\mathbf{x} \in \mathbb{R}^N} \frac{1}{2} \| \mathbf{y} - \mathbf{A}\mathbf{x} \|^2 \quad \text{subject to} \quad \| \mathbf{x} \|_1 \le t, \qquad (\text{LASSO})
+$$
+
+where *t* is a positive constant. It should be pointed out that this problem is related to the least absolute shrinkage and selection operator (LASSO) problem. Note that the (LASSO) problem
\ No newline at end of file
diff --git a/samples/texts/7581440/page_31.md b/samples/texts/7581440/page_31.md
new file mode 100644
index 0000000000000000000000000000000000000000..288e5663ddf050d6c72aa7ce2147f4288e7655eb
--- /dev/null
+++ b/samples/texts/7581440/page_31.md
@@ -0,0 +1,7 @@
+Fig. 9 The discrepancy of mean squared error (MSE) of all algorithms
+
+described above can be regarded as a special case of (SFP) when $C = \{\mathbf{x} \in \mathbb{R}^N : \| \mathbf{x} \|_1 \le t\}$ and $Q = \{\mathbf{y}\}$. In this situation, we can use the projection formulas described in Sect. 2 to calculate $P_C$ and $P_Q$.
+
+We now consider using the proposed iterative schemes to solve (LASSO) and compare them with some known algorithms in the literature. In our numerical experiments, the matrix $\mathbf{A} \in \mathbb{R}^{M \times N}$ is generated from a standard normal distribution with zero mean and unit variance, and its rows are then orthonormalized. The clean signal $\mathbf{x} \in \mathbb{R}^N$ contains $k$ ($k \ll N$) randomly generated $\pm 1$ spikes. The observation $\mathbf{y}$ is formed by $\mathbf{y} = \mathbf{A}\mathbf{x} + \epsilon$ with white Gaussian noise $\epsilon$ of variance $10^{-4}$. The recovery process starts with the initial signals $\mathbf{x}_0 = \mathbf{x}_1 = \mathbf{0}$ and ends after 2000 iterations. We use the mean squared error $\text{MSE} = (1/N) \|\mathbf{x}^* - \mathbf{x}\|^2$ (where $\mathbf{x}^*$ is an estimate of $\mathbf{x}$) to measure the restoration accuracy of all algorithms. In our first test, we set $M=256$, $N=512$ and $k=10$. The numerical results are shown in Table 5 and Figs. 5, 6 and 7. Figure 5 displays the original signal and the contaminated signal. The recovery results of the suggested algorithms are shown in Fig. 6. Table 5 presents the numerical results of all algorithms, including the mean squared error (MSE) between the restored signal and the original signal, and the execution time required for the iterative process. Figure 7 gives the numerical behavior of the MSE of all algorithms in the iteration process.
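The experimental setup just described can be reproduced in outline as follows (a hedged sketch; the random seed and the use of QR for row orthonormalization are our own choices):

```python
import numpy as np

rng = np.random.default_rng(42)
M, N, k = 256, 512, 10

# Gaussian measurement matrix with orthonormalized rows (via QR of A^T)
A = rng.standard_normal((M, N))
Q, _ = np.linalg.qr(A.T)          # Q: N x M with orthonormal columns
A = Q.T                           # rows of A are orthonormal: A A^T = I_M

# k-sparse signal of +-1 spikes at random positions
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k)

# noisy observation with Gaussian noise of variance 1e-4
y = A @ x + rng.normal(0.0, np.sqrt(1e-4), size=M)

# restoration accuracy metric used throughout the experiments
mse = lambda x_est: float(np.dot(x_est - x, x_est - x)) / N

assert np.allclose(A @ A.T, np.eye(M), atol=1e-8)
assert np.count_nonzero(x) == k
```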
+
+Next, in order to demonstrate the robustness of the proposed algorithms, we conduct signal recovery tests with different dimensions and different sparsity. The numerical results are reported in Table 6, Figs. 8 and 9.
\ No newline at end of file
diff --git a/samples/texts/7581440/page_32.md b/samples/texts/7581440/page_32.md
new file mode 100644
index 0000000000000000000000000000000000000000..7d2f30b18e0e59048924703b905af03c406cdee4
--- /dev/null
+++ b/samples/texts/7581440/page_32.md
@@ -0,0 +1,33 @@
+**Remark 5.3** As can be seen from the numerical results of Example 5.3, the proposed algorithms can be applied to signal processing problems in compressed sensing, and they work well (see Figs. 6, 8). Under the same number of iterations, the presented algorithms have smaller mean squared error and CPU time than the compared algorithms (cf. Tables 5, 6), which implies that our proposed algorithms perform better and converge faster in the signal recovery tests (cf. Figs. 7, 9). Furthermore, as in the previous two examples, our algorithms remain robust in this example, because the dimension and sparsity of the signal have no significant influence on our results.
+
+# 6 Conclusion
+
+In this paper, we presented four inertial algorithms for solving the split variational inclusion problem in real Hilbert spaces. Our approaches can adaptively update the iteration step size without prior knowledge of the operator norm. Under some suitable conditions, we established the strong convergence theorems of the suggested algorithms. The applications of our results to split feasibility problems and split minimization problems were given. Finally, we demonstrated the computational efficiency of the proposed algorithms compared with existing ones through numerical experiments in finite- and infinite-dimensional spaces as well as signal recovery problems. The algorithms obtained in this paper improve and extend some known results in the literature.
+
+**Acknowledgements** The authors would like to thank the referees for their valuable comments and suggestions, which improved the paper. The research of the second author was supported by the National Natural Science Foundation of China under Grant No. 11401152.
+
+## Compliance with ethical standards
+
+**Conflict of interest** The authors declare that they have no conflict of interest.
+
+# References
+
+1. Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. **150**, 275–283 (2011)
+
+2. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms **8**, 221–239 (1994)
+
+3. Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. **51**, 2353–2365 (2006)
+
+4. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms **59**, 301–323 (2012)
+
+5. Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. **18**, 441–453 (2002)
+
+6. Combettes, P.L.: The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. **95**, 155–270 (1996)
+
+7. Byrne, C., Censor, Y., Gibali, A., Reich, S.: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. **13**, 759–775 (2012)
+
+8. Kazmi, K.R., Rizvi, S.H.: An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. **8**, 1113–1124 (2014)
+
+9. Chuang, C.S.: Algorithms with new parameter conditions for split variational inclusion problems in Hilbert spaces with application to split feasibility problem. Optimization **65**, 859–876 (2016)
+
+10. Sitthithakerngkiet, K., Deepho, J., Kumam, P.: A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems. Appl. Math. Comput. **250**, 986–1001 (2015)
\ No newline at end of file
diff --git a/samples/texts/7581440/page_33.md b/samples/texts/7581440/page_33.md
new file mode 100644
index 0000000000000000000000000000000000000000..33531831e0247409e709d477380be32e477fe73a
--- /dev/null
+++ b/samples/texts/7581440/page_33.md
@@ -0,0 +1,53 @@
+11. Ceng, L.C., Kobis, E., Zhao, X.: On general implicit hybrid iteration method for triple hierarchical variational inequalities with hierarchical variational inequality constraints. Optimization **69**, 1961–1986 (2020)
+
+12. Shehu, Y., Ogbuisi, F.U.: An iterative method for solving split monotone variational inclusion and fixed point problems. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Math. RACSAM **110**, 503–518 (2016)
+
+13. Ceng, L.C., Shang, M.: Generalized Mann viscosity implicit rules for solving systems of variational inequalities with constraints of variational inclusions and fixed point problems. J. Inequal. Appl. **7**, 933 (2019)
+
+14. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. **9**, 3–11 (2001)
+
+15. Polyak, B.T.: Some methods of speeding up the convergence of iterative methods. USSR Comput. Math. Math. Phys. **4**, 1–17 (1964)
+
+16. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. **2**, 183–202 (2009)
+
+17. Chuang, C.S.: Hybrid inertial proximal algorithm for the split variational inclusion problem in Hilbert spaces with applications. Optimization **66**, 777–792 (2017)
+
+18. Majee, P., Nahak, C.: On inertial proximal algorithm for split variational inclusion problems. Optimization **67**, 1701–1716 (2018)
+
+19. Ceng, L.C., Shang, M.J.: Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization (2019). https://doi.org/10.1080/02331934.2019.1647203
+
+20. Shehu, Y., Gibali, A.: New inertial relaxed method for solving split feasibilities. Optim. Lett. (2020). https://doi.org/10.1007/s11590-020-01603-1
+
+21. Tan, B., Xu, S., Li, S.: Inertial shrinking projection algorithms for solving hierarchical variational inequality problems. J. Nonlinear Convex Anal. **21**, 871–884 (2020)
+
+22. Tan, B., Fan, J., Li, S.: Self-adaptive inertial extragradient algorithms for solving variational inequality problems. Comput. Appl. Math. **40**, Article ID 19 (2021)
+
+23. He, B.S.: A class of projection and contraction methods for monotone variational inequalities. Appl. Math. Optim. **35**, 69–76 (1997)
+
+24. Thong, D.V., Dung, V.T., Cho, Y.J.: A new strong convergence for solving split variational inclusion problems. Numer. Algorithms **86**, 565–591 (2021)
+
+25. Long, L.V., Thong, D.V., Dung, V.T.: New algorithms for the split variational inclusion problems and application to split feasibility problems. Optimization **68**, 2339–2367 (2019)
+
+26. Anh, P.K., Thong, D.V., Dung, V.T.: A strongly convergent Mann-type inertial algorithm for solving split variational inclusion problems. Optim. Eng. **22**, 159–185 (2021)
+
+27. López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. **28**, Article ID 085004 (2012)
+
+28. Kesornprom, S., Cholamjiak, P.: Proximal type algorithms involving linesearch and inertial technique for split variational inclusion problem in Hilbert spaces with applications. Optimization **68**, 2369–2395 (2019)
+
+29. Gibali, A., Mai, D.T., Vinh, N.T.: A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. **15**, Article ID 963 (2019)
+
+30. Tang, Y.: Convergence analysis of a new iterative algorithm for solving split variational inclusion problems. J. Ind. Manag. Optim. **13**, Article ID 1 (2019)
+
+31. Tang, Y., Gibali, A.: New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algorithms **83**, 305–331 (2020)
+
+32. Ceng, L.C., Yuan, Q.: Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequal. Appl. **2019**, Article ID 274 (2019)
+
+33. Ceng, L.C., Petruşel, A., Wen, C.F., Yao, J.C.: Inertial-like subgradient extragradient methods for variational inequalities and fixed points of asymptotically nonexpansive and strictly pseudocontractive mappings. Mathematics **7**, Article ID 860 (2019)
+
+34. Suantai, S., Kesornprom, S., Cholamjiak, P.: Modified proximal algorithms for finding solutions of the split variational inclusions. Mathematics **7**, Article ID 708 (2019)
+
+35. Marino, G., Xu, H.K.: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. **3**, 791–808 (2004)
+
+36. Takahashi, W.: Nonlinear Functional Analysis-Fixed Point Theory and Its Applications. Yokohama Publishers, Yokohama (2000)
+
+37. Chuang, C.S.: Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. **2013**, Article ID 350 (2013)
\ No newline at end of file
diff --git a/samples/texts/7581440/page_34.md b/samples/texts/7581440/page_34.md
new file mode 100644
index 0000000000000000000000000000000000000000..f46066fba58848fa1b521296ba167fde22944f3c
--- /dev/null
+++ b/samples/texts/7581440/page_34.md
@@ -0,0 +1,5 @@
+38. Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. *Nonlinear Anal.* **75**, 742–750 (2012)
+
+39. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. Springer, New York (2017)
+
+**Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
\ No newline at end of file
diff --git a/samples/texts/7581440/page_4.md b/samples/texts/7581440/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..76020b7c94a9e4a04327b7713e42ae0cb022aaeb
--- /dev/null
+++ b/samples/texts/7581440/page_4.md
@@ -0,0 +1,39 @@
+The rest of the paper is organized as follows. Some essential definitions and technical lemmas are given in the next section. In Sect. 3, we propose several algorithms and analyze their convergence. Section 4 introduces some theoretical applications of the proposed methods. Some numerical experiments to verify our theoretical results are presented in Sect. 5. Finally, Sect. 6 concludes the paper with a brief summary.
+
+## 2 Preliminaries
+
+Let $\mathcal{C}$ be a nonempty closed convex subset of a real Hilbert space $\mathcal{H}$. The weak convergence and strong convergence of $\{x_n\}_{n=1}^{\infty}$ to $x$ are represented by $x_n \rightharpoonup x$ and $x_n \to x$, respectively. For each $x, y, z \in \mathcal{H}$, we have the following basic facts:
+
+(1) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$;
+
+(2) $\|\alpha x + \lambda y + \kappa z\|^2 = \alpha \|x\|^2 + \lambda \|y\|^2 + \kappa \|z\|^2 - \alpha \lambda \|x - y\|^2 - \alpha \kappa \|x - z\|^2 - \lambda \kappa \|y - z\|^2$, where $\alpha, \lambda, \kappa \in [0, 1]$ with $\alpha + \lambda + \kappa = 1$.
+
+Let $T: \mathcal{H} \to \mathcal{H}$ be a mapping with its fixed point set $\text{Fix}(T) = \{x : Tx = x\} \ne \emptyset$. Recall that a mapping $T$ is said to be:
+
+(1) *firmly nonexpansive* if
+
+$$ \|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle, \quad \forall x, y \in \mathcal{H}, $$
+
+or equivalently,
+
+$$ \|Tx - Ty\|^2 \le \|x - y\|^2 - \| (I - T)x - (I - T)y \|^{2}, \quad \forall x, y \in \mathcal{H}. $$
+
+(2) *nonexpansive* if
+
+$$ \|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in \mathcal{H}. $$
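For intuition (a numerical check of our own, not part of the paper): metric projections onto closed convex sets are the standard example of firmly nonexpansive mappings, which we can verify at random points for the projection onto the unit ball:

```python
import numpy as np

def proj_unit_ball(x):
    # metric projection onto the closed unit ball in R^3
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = 3.0 * rng.standard_normal(3), 3.0 * rng.standard_normal(3)
    Tx, Ty = proj_unit_ball(x), proj_unit_ball(y)
    # firmly nonexpansive: ||Tx - Ty||^2 <= <Tx - Ty, x - y>
    assert np.dot(Tx - Ty, Tx - Ty) <= np.dot(Tx - Ty, x - y) + 1e-12
```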
+
+Recall that a multi-valued mapping $F: \mathcal{H} \to 2^{\mathcal{H}}$ with domain $\text{Dom}(F) := \{x \in \mathcal{H}: Fx \ne \emptyset\}$ is said to be (i) monotone if, for all $x, y \in \mathcal{H}$, $u \in Fx$ and $v \in Fy$ imply that $\langle u-v, x-y\rangle \ge 0$; (ii) maximal monotone if it is monotone and if, for any $(x, u) \in \mathcal{H} \times \mathcal{H}$, $\langle u-v, x-y\rangle \ge 0$ for every $(y, v) \in \text{Graph}(F)$ (the graph of the mapping $F$) implies that $u \in Fx$. Let $F: \mathcal{H} \to 2^{\mathcal{H}}$ be a set-valued maximal monotone mapping. Then, for all $x \in \mathcal{H}$ and $\gamma > 0$, the resolvent mapping $J_{\gamma F}: \mathcal{H} \to \mathcal{H}$ associated with $F$ is defined by $J_{\gamma F}(x) = (I + \gamma F)^{-1}(x)$, where $I$ stands for the identity operator on $\mathcal{H}$.
+
+For every point $x \in \mathcal{H}$, there exists a unique nearest point in $\mathcal{C}$, denoted by $P_{\mathcal{C}}(x)$, such that $P_{\mathcal{C}}(x) := \operatorname{argmin}\{\|x-y\| : y \in \mathcal{C}\}$. $P_{\mathcal{C}}$ is called the metric (or nearest point) projection of $\mathcal{H}$ onto $\mathcal{C}$. The two projection formulas given next will be used in the sequel (see Sect. 5).
+
+(1) The projection of $x$ onto a half-space $H_{u,v} = \{x : \langle u, x\rangle \le v\}$ is computed by
+
+$$ P_{H_{u,v}}(x) = x - \max\left\{\frac{\langle u, x\rangle - v}{\|u\|^2}, 0\right\}u. $$
+
+(2) The projection of $x$ onto a ball $B[p,q] = \{x : \|x-p\| \le q\}$ is computed by
+
+$$ P_{B[p,q]}(x) = p + \frac{q}{\max\{\|x-p\|, q\}}(x-p). $$
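These two formulas translate directly into code (a small self-check of our own, with sample vectors chosen so both projections act nontrivially):

```python
import numpy as np

def proj_halfspace(x, u, v):
    # projection onto the half-space {z : <u, z> <= v}
    return x - max((np.dot(u, x) - v) / np.dot(u, u), 0.0) * u

def proj_ball(x, p, q):
    # projection onto the ball {z : ||z - p|| <= q}
    return p + q / max(np.linalg.norm(x - p), q) * (x - p)

x = np.array([3.0, 4.0])
u, v = np.array([1.0, 0.0]), 1.0      # half-space: first coordinate <= 1
p, q = np.zeros(2), 2.0               # ball of radius 2 at the origin

assert np.allclose(proj_halfspace(x, u, v), [1.0, 4.0])
assert np.allclose(proj_ball(x, p, q), [1.2, 1.6])   # ||x|| = 5, rescaled to 2
```

Points already inside the set are left unchanged by both formulas, as the `max` terms then reduce to the identity.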
+
+The following lemmas are very helpful for the convergence analysis of our algorithms.
+
+**Lemma 2.1** ([35,36]) Assume that mapping $F: \mathcal{H} \to 2^{\mathcal{H}}$ is set-valued maximal monotone and $\lambda > 0$. Then the following statements hold:
\ No newline at end of file
diff --git a/samples/texts/7581440/page_5.md b/samples/texts/7581440/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..3092fa13eb2a10af23a0345606d3258a81292bec
--- /dev/null
+++ b/samples/texts/7581440/page_5.md
@@ -0,0 +1,54 @@
+(1) $J_{\lambda F}$ is a single-valued and firmly nonexpansive mapping.
+
+(2) $\text{Dom}(J_{\lambda F}) = \mathcal{H}$ and $\text{Fix}(J_{\lambda F}) = F^{-1}(0) = \{x \in \text{Dom}(F) : 0 \in Fx\}$.
+
+(3) $(I - J_{\lambda F})$ is a firmly nonexpansive mapping.
+
+(4) Suppose that $F^{-1}(0) \neq \emptyset$. Then, $\langle x - J_{\lambda F}x, J_{\lambda F}x - z \rangle \geq 0$ for all $x \in \mathcal{H}$, $z \in F^{-1}(0)$.
+
+**Lemma 2.2** Assume that $\mathcal{H}_1$ and $\mathcal{H}_2$ are real Hilbert spaces, and $A : \mathcal{H}_1 \to \mathcal{H}_2$ is a linear operator with its adjoint $A^*$. Let mapping $F : \mathcal{H}_2 \to 2^{\mathcal{H}_2}$ be set-valued maximal monotone and let $J_{\lambda F}$ be a resolvent mapping of $F$. For all $x \in \mathcal{H}_1$, let $T : \mathcal{H}_1 \to \mathcal{H}_1$ and $V : \mathcal{H}_1 \to \mathcal{H}_1$ be defined as $Tx := A^*(I - J_{\lambda F})Ax$ and $Vx := x - \gamma_n A^*(I - J_{\lambda F})Ax$, respectively. Let $\{\gamma_n\}$ be a sequence of positive real numbers, $L = \|A\|^2$ and $\lambda > 0$. Then, for any $x, y \in \mathcal{H}_1$ and $p \in A^{-1}(\text{Fix}(J_{\lambda F}))$, the following statements hold:
+
+$$ (1) \| (I - J_{\lambda F}) Ax - (I - J_{\lambda F}) Ay \|^{2} \leq \langle Tx - Ty, x - y \rangle. $$
+
+$$ (2) \|A^*(I - J_{\lambda F})Ax - A^*(I - J_{\lambda F})Ay\| \leq \|A\|^2 \|x - y\|. $$
+
+$$ (3) \|Vx - p\|^2 \leq \|x - p\|^2 - \gamma_n(2 - \gamma_n L)\|(I - J_{\lambda F})Ax\|^2. $$
+
+**Proof** From the fact that $(I - J_{\lambda F})$ is a firmly nonexpansive mapping (Lemma 2.1 (3)), one has
+
+$$
+\begin{aligned}
+\langle Tx - Ty, x - y \rangle &= \langle A^*(I - J_{\lambda F})Ax - A^*(I - J_{\lambda F})Ay, x - y \rangle \\
+&= \langle (I - J_{\lambda F})Ax - (I - J_{\lambda F})Ay, Ax - Ay \rangle \\
+&\geq \| (I - J_{\lambda F})Ax - (I - J_{\lambda F})Ay \|^{2}, \quad \forall x, y \in \mathcal{H}_{1}.
+\end{aligned}
+$$
+
+Moreover, we have
+
+$$
+\begin{aligned}
+\|A^*(I - J_{\lambda F})Ax - A^*(I - J_{\lambda F})Ay\|^2 &\leq \|A\|^2 \cdot \| (I - J_{\lambda F})Ax - (I - J_{\lambda F})Ay \|^2 \\
+&\leq \|A\|^2 \cdot \|Ax - Ay\|^2 \\
+&\leq \|A\|^4 \cdot \|x - y\|^2, \quad \forall x, y \in \mathcal{H}_1.
+\end{aligned}
+$$
+
+Thus, $\|A^*(I - J_{\lambda F})Ax - A^*(I - J_{\lambda F})Ay\| \leq \|A\|^2 \|x-y\|$. Finally, we prove that statement (3) is valid. Indeed, since $p \in A^{-1}(\text{Fix}(J_{\lambda F}))$, one has $Ap \in \text{Fix}(J_{\lambda F})$ and thus $A^*(I - J_{\lambda F})Ap = 0$. From statement (1), we have
+
+$$ \langle A^*(I - J_{\lambda F})Ax - A^*(I - J_{\lambda F})Ap, x - p \rangle \geq \| (I - J_{\lambda F})Ax - (I - J_{\lambda F})Ap \|^{2}, $$
+
+which yields $\langle A^*(I - J_{\lambda F})Ax, x-p\rangle \geq \|Ax-J_{\lambda F}Ax\|^2$. This, together with the bound $\|A^*z\| \leq \|A\| \|z\|$, yields
+
+$$
+\begin{aligned}
+\|Vx - p\|^2 &= \|x - \gamma_n A^*(I - J_{\lambda F})Ax - p\|^2 \\
+&= \|x - p\|^2 + \| \gamma_n A^*(I - J_{\lambda F})Ax \| ^2 - 2\gamma_n \langle x - p, A^*(I - J_{\lambda F})Ax \rangle \\
+&\leq \|x - p\|^2 + \gamma_n^2 \|A\|^2 \| (I - J_{\lambda F})Ax \| ^2 - 2\gamma_n \| (I - J_{\lambda F})Ax \| ^2 \\
+&= \|x - p\|^2 - \gamma_n (2 - \gamma_n L) \| (I - J_{\lambda F})Ax \| ^2.
+\end{aligned}
+$$
+
+This completes the proof of the lemma. $\square$
+
+**Lemma 2.3** ([37]) Assume that $\mathcal{H}_1$ and $\mathcal{H}_2$ are real Hilbert spaces, and $A : \mathcal{H}_1 \to \mathcal{H}_2$ is a linear operator with its adjoint $A^*$. Let $F_1 : \mathcal{H}_1 \to 2^{\mathcal{H}_1}$ and $F_2 : \mathcal{H}_2 \to 2^{\mathcal{H}_2}$ be two set-valued maximal monotone mappings. Let $J_{\lambda F_1}$ and $J_{\lambda F_2}$ be the resolvent mapping of $F_1$ and $F_2$, respectively. Suppose that the solution set of the problem (SVIP) is non-empty and $\lambda > 0$, $\gamma > 0$. Then, for any $z \in \mathcal{H}_1$, $z$ is a solution of (SVIP) if and only if $J_{\lambda F_1}(z - \gamma A^*(I - J_{\lambda F_2})Az) = z$.
\ No newline at end of file
diff --git a/samples/texts/7581440/page_6.md b/samples/texts/7581440/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..0a15eb2b9f27961af716164467111d716cc53597
--- /dev/null
+++ b/samples/texts/7581440/page_6.md
@@ -0,0 +1,44 @@
+**Lemma 2.4 ([38])** Let $\{\Upsilon_n\}$ be a sequence of nonnegative real numbers, $\{\zeta_n\}$ be a sequence of real numbers in $(0, 1)$ with $\sum_{n=1}^{\infty} \zeta_n = \infty$, and $\{\Phi_n\}$ be a sequence of real numbers. Assume that
+
+$$ \Upsilon_{n+1} \leq (1 - \zeta_n)\Upsilon_n + \zeta_n\Phi_n, \quad \forall n \geq 1. $$
+
+If $\limsup_{k \to \infty} \Phi_{n_k} \leq 0$ for every subsequence $\{\Upsilon_{n_k}\}$ of $\{\Upsilon_n\}$ satisfying $\liminf_{k \to \infty} (\Upsilon_{n_k+1} - \Upsilon_{n_k}) \geq 0$, then $\lim_{n \to \infty} \Upsilon_n = 0$.
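A small numerical illustration of the lemma, with toy sequences chosen only for this sketch: taking $\zeta_n = 1/(n+1)$ (so $\sum \zeta_n = \infty$) and $\Phi_n = 1/\sqrt{n} \to 0$, the recursion drives $\Upsilon_n$ to $0$:

```python
import math

# Illustrative sequences (not from the paper): the recursion
# Upsilon_{n+1} <= (1 - zeta_n) Upsilon_n + zeta_n Phi_n with
# sum zeta_n = inf and Phi_n -> 0 forces Upsilon_n -> 0.
upsilon = 5.0                      # arbitrary nonnegative starting value
history = [upsilon]
for n in range(1, 20001):
    zeta = 1.0 / (n + 1)
    phi = 1.0 / math.sqrt(n)
    upsilon = (1 - zeta) * upsilon + zeta * phi   # equality: worst case of the bound
    history.append(upsilon)
# history[-1] is now small, consistent with lim Upsilon_n = 0
```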
+
+# 3 Main Results
+
+In this section, we propose four self-adaptive inertial algorithms to solve the split variational inclusion problem (SVIP). The advantage of our algorithms is that they do not require the prior information of the operator norm. Before introducing our algorithms, we assume that the following conditions are satisfied.
+
+(C1) The solution set of problem (SVIP) is nonempty, i.e., $\Omega \neq \emptyset$.
+
+(C2) Assume that $\mathcal{H}_1$ and $\mathcal{H}_2$ are real Hilbert spaces, and $A: \mathcal{H}_1 \to \mathcal{H}_2$ is a bounded linear operator with its adjoint $A^*$. Let $F_1: \mathcal{H}_1 \to \mathcal{H}_1$ and $F_2: \mathcal{H}_2 \to \mathcal{H}_2$ be two set-valued maximal monotone mappings.
+
+(C3) Let $\{\varpi_n\}$ be a positive sequence such that $\lim_{n \to \infty} \frac{\varpi_n}{\sigma_n} = 0$, where $\{\sigma_n\} \subset (0, 1)$ satisfies $\lim_{n \to \infty} \sigma_n = 0$ and $\sum_{n=1}^{\infty} \sigma_n = \infty$.
+
+(C4) The mapping $f: \mathcal{H}_1 \to \mathcal{H}_1$ is $\rho$-contractive with constant $\rho \in [0, 1)$.
+
+## 3.1 The Algorithm 3.1
+
+In this subsection, inspired by the inertial method, Byrne et al.'s method [7], the projection and contraction method and the viscosity-type method, we introduce a self-adaptive inertial projection and contraction method to solve the (SVIP). The details of the first iterative scheme are described in Algorithm 3.1.
+
+**Remark 3.1** From Algorithm 3.1, we have the following observations.
+
+(i) It follows from (3.1) that
+
+$$ \lim_{n \to \infty} \frac{\vartheta_n}{\sigma_n} \|x_n - x_{n-1}\| = 0. $$
+
+Indeed, we have $\vartheta_n \|x_n - x_{n-1}\| \leq \varpi_n$ for all $n$, which together with $\lim_{n \to \infty} \frac{\varpi_n}{\sigma_n} = 0$ implies that
+
+$$ \lim_{n \to \infty} \frac{\vartheta_n}{\sigma_n} \|x_n - x_{n-1}\| \leq \lim_{n \to \infty} \frac{\varpi_n}{\sigma_n} = 0. $$
+
+(ii) If $u_n = q_n$ or $c_n = 0$, then $q_n \in \Omega$. Indeed, from the definition of $c_n$, one obtains
+
+$$
+\begin{aligned}
+\|c_n\| &\geq \|u_n - q_n\| - \gamma_n \|A^*(I - J_{\lambda F_2})Au_n - A^*(I - J_{\lambda F_2})Aq_n\| \\
+&\geq (1 - \delta) \|u_n - q_n\|.
+\end{aligned}
+$$
+
+It can be easily proved that $\|c_n\| \leq (1 + \delta)\|u_n - q_n\|$. Therefore,
+
+$$ (1 - \delta)\|u_n - q_n\| \leq \|c_n\| \leq (1 + \delta)\|u_n - q_n\|, $$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_7.md b/samples/texts/7581440/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..cd0c842ed2b493ee71807db5847758de73309052
--- /dev/null
+++ b/samples/texts/7581440/page_7.md
@@ -0,0 +1,51 @@
+**Algorithm 3.1** The inertial viscosity-type projection and contraction algorithm for (SVIP)
+
+**Initialization:** Set $\lambda > 0$, $\vartheta > 0$, $\zeta > 0$, $\chi \in (0, 1)$, $\delta \in (0, 1)$, $\kappa \in (0, 2)$ and let $x_0, x_1 \in \mathcal{H}_1$.
+
+**Iterative Steps:** Calculate $x_{n+1}$ as follows:
+
+**Step 1.** Given the iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), set $u_n = x_n + \vartheta_n(x_n - x_{n-1})$, where
+
+$$ \vartheta_n = \begin{cases} \min \left\{ \frac{\varpi_n}{\|x_n - x_{n-1}\|}, \vartheta \right\}, & \text{if } x_n \neq x_{n-1}; \\ \vartheta, & \text{otherwise.} \end{cases} \quad (3.1) $$
+
+**Step 2.** Compute $q_n = J_{\lambda F_1}[u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n]$, where $\gamma_n = \zeta \chi^{r_n}$ and $r_n$ is the smallest nonnegative integer such that
+
+$$ \gamma_n \|A^*(I - J_{\lambda F_2})Au_n - A^*(I - J_{\lambda F_2})Aq_n\| \le \delta \|u_n - q_n\|. \quad (3.2) $$
+
+If $u_n = q_n$, then stop and $q_n$ is a solution of the problem (SVIP). Otherwise, go to **Step 3**.
+**Step 3.** Compute $g_n = u_n - \kappa \mu_n c_n$, where
+
+$$ c_n = u_n - q_n - \gamma_n [A^*(I - J_{\lambda F_2})Au_n - A^*(I - J_{\lambda F_2})Aq_n], \\ \mu_n = \frac{\langle u_n - q_n, c_n \rangle}{\|c_n\|^2}. \quad (3.3) $$
+
+**Step 4.** Compute $x_{n+1} = \sigma_n f(x_n) + (1-\sigma_n)g_n$.
+Set $n := n + 1$ and go to **Step 1**.
+
+and thus $u_n = q_n$ iff $c_n = 0$. Hence, if $u_n = q_n$ or $c_n = 0$, then $q_n = J_{\lambda F_1}[q_n - \gamma_n A^*(I - J_{\lambda F_2})Aq_n]$. This implies that $q_n \in \Omega$ by means of Lemma 2.3.
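To make the scheme concrete, here is a minimal runnable sketch of Algorithm 3.1 on a toy split feasibility instance of (SVIP), where $F_1$ and $F_2$ are normal cones of boxes so both resolvents reduce to projections. The operator $A$, the boxes, the contraction $f$, and all parameter values are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

# Toy (SVIP): F1 = N_C, F2 = N_Q (normal cones of boxes), so the resolvents
# J_{lambda F1}, J_{lambda F2} are the projections onto C and Q, and
# Omega = {x in C : Ax in Q}. Everything below is an illustrative instance.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
J1 = lambda x: np.clip(x, -1.0, 1.0)      # resolvent of F1, C = [-1, 1]^2
J2 = lambda y: np.clip(y, 0.0, 2.0)       # resolvent of F2, Q = [0, 2]^2
f = lambda x: 0.5 * x                     # rho-contraction with rho = 0.5

zeta, chi, delta, kappa, theta_bar = 1.0, 0.5, 0.9, 1.0, 0.3

def G(z):                                 # z -> A^T (I - J2) A z
    Az = A @ z
    return A.T @ (Az - J2(Az))

x_prev, x = np.array([5.0, -4.0]), np.array([4.0, 3.0])
for n in range(1, 201):
    sigma, varpi = 1.0 / (n + 1), 1.0 / (n + 1) ** 2   # (C3): varpi_n/sigma_n -> 0
    gap = np.linalg.norm(x - x_prev)
    theta = theta_bar if gap == 0 else min(varpi / gap, theta_bar)
    u = x + theta * (x - x_prev)          # Step 1: inertial extrapolation (3.1)
    gamma = zeta                          # Step 2: Armijo-type search (3.2)
    q = J1(u - gamma * G(u))
    while gamma * np.linalg.norm(G(u) - G(q)) > delta * np.linalg.norm(u - q):
        gamma *= chi
        q = J1(u - gamma * G(u))
    if np.allclose(u, q):                 # u_n = q_n: q_n already solves (SVIP)
        x_prev, x = x, q
        continue
    c = u - q - gamma * (G(u) - G(q))     # Step 3: projection-contraction (3.3)
    mu = np.dot(u - q, c) / np.dot(c, c)
    g = u - kappa * mu * c
    x_prev, x = x, sigma * f(x) + (1 - sigma) * g      # Step 4: viscosity step

feasibility_gap = np.linalg.norm(A @ x - J2(A @ x))    # 0 iff Ax in Q
```

With these toy choices the iterates approach the solution set, and the exit test `np.allclose(u, q)` mirrors the stopping condition in Step 2.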
+
+The following lemmas are quite helpful to analyze the convergence of our algorithms.
+
+**Lemma 3.1** *The Armijo-like search rule (3.2) is well defined and $\min\{\zeta, \frac{\delta\chi}{L}\} \le \gamma_n \le \zeta$.*
+
+*Proof* Indeed, using Lemma 2.2 (2), one sees that
+
+$$ \|A^*(I - J_{\lambda F_2})Au_n - A^*(I - J_{\lambda F_2})Aq_n\| \le L\|u_n - q_n\|, $$
+
+where $L = \|A\|^2$. Obviously, (3.2) holds for all $0 < \gamma_n \le \delta L^{-1}$. On the other hand, it is easy to see that $\gamma_n \le \zeta$. If $\gamma_n = \zeta$, then this lemma is proved. Otherwise, if $\gamma_n < \zeta$, then inequality (3.2) will be violated when $\gamma = \gamma_n \chi^{-1}$, which indicates that $\gamma_n \chi^{-1} > \delta L^{-1}$. Hence $\gamma_n \ge \min\{\zeta, \frac{\delta\chi}{L}\}$. $\square$
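A minimal sketch of the Armijo-like rule (3.2) and the bound of Lemma 3.1. For an exactly checkable worst case we take $F_2$ to be the normal cone of $\{0\}$, whose resolvent is the zero map, so that $(I - J_{\lambda F_2}) = I$ and $G(z) = A^T A z$ is Lipschitz with constant exactly $L = \|A\|^2$; the identity resolvent for $F_1$ (the resolvent of the zero operator) and all numbers are illustrative:

```python
import numpy as np

# gamma_n = zeta * chi**r for the smallest nonnegative integer r passing (3.2).
def armijo_gamma(G, u, q_of, zeta, chi, delta):
    gamma = zeta
    q = q_of(u, gamma)
    while gamma * np.linalg.norm(G(u) - G(q)) > delta * np.linalg.norm(u - q):
        gamma *= chi
        q = q_of(u, gamma)
    return gamma

A = np.array([[2.0, 0.0], [0.0, 1.0]])
L = np.linalg.norm(A, 2) ** 2                  # ||A||^2 = 4
G = lambda z: A.T @ (A @ z)                    # (I - J_{lambda F2}) = I case
q_of = lambda u, gamma: u - gamma * G(u)       # identity resolvent for F1
zeta, chi, delta = 1.0, 0.5, 0.5
gamma = armijo_gamma(G, np.array([1.0, 1.0]), q_of, zeta, chi, delta)
```

The returned stepsize satisfies the two-sided bound $\min\{\zeta, \delta\chi/L\} \le \gamma_n \le \zeta$ of Lemma 3.1.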
+
+**Lemma 3.2** Suppose that Conditions (C1)-(C3) hold. Let $\{u_n\}$, $\{q_n\}$ and $\{g_n\}$ be three sequences created by Algorithm 3.1. Then, for all $p \in \Omega$,
+
+$$ \|g_n - p\|^2 \le \|u_n - p\|^2 - \frac{2 - \kappa}{\kappa} \|g_n - u_n\|^2, $$
+
+and
+
+$$ \|u_n - q_n\|^2 \le \frac{(1 + \delta)^2}{(1 - \delta)^2 \kappa^2} \|g_n - u_n\|^2. $$
+
+*Proof* Indeed, from the definitions of $g_n$ and $\mu_n$, we get
+
+$$
+\begin{align}
+\|g_n - p\|^2 &= \|u_n - p\|^2 - 2\kappa\mu_n\langle u_n - p, c_n\rangle + \kappa^2\mu_n^2\|c_n\|^2 \\
+&= \|u_n - p\|^2 - 2\kappa\mu_n\langle u_n - p, c_n\rangle + \kappa^2\mu_n\langle u_n - q_n, c_n\rangle .
+\tag{3.4}
+\end{align}
+$$
\ No newline at end of file
diff --git a/samples/texts/7581440/page_8.md b/samples/texts/7581440/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a6738e05618fae5e007940b7ef81d35cb629543
--- /dev/null
+++ b/samples/texts/7581440/page_8.md
@@ -0,0 +1,71 @@
+It is easy to see that the following equation holds:
+
+$$
+\langle u_n - p, c_n \rangle = \langle u_n - q_n, c_n \rangle + \langle q_n - p, c_n \rangle. \tag{3.5}
+$$
+
+Next, we prove that
+
+$$
+\langle q_n - p, c_n \rangle \ge 0. \tag{3.6}
+$$
+
+From the definition of $q_n$ and Lemma 2.1 (4), we obtain
+
+$$
+\langle u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n - q_n, q_n - p \rangle \ge 0. \quad (3.7)
+$$
+
+It follows from $p \in \Omega$ that $Ap \in F_2^{-1}(0)$. Using Lemma 2.1 (2), one sees that $Ap \in \text{Fix}(J_{\lambda F_2})$. This indicates that $A^* (I - J_{\lambda F_2})Ap = 0$. By Lemma 2.2 (1), one has
+
+$$
+\langle \gamma_n (A^*(I - J_{\lambda F_2})A q_n - A^*(I - J_{\lambda F_2})A p), q_n - p \rangle \ge 0,
+$$
+
+which yields $\langle q_n - p, \gamma_n A^*(I - J_{\lambda F_2})A q_n \rangle \ge 0$. Combining this with Eq. (3.7) gives
+
+$$
+\langle q_n - p, u_n - q_n - \gamma_n [A^*(I - J_{\lambda F_2})Au_n - A^*(I - J_{\lambda F_2})A q_n] \rangle \ge 0.
+$$
+
+Thus, (3.6) follows from the definition of $c_n$. Combining (3.4), (3.5) and (3.6), we obtain
+
+$$
+\begin{align*}
+\|g_n - p\|^2 &\le \|u_n - p\|^2 - (2\kappa - \kappa^2)\mu_n \langle u_n - q_n, c_n \rangle \\
+&= \|u_n - p\|^2 - \frac{2-\kappa}{\kappa} \|g_n - u_n\|^2.
+\end{align*}
+$$
+
+On the other hand, by using (3.2), we have
+
+$$
+\begin{align*}
+\langle u_n - q_n, c_n \rangle &= \langle u_n - q_n, u_n - q_n - \gamma_n [A^*(I - J_{\lambda F_2})Au_n - A^*(I - J_{\lambda F_2})Aq_n] \rangle \\
+&= \|u_n - q_n\|^2 - \gamma_n \langle u_n - q_n, A^*(I - J_{\lambda F_2})Au_n - A^*(I - J_{\lambda F_2})Aq_n \rangle \\
+&\geq \|u_n - q_n\|^2 - \gamma_n \|u_n - q_n\| \|A^*(I - J_{\lambda F_2})Au_n - A^*(I - J_{\lambda F_2})Aq_n\| \\
+&\geq (1-\delta) \|u_n - q_n\|^2.
+\end{align*}
+$$
+
+From Remark 3.1 (ii), one obtains $\|c_n\|^2 \le (1+\delta)^2 \|u_n-q_n\|^2$. According to the definition of $\mu_n$, we have
+
+$$
+\mu_n = \frac{\langle u_n - q_n, c_n \rangle}{\|c_n\|^2} \ge \frac{1-\delta}{(1+\delta)^2}.
+$$
+
+By the definitions of $g_n$ and $\mu_n$, we get
+
+$$
+\|u_n - q_n\|^2 \leq \frac{1}{1-\delta} \langle u_n - q_n, c_n \rangle \leq \frac{1}{(1-\delta)\mu_n \kappa^2} \|g_n - u_n\|^2.
+$$
+
+Therefore, we conclude that
+
+$$
+\|u_n - q_n\|^2 \leq \frac{(1+\delta)^2}{(1-\delta)^2 \kappa^2} \|g_n - u_n\|^2.
+$$
+
+This completes the proof of the lemma. $\square$
+
+**Lemma 3.3** Assume that the sequences $\{u_n\}$ and $\{q_n\}$ are formed by Algorithm 3.1. If $\{u_{n_k}\}$ converges weakly to $p$ and $\lim_{n\to\infty} \|u_n - q_n\| = 0$, then $p \in \Omega$.
\ No newline at end of file
diff --git a/samples/texts/7581440/page_9.md b/samples/texts/7581440/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ae6a0f5ac046f32225fdba4eeb07433143fafc9
--- /dev/null
+++ b/samples/texts/7581440/page_9.md
@@ -0,0 +1,55 @@
+**Proof** Take any $z \in \Omega$. Then $z \in F_1^{-1}(0)$, and thus $z \in \text{Fix}(J_{\lambda F_1})$ by means of Lemma 2.1 (2). Using the definition of $q_n$ and Lemma 2.1 (4), we obtain
+
+$$ \langle u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n - q_n, q_n - z \rangle \ge 0. \quad (3.8) $$
+
+It follows from $z \in \Omega$ that $Az \in F_2^{-1}(0)$. Hence $Az \in \text{Fix}(J_{\lambda F_2})$. This indicates that $A^*(I - J_{\lambda F_2})Az = 0$. From Lemma 2.2 (1), one infers that
+
+$$ \langle A^*(I - J_{\lambda F_2})Aq_n - A^*(I - J_{\lambda F_2})Az, q_n - z \rangle \ge \| (I - J_{\lambda F_2})Aq_n \|^{2}. \quad (3.9) $$
+
+Combining (3.8), (3.9) and Lemma 2.2 (2), we have
+
+$$
+\begin{aligned}
+\gamma_n \|Aq_n - J_{\lambda F_2} Aq_n\|^2 &\le \langle \gamma_n A^*(I - J_{\lambda F_2}) Aq_n, q_n - z \rangle \\
+&\le \langle u_n - q_n - \gamma_n A^*(I - J_{\lambda F_2}) Au_n + \gamma_n A^*(I - J_{\lambda F_2}) Aq_n, q_n - z \rangle \\
+&\le \|u_n - q_n - \gamma_n A^*(I - J_{\lambda F_2}) Au_n + \gamma_n A^*(I - J_{\lambda F_2}) Aq_n\| \|q_n - z\| \\
+&\le (\|u_n - q_n\| + \gamma_n \|A^*(I - J_{\lambda F_2}) Au_n - A^*(I - J_{\lambda F_2}) Aq_n\|) \|q_n - z\| \\
+&\le (1 + \gamma_n \|A\|^2) \|u_n - q_n\| \|q_n - z\|.
+\end{aligned}
+$$
+
+Since $\gamma_n > 0$ and $\lim_{n \to \infty} \|u_n - q_n\| = 0$, we find that $\lim_{n \to \infty} \|Aq_n - J_{\lambda F_2} Aq_n\| = 0$. Moreover, by Lemma 2.1 (1), one observes that
+
+$$
+\begin{aligned}
+\|Au_n - J_{\lambda F_2} Au_n\| &\le \|Au_n - Aq_n - (J_{\lambda F_2} Au_n - J_{\lambda F_2} Aq_n)\| + \|Aq_n - J_{\lambda F_2} Aq_n\| \\
+&\le 2\|A\|\|u_n - q_n\| + \|Aq_n - J_{\lambda F_2} Aq_n\|.
+\end{aligned}
+$$
+
+This indicates that
+
+$$ \lim_{n \to \infty} \|Au_n - J_{\lambda F_2} Au_n\| = 0. \quad (3.10) $$
+
+From Lemma 2.1 (1) and the definition of $q_n$, we get
+
+$$
+\begin{aligned}
+\|q_n - J_{\lambda F_1} u_n\| &= \|J_{\lambda F_1}(u_n - \gamma_n A^*(I - J_{\lambda F_2})Au_n) - J_{\lambda F_1} u_n\| \\
+&\le \gamma_n \|A^*\| \|(I - J_{\lambda F_2})Au_n\|,
+\end{aligned}
+$$
+
+which together with (3.10) gives that $\lim_{n \to \infty} \|q_n - J_{\lambda F_1} u_n\| = 0$. From $\lim_{n \to \infty} \|u_n - q_n\| = 0$, one obtains $\lim_{n \to \infty} \|u_n - J_{\lambda F_1} u_n\| = 0$. Combining this with Lemma 2.1 (1) and $u_{n_k} \rightharpoonup p$ yields $p \in \text{Fix}(J_{\lambda F_1})$. In view of the fact that $A$ is a bounded linear operator and $u_{n_k} \rightharpoonup p$, we get $Au_{n_k} \rightharpoonup Ap$. Using (3.10) and Lemma 2.1 (1), we obtain $Ap \in \text{Fix}(J_{\lambda F_2})$. Thus, we deduce that $p \in \Omega$. The proof is completed. $\square$
+
+**Remark 3.2** It is worth noting that the proof of Lemma 3.3 does not use the definition of Armijo stepsize (3.2).
+
+We are now in a position to prove the strong convergence result of Algorithm 3.1.
+
+**Theorem 3.1** Suppose that Conditions (C1)–(C4) hold. Then the sequence $\{x_n\}$ formed by Algorithm 3.1 converges to $p \in \Omega$ in norm, where $p = P_\Omega f(p)$.
+
+**Proof** For simplicity, we divide the proof into four claims.
+
+**Claim 1.** $\{x_n\}$ is bounded. Indeed, it follows from Lemma 3.2 and $\kappa \in (0, 2)$ that
+
+$$ \|g_n - p\| \le \|u_n - p\|, \quad \forall n \ge 1. \quad (3.11) $$
\ No newline at end of file
diff --git a/samples/texts/7683722/page_1.md b/samples/texts/7683722/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..2b77f3ea8c47ef6dfac199159178a888b76ae34d
--- /dev/null
+++ b/samples/texts/7683722/page_1.md
@@ -0,0 +1,29 @@
+# LocalDrop: A Hybrid Regularization for Deep Neural Networks
+
+Ziqing Lu, Chang Xu, Bo Du, Takashi Ishida, Lefei Zhang, and Masashi Sugiyama
+
+**Abstract**—In neural networks, developing regularization algorithms to address overfitting is one of the major study areas. We propose a new approach for the regularization of neural networks based on the local Rademacher complexity, called LocalDrop. A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs), including drop rates and weight matrices, has been developed based on the proposed upper bound of the local Rademacher complexity by rigorous mathematical deduction. The analyses of dropout in FCNs and DropBlock in CNNs with keep rate matrices in different layers are also included in the complexity analyses. With the new regularization function, we establish a two-stage procedure to obtain the optimal keep rate matrix and weight matrix for the whole training model. Extensive experiments have been conducted to demonstrate the effectiveness of LocalDrop in different models by comparing it with several algorithms, along with the effects of different hyperparameters on the final performances.
+
+**Index Terms**—Deep neural networks, Dropout, DropBlock, Regularization.
+
+## 1 INTRODUCTION
+
+Neural networks have lately shown impressive performance in sophisticated real-world situations, including image classification [1], object recognition [2] and image captioning [3]. Low, middle and high level features are integrated into deep neural networks, which are usually trained in an end-to-end manner. The levels of features can be enriched by stacking more layers (i.e., increasing the network depth) [4] or widening the layers (i.e., including more filters) [5], which then leads to a large number of parameters to be optimized in fully-connected networks (FCNs) and convolutional networks (CNNs). However, given limited training data, it is difficult to figure out appropriate values for such a tremendous volume of variables without overfitting.
+
+Some methods have been developed to enable neural networks to perform well not just on training data, but also on new inputs [6]. Among them, regularization is a major strategy for achieving this goal. The most straightforward solution is to stop the training as soon as the validation performance becomes suboptimal. This tactic is known as early stopping, which is widely used in deep learning regularization. Similarly to other machine learning models, L1 and L2 regularizations are also often adopted to penalize weights in neural networks. Since the seminal work by Hinton [7], dropout has become a popular technique among deep learning researchers to regularize a broad family of models.
+
+* Z. Lu, B. Du, and L. Zhang are with the National Engineering Research Center for Multimedia Software, School of Computer Science, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan 430079, China,
+E-mail: lzqbob@gmail.com; dubo@whu.edu.cn; zhanglefei@whu.edu.cn.
+* C. Xu is with the School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, NSW 2008, Australia
+E-mail: c.xu@sydney.edu.au.
+* T. Ishida and M. Sugiyama are with RIKEN Center for Advanced Intelligence Project, and Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, the University of Tokyo.
+E-mail: ishida@ms.ky.tokyo.ac.jp; sugi@ky.tokyo.ac.jp.
+
+Manuscript received 6 May 2020; revised 16 Dec. 2020; accepted 7 Feb. 2021.
+Corresponding authors: Bo Du and Chang Xu.
+
+It randomly masks out part of the network and trains the ensemble consisting of all sub-networks. Extending the idea of dropout, DropConnect randomly drops weights instead of activations [8]. Other dropout variants include adaptive dropout [9], variational dropout [10] and sparse variational dropout [11]. In convolutional neural networks (CNNs), DropBlock [12] extends the idea of dropout to make it adaptable to the structure of CNNs. The original dropout brings limited benefits in CNNs because the structure of convolutional networks obliges every unit to correlate with its adjacent units in the feature map, so the adjacent units would retain the information of any given dropped unit. Batch normalization is another popular approach that can evidently improve generalization by reducing the internal covariate shift of feature representations [13]. Other remarkable regularization techniques include weight decay, spatial shuffling [14], fractional max-pooling [15], and model averaging [16].
+
+These regularization approaches have been empirically proven to improve the generalization of neural networks and have achieved impressive performance in various applications. There have been several attempts at theoretically explaining the generalization of deep neural networks [17], [18], [19], [20], but the link between theory and application is tenuous. Most of the regularization techniques were heuristically developed, and only a handful of them were amended and complemented with solid analyses on generalization. For instance, dropout [7] was first sought after thanks to its impressive experimental performance, and theoretical analyses were conducted later in the Bayesian framework [21] and based on the Rademacher complexity [22]. However, several recent dropout variants [11], [23], [24] have been introduced without theoretical underpinning.
+
+Instead of first heuristically designing a regularization and then seeking feasible theoretical explanations, we advocate reverse thinking: distilling the regularization technique explicitly from the generalization analyses of neural networks. More specifically, we propose a new regularization algorithm called LocalDrop to regularize neural networks, including FCNs and CNNs, via the local Rademacher complexity of the function class [25]. As the Rademacher complexity provides global estimates of
\ No newline at end of file
diff --git a/samples/texts/7683722/page_10.md b/samples/texts/7683722/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..786f415f8dd754f004858b3c01725eaa3e56b363
--- /dev/null
+++ b/samples/texts/7683722/page_10.md
@@ -0,0 +1,91 @@
+and DP210101859. M. Sugiyama was supported by KAKENHI 20H04206. T. Ishida was supported by JST, ACT-X Grant Number JPMJAX2005, Japan.
+
+
+# 7 APPENDIX
+
+## 7.1 Proof of Theorem 1
+
+*Proof.* Since the logistic loss function $\ell$ and the function $f^L$ are both 1-Lipschitz, based on the Contraction lemma, the local Rademacher complexity of the loss function $\mathcal{L}$ can be written as
+
+$$
+\begin{aligned}
+\hat{R}_S(\mathcal{L}) &= \frac{1}{n} \mathbb{E}_{\epsilon_i} \left[ \sup_{f^L \in \mathcal{F}} \sum_{i=1}^n \epsilon_i \hat{\ell}(f^L(\mathbf{x}_i), \mathbf{y}_i) \right] \\
+&\leq \frac{k}{n} \mathbb{E}_{\epsilon_i} \left[ \sup_{f^L \in \mathcal{F}} \sum_{i=1}^n \epsilon_i f^L(\mathbf{x}_i) \right] = k R_S(\mathcal{F}^L),
+\end{aligned}
+\quad (27) $$
+
+where $k$ is the number of classes to predict. To prove the upper bound of $R_S(\mathcal{F}^L)$ in a recursive way, we define a variant of the local Rademacher complexity $\hat{R}_S(\mathcal{F}^L)$ with 2-norm inside the supremum, which is
+
+$$ \hat{R}_S(\mathcal{F}^L) = \frac{1}{n} \mathbb{E}_{\epsilon_i} \left[ \sup_{f^L \in \mathcal{F}} \left\| \sum_{i=1}^{n} \epsilon_i f^L(\mathbf{x}_i) \right\|_2 \right]. \quad (28) $$
+
+We also have
+
+$$ R_S(\mathcal{F}^L) \leq \hat{R}_S(\mathcal{F}^L). \quad (29) $$
+
+Combining Eqs. 1 and 2, we have
+
+$$
+\begin{aligned}
+& f^l(\mathbf{x}_i; \mathbf{W}^{l:}, \boldsymbol{\theta}^{:(l-1)}) \\
+&= \mathbb{E}_{\mathbf{r}:l-1} f^l(\mathbf{x}_i; \mathbf{W}^{l:}, \mathbf{r}^{:(l-1)}) \\
+&= \mathbb{E}_{\mathbf{r}:l-1} \mathbf{W}^l \mathbf{r}^{l-1} \odot \phi(f^{l-1}(\mathbf{x}_i; \mathbf{W}^{:(l-1)}, \mathbf{r}^{:(l-2)})).
+\end{aligned}
+\quad (30) $$
+
+To simplify our derivation process, we define
+
+$$ g^{l-1}(\mathbf{x}_i) = \mathbb{E}_{\mathbf{r}:l-1} \mathbf{r}^{l-1} \odot \phi(f^{l-1}(\mathbf{x}_i; \mathbf{W}^{:(l-1)}, \mathbf{r}^{:(l-2)})), \quad (31) $$
+
+$$ g_{\epsilon}(x) = \frac{\sum_{i=1}^{n} \epsilon_i g^{l-1}(x_i)}{n}. \quad (32) $$
+
+Hence the variant of the local Rademacher complexity can be rewritten as
+
+$$
+\begin{aligned}
+\hat{R}_S(\mathcal{F}^l) &= \frac{1}{n} \mathbb{E}_{\epsilon_i} \left[ \sup_{\mathbf{W}^{l:}} \left\| \sum_{i=1}^{n} \epsilon_i f^l(\mathbf{x}_i; \mathbf{W}^{l:}, \boldsymbol{\theta}^{:(l-1)}) \right\|_2 \right] \\
+&= \mathbb{E}_{\epsilon_i} \left[ \sup_{\mathbf{W}^{l:}} \| \frac{1}{n} \sum_{i=1}^{n} \epsilon_i f^l(\mathbf{x}_i; \mathbf{W}^{l:}, \boldsymbol{\theta}^{:(l-1)}) \|_2 \right] \\
+&= \mathbb{E}_{\epsilon_i} \left[ \sup_{\mathbf{W}^{l:}} \| \frac{1}{n} \sum_{i=1}^{n} \epsilon_i \mathbf{W}^l g^{l-1}(\mathbf{x}_i) \|_2 \right] \\
+&= \mathbb{E}_{\epsilon} \left[ \sup_{\mathbf{W}^{l:}} \| \mathbf{W}^l g_{\epsilon}(\mathbf{x}) \|_2 \right].
+\end{aligned}
+\quad (33) $$
+
+Considering the SVD $\mathbf{W} = U\Sigma V^T$, $\mathbf{W}^l$ can be written as
+
+$$ \mathbf{W}^l = \sum_{j=1}^{\text{rank}(\mathbf{W}^l)} \sigma_j \mathbf{u}_j \mathbf{v}_j^T, \quad (34) $$
+
+where $\mathbf{u}_j$ and $\mathbf{v}_j$ are the column vectors of $U$ and $V$. Then we separate $\mathbf{W}^l$ into two parts by $h$. $\forall h$, $0 \le h \le \text{rank}(\mathbf{W}^l)$, we have
+
+$$
+\begin{aligned}
+\hat{R}_S(\mathcal{F}^l) &= \mathbb{E}_\epsilon [\sup_{\mathbf{W}^{l:}} \| \sum_{j=1}^{\text{rank}(\mathbf{W}^l)} \sigma_j \mathbf{u}_j \mathbf{v}_j^T g_\epsilon(\mathbf{x}) \|_2] \\
+&\leq \mathbb{E}_\epsilon [\sup_{\mathbf{W}^{l:}} \| \sum_{j=1}^{h} \sigma_j \mathbf{u}_j \mathbf{v}_j^T g_\epsilon(\mathbf{x}) \|_2] + \mathbb{E}_\epsilon [\sup_{\mathbf{W}^{l:}} \| \sum_{j>h}^{\text{rank}(\mathbf{W}^l)} \sigma_j \mathbf{u}_j \mathbf{v}_j^T g_\epsilon(\mathbf{x}) \|_2].
+\end{aligned}
+\quad (35) $$
+
+After that, we derive the upper bounds of the two terms on the right-hand side of inequality (35) separately, and then combine these two upper bounds with Eqs. 27 and 28 to get the upper bound of the local Rademacher complexity. To simplify the derivation, we denote by $F_1$ the first term on the right-hand side of Eq. 35 and by $F_2$ the second term. Firstly, we consider $F_1$. Based on Eqs. 1, 31, 32, and the SVD, we have
+
+$$
+\begin{aligned}
+F_1 &= \mathbb{E}_{\epsilon}[\sup_{\mathbf{W}^{l:}} \| \sum_{j=1}^{h} \sigma_j \mathbf{u}_j \mathbf{v}_j^T g_{\epsilon}(x) \|_2] \\
+&\leq \sup_{\mathbf{W}^{l:}} \| \sum_{j=1}^{h} \sigma_j \mathbf{u}_j \mathbf{v}_j^T g^{l-1}(x_i) \|_2 \\
+&\leq \sup_{\mathbf{W}^{l:}} (\|\frac{1}{n}\sum_{i=1}^{n} f^l(x_i; W^{l:}, \boldsymbol{\theta}^{:(l-1)})\|_2)^{\frac{1}{2}} \\
+&\leq \sup_{\mathbf{W}^{l:}} (\frac{1}{n}\sum_{i=1}^{n}\|f^l(x_i; W^{l:}, \boldsymbol{\theta}^{:(l-1)})\|_2)^{\frac{1}{2}} \\
+&\leq \sup_{\mathbf{W}^{l:}} (\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{\boldsymbol{\theta}: (l-1)} \|f^l(x_i; W^{l:}, \boldsymbol{\theta}: (l-1))\|_2)^{\frac{1}{2}}.
+\end{aligned}
+\quad (36) $$
+
+Given the restriction of $f^l$ in Theorem 1, we have
+
+$$ F_1 \leq \sqrt{\delta^l}.
+\quad (37) $$
+
+Secondly, we consider $F_2$. By the equivalence of 1-norm and 2-norm, we have
+
+$$
+\begin{aligned}
+F_2 &\leq \mathbb{E}_{\epsilon}\left[ \sup_{\mathbf{W}^{l:}} \sum_{j>h}^{\text{rank}(\mathbf{W}^l)} \| \sigma_j^l \mathbf{v}_j^T g_\epsilon(\mathbf{x}) \|_1 \| \mathbf{u}_j \|_2 \right] \\
+&\leq \mathbb{E}_{\epsilon}\left[ \sup_{\mathbf{W}^{l:}} \sum_{j>h}^{\text{rank}(\mathbf{W}^l)} | \langle \sigma_j^l \mathbf{v}_j, g_\epsilon(\mathbf{x}) \rangle | \right] \\
+&\leq \mathbb{E}_{\epsilon}\left[ \sup_{\mathbf{W}^{l:}} \sum_{j>h}^{\text{rank}(\mathbf{W}^l)} \| \sigma_j^l \mathbf{v}_j \|_2 \| g_\epsilon(\mathbf{x}) \|_2 \right] \\
+&\leq \mathbb{E}_{\epsilon}\left[ \sup_{\mathbf{W}^{l:}} \sum_{j>h}^{\text{rank}(\mathbf{W}^l)} \sigma_j^l \| g_\epsilon(\mathbf{x}) \|_2 \right].
+\end{aligned}
+\quad (38) $$
\ No newline at end of file
diff --git a/samples/texts/7683722/page_11.md b/samples/texts/7683722/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..a3b8da249d5fef0343fe27e3b4cb39edc7789af7
--- /dev/null
+++ b/samples/texts/7683722/page_11.md
@@ -0,0 +1,80 @@
+Based on Eqs. 31 and 32, we have
+
+$$
+\begin{align}
+F_2 &\le \sum_{j>h} \sigma_j^l \mathbb{E}_\epsilon \sup_{\mathbf{W}^{:(l-1)}} \|\frac{1}{n} \sum_{i=1}^n \epsilon_i g^{l-1}(\mathbf{x}_i)\|_2 \nonumber \\
+&\le \sum_{j>h} \sigma_j^l \mathbb{E}_\epsilon \sup_{\mathbf{W}^{:(l-1)}} \|\frac{1}{n} \sum_{i=1}^n \epsilon_i (\mathbb{E}_{\mathbf{r}:l-1} \mathbf{r}^{l-1} \nonumber \\
+&\qquad \odot \phi(f^{l-1}(\mathbf{x}_i; \mathbf{W}^{:(l-1)}, \mathbf{r}^{:(l-2)})))\|_2 \nonumber \\
+&\le \sum_{j>h} \sigma_j^l \frac{1}{n} \mathbb{E}_\epsilon \sup_{\mathbf{W}^{:(l-1)}} \|\sum_{i=1}^n \epsilon_i \mathbb{E}_{\mathbf{r}:l-1} (\mathbf{r}^{l-1} \nonumber \\
+&\qquad \odot \phi(f^{l-1}(\mathbf{x}_i; \mathbf{W}^{:(l-1)}, \mathbf{r}^{:(l-2)})))\|_2. \tag{39}
+\end{align}
+$$
+
+By Jensen's inequality and the Cauchy-Schwarz inequality, we have
+
+$$
+F_2 \leq \sum_{j>h} \sigma_j^l \| \boldsymbol{\theta}^{l-1} \|_2 \\
+\frac{1}{n} \mathbb{E}_\epsilon \sup_{\mathbf{W}^{:(l-1)}} \| \sum_{i=1}^n \epsilon_i \phi(f^{l-1}(\mathbf{x}_i; \mathbf{W}^{:(l-1)}, \boldsymbol{\theta}^{:(l-2)})) \|_2. \quad (40)
+$$
+
+Since ReLU $\phi: \mathbb{R} \rightarrow \mathbb{R}^{+}$ is 1-Lipschitz, based on the
+Ledoux-Talagrand contraction we can derive
+
+$$
+F_2 \leq \sum_{j>h} \sigma_j^l ||\boldsymbol{\theta}^{l-1}||_2 \\
+\frac{2}{n}\mathbb{E}_\epsilon \sup_{\mathbf{W}^{:(l-1)}} ||\sum_{i=1}^n \epsilon_i f^{l-1}(\mathbf{x}_i; \mathbf{W}^{:(l-1)}, \boldsymbol{\theta}^{:(l-2)})||_2 \\
+\leq 2 \sum_{j>h} \sigma_j^l ||\boldsymbol{\theta}^{l-1}||_2 \hat{R}_S(\mathcal{F}^{l-1}). \quad (41)
+$$
+
+Thus we combine Eqs. 37 and 41 to get
+
+$$
+\hat{R}_S(\mathcal{F}^l) \le F_1 + F_2 \le \sqrt{\delta^l} + 2 \sum_{j>h}^{\mathrm{rank}(\mathbf{W}^l)} \sigma_j^l ||\boldsymbol{\theta}^{l-1}||_2 \hat{R}_S(\mathcal{F}^{l-1}). \quad (42)
+$$
+
+Consider the fact that
+
+$$
+\hat{R}_S(\mathcal{F}^1) \leq \sqrt{\delta^1} + \sum_{j>h}^{\mathrm{rank}(\mathbf{W}^1)} \sigma_j^1 ||\boldsymbol{\theta}^0||_2 \frac{||\mathbf{x}||_F}{n}, \quad (43)
+$$
+
+where $\|\mathbf{x}\|_F$ is the Frobenius norm of the input data features. Suppose $\|\mathbf{x}\|_F \le B$ for some $B \in \mathbb{R}$. We combine Eqs. 42, 43, 27 and 28
+to achieve
+
+$$
+R_S(\hat{\mathcal{L}}) \le k \left[ \sqrt{\delta^L} + \frac{B}{n} 2^{L-1} \prod_{i=1}^{L} \left( \|\boldsymbol{\theta}^{i-1}\|_2 \sum_{j>h}^{\text{rank}(\mathbf{W}^i)} \sigma_j^i \right) + \sum_{i=2}^{L} 2^{i-1} \sqrt{\delta^{L-i+1}} \prod_{j=1}^{i-1} \left( \|\boldsymbol{\theta}^{L-j}\|_2 \sum_{j'>h}^{\text{rank}(\mathbf{W}^{L-j+1})} \sigma_{j'}^{L-j+1} \right) \right] \quad (44)
+$$
+
+which completes the proof.
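The rank-one expansion in Eq. 34 and the head/tail split used in Eq. 35 can be checked numerically; the random matrix and the cut-off $h$ below are arbitrary illustrative choices:

```python
import numpy as np

# Check the expansion W = sum_j sigma_j u_j v_j^T and its split at a cut-off h
# into a head (j <= h) and a tail (j > h), whose singular-value mass
# sum_{j>h} sigma_j is the factor appearing in the recursion (42).
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))
U, S, Vt = np.linalg.svd(W, full_matrices=False)

h = 2
head = sum(S[j] * np.outer(U[:, j], Vt[j, :]) for j in range(h))
tail = sum(S[j] * np.outer(U[:, j], Vt[j, :]) for j in range(h, len(S)))
recon_err = np.linalg.norm(W - (head + tail))   # ~ 0: the split is exact
tail_mass = S[h:].sum()                          # sum_{j>h} sigma_j
```

The head is the best rank-$h$ approximation of $W$ in spectral norm, so $\|W - \text{head}\|_2$ equals the first tail singular value.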
+
+### 7.2 The Detailed Matrix
+
+To apply DropBlock in the mathematical derivation for CNNs,
+we construct a general mathematical form of the matrix $\hat{\theta}^{l-1}$
+($l \in \{1, 2, 3, ..., L\}$) based on the idea of DropBlock. The
+matrix $\hat{\theta}$ is a mask on the $l$-th layer feature map. In the matrix
+$\hat{\theta}$, $b$ represents the *block size*, and $\gamma$ is the probability of
+dropping one unit, which is determined by the *drop rate* (the traditional
+dropout rate) and $b$.
+
+The matrix $\hat{\theta}^{l-1}$ gives the keep probabilities of all units in the feature map, but its parameter is the drop rate. Hence the matrix $\hat{\theta}^{l-1}$ equals $1 - \hat{\theta}_{drop}^{l-1}$. Because the matrix $\hat{\theta}_{drop}^{l-1}$ is too large to present in full, we divide it into two parts, $\theta_{drop}^{l-1}$ and $M_{drop}^{l-1}$. To make sure that a dropped block is thoroughly contained in the feature map, there should be a valid seed region ($M_{drop}^{l-1}$), so the matrix $M_{drop}^{l-1}$ sits in the middle of the matrix $\hat{\theta}^{l-1}$. Denote the matrix $\hat{\theta}'$ as $\hat{\theta}_{drop}' = (\hat{\theta}_{drop})^{(b-b')''}$. Then the valid seed region $M_{drop}' = (\hat{\theta}_{drop}')^{(b-b')''}$ is in $\mathbb{R}_{u\times v}$.
+
+In the matrix $\theta_{drop}'$, the top left term is $\gamma'^{l-1}$, because this unit can only be dropped when the top left unit of the valid seed region (matrix $M_{drop}'$) is dropped. The term to its right is $2\gamma'^{l-1}$, because this unit can only be dropped when the top left unit of the valid seed region or the unit to its right is dropped. In this way, we can gradually obtain the drop probabilities of all units in $\theta_{drop}'$. In the matrix $M_{drop}'$, the top left unit can be dropped when any unit in the submatrix spanned by the $1, 2, \dots, \frac{b+1}{2}$ units to the right and the $1, 2, \dots, \frac{b+1}{2}$ units below is dropped. Thus, the drop probability of the top left unit is $(\frac{b+1}{2})^2\gamma'^{l-1}$. In the middle of matrix $M_{drop}'$, the term $b^2\gamma'^{l-1}$ indicates that this unit is dropped when any unit of the $b \times b$ submatrix centered at this unit is dropped. Therefore, we can also obtain the drop probabilities of all units in $M_{drop}'$. Combining the two matrices $\theta_{drop}'$ and $M_{drop}'$, we get the matrix $\hat{\theta}_{drop}'$, and hence the mask $\hat{\theta}'$.
+
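The masking scheme described above can be sketched in runnable form (a simplified DropBlock-style mask, not the paper's exact implementation): seeds are sampled with probability $\gamma$ only in the valid central region so that each $b \times b$ block fits inside the feature map, and every seed zeroes its whole block, which is why border units have smaller drop probabilities, as in the matrix $\theta_{drop}^{l-1}$:

```python
import numpy as np

# Simplified DropBlock-style mask: sample Bernoulli(gamma) seeds in the valid
# central region, then zero a b x b block around each seed. The returned mask
# corresponds to theta_hat = 1 - (dropped indicator).
def dropblock_mask(height, width, b, gamma, rng):
    pad = b // 2
    seeds = np.zeros((height, width), dtype=bool)
    seeds[pad:height - pad, pad:width - pad] = (
        rng.random((height - 2 * pad, width - 2 * pad)) < gamma
    )
    keep = np.ones((height, width))
    for y, x in zip(*np.nonzero(seeds)):       # expand each seed to a b x b block
        keep[y - pad:y + pad + 1, x - pad:x + pad + 1] = 0.0
    return keep

rng = np.random.default_rng(0)
mask = dropblock_mask(12, 12, b=3, gamma=0.1, rng=rng)
```

With $\gamma = 0$ no unit is dropped, and with $\gamma = 1$ every unit lies in some dropped block, matching the two extremes of the keep-probability matrix.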
+### 7.3 Brief Proof of Theorem 2
+
+We only demonstrate the parts of the proof of Theorem 2 that differ from the proof of Theorem 1; the identical parts are omitted for brevity. In this section, all p-norms $\|\cdot\|_p$ are entrywise p-norms (regarding an $m \times n$ matrix as an $mn$-dimensional vector when computing the p-norm).
+
+*Proof.* Similarly to the beginning of the proof of Theorem 1 (Eqs. 27 to 32), based on Eq. 13, we can directly have
+
+$$
+\begin{align}
+\hat{R}_S(\mathcal{F}^l) &= \frac{1}{n}\mathbb{E}_{\epsilon_i}\left[\sup_{\mathbf{W}:l}\left\|\sum_{i=1}^n \epsilon_i f^l(\mathbf{x}_i; \mathbf{W}:l, \hat{\boldsymbol{\theta}}^{:(l-1)})\right\|_2\right] \\
+&= \mathbb{E}_{\epsilon_i}\left[\sup_{\mathbf{W}:l}\left\|\frac{1}{n}\sum_{i=1}^n \epsilon_i f^l(\mathbf{x}_i; \mathbf{W}:l, \hat{\boldsymbol{\theta}}^{:(l-1)})\right\|_2\right] \\
+&= \mathbb{E}_{\epsilon_i}\left[\sup_{\mathbf{W}:l}\left\|\frac{1}{n}\sum_{i=1}^n \epsilon_i \mathbf{W}^l \otimes g^{l-1}(\mathbf{x}_i)\right\|_2\right] \\
+&= \mathbb{E}_{\epsilon}\left[\sup_{\mathbf{W}:l}\left\|\mathbf{W}^l \otimes g_{\epsilon}(\mathbf{x})\right\|_2\right].
+\tag{45}
+\end{align}
+$$
\ No newline at end of file
diff --git a/samples/texts/7683722/page_12.md b/samples/texts/7683722/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..5b03cf947d256d9cf0cc09d835677b400c6d24f0
--- /dev/null
+++ b/samples/texts/7683722/page_12.md
@@ -0,0 +1,95 @@
+$$
+\boldsymbol{\theta}_{\text{drop}}^{l-1} = \begin{bmatrix}
+\gamma^{l-1} & 2\gamma^{l-1} & \dots & \frac{b+1}{2}\gamma^{l-1} & \dots & \frac{b+1}{2}\gamma^{l-1} & \dots & 2\gamma^{l-1} & \gamma^{l-1} \\
+2\gamma^{l-1} & 4\gamma^{l-1} & \dots & (b+1)\gamma^{l-1} & \dots & (b+1)\gamma^{l-1} & \dots & 4\gamma^{l-1} & 2\gamma^{l-1} \\
+\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
+\frac{b+1}{2}\gamma^{l-1} & (b+1)\gamma^{l-1} & \dots & (\frac{b+1}{2})^2\gamma^{l-1} & \dots & (\frac{b+1}{2})^2\gamma^{l-1} & \dots & (b+1)\gamma^{l-1} & \frac{b+1}{2}\gamma^{l-1} \\
+\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
+\frac{b+1}{2}\gamma^{l-1} & (b+1)\gamma^{l-1} & \dots & (\frac{b+1}{2})^2\gamma^{l-1} & \dots & (\frac{b+1}{2})^2\gamma^{l-1} & \dots & (b+1)\gamma^{l-1} & \frac{b+1}{2}\gamma^{l-1} \\
+\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
+2\gamma^{l-1} & 4\gamma^{l-1} & \dots & (b+1)\gamma^{l-1} & \dots & (b+1)\gamma^{l-1} & \dots & 4\gamma^{l-1} & 2\gamma^{l-1} \\
+\gamma^{l-1} & 2\gamma^{l-1} & \dots & \frac{b+1}{2}\gamma^{l-1} & \dots & \frac{b+1}{2}\gamma^{l-1} & \dots & 2\gamma^{l-1} & \gamma^{l-1}
+\end{bmatrix}
+$$
+
+$$
+M_{\text{drop}}^{l-1} = \begin{bmatrix}
+(\frac{b+1}{2})^2\gamma^{l-1} & \frac{b+1}{2}\frac{b+3}{2}\gamma^{l-1} & \dots & \frac{b+1}{2}b\gamma^{l-1} & \dots & \frac{b+1}{2}b\gamma^{l-1} & \dots & \frac{b+1}{2}\frac{b+3}{2}\gamma^{l-1} & (\frac{b+1}{2})^2\gamma^{l-1} \\
+\frac{b+1}{2}\frac{b+3}{2}\gamma^{l-1} & (\frac{b+3}{2})^2\gamma^{l-1} & \dots & \frac{b+3}{2}b\gamma^{l-1} & \dots & \frac{b+3}{2}b\gamma^{l-1} & \dots & (\frac{b+3}{2})^2\gamma^{l-1} & \frac{b+1}{2}\frac{b+3}{2}\gamma^{l-1} \\
+\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
+\frac{b+1}{2}b\gamma^{l-1} & \frac{b+3}{2}b\gamma^{l-1} & \dots & b^2\gamma^{l-1} & \dots & b^2\gamma^{l-1} & \dots & \frac{b+3}{2}b\gamma^{l-1} & \frac{b+1}{2}b\gamma^{l-1} \\
+\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
+\frac{b+1}{2}b\gamma^{l-1} & \frac{b+3}{2}b\gamma^{l-1} & \dots & b^2\gamma^{l-1} & \dots & b^2\gamma^{l-1} & \dots & \frac{b+3}{2}b\gamma^{l-1} & \frac{b+1}{2}b\gamma^{l-1} \\
+\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\
+\frac{b+1}{2}\frac{b+3}{2}\gamma^{l-1} & (\frac{b+3}{2})^2\gamma^{l-1} & \dots & \frac{b+3}{2}b\gamma^{l-1} & \dots & \frac{b+3}{2}b\gamma^{l-1} & \dots & (\frac{b+3}{2})^2\gamma^{l-1} & \frac{b+1}{2}\frac{b+3}{2}\gamma^{l-1} \\
+(\frac{b+1}{2})^2\gamma^{l-1} & \frac{b+1}{2}\frac{b+3}{2}\gamma^{l-1} & \dots & \frac{b+1}{2}b\gamma^{l-1} & \dots & \frac{b+1}{2}b\gamma^{l-1} & \dots & \frac{b+1}{2}\frac{b+3}{2}\gamma^{l-1} & (\frac{b+1}{2})^2\gamma^{l-1}
+\end{bmatrix}
+$$
+
+We follow the basic procedure in the proof of Theorem 1, and split the right-hand side of the above equation into two parts at index $h$ to get
+
+$$
+\begin{align}
+\hat{R}_S(\mathcal{F}^l) \leq{} & \mathbb{E}_\epsilon \left[\sup_{\mathbf{W}^{:l}} \left\| \sum_{j=1}^h \sigma_j^l \mathbf{u}_j \mathbf{v}_j^T \otimes g_\epsilon(\mathbf{x}) \right\|_2\right] \nonumber \\
+& + \mathbb{E}_\epsilon \left[\sup_{\mathbf{W}^{:l}} \left\| \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^l)} \sigma_j^l \mathbf{u}_j \mathbf{v}_j^T \otimes g_\epsilon(\mathbf{x}) \right\|_2\right]. \tag{46}
+\end{align}
+$$
+
+Denote by $\hat{F}_1$ the first term of Eq. 46 and by $\hat{F}_2$ the second term. First, we consider $\hat{F}_1$. By the SVD and the properties of the entrywise 2-norm, we have
+
+$$
+\begin{align}
+\hat{F}_1 &= \mathbb{E}_\epsilon\left[\sup_{\mathbf{W}^{:l}} \left\| \sum_{j=1}^h \sigma_j^l \mathbf{u}_j \mathbf{v}_j^T \otimes \frac{1}{n} \sum_{i=1}^n \epsilon_i g^{l-1}(\mathbf{x}_i) \right\|_2\right] \nonumber \\
+&\leq \sup_{\mathbf{W}^{:l}} \left\| \sum_{j=1}^{\operatorname{rank}(\mathbf{W}^l)} \sigma_j^l \mathbf{u}_j \mathbf{v}_j^T \otimes \frac{1}{n} \sum_{i=1}^n g^{l-1}(\mathbf{x}_i) \right\|_2 \nonumber \\
+&\leq \sup_{\mathbf{W}^{:l}} \left( \frac{1}{n} \sum_{i=1}^n \left\| f^l(\mathbf{x}_i; \mathbf{W}^{:l}, \hat{\boldsymbol{\theta}}^{:(l-1)}) \right\|_2^2 \right)^{\frac{1}{2}} \nonumber \\
+&\leq \sqrt{\delta^l}. \tag{47}
+\end{align}
+$$
+
+Secondly, we consider $\hat{F}_2$. Our goal is to transform $\hat{F}_2$ into expressions free of the discrete convolution $\otimes$. By Young's convolution inequality [47], regarding the entrywise $p$-norm of a matrix as the $p$-norm of a vector, the entrywise 2-norm of the discrete convolution of $\sum_{j>h}^{\operatorname{rank}(\mathbf{W}^l)} \sigma_j^l \mathbf{u}_j \mathbf{v}_j^T$ with $g_\epsilon(\mathbf{x})$ is bounded by the product of the entrywise 1-norm of $\sum_{j>h}^{\operatorname{rank}(\mathbf{W}^l)} \sigma_j^l \mathbf{u}_j \mathbf{v}_j^T$ and the entrywise 2-norm of $g_\epsilon(\mathbf{x})$. Hence, we have
+
+$$
+\hat{F}_2 \leq \mathbb{E}_\epsilon \left[\sup_{\mathbf{W}^{:l}} \left\| \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^l)} \sigma_j^l \mathbf{u}_j \mathbf{v}_j^T \right\|_1 \left\| g_\epsilon(\mathbf{x}) \right\|_2\right]. \tag{48}
+$$
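
Young's convolution inequality with exponents $1/1 + 1/2 = 1 + 1/2$ (i.e., $\|f \otimes g\|_2 \le \|f\|_1 \|g\|_2$) is easy to verify numerically. A one-dimensional sketch follows (the entrywise-norm view flattens matrices to vectors anyway; the array names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(6)    # stands in for the singular-value tail term
g = rng.standard_normal(10)   # stands in for g_eps(x), flattened

conv = np.convolve(f, g)      # discrete (full) convolution
lhs = np.linalg.norm(conv, 2)
rhs = np.linalg.norm(f, 1) * np.linalg.norm(g, 2)
assert lhs <= rhs             # ||f (*) g||_2 <= ||f||_1 * ||g||_2
```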
+
+The entrywise $p$-norm of a matrix has the same properties as the $p$-norm of a vector. Based on the relation between the 1-norm and the 2-norm in vector spaces, we have
+
+$$
+\hat{F}_2 \leq \mathbb{E}_\epsilon \left[\sup_{\mathbf{W}^{:l}} \sqrt{S^l} \left\| \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^l)} \sigma_j^l \mathbf{u}_j \mathbf{v}_j^T \right\|_2 \left\| g_\epsilon(\mathbf{x}) \right\|_2\right], \tag{49}
+$$
+
+where $S^l \in \mathbb{R}$, $l \in \{1, 2, 3, ..., L\}$, bounds the number of entries of $\mathbf{W}^l$ (i.e., $\mathbf{W}^l \in \mathbb{R}^{p^l \times q^l}$ with $p^l q^l \le S^l$). According to [48], based on the properties of the entrywise 2-norm, we can directly have
+
+$$
+\hat{F}_2 \leq \mathbb{E}_\epsilon \left[\sup_{\mathbf{W}^{:l}} \sqrt{S^l} \left( \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^l)} (\sigma_j^l)^2 \right)^{\frac{1}{2}} \left\| g_\epsilon(\mathbf{x}) \right\|_2\right]. \tag{50}
+$$
+
+Based on the relation between the 1-norm and the 2-norm in vector spaces (i.e., $\|\cdot\|_2 \le \|\cdot\|_1$), we have
+
+$$
+\hat{F}_2 \leq \mathbb{E}_\epsilon \left[\sup_{\mathbf{W}^{:l}} \sqrt{S^l} \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^l)} \sigma_j^l \left\| g_\epsilon(\mathbf{x}) \right\|_2\right]. \tag{51}
+$$
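
Both norm steps above (the entrywise 2-norm of the SVD tail equals the 2-norm of its singular values, which is in turn bounded by their 1-norm) can be checked numerically; a small sketch with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(W, full_matrices=False)   # sigma in descending order
h = 2
# tail of the SVD expansion: sum_{j>h} sigma_j u_j v_j^T
tail = (U[:, h:] * s[h:]) @ Vt[h:, :]
# entrywise 2-norm (Frobenius) of the tail = 2-norm of the tail sigmas
assert np.isclose(np.linalg.norm(tail), np.linalg.norm(s[h:], 2))
# ...which is bounded by the 1-norm (plain sum) of the tail sigmas
assert np.linalg.norm(s[h:], 2) <= s[h:].sum() + 1e-12
```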
+
+The above inequality is the same as Eq. 38 except for the constant $\sqrt{S^l}$. Hence, we can follow the exact procedure from Eq. 38 to Eq. 41 in the proof of Theorem 1 to achieve
+
+$$
+\hat{F}_2 \leq 2\sqrt{S^l} \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^l)} \sigma_j^l \left\| \hat{\boldsymbol{\theta}}^{l-1} \right\|_2 R_S(\mathcal{F}^{l-1}). \tag{52}
+$$
\ No newline at end of file
diff --git a/samples/texts/7683722/page_13.md b/samples/texts/7683722/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..fa26348778b6531c53d16a343b872c7644f2aecf
--- /dev/null
+++ b/samples/texts/7683722/page_13.md
@@ -0,0 +1,115 @@
+As in the last step of the proof of Theorem 1, we combine Eqs. 47 and 52 to achieve
+
+$$
+\begin{align}
+R_S(\hat{\mathcal{L}}) \le {}& k \left[ \sqrt{\delta^L} + \frac{B}{n} 2^{L-1} \prod_{i=1}^{L} (\sqrt{S^i} \|\hat{\boldsymbol{\theta}}^{i-1}\|_2 \sum_{j>h}^{\text{rank}(\mathbf{W}^i)} \sigma_j^i) \right. \nonumber \\
+& \left. + \sum_{i=2}^{L} 2^{i-1} \sqrt{\delta^{L-i+1}} \prod_{j=1}^{i-1} \right. \nonumber \\
+& \left. (\sqrt{S^{L-j+1}} \|\hat{\boldsymbol{\theta}}^{L-j}\|_2 \sum_{j>h}^{\text{rank}(\mathbf{W}^{L-j+1})} \sigma_j^{L-j+1}) \right], \tag{53}
+\end{align}
+$$
+
+which completes the proof.
+□
+
+REFERENCES
+
+[1] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," 2015.
+
+[2] J. Yang, M. N. Nguyen, P. P. San, X. Li, and S. Krishnaswamy, "Deep convolutional neural networks on multichannel time series for human activity recognition," in *IJCAI*, 2015.
+
+[3] C. Liu, F. Sun, C. Wang, F. Wang, and A. L. Yuille, "Mat: A multimodal attentive translator for image captioning," in *IJCAI*, 2017.
+
+[4] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," *CoRR*, vol. abs/1512.03385, 2015.
+
+[5] S. Zagoruyko and N. Komodakis, "Wide residual networks," *CoRR*, vol. abs/1605.07146, 2016.
+
+[6] I. Goodfellow, Y. Bengio, and A. Courville, *Deep learning*. MIT press, 2016.
+
+[7] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," *CoRR*, vol. abs/1207.0580, 2012.
+
+[8] L. Wan, M. D. Zeiler, S. Zhang, Y. LeCun, and R. Fergus, "Regularization of neural networks using dropconnect," in *ICML*, 2013.
+
+[9] J. Ba and B. J. Frey, "Adaptive dropout for training deep neural networks," in *NIPS*, 2013.
+
+[10] D. P. Kingma, T. Salimans, and M. Welling, "Variational dropout and the local reparameterization trick," *CoRR*, vol. abs/1506.02557, 2015.
+
+[11] D. Molchanov, A. Ashukha, and D. P. Vetrov, "Variational dropout sparsifies deep neural networks," in *ICML*, 2017.
+
+[12] G. Ghiasi, T.-Y. Lin, and Q. V. Le, "Dropblock: A regularization method for convolutional networks," in *NeurIPS*, 2018.
+
+[13] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," *CoRR*, vol. abs/1502.03167, 2015.
+
+[14] M. Hayat, S. H. Khan, M. Bennamoun, and S. An, "A spatial layout and scale invariant feature representation for indoor scene classification," *TIP*, vol. 25, pp. 4829–4841, 2016.
+
+[15] B. Graham, "Fractional max-pooling," *CoRR*, vol. abs/1412.6071, 2014.
+
+[16] L. Breiman, "Bagging predictors," *Machine Learning*, vol. 24, pp. 123–140, 1996.
+
+[17] B. Neyshabur, R. Tomioka, and N. Srebro, "Norm-based capacity control in neural networks," in *COLT*, 2015.
+
+[18] S. Arora, R. Ge, B. Neyshabur, and Y. Zhang, "Stronger generalization bounds for deep nets via a compression approach," *ArXiv*, vol. abs/1802.05296, 2018.
+
+[19] S. Zheng, Q. Meng, H. Zhang, W. Chen, N. Yu, and T. Liu, "Capacity control of relu neural networks by basis-path norm," *ArXiv*, vol. abs/1809.07122, 2019.
+
+[20] S. Arora, S. Du, W. Hu, Z. Li, and R. Wang, "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks," in *ICML*, 2019.
+
+[21] Y. Gal and Z. Ghahramani, "Dropout as a bayesian approximation: Representing model uncertainty in deep learning," in *ICML*, 2016.
+
+[22] W. Gao and Z.-H. Zhou, "Dropout rademacher complexity of deep neural networks," *Science China Information Sciences*, vol. 59, pp. 1–12, 2015.
+
+[23] S. Park, J.-K. Park, S.-J. Shin, and I.-C. Moon, "Adversarial dropout for supervised and semi-supervised learning," in *AAAI*, 2018.
+
+[24] A. Achille and S. Soatto, "Information dropout: Learning optimal representations through noisy computation," *PAMI*, vol. 40, pp. 2897–2905, 2018.
+
+[25] P. L. Bartlett, O. Bousquet, S. Mendelson et al., “Local rademacher complexities,” *The Annals of Statistics*, vol. 33, no. 4, pp. 1497–1537, 2005.
+
+[26] S. I. Wang and C. D. Manning, “Fast dropout training,” in *ICML*, 2013.
+
+[27] J. Tompson, R. Goroshin, A. Jain, Y. LeCun, and C. Bregler, "Efficient object localization using convolutional networks," *2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 648–656, 2015.
+
+[28] Y. Gal, J. Hron, and A. Kendall, “Concrete dropout,” in *NIPS*, 2017.
+
+[29] J. Hron, A. G. de G. Matthews, and Z. Ghahramani, “Variational bayesian dropout: pitfalls and fixes,” in *ICML*, 2018.
+
+[30] D. Krueger, T. Maharaj, J. Kramár, M. Pezeshki, N. Ballas, N. R. Ke, A. Goyal, Y. Bengio, H. Larochelle, A. C. Courville, and C. J. Pal, "Zoneout: Regularizing RNNs by randomly preserving hidden activations," *CoRR*, vol. abs/1606.01305, 2017.
+
+[31] W. Mou, Y. Zhou, J. Gao, and L. Wang, “Dropout training, data-dependent regularization, and generalization bounds,” in *ICML*, 2018.
+
+[32] B. Neyshabur, S. Bhojanapalli, D. A. McAllester, and N. Srebro, “A pac-bayesian approach to spectrally-normalized margin bounds for neural networks,” *CoRR*, vol. abs/1707.09564, 2018.
+
+[33] P. L. Bartlett, D. J. Foster, and M. Telgarsky, “Spectrally-normalized margin bounds for neural networks,” in *NIPS*, 2017.
+
+[34] N. Golowich, A. Rakhlin, and O. Shamir, “Size-independent sample complexity of neural networks,” in *COLT*, 2018.
+
+[35] X. Li, J. Lu, Z. Wang, J. D. Haupt, and T. Zhao, "On tighter generalization bound for deep neural networks: CNNs, ResNets, and beyond," *CoRR*, vol. abs/1806.05159, 2018.
+
+[36] K. Zhai and H. Wang, “Adaptive dropout with rademacher complexity regularization,” in *ICLR*, 2018.
+
+[37] P. L. Bartlett and S. Mendelson, “Rademacher and gaussian complexities: Risk bounds and structural results,” *Journal of Machine Learning Research*, vol. 3, pp. 463–482, 2001.
+
+[38] S. Shalev-Shwartz and S. Ben-David, “Understanding machine learning: From theory to algorithms,” 2014.
+
+[39] Y. Xu and W. Yin, “A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion,” *SIAM J. Imaging Sciences*, vol. 6, pp. 1758–1789, 2013.
+
+[40] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” *Journal of Machine Learning Research*, vol. 15, pp. 1929–1958, 2014.
+
+[41] J. Cai, E. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," *SIAM Journal on Optimization*, vol. 20, pp. 1956–1982, 2010.
+
+[42] A. Krizhevsky, "Learning multiple layers of features from tiny images," 2009.
+
+[43] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 770–778, 2016.
+
+[44] T. DeVries and G. W. Taylor, "Improved regularization of convolutional neural networks with cutout," *ArXiv*, vol. abs/1708.04552, 2017.
+
+[45] J.-F. Cai and S. Osher, "Fast singular value thresholding without singular value decomposition," *Methods and Applications of Analysis*, vol. 20, 2013.
+
+[46] N. J. Higham and R. S. Schreiber, "Fast polar decomposition of an arbitrary matrix," *SIAM J. Scientific Computing*, vol. 11, pp. 648–655, 1990.
+
+[47] V. I. Bogachev, *Measure Theory*. Springer-Verlag Berlin Heidelberg, 2007.
+
+[48] D. Kalman, "A singularly valuable decomposition: The SVD of a matrix," 1996.
+
diff --git a/samples/texts/7683722/page_2.md b/samples/texts/7683722/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c8cdc99e8b007da67049d8e42d63f262fd18d59
--- /dev/null
+++ b/samples/texts/7683722/page_2.md
@@ -0,0 +1,31 @@
+the complexity of a function class, it does not reflect the fact that the algorithm is likely to pick a function with a small error. Since only a small subset of the function class will be used in practice, we employ the local Rademacher complexity to measure the complexity of neural networks. In FCNs, dropout with varied keep rates across different layers is included in the complexity analyses, while in CNNs dropout is replaced by DropBlock. To improve the generalization error bound of neural networks, we choose to minimize the local Rademacher complexity of the hypotheses, which yields a new complexity regularization involving dropout probabilities and weight normalization. A two-stage optimization is developed to efficiently train neural networks with the new local Rademacher regularization. Experimental results on real-world datasets demonstrate the effectiveness of LocalDrop together with its rigorous theoretical foundation.
+
+# 2 RELATED WORKS
+
+This section is a brief review of regularizations and related theoretical analyses for neural networks.
+
+## 2.1 Regularizations for Neural Networks
+
+Regularization aims to settle overfitting and improve the generalization ability of neural networks. Dropout [7] randomly drops units along with their connections from the neural network, preventing units from overly co-adapting. Then an exponential number of different thinned networks are sampled, in a fashion similar to bagging [16]. Drawing on the concept of dropout, several studies have been carried out recently. Dropconnect [8] randomly drops the weights to achieve better performance. Adaptive dropout [9] blends a binary belief network with a classic neural network to lower the classification error in practice, though theoretically unwarranted. Fast dropout [26] replaces actual sampling with an approximation of the Gaussian distribution to accelerate dropout training and to avoid training an additional exponential number of neural networks. Spatial dropout [27] randomly drops all the channels from feature maps in CNNs. Variational dropout [10] learns dropout rates for better models, which can also be regarded as a generalization of Gaussian dropout. Based on variational dropout, sparse variational dropout [11] uses different dropout rates to obtain a high level of sparsity in neural networks. Extending the idea of analyzing dropout from the Bayesian perspective [21], concrete dropout [28] demonstrates the Bayesian generalization of Bernoulli dropout. In the same vein, variational Bayesian dropout [29] provides the Bayesian generalization of Gaussian dropout. Adversarial dropout [23] integrates adversarial training and dropout to accomplish state-of-the-art performance in image classification. Information dropout [24] applies information theoretic principles (i.e., Bottleneck principles) to deep learning, including dropout and other strategies. DropBlock [12] is a variant of dropout on convolutional neural networks, randomly dropping units in a block region of the feature map. Zoneout [30] is a variant of dropout on RNNs, randomly preserving units' previous values instead of dropping them. Although these dropout strategies have achieved great success in practice, most of them lack rigorous theoretical support.
+
+## 2.2 Theoretical Analyses for Neural Networks
+
+Since many regularization strategies have empirically performed well in various complex applications, their theoretical underpinnings have recently attracted attention. Gal and Ghahramani [21] studied the Bernoulli dropout model from the Bayesian perspective; they justified that dropout can be regarded as a particular case of Bayesian regularization by combining Bayesian models and deep learning to gauge model uncertainty. Besides, Wan [8] proposed a generalization bound for dropout via the Rademacher complexity. Gao and Zhou [22] proved that various dropout strategies can decrease the Rademacher complexity to prevent FCNs from overfitting. Mou [31] combined the Rademacher complexity of the hypothesis set and the variance set to determine a generalization upper bound by applying the proposed general framework with random perturbations on parameters. Neyshabur [17], [32], Bartlett [33], Golowich [34] and Li [35] obtained different generalization error bounds via the Rademacher complexity of neural networks. Zhai and Wang [36] proposed a new regularization algorithm based on a mathematically derived Rademacher complexity bound for networks. More perspectives on the generalization of neural networks and regularizations are provided by Neyshabur [17], Arora [18], [20] and Zheng [19].
+
+# 3 COMPLEXITY REGULARIZATION
+
+Consider a labeled dataset $S = \{(\mathbf{x}_i, \mathbf{y}_i)\}_{i \in \{1, 2, ..., n\}}$, $\mathbf{x}_i \in \mathbb{R}^d$, $\mathbf{y}_i \in \{0, 1\}^k$, where $\mathbf{x}_i$ is the feature of the $i$-th example, $\mathbf{y}_i$ is its corresponding label, and $k$ is the number of classes. Let $k^l$ be the number of neurons in the $l$-th layer of the neural network, where $l \in \{0, 1, 2, ..., L\}$. The first layer (i.e., $l=0$) takes the example features $\mathbf{x}_i$ as the input, while the last layer (i.e., $l=L$) outputs the prediction of $\mathbf{y}_i$. Denote $\mathbf{W}^l \in \mathbb{R}^{k^{l-1} \times k^l}$ as the transformation matrix from the $(l-1)$-th layer to the $l$-th layer. For dropout in FCNs, we denote $\boldsymbol{\theta}^l \in [0, 1]^{k^l}$ as the vector of keep rates for the $l$-th layer. In other words, $\boldsymbol{\theta}^l$ gives the probability that each neuron in the $l$-th layer is not hidden by dropout. Then we define $\mathbf{r}^l \in \{0, 1\}^{k^l}$ as a binary vector of $k^l$ independent Bernoulli dropout random variables (i.e., $\boldsymbol{\theta}^l = \mathbb{E}_r[\mathbf{r}^l]$). To simplify our notation, we write $\mathbf{W}^{:l} = \{\mathbf{W}^1, \mathbf{W}^2, ..., \mathbf{W}^l\}$, $\mathbf{r}^{:l} = \{\mathbf{r}^0, \mathbf{r}^1, ..., \mathbf{r}^l\}$, $\boldsymbol{\theta}^{:l} = \{\boldsymbol{\theta}^0, \boldsymbol{\theta}^1, ..., \boldsymbol{\theta}^l\}$, $\mathbf{W} = \mathbf{W}^{:L}$, $\mathbf{r} = \mathbf{r}^{:(L-1)}$ and $\boldsymbol{\theta} = \boldsymbol{\theta}^{:(L-1)}$. We take the ReLU $\phi: \mathbb{R} \rightarrow \mathbb{R}^+$ as the activation function. Therefore, we can write the output of the $l$-th layer of an FCN in vector form as
+
+$$f^{l}(\mathbf{x}; \mathbf{W}^{:l}, \mathbf{r}^{:(l-1)}) = \mathbf{W}^{l}(\mathbf{r}^{l-1} \odot \phi(f^{l-1}(\mathbf{x}; \mathbf{W}^{:(l-1)}, \mathbf{r}^{:(l-2)}))), \quad (1)$$
+
+where $\odot$ represents the Hadamard product. As the output of the neural network is a random vector due to the Bernoulli random variables $\mathbf{r}$, we take the expectation of $f^L(\mathbf{x}; \mathbf{W}, \mathbf{r})$ as the deterministic output
+
+$$f^L(\mathbf{x}; \mathbf{W}, \boldsymbol{\theta}) = E_r[f^L(\mathbf{x}; \mathbf{W}, \mathbf{r})]. \quad (2)$$
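
A minimal single-layer sketch of Eqs. 1 and 2 (the shapes, values, and function name are illustrative): averaging the masked forward pass over many Bernoulli draws of $\mathbf{r}$ recovers the deterministic output $\mathbf{W}(\boldsymbol{\theta} \odot \phi(\cdot))$.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_layer(x, W, theta, n_samples=20000):
    """One dropout layer in the form of Eq. 1, averaged over Bernoulli
    keep masks r with P[r_i = 1] = theta_i."""
    a = np.maximum(x, 0.0)                       # ReLU activation phi
    out = np.zeros(W.shape[0])
    for _ in range(n_samples):
        r = (rng.random(x.shape) < theta).astype(float)
        out += W @ (r * a)
    return out / n_samples

x = np.array([1.0, -2.0, 3.0])
W = np.array([[1.0, 0.5, -1.0], [0.0, 2.0, 1.0]])
theta = np.array([0.9, 0.5, 0.7])

mc = dropout_layer(x, W, theta)
exact = W @ (theta * np.maximum(x, 0.0))         # deterministic output, Eq. 2
assert np.allclose(mc, exact, atol=0.05)
```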
+
+The final predictions are made through a softmax function, and cross-entropy loss is usually adopted as the optimization objective. According to Wan [8], we reformulate the loss function into a logistic function to simplify our generalization analyses as follows
+
+$$\ell(f^L(\mathbf{x}; \mathbf{W}, \boldsymbol{\theta}), \mathbf{y}) = -\sum_j y_j \log \frac{e^{f_j^L(\mathbf{x}; \mathbf{W}, \boldsymbol{\theta})}}{\sum_{j'} e^{f_{j'}^L(\mathbf{x}; \mathbf{W}, \boldsymbol{\theta})}}. \quad (3)$$
+
+We aim to analyze which factors will play important roles in influencing the generalization of the deep neural network, and then incorporate them into the training phase of the neural network for a better generalization.
\ No newline at end of file
diff --git a/samples/texts/7683722/page_3.md b/samples/texts/7683722/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..27a186e0726082ddb08b8c26379e813127cf7f65
--- /dev/null
+++ b/samples/texts/7683722/page_3.md
@@ -0,0 +1,45 @@
+**3.1 Local Rademacher Complexity of FCNs**
+
+The Rademacher complexity [37] is an effective approach for measuring the complexity of a function class $\mathcal{L}$, and it is defined as follows.
+
+**Definition 1.** Let $\mathcal{L}$ be a function class mapping from $\mathbb{Z}$ to $[a, b]$, and $\mathcal{S} = \{(x_i, y_i)|i \in \{1, 2, ..., n\}\}$ is a sample of size $n$ in $\mathbb{Z}$. Let $\epsilon_1, ..., \epsilon_n$ be i.i.d. random variables with $P[\epsilon_i = 1] = P[\epsilon_i = -1] = \frac{1}{2}$. The Rademacher complexity of function class $\mathcal{L}$ with respect to sample $\mathcal{S}$ is defined as
+
+$$R_S(\mathcal{L}) = \frac{1}{n} \mathbb{E}_{\epsilon_i} \left[ \sup_{\ell \in \mathcal{L}} \sum_{i=1}^{n} \epsilon_i \ell(\mathbf{x}_i, \mathbf{y}_i) \right]. \quad (4)$$
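
For a concrete feel for Definition 1, the supremum has a closed form for the simple class of norm-bounded linear predictors, so the empirical Rademacher complexity can be estimated by Monte Carlo (the class, data, and bound below are illustrative, not the networks analyzed in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))   # n = 50 samples in R^3
n = len(X)

# class {x -> w.x : ||w||_2 <= 1}: sup_w sum_i eps_i w.x_i = ||sum_i eps_i x_i||_2
draws = [np.linalg.norm(rng.choice([-1.0, 1.0], size=n) @ X)
         for _ in range(2000)]
R_hat = np.mean(draws) / n

# classical bound for this class: R_S <= max_i ||x_i||_2 / sqrt(n)
bound = np.linalg.norm(X, axis=1).max() / np.sqrt(n)
assert R_hat <= bound
```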
+
+The Rademacher complexity provides a global estimate of the complexity of a function class. In other words, it does not reflect the fact that the algorithm will likely pick functions with a small error. Since only a small subset of the function class will actually be used, we pick out this subset by adding a restriction (i.e., the variance $\delta$) on the function $f$, and then define the empirical local Rademacher complexity.
+
+**Definition 2.** Let $\hat{\mathcal{L}}$ be the function class mapping from $\mathbb{Z}$ to $[a, b]$, and $\hat{\mathcal{L}} := \{\hat{\ell}(f) : f \in \mathcal{F}\}$, $\mathcal{F} := \{f | P_n ||f||_2^2 \le \delta\}$. $\mathcal{S} = \{(x_i, y_i)|i \in \{1, 2, ..., n\}\}$ is a sample of size $n$ in $\mathbb{Z}$. Then empirical local Rademacher complexity of function class $\hat{\mathcal{L}}$ with respect to the sample $\mathcal{S}$ is defined as
+
+$$R_S(\hat{\mathcal{L}}) = \frac{1}{n} \mathbb{E}_{\epsilon_i} \left[ \sup_{\ell \in \hat{\mathcal{L}}} \sum_{i=1}^{n} \epsilon_i \ell(\mathbf{x}_i, \mathbf{y}_i) \right], \quad (5)$$
+
+where $P_n \|f\|_2^2 = \frac{1}{n} \sum_{i=1}^n \|f(\mathbf{x}_i)\|_2^2$, and $\|\cdot\|_2$ represents the $\ell_2$-norm.
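
Definition 2 restricts the supremum to functions whose empirical second moment is at most $\delta$; the effect is easy to see numerically. A hedged Monte-Carlo sketch on an illustrative linear class (candidate sampling stands in for the exact supremum):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 2))          # n = 40 inputs
n = len(X)

# candidate functions x -> w.x with w drawn in the unit ball
ws = rng.standard_normal((4000, 2))
ws *= rng.random((4000, 1)) / np.linalg.norm(ws, axis=1, keepdims=True)
preds = ws @ X.T                          # (candidates, n)
eps_draws = rng.choice([-1.0, 1.0], size=(300, n))

def rc(keep):
    """MC estimate of (1/n) E[sup over the kept candidates]."""
    return np.mean([np.max(preds[keep] @ e) for e in eps_draws]) / n

small_var = (preds ** 2).mean(axis=1) <= 0.05   # P_n ||f||_2^2 <= delta
assert small_var.any()
# restricting to the delta-ball can only shrink the complexity estimate
assert rc(small_var) <= rc(np.ones(len(ws), dtype=bool))
```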
+
+The local Rademacher complexity provides the complexity estimation of a small subset of a function class. Based on this property, we propose a new upper bound of the local Rademacher complexity of FCNs with dropout.
+
+**Theorem 1.** Assume $\|\mathbf{x}\|_{\mathrm{F}} \le B$, where $\|\mathbf{x}\|_{\mathrm{F}}$ is the Frobenius norm of the feature of the input data and $B \in \mathbb{R}$. Consider $L$ as the total number of layers and $\boldsymbol{\theta}^l$ as the vector of keep rates in the $l$-th layer. Denote $n$ as the number of examples and $k$ as the number of classes. Consider the singular value decomposition (SVD) of $\mathbf{W}^l$ as $\mathbf{W}^l = U^l\Sigma^l V^{lT}$, where $\Sigma^l = \text{diag}(\sigma_1^l, ..., \sigma_{\text{rank}(\mathbf{W}^l)}^l)$ and $\sigma_j^l$ is a singular value. Under the conditions $\forall h, 0 \le h \le \text{rank}(\mathbf{W}^l)$ and $\frac{1}{n}\sum_{i=1}^n \mathbb{E}_{\mathbf{r}^{:(l-1)}} \|f^l(\mathbf{x}_i; \mathbf{W}^{:l}, \mathbf{r}^{:(l-1)})\|_2^2 \le \delta^l$, the empirical local Rademacher complexity of the function class $\hat{\mathcal{L}}$ on FCNs is bounded by
+
+$$R_S(\hat{\mathcal{L}}) \le k \left[ \sqrt{\delta^L} + \frac{B}{n} 2^{L-1} \prod_{i=1}^{L} (\|\boldsymbol{\theta}^{i-1}\|_2) \sum_{j>h}^{\text{rank}(\mathbf{W}^i)} \sigma_j^i + \sum_{i=2}^{L} 2^{i-1} \sqrt{\delta^{L-i+1}} \prod_{j=1}^{i-1} (\|\boldsymbol{\theta}^{L-j}\|_2) \sum_{j>h}^{\text{rank}(\mathbf{W}^{L-j+1})} \sigma_j^{L-j+1} \right]. \quad (6)$$
+
+The detailed proof of Theorem 1 can be found in the Appendix. Based on the upper bound of the local Rademacher complexity in Theorem 1, a generalization error bound of deep neural networks can be easily derived [38]. According to Theorem 1, increasing the variance $\delta$ enlarges the upper bound, while the bound tightens as the keep rates $\theta$ and $\sum_{j>h}^{\text{rank}(\mathbf{W})} \sigma_j$ become small. We therefore consider deriving a regularization function in terms of $\theta$ and $\sum_{j>h}^{\text{rank}(\mathbf{W})} \sigma_j$. The local Rademacher complexity of FCNs without dropout can be analyzed in a similar way, and the conclusion is given in the following corollary.
+
+**Corollary 1.** Under the same assumptions as Theorem 1, the empirical local Rademacher complexity of the function class $\hat{\mathcal{L}}$ on FCNs without dropout is bounded by
+
+$$R_S(\hat{\mathcal{L}}) \le k \left[ \sqrt{\delta^L} + \frac{B}{n} 2^{L-1} \prod_{i=1}^{L} (\sum_{j>h}^{\text{rank}(\mathbf{W}^i)} \sigma_j^i) + \sum_{i=2}^{L} 2^{i-1} \sqrt{\delta^{L-i+1}} \prod_{j=1}^{i-1} (\sum_{j>h}^{\text{rank}(\mathbf{W}^{L-j+1})} \sigma_j^{L-j+1}) \right]. \quad (7)$$
+
+In contrast to Theorem 1, the upper bound in Corollary 1 does not include the dropout-related term $\theta$. The Rademacher complexity bound of neural networks proposed by Golowich [34] can be tighter and independent of the depth of FCNs, but it imposes additional constraints on the loss function and the activation function; for example, the activation function must not be element-wise, whereas the activation function we use (ReLU) is element-wise in the sense of Golowich [34]. These constraints are therefore inappropriate in our situation. In addition, we analyze the hypotheses complexity of FCNs in a different way, by investigating the properties of local hypotheses of FCNs. Moreover, our main aim is to identify the important factors influencing the hypotheses complexity and to incorporate them into the training phase of the neural network.
+
+**3.2 Complexity Regularization Functions**
+
+According to Theorem 1, the local Rademacher complexity is bounded by a function of the keep rates $\theta$ and the variances $\delta$ in FCNs. To reduce the hypotheses complexity of FCNs, it is natural to design a regularization function by transforming our proposed upper bound of the local Rademacher complexity. A straightforward regularization function can be written as
+
+$$\operatorname{Reg}(\mathbf{W}, \boldsymbol{\theta}) = k\left[\sqrt{\delta^L} + \frac{B}{n} 2^{L-1}\prod_{i=1}^{L} (\|\boldsymbol{\theta}^{i-1}\|_2) \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^i)} \sigma_j^i\right. \\
+\left. + \sum_{i=2}^{L} 2^{i-1}\sqrt{\delta^{L-i+1}}\prod_{j=1}^{i-1} (\|\boldsymbol{\theta}^{L-j}\|_2) \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^{L-j+1})} \sigma_j^{L-j+1}\right]. \quad (8)$$
+
+According to Definition 2, it is difficult to know the exact value of the variance $\delta$ of the function $f$. For simplicity, we take the variance $\delta$ to be a sufficiently large constant. In addition, $k$ is the number of classes to predict and $B \in \mathbb{R}$ is a constant. By omitting these constants, the regularization function can be rewritten as
+
+$$\operatorname{Reg}(\mathbf{W}, \boldsymbol{\theta}) = \sum_{i=2}^{L-1} \prod_{j=1}^{i-1} (\|\boldsymbol{\theta}^{L-j}\|_2) \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^{L-j+1})} \sigma_j^{L-j+1} \\
++ \frac{1}{n} \prod_{i=1}^{L-1} (\|\boldsymbol{\theta}^{i-1}\|_2) \sum_{j>h}^{\operatorname{rank}(\mathbf{W}^i)} \sigma_j^i. \quad (9)$$
+
+It can be noticed that both terms in the regularization function involve $\theta$ and $\sum_{q>h}^{\text{rank}(\mathbf{W})} \sigma_q$. The sparseness and the rank of weight matrix in each layer would be influenced by constraining $\theta$ and $\sum_{q>h}^{\text{rank}(\mathbf{W})} \sigma_q$ respectively. For an efficient optimization,
\ No newline at end of file
diff --git a/samples/texts/7683722/page_4.md b/samples/texts/7683722/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..6096c3572c0c727ee303241d542637fdfd9da1ff
--- /dev/null
+++ b/samples/texts/7683722/page_4.md
@@ -0,0 +1,109 @@
+we extract these two terms from the complex regularization function and obtain
+
+$$
+\mathrm{Reg}(\mathbf{W}, \boldsymbol{\theta}) = \sum_{l=1}^{L} (\|\boldsymbol{\theta}^{l-1}\|_2 \sum_{j>h}^{\mathrm{rank}(\mathbf{W}^l)} \sigma_j^l). \quad (10)
+$$
+
+Given a training set $\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^n$, the objective function of FCNs is therefore defined as
+
+$$
+\min_{\mathbf{W},\boldsymbol{\theta}} \sum_{i=1}^{n} \ell(f(\mathbf{x}_i; \mathbf{W}, \boldsymbol{\theta}), \mathbf{y}_i) + \lambda \sum_{l=1}^{L} (\|\boldsymbol{\theta}^{l-1}\|_2 \sum_{j>h}^{\mathrm{rank}(\mathbf{W}^l)} \sigma_j^l). \quad (11)
+$$
+
+The objective function aims to minimize the differences between the value of $f(\mathbf{x}_i)$ and the value of $\mathbf{y}_i$. Since overfitting is one of the main problems in deep neural networks, we add a new regularization function into the objective function. This regularization function is derived from the upper bound of the local Rademacher complexity, and it aims to lower the rank of weight matrices $\mathbf{W}$ and to penalize $\boldsymbol{\theta}$. If there is no dropout in FCNs, the objective function correspondingly becomes
+
+$$
+\min_{\mathbf{W}} \sum_{i=1}^{n} \ell(f(\mathbf{x}_i; \mathbf{W}), \mathbf{y}_i) + \lambda \sum_{l=1}^{L} \left( \sum_{j>h}^{\text{rank}(\mathbf{W}^l)} \sigma_j^l \right). \quad (12)
+$$
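
The regularization term shared by Eqs. 10–12 can be evaluated directly from an SVD of each layer's weights; a minimal sketch (layer shapes, keep-rate values, the cutoff $h$, and the function name are illustrative):

```python
import numpy as np

def local_rademacher_reg(weights, thetas, h):
    """Regularizer in the form of Eq. 10: sum over layers of
    ||theta^{l-1}||_2 times the tail sum of singular values of W^l."""
    reg = 0.0
    for W, theta in zip(weights, thetas):
        s = np.linalg.svd(W, compute_uv=False)   # descending order
        reg += np.linalg.norm(theta, 2) * s[h:].sum()
    return reg

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 4)) for _ in range(2)]
thetas = [np.full(4, 0.8), np.full(4, 0.5)]

assert local_rademacher_reg(weights, thetas, h=4) == 0.0   # empty tail
assert (local_rademacher_reg(weights, thetas, h=0)
        >= local_rademacher_reg(weights, thetas, h=1))
```

Increasing $h$ shrinks the penalized singular-value tail, so the regularizer only targets the low-energy directions of each weight matrix.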
+
+**3.3 An Extension to Convolutional Layers**
+
+To extend the complexity regularization to CNNs, we follow the complexity regularization procedure in FCNs, and most of the preliminaries for FCNs carry over to CNNs. Denote $\mathbf{W}^l \in \mathbb{R}^{p^l \times q^l}$ as the kernel matrix in the $l$-th layer. Although dropout works well in FCNs, it yields only a slight improvement in convolutional neural networks. This is because the structure of convolutional networks makes every unit correlated with its adjacent units in the feature map: if a single unit is dropped, the adjacent units still contain some of its information, so the effect of dropout is much weaker. For this reason, DropBlock [12] is adopted in our analyses. It randomly drops a block of units in the feature map, which performs better than dropout in CNNs. Following DropBlock [12], we calculate the keep rate of each unit in every feature map to obtain the mask matrices $\hat{\boldsymbol{\theta}}^l$. Note that in CNNs, the vector $\boldsymbol{\theta}^l$ of FCNs becomes a matrix $\hat{\boldsymbol{\theta}}^l$, which is determined by $\gamma$ and $b$. The detailed matrix $\hat{\boldsymbol{\theta}}$ can be found in the Appendix. We correspondingly have $\hat{\boldsymbol{\theta}}^l = \mathbb{E}_{\hat{\mathbf{r}}^l}[\hat{\mathbf{r}}^l]$. Hence, for convolutional neural networks, the output of the $l$-th layer is
+
+$$
+f^l(\mathbf{x}; \mathbf{W}^l, \hat{\mathbf{r}}^{l-1}) = \mathbf{W}^l \otimes (\hat{\mathbf{r}}^{l-1} \odot \phi(f^{l-1}(\mathbf{x}; \mathbf{W}^{l-1}, \hat{\mathbf{r}}^{l-2}))), \tag{13}
+$$
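For a single channel, Eq. 13 amounts to masking the activated feature map and convolving it with the kernel. The following numpy sketch illustrates this; the helper `conv2d_valid`, the ReLU choice for $\phi$, and all shapes are our own illustrative assumptions:

```python
import numpy as np

def conv2d_valid(W, X):
    """'Valid' discrete 2-D convolution of kernel W with X
    (the kernel is flipped, as in the mathematical convolution operator)."""
    p, q = W.shape
    Wf = W[::-1, ::-1]
    t, u = X.shape
    out = np.empty((t - p + 1, u - q + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(Wf * X[i:i + p, j:j + q])
    return out

relu = lambda z: np.maximum(z, 0.0)                 # the activation phi
rng = np.random.default_rng(0)
f_prev = rng.standard_normal((6, 6))                # previous layer's feature map
r_mask = (rng.random((6, 6)) > 0.3).astype(float)   # a binary keep mask (DropBlock-style)
W = rng.standard_normal((3, 3))
f_l = conv2d_valid(W, r_mask * relu(f_prev))        # one output channel, as in Eq. 13
```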
+
+where $\otimes$ means the discrete convolution between matrices. Next, we analyze the local Rademacher complexity of CNNs with DropBlock.
+
+In convolutional networks, we use the entrywise *p*-norm of the matrix instead of the induced *p*-norm in the mathematical derivation. By doing so, whether the output of each layer is a vector or a matrix (i.e., whether in FCNs or CNNs), the output with keep rates can be presented in the same way. The entrywise *p*-norm regards an *m* × *n* matrix as an *mn*-dimensional vector; in other words, the matrix **W** is effectively converted into the vector *vec*(**W**) in the derivation for convolutional layers. Therefore, the *p*-norms of the outputs of fully-connected layers and convolutional layers can be treated identically, which remarkably simplifies the derivation. Throughout the mathematical treatment of convolutional networks, all *p*-norms ||·||_p represent the entrywise *p*-norm. The Frobenius norm ||·||_F appearing in the conditions of the theorems and corollaries is a special case of the entrywise 2-norm (i.e., ||·||_2). With the idea of DropBlock [12], we can derive Theorem 2 based on Theorem 1.
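The equivalence used here is easy to check numerically: the entrywise 2-norm of a matrix, its Frobenius norm, and the 2-norm of its vectorization all coincide. A small numpy check (shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 7))

entrywise_2 = np.sqrt(np.sum(W ** 2))    # entrywise 2-norm: treat W as a 35-dim vector
frobenius = np.linalg.norm(W, 'fro')     # Frobenius norm ||W||_F
vec_2 = np.linalg.norm(W.reshape(-1))    # 2-norm of vec(W)

assert np.isclose(entrywise_2, frobenius) and np.isclose(frobenius, vec_2)
```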
+
+**Theorem 2.** Assume $\|\mathbf{x}\|_F \le B$, where $\|\mathbf{x}\|_F$ is the Frobenius norm of the feature of the input data and $B \in \mathbb{R}$. Denote $\mathbf{W}^l \in \mathbb{R}^{p^l \times q^l}$, and assume $p^l \times q^l \le S^l$, where $S^l \in \mathbb{R}$. Consider $L$ as the number of total layers in CNNs and $\hat{\boldsymbol{\theta}}^l$ as the matrix of keep rates in the $l$-th layer. Denote $n$ as the number of examples and $k$ as the number of classes. Consider the SVD of $\mathbf{W}^l$ as $\mathbf{W}^l = U^l\Sigma^l V^{lT}$, where $\Sigma^l = \text{diag}(\sigma_1^l, \dots, \sigma_{\text{rank}(\mathbf{W}^l)}^l)$ and $\sigma_i^l$ is the $i$-th singular value. Under the condition $\forall h, 0 \le h \le \text{rank}(\mathbf{W}^l)$, $\frac{1}{n}\sum_{i=1}^n E_{\hat{\mathbf{r}}^{(l-1)}} \|f^l(\mathbf{x}_i; \mathbf{W}^l, \hat{\mathbf{r}}^{(l-1)})\|_2^2 \le \delta^l$, the empirical local Rademacher complexity of the function class $\hat{\mathcal{L}}$ on CNNs is bounded by
+
+$$
+\begin{equation}
+\begin{aligned}
+R_S(\hat{\mathcal{L}}) &\le k \Big[ \sqrt{\delta^L} + \frac{B}{n} 2^{L-1} \prod_{i=1}^{L} \Big(\sqrt{S^i} \|\hat{\boldsymbol{\theta}}^{i-1}\|_2 \sum_{j>h}^{\mathrm{rank}(\mathbf{W}^i)} \sigma_j^i\Big) \\
+&\qquad + \sum_{i=2}^{L} 2^{i-1} \sqrt{(S\delta)^{L-i+1}} \prod_{j=1}^{i-1} \Big(\sqrt{S^{L-j+1}} \|\hat{\boldsymbol{\theta}}^{L-j}\|_2 \sum_{k>h}^{\mathrm{rank}(\mathbf{W}^{L-j+1})} \sigma_k^{L-j+1}\Big) \Big],
+\end{aligned}
+\tag{14}
+\end{equation}
+$$
+
+The brief proof of Theorem 2 can be found in the Appendix.
+Theorem 2 further investigates the structure of the network, e.g., $S$ and $\hat{\boldsymbol{\theta}}$. In mathematics, discrete convolution is composed of addition and multiplication. Hence the entrywise *p*-norm of a convolution of matrices can be transformed into the entrywise *p*-norm of a product of matrices by certain inequalities under certain conditions. The constant $S$ is also derived from this procedure, when converting the entrywise 1-norm to the entrywise 2-norm. Note that the term $\hat{\boldsymbol{\theta}}$ in the above upper bound differs from the $\boldsymbol{\theta}$ in Theorem 1.
+
+Similarly, the local Rademacher complexity of CNNs without DropBlock can be analyzed to obtain the following corollary.
+
+**Corollary 2.** Under the same assumption as Theorem 2, the empirical local Rademacher complexity of the function class $\hat{\mathcal{L}}$ on CNNs without DropBlock is bounded by
+
+$$
+\begin{align}
+R_S(\hat{\mathcal{L}}) &\le k \Big[ \sqrt{\delta^L} + \frac{B}{n} 2^{L-1} \prod_{i=1}^{L} \Big(\sqrt{S^i} \sum_{j>h}^{\mathrm{rank}(\mathbf{W}^i)} \sigma_j^i\Big) \nonumber \\
+&\qquad + \sum_{i=2}^{L} 2^{i-1} \sqrt{(S\delta)^{L-i+1}} \prod_{j=1}^{i-1} \Big(\sqrt{S^{L-j+1}} \sum_{k>h}^{\mathrm{rank}(\mathbf{W}^{L-j+1})} \sigma_k^{L-j+1}\Big) \Big], \tag{15}
+\end{align}
+$$
+
+For convolutional networks with DropBlock, we extract the most influential terms in the upper bound to construct regularization functions. The term $S^l$ in the upper bounds of Theorem 2 and Corollary 2 is also a constant. Whether $\boldsymbol{\theta}$ or $\hat{\boldsymbol{\theta}}$ is a vector or a matrix makes no difference, since the 2-norm of a vector is the same as the entrywise 2-norm of a matrix. Thus, we can achieve
+
+$$
+\min_{\mathbf{W},\hat{\boldsymbol{\theta}}} \sum_{i=1}^{n} \ell(f(\mathbf{x}_i; \mathbf{W}, \hat{\boldsymbol{\theta}}), \mathbf{y}_i) + \lambda \sum_{l=1}^{L} (\|\hat{\boldsymbol{\theta}}^{l-1}\|_2 \sum_{j>h}^{\mathrm{rank}(\mathbf{W}^l)} \sigma_j^l). \quad (16)
+$$
+
+which is the same as the objective function in FCNs. Similarly, if there
+is no DropBlock in CNNs, the objective function correspondingly
+becomes
+
+$$
+\min_{\mathbf{W}} \sum_{i=1}^{n} \ell(f(\mathbf{x}_i; \mathbf{W}), \mathbf{y}_i) + \sum_{l=1}^{L} (\sum_{j>h}^{\mathrm{rank}(\mathbf{W}^l)} \sigma_j^l). \quad (17)
+$$
\ No newline at end of file
diff --git a/samples/texts/7683722/page_5.md b/samples/texts/7683722/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..a5a2e3cbfc86653d40ec6087bdc4f5263e45f18c
--- /dev/null
+++ b/samples/texts/7683722/page_5.md
@@ -0,0 +1,61 @@
+## 3.4 An Extension to Multiple Channels
+
+In mainstream CNNs, convolutional feature maps are usually multichannel. In general, each layer's feature map and output have three dimensions, and each kernel (weight matrix **W**) has four dimensions. In the mathematical derivation, we convert the dimension of the output from three to two, and the dimension of the kernel from four to two. Each layer's output **f**$^l$ and kernel **W**$^l$ are considered as block matrices. Denoting $c^l$ as the channel number of the $l$-th layer's feature map, we have
+
+$$ \mathbf{f}^l = [f_1^l \ f_2^l \ f_3^l \ \dots \ f_{c^{l+1}}^l], \quad (18) $$
+
+where $\mathbf{f}^l$ denotes the whole output of the $l$-th layer, and $f_i^l$ (a single block) denotes one output in the $l$-th layer. Because the number of outputs equals the channel number of the $(l+1)$-th layer's feature map, the subscript of the last block is $c^{l+1}$. The above equation means that all outputs in the $l$-th layer can be considered as a block matrix $\mathbf{f}^l$. According to DropBlock [12], it is better for each feature channel to have its own DropBlock mask, which implies
+
+$$ \hat{\boldsymbol{\theta}}^{l-1} = [\theta_1^{l-1} \ \theta_2^{l-1} \ \theta_3^{l-1} \ \dots \ \theta_{c^l}^{l-1}], \quad (19) $$
+
+where $\theta_i^{l-1}$ stands for the mask on each output in the $(l-1)$-th layer (i.e., the feature map of the $i$-th channel in the $l$-th layer). Since $\boldsymbol{\theta} = \mathbb{E}_{\mathbf{r}}\,\mathbf{r}$, we can correspondingly obtain the block matrix $\hat{\mathbf{r}}^{l-1}$
+
+$$ \hat{\mathbf{r}}^{l-1} = [r_1^{l-1} \ r_2^{l-1} \ r_3^{l-1} \ \dots \ r_{c^l}^{l-1}] . \quad (20) $$
+
+Based on this idea, we can convert the four-dimensional kernel into a two-dimensional block matrix
+
+$$ \mathbf{W}^l = \begin{bmatrix} W_{11}^l & W_{12}^l & W_{13}^l & \cdots & W_{1c^{l+1}}^l \\ W_{21}^l & W_{22}^l & W_{23}^l & \cdots & W_{2c^{l+1}}^l \\ \vdots & \vdots & \vdots & & \vdots \\ W_{c^l 1}^l & W_{c^l 2}^l & W_{c^l 3}^l & \cdots & W_{c^l c^{l+1}}^l \end{bmatrix}, \quad (21) $$
+
+where $c^l$ is the channel number of the $l$-th layer, $c^{l+1}$ is the channel number of the $(l+1)$-th layer, the $i$-th row of the block matrix $\mathbf{W}^l$ represents all kernels in the $i$-th channel, and the $j$-th column of the block matrix $\mathbf{W}^l$ represents the $j$-th kernel in all channels. Since the number of kernels in the $l$-th layer is equal to the number of outputs in the $l$-th layer (i.e., the channel number of the feature map in the $(l+1)$-th layer), the matrix $\mathbf{W}^l$ should have $c^{l+1}$ columns. In the following, $\mathbf{f}^l$, $\hat{\boldsymbol{\theta}}^l$, $\hat{\mathbf{r}}^l$, $\mathbf{W}^l$ (without subscript) are regarded as block matrices, and we use subscripts (i.e., $W_{ij}^l$) to denote specific blocks.
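The flattening of Eq. 21 can be sketched in numpy. Here we store the four-dimensional kernel with layout (c_in, c_out, p, q); the layout and the helper name are our own assumptions:

```python
import numpy as np

def kernel_to_block_matrix(W4):
    """Flatten a 4-D kernel of shape (c_in, c_out, p, q) into the 2-D block
    matrix of Eq. 21: block (i, j) is the p x q kernel linking input channel i
    to output channel j, so the result has shape (c_in * p, c_out * q)."""
    c_in, c_out, p, q = W4.shape
    return W4.transpose(0, 2, 1, 3).reshape(c_in * p, c_out * q)

W4 = np.arange(2 * 3 * 2 * 2).reshape(2, 3, 2, 2).astype(float)
B = kernel_to_block_matrix(W4)
assert B.shape == (4, 6)                     # c_in * p rows, c_out * q columns
assert np.array_equal(B[0:2, 0:2], W4[0, 0]) # block (1,1) is W_11
```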
+
+Based on these block matrices and the Eq. 13, we have
+
+$$ \mathbf{f}^l = \Big[\sum_{i=1}^{c^{l}} \mathbf{W}_{i1}^l \otimes (\theta_i^{l-1} \odot \phi(f_i^{l-1})), \ \dots, \ \sum_{i=1}^{c^{l}} \mathbf{W}_{ic^{l+1}}^l \otimes (\theta_i^{l-1} \odot \phi(f_i^{l-1}))\Big], $$
+
+where $f_j^l = \sum_{i=1}^{c^{l}} \mathbf{W}_{ij}^l \otimes (\theta_i^{l-1} \odot \phi(f_i^{l-1}))$. The equation above is a variant of Eq. 13, presenting the output of the $l$-th layer with multiple channels in convolutional networks. Hence, we can
+
+follow Theorem 2 to get a variant of Eq. 14 in the circumstance of multichannel with DropBlock:
+
+$$
+\begin{aligned}
+R_S(\hat{\mathcal{L}}) \le k \Big[ & \sqrt{\delta^L} + \frac{B}{n} 2^{L-1} \prod_{a=1}^{L} \sqrt{S^a} \Big( \sum_{m=1}^{c^a} \|\hat{\boldsymbol{\theta}}_m^{a-1}\|_2 \sum_{r=1}^{c^{a+1}} \sum_{k>h} \sigma_k^a \Big) \\
+& + \sum_{a=2}^{L} 2^{a-1} \sqrt{(S\delta)^{L-a+1}} \prod_{j=1}^{a-1} \sqrt{S^{L-j+1}} \Big( \sum_{m=1}^{c^{L-j+1}} \|\hat{\boldsymbol{\theta}}_m^{L-j}\|_2 \sum_{r=1}^{c^{L-j+2}} \sum_{k>h} \sigma_k^{L-j+1} \Big) \Big],
+\end{aligned}
+$$
+
+where $c^l \times c^{l+1} \times p^l \times q^l \le S^l$ ($W_{ij}^l \in \mathbb{R}^{p^l \times q^l}$, $\mathbf{W}^l \in \mathbb{R}^{p^l \times q^l \times c^l \times c^{l+1}}$). The roles of the two terms ($S^a$, $\sum_{k>h} \sigma_k^a$) are consistent with those in Theorem 2. We can follow the process of Section 3.3 to obtain a regularization function similar to Eq. 16, which is
+
+$$
+\begin{aligned}
+& \min_{\mathbf{W},\hat{\boldsymbol{\theta}}} \sum_{i=1}^{n} \ell(f(\mathbf{x}_i; \mathbf{W}, \hat{\boldsymbol{\theta}}), \mathbf{y}_i) \\
+& \quad + \lambda \sum_{l=1}^{L} \Big[ \sum_{m=1}^{c^l} \Big( \|\hat{\boldsymbol{\theta}}_m^{l-1}\|_2 \sum_{j=1}^{c^{l+1}} \sum_{k>h}^{\mathrm{rank}(\mathbf{W}_{mj}^l)} \sigma_k^l \Big) \Big].
+\end{aligned}
+\quad (22)
+$$
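The multichannel regularizer of Eq. 22 can be sketched directly in numpy. The block layout (c_in, c_out, p, q), the flattening of each per-channel mask for its 2-norm, and the helper name are our own assumptions:

```python
import numpy as np

def multichannel_penalty(W_blocks, theta_masks, h):
    """Eq. 22 regularizer: for each input channel m, sum the tail singular
    values (beyond the h largest) of all its kernels W_mj, weighted by the
    2-norm of that channel's keep-rate mask theta_m.

    W_blocks: array of shape (c_in, c_out, p, q); theta_masks: c_in masks.
    """
    total = 0.0
    c_in, c_out = W_blocks.shape[:2]
    for m in range(c_in):
        tail = 0.0
        for j in range(c_out):
            s = np.linalg.svd(W_blocks[m, j], compute_uv=False)
            tail += s[h:].sum()
        total += np.linalg.norm(np.ravel(theta_masks[m])) * tail
    return float(total)
```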
+
+Network architectures have been taken into account in the objective function, e.g., the number of channels $c^l$ in the $l$-th layer. Note that multichannel image convolution is still a 2D convolution, even though it looks like a 3D convolution, because the kernels only slide along the spatial dimensions, not the channel dimension.
+
+## 4 OPTIMIZATION
+
+Based on the regularization function derived from an upper bound of the local Rademacher complexity, we now solve the objective functions. In FCNs, there are weight matrices **W** and keep rate vectors **θ** to solve for. Since both the loss function and the regularization function contain the keep rate vectors **θ**, the block coordinate descent algorithm is used to optimize them [39]. During the optimization of **θ**, the expected value of the Bernoulli dropout variables is used to approximate the true value of $f^L(\mathbf{x}; \mathbf{W}, \boldsymbol{\theta})$, following Srivastava [40], because computing the true value of $f^L(\mathbf{x}; \mathbf{W}, \boldsymbol{\theta})$ would be extremely time consuming due to the stochasticity of dropout. In other words, the neural network with dropout now works as if it were without dropout.
+
+Backpropagation is widely used to optimize the weights of FCNs. Nevertheless, we cannot straightforwardly apply backpropagation to solve for the weight matrices **W** in Eq. 11, because the proposed regularization function is a non-smooth convex function, and its non-smooth points may block the backpropagation procedure. Therefore, we propose a two-stage optimization for solving FCNs with the new local Rademacher regularization function. The whole process is demonstrated below.
+
+Given a fixed keep rate **θ**, we first ignore the regularization function and conduct backpropagation based on the classification loss to solve for the weight matrices **W**. Secondly, we take the current optimal weight matrix $\tilde{\mathbf{W}}^l$ in the $l$-th layer and project it onto the constraint set:
+
+$$
+\min_{\mathbf{W}^l} \| \mathbf{W}^l - \tilde{\mathbf{W}}^l \|_2 + C \sum_{j>h}^{\mathrm{rank}(\mathbf{W}^l)} \sigma_j^l,
+\quad (23)
+$$
\ No newline at end of file
diff --git a/samples/texts/7683722/page_6.md b/samples/texts/7683722/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..802c0dc66c7ff2721ab5c31a033061aaea7fe630
--- /dev/null
+++ b/samples/texts/7683722/page_6.md
@@ -0,0 +1,396 @@
+where $C = \lambda \|\boldsymbol{\theta}^{l-1}\|_2$. Suppose the SVD of $\tilde{\mathbf{W}}^l$ is $\tilde{\mathbf{W}}^l = U^l\Sigma^l V^{lT}$. Define an auxiliary matrix $\Sigma_C^h$ based on $\Sigma^l = \text{diag}(\sigma_1^l, \dots, \sigma_{\text{rank}(\tilde{\mathbf{W}}^l)}^l)$ as follows.
+
+$$
+\Sigma_C^h = \begin{bmatrix}
+\sigma_1^l & \dots & 0 & 0 & \dots & 0 \\
+\vdots & \ddots & \vdots & \vdots & & \vdots \\
+0 & \dots & \sigma_h^l & 0 & \dots & 0 \\
+0 & \dots & 0 & \sigma_{h+1}^l - C & \dots & 0 \\
+\vdots & & \vdots & \vdots & \ddots & \vdots \\
+0 & \dots & 0 & 0 & \dots & \sigma_{\text{rank}(\tilde{\mathbf{W}}^l)}^l - C
+\end{bmatrix} \quad (24)
+$$
+
+According to the SVT algorithm [41], the solution of Eq. 23 can be written in a closed form
+
+$$
+\mathbf{W}^l = U^l \Sigma_C^h V^{lT}. \tag{25}
+$$
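The closed-form shrinkage of Eqs. 24-25 takes only a few lines of numpy. Clipping the shrunk values at zero is our own safeguard for the case $C > \sigma_j^l$, which Eq. 24 leaves implicit:

```python
import numpy as np

def svt_project(W_tilde, h, C):
    """Keep the h largest singular values, subtract C from the rest (Eqs. 24-25)."""
    U, s, Vt = np.linalg.svd(W_tilde, full_matrices=False)
    s_new = s.copy()
    s_new[h:] = np.maximum(s_new[h:] - C, 0.0)  # shrink the tail singular values
    return (U * s_new) @ Vt                     # equals U diag(s_new) V^T

W_tilde = np.diag([3.0, 2.0, 1.0])
W_proj = svt_project(W_tilde, h=1, C=0.5)       # singular values become 3.0, 1.5, 0.5
```

This projection corresponds to lines 11-15 of Algorithm 1 and is what gradually lowers the rank of each weight matrix.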
+
+**Algorithm 1: The Optimization Process in FCNs**
+
+**Input:** The labeled dataset *S*;
+
+**Output:** weight matrix **W**;
+
+1. Set the variance $\delta$;
+
+2. Set the three hyperparameters $\lambda$, $m$ and $h$;
+
+3. Initialize $\mathbf{W}$ and $\boldsymbol{\theta}$: $\mathbf{W} := \mathbf{W}^0$, $\boldsymbol{\theta} := \boldsymbol{\theta}^0$;
+
+4. epochNum = 0;
+
+5. **while** epochNum ≤ epochMaxNum **do**
+
+6. Use backpropagation to update **W** and $\theta$;
+
+7. epochNum := epochNum + 1;
+
+8. **if** epochNum ≥ k1 and epochNum mod m == 0 **then**
+
+9. $l = 1$;
+
+10. **for** $l < L$ **do**
+
+11. Get $\Sigma^l$ by $\tilde{\mathbf{W}}^l = U^l\Sigma^l V^{lT}$;
+
+12. $C = \lambda ||\boldsymbol{\theta}^{l-1}||_2$;
+
+13. $\Sigma_C^h = \text{diag}(\sigma_1^l, \dots, \sigma_h^l, \sigma_{h+1}^l - C, \dots, \sigma_{\text{rank}(\hat{W}^l)}^l - C)$;
+
+14. $\mathbf{W}^l = U^l \Sigma_C^h V^{lT}$;
+
+15. $\tilde{\mathbf{W}}^l := \mathbf{W}^l$;
+
+16. $l := l + 1$;
+
+17. **end**
+
+18. **end**
+
+19. **end**
+
+20. **return** **W**.
+
+Then we put the new optimal weight matrix $\mathbf{W}^l$ back into the backpropagation procedure by replacing the original weight matrix $\tilde{\mathbf{W}}^l$, and continue training the FCN with forward propagation and backpropagation. For efficiency, the second stage can be conducted once after every $m$ ($m = 1, 10, 20, 30, \dots$) epochs of the first stage. For FCNs without dropout, the optimization process is basically the same; the only difference is omitting the keep rate vectors $\boldsymbol{\theta}$ during optimization. The pseudocode is presented in Algorithm 1 above.
+
+For CNNs, we do not exactly follow the procedure in DropBlock [12]. Since $\boldsymbol{\theta}$ is optimized in fully-connected layers, we likewise optimize $\hat{\boldsymbol{\theta}}$ in convolutional layers. The matrix $\hat{\boldsymbol{\theta}}$ (detailed in the Appendix) contains the parameters $b$ and $\gamma$. The estimation of $\gamma$ in DropBlock [12] is still used to calculate its value through the following formula
+
+$$
+\gamma = \frac{d}{b^2} \frac{t^2}{(t - b + 1)^2}, \qquad (26)
+$$
+
+
+
+
+| Model | CIFAR-10 | CIFAR-100 |
+|---|---|---|
+| Original | 18.05 | 50.22 |
+| Weight decay | 17.34 | 46.48 |
+| Early stop | 16.91 | 45.47 |
+| Dropout | 15.62 | 43.28 |
+| VariationalDropout | 14.78 | 42.34 |
+| SparseVariationalDropout | 14.89 | 42.25 |
+| AdaptiveDropout with RC | 13.79 | 38.57 |
+| LocalDrop in FCNs | 16.67 | 45.29 |
+| LocalDrop without dropout | 16.12 | 45.46 |
+| LocalDrop | **12.16** | **38.10** |
+
+TABLE 1: Classification error (%) on CIFAR datasets.
+
+
+
+
+| Model | CIFAR-10 | CIFAR-100 | ILSVRC-2012 |
+|---|---|---|---|
+| ResNet-50 | 7.3 | 28.9 | 24.5 |
+| +Dropout | 6.9 | 28.3 | 23.7 |
+| +Cutout | 6.6 | 27.8 | 24.3 |
+| +SpatialDropout | 6.4 | 27.6 | 23.1 |
+| +DropBlock | 6.1 | 27.3 | 22.2 |
+| +LocalDrop without DB | 6.2 | 27.5 | 21.9 |
+| +LocalDrop | 5.3 | 26.2 | 21.1 |
+| DenseNet | 5.0 | 23.6 | 25.1 |
+| +Dropout | 5.1 | 24.1 | 24.6 |
+| +DropBlock | 4.5 | 23.0 | 24.3 |
+| +LocalDrop | 4.2 | 22.5 | 23.8 |
+| PreActResNet | 4.9 | 22.8 | 21.2 |
+| +Dropout | 4.9 | 23.3 | 20.8 |
+| +DropBlock | 4.5 | 22.4 | 20.5 |
+| +LocalDrop | 4.3 | 22.0 | 20.2 |
+| DPN-92 | 4.8 | 23.1 | 20.7 |
+| +Dropout | 5.2 | 23.2 | 21.0 |
+| +DropBlock | 4.5 | 22.5 | 20.5 |
+| +LocalDrop | 4.3 | 22.2 | 19.9 |
+
+TABLE 2: Classification error (%) on CIFAR-10, CIFAR-100 and ILSVRC2012 datasets.
+
+where $d$ is the drop rate, $b$ is the block size, and $t$ is the feature size. The term $d$ can be regarded as the probability of dropping each unit in traditional dropout; hence, the above equation relates the drop rate in DropBlock (i.e., $\gamma$) to the drop rate in dropout (i.e., $d$). The term $b^2$ reflects that every randomly dropped seed unit is expanded to a block of $b^2$ units. The term $(t-b+1)^2$ is the size of the valid seed region for dropping seed units, which ensures that blocks do not exceed the region of the feature map. Since the seeds are random, dropped blocks may overlap, so the equation is only an estimation of $\gamma$. The term $b$ is treated as a hyperparameter, and $d$ as a parameter to be optimized. In DropBlock [12], a large initial value of $d$ degrades performance. Thus, we set the initial value of $d$ as $\theta$, and use block coordinate descent to achieve the optimal value of $d$, following the same procedure as in fully-connected layers.
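Eq. 26 is a one-liner; a quick python check (the values chosen for $d$, $b$, and $t$ are purely illustrative):

```python
def dropblock_gamma(d, b, t):
    """Seed-drop probability gamma from drop rate d, block size b,
    and feature-map size t, as in Eq. 26."""
    return (d / b ** 2) * (t ** 2 / (t - b + 1) ** 2)

gamma = dropblock_gamma(d=0.1, b=7, t=28)  # with b=1, gamma reduces to d, as in dropout
```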
+For optimizing $\mathbf{W}^l$, backpropagation is basically the same in both fully-connected layers and convolutional layers.
+
+# **5 EXPERIMENTS**
+
+In this section, we conduct experiments to evaluate LocalDrop.
+We present our experiment settings and illustrate the performance
+of LocalDrop by comparing it with other representative regular-
+ization methods in several different models on CIFAR datasets
+and ImageNet dataset. Experiments were also done to reveal the
+properties of LocalDrop.
\ No newline at end of file
diff --git a/samples/texts/7683722/page_7.md b/samples/texts/7683722/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b7fdb54a7fddb5b18da6b9b3968bededcf60859
--- /dev/null
+++ b/samples/texts/7683722/page_7.md
@@ -0,0 +1,21 @@
+Fig. 1: The y-axes of all three figures represent classification error (%). Left: (a) The performance of LocalDrop and adaptive dropout with the Rademacher complexity (i.e., AD with RC) [36] in preventing overfitting, as influenced by the size of the two fully-connected layers on CIFAR-10. Middle: (b) The performance of LocalDrop and adaptive dropout with the Rademacher complexity [36] as influenced by epoch number on CIFAR-10. Right: (c) The same comparison on CIFAR-100.
+
+Fig. 2: The comparison of the derived upper bound with the classification error (%) in the optimization process on CIFAR-10 dataset in FCNs. We set $\delta$ as 1 for simplicity. Two y-axes are applied for better visualization of the comparison between the derived upper bound and the actual classification error (%).
+
+Fig. 3: The basic network structure of LocalDrop with DropBlock in ResNet-50
+
+## 5.1 Experimental Settings
+
+First, we evaluated the empirical performance of LocalDrop on the CIFAR-10 and CIFAR-100 datasets [42] in Zhai's model [36]. These datasets are composed of 60,000 32 × 32 color images in 10 and 100 classes respectively. Both contain 50,000 training images and 10,000 testing images. Following Srivastava [40], all images were preprocessed by global contrast normalization and ZCA whitening before training. The neural network architecture of Zhai [36] is adopted for comparison: three convolutional layers, each followed by a max-pooling layer, and two fully-connected layers with 2,048 hidden units each at the end. LocalDrop was evaluated in both convolutional layers and fully-connected layers.
+
+Next, we evaluated the empirical performance of LocalDrop on the CIFAR and ILSVRC2012 datasets in ResNet-50 [43]. The ILSVRC2012 dataset contains over 1.4 million color images in 1,000 categories, including 1.2 million training images, 50,000 validation images, and 150,000 testing images. In contrast to DropBlock [12], no data augmentation was used in our experiments. The basic network structure in ResNet is shown in Figure 3.
+
+| λ | 0.01 | 0.03 | 0.1 | 0.3 | 1 |
+|---|---|---|---|---|---|
+| h=500, m=30 | 12.81 | 12.65 | 12.56 | 12.75 | 12.95 |
+| h=1000, m=20 | 12.88 | 12.74 | 12.61 | 12.82 | 13.02 |
+| h=1500, m=10 | 12.49 | 12.43 | 12.38 | 12.51 | 12.82 |
+| h=2000, m=1 | 12.98 | 12.91 | 12.80 | 12.95 | 13.18 |
+| h=2000, m=10 | 12.34 | 12.27 | 12.16 | 12.39 | 12.77 |
+
+TABLE 3: The influences of λ on classification error (%) on CIFAR-10.
+
+## 5.2 Comparisons with Different Regularizations
+
+Table 1 reports the performance of LocalDrop by comparing its classification error against other methods on the CIFAR-10 and CIFAR-100 datasets in the first model. The performance of the original network was evaluated first. Then we implemented two common regularization methods (i.e., weight decay and early
\ No newline at end of file
diff --git a/samples/texts/7683722/page_8.md b/samples/texts/7683722/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..f3523fc3f4f2ad3d3bb1d572a95025e5a64cd26f
--- /dev/null
+++ b/samples/texts/7683722/page_8.md
@@ -0,0 +1,33 @@
+| h | 0 | 200 | 500 | 700 | 1000 | 1200 | 1500 | 1700 | 2000 | 2048 |
+|---|---|---|---|---|---|---|---|---|---|---|
+| m=1 | 12.95 | 12.89 | 12.81 | 12.75 | **12.64** | 12.78 | 12.67 | 12.71 | 12.80 | 13.11 |
+| m=10 | 12.42 | 12.55 | 12.75 | 12.66 | 12.78 | 12.74 | 12.38 | 12.30 | **12.16**\* | 13.11 |
+| m=20 | 12.68 | 12.75 | 12.72 | 12.63 | 12.61 | 12.47 | **12.33** | 12.38 | 12.41 | 13.11 |
+| m=30 | 12.48 | **12.44** | 12.56 | 12.54 | 12.62 | 12.79 | 12.72 | 12.68 | 12.60 | 13.11 |
+
+TABLE 4: The influences of $h$ and $m$ on classification error (%) on CIFAR-10 when $\lambda = 0.1$. The bold number in each row represents the best performance of that row. The bold number with * marks the best performance over all hyperparameter combinations.
+
+| Model | CIFAR-10 | CIFAR-100 |
+|---|---|---|
+| Original | 30.33 | 33.43 |
+| Dropout | 35.57 | 39.22 |
+| AdaptiveDropout with RC | 38.15 | 42.93 |
+| LocalDrop | 39.43 | 44.90 |
+
+TABLE 5: Time consumption (minutes) on CIFAR datasets every 100 epochs.
+
+| block size | 1 | 3 | 5 | 7 | 9 | 11 |
+|---|---|---|---|---|---|---|
+| CIFAR-10 | 6.9 | 6.1 | 5.5 | 5.3 | 5.8 | 6.6 |
+| CIFAR-100 | 28.3 | 27.0 | 26.6 | 26.2 | 26.9 | 27.7 |
+| ILSVRC2012 | 23.7 | 22.5 | 21.7 | 21.1 | 21.4 | 21.9 |
+
+TABLE 6: The influences of block size on classification error (%) on three datasets.
+
+| Model | CIFAR-10 | CIFAR-100 | ILSVRC2012 |
+|---|---|---|---|
+| ResNet-50 | 150 | 167 | 522 |
+| Dropout | 196 | 212 | 583 |
+| DropBlock | 204 | 222 | 597 |
+| LocalDrop | 238 | 267 | 661 |
+
+TABLE 7: Time consumption (minutes) on three datasets every 100 epochs.
+
+stop) as the baselines. According to [36], we adopted two variants of dropout (i.e., variational dropout and sparse variational dropout [10], [11]) which had relatively better performance among the various dropout variants. Then the performance of adaptive dropout with the Rademacher complexity [36] (i.e., AdaptiveDropout with RC) was evaluated for comparison. Finally, LocalDrop was evaluated, demonstrating that our algorithm achieves the best performance among these methods. This is mainly because LocalDrop intends to achieve low-rank weight matrices **W** and to penalize $\theta$. In addition, two restricted versions of LocalDrop were evaluated for comparison, one applied only in FCNs (i.e., LocalDrop in FCNs) and the other without dropout (i.e., LocalDrop without dropout). It could be found that the regularization in convolutional layers may have a bigger effect on the final performance. Hence, we expanded LocalDrop to convolutional layers with DropBlock. Next, we evaluate LocalDrop in ResNet.
+
+Table 2 shows the performance of LocalDrop compared to other methods on the CIFAR-10, CIFAR-100, and ILSVRC2012 datasets. For comparison with DropBlock [12], these experiments were also conducted for 270 epochs in ResNet-50. Similarly, we first evaluated the performance of the original ResNet-50 [43] network. Then dropout was regarded as a baseline in ResNet-50, because dropout does not perform as well in convolutional layers as in FCNs. Next, two algorithms related to DropBlock [12] were adopted: Cutout and spatial dropout. Cutout [44] is a simplified version of DropBlock, applied to randomly cut out blocks in the input data. DropBlock resembles dropout when $b = 1$, and resembles spatial dropout when $b = t$ ($b$ is the block size, $t$ is the feature size). Therefore, the three algorithms listed above DropBlock in Table 2 can be regarded as three special cases of DropBlock. The following rows present the performance of LocalDrop on several other models, including DenseNet, PreActResNet, and DPN. Compared with DropBlock, LocalDrop also has lower classification error rates on all datasets. The results in Table 2 also reflect that low-rank weight matrices lead to better performance.
+
+Figure 1 (a) compares the original network (i.e., None), LocalDrop, and adaptive dropout with the Rademacher complexity (i.e., AD with RC) [36]. It can be observed that the performance of the original network (the black line) deteriorated as the number of hidden units in the FCNs rose, while LocalDrop (the red line) and adaptive dropout with the Rademacher complexity [36] (the blue line) kept improving as the number of hidden units increased. This is because a larger hidden layer has more parameters in its weight matrix, which can lead to overfitting; LocalDrop and adaptive dropout with the Rademacher complexity [36] both add Rademacher regularization functions to prevent overfitting. In addition, LocalDrop has consistently lower error rates, since it achieves a low-rank weight matrix.
+
+Figure 1 (b) and Figure 1 (c) present the changes of the classification error with respect to epoch number on the CIFAR-10 and CIFAR-100 datasets. We initialized the learning rate at 0.005 and exponentially decayed it by half every 200 epochs. It can be observed that both LocalDrop and adaptive dropout with the Rademacher complexity [36] have relatively stable classification errors between 700 and 800 epochs. Hence we set the maximum number of epochs to 800 in the training process. Figure 1 (b) and Figure 1 (c) again demonstrate that the model performs better when a low-rank weight matrix is achieved. Next, we focus on the analyses of the hyperparameters.
+
+Figure 2 visualizes how the upper bound derived in Section 3.1 changes during the optimization procedure of FCNs. We set $\delta$ to 1 for simplicity. The left y-axis presents the value of our derived upper bound; the right y-axis is the classification error (%) in the empirical experiment. The derived upper bound was compared with the classification error under the same conditions to demonstrate that our bound matches the trend of the classification error.
+
+Figure 3 illustrates the basic network structure of LocalDrop in ResNet-50.
+
+## 5.3 Hyperparameter Analyses
+
+For the first model [36], there are three hyperparameters (i.e., $\lambda$, $m$ and $h$) in LocalDrop. In the following, we proceed to analyze
\ No newline at end of file
diff --git a/samples/texts/7683722/page_9.md b/samples/texts/7683722/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e85d8ec38207b1bfee65ad7bccb4a9ecd626612
--- /dev/null
+++ b/samples/texts/7683722/page_9.md
@@ -0,0 +1,23 @@
+the effects of these hyperparameters.
+
+We begin with $\lambda$. The first four rows in Table 3 illustrate the effect of $\lambda$ with four rough combinations of $m$ and $h$. It can be observed that for different combinations of $m$ and $h$, the classification error is always the lowest when $\lambda = 0.1$. For example, given $h = 1500$ and $m = 10$, the classification error under $\lambda = 1$ is 12.82, which is worse than the error rate of 12.38 under $\lambda = 0.1$. Hence, we take $0.1$ as the optimal value for the hyperparameter $\lambda$. In the last row of Table 3, we verify that initializing the hyperparameters with $\{\lambda = 0.1, h = 2000, m = 10\}$ achieves the best performance on the CIFAR-10 dataset.
+
+Table 4 shows the influence of the hyperparameters $h$ and $m$ on the classification error. Firstly, each row can be considered as a whole to reveal the effect of $m$. When the second stage of our proposed optimization is used in every epoch (i.e., $m = 1$), the result is consistently worse than expected, because subtracting $C$ from the singular values $\sigma_{h+1}^l, \dots, \sigma_{\text{rank}(\mathbf{W}^l)}^l$ reduces the rank of the weight matrix $\mathbf{W}^l$; if this is done in every epoch, the rank becomes too low to elicit good performance. The best result is achieved when the proposed optimization is conducted every 10 epochs with $h = 2000$. For $m > 10$ (i.e., $m = 20, 30, \dots$), the performance gradually declines, because fewer second-stage optimization steps are conducted. The hyperparameter $h$ is intended to lower the rank of the weight matrix $\mathbf{W}^l$. A low-rank weight matrix $\mathbf{W}^l$ means more correlations between the neurons in the $l$-th layer; however, a lower-rank weight matrix does not necessarily lead to better performance. When $h = \text{rank}(\mathbf{W}^l) = 2048$, no singular value $\sigma_i$ is subtracted by $C$, so the second stage of our proposed optimization is not used, and the classification error is therefore over 13%. After the optimal combination of $m$ and $h$ (i.e., $h = 2000, m = 10$) is found, we apply these values back in Table 3 to check whether the assumed best value of $\lambda$ is indeed optimal. The process of finding the optimal hyperparameters for the CIFAR-100 dataset was the same as above, and the optimal combination was also $\{\lambda = 0.1, h = 2000, m = 10\}$.
+
+Table 5 shows the time consumption of several algorithms on CIFAR datasets in the first model [36]. The experiments in this table were conducted with the optimal value of the hyperparameters (i.e., $m = 10$). Comparing the last two rows, LocalDrop took only about 100 seconds more than the algorithm of [36] per 100 epochs on average. Since the most time-consuming part of LocalDrop is the two-stage optimization procedure in backpropagation, two methods were adopted to speed it up. SVD is the bottleneck of the second stage of optimization. Following Cai and Osher [45], we only computed the singular values that exceeded a threshold, together with their associated singular vectors, to accelerate SVD. This partial SVD can be computed efficiently by Krylov subspace projection methods [46]. In addition, the second stage of optimization was conducted every $m$ epochs (in this case, $m = 10$) instead of every epoch, which made the model much more time-efficient.
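The idea of a partial SVD can be illustrated with plain power iteration on $W^\top W$, which recovers the leading singular triplet without a full decomposition. This is only a sketch of the idea, not the Krylov subspace method of [46] (Lanczos-type solvers converge much faster in practice):

```python
import numpy as np

def leading_singular_triplet(W, iters=200):
    """Power iteration on W^T W: returns (sigma_1, u_1, v_1), the
    leading singular value and its singular vectors, without
    computing a full SVD of W."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(W.shape[1])
    for _ in range(iters):
        v = W.T @ (W @ v)          # one step of power iteration
        v /= np.linalg.norm(v)
    sigma = np.linalg.norm(W @ v)  # Rayleigh-quotient estimate
    u = (W @ v) / sigma
    return sigma, u, v

W = np.diag([5.0, 2.0, 1.0])
sigma, u, v = leading_singular_triplet(W)
assert abs(sigma - 5.0) < 1e-8
```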
+
+For ResNet-50, besides the three hyperparameters $\lambda$, $m$ and $h$, the block size $b$ is another hyperparameter to be decided. First, we assumed that the optimal value of $b$ in DropBlock [12] (i.e., $b = 7$) would also perform best in LocalDrop. Then the above process was repeated to find the optimal values of $\lambda$, $m$ and $h$. The result was the same as in the fully-connected layers (i.e., $\{\lambda = 0.1, h = 2, m = 10\}$), except for $h$. Since the sizes of the convolutional kernels in ResNet-50 are 3 × 3 and 1 × 1 (LocalDrop is only applied to the 3 × 3 kernels), the value of the hyperparameter $h$ can only be 1, 2, or 3. When $h = 3$, the two-stage optimization has no effect (just as when $h = 2048$ in the experiments on the first model). This suggests that the best value of $h$ is close to the rank of the weight matrix $\text{rank}(W^l)$. With these three hyperparameters fixed at their optimal values, we then verified whether the assumed optimal value of $b$ holds.
+
+Table 6 shows the effect of block size on three datasets (CIFAR-10, CIFAR-100, ILSVRC2012). When $b = 7$, the performance is the best on all three datasets. This result is consistent with DropBlock [12]: the performance with $b = 9$ was better than with $b = 5$ on the ILSVRC2012 dataset, but not on the CIFAR datasets, probably because of the differences between the images in these datasets. Therefore, we arrive at the optimal combination of hyperparameters $\{\lambda = 0.1, h = 2, m = 10, b = 7\}$.
+
+Table 7 shows the time consumption of several algorithms on the three datasets with ResNet-50, per 100 epochs. The experiments were also conducted with the optimal value of the hyperparameter $m$ (i.e., $m = 10$). Comparing the last two rows, LocalDrop costs about 40 minutes more than DropBlock per 100 epochs on the CIFAR datasets, and about 60 minutes more per 100 epochs on the ILSVRC2012 dataset. This is mainly because the drop rate $d$ is a hyperparameter in DropBlock (a linear schedule from 0 to 0.1), but a parameter to be optimized in LocalDrop. In addition, the two-stage optimization process conducted every $m$ epochs also costs extra time, even though the same speed-up methods as in the fully-connected layers were used.
+
+# 6 CONCLUSION
+
+In this paper, we have proposed a new regularization algorithm based on the local Rademacher complexity for both fully-connected networks and convolutional neural networks. In contrast to many regularization algorithms, we provide a rigorous mathematical derivation of an upper bound of the local Rademacher complexity, with analyses of dropout and DropBlock. Combining keep-rate matrices and weight matrices, we introduce a new regularization function based on this upper bound, and design a two-stage optimization process to train the networks. Experiments on the CIFAR-10, CIFAR-100, and ILSVRC2012 datasets demonstrate the superior performance of LocalDrop. Moreover, empirical analyses of several hyperparameters reveal potential patterns of these hyperparameters.
+
+# ACKNOWLEDGEMENTS
+
+The authors would like to thank the Associate Editor and all anonymous reviewers for their positive support and constructive comments for improving the quality of this paper. This work was supported in part by the National Natural Science Foundation of China under Grant 61822113, the National Key Research and Development Program of China under Grant 2018AAA0101110, the Science and Technology Major Project of Hubei Province (Next-Generation AI Technologies) under Grant 2019AEA170, the Natural Science Foundation of Hubei Province under Grant 2018CFA050, and the Supercomputing Center of Wuhan University. C. Xu was supported by ARC Grant DE180101438.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_1.md b/samples/texts/7707372/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..838dfb11d9fc8c57e0ba339cdd9a49096eb18889
--- /dev/null
+++ b/samples/texts/7707372/page_1.md
@@ -0,0 +1,22 @@
+# Hermitian-holomorphic (2)-Gerbes and tame symbols
+
+Ettore Aldrovandi
+
+Department of Mathematics
+Florida State University
+Tallahassee, FL 32306-4510, USA
+aldrovandi@math.fsu.edu
+
+## Abstract
+
+The tame symbol of two invertible holomorphic functions can be obtained by computing their cup product in Deligne cohomology, and it is geometrically interpreted as a holomorphic line bundle with connection. In a similar vein, certain higher tame symbols later considered by Brylinski and McLaughlin are geometrically interpreted as holomorphic gerbes and 2-gerbes with abelian band and a suitable connective structure.
+
+In this paper we observe that the line bundle associated to the tame symbol of two invertible holomorphic functions also carries a fairly canonical hermitian metric, hence it represents a class in a Hermitian holomorphic Deligne cohomology group.
+
+We put forward an alternative definition of hermitian holomorphic structure on a gerbe which is closer to the familiar one for line bundles and does not rely on an explicit "reduction of the structure group." Analogously to the case of holomorphic line bundles, a uniqueness property for the connective structure compatible with the hermitian-holomorphic structure on a gerbe is also proven. Similar results are proved for 2-gerbes as well.
+
+We then show the hermitian structures so defined propagate to a class of higher tame symbols previously considered by Brylinski and McLaughlin, which are thus found to carry corresponding hermitian-holomorphic structures. Therefore we obtain an alternative characterization for certain higher Hermitian holomorphic Deligne cohomology groups.
+
+## Contents
+
+| Section | Title | Page |
+| --- | --- | --- |
+| 1 | Introduction | 2 |
+| 1.1 | Background notions | 2 |
+| 1.2 | Statement of the results | 3 |
+| 1.3 | Outline of the paper | 4 |
+| 2 | Preliminaries | 5 |
+| 2.1 | Notation and conventions | 5 |
+| 2.2 | Deligne cohomology | 6 |
+| 2.3 | Cones | 7 |
+| 3 | Hermitian holomorphic Deligne cohomology | 8 |
+| 3.1 | Metrized line bundles | 8 |
+| 3.2 | Hermitian holomorphic complexes | 8 |
+| 3.3 | Explicit cocycles | 10 |
+| 4 | Tame symbol and hermitian structure | 10 |
+| 4.1 | Cup product and Deligne torsor | 10 |
+| 4.2 | Heisenberg group | 11 |
+| 4.3 | Hermitian product structure | 11 |
+| 4.4 | Comparisons | 13 |
\ No newline at end of file
diff --git a/samples/texts/7707372/page_10.md b/samples/texts/7707372/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..bbdfb3f56023806cd261611fd10b52377c314abb
--- /dev/null
+++ b/samples/texts/7707372/page_10.md
@@ -0,0 +1,52 @@
+## 3.3 Explicit cocycles
+
+Use of the seemingly more complicated complex (3.7) in place of the one in (3.5) is justified by the fact that the data comprising the canonical connection can be characterized cohomologically, as follows:
+
+**Lemma 3.3.1.** Let $(L, \rho)$ be a metrized line bundle on $X$. Assume $(L, \rho)$ to be trivialized with respect to the open cover $\mathcal{U}_X$ of $X$ as before. The data:
+
+$$
+\begin{gather*}
+\xi_i \in \underline{A}_X^{(1,0)}(U_i), \quad \frac{1}{2} \log \rho_i \in \underline{\mathcal{E}}_X^0(U_i), \quad \eta_i \in \underline{A}_X^{(1,1)}(U_i), \\
+2\pi\sqrt{-1}c_{ijk} \in \mathbb{Z}(1)_X(U_{ijk}), \quad \log g_{ij} \in \mathcal{O}_X(U_{ij})
+\end{gather*}
+$$
+
+represent a degree 2 cocycle with values in $\operatorname{Tot} \check{C}^\bullet(\mathfrak{U}_X, D(1)^\bullet_{h.h.})$ if and only if the relations (3.2), (3.3), (3.4), plus those in sect. 3.1, defining the canonical connection are satisfied.
+
+*Proof.* One need only unravel the cone defining $D(1)^\bullet_{h.h.}$ as follows:
+
+$$
+(3.8)\qquad
+\begin{array}{cccccc}
+\mathbb{Z}(1)_X & \longrightarrow & \mathcal{O}_X & \longrightarrow & 0 & \longrightarrow \cdots \\
+ & & \big\uparrow{\scriptstyle\, 0 \oplus \pi_0} & & & \\
+ & & F^1\underline{A}_X^1 \oplus \underline{\mathcal{E}}_X^0 & \longrightarrow & F^1\underline{A}_X^2 \oplus \underline{\mathcal{E}}_X^1 & \longrightarrow \cdots \\
+ & & \big\uparrow{\scriptstyle\, j \oplus 0} & & & \\
+ & & F^1\underline{A}_X^2 \cap \underline{\mathcal{E}}_X^2(1) & & & \cdots
+\end{array}
+$$
+
+and carefully chase the diagram.
+
+On the other hand, the hermitian holomorphic Deligne complex in the form (3.5) corresponds to “reducing the structure group” from $\mathbb{C}^\times$ to $\mathbb{T}$. This can be made explicit for $l = 1$ and a line bundle $L \to X$ by choosing sections $t_i$ of the smooth bundle corresponding to $L$ such that $\rho(t_i) = 1$. Clearly the resulting smooth transition functions will be sections of $\mathbb{T}_X$ over $U_{ij}$. See refs. [11] and [9] for more details.
+
+# 4 Tame symbol and hermitian structure
+
+Let $X$ be a complex analytic manifold and $U \subset X$ open. Let $f$ and $g$ be two invertible holomorphic functions on $U$. The tame symbol [13] $(f,g]$ associated to $f$ and $g$ is a $\mathcal{O}_{X|U}^\times$-torsor equipped with an analytic connection.
+
+## 4.1 Cup product and Deligne torsor
+
+(See [13, 15].) We consider $f$ and $g$ as elements of $H_D^1(U, \mathbb{Z}(1))$. Then $(f,g] = f \cup g \in H_D^2(U, \mathbb{Z}(2))$. Consider the cover $\mathfrak{U}_X$ of $X$ so that $U$ is covered by $\{U \cap U_i\}_{i \in I}$ and choose representatives $(2\pi\sqrt{-1}m_{ij}, \log_i f)$ and $(2\pi\sqrt{-1}n_{ij}, \log_i g)$ for $f$ and $g$, respectively. Then, using (2.4), the cup product is represented by the cocycle:
+
+$$ (4.1) \qquad ((2\pi\sqrt{-1})^2 m_{ij} n_{jk}, -2\pi\sqrt{-1} m_{ij} \log_j g, \log_i f \frac{dg}{g}). $$
+
+Under the quasi-isomorphism with the complex $(\mathcal{O}_X^\times \to \underline{\Omega}_X^1)$ (which essentially amounts to a division by $2\pi\sqrt{-1}$) the cocycle (4.1) becomes
+
+$$ (4.2) \qquad (g^{-m_{ij}}, -\frac{1}{2\pi\sqrt{-1}} \log_i f \frac{dg}{g}). $$
+
+In ref. [13] the trivializing section on $U \cap U_i$ corresponding to (4.2) is denoted $\{\log_i f, g\}$. Two trivializations over $U \cap U_i$ and $U \cap U_j$ are related by $\{\log_j f, g\} = \{\log_i f, g\} g^{-m_{ij}}$. Furthermore, the analytic connection is defined by the rule:
+
+$$ (4.3) \qquad \nabla\{\log_i f, g\} = -\{\log_i f, g\} \otimes \frac{1}{2\pi\sqrt{-1}} \log_i f \frac{dg}{g}. $$
+
+A general section $s$ of $(f,g]$ can be written as $s = h_i\{\log_i f, g\}$, for some $h_i \in \mathcal{O}_U(U_i)$, and therefore
+
+$$ (4.4) \qquad \nabla s = \{\log_i f, g\} \otimes (dh_i - \frac{1}{2\pi\sqrt{-1}} \log_i f \frac{dg}{g}). $$
\ No newline at end of file
diff --git a/samples/texts/7707372/page_11.md b/samples/texts/7707372/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..54a37a73aadfd59766ef68b6bc87aa4ba9f0ee47
--- /dev/null
+++ b/samples/texts/7707372/page_11.md
@@ -0,0 +1,51 @@
+## 4.2 Heisenberg group
+
+An equivalent approach to the Deligne symbol is via the complex three-dimensional Heisenberg group, see refs. [5, 20, 22]. Let $H_C$ denote the group of complex unipotent $3 \times 3$ lower triangular matrices. Let
+
+$$ H_Z = \left\{ \begin{pmatrix} 1 & 0 & 0 \\ m_1 & 1 & 0 \\ m_2 & n_1 & 1 \end{pmatrix} \,\middle|\; m_1, n_1 \in \mathbb{Z}(1),\ m_2 \in \mathbb{Z}(2) \right\} \subset H_C. $$
+
+The quotient $H_C/H_Z$ is a $\mathbb{C}/\mathbb{Z}(2)$-bundle over $\mathbb{C}/\mathbb{Z}(1) \times \mathbb{C}/\mathbb{Z}(1)$ via the projection map
+
+$$ p: \begin{bmatrix} 1 & 0 & 0 \\ x & 1 & 0 \\ z & y & 1 \end{bmatrix} \mapsto ([x], [y]), $$
+
+where $x, y, z \in \mathbb{C}$, and the brackets denote the appropriate equivalence classes. (The $\mathbb{C}/\mathbb{Z}(2)$-action is by multiplication with a matrix of the form $\left(\begin{smallmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ t & 0 & 1 \end{smallmatrix}\right)$.)
+
+The twisting of $H_C/H_Z$ is analogous to that of the Deligne torsor in sect. 4.1: the right action of $H_Z$ on $H_C$ amounts to:
+
+$$ (4.5) \qquad x \mapsto x + m_1, \quad y \mapsto y + n_1, \quad z \mapsto z + m_1 \cdot y + m_2. $$
+
+Moreover, the complex form
+
+$$ (4.6) \qquad \omega = \frac{1}{2\pi\sqrt{-1}}(dz - x dy) $$
+
+is invariant under the action of $H_Z$ and defines a $\mathbb{C}/\mathbb{Z}(2)$-connection form on the total space $H_C/H_Z$.
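The invariance of $\omega$ under (4.5) can be spot-checked numerically: since $m_1$ and $m_2$ are constants, the action sends $dz \mapsto dz + m_1\,dy$ and $x\,dy \mapsto (x + m_1)\,dy$, so the difference $dz - x\,dy$ is unchanged. A small Python sketch, evaluating the covectors $dy$, $dz$ on an arbitrary tangent vector:

```python
import cmath
import random

PI2I = 2j * cmath.pi  # 2*pi*sqrt(-1)

random.seed(1)
for _ in range(100):
    x  = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    # values of the covectors dy, dz on one tangent vector
    dy = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    dz = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    m1 = PI2I * random.randint(-3, 3)   # m1 in Z(1) = 2*pi*i*Z
    # action (4.5): x -> x + m1, z -> z + m1*y + m2, hence dz -> dz + m1*dy
    omega_before = (dz - x * dy) / PI2I
    omega_after  = ((dz + m1 * dy) - (x + m1) * dy) / PI2I
    assert abs(omega_after - omega_before) < 1e-12
print("omega is H_Z-invariant")
```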
+
+The invertible functions $f$ and $g$ on $U$ define a map $(f,g): U \to \mathbb{C}^{\times} \times \mathbb{C}^{\times}$. Then the tame symbol $(f,g]$ is obtained as the pull-back:
+
+$$ (f,g] = (f,g)^* (H_C/H_Z), $$
+
+and the section $\{\log_i f, g\}$ corresponds to the class of the matrix
+
+$$ \begin{pmatrix} 1 & 0 & 0 \\ \log_i f & 1 & 0 \\ 0 & \log_i g & 1 \end{pmatrix}. $$
+
+Furthermore, the pull-back of the connection form $\omega$ on $H_C/H_Z$ along the section $\{\log_i f, g\}$ is the same form as the one in (4.3). More generally, a section $s$ as given at the end of sect. 4.1 corresponds to the class of the matrix
+
+$$ \begin{pmatrix} 1 & 0 & 0 \\ \log_i f & 1 & 0 \\ h_i & \log_i g & 1 \end{pmatrix}. $$
+
+Pulling back (4.6) along the section gives (4.4).
+
+## 4.3 Hermitian product structure
+
+Consider the “imaginary part” map
+
+$$ (4.7) \qquad \begin{aligned} \mathbb{C} \otimes \mathbb{C} &\to \mathbb{R}(1) \\ a \otimes b &\mapsto -\pi_1(a) \pi_0(b) = -\sqrt{-1} \operatorname{Im}(a) \operatorname{Re}(b). \end{aligned} $$
+
+Similarly, we have:
+
+$$ (4.8) \qquad \mathcal{O}_X \otimes \mathcal{O}_X \to \mathcal{E}_X^0(1) \qquad f \otimes g \mapsto -\pi_1(f) \pi_0(g). $$
+
+**Definition 4.3.1.** Define the map
+
+$$ (4.9) \qquad \begin{aligned} (\mathbb{Z}(1)_X \to \mathcal{O}_X) \otimes (\mathbb{Z}(1)_X \to \mathcal{O}_X) &\to (\mathbb{Z}(2)_X \to \mathcal{O}_X \xrightarrow{-\pi_1} \mathcal{E}_X^0(1)) \\ &\cong 2\pi\sqrt{-1} \otimes (\mathbb{Z}(1)_X \to \mathcal{O}_X \xrightarrow{-\pi_0} \mathcal{E}_X^0) \end{aligned} $$
+
+by using (4.8) in place of the map $\mathcal{O}_X \otimes \mathcal{O}_X \to \Omega_X^1$, $f \otimes g \mapsto fdg$, in (2.4).
\ No newline at end of file
diff --git a/samples/texts/7707372/page_12.md b/samples/texts/7707372/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..190b134a6190a514b470555ade634bdd16ff84e1
--- /dev/null
+++ b/samples/texts/7707372/page_12.md
@@ -0,0 +1,56 @@
+**Proposition 4.3.2.** The product map (4.9) is well defined, namely it is a map of complexes. Furthermore, it is homotopy graded commutative.
+
+*Proof.* The fact that (4.9) is a map of complexes is a direct verification. After ref. [15], consider the map
+
+$$h(f \otimes g) = fg, \quad f,g \in \mathcal{O}_X,$$
+
+and zero otherwise. It provides the required homotopy.
+□
+
+The target complex of the product map in eq. (4.9) is the complex encoding hermitian structures appearing
+in sect. 3.1. In other words, up to quasi-isomorphism, we have a product:
+
+$$\mathbb{Z}(1)_{\mathcal{D}}^\bullet \otimes \mathbb{Z}(1)_{\mathcal{D}}^\bullet \to 2\pi\sqrt{-1} \otimes D(1)_{h.h.}^\bullet.$$
+
+*Remark 4.3.3.* The map (4.8) provides an explicit homotopy map for the homotopy commutative diagram
+
+where the model (2.5) for $\mathbb{R}(k)_{\bullet, \mathbb{D}}$ is used (see [15]).
+
+Now, in view of Prop. 4.3.2, we have a graded commutative product at the level of cohomology groups. In particular, let $f,g$ be two invertible holomorphic functions on $U \subset X$.
+
+**Proposition 4.3.4.** *The Deligne torsor underlying $(f,g]$ admits a hermitian fiber metric.*
+
+*Proof.* View $f$ and $g$ as elements of $H^1_D(U, \mathbb{Z}(1))$. Taking the product according to (4.9) yields an element in
+
+$$H_{\mathcal{D}_{h.h.}}^2(U, 1) \cong \widehat{\mathrm{Pic}(U)}$$
+
+that is, a holomorphic line bundle with hermitian fiber metric (up to isomorphism).
+
+Taking the image of the tame symbol $(f,g]$ under the map $H_{\mathcal{D}}^{\bullet}(U,\mathbb{Z}(2)) \to H_{\mathcal{D}}^{\bullet}(U,\mathbb{Z}(1)) = \mathrm{Pic}(U)$ induced by $\mathbb{Z}(2)_{\mathcal{D}}^{\bullet} \to \mathbb{Z}(1)_{\mathcal{D}}^{\bullet}$ forgets the analytic connection and retains just the line bundle. Similarly, the map $H_{\mathcal{D}_{h.h.}}^2(U,1) \to H_{\mathcal{D}}^{\bullet}(U,\mathbb{Z}(1)) = \mathrm{Pic}(U)$ induced by $D(1)_{h.h.}^{\bullet} \to \mathbb{Z}(1)_{\mathcal{D}}^{\bullet}$ forgets the hermitian structure. Clearly both map to the same underlying line bundle.
+$\square$
+
+Using a Čech cover we can represent $f$ and $g$ as in sect. 4.1. Then the cocycle corresponding to their product in $H_{\mathcal{D}_{h.h.}}^2(U, 1)$ is:
+
+$$
+(4.10) \qquad \left( 2\pi\sqrt{-1} m_{ij} n_{jk}, -m_{ij} \log_j g, -\frac{1}{2\pi\sqrt{-1}} \pi_1(\log_i f) \log|g| \right).
+$$
+
+This allows us to identify the representative of the hermitian metric, or rather its logarithm, as
+
+$$
+(4.11) \qquad \frac{1}{2} \log \rho_i = - \frac{1}{2\pi\sqrt{-1}} \pi_1 (\log_i f) \log |g|.
+$$
+
+It follows that if $s$ is the local section at the end of sect. 4.1 then
+
+$$
+(4.12) \qquad \log \rho(s) = \frac{1}{2\pi\sqrt{-1}} (\pi_1(h_i) - \pi_1(\log_i f) \log|g|).
+$$
+
+### 4.3.1 Remarks on the Heisenberg bundle
+
+The hermitian metric can be constructed from the more global point of view afforded by the use of the Heisenberg group recalled in sect. 4.2. The hermitian metric on the bundle $H_C/H_Z \to \mathbb{C}^\times \times \mathbb{C}^\times$ is given by the map $\rho: H_C/H_Z \to \mathbb{R}_+$ defined by:
+
+$$
+(4.13) \qquad \rho: \begin{bmatrix} 1 & 0 & 0 \\ x & 1 & 0 \\ z & y & 1 \end{bmatrix} \mapsto \exp \frac{1}{2\pi\sqrt{-1}} (\pi_1(z) - \pi_1(x) \pi_0(y))
+$$
\ No newline at end of file
diff --git a/samples/texts/7707372/page_13.md b/samples/texts/7707372/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..fa115d4b3e7f9d99b23cfd66e778828543c2a6f1
--- /dev/null
+++ b/samples/texts/7707372/page_13.md
@@ -0,0 +1,63 @@
+Indeed, using the explicit action (4.5), one checks (4.13) is invariant and provides the required quadratic form.
+In particular, the quantity
+
+$$
+-\frac{1}{2\pi\sqrt{-1}} \pi_1(x) \pi_0(y)
+$$
+
+is immediately shown to behave as the logarithm of the local representative of a hermitian metric. Thus the hermitian holomorphic line bundle represented by the cocycle (4.10) is the pull-back of $(H_C/H_Z, \rho)$ via the map $(f,g): U \to \mathbb{C}^\times \times \mathbb{C}^\times$.
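The strict invariance of $\log\rho$ in (4.13) under the $H_Z$-action (4.5) can likewise be spot-checked numerically. A small Python sketch, with $\pi_0$, $\pi_1$ the real and imaginary projections:

```python
import cmath
import random

PI2I = 2j * cmath.pi  # 2*pi*sqrt(-1)

def pi0(a): return complex(a.real, 0.0)   # "real part" projection
def pi1(a): return complex(0.0, a.imag)   # "imaginary part" projection

def log_rho(x, y, z):
    # logarithm of the metric (4.13): (1/(2*pi*i)) * (pi1(z) - pi1(x)*pi0(y))
    return (pi1(z) - pi1(x) * pi0(y)) / PI2I

random.seed(0)
for _ in range(100):
    x = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    y = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    m1 = PI2I * random.randint(-3, 3)       # Z(1) = 2*pi*i*Z
    n1 = PI2I * random.randint(-3, 3)
    m2 = PI2I**2 * random.randint(-3, 3)    # Z(2) = (2*pi*i)^2*Z
    # the action (4.5)
    x2, y2, z2 = x + m1, y + n1, z + m1 * y + m2
    assert abs(log_rho(x2, y2, z2) - log_rho(x, y, z)) < 1e-9
print("rho is H_Z-invariant")
```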
+
+### 4.3.2 Relations with Mixed Hodge Structures
+
+Both structures, namely the standard cup product $\mathbb{Z}(1)_{\mathcal{D}}^\bullet \otimes \mathbb{Z}(1)_{\mathcal{D}}^\bullet \to \mathbb{Z}(2)_{\mathcal{D}}^\bullet$ given by (2.4), and the modified one $\mathbb{Z}(1)_{\mathcal{D}}^\bullet \otimes \mathbb{Z}(1)_{\mathcal{D}}^\bullet \to 2\pi\sqrt{-1} \otimes D(1)_{h.h.}^\bullet$ of Definition 4.3.1, can be obtained by taking projections of a common object in two different ways.
+
+Let $s$ be a local section of the pull-back
+
+$$
+(f,g] = (f,g)^* (H_C/H_Z)
+$$
+
+as at the end of sect. 4.1. (The local expression in terms of matrices is given at the end of sect. 4.2.) Equivalently,
+$s$ can be considered as a (local) lift of the map $(f,g) : X \to \mathbb{C}^{\times} \times \mathbb{C}^{\times}$ to $H_C/H_Z$.
+
+Let $\mathcal{M}_X^{(2)}$ be the resulting variation of Mixed Hodge Structures on $X$ obtained by pulling back the universal MHS $\mathcal{M}^{(2)}$ on $H_C/H_Z$ via $s$.
+
+**Lemma 4.3.5 (See [18]).** The period $P(\mathcal{M}_X^{(2)}) \in \mathcal{O}_X \otimes_{\mathbb{Q}} \mathcal{O}_X$ of $\mathcal{M}_X^{(2)}$ is given by:
+
+$$
+\begin{aligned}
+P(\mathcal{M}_X^{(2)}) = {} & \frac{h}{(2\pi\sqrt{-1})^2} \otimes 1 - 1 \otimes \frac{h}{(2\pi\sqrt{-1})^2} \\
+& + 1 \otimes \frac{\log f \log g}{(2\pi\sqrt{-1})^2} - \frac{\log g}{2\pi\sqrt{-1}} \otimes \frac{\log f}{2\pi\sqrt{-1}}
+\end{aligned}
+$$
+
+*Proof.* The expression is computed in the appendix for the universal case. $\square$
+
+Notice that the period actually belongs to the kernel of the multiplication map $a \otimes b \mapsto ab$.
+
+Let us now use the map $\mathcal{O}_X \otimes_{\mathbb{Q}} \mathcal{O}_X \to \mathcal{O}_X \otimes_{\mathbb{C}} \mathcal{O}_X$. Let $\mathcal{I}_X$ be the kernel of the multiplication map (over $\mathbb{C}$). Then $\underline{\Omega}_{X/\mathbb{C}}^1 \cong \mathcal{I}_X/\mathcal{I}_X^2$. The calculations for the following proposition are done in the universal case in the appendix.
+
+**Proposition 4.3.6.** The expressions (4.4) and (4.12) respectively correspond to the images of $P(\mathcal{M}_X^{(2)})$ under the projections $\mathcal{I}_X \subset \mathcal{O}_X \otimes_{\mathbb{C}} \mathcal{O}_X \to \underline{\Omega}_{X/\mathbb{C}}^1$, sending $a \otimes b - ab \otimes 1$ to $a\,db$, and $\mathcal{I}_X \subset \mathcal{O}_X \otimes_{\mathbb{C}} \mathcal{O}_X \to \underline{\mathcal{E}}_X^0$ given by (4.8).
+
+## 4.4 Comparisons
+
+In the previous sections we have shown that the Deligne torsor $(f,g]$ associated to two invertible functions $f$
+and $g$ naturally acquires two structures: the analytic connection $\nabla$ described in section 4.1 via the standard
+cup product in Deligne cohomology, and the hermitian structure described in section 4.3 via the modified cup
+product (4.9). We wish to briefly compare the two structures.
+
+First, observe that using the canonical connection (cf. section 3.1.1) a pair $(L, \rho)$ can also be thought of as
+a triple $(L, \rho, \nabla^\rho)$, where $\nabla^\rho$ is the canonical connection determined by $\rho$. Equivalently, we can just consider
+the pair $(L, \nabla^\rho)$. Also, let us stress that the canonical connection is only a smooth connection and is in general
+far from being analytic (or algebraic).
+
+Thus our question can be reformulated as follows: for a given line bundle $L$ equipped with an analytic
+connection $\nabla$ and a hermitian fiber metric $\rho$, how do the pairs $(L, \nabla)$ and $(L, \nabla^\rho)$ compare?
+
+The answer is the following well-known
+
+**Lemma 4.4.1.** Consider both $\nabla$ and $\nabla^\rho$ as smooth connections. Then:
+
+1. $\nabla - \nabla^\rho$ determines a global section of $\underline{A}_X^{1,0}$, and
+
+2. this global section is zero, that is, $\nabla = \nabla^\rho$, if and only if $L$ is unitary flat, namely it defines an element of $H^1(X, \mathbb{R}/\mathbb{Z})$.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_14.md b/samples/texts/7707372/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a6c17ae6e6954cde7cafebed5a4f618b164b147
--- /dev/null
+++ b/samples/texts/7707372/page_14.md
@@ -0,0 +1,56 @@
+*Proof.* It is a well-known fact that the difference of two connections is a global one-form. Working in a local setting, let $s \in L|_U$ be a local section, and let $\|s\|$ be its length with respect to the metric. Then $\nabla s = \omega \otimes s$, for $\omega \in \Omega_X^1(U)$, whereas $\nabla^\rho s = \partial \log \|s\| \otimes s$, and $\partial \log \|s\|$ gives a local (1,0)-form representative of $\nabla^\rho$, cf. section 3.1.1. Clearly, the difference $\omega - \partial \log \|s\|$ gives a global section of $\mathcal{A}_X^{1,0}$.
+
+As for the second point, one would have $\bar{\partial}\partial \log \|s\| = 0$, but this represents $c_1(L)$, hence the conclusion. □
+
+In the situation when the two connections agree, that is, the connection is simultaneously analytic and it is the canonical connection associated to a hermitian structure, we say they are *compatible*. The line bundle supporting it is necessarily flat.
+
+Interestingly enough, the previous lemma can be recast into entirely cohomological terms. This is advantageous in dealing with the special case $L = (f,g]$ of special interest to us, as well as to address the very same question in the case of gerbes later on in this paper.
+
+In the previous lemma we have compared $\nabla$ and $\nabla^\rho$ by mapping their respective local representatives in $\mathcal{A}_X^{1,0}$. It will be more convenient to use the sheaf of imaginary 1-forms instead, namely consider $\pi_1 : \Omega_X^1 \to \mathcal{E}_X^1(1)$ and $d : \mathcal{E}_X^0(1) \to \mathcal{E}_X^1(1)$. Consider the complex
+
+$$ \Lambda(2)^\bullet \stackrel{\text{def}}{=} (\mathbb{Z}(2) \stackrel{z}{\to} \mathcal{O}_X \xrightarrow{-\pi_1 \circ d} \mathcal{E}_X^1(1)), $$
+
+and the obvious maps of complexes
+
+$$ \alpha : \mathbb{Z}(2)_{\mathcal{D}}^{\bullet} \to \Lambda(2)^{\bullet} \quad \text{and} \quad \beta : 2\pi\sqrt{-1} \otimes D(1)_{h.h.}^{\bullet} \to \Lambda(2)^{\bullet}. $$
+
+As usual, the cone:
+
+$$ \Gamma(2)^\bullet \stackrel{\text{def}}{=} \text{Cone}(\alpha - \beta)[-1], $$
+
+characterizes the elements in $\mathbb{Z}(2)_{\mathcal{D}}^{\bullet}$ and $2\pi\sqrt{-1} \otimes D(1)_{h.h.}^{\bullet}$ which agree in $\Lambda(2)^\bullet$. A tedious but straightforward direct verification yields:
+
+**Lemma 4.4.2.** We have the quasi-isomorphism:
+
+$$ (4.14) \qquad \Gamma(2)^\bullet \xrightarrow{\approx} (\mathbb{Z}(2) \stackrel{z}{\to} \mathcal{O}_X \xrightarrow{(d,-\pi_1)} \mathcal{O}_X^1 \oplus \mathcal{E}_X^0(1) \xrightarrow{\pi_1+d} \mathcal{E}_X^1(1)) $$
+
+Dropping the last term in (4.14), we obtain the truncation
+
+$$ \tilde{\Gamma}(2)^\bullet \stackrel{\text{def}}{=} (\mathbb{Z}(2) \stackrel{z}{\to} \mathcal{O}_X \xrightarrow{(d,-\pi_1)} \mathcal{O}_X^1 \oplus \mathcal{E}_X^0(1)), $$
+
+which clearly characterizes the elements in $\mathbb{Z}(2)_{\mathcal{D}}^{\bullet}$ and $2\pi\sqrt{-1} \otimes D(1)_{h.h.}^{\bullet}$, which agree in $2\pi\sqrt{-1} \otimes \mathbb{Z}(1)_{\mathcal{D}}^{\bullet}$.
+(In other words, $\tilde{\Gamma}(2)^\bullet$ can be obtained by replacing $\Lambda(2)^\bullet$ by $\mathbb{Z}(1)_{\mathcal{D}}^{\bullet}$ in the previous paragraphs.) In particular, let us denote by Pic$(X, \nabla, h)$ the second hypercohomology group $\mathbf{H}^2(X, \tilde{\Gamma}(2)^\bullet)$, namely the subgroup of $H_{\mathcal{D}}^2(X, \mathbb{Z}(2)) \times \widehat{\text{Pic}(X)}$ of classes of pairs $(L, \nabla)$ and $(L, \rho)$ mapping to the same element of Pic$(X) \cong H_{\mathcal{D}}^2(X, \mathbb{Z}(1))$. Then lemma 4.4.1 has the following reformulation:
+
+**Lemma 4.4.3.** There is an exact sequence:
+
+$$ (4.15) \qquad 0 \to H^1(X, \mathbb{R}/\mathbb{Z}) \to \text{Pic}(X, \nabla, h) \to E^1(X)(1), $$
+
+where $E^1(X)(1)$ are the global sections of $\mathcal{E}_X^1(1)$. Thus compatible connections are necessarily flat.
+
+*Proof.* The complex $\tilde{\Gamma}(2)^\bullet$ is a quotient of $\Gamma(2)^\bullet$, namely we have the exact sequence:
+
+$$ 0 \to \mathcal{E}_X^1(1)[-3] \to \Gamma(2)^\bullet \to \tilde{\Gamma}(2)^\bullet \to 0, $$
+
+and from the resulting long exact cohomology sequence:
+
+$$ 0 \to \mathbf{H}^2(X, \Gamma(2)^\bullet) \to \mathbf{H}^2(X, \tilde{\Gamma}(2)^\bullet) \to E^1(X)(1) \to \dots $$
+
+It was noted above that $\mathbf{H}^2(X, \tilde{\Gamma}(2)^\bullet) \cong \text{Pic}(X, \nabla, h)$, whereas for $\Gamma(2)^\bullet$ we have
+
+$$ \mathbf{H}^2(X, \Gamma(2)^\bullet) \cong H^1(X, \mathbb{R}(2)/\mathbb{Z}(2)). $$
+
+The latter isomorphism follows either from a direct computation, or noticing that $\Gamma(2)^\bullet$ is a quotient of $D(2)_{h.h.}^\bullet$ (see eq. (3.7)) and
+
+$$ \mathbf{H}^2(X, \Gamma(2)^\bullet) \cong H_{D_{h.h.}}^2(X, 2) $$
+
+and then using lemma 3.2.4. □
\ No newline at end of file
diff --git a/samples/texts/7707372/page_15.md b/samples/texts/7707372/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..e59f8a23f7e11de364cbf716ef93c27f14a0d8ad
--- /dev/null
+++ b/samples/texts/7707372/page_15.md
@@ -0,0 +1,52 @@
+### 4.4.1 Comparing $(f,g]$ and $(f,g]_{h.h.}$
+
+Suppose now $L$ is the Deligne torsor determined by two invertible functions $f$ and $g$. Clearly, the symbols $(f,g]$ and $(f,g]_{h.h.}$ taken together determine an element of Pic$(X, \nabla, h)$, since the underlying torsor in Pic$(X) \cong H_D^2(X, \mathbb{Z}(1))$ is the same. This element can be represented by the cocycle
+
+$$((2\pi\sqrt{-1})^2 m_{ij} n_{jk}, -2\pi\sqrt{-1} m_{ij} \log_j g, \log_i f \frac{dg}{g} \oplus -\pi_1(\log_i f) \log|g|)$$
+
+with values in $\tilde{\Gamma}(2)^{\bullet}$.
+
+Following Goncharov ([17]) let us define for any two invertibles $f$ and $g$ the 1-form
+
+$$ (4.16) \qquad r_2(f,g) \stackrel{\text{def}}{=} \pi_1(d \log f) \log|g| - \log|f| \pi_1(d \log g). $$
+
+This is clearly globally defined where $f$ and $g$ are invertible.
+
+We finally obtain the following comparison.
+
+**Proposition 4.4.4.** *The analytic connection in $(f,g]$ and the canonical one associated to the hermitian structure in $(f,g]_{h.h.}$ are compatible if and only if $r_2(f,g) = 0$ in $E^1(X)(1)$.*
+
+*Proof.* Let $\omega_i = \log_i f \, dg/g$ and $\sigma_i = -\pi_1(\log_i f) \log|g|$. The connecting homomorphism from $\tilde{\Gamma}(2)^\bullet$ to $\mathcal{E}_X^1(1)$, that is the last map to the right in the sequence (4.15), amounts to computing $\pi_1(\omega_i) + d\sigma_i$. A straightforward calculation yields
+
+$$ \pi_1(\omega_i) + d\sigma_i = -r_2(f,g). \quad \square $$
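The identity $\pi_1(\omega_i) + d\sigma_i = -r_2(f,g)$ can be spot-checked numerically by evaluating both sides, as 1-forms at a point, on an arbitrary tangent vector $v$. A small Python sketch with the sample choices $f(z) = z$ and $g(z) = z + 1$ (these particular functions are illustrative, not from the text):

```python
import cmath
import math
import random

def pi1_form(a, v):
    # pi1 of the (1,0)-form a*dz, evaluated on the tangent vector v in C:
    # (a dz - conj(a) dzbar)(v) / 2 = i * Im(a*v)
    return 1j * (a * v).imag

random.seed(3)
for _ in range(100):
    z0 = complex(random.uniform(1, 2), random.uniform(1, 2))  # away from 0 and -1
    v  = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    f, g = z0, z0 + 1                 # sample invertible holomorphic functions
    dlogf, dlogg = 1 / f, 1 / g       # d log f = dz/z, d log g = dz/(z+1)
    logf = cmath.log(f)
    log_abs_f, log_abs_g = math.log(abs(f)), math.log(abs(g))
    # omega_i = log_i f * dg/g   and   sigma_i = -pi1(log_i f) * log|g|
    omega_v  = pi1_form(logf * dlogg, v)
    dsigma_v = -1j * (dlogf * v).imag * log_abs_g - 1j * logf.imag * (dlogg * v).real
    # r_2(f,g) = pi1(dlog f) log|g| - log|f| pi1(dlog g), cf. (4.16)
    r2_v = pi1_form(dlogf, v) * log_abs_g - log_abs_f * pi1_form(dlogg, v)
    assert abs(omega_v + dsigma_v + r2_v) < 1e-9
print("pi1(omega) + d(sigma) = -r2(f,g)")
```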
+
+# 5 Hermitian holomorphic gerbes and 2-gerbes
+
+## 5.1 Higher tame symbols
+
+Brylinski and McLaughlin considered higher degree versions of the tame symbol construction, [8, 9], namely cup products of higher degree Deligne cohomology classes: $(f, L]$ for $f$ a holomorphic invertible function and $L$ a holomorphic line bundle, and $(L, L']$ for a pair of holomorphic line bundles. The geometric interpretation of the symbols so obtained, also put forward in refs. [8, 9], is that $(f, L]$ is a gerbe on $X$ with band (≡ lien) $\mathcal{O}_X^\times$ and a holomorphic connective structure. A similar statement holds for the 2-gerbe $(L, L']$.
+
+### 5.1.1 Cup products
+
+From the point of view of cohomology classes, one computes the relevant cup products. Using (2.4), we find that $(f, L] \in H_D^3(X, \mathbb{Z}(2))$ is represented by the cocycle
+
+$$ (5.1) \qquad (g_{jk}^{-m_{ij}}, -\frac{1}{2\pi\sqrt{-1}} \log_i f \, d\log g_{ij}), $$
+
+having made the standard choices for $\log_i f$ and the transition functions $g_{ij}$ of $L$ with respect to the choice of a cover $\mathfrak{U}_X$. Similarly, if $g'_{ij}$ are the transition functions of $L'$, and $2\pi\sqrt{-1}c_{ijk}$ represents $c_1(L)$ with respect to the cover $\mathfrak{U}_X$, then $(L, L'] \in H_D^4(X, \mathbb{Z}(2))$ is represented by the cocycle
+
+$$ (5.2) \qquad (g'_{kl}^{-c_{ijk}}, -\frac{1}{2\pi\sqrt{-1}} \log g_{ij} \, d\log g'_{jk}). $$
+
+### 5.1.2 Hermitian variant
+
+If we use the product
+
+$$ \mathbb{Z}(1)_D^\bullet \otimes \mathbb{Z}(1)_D^\bullet \to D(1)_{h.h}^\bullet. $$
+
+introduced in sect. 4.3, for $f, L$ and $L'$ as above we have
+
+$$
+\begin{aligned}
+& H_D^1(X, \mathbb{Z}(1)) \otimes H_D^2(X, \mathbb{Z}(1)) \quad \to H_{D,h.h.}^3(X, 1) \\
+& f \otimes [L] \quad \mapsto (f, L]_{h.h}.
+\end{aligned}
+ $$
\ No newline at end of file
diff --git a/samples/texts/7707372/page_16.md b/samples/texts/7707372/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..337eabe2d9c973a2991f44a334f1364844d0a7fd
--- /dev/null
+++ b/samples/texts/7707372/page_16.md
@@ -0,0 +1,67 @@
+Using the same Čech data as before, the symbol $(f, L]_{h.h.}$ is represented by the cocycle
+
+$$
+(5.3) \qquad (g_{jk}^{-m_{ij}}, -\frac{1}{2\pi\sqrt{-1}}\pi_1(\log_i f)\pi_0(\log g_{ij})).
+$$
+
+Similarly, with $L$ and $L'$ we have the product
+
+$$
+\begin{array}{ccc}
+H_D^2(X, \mathbb{Z}(1)) \otimes H_D^2(X, \mathbb{Z}(1)) & \rightarrow & H_{D,h.h.}^4(X, 1) \\
+[L] \otimes [L'] & \mapsto & (L, L']_{h.h.}
+\end{array}
+$$
+
+and the representing cocycle
+
+$$
+(5.4) \qquad ({g'_{kl}}^{-c_{ijk}}, -\frac{1}{2\pi\sqrt{-1}} \pi_1(\log g_{ij}) \pi_0(\log g'_{jk})).
+$$
+
+Similarly to the proof of prop. 4.3.4, the maps of complexes $\mathbb{Z}(2)_{\mathcal{D}}^{\bullet} \to \mathbb{Z}(1)_{\mathcal{D}}^{\bullet}$ and $D(1)_{h.h.}^{\bullet} \to \mathbb{Z}(1)_{\mathcal{D}}^{\bullet}$ induce corresponding maps on the symbols $(f, L]$ and $(f, L]_{h.h.}$; moreover, their images agree in $H_D^3(X, \mathbb{Z}(1))$. An identical statement holds for $(L, L']$ and $(L, L']_{h.h.}$.
+
+## 5.2 Gerbes with Hermitian structure
+
+Let $\mathcal{S}$ be a gerbe on $X$ with band $\mathcal{O}_X^\times$ ([16]). After [7, 10], its class is an element of $H_D^3(X, \mathbb{Z}(1)) \cong H^2(X, \mathcal{O}_X^\times)$.
+Let $\mathcal{E}_{X,+}^0$ be the sheaf of real positive smooth functions on $X$.
+
+**Definition 5.2.1.** A hermitian structure on $\mathcal{S}$ consists of the following data:
+
+1. To each object $P$ in $\mathcal{S}_U$ is assigned an $\mathcal{E}_{U,+}^0$-torsor $\underline{\mathrm{herm}}(P)$ (an $\mathbb{R}_+$-principal bundle). The assignment must be compatible with the restriction functors $i^* : \mathcal{S}_U \to \mathcal{S}_V$ arising from $i : V \hookrightarrow U$ in the cover $\mathfrak{U}_X$ of $X$.
+
+2. For each morphism $f: P \to Q$ in $\mathcal{S}_U$ a corresponding morphism $f_*: \underline{\mathrm{herm}}(P) \to \underline{\mathrm{herm}}(Q)$ of $\mathcal{E}_{U,+}^0$-torsors.¹ This map must be compatible with compositions of morphisms in $\mathcal{S}_U$ and with the restriction functors. For an object $P$ of $\mathcal{S}_U$, an automorphism $\varphi \in \underline{\mathrm{Aut}}(P)$ is identified with a section of $\mathcal{O}_X^\times$ over $U$. We then require that
+
+$$
+(5.5) \qquad
+\begin{gathered}
+\varphi_* : \underline{\mathrm{herm}}(P) \xrightarrow{\approx} \underline{\mathrm{herm}}(P) \\
+h \mapsto h \cdot |\varphi|^2
+\end{gathered}
+$$
+
+where the latter is the $\mathcal{E}_{U,+}^0$-action on the torsor $\underline{\mathrm{herm}}(P)$.
+
+**Theorem 5.2.2.** *Equivalence classes of $\mathcal{O}_X^\times$-gerbes with hermitian structure are classified by the group*
+
+$$
+H^3(X, \mathbb{Z}(1)_X \to \mathcal{O}_X \to \mathcal{E}_X^0).
+$$
+
+*Proof.* Let $\mathcal{G}$ be an $\mathcal{O}_X^\times$-gerbe on $X$ with hermitian structure as per definition 5.2.1. Choose a full decomposition (see [7]) with objects $P_i$ of $\mathcal{G}_{U_i}$ and isomorphisms $f_{ij}: P_j|_{U_{ij}} \to P_i|_{U_{ij}}$ with respect to a cover $\mathfrak{U}_X$ of $X$. By a standard procedure (see refs. [7, 10]) these data determine a cochain $g_{ijk} \in \underline{\mathrm{Aut}}(P_i)|_{U_{ijk}} \simeq \mathcal{O}_X^\times|_{U_{ijk}}$ satisfying the cocycle condition and determining a class in $H^2(X, \mathcal{O}_X^\times)$. Furthermore, choose sections $r_i$ of the torsors $\underline{\mathrm{herm}}(P_i)$ above $U_i$. From condition 2 in definition 5.2.1 we have that there must exist $\rho_{ij} \in \mathcal{E}_{X,+}^0|_{U_{ij}}$ such that:
+
+$$
+(5.6) \qquad f_{ij*}(r_j) = r_i \cdot \rho_{ij}.
+$$
+
+On the 3-skeleton of the cover we have that on one hand
+
+$$
+(5.7) \qquad f_{ij_*} \circ f_{jk_*} (r_k) = f_{ij_*} (r_j) \cdot \rho_{jk} = r_i \cdot \rho_{ij} \rho_{jk},
+$$
+
+¹An $\mathcal{E}_{U,+}^0$-torsor will in general be automatically trivializable. However, in this context it is convenient to “forget” the actual trivializing map.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_17.md b/samples/texts/7707372/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..b6f239f8a49495a01681793268a13155d2f23f24
--- /dev/null
+++ b/samples/texts/7707372/page_17.md
@@ -0,0 +1,64 @@
+whereas on the other hand, since $f_{ij} \circ f_{jk} = g_{ijk} \circ f_{ik}$, we have
+
+$$ (5.8) \qquad (f_{ij} \circ f_{jk})_*(r_k) = g_{ijk*} \circ f_{ik*}(r_k) = g_{ijk*}(r_i \cdot \rho_{ik}) = r_i \cdot |g_{ijk}|^2 \rho_{ik}. $$
+
+Equating the right hand sides of eqs. (5.7) and (5.8), and extracting the appropriate logarithms, we see we
+have obtained a Čech cocycle representing a class in
+
+$$ (5.9) \qquad \check{\mathbf{H}}^3(\mathfrak{U}_X, \mathbb{Z}(1)_X \to \mathcal{O}_X \to \mathcal{E}_X^0). $$
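+The “extraction of logarithms” can be spelled out. Equating the right hand sides of (5.7) and (5.8) gives $\rho_{ij}\rho_{jk} = |g_{ijk}|^2\rho_{ik}$, whence (a sketch, writing $\log|g_{ijk}|$ for the real part of $\log g_{ijk}$, consistently with the conventions used in eq. (5.3)):
+
+$$ \frac{1}{2}\log\rho_{ij} + \frac{1}{2}\log\rho_{jk} - \frac{1}{2}\log\rho_{ik} = \log|g_{ijk}|, $$
+
+which, together with $\log g_{ijk}$ and the integral cochain fixing the branch choices, assembles into a degree 2 Čech cocycle with values in the complex $\mathbb{Z}(1)_X \to \mathcal{O}_X \to \mathcal{E}_X^0$.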
+
+Conversely, let a class in $H_{\mathcal{D}_{h.h.}}^3(X, 1)$ be given, and assume we represent it via the choice of $\mathfrak{U}_X$ by a degree 2 Čech cocycle with values in the complex
+
+$$ \mathbb{Z}(1)_X \to \mathcal{O}_X \to \mathcal{E}_X^0, $$
+
+which we write as
+
+$$ (2\pi\sqrt{-1}c_{ijkl}, \log g_{ijk}, \frac{1}{2}\log \rho_{ij}). $$
+
+This cocycle determines, via the map $D(1)_{h.h.}^\bullet \to \mathbb{Z}(1)_{\mathcal{D}}^\bullet$, a cocycle $\{g_{ijk}\} \in \check{C}^2(\mathfrak{U}_X, \mathcal{O}_X^\times)$ which can be used, according to refs. [7, 10], to glue the local stacks $\mathrm{Tors}(\mathcal{O}_{U_i}^\times)$ into a global stack $\mathcal{G}$, in fact a gerbe. Given an $\mathcal{O}_{U_i}^\times$-torsor $P_i$, namely an object of $\mathcal{G}_{U_i} \cong \mathrm{Tors}(\mathcal{O}_{U_i}^\times)$, define a hermitian structure by:
+
+$$ \underline{\mathrm{herm}}(P_i) = \text{trivial } \mathcal{E}_{U_i,+}^0\text{-torsor}. $$
+
+Then use $\rho_{ij}$ to glue $\underline{\mathrm{herm}}(P_i)$ and $\underline{\mathrm{herm}}(P_j)$ over $U_{ij}$, namely *define* an isomorphism via eq. (5.6). Since the isomorphisms $P_k \to P_i$ and $P_k \to P_j \to P_i$ differ by the equivalence determined by $g_{ijk}$, we see using (5.5) that the condition
+
+$$ \rho_{ij} \rho_{jk} = |g_{ijk}|^2 \rho_{ik}, $$
+
+ensuing from the cocycle condition, ensures the compatibility of this definition over $U_{ijk}$. $\square$
+
+**Corollary 5.2.3.** Using the quasi-isomorphism
+
+$$ D(1)_{h.h.}^\bullet \xrightarrow{\approx} (\mathbb{Z}(1)_X \to \mathcal{O}_X \to \mathcal{E}_X^0), $$
+
+the class of a gerbe with hermitian structure is in fact in $H_{\mathcal{D}_{h.h.}}^3(X, 1)$.
+
+We will see (cf. sect. 5.3) that this group also automatically classifies a special type of connective structure on $\mathcal{G}$.
+
+## 5.3 Hermitian connective structure
+
+The structure defined in sect. 5.2 can be supplemented by a variant of Brylinski's connective structure [10] by taking into account the first Hodge filtration as in ref. [11]. Let $\mathcal{G}$ be an $\mathcal{O}_X^\times$ gerbe over $X$.
+
+**Definition 5.3.1.** A type (1,0) connective structure on $\mathcal{G}$ is the assignment to each object $P$ of $\mathcal{G}_U$ of an $F^1\underline{A}_U^1$-torsor $\underline{\mathrm{Co}}(P)$, compatible with restriction functors and morphisms of objects. In particular, for $\varphi \in \underline{\mathrm{Aut}}(P)$, we require that
+
+$$ (5.10) \qquad \begin{aligned} \varphi_* : \underline{\mathrm{Co}}(P) &\xrightarrow{\approx} \underline{\mathrm{Co}}(P) \\ \nabla &\mapsto \nabla + d \log \varphi \end{aligned} $$
+
+where $\nabla$ is a section of $\underline{\mathrm{Co}}(P)$ over $U$.$^2$
+
+**Definition 5.3.2.** Let $\mathcal{G}$ be equipped with a hermitian structure. A type (1,0) connective structure on $\mathcal{G}$ is *compatible* with the hermitian structure if for each object $P$ of $\mathcal{G}$ there is an isomorphism of torsors
+
+$$
+\begin{gather*}
+\underline{\operatorname{herm}}(P) \longrightarrow \underline{\operatorname{Co}}(P) \\
+r \longmapsto \nabla_r
+\end{gather*}
+ $$
+
+such that for a positive function $\rho$ on $U$
+
+$$ r \cdot \rho \mapsto \nabla_r + \partial \log \rho. $$
+
+(In other words, $\nabla_r \cdot \rho = \nabla_r + \partial \log \rho$.)
+
+$^2$Note that $d \log \varphi$ is holomorphic, hence of type (1,0).
\ No newline at end of file
diff --git a/samples/texts/7707372/page_18.md b/samples/texts/7707372/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ba6889feeee66b8f83af265be43a0333e401df9
--- /dev/null
+++ b/samples/texts/7707372/page_18.md
@@ -0,0 +1,57 @@
+Connective structures of type (1,0) are classified as follows.
+
+**Theorem 5.3.3.** Let again $D(1)_{h.h.}^\bullet$ be the complex given by (3.7) for $l=1$. Equivalence classes of connective structures on an $\mathcal{O}_X^\times$-gerbe $\mathcal{G}$ compatible with a given hermitian structure are classified by the group
+
+$$ \mathbf{H}^3(X, D(1)_{h.h.}^\bullet). $$
+
+We have the following analog of the existence and uniqueness of the canonical connection on an invertible sheaf.
+
+**Corollary 5.3.4.** A connective structure compatible with a hermitian structure on a gerbe $\mathcal{G}$ is uniquely determined up to equivalence.
+
+*Proof.* It is an immediate consequence of the fact that the groups in Theorems 5.2.2 and 5.3.3, being computed from quasi-isomorphic complexes, are actually the same (and equal to $H_{\mathcal{D}_{h.h.}}^3(X, 1)$). $\square$
+
+*Remark 5.3.5.* The group $\mathbf{H}^3(X, D(1)_{h.h.}^\bullet) \cong H_{\mathcal{D}_{h.h.}}^3(X, 1)$ is not equal to Brylinski's
+
+$$ \mathbf{H}^3(X, \mathbb{Z}(1) \to \underline{\mathcal{E}}_X^0(1) \to \underline{\mathcal{E}}_X^1(1)), $$
+
+cf. ref. [11, Proposition 6.9 (1)]. (In fact there is an epimorphism $C(1)^{\bullet} \to (\mathbb{Z}(1) \to \underline{\mathcal{E}}_X^0(1) \to \underline{\mathcal{E}}_X^1(1))$ with non-trivial kernel.) It follows that the notion of “hermitian gerbes with hermitian connective structure” in loc. cit. is not identical to our notion of $\mathcal{O}_X^\times$-gerbe with hermitian structure and compatible type (1,0) connective structure.
+
+*Proof of Theorem 5.3.3.* Choose a cover $\mathfrak{U}_X$ as usual and let $(P_i, f_{ij}, r_i)$ be a decomposition of $\mathcal{G}$ and its hermitian structure as in the proof of Theorem 5.2.2.
+
+If $\mathcal{G}$ has a compatible type (1,0) connective structure, we have a map $\underline{\mathrm{herm}}(P_i) \ni r_i \mapsto \nabla_i \in \underline{\mathrm{Co}}(P_i)$. For every isomorphism $f_{ij}$ the compatibility condition from Definition 5.3.2 determines a form
+
+$$ \xi_{ij} = \partial \log \rho_{ij} \in F^1\underline{A}_X^1(U_{ij}) $$
+
+satisfying the condition
+
+$$ (5.11) \qquad \xi_{jk} - \xi_{ik} + \xi_{ij} = d \log g_{ijk}. $$
+
+The imaginary 2-form $\eta_{ij} \stackrel{\text{def}}{=} \bar{\partial}\xi_{ij} = \bar{\partial}\partial\log\rho_{ij}$ is then a cocycle with values in $F^1\underline{A}_X^2 \cap \underline{\mathcal{E}}_X^2(1)$.
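+That $\eta_{ij}$ is indeed a cocycle is an immediate consequence of (5.11), since $d\log g_{ijk}$ is holomorphic and hence killed by $\bar{\partial}$:
+
+$$ \eta_{jk} - \eta_{ik} + \eta_{ij} = \bar{\partial}\left( \xi_{jk} - \xi_{ik} + \xi_{ij} \right) = \bar{\partial}\, d\log g_{ijk} = 0. $$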
+
+Altogether, $g_{ijk}$, $\frac{1}{2}\log\rho_{ij}$, $\xi_{ij}$ and $\eta_{ij}$ determine a cocycle of total degree 3 in the Čech resolution $\check{C}^\bullet(\mathfrak{U}_X, D(1)_{h.h.}^\bullet)$.
+
+Conversely, given a degree 3 cocycle with values in $D(1)_{h.h.}^\bullet$, a gerbe $\mathcal{G}$ with hermitian structure can be obtained by gluing trivial $\mathcal{O}_{U_i}^\times$-torsors and $\underline{\mathcal{E}}_{U_i,+}^0$-torsors as in Theorem 5.2.2. Furthermore, define a map by assigning the trivial $F^1\underline{A}_{U_i}^1$-torsor to the trivial $\underline{\mathcal{E}}_{U_i,+}^0$-torsor by
+
+$$ r \mapsto \nabla_r = \partial \log r. $$
+
+Clearly, this defines a type (1,0) connective structure compatible with the hermitian structure on $\mathcal{G}$. $\square$
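+The compatibility of the assignment $r \mapsto \nabla_r = \partial \log r$ with the hermitian structure, in the sense of Definition 5.3.2, can be checked in one line: for a positive function $\rho$,
+
+$$ r \cdot \rho \longmapsto \partial\log(r\rho) = \partial\log r + \partial\log\rho = \nabla_r + \partial\log\rho, $$
+
+as required.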
+
+*Remark 5.3.6.* Note from the proof of Theorem 5.3.3 that $d\eta_{ij} = 0$; hence we obtain a class
+
+$$ [\eta_{ij}] \in \mathbf{H}^3(X, F^1\underline{A}_X^\bullet \cap \sigma^2\underline{\mathcal{E}}_X^\bullet(1)) $$
+
+which can be associated to $\mathcal{G}$ via the obvious map
+
+$$ D(1)_{h.h.}^\bullet \to F^1\underline{A}_X^\bullet \cap \sigma^2\underline{\mathcal{E}}_X^\bullet(1). $$
+
+This class plays the same role for $\mathcal{G}$ as the (global) imaginary form $c_1(\rho) = \bar{\partial}\partial\log\rho_i$ for a metrized line bundle $(L, \rho)$.
+
+*Remark 5.3.7* (Hermitian curving). An equivalent degree 3 cocycle can be obtained by introducing the cochain $K_i \in \underline{A}_X^{1,1} \cap \underline{\mathcal{E}}_X^2(1)(U_i)$ of imaginary 2-forms such that
+
+$$ \bar{\partial}\partial\log\rho_{ij} = K_j - K_i, $$
+
+and the imaginary 3-form $\Omega_i = \Omega|_{U_i}$ such that
+
+$$ dK_i = \Omega|_{U_i}, $$
+
+where $\Omega \in F^1A^3(X) \cap E^3(X)(1)$ (global sections). We can regard $K_i$ as the hermitian *curving* and $\Omega$ as the hermitian *3-curvature*, respectively, of the type (1,0) hermitian connection.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_19.md b/samples/texts/7707372/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e856c3bc113f5080bbf9495417a35b725bb10f0
--- /dev/null
+++ b/samples/texts/7707372/page_19.md
@@ -0,0 +1,49 @@
+## 5.4 The symbol $(f, L]_{h.h.}$
+
+Given an invertible function $f$ and a line bundle $L$ we have seen there is a product $(f, L]_{h.h.} \in H_{\mathcal{D}_{h.h.}}^3(X, 1)$.
+We briefly give a geometric construction of the corresponding hermitian-holomorphic gerbe.
+
+We need to recall from [9] the construction of the gerbe $\mathcal{C}$ underlying $(f, L]$. $\mathcal{C}$ is the stackification of the following pre-stack $\mathcal{C}^0$. For $U \hookrightarrow X$, objects of the category $\mathcal{C}_U^0$ are non-vanishing sections of $L|_U$; a non-vanishing section $s$ of $L|_U$ is denoted $(f, s]$ as an object of $\mathcal{C}_U^0$. Given another non-vanishing section $s'$ of $L$ over $U$, there is $g \in \mathcal{O}_U^\times$ such that $s' = sg$. Morphisms from $(f, s']$ to $(f, s]$ are given by sections of the Deligne torsor $(f, g]$ over $U$. For a third non-vanishing section $s''$, with $s'' = s'g' = sgg'$, composition of morphisms in the category $\mathcal{C}_U^0$ corresponds to the $K$-theoretic property of the Deligne torsor:
+
+$$ (f, gg'] \cong (f, g] \otimes (f, g']. $$
+
+Given a trivialization of $L$ by a collection $\{s_i\}$ relative to a cover $\mathcal{U}_X = \{U_i\}_{i \in I}$, with transition functions
+$g_{ij} \in \mathcal{O}_X^\times(U_{ij})$, the objects $(f, s_i]$ and the morphisms
+
+$$ \phi_{ij} = \{\log_i f, g_{ij}\} : (f, s_j] \to (f, s_i] $$
+
+provide a decomposition of $\mathcal{C}$ in the sense of [7]. It follows that the automorphisms
+
+$$ (5.12) \qquad h_{ijk} = \phi_{ij} \otimes \phi_{jk} \otimes \phi_{ik}^{-1} = g_{jk}^{-m_{ij}} \in \mathrm{Aut}((f, s_i]|_{U_{ijk}}) \cong \mathcal{O}_X^\times(U_{ijk}) $$
+
+represent the cohomology class of $\mathcal{C}$ in $H_{\mathcal{D}}^3(X, \mathbb{Z}(1)) \cong H^2(X, \mathcal{O}_X^\times)$.
+
+Now define a *hermitian structure* on $\mathcal{C}$ as follows. To an object $(f, s]$ of $\mathcal{C}_U$ we assign
+
+$$ (5.13) \qquad (f, s] \rightsquigarrow \operatorname{herm}((f, s]) = \text{trivial } \mathcal{E}_{U,+}^0\text{-torsor.} $$
+
+Then, given a morphism $(f,g] \ni \phi: (f,s'] \to (f,s]$ in $\mathcal{C}_U$, with $s' = sg$ as above, we use the hermitian
+structure on the Deligne torsor underlying $(f,g]$ defined in sect. 4.3, Proposition 4.3.4. Namely
+
+$$ (5.14) \qquad \begin{gathered} \phi_* : \operatorname{herm}((f, s']) \to \operatorname{herm}((f, s]) \\ h \mapsto h \cdot \| \phi \|^{2} \end{gathered} $$
+
+where $h$ is a local section of $\underline{\operatorname{herm}}((f, s'])$, to be identified with one of $\underline{\mathcal{E}}_{U,+}^0$ and $\|\phi\|$ is the length of the
+non-vanishing section $\phi$. We have the following analog of Proposition 4.3.4:
+
+**Proposition 5.4.1.** The class of the gerbe $\mathcal{C}$ underlying the symbol $(f, L]$ with hermitian structure defined by eqs. (5.13) and (5.14) is given by the product $(f, L]_{h.h.}$ in the group $H^3(X, \mathbb{Z}(1)_X \to \mathcal{O}_X \to \underline{\mathcal{E}}_X^0) \cong H_{\mathcal{D}_{h.h.}}^3(X, 1)$.
+
+*Proof.* We need to find the class of $\mathcal{C}$ as in the proof of Thm. 5.2.2 and show it coincides with $(f, L]_{h.h.}$ as computed in eq. (5.3). To this end, let us use the decomposition of $\mathcal{C}$ given by the objects $(f, s_i]$ and morphisms $\phi_{ij} = \{\log_i f, g_{ij}\}: (f, s_j] \to (f, s_i]$ for non-vanishing sections $s_i \in L|_{U_i}$, as before. The class of $\mathcal{C}$ (without extra structures) is represented by the cochain $g_{jk}^{-m_{ij}}$ already appearing in eq. (5.12).
+
+Furthermore, in the hermitian Deligne torsor $(f,g_{ij}]$ over $U_{ij}$ the logarithm of the length of the section
+$\phi_{ij} = \{\log_i f, g_{ij}\}$ is given by
+
+$$ \sigma_{ij} = \frac{1}{2} \log \| \phi_{ij} \|^{2} = \frac{1}{2} \log \rho_{ij} = -\frac{1}{2\pi\sqrt{-1}} \pi_{1}(\log_{i} f) \log |g_{ij}|, $$
+
+cf. eq. (4.11). Thus we have found the total cocycle representing $(f, L]_{h.h.}$ as in eq. (5.3). Indeed, by computing
+the Čech coboundary we find
+
+$$ \sigma_{ij} - \sigma_{ik} + \sigma_{jk} = -m_{ij} \log |g_{jk}|, $$
+
+as desired. $\square$
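+The coboundary computation can be expanded as follows (a sketch, assuming the standard convention $\log_j f - \log_i f = 2\pi\sqrt{-1}\, m_{ij}$ and using $\log|g_{ik}| = \log|g_{ij}| + \log|g_{jk}|$ on $U_{ijk}$):
+
+$$ \sigma_{ij} - \sigma_{ik} + \sigma_{jk} = -\frac{1}{2\pi\sqrt{-1}}\,\pi_1(\log_j f - \log_i f)\,\log|g_{jk}| = -m_{ij}\log|g_{jk}|, $$
+
+where the last step uses that $2\pi\sqrt{-1}\, m_{ij}$ is purely imaginary and hence fixed by $\pi_1$.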
\ No newline at end of file
diff --git a/samples/texts/7707372/page_2.md b/samples/texts/7707372/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..7c80bdb4dc490f641f09a05fd5fa9934b93ac076
--- /dev/null
+++ b/samples/texts/7707372/page_2.md
@@ -0,0 +1,108 @@
+
+
+
+ | 5 |
+ Hermitian holomorphic gerbes and 2-gerbes |
+ 15 |
+
+
+ | 5.1 |
+ Higher tame symbols |
+ 15 |
+
+
+ | 5.2 |
+ Gerbes with Hermitian structure |
+ 16 |
+
+
+ | 5.3 |
+ Hermitian connective structure |
+ 17 |
+
+
+ | 5.4 |
+ The symbol (f, L]h,h. |
+ 19 |
+
+
+ | 5.5 |
+ Hermitian 2-Gerbes. |
+ 20 |
+
+
+ | 5.6 |
+ The symbol (L, L')h,h. |
+ 23 |
+
+
+ | 5.7 |
+ Comparisons and relations with other definitions |
+ 23 |
+
+
+ | 6 |
+ Concluding remarks |
+ 25 |
+
+
+ | A |
+ Remarks on Hodge-Tate structures |
+ 25 |
+
+
+ | A.1 |
+ A Mixed Hodge Structure |
+ 25 |
+
+
+ | A.2 |
+ The big period |
+ 26 |
+
+
+ | A.3 |
+ The extension class |
+ 27 |
+
+
+
+
+# 1 Introduction
+
+The aim of this work is two-fold. For an analytic manifold $X$ we investigate geometric objects corresponding to the elements of certain low-degree Hermitian-Holomorphic Deligne cohomology groups. These groups, denoted here $H_{\mathcal{D}_{h.h.}}^k(X,l)$, for two integers $k$ and $l$, were defined in [11] and, in a slightly different fashion, later in [1].
+
+It is already an observation by Deligne (cf. [14]) that $H_{\mathcal{D}_{h.h.}}^2(X, 1) \cong \widehat{\mathrm{Pic}}(X)$, the group of isomorphism classes of holomorphic line bundles with hermitian fiber metric. Here we define an appropriate notion of hermitian structure on a gerbe (or 2-gerbe) bound by $\mathcal{O}_X^\times$ and show that the corresponding (equivalence) classes are in bijective correspondence with the elements of $H_{\mathcal{D}_{h.h.}}^k(X, 1)$, for $k=3,4$.
+
+As a second result and application, we show that the torsors and (2-)gerbes underlying the cup products in ordinary Deligne cohomology studied by Brylinski-McLaughlin [8, 9] can be equipped in a rather natural way with the above mentioned hermitian structures, thus producing classes in the Hermitian-Holomorphic variant. More precisely, we modify the cup product at the level of Deligne complexes to land into a Hermitian-Holomorphic one. This modification is actually quite a natural one from the point of view of Mixed Hodge Structures.
+
+## 1.1 Background notions
+
+To explain things a little bit more, let $X$ be an analytic manifold and let $A \subseteq \mathbb{R}$ be a subring—typically $A = \mathbb{Z}, \mathbb{Q}$ or $\mathbb{R}$. For any integer $j$, set $A(j) = (2\pi\sqrt{-1})^j A$ and let $A(j)_{\mathcal{D}}^{\bullet}$ be the Deligne complex
+
+$$A(j)_X \hookrightarrow \mathcal{O}_X \to \underline{\Omega}_X^1 \to \dots \to \underline{\Omega}_X^{j-1}.$$
+
+It is well known that (at the level of the derived category) there are maps $A(j)_{\mathcal{D}}^{\bullet} \otimes A(k)_{\mathcal{D}}^{\bullet} \to A(j+k)_{\mathcal{D}}^{\bullet}$ inducing a cup product in cohomology
+
+$$H_{\mathcal{D}}^{p}(X, A(j)) \otimes H_{\mathcal{D}}^{q}(X, A(k)) \xrightarrow{\cup} H_{\mathcal{D}}^{p+q}(X, A(j+k)),$$
+
+where we have used the notation $H_{\mathcal{D}}^{p}(X, A(j)) = \mathbf{H}^{p}(X, A(j)_{\mathcal{D}}^{\bullet})$ for the *Deligne cohomology* groups, and $\mathbf{H}^{\bullet}(X, -)$ denotes hypercohomology.
+
+The question of obtaining a geometric picture of the cup product in cohomology is a very interesting one.
+A chief foundational example is the following. For $A = \mathbb{Z}$ the product
+
+$$
+(1.1) \qquad \mathbb{Z}(1)_{\mathcal{D}}^{\bullet} \otimes \mathbb{Z}(1)_{\mathcal{D}}^{\bullet} \to \mathbb{Z}(2)_{\mathcal{D}}^{\bullet}
+$$
+
+corresponds to the morphism
+
+$$
+(1.2) \qquad \mathcal{O}_X^\times \otimes \mathcal{O}_X^\times \to (\mathcal{O}_X^\times \xrightarrow{\mathrm{d}\log} \underline{\Omega}_X^1)
+$$
+
+via the quasi-isomorphisms $\mathbb{Z}(1)_{\mathcal{D}}^{\bullet} \cong \mathcal{O}_X^\times[-1]$ and $\mathbb{Z}(2)_{\mathcal{D}}^{\bullet} \cong (\mathcal{O}_X^\times \xrightarrow{\mathrm{d}\log} \underline{\Omega}_X^1)[-1]$. Deligne gave a geometric construction of (1.2) and the ensuing cup product
+
+$$
+\mathcal{O}_X^\times(X) \otimes \mathcal{O}_X^\times(X) \xrightarrow{\cup} \mathbf{H}^1(X, \mathcal{O}_X^\times \xrightarrow{\mathrm{d}\log} \underline{\Omega}_X^1)
+$$
\ No newline at end of file
diff --git a/samples/texts/7707372/page_20.md b/samples/texts/7707372/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..94ab703ac2892e03c1e2f7d2eefc3f0452a622cb
--- /dev/null
+++ b/samples/texts/7707372/page_20.md
@@ -0,0 +1,58 @@
+## 5.5 Hermitian 2-Gerbes
+
+Let us briefly extend the considerations outlined in the previous sections to 2-gerbes over $X$ bound by $\mathcal{O}_X^\times$. (An extended exposition of the local geometry of 2-gerbes is to be found in ref. [7]. See also [8] for the abelian case.)
+
+Recall that a 2-gerbe $\mathcal{G}$ over $X$ bound by a sheaf of *abelian groups* $\mathcal{H}$ is a fibered 2-category over $X$ which satisfies the 2-descent condition for objects, and such that for any two objects $P$ and $Q$ in the fiber 2-category $\mathcal{G}_U$ over $U \subset X$ the fibered category $\mathrm{Hom}(P, Q)$ is a stack. In fact, this fibered category turns out to be an $\mathcal{H}$-gerbe equivalent to the neutral one $\mathrm{Tors}(\mathcal{H})$. The properties of interest to us are the following: $\mathcal{G}$ is locally non-empty, namely there is a cover $\mathfrak{U}_X$ of $X$ such that for $U \subset X$ in the cover, the object set of $\mathcal{G}_U$ is non-empty; $\mathcal{G}$ is locally connected, namely any two objects can be connected by a weakly invertible 1-arrow (that is, invertible up to a 2-arrow); any two 1-arrows can be (locally) joined by a 2-arrow; finally, for every 1-arrow its automorphism group is isomorphic in a specified way to $\mathcal{H}$.
+
+Once the appropriate notion of isomorphism for 2-gerbes is introduced, isomorphism classes of 2-gerbes
+bound by $\mathcal{H}$ are classified by the sheaf cohomology group $H^3(X, \mathcal{H})$, see, e.g. refs. [7, 8].
+
+In what follows, we shall set $\mathcal{H} = \mathcal{O}_X^\times$. Hence we can rephrase the previous statement by saying that
+isomorphism classes of 2-gerbes bound by $\mathcal{O}_X^\times$ are classified by the group
+
+$$
+H^3(X, \mathcal{O}_X^\times) \cong H_D^4(X, \mathbb{Z}(1)).
+$$
+
+We shall need the local calculation leading to the classification, so we recall it here. Given a 2-gerbe $\mathcal{G}$, let us choose a decomposition by selecting a cover $\mathfrak{U}_X$ of $X$ and a collection of objects $P_i$ in $\mathcal{G}_{U_i}$. There is a 1-arrow
+
+$$ f_{ij}: P_j \to P_i $$
+
+between their restrictions to $\mathcal{G}_{U_{ij}}$. Furthermore, from the axioms there is a 2-arrow
+
+$$
+\alpha_{ijk} : f_{ij} \circ f_{jk} \implies f_{ik}.
+$$
+
+Further restricting over a 4-fold intersection $U_{ijkl}$, we have two 1-arrows $f_{ij} \circ f_{jk} \circ f_{kl} : P_l \to P_i$ and $f_{il} : P_l \to P_i$, and between them two 2-arrows, namely $\alpha_{ijl} \circ (\mathrm{Id}_{f_{ij}} * \alpha_{jkl})$ and $\alpha_{ikl} \circ (\alpha_{ijk} * \mathrm{Id}_{f_{kl}})$. Since 2-arrows are strictly invertible, it follows again from the axioms that there exists a section $h_{ijkl}$ of $\mathcal{O}_X^\times$ over $U_{ijkl}$ such that
+
+$$
+(5.15) \qquad \alpha_{ijl} \circ (\mathrm{Id}_{f_{ij}} * \alpha_{jkl}) = h_{ijkl} \circ \alpha_{ikl} \circ (\alpha_{ijk} * \mathrm{Id}_{f_{kl}}).
+$$
+
+This section is a 3-cocycle and the assignment $\mathcal{G} \mapsto [h_{ijkl}]$ gives the classification isomorphism.
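+Explicitly, on a 5-fold intersection $U_{ijklm}$ the section $h_{ijkl}$ satisfies the multiplicative form of the degree 3 Čech cocycle identity:
+
+$$ h_{jklm}\, h_{iklm}^{-1}\, h_{ijlm}\, h_{ijkm}^{-1}\, h_{ijkl} = 1. $$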
+
+In analogy with what was previously done for gerbes, we are going to define a notion of hermitian structure and of type (1,0) *connectivity* for 2-gerbes on $X$ bound by $\mathcal{O}_X^\times$. Brylinski and McLaughlin defined a *concept of connectivity* on a 2-gerbe $\mathcal{G}$ over $X$ to be the datum of a compatible class of connective structures on the gerbes $\operatorname{Hom}_U(P,Q)$ for two objects $P, Q$ in the fiber $\mathcal{G}_U$. It is possible to introduce several variants of this notion, as done in refs. [8, 9]. Thus a type (1,0) connectivity will just be the requirement that these connective structures take their values in $F^1\underline{A}_X^1$-torsors.
+
+Let us model the concept of hermitian structure on a 2-gerbe after the one for gerbes given above in definition 5.2.1.
+
+**Definition 5.5.1.** A hermitian structure on an $\mathcal{O}_X^\times$-2-gerbe $\mathcal{G}$ over $X$ consists of the following data.
+
+1. To each object $P$ in the fiber 2-category $\mathcal{G}_U$ over $U \subset X$ we assign an $\mathcal{E}_{U,+}^0$-gerbe $\underline{\mathrm{herm}}(P)$ over $U$. (As before, $\mathcal{E}_{U,+}^0$ is the sheaf of real positive functions on $U$.)
+
+2. This assignment must be compatible with the inverse image 2-functors $i^*: \mathcal{G}_U \to \mathcal{G}_V$, natural transformations $\varphi_{i,j}: j^*i^* \Rightarrow (ij)^*$ and modifications $\alpha_{i,j,k}: \varphi_{ij,k} \circ (k^*\varphi_{i,j}) \Rightarrow \varphi_{i,jk} \circ (\varphi_{j,k}*i^*)$ arising from the inclusions $i: V \hookrightarrow U$, $j: W \hookrightarrow V$, and $k: Z \hookrightarrow W$ in the cover $\mathfrak{U}_X$.
+
+3. For each 1-arrow $f: P \to Q$ in $\mathcal{G}_U$ a corresponding equivalence $f_*: \underline{\mathrm{herm}}(P) \to \underline{\mathrm{herm}}(Q)$ of $\mathcal{E}_{U,+}^0$-gerbes. For each 2-arrow $\alpha: f \Rightarrow f'$ a corresponding natural transformation $\alpha_*: f_* \Rightarrow f'_*$ between equivalences. We ask that this correspondence be compatible with compositions of 1- and 2-arrows. Namely, for 1-arrows $f, f': P \to Q$ and $g, g': Q \to R$ and for 2-arrows $\alpha: f \Rightarrow f'$ and $\beta: g \Rightarrow g'$ in $\mathcal{G}_U$, which we
\ No newline at end of file
diff --git a/samples/texts/7707372/page_21.md b/samples/texts/7707372/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..159528d48ddce02f3608eea31cbe7f8567c19402
--- /dev/null
+++ b/samples/texts/7707372/page_21.md
@@ -0,0 +1,37 @@
+compose as $\beta * \alpha: g \circ f \Rightarrow g' \circ f'$, we find a diagram of natural transformations
+
+$$ (5.16) \qquad \begin{array}{ccc} g_* \circ f_* & \xrightarrow{\epsilon(f,g)} & (g \circ f)_* \\ \Big\downarrow{\scriptstyle\,\beta_* * \alpha_*} & & \Big\downarrow{\scriptstyle\,(\beta * \alpha)_*} \\ g'_* \circ f'_* & \xrightarrow{\epsilon(f',g')} & (g' \circ f')_* \end{array} $$
+
+of equivalences between the $\mathcal{E}_{U,+}^0$-gerbes $\underline{\text{herm}}(P)$ and $\underline{\text{herm}}(R)$ on $U \subset X$.
+
+4. From the axioms, the group of automorphisms of a 1-arrow $f: P \to Q$ in $\mathcal{G}_U$ is identified with $\mathcal{O}_U^\times$. It follows that such an automorphism $\alpha$ (that is, a 2-arrow from $f$ to itself) can be identified with a section $a \in \mathcal{O}_U^\times$. We then require that the induced natural isomorphism
+
+$$ \alpha_* : f_* \implies f_*, \quad \text{where } f_* : \underline{\text{herm}}(P) \to \underline{\text{herm}}(Q) $$
+
+be identified with a section of $\mathcal{E}_{U,+}^0$ via the map
+
+$$ (5.17) \qquad a \mapsto |a|^2 $$
+
+and an appropriate labeling of $\underline{\text{herm}}(P)$ and $\underline{\text{herm}}(Q)$ by objects $r$ and $s$, respectively. In more detail, given an arrow $f_*(r) \to s$ in $\underline{\text{herm}}(Q)$, the action of $\alpha$ via $\alpha_*$ will amount to an automorphism of $s$. We require that it be $|a|^2$.
+
+**Remark 5.5.2.** The abstract nonsense of definition 5.5.1 could have been more succinctly packaged by saying that the correspondence $\underline{\mathrm{herm}}(\cdot)$ realizes a Cartesian 2-functor between $\mathcal{G}$ and the 2-gerbe $\mathrm{Gerbes}(\mathcal{E}_{X,+}^0)$ on $X$, leaving to the reader the burden of unraveling the diagrams.
+
+We have the following analog of theorem 5.2.2:
+
+**Theorem 5.5.3.** Isomorphism classes of $\mathcal{O}_X^\times$-2-gerbes with hermitian structure in the sense of definition 5.5.1 are classified by the group
+
+$$ H^4(X, \mathbb{Z}(1)_X \to \mathcal{O}_X \to \mathcal{E}_X^0) \cong H_{\mathcal{D}_{h.h.}}^4(X, 1). $$
+
+*Proof.* Let $\mathcal{G}$ be an $\mathcal{O}_X^\times$-2-gerbe on $X$ with hermitian structure as per definition 5.5.1. Forgetting the hermitian structure, $\mathcal{G}$ will determine a class in the group $H_D^4(X, \mathbb{Z}(1)) \cong H^3(X, \mathcal{O}_X^\times)$, and we have briefly recalled before (cf. eq. (5.15)) how to obtain a 3-cocycle representing the class of $\mathcal{G}$.
+
+To obtain the rest of the cocycle with values in the complex $\mathbb{Z}(1)_X \to \mathcal{O}_X \to \mathcal{E}_X^0$, let us make the same choice for a decomposition of $\mathcal{G}$ with respect to the cover $\mathfrak{U}_X$: a collection of objects $P_i$ in $\mathcal{G}_{U_i}$, 1-arrows $f_{ij}: P_j \to P_i$ between their restrictions, and 2-arrows $\alpha_{ijk}: f_{ij} \circ f_{jk} \Rightarrow f_{ik}$.
+
+We shall also need a decomposition of the $\mathcal{E}_{U_i,+}^0$-gerbes $\underline{\mathrm{herm}}(P_i)$: to this end let us choose objects $r_i$ over $U_i$ and arrows $\xi_{ij}: (f_{ij})_*(r_j) \to r_i$ between their restrictions to $U_{ij}$.
+
+Let us consider a triple of objects $P_i, P_j, P_k$ over $U_{ijk}$. (We are implicitly restricting to the fiber 2-category $\mathcal{G}_{U_{ijk}}$.) We obtain the following diagram in $\underline{\mathrm{herm}}(P_i)|_{U_{ijk}}$:
+
+The left vertical arrow in (5.18) results from the composition of 2-arrows
+
+$$ (f_{ij})_*(f_{jk})_* \xrightarrow{\varepsilon_{ijk}} (f_{ij} \circ f_{jk})_* \xrightarrow{(\alpha_{ijk})_*} (f_{ik})_* $$
+
+resulting from diagram (5.16) in definition 5.5.1. At the level of objects in the gerbe $\underline{\mathrm{herm}}(P_i)$, diagram (5.16) is of course not commutative, so we obtain a section $\rho_{ijk} \in \underline{\mathrm{Aut}}(r_i)$, which we can identify with a section of the sheaf $\mathcal{E}_{X,+}^0$ over $U_{ijk}$.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_22.md b/samples/texts/7707372/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..34cc8a52cc0b3a9c265ec12b09657689a383da00
--- /dev/null
+++ b/samples/texts/7707372/page_22.md
@@ -0,0 +1,43 @@
+Now consider a four-fold intersection $U_{ijkl}$: we have a cube determined by the objects $r_i, \dots, r_l$ whose faces are built from copies of (5.18). Since this cube brings in the relation (5.15), using the map (5.17) describing the $\mathcal{O}_X^\times$-action spelled out in the last point of definition 5.5.1, we get the relation
+
+$$ (5.19) \qquad \rho_{jkl} \rho_{ikl}^{-1} \rho_{ijl} \rho_{ijk}^{-1} = |h_{ijkl}|^2 $$
+
+which, after taking the appropriate logarithms, defines a Čech cocycle representing a class in
+
+$$ \check{\mathbf{H}}^4(\mathfrak{U}_X, \mathbb{Z}(1)_X \to \mathcal{O}_X \to \underline{\mathcal{E}}_X^0). $$
+
+Details (and diagram chasing) are straightforward and left to the reader.
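+In additive form, taking logarithms as in the proof of Theorem 5.2.2, the identity (5.19) reads (a sketch):
+
+$$ \frac{1}{2}\left( \log\rho_{jkl} - \log\rho_{ikl} + \log\rho_{ijl} - \log\rho_{ijk} \right) = \log|h_{ijkl}|, $$
+
+so that $\log h_{ijkl}$ and $\frac{1}{2}\log\rho_{ijk}$, together with the integral cochain fixing the branch choices, assemble into a degree 3 cocycle with values in $\mathbb{Z}(1)_X \to \mathcal{O}_X \to \underline{\mathcal{E}}_X^0$.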
+
+Conversely, let us be given a class in
+
+$$ \mathbf{H}^4(X, \mathbb{Z}(1)_X \to \mathcal{O}_X \to \underline{\mathcal{E}}_X^0) \cong \mathbf{H}^3(X, \mathcal{O}_X^\times \xrightarrow{| \cdot |} \underline{\mathcal{E}}_{X,+}^0), $$
+
+and let us assume it is represented by the (multiplicative) Čech cocycle $(h_{ijkl}, \rho_{ijk})$. Let us just explain the construction of a corresponding 2-gerbe with hermitian structure (up to equivalence). Again, details will be left to the reader.
+
+We first apply the map
+
+$$ (\mathbb{Z}(1)_X \to \mathcal{O}_X \to \underline{\mathcal{E}}_X^0) \to (\mathbb{Z}(1)_X \to \mathcal{O}_X) $$
+
+to the representative Čech cocycle to reconstruct a $\mathcal{O}_X^\times$-2-gerbe G according to refs. [7, 8, 9]. Recall that this is accomplished by gluing the local stacks Gerbes($\mathcal{O}_{U_i}^\times$) using $h_{ijkl}$. Secondly, we define a hermitian structure as follows. Assign to any object $P_i$ over $U_i$ of the so-determined 2-gerbe G the trivial $\underline{\mathcal{E}}_{U_i,+}^0$-gerbe herm($P_i$) = Tors($\underline{\mathcal{E}}_{U_i,+}^0$). For a triple of such on $U_{ijk}$ we use $\rho_{ijk} \in \underline{\mathcal{E}}_{U_i,+|U_{ijk}}^0$ as an automorphism of an object $r_i$ in herm($P_i$).
+
+Checking that this structure satisfies the properties in definition 5.5.1, and that it defines a 2-gerbe with hermitian structure whose class is the one we started with, is modeled after the pattern of refs. [7] and [10] and will be left to the reader. $\square$
+
+As mentioned before, a connectivity on a $\mathcal{O}_X^\times$-2-gerbe is in practice the assignment of compatible connective structures on the local gerbes of morphisms. We have the following definition (see also [11, sect. 7], for the first part):
+
+**Definition 5.5.4.** Let G be a $\mathcal{O}_X^\times$-2-gerbe on X.
+
+1. A type (1,0) *concept of connectivity* on G is the assignment of a $F^1\underline{A}_U^1$-gerbe $\text{Co}(P)$ to each object P in $G_U$. This assignment will have to satisfy properties analogous to those of definition 5.5.1. Of course, in the last condition, the map (5.17) will have to be replaced by $a \mapsto d \log a$.
+
+2. A type (1,0) concept of connectivity is *compatible* with a hermitian structure if for each object P of $G_U$ there is an equivalence of gerbes
+
+$$ \text{herm}(P) \to \text{Co}(P) $$
+
+satisfying the obvious compatibility conditions with the operations of $G_U$ and the restrictions.
+
+The proof of the following theorem can be patterned after an appropriate generalization of the proof of Theorem 5.3.3, so we shall omit it.
+
+**Theorem 5.5.5.** Let G be a $\mathcal{O}_X^\times$-2-gerbe with hermitian structure and let $D(1)_{h.h.}^\bullet$ be the complex given by (3.7) for $l=1$. Equivalence classes of type (1,0) connectivities on G compatible with the given hermitian structure are classified by the group
+
+$$ \mathbf{H}^4(X, D(1)_{h.h.}^\bullet). $$
+
+Furthermore, the equivalence class is unique.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_23.md b/samples/texts/7707372/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..11370d8979e37620ac846ed8b85618cf37ad0d8b
--- /dev/null
+++ b/samples/texts/7707372/page_23.md
@@ -0,0 +1,45 @@
+**5.6 The symbol $(L, L']_{h.h.}$**
+
+We have seen that given two line bundles $L$ and $L'$ over $X$ their cup product $(L, L']_{h.h.}$ defines a class in $H^4_{\mathcal{D}_{h.h.}}(X, 1)$. According to Theorem 5.5.3 it corresponds to an equivalence class of 2-gerbes with hermitian structure. Using the obvious maps of complexes $D(1)^{\bullet}_{h.h.} \to \mathbb{Z}(1)^{\bullet}_{\mathcal{D}}$ and $\mathbb{Z}(2)^{\bullet}_{\mathcal{D}} \to \mathbb{Z}(1)^{\bullet}_{\mathcal{D}}$, the geometric 2-gerbe $G$ that underlies $(L, L']_{h.h.}$ is the same one as for the standard symbol $(L, L']$ constructed by Brylinski and McLaughlin.
+
+Recall (see ref. [9] for more details) that objects of $G$ underlying $(L, L']$ over $U \subset X$ are the non-vanishing sections $s$ of $L|_U$, denoted $(s, L']$. Given another non-vanishing section $s' \in L|_U$ we have $s' = sg$ for an invertible function $g$ over $U$. Then the category of morphisms from $(s', L']$ to $(s, L']$ is the *gerbe* $(g, L']$ defined in section 5.4. For a third non-vanishing section $s''$ of $L$ over $U$, with $s'' = s' g'$, the morphism composition functor is given by the equivalence
+
+$$ (g, L'] \otimes (g', L'] \rightarrow (gg', L'] $$
+
+where on the left hand side we have the contracted product of two (abelian) gerbes. To be precise, it turns out that $G$ is an appropriate “2-stackification” of the 2-pre-stack defined here.
+
+A calculation in ref. [9] shows that with respect to the trivializations $\{g_{ij}\}$ and $\{g'_{ij}\}$ of $L$ and $L'$, respectively, the class of $G$ is represented by the cocycle $(g'_{kl})^{-c_{ijk}} \in \mathcal{O}_X^\times(U_{ijkl})$, where the cocycle $c_{ijk}$ represents $c_1(L)$.
+
+We can define a hermitian structure on $G$ as follows. To an object $(s, L']$ of $G_U$ we assign
+
+$$ (5.20) \qquad (s, L'] \rightsquigarrow \operatorname{herm}((s, L']) = \text{trivial } \mathcal{E}_{U,+}^0\text{-gerbe}. $$
+
+Furthermore, as remarked above we have $\operatorname{Hom}_U((s', L'], (s, L']) \cong (g, L']$. Thus we set
+
+$$ (5.21) \qquad \operatorname{Hom}_U(\operatorname{herm}((s', L']), \operatorname{herm}((s, L'])) = (g, L']_{h.h.}, $$
+
+where on the right hand side we use the hermitian structure on the gerbe $(g, L']$ as defined in section 5.4. On the left hand side of (5.21) we have the equivalences between the two $\mathcal{E}_{U,+}^0$-gerbes.
+
+The proof of the following proposition is a straightforward generalization of the one for proposition 5.4.1.
+
+**Proposition 5.6.1.** *The class of the $\mathcal{O}_X^\times$-2-gerbe $G$ underlying the symbol $(L, L']$ with hermitian structure defined by eqs. (5.20) and (5.21) is given by the product $(L, L']_{h.h.}$ in the group $\mathbf{H}^4(X, \mathbb{Z}(1)_X \to \mathcal{O}_X \to \mathcal{E}_X^0)$ $\cong H^4_{\mathcal{D}_{h.h.}}(X, 1)$.*
+
+**5.7 Comparisons and relations with other definitions**
+
+Recall from refs. [8, 9], that analytic connective structures on gerbes with band $\mathcal{O}_X^\times$ are classified by the group $H^3_{\mathcal{D}}(X, \mathbb{Z}(2))$. Similarly, for 2-gerbes with the same band, the relevant group is $H^4_{\mathcal{D}}(X, \mathbb{Z}(2))$. In the previous sections we have introduced hermitian structures and type-(1,0) connective structures on gerbes and 2-gerbes with band $\mathcal{O}_X^\times$. We define the concept of compatibility analogously to the case of line bundles in sect. 4.4 as follows.
+
+Let $\mathcal{S}$ be a $\mathcal{O}_X^\times$-gerbe on $X$. Let $\mathcal{Co}(\cdot)^{\text{an}}$ be a (holomorphic) connective structure on $\mathcal{S}$ in the sense of refs. [8, 9], and let $\mathcal{Co}(\cdot)^h$ be a connective structure on the same gerbe in the sense of sect. 5.3.
+
+The relevant group classifying $\mathcal{S}$ equipped with both types of connections is therefore $\mathbf{H}^3(X, \tilde{\Gamma}(2)^\bullet)$, where the complex $\tilde{\Gamma}(2)^\bullet$ has been introduced in sect. 4.4.
+
+**Definition 5.7.1.** We say that $\mathcal{Co}(\cdot)^{\text{an}}$ and $\mathcal{Co}(\cdot)^h$ are compatible if for any object $P$ of $\mathcal{S}_U$, $U \subset X$, there is an isomorphism of torsors $\mathcal{Co}(P)^{\text{an}} \cong \mathcal{Co}(P)^h$ (after extending the torsor $\mathcal{Co}(P)^{\text{an}}$ along $\Omega_U^1 \to \Lambda_U^{1,0}$).
+
+Similarly, if $G$ is a $\mathcal{O}_X^\times$-2-gerbe on $X$, carrying both types of connective structures, its class is an element of the group $\mathbf{H}^4(X, \tilde{\Gamma}(2)^\bullet)$. We can also repeat the above definition, taking care that now for any object of $G$ over $U \subset X$, $\mathcal{Co}(P)^{\text{an}} \cong \mathcal{Co}(P)^h$ must be an equivalence of gerbes.
+
+The next lemma immediately follows from the definitions.
+
+**Lemma 5.7.2.** Let $\Gamma(2)^\bullet$ be the complex defined in sect. 4.4.
+
+1. Classes of $\mathcal{O}_X^\times$-gerbes with compatible connective structures in the sense of definition 5.7.1 are classified by the elements of the group $\mathbf{H}^3(X, \Gamma(2)^\bullet)$.
+
+2. Similarly, classes of $\mathcal{O}_X^\times$-2-gerbes with compatible connective structures are classified by $\mathbf{H}^4(X, \Gamma(2)^\bullet)$.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_24.md b/samples/texts/7707372/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..f829b6bc4a2eaf3a5e4c2674fdef84062813c050
--- /dev/null
+++ b/samples/texts/7707372/page_24.md
@@ -0,0 +1,47 @@
+### 5.7.1 Compatibility and flatness conditions
+
+While these definitions seem to follow the pattern of line bundles analyzed in sect. 4.4, there is an important difference, namely gerbes (or 2-gerbes) satisfying the compatibility condition of definition 5.7.1 are not necessarily flat! Moreover, in the present framework the compatibility condition is less special than in the case of line bundles. This can be seen by way of the following cohomological argument.
+
+The complex $\Gamma(2)^\bullet$ introduced in sect. 4.4 is easily seen to be a quotient of the complex $D(2)_{h.h.}^\bullet$:
+
+$$D(2)_{h.h.}^\bullet \to \Gamma(2)^\bullet \to 0.$$
+
+The kernel is complicated, but up to quasi-isomorphism, it can be reduced (by direct computation) to the one-element complex $\mathcal{E}_X^2(1) \cap A_X^{1,1}[-4]$ so that we have the triangle:
+
+$$\mathcal{E}_X^2(1) \cap A_X^{1,1}[-4] \to D(2)_{h.h.}^\bullet \to \Gamma(2)^\bullet \xrightarrow{\quad +1 \quad}$$
+
+Focusing our attention on degrees 3 and 4, we get the sequence:
+
+$$0 \to H^2(X, \mathbb{R}(2)/\mathbb{Z}(2)) \to \mathbf{H}^3(X, \Gamma(2)^\bullet) \to E^2(X)(1) \cap A^{1,1}(X) \to H_{\mathcal{D}_{h.h.}}^4(X, 2) \to \mathbf{H}^4(X, \Gamma(2)^\bullet) \to 0,$$
+
+where we have used lemma 3.2.4. Moreover, the exact sequence from the proof of lemma 4.4.3 relating $\tilde{\Gamma}(2)^\bullet$ to $\Gamma(2)^\bullet$ yields the following completion of (4.15):
+
+$$0 \to H^1(X, \mathbb{R}(2)/\mathbb{Z}(2)) \to \mathrm{Pic}(X, \nabla, h) \to E^1(X)(1) \to \mathbf{H}^3(X, \Gamma(2)^\bullet) \to \mathbf{H}^3(X, \tilde{\Gamma}(2)^\bullet) \to 0$$
+
+and
+
+$$\mathbf{H}^4(X, \Gamma(2)^\bullet) \stackrel{\sim}{\longrightarrow} \mathbf{H}^4(X, \tilde{\Gamma}(2)^\bullet),$$
+
+where we have used that $\mathcal{E}_X^1(1)$ is soft. In summary we have:
+
+**Proposition 5.7.3.**
+
+1. The class of a $\mathcal{O}_X^\times$-gerbe supporting both types of connective structures can be lifted to a class of compatible connective structures on a (possibly equivalent) gerbe.
+
+2. A $\mathcal{O}_X^\times$-gerbe with compatible connective structures is flat if the (trivial) (1, 1)-curving is zero (cf. sect. 5.3, remarks 5.3.6 and 5.3.7.)
+
+3. A $\mathcal{O}_X^\times$-2-gerbe supporting both types of connective structures is equivalent to a 2-gerbe with compatible connective structures. Its class can be lifted to $H_{\mathcal{D}_{h.h.}}^4(X, 2)$.
+
+### 5.7.2 Comparing $(f, L]$ and $(L, L']$ with their hermitian variants
+
+The higher symbols $(f, L]$ and $(f, L]_{h.h.}$ have the same underlying gerbe, and similarly $(L, L']$ and $(L, L']_{h.h.}$ determine the same 2-gerbe. Let us denote them, respectively, by $\{f, L\}$ and $\{L, L'\}$. By construction, they determine classes in $\mathbf{H}^3(X, \tilde{\Gamma}(2)^\bullet)$ and $\mathbf{H}^4(X, \tilde{\Gamma}(2)^\bullet)$, respectively. The proposition specializes to this case as follows:
+
+**Corollary 5.7.4.** The connective structures $\mathcal{Co}(\cdot)^{an}$ and $\mathcal{Co}(\cdot)^h$ on $\{f, L\}$ are compatible (up to $\mathcal{E}_X^1(1)$-torsor automorphism).
+
+The analytic and hermitian connective structures on the 2-gerbe $\{L, L'\}$ are compatible.
+
+*Proof.* The statement follows at once from the calculations preceding the proposition. $\square$
+
+*Remark 5.7.5.* As an alternative proof of the corollary, note that a calculation analogous to that of the proof of proposition 4.4.4 from the cocycle representations (5.1) and (5.3), yields the 1-cocycle $r_2(f, g_{ij})$ with values in $\mathcal{E}_X^1(1)$, where $g_{ij}$ are the transition functions of $L$. This cocycle represents the zero class (softness of $\mathcal{E}_X^1(1)$), therefore $r_2(f, g_{ij}) = \eta_j - \eta_i$, and this choice is determined up to a global section of $\mathcal{E}_X^1(1)$.
+
+Similarly, in the case of $\{L, L'\}$ we get the 2-cocycle $r_2(g_{ij}, g'_{jk})$ which again represents the zero class.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_25.md b/samples/texts/7707372/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..4209181b86f572ae10c5678eecfe699ecafedb0a
--- /dev/null
+++ b/samples/texts/7707372/page_25.md
@@ -0,0 +1,27 @@
+## 6 Concluding remarks
+
+In this paper we have put forward a definition for the concept of hermitian structure, and associated compatible connective structure, for gerbes and 2-gerbes with band $O_X^\times$. We have presented classification results in terms of low degree hermitian holomorphic Deligne cohomology groups. Notable examples are provided by higher versions of the classical notion of tame symbol associated to two invertible functions. Indeed, our second main result, that there exists a modified version of the cup product in low degree Deligne cohomology taking values in the first hermitian holomorphic Deligne complex, naturally provides the symbols $(f, L]$ and $(L, L']$ with hermitian structures according to our definition.
+
+Two questions naturally arise. Since $(f, L]$ and $(L, L']$ also carry an analytic connective structure, we may ask to what degree the latter and the hermitian one are compatible. Remark 5.3.5 prompts a second obvious question regarding the relation between our classification theorems 5.3.3 and 5.5.5 and others', notably Brylinski's ([11, Proposition 6.9 (1)]).
+
+We have analyzed the compatibility in cohomological terms, first for line bundles (in the sense of $O_X^\times$-torsors) and then for gerbes and 2-gerbes with band $O_X^\times$, with somewhat surprising results. Whereas the compatibility may be regarded as exceptional for a line bundle—and it implies its flatness—it is not so for gerbes (or 2-gerbes). Thus flatness is not a necessary condition. In the specific case of the tame symbols and their generalizations, we have found that while the compatibility of $(f, g]$ and $(f, g]_{h.h.}$ (that is, of their respective connections) may in general be obstructed, $(f, L]$ and $(f, L]_{h.h.}$ can always be made compatible, and $(L, L']$ and $(L, L']_{h.h.}$ are automatically so.
+
+As for the relation with other notions of “hermitian gerbe” with “hermitian connective structure” (or 2-gerbe) there appear to be subtle differences in the definitions which we can trace to what aspect of line bundles with connection we decide to generalize. Our approach has been to copy the concept of *metrized analytic* (or *algebraic*) *line bundle* familiar from Arakelov geometry (cf. ref. [21]). On the other hand, one could describe a metrized $O_X^\times$-line bundle by means of the $\mathbb{T}$-reduction of its associated smooth line bundle plus a unitary connection. Whereas these two approaches are equivalent in the case of line bundles, they seem to diverge as soon as we move on to gerbes. (And possibly matters worsen in the case of 2-gerbes.) This may also serve to explain the lack of uniqueness found by Hitchin's student D. Chatterjee in his thesis. Although that school's approach to gerbes lacks the categorical input (in fact for them a gerbe is just the “torsor cocycle” in the sense of [7]), the definition of hermitian gerbe is along Brylinski's lines.
+
+Another difference is the following. Our cohomological characterization via the group $H_{\mathcal{D}_{h.h.}}^k(X, 1) \simeq \mathbb{H}^k(X, D(1)_{h.h.}^\bullet)$, $k=3,4$, involves forms of degree two, which points to a natural notion of curving associated with the structures we have defined (cf. remarks 5.3.6 and 5.3.7). This is obviously absent in the truncated group in remark 5.3.5. The cohomological analysis of sect. 5.7, where the group $H_{\mathcal{D}_{h.h.}}^4(X, 2)$ appears, suggests that curvings can be a very nuanced structure; however, dealing with them in detail falls outside the scope of the present work.
+
+We hope to further elucidate matters in the future in another publication.
+
+# A Remarks on Hodge-Tate structures
+
+The relation between the “imaginary part” map introduced in sect. 4.3, together with the product $\mathbb{Z}(1)_{\mathcal{D}}^\bullet \otimes \mathbb{Z}(1)_{\mathcal{D}}^\bullet \to 2\pi\sqrt{-1} \otimes D(1)_{h.h.}^\bullet$, and the cup product $\mathbb{Z}(1)_{\mathcal{D}}^\bullet \otimes \mathbb{Z}(1)_{\mathcal{D}}^\bullet \to \mathbb{Z}(2)_{\mathcal{D}}^\bullet$ giving rise to the tame symbol becomes more transparent from the point of view of Hodge-Tate structures.
+
+## A.1 A Mixed Hodge Structure
+
+Let us briefly recall the following well known MHS on $\mathbb{C}^3$, see [13, 4]. Consider, as before,
+
+$$ (A.1) \qquad M^{(2)} = \begin{pmatrix} 1 \\ x & 1 \\ z & y & 1 \end{pmatrix} $$
+
+with complex entries $x, y, z$. Consider also its canonical version
+
+$$ (A.2) \qquad A^{(2)} = \begin{pmatrix} 1 \\ x & 2\pi\sqrt{-1} \\ z & 2\pi\sqrt{-1}\,y & (2\pi\sqrt{-1})^2 \end{pmatrix}. $$
\ No newline at end of file
diff --git a/samples/texts/7707372/page_26.md b/samples/texts/7707372/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..96f3f1968ac9495dc264ccdb5784f13ed1ff8d16
--- /dev/null
+++ b/samples/texts/7707372/page_26.md
@@ -0,0 +1,50 @@
+The MHS $\mathcal{M}^{(2)}$ corresponding to $M^{(2)}$, or more precisely $A^{(2)}$, comprises the following data. The integer lattice is the $\mathbb{Z}$ span of the columns of $A^{(2)}$, and similarly for $\mathbb{Q}$ and $\mathbb{R}$. Let $v_0, v_1, v_2$ denote the columns of $A^{(2)}$ starting from the left. The weight spaces are $W_{-2k}\mathcal{M}^{(2)} = \text{span}\langle v_k, \dots, v_2 \rangle$ (over the appropriate ring), and the Hodge filtration is given by $F^{-k}\mathcal{M}^{(2)}(\mathbb{C}) = \mathbb{C}\langle e_0, \dots, e_k \rangle$, where the $e_i$'s are the standard basis vectors in $\mathbb{C}^3$. The graded quotients $\text{Gr}_{-2k}^W\mathcal{M}^{(2)}$ are the Tate structures $\mathbb{Z}(0), \mathbb{Z}(1)$, and $\mathbb{Z}(2)$. A change of the generators $v_i$ preserving the structure clearly amounts to a change of $A^{(2)}$ by right multiplication by a lower unipotent matrix over $\mathbb{Z}$ (or $\mathbb{Q}$ or $\mathbb{R}$). This is the same as changing $M^{(2)}$ by a matrix in $H_\mathbb{Z}$ (or the appropriate ring thereof) as in sect. 4.2.³
+
+The real structure underlying $\mathcal{M}^{(2)}$ is linked to the hermitian structure on the bundle $H_{\mathbb{C}}/H_{\mathbb{Z}}$ as presented in sect. 4.3.1. In [4] the image of $A^{(2)}$ in $GL_3(\mathbb{C})/GL_3(\mathbb{R})$ is obtained by computing the matrix
+
+$$B \stackrel{\text{def}}{=} A\bar{A}^{-1} \begin{pmatrix} 1 & & \\ & -1 & \\ & & 1 \end{pmatrix},$$
+
+(we have dropped the superscript (2) for ease of notation). The logarithm is:
+
+$$\frac{1}{2} \log B = \begin{pmatrix} 0 & & \\ \pi_0(x) & 0 & \\ \pi_1(z) - \pi_1(x)\pi_0(y) & \pi_0(y) & 0 \end{pmatrix}.$$
+
+We immediately recognize the expression of the hermitian form as given in sect. 4.3.1.
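This identification is easy to test numerically. The sketch below is our own check, not part of the paper; it assumes Deligne's convention $\pi_p(w) = \tfrac{1}{2}\big(w + (-1)^p \bar{w}\big)$ and the sign normalization $B = A\bar{A}^{-1}\operatorname{diag}(1,-1,1)$, which makes $B$ unipotent so that the logarithm series terminates:

```python
import numpy as np

tau = 2j * np.pi  # 2*pi*sqrt(-1)
x, y, z = 0.3 + 0.7j, -1.2 + 0.4j, 0.9 - 0.5j  # arbitrary sample entries

# Period matrix A^(2) of (A.2)
A = np.array([[1, 0, 0],
              [x, tau, 0],
              [z, tau * y, tau**2]])

# B = A * conj(A)^{-1} * diag(1,-1,1); the sign flip makes B unipotent
B = A @ np.linalg.inv(A.conj()) @ np.diag([1.0, -1.0, 1.0])

# For unipotent B = I + N (N strictly lower triangular, N^3 = 0)
# the logarithm is the finite sum log B = N - N^2/2
N = B - np.eye(3)
half_log_B = (N - N @ N / 2) / 2

pi0 = lambda w: (w + np.conj(w)) / 2  # pi_0 = real part
pi1 = lambda w: (w - np.conj(w)) / 2  # pi_1 = i * (imaginary part)

assert np.allclose(half_log_B, np.tril(half_log_B, -1))  # strictly lower triangular
assert np.allclose(half_log_B[1, 0], pi0(x))
assert np.allclose(half_log_B[2, 1], pi0(y))
assert np.allclose(half_log_B[2, 0], pi1(z) - pi1(x) * pi0(y))
```

Since $N$ is strictly lower triangular, $N^3 = 0$ and no higher terms of the logarithm series contribute.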
+
+## A.2 The big period
+
+In ref. [18] Goncharov defines a tensor
+
+$$P(\mathcal{M}) \in \mathbb{C} \otimes_{\mathbb{Q}} \mathbb{C}$$
+
+associated to a MHS (technically, a framed one) $\mathcal{M}$. For the MHS defined by the period matrix (A.1) it is computed as follows. Let $f_0, f_1, f_2$ be the dual basis to $v_0, v_1, v_2$. Then, according to ref. [18],
+
+$$P(\mathcal{M}^{(2)}) = \sum_k \langle f_2, M^{(2)} v_k \rangle \otimes_{\mathbb{Q}} \langle f_k, (M^{(2)})^{-1} v_0 \rangle.$$
+
+Performing the calculation we find:
+
+$$
+(A.3) \qquad P(\mathcal{M}^{(2)}) = \frac{z}{(2\pi\sqrt{-1})^2} \otimes 1 - 1 \otimes \frac{z}{(2\pi\sqrt{-1})^2} \\
+\phantom{(A.3) \qquad P(\mathcal{M}^{(2)}) = } + 1 \otimes \frac{xy}{(2\pi\sqrt{-1})^2} - \frac{y}{2\pi\sqrt{-1}} \otimes \frac{x}{2\pi\sqrt{-1}}
+$$
+
+Clearly, $P(\mathcal{M}^{(2)})$ is invariant under the action (4.5) (over $\mathbb{Q}$). Moreover, $P(\mathcal{M}^{(2)})$ belongs to the kernel $\mathcal{I}$ of the multiplication map $\mathbb{C} \otimes_{\mathbb{Q}} \mathbb{C} \to \mathbb{C}$. As a consequence, we have:
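Because (A.3) is an explicit finite sum of elementary tensors, the claim that $P(\mathcal{M}^{(2)})$ lies in $\mathcal{I}$ can be verified mechanically. The following is a small symbolic sketch of ours (not from the paper), encoding an elementary tensor $a \otimes b$ as the pair $(a, b)$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
tau = 2 * sp.pi * sp.I  # 2*pi*sqrt(-1)

# P(M^(2)) from (A.3) as a formal sum of elementary tensors a (x) b
P = [(z / tau**2, 1),
     (-1, z / tau**2),
     (1, x * y / tau**2),
     (-y / tau, x / tau)]

# Multiplication map m: C (x)_Q C -> C, a (x) b -> a*b
m_P = sp.simplify(sum(a * b for a, b in P))
assert m_P == 0  # P(M^(2)) lies in the kernel I of m
```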
+
+**Proposition A.2.1.** *The “connection form” (4.6) and the (logarithm of the) hermitian fiber metric on the Heisenberg bundle correspond to the images of P($\mathcal{M}^{(2)}$) under the two projections*
+
+$$\mathcal{I} \to \mathcal{I}/\mathcal{I}^2 = \Omega_{\mathbb{C}/\mathbb{Q}}^1$$
+
+and
+
+$$\mathcal{I} \subset \mathbb{C} \otimes_{\mathbb{Q}} \mathbb{C} \to \mathbb{R}(1),$$
+
+respectively.
+
+*Proof.* The images under the two projections are, respectively, equal to
+
+$$-d\left(\frac{z}{(2\pi\sqrt{-1})^2}\right) + \frac{x}{2\pi\sqrt{-1}} d\left(\frac{y}{2\pi\sqrt{-1}}\right)$$
+
+and
+
+$$\frac{1}{(2\pi\sqrt{-1})^2} (\pi_1(z) - \pi_1(x)\pi_0(y)).$$
+
+³These data correspond to the case $N=2$ of a MHS on $\mathbb{C}^N$ defined for any integer $N$, cf. [4]
\ No newline at end of file
diff --git a/samples/texts/7707372/page_27.md b/samples/texts/7707372/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..30ff24e77bc39b60b2101e458b8935775eb3926e
--- /dev/null
+++ b/samples/texts/7707372/page_27.md
@@ -0,0 +1,63 @@
+### A.3 The extension class
+
+The big period can be obtained as a symmetrization of an extension class of MHS. Indeed, the weight $-2$ subspace $W_{-2}\mathcal{M}^{(2)} \cong \mathcal{M}^{(1)} \otimes 2\pi\sqrt{-1} \equiv \mathcal{M}^{(1)}(1)$ is itself a MHS (twisted by $2\pi\sqrt{-1}$) defined by
+
+$$ (A.4) \qquad A^{(1)} = \begin{pmatrix} 1 \\ y & 2\pi\sqrt{-1} \end{pmatrix}. $$
+
+(The data are as for $\mathcal{M}^{(2)}$, replacing 2 by 1.) We thus have an extension of MHS:
+
+$$ (A.5) \qquad 0 \to \mathcal{M}^{(1)}(1) \to \mathcal{M}^{(2)} \to \mathbb{Z}(0) \to 0. $$
+
+Following the procedure explained in ref. [6], it is seen that the class of the extension (A.5) belongs to $\mathcal{M}_{\mathbb{C}}^{(1)}/\mathcal{M}_{\mathbb{Q}}^{(1)}$, and it is given by the vector
+
+$$ (A.6) \qquad e = - \frac{x}{2\pi\sqrt{-1}} v_1 - \frac{z - xy}{(2\pi\sqrt{-1})^2} v_2 $$
+
+taken modulo $\mathcal{M}_{\mathbb{Q}}^{(1)}$. This computation can be refined by noticing ([6]) that $\mathcal{M}^{(1)}$ is itself an extension,
+
+$$ 0 \to \mathbb{Z}(1) \to \mathcal{M}^{(1)} \to \mathbb{Z}(0) \to 0 $$
+
+mapping (over $\mathbb{Q}$) to the “universal extension” $\mathcal{H}^{(1)}$:
+
+$$ (A.7) \qquad 0 \to \mathbb{Q}(1) \to \mathbb{C} \to \mathbb{C}^\times \otimes \mathbb{Q} \to 0 $$
+
+obtained by tensoring the standard exponential sequence by $\mathbb{Q}$. Over the complex numbers, we have
+
+$$ 0 \to \mathbb{C} \to \mathbb{C} \otimes_{\mathbb{Q}} \mathbb{C} \to \mathbb{C}^\times \otimes_{\mathbb{Z}} \mathbb{C} \cong \mathbb{C}/\mathbb{Q}(1) \otimes_{\mathbb{Q}} \mathbb{C} \to 0. $$
+
+Here we have $\mathcal{H}_{\mathbb{Q}}^{(1)} = \mathbb{C}$ and $\mathcal{H}_{\mathbb{C}}^{(1)} = \mathbb{C} \otimes_{\mathbb{Q}} \mathbb{C}$. According to the same principle the class of the extension (A.7) lives in
+
+$$ (A.8) \qquad \mathcal{H}_{\mathbb{C}}^{(1)} / \mathcal{H}_{\mathbb{Q}}^{(1)} \cong \mathbb{C} \otimes_{\mathbb{Q}} \mathbb{C} / \mathbb{C} \cong \mathbb{C} \otimes_{\mathbb{Z}} \mathbb{C}^\times. $$
+
+The image of (A.6) in $\mathbb{C} \otimes_{\mathbb{Q}} \mathbb{C}$ is given by
+
+$$ (A.9) \qquad \tilde{e} = -y \otimes x - 2\pi\sqrt{-1} \otimes \frac{z - xy}{2\pi\sqrt{-1}}. $$
+
+Taking (A.9) modulo $\mathcal{H}_{\mathbb{Q}}^{(1)} \cong \mathbb{C}$ we finally have
+
+$$ (A.10) \qquad (\mathrm{Id} \otimes \exp)(\tilde{e}) = y \otimes e^{-x} + 2\pi\sqrt{-1} \otimes e^{-(z-xy)/2\pi\sqrt{-1}}. $$
+
+This is the (image of) the class of the extension (A.5) as computed in ref. [6]. It is easily seen that the element (A.10) is invariant under the transformations (4.5).
+
+**Lemma A.3.1.** There is a unique well defined lift of the class (A.10) to $F^0\mathcal{H}_{\mathbb{C}}^{(1)} = \ker(m: \mathbb{C} \otimes_{\mathbb{Q}} \mathbb{C} \to \mathbb{C})$. This can be obtained by adding to (A.9) a (necessarily unique, see ref. [6]) element from $\mathcal{H}_{\mathbb{Q}}^{(1)} \cong \mathbb{C}$. The lift is
+
+$$ 2\pi\sqrt{-1} \otimes 2\pi\sqrt{-1} \cdot P(\mathcal{M}^{(2)}). $$
+
+*Proof.* We can identify $\mathcal{H}_{\mathbb{Q}}^{(1)} \cong \mathbb{C}$ inside $\mathcal{H}_{\mathbb{C}}^{(1)}$ via $a \mapsto a \otimes 2\pi\sqrt{-1}$. Thus add any such element to $\tilde{e}$ and consider the image under the multiplication map:
+
+$$ m(\tilde{e} + a \otimes 2\pi\sqrt{-1}) = -z + 2\pi\sqrt{-1}a. $$
+
+It is equal to zero iff $a = z/2\pi\sqrt{-1}$, hence
+
+$$
+\begin{aligned}
+\tilde{e} &= \tilde{e} + \frac{z}{2\pi\sqrt{-1}} \otimes 2\pi\sqrt{-1} \\
+&= -y \otimes x + 2\pi\sqrt{-1} \otimes \frac{xy}{2\pi\sqrt{-1}} + \frac{z}{2\pi\sqrt{-1}} \otimes 2\pi\sqrt{-1} - 2\pi\sqrt{-1} \otimes \frac{z}{2\pi\sqrt{-1}}
+\end{aligned}
+ $$
+
+is the required element. $\square$
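The arithmetic in this proof can likewise be checked symbolically; the sketch below is ours (not from the paper), with elementary tensors again encoded as pairs and $m$ the multiplication map:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
tau = 2 * sp.pi * sp.I  # 2*pi*sqrt(-1)

# e-tilde from (A.9), as a formal sum of elementary tensors a (x) b
e_tilde = [(-y, x), (-tau, (z - x * y) / tau)]

# Multiplication map m: a (x) b -> a*b
m = lambda tensor: sp.expand(sum(a * b for a, b in tensor))

assert m(e_tilde) == -z  # forces a = z/(2*pi*sqrt(-1))

# Corrected element e-tilde + (z/tau) (x) tau, as in the proof
e_lift = e_tilde + [(z / tau, tau)]
assert m(e_lift) == 0  # the lift lies in ker m = F^0
```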
+
diff --git a/samples/texts/7707372/page_28.md b/samples/texts/7707372/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..84ef2d54d6a017d20af2a407036b6e208aff8251
--- /dev/null
+++ b/samples/texts/7707372/page_28.md
@@ -0,0 +1,43 @@
+## References
+
+[1] Ettore Aldrovandi, *On hermitian-holomorphic classes related to uniformization, the dilogarithm and the liouville action*, arXiv:math.CV/0211055, To appear in Commun. Math. Phys.
+
+[2] A. A. Beilinson, *Higher regulators and values of L-functions*, Current problems in mathematics, Vol. 24, Itogi Nauki i Tekhniki, Akad. Nauk SSSR Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1984, pp. 181–238.
+
+[3] ——, *Notes on absolute Hodge cohomology*, Applications of algebraic K-theory to algebraic geometry and number theory, Part I, II (Boulder, Colo., 1983), Amer. Math. Soc., Providence, RI, 1986, pp. 35–68.
+
+[4] A. A. Beilinson and P. Deligne, *Interprétation motivique de la conjecture de Zagier reliant polylogarithmes et régulateurs*, Motives (Seattle, WA, 1991), Proc. Sympos. Pure Math., vol. 55, Amer. Math. Soc., Providence, RI, 1994, pp. 97–121.
+
+[5] Spencer Bloch, *The dilogarithm and extensions of Lie algebras*, Algebraic K-theory, Evanston 1980 (Proc. Conf., Northwestern Univ., Evanston, Ill., 1980), Springer, Berlin, 1981, pp. 1–23.
+
+[6] ——, *Function theory of polylogarithms*, Structural properties of polylogarithms, Math. Surveys Monogr., vol. 37, Amer. Math. Soc., Providence, RI, 1991, pp. 275–285.
+
+[7] Lawrence Breen, *On the classification of 2-gerbes and 2-stacks*, Astérisque (1994), no. 225, 160.
+
+[8] J.-L. Brylinski and D. A. McLaughlin, *The geometry of degree-four characteristic classes and of line bundles on loop spaces. I*, Duke Math. J. **75** (1994), no. 3, 603–638.
+
+[9] ——, *The geometry of degree-4 characteristic classes and of line bundles on loop spaces. II*, Duke Math. J. **83** (1996), no. 1, 105–139.
+
+[10] Jean-Luc Brylinski, *Loop spaces, characteristic classes and geometric quantization*, Birkhäuser Boston Inc., Boston, MA, 1993.
+
+[11] ——, *Geometric construction of Quillen line bundles*, Advances in geometry, Birkhäuser Boston, Boston, MA, 1999, pp. 107–146.
+
+[12] Jean-Luc Brylinski and Dennis McLaughlin, *Holomorphic quantization and unitary representations of the Teichmüller group*, Lie theory and geometry, Progr. Math., vol. 123, Birkhäuser Boston, Boston, MA, 1994, pp. 21–64.
+
+[13] P. Deligne, *Le symbole modéré*, Inst. Hautes Études Sci. Publ. Math. (1991), no. 73, 147–181.
+
+[14] Hélène Esnault, *Characteristic classes of flat bundles*, Topology **27** (1988), no. 3, 323–352.
+
+[15] Hélène Esnault and Eckart Viehweg, *Deligne-Beilinson cohomology*, Beilinson's conjectures on special values of L-functions, Academic Press, Boston, MA, 1988, pp. 43–91.
+
+[16] Jean Giraud, *Cohomologie non abélienne*, Springer-Verlag, Berlin, 1971, Die Grundlehren der mathematischen Wissenschaften, Band 179.
+
+[17] A. B. Goncharov, *Explicit regulator maps on polylogarithmic motivic complexes*, Motives, polylogarithms and Hodge theory, Part I (Irvine, CA, 1998), Int. Press Lect. Ser., vol. 3, Int. Press, Somerville, MA, 2002, pp. 245–276.
+
+[18] Alexander Goncharov, *Volumes of hyperbolic manifolds and mixed Tate motives*, J. Amer. Math. Soc. **12** (1999), no. 2, 569–618, arXiv:alg-geom/9601021.
+
+[19] Phillip Griffiths and Joseph Harris, *Principles of algebraic geometry*, Wiley-Interscience [John Wiley & Sons], New York, 1978, Pure and Applied Mathematics.
+
+[20] Richard M. Hain, *Classical polylogarithms*, Motives (Seattle, WA, 1991), Proc. Sympos. Pure Math., vol. 55, Amer. Math. Soc., Providence, RI, 1994, pp. 3–42.
+
+[21] Serge Lang, *Introduction to Arakelov theory*, Springer-Verlag, New York, 1988.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_29.md b/samples/texts/7707372/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..7d85ddafc3f010d751f0fef33652a52af52cc599
--- /dev/null
+++ b/samples/texts/7707372/page_29.md
@@ -0,0 +1 @@
+[22] Dinakar Ramakrishnan, *A regulator for curves via the Heisenberg group*, Bull. Amer. Math. Soc. (N.S.) **5** (1981), no. 2, 191–195.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_3.md b/samples/texts/7707372/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..551a9ae93fabada3b15fe58625bdc9b7f9f0d862
--- /dev/null
+++ b/samples/texts/7707372/page_3.md
@@ -0,0 +1,29 @@
+in his work on tame symbols, cf. [13]: If $f$ and $g$ are two invertible functions on $X$, namely two elements of $\mathcal{O}_X^\times$, their cup product corresponds to a $\mathcal{O}_X^\times$-torsor, denoted $(f,g]$, equipped with an analytic connection. Furthermore, if $X$ is a Riemann surface, the complex $(\mathcal{O}_X^\times \xrightarrow{d\log} \Omega_X^1)$ is quasi-isomorphic to $\mathbb{C}^\times$ and the product is interpreted as the *holonomy* of the connection. For $X$ equal to a punctured disk $D_p$ centered at $p$, if $f$ and $g$ are holomorphic on $D_p$, meromorphic at $p$, the holonomy of $(f,g]$ computes the *tame symbol*
+
+$$ (f,g)_p = (-)^{v(f)v(g)} (f^{v(g)}/g^{v(f)})(p), $$
+
+where $v(f)$ is the valuation of $f$ at $p$, cf. [2, 13, 20]. This justifies the use of the name *tame symbol* for $(f,g]$.
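The valuation formula is easy to experiment with. The following sketch is our own illustration (the helper `valuation` is hypothetical, not taken from any reference); it computes $(f,g)_p$ at $p=0$ directly from the definition:

```python
import sympy as sp

t = sp.symbols('t')

def valuation(f, p=0):
    """Order of vanishing of f at t = p (negative for a pole)."""
    v = 0
    while sp.limit(f / (t - p)**v, t, p) == 0:
        v += 1
    while sp.limit(f / (t - p)**v, t, p) in (sp.oo, -sp.oo, sp.zoo):
        v -= 1
    return v

def tame_symbol(f, g, p=0):
    """(f,g)_p = (-1)^(v(f)v(g)) * (f^v(g)/g^v(f))(p)."""
    vf, vg = valuation(f, p), valuation(g, p)
    return sp.Integer(-1)**(vf * vg) * sp.limit(f**vg / g**vf, t, p)

assert tame_symbol(t**2, 3 * t) == sp.Rational(1, 9)
assert tame_symbol(1 / t, t) == -1
```

For instance $f = t^2$, $g = 3t$ gives $(-1)^{2 \cdot 1}\, t^2/(3t)^2 \to 1/9$ as $t \to 0$, matching the first assertion.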
+
+A particularly pleasant property is that when $f$ and $1-f$ are both invertible a calculation [13] using the classical Euler dilogarithm $\mathrm{Li}_2$ shows that $(f, 1-f]$ is isomorphic to the trivial torsor equipped with the trivial connection $d$, namely the unit element in the group $\mathbf{H}^1(X, \mathcal{O}_X^\times \xrightarrow{d\log} \Omega_X^1)$. From this one also builds an interpretation of the symbol associated to $f$ and $g$ in terms of Mixed Hodge Structures [13].
+
+In this particular example there appear degree 1 and 2 Deligne cohomology groups: specifically, use is made of the fact that invertible functions determine elements in the group $H_D^1(X, \mathbb{Z}(1)) \cong \mathcal{O}_X^\times(X)$, and, given $f$ and $g$, the class of the torsor with connection $(f,g]$ is an element of $H_D^2(X, \mathbb{Z}(2)) \cong \mathbf{H}^1(X, \mathcal{O}_X^\times \xrightarrow{d\log} \Omega_X^1)$. It is therefore natural to investigate the geometric objects corresponding to similar cup products of higher degree. The case of $(f,L]$, where $f$ is again an invertible function and $L$ is an $\mathcal{O}_X^\times$-torsor, so it determines a class in $H_D^2(X, \mathbb{Z}(1)) \cong \mathbf{H}^1(X, \mathcal{O}_X^\times)$, was already considered in ref. [13], where it is interpreted in terms of a gerbe $\mathcal{G}$ over $X$.
+
+This idea has been further pursued by Brylinski-McLaughlin, [8, 9]. In their study of degree 4 characteristic classes they considered the symbols $(f,L] \in H_D^3(X, \mathbb{Z}(2))$ and, for a pair of $\mathcal{O}_X^\times$-torsors, $(L,L'] \in H_D^4(X, \mathbb{Z}(2))$. The corresponding geometric objects are identified with a gerbe (resp. a 2-gerbe) both equipped with the appropriate analog of a connection. Furthermore, the obvious map $\mathbb{Z}(2)^\bullet_D \to \mathbb{Z}(1)^\bullet_D$ induces a corresponding map $H_D^k(X, \mathbb{Z}(2)) \to H_D^k(X, \mathbb{Z}(1))$ which simply forgets the connection. Therefore elements in the groups $H_D^k(X, \mathbb{Z}(1))$, for $k=3,4$ correspond to equivalence classes of (2-)gerbes bound by $\mathcal{O}_X^\times$, cf. [7, 8, 9]. Thus in the end several Deligne cohomology groups have a concrete interpretation in terms of geometric data.
+
+Hermitian-Holomorphic Deligne cohomology, as defined by Brylinski, cf. [11], is an enhanced version of Deligne cohomology. For all positive integers $l$ Brylinski introduces certain complexes $C(l)^\bullet$, and defines the Hermitian-Holomorphic Deligne cohomology groups as the sheaf hypercohomology groups: $H_{\mathcal{D}_{h.h.}}^k(X,l) = \mathbf{H}^k(X,C(l)^\bullet)$. The complex $C(l)^\bullet$ comes with a map $C(l)^\bullet \to \mathbb{Z}(l)^\bullet_D$, thus there is an obvious map $H_{\mathcal{D}_{h.h.}}^k(X,l) \to H_D^k(X,\mathbb{Z}(l))$ forgetting the extra structure.
+
+A primary example is provided by Deligne’s observation mentioned before, cf. [14], that
+
+$$ (1.3) \qquad \widehat{\text{Pic}} X \cong \mathbf{H}^2(X, \mathbb{Z}(1)_X \to \mathcal{O}_X \to \underline{\mathcal{E}}_X^0), $$
+
+where $\widehat{\text{Pic}} X$ is the set of isomorphism classes of $\mathcal{O}_X^\times$-torsors with hermitian metric, and $\underline{\mathcal{E}}_X^0$ is the sheaf of smooth real-valued functions on $X$. The complex in (1.3) is quasi-isomorphic to $C(1)^\bullet$, therefore
+
+$$ \widehat{\text{Pic}} X \cong H_{\mathcal{D}_{h,h.}}^2 (X, 1). $$
+
+In fact, both complexes are quasi-isomorphic to the complex $(\mathcal{O}_X^\times \oplus \mathbb{T}_X \to \underline{\mathbb{C}}_X^\times)[-1]$, [9, 11], which encodes the reduction of the torsor structure from $\mathcal{O}_X^\times$ to $\mathbb{T}_X$ afforded by the hermitian metric.
+
+Concerning higher degrees, Brylinski-McLaughlin [9, 12] gave a geometric interpretation for some of the groups $H_{\mathcal{D}_{h.h.}}^k(X,l)$, $k=3,4$ and $l=1,2$, in terms of classes of gerbes and 2-gerbes bound by $\mathbb{T}_X$ and equipped with a concept of connection valued in an appropriate Hodge filtration of the de Rham complex of $X$.
+
+## 1.2 Statement of the results
+
+In this work we take on the same question of a geometric interpretation for some Hermitian-Holomorphic Deligne cohomology groups from a holomorphic view-point which, we believe, is complementary to that of Brylinski-McLaughlin. We define a hermitian structure on an $\mathcal{O}_X^\times$-gerbe $\mathcal{G}$ as the assignment of an $\underline{\mathcal{E}}_{U,+}^0$-torsor (the “+” denotes positive functions) to any object $P$ of $\mathcal{G}_U$, subject to several conditions spelled out in Definition 5.2.1. We prove that classes of gerbes with hermitian structures in this sense correspond to elements of $H_{\mathcal{D}_{h.h.}}^3(X,1) \cong \mathbf{H}^3(X,\mathbb{Z}(1)_X \to \mathcal{O}_X \to \underline{\mathcal{E}}_X^0)$, in complete analogy with (1.3). Moreover we can define a type (1,0) connective
\ No newline at end of file
diff --git a/samples/texts/7707372/page_4.md b/samples/texts/7707372/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..82ef47d1ec187989891dcd153679a7a0a44ac99a
--- /dev/null
+++ b/samples/texts/7707372/page_4.md
@@ -0,0 +1,29 @@
+structure on $\mathcal{G}$ by requiring that to any object $P$ of $\mathcal{G}_U$ be assigned an $F^1\underline{A}_U^1$-torsor, essentially repeating the steps in ref. [9]. (Here $\underline{A}_U^\bullet$ is the smooth $\mathbb{C}$-valued de Rham complex, and $F^1$ is the first Hodge filtration.) Then a notion of compatibility between the hermitian structure and the connective one is defined, and in fact we prove there is only one such type (1,0) connective structure compatible with a given hermitian structure, up to equivalence. This result is analogous to the corresponding statement for hermitian holomorphic line bundles, namely that there is a unique connection — the *canonical or Griffiths connection* — compatible with both structures.
+
+Similar results are available for 2-gerbes: we define a hermitian structure for an $\mathcal{O}_X^\times$-2-gerbe $\mathcal{G}$ as the assignment of an $\underline{\mathcal{E}}_{U,+}^0$-gerbe for each object $P$ of $\mathcal{G}_U$, subject to several conditions spelled out in Definition 5.5.1. Analogously to the simpler case of gerbes, we have a concept of type (1,0) connective structure compatible with the hermitian structure and a uniqueness result up to equivalence.
+
+A second line of results is more specific to the tame symbols we encountered before. Alongside the map of complexes
+
+$$ \mathbb{Z}(1)_\mathcal{D}^\bullet \otimes \mathbb{Z}(1)_\mathcal{D}^\bullet \to \mathbb{Z}(2)_\mathcal{D}^\bullet $$
+
+we define a companion map
+
+$$ (1.4) \qquad \mathbb{Z}(1)_\mathcal{D}^\bullet \otimes \mathbb{Z}(1)_\mathcal{D}^\bullet \to 2\pi\sqrt{-1} \otimes C(1)^\bullet $$
+
+so that it is possible to obtain a different cup product valued in Hermitian-Holomorphic Deligne cohomology:
+
+$$ H_{\mathcal{D}}^{i}(X, \mathbb{Z}(1)) \otimes H_{\mathcal{D}}^{j}(X, \mathbb{Z}(1)) \xrightarrow{\cup} 2\pi\sqrt{-1} \otimes H_{\mathcal{D}_{h.h.}}^{i+j}(X, 1). $$
+
+An immediate consequence is that for $f$ and $g$ invertible, and $L, L'$ line bundles, the torsor $(f,g]$ and the gerbe $(f,L]$ support natural hermitian structures of the type discussed above, in addition to the analytic connection (or connective) ones associated with the cup product in standard Deligne cohomology. The same conclusions are valid for the 2-gerbe $(L,L']$. It turns out that supporting both structures is an easy consequence of the commutativity of the following diagram:
+
+Indeed, forgetting either structure brings us back to the same underlying object.
+
+The map (1.4) has a rather natural definition from the point of view of Mixed Hodge Structures, whose role in the matter was mentioned in relation with the product (1.1), see [13]. Namely, there is a “universal” MHS $\mathcal{M}^{(2)}$ corresponding to an iterated extension of $\mathbb{Z}(0)$ by $\mathbb{Z}(1)$ by $\mathbb{Z}(2)$, where in this case $\mathbb{Z}(n)$ denotes a Hodge-Tate structure. To $\mathcal{M}^{(2)}$ we can associate a tensor — the “big period” — $P(\mathcal{M}^{(2)}) \in \mathbb{C}\otimes_\mathbb{Q}\mathbb{C}$, cf. [18]. The period is in fact a multiple of the extension class of $\mathcal{M}^{(2)}$, and it belongs to the kernel $\mathcal{I} = \ker(m: \mathbb{C}\otimes_\mathbb{Q}\mathbb{C} \to \mathbb{C})$ of the multiplication map. We find the map (1.4) corresponds to the image of $P(\mathcal{M}^{(2)})$ under the “imaginary part” projection $\mathbb{C}\otimes_\mathbb{Q}\mathbb{C} \to \mathbb{R}(1)$ given by $a \otimes b \mapsto \mathrm{Im}(a)\mathrm{Re}(b)$. On the other hand, the standard one (1.1) involves the projection onto the Kähler differentials $\mathcal{I} \to \mathcal{I}/\mathcal{I}^2$ given by $a \otimes b \mapsto a db$.
+
+Another consequence of the previous diagram is that $(f,g]$, $(f,L]$, and $(L,L']$ come equipped with two connection (or connective) structures. If the unitary connection on a line bundle $L$ is also analytic, then $L$ is flat. In the case of $(f,g]$ we find there is an obstruction to this type of compatibility. This can be cast in cohomological terms, which allows us to extend these considerations to $\mathcal{O}_X^\times$-gerbes and 2-gerbes. We find that the obstruction vanishes, so compatibility can always be achieved.
+
+## 1.3 Outline of the paper
+
+This work is organized as follows. In section 2 we make some preliminary observations about Deligne complexes and cohomology and collect a few needed facts. We recall the definition of Hermitian-Holomorphic Deligne cohomology and state some of its properties in section 3. Alongside Brylinski’s complex $C(l)^\bullet$, we use a complex quasi-isomorphic to it, denoted $D(l)^\bullet_{h.h.}$, which for a line bundle directly encodes the data defining the *canonical connection*.
+
+In section 4 we recall the definition of the tame symbol $(f,g]$ for two invertible functions and some of its properties. We define the modified product (1.4) and show that through it, the torsor associated to $(f,g]$ also
\ No newline at end of file
diff --git a/samples/texts/7707372/page_5.md b/samples/texts/7707372/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..c836d1905c321021d50f42e729fc18613990badd
--- /dev/null
+++ b/samples/texts/7707372/page_5.md
@@ -0,0 +1,35 @@
+comes equipped with a hermitian structure. As mentioned before, the product (1.4) and its relation with the standard one for Deligne complexes become clearer when analyzed in terms of Hodge Structures. In order to do this, we felt it necessary to recall a few elementary facts and calculations concerning Hodge-Tate structures that are certainly well-known to experts. For this reason, and also because this development lies somewhat apart from this work's main lines, we present this material in appendix A. This presentation relies in part on the Heisenberg group picture of the Deligne torsor, which we have recalled in section 4.2.
+
+Section 5 is the main part of this work. There we define the notion of hermitian structure (modeled after that of connective structure) and prove that equivalence classes of these are classified by the groups $H_{\mathcal{D}_{h.h.}}^k(X, 1)$. We then apply this classification to the hermitian structures and the product (1.4) for the higher versions of the tame symbols considered by Brylinski-McLaughlin.
+
+The interplay between the analytic connection (or connective) structures arising from standard Deligne cohomology and their hermitian counterparts defined here is analyzed in sections 4.4 and 5.7.
+
+## Acknowledgments
+
+Parts of the present work were written while visiting the Department of Mathematics, Aarhus University, Århus, Denmark; the International School for Advanced Studies (SISSA), Trieste, Italy; the Department of Mathematics, Instituto Superior Técnico, Lisbon, Portugal. It is a pleasure to thank all these institutions for hospitality, support, and for providing an excellent, friendly, and stimulating research environment. It is also a pleasure to thank the anonymous referee for raising important points and providing several stimulating comments leading to a much improved version of the paper.
+
+# 2 Preliminaries
+
+## 2.1 Notation and conventions
+
+If $z$ is a complex number, then $\pi_p(z) \stackrel{\text{def}}{=} \frac{1}{2}(z + (-1)^p \bar{z})$, and similarly for any other complex quantity, e.g. complex valued differential forms. For a subring $A$ of $\mathbb{R}$ and an integer $j$, $A(j) = (2\pi\sqrt{-1})^j A$ is the Tate twist of $A$. We identify $\mathbb{C}/\mathbb{Z}(j) \cong \mathbb{C}^\times$ via the exponential map $z \mapsto \exp(z/(2\pi\sqrt{-1})^{j-1})$, and $\mathbb{C}/\mathbb{R}(j) \cong \mathbb{R}(j-1)$.
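+
+For instance, unwinding the definition for $p = 0, 1$ (note that $\pi_p$ depends only on the parity of $p$):
+
+```latex
+\[
+\pi_0(z) = \tfrac{1}{2}(z + \bar z) = \operatorname{Re} z, \qquad
+\pi_1(z) = \tfrac{1}{2}(z - \bar z) = \sqrt{-1}\,\operatorname{Im} z,
+\]
+```
+
+so that $z = \pi_0(z) + \pi_1(z)$ realizes the decomposition $\mathbb{C} = \mathbb{R}(0) \oplus \mathbb{R}(1)$, and the isomorphism $\mathbb{C}/\mathbb{R}(j) \cong \mathbb{R}(j-1)$ is induced by $\pi_{j-1}$.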
+
+If $X$ is a complex manifold, $\mathcal{A}_X^\bullet$ and $\underline{\Omega}_X^\bullet$ denote the de Rham complexes of sheaves of smooth $\mathbb{C}$-valued and holomorphic forms, respectively. We denote by $\mathcal{E}_X^\bullet$ the de Rham complex of sheaves of real valued differential forms and by $\mathcal{E}_X^\bullet(j)$ the twist $\mathcal{E}_X^\bullet \otimes_\mathbb{R} \mathbb{R}(j)$. We set $\mathcal{O}_X \equiv \underline{\Omega}_X^0$ as usual. When needed, $\mathcal{A}_X^{p,q}$ will denote the sheaf of smooth $(p,q)$-forms. We use the standard decomposition $d = \partial + \bar{\partial}$ according to types. Furthermore, we introduce the differential operator $d^c = \partial - \bar{\partial}$ (contrary to the convention, we omit the factor $1/(4\pi\sqrt{-1})$). We have $2\partial\bar{\partial} = d^c d$. The operator $d^c$ is an imaginary one, and accordingly we have the rules
+
+$$d\pi_p(\omega) = \pi_p(d\omega), \quad d^c\pi_p(\omega) = \pi_{p+1}(d^c\omega)$$
+
+for any complex form $\omega$.
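+
+The identity $2\partial\bar{\partial} = d^c d$ and the stated behavior of $d^c$ are one-line computations from the type decomposition:
+
+```latex
+\[
+d^c d = (\partial - \bar{\partial})(\partial + \bar{\partial})
+      = \partial\bar{\partial} - \bar{\partial}\partial
+      = 2\,\partial\bar{\partial},
+\]
+```
+
+using $\partial^2 = \bar{\partial}^2 = 0$ and $\partial\bar{\partial} = -\bar{\partial}\partial$. Moreover, since conjugation exchanges $\partial$ and $\bar{\partial}$, we have $\overline{d^c\omega} = -\,d^c\bar{\omega}$; this is the sense in which $d^c$ is imaginary, and it accounts for the shift $\pi_p \mapsto \pi_{p+1}$ in the rule above.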
+
+An open cover of $X$ will be denoted by $\mathcal{U}_X$. If $\{U_i\}_{i \in I}$ is the corresponding collection of open sets, we write $U_{ij} = U_i \cap U_j$, $U_{ijk} = U_i \cap U_j \cap U_k$, and so on. More generally we can also have $\mathcal{U}_X = \{U_i \to X\}_{i \in I}$, where the maps are regular coverings in an appropriate category. In this case intersections are replaced by $(n+1)$-fold fibered products $U_{i_0i_1\cdots i_n} = U_{i_0} \times_X \cdots \times_X U_{i_n}$.
+
+If $\underline{F}^\bullet$ is a complex of abelian sheaves on $X$, its Čech resolution with respect to a covering $\mathcal{U}_X \to X$ is the double complex
+
+$$\mathcal{C}^{p,q}(\underline{F}) \stackrel{\text{def}}{=} \check{\mathcal{C}}^q(\mathcal{U}_X, \underline{F}^p),$$
+
+where the *q*-cochains with values in $\underline{F}^p$ are given by $\prod \underline{F}^p(U_{i_0 \cdots i_q})$. The Čech coboundary operator is denoted $\delta$. The convention we use is to put the index along the Čech resolution in the *second* place, so if we denote by $d$ the differential in the complex $\underline{F}^\bullet$, the total differential is given by $D = d + (-1)^p \delta$ on the component $\check{\mathcal{C}}^q(\mathcal{U}_X, \underline{F}^p)$ of the total simple complex. Furthermore, recall that the Koszul sign rule causes a sign to be picked up whenever two degree indices are formally exchanged. For Čech resolutions of complexes of sheaves it leads to the following conventions. If $\underline{G}^\bullet$ is a second complex of sheaves on $X$, then one defines the cup product
+
+$$\cup : \mathcal{C}^{p,q}(\underline{F}) \otimes \mathcal{C}^{r,s}(\underline{G}) \to \check{\mathcal{C}}^{q+s}(\mathcal{U}_X, \underline{F}^p \otimes \underline{G}^r) \subset \mathcal{C}^{p+r,q+s}(\underline{F} \otimes \underline{G})$$
+
+of two elements $\{f_{i_0, \dots, i_q}\} \in \mathcal{C}^{p,q}(\underline{F})$ and $\{g_{j_0, \dots, j_s}\} \in \mathcal{C}^{r,s}(\underline{G})$ by
+
+$$(-1)^{qr} f_{i_0, \dots, i_q} \otimes g_{i_q, i_{q+1}, \dots, i_{q+s}}.$$
\ No newline at end of file
diff --git a/samples/texts/7707372/page_6.md b/samples/texts/7707372/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..c5c25b4c693d0ab9073fa04e397077521c3867e2
--- /dev/null
+++ b/samples/texts/7707372/page_6.md
@@ -0,0 +1,47 @@
+For a given complex of abelian objects, say $C^\bullet$, the symbol $\sigma^i$ denotes sharp truncation at the index $i$: $\sigma^i C^p = 0$ for $p < i$.
+
+## 2.2 Deligne cohomology
+
+There are several models for the complexes to use to compute Deligne cohomology [15, 2]. For $A \subset \mathbb{R}$ and an integer $j$ the latter is the hypercohomology:
+
+$$H_{\mathbb{D}}^{\bullet}(X, A(j)) = \mathbf{H}^{\bullet}(X, A(j)_{\mathbb{D}}^{\bullet}).$$
+
+Here $A(j)_{\mathbb{D}}^{\bullet}$ is the Deligne complex
+
+$$ (2.1) \qquad A(j)_{\mathbb{D}}^{\bullet} = A(j)_X \xrightarrow{i} \mathcal{O}_X \xrightarrow{d} \Omega_X^1 \xrightarrow{d} \cdots \xrightarrow{d} \Omega_X^{j-1} $$
+
+$$ (2.2) \qquad \cong \operatorname{Cone} \big(A(j)_X \oplus F^j \underline{\Omega}_X^{\bullet} \xrightarrow{\imath - \jmath} \underline{\Omega}_X^{\bullet}\big) [-1], $$
+
+where $F^j \underline{\Omega}_X^\bullet$ in eqn. (2.2) is the Hodge (“stupid”) filtration on the de Rham complex, and $\imath$ and $\jmath$ denote the obvious inclusion maps. The symbol $\xrightarrow{\cong}$ denotes a quasi-isomorphism. In view of Beilinson's formula for the cup product on cones, to be recalled below [3], Deligne complexes acquire a family of cup products (depending on a real parameter $\alpha$)
+
+$$A(j)_{\mathbb{D}}^{\bullet} \otimes A(k)_{\mathbb{D}}^{\bullet} \xrightarrow{\cup_{\alpha}} A(j+k)_{\mathbb{D}}^{\bullet}.$$
+
+Cup products corresponding to different values of the parameter $\alpha$ are related by homotopy-commutative diagrams, hence they induce a well-defined graded commutative cup product in cohomology
+
+$$ (2.3) \qquad H_{\mathbb{D}}^{p}(X, A(j)) \otimes H_{\mathbb{D}}^{q}(X, A(k)) \xrightarrow{\cup} H_{\mathbb{D}}^{p+q}(X, A(j+k)). $$
+
+In order to explicitly compute cup products, the model given by eqn. (2.1) leads to simpler formulas (when it can be used). If $f \in A(j)_{\mathbb{D}}^{\bullet}$ and $g \in A(k)_{\mathbb{D}}^{\bullet}$, then from ref. [15] we quote:
+
+$$ (2.4) \qquad f \cup g = \begin{cases} f \cdot g & \deg f = 0, \\ f \wedge dg & \deg f > 0 \text{ and } \deg g = k, \\ 0 & \text{otherwise.} \end{cases} $$
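+
+For instance, with $f$ and $g$ invertible on some open set $U$ and chosen branches $\log f$, $\log g$, viewed as degree 1 elements of $\mathbb{Z}(1)_{\mathbb{D}}^\bullet$, the second case of (2.4) gives
+
+```latex
+\[
+\log f \cup \log g \;=\; \log f \wedge d(\log g) \;=\; \log f \; d\log g ,
+\]
+```
+
+a degree 2 element of $\mathbb{Z}(2)_{\mathbb{D}}^\bullet$; this is essentially the local building block for the torsor $(f,g]$ recalled in section 4.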
+
+The following examples are well known and will frequently recur in the following.
+
+**Example 2.2.1.** For $A = \mathbb{Z}$ it is immediately verified that $\mathbb{Z}(1)_{\mathbb{D}}^{\bullet} \xrightarrow{\cong} \mathcal{O}_X^{\times}[-1]$ via the standard exponential sequence, so that $H_{\mathbb{D}}^k(X, \mathbb{Z}(1)) \cong H^{k-1}(X, \mathcal{O}_X^{\times})$. In particular $H_{\mathbb{D}}^1(X, \mathbb{Z}(1)) \cong H^0(X, \mathcal{O}_X^{\times})$, the global invertibles on $X$, and $H_{\mathbb{D}}^2(X, \mathbb{Z}(1)) \cong \mathrm{Pic}(X)$, the Picard group of line bundles over $X$.
+
+**Example 2.2.2.** $\mathbb{Z}(2)_{\mathbb{D}}^{\bullet} \xrightarrow{\cong} (\mathcal{O}_X^{\times} \xrightarrow{d\log} \underline{\Omega}_X^1)[-1]$. A fundamental observation by Deligne (see ref. [2]) is that $H_{\mathbb{D}}^2(X, \mathbb{Z}(2))$ is identified with the group of isomorphism classes of holomorphic line bundles with (holomorphic) connection. This is easily understood from a Čech cohomology point of view. Using the cover $\mathcal{U}_X = \{U_i\}_{i \in I}$, a class in
+
+$$H_{\mathbb{D}}^{2}(X, \mathbb{Z}(2)) \cong \mathbf{H}^{1}(X, \mathcal{O}_{X}^{\times} \xrightarrow{d\log} \underline{\Omega}_{X}^{1})$$
+
+is represented by a pair $(\omega_i, g_{ij})$ with $\omega_i \in \underline{\Omega}_X^1(U_i)$ and $g_{ij} \in \mathcal{O}_X^\times(U_{ij})$ satisfying the relations
+
+$$\omega_j - \omega_i = d \log g_{ij}, \quad g_{ij} g_{jk} = g_{ik}.$$
+
+The Čech representative for the actual class in $H_{\mathbb{D}}^2(X, \mathbb{Z}(2))$ is obtained (up to a multiplication by $2\pi\sqrt{-1}$) by extracting local logarithms $\log g_{ij}$, see ref. [15] for full details.
+
+For real Deligne cohomology, i.e. when $A = \mathbb{R}$, other models quasi-isomorphic to those in eqs. (2.1) and (2.2) are available. Since the maps
+
+$$ (\mathbb{R}(j) \to \underline{\Omega}_X^\bullet) \xrightarrow{\cong} (\mathbb{R}(j) \to \mathbb{C}) \xrightarrow{\cong} \mathbb{R}(j-1) \xrightarrow{\cong} \underline{\mathcal{E}}_X^\bullet(j-1) $$
+
+are all quasi-isomorphisms in the derived category, cf. [15], we have
+
+$$ (2.5) \qquad \mathbb{R}(j)_{\mathbb{D}}^{\bullet} \xrightarrow{\cong} \mathrm{Cone}\left(F^j\underline{\Omega}_X^{\bullet} \to \underline{\mathcal{E}}_X^{\bullet}(j-1)\right)[-1]. $$
\ No newline at end of file
diff --git a/samples/texts/7707372/page_7.md b/samples/texts/7707372/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..69f2737de31422085502dd49e28f8bcd4bbbf5b6
--- /dev/null
+++ b/samples/texts/7707372/page_7.md
@@ -0,0 +1,43 @@
+Moreover, we can use smooth forms thanks to the fact that the inclusion $\underline{\Omega}_X^{\bullet} \hookrightarrow \mathcal{A}_X^{\bullet}$ is a filtered quasi-isomorphism with respect to the filtrations $F^j \underline{\Omega}_X^{\bullet} \hookrightarrow F^j \mathcal{A}_X^{\bullet}$. Here $F^j \mathcal{A}_X^{\bullet}$ is the subcomplex of $\mathcal{A}_X^{\bullet}$ comprising forms of type $(p, q)$ where $p$ is at least $j$, so that $F^j \mathcal{A}_X^n = \bigoplus_{p \ge j} \mathcal{A}_X^{p,n-p}$.
+
+Let $(\omega_1, \eta_1)$ be an element of degree $n$ in $\mathbb{R}(j)_{\mathcal{D}}^{\bullet}$—this means that $\omega_1 \in F^j \underline{\Omega}_X^n$ and $\eta_1 \in \mathcal{E}_X^{n-1}(j-1)$—and $(\omega_2, \eta_2)$ any element in $\mathbb{R}(k)_{\mathcal{D}}^{\bullet}$. A product is given by the formula (cf. ref. [15]):
+
+$$ (2.6) \qquad (\omega_1, \eta_1) \tilde{\cup} (\omega_2, \eta_2) = (\omega_1 \wedge \omega_2, (-1)^n \pi_j \omega_1 \wedge \eta_2 + \eta_1 \wedge \pi_k \omega_2). $$
+
+**Example 2.2.3.** $H_{\mathcal{D}}^1(X, \mathbb{R}(1))$ is the group of real valued smooth functions $\eta$ on $X$ for which there exists a holomorphic one-form $\omega$ with $\pi_0\omega = d\eta$. In other words, it is the group of those real smooth functions $\eta$ such that $\partial\eta$ is holomorphic. In particular, if $f$ is holomorphic and invertible on $U \subset X$, then the class in $H_{\mathcal{D}}^1(U, \mathbb{R}(1))$ determined by $f$ is represented by $(d \log f, \log|f|)$.
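+
+Indeed, one can check the pair $(d\log f, \log|f|)$ directly:
+
+```latex
+\[
+\pi_0(d\log f) = \tfrac{1}{2}\big(d\log f + d\log \bar f\,\big)
+              = \tfrac{1}{2}\, d\log|f|^2
+              = d\log|f| ,
+\]
+```
+
+so $\pi_0\omega = d\eta$ holds, and $\partial\eta = \tfrac{1}{2}\, d\log f$ is indeed holomorphic.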
+
+## 2.3 Cones
+
+We recall here a variant of Beilinson's formula for the cup product on certain diagrams of complexes. (For full details see refs. [1, 3, 15].)
+
+For $i=1,2,3$ consider the diagrams of complexes
+
+$$ (2.7) \qquad \mathcal{D}_i \stackrel{\text{def}}{=} X_i^\bullet \xrightarrow{f_i} Z_i^\bullet \xleftarrow{g_i} Y_i^\bullet $$
+
+and set
+
+$$ C(\mathcal{D}_i) = \text{Cone}\left(X_i^\bullet \oplus Y_i^\bullet \xrightarrow{f_i-g_i} Z_i^\bullet\right)[-1], \quad i=1,2,3. $$
+
+Suppose there are product maps $X_1^\bullet \otimes X_2^\bullet \xrightarrow{\cup} X_3^\bullet$, and similarly for the $Y_i^\bullet$ and $Z_i^\bullet$. We assume the products to be compatible with the $f_i$, $g_i$ only up to homotopy, namely there exist maps
+
+$$ h: (X_1 \otimes X_2)^\bullet \to Z_3^{\bullet-1}, \quad k: (Y_1 \otimes Y_2)^\bullet \to Z_3^{\bullet-1} $$
+
+such that
+
+$$ f_3 \circ \cup - \cup \circ (f_1 \otimes f_2) = dh + hd, \quad g_3 \circ \cup - \cup \circ (g_1 \otimes g_2) = dk + kd, $$
+
+with obvious meaning of the symbols. The following lemma establishes a variant of Beilinson's product formula [3].
+
+**Lemma 2.3.1.** For $(x_i, y_i, z_i) \in X_i^\bullet \oplus Y_i^\bullet \oplus Z_i^{\bullet-1}$, $i=1,2$, and a real parameter $\alpha$, the following formula:
+
+$$ (2.8) \qquad \begin{aligned} (x_1, y_1, z_1) \cup_\alpha (x_2, y_2, z_2) = \big(x_1 \cup x_2,\ y_1 \cup y_2,\ &(-1)^{\deg(x_1)}\big((1-\alpha)f_1(x_1) + \alpha\, g_1(y_1)\big) \cup z_2 \\ &+ z_1 \cup \big(\alpha\, f_2(x_2) + (1-\alpha)g_2(y_2)\big) \\ &- h(x_1 \otimes x_2) + k(y_1 \otimes y_2)\big) \end{aligned} $$
+
+defines a family of products
+
+$$ C(\mathcal{D}_1) \otimes C(\mathcal{D}_2) \xrightarrow{\cup_\alpha} C(\mathcal{D}_3). $$
+
+These products are homotopic to one another, and graded commutative up to homotopy. The homotopy formula is the same as that found in ref. [3].
+
+*Proof.* Direct verification. $\square$
+
+If the maps $f_i$, $g_i$ above are strictly compatible with the products, namely the homotopies $h$ and $k$ are zero, (2.8) reduces to the formulas found in [3, 15]. Homotopy commutativity at the level of complexes ensures the corresponding cohomologies will have genuine graded commutative products.
\ No newline at end of file
diff --git a/samples/texts/7707372/page_8.md b/samples/texts/7707372/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d7ab396467119767a16f603f8cdafd2e14e9f83
--- /dev/null
+++ b/samples/texts/7707372/page_8.md
@@ -0,0 +1,49 @@
+# 3 Hermitian holomorphic Deligne cohomology
+
+## 3.1 Metrized line bundles
+
+Let $X$ be a complex manifold. Consider a holomorphic line bundle $L$ on $X$ with hermitian fiber metric $\rho$ or, equivalently, an invertible sheaf $L$ equipped with a map $\rho: L \to \mathcal{E}_{X,+}^0$ to (the sheaf of) positive real smooth functions, see ref. [21] for the relevant formalism. Let $\widehat{\text{Pic}}\,X$ denote the group of isomorphism classes of line bundles with hermitian metric. A basic observation by Deligne (cf. [14]) is that $\widehat{\text{Pic}}\,X$ can be identified with the second hypercohomology group:
+
+$$ (3.1) \qquad \mathbf{H}^2(X, \mathbb{Z}(1)_X \xrightarrow{i} \mathcal{O}_X \xrightarrow{-\pi_0} \mathcal{E}_X^0). $$
+
+This is easy to see in Čech cohomology. Suppose $s_i$ is a trivialization of $L|_{U_i}$, with transition functions $g_{ij} \in \mathcal{O}_X^\times(U_{ij})$ determined by $s_j = s_i g_{ij}$. Let $\rho_i$ be the value of the quadratic form associated to $\rho$ on $s_i$, namely $\rho_i = \rho(s_i)$. Then we have $\rho_j = \rho_i |g_{ij}|^2$. Taking logarithms, we see that
+
+$$ (2\pi\sqrt{-1}c_{ijk}, \log g_{ij}, \frac{1}{2}\log \rho_i), $$
+
+where $2\pi\sqrt{-1}c_{ijk} = \log g_{jk} - \log g_{ik} + \log g_{ij} \in \mathbb{Z}(1)$, is a cocycle representing the class of the pair $(L, \rho)$.
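+
+As a consistency check, the triple satisfies the cocycle condition for the complex in (3.1), up to the signs prescribed by the Čech conventions of section 2.1:
+
+```latex
+\[
+\delta\big(\tfrac{1}{2}\log\rho\big)_{ij}
+  = \tfrac{1}{2}\log\rho_j - \tfrac{1}{2}\log\rho_i
+  = \log\lvert g_{ij}\rvert
+  = \pi_0\big(\log g_{ij}\big),
+\]
+```
+
+which is the compatibility between the last two terms of the complex, while $\delta(\log g)_{ijk} = 2\pi\sqrt{-1}\,c_{ijk}$ matches the first.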
+
+### 3.1.1 Canonical connection
+
+Recall for later use that the *canonical connection* [19] on a metrized line bundle $(L, \rho)$ is the unique connection compatible with both the holomorphic and hermitian structures. In Čech cohomology with respect to the cover $\mathcal{U}_X$ as above, the canonical connection on $(L, \rho)$ corresponds to a collection of (1,0) forms $\xi_i \in \mathcal{A}_X^{1,0}(U_i)$ satisfying the relations
+
+$$ (3.2) \qquad \xi_j - \xi_i = d \log g_{ij} $$
+
+$$ (3.3) \qquad \pi_0(\xi_i) = \frac{1}{2} d \log \rho_i. $$
+
+The latter just means $\xi_i = \partial \log \rho_i$, in more familiar terms. The global 2-form
+
+$$ (3.4) \qquad c_1(\rho)\big|_{U_i} = \bar{\partial} \partial \log \rho_i $$
+
+represents the first Chern class of $L$ in $H^2(X, \mathbb{R}(1))$. The class of $c_1(\rho)$ is in fact a pure Hodge class in $H^{1,1}(X)$—the image of the first Chern class of $L$ under the map $H_D^2(X, \mathbb{Z}(1)) \to H_D^2(X, \mathbb{R}(1))$ induced by $\mathbb{Z}(1) \to \mathbb{R}(1)$. It only depends on the class of $(L, \rho)$ in $\widehat{\text{Pic}}\,X$.
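+
+The form in (3.4) is indeed globally defined: on overlaps, using $\rho_j = \rho_i |g_{ij}|^2$,
+
+```latex
+\[
+\bar{\partial}\partial \log \rho_j - \bar{\partial}\partial \log \rho_i
+  = \bar{\partial}\partial \log |g_{ij}|^2
+  = \bar{\partial}\partial \big( \log g_{ij} + \overline{\log g_{ij}} \big)
+  = 0,
+\]
+```
+
+since $\log g_{ij}$ is holomorphic and its conjugate antiholomorphic. Equivalently, $c_1(\rho) = d\xi_i$ on each $U_i$, and the local curvature forms glue because $\xi_j - \xi_i = d\log g_{ij}$ is closed.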
+
+## 3.2 Hermitian holomorphic complexes
+
+In ref. [11] Brylinski introduced the complexes
+
+$$ (3.5) \qquad C(l)^{\bullet} = \text{Cone}\big(\mathbb{Z}(l)_X \oplus (F^l\underline{A}_X^{\bullet} \cap \sigma^{2l}\underline{\mathcal{E}}_X^{\bullet}(l)) \to \underline{\mathcal{E}}_X^{\bullet}(l)\big)[-1]. $$
+
+**Definition 3.2.1.** The hypercohomology groups
+
+$$ (3.6) \qquad H_{\mathcal{D}_{h.h.}}^p(X,l) \stackrel{\text{def}}{=} \mathbf{H}^p(X,C(l)^{\bullet}) $$
+
+are the *Hermitian holomorphic Deligne cohomology groups*.
+
+By the remark after eq. (2.5), the complex
+
+$$ \widetilde{\mathbb{R}}(l)^{\bullet}_{\mathcal{D}} = \text{Cone} \big(F^l\underline{A}_X^{\bullet} \to \underline{\mathcal{E}}_X^{\bullet}(l-1)\big) [-1] $$
+
+also computes the real Deligne cohomology. Then consider the complex
+
+$$ (3.7) \qquad D(l)^{\bullet}_{h.h.} = \text{Cone}\big(\mathbb{Z}(l)^{\bullet}_{\mathcal{D}} \oplus (F^l\underline{A}_X^{\bullet} \cap \sigma^{2l}\underline{\mathcal{E}}_X^{\bullet}(l)) \to \widetilde{\mathbb{R}}(l)^{\bullet}_{\mathcal{D}}\big) [-1]. $$
+
+In ref. [1] we prove
\ No newline at end of file
diff --git a/samples/texts/7707372/page_9.md b/samples/texts/7707372/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..a2bdb2882ec43521bf186e284948d76ecfc55e0c
--- /dev/null
+++ b/samples/texts/7707372/page_9.md
@@ -0,0 +1,51 @@
+**Lemma 3.2.2.** The complexes $C(l)^\bullet$ and $D(l)^\bullet_{h.h.}$ are quasi-isomorphic, hence we also have
+
+$$H_{\mathcal{D}_{h.h.}}^{p}(X,l) = \mathbf{H}^{p}(X, D(l)_{h.h.}^{\bullet}).$$
+
+*Remark 3.2.3.* The complex $F^l A_X^\bullet \cap \sigma^{2l} \mathcal{E}_X^\bullet(l)$ appearing in both (3.5) and (3.7) can be rewritten in terms of the complex $G(l)^\bullet$ of ref. [14]. Set
+
+$$G(l)^\bullet = 0 \to \dots \to 0 \to A_X^{(l,l)} \xrightarrow{d} A_X^{(l+1,l)} \oplus A_X^{(l,l+1)} \xrightarrow{d} \dots$$
+
+Then we have $F^l A_X^\bullet \cap \sigma^{2l} \mathcal{E}_X^\bullet(l) = G(l)^\bullet \cap \mathcal{E}_X^\bullet(l)$.
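+
+To see why the equality holds, note that the reality constraint already forces both Hodge indices to be at least $l$:
+
+```latex
+\[
+\omega \in \mathcal{E}_X^n(l) \iff \overline{\omega^{p,q}} = (-1)^l\, \omega^{q,p}
+\quad\text{for all } p+q=n,
+\]
+```
+
+so if moreover $\omega \in F^l A_X^n$ (all components have $p \ge l$), any component $\omega^{p,q}$ with $q < l$ would be conjugate to a component $\omega^{q,p} \notin F^l A_X^n$, hence vanishes. Thus every component satisfies $p, q \ge l$; in particular $n \ge 2l$, so the truncation $\sigma^{2l}$ is automatic and the intersection is computed by $G(l)^\bullet \cap \mathcal{E}_X^\bullet(l)$.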
+
+For certain ranges of values of the cohomology index the groups $H_{\mathcal{D}_{h.h.}}^p(X,l)$ are fairly ordinary. Indeed we have the following easy
+
+**Lemma 3.2.4.** For $p \le 2l-1$ we have
+
+$$H_{\mathcal{D}_{h.h.}}^{p}(X,l) \cong H^{p-1}(X,\mathbb{R}(l)/\mathbb{Z}(l)).$$
+
+*Proof.* Using either $C(l)^\bullet$ or $D(l)^\bullet_{h.h.}$, we see that they are quasi-isomorphic to
+
+$$\text{Cone}\left(F^l \underline{A}_X^\bullet \cap \sigma^{2l} \underline{\mathcal{E}}_X^\bullet(l) \to \mathbb{R}(l)/\mathbb{Z}(l)\right)[-1],$$
+
+which leads to the triangle
+
+$$\mathbb{R}(l)/\mathbb{Z}(l)[-1] \to D(l)_{h.h.}^\bullet \to F^l\underline{A}_X^\bullet \cap \sigma^{2l}\underline{\mathcal{E}}_X^\bullet(l) \xrightarrow{+1} .$$
+
+The statement follows, since the complex $F^l\underline{A}_X^\bullet \cap \sigma^{2l}\underline{\mathcal{E}}_X^\bullet(l)$ is concentrated in degrees $\geq 2l$. $\square$
+
+In general these groups are interesting when $p \ge 2l$. The most important example is:
+
+**Lemma 3.2.5.**
+
+$$\widehat{\text{Pic}}\,X \cong H_{\mathcal{D}_{h.h.}}^2(X, 1).$$
+
+*Proof.* We have quasi-isomorphisms
+
+$$\big(\mathbb{Z}(1)_X \xrightarrow{\iota} \mathcal{O}_X \xrightarrow{-\pi_0} \mathcal{E}_X^0\big) \xrightarrow{\simeq} D(1)^\bullet_{h.h.} \xrightarrow{\simeq} C(1)^\bullet.$$
+
+Indeed, note that $D(1)^\bullet_{h.h.}$ can be rewritten as
+
+$$\text{Cone}\left(\widetilde{\mathbb{Z}}(1)^\bullet_{\mathcal{D}} \to \widetilde{\mathbb{R}}(1)^\bullet_{\mathcal{D}} / (F^1\underline{A}_X^\bullet \cap \sigma^2\underline{\mathcal{E}}_X^\bullet(1))\right)[-1]$$
+
+and
+
+$$\widetilde{\mathbb{R}}(1)^\bullet_{\mathcal{D}} / (F^1\underline{A}_X^\bullet \cap \sigma^2\underline{\mathcal{E}}_X^\bullet(1)) \xrightarrow{\simeq} \text{Cone}\big(F^1\underline{A}_X^\bullet / (F^1\underline{A}_X^\bullet \cap \sigma^2\underline{\mathcal{E}}_X^\bullet(1)) \xrightarrow{-\pi_0} \underline{\mathcal{E}}_X^\bullet\big)[-1].$$
+
+By direct verification, the latter complex is quasi-isomorphic to $\underline{\mathcal{E}}_X^0[-1]$. Thus
+
+$$D(1)_{h.h.}^{\bullet} \xrightarrow{\simeq} \text{Cone}\big(\widetilde{\mathbb{Z}}(1)_{\mathcal{D}}^{\bullet} \to \underline{\mathcal{E}}_X^0[-1]\big)[-1] \xrightarrow{\simeq} \big(\mathbb{Z}(1)_X \to \mathcal{O}_X \to \underline{\mathcal{E}}_X^0\big). \qquad \square$$
+
+Since hermitian holomorphic Deligne complexes can be expressed as cones of diagrams of the form (2.7), they admit cup products, and hence there is a cup product for hermitian holomorphic Deligne cohomology [11]:
+
+$$H_{\mathcal{D}_{h.h.}}^{p}(X,l) \otimes H_{\mathcal{D}_{h.h.}}^{q}(X,k) \stackrel{\cup}{\longrightarrow} H_{\mathcal{D}_{h.h.}}^{p+q}(X,l+k).$$
\ No newline at end of file
|