diff --git a/samples/pdfs/1239855.pdf b/samples/pdfs/1239855.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b597237f7c51339a5782907bdf9b260d0ed668a2
Binary files /dev/null and b/samples/pdfs/1239855.pdf differ
diff --git a/samples/pdfs/3332461.pdf b/samples/pdfs/3332461.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d0d498eebac32b6e4e06c64bf083ce6f3f0e750e
Binary files /dev/null and b/samples/pdfs/3332461.pdf differ
diff --git a/samples/texts/1228241/page_1.md b/samples/texts/1228241/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..b0ba642ed881c916e97841ee7e8216c6e7141b44
--- /dev/null
+++ b/samples/texts/1228241/page_1.md
@@ -0,0 +1,16 @@
+Reclaiming the energy of a schedule: models and algorithms
+
+Guillaume Aupy, Anne Benoit, Fanny Dufossé, Yves Robert
+
+► To cite this version:
+
+Guillaume Aupy, Anne Benoit, Fanny Dufossé, Yves Robert. Reclaiming the energy of a schedule: models and algorithms. Concurrency and Computation: Practice and Experience, Wiley, 2013, 25, pp. 1505-1523. 10.1002/cpe.2889. hal-00763388
+
+HAL Id: hal-00763388
+https://hal.inria.fr/hal-00763388
+
+Submitted on 3 Sep 2013
+
+**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
\ No newline at end of file
diff --git a/samples/texts/1228241/page_10.md b/samples/texts/1228241/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..8d251684136d7f11b2ecc616ea2914709f3c6ed3
--- /dev/null
+++ b/samples/texts/1228241/page_10.md
@@ -0,0 +1,31 @@
+### 4.3. General DAGs
+
+For arbitrary execution graphs, we can rewrite the MINENERGY($G, D$) problem as follows:
+
+$$
+\begin{array}{ll}
+\text{Minimize} & \displaystyle\sum_{i=1}^{n} u_i^{-2} \times w_i \\
+\text{subject to} & (i) \quad b_i + w_i \times u_i \leq b_j \text{ for each edge } (T_i, T_j) \in E \\
+& (ii) \quad b_i + w_i \times u_i \leq D \text{ for each task } T_i \in V \\
+& (iii) \quad u_i \geq \frac{1}{s_{max}} \text{ for each task } T_i \in V \\
+& (iv) \quad b_i \geq 0 \text{ for each task } T_i \in V
+\end{array}
+\qquad (2)
+$$
+
+Here, $u_i = 1/s_i$ is the inverse of the speed to execute task $T_i$. We now have a convex optimization problem to solve, with linear constraints in the non-negative variables $u_i$ and $b_i$. In fact, the objective function is a posynomial, so we have a geometric programming problem [16, Section 4.5] for which efficient numerical schemes exist. In addition, such an optimization problem with a smooth convex objective function is known to be well-conditioned [35].
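The geometric program above can be explored numerically. Below is a minimal sketch (not the authors' code; the weights, deadline, and speed bound are hypothetical) that solves the continuous problem for a 2-task chain with scipy, where the precedence and deadline constraints collapse into the single constraint $\sum_i w_i/s_i \le D$:

```python
# Sketch: minimize sum w_i * s_i^2 subject to sum w_i / s_i <= D (2-task chain).
# By convexity, the optimum runs every task of a chain at the common speed
# W/D, where W = w_1 + w_2, for an energy of W^3 / D^2.
from scipy.optimize import minimize

w = [2.0, 3.0]   # hypothetical task weights
D = 5.0          # deadline
s_max = 10.0

energy = lambda s: sum(wi * si ** 2 for wi, si in zip(w, s))
deadline = {"type": "ineq",
            "fun": lambda s: D - sum(wi / si for wi, si in zip(w, s))}
res = minimize(energy, x0=[2.0, 2.0],
               bounds=[(1e-6, s_max)] * len(w), constraints=[deadline])

W = sum(w)
print(res.x, res.fun, W ** 3 / D ** 2)
```

For a general DAG one would add the variables $b_i$ and one constraint per edge, exactly as in system (2).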
+
+However, as illustrated on simple fork graphs, the optimal speeds are not expected to be rational numbers but instead arbitrarily complex expressions (we have the cubic root of the sum of cubes for forks, and nested expressions of this form for trees). From a computational complexity point of view, we do not know how to encode such numbers in polynomial size of the input (the rational task weights and the execution deadline). Still, we can always solve the problem numerically and get fixed-size numbers that are good approximations of the optimal values.
+
+In the following, we show that the total power consumption of any optimal schedule is constant throughout execution. While this important property does not help to design an optimal solution, it shows that a schedule with large variations in its power consumption is likely to waste a lot of energy.
+
+We need a few notations before stating the result. Consider a schedule for a graph $G = (V, E)$ with $n$ tasks. Task $T_i$ is executed at constant speed $s_i$ (see Lemma 1) and during interval $[b_i, c_i]$:
+$T_i$ begins its execution at time $b_i$ and completes it at time $c_i$. The total power consumption $P(t)$ of the schedule at time $t$ is defined as the sum of the power consumed by all tasks executing at time $t$:
+
+$$ P(t) = \sum_{1 \le i \le n, t \in [b_i, c_i]} s_i^3 . $$
+
+**Theorem 4.** *Consider an instance of CONTINUOUS, and an optimal schedule for this instance, such that no speed is equal to $s_{max}$. Then the total power consumption of the schedule throughout execution is constant.*
+
+**Proof.** We prove this theorem by induction on the number of tasks of the graph. First we prove a preliminary result:
+
+**Lemma 2.** Consider a graph $G = (V, E)$ with $n \ge 2$ tasks, and any optimal schedule of deadline $D$. Let $t_1$ be the earliest completion time of a task in the schedule. Similarly, let $t_2$ be the latest starting time of a task in the schedule. Then, either $G$ is composed of independent tasks, or $0 < t_1 \le t_2 < D$.
\ No newline at end of file
diff --git a/samples/texts/1228241/page_11.md b/samples/texts/1228241/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..ecc443494248f017b57138ab9f6c13e636e646b7
--- /dev/null
+++ b/samples/texts/1228241/page_11.md
@@ -0,0 +1,18 @@
+**Proof.** Task $T_i$ is executed at speed $s_i$ and during interval $[b_i, c_i]$. We have $t_1 = \min_{1 \le i \le n} c_i$ and $t_2 = \max_{1 \le i \le n} b_i$. Clearly, $0 \le t_1, t_2 \le D$ by definition of the schedule. Suppose that $t_2 < t_1$. Let $T_1$ be a task that ends at time $t_1$, and $T_2$ one that starts at time $t_2$. Then:
+
+• $\nexists T \in V, (T_1, T) \in E$ (otherwise, $T$ would start after $t_2$), therefore, $t_1 = D$;
+
+• $\nexists T \in V, (T, T_2) \in E$ (otherwise, $T$ would finish before $t_1$); therefore $t_2 = 0$.
+
+This also means that all tasks start at time 0 and end at time $D$. Therefore, $G$ is composed only of independent tasks. □
+
+Back to the proof of the theorem, we consider first the case of a graph with only one task. In an optimal schedule, the task is executed in time *D*, and at constant speed (Lemma 1), hence with constant power consumption.
+
+Suppose now that the property is true for all DAGs with at most *n* − 1 tasks. Let *G* be a DAG with *n* tasks. If *G* is exactly composed of *n* independent tasks, then we know that the power consumption of *G* is constant (because all task speeds are constant). Otherwise, let *t*₁ be the earliest completion time, and *t*₂ the latest starting time of a task in the optimal schedule. Thanks to Lemma 2, we have 0 < *t*₁ ≤ *t*₂ < *D*.
+
+Suppose first that $t_1 = t_2 = t_0$. There are three kinds of tasks: those beginning at time 0 and ending at time $t_0$ (set $S_1$), those beginning at time $t_0$ and ending at time $D$ (set $S_2$), and finally those beginning at time 0 and ending at time $D$ (set $S_3$). Tasks in $S_3$ execute during the whole schedule duration, at constant speed, hence their contribution to the total power consumption $P(t)$ is the same at each time-step $t$. Therefore, we can remove them from the schedule without loss of generality. Next we determine the value of $t_0$. Let $A_1 = \sum_{T_i \in S_1} w_i^3$, and $A_2 = \sum_{T_i \in S_2} w_i^3$. The energy consumption between 0 and $t_0$ is $\frac{A_1}{t_0^2}$, and between $t_0$ and $D$, it is $\frac{A_2}{(D-t_0)^2}$. The optimal energy consumption is obtained with $t_0 = \frac{A_1^{1/3}}{A_1^{1/3}+A_2^{1/3}} \times D$. Then, the total power consumption of the optimal schedule is the same in both intervals, hence at each time-step: we derive that $P(t) = \left(\frac{A_1^{1/3}+A_2^{1/3}}{D}\right)^3$, which is constant.
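Setting the derivative of $E(t) = A_1/t^2 + A_2/(D-t)^2$ to zero gives $A_1/t^3 = A_2/(D-t)^3$, hence $t_0 = D \cdot A_1^{1/3}/(A_1^{1/3}+A_2^{1/3})$. A quick numeric sanity check (with illustrative values of $A_1$, $A_2$, $D$):

```python
# Check that t0 minimizes the energy and equalizes the power dissipated
# on [0, t0] and [t0, D].
A1, A2, D = 8.0, 27.0, 5.0
c1, c2 = A1 ** (1 / 3), A2 ** (1 / 3)
t0 = D * c1 / (c1 + c2)

E = lambda t: A1 / t ** 2 + A2 / (D - t) ** 2
assert E(t0) <= min(E(t0 - 1e-3), E(t0 + 1e-3))  # t0 beats its neighbours

P1 = A1 / t0 ** 3          # power on [0, t0]: sum of (w_i / t0)^3 over S1
P2 = A2 / (D - t0) ** 3    # power on [t0, D]
print(t0, P1, P2, ((c1 + c2) / D) ** 3)
```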
+
+Suppose now that $t_1 < t_2$. For each task $T_i$, let $w'_i$ be the number of operations executed before $t_1$, and $w''_i$ the number of operations executed after $t_1$ (with $w'_i + w''_i = w_i$). Let $G'$ be the DAG $G$ with execution costs $w'_i$, and $G''$ be the DAG $G$ with execution costs $w''_i$. The tasks with a cost equal to 0 are removed from the DAGs. Then, both $G'$ and $G''$ have strictly fewer than $n$ tasks. We can therefore apply the induction hypothesis. We derive that the power consumption in both DAGs is constant. Since we did not change the speeds of the tasks, the total power consumption $P(t)$ in $G$ is the same as in $G'$ if $t < t_1$, hence a constant. Similarly, the total power consumption $P(t)$ in $G$ is the same as in $G''$ if $t > t_1$, hence a constant. Considering the same partitioning with $t_2$ instead of $t_1$, we show that the total power consumption $P(t)$ is a constant before $t_2$, and also a constant after $t_2$. But $t_1 < t_2$, and the intervals $[0, t_2]$ and $[t_1, D]$ overlap. Altogether, the total power consumption is the same constant throughout $[0, D]$, which concludes the proof.
+□
\ No newline at end of file
diff --git a/samples/texts/1228241/page_12.md b/samples/texts/1228241/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..483b29b89d676d427a2bb43ce1ecdcb75f34a895
--- /dev/null
+++ b/samples/texts/1228241/page_12.md
@@ -0,0 +1,19 @@
+# Reclaiming the energy of a schedule: models and algorithms
+
+Guillaume Aupy¹, Anne Benoit*¹,²,
+Fanny Dufossé¹ and Yves Robert¹,²
+
+¹ ENS Lyon, Université de Lyon,
+LIP laboratory, UMR 5668, ENS Lyon-CNRS-INRIA-UCBL, Lyon, France
+
+² Institut Universitaire de France
+
+## SUMMARY
+
+We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs (DAGs). We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to an NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the Vdd-hopping model, which allows switching between different supply voltages ($V_{DD}$) while executing a task, leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
+
+KEY WORDS: Energy models, complexity, bi-criteria optimization, algorithms, scheduling.
+
+*Correspondence to: Anne Benoit, LIP, ENS Lyon, 46 allée d'Italie, 69364 Lyon Cedex 07, France.
+†E-mail: {Guillaume.Aupy, Anne.Benoit, Fanny.Dufosse, Yves.Robert}@ens-lyon.fr.
+This work was supported in part by the ANR StochaGrid and RESCUE projects. A two-page extended abstract of this work appears as a short presentation in SPAA'2011.
\ No newline at end of file
diff --git a/samples/texts/1228241/page_13.md b/samples/texts/1228241/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..d830f39d4d632e2bac1afb56876e9ddbc6737369
--- /dev/null
+++ b/samples/texts/1228241/page_13.md
@@ -0,0 +1,35 @@
+**5. Discrete models**
+
+In this section, we present complexity results on the three energy models with a finite number of possible speeds. The only polynomial instance is for the VDD-HOPPING model, for which we write a linear program in Section 5.1. Then, we give NP-completeness results in Section 5.2, and approximation results in Section 5.3, for the DISCRETE and INCREMENTAL models.
+
+**5.1. The Vdd-Hopping model**
+
+**Theorem 5.** With the VDD-HOPPING model, MINENERGY(G, D) can be solved in polynomial time.
+
+**Proof.** Let $G$ be the execution graph of an application with $n$ tasks, and $D$ a deadline. Let $s_1, \dots, s_m$ be the set of possible processor speeds. We use the following rational variables: for $1 \le i \le n$ and $1 \le j \le m$, $b_i$ is the starting time of the execution of task $T_i$, and $\alpha_{(i,j)}$ is the time spent at speed $s_j$ for executing task $T_i$. There are $n+n \times m = n(m+1)$ such variables. Note that the total execution time of task $T_i$ is $\sum_{j=1}^{m} \alpha_{(i,j)}$. The constraints are:
+
+* $\forall 1 \le i \le n, b_i \ge 0$: starting times of all tasks are non-negative numbers;
+
+* $\forall 1 \le i \le n, b_i + \sum_{j=1}^{m} \alpha_{(i,j)} \le D$: the deadline is not exceeded by any task;
+
+* $\forall 1 \le i, i' \le n$ such that $T_i \rightarrow T_{i'}$, $b_i + \sum_{j=1}^{m} \alpha_{(i,j)} \le b_{i'}$: a task cannot start before its predecessor has completed its execution;
+
+* $\forall 1 \le i \le n, \sum_{j=1}^{m} \alpha_{(i,j)} \times s_j \ge w_i$: task $T_i$ is completely executed.
+
+The objective function is then $\min (\sum_{i=1}^{n} \sum_{j=1}^{m} \alpha_{(i,j)} s_j^3)$.
+
+The size of this linear program is clearly polynomial in the size of the instance, all $n(m+1)$ variables are rational, and therefore it can be solved in polynomial time [36]. $\square$
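This linear program can be sketched directly with scipy's `linprog`. The instance below is a hypothetical toy: a chain $T_1 \rightarrow T_2$ with $w_1 = w_2 = 3$, speeds $\{1, 2\}$, and $D = 4$; the variable vector is $(b_1, b_2, \alpha_{(1,1)}, \alpha_{(1,2)}, \alpha_{(2,1)}, \alpha_{(2,2)})$:

```python
# LP of Theorem 5 for a 2-task chain; all constraints written as A_ub x <= b_ub.
from scipy.optimize import linprog

D = 4.0
c = [0, 0, 1, 8, 1, 8]  # energy: sum alpha_(i,j) * s_j^3, with s = [1, 2]
A_ub = [
    [1, 0, 1, 1, 0, 0],    # b1 + a11 + a12 <= D
    [0, 1, 0, 0, 1, 1],    # b2 + a21 + a22 <= D
    [1, -1, 1, 1, 0, 0],   # b1 + a11 + a12 <= b2   (edge T1 -> T2)
    [0, 0, -1, -2, 0, 0],  # a11 + 2*a12 >= w1 = 3  (work of T1 done)
    [0, 0, 0, 0, -1, -2],  # a21 + 2*a22 >= w2 = 3  (work of T2 done)
]
b_ub = [D, D, 0, -3, -3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
print(res.fun)  # optimal Vdd-hopping energy for this toy instance
```

Here the optimum mixes the two speeds on each task; any split of the deadline with $d_1 + d_2 = 4$ and $d_i \in [1.5, 3]$ yields the same energy $7(w_1 + w_2) - 6D = 18$.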
+
+**5.2. NP-completeness results**
+
+**Theorem 6.** With the INCREMENTAL model (and hence the DISCRETE model), MINENERGY(G, D) is NP-complete.
+
+**Proof.** We consider the associated decision problem: given an execution graph, a deadline, and a bound on the energy consumption, can we find an execution speed for each task such that the deadline and the bound on energy are respected? The problem is clearly in NP: given the execution speed of each task, computing the execution time and the energy consumption can be done in polynomial time.
+
+To establish the completeness, we use a reduction from 2-Partition [37]. We consider an instance $\mathcal{I}_1$ of 2-Partition: given $n$ strictly positive integers $a_1, \dots, a_n$, does there exist a subset $I$ of $\{1, \dots, n\}$ such that $\sum_{i \in I} a_i = \sum_{i \notin I} a_i$? Let $T = \frac{1}{2} \sum_{i=1}^{n} a_i$.
+
+We build the following instance $\mathcal{I}_2$ of our problem: the execution graph is a linear chain with $n$ tasks, where:
+
+* task $T_i$ has size $w_i = a_i$;
+
+* the processor can run at $m = 2$ different speeds;
\ No newline at end of file
diff --git a/samples/texts/1228241/page_14.md b/samples/texts/1228241/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..094afcb22e8815fdf32fceb8eba3d61fd259d8e5
--- /dev/null
+++ b/samples/texts/1228241/page_14.md
@@ -0,0 +1,33 @@
+• $s_1 = 1$ and $s_2 = 2$ (i.e., $s_{min} = 1$, $s_{max} = 2$, $\delta = 1$);
+
+• $D = 3T/2$;
+
+• $E = 5T$.
+
+Clearly, the size of $\mathcal{I}_2$ is polynomial in the size of $\mathcal{I}_1$.
+
+Suppose first that instance $\mathcal{I}_1$ has a solution $I$. For all $i \in I$, $T_i$ is executed at speed 1, otherwise it is executed at speed 2. The execution time is then $\sum_{i \in I} a_i + \sum_{i \notin I} a_i / 2 = \frac{3}{2} T = D$, and the energy consumption is $\sum_{i \in I} a_i + \sum_{i \notin I} a_i \times 2^2 = 5T = E$. Both bounds are respected, and therefore the execution speeds are a solution to $\mathcal{I}_2$.
+
+Suppose now that $\mathcal{I}_2$ has a solution. Since we consider the DISCRETE and INCREMENTAL models, each task runs either at speed 1 or at speed 2. Let $I = \{i \mid T_i \text{ is executed at speed } 1\}$. Note that we have $\sum_{i \notin I} a_i = 2T - \sum_{i \in I} a_i$.
+
+The execution time is $D' = \sum_{i \in I} a_i + \sum_{i \notin I} a_i / 2 = T + (\sum_{i \in I} a_i) / 2$. Since the deadline is not exceeded, $D' \le D = 3T/2$, and therefore $\sum_{i \in I} a_i \le T$.
+
+For the energy consumption of the solution of $\mathcal{I}_2$, we have $E' = \sum_{i \in I} a_i + \sum_{i \notin I} a_i \times 2^2 = 2T + 3 \sum_{i \notin I} a_i$. Since $E' \le E = 5T$, we obtain $3 \sum_{i \notin I} a_i \le 3T$, and hence $\sum_{i \notin I} a_i \le T$.
+
+Since $\sum_{i \in I} a_i + \sum_{i \notin I} a_i = 2T$, we conclude that $\sum_{i \in I} a_i = \sum_{i \notin I} a_i = T$, and therefore $\mathcal{I}_1$ has a solution. This concludes the proof. $\square$
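The equivalence at the heart of the reduction can be observed by brute force on a tiny (hypothetical) instance: with $w_i = a_i$, speeds $\{1, 2\}$, $D = 3T/2$ and $E = 5T$, a speed assignment meets both bounds exactly when the speed-1 tasks form a 2-partition.

```python
# Brute-force check of the reduction on the instance a = [1, 2, 3, 4] (T = 5).
from itertools import product

a = [1, 2, 3, 4]
T = sum(a) // 2
D, E = 3 * T / 2, 5 * T

feasible = []
for speeds in product([1, 2], repeat=len(a)):
    time = sum(ai / s for ai, s in zip(a, speeds))        # chain: times add up
    energy = sum(ai * s ** 2 for ai, s in zip(a, speeds))
    if time <= D and energy <= E:
        feasible.append(speeds)

# Every feasible speed assignment induces a 2-partition of the a_i.
for s in feasible:
    assert sum(ai for ai, si in zip(a, s) if si == 1) == T
print(feasible)
```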
+
+### 5.3. Approximation results
+
+Here we explain, for the INCREMENTAL and DISCRETE models, how the solution to the NP-hard problem can be approximated. Note that, given an execution graph and a deadline, the optimal energy consumption with the CONTINUOUS model is always lower than that with the other models, which are more constrained.
+
+**Theorem 7.** With the INCREMENTAL model, for any integer $K > 0$, the MINENERGY$(G, D)$ problem can be approximated within a factor $(1 + \frac{\delta}{s_{\min}})^2(1 + \frac{1}{K})^2$, in a time polynomial in the size of the instance and in $K$.
+
+**Proof.** Consider an instance $\mathcal{I}_{inc}$ of the problem with the INCREMENTAL model. The execution graph $G$ has $n$ tasks, $D$ is the deadline, $\delta$ is the minimum permissible speed increment, and $s_{min}, s_{max}$ are the speed bounds. Moreover, let $K > 0$ be an integer, and let $E_{inc}$ be the optimal value of the energy consumption for this instance $\mathcal{I}_{inc}$.
+
+We construct the following instance $\mathcal{I}_{vdd}$ with the VDD-HOPPING model: the execution graph and the deadline are the same as in instance $\mathcal{I}_{inc}$, and the speeds can take the values
+
+$$ \left\{ s_{\min} \times \left( 1 + \frac{1}{K} \right)^i \right\}_{0 \le i \le N}, $$
+
+where $N$ is such that $s_{max}$ is not exceeded: $N = \lfloor (\ln(s_{max}) - \ln(s_{min})) / \ln(1 + \frac{1}{K}) \rfloor$. As $N$ is asymptotically of order $O(K \ln(s_{max}/s_{min}))$, the number of possible speeds in $\mathcal{I}_{vdd}$, and hence the size of $\mathcal{I}_{vdd}$, is polynomial in the size of $\mathcal{I}_{inc}$ and $K$.
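A short sketch (with hypothetical parameters) of the speed set built in the proof:

```python
# Geometric ladder of speeds s_min * (1 + 1/K)^i, i = 0..N, capped by s_max.
import math

s_min, s_max, K = 1.0, 6.0, 4
N = math.floor((math.log(s_max) - math.log(s_min)) / math.log(1 + 1 / K))
speeds = [s_min * (1 + 1 / K) ** i for i in range(N + 1)]

# The last speed fits under s_max, and one more step would overshoot it.
assert speeds[-1] <= s_max < speeds[-1] * (1 + 1 / K)
print(N, speeds)
```

Consecutive speeds differ by a factor $1 + \frac{1}{K}$, which is what produces the $(1 + \frac{1}{K})^2$ term in the approximation ratio.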
+
+Next, we solve $\mathcal{I}_{vdd}$ in polynomial time thanks to Theorem 5. For each task $T_i$, let $s_i^{(vdd)}$ be the average speed of $T_i$ in this solution: if the execution time of the task in the solution
\ No newline at end of file
diff --git a/samples/texts/1228241/page_15.md b/samples/texts/1228241/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a220e2c84c4846d7fe8a2cb7e3411da45a49cfd
--- /dev/null
+++ b/samples/texts/1228241/page_15.md
@@ -0,0 +1,29 @@
+is $d_i$, then $s_i^{(vdd)} = w_i/d_i$; $E_{vdd}$ is the optimal energy consumption obtained with these speeds. Let $s_i^{(algo)} = \min\{s_{min} + u \times \delta \mid u \in \mathbb{N} \text{ and } s_{min} + u \times \delta \ge s_i^{(vdd)}\}$ be the smallest speed in $\mathcal{I}_{inc}$ that is at least $s_i^{(vdd)}$. There exists such a speed since, because of the values chosen for $\mathcal{I}_{vdd}$, $s_i^{(vdd)} \le s_{max}$. The values $s_i^{(algo)}$ can be computed in time polynomial in the size of $\mathcal{I}_{inc}$ and $K$. Let $E_{algo}$ be the energy consumption obtained with these values.
+
+In order to prove that this algorithm is an approximation of the optimal solution, we need to prove that $E_{algo} \le (1 + \frac{\delta}{s_{min}})^2(1 + \frac{1}{K})^2 \times E_{inc}$. For each task $T_i$, $s_i^{(algo)} - \delta \le s_i^{(vdd)} \le s_i^{(algo)}$. Since $s_{min} \le s_i^{(vdd)}$, we derive that $s_i^{(algo)} \le s_i^{(vdd)} \times (1 + \frac{\delta}{s_{min}})$. Summing over all tasks, we get
+
+$$E_{algo} = \sum_i w_i (s_i^{(algo)})^2 \le \sum_i w_i (s_i^{(vdd)} \times (1 + \frac{\delta}{s_{min}}))^2 \le E_{vdd} \times (1 + \frac{\delta}{s_{min}})^2.$$
+
+Next, we bound $E_{vdd}$ thanks to the optimal solution with the CONTINUOUS model, $E_{con}$. Let $\mathcal{I}_{con}$ be the instance where the execution graph $G$, the deadline $D$, the speeds $s_{min}$ and $s_{max}$ are the same as in instance $\mathcal{I}_{inc}$, but now admissible speeds take any value between $s_{min}$ and $s_{max}$. Let $s_i^{(con)}$ be the optimal continuous speed for task $T_i$, and let $0 \le u \le N$ be the value such that:
+
+$$s_{min} \times \left(1 + \frac{1}{K}\right)^u \le s_i^{(con)} \le s_{min} \times \left(1 + \frac{1}{K}\right)^{u+1} = s_i^*.$$
+
+In order to bound the energy consumption for $\mathcal{I}_{vdd}$, we assume that $T_i$ runs at speed $s_i^*$, instead of $s_i^{(vdd)}$. The solution with these speeds is a solution to $\mathcal{I}_{vdd}$, and its energy consumption is $E^* \ge E_{vdd}$. From the previous inequalities, we deduce that $s_i^* \le s_i^{(con)} \times (1 + \frac{1}{K})$, and by summing over all tasks,
+
+$$
+\begin{aligned}
+E_{vdd} \le E^* &= \sum_i w_i (s_i^*)^2 \le \sum_i w_i (s_i^{(con)} \times (1 + \frac{1}{K}))^2 \\
+&\le E_{con} \times (1 + \frac{1}{K})^2 \le E_{inc} \times (1 + \frac{1}{K})^2.
+\end{aligned}
+\quad \square
+$$
+
+**Proposition 3.**
+
+• For any integer $\delta > 0$, any instance of MINENERGY$(G, D)$ with the CONTINUOUS model can be approximated within a factor $(1 + \frac{\delta}{s_{min}})^2$ in the INCREMENTAL model with speed increment $\delta$.
+
+• For any integer $K > 0$, any instance of MINENERGY$(G, D)$ with the DISCRETE model can be approximated within a factor $(1 + \frac{\alpha}{s_1})^2(1 + \frac{1}{K})^2$, with $\alpha = \max_{1 \le i < m}\{s_{i+1} - s_i\}$, in a time polynomial in the size of the instance and in $K$.
+
+**Proof.** For the first part, let $s_i^{(con)}$ be the optimal continuous speed for task $T_i$ in instance $\mathcal{I}_{con}$; $E_{con}$ is the optimal energy consumption. For any task $T_i$, let $s_i$ be the smallest speed of $\mathcal{I}_{inc}$ such that $s_i^{(con)} \le s_i$; then $s_i - \delta < s_i^{(con)}$, and since $s_i^{(con)} \ge s_{min}$, we get $s_i \le s_i^{(con)} \times (1 + \frac{\delta}{s_{min}})$. Let $E$ be the energy consumption with the speeds $s_i$; these speeds still meet the deadline, so the optimal energy $E_{inc}$ of $\mathcal{I}_{inc}$ satisfies $E_{inc} \le E \le E_{con} \times (1 + \frac{\delta}{s_{min}})^2$.
+
+For the second part, we use the same algorithm as in Theorem 7. The same proof leads to the approximation ratio with $\alpha$ instead of $\delta$. □
\ No newline at end of file
diff --git a/samples/texts/1228241/page_16.md b/samples/texts/1228241/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..46dfb15a3cf46155cd1a5b28f59c3559292bdc75
--- /dev/null
+++ b/samples/texts/1228241/page_16.md
@@ -0,0 +1,13 @@
+## 6. Conclusion
+
+In this paper, we have assessed the tractability of a classical scheduling problem, with task preallocation, under various energy models. We have given several results related to CONTINUOUS speeds. However, while these are of conceptual importance, they cannot be achieved with physical devices, and we have analyzed several models enforcing a bounded number of achievable speeds, a.k.a. modes. In the classical DISCRETE model that arises from DVFS techniques, admissible speeds can be irregularly distributed, which motivates the VDD-HOPPING approach that mixes two consecutive modes optimally. While computing optimal speeds is NP-hard with discrete modes, it has polynomial complexity when mixing speeds. Intuitively, the VDD-HOPPING approach allows for smoothing out the discrete nature of the modes. An alternative (and simpler in practice) solution to VDD-HOPPING is the INCREMENTAL model, where each task is executed at a single speed, as in the DISCRETE model, but where consecutive modes are regularly spaced. Such a model can be made arbitrarily efficient, according to our approximation results.
+
+Altogether, this paper has laid the theoretical foundations for a comparative study of energy models. In recent years, we have observed an increased concern for green computing, and a rapidly growing number of approaches. It will be very interesting to see which energy-saving technological solutions will be implemented in future processor chips.
+
+Regardless of the (future) energy model, there are two important future research directions that can already be envisioned:
+
+* For those situations where the optimal solutions or approximation algorithms provided in this paper would be too costly, fast heuristics can easily be introduced. Typically, such heuristics would greedily perform local changes in the schedule until a local optimum has been reached. It would be very interesting to assess the energy savings achieved by such "fast" solutions with respect to the gain provided by the optimal solution.
+
+* This paper has dealt with a fixed (given) mapping of the task graph. In some situations, the user may well have the possibility to choose, say, the list-schedule that assigns tasks to physical resources. Given a deadline, the problem is already NP-complete without energy considerations. Introducing variable speeds together with an energy-oriented objective dramatically increases the combinatorial difficulty of the problem. Still, designing and evaluating fast yet efficient heuristics would be of great practical significance.
+
+**Acknowledgement.** We thank the reviewers for their observations and suggestions that greatly improved the final version of the paper.
\ No newline at end of file
diff --git a/samples/texts/1228241/page_17.md b/samples/texts/1228241/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..7d6764eace8cb4a681be96fc44d924c58fabd0c0
--- /dev/null
+++ b/samples/texts/1228241/page_17.md
@@ -0,0 +1,24 @@
+REFERENCES
+
1. Mills MP. The internet begins with coal. *Environment and Climate News* 1999.
+2. Ge R, Feng X, Cameron KW. Performance-constrained distributed DVS scheduling for scientific applications on power-aware clusters. *Proceedings of the ACM/IEEE conference on SuperComputing (SC)*, IEEE Computer Society, 2005; 34.
+3. Skadron K, Stan MR, Sankaranarayanan K, Huang W, Velusamy S, Tarjan D. Temperature-aware microarchitecture: modeling and implementation. *ACM Transactions on Architecture and Code Optimization* 2004; 1(1):94–125.
4. Hotta Y, Sato M, Kimura H, Matsuoka S, Boku T, Takahashi D. Profile-based optimization of power performance by using dynamic voltage scaling on a PC cluster. *Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS)*, IEEE Computer Society Press: Los Alamitos, CA, USA, 2006; 340, doi:http://doi.ieeecomputersociety.org/10.1109/IPDPS.2006.1639597.
+5. Ishihara T, Yasuura H. Voltage scheduling problem for dynamically variable voltage processors. *Proceedings of International Symposium on Low Power Electronics and Design (ISLPED)*, ACM Press, 1998; 197–202.
+6. Pruhs K, van Stee R, Uthaisombut P. Speed scaling of tasks with precedence constraints. *Theory of Computing Systems* 2008; 43:67–80.
+7. Chandrakasan AP, Sinha A. JouleTrack: A Web Based Tool for Software Energy Profiling. *Design Automation Conference*, IEEE Computer Society Press: Los Alamitos, CA, USA, 2001; 220–225.
+8. Aydin H, Yang Q. Energy-aware partitioning for multiprocessor real-time systems. *Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS)*, IEEE CS Press, 2003; 113–121.
+9. Chen JJ, Kuo TW. Multiprocessor energy-efficient scheduling for real-time tasks. *Proceedings of International Conference on Parallel Processing (ICPP)*, IEEE CS Press, 2005; 13–20.
+10. Rayward-Smith VJ, Burton FW, Janacek GJ. Scheduling parallel programs assuming preallocation. *Scheduling Theory and its Applications*, Chrétienne P, Coffman Jr EG, Lenstra JK, Liu Z (eds.), John Wiley and Sons, 1995.
+11. Wang L, von Laszewski G, Dayal J, Wang F. Towards Energy Aware Scheduling for Precedence Constrained Parallel Tasks in a Cluster with DVFS. *Proceedings of CCGrid’2010, the 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing*, 2010; 368–377, doi:10.1109/CCGRID.2010.19.
+12. Prathipati RB. Energy efficient scheduling techniques for real-time embedded systems. Master’s Thesis, Texas A&M University May 2004.
+13. Bansal N, Kimbrel T, Pruhs K. Speed scaling to manage energy and temperature. *Journal of the ACM* 2007; **54**(1):1–39, doi:http://doi.acm.org/10.1145/1206035.1206038.
+14. Okuma T, Yasuura H, Ishihara T. Software energy reduction techniques for variable-voltage processors. *Design Test of Computers*, IEEE Mar 2001; **18**(2):31–41, doi:10.1109/54.914613.
+15. Miermont S, Vivet P, Renaudin M. A Power Supply Selector for Energy- and Area-Efficient Local Dynamic Voltage Scaling. *Integrated Circuit and System Design. Power and Timing Modeling, Optimization and Simulation, Lecture Notes in Computer Science*, vol. 4644, Azémard N, Svensson L (eds.). Springer Berlin / Heidelberg, 2007; 556–565. URL http://dx.doi.org/10.1007/978-3-540-74442-9_54.
+16. Boyd S, Vandenberghe L. *Convex Optimization*. Cambridge University Press, 2004.
+17. Lee S, Sakurai T. Run-time voltage hopping for low-power real-time systems. *Proceedings of DAC’2000, the 37th Conference on Design Automation*, 2000; 806–809.
+18. Lahiri K, Raghunathan A, Dey S, Panigrahi D. Battery-driven system design: a new frontier in low power design. *Proceedings of ASP-DAC 2002, the 7th Asia and South Pacific Design Automation Conference and the 15th International Conference on VLSI Design*, 2002; 261–267, doi:10.1109/ASPDAC.2002.994932.
+19. Grosse P, Durand Y, Feautrier P. Methods for power optimization in SOC-based data flow systems. *ACM Trans. Des. Autom. Electron. Syst.* June 2009; **14**:38:1–38:20, doi:http://doi.acm.org/10.1145/1529255.1529260. URL http://doi.acm.org/10.1145/1529255.1529260.
20. Jejurikar R, Pereira C, Gupta R. Leakage aware dynamic voltage scaling for real-time embedded systems. *Proceedings of DAC’04, the 41st annual Design Automation Conference*, ACM: New York, NY, USA, 2004; 275–280, doi:http://doi.acm.org/10.1145/996566.996650.
+21. Chen JJ, Kuo CF. Energy-Efficient Scheduling for Real-Time Systems on Dynamic Voltage Scaling (DVS) Platforms. *Proceedings of the International Workshop on Real-Time Computing Systems and Applications*, IEEE Computer Society: Los Alamitos, CA, USA, 2007; 28–38, doi:http://doi.ieeecomputersociety.org/10.1109/RTCSA.2007.37.
+22. Kim KH, Buyya R, Kim J. Power Aware Scheduling of Bag-of-Tasks Applications with Deadline Constraints on DVS-enabled Clusters. *Proceedings of CCGRID 2007, the 7th IEEE International
\ No newline at end of file
diff --git a/samples/texts/1228241/page_19.md b/samples/texts/1228241/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..b3ae87876ef92b72442ec05accc3e98f7b7e4229
--- /dev/null
+++ b/samples/texts/1228241/page_19.md
@@ -0,0 +1,13 @@
+# 1. Introduction
+
+The energy consumption of computational platforms has recently become a critical problem, both for economic and environmental reasons [1]. As an example, the Earth Simulator requires about 12 MW (megawatts) of peak power, and PetaFlop systems may require 100 MW of power, nearly the output of a small power plant (300 MW). At $100 per MWh, peak operation of a PetaFlop machine may thus cost $10,000 per hour [2]. Current estimates state that cooling costs $1 to $3 per watt of heat dissipated [3]. This is just one of the many economic reasons why energy-aware scheduling has proved to be an important issue in the past decade, even without considering battery-powered systems such as laptops and embedded systems. As an example, the Green500 list (www.green500.org) provides rankings of the most energy-efficient supercomputers in the world, thereby raising even more awareness about power consumption.
+
+To help reduce energy dissipation, processors can run at different speeds. Their power consumption is the sum of a static part (the cost for a processor to be turned on) and a dynamic part, which is a strictly convex function of the processor speed, so that the execution of a given amount of work costs more power if a processor runs in a higher mode [4]. More precisely, a processor running at speed *s* dissipates *s*³ watts [5, 6, 7, 8, 9] per time-unit, hence consumes *s*³ × *d* joules when operated during *d* units of time. Faster speeds allow for a faster execution, but they also lead to a much higher (supra-linear) power consumption.
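In this model, executing $w$ operations at speed $s$ takes $w/s$ time-units at $s^3$ watts, hence $w \times s^2$ joules. A one-line illustration (hypothetical numbers):

```python
# Energy of running w operations at speed s under the cubic power model.
def energy(w, s):
    return (w / s) * s ** 3  # duration * power = w * s^2

# Doubling the speed halves the duration but quadruples the energy.
print(energy(100.0, 1.0), energy(100.0, 2.0))
```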
+
+Energy-aware scheduling aims at minimizing the energy consumed during the execution of the target application. Obviously, it makes sense only if it is coupled with some performance bound to achieve; otherwise, the optimal solution is always to run each processor at the slowest possible speed.
+
+In this paper, we investigate energy-aware scheduling strategies for executing a task graph on a set of processors. The main originality is that we assume that the mapping of the task graph is given, say by an ordered list of tasks to execute on each processor. There are many situations in which this problem is important, such as optimizing for legacy applications, or accounting for affinities between tasks and resources, or even when tasks are pre-allocated [10], for example for security reasons. In such situations, assume that a list-schedule has been computed for the task graph, and that its execution time should not exceed a deadline *D*. We do not have the freedom to change the assignment of a given task, but we can change its speed to reduce energy consumption, provided that the deadline *D* is not exceeded after the speed change. Rather than using a local approach such as backfilling [11, 12], which only reclaims gaps in the schedule, we consider the problem as a whole, and we assess the impact of several speed variation models on its complexity. More precisely, we investigate the following models:
+
+**Continuous model.** Processors can have arbitrary speeds, and can vary them continuously: this model is unrealistic (an arbitrary speed value, say $\sqrt{e^{\pi}}$, cannot be obtained in practice) but it is theoretically appealing [13]. A maximum speed, $s_{max}$, cannot be exceeded.
+
+**Discrete model.** Processors have a discrete number of predefined speeds (or frequencies), which correspond to different voltages that the processor can be subjected to [14]. Switching frequencies is not allowed during the execution of a given task, but two different tasks scheduled on a same processor can be executed at different frequencies.
\ No newline at end of file
diff --git a/samples/texts/1228241/page_2.md b/samples/texts/1228241/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..e393033dc68a3b67050bf34cebce252518cd2c60
--- /dev/null
+++ b/samples/texts/1228241/page_2.md
@@ -0,0 +1,16 @@
+We conclude the study of this simple example with a short discussion on the energy savings that can be achieved. All models have a maximum speed $s_{max} = 6$. Executing the four tasks at maximum speed leads to consuming an energy $E_{max} = 8 \times 6^2 = 288$. Such an execution completes within a delay $D = 1$. We clearly see the trade-off between execution time and energy consumption here, since we gain more than half the energy by slowing down the execution from $D = 1$ to $D = 1.5$. Note that with $D = 1$, we can still slow down task $T_2$ to speed 4, and gain a little over the brute-force solution. Hence, even such a toy example illustrates the benefits of energy-aware schedules. Obviously, with larger examples, the energy savings will be even more dramatic, depending upon the range of available speeds and the tightness of the execution deadline. In fact, the maximal energy gain is not bounded: when executing each task as slowly as possible (instead of as fast as possible), we save an energy $(s_{max}^2 - s_{min}^2) \times W_{total}$, where $W_{total}$ is the sum of all task weights, and this quantity can be arbitrarily large. One of the main contributions of this paper is to provide optimal energy-aware algorithms for each model (or guaranteed polynomial approximations for NP-complete instances).
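
These figures can be verified mechanically from the model $E = \sum_i w_i s_i^2$ (a sanity check of the example, not part of the paper):

```python
weights = [3, 2, 1, 2]                      # w1..w4 from Section 3.3
E_max = sum(w * 6**2 for w in weights)      # all tasks at s_max = 6, meets D = 1
assert E_max == 288
# Slowing only T2 down to speed 4 still meets D = 1 (0.5 + 0.5 = 1 on P1)
E_T2_slow = E_max - 2 * 6**2 + 2 * 4**2
assert E_T2_slow == 248                     # a small gain over brute force
```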
+
+## 4. The Continuous model
+
+With the CONTINUOUS model, processor speeds can take any value between 0 and $s_{max}$. First we prove that, with this model, the processors do not change their speed during the execution of a task (Section 4.1). Then, we derive in Section 4.2 the optimal speed values for special execution graph structures, expressed as closed form algebraic formulas, and we show that these values may be irrational (as already illustrated in the example in Section 3.3). Finally, we formulate the problem for general DAGs as a convex optimization program in Section 4.3.
+
+### 4.1. Preliminary lemma
+
+**Lemma 1 (constant speed per task)** *In any optimal solution with the CONTINUOUS model, each task is executed at constant speed, i.e., a processor does not change its speed during the execution of a task.*
+
+**Proof.** Suppose that in the optimal solution, there is a task whose speed changes during the execution. Consider the first time-step at which the change occurs: the computation begins at speed $s$ from time $t$ to time $t'$, and then continues at speed $s'$ until time $t''$. The total energy consumption for this task in the time interval $[t; t'']$ is $E = (t' - t) \times s^3 + (t'' - t') \times (s')^3$. Moreover, the amount of work done for this task is $W = (t' - t) \times s + (t'' - t') \times s'$.
+
+If we run the task during the whole interval $[t; t'']$ at the constant speed $W/(t'' - t)$, the same amount of work is done within the same time. However, the energy consumption during this interval is now $E' = (t'' - t) \times (W/(t'' - t))^3$. By strict convexity of the function $x \mapsto x^3$, we obtain $E' < E$ since $s \neq s'$ and $t < t' < t''$. This contradicts the optimality of the first solution, which concludes the proof. □
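
The convexity argument can be illustrated on concrete numbers (illustrative speeds, not from the paper):

```python
t0, t1, t2 = 0.0, 1.0, 2.0                      # t < t' < t''
s, s_prime = 2.0, 4.0                           # the two speeds used by the task
W = (t1 - t0) * s + (t2 - t1) * s_prime         # work done: 6.0
E = (t1 - t0) * s**3 + (t2 - t1) * s_prime**3   # two-phase energy: 8 + 64 = 72
E_const = (t2 - t0) * (W / (t2 - t0))**3        # constant speed 3.0: 2 * 27 = 54
assert E_const < E                              # constant speed is strictly cheaper
```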
+
+Copyright © 2011 John Wiley & Sons, Ltd.
+Prepared using cpeauth.cls
\ No newline at end of file
diff --git a/samples/texts/1228241/page_20.md b/samples/texts/1228241/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..1cc36c14cc5f62b2ddde06b9a65efb860c53f7fd
--- /dev/null
+++ b/samples/texts/1228241/page_20.md
@@ -0,0 +1,11 @@
+**Vdd-Hopping model.** This model is similar to the DISCRETE one, except that switching modes during the execution of a given task is allowed: any rational speed can be simulated, by simply switching, at the appropriate time during the execution of a task, between two consecutive modes [15]. Note that $V_{DD}$ usually represents the supply voltage, hence the name VDD-HOPPING.
+
+**Incremental model.** In this variant of the DISCRETE model, we introduce a value $\delta$ that corresponds to the minimum permissible speed increment, induced by the minimum voltage increment that can be achieved when controlling the processor. This new model aims at capturing a realistic version of the DISCRETE model, where the different modes are spread regularly instead of being arbitrarily chosen.
+
+Our main contributions are the following. For the CONTINUOUS model, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem [16] for general DAGs. For the VDD-HOPPING model, we show that the optimal solution for general DAGs can be computed in polynomial time, using a (rational) linear program. Finally, for the DISCRETE and INCREMENTAL models, we show that the problem is NP-complete. Furthermore, we provide approximation algorithms that rely on the polynomial algorithm for the VDD-HOPPING model, and we compare their solution with the optimal CONTINUOUS solution.
+
+The paper is organized as follows. We start with a survey of related literature in Section 2. We then provide the formal description of the framework and of the energy models in Section 3, together with a simple example to illustrate the different models. The next two sections constitute the heart of the paper: in Section 4, we provide analytical formulas for continuous speeds, and the formulation as a convex optimization problem. In Section 5, we assess the complexity of the problem with all the discrete models (DISCRETE, VDD-HOPPING and INCREMENTAL), and we discuss approximation algorithms. Finally, we conclude in Section 6.
+
+## 2. Related work
+
+Reducing the energy consumption of computational platforms is an important research topic, and many techniques at the process, circuit design, and micro-architectural levels have been proposed [17, 18, 19]. The dynamic voltage and frequency scaling (DVFS) technique has been extensively studied, since it may lead to efficient energy/performance trade-offs [20, 2, 13, 21, 22, 23, 11]. Current microprocessors (for instance, from AMD [24] and Intel [25]) allow the speed to be set dynamically. Indeed, by lowering supply voltage, hence processor clock frequency, it is possible to achieve important reductions in power consumption, without necessarily increasing the execution time. We first discuss different optimization problems that arise in this context. Then we review energy models.
\ No newline at end of file
diff --git a/samples/texts/1228241/page_21.md b/samples/texts/1228241/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..f6666c440d322787c42e6e9d435338dea62f8a2f
--- /dev/null
+++ b/samples/texts/1228241/page_21.md
@@ -0,0 +1,11 @@
+## 2.1. DVFS and optimization problems
+
+When dealing with energy consumption, the most common objective is to minimize the energy consumption while ensuring a deadline on the execution time (i.e., a real-time constraint), as discussed in the following papers.
+
+In [14], Okuma et al. demonstrate that voltage scaling is far more effective than the shutdown approach, which simply stops the power supply when the system is inactive. Their target processor employs just a few discretely variable voltages. De Langen and Juurlink [26] discuss leakage-aware scheduling heuristics that investigate both dynamic voltage scaling (DVS) and processor shutdown, since static power consumption due to leakage current is expected to increase significantly. Chen et al. [27] consider parallel sparse applications, and they show that when scheduling applications modeled by a directed acyclic graph with a well-identified critical path, it is possible to lower the voltage during non-critical execution of tasks, with no impact on the execution time. Similarly, Wang et al. [11] study the slack time for non-critical jobs; they extend their execution time and thus reduce the energy consumption without increasing the total execution time. Kim et al. [22] provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints, based on dynamic voltage scaling. Their goal is to minimize power consumption as well as to meet the deadlines specified by application users.
+
+For real-time embedded systems, slack reclamation techniques are used. Lee and Sakurai [17] show how to exploit slack time arising from workload variation, thanks to a software feedback control of supply voltage. Prathipati [12] discusses techniques to take advantage of run-time variations in the execution time of tasks, determining the minimum voltage under which each task can be executed while guaranteeing its deadline. Experiments are then conducted on the Intel StrongARM SA-1100 processor, which has eleven different frequencies, and the Intel PXA250 XScale embedded processor with four frequencies. In [28], the goal of Xu et al. is to schedule a set of independent tasks, given a worst-case execution cycle (WCEC) for each task and a global deadline, while accounting for time and energy penalties when the processor frequency changes. The frequency of the processor can be lowered when some slack is obtained dynamically, typically when a task runs faster than its WCEC. Yang and Lin [23] discuss algorithms with preemption, using DVS techniques; substantial energy can be saved by these algorithms, which succeed in reclaiming the static and dynamic slack time with little overhead.
+
+Since an increasing number of systems are powered by batteries, maximizing battery life is also an important optimization problem. Battery-efficient systems can be obtained with similar techniques of dynamic voltage and frequency scaling, as described by Lahiri et al. in [18]. Another optimization criterion is the energy-delay product, since it accounts for a trade-off between performance and energy consumption, as discussed for instance by Gonzalez and Horowitz in [29]. We do not discuss these latter optimization problems further, since our goal is to minimize the energy consumption with a fixed deadline.
+
+In this paper, the application is a task graph (directed acyclic graph), and we assume that the mapping, i.e., an ordered list of tasks to execute on each processor, is given. Hence, our problem is closely related to slack reclamation techniques, but instead of focusing only on non-critical tasks as for instance in [11], we consider the problem as a whole. Our contribution is
\ No newline at end of file
diff --git a/samples/texts/1228241/page_22.md b/samples/texts/1228241/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f0c7b286814e73acf249441354f021467823f75
--- /dev/null
+++ b/samples/texts/1228241/page_22.md
@@ -0,0 +1,15 @@
+to perform an exhaustive complexity study for different energy models. In the next paragraph, we discuss related work on each energy model.
+
+## 2.2. Energy models
+
+Several energy models are considered in the literature, and they can all be categorized in one of the four models investigated in this paper, i.e., CONTINUOUS, DISCRETE, VDD-HOPPING or INCREMENTAL.
+
+The CONTINUOUS model is used mainly for theoretical studies. For instance, Yao et al. [30], followed by Bansal et al. [13], aim at scheduling a collection of tasks (with release time, deadline and amount of work), and the solution specifies both the time at which each task is scheduled and the speed at which it is executed. In these papers, the speed can take any value, hence following the CONTINUOUS model.
+
+We believe that the most widely used model is the DISCRETE one. Indeed, current processors have only a small number of discrete frequencies [24, 25, 14, 12]. Therefore, most of the papers discussed above follow this model. Some studies exploit the CONTINUOUS model to determine the smallest frequency required to run a task, and then choose the closest upper discrete value, as for instance [12] and [31].
+
+Recently, a new local dynamic voltage scaling architecture has been developed, based on the VDD-HOPPING model [15, 32, 33]. It was shown in [17] that significant power can be saved by using two distinct voltages, and architectures using this principle have been developed (see for instance [34]). Compared to traditional power converters, a new design with no needs for large passives or costly technological options has been validated in a STMicroelectronics CMOS 65nm low-power technology [15].
+
+To the best of our knowledge, this paper introduces the INCREMENTAL model for the first time. The main rationale is that future technologies may well have an increased number of possible frequencies, and these will follow a regular pattern. For instance, note that the SA-1100 processor, considered in [12], has eleven frequencies that are equidistant, i.e., they follow the INCREMENTAL model. Lee and Sakurai [17] exploit discrete levels of clock frequency as $f, f/2, f/3, \ldots$, where *f* is the master (i.e., the highest) system clock frequency. This model is closer to the DISCRETE model, although it exhibits a regular pattern similar to the INCREMENTAL model.
+
+Our work is the first attempt to compare these different models: on the one hand, we assess the impact of the model on the problem complexity (polynomial vs NP-hard), and on the other hand, we provide approximation algorithms building upon these results. The closest work to ours is the paper by Zhang et al. [31], in which the authors also consider the mapping of directed acyclic graphs, and compare the DISCRETE and the CONTINUOUS models. We go beyond their work in this paper, with an exhaustive complexity study, closed-form formulas for the continuous model, and the comparison with the VDD-HOPPING and INCREMENTAL models.
\ No newline at end of file
diff --git a/samples/texts/1228241/page_23.md b/samples/texts/1228241/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f48bfec65cbfdf881d2e2c5023ccadd3b795ee2
--- /dev/null
+++ b/samples/texts/1228241/page_23.md
@@ -0,0 +1,50 @@
+## 3. Framework
+
+First we detail the optimization problem in Section 3.1. Then we describe the four energy
+models in Section 3.2. Finally, we illustrate the models and motivate the problem with an
+example in Section 3.3.
+
+### 3.1. Optimization problem
+
+Consider an application task graph $\mathcal{G} = (V, \mathcal{E})$, with $n = |V|$ tasks denoted as $V = \{T_1, T_2, \ldots, T_n\}$, and where the set $\mathcal{E}$ denotes the precedence edges between tasks. Task $T_i$ has a cost $w_i$ for $1 \le i \le n$. We assume that the tasks in $\mathcal{G}$ have been allocated onto a parallel platform made up of identical processors. We define the *execution graph* generated by this allocation as the graph $G = (V, E)$, with the following augmented set of edges:
+
+• $\mathcal{E} \subseteq E$: if an edge exists in the precedence graph, it also exists in the execution graph;
+
+• if $T_1$ and $T_2$ are executed successively, in this order, on the same processor, then
+$(T_1, T_2) \in E$.
+
+The goal is to minimize the energy consumed during the execution while enforcing a
+deadline *D* on the execution time. We formalize the optimization problem in the simpler case
+where each task is executed at constant speed. This strategy is optimal for the CONTINUOUS
+model (by a convexity argument) and for the DISCRETE and INCREMENTAL models (by
+definition). For the VDD-HOPPING model, we reformulate the problem in Section 5.1. For
+each task $T_i \in V$, $b_i$ is the starting time of its execution, $d_i$ is the duration of its execution,
+and $s_i$ is the speed at which it is executed. We obtain the following formulation of the
+MINENERGY(*G*, *D*) problem, given an execution graph *G* = (*V*, *E*) and a deadline *D*; the
+$s_i$ values are variables, whose values are constrained by the energy model (see Section 3.2).
+
+$$
+\begin{array}{ll}
+\text{Minimize} & \displaystyle\sum_{i=1}^{n} s_i^3 \times d_i \\
+\text{subject to} & (i) \quad w_i = s_i \times d_i \quad \text{for each task } T_i \in V \\
+& (ii) \quad b_i + d_i \leq b_j \quad \text{for each edge } (T_i, T_j) \in E \\
+& (iii) \quad b_i + d_i \leq D \quad \text{for each task } T_i \in V \\
+& (iv) \quad b_i \geq 0 \quad \text{for each task } T_i \in V
+\end{array}
+\tag{1}
+$$
+
+Constraint (i) states that the whole task can be executed in time $d_i$ using speed $s_i$.
+Constraint (ii) accounts for all dependencies, and constraint (iii) ensures that the execution
+time does not exceed the deadline $D$. Finally, constraint (iv) enforces that starting times
+are non-negative. The energy consumed throughout the execution is the objective function.
+It is the sum, for each task, of the energy consumed by this task, as we detail in the next
+section. Note that $d_i = w_i/s_i$, and therefore the objective function can also be expressed as
+$\sum_{i=1}^{n} s_i^2 \times w_i$.
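
The equivalence of the two forms of the objective is easy to check numerically (arbitrary illustrative values, not from the paper):

```python
speeds  = [4.0, 2.5, 3.0]                              # illustrative s_i values
weights = [3.0, 2.0, 1.0]                              # illustrative w_i values
durations = [w / s for w, s in zip(weights, speeds)]   # d_i = w_i / s_i
obj_time = sum(s**3 * d for s, d in zip(speeds, durations))
obj_work = sum(s**2 * w for s, w in zip(speeds, weights))
assert abs(obj_time - obj_work) < 1e-12                # same objective value
```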
+
+Note that, whatever the energy model, there is a maximum speed that cannot be exceeded,
+denoted $s_{max}$. We point out that there is a solution to the minimization problem if and only
+if there is a solution with $s_i = s_{max}$ for all $1 \le i \le n$. Such a solution would correspond to
+executing each task as early as possible (according to constraints (ii) and (iv)) and as fast as
+possible. The optimal solution then slows down tasks to save as much energy as possible, while
+enforcing the deadline constraint. There is no guarantee on the uniqueness of the solution,
\ No newline at end of file
diff --git a/samples/texts/1228241/page_24.md b/samples/texts/1228241/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..eeefe84270324d77e74c42119a66bf541527284f
--- /dev/null
+++ b/samples/texts/1228241/page_24.md
@@ -0,0 +1,27 @@
+since it may be possible to modify the beginning time of a task without affecting the energy
+consumption, if some of the constraints (ii) are not tight.
+
+### 3.2. Energy models
+
+In all models, when a processor operates at speed $s$ during $d$ time-units, the corresponding consumed energy is $s^3 \times d$, which is the dynamic part of the energy consumption, following the classical models of the literature [5, 6, 7, 8, 9]. Note that we do not take static energy into account, because all processors are up and alive during the whole execution. We now detail the possible speed values in each energy model, which should be added as a constraint in Equation (1).
+
+* In the CONTINUOUS model, processors can have arbitrary speeds, from 0 to a maximum value $s_{max}$, and a processor can change its speed at any time during execution.
+
+* In the DISCRETE model, processors have a set of possible speed values, or modes, denoted as $s_1, ..., s_m$. There is no assumption on the range and distribution of these modes. The speed of a processor cannot change during the computation of a task, but it can change from task to task.
+
+* In the VDD-HOPPING model, a processor can run at different speeds $s_1, ..., s_m$, as in the previous model, but it can also change its speed during a computation. The energy consumed during the execution of one task is the sum, on each time interval with constant speed $s$, of the energy consumed during this interval at speed $s$.
+
+* In the INCREMENTAL model, we introduce a value $\delta$ that corresponds to the minimum permissible speed (i.e., voltage) increment. That means that possible speed values are obtained as $s = s_{min} + i \times \delta$, where $i$ is an integer such that $0 \le i \le \frac{s_{max}-s_{min}}{\delta}$. Admissible speeds lie in the interval [$s_{min}, s_{max}$]. This new model aims at capturing a realistic version of the DISCRETE model, where the different modes are spread regularly between $s_1 = s_{min}$ and $s_m = s_{max}$, instead of being arbitrarily chosen. It is intended as the modern counterpart of a potentiometer knob!
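
The set of admissible INCREMENTAL speeds can be enumerated directly; the helper below is a sketch (the function name is ours, not the paper's):

```python
def incremental_speeds(s_min: float, s_max: float, delta: float) -> list[float]:
    # Admissible speeds: s = s_min + i * delta for integer i,
    # with 0 <= i <= (s_max - s_min) / delta.
    n = int((s_max - s_min) / delta)
    return [s_min + i * delta for i in range(n + 1)]

# The INCREMENTAL setting of the example in Section 3.3:
assert incremental_speeds(2, 6, 2) == [2, 4, 6]
```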
+
+### 3.3. Example
+
+Consider an application with four tasks of costs $w_1 = 3$, $w_2 = 2$, $w_3 = 1$ and $w_4 = 2$, and one precedence constraint $T_1 \rightarrow T_3$. We assume that $T_1$ and $T_2$ are allocated, in this order, onto processor $P_1$, while $T_3$ and $T_4$ are allocated, in this order, on processor $P_2$. The resulting execution graph $G$ is given in Figure 1, with two precedence constraints added to the initial task graph. The deadline on the execution time is $D = 1.5$.
+
+We set the maximum speed to $s_{max} = 6$ for the CONTINUOUS model. For the DISCRETE and VDD-HOPPING models, we use the set of speeds $s_1^{(d)} = 2$, $s_2^{(d)} = 5$ and $s_3^{(d)} = 6$. Finally, for the INCREMENTAL model, we set $\delta = 2$, $s_{min} = 2$ and $s_{max} = 6$, so that possible speeds are $s_1^{(i)} = 2$, $s_2^{(i)} = 4$ and $s_3^{(i)} = 6$. We aim at finding the optimal execution speed $s_i$ for each task $T_i$ ($1 \le i \le 4$), i.e., the values of $s_i$ that minimize the energy consumption.
+
+With the CONTINUOUS model, the optimal speeds are irrational values, and we obtain
+
+$$s_1 = \frac{2}{3}(3 + 35^{1/3}) \approx 4.18; \quad s_2 = s_1 \times \frac{2}{35^{1/3}} \approx 2.56; \quad s_3 = s_4 = s_1 \times \frac{3}{35^{1/3}} \approx 3.83.$$
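
The closed-form speeds above can be checked numerically against the deadline and the reported energy (a sanity check of the example):

```python
s1 = (2 / 3) * (3 + 35 ** (1 / 3))
s2 = s1 * 2 / 35 ** (1 / 3)
s3 = s1 * 3 / 35 ** (1 / 3)                # = s4
time_P1 = 3 / s1 + 2 / s2                  # T1 then T2 on P1
time_P2 = 3 / s1 + (1 + 2) / s3            # T3, T4 start after T1 (precedence)
assert abs(time_P1 - 1.5) < 1e-9 and abs(time_P2 - 1.5) < 1e-9   # both tight
E_c = 3 * s1**2 + 2 * s2**2 + 3 * s3**2    # energy of the continuous optimum
assert abs(E_c - 109.6) < 0.1
```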
+
+Copyright © 2011 John Wiley & Sons, Ltd.
+Prepared using cpeauth.cls
\ No newline at end of file
diff --git a/samples/texts/1228241/page_25.md b/samples/texts/1228241/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..f9821f7f0ebf7de491a5e6f5f4c82c99f583ecee
--- /dev/null
+++ b/samples/texts/1228241/page_25.md
@@ -0,0 +1,9 @@
+Figure 1: Execution graph for the example.
+
+Note that all speeds are lower than the maximum $s_{max}$. These values are obtained thanks to the formulas derived in Section 4. The energy consumption is then $E_{opt}^{(c)} = \sum_{i=1}^{4} w_i \times s_i^2 = 3 \cdot s_1^2 + 2 \cdot s_2^2 + 3 \cdot s_3^2 \approx 109.6$. The execution time is $\frac{w_1}{s_1} + \max(\frac{w_2}{s_2}, \frac{w_3+w_4}{s_3})$, and with this solution, it is equal to the deadline *D* (actually, both processors reach the deadline, otherwise we could slow down the execution of one task).
+
+For the DISCRETE model, if we execute all tasks at speed $s_2^{(d)} = 5$, we obtain an energy $E = 8 \times 5^2 = 200$. A better solution is obtained with $s_1 = s_3^{(d)} = 6$, $s_2 = s_3 = s_1^{(d)} = 2$ and $s_4 = s_2^{(d)} = 5$, which turns out to be optimal: $E_{opt}^{(d)} = 3 \times 36 + (2+1) \times 4 + 2 \times 25 = 170$. Note that $E_{opt}^{(d)} > E_{opt}^{(c)}$, i.e., the optimal energy consumption with the DISCRETE model is much higher than the one achieved with the CONTINUOUS model. Indeed, in this case, even though the first processor executes during $3/6 + 2/2 = D$ time units, the second processor remains idle since $3/6 + 1/2 + 2/5 = 1.4 < D$. The problem turns out to be NP-hard (see Section 5.2), and the solution has been found by performing an exhaustive search.
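
The exhaustive search can be reproduced in a few lines for this four-task instance; the schedule structure ($T_1, T_2$ on $P_1$ and $T_3, T_4$ on $P_2$, with the $T_1 \to T_3$ precedence) is taken from Section 3.3, and the helper names are ours:

```python
from itertools import product

D, weights = 1.5, [3, 2, 1, 2]             # w1..w4
modes = [2, 5, 6]                          # the three DISCRETE speeds

def feasible(s):
    f1 = weights[0] / s[0]                 # T1 finishes on P1
    f2 = f1 + weights[1] / s[1]            # T2 follows T1 on P1
    f3 = f1 + weights[2] / s[2]            # T3 waits for T1, first on P2
    f4 = f3 + weights[3] / s[3]            # T4 follows T3 on P2
    return f2 <= D and f4 <= D

best = min(
    sum(w * s**2 for w, s in zip(weights, speeds))
    for speeds in product(modes, repeat=4)
    if feasible(speeds)
)
assert best == 170                         # matches the optimum quoted above
```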
+
+With the VDD-HOPPING model, we set $s_1 = s_2^{(d)} = 5$; for the other tasks, we run part of the time at speed $s_2^{(d)} = 5$, and part of the time at speed $s_1^{(d)} = 2$ in order to use the idle time and lower the energy consumption. $T_2$ is executed at speed $s_1^{(d)}$ during time $\frac{5}{6}$ and at speed $s_2^{(d)}$ during time $\frac{2}{30}$ (i.e., the first processor executes during time $3/5 + 5/6 + 2/30 = 1.5 = D$, and all the work for $T_2$ is done: $2 \times 5/6 + 5 \times 2/30 = 2 = w_2$). $T_3$ is executed at speed $s_2^{(d)}$ (during time $1/5$), and finally $T_4$ is executed at speed $s_1^{(d)}$ during time $0.5$ and at speed $s_2^{(d)}$ during time $1/5$ (i.e., the second processor executes during time $3/5 + 1/5 + 0.5 + 1/5 = 1.5 = D$, and all the work for $T_4$ is done: $2 \times 0.5 + 5 \times 1/5 = 2 = w_4$). This set of speeds turns out to be optimal (i.e., it is the optimal solution of the linear program introduced in Section 5.1), with an energy consumption $E_{opt}^{(v)} = (3/5 + 2/30 + 1/5 + 1/5) \times 5^3 + (5/6 + 0.5) \times 2^3 = 144$. As expected, $E_{opt}^{(c)} \le E_{opt}^{(v)} \le E_{opt}^{(d)}$, i.e., the VDD-HOPPING solution stands between the optimal CONTINUOUS solution, and the more constrained DISCRETE solution.
+
+For the INCREMENTAL model, the reasoning is similar to the DISCRETE case, and the optimal solution is obtained by an exhaustive search: all tasks should be executed at speed $s_2^{(i)} = 4$, with an energy consumption $E_{opt}^{(i)} = 8 \times 4^2 = 128 > E_{opt}^{(c)}$. It turns out to be better than DISCRETE and VDD-HOPPING, since its discrete speed values happen to be better suited to this example.
\ No newline at end of file
diff --git a/samples/texts/1228241/page_3.md b/samples/texts/1228241/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..311c1633bcd10209312a85a2c710e22d8f62bf4d
--- /dev/null
+++ b/samples/texts/1228241/page_3.md
@@ -0,0 +1,19 @@
+## 4.2. Special execution graphs
+
+### 4.2.1. Independent tasks
+
+Consider the problem of minimizing the energy of *n* independent tasks (i.e., each task is mapped onto a distinct processor, and there are no precedence constraints in the execution graph), while enforcing a deadline *D*.
+
+**Proposition 1 (independent tasks)** When *G* is composed of independent tasks {$T_1, \dots, T_n$}, the optimal solution to MINENERGY(*G*, *D*) is obtained when each task $T_i$ ($1 \le i \le n$) is computed at speed $s_i = \frac{w_i}{D}$. If there is a task $T_i$ such that $s_i > s_{max}$, then the problem has no solution.
+
+**Proof.** For task $T_i$, the speed $s_i$ corresponds to the slowest speed at which the processor can execute the task, so that the deadline is not exceeded. If $s_i > s_{max}$, the corresponding processor will never be able to complete its execution before the deadline, therefore there is no solution. To conclude the proof, we note that any other solution must meet the deadline constraint, and therefore its speeds should be such that $\frac{w_i}{s_i} \le D$, i.e., $s_i \ge \frac{w_i}{D}$. These values are at least as high as the $s_i$'s of the proposed solution, and hence lead to an energy consumption at least as high. Therefore, this solution is optimal. $\square$
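
Proposition 1 translates directly into code (a sketch; the function name is ours):

```python
def independent_speeds(weights, D, s_max):
    # Slowest feasible speed for each independent task: s_i = w_i / D.
    speeds = [w / D for w in weights]
    if max(speeds) > s_max:
        return None                        # some task cannot meet the deadline
    return speeds

# The four task weights of Section 3.3, taken as independent tasks:
assert independent_speeds([3, 2, 1, 2], D=1.5, s_max=6) == [2.0, 4/3, 2/3, 4/3]
assert independent_speeds([10], D=1.5, s_max=6) is None   # 10/1.5 > 6
```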
+
+### 4.2.2. Linear chain of tasks
+
+This case corresponds for instance to *n* independent tasks {$T_1, \dots, T_n$} executed onto a single processor. The execution graph is then a linear chain (order of execution of the tasks), with $T_i \to T_{i+1}$, for $1 \le i < n$.
+
+**Proposition 2 (linear chain)** When *G* is a linear chain of tasks, the optimal solution to MINENERGY(*G*, *D*) is obtained when each task is executed at speed *s* = *W/D*, with *W* = $\sum_{i=1}^{n} w_i$. If *s* > *s*max, then there is no solution.
+
+**Proof.** Suppose that in the optimal solution, tasks $T_i$ and $T_j$ are such that $s_i < s_j$. The total energy consumption is $E_{opt}$. We define *s* such that the execution of both tasks running at speed *s* takes the same amount of time as in the optimal solution, i.e., $(w_i + w_j)/s = w_i/s_i + w_j/s_j$, which gives $s = \frac{(w_i+w_j)\, s_i s_j}{w_i s_j + w_j s_i}$. Note that $s_i < s < s_j$ (it is the barycenter of two points with positive mass).
+
+We consider a solution such that the speed of task $T_k$, for $1 \le k \le n$, with $k \neq i$ and $k \neq j$, is the same as in the optimal solution, and the speed of tasks $T_i$ and $T_j$ is *s*. By definition of *s*, the execution time has not been modified. The energy consumption of this solution is *E*, where $E_{opt} - E = w_i s_i^2 + w_j s_j^2 - (w_i + w_j)s^2$, i.e., the difference of energy with the optimal solution is only impacted by tasks $T_i$ and $T_j$, for which the speed has been modified. By convexity of the function $x \mapsto x^2$, we obtain $E_{opt} > E$, which contradicts its optimality. Therefore, in the optimal solution, all tasks have the same execution speed. Moreover, the energy consumption is minimized when the speed is as low as possible, while the deadline is not exceeded. Therefore, the execution speed of all tasks is $s = W/D$. $\square$
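
The exchange argument can be checked on concrete values (illustrative speeds, not from the paper):

```python
wi, wj = 3.0, 2.0
si, sj = 2.0, 5.0                                  # two different speeds
s = (wi + wj) * si * sj / (wi * sj + wj * si)      # common "barycentric" speed
assert si < s < sj
assert abs((wi + wj) / s - (wi / si + wj / sj)) < 1e-12   # same execution time
E_two = wi * si**2 + wj * sj**2                    # energy with two speeds
E_one = (wi + wj) * s**2                           # energy at the common speed
assert E_one < E_two                               # strictly less energy
```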
\ No newline at end of file
diff --git a/samples/texts/1228241/page_5.md b/samples/texts/1228241/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..df4013d41cc3c8219c6d039bc56c483233cc8ff6
--- /dev/null
+++ b/samples/texts/1228241/page_5.md
@@ -0,0 +1,19 @@
+Finally, we compute the exact expression of $\mathbf{minE}(G, D) = f(s_0)$, when $s_0 \le s_{max}$:
+
+$$f(s_0) = s_0^2 \left( w_0 + \frac{W_3}{(s_0 D - w_0)^2} \right) = \left( \frac{W_3^{1/3} + w_0}{D} \right)^2 \left( \frac{W_3}{W_3^{2/3}} + w_0 \right) = \frac{\left( W_3^{1/3} + w_0 \right)^3}{D^2},$$
+
+which concludes the proof. $\square$
+
+**Corollary 2 (equivalent tasks for speed)** Consider a fork or join graph with tasks $T_i$, $0 \le i \le n$, and a deadline $D$, and assume that the speeds in the optimal solution to $\text{MINENERGY}(G, D)$ do not exceed $s_{max}$. Then, these speeds are the same as in the optimal solution for $n+1$ independent tasks $T'_0, T'_1, \dots, T'_n$, where $w'_0 = (\sum_{j=1}^n w_j^3)^{1/3} + w_0$, and, for $1 \le i \le n$, $w'_i = w_0 \cdot \frac{w_i}{(\sum_{j=1}^n w_j^3)^{1/3}}$.
+
+**Corollary 3 (equivalent task for energy)** Consider a fork or join graph $G$ and a deadline $D$, and assume that the speeds in the optimal solution to $\text{MINENERGY}(G, D)$ do not exceed $s_{max}$. We say that the graph $G$ is equivalent to the graph $G^{(eq)}$, consisting of a single task $T_0^{(eq)}$ of weight $w_0^{(eq)} = (\sum_{i=1}^n w_i^3)^{1/3} + w_0$, because the minimum energy consumptions of both graphs are identical: $\mathbf{minE}(G, D)=\mathbf{minE}(G^{(eq)}, D)$.
+
+### 4.2.4. Trees
+
+We extend the results on fork graphs to a tree $G = (V, E)$ with $|V| = n + 1$ tasks. Let $T_0$ be the root of the tree; it has $k$ children, each of which is itself the root of a tree. A tree can therefore be seen as a fork graph whose tasks are themselves trees.
+
+The previous results for fork graphs naturally lead to an algorithm that peels off branches of the tree, starting with the leaves, and replaces each fork subgraph in the tree, composed of a root and $k$ children, by one task (as in Corollary 3) that becomes the unique child of the root's parent in the tree. We say that this task is equivalent to the fork graph, since the optimal energy consumption is the same. The equivalent cost of this task is computed by a call to the **eq** procedure, while the **tree** procedure computes the solution to $\text{MINENERGY}(G, D)$ (see Algorithm 1). Note that the algorithm computes the minimum energy for a tree, but it does not return the speeds at which each task must be executed. However, the algorithm returns the speed of the root task, and it is then straightforward to compute the speed of each child of the root task, and so on.
+
+**Theorem 2 (tree graphs)** When $G$ is a tree rooted in $T_0$ ($T_0 \in V$, where $V$ is the set of tasks), the optimal solution to $\text{MINENERGY}(G, D)$ can be computed in polynomial time $O(|V|^2)$.
+
+**Proof.** Let $G$ be a tree graph rooted in $T_0$. The optimal solution to $\text{MINENERGY}(G, D)$ is obtained with a call to **tree** $(G, T_0, D)$, and we prove its optimality recursively on the depth of the tree. Similarly to the case of the fork graphs, we reduce the tree to an equivalent task that, if executed alone within a deadline $D$, consumes exactly the same amount of energy. The procedure **eq** is the procedure that reduces a tree to its equivalent task (see Algorithm 1).
\ No newline at end of file
diff --git a/samples/texts/1228241/page_6.md b/samples/texts/1228241/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..a25f3e8754d1fc8f7db42fd078e14e182469266a
--- /dev/null
+++ b/samples/texts/1228241/page_6.md
@@ -0,0 +1,32 @@
+**Algorithm 1:** Solution to MINENERGY(G, D) for trees.
+
+procedure tree (tree G, root T₀, deadline D)
+begin
+ Let w=eq (tree G, root T₀);
+ if $w/D \le s_{max}$ then
+ return $w^3/D^2$;
+ else
+ if $w_0/s_{max} > D$ then
+ return Error:No Solution;
+ else
+ /* T₀ is executed at speed $s_{max}$ */
+ return $w_0 \times s_{max}^2 + \sum_{G_i \text{ subtree rooted in } T_i \in \text{children}(T_0)} \text{tree}(G_i, T_i, D - \frac{w_0}{s_{max}})$;
+ end
+ end
+end
+
+procedure eq (tree G, root T₀)
+begin
+ if $\text{children}(T_0)=\emptyset$ then
+ return $w_0$;
+ else
+ return $\left(\sum_{G_i \text{ subtree rooted in } T_i \in \text{children}(T_0)} (\mathbf{eq}(G_i, T_i))^3\right)^{\frac{1}{3}} + w_0$;
+ end
+end
+
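A minimal executable sketch of the two procedures of Algorithm 1; the `(weight, children)` tuple encoding of trees is ours, not the paper's.

```python
def eq(node):
    # Equivalent cost of a subtree; node = (weight, [children]).
    w, children = node
    if not children:
        return w
    return sum(eq(c) ** 3 for c in children) ** (1 / 3) + w

def tree(node, deadline, s_max):
    # Minimum energy for the subtree under the given deadline.
    w_eq = eq(node)
    if w_eq / deadline <= s_max:
        return w_eq ** 3 / deadline ** 2
    w, children = node
    if w / s_max > deadline:
        raise ValueError("no solution: the root alone misses the deadline")
    # The root runs at s_max; every subtree shares the remaining deadline.
    rest = deadline - w / s_max
    return w * s_max ** 2 + sum(tree(c, rest, s_max) for c in children)
```

For a fork with root weight 2 and leaves of weights 1, 3, 2 under deadline 5, `tree` with $s_{max} = +\infty$ reproduces the closed form $(36^{1/3} + 2)^3 / 25$, while a finite $s_{max}$ triggers the recursive branch.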
+If the tree has depth 0, then it is a single task, **eq** (G, T₀) returns the equivalent cost w₀,
+and the optimal execution speed is w₀/D (see Proposition 1). There is a solution if and only if
+this speed is not greater than sₘₐₓ, and then the corresponding energy consumption is w₀³/D², as
+returned by the algorithm.
+
+Assume now that for any tree of depth *i* < *p*, **eq** computes its equivalent cost, and **tree** returns its optimal energy consumption. We consider a tree *G* of depth *p* rooted in *T*₀: *G* = *T*₀ ∪ {*G*ᵢ}, where each subgraph *G*ᵢ is a tree, rooted in *T*ᵢ, of maximum depth *p* − 1. As in the case of forks, we know that each subtree *G*ᵢ has a deadline *D* − *x*, where *x* = w₀/s₀, and *s*₀ is the speed at which task *T*₀ is executed. By induction hypothesis, each graph *G*ᵢ is equivalent to a single task, *T*′ᵢ, of cost *w*′ᵢ (as computed by the procedure **eq**). We can then use the results obtained on forks to compute $w_0^{(eq)}$ (see proof of Theorem 1):
\ No newline at end of file
diff --git a/samples/texts/1228241/page_7.md b/samples/texts/1228241/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..2e552a2113fb7d12a1d67973795d9cc3e6f4e259
--- /dev/null
+++ b/samples/texts/1228241/page_7.md
@@ -0,0 +1,17 @@
+$$w_0^{(eq)} = \left( \sum_i (w'_i)^3 \right)^{\frac{1}{3}} + w_0.$$
+
+Finally the tree is equivalent to one task of cost $w_0^{(eq)}$, and if $\frac{w_0^{(eq)}}{D} \le s_{max}$, the energy consumption is $\frac{(w_0^{(eq)})^3}{D^2}$, and no speed exceeds $s_{max}$.
+
+Note that the speed of a task is always greater than the speed of its successors. Therefore, if $\frac{w_0^{(eq)}}{D} > s_{max}$, we execute the root of the tree at speed $s_{max}$ and then process each subtree $G_i$ independently. Of course, there is no solution if $\frac{w_0}{s_{max}} > D$, and otherwise we perform the recursive calls to **tree** to process each subtree independently. Their deadline is then $D - \frac{w_0}{s_{max}}$. $\square$
+
+### 4.2.5. Series-parallel graphs
+
+We can further generalize our results to series-parallel graphs (SPGs), which are built from a sequence of compositions (parallel or series) of smaller-size SPGs. The smallest SPG consists of two nodes connected by an edge (such a graph is called an *elementary SPG*). The first node is the source, while the second one is the sink of the SPG. When composing two SPGs in series, we merge the sink of the first SPG with the source of the second one. For a parallel composition, the two sources are merged, as well as the two sinks, as illustrated in Figure 2.
+
+We can extend the results for tree graphs to SPGs, by replacing the SPGs step by step with an equivalent task (procedure **cost** in Algorithm 2): we can compute the equivalent cost for a series or parallel composition.
+
+However, since it is no longer true that the speed of a task is always larger than the speed of its successor (as was the case in a tree), we have not been able to find a recursive property on the tasks that should be set to $s_{max}$, when one of the speeds obtained with the previous method exceeds $s_{max}$. The problem of computing a closed form for an SPG with a finite value of $s_{max}$ remains open. Still, we have the following result when $s_{max} = +\infty$:
+
+**Theorem 3 (series-parallel graphs)** *When G is an SPG, it is possible to compute recursively a closed form expression of the optimal solution of MINENERGY(G, D), assuming $s_{max} = +\infty$, in polynomial time $O(|V|)$, where V is the set of tasks.*
+
+**Proof.** Let $G$ be a series-parallel graph. The optimal solution to $\text{MINENERGY}(G, D)$ is obtained with a call to **SPG** $(G, D)$, and we prove its optimality recursively. Similarly to trees, the main idea is to peel the graph off, and to transform it until there remains only a single equivalent task that, if executed alone within a deadline $D$, would consume exactly
\ No newline at end of file
diff --git a/samples/texts/1228241/page_8.md b/samples/texts/1228241/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..e2bdc120951501149c66fe4e8db267b018a24a16
--- /dev/null
+++ b/samples/texts/1228241/page_8.md
@@ -0,0 +1,11 @@
+Figure 2: Composition of series-parallel graphs (SPGs).
+
+the same amount of energy. The procedure **cost** is the procedure that reduces an SPG to its equivalent task (see Algorithm 2).
+
+The proof is done by induction on the number $p$ of compositions required to build the graph $G$. If $p = 0$, then $G$ is an elementary SPG consisting of two tasks, the source $T_0$ and the sink $T_1$. It is therefore a linear chain, and hence equivalent to a single task whose cost is the sum of both costs, $w_0+w_1$ (see Corollary 1 for linear chains). The procedure **cost** therefore returns the correct equivalent cost, and **SPG** returns the minimum energy consumption.
+
+Let us assume that the procedures return the correct equivalent cost and minimum energy consumption for any SPG consisting of $i < p$ compositions. We consider a SPG $G$, with $p$ compositions. By definition, $G$ is a composition of two smaller-size SPGs, $G_1$ and $G_2$, and both of these SPGs have strictly fewer than $p$ compositions. We consider $G'_1$ and $G'_2$, which are identical to $G_1$ and $G_2$, except that the costs of their source and sink tasks are set to 0 (these costs are handled separately), and we can reduce both of these SPGs to an equivalent task, of respective costs $w'_1$ and $w'_2$, by induction hypothesis. There are two cases:
+
+* If $G$ is a series composition, then after the reduction of $G'_1$ and $G'_2$, we have a linear chain in which we consider the source $T_0$ of $G_1$, the sink $T_1$ of $G_1$ (which is also the source of $G_2$), and the sink $T_2$ of $G_2$. The equivalent cost is therefore $w_0 + w'_1 + w_1 + w'_2 + w_2$, thanks to Corollary 1 for linear chains.
+
+* If $G$ is a parallel composition, the resulting graph is a fork-join graph, and we can use Corollaries 1 and 3 to compute the cost of the equivalent task, accounting for the source $T_0$ and the sink $T_1$: $w_0 + ((w'_1)^3 + (w'_2)^3)^{\frac{1}{3}} + w_1$.
\ No newline at end of file
diff --git a/samples/texts/1228241/page_9.md b/samples/texts/1228241/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..b60c1f937d9cc32885725b9597dda8f6a4d02346
--- /dev/null
+++ b/samples/texts/1228241/page_9.md
@@ -0,0 +1,30 @@
+**Algorithm 2:** Solution to MINENERGY(G, D) for series-parallel graphs.
+
+procedure **SPG** (series-parallel graph G, deadline *D*)
+begin
+ return $\frac{(\mathbf{cost}(G))^3}{D^2}$;
+end
+
+procedure **cost** (series-parallel graph G)
+begin
+ Let $T_0$ be the source of G and $T_1$ its sink;
+ if G is composed of only two tasks, $T_0$ and $T_1$ **then**
+ return $w_0 + w_1$;
+ else
+ /* G is a composition of two SPGs $G_1$ and $G_2$. */
+ For $i = 1, 2$, let $G'_i = G_i$ where the cost of source and sink tasks is set to 0;
+ $w'_1 = \mathbf{cost}(G'_1)$; $w'_2 = \mathbf{cost}(G'_2)$;
+ if G is a series composition **then**
+ Let $T_0$ be the source of $G_1$, $T_1$ be its sink, and $T_2$ be the sink of $G_2$;
+ return $w_0 + w'_1 + w_1 + w'_2 + w_2$;
+ else
+ /* It is a parallel composition. */
+ Let $T_0$ be the source of G, and $T_1$ be its sink;
+ return $w_0 + ((w'_1)^3 + (w'_2)^3)^{\frac{1}{3}} + w_1$;
+ end
+ end
+end
+
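A sketch of Algorithm 2 under an illustrative encoding of SPGs (the nested-tuple encoding and helper names are ours, not the paper's): `('elem', w_src, w_snk)` is an elementary SPG, `('series', G1, G2)` merges the sink of `G1` into the source of `G2`, and `('par', G1, G2)` merges both ends.

```python
def src_w(g):
    return g[1] if g[0] == 'elem' else src_w(g[1])

def snk_w(g):
    if g[0] == 'elem':
        return g[2]
    return snk_w(g[2] if g[0] == 'series' else g[1])

def zero_src(g):
    # Copy of g with its source cost set to 0 (both copies for 'par').
    if g[0] == 'elem':
        return ('elem', 0.0, g[2])
    if g[0] == 'series':
        return ('series', zero_src(g[1]), g[2])
    return ('par', zero_src(g[1]), zero_src(g[2]))

def zero_snk(g):
    # Copy of g with its sink cost set to 0 (both copies for 'par').
    if g[0] == 'elem':
        return ('elem', g[1], 0.0)
    if g[0] == 'series':
        return ('series', g[1], zero_snk(g[2]))
    return ('par', zero_snk(g[1]), zero_snk(g[2]))

def cost(g):
    if g[0] == 'elem':
        return g[1] + g[2]
    # Interior costs w'_1, w'_2: sub-SPGs with their end costs zeroed.
    w1p = cost(zero_src(zero_snk(g[1])))
    w2p = cost(zero_src(zero_snk(g[2])))
    if g[0] == 'series':
        return src_w(g[1]) + w1p + snk_w(g[1]) + w2p + snk_w(g[2])
    return src_w(g[1]) + (w1p ** 3 + w2p ** 3) ** (1 / 3) + snk_w(g[1])

def spg(g, deadline):
    return cost(g) ** 3 / deadline ** 2
```

For a chain of three unit-weight tasks the equivalent cost is 3 (Corollary 1), and for a unit-weight diamond (fork-join of two such chains) it is $2 + 2^{1/3}$, matching the parallel-composition rule.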
+Once the cost of the equivalent task of the SPG has been computed with the call to **cost** (*G*), the optimal energy consumption is $\frac{(\mathbf{cost}(G))^3}{D^2}$.
+
+Contrary to the case of tree graphs, there is no constraint on $s_{max}$, so the **SPG** procedure is never called recursively, and the time complexity of the algorithm is that of the **cost** procedure. There is exactly one call to **cost** for each composition, and the number of compositions in the SPG is in $O(|V|)$. All operations in **cost** can be done in $O(1)$, hence a complexity in $O(|V|)$. □
\ No newline at end of file
diff --git a/samples/texts/1469251/page_1.md b/samples/texts/1469251/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..008b020457ac6bbf0e33956fa862a4772200ebfe
--- /dev/null
+++ b/samples/texts/1469251/page_1.md
@@ -0,0 +1,33 @@
+Divisors in algebraic geometry
+
+C.S. Seshadri
+
+**Translator's note.**
+
+This text is one of a series* of translations of various papers into English. The translator takes full responsibility for any errors introduced in the passage from one language to another, and claims no rights to any of the mathematical content herein.
+
+What follows is a translation of the French seminar talk:
+
+SESHADRI, C. S. "Diviseurs en géométrie algébrique". Séminaire Claude Chevalley, Volume 4 (1958-1959), Talk no. 4. http://www.numdam.org/item/SCC_1958-1959_4__A4_0
+
+Contents
+
+1 Preliminaries 1
+
+2 Dévissage theorem 3
+
+3 Divisors (Generalities) 5
+
+p. 4-01
+
+In the first part of this talk, we will prove a theorem of Serre on complete varieties [6], following the methods of Grothendieck [4]. The second part is dedicated to generalities on divisors. In the literature, we often call the divisors studied here “locally principal” divisors.
+
+The algebraic spaces considered here are defined over an algebraically closed field $K$. By “variety”, we mean an irreducible algebraic space. If $X$ is an algebraic space, we denote by $\mathcal{O}(X)$, $\mathcal{R}(X)$, etc. (or simply $\mathcal{O}$, $\mathcal{R}$, etc.) the structure sheaf, the sheaf of rational functions, etc., on $X$ (to define $\mathcal{R}(X)$ we assume that $X$ is a variety). By “coherent sheaf” on $X$, we mean a coherent sheaf of $\mathcal{O}$-modules on $X$.
+
+# 1 Preliminaries
+
+[4, 5, 6]
+
+If $M$ is a module over an integral ring $A$ (commutative and with 1), then we say that an element $m \in M$ is a *torsion element* if there exists some non-zero $a \in A$ such that $a \cdot m = 0$. We say that $M$ is a *torsion module* (resp. *torsion-free module*) if every element of $M$ is a torsion element (resp. if $M \neq 0$ and no non-zero element of $M$ is a torsion element). The torsion elements of $M$ form a torsion submodule of $M$ (denoted by $T(M)$); if $M \neq 0$, then
+
+*https://thosgood.com/translations
\ No newline at end of file
diff --git a/samples/texts/1469251/page_2.md b/samples/texts/1469251/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..69df4650a82fc694da6a76acf8d4158c76f149fd
--- /dev/null
+++ b/samples/texts/1469251/page_2.md
@@ -0,0 +1,27 @@
+$M/T(M)$ is a torsion-free module. If $M$ is a torsion module of finite type over $A$, then the ideal $\text{ann}\,M$ of $A$ (the ideal of $A$ given by the elements $a \in A$ such that $aM = 0$) is non-zero.
+
+Let $X$ be an algebraic space and $\mathcal{F}$ a sheaf of $\mathcal{O}$-modules on $X$. We define supp $\mathcal{F}$ to be the set of points $x \in X$ such that $\mathcal{F}_x \neq 0$. If $\mathcal{F}$ is coherent, then supp $\mathcal{F}$ is a closed subset of $X$. If $X$ is affine, then supp $\mathcal{F}$ is the set defined by the ideal $\text{ann}\,H^0(X, \mathcal{F})$ of the affine algebra $H^0(X, \mathcal{O})$, where $H^0(X, \mathcal{F})$ is considered as a module over $H^0(X, \mathcal{O})$.
+
+A sheaf $\mathcal{F}$ of $\mathcal{O}$-modules on a variety $X$ is said to be a *torsion sheaf* (resp. *torsion-free sheaf*) if, for every $x \in X$, the module $\mathcal{F}_x$ over the ring $\mathcal{O}_x$ is a torsion module (resp. torsion-free module).
+
+p. 4-02
+
+**Proposition 1.** If $\mathcal{F}$ is a coherent sheaf on a variety $X$, then there exists a coherent subsheaf $T(\mathcal{F})$ of $\mathcal{F}$ (and only one) such that $(T(\mathcal{F}))_x = T(\mathcal{F}_x)$.
+
+*Proof.* The uniqueness is trivial. The existence is a consequence of the fact that, if $X$ is affine, then $T(\mathcal{F}_x)$ is given by localisation of the module $T(H^0(X, \mathcal{F}))$ with respect to the maximal ideal of $H^0(X, \mathcal{O})$ that defines $x$. $\square$
+
+**Corollary.** If $\mathcal{F} \neq 0$ then $\mathcal{F}/T(\mathcal{F})$ is a torsion-free coherent sheaf.*
+
+**Proposition 2.** If $\mathcal{F}$ is a coherent sheaf on the variety $X$, then $\text{supp} \,\mathcal{F} \neq X$ if and only if $\mathcal{F}$ is a torsion sheaf.
+
+*Proof.* This is a trivial consequence of the fact that, if $U$ is an affine open subset, then $\text{supp}\,\mathcal{F} \cap U$ is defined by the ideal $\text{ann}\,H^0(U, \mathcal{F})$ of $H^0(U, \mathcal{O})$, where $H^0(U, \mathcal{F})$ is considered as a module over $H^0(U, \mathcal{O})$. $\square$
+
+**Proposition 3.** If $\mathcal{F}$ is a torsion-free coherent sheaf on a variety $X$, with $\mathcal{F} \subset \mathcal{R}^n$, then there exists a coherent sheaf $\mathcal{I} \neq 0$ of ideals of $\mathcal{O}$ such that $\mathcal{I} \cdot \mathcal{F} \subset \mathcal{O}^n$.
+
+*Proof.* Let $\mathcal{I}_x$ be the ideal $\left[\mathcal{O}_x^n : \mathcal{F}_x\right]$ of $\mathcal{O}_x$, i.e. the ideal of elements $i_x$ of $\mathcal{O}_x$ such that $i_x \mathcal{F}_x \subset \mathcal{O}_x^n$. Since $\mathcal{F}_x$ is of finite type over $\mathcal{O}_x$, we know that $\mathcal{I}_x \neq 0$. If we take an affine open subset $U$ of $X$, then we can prove that $\mathcal{I}_x$ is given by localisation of the ideal $[H^0(U, \mathcal{O}^n) : H^0(U, \mathcal{F})]$ of $H^0(U, \mathcal{O})$ by the maximal ideal of $H^0(U, \mathcal{O})$ that defines $x$. Thus $\{\mathcal{I}_x\}_{x \in X}$ defines a coherent sheaf $\mathcal{I}$ of ideals of $\mathcal{O}$ such that $\mathcal{I} \cdot \mathcal{F} \subset \mathcal{O}^n$. $\square$
+
+Let $\mathcal{F}$ be a torsion-free coherent sheaf on a variety $X$. Then the canonical homomorphism $\mathcal{F} \to \mathcal{F} \otimes_{\mathcal{O}} \mathcal{R}$ is injective. The sheaves $\mathcal{R}$ and $\mathcal{F} \otimes_{\mathcal{O}} \mathcal{R}$ are locally constant sheaves, and thus constant ([5, page 229]). We can then identify $\mathcal{F} \otimes_{\mathcal{O}} \mathcal{R}$ with a vector space of finite dimension over $\mathcal{R}$ (we identify the field of rational functions with the sheaf $\mathcal{R}$ since $\mathcal{R}$ is constant). We call this dimension the *rank* of $\mathcal{F}$, and we can then consider $\mathcal{F}$ as a subsheaf of $\mathcal{R}^n$, where $n = \text{rank}\,\mathcal{F}$.
+
+**Proposition 4.** Under the same hypotheses as in Proposition 3, there exists a coherent sheaf $\mathcal{I} \neq 0$ of ideals of $\mathcal{O}$ such that $\mathcal{I} \cdot \mathcal{F} \subset \mathcal{O}^n$, where $n = \text{rank}\,\mathcal{F}$; then $\mathcal{O}^n/(\mathcal{I} \cdot \mathcal{F})$ and $\mathcal{F}/(\mathcal{I} \cdot \mathcal{F})$ are torsion sheaves.
+
+* [Trans.] The condition that $\mathcal{F} \neq 0$ is unnecessary, but we include it here since it is in the original. Note that the zero sheaf is indeed a torsion-free sheaf, otherwise any coherent torsion sheaf $\mathcal{F}$ provides a counterexample to this corollary.
\ No newline at end of file
diff --git a/samples/texts/1469251/page_3.md b/samples/texts/1469251/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..ddee161b3c495c1e0f1011bdfe6daa16e4939045
--- /dev/null
+++ b/samples/texts/1469251/page_3.md
@@ -0,0 +1,27 @@
+*Proof.* The proof is immediate.
+
+If $Y$ is a closed subset of an algebraic space $X$, then we denote by $\mathcal{I}_Y$ the coherent sheaf of ideals of $\mathcal{O}$ defined by $Y$.
+
+**Proposition 5.** Let $Y$ be a closed subset of an algebraic space $X$, and $\mathcal{F}$ a coherent sheaf on $X$, with $\text{supp} \, \mathcal{F} \subset Y$; then there exists an integer $k$ such that $\mathcal{I}_Y^k \mathcal{F} = 0$.
+
+*Proof.* We can reduce to the case where $X$ is affine, since there exists a finite cover of $X$ by affine opens. In this case, the hypothesis implies that the set defined by the ideal $\text{ann}\,H^0(X, \mathcal{F})$ is contained in $Y$. This implies, as is well known, that $\text{ann}\,H^0(X, \mathcal{F}) \supset \mathcal{I}_Y^k$. $\square$
+
+**Proposition 6.** Let $\mathcal{F}$ be a coherent sheaf of fractional ideals on a variety $X$ (i.e. a coherent subsheaf of $\mathcal{R}$) such that, for every $x$ outside of a closed subset $Y$ of $X$, $\mathcal{F}_x$ is an ideal of $\mathcal{O}_x$. Then there exists an integer $k$ such that $\mathcal{I}_Y^k \cdot \mathcal{F} \subset \mathcal{O}$.
+
+*Proof.* By Proposition 3 and the hypothesis, there exists a coherent sheaf $\mathcal{J}$ of ideals of $\mathcal{O}$ such that $\mathcal{J}_x = \mathcal{O}_x$ if $x \notin Y$, and such that $\mathcal{J} \cdot \mathcal{F} \subset \mathcal{O}$. Thus $\text{supp}(\mathcal{O}/\mathcal{J}) \subset Y$, and, by Proposition 5, there exists an integer $k$ such that $\mathcal{I}_Y^k(\mathcal{O}/\mathcal{J}) = 0$. This implies that $\mathcal{I}_Y^k \subset \mathcal{J}$. $\square$
+
+# 2 Dévissage theorem
+
+Let $\mathcal{C}$ be an abelian category, and $\mathcal{C}'$ a subcategory of objects of $\mathcal{C}$. We say that $\mathcal{C}'$ is a *left-exact subcategory* of $\mathcal{C}$ if
+
+1. every subobject of an object of $\mathcal{C}'$ is in $\mathcal{C}'$;
+
+2. for every exact sequence $0 \to \mathcal{A}' \to \mathcal{A} \to \mathcal{A}'' \to 0$ in $\mathcal{C}$, the object $\mathcal{A}$ is in $\mathcal{C}'$ if the other two objects are in $\mathcal{C}'$.²
+
+Let $X$ be an algebraic space. We denote by $\mathcal{C}(X)$ the abelian category of coherent sheaves on $X$. If $Y$ is a closed subset of $X$, then a coherent sheaf on $Y$ has a canonical extension to a coherent sheaf on $X$ (extending by 0 outside of $Y$), and so we can consider $\mathcal{C}(Y)$ as a subcategory of $\mathcal{C}(X)$. With this notation, we have the following theorem:
+
+**Theorem (Dévissage).** Let $\mathcal{D}$ be a left-exact subcategory of $\mathcal{C}(X)$ that has the following property: for every closed irreducible subset $Y$ of $X$, there exists a coherent sheaf $\mathcal{M}_Y$ of $\mathcal{C}(Y)$ that belongs to $\mathcal{D}$, and that is torsion-free as a sheaf on $Y$. Then $\mathcal{D} = \mathcal{C}(X)$.
+
+*Proof.* The proof works by induction on the dimension of $X$. If $\dim X = 0$, then $X$ consists of a finite number of points $P_1, \dots, P_r$, and a coherent sheaf on $X$ can be identified with a system $\{N_i\}_{i=1,\dots,r}$, where $N_i$ is a vector space of finite dimension over $K$. Thus the sheaf $\mathcal{M}_{P_i}$ on $P_i$ that we have, by hypothesis, is a vector space of finite dimension over $K$. By the axioms of a left-exact subcategory, it is trivial to show that every system $\{N_i\}_{i=1,\dots,r}$,
+
+²The axioms here that define a left-exact subcategory are slightly stronger than those of Grothendieck [4].
\ No newline at end of file
diff --git a/samples/texts/1469251/page_4.md b/samples/texts/1469251/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..77fe0450a366e38675563411ce290ba21982e223
--- /dev/null
+++ b/samples/texts/1469251/page_4.md
@@ -0,0 +1,25 @@
+where $N_i$ is a vector space of finite dimension over $K$, considered as a coherent sheaf on $X$, belongs to $\mathcal{D}$.
+
+Now assume that we have proven the theorem for all dimensions $\le (n-1)$. Let $\dim X = n$. Let $Y$ be a closed subset of $X$ such that $\dim Y \le (n-1)$. We can easily show that $\mathcal{D} \cap \mathcal{C}(Y)$ is a left-exact subcategory of $\mathcal{C}(Y)$ that satisfies the hypotheses of the theorem. So, by the induction hypothesis, $\mathcal{D} \supset \mathcal{C}(Y)$.
+
+We will now prove that, if $\mathcal{F}$ is a coherent sheaf on $X$ with $\text{supp} \, \mathcal{F} = Y$, then $\mathcal{F} \in \mathcal{D}$. If $\mathcal{I}_Y \cdot \mathcal{F} = 0$, then $\mathcal{F} \in \mathcal{C}(Y)$, and, by the above, $\mathcal{F} \in \mathcal{D}$. In any case, by Proposition 5, there exists an integer $k \ge 1$ such that $\mathcal{I}_Y^k \mathcal{F} = 0$. We will complete the proof by induction on $k$. Suppose that the claim has been proven for every coherent sheaf $\mathcal{G}$ on $X$ such that $\mathcal{I}_Y^{k-1} \mathcal{G} = 0$. For $\mathcal{F}$, we have an exact sequence
+
+$$0 \to \mathcal{I}_Y \cdot \mathcal{F} \to \mathcal{F} \to \mathcal{F}/(\mathcal{I}_Y \cdot \mathcal{F}) \to 0.$$
+
+The sheaf $\mathcal{I}_Y \mathcal{F}$ is annihilated by $\mathcal{I}_Y^{k-1}$, and the sheaf $\mathcal{F}/(\mathcal{I}_Y \mathcal{F})$ is annihilated by $\mathcal{I}_Y$. Thus $\mathcal{I}_Y \mathcal{F}$ and $\mathcal{F}/(\mathcal{I}_Y \mathcal{F})$ belong to $\mathcal{D}$. This implies that $\mathcal{F} \in \mathcal{D}$.
+
+Suppose that $X$ is a variety, and that $\mathcal{F}$ is a torsion-free sheaf on $X$. We can consider $\mathcal{F}$ as a coherent subsheaf of $\mathcal{R}^n$, where $n = \text{rank} \, \mathcal{F}$, and, by Proposition 4, there then exists a coherent sheaf of ideals $\mathcal{I}$ such that $\mathcal{I} \cdot \mathcal{F} \subset \mathcal{O}^n$, and such that the sheaves $\mathcal{F}/(\mathcal{I}\mathcal{F})$ and $\mathcal{O}^n/(\mathcal{I}\mathcal{F})$ are torsion sheaves. Since $\mathcal{F}/(\mathcal{I}\mathcal{F})$ is a torsion sheaf, $\mathcal{F}/(\mathcal{I}\mathcal{F}) \in \mathcal{D}$; thus $\mathcal{F} \in \mathcal{D}$ if and only if $\mathcal{I}\mathcal{F} \in \mathcal{D}$. Analogously, $\mathcal{I}\mathcal{F} \in \mathcal{D}$ if and only if $\mathcal{O}^n \in \mathcal{D}$, and, by the axioms of a left-exact subcategory, if and only if $\mathcal{O} \in \mathcal{D}$. Thus $\mathcal{F} \in \mathcal{D}$ if and only if $\mathcal{O} \in \mathcal{D}$. If we repeat the same argument for the torsion-free sheaf $\mathcal{M}_X$, which we have by hypothesis, then we see that $\mathcal{O} \in \mathcal{D}$, which implies that $\mathcal{F} \in \mathcal{D}$.
+
+Suppose again that $X$ is a variety, but now that $\mathcal{F}$ is an arbitrary coherent sheaf. We will show that $\mathcal{F} \in \mathcal{D}$. We can assume that $\mathcal{F} \ne 0$, and we then have
+
+$$0 \to T(\mathcal{F}) \to \mathcal{F} \to \mathcal{F}/T(\mathcal{F}) \to 0$$
+
+where $T(\mathcal{F})$ is a torsion sheaf, and $\mathcal{F}/T(\mathcal{F})$ is a torsion-free sheaf. By Proposition 2, $\text{supp} \, T(\mathcal{F}) \ne X$, and, since $X$ is a variety, $\dim \, \text{supp} \, T(\mathcal{F}) < \dim X$. We then have, by the induction hypothesis, that $T(\mathcal{F}) \in \mathcal{D}$, and we have just proven that $\mathcal{F}/T(\mathcal{F}) \in \mathcal{D}$. Thus $\mathcal{F} \in \mathcal{D}$.
+
+Now let $X$ be an arbitrary algebraic space, and $X_1, \dots, X_p$ its irreducible components. If $\mathcal{F}$ is a coherent sheaf on $X$, then $\mathcal{F}/(\mathcal{I}_{X_i}\mathcal{F})$ can be identified with a sheaf on the variety $X_i$ (where $\mathcal{I}_{X_i}$ is the sheaf of ideals of $\mathcal{O}(X)$ determined by $X_i$), and, by the above, $\mathcal{F}/(\mathcal{I}_{X_i}\mathcal{F}) \in \mathcal{D}$. Thus the sheaf $\mathcal{G} = \sum_{i=1}^p \mathcal{F}/(\mathcal{I}_{X_i}\mathcal{F})$ belongs to $\mathcal{D}$. We have a canonical homomorphism $\varphi: \mathcal{F} \to \mathcal{G}$. The image of $\varphi$ is a coherent subsheaf of $\mathcal{G}$, and so the image of $\varphi$ belongs to $\mathcal{D}$.
+
+We know that $\text{supp} \ker \varphi \subset \bigcup_{i \ne j} X_i \cap X_j$, and so $\text{dim} \operatorname{supp} \ker \varphi < \text{dim} X$, and, by the induction hypothesis, $\ker \varphi \in \mathcal{D}$. Thus $\mathcal{F} \in \mathcal{D}$, and the theorem is proven. □
+
+**Corollary (Serre's Theorem).** If $\mathcal{F}$ is a coherent sheaf on a complete algebraic space $X$, then $H^0(X, \mathcal{F})$ is a vector space of finite dimension over $K$.
+
+*Proof.* We take $\mathcal{D}$ to be the category of all coherent sheaves $\mathcal{F}$ on $X$ such that $H^0(X, \mathcal{F})$ is of finite dimension over $K$. We can prove that $\mathcal{D}$ is a left-exact subcategory of $\mathcal{C}(X)$. Also,
\ No newline at end of file
diff --git a/samples/texts/1469251/page_5.md b/samples/texts/1469251/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..edf897772cb59f10a16ab426060330af2c649ee0
--- /dev/null
+++ b/samples/texts/1469251/page_5.md
@@ -0,0 +1,19 @@
+we know that, if $Y$ is an irreducible closed subset of $X$, then $Y$ is a complete variety. Thus the coherent sheaf $\mathcal{O}(Y)$ on $Y$ is a torsion-free sheaf with the property that $H^0(Y, \mathcal{O}(Y)) \cong K$, and so $H^0(X, \mathcal{O}(Y)) = H^0(Y, \mathcal{O}(Y))$ is of finite dimension over $K$ (we denote also by $\mathcal{O}(Y)$ the canonical extension of $\mathcal{O}(Y)$ to $X$). By the theorem, the corollary is proven. $\square$
+
+# 3 Divisors (Generalities)
+
+Let $X$ be an algebraic variety, and $\mathcal{R}^\times(X)$ and $\mathcal{O}^\times(X)$ (or simply $\mathcal{R}^\times$ and $\mathcal{O}^\times$) the constant sheaf on $X$ of non-zero rational functions and the sheaf on $X$ of invertible regular functions (respectively). The sheaves $\mathcal{R}^\times$ and $\mathcal{O}^\times$, endowed with their multiplicative structure, are sheaves of abelian groups.
+
+A divisor $D$ on $X$ is a section of the quotient sheaf $\mathcal{R}^\times/\mathcal{O}^\times$. An element of $\mathcal{R}^\times$ that is a representative of the value $D(x)$ of $D$ at $x$ is called a *definition function of D at x*. More generally, a function $f \in \mathcal{R}^\times$ is called a *definition function of D in an open subset U* if, for all $x \in U$, $f$ is a representative of $D(x)$; then $f$ is determined up to an invertible regular function on $U$. Since we can locally lift a section of $\mathcal{R}^\times/\mathcal{O}^\times$ to a section of $\mathcal{R}^\times$, a divisor $D$ is determined by the following data: a cover $\{U_i\}$ by open subsets, and non-zero rational functions $f_i$ on $U_i$ such that, on $U_i \cap U_j$, $f_{ij} = f_i/f_j$ is an invertible regular function. We have that $f_{ij}f_{jk}f_{ki} = 1$ on $U_i \cap U_j \cap U_k$, and, as is well known, this allows us to construct a locally trivial fibre bundle with $K^\times$ as the structure group; it is easy to see that this fibre bundle is determined up to equivalence [7]. We also know that the coherent sheaves of fractional ideals (i.e. the coherent subsheaves of $\mathcal{R}$) that are generated by $f_i$ and $f_j$ (respectively) agree on $U_i \cap U_j$, and do not depend on the choice of definition functions of $D$ in $U_i$ and $U_j$. This implies that the divisor $D$ canonically determines a coherent sheaf of locally principal fractional ideals. We can easily see that the converse is true, and this gives us an equivalent definition of a divisor [1].
+
+A divisor $D$ is said to be *positive* if, for each $x \in X$, $D(x) \in \mathcal{O}_x/\mathcal{O}_x^\times$ (i.e. if all the definition functions of $D$ at $x$ are regular at $x$).
+
+Since $\mathcal{R}^\times/\mathcal{O}^\times$ is a sheaf of abelian groups on $X$, there is a canonical structure of an abelian group on the set of divisors on $X$; this group is called the *group of divisors on X*. The composition law in this group is written additively, and the identity element in this group is thus called the *zero divisor*, and is denoted by (0).
+
+If $f$ is a non-zero rational function on $X$, then it defines a divisor $\text{div}\, f$ whose value $(\text{div}\, f)(x)$ at $x$ is the image of $f$ in $\mathcal{R}^\times/\mathcal{O}_x^\times$. The divisors obtained in this way are called *principal divisors*, and form a subgroup of the group of divisors on $X$; the quotient group is called the *group of classes of divisors on X*. Two divisors $D_1$ and $D_2$ are said to be equivalent if they are equivalent modulo the group of principal divisors; we write $D_1 \sim D_2$. We have seen that a divisor defines, up to equivalence, a locally trivial algebraic fibre bundle with structure group $K^\times$. On the other hand, it is easy to see that a locally trivial algebraic fibre bundle with $K^\times$ as its structure group defines, up to equivalence, a divisor [7]. Thus the group of classes of divisors on $X$ is equal to $H^1(X, \mathcal{O}^\times)$, the group of classes of equivalent algebraic fibre bundles with $K^\times$ as their structure group.
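This identification can be read off the cohomology sequence of the exact sequence of sheaves of abelian groups (a standard step, sketched here; it is not spelled out in the talk):

$$1 \to \mathcal{O}^\times \to \mathcal{R}^\times \to \mathcal{R}^\times/\mathcal{O}^\times \to 1,$$

which gives

$$H^0(X, \mathcal{R}^\times) \to H^0(X, \mathcal{R}^\times/\mathcal{O}^\times) \to H^1(X, \mathcal{O}^\times) \to H^1(X, \mathcal{R}^\times) = 0,$$

where the last group vanishes because $\mathcal{R}^\times$ is a constant sheaf on the irreducible space $X$, hence flasque. The middle group is the group of divisors, and the image of the first map is the group of principal divisors, whence the isomorphism between the group of classes of divisors and $H^1(X, \mathcal{O}^\times)$.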
+
+We can define, in an analogous way, an *additive divisor* on a variety $X$ as a section of the sheaf $\mathcal{R}/\mathcal{O}$ (the divisors defined above are called *multiplicative divisors*, or simply *divisors*). The additive divisors form an abelian group, and even a vector space over $K$. An additive divisor is determined by the following data: a cover $\{U_i\}$ of $X$ by open subsets,
\ No newline at end of file
diff --git a/samples/texts/1754951/page_30.md b/samples/texts/1754951/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..da1af653f67c8a914c747a94b082c95c488761a6
--- /dev/null
+++ b/samples/texts/1754951/page_30.md
@@ -0,0 +1,17 @@
+Figure 17: The clockwise / anti-clockwise enumeration of parallel edges with respect to $R$.
+
+important in how we relate a solution of the given instance of **Planar Disjoint Paths** to a weak linkage in $H$. Recall that for a pair of adjacent vertices $u, v \in V(H)$, we denoted the $4n + 1$ parallel copies of edges between them by $e_{-2n}, e_{-2n+1}, \dots, e_{-1}, e_0, e_1, e_2, \dots, e_{2n}$ where $e = \{u, v\}$, such that when the edges incident to $u$ (or $v$) are enumerated in cyclic order, the occurrences of $e_i$ and $e_{i+1}$ are consecutive (that is, $e_i$ appears immediately before $e_{i+1}$ or vice versa) for every $i \in \{-2n, -2n+1, \dots, 2n-1\}$, and $e_{-2n}$ and $e_{2n}$ are the outermost copies of $e$. We say that such an embedding is *valid*. Now, we further refine the embedding of $H_G$ (so that it remains valid)—notice that for each edge, there are two possible ways to order its copies in the embedding so that it satisfies the condition above. Here, we will specify for the edges of $R$ a particular choice among the two to embed their copies. Towards this, for a vertex $v \in V(R)$, let $\tilde{E}_R(v) = \{e \in E_H(v) \mid e \text{ is parallel to an edge } e' \in E(R)\}$. (The set $E_H(v)$ contains all edges in $E(H)$ incident to $v$.)
+
+For the definition of the desired embedding, we remind that any tree can be properly colored in two colors (that is, every vertex is assigned a color different than the colors of its neighbors), and that in such a coloring, for every vertex, all the neighbors of the vertex get the same color. We let **color**: $V(R) \rightarrow \{\text{red, green}\}$ be some such coloring of $R$. Then, we embed parallel copies such that for every $v \in V(R)$, the following conditions hold (see Fig. 17).
+
+* If **color**(v) = red, then when we enumerate $\tilde{E}_R(v)$ in clockwise order, for every $e \in E_R(v)$, the $4n + 1$ copies of $e$ are enumerated in this order: $e_{-2n}, e_{-2n+1}, \dots, e_0, \dots, e_{2n}$. We let **order**$_v$ denote such an enumeration starting with an edge indexed $-2n$.
+
+* If **color**(v) = green, then when we enumerate $\tilde{E}_R(v)$ in counter-clockwise order, for every $e \in E_R(v)$, the $4n+1$ copies of $e$ are enumerated in this order: $e_{-2n}, e_{-2n+1}, \dots, e_0, \dots, e_{2n}$. We let **order**$_v$ denote such an enumeration starting with an edge indexed $-2n$.
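
The coloring scheme above can be sketched concretely. The following toy code (the function names and the adjacency-dict representation of the tree are our own illustrative choices, not the paper's data structures) 2-colors the tree and fixes, per vertex color, the clockwise order in which the $4n+1$ copies of an edge appear:

```python
from collections import deque

def two_color_tree(tree, root):
    """Proper 2-coloring of a tree (adjacency dict): BFS layers alternate colors."""
    color = {root: "red"}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for w in tree[v]:
            if w not in color:
                color[w] = "green" if color[v] == "red" else "red"
                queue.append(w)
    return color

def clockwise_copy_indices(color_v, n):
    """Indices of the 4n+1 copies e_{-2n}, ..., e_{2n} as they appear clockwise
    around a vertex: increasing at a red vertex, decreasing at a green one (so
    the counter-clockwise enumeration at a green vertex is again increasing)."""
    indices = list(range(-2 * n, 2 * n + 1))
    return indices if color_v == "red" else indices[::-1]
```

Since adjacent vertices receive opposite colors, the two endpoints of a tree edge prescribe the same embedding of its parallel copies, which is the consistency exploited by Observation 6.9.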
+
+Let us observe that the above scheme is well defined.
+
+**Observation 6.9.** Let $(G, S, T, g, k)$ be a good instance of Planar Disjoint Paths with a backbone Steiner tree $R$. Then, there is a valid embedding of $H$ such that, for every $v \in V(R)$, the enumeration order$_v$ is well defined with respect to some proper coloring **color**: $V(R) \to \{\text{red, green}\}$. Furthermore, such an embedding can be computed in time $\mathcal{O}(n^2)$.
+
+*Proof.* Since $e = \{u, v\} \in E(R)$, $u$ and $v$ get different colors under **color**. Let us assume that **color**(u) = red and **color**(v) = green. Then the parallel copies of $e$ are enumerated in clockwise order in **order**$_u$ and in anti-clockwise order in **order**$_v$. Hence, the two enumerations agree, and by construction it is a good enumeration. Finally, to bound the time required to obtain such an embedding, observe that it can be obtained by starting with an arbitrary embedding of $H$ and then renaming the edges. Since the total number of edges in $E(H)$ (including parallel copies) is at most $\mathcal{O}(n^2)$, this can be done in $\mathcal{O}(n^2)$ time. □
+
+From now on, we assume that $H$ is embedded in a way so that the enumerations **order**$_v$ are well defined. We also remind that $R$ only contains the 0-th copies of edges in $H$. Finally, we have the following observation.
\ No newline at end of file
diff --git a/samples/texts/1754951/page_34.md b/samples/texts/1754951/page_34.md
new file mode 100644
index 0000000000000000000000000000000000000000..efe4c7b7019e28e4346a7ae4a600fa6646d72011
--- /dev/null
+++ b/samples/texts/1754951/page_34.md
@@ -0,0 +1,15 @@
+to Disjoint Paths can be replaced by an equivalent one whose paths avoid v.
+
+This result says that if the treewidth of the input planar graph is (roughly) $\Omega(2^k)$, then we can find an irrelevant vertex and remove it. A natural question is whether we can guarantee an irrelevant vertex even if the treewidth is $\Omega(\text{poly}(k))$. Adler and Krause [3] exhibited a planar graph $G$ with $k+1$ terminal pairs such that $G$ contains a $(2^k+1) \times (2^k+1)$ grid as a subgraph, Disjoint Paths on this input has a unique solution, and the solution uses all vertices of $G$; in particular, no vertex of $G$ is irrelevant. This implies that the irrelevant vertex technique can only guarantee a treewidth of $\Omega(2^k)$, even if the input graph is planar.
+
+Combining items (1) and (3), we conclude that the known methodology for Disjoint Paths can only guarantee an algorithm with running time $2^{2^{\mathcal{O}(k)}}n^2$ for Planar Disjoint Paths. Thus, a $2^{\text{poly}(k)}n^{\mathcal{O}(1)}$-time algorithm for Planar Disjoint Paths appears to require entirely new ideas. As this obstacle was known to Adler et al. [1], it was likely the main motivation for Adler to pose the existence of a $2^{\text{poly}(k)}n^{\mathcal{O}(1)}$-time algorithm for Planar Disjoint Paths as an open problem.
+
+**Our Methods.** Our algorithm is based on a novel combination of two techniques that do not seem to give the desired outcome when used on their own. The first ingredient is the treewidth reduction theorem of Adler et al. [2], which proves that given an instance of Planar Disjoint Paths, the treewidth can be brought down to $2^{\mathcal{O}(k)}$ (explained in item (3) above). This by itself is sufficient for an FPT algorithm (this is what Adler et al. [2] do), but as explained above, it seems unlikely to yield a $2^{\text{poly}(k)}n^{\mathcal{O}(1)}$-time algorithm.
+
+We circumvent the obstacle by using an algorithm for a more difficult problem with a worse running time, namely, Schrijver's $n^{\mathcal{O}(k)}$-time algorithm for Disjoint Paths on directed planar graphs [42]. Schrijver's algorithm has two steps: a "guessing" step where one (essentially) guesses the homology class of the solution paths, and then a surprising homology-based algorithm that, given a homology class, finds a solution in that class (if one exists) in polynomial time. Our key insight is that for Planar Disjoint Paths, if the instance that we are considering has been reduced according to the procedure of Adler et al. [2], then we only need to iterate over $2^{\mathcal{O}(k^2)}$ homology classes in order to find the homology class of a solution, if one exists. The proof of this key insight is highly non-trivial, and builds on a cornerstone ingredient of the recent FPT algorithm of Cygan et al. [14] for Disjoint Paths on directed planar graphs. To the best of our knowledge, this is the first algorithm that finds an exact solution by exploiting the small treewidth of the input graph in a way other than dynamic programming. A technical overview of our methods appears in the next section. In our opinion, a major strength of the paper is that it breaks not only a barrier in running time, but also a longstanding methodological barrier. Since there are many algorithms that use the irrelevant vertex technique in some way, there is reasonable hope that they could benefit from the methods developed in this work.
+
+We remark that we have made no attempt to optimize the polynomial factor in this paper. Doing that, and in particular achieving linear dependency on $n$ while keeping the dependency on $k$ single-exponential, is the natural next question for future research. In particular, this might require "opening up" the black boxes that we use: their naive analysis yields a large polynomial dependency on $n$, but there is no reason to believe that it cannot be made linear; most likely, this will require extensive independent work on these particular ingredients. Having both the best dependency on $k$ and the best dependency on $n$ simultaneously may be critical to achieve a practical exact algorithm for large-scale instances.
+
+## 2 Overview
+
+**Homology.** In this overview, we explain our main ideas in an *informal* manner. Our starting point is Schrijver's view [42] of a collection of "non-crossing" (but possibly not vertex- or even edge-disjoint) sets of walks as flows. To work with flows (defined immediately), we deal with
\ No newline at end of file
diff --git a/samples/texts/1754951/page_37.md b/samples/texts/1754951/page_37.md
new file mode 100644
index 0000000000000000000000000000000000000000..ca59ff5bbda043361e1170a4d9d79e65188779bc
--- /dev/null
+++ b/samples/texts/1754951/page_37.md
@@ -0,0 +1,25 @@
+*Proof.* We think of $Q$ as oriented from its endpoint on $I_{\text{in}}$ to its endpoint on $I_{\text{out}}$. Let $a$ be the last vertex on $Q$ that lies on $\hat{I}_{\text{in}}$ and $b$ be the first vertex on $Q$ that lies on $\hat{I}_{\text{out}}$. Further, let $Q_{\text{in}}$ be the prefix of $Q$ from its start to $a$, and $Q_{\text{out}}$ be the suffix of $Q$ from $b$ to its end. By Claim 7.2, $Q_{\text{in}}$ is entirely contained in the ring induced by $I_{\text{in}}$ and $C_{20\ell}$. By the claim analogous to Claim 7.2, $Q_{\text{out}}$ is entirely contained in the ring induced by $I_{\text{out}}$ and $C_{p-20\ell+1}$. Since $p > 40\ell$, it follows that $Q_{\text{in}}$ and $Q_{\text{out}}$ are disjoint, and in particular $a$ appears before $b$ on $Q$. Let $\hat{Q}$ be the infix of $Q$ between $a$ and $b$. Then, $\hat{Q}$ is a path in $\text{Ring}(\hat{I}_{\text{in}}, \hat{I}_{\text{out}})$ that traverses this ring, so it suffices to check that $\lvert \text{WindNum}(\hat{Q}, \hat{\eta}) - \text{WindNum}(Q, \eta) \rvert \le 40\ell$.
+
+Observe that every crossing of $Q$ and $\eta$ that is not a crossing of $\hat{Q}$ and $\hat{\eta}$ has to occur on either $Q_{\text{in}}$ or $Q_{\text{out}}$. However, $Q_{\text{in}}$ and $Q_{\text{out}}$ can have at most $40\ell$ vertices in common with $\eta$, because these must be among the intersections of $\eta$ with cycles $C_1, \dots, C_{20\ell}, C_{p-20\ell+1}, \dots, C_p$, of which there are $40\ell$. Each such crossing can contribute $+1$ or $-1$ to the difference between the winding numbers $\overline{\text{WindNum}}(\hat{Q}, \hat{\eta})$ and $\overline{\text{WindNum}}(Q, \eta)$, hence the difference between these winding numbers is at most $40\ell$. ◇
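
The counting in this proof can be illustrated with a toy model (our own abstraction, not the paper's definitions): record the crossings of $Q$ with $\eta$ as a $\pm 1$ sequence ordered along $Q$; the winding number is their signed sum, and discarding the crossings lying on the prefix $Q_{\text{in}}$ and suffix $Q_{\text{out}}$ changes the sum by at most their number.

```python
def wind_num(signed_crossings):
    """Toy winding number: signed sum of the +1/-1 crossings with the reference path."""
    return sum(signed_crossings)

def infix_wind_num(signed_crossings, a, b):
    """Winding number after discarding the first a crossings (on the prefix Q_in)
    and the last b crossings (on the suffix Q_out)."""
    n = len(signed_crossings)
    return wind_num(signed_crossings[a:n - b])
```

Since each discarded crossing contributes $\pm 1$, the two values differ by at most $a + b$, mirroring the $40\ell$ bound above.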
+
+For every path $Q \in \mathcal{Q}$, fix the path $\hat{Q}$ provided by Claim 7.3, and let $\hat{\mathcal{Q}} \subseteq \{\hat{Q} \mid Q \in \mathcal{Q}\}$ be such that $\lvert \hat{\mathcal{Q}} \rvert = \lvert \mathcal{P}_{\text{traverse}} \rvert$. Then, $\hat{\mathcal{Q}}$ is a traversing linkage in $\text{Ring}(\hat{I}_{\text{in}}, \hat{I}_{\text{out}})$. Apply Proposition 7.2 to the linkages $\mathcal{P}_{\text{traverse}}$ and $\hat{\mathcal{Q}}$ in $\text{Ring}(\hat{I}_{\text{in}}, \hat{I}_{\text{out}})$, yielding a linkage $\mathcal{P}'_{\text{traverse}}$ that is aligned with $\mathcal{P}_{\text{traverse}}$ and such that
+
+$$ \lvert \overline{\text{WindNum}}(\mathcal{P}'_{\text{traverse}}, \hat{\eta}) - \overline{\text{WindNum}}(\hat{\mathcal{Q}}, \hat{\eta}) \rvert \le 6. $$
+
+Clearly, by construction we have that the paths of $\mathcal{P}'_{\text{traverse}}$ are disjoint from the paths of $\mathcal{P}_{\text{visitor}}$. Furthermore, the paths in $\mathcal{P}'_{\text{traverse}}$ traverse $\text{Ring}(I_{\text{in}}, I_{\text{out}})$ since they are aligned with $\mathcal{P}_{\text{traverse}}$ (i.e. they have the same endpoints). Finally, by Claim 7.3 we have
+
+$$ \lvert \overline{\text{WindNum}}(\hat{Q}, \hat{\eta}) - \overline{\text{WindNum}}(Q, \eta) \rvert \le 40\ell, $$
+
+and by Claim 7.1 (applied to paths in $\mathcal{P}'_{\text{traverse}}$) we have
+
+$$ \lvert \overline{\text{WindNum}}(\mathcal{P}'_{\text{traverse}}, \eta) - \overline{\text{WindNum}}(\mathcal{P}'_{\text{traverse}}, \hat{\eta}) \rvert \le 20\ell. $$
+
+By the above we conclude that
+
+$$ \lvert \overline{\text{WindNum}}(\mathcal{P}'_{\text{traverse}}, \eta) - \overline{\text{WindNum}}(Q, \eta) \rvert \le 60\ell + 6, $$
+
+which completes the proof. □
+
+## 7.3 Rings of the Backbone Steiner Tree
+
+Based on results in previous subsections, we proceed to show that if the given instance admits a solution, then it also admits a solution of small winding number. Recall the backbone Steiner tree $R$ constructed in Section 6. Let $P = \textbf{path}_R(u, v)$ be a long maximal degree-2 path in $R$, where $u, v \in V_{=1}(R) \cup V_{\ge 3}(R)$, and assume without loss of generality (under the supposition that we are given a *Yes* instance) that the subtree of $R-(V(P) \setminus \{u, v\})$ containing $v$ also contains the terminal $t^* \in T$ lying on the outer face of $H$. Recall the (minimal) separators $S_u = \textbf{Sep}_R(P, u)$ and $S_v = \textbf{Sep}_R(P, v)$ in $H$. Hence $H[S_u]$ and $H[S_v]$ form two cycles in $H$, and $H[S_u]$ is contained in the strict interior of $H[S_v]$. Further, recall that $|S_u|, |S_v| \le \alpha_{\text{sep}}(k)$. Consider the ring induced by $H[S_u]$ and $H[S_v]$, i.e. $\text{Ring}(S_u, S_v) := \text{Ring}(H[S_u], H[S_v])$. Let $V(S_u, S_v)$ denote the set of all vertices (in $V(H)$) that lie in this ring, including those in $S_u$ and $S_v$. Then, $\text{Ring}(S_u, S_v) = H[V(S_u, S_v)]$. Note that by definition it contains $\textbf{Sep}_R(P, u)$ and $\textbf{Sep}_R(P, v)$. Let $G_{u,v}$ denote the restriction of this graph to $G$, i.e. $G_{u,v} = G[V(G) \cap V(S_u, S_v)]$. Additionally, recall that there are two distinct vertices $u'$ and $v'$ in $P$ that lie in $S_u$ and $S_v$, respectively, such that $P = \textbf{path}_R(u, u')-\textbf{path}_R(u', v')-\textbf{path}_R(v', v)$ (by Lemma 6.5 and Definition 6.5). Lastly, we remind that $A^*_{R,P,u}$ and $A^*_{R,P,v}$ are the two components of $R-(V(\textbf{path}_R(u', v')) \setminus \{u', v'\})$ that contain $u$ and $v$, respectively (Definition 6.5). Then, the following observation is immediate.
\ No newline at end of file
diff --git a/samples/texts/1754951/page_42.md b/samples/texts/1754951/page_42.md
new file mode 100644
index 0000000000000000000000000000000000000000..a19536b25c817980314bedf1add89e94ba4ec88b
--- /dev/null
+++ b/samples/texts/1754951/page_42.md
@@ -0,0 +1,29 @@
+**Definition 8.7 (Segment).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $W$ be a walk in $H$ that is edge-disjoint from $R$. A crossing of $W$ with $R$ is a crossing $(v, e, \hat{e}, e', \tilde{e})$ of $W$ and some path in $R$.¹⁴ Then, a segment of $W$ is a maximal subwalk of $W$ that has no crossings with $R$. Let $\text{Seg}(W)$ denote the set¹⁵ of segments of $W$.
+
+We remind that $R$ only contains 0-copies of edges, hence we can ensure that we deal with walks that are edge-disjoint from $R$ by avoiding the usage of 0-copies. Towards the definition of potential for a weak linkage, we group segments together as follows (see Fig. 19).
+
+**Definition 8.8 (Segment Group).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $W$ be a walk in $H$ that is edge-disjoint from $R$. A segment group of $W$ is a maximal subwalk $W'$ of $W$ such that either (i) $\text{Seg}(W') \subseteq \text{Seg}(W)$ and all of the endpoints of all of the segments in $\text{Seg}(W')$ are internal vertices of the same maximal degree-2 path of $R$, or (ii) $W' \in \text{Seg}(W)$ and the two endpoints of $W'$ are not internal vertices of the same maximal degree-2 path in $R$.¹⁶ The set of segment groups of $W$ is denoted by $\text{SegGro}(W)$.
+
+Observe that the set of segments, as well as the set of segment groups, defines a partition of a walk. We define the “potential” of a segment group based on its winding number in the ring that corresponds to its path (in case it is a long path, where a ring is defined). To this end, recall the labeling function in Definition 7.4. Note that the labeling is defined for any two walks irrespective of the existence of a ring.
+
+**Definition 8.9 (Potential of Segment Group).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $W$ be a walk in $H$ that is edge-disjoint from $R$ and whose endpoints are in $V_{=1}(R)$. Let $W' \in \text{SegGro}(W)$. If $|\text{Seg}(W')| = 1$, then the potential of $W'$, denoted by $\text{Potential}(W')$, is defined to be 1. Otherwise, it is defined as follows.
+
+$$ \text{Potential}(W') = 1 + \sum_{(e,e') \in E(W^*) \times E(W')} \text{label}_P^{W'}(e, e'), $$
+
+where $W^*$ is the walk obtained from $W'$ by adding two edges to it (the edge consecutive to the first edge of $W'$ in $W$ and the edge consecutive to the last edge of $W'$ in $W$), and $P$ is the maximal degree-2 path of $R$ such that all of the endpoints of all of the segments in $\text{Seg}(W')$ are its internal vertices.
+
+The potential of a segment group is well defined, as we use the function label only for edges incident to internal vertices of the maximal degree-2 paths in $R$. For an example of the potential of a segment group, see Fig. 19. Now, we generalize the notion of potential from segment groups to weak linkages as follows.
+
+**Definition 8.10 (Potential of Weak Linkage).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $\mathcal{W}$ be a weak linkage. Then, the potential of $\mathcal{W}$ is
+
+$$ \text{Potential}(\mathcal{W}) = \sum_{W' \in \text{SegGro}(\mathcal{W})} \text{Potential}(W'), $$
+
+where $\text{SegGro}(\mathcal{W}) = \bigcup_{W \in \mathcal{W}} \text{SegGro}(W)$.
+
+To upper bound the potential of a solution of a small winding number, we first upper bound the number of segment groups.
+
+¹⁴The path might not be a maximal degree-2 path, thus $(v, e, \hat{e}, e', \tilde{e})$ may concern a vertex $v \in V_{\ge 3}(R)$.
+
+¹⁵Because we deal with walks that do not repeat edges, $\text{Seg}(W)$ is necessarily a set rather than a multiset.
+
+¹⁶That is, the two endpoints of $W'$ are internal vertices in different maximal degree-2 paths in $R$ or at least one endpoint of $W'$ is a vertex in $V_{=1}(R) \cup V_{\ge 3}(R)$.
\ No newline at end of file
diff --git a/samples/texts/1754951/page_45.md b/samples/texts/1754951/page_45.md
new file mode 100644
index 0000000000000000000000000000000000000000..c1cbe6141e83335c67e749a0b33c8b80eee30783
--- /dev/null
+++ b/samples/texts/1754951/page_45.md
@@ -0,0 +1,17 @@
+Figure 1: Flow at a vertex and its reduction.
+
+Figure 2: Two different ways of extracting a walk from a flow.
+
+directed graphs. (In this context, undirected graphs are treated as directed graphs by replacing each edge by two parallel arcs of opposite directions.) Specifically, we denote an instance of Directed Planar Disjoint Paths as a tuple $(D, S, T, g, k)$ where $D$ is a directed plane graph, $S, T \subseteq V(D)$, $k = |S|$ and $g: S \to T$ is bijective. Then, a *solution* is a set $\mathcal{P}$ of pairwise vertex-disjoint directed paths in $D$ containing, for each vertex $s \in S$, a path directed from $s$ to $g(s)$.
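
The definition of a solution above can be checked mechanically. A minimal sketch, assuming the directed graph is given as an out-neighbor dict and candidate paths as vertex lists (an encoding we choose for illustration, not the paper's):

```python
def is_solution(arcs, S, T, g, paths):
    """Check that `paths` (a dict s -> vertex list) is a solution of (D, S, T, g, k):
    each path is a directed path from s to g(s), and paths are pairwise vertex-disjoint."""
    used = set()
    for s in S:
        p = paths.get(s)
        # the path must start at s and end at g(s)
        if p is None or p[0] != s or p[-1] != g[s]:
            return False
        # every consecutive pair of vertices must be an arc of D
        if any(w not in arcs.get(v, set()) for v, w in zip(p, p[1:])):
            return False
        # vertex-disjointness with the previously checked paths
        if used & set(p):
            return False
        used |= set(p)
    return True
```

Note that $T$ is not consulted directly: it is determined by $g$ and $S$, since $g$ is a bijection from $S$ onto $T$.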
+
+In the language of flows, each arc of $D$ is assigned a word with letters in $T \cup T^{-1}$ (that is, we treat the set of vertices $T$ also as an alphabet), where $T^{-1} = \{t^{-1} : t \in T\}$. The set of all such words is denoted by $(T \cup T^{-1})^*$, and we let 1 denote the empty word. A word is *reduced* if, for all $t \in T$, the letters $t$ and $t^{-1}$ do not appear consecutively. Then, a *flow* is an assignment of reduced words to arcs that satisfies two constraints. First, when we concatenate the words assigned to the arcs incident to a vertex $v \notin S \cup T$ in clockwise order, where words assigned to ingoing arcs are reversed and their letters negated, the result (when reduced) is the empty word 1 (see Fig. 1). This is an algebraic interpretation of the standard flow-conservation constraint. Second, when we do the same operation with respect to a vertex $v \in S \cup T$: when $v \in S$, the result is $g(v)$ (rather than the empty word), and when $v \in T$, the result is $v$ itself. There is a natural association of flows to solutions: for every $t \in T$, assign the letter $t$ to all arcs used by the path from $g^{-1}(t)$ to $t$.
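
The word algebra used here is just free reduction over the alphabet $T \cup T^{-1}$. A minimal sketch, representing the letter $t^{-1}$ as the pair `("t", -1)` (an encoding we assume for illustration):

```python
def inv(word):
    """Reverse a word and negate its letters: the operation applied to ingoing arcs."""
    return [(t, -s) for (t, s) in reversed(word)]

def reduce_word(word):
    """Cancel adjacent t, t^{-1} pairs until the word is reduced (stack-based free reduction)."""
    out = []
    for letter in word:
        if out and out[-1][0] == letter[0] and out[-1][1] == -letter[1]:
            out.pop()  # t followed by t^{-1} (or vice versa) cancels
        else:
            out.append(letter)
    return out

def conserved(arc_words_clockwise):
    """Flow conservation at a vertex not in S ∪ T: the clockwise concatenation
    (with ingoing words already inverted by the caller) reduces to the empty word."""
    total = [letter for w in arc_words_clockwise for letter in w]
    return reduce_word(total) == []
```

The stack-based pass suffices because cancelling one adjacent pair can only create a new cancellable pair at the top of the stack.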
+
+Roughly speaking, Schrijver proved that if a flow $\phi$ is given along with the instance $(D, S, T, g, k)$, then in *polynomial time* we can either find a solution or determine that there is no solution “similar to $\phi$”. Specifically, two flows are *homologous* (which is the notion of similarity) if one can be obtained from the other by a *set* of “face operations” defined as follows.
+
+**Definition 2.1.** Let $D$ be a directed plane graph with outer face $f$, and denote the set of faces of $D$ by $\mathcal{F}$. Two flows $\phi$ and $\psi$ are homologous if there exists a function $h : \mathcal{F} \to (T \cup T^{-1})^*$ such that (i) $h(f) = 1$, and (ii) for every arc $e \in A(D)$, $h(f_1)^{-1} \cdot \phi(e) \cdot h(f_2) = \psi(e)$ where $f_1$ and $f_2$ are the faces at the left-hand side and the right-hand side of $e$, respectively.
+
+Then, a slight modification of Schrijver's theorem [42] readily gives the following corollary.
+
+**Corollary 2.1.** There is a polynomial-time algorithm that, given an instance $(D, S, T, g, k)$ of Directed Planar Disjoint Paths, a flow $\phi$ and a subset $X \subseteq A(D)$, either finds a solution of $(D - X, S, T, g, k)$ or decides that there is no solution of it such that the “flow associated with it” and $\phi$ are homologous in $D$.
+
+**Discrete Homotopy and Our Objective.** While the language of flows and homology can be used to phrase our arguments, it also makes them substantially longer and somewhat obscure because it gives rise to multiple technicalities. For example, different sets of non-crossing walks may correspond to the same flow (see Fig. 2). Instead, we define a notion of *discrete homotopy*, inspired by (standard) homotopy. Specifically, we deal only with collections of non-crossing *and edge-disjoint* walks, called *weak linkages*. Then, two weak linkages are *discretely homotopic* if
\ No newline at end of file
diff --git a/samples/texts/1754951/page_49.md b/samples/texts/1754951/page_49.md
new file mode 100644
index 0000000000000000000000000000000000000000..06f3d945e90018ed1fbcb54fb3f9955ad50d83ad
--- /dev/null
+++ b/samples/texts/1754951/page_49.md
@@ -0,0 +1,12 @@
+$\mathcal{W}'$ is sensible, well-behaved, shallow and outer-terminal, has the same potential as $\mathcal{W}$, and
+$\sum_{\hat{S} \in \text{Seq}(\mathcal{W}')} \text{Volume}(\hat{S}) < \sum_{\hat{S} \in \text{Seq}(\mathcal{W})} \text{Volume}(\hat{S})$.
+
+*Proof.* We first argue that the cycle move/pull operation is applicable to $(\mathcal{W}, C)$. Let $P_1, P_2$ and $P_3$ be the partition of $C$ in Definition 8.17. Note that because $S$ is innermost, its projecting cycle does not contain any edge in $E(\mathcal{W})$ that is not parallel to some edge of $R$, and hence neither does $C$, as it is drawn in the interior (including the boundary) of the projecting cycle. Furthermore, the only edges parallel to $R$ that $C$ can contain are those parallel to the edge $e_i$ of $P_3$ whose subscripts have absolute value larger than $|i|$. However, none of these edges belong to $\mathcal{W}$ because $i \in \{-n+\ell-1, n-h+1\}$ where $\ell$ and $h$ are as in Definition 8.16, and $\mathcal{W}$ is shallow. Lastly, note that $P_1$ is either $S$ (which might be a cycle) or a subpath of $S$, and hence it is a subwalk of $\mathcal{W}$. Thus, the cycle move or pull (depending on whether $P_1$ is a cycle) is applicable to $(\mathcal{W}, C)$. Furthermore, the new weak linkage $\mathcal{W}'$ that results from the application is the modification of $\mathcal{W}$ that replaces $P_1$ by the path consisting of $P_2$ and $P_3$.
+
+Because $\mathcal{W}$ is sensible and the endpoints of no walk in $\mathcal{W}$ are changed in $\mathcal{W}'$, we have that $\mathcal{W}'$ is sensible as well. Moreover, the vertices of $P_2$ are not used by any walk in $\mathcal{W}$, and in $\mathcal{W}'$ they are used only by the subwalk that traverses $P_2$; therefore, as $\mathcal{W}$ is well-behaved, so is $\mathcal{W}'$. Additionally, note that $\mathcal{W}$ is shallow and that each edge belongs to at most as many projecting cycles of sequences in $\mathcal{W}'$ as it does in $\mathcal{W}$. Thus, if $P_3$ does not contain an edge (the only edge parallel to an edge of $R$ that might be used by $\mathcal{W}'$ but not by $\mathcal{W}$ is the edge $e_i$ of $P_3$, if it exists), it is immediate that $\mathcal{W}'$ is shallow. Now, suppose that $e_i$ exists. Let $b \in \{-1, 1\}$ have the same sign as $i$. Recall that $i \in \{-n+\ell-1, n-h+1\}$ where $\ell$ and $h$ are as in Definition 8.16; thus, to conclude that $\mathcal{W}'$ is shallow, we only need to argue that $e_b$ belongs to the interior of fewer projecting cycles of sequences in $\mathcal{W}'$ than it does in $\mathcal{W}$. However, this holds since the only difference between the sequences of $\mathcal{W}$ and $\mathcal{W}'$ is that the sequence $S$ occurs in $\mathcal{W}$ (and contains $e_b$ in the interior of its projecting cycle), but is transformed into (one or two) other sequences in $\mathcal{W}'$, and these new sequences, by the definition of $\mathcal{W}'$, no longer contain $e_b$ in their projecting cycles. In this context, also note that the projecting cycles of the (one or two) new sequences enclose disjoint areas contained in the area enclosed by the projecting cycle of $S$, and the projecting cycles of the new sequences do not enclose the faces enclosed by $C$, but the projecting cycle of $S$ does enclose them. Thus, $\sum_{\hat{S} \in \text{Seq}(\mathcal{W}')} \text{Volume}(\hat{S}) < \sum_{\hat{S} \in \text{Seq}(\mathcal{W})} \text{Volume}(\hat{S})$.
+
+It remains to show that $\mathcal{W}'$ is outer-terminal and that it has the same potential as $\mathcal{W}$. The second claim is immediate since $\mathcal{W}$ and $\mathcal{W}'$ have precisely the same crossings with $R$. For the first claim, note that since $\mathcal{W}$ is outer-terminal, it uses exactly one edge incident to $t^*$. The only vertex of $R$ that can possibly be incident to more edges in $E(\mathcal{W}')$ than in $E(\mathcal{W})$ is the other endpoint, say, $w$, of the edge of $P_3$ in the case where $P_3$ contains an edge. So, suppose that $P_3$ does contain an edge and that $w = t^*$, else we are done. Since $t^*$ is a leaf of $R$ that belongs to the boundary of the outer face of $H$, it cannot be enclosed in the strict interior of the projecting cycle of $S$ and therefore it must be a vertex of $S$. However, this together with the maximality of the number of cycles enclosed by the shrinking cycle $C$ implies that $C$ is equal to the projecting cycle of $S$. Thus, by the definition of $\mathcal{W}'$, the only difference between the edges incident to $t^*$ in $\mathcal{W}$ compared to $\mathcal{W}'$ is that in $\mathcal{W}$ it is incident to an edge of $S$, while in $\mathcal{W}'$ it is incident to the edge of $P_3$. In particular, this means that $\mathcal{W}'$ has exactly one edge incident to $t^*$ and therefore it is outer-terminal. □
+
+Having Lemmas 8.2, 8.4, 8.5 and 8.6 at hand, we are ready to push a solution onto $R$. Since this part is only required to be existential rather than algorithmic, we give a simpler proof by contradiction rather than an explicit process to push the solution. Notice that once the solution has already been pushed, rather than using the notion of shallowness, we only demand multiplicity at most $2n$.
+
+**Lemma 8.7.** Let $(G, S, T, g, k)$ be a good Yes-instance of Planar Disjoint Paths, and $R$ be a backbone Steiner tree. Then, there exists a sensible outer-terminal weak linkage $\mathcal{W}$ in $H$ that
\ No newline at end of file
diff --git a/samples/texts/1754951/page_56.md b/samples/texts/1754951/page_56.md
new file mode 100644
index 0000000000000000000000000000000000000000..4e1fed41971abcdfbb1069e8be967753a1e8ef81
--- /dev/null
+++ b/samples/texts/1754951/page_56.md
@@ -0,0 +1,15 @@
+Figure 3: Moving a walk of a weak linkage (in blue) onto the Steiner tree (the walk in purple) with “face operations” (e.g. a sub-path of the blue path is pushed giving the green sub-path).
+
+one can be obtained from the other by using “face operations” that push/stretch its walks across faces and keep them non-crossing and edge-disjoint (see Fig. 3). More precisely, discrete homotopy is an equivalence relation that consists of three face operations, whose precise definition (not required to understand this overview) can be found in Section 5. We note that the order in which face operations are applied is important in discrete homotopy (unlike homology)—we cannot stretch a walk across a face if no walk passes its boundary, but we can execute operations that will move a walk to that face, and then stretch it. In Section 5, we translate Corollary 2.1 to discrete homotopy (and undirected graphs) to derive the following result.
+
+**Lemma 2.1.** There is a polynomial-time algorithm that, given an instance $(G, S, T, g, k)$ of *Planar Disjoint Paths*, a weak linkage $W$ in $G$ and a subset $X \subseteq E(G)$, either finds a solution of $(G - X, S, T, g, k)$ or decides that no solution of it is discretely homotopic to $W$ in $G$.
+
+In light of this result, our objective is reduced to the following task.
+
+Compute a collection of weak linkages such that if there exists a solution, then there also exists a solution (*possibly a different one!*) that is discretely homotopic to one of the weak linkages in our collection. To prove Theorem 1.1, the size of the collection should be upper bounded by $2^{\mathcal{O}(k^2)}$.
+
+**Key Player: Steiner Tree.** A key to the proof of our theorem is a very careful construction (done in three steps in Section 6) of a so-called *Backbone Steiner tree*. We use the term Steiner tree to refer to any tree in the *radial completion* of $G$ (the graph obtained by placing a vertex on each face and making it adjacent to all vertices incident to the face) whose set of leaves is precisely $S \cup T$. In the first step, we consider an arbitrary Steiner tree as our Steiner tree $R$. Having $R$ at hand, we have a more focused goal: we will zoom into weak linkages that are “pushed onto $R$”, and we will only generate such weak linkages to construct our collection. Informally, a weak linkage is *pushed onto* $R$ if all of the edges used by all of its walks are *parallel to* edges of $R$. We do not demand that the edges belong to $R$, because then the goal we are about to describe could not be achieved—instead, we make $4n+1$ parallel copies of each edge in the radial completion (the number $4n+1$ arises from considerations in the “pushing process”), and then impose the above weaker demand. Now, our goal is to show that, if there exists a solution, then there also exists one that can be pushed onto $R$ by applying face operations (in discrete homotopy) so that it becomes *identical* to one of the weak linkages in our collection (see Fig. 3).
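
The radial completion described above can be sketched as follows, under the assumption that the plane graph is given together with its faces as lists of incident vertices (an input format we choose for illustration):

```python
def radial_completion(vertices, edges, faces):
    """Add one fresh 'face vertex' per face, adjacent to every vertex on that face.
    Returns the enlarged vertex and edge sets; the added edges are the 'fake' edges
    that a solution must avoid (the set X of Lemma 2.1)."""
    new_vertices = set(vertices)
    new_edges = set(edges)
    for i, face in enumerate(faces):
        fv = ("face", i)  # fresh vertex placed inside face i
        new_vertices.add(fv)
        for v in face:
            new_edges.add(frozenset((fv, v)))
    return new_vertices, new_edges
```

Edges are stored as frozensets so that the graph stays undirected and duplicates collapse; the embedding itself (the rotation system around each face vertex) is not modeled in this sketch.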
+
+At this point, one remark is in order. Our Steiner tree $R$ is a subtree of the radial completion of $G$ rather than of $G$ itself. Thus, if there exists a solution discretely homotopic to one of the weak linkages that we generate, it might not be a solution in $G$. We easily circumvent this issue by letting the set $X$ in Lemma 2.1 contain all “fake” edges.
+
+**Partitioning a Weak Linkage Into Segments.** For the sake of clarity, before we turn to present the next two steps taken to construct $R$, we begin with the (non-algorithmic) part of
\ No newline at end of file
diff --git a/samples/texts/1754951/page_58.md b/samples/texts/1754951/page_58.md
new file mode 100644
index 0000000000000000000000000000000000000000..420ae6ea0d08bcc0dd712356c46f5b274536ca7d
--- /dev/null
+++ b/samples/texts/1754951/page_58.md
@@ -0,0 +1,21 @@
+*Proof.* Let $e_i$ and $e'_j$ be the first and last edges of $S$, and denote the endpoints of $S$ by $u$ and $v$ where $u$ is an endpoint of $e_i$. Because $S$ is a swollen segment, both these edges are on the same side of $\text{path}_R(u, v)$. Let $P$ be the unique subpath in $R$ between $u$ and $v$. Because $\mathcal{W}$ does not have any special U-turn, we have that one of the following cases occurs (see Fig. 27): (i) $S$ traverses a path that starts at $e_i$, consists of edges parallel to $P$ and ends at $e'_j$; (ii) $S$ traverses a path that starts at $e_i$, consists of edges parallel to $P$ but does not end at $e'_j$, and hence (to reach $e'_j$ without having U-turns) $S$ traverses at least two copies (on opposite sides) of every edge of $R$; (iii) the first edge that $S$ traverses after $e_i$ is not parallel to an edge of $P$, and hence (to reach $e'_j$ without having U-turns) $S$ traverses at least two copies (on opposite sides) of every edge of $R$ except possibly for the edges of $P$. In the first case, we are done. In the other two cases, we have that $\mathcal{E}(\mathcal{W})$ contains more than one copy of the edge incident to $t^*$ in $R$, which contradicts the assumption that $\mathcal{W}$ is outer-terminal. □
+
+The segment chosen to move at each step is an innermost one, formally defined as follows.
+
+**Definition 8.21 (Innermost Swollen Segment).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $\mathcal{W}$ be weak linkage that is pushed onto $R$. Let $S \in \text{Seg}(\mathcal{W})$ be swollen. Then, $S$ is innermost if there do not exist parallel edges $e_i \in E(S)$ and $e_j \in E(\mathcal{W}) \setminus E(S)$ such that $i$ and $j$ have the same sign and $|j| < |i|$.
+
+We now argue that if there is a swollen segment, then there is also an innermost one.
+
+**Lemma 8.14.** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $\mathcal{W}$ be an outer-terminal weak linkage that has no special U-turns and is pushed onto $R$, such that $\text{Seg}(\mathcal{W})$ contains at least one swollen segment. Then, $\text{Seg}(\mathcal{W})$ contains at least one innermost swollen segment.
+
+*Proof.* Let $S$ be a swollen segment of $\mathcal{W}$ such that the sum of the absolute values of the indices of the edge copies it uses is minimized. By Lemma 8.13, $S$ is parallel to a subpath $P$ of a maximal degree-2 path of $R$. Thus, because $S$ is a segment, all the edge copies it uses are on the same side. We claim that $S$ is innermost. Suppose, by way of contradiction, that this claim is false. Then there exist parallel edges $e_i \in E(S)$ and $e_j \in E(\mathcal{W}) \setminus E(S)$ such that $i$ and $j$ have the same sign and $|j| < |i|$. Let $S'$ be the segment of $\mathcal{W}$ to which $e_j$ belongs. Because $\mathcal{W}$ has no special U-turns and because weak linkages contain neither crossings nor repeated edges, it follows that $S'$ is parallel to a subpath $Q$ of $P$ and consists only of edge copies whose indices have strictly smaller absolute value than the indices of the edge copies of $S$ they run parallel to. However, this implies that $S'$ is a swollen segment of $\mathcal{W}$ such that the sum of the absolute values of the indices of the edge copies it uses is smaller than the corresponding sum for $S$. This contradicts the choice of $S$. □
+
+Given an innermost swollen segment whose copies all lie on one side of R, we would like to
+move the segment to “the other side” of R. We know that these copies will be free in case we
+handle an extremal weak linkage. We now define a tuple of cycles on which we will perform
+move operations (see Fig. 26). The fact that this notion is well-defined (in the sense that the
+indices ℓ in the definition exist) will be argued in the lemma that follows it.
+
+**Definition 8.22 (Move-Through Tuple).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $\mathcal{W}$ be an outer-terminal extremal weak linkage that has no special U-turns and is pushed onto $R$, and let $S \in \text{Seg}(\mathcal{W})$ be an innermost swollen segment. Let $e_{i_1}^1, e_{i_2}^2, \dots, e_{i_t}^t$, where $t = |E(S)|$, be the edges of $S$ in the order in which they occur when $S$ is traversed from one endpoint to another.¹⁹ Then, the move-through tuple of $S$ is $T = (C_1, \dots, C_t)$ where for every $j \in \{1, \dots, t\}$, $C_j$ is a cycle that consists of two parallel edges: $e_{i_j}^j$ and $e_\ell^j$ where $\ell$ is
+
+¹⁹To avoid ambiguity in the context of this definition, suppose that we have a fixed choice (e.g., lexicographic) of which endpoint is traversed first.
\ No newline at end of file
diff --git a/samples/texts/1754951/page_62.md b/samples/texts/1754951/page_62.md
new file mode 100644
index 0000000000000000000000000000000000000000..bcaabd0222d4f3e3d25794d3c64cc11ec1770f6a
--- /dev/null
+++ b/samples/texts/1754951/page_62.md
@@ -0,0 +1,19 @@
+**Lemma 8.20.** Let $(G, S, T, g, k)$ be a good Yes-instance of Planar Disjoint Paths, and $R$ be a backbone Steiner tree. Then, there exists a sensible, outer-terminal, U-turn-free weak linkage $W$ in $H$ that is pushed onto $R$, discretely homotopic in $H$ to some solution of $(G, S, T, g, k)$, and $|\text{Seg}(W)| \le \alpha_{\text{potential}}(k)$.
+
+*Proof.* By Lemma 8.17, there exists a sensible, outer-terminal, extremal weak linkage in $H$ that is pushed onto $R$, has no special U-turns and no swollen segments, is discretely homotopic in $H$ to some solution of $(G, S, T, g, k)$, and whose potential is upper bounded by $\alpha_{\text{potential}}(k)$. By Lemma 8.18, its number of segments is also upper bounded by $\alpha_{\text{potential}}(k)$. Among all such weak linkages that are sensible, outer-terminal, pushed onto $R$ and whose number of segments is upper bounded by $\alpha_{\text{potential}}(k)$, let $W$ be one with a minimum number of edges. To conclude the proof, it suffices to argue that $W$ is U-turn-free.
+
+Suppose, by way of contradiction, that $W$ has at least one U-turn. Then, by Lemma 8.10, $W$ has an innermost U-turn $U = \{e_i, e_j\}$. Let $W_0$ be the walk in $W$ that uses $e_i$ and $e_j$, and $C$ be the cycle in $H$ that consists of $e_i$ and $e_j$. Then, by Lemma 8.11, the cycle pull operation is applicable to $(W_0, C)$. Furthermore, by Lemma 8.11, the resulting weak linkage $W'$ is sensible, outer-terminal, pushed onto $R$, has fewer edges than $W$, and its number of segments is upper bounded by the number of segments of $W$. Since discrete homotopy is an equivalence relation, $W'$ is discretely homotopic to some solution of $(G, S, T, g, k)$. However, this is a contradiction to the choice of $W$. $\square$
+
+Now, we prove that having no U-turns implies that each segment can use at most two parallel copies of every edge.
+
+**Lemma 8.21.** Let $(G, S, T, g, k)$ be a nice instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $W$ be an outer-terminal, U-turn-free weak linkage that is pushed onto $R$. Then, each segment $S \in \text{Seg}(W)$ uses at most two copies of every edge in $E(R)$.
+
+*Proof.* Consider some segment $S \in \text{Seg}(W)$. Suppose, by way of contradiction, that there exists some edge $e_0 \in E(R)$ such that $S$ contains at least three edges parallel to $e_0$ (but it cannot contain $e_0$ itself, as $W$ is pushed onto $R$). Then, without loss of generality, suppose that it contains two copies with positive subscript, $e_i$ and $e_j$, and let $S'$ be the subwalk of $S$ having these edge copies as its extreme edges. Then, because $S$ is U-turn-free, when we traverse $S'$ from $e_i$ to $e_j$, we must visit the positive and negative copy of every other edge in $E(R)$ exactly once. However, this means that $E(W)$ contains more than one copy of the edge incident to $t^*$ in $R$, which contradicts the assumption that $W$ is outer-terminal. $\square$
+
+Having established Lemmas 8.8, 8.20 and 8.21, we are ready to prove Lemma 8.19.
+
+*Proof of Lemma 8.19.* By Lemma 8.20, there exists a sensible, outer-terminal, U-turn-free weak linkage $W'$ in $H$ that is pushed onto $R$, discretely homotopic in $H$ to some solution of $(G, S, T, g, k)$, and whose number of segments is upper bounded by $\alpha_{\text{potential}}(k)$. By Lemma 8.21, the multiplicity of $W'$ is upper bounded by $2\alpha_{\text{potential}}(k) = \alpha_{\text{mul}}(k)$. By Lemma 8.8, there exists a weak linkage $W$ that is sensible, pushed onto $R$, canonical, discretely homotopic to $W'$, and whose multiplicity is upper bounded by the multiplicity of $W'$. Thus, $W$ is simplified. Moreover, since discrete homotopy is an equivalence relation, $W$ is discretely homotopic to some solution of $(G, S, T, g, k)$. $\square$
+
+# 9 Reconstruction of Pushed Weak Linkages from Templates
+
+In this section, based on the guarantee of Lemma 8.19, we only attempt to reconstruct simplified weak linkages. Towards this, we introduce the notion of a template (based on another notion called a pairing). Roughly speaking, a template indicates how many parallel copies of each edge
\ No newline at end of file
diff --git a/samples/texts/1754951/page_63.md b/samples/texts/1754951/page_63.md
new file mode 100644
index 0000000000000000000000000000000000000000..3ee0db59f742d22086e8ce09875ef2c20c348402
--- /dev/null
+++ b/samples/texts/1754951/page_63.md
@@ -0,0 +1,17 @@
+incident to a vertex in $V_1(R) \cup V_2(R)$ are used by the walks in the simplified weak linkage $\mathcal{W}$ under consideration, and how many times, for each pair $(e, e')$ of non-parallel edges sharing a vertex, the walks in $\mathcal{W}$ traverse from a copy of $e$ to a copy of $e'$. Observe that a template does not indicate which edge copy is used by each walk, but only specifies certain numbers. Nevertheless, we will show that this is sufficient for faithful reconstruction of simplified weak linkages. The great advantage of templates, proved later, is that there are only a few of them.
+
+## 9.1 Generic Templates and Templates of Simplified Weak Linkages
+
+We begin with the definition of the notion of a pairing, which will form the basis of a template. Let $V^*(R) = V_{\ge 3}(R) \cup V_2^*(R)$ where $V_2^*(R) = \{v \in V_2(R) \mid \exists u \in V_{\le 1}(R) \cup V_{\ge 3}(R) \text{ such that } \{u, v\} \in E(R)\}$. Observe that $|V_2^*(R)| \le 2(|V_{\le 1}(R)| + |V_{\ge 3}(R)| - 1) \le 8k$, by Observation 6.1. Therefore, $|V^*(R)| \le 12k$. Let $E^*(R)$ denote the set of edges in $E(R)$ that are incident to a vertex of $V^*(R)$, and observe that $|E^*(R)| \le 24k$ (since $R$ is a tree).
+
+**Definition 9.1 (Pairing).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths with a Steiner tree $R$. For a vertex $v \in V_{\ge 3}(R)$, a pairing at $v$ is a set $\textbf{pairing}_v$ of unordered pairs of distinct edges in $R$ incident to $v$. For a vertex $v \in V_2^*(R)$, a pairing at $v$ is a collection of pairs of (possibly non-distinct) edges in $E_R(v)$. And, for a vertex $v \in V_{\le 1}(R)$, it is either the empty set or the singleton set containing the pair in which the (unique) edge incident to $v$ in $R$ occurs twice. A collection $\{\textbf{pairing}_u\}_{u \in V^*(R)}$, where $\textbf{pairing}_u$ is a pairing at $u$ for every vertex $u \in V^*(R)$, is called a pairing.
+
+As we will see later, simplified weak linkages can only give rise to a specific type of pairings, which we call non-crossing pairings.
+
+**Definition 9.2 (Non-Crossing Pairing).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths. Let $R$ be a Steiner tree. Consider a vertex $v \in V^*(R)$, and let $e^1, e^2, \dots, e^r$ be the edges in $E(R)$ incident to $v$ in clockwise order where the first edge $e^1$ is chosen arbitrarily. A pairing $\textbf{pairing}_v$ at $v$ is non-crossing if there do not exist two pairs $(e^i, e^j)$ and $(e^x, e^y)$ in $\textbf{pairing}_v$, where $i < j$ and $x < y$, such that $i < x < j < y$ or $x < i < y < j$. More generally, a pairing $\{\textbf{pairing}_u\}_{u \in V^*(R)}$ is non-crossing if, for every $u \in V^*(R)$, the pairing $\textbf{pairing}_u$ is non-crossing.
+
+We now show that a non-crossing pairing can contain only $\mathcal{O}(k)$ pairs, which is better than a trivial bound of $\mathcal{O}(k^2)$. This bound will be required to attain a running time of $2^{\mathcal{O}(k^2)} n^{\mathcal{O}(1)}$ rather than $2^{\mathcal{O}(k^3)} n^{\mathcal{O}(1)}$.
+
+**Lemma 9.1.** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths. Let $R$ be a Steiner tree. Let $\{\textbf{pairing}_v\}_{v \in V^*(R)}$ be a non-crossing pairing. Then, $\left|\bigcup_{v \in V^*(R)} \textbf{pairing}_v\right| \le \alpha_{npair}(k) := 48k$.
+
+*Proof.* Towards the bound on $\left|\bigcup_{v \in V^*(R)} \textbf{pairing}_v\right|$, we first obtain a bound on each individual set $\textbf{pairing}_v$. To this end, consider some vertex $v \in V_{\ge 3}(R)$, and let $e^0, e^1, \dots, e^{r-1}$ be the edges in $E(R)$ incident to $v$ in clockwise order. Consider the undirected graph $C$ on vertex set $\{u_{e^i} \mid i \in \{0, 1, \dots, r-1\}\}$ and edge set $\{\{u_{e^i}, u_{e^{(i+1) \bmod r}}\} \mid i \in \{0, 1, \dots, r-1\}\} \cup \{\{u_{e^i}, u_{e^{j}}\} \mid (e^i, e^j) \in \textbf{pairing}_v\}$. Now, notice that $C$ is an outerplanar graph (Fig. 28). To see this, draw the vertices of $C$ on a circle in the plane, so that the curves on the circle that connect consecutive vertices correspond to the drawing of the edges in $\{\{u_{e^i}, u_{e^{(i+1) \bmod r}}\} \mid i \in \{0, 1, \dots, r-1\}\}$. Now, for each edge in $\{\{u_{e^i}, u_{e^{j}}\} \mid (e^i, e^j) \in \textbf{pairing}_v\}$, draw a straight line segment inside the circle that connects $u_{e^i}$ and $u_{e^{j}}$. The condition asserting that $\textbf{pairing}_v$ is non-crossing ensures that no two line segments among those drawn previously intersect (except possibly at their endpoints). As an outerplanar graph on $q$ vertices can have at most $2q-3$ edges, we have that $|E(C)| < 2|V(C)| = 2r$. Because $\left|\textbf{pairing}_v\right| \le |E(C)|$, we have that $\left|\textbf{pairing}_v\right| \le 2r$. For a vertex $v \in V_2^*(R)$, since it has only two edges incident on it, $\left|\textbf{pairing}_v\right| \le 3$. Finally, for $v \in V_{\le 1}(R)$, $\left|\textbf{pairing}_v\right| \le 1$.
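The outerplanar argument above can be checked by brute force for small $r$ (an illustrative sketch; the function names are ours, not the paper's): a maximum set of pairwise non-crossing chords on $r$ cyclically ordered endpoints has exactly $2r - 3$ members, which is below the $2r$ bound used in the proof.

```python
# Brute-force check (illustrative) of the outerplanar bound used in Lemma 9.1:
# a maximum set of pairwise non-crossing chords on r cyclically ordered points
# has 2r - 3 members (the edge count of a maximal outerplanar graph), so any
# non-crossing pairing at a degree-r vertex has fewer than 2r pairs.
from itertools import combinations

def crossing(p, q):
    # chords (i, j) and (x, y), with i < j and x < y, cross iff they interleave
    (i, j), (x, y) = p, q
    return i < x < j < y or x < i < y < j

def max_noncrossing(r):
    chords = list(combinations(range(r), 2))
    best = 0
    for mask in range(1 << len(chords)):  # all chord subsets; fine for small r
        sel = [c for t, c in enumerate(chords) if mask >> t & 1]
        if all(not crossing(p, q) for p, q in combinations(sel, 2)):
            best = max(best, len(sel))
    return best

for r in (4, 5):
    assert max_noncrossing(r) == 2 * r - 3
    assert 2 * r - 3 < 2 * r
```

For $r = 5$, for instance, the maximum is $7$: the five "hull" pairs of cyclically adjacent edges plus two nested diagonals.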
\ No newline at end of file
diff --git a/samples/texts/1754951/page_72.md b/samples/texts/1754951/page_72.md
new file mode 100644
index 0000000000000000000000000000000000000000..a13e45871f6a37060d7fb8c8107a68df4d536dfb
--- /dev/null
+++ b/samples/texts/1754951/page_72.md
@@ -0,0 +1,15 @@
+Let us now consider the edges in $E^*(v)$. Since $\mathcal{W}$ is a weak linkage, there is exactly one walk, say $W_1$, that has the vertex $v$ as an endpoint. Any other walk in $\mathcal{W}$ contains an even number of edges from $E_H(v)$, and since $\mathcal{W}$ is pushed onto $R$, these edges are all parallel copies of $e^*$. Hence, the walks in $\mathcal{W}$ contain an odd number of parallel copies of $e^*$ in total, i.e. $\ell(e^*)$ is an odd number. Since $v$ is an endpoint of $W_1 \in \mathcal{W}$, there is exactly one edge $e_z^* \in E^*(v)$ such that $\text{stitch}_v(e_z^*) = e_z^*$.
+
+We claim that $z = \frac{\ell(e^*)+1}{2}$. Towards this, let us argue that for any edge $e_i^*$, where $i < z$, if $\text{stitch}_v(e_i^*) = e_j^*$ then $j > z$. Suppose not, and without loss of generality assume that $i < j < z$. Let us choose $i$ such that $|j - i|$ is minimized, and note that $j \neq i$. Consider the collection of edges $e_p^*$ such that $i < p < j$. If this collection is empty, i.e. $j = i + 1$, then the edges $e_i$ and $e_j$ form a U-turn, since $\text{stitch}_v(e_i) = e_j$ only if they were consecutive edges of some walk in $\mathcal{W}$ and there is no edge in the strict interior of the cycle formed by the parallel edges $e_i$ and $e_j$; this is a contradiction. Otherwise, the collection is non-empty, and we observe that if $\text{stitch}_v(e_p^*) = e_q^*$ then $i < q < j$. Indeed, if this were not the case, then the pairs $(e_i^*, e_j^*)$ and $(e_p^*, e_q^*)$ would be crossing at $v$, since they occur as $e_i^* < e_p^* < e_j^* < e_q^*$ in $\text{order}_v$; this contradicts the fact that the weak linkage $\mathcal{W}$ is non-crossing. Hence $i < q < j$, and so $|q - p| < |j - i|$. But this contradicts the choice of $i$. Hence, for every $i < z$, $\text{stitch}_v(e_i^*) = e_j^*$ where $j > z$. A symmetric argument holds for the other case, i.e. if $i > z$ then $\text{stitch}_v(e_i^*) = e_j^*$ where $j < z$. Therefore, we can conclude that
+$$z = \frac{\ell(e^*)+1}{2}, \text{ and hence } \text{stitch}_v(e_z^*) = e_{\ell(e^*)+1-z}^*$$
+
+Let us now consider the other edges in $E^*(v)$. Suppose that there exist integers $1 \le i, p \le \ell(e^*)$ with $\text{stitch}_v(e_i^*) = e_j^*$ and $\text{stitch}_v(e_p^*) = e_q^*$ such that $i < p < z$ and $z < j < q$. Then it is clear that the pairs $(e_i^*, e_j^*)$ and $(e_p^*, e_q^*)$ are crossing at $v$, which is a contradiction. Therefore, if $i < p < z$ then $z < q < j$, and this holds for every choice of $i$ and $p$. A symmetric argument holds in the other direction, i.e. if $i > p > z$, $\text{stitch}_v(e_i^*) = e_j^*$ and $\text{stitch}_v(e_p^*) = e_q^*$, then $j < q < z$. Now we claim that for any $i \in \{1, 2, \dots, \ell(e^*)\}$, if $\text{stitch}_v(e_i^*) = e_j^*$ then $j = \ell(e^*) + 1 - i$. Suppose not, and consider the case $i < z$, and further let $j < \ell(e^*) + 1 - i$. Then observe that, for any edge $e_p^* \in \{e_{i+1}^*, \dots, e_{z-1}^*\}$, $\text{stitch}_v(e_p^*) \in \{e_{z+1}^*, \dots, e_{j-1}^*\}$. But $\{e_{i+1}^*, \dots, e_{z-1}^*\}$ is strictly larger than $\{e_{z+1}^*, \dots, e_{j-1}^*\}$, which is a contradiction to the definition of $\text{stitch}_v$. Hence, $j \ge \ell(e^*) + 1 - i$. A symmetric argument implies that $j \le \ell(e^*) + 1 - i$. Therefore, for any $i < z$, $\text{stitch}_v(e_i^*) = e_{\ell(e^*)+1-i}^*$. We can similarly argue that for $i > z$, $\text{stitch}_v(e_i^*) = e_{\ell(e^*)+1-i}^*$. Since we have already shown that $\text{stitch}_v(e_z^*) = e_z^*$, this concludes the proof of this lemma. $\square$
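The mirror structure derived above can be illustrated with a small sketch (our names, not the paper's): for odd $\ell$, the map $i \mapsto \ell + 1 - i$ matches the copies $e_1^*, \dots, e_{\ell}^*$ in nested, non-crossing pairs with the single fixed point $z = \frac{\ell+1}{2}$.

```python
# Illustrative sketch: for odd l, the reflection map i -> l + 1 - i pairs the
# copies e*_1, ..., e*_l in nested fashion, with the unique fixed point
# z = (l + 1) / 2, and no two pairs interleave (cross).
from itertools import combinations

def reflection_stitch(l):
    return {i: l + 1 - i for i in range(1, l + 1)}

def interleave(p, q):
    (i, j), (x, y) = sorted(p), sorted(q)
    return i < x < j < y or x < i < y < j

l = 7
stitch = reflection_stitch(l)
z = (l + 1) // 2
assert [i for i in stitch if stitch[i] == i] == [z]  # single self-stitched copy
pairs = {frozenset((i, stitch[i])) for i in stitch if stitch[i] != i}
assert all(not interleave(tuple(p), tuple(q)) for p, q in combinations(pairs, 2))
```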
+
+**Lemma 9.8.** Let $(G, S, T, g, k)$ be a good Yes-instance of Planar Disjoint Paths, and let $R$ be a backbone Steiner tree. Let $\mathcal{W}$ be a simplified weak linkage in $H$ and let $\ell$ be the multiplicity function of $\mathcal{W}$. Let $\mathcal{A} = \{(\text{pairing}_v, \text{template}_v)\}|_{v \in V(R)} \in \widehat{\text{ALL}}$ be the collection of pairings and templates of $\mathcal{W}$. Let $\{f_v\}_{v \in V(R)}$ be the stitching extracted from $\mathcal{A}$. Then, for every vertex $v \in V_2(R) \cup V_{\ge 3}(R)$, $\text{stitch}_v(e) = f_v(e)$ for all edges $e \in E_H(v)$.
+
+*Proof.* Let $\ell$ be the multiplicity function of the simplified weak linkage $\mathcal{W}$. Then, as $\mathcal{W}$ is canonical, for each edge $e \in E_R(v)$ with a parallel copy $e_i$, $\text{stitch}_v(e_i) \neq \perp$ if and only if $i \in \{1, 2, \dots, \ell(e)\}$. Since $\mathcal{W}$ is sensible and $v \notin V_1(R)$, $v$ cannot be the endpoint of any walk in $\mathcal{W}$. Hence, any walk contains an even number of edges from $E_H(v)$, and further any such edge is a parallel copy of an edge in $E_R(v) = \{e^1, e^2, \dots, e^r\}$, where these edges are enumerated according to $\text{order}_v$. Note that the collections of parallel copies of these edges also occur in the same order in $\text{order}_v$. We present our arguments in three steps.
+
+**Claim 9.1.** Consider a pair of edges $(e, e') \in \text{pairing}_v$ such that $\text{template}_v(e, e') > 0$. Then $\text{stitch}_v$ maps each edge in $\{e_{x_{e,e'}+1}, \dots, e_{x_{e,e'}+\text{template}_v(e,e')}\}$ to some edge in $\{e'_1, e'_2, \dots, e'_{\ell(e')}\}$, and vice versa.
+
+*Proof.* Suppose not, and consider the case where $e$ occurs before $e'$ in $\text{order}_v$. Consider a parallel copy of $e$, say $e_i \in \{e_{x_{e,e'}+1}, \dots, e_{x_{e,e'}+\text{template}_v(e,e')}\}$, such that $\text{stitch}_v(e_i) = \hat{e}_j$, where $\hat{e} \in$
\ No newline at end of file
diff --git a/samples/texts/1754951/page_74.md b/samples/texts/1754951/page_74.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e54e57cfe680605e79476a9a85baf0669740055
--- /dev/null
+++ b/samples/texts/1754951/page_74.md
@@ -0,0 +1,21 @@
+to the edges in $\{e_{x_{e,e'}+i+1}, \dots, e_{x_{e,e'}+\text{template}_v(e,e')}\}$. If not, then consider an edge $e'_p \in \{e'_{y_{e,e'}-\text{template}_v(e,e')}, \dots, e'_{j-1}\}$ such that $\text{stitch}_v(e'_p) = e_q$ where $q < x_{e,e'} + i$. Then consider the pairs $(e_{x_{e,e'}+i}, e'_j)$ and $(e'_p, e_q)$ in $\text{order}_v$, and observe that they occur as $e_q < e_{x_{e,e'}+i} < e'_p < e'_j$ in $\text{order}_v$. Then these pairs of edges are crossing at $v$, which is a contradiction to the fact that $\mathcal{W}$ is a non-crossing weak linkage. On the other hand, $\left|\{e'_{y_{e,e'}-\text{template}_v(e,e')}, \dots, e'_{j-1}\}\right|$ is strictly larger than $\left|\{e_{x_{e,e'}+i+1}, \dots, e_{x_{e,e'}+\text{template}_v(e,e')}\}\right|$, which is again a contradiction, since all edges in $\{e'_1, \dots, e'_{\ell(e')}\}$ are mapped to distinct edges by $\text{stitch}_v$, and they are not mapped to $\perp$. By symmetric arguments, the case when $j < y_{e,e'} - i$ also leads to a contradiction. ◇
+
+Now, by considering all pairs in $\text{pairing}_v$ and applying the above claims, we obtain that $\text{stitch}_v = f_v$ for all $v \in V_2(R) \cup V_{\ge 3}(R)$. This concludes the proof of this lemma. □
+
+The following lemma is a corollary of Lemma 9.7 and Lemma 9.8.
+
+**Lemma 9.9.** Let $(G, S, T, g, k)$ be a nice instance of Planar Disjoint Paths. Let $R$ be a Steiner tree. Let $\mathcal{W}$ be a simplified weak linkage, and let $\mathcal{A} = \{\text{(pairing}_v, \text{template}_v)\}|_{v \in V(R)} \in \tilde{\text{ALL}}$ be the pairing and template of $\mathcal{W}$. Let $\{f_v\}_{v \in V(R)}$ be the stitching extracted from $\mathcal{A}$. Then $f_v = \text{stitch}_v$ for every vertex $v \in V(R)$.
+
+Now, we consider the computational aspect of the definitions considered so far in this section.
+
+**Lemma 9.10.** Let $(D, S, T, g, k)$ be a nice instance of Directed Planar Disjoint Paths. Let $R$ be a Steiner tree. Let $\mathcal{A} = \{\text{(pairing}_v, \text{template}_v)\}|_{v \in V(R)} \in \tilde{\text{ALL}}$. Then, the multiplicity function extracted from $\mathcal{A}$ can be computed in time $k^{\mathcal{O}(1)} n$, and the stitching extracted from $\mathcal{A}$ can be computed in time $2^{\mathcal{O}(k)} n$.
+
+*Proof.* First we consider the computation of the multiplicity function $\ell = \{\ell_v\}|_{v \in V(R)}$ extracted from $\mathcal{A}$ according to Definition 9.11. Note that, for every vertex $v \in V(R)$, we have that $|\text{pairing}_v| = \mathcal{O}(k)$ and the numbers assigned by $\text{template}_v$ are bounded by $2^{\mathcal{O}(k)}$. Therefore, $\ell_v$ can be computed in $2^{\mathcal{O}(k)}$ time for each $v \in V(R)$, taking a total of $2^{\mathcal{O}(k)}n$ time. Now, note that for any vertex $v \in V(R)$, it holds that $|\ell_v(e)| = 2^{\mathcal{O}(k)}$ for any edge $e \in E_H(v)$ (because $\text{template}_v$ is a $2^{\mathcal{O}(k)}$-template).
+
+Let $\{f_v\}_{v \in V(R)}$ be the stitching extracted from $\mathcal{A}$ by Definition 9.12. Observe that when describing the stitching $f_v$ extracted at a vertex $v \in V(R)$, we only need to describe it for the parallel copies of edges in $E(R)$, and then only for the parallel copies $\{e_1, e_2, \dots, e_{\ell(e)}\}$ of each $e \in E(R)$, where $\ell$ is the multiplicity function extracted from $\mathcal{A}$. For all other edges and parallel copies, the stitching maps them to $\perp$. Since $\ell(e) \le \alpha_{\text{mul}}(k)$, and the tree $R$ has at most $2k$ leaves, the stitching at each vertex can be described by a collection of $\mathcal{O}(k \cdot \alpha_{\text{mul}}(k)) = 2^{\mathcal{O}(k)}$ pairs of edges in $E_H(v) \times E_H(v)$. Further, by the construction described in Definitions 9.12 and 9.13, the stitching $f_v$ at each vertex $v \in V(R)$ can be constructed in $2^{\mathcal{O}(k)}$ time. Therefore, the collection $\{f_v\}_{v \in V(R)}$ can be constructed in $2^{\mathcal{O}(k)} n$ time. Finally, we need to test whether this collection is a valid stitching, as described in Definition 9.14, which can be done by picking each edge $e \in E(R)$ and testing the parallel copies $\{e_1, e_2, \dots, e_{\ell(e)}\}$ one by one, which again takes $2^{\mathcal{O}(k)} n$ time. Hence the total time required to extract the stitching is $2^{\mathcal{O}(k)} n$. □
+
+## 9.3 Reconstruction of Weak Linkages from Templates
+
+Now we describe the construction of a weak linkage from a valid stitching.
+
+**Definition 9.15 (Weak Linkage of a Stitching).** Let $(G, S, T, g, k)$ be a good instance of Planar Disjoint Paths, and let $R$ be a backbone Steiner tree. Let $\{f_v\}_{v \in V(R)}$ be a stitching and suppose that it is valid. Then the weak linkage $\mathcal{W}$ constructed from $\{f_v\}_{v \in V(R)}$ is obtained as follows.
\ No newline at end of file
diff --git a/samples/texts/1754951/page_78.md b/samples/texts/1754951/page_78.md
new file mode 100644
index 0000000000000000000000000000000000000000..ee8d54fba490bcb323741d05d3ebf9781e6759b1
--- /dev/null
+++ b/samples/texts/1754951/page_78.md
@@ -0,0 +1,17 @@
+Figure 6: Detours in the Steiner tree.
+
+guarantee their existence. Specifically, we will ensure that $R$ does not have any *detour*, which roughly means that each of its maximal degree-2 paths is a shortest path connecting the two subtrees obtained once it is removed. More formally, we define a detour as follows (see Fig. 6).
+
+**Definition 2.3.** A detour in $R$ is a pair of vertices $u, v \in V_{\ge 3}(R) \cup V_{=1}(R)$ (i.e. the non-degree-2 vertices in $R$) that are endpoints of a maximal degree-2 path $L$ of $R$, together with a path $P$ in the radial completion of $G$, such that (i) $P$ is shorter than $L$, (ii) one endpoint of $P$ belongs to the component of $R - (V(L) \setminus \{u, v\})$ containing $u$, and (iii) the other endpoint of $P$ belongs to the component of $R - (V(L) \setminus \{u, v\})$ containing $v$.
+
+By repeatedly “short-cutting” $R$, a process that terminates in a linear number of steps, we obtain a new Steiner tree $R$ with no detour. Now, if the separator $S_u$ is large, then there is a large number of vertex-disjoint paths that connect the two subtrees separated by $S_u$, and all of these paths are “long”, namely, of length at least $2^{c_2k} - 2^{c_1k}$. Based on a result by Bodlaender et al. [5] (whose application requires us to work in the radial completion of $G$ rather than in $G$ itself), we show that the existence of these paths implies that the treewidth of $G$ is large. Thus, if the treewidth of $G$ were small, all of our separators would also be small. Fortunately, to guarantee this, we just need to invoke the following known result in a preprocessing step:
+
+**Proposition 2.1 ([2]).** There is a $2^{\mathcal{O}(k)}n^2$-time algorithm that, given an instance $(G, S, T, g, k)$ of Planar Disjoint Paths, outputs an equivalent instance $(G', S, T, g, k)$ of Planar Disjoint Paths where $G'$ is a subgraph of $G$ whose treewidth is upper bounded by $2^{ck}$ for some constant $c$.
+
+Having separators of size $2^{\mathcal{O}(k)}$, because segments going across different long paths must intersect these separators (or have an endpoint at distance $2^{\mathcal{O}(k)}$ in $R$ from some endpoint of a maximal degree-2 path), we immediately deduce the following.
+
+**Observation 2.2.** Let $\mathcal{P}$ be a solution. Then, its number of segments that have one endpoint on one long path, and a second endpoint on a different long path, is upper bounded by $2^{\mathcal{O}(k)}$.
+
+**Segments with Both Endpoints on the Same Long Path.** We are now left with segments both of whose endpoints belong to the same long path; these have two different kinds of behavior: they may or may not spiral around $R$, where spiraling means that the two endpoints of the segment belong to different “sides” of the path (see Fig. 4 and Fig. 8). By making sure that at least one vertex in $S \cup T$ is on the outer face of the radial completion of $G$, we ensure that the cycle formed by any non-spiraling segment together with the subpath of $R$ connecting its two endpoints does not enclose all of $S \cup T$; specifically, we avoid having to deal with segments such as the one in Fig. 7.
+
+While it is tempting to try to devise face operations that transform a spiraling segment into a non-spiraling one, this is not always possible. In particular, if the spiral “captures” a path $P$ (of a solution), then when $P$ and the spiral are pushed onto $R$, the spiral is not reduced to a simple path between its endpoints, but to a walk that “flanks” $P$. Due to such scenarios, dealing with spirals (whose number we are not able to upper bound) requires special attention. Before we turn to this task, let us consider the non-spiraling segments.
\ No newline at end of file
diff --git a/samples/texts/1754951/page_83.md b/samples/texts/1754951/page_83.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8f3e5e2ca40f89c6d68643fccd9dcef9a6d6407
--- /dev/null
+++ b/samples/texts/1754951/page_83.md
@@ -0,0 +1,17 @@
+Figure 7: A bad segment that contains all of $S \cup T$ in its cycle.
+
+**Non-Spiraling Segments.** To achieve our main goal, we aim to push a (hypothetical) solution onto $R$ so that only a few parallel copies of each edge are used. Now, we argue that non-spiraling segments do not pose a real issue in this context. To see this, consider a less refined partition of a solution where some non-spiraling segments are “grouped” as follows (see Fig. 4).
+
+**Definition 2.4.** A subwalk of a walk $W$ is a preliminary group of $W$ if either (i) it has endpoints on two different maximal degree-2 paths of $R$, or an endpoint in $V_{\le 1}(R) \cup V_{\ge 3}(R)$, or it is spiraling, or (ii) it is the union of an inclusion-wise maximal collection of segments not of type (i).
+
+The collection of preliminary groups of $W$ is denoted by $\text{PreSegGro}(W)$. Clearly, it is a partition of $W$. For a weak linkage $\mathcal{W}$, $\text{PreSegGro}(\mathcal{W}) = \bigcup_{W \in \mathcal{W}} \text{PreSegGro}(W)$. Then,
+
+**Observation 2.3.** Let $W$ be a weak linkage. The number of type-(ii) preliminary groups in $\text{PreSegGro}(W)$ is at most 1 plus the number of type-(i) preliminary groups in $\text{PreSegGro}(W)$.
+
+Roughly speaking, a type-(i) preliminary group is easily pushed onto $R$ so that it becomes merely a simple path (see Fig. 4). Thus, by Observation 2.3, all type-(ii) preliminary groups of a solution in total do not give rise to the occupation of more than $x+1$ copies of an edge, where $x$ is the number of type-(i) preliminary groups.
+
+**Rollback Spirals and Winding Number.** Unfortunately, the number of spirals can be huge. Nevertheless, we can pair up *some* of them so that they “cancel” each other when pushed onto $R$ (see Fig. 8), thereby behaving like a type-(ii) preliminary group. Intuitively, we pair up two spirals of a walk if one of them goes from the left-side to the right-side of the path, the other goes from the right-side to the left-side of the same path, and “in between” them on the walk, there are only type-(ii) preliminary groups and spirals that have already been paired up. We refer to paired-up spirals as *rollback spirals*. (Not all spirals can be paired up in this manner.) This gives rise to the following strengthening of Definition 2.4.
+
+**Definition 2.5.** A subwalk of a walk $W$ is called a group of $W$ if either (i) it is a non-spiral type-(i) preliminary group, or (ii) it is the union of an inclusion-wise maximal collection of segments not of type (i) (i.e., all endpoints of the segments in the group are internal vertices of the same maximal degree-2 path of $R$). The potential of a group is (roughly) 1 plus its number of non-rollback spirals.
+
+Now, rather than upper bounding the total number of spirals, we only need to upper bound the number of non-rollback spirals. To this end, we use the notion of *winding number* (in Section 7), informally defined as follows. Consider a solution $\mathcal{P}$, a path $Q \in \mathcal{P}$, and a long path $P$ of $R$ with separators $S_u$ and $S_v$. As $S_u$ and $S_v$ are minimal separators in a triangulated graph (the radial completion is triangulated), they are cycles, and as at least one vertex in $T$ belongs to the outer face, they form a ring (see Fig. 9). Each maximal subpath of $Q$ that lies inside this ring can either *visit* the ring, which means that both its endpoints belong to the same separator, or *cross* the ring, which means that its endpoints belong one to $S_u$ and the other to $S_v$ (see Fig. 9). Then, the (absolute value of the) *winding number* of a crossing subpath is the number
\ No newline at end of file
diff --git a/samples/texts/2262004/page_1.md b/samples/texts/2262004/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..6ca1edf9f6dcfe7445db80991e64c30e6abde578
--- /dev/null
+++ b/samples/texts/2262004/page_1.md
@@ -0,0 +1,34 @@
+We are IntechOpen,
+the world's leading publisher of
+Open Access books
+Built by scientists, for scientists
+
+5,300
+Open access books available
+
+131,000
+International authors and editors
+
+160M
+Downloads
+
+Our authors are among the
+TOP 1%
+most cited scientists
+
+154
+Countries delivered to
+
+12.2%
+Contributors from top 500 universities
+
+WEB OF SCIENCE™
+
+Selection of our books indexed in the Book Citation Index
+in Web of Science™ Core Collection (BKCI)
+
+Interested in publishing with us?
+Contact book department@intechopen.com
+
+Numbers displayed above are based on latest data collected.
+For more information visit www.intechopen.com
\ No newline at end of file
diff --git a/samples/texts/2262004/page_10.md b/samples/texts/2262004/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..278f13ee7be58e9e34be5c583abd456599ba561d
--- /dev/null
+++ b/samples/texts/2262004/page_10.md
@@ -0,0 +1,17 @@
+The following symbols are assigned: C to the instantaneous concentration of CDNB, B to the instantaneous concentration of GSH, Q to the instantaneous concentration of the product, $K_{ma}$ to the Km of GST for CDNB, $K_{mb}$ to the Km of GST for GSH, $K_{ib}$ to the dissociation constant of GSH, $K_{iq}$ to the dissociation constant of the product, A to the instantaneous absorbance, $A_m$ to the maximal absorbance of the product, $\varepsilon$ to the difference in absorptivity between product and CDNB, and $V_m$ to the maximal reaction rate of GST. The differential rate equation for the GST reaction is Equ.(13). After the definition of $M_1$, $M_2$ and $M_3$, the integrated rate equation with the predictor variable of reaction time is Equ.(19) if the GST reaction is irreversible and a process similar to that for Equ.(4) is employed (Zhao, L.N., et al., 2006).
+
+$$ \frac{1}{V} = \frac{K_{mb}}{V_m} \times \left[ 1 + \frac{K_{ib} \times K_{ma} \times Q}{K_{iq} \times K_{mb} \times C} \right] / B + \left[ 1 + \frac{K_{ma}}{C} \times \left( 1 + \frac{Q}{K_{iq}} \right) \right] / V_m \quad (13) $$
+
+$$ M_1 = \frac{K_{ma}}{\varepsilon \times K_{iq}} \quad (14) $$
+
+$$ M_2 = K_{ma} - K_{ib} \times \frac{K_{ma}}{K_{iq}} - A_m \times \frac{K_{ma}}{\varepsilon \times K_{iq}} + C - A_0 \times \frac{K_{ma}}{\varepsilon \times K_{iq}} \quad (15) $$
+
+$$ M_3 = K_{ma} \times A_m + \varepsilon \times K_{mb} \times C + C \times A_m - K_{ib} \times K_{mb} \times A_0 / K_{iq} - K_{ma} \times A_m \times A_0 / (K_{iq} \times \varepsilon) \quad (16) $$
+
+$$ \frac{M_1 \times A^2 + M_2 \times A - M_3}{A - A_m} \times dA = C \times \varepsilon \times V_m \times dt \quad (17) $$
+
+$$ Y = M_1 \times (A - A_m)^2 / 2 + (2 \times M_1 \times A_m + M_2) \times (A - A_m) + (M_1 \times A_m^2 + M_2 \times A_m - M_3) \times \ln |A - A_m| \quad (18) $$
+
+$$ Y = C \times \varepsilon \times V_m \times (t - T_{lag}) = a + b \times t \quad (19) $$
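As a concrete illustration of the transform-then-fit procedure in Eqs. (14)-(19), the sketch below computes $M_1$-$M_3$, transforms absorbance data through Eq. (18), and recovers $V_m$ from the slope of the linear relation in Eq. (19). This is a minimal sketch, not the published implementation: all parameter values are hypothetical, and the synthetic (t, A) data are constructed to satisfy Eq. (19) exactly.

```python
import math

# All parameter values below are hypothetical, for illustration only
K_ma, K_mb, K_ib, K_iq = 50.0, 10.0, 8.0, 4.0   # kinetic constants
eps, C, A0, Am = 0.0096, 100.0, 0.02, 0.30       # absorptivity, CDNB, absorbances
Vm_true, T_lag = 1.5, 0.2
C_eps_Vm = C * eps * Vm_true

M1 = K_ma / (eps * K_iq)                                      # Eq. (14)
M2 = (K_ma - K_ib * K_ma / K_iq - Am * K_ma / (eps * K_iq)
      + C - A0 * K_ma / (eps * K_iq))                         # Eq. (15)
M3 = (K_ma * Am + eps * K_mb * C + C * Am
      - K_ib * K_mb * A0 / K_iq
      - K_ma * Am * A0 / (K_iq * eps))                        # Eq. (16)

def Y(a):
    """Transformed absorbance, Eq. (18)."""
    return (M1 * (a - Am) ** 2 / 2 + (2 * M1 * Am + M2) * (a - Am)
            + (M1 * Am ** 2 + M2 * Am - M3) * math.log(abs(a - Am)))

# Synthetic (t, A) data constructed to lie exactly on the line of Eq. (19)
A_vals = [A0 + (0.95 * Am - A0) * i / 30 for i in range(31)]
t_vals = [(Y(a) - Y(A0)) / C_eps_Vm + T_lag for a in A_vals]

# Linear fit Y = a + b*t; the slope b equals C*eps*Vm, which gives Vm
ys = [Y(a) for a in A_vals]
n = len(ys)
mt, my = sum(t_vals) / n, sum(ys) / n
b = sum((t - mt) * (y - my) for t, y in zip(t_vals, ys)) \
    / sum((t - mt) ** 2 for t in t_vals)
Vm_est = b / (C * eps)
print(Vm_est)
```

Because the data are noise-free by construction, the fitted slope returns the preset $V_m$; with real curves the same transform would be followed by ordinary least squares on the steady-state portion of the data.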
+
+Fig. 5. Resistance of estimated $V_m$ to changes in data ranges for analyses with 60 µmol/L GSH.
\ No newline at end of file
diff --git a/samples/texts/2262004/page_11.md b/samples/texts/2262004/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..d76383295a5c94e6a91757f2f5e3cb137c95fbef
--- /dev/null
+++ b/samples/texts/2262004/page_11.md
@@ -0,0 +1,7 @@
+As demonstrated by the definitions of $M_1$, $M_2$ and $M_3$, the kinetic parameters preset as constants for kinetic analysis of the GST reaction curve should have strong covariance. Except for $K_{iq}$, which is treated as an unknown kinetic parameter for optimization, the other kinetic parameters are taken as reported (Kunze, 1997; Pabst, et al., 1974). To optimize $K_{iq}$, two criteria are used. The first is the consistency of $A_m$ predicted at a series of GSH concentrations from data of 6.0-min reaction with $A_m$ obtained by the equilibrium method after 40 min of reaction (GST activity is optimized to complete the reaction within 40 min). The second is the resistance of $V_m$ to reasonable changes in the data ranges used for analyses. After stepwise optimization, $K_{iq}$ is fixed as a constant at 4.0 µmol/L; $A_m$ predicted for GSH from 5.0 µmol/L to 50 µmol/L is consistent with that by the equilibrium method (Zhao, L.N., et al., 2006), and the estimation of $V_m$ is resistant to changes of data ranges (Fig. 5).
+
+Fig. 6. Response of GSH concentration determined to preset GSH concentrations (the equilibrium method uses data with 6.0 min reaction).
+
+Fig. 7. Response of initial rates to quantities of purified porcine alkaline GST.
+
+Kinetic analysis of the GST reaction curve can predict $A_m$ for GSH over 4.0 µmol/L, but there are insufficient data for analyses at GSH below 3.0 µmol/L. After optimization of GST activity for complete conversion of 5.0 µmol/L GSH within 6.0 min, the reaction curve within 5.0 min for GSH at 5.0 µmol/L can be used for kinetic analysis to predict $A_m$. With the optimized GST activity for reaction within 5.0 min, the linear range for GSH assay is from 1.5 µmol/L to over 90.0 µmol/L by the integration strategy while it is from 4.0
\ No newline at end of file
diff --git a/samples/texts/2262004/page_12.md b/samples/texts/2262004/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..16d1fbee4c4413c7b4374eb422dc525a2e9e0ebb
--- /dev/null
+++ b/samples/texts/2262004/page_12.md
@@ -0,0 +1,17 @@
+Kinetic Analyses of Enzyme Reaction Curves
+with New Integrated Rate Equations
+and Applications
+
+Xiaolan Yang, Gaobo Long, Hua Zhao and Fei Liao*
+
+College of Laboratory Medicine, Chongqing Medical University,
+Chongqing,
+China
+
+# 1. Introduction
+
+A reaction system of a Michaelis-Menten enzyme acting on a single substrate can be characterized by the initial substrate concentration before enzyme action ($S_0$), the maximal reaction rate ($V_m$) and the Michaelis-Menten constant ($K_m$), besides some other required parameters. The estimates of $S_0$, $V_m$ and $K_m$ can be used to measure enzyme substrates, enzyme activities, epitopes or haptens (enzyme-immunoassay), irreversible inhibitors and so on. During an enzyme reaction, the changes of substrate or product concentrations can be monitored; continuous monitoring of such changes provides a reaction curve, while discontinuous monitoring provides signals just for the starting point and the terminating point of the enzyme reaction. It is an end-point method when only signals for the starting point and the terminating point are analyzed. It is a kinetic method when a range of data from a reaction curve is analyzed, and kinetic methods can be classified into the initial rate method and kinetic analysis of reaction curve. The initial rate method only analyzes data from the initial rate phase, whose instantaneous rates are constant; kinetic analysis of reaction curve analyzes data whose instantaneous rates show obvious deviations from the initial rate (Bergmeyer, 1983; Guilbault, 1976; Marangoni, 2003). To estimate the parameters of an enzyme reaction system, kinetic analysis of reaction curve is favoured because the analysis of one reaction curve can concomitantly provide $V_m$, $S_0$ and $K_m$. Hence, methods for kinetic analysis of reaction curve to estimate parameters of enzyme reaction systems are widely studied.
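For the single-substrate case, the idea can be sketched with the classical integrated Michaelis-Menten equation $S_0 - S + K_m \ln(S_0/S) = V_m t$: one noise-free progress curve suffices to recover both $V_m$ and $K_m$ (with $S_0$ known) via a simple linearization. A minimal sketch under these assumptions; the function names and parameter values are illustrative, not from the chapter.

```python
import math

def simulate_progress(vm, km, s0, times):
    """Solve S0 - S + Km*ln(S0/S) = Vm*t for S by bisection (noise-free data)."""
    out = []
    for t in times:
        lo, hi = 1e-12, s0            # f(lo) > 0 > f(hi) for t > 0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            f = s0 - mid + km * math.log(s0 / mid) - vm * t
            if f > 0:
                lo = mid
            else:
                hi = mid
        out.append(0.5 * (lo + hi))
    return out

def fit_vm_km(s0, times, s_values):
    """Linearized integrated MM: (S0-S)/t = Vm - Km * ln(S0/S)/t."""
    xs = [math.log(s0 / s) / t for t, s in zip(times, s_values)]
    ys = [(s0 - s) / t for t, s in zip(times, s_values)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, -slope    # intercept = Vm, -slope = Km

times = [t / 2 for t in range(1, 21)]           # 0.5 .. 10 (e.g. min)
s = simulate_progress(2.0, 5.0, 10.0, times)    # hypothetical Vm, Km, S0
vm_est, km_est = fit_vm_km(10.0, times, s)
print(vm_est, km_est)
```

With exact data the linearization recovers the preset values; with real, noisy curves this is exactly why the chapter later recommends fixing $K_m$ as an optimized constant and estimating only $S_0$ and $V_m$.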
+
+An enzyme reaction curve is a function of dependent variables, which are proportional to concentrations of a substrate or product, with respect to reaction time as the predictor variable. In general, there are two types of enzyme reaction curves. The first type involves the action of just one enzyme, and employs either a selective substrate to detect the activity of one enzyme of interest or a specific enzyme to act on a unique substrate of interest. The second type involves the actions of at least two enzymes, and requires at least one auxiliary enzyme as a tool to continuously monitor a reaction curve. The second type is an enzyme-coupled reaction system. For kinetic analysis of reaction curve, there are many reports on
+
+* Corresponding Author
\ No newline at end of file
diff --git a/samples/texts/2262004/page_13.md b/samples/texts/2262004/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..3bf5476f9bdc33fb494e138bc4397101334ccf3c
--- /dev/null
+++ b/samples/texts/2262004/page_13.md
@@ -0,0 +1,11 @@
+µmol/L to over 90.0 µmol/L by kinetic analysis of reaction curve alone (Fig. 6, unpublished). By the equilibrium method alone for reaction within 5.0 min, the assay of 80.0 µmol/L GSH requires a GST activity about 50-fold higher due to the inhibition of GST by the accumulated product. Therefore, the integration strategy for GSH assay is obviously advantageous.
+
+The integration strategy for measuring GST initial rates is tested. For convenience, $S_0$ of the final GSH is fixed at 50 µmol/L and the duration to monitor reaction curve is optimized. After the analyses of reaction curves recorded within 10 min, it is found that reaction for 6.0 min is sufficient to provide the required overlapped region of GST activities measurable by both methods. By using $K_{iq}$ fixed at 4.0 µmol/L as a constant, the reaction duration of 6.0 min and PSC at 48 µmol/L to convert $V_m$ to initial rates, the integration strategy gives a linear range from 2.0 U/L to 60 U/L; kinetic analysis of reaction curve alone gives the linear range from 5.0 U/L to 60 U/L while the classical initial rate method alone gives a linear range from 1.0 U/L to 5.0 U/L (Fig. 7, unpublished). Clearly, with enzyme suffering strong product inhibition, the integration strategy for enzyme initial rate assay is advantageous.
+
+### 2.5.3 Alcohol dehydrogenase reaction
+
+ADH is widely used for serum ethanol assay. ADH kinetics is complicated due to the reversibility of the reaction and the inhibition by both acetaldehyde and NADH as products. To simplify ADH kinetics, some special approaches are employed to make the ADH reaction apparently irreversible on a single substrate (alcohol). Thus, the reaction pH is optimized to 9.2 to scavenge hydrogen ion; semicarbazide at final 75 mmol/L is used to remove acetaldehyde as completely as possible; final nicotinamide adenine dinucleotide (NAD+) is 3.0 mmol/L; final ADH is about 50 U/L (Liao, et al., 2007a). By assigning the maximal absorbance at 340 nm for reduced nicotinamide adenine dinucleotide (NADH) by the equilibrium method to $A_{me}$ and that by kinetic analysis of reaction curve to $A_{mk}$, kinetic analysis of the ADH reaction curve should predict $A_{mk}$ consistent with $A_{me}$, but this requires some special efforts.
+
+Fig. 8. Response of $F$ values to preset $C_{ald}$ for kinetic analysis of reaction curve for 0.31 mmol/L ethanol (reproduced with permission from Liao, et al, 2007a).
+
+The use of semicarbazide reduces concentrations of acetaldehyde ($C_{ald}$) to unknown levels, and thus complicates the treatment of acetaldehyde inhibition on ADH. The integrated rate equation with the predictor variable of reaction time can be worked out for ADH (Liao, et
\ No newline at end of file
diff --git a/samples/texts/2262004/page_14.md b/samples/texts/2262004/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..72eddbf5ad9d1492d597738cc3fa6a9e48223f91
--- /dev/null
+++ b/samples/texts/2262004/page_14.md
@@ -0,0 +1,7 @@
+al., 2007a). All kinetic parameters and NAD+ concentrations are preset as those used or reported (Ganzhorn, et al., 1987). However, there are multiple maxima of the goodness of fit with the continuous increase in steady-state $C_{\text{ald}}$ for kinetic analysis of reaction curve (Fig. 8). Thus, $C_{\text{ald}}$ cannot be concomitantly estimated by kinetic analysis of reaction curve, and a special approach is used to approximate steady-state $C_{\text{ald}}$ for predicting $A_{\text{mk}}$.
+
+Fig. 9. Correlation function of the best steady-state $C_{\text{ald}}$ with $A_{\text{me}}$ (reproduced with permission from Liao, et al, 2007a).
+
+Under the same reaction conditions, the equilibrium method can determine $A_{\text{me}}$ for ethanol below 0.20 mmol/L after reaction for 50 min. For kinetic analyses of such reaction curves, the lag time for steady-state reaction is estimated to be over 40 s and is used to select data of steady-state reaction for analysis. Using the equilibrium method as the reference method, the best steady-state $C_{\text{ald}}$ for data of 6.0-min reaction is obtained for consistency of $A_{\text{mk}}$ with $A_{\text{me}}$ at each tested ethanol level from 10 µmol/L to 0.17 mmol/L. After dilution and determination by the equilibrium method, $A_{\text{me}}$ for each tested ethanol level from 0.17 mmol/L to 0.30 mmol/L is also available. Consequently, an exponential additive function is obtained to approximate the correlation of the best $C_{\text{ald}}$ with $A_{\text{me}}$ (Fig. 9). This correlation function is used as a restriction function to iteratively adjust $C_{\text{ald}}$ for predicting $A_{\text{mk}}$; namely, iterative kinetic analysis of reaction curve, with $C_{\text{ald}}$ predicted from the restriction function using the previous $A_{\text{mk}}$, finally gives the desired $A_{\text{mk}}$. Such an artificial intelligence approach to approximating the steady-state $C_{\text{ald}}$ for kinetic analysis of reaction curve can hardly be found in other publications.
+
+To start kinetic analysis of an ADH reaction curve, the highest absorbance under analysis is taken as $A_{\text{mk}}$ to predict the best $C_{\text{ald}}$ for the first run of kinetic analysis of reaction curve. The estimated $A_{\text{mk}}$ is then used to predict the second $C_{\text{ald}}$ for the second run (Fig. 10). Such an iterative kinetic analysis of reaction curve can predict $A_{\text{mk}}$ consistent with $A_{\text{me}}$ for 0.31 mmol/L ethanol when the reaction duration is just 6.0 min and the convergence criterion is set to an absorbance change below 0.0015 in $A_{\text{mk}}$. Usually convergence is achieved within 7 runs of the iterative kinetic analysis. Moreover, the method is resistant to changes of ADH activities by 50%, and coefficients of variation (CV) are below 5% for final ethanol levels from 20 µmol/L to 310 µmol/L in reaction solutions.
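The iteration described above can be sketched as a simple fixed-point loop. Both `restriction` and `kinetic_fit` below are toy stand-ins, not the published correlation function or fitting routine; only the control flow (start from the highest absorbance, alternate prediction and fitting) and the 0.0015 convergence criterion follow the text.

```python
# "True" A_mk and steady-state C_ald for the toy model (illustrative only)
A_TRUE, C_TRUE = 0.620, 0.150

def restriction(a_mk):
    """Hypothetical restriction function C_ald = g(A_mk) (cf. Fig. 9)."""
    return C_TRUE + 0.8 * (a_mk - A_TRUE)

def kinetic_fit(c_ald):
    """Hypothetical kinetic analysis returning A_mk with C_ald preset."""
    return A_TRUE + 0.5 * (c_ald - C_TRUE)

a_mk = 0.700        # start from the highest absorbance under analysis
runs = 0
while True:
    runs += 1
    a_new = kinetic_fit(restriction(a_mk))
    if abs(a_new - a_mk) < 0.0015:   # convergence criterion from the text
        a_mk = a_new
        break
    a_mk = a_new
print(runs, a_mk)
```

Because the composed toy update is a contraction, the loop converges in a handful of runs, mirroring the reported behaviour (convergence within about 7 runs).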
\ No newline at end of file
diff --git a/samples/texts/2262004/page_15.md b/samples/texts/2262004/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..99a1941f13e9fe2086aa51ca018b4f449b6dc9cc
--- /dev/null
+++ b/samples/texts/2262004/page_15.md
@@ -0,0 +1,11 @@
+Fig. 10. Iterative adjustment of $C_{ald}$ to predict $A_{mk}$ for 0.31 mmol/L ethanol at 50 U/L ADH (reproduced with permission from Liao, et al, 2007a).
+
+Obviously, by this special approach for kinetic analysis of the ADH reaction curve, the upper limit of linear response is excellent, but the lower limit of linear response is over 5.0 µmol/L ethanol. Under the stated reaction conditions, the equilibrium method after reaction for 8.0 min is effective to quantify ethanol up to final 6.0 µmol/L. Thus, the equilibrium method with a reaction duration of 8.0 min can be integrated with iterative kinetic analysis of reaction curve for quantifying ethanol. This integration strategy gives a linear range from about final 2.0 µmol/L to about 0.30 mmol/L ethanol in reaction solutions; it has CVs below 8% for ethanol below 10 µmol/L, and CVs below 5% for ethanol over 20 µmol/L (Liao, et al., 2007a). These results clearly support the advantage of the new integration strategy for substrate assay and the importance of chemometrics in kinetic enzymatic analysis of substrates.
+
+## 2.6 Programming for kinetic analysis of enzyme reaction curve
+
+Most software packages, such as Origin, SAS and MATLAB, can perform kinetic analysis of reaction curve, but they are usually ineffective for the implicit functions required by kinetic analysis of reaction curve. For convenience, and for the use of some complicated methods for kinetic analysis of reaction curve in a window-aided mode, self-programming is still favourable.
+
+For simplicity in programming, we used Visual Basic 6.0 to write the source code and working windows (Liu, et al., 2011). The executable program has a main window to perform kinetic analysis of reaction curve (Fig. 11). Original data for each reaction curve are stored as a text file, and keywords are used to indicate specific information related to the reaction curve, including sample numbering, the enzyme used, the quantification method, some necessary kinetic parameters, and usually the initial signal before enzyme action. Such information is read into memory by the software for kinetic analysis of reaction curve.
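A keyword-annotated text file of this kind might be parsed as sketched below. The keyword names, file layout and sample values are invented for illustration; the actual PCFenzyme file format is not specified in the chapter.

```python
import io

# Hypothetical keyword format for one reaction-curve file (illustrative only)
sample = """\
SAMPLE: 2011-0372
ENZYME: ADH
METHOD: iterative
KM: 0.62
A0: 0.012
DATA:
0.0 0.012
0.5 0.058
1.0 0.101
"""

def read_curve(fh):
    """Read keyword metadata and (time, signal) pairs from a curve file."""
    meta, points = {}, []
    in_data = False
    for line in fh:
        line = line.strip()
        if not line:
            continue
        if in_data:                          # after DATA: each line is "t A"
            t, a = line.split()
            points.append((float(t), float(a)))
        elif line.upper().startswith("DATA:"):
            in_data = True
        else:                                # "KEYWORD: value" metadata line
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, points

meta, points = read_curve(io.StringIO(sample))
print(meta["ENZYME"], len(points))
```

The metadata dictionary would then steer which subprogram (enzyme model and quantification method) the main window dispatches to.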
+
+On the main window, original data are listed and plotted for visual checking of data for steady-state reaction. Text boxes are used to input some common parameters like $K_m$, and most parameters are read from the text file for the reaction curve. The subprogram for an enzyme reaction system is then called for running; results are displayed on the main window and may be saved in a text file for further analysis.
\ No newline at end of file
diff --git a/samples/texts/2262004/page_16.md b/samples/texts/2262004/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..edd1f09da3329e1ccede1a15961311596e7e2508
--- /dev/null
+++ b/samples/texts/2262004/page_16.md
@@ -0,0 +1,7 @@
+Fig. 11. Main window for the executable PCFenzyme
+
+We called the software PCFenzyme. An old version of the executable PCFenzyme can be downloaded at http://dx.doi.org/10.1016/j.clinbiochem.2008.11.016. The latest version of the executable PCFenzyme with new methods included is available upon request by e-mail.
+
+## 3. Conclusion
+
+The following conclusions can be drawn. (a) Kinetic analysis of reaction curve can give the initial substrate concentration before enzyme action, the maximal reaction rate, the Michaelis-Menten constant and other related parameters of an enzyme reaction system; for reliability, however, it is better to estimate just the initial substrate concentration before enzyme action and the maximal reaction rate, with the Michaelis-Menten constant and other parameters fixed as constants after optimization. (b) For an enzyme whose integrated rate equation with the predictor variable of reaction time is accessible, kinetic analysis of reaction curve can estimate parameters *via* nonlinear-least-square-fitting after transformation of data from the reaction curve under analysis. (c) For an enzyme reaction system whose kinetics is described by a set of differential rate equations or is difficult to integrate with the predictor variable of reaction time, iterative numerical integration of the differential rate equation(s) with a series of preset parameters can produce serial calculated reaction curves; such calculated reaction curves can be fitted to the reaction curve under analysis for estimating parameters based on nonlinear-least-square-fitting. This approach is applicable to enzyme-coupled reaction systems of sophisticated kinetics. (d) The integration of kinetic analysis of reaction curve with the equilibrium method can quantify enzyme substrates with expanded linear ranges, favourable analysis efficiency, low cost on tool enzymes, desirable resistance to factors affecting enzyme activities and enhanced precision; it can be applied to enzyme reactions suffering strong product inhibition. (e) The integration of kinetic analysis of reaction curve
\ No newline at end of file
diff --git a/samples/texts/2262004/page_17.md b/samples/texts/2262004/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b2f6a905a646bf7ba37244e0157f785972fceba
--- /dev/null
+++ b/samples/texts/2262004/page_17.md
@@ -0,0 +1,23 @@
+with the classical initial rate method can measure enzyme initial rates with wide linear ranges, favourable analysis efficiency and practical levels of substrates; it is applicable to enzyme-coupled reaction curves or enzyme reactions suffering product inhibition.
+
+Taken together, kinetic analysis of enzyme reaction curves under optimized conditions can screen common reversible inhibitors and enzyme mutants; the integration strategy for measuring enzyme activities can quantify serum enzymes and enzyme labels in enzyme-immunoassays to expand the quantifiable ranges, and can be applied to quantify irreversible inhibitors as environmental pollutants; the integration strategy to quantify enzyme substrate can be the second-generation approaches and potentially find wide applications in clinical laboratory medicine. Therefore, these new methodologies for enzymatic analyses based on chemometrics can potentially find their important applications in biomedical sciences.
+
+## 4. Acknowledgment
+
+This work is supported by the program for New Century Excellent Talent in University (NCET-09), high-technology-program "863" of China (2011AA02A108), National Natural Science Foundation of China (nos. 30200266, 30672009, 81071427), Chongqing Municipal Commission of Sciences and Technology (CQ CSTC2011BA5039), and Chongqing Education Commission (KJ100313).
+
+## 5. References
+
+Atkins, G.L. & Nimmo, I.A. (1973). The reliability of Michaelis-Menten constants and maximum velocities estimated by using the integrated Michaelis-Menten equation. *The Biochemical Journal*, vol. 135, no.4, (December 1973), pp. 779-784, ISSN 0264-6021
+
+Baywenton, P. R. (1986). *Data process and error analysis* (Translated into Chinese by Weili Qiu, Genxin Xu, Enguang Zhao, and Shengzhong Chen), ISBN 13214.84, Knowledge Press, Beijing, China
+
+Bergmeyer, H.U. (1983). Methods of Enzymatic Analysis, Vol. I. Fundamentals (3rd Ed.), ISBN 978-3527260416, Wiley VCH, Weinheim, Germany
+
+Burden, R.L. & Faires, J.D. (2001). *Numerical Analysis* (7th ed.), ISBN 978-0534382162, Academic Internet Publishers, Ventura, California, USA
+
+Burguillo, J., Wright, A.J. & Bardsley, W. G. (1983). Use of the F test for determining the degree of enzyme-kinetic and ligand-binding data. *The Biochemical Journal*, vol. 211, no.1, (April 1983), pp. 23-34, ISSN 0264-6021
+
+Cheng, Y.C. & Prusoff, W.H. (1973). Relationship between the inhibition constant (KI) and the concentration of inhibitor which causes 50 per cent inhibition (I50) of an enzymatic reaction. Biochemical Pharmacology, vol.22, no. 23, (December 1973), pp. 3099-3108, ISSN 0006-2952.
+
+Cheng, Z.L., Chen, H., Zhao, Y.S., Yang, X.L., Lu, W., Liao, H., Yu, M.A., & Liao, F. (2008). The measurement of the activity of rabbit muscle lactic dehydrogenase by integrating the classical initial rate method with an integrated method. *2nd International Conference on Bioinformatics and Biomedical Engineering*, iCBBE 2008, pp.1209-1212, ISBN 978-1-4244-1748-3, Shanghai, China, May 26-28, 2008
\ No newline at end of file
diff --git a/samples/texts/2262004/page_18.md b/samples/texts/2262004/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..0665bb6485c76f459e60d31a31a95c53129a6853
--- /dev/null
+++ b/samples/texts/2262004/page_18.md
@@ -0,0 +1,29 @@
+Claro, E. (2000). Understanding initial velocity after the derivatives of progress curves. *Biochemistry and Molecular Biology Education*, Vol.28, no.6, (November 2000), pp. 304-306, ISSN 1470-8175
+
+Cornish-Bowden, A. (1975). The use of the direct linear plot for determining initial velocities. *The Biochemical Journal*, vol. 149, no.2, (August 1975), pp. 305-312, ISSN 0264-6021
+
+Cornish-Bowden, A. (1995). *Analysis of enzyme kinetic data*, ISBN 978-0198548775, Oxford University Press, London, UK
+
+Dagys, R., Tumas, S., Zvirblis, S. & Pauliukonis, A. (1990) Determination of first and second derivatives of progress curves in the case of unknown experimental error. *Computers and Biomedical Research*, Vol.23, no. 5, (October 1990), pp. 490-498, ISSN 0010-4809
+
+Dagys, R., Pauliukonis, A., Kazlauskas, D., Mankevicius, M. & Simutis, R. (1986). Determination of initial velocities of enzymic reactions from progress curves. *The Biochemical Journal*, vol.237, no.3, (August 1986), pp. 821-825, ISSN 0264-6021
+
+del Rio F.J., Riu, J. & Rius, F. X. (2001). Robust linear regression taking into account errors in the predictor and response variables. *Analyst*, vol. 126, no. , (July 2001), pp. 1113-1117, ISSN 0003-2654
+
+Dilena, B.A., Peake, M.J., Pardue, H.L., Skoug, J.W. (1986). Direct ultraviolet method for enzymatic determination of uric acid, with equilibrium and kinetic data-processing options. *Clinical Chemistry*, vol. 32, no.3, (May 1986), pp. 486-491, ISSN 0009-9147
+
+Dixon, M.C. & Webb, E.C. (1979). *Enzymes* (3rd ed.), ISBN 0122183584, Academic Press, New York, USA
+
+Draper, N.R. & Smith, H. (1998). *Applied regression analysis* (3rd ed.), ISBN 978-0471170822, Wiley-Interscience; New York, USA
+
+Duggleby, R. G. (1983). Determination of the kinetic properties of enzymes catalysing coupled reaction sequences. *Biochimica et Biophysica Acta (BBA) - Protein Structure and Molecular Enzymology*, Vol.744, no. 3, (May 1983), pp. 249-259, ISSN 0167-4838
+
+Duggleby, R.G. (1985). Estimation of the initial velocity of enzyme-catalysed reactions by non-linear regression analysis of progress curves. *The Biochemical Journal*, vol. 228, no.1, (May 1985), pp. 55-60, ISSN 0264-6021
+
+Duggleby, R. G. (1994). Analysis of progress curves for enzyme-catalyzed reactions: application to unstable enzymes, coupled reactions, and transient state kinetics. *Biochimica et Biophysica Acta (BBA) - General subjects*, vol. 1205, no.2, (April 1994), pp. 268-274, ISSN 0304-4165
+
+Fersht, A. (1985). *Enzyme Structure and Mechanism* (2nd Ed.), ISBN 978-0716716143, Freeman WH, New York, USA
+
+Ganzhorn, A.J., Green, D.W., Hershey, A.D., Gould, R.M., Plapp, B.V. (1987). Kinetic characterization of yeast alcohol dehydrogenases. Amino acid residue 294 and substrate specificity. *The Journal of Biological Chemistry*, vol.262, no.8, (March 1987), pp. 3754-3761, ISSN 0021-9258
+
+Guilbault, G. G. (1976). *Handbook of enzymatic methods of analysis*, ISBN 978-0824764258, Marcel Dekker, New York, USA
\ No newline at end of file
diff --git a/samples/texts/2262004/page_19.md b/samples/texts/2262004/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..f3b5ac05433087079467e50f602876890bf214c9
--- /dev/null
+++ b/samples/texts/2262004/page_19.md
@@ -0,0 +1,25 @@
+Gutierrez, O.A. & Danielson, U. H. (2006). Sensitivity analysis and error structure of progress curves. Analytical Biochemistry, vol.358, no.1, (August 2006), pp.1-10, ISSN 0003-2697
+
+Hamilton, S. D. & Pardue, H. L. (1982). Kinetic method having a linear range for substrate concentration that exceed Michaelis–Menten constants. *Clinical Chemistry*, vol. 28, no.12, (December 1982), pp.2359-2365, ISSN 0009-9147
+
+Hasinoff, B. B. (1985). A convenient analysis of Michaelis enzyme kinetic progress curves based on second derivatives. *Biochimica et Biophysica Acta (BBA) - General Subjects*, Vol. 838, no. 2, (February 1985), pp. 290-292, ISSN 0304-4165
+
+Kahn, K. & Tipton, P.A. (1998). Spectroscopic characterization of intermediates in the urate oxidase reaction. *Biochemistry*, vol. 37, no. (August 1998), pp. 11651-11659, ISSN 0006-2960.
+
+Koerber, S. C. & Fink, A. L. (1987). The analysis of enzyme progress curves by numerical differentiation, including competitive product inhibition and enzyme reactivation. *Analytical Biochemistry*, vol. 165, no.1, (December 2004), pp. 75-87, ISSN 0003-2697
+
+Li, Z.R., Liu,Y., Yang, X.Y., Pu,J., Liu, B.Z., Yuan, Y.H., Xie, Y.L. & Liao, F. (2011). Kinetic analysis of gamma-glutamyltransferase reaction process for measuring activity via an integration strategy at low concentrations of gamma-glutamyl p-nitroaniline. *Journal of Zhejiang University Science B*, vol. 12, no.3, (March 2011), pp. 180-188, ISSN 1673-1581
+
+Liao, F. (2005). *The method for quantitative enzymatic analysis of uric acid in body fluids by predicting the background absorbance*. China patent: ZL O3135649.4, 2005-08-31
+
+Liao, F., Li, J.C., Kang, G.F., Zeng, Z.C., Zuo, Y.P. (2003a). Measurement of mouse liver glutathione-S-transferase activity by the integrated method. *Journal of Medical Colleges of PLA*, vol. 18, no.5, (October 2003), pp. 295-300, ISSN 1000-1948
+
+Liao, F., Liu, W.L., Zhou, Q.X., Zeng, Z.C., Zuo, Y.P. (2001). Assay of serum arylesterase activity by fitting to the reaction curve with an integrated rate equation. *Clinica Chimica Acta*, vol. 314, no.1-2, (December 2001), pp.67-76, ISSN 0009-8981
+
+Liao, F., Tian, K.C., Yang, X., Zhou, Q.X., Zeng, Z.C., Zuo, Y.P. (2003b). Kinetic substrate quantification by fitting to the integrated Michaelis-Menten equation. *Analytical Bioanalytical Chemistry*, vol. 375, no. 6, (February 2003), pp. 756-762, ISSN 1618-2642
+
+Liao, F., Yang, D.Y., Tang, J.Q., Yang, X.L., Liu, B.Z., Zhao, Y.S., Zhao, L.N., Liao, H. & Yu, M.A. (2009). The measurement of serum cholinesterase activities by an integration strategy with expanded linear ranges and negligible substrate-activation. *Clinical Biochemistry*, vol.42, no.6, (December 2008), pp.926-928. ISSN 0009-9120
+
+Liao, F., Zhao, L.N., Zhao, Y.S., Tao, J., Zuo, Y.P. (2007a). Integrated rate equation considering product inhibition and its application to kinetic assay of serum ethanol. *Analytical Sciences*, vol. 23, no.4, (April 2007), pp. 439-444 , ISSN 0910-6340
+
+Liao, F., Zhao, Y.S., Zhao, L.N., Tao, J., Zhu, X.Y., Liu, L. (2006). The evaluation of a direct kinetic method for serum uric acid assay by predicting the background absorbance of uricase reaction solution with an integrated method. *Journal of Zhejiang University Science B*, vol. 7, no.6, pp. 497-502, ISSN 1673-1581
\ No newline at end of file
diff --git a/samples/texts/2262004/page_2.md b/samples/texts/2262004/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c6b9d84ac0705c9e5d8740dbea08e4980ed27e9
--- /dev/null
+++ b/samples/texts/2262004/page_2.md
@@ -0,0 +1,22 @@
+$$ C_{p,i+1} = C_{p,i} + V_{1k} \times \Delta t - V_m \times \frac{\Delta t}{1 + K_a/C_{n,i} + K_b/C_{p,i} + K_{ab}/(C_{n,i} \times C_{p,i})} \quad (11) $$
+
+$$ A_{i+1} = A_i - \varepsilon \times V_m \times \frac{\Delta t}{1 + K_a/C_{n,i} + K_b/C_{p,i} + K_{ab}/(C_{n,i} \times C_{p,i})} \quad (12) $$
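A forward-Euler integration of Eqs. (11) and (12) can be sketched as follows, producing one calculated reaction curve for a preset parameter set. All parameter values are hypothetical, and the update of $C_{n,i}$ (NADH consumed at the indicator-reaction rate) is an added assumption not spelled out in the equations above.

```python
# Forward-Euler integration of Eqs. (11)-(12); cn is NADH, cp the product of
# the first enzyme (e.g. pyruvate in an ALT-LDH couple). Values hypothetical.
eps = 6.22e3               # NADH molar absorptivity, L/(mol*cm)
Vm, V1k = 4.0e-6, 3.0e-7   # indicator-enzyme Vm and first-enzyme rate, mol/(L*s)
Ka, Kb, Kab = 2.0e-4, 6.0e-5, 1.0e-8
dt = 0.2                   # integration step from the text, s
cn, cp = 2.4e-4, 1.0e-6    # initial NADH and product concentrations, mol/L
A = eps * cn

curve = []
for i in range(3001):                  # 600 s of reaction
    if i % 50 == 0:                    # record every 10 s (sampling interval)
        curve.append(A)
    denom = 1 + Ka / cn + Kb / cp + Kab / (cn * cp)
    v = Vm / denom                     # instantaneous indicator-reaction rate
    cp = cp + (V1k - v) * dt           # Eq. (11)
    cn = max(cn - v * dt, 1e-12)       # assumed NADH consumption (not in text)
    A = A - eps * v * dt               # Eq. (12)

print(A, len(curve))
```

Repeating this loop over a grid of preset parameters yields the serial calculated curves that are then fitted to the observed curve by nonlinear least squares, as described below.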
+
+By simulation with this new approach for kinetic analysis of an enzyme-coupled reaction curve recorded at 1-s intervals, the upper limit of linear response for measuring ALT initial rates is increased to about five times that by the classical initial rate method. This new approach is resistant to reasonable variations in the data range for analysis. By experimentation using sampling intervals of 10 s, the upper limit is about three times that by the classical initial rate method. Therefore, this new approach for kinetic analysis of enzyme-coupled reaction curve is advantageous, and can potentially be a universal approach for kinetic analysis of the reaction curve of any system with complicated kinetics.
+
+The computation time for numerical integration is inversely proportional to the integration step $\Delta t$; a shorter $\Delta t$ is always better, but $\Delta t$ of 0.20 s is sufficient for a desirable upper limit of linear response at low computation cost. This new approach needs about 10 min for just 30 data points of an LDH-coupled reaction curve on a personal computer with a Celeron 300A CPU, but consumes only about 5 s on a Lenovo Notebook S10e. The advancement of personal computers surely can promote the practice of this approach.
+
+## 2.4 Integration of kinetic analysis of reaction curve with other methods
+
+Any analytical method should have favourable analysis efficiency, a wide linear range, low cost and strong robustness. Kinetic analysis of reaction curve for $V_m$ and $S_0$ assay can have a much better upper limit of linear response, but inevitably suffers from low analysis efficiency when a wide linear range is required. Based on kinetic analysis of reaction curve, however, our group developed two integration strategies, for enzyme initial rate and substrate assay respectively, with both favourable analysis efficiency and ideal linear ranges.
+
+### 2.4.1 New integration strategy for enzyme initial rate assay
+
+The classical initial rate method to measure enzyme initial rates requires $S_0$ much higher than $K_m$ to have desirable linear ranges (Bergmeyer, 1983; Dixon & Webb, 1979; Guilbault, 1976; Marangoni, 2003). Due to substrate inhibition, limited solubility and other causes, practical substrate levels are always relatively low and thus the linear ranges by the classical initial rate method are always unsatisfactory (Li, et al., 2011; Morishita, et al., 2000; Stromme & Theodorsen, 1976). As described above, kinetic analysis of reaction curve can measure enzyme $V_m$, and many approaches based on kinetic analysis of reaction curve are already proposed (Cheng, et al., 2008; Claro, 2000; Cornish-Bowden 1975, 1995; Dagys, et al., 1986, 1990; Duggleby, 1983, 1985, 1994; Hasinoff, 1985; Koerber, & Fink, 1987; Liao, et al., 2001; Lu & Fei, 2003; Marangoni, 2003; Walsh, et al. 2010). Such approaches all require substrate consumption percentage over 40% with $K_m$ preset as a constant. As a result, there should be intolerably long reaction duration to monitor reaction curves for samples of low enzyme activities, or else the lower limits of linear response are unfavourable.
+
+The integration of kinetic analysis of reaction curve using proper integrated rate equations with the classical initial rate method gives an integration strategy to measure enzyme initial
\ No newline at end of file
diff --git a/samples/texts/2262004/page_20.md b/samples/texts/2262004/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a204d8d6e721737376d4f30ba37e105bdf80f70
--- /dev/null
+++ b/samples/texts/2262004/page_20.md
@@ -0,0 +1,25 @@
+Liao, F., Zhao, Y.S., Zhao, L.N., Tao, J., Zhu, X.Y., Wang, Y.M., Zuo, Y.P. (2005b). Kinetic method for enzymatic analysis by predicting background with uricase reaction as model. *Journal of Medical Colleges of PLA*, vol.20, no.6, (December 2005), pp. 338-344, ISSN 1000-1948
+
+Liao, F., Zhu, X.Y., Wang, Y.M., Zhao, Y.S., Zhu, L.P., Zuo, Y.P. (2007b). Correlation of serum arylesterase activity on phenylacetate estimated by the integrated method to common classical biochemical indexes of liver damage. *Journal of Zhejiang University Science B*, vol. 8, no.4, (April 2007), pp.237-241, ISSN 1673-1581
+
+Liao, F., Zhu, X.Y., Wang, Y.M., Zuo, Y.P. (2005a). The comparison on the estimation of kinetic parameters by fitting enzyme reaction curve to the integrated rate equation of different predictor variables. *Journal of Biochemical and Biophysical Methods*, vol. 62, no.1, (January 2005), pp. 13-24, ISSN 0165-022X
+
+Liu, B.Z., Zhao, Y.S., Zhao, L.N., Xie, Y.L., Zhu, S., Li, Z.R., Liu, Y., Lu, W., Yang, X.L., Xie, G.M., Zhong, H.S., Yu, M.A., Liao, H. & Liao, F. (2009). An integration strategy to estimate the initial rates of enzyme reactions with much expanded linear ranges using uricases as models. *Analytica Chimica Acta*, vol.631, no.1, (October 2008), pp. 22-28, ISSN 0003-2670
+
+Liu, M., Yang, X.L., Yuan, Y.H., Tao, J. & Liao, F. (2011). PCFenzyme for kinetic analyses of enzyme reaction processes. *Procedia Environmental Sciences*, vol. 8, (December 2011), pp.582-587, ISSN 1878-0296
+
+Lu, W.P. & Fei, L. (2003). A logarithmic approximation to initial rates of enzyme reactions. *Analytical Biochemistry*, vol. 316, no. 1, (May 2003), pp.58-65, ISSN 0003-2697
+
+Marangoni, A. G. (2003). *Enzyme kinetics: a modern approach*, ISBN 978-0471159858, Wiley-Interscience, New York, USA
+
+Meyer-Almes, F.J. & Auer, M. (2000). Enzyme inhibition assay using fluorescence correlation spectroscopy: a new algorithm for the derivation of kcat/KM and Ki values at substrate concentrations much lower than the Michaelis constant. *Biochemistry*, vol. 39, no.43, (October 2000), pp. 13261-13268, ISSN 0006-2960
+
+Miller, J. C. & Miller, J. N. (1993). *Statistics for analytical chemistry* (3rd ed.), ISBN 978-0130309907, Ellis Horwood, Chichester, New York, USA
+
+Morishita, Y., Iinuma, Y., Nakashima, N., Majima, K., Mizuguchi, K. & Kawamura, Y. (2000). Total and pancreatic amylase measured with 2-chloro-4-nitrophenyl-4-O-β-D-galactopyranosylmaltoside. *Clinical Chemistry*, vol. 46, no.7, (July 2000), pp. 928-933, ISSN 0009-9147
+
+Moruno-Davila, M.A., Solo, C.G., Garcia-Moreno, M., Garcia-Canovas, F. & Varon, R. (2001). Kinetic analysis of enzyme systems with suicide substrate in the presence of a reversible, uncompetitive inhibitor. *Biosystems*, vol. 61, no.1, (June 2001), pp.5-14, ISSN 0303-2647
+
+Moss, D.W. (1980). Methodological principles in the enzymatic determination of substrates illustrated by the measurement of uric acid. *Clinica Chimica Acta*, vol. 105, no. 3, (August 1980), pp. 351-360, ISSN 0009-8981
+
+Newman, P.F.J., Atkins, G.L. & Nimmo, I. A. (1974). The effects of systematic error on the accuracy of Michaelis constant and maximum velocities estimated by using the
\ No newline at end of file
diff --git a/samples/texts/2262004/page_21.md b/samples/texts/2262004/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..6561cb608b5ebb71ca4d0a7b697533d4ccdc64bd
--- /dev/null
+++ b/samples/texts/2262004/page_21.md
@@ -0,0 +1,25 @@
+integrated Michaelis-Menten equation. *The Biochemical Journal*, vol. 143, no. 3, (December 1974), pp. 779-781, ISSN 0264-6021
+
+Northrop, D. B. (1983). Fitting enzyme-kinetic data to V/K. *Analytical Biochemistry*, vol. 132, no. 2, (July 1983), pp. 457-461, ISSN 0003-2697
+
+Orsi, B.A. & Tipton, K. F. (1979). Kinetic analysis of progress curves. In: *Methods in Enzymology*, vol. 63, D. L. Purich, (Ed.), 159-183, Academic Press, ISBN 978-0-12-181963-7, New York, USA
+
+Priest, D.G. & Pitts, O.M. (1972). Reaction intermediate effects on the spectrophotometric uricase assay. *Analytical Biochemistry*, vol.50, no.1, (November 1972), pp. 195-205, ISSN 0003-2697
+
+Stromme, J.H. & Theodorsen, L. (1976). Gamma-glutamyltransferase: Substrate inhibition, kinetic mechanism, and assay conditions. *Clinical Chemistry*, vol. 22, no.4, (April 1976), pp. 417-421, ISSN 0009-9147
+
+Varon, R., Garrido-del Solo, C., Garcia-Moreno, M., Garcia-Canovas, F., Moya-Garcia, G., Vidal de Labra, J. & Havsteen, B.H. (1998). Kinetics of enzyme systems with unstable suicide substrates. *Biosystems*, vol. 47, no.3, (August 1998), pp.177-192, ISSN 0303-2647
+
+Walsh, R., Martin, E. & Darvesh, S. (2010). A method to describe enzyme-catalyzed reactions by combining steady state and time course enzyme kinetic parameters. *Biochimica et Biophysica Acta-General Subjects*, vol.1800, no.1, (October 2009), pp. 1-5, ISSN 0304-4165
+
+Yang, D., Tang, J., Yang, X., Deng, P., Zhao, Y., Zhu, S., Xie, Y., Dai, X., Liao, H., Yu, M., Liao, J. & Liao, F. (2011). An integration strategy to measure enzyme activities for detecting irreversible inhibitors with dimethoate on butyrylcholinesterase as model. *International Journal of Environmental Analytical Chemistry*, vol.91, no.5, (March 2011), pp.431-439, ISSN 0306-7319
+
+Yang, X. L., Liu, B.Z., Sang, Y., Yuan, Y.H., Pu, J., Liu, Y., Li, Z.R., Feng, J., Xie, Y.L., Tang, R. K., Yuan, H.D. & Liao, F. (2010). Kinetic analysis of lactate-dehydrogenase-coupled reaction process and measurement of alanine transaminase by an integration strategy. *Analytical Sciences*, vol.26, no. 11, (November 2010), pp. 1193-1198, ISSN 0910-6340
+
+Zhang, C., Yang, X.L., Feng, J., Yuan, Y.H., Li, X., Bu, Y.Q., Xie, Y.L., Yuan, H.D. & Liao, F. (2010). Effects of modification of amino groups with poly(ethylene glycol) on a recombinant uricase from Bacillus fastidiosus. *Bioscience Biotechnology Biochemistry*, vol.74, no.6, (June 2010), pp. 1298-1301, ISSN 0916-8451
+
+Zhao, L.N., Tao, J., Zhao, Y.S. & Liao, F. (2006). Quantification of reduced glutathione by analyzing glutathione-S-transferase reaction process taking into account product inhibition. *Journal of Xi'an Jiaotong University (Medical Sciences)*, vol. 27, no.3, (June 2006), pp.300-303, ISSN 1671-8259
+
+Zhao, Y.S., Yang, X.Y., Lu, W., Liao, H. & Liao, F. (2009). Uricase based method for determination of uric acid in serum. *Microchimica Acta*, vol. 164, no.1, (May 2008), pp.1-6, ISSN 0026-3672
+
+Zhao, Y.S., Zhao, L.N., Yang, G.Q., Tao, J., Bu, Y.Q. & Liao, F. (2006). Characterization of an intracellular uricase from Bacillus fastidious ATCC 26904 and its application to
\ No newline at end of file
diff --git a/samples/texts/2262004/page_22.md b/samples/texts/2262004/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..b410f8cf2ba45814a3be5fd592ec4279cd907e14
--- /dev/null
+++ b/samples/texts/2262004/page_22.md
@@ -0,0 +1,3 @@
+serum uric acid assay by a patented kinetic method. *Biotechnology and Applied Biochemistry*, vol. 45, no.2, (September 2006), pp. 75-80, ISSN 0885-4513
+
+Zou, G.L. & Zhu, R.F. (1997). *Enzymology*, Wuhan University Press, ISBN 7-307-02271-0/Q, Wuhan, China
\ No newline at end of file
diff --git a/samples/texts/2262004/page_23.md b/samples/texts/2262004/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..29da941ceb9d977c965aa5630c8334357cc2d3c3
--- /dev/null
+++ b/samples/texts/2262004/page_23.md
@@ -0,0 +1,9 @@
+one enzyme reaction system, but there are just a few reports on enzyme-coupled reaction systems (Atkins & Nimmo, 1973; Liao, et al., 2005; Duggleby, 1983, 1985, 1994; Walsh, 2010).
+
+In theory, enzyme reactions may involve reversibility, activation/inhibition by substrates/products, and even thermo-inactivation of the enzyme. From a mathematical view, it is still feasible to estimate parameters of an enzyme reaction system by kinetic analysis of reaction curve if the roles of all the factors mentioned above are included in a kinetic model (Baywenton, 1986; Duggleby, 1983, 1994; Moruno-Davila, et al., 2001; Varon, et al., 1998). However, enzyme kinetics is usually so complex, due to the effects of those factors, that there are always some technical challenges for kinetic analysis of reaction curve. Hence, most methods for kinetic analysis of reaction curve are reported for enzymes whose actions suffer as few alterations by those factors as possible.
+
+In practice, kinetic analysis of reaction curve usually employs nonlinear-least-square-fitting (NLSF) of the differential or integrated rate equation(s) to either the reaction curve *per se* or data set(s) transformed from the reaction curve (Cornish-Bowden, 1995; Duggleby, 1983, 1994; Orsi & Tipton, 1979). The use of NLSF rather than matrix inversion is due to the existence of multiple minima of the sum of residual squares with respect to some nonlinear parameters (Liao, et al., 2003a, 2007a). When a differential rate equation is used, numerical differentiation of data from the reaction curve has to be employed to derive instantaneous reaction rates; in this case, reaction curves must be monitored at intervals as short as possible (Burden & Faires, 2001; Dagys, 1990; Hasinoff, 1985; Koerber & Fink, 1987). However, instantaneous reaction rates derived from reaction curves inherently exhibit narrow distribution ranges and large errors; the strategy of numerical differentiation of data in a reaction curve is therefore unfavourable for estimating $V_m$ and $S_0$, because of low reliability and unsatisfactory working ranges. On the other hand, when an integrated rate equation of an enzyme reaction is used for kinetic analysis of reaction curve, there are no prerequisites of short intervals to record reaction curves, so automated analyses in parallel can be realized for enhanced performance with a large number of samples. As a result, integrated rate equations of enzymes are widely studied for kinetic analysis of reaction curve to estimate parameters of enzyme reaction systems (Duggleby, 1994; Liao, et al., 2003a, 2005a; Orsi & Tipton, 1979).
+
+Possibly due to limitations on computation resources, integrated rate equations of enzymes in such methods are usually rearranged into special forms to facilitate NLSF after data transformation (Atkins & Nimmo, 1973; Orsi & Tipton, 1979). In appearance, the use of different forms of the same integrated rate equation for NLSF to data sets transformed from the same reaction curve can give the same parameters. However, kinetic analysis of reaction curve with rearranged forms of an integrated rate equation always gives parameters with uncertainties too large to have practical roles (Newman, et al., 1974). Therefore, proper forms of an integrated rate equation should be selected carefully for estimating parameters by kinetic analysis of reaction curve.
+
+In the past ten years, our group studied chemometrics for kinetic analysis of reaction curve to estimate parameters of enzyme reaction systems; the following results were found: (a) in terms of reliability and performance for estimating parameters, the use of integrated rate equations with reaction time as the predictor variable is superior to the use of integrated rate equations with other predictor variables (Liao, et al., 2005a); (b) the integration of kinetic analysis of reaction curve with other methods to quantify initial rates
\ No newline at end of file
diff --git a/samples/texts/2262004/page_24.md b/samples/texts/2262004/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..5f9304305f23670d60b259e7d14555ffdedfd609
--- /dev/null
+++ b/samples/texts/2262004/page_24.md
@@ -0,0 +1,44 @@
+CHEMOMETRICS IN PRACTICAL APPLICATIONS
+
+Edited by Kurt Varmuza
+
+ISBN 978-953-51-0438-4
+
+Hard cover, 326 pages
+
+Publisher InTech
+
+Published online 23 March 2012
+
+Published in print edition March 2012
+
+In the book "Chemometrics in practical applications", various practical applications of chemometric methods in chemistry, biochemistry and chemical technology are presented, and selected chemometric methods are described in tutorial style. The book contains 14 independent chapters and is devoted to filling the gap between textbooks on multivariate data analysis and research journals on chemometrics and chemoinformatics.
+
+## How to reference
+
+In order to correctly reference this scholarly work, feel free to copy and paste the following:
+
+Xiaolan Yang, Gaobo Long, Hua Zhao and Fei Liao (2012). Kinetic Analyses of Enzyme Reaction Curves with New Integrated Rate Equations and Applications, Chemometrics in Practical Applications, Dr. Kurt Varmuza (Ed.), ISBN: 978-953-51-0438-4, InTech, Available from: http://www.intechopen.com/books/chemometrics-in-practical-applications/kinetic-analyses-of-enzyme-reaction-curves-using-integrated-rate-equations-and-applications
+
+## INTECH
+
+open science | open minds
+
+### InTech Europe
+
+University Campus STeP Ri
+Slavka Krautzeka 83/A
+51000 Rijeka, Croatia
+Phone: +385 (51) 770 447
+Fax: +385 (51) 686 166
+www.intechopen.com
+
+### InTech China
+
+Unit 405, Office Block, Hotel Equatorial Shanghai
+No.65, Yan An Road (West), Shanghai, 200040, China
+中国上海市延安西路65号上海国际贵都大饭店办公楼405单元
+Phone: +86-21-62489820
+Fax: +86-21-62489821
\ No newline at end of file
diff --git a/samples/texts/2262004/page_25.md b/samples/texts/2262004/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..d765d68c4b0cb5f9dabb2e7a1e489545491be59e
--- /dev/null
+++ b/samples/texts/2262004/page_25.md
@@ -0,0 +1 @@
+© 2012 The Author(s). Licensee IntechOpen. This is an open access article distributed under the terms of the [Creative Commons Attribution 3.0 License](http://creativecommons.org/licenses/by/3.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
\ No newline at end of file
diff --git a/samples/texts/2262004/page_26.md b/samples/texts/2262004/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..27bc83d6de07186ce66a058804a3e22566e85739
--- /dev/null
+++ b/samples/texts/2262004/page_26.md
@@ -0,0 +1,19 @@
+and substrates has more appealing advantages (Liu, et al., 2009; Yang, et al., 2010); such integration strategies can be applied to enzyme-coupled reaction systems and to enzymes suffering inhibition by substrates/products. Herein, we discuss chemometrics for both kinetic analysis of reaction curve and its integration with other methods, and demonstrate their applications to quantify enzyme initial rates and substrates with some typical enzymes.
+
+## 2. Kinetic analysis of enzyme reaction curve: chemometrics and application
+
+To estimate parameters by kinetic analysis of reaction curve, the desired parameters are included in the set of parameters adjusted for the best fitting. Regardless of the number of enzymes involved in a reaction curve, there are the following two approaches for kinetic analysis of reaction curve, based on different ways to realize NLSF and to transform data.
+
+In the first approach, with a differential or integrated rate equation, a series of dependent variables are derived from data in a reaction curve with each set of preset parameters. Such dependent variables should follow a predetermined response to predictor variables that are either reaction time or data transformed from those in the reaction curve. The goodness of the predetermined response is the criterion for the best fitting. In this approach, NLSF is realized with a model for data transformed from a reaction curve (Burguillo, 1983; Cornish-Bowden, 1995; Liao, 2005; Liao, et al., 2003a, 2003b, 2005a, 2005b; Orsi & Tipton, 1979).
+
+In the second approach, reaction curves are calculated with sets of preset parameters by iterative numerical integration from a preset starting point. Such calculated reaction curves are fitted to a reaction curve of interest; the least sum of residual squares indicates the best fitting (Duggleby, 1983, 1994; Moruno-Davila, et al., 2001; Varon, et al., 1998; Yang, et al., 2010). In this approach, calculated reaction curves still utilize reaction time as the predictor variable and are discretized at the same intervals as the reaction curve of interest. Clearly, there is no transformation of data from a reaction curve in this approach.
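
A minimal sketch of this second approach, with simple Michaelis-Menten kinetics standing in for the kinetic model and all numbers assumed (not the authors' software):

```python
# Sketch of the second approach: candidate curves are generated by iterative
# numerical integration of dS/dt = -Vm*S/(Km + S) for each preset Vm, and the
# candidate with the least sum of residual squares is taken as the best fit.
def simulate(S0, Vm, Km, n_points, dt):
    S, curve = S0, [S0]
    for _ in range(n_points - 1):
        S = max(S - dt * Vm * S / (Km + S), 0.0)
        curve.append(S)
    return curve

S0, Km, dt, n = 1.0, 0.5, 1.0, 301
observed = simulate(S0, 0.004, Km, n, dt)        # stands in for a measured curve

def ssr(Vm):
    return sum((o - s) ** 2
               for o, s in zip(observed, simulate(S0, Vm, Km, n, dt)))

candidates = [i * 0.0005 for i in range(1, 17)]  # grid of preset Vm values
best_Vm = min(candidates, key=ssr)               # picks the grid value nearest the truth
```

On noisy data the coarse grid would be replaced by a proper optimizer, but the least-SSR criterion is the same.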
+
+With any enzyme, iterative numerical integration of the differential rate equation(s) from a starting point with sets of preset parameters is universally applicable regardless of the complexity of the kinetics. Thus, the second approach exhibits better universality and poses few technical challenges to kinetic analysis of reaction curve *via* NLSF. In practice, however, the second approach is only occasionally utilized while the first approach is widely practiced.
+
+In the following subsections, the differential rate equation of simple Michaelis-Menten kinetics on a single substrate is integrated; then the prerequisites for kinetic analysis of reaction curve with integrated rate equations, kinetic analysis of enzyme-coupled reaction curves, the integration of kinetic analysis of reaction curve with other methods, and the applications of such integration strategies to some typical enzymes are discussed.
+
+### 2.1 Integrated rate equation for one enzyme on single substrate
+
+Assigning the instantaneous substrate concentration to $S$ and the instantaneous reaction time to $t$, the steady-state kinetics of a Michaelis-Menten enzyme on a single substrate follows Equ.(1).
+
+$$ -\frac{dS}{dt} = \frac{(V_m \times S)}{K_m + S} \quad (1) $$
\ No newline at end of file
diff --git a/samples/texts/2262004/page_27.md b/samples/texts/2262004/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..b99ee0c4b22506739d482d44956eea1e0a51d148
--- /dev/null
+++ b/samples/texts/2262004/page_27.md
@@ -0,0 +1,13 @@
+Assigning the substrate concentration at the first point for analysis to $S_1$, Equ.(1) is integrated into Equ.(2) when the enzyme is stable, the substrate and product do not alter the intrinsic activity of the enzyme, and the reaction is irreversible (Atkins & Nimmo, 1973; Marangoni, 2003; Orsi & Tipton, 1979; Zou & Zhu, 1997). In Equ.(2), $t_{lag}$ accounts for the lag time of the steady-state reaction. After transformation of data in a reaction curve according to Equ.(3), there should be a linear response of the left part of Equ.(2) to reaction time, as in Equ.(4). The goodness of this linear response is judged by regression analysis. However, to estimate parameters by kinetic analysis of reaction curve *via* NLSF, there are the following general prerequisites for Equ.(2) or any of its equivalents.
+
+$$ \frac{(S_1 - S)}{K_m} + \ln\left(\frac{S_1}{S}\right) = \left(\frac{V_m}{K_m}\right) \times (t - t_{lag}) \quad (2) $$
+
+$$ y = \frac{(S_1 - S)}{K_m} + \ln\left(\frac{S_1}{S}\right) \quad (3) $$
+
+$$ y = a + b \times t \quad (4) $$
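
As a sketch of this procedure (assumed parameter values; a noise-free curve from fine-step numerical integration of Equ.(1) stands in for experimental data), the transformation of Equ.(3) followed by the linear fit of Equ.(4) should, per Equ.(2), give a slope approaching $V_m/K_m$:

```python
import math

# Illustration (assumed values): transform a noise-free Michaelis-Menten
# curve by Equ.(3) and fit the straight line of Equ.(4) by least squares.
Vm, Km, S1, dt = 0.01, 0.5, 1.0, 0.01

S, curve = S1, [S1]
for _ in range(30000):                      # 300 s at dt = 0.01 s
    S -= dt * Vm * S / (Km + S)             # Equ.(1), forward Euler
    curve.append(S)

t = [10.0 * i for i in range(31)]           # sample every 10 s
samples = [curve[1000 * i] for i in range(31)]
y = [(S1 - s) / Km + math.log(S1 / s) for s in samples]   # Equ.(3)

tbar, ybar = sum(t) / 31, sum(y) / 31
b = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
     / sum((ti - tbar) ** 2 for ti in t))   # slope of Equ.(4), about Vm/Km
a = ybar - b * tbar                         # intercept of Equ.(4)
```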
+
+The first prerequisite is that the enzyme reaction should apparently follow kinetics on a single substrate. For enzyme reactions with multiple substrates whose concentrations all change during reactions, kinetic analysis of reaction curve always gives parameters of too low reliability to have practical roles, no matter what methods are used for NLSF (data unpublished). From our experience in estimating parameters by kinetic analyses of reaction curves, any substrate at levels below 10% of its $K_m$ can be considered negligible; the use of one substrate at levels below 10% of those of other substrates can make enzyme reactions follow single-substrate kinetics (Liao, et al., 2001, 2003a, 2003b; Li, et al., 2011; Zhao, et al., 2006). For any enzyme on multiple substrates, therefore, there are two approaches to make it apparently follow kinetics on a single substrate. The first is the use of the substrate of interest at levels below 10% of those of the other substrates; this approach has universal applicability to common enzymes such as hydrolases in aqueous buffers and oxidases in air-saturated buffers. The second is the utilization of special reaction systems that regenerate the substrate of the enzyme of interest by the actions of some auxiliary enzymes; indeed, this approach usually yields enzyme-coupled reaction curves of complicated kinetics.
+
+The second prerequisite is that the enzyme reaction should be irreversible. In theory, the estimation of parameters by kinetic analysis of reaction curve is still feasible when reaction reversibility is considered, but the estimated parameters possess too low reliability to have practical roles (data unpublished). Generally, a preparation of a substance with contaminants below 1% in mass content can be taken as a pure substance; likewise, a reagent left over after a reaction at less than 1% of its level before the reaction can be neglected. For convenience, therefore, an enzyme reaction is considered irreversible when the leftover level of the substrate of interest at equilibrium is much less than 1% of its initial level. To promote the consumption of the substrate of interest, the concentrations of the other substrates should be preset at levels well over 10 times the initial level of the substrate of interest; in this case, the enzyme reaction is apparently irreversible and follows kinetics on a single substrate. Otherwise, scavenging reactions that remove products can drive the reaction forward. The concurrent use of both approaches is usually better.
+
+The third prerequisite is that there should be steady-state data for analysis (Atkins & Nimmo, 1973; Dixon & Webb, 1979; Liao, et al, 2005a; Marangoni, 2003; Orsi & Tipton, 1979).
\ No newline at end of file
diff --git a/samples/texts/2262004/page_28.md b/samples/texts/2262004/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..94a88c0ee8b58fdf2e747ce1d43147118ff74d46
--- /dev/null
+++ b/samples/texts/2262004/page_28.md
@@ -0,0 +1,13 @@
+For this prerequisite, the first and the last points of data in a reaction curve for analysis should be carefully selected. The first point should exclude data within the lag time of the steady-state reaction. The last point should ensure that the data for analysis have substrate concentrations high enough for a steady-state reaction; namely, substrate concentrations should be much higher than the concentration of the active sites of the enzyme (Dixon & Webb, 1979). The use of special weighting functions for NLSF can mitigate the contributions of residual squares at low substrate levels, where the steady-state assumption may no longer hold.
+
+The fourth prerequisite is that the enzyme should be stable to validate Equ.(2), or else the inactivation kinetics of the enzyme should be included in the kinetic model. Enzyme stability should be checked before kinetic analysis of reaction curve. When the inactivation kinetics of an enzyme is included in a kinetic model for kinetic analysis of reaction curve, the integrated rate equation is usually quite complex or even inaccessible if the inactivation kinetics is too complex. For kinetic analysis of reaction curve of complicated kinetics, numerical integration to produce calculated reaction curves for NLSF to a reaction curve of interest, instead of NLSF with Equ.(4), can be used to estimate parameters (Duggleby, 1983, 1994; Moruno-Davila, et al., 2001; Varon, et al., 1998; Yang, et al., 2010).
+
+The fifth prerequisite is that inhibition/activation of the enzyme by its products/substrates should be negligible, or else such inhibition/activation should be included in the integrated rate equation for kinetic analysis of reaction curve (Zhao, L.N., et al., 2006). For validating Equ.(2), any substrate that alters enzyme activity should be preset at levels low enough to cause negligible alterations; any product that alters enzyme activity can be scavenged by proper reactions. When such alterations are complex, numerical integration of differential rate equations for NLSF to a reaction curve of interest can be used (Duggleby, 1983, 1994; Moruno-Davila, et al., 2001; Varon, et al., 1998).
+
+Obviously, the first three prerequisites are mandatory for the inherent reliability of parameters estimated by kinetic analysis of reaction curve; the latter two prerequisites are required for the validity of Equ.(2) or its equivalents for kinetic analysis of reaction curve.
+
+### 2.2 Realization of NLSF and limitation on parameter estimation
+
+To estimate parameters by kinetic analysis of reaction curve based on NLSF, the main concerns are the satisfaction of the prerequisites for the quality of data under analysis, the procedure to realize NLSF, and the reliability of the parameters estimated thereby.
+
+For the estimation of parameters by kinetic analysis of reaction curve, there are two general prerequisites for the quality of data under analysis: (a) there should be a minimum number of effective data whose changes in signals are over three times the random error; (b) there should be a minimum consumption percentage of the substrate within such effective data. In general, at least two parameters like $V_m$ and $S_0$ are estimated, and the minimum number of effective data should be no less than 7 (Atkins & Nimmo, 1973; Baywenton, 1986; Miller, J. C. & Miller, J. N., 1993). The minimum consumption percentage of the substrate can be about 40% if only $V_m$ and $S_0$ are estimated while other parameters are fixed as constants. In general, the estimation of more parameters requires higher consumption percentages of the substrate in the effective data for analysis.
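
These two prerequisites can be phrased as a simple screening step. The helper below is our own formulation (the thresholds are those stated in the text; the assumption that the signal tracks the substrate directly is ours):

```python
# Hedged helper (our own formulation): screen a reaction curve against the
# two data-quality prerequisites before NLSF -- at least 7 effective points
# whose signal change exceeds 3x the random error, and at least about 40%
# substrate consumption within those points.
def data_quality_ok(signals, noise_sd, min_points=7, min_consumption=0.40):
    """signals: substrate readings over time, starting at S0."""
    s0 = signals[0]
    effective = [s for s in signals[1:] if abs(s0 - s) > 3.0 * noise_sd]
    if len(effective) < min_points:
        return False
    consumed = (s0 - min(effective)) / s0   # assumes signal tracks substrate
    return consumed >= min_consumption

good = data_quality_ok([1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.45, 0.4], 0.01)
poor = data_quality_ok([1.0, 0.95, 0.93, 0.92], 0.01)
```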
\ No newline at end of file
diff --git a/samples/texts/2262004/page_29.md b/samples/texts/2262004/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..0251f554a662a9eb115ff65654762f753b0195c4
--- /dev/null
+++ b/samples/texts/2262004/page_29.md
@@ -0,0 +1,11 @@
+With a valid Equ.(2), data in a reaction curve can be transformed according to Equ.(3) to realize NLSF with Equ.(4). The use of Equ.(4) for NLSF needs no special treatment of the unknown $t_{\text{lag}}$. For any method to continuously monitor a reaction curve, there may be an unknown but constant background in the signals (Newman, et al., 1974; Liao, et al., 2003a, 2005a; Yang, et al., 2010). Thus, the background in the signal for $S_1$ in Equ.(2) is better treated as a nonlinear parameter to realize NLSF; this treatment is what makes the fitting nonlinear and increases the computational burden, and as a result, a rearranged form of Equ.(2) has been suggested for kinetic analysis of reaction curve (Atkins & Nimmo, 1973; Liao, et al., 2005a).
+
+In theory, Equ.(2) can be rearranged into Equ.(5) as a linear function of $V_m$ and $K_m$. In Equ.(5), the instantaneous reaction time at the moment for $S_1$ is preset as zero so that there is no treatment of $t_{\text{lag}}$. When the signal for $S_1$ is not treated as a nonlinear parameter, kinetic analysis of reaction curve by fitting with Equ.(5) can be finished within 1 s with a pocket calculator. However, parameters estimated with Equ.(5) always have such large errors that Equ.(5) is scarcely practiced in biomedical analyses. Hence, the proper form of a validated integrated rate equation should be selected carefully.
+
+$$ \frac{(S_1 - S)}{t - t_{\text{lag}}} = V_m - K_m \times \left( \frac{\ln(S_1/S)}{t - t_{\text{lag}}} \right) \quad (5) $$
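
For illustration (assumed values, $t_{\text{lag}}$ preset to zero), Equ.(5) can be fitted by ordinary least squares: plotting $(S_1 - S)/t$ against $\ln(S_1/S)/t$ gives a line with intercept $V_m$ and slope $-K_m$. On noise-free points the fit is exact; the text's objection concerns real data, where noise in both ratio variables is strongly amplified:

```python
import math

# Sketch of fitting with the rearranged form Equ.(5) on exact points
# obtained by inverting Equ.(2) numerically (assumed parameter values).
Vm, Km, S1 = 0.01, 0.5, 1.0

def S_at(t):                                  # invert Equ.(2) by bisection
    lo, hi = 1e-9, S1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        g = (S1 - mid) / Km + math.log(S1 / mid) - Vm * t / Km
        if g > 0:                             # y too large: solution above mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ts = [30.0 * i for i in range(1, 11)]         # t = 0 excluded (division by t)
xs = [math.log(S1 / S_at(t)) / t for t in ts]
us = [(S1 - S_at(t)) / t for t in ts]

xbar, ubar = sum(xs) / len(xs), sum(us) / len(us)
slope = (sum((x - xbar) * (u - ubar) for x, u in zip(xs, us))
         / sum((x - xbar) ** 2 for x in xs))
Km_est, Vm_est = -slope, ubar - slope * xbar  # slope -> -Km, intercept -> Vm
```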
+
+In principle, to reliably estimate parameters based on NLSF, the distribution ranges of both the dependent variables and the predictor variables in any kinetic model should be as wide as possible while their random errors should be as small as possible (Baywenton, 1986; del Rio, et al., 2001; Draper & Smith, 1998; Miller, J. C. & Miller, J. N., 1993). In serial studies with common enzymes, we found that the use of Equ.(4), or similar forms of integrated rate equations with reaction time as the predictor variable, for kinetic analysis of reaction curve could give reliable $V_m$ and $S_0$ when $K_m$ was fixed at a constant after optimization (Liao, 2005; Liao, et al., 2001, 2003a, 2003b, 2005a, 2005b, 2006, 2007b; Zhao, Y.S., et al., 2006, 2009). Reaction time as the predictor variable has the widest distribution and the smallest random errors in comparison to the predictor variable in Equ.(5); the left part of Equ.(4) also possesses a wider distribution range. Such differences in predictor and dependent variables should account for the different reliability of parameters estimated with Equ.(2) and Equ.(5), and thus an integrated rate equation with reaction time as the predictor variable may be the proper form for kinetic analysis of reaction curve. When NLSF with Equ.(4) is realized with $S_1$ as a nonlinear parameter, computation takes nearly 10 s with a Celeron 300A CPU on a personal computer; currently, however, computational resources are no longer a problem, and thus Equ.(4) or its equivalent equations should always be adopted.
+
+The selection of a weighting factor for kinetic analysis of reaction curve is also a concern. Based on error propagation and the principle of weighted NLSF with $y$ defined in Equ.(3), squares of instantaneous rates can be the weighting factors ($W_f$) with Equ.(4) for NLSF to get the weighted sum of residual squares ($Q$), as described in Equ.(6), Equ.(7) and Equ.(8) (Baywenton, 1986; Draper & Smith, 1998; Gutierrez & Danielson, 2006; Miller, J. C. & Miller, J. N., 1993). The use of a weighting function like Equ.(7) can mitigate the effects of errors in substrate or product concentrations near the completion of the reaction. The resistance of an estimated parameter to reasonable changes in the data ranges for analysis (variation within 3% in our studies) can be a criterion to judge the reliability of the estimated parameter.
+
+$$ \frac{\partial y}{\partial S} = -\frac{(K_m + S)}{K_m \times S} \qquad (6) $$
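
Since Equ.(7) and Equ.(8) continue on the next page, the sketch below uses only the stated result that squares of the instantaneous rates serve as the weighting factors $W_f$: by Equ.(6) the error in $y$ scales as $(K_m + S)/(K_m \times S)$ times the error in $S$, so weighting each residual by $v^2 = (V_m \times S / (K_m + S))^2$ damps the points near reaction completion (all numbers below are assumed):

```python
# Minimal sketch (our illustration): weighted sum of residual squares with
# W_f taken as the squared instantaneous rate, v^2 = (Vm*S/(Km + S))^2,
# which is proportional to 1/(dy/dS)^2 from Equ.(6).
def weighted_ssr(residuals, substrate_levels, Vm, Km):
    weights = [(Vm * S / (Km + S)) ** 2 for S in substrate_levels]  # W_f
    return sum(w * r * r for w, r in zip(weights, residuals))

# The same residual counts far less near completion (small S) than early on:
early = weighted_ssr([0.1], [1.00], Vm=0.01, Km=0.5)
late = weighted_ssr([0.1], [0.05], Vm=0.01, Km=0.5)
```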
\ No newline at end of file
diff --git a/samples/texts/2262004/page_3.md b/samples/texts/2262004/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..7fc7986bc897a58e9d2d459675eed69f09f67e49
--- /dev/null
+++ b/samples/texts/2262004/page_3.md
@@ -0,0 +1,7 @@
+rates with expanded linear ranges and practical analysis efficiency. This integration strategy is effective at substrate concentrations from one-eighth of $K_m$ to three-fold $K_m$ (Li, et al., 2011; Liao, et al., 2009; Liu, et al., 2009; Yang, et al., 2011). The integration strategy for enzyme initial rate assay uses a special method to convert $V_m$ into initial rates so that the indexes of enzyme activities by both methods become the same; it is applicable to enzymes suffering strong inhibition by substrates/products (Li, et al., 2011). Walsh et al. proposed an integration strategy to measure enzyme initial rates, but they employed Equ.(9), which requires substrate levels below 10% of $K_m$ (Walsh, et al., 2010). Our integration strategy is valid at any substrate level that satisfies Equ.(2) and hence can be a universal approach to common enzymes of different $K_m$. The principles and applications of the integration strategy to one-enzyme reaction systems and enzyme-coupled reaction systems are discussed below.
+
+As for one-enzyme reaction systems, kinetic analysis of reaction curve can be realized with an integrated rate equation after data transformation; the integration strategy for enzyme initial rate assay requires enzyme kinetics on a single substrate and an integrated rate equation with reaction time as the predictor variable (Liao, et al., 2003a, 2005a; Zhao, L.N., et al., 2006). Moreover, the integration strategy should solve the following challenges: (a) there should be an overlapped range of enzyme activities measurable by both methods with consistent results; (b) there should be consistent slopes of the linear response of enzyme activities to enzyme quantities by both methods (Figure 1). After these two challenges are solved, the linear segment of response by the classical initial rate method is an extension of the linear segment of response by kinetic analysis of reaction curve (Liu, et al., 2009).
+
+To solve the first challenge, a practical $S_0$ and a reasonable duration to monitor reaction curves are required as optimized experimental conditions for favourable analysis efficiency. Mathematical derivation and simulation analyses demonstrate that a ratio of $S_0$ to $K_m$ from 0.5 to 2.5 and a duration of 5.0 min to monitor reaction curves at intervals no longer than 10 s can solve the first challenge for most enzymes; any ratio of $S_0$ to $K_m$ smaller than 0.5 or larger than 2.5 requires a longer duration to monitor reaction curves. The use of $S_0$ of about one-eighth of $K_m$ requires no shorter than 8.0 min of monitoring at 10-s intervals to solve the first challenge (Li, et al., 2011; Liu, et al., 2009). When $S_0$ is much larger than three times $K_m$, the reaction time to record reaction curves for analysis should be much longer than 5 min. Clearly, the first challenge can be solved with practical $S_0$ for favourable analysis efficiency.
+
+To solve the second challenge, $K_m$ and other parameters should be optimized and fixed as constants to estimate $V_m$ by kinetic analysis of reaction curve, and a preset substrate concentration (PSC) should be optimized to convert $V_m$ into initial rates according to the differential rate equation. In theory, a reliable $V_m$ should be independent of ranges of data when they are reasonably restricted, and CVs for estimating parameters by enzymatic analysis are usually about 5%. Hence, the estimation of $V_m$ with variations below 3% for the changes of substrate consumption percentages from 60% to 90% can be a criterion to select the optimized set of preset parameters. For converting $V_m$ into initial rates, the optimized PSC is usually about 93% of $S_0$ and can be refined for different enzymes (Li, et al., 2011; Liao, et al., 2009; Liu, et al., 2009; Yang, et al., 2011). Optimized $K_m$ and PSC to solve the second challenge are parameters for data processing while optimized $S_0$ and reaction duration to solve the first challenge are experimental conditions. The concomitant solution of the two challenges provides feasibility and potential reliability to the integration strategy.
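The conversion of $V_m$ into an initial-rate index at the PSC follows the Michaelis-Menten differential rate equation; a minimal Python sketch, where only the 93%-of-$S_0$ PSC figure comes from the text and all kinetic parameter values are illustrative:

```python
def rate_at_psc(v_m, k_m, s0, psc_fraction=0.93):
    """Convert an estimated V_m into an initial-rate index by evaluating
    the Michaelis-Menten differential rate equation v = V_m*S/(K_m + S)
    at the preset substrate concentration (PSC), here 93% of S0."""
    psc = psc_fraction * s0
    return v_m * psc / (k_m + psc)

# Illustrative values (not from the chapter): V_m = 1.0, K_m = 20, S0 = 25 (uM).
v0 = rate_at_psc(v_m=1.0, k_m=20.0, s0=25.0)
```

After this conversion, both methods report the same activity index, which is what makes their two linear segments directly comparable.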
\ No newline at end of file
diff --git a/samples/texts/2262004/page_30.md b/samples/texts/2262004/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..4e7ff87364cae989e3b9664d47b36fb5cb991174
--- /dev/null
+++ b/samples/texts/2262004/page_30.md
@@ -0,0 +1,11 @@
+$$W_f = \partial S / \partial y = -K_m \times S / (K_m + S) \quad (7)$$
+
+$$Q = \sum W_f^2 \times (y_{\text{predicted}} - y_{\text{calculated}})^2 \quad (8)$$
+
+A further concern is which parameter is suitable for estimation by kinetic analysis of reaction curve. In theory, all parameters of an enzyme reaction system can be estimated simultaneously by kinetic analysis of reaction curve. However, there is unknown covariance among some parameters that devalues their reliability; the accuracy of original data for analyses is limited, and the estimation of some parameters with narrow working ranges has negligible practical roles. $V_m$ is independent of all other parameters and so is $S_0$, and the assay of $V_m$ and $S_0$ is already routinely practiced in biomedical analyses. Therefore, $V_m$ and $S_0$ may be the parameters suitable for estimation by kinetic analysis of reaction curve. Additionally, $K_m$ is used for screening enzyme mutants and enzyme inhibitors; but $K_m$ estimated by kinetic analysis of reaction curve usually exhibits lower reliability and is preferably fixed for estimating $V_m$ and $S_0$. If $K_m$ is estimated as well, $S_1$ should be at least 1.5 times $K_m$ and there should be more than 85% consumption of the substrate in the data selected for analysis (Atkins & Nimmo, 1973; Liao, et al., 2005a; Newman, et al., 1974; Orsi & Tipton, 1979). To estimate $K_m$, the initial datum ($S_1$) and its corresponding ending datum from a reaction curve for analysis should be tried sequentially till the requirements for data range are met concurrently. In this case, the estimation of $S_1$ has no practical role. In general, the resistance of $V_m$ and $S_0$ to reasonable changes in ranges of data for analyses can be a criterion to select the optimized set of parameters that are fixed as constants.
+
+Compared with the low reliability of estimating $K_m$ independently for screening enzyme inhibitors and enzyme mutants, the ratio of $V_m$ to $K_m$ as an index of enzyme activity can be estimated robustly by kinetic analysis of reaction curve. Reversible inhibitors of Michaelis-Menten enzymes include competitive, noncompetitive, uncompetitive and mixed ones (Bergmeyer, 1983; Dixon & Webb, 1979; Marangoni, 2003). The ratio of $V_m$ to $K_m$ will respond to concentrations of common inhibitors except uncompetitive ones, which are very rare in nature. Thus, the ratio of $V_m$ to $K_m$ can be used for screening common inhibitors. More importantly, the ratio of $V_m$ to $K_m$ is an index of the intrinsic activity of an enzyme, and its estimation can also be a promising strategy to screen enzyme mutants of powerful catalytic capacity (Fersht, 1985; Liao, et al., 2001; Northrop, 1983).
+
+For robust estimation of the ratio of $V_m$ to $K_m$ of an enzyme, $S_0$ can be preset at a value below 10% of $K_m$ to simplify Equ.(2) into Equ.(9). Steady-state data from a reaction curve can be analyzed after data transformation according to the left part of Equ.(9). For validating Equ.(9), it has been proposed that $S_0$ should be below 1% of $K_m$ (Meyer-Almes & Auer, 2000). The use of extremely low $S_0$ requires special methods to monitor enzyme reaction curves, and steady-state reaction cannot always be achieved with enzymes of low intrinsic catalytic activities. On the other hand, the use of $S_0$ below 10% of $K_m$ is reasonable to estimate the ratio of $V_m$ to $K_m$ (Liao, et al., 2001). To estimate the ratio of $V_m$ to $K_m$, the use of Equ.(9) to analyze data is robust and resistant to variations of $S_0$ as long as Equ.(9) is valid; this property makes the estimation of the ratio of $V_m$ to $K_m$ for screening reversible inhibitors superior to the estimation of half-inhibition concentrations (Cheng & Prusoff, 1973).
+
+$$\ln(S_1/S) = a + (V_m/K_m) \times t \quad (9)$$
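As a concrete illustration of Equ.(9), a noise-free first-order reaction curve can be transformed and fitted by ordinary least squares, recovering $V_m/K_m$ as the slope; the rate constant, $S_1$ and sampling scheme below are illustrative, not taken from the chapter:

```python
import math

ratio = 0.05          # assumed V_m/K_m (1/s), illustrative
s1 = 10.0             # first analyzed substrate concentration (uM), illustrative
times = [10.0 * i for i in range(1, 31)]            # 10-s sampling intervals
s = [s1 * math.exp(-ratio * t) for t in times]      # first-order decay, S0 << K_m
y = [math.log(s1 / si) for si in s]                 # data transformation of Equ.(9)

# Ordinary least squares for y = a + slope*t; the slope estimates V_m/K_m.
n = len(times)
t_mean = sum(times) / n
y_mean = sum(y) / n
slope = sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y)) / \
        sum((t - t_mean) ** 2 for t in times)
```

The fitted intercept corresponds to $a$ in Equ.(9); with real data it absorbs any offset before steady state, so the slope remains the estimate of $V_m/K_m$.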
\ No newline at end of file
diff --git a/samples/texts/2262004/page_31.md b/samples/texts/2262004/page_31.md
new file mode 100644
index 0000000000000000000000000000000000000000..d8bf415f4ff7d9fe7cfb23e3ed3c766e445d5c4e
--- /dev/null
+++ b/samples/texts/2262004/page_31.md
@@ -0,0 +1,11 @@
+Kinetic analysis of reaction curve requires more considerations when activities of enzymes are altered by their substrates/products. In this case, more parameters can be included in kinetic models similar to Equ.(2) for kinetic analysis of reaction curve, but a complicated process is required to optimize reaction conditions and preset parameters. Based on the principle for kinetic analysis of reaction curve described above, we developed some new integration strategies to successfully quantify enzyme initial rates and substrates with satisfactory performance even when the activities of enzymes of interest are altered significantly by substrates/products (Li, et al., 2011; Liao, 2007a; Zhao, L.N., et al., 2006).
+
+## 2.3 Kinetic analysis of enzyme-coupled reaction curve
+
+When neither substrate nor product is suitable for continuous monitoring of reaction curve, a tool enzyme can be used to regenerate a substrate or consume a product of the enzyme of interest; the action of the tool enzyme should consume/generate a substrate/product as an indicator suitable for continuous monitoring of reaction curve. Namely, the reaction of the tool enzyme is coupled to the reaction of an enzyme of interest for continuous monitoring of reaction curve (Bergmeyer, 1983; Guilbault, 1976; Dixon & Webb, 1979). When such enzyme-coupled assays are used to measure initial rates of an enzyme, the linear range is always unsatisfactory because both the activity of the tool enzyme and the concentration of the substrate of the tool enzyme are limited (Bergmeyer, 1983; Dixon & Webb, 1979). It is expected that kinetic analysis of enzyme-coupled reaction curve may effectively enhance the upper limit of linear response. However, the kinetics of enzyme-coupled reaction systems is described with a set of differential rate equations, which makes it difficult to access an integrated rate equation with the predictor variable of reaction time.
+
+In this case, iterative numerical integration to obtain calculated reaction curves for NLSF to a reaction curve of interest can be used (Duggleby, 1983, 1994; Moruno-Davila, et al., 2001; Varon, et al., 1998; Yang, et al., 2010). Lactic dehydrogenase (LDH) is widely used as a tool enzyme for enzyme-coupled assay. The assay of the activity of alanine aminotransferase (ALT) in sera has important biomedical roles and usually employs the LDH-coupled assay. For LDH-coupled ALT assay, iterative numerical integration of the set of differential rate equations with each set of preset parameters from a preset starting point can produce a calculated reaction curve; such a calculated reaction curve can be made discrete at the same intervals as the reaction curve of interest and then be used for NLSF to the reaction curve of interest.
+
+The process of iterative numerical integration for LDH-coupled ALT assay is given below (Yang, et al., 2010). In an LDH-coupled ALT reaction system, assigning the instantaneous concentration of NADH to $C_{n,i}$, the instantaneous concentration of pyruvate to $C_{p,i}$, the instantaneous absorbance at 340 nm for NADH to $A_i$, the molar absorptivity of NADH to $\epsilon$, the initial rate of ALT under steady-state reaction to $V_{1k}$, the maximal activity of LDH to $V_{max}$, and the integration step to $\Delta t$, Equ.(10), Equ.(11) and Equ.(12) describe the iterative integration of the set of differential rate equations. Calculated reaction curves according to Equ.(12) using different sets of preset parameters are made discrete and fitted to the reaction curve of interest, with the background absorbance at 340 nm treated as a parameter as well.
+
+$$ C_{n,i} = (A_i - A_b)/\epsilon \quad (10) $$
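The iteration can be sketched with a forward-Euler loop over a simplified two-step model: ALT produces pyruvate at the steady rate $V_{1k}$, while LDH consumes pyruvate and NADH following Michaelis-Menten kinetics. This is not the chapter's exact Equ.(11)-(12); all parameter names and values here are illustrative assumptions:

```python
def coupled_curve(v1k, v_max, k_mp, c_n0, a_b, eps, dt=0.5, n_steps=600):
    """Forward-Euler sketch of an LDH-coupled ALT reaction curve.
    v1k: steady ALT rate; v_max, k_mp: LDH Michaelis-Menten parameters;
    c_n0: initial NADH; a_b: background absorbance; eps: absorptivity
    per concentration unit. Returns the calculated A340 curve."""
    c_p, c_n, curve = 0.0, c_n0, []
    for _ in range(n_steps):
        v_ldh = v_max * c_p / (k_mp + c_p)   # LDH consumes pyruvate and NADH
        c_p += (v1k - v_ldh) * dt            # ALT feeds pyruvate in
        c_n = max(c_n - v_ldh * dt, 0.0)
        curve.append(a_b + eps * c_n)        # Equ.(10) rearranged: A = A_b + eps*C_n
    return curve

# Illustrative parameters (uM and seconds): 200 uM NADH, 5-min window.
curve = coupled_curve(v1k=0.2, v_max=5.0, k_mp=50.0, c_n0=200.0,
                      a_b=0.05, eps=0.00622)
```

Curves computed this way for different preset parameter sets are then fitted (NLSF) to the recorded curve, as the text describes.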
\ No newline at end of file
diff --git a/samples/texts/2262004/page_4.md b/samples/texts/2262004/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..334ffce7bdcf86e229bbaf2c0337bbae55a5c521
--- /dev/null
+++ b/samples/texts/2262004/page_4.md
@@ -0,0 +1,7 @@
+Fig. 1. The integration strategy for enzyme initial rate assay (Modified from Liu et al. (2009)).
+
+After the integration strategy for enzyme initial rate assay is validated, a switch point should be determined for changing from the classical initial rate method to kinetic analysis of reaction curve. The estimation of Vm by kinetic analysis of reaction curve usually prefers reasonably high substrate consumption percentages. Therefore, the substrate consumption percentage that gives an enzyme activity from 90% to 100% of the upper limit of linear response by the classical initial rate method can be used as the switch point.
+
+It should be noted that the lower limit of linear response is difficult to define for enzyme initial rate assay by an integration strategy. For most methods, the lower limit of linear response is usually defined as three times the standard error of estimate (Miller, J. C. & Miller, J. N., 1993). Usually, enzyme initial rate assay utilizes just one method for data processing, and the difference between the lower limit and the upper limit of linear response is seldom over 30-fold. By the integration strategy, the measurable range of enzyme quantities covers two orders of magnitude and the detection limit is reduced to that of the classical initial rate method. By manual operation, different dilution ratios of a stock solution of the enzyme have to be used, and any dilution error will increase the standard error of estimate for regression analysis. The measurement of higher enzyme activities will inevitably have larger standard deviations. Thus, regression analysis of the response of all measurable enzyme initial rates by the integration strategy to quantities of the enzyme will give a higher standard error of estimate and thus an unfavourable lower limit of linear response. By this new integration strategy, we arbitrarily use twice the lower limit of linear response by the classical initial rate method as the lower limit if the overall standard error of estimate is more than twice that by the classical initial rate method alone; or else, the lower limit of linear response is still three times the overall standard error of estimate.
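The decision rule in the last sentence can be written down directly; a small sketch, with hypothetical numbers:

```python
def integration_lower_limit(see_integration, see_classical, ll_classical):
    """Lower limit of linear response for the integration strategy:
    twice the classical lower limit when the overall standard error of
    estimate (SEE) exceeds twice the classical SEE, otherwise three
    times the overall SEE."""
    if see_integration > 2 * see_classical:
        return 2 * ll_classical
    return 3 * see_integration

# Hypothetical numbers: classical SEE 0.01, classical lower limit 0.03.
ll_a = integration_lower_limit(0.025, 0.01, 0.03)  # SEE more than doubled
ll_b = integration_lower_limit(0.012, 0.01, 0.03)  # SEE comparable
```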
+
+Taken together, for measuring initial rates of enzyme acting on single substrate by the integration strategy based on NLSF and data transformation, there are the following basic steps different from those by the classical initial rate method. The first is to work out the
\ No newline at end of file
diff --git a/samples/texts/2262004/page_5.md b/samples/texts/2262004/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..44dbe1b9bcafa8cebce93a37f03db8653d25ee8a
--- /dev/null
+++ b/samples/texts/2262004/page_5.md
@@ -0,0 +1,9 @@
+integrated rate equation with the predictor variable of reaction time. The second is to optimize individually the parameters fixed as constants for kinetic analysis of reaction curve. The third is to optimize the ratio of S₀ to K_m and the duration to monitor reaction curves; usually a ratio of S₀ to K_m from 0.5 to 2.5, a duration of 5.0 min and intervals of 10 s are effective. The fourth is to refine the PSC around 93% of S₀ to convert V_m into initial rates.
+
+As for enzyme-coupled reaction systems, the initial rate itself is estimated by kinetic analysis of reaction curve based on numerical integration and NLSF of calculated reaction curves to a reaction curve of interest. Consequently, neither the conversion of indexes nor the optimization of parameters for such conversion is required, and the integration strategy can be realized easily. By kinetic analysis of enzyme-coupled reaction curve, there should still be a minimum number of effective data and a minimum substrate consumption percentage in the effective data for analysis; these prerequisites lead to unsatisfactory lower limits of linear response under favourable analysis efficiency (the use of reaction duration within 5.0 min). The classical initial rate method is effective for enzyme-coupled reaction systems when activities of the enzyme of interest are not too high. Therefore, this new approach for kinetic analysis of enzyme-coupled reaction curve can be integrated with the classical initial rate method to quantify enzyme initial rates, potentially for wider linear ranges.
+
+With enzyme-coupled reaction systems, only the first challenge should be solved to practice the integration strategy. Namely, the reaction duration and sampling intervals to record reaction curves should be optimized so that there is an overlapped region of enzyme initial rates measurable by both methods with consistent results. The upper limit of the classical initial rate method should be high enough so that data after reaction of about 5.0 min for enzyme activity at such an upper limit are suitable for kinetic analysis of reaction curve. The integration strategy gives an approximate linear range from the lower limit of linear response by the classical initial rate method to the upper limit of linear response by kinetic analysis of LDH-coupled ALT reaction curve (Yang, et al., 2010).
+
+### 2.4.2 New integration strategy for enzyme substrate assay
+
+Analysis of a biochemical as the substrate of a typical tool enzyme, i.e., enzymatic analysis of substrate in biological samples, is important in biomedicine (Bergmeyer, 1983; Dilena, 1986; Guilbault, 1976; Moss, 1980). In general, there are the kinetic method and the end-point method for enzyme substrate assay. The end-point method, also called the equilibrium method, determines the difference between the initial signal for a reaction system before enzyme action and the last signal after the completion of enzyme reaction; such differences, proportional to S₀, can serve as an index of substrate concentration (Dilena, et al., 1986; Guilbault, 1976; Moss, 1980; Zhao, et al., 2009). For better analysis efficiency and lower cost on tool enzymes, kinetic methods for enzyme substrate assay are preferred. Among available kinetic methods, the initial rate method based on the response of initial rates of an enzyme at a fixed quantity to substrate concentrations is conventional; however, it is sensitive to any factor affecting enzyme activities, requires tool enzymes of high K_m, and has narrow linear ranges. Kinetic analysis of reaction curve with a differential rate equation to estimate S₀ has been proposed; it has favourable resistance to variations in enzyme activities and an upper limit of linear response over K_m, but it suffers from high sensitivity to background and has an unfavourable lower limit of linear response (Dilena, et al., 1986; Hamilton & Pardue, 1982; Moss, 1980). Hence, new kinetic methods for enzyme substrate assay are still desired.
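The equilibrium method's index is simply the signal difference scaled by the absorptivity; a one-line sketch, with illustrative numbers (assuming a product monitored by absorbance over a 1-cm path):

```python
def substrate_by_equilibrium(a_0, a_last, eps):
    """End-point (equilibrium) method: the difference between the last
    signal after reaction completion and the initial signal before enzyme
    action is proportional to S0. eps is absorptivity per concentration
    unit (illustrative units)."""
    return (a_last - a_0) / eps

# Illustrative: eps = 0.00622 per uM, observed absorbance change 0.311.
s0 = substrate_by_equilibrium(a_0=0.050, a_last=0.361, eps=0.00622)
```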
\ No newline at end of file
diff --git a/samples/texts/2262004/page_6.md b/samples/texts/2262004/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..94af16d9872db24930fbc0d7dde674604d26d68f
--- /dev/null
+++ b/samples/texts/2262004/page_6.md
@@ -0,0 +1,7 @@
+For enzymatic analysis of substrate, the equilibrium method can still be preferable as long as it has desirable analysis efficiency and favourable cost on tool enzyme. In theory, the last signal for the stable product or the background in the equilibrium method can be estimated by kinetic analysis of reaction curve with data far before the completion of reaction. This process can be a new kinetic method for enzyme substrate assay and is distinguished from the equilibrium method and other kinetic methods by its prediction of the last signal after the completion of enzyme reaction (Liao, 2005; Liao, et al., 2003, 2005a, 2006; Zhao, L.N., et al., 2006; Zhao, Y.S., et al., 2006, 2009). This new kinetic method should have resistance to factors affecting enzyme activities and upper limit of linear response higher than $K_m$ besides all advantages of common kinetic methods.
+
+An enzyme reaction curve can be monitored by absorbance of a stable product or the substrate itself (Figure 2). The initial absorbance before enzyme action ($A_0$) thus is the background ($A_b$) when absorbance of a stable product is quantified, or is the absorbance of the substrate plus background when absorbance of the substrate is quantified. The last absorbance after the completion of enzyme reaction, which is predicted by kinetic analysis of reaction curve, is the maximum absorbance of the stable product plus the background ($A_m$) or $A_b$ itself. There is strong covariance between the initial signal and the last signal for the same reaction system; exploiting this covariance enhances the precision of this kinetic method for substrate assay (Baywenton, 1986; Liao, et al., 2005b; Zhao, Y.S., et al., 2009).
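The prediction of the last signal from data recorded far before completion can be illustrated on a first-order reaction curve. The sketch below uses the Kezdy-Swinbourne linearization, a standard way to extrapolate $A_m$ from equally spaced points, rather than the NLSF procedure of the cited work; all parameter values are illustrative:

```python
import math

# Simulated first-order reaction curve (illustrative parameters):
# A(t) = A_m - (A_m - A_0) * exp(-k t), sampled at 10-s intervals for 5 min.
a_m_true, a_0, k = 0.800, 0.050, 0.01
times = [10.0 * i for i in range(31)]
a = [a_m_true - (a_m_true - a_0) * math.exp(-k * t) for t in times]

# Kezdy-Swinbourne: A(t+tau) = A_m*(1 - e^{-k tau}) + e^{-k tau}*A(t),
# so a straight line of A(t+tau) vs A(t) gives A_m = intercept/(1 - slope).
tau = 6                          # offset of 6 samples = 60 s
x, y = a[:-tau], a[tau:]
n = len(x)
xm, ym = sum(x) / n, sum(y) / n
slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
intercept = ym - slope * xm
a_m_pred = intercept / (1 - slope)
```

Here the curve is only about 95% complete at 5 min, yet the extrapolated $A_m$ matches the true plateau.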
+
+Fig. 2. Demonstration of reaction curves of uricase (293 nm) and glutathione-S-transferase (340 nm), and the prediction of the last absorbance after infinite reaction time
+
+However, this new kinetic method for substrate assay can by no means concomitantly have wider linear ranges and desirable analysis efficiency. Due to the prerequisites on the quality of data for kinetic analysis of reaction curve, the activity of a tool enzyme for enzymatic analysis of substrate should be reasonably high for a higher upper limit of linear response, but reasonably low for a favourable lower limit of linear response. On the other hand, the duration to monitor reaction curves should be long enough to have a higher upper limit at reasonable cost on a tool enzyme, but as short as possible for favourable analysis efficiency. Thus, this new kinetic method alone requires tough optimizations of conditions. Moreover, there is inevitable random noise from any instrument used to record an enzyme reaction curve; when there is only a small difference between the initial signal before enzyme action and the last signal recorded after the preset reaction duration, this kinetic method for
\ No newline at end of file
diff --git a/samples/texts/2262004/page_7.md b/samples/texts/2262004/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f4677f63cf5ca5e80ea3d8289f880bb27393983
--- /dev/null
+++ b/samples/texts/2262004/page_7.md
@@ -0,0 +1,48 @@
+substrate assay always has unsatisfactory precision. Therefore, this new kinetic method
+itself is still far from satisfactory for substrate assay.
+
+To concomitantly have wider linear ranges, desirable analysis efficiency and favourable
+precision for enzyme substrate assay, the integration of kinetic analysis of reaction curve
+with the equilibrium method can be used. The indexes of substrate quantities by the two
+methods have exactly the same physical meanings, and thus the integration strategy can be
+easily realized for enzyme substrate assay. By this integration strategy, there should still be
+an overlapped range of concentrations of the substrate measurable consistently by both
+methods, besides a switch threshold within such an overlapped region to change from the
+equilibrium method to kinetic analysis of reaction curve. Additionally, this overlapped
+region of substrate concentration measurable by both methods with consistent results
+should localize in a range of substrate concentration high enough for reasonable precision of
+substrate assay based on kinetic analysis of enzyme reaction curve. These requirements can
+be met as described below. (a) The upper limit of linear response by the equilibrium method
+should be optimized to be high enough, so that the difference between the initial signal
+before enzyme action and the last recorded signal for about 80% of this upper limit is 50
+times higher than the random noise of an instrument to record enzyme reaction curves; such
+a difference can be used as the switch threshold. (b) The activity of a tool enzyme and the
+duration to monitor reaction curve as experimental conditions should be optimized; kinetic
+parameters except Vm for kinetic analysis of reaction curve are optimized as well. The
+resistance of the predicted last signal to reasonable variations in data ranges for analysis can
+be a criterion to judge the optimized set of preset parameters. For favourable analysis
+efficiency in clinical laboratories, reaction duration can be about 5.0 min. This reaction
+duration results in a minimum activity of the tool enzyme for the integration strategy so that
+the upper limit of linear response by the equilibrium method can be high enough to switch
+to kinetic analysis of reaction curve. This integration strategy after optimizations can
+simultaneously have wider linear ranges, higher analysis efficiency and lower cost, better
+precision and stronger resistance to factors affecting enzyme activities.
+
+Similarly, with the integration strategy for enzyme substrate assay, we also use twice the
+lower limit of the equilibrium method as the lower limit by the integration strategy if the
+standard error of estimate is much larger; or else, three times the standard error of estimate
+by the integration strategy is taken as the lower limit of linear response.
+
+In general, the following steps are required to realize this integration strategy for enzyme
+substrate assay: (a) to work out the integrated rate equation with the predictor variable of
+reaction time; (b) to optimize individually the (kinetic) parameters preset as constants for
+kinetic analysis of reaction curve; (c) to optimize the activity of the tool enzyme so that data
+for the upper limit of linear response by the equilibrium method within about 5.0-min
+reaction are suitable for kinetic analysis of reaction curve. As demonstrated later, this
+integration strategy is applicable to enzymes suffering from strong product inhibition.
+
+## 2.5 Applications of new methods to some typical enzymes
+
+We investigated kinetic analysis of reaction curve with arylesterase (Liao, et al., 2001, 2003a,
+2007b), alcohol dehydrogenase (ADH) (Liao, et al., 2007a), gamma-glutamyltransferase (Li, et
+al., 2011), uricase (Liao, 2005; Liao, et al., 2005a, 2005b, 2006; Liu, et al., 2009; Zhao, Y.S., et
\ No newline at end of file
diff --git a/samples/texts/2262004/page_8.md b/samples/texts/2262004/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..91f559ec97106b2409787203a650311d98e142d7
--- /dev/null
+++ b/samples/texts/2262004/page_8.md
@@ -0,0 +1,11 @@
+al., 2006, 2009), glutathione-S-transferase (GST) (Liao, et al., 2003b; Zhao, L.N., et al., 2006), butyrylcholinesterase (Liao, et al., 2009; Yang, et al., 2011), LDH (Cheng, et al., 2008) and LDH-coupled ALT reaction systems (Yang, et al., 2010). Uricase, with its simple kinetics, is a good example to study new methods for kinetic analysis of reaction curve; reactions of GGT and ADH suffer product inhibition, and kinetic analyses of their reaction curves are complicated because they require unreported parameters. Hence, our new methods for kinetic analysis of reaction curve and the integration strategies for quantifying enzyme substrates and initial rates are demonstrated with uricase, GST and ADH as examples.
+
+### 2.5.1 Uricase reaction
+
+Uricase follows simple Michaelis-Menten kinetics on a single substrate in air-saturated buffers, and suffers neither reversible reaction nor product inhibition (Liao, 2005; Liao, et al., 2005a, 2005b; Zhao, Y.S., et al., 2006). The uricase reaction curve can be monitored by absorbance at 293 nm. The potential interference of the intermediate 5-hydroxyisourate with uric acid absorbance at 293 nm can be alleviated by analyzing data of steady-state reaction in borate buffer at high reaction pH (Kahn & Tipton, 1998; Priest & Pitts, 1972). The integrated rate equation for uricase reaction with the predictor variable of reaction time is Equ.(4). Uricases from different sources have different $K_m$ (Liao, et al., 2005a, 2006; Zhang, et al., 2010; Zhao, Y.S., et al., 2006). Using Equ.(4), $K_m$ of *Candida utilis* uricase is estimated with reasonable reliability (Liao, et al., 2005a). Using Equ.(9) to estimate the ratio of $V_m$ to $K_m$, uricase mutants of better catalytic capacity and their sensitivity to xanthine are routinely characterized (data unpublished). Thus, we used uricases of different $K_m$ as models to test the two integration strategies for enzyme substrate assay and initial rate assay, respectively.
+
+Uricase from *Bacillus fastidiosus* A.T.C.C. 29604 has a high $K_m$ that facilitates predicting $A_b$ (Zhang, et al., 2010; Zhao, Y.S., et al., 2006, 2009). Reaction curves at low levels of uric acid with this uricase at 40 U/L are demonstrated in Fig. 3. Steady-state reaction is not reached within 30 s of reaction initiation; it is difficult to get more than 5 data points with absorbance changes over 0.003 for kinetic analysis of reaction curve at uric acid levels below 3.0 µmol/L. At 40 U/L of this uricase, the absorbance after reaction for 5.0 min has a negligible difference from that after reaction for 30 min for uric acid below 5.0 µmol/L. To quantify the difference between $A_0$ and $A_b$ after reaction for 5.0 min, the equilibrium method has an upper limit of about 5.0 µmol/L, while kinetic analysis of reaction curve with $K_m$ as a constant is feasible for $S_0$ of about 5.0 µmol/L. Thus, a change of absorbance over 0.050 between $A_0$ and the absorbance after reaction for 5.0 min can be the switch threshold to change from the equilibrium method to kinetic analysis of reaction curve.
+
+This integration strategy for enzyme substrate assay gives linear response from about 1.5 µmol/L up to 60 µmol/L uric acid at 40 U/L uricase (Fig.4, unpublished), and shows resistance to the action of xanthine at 30 µmol/L in reaction solutions (this level of xanthine always caused negative interference with all commercial kits for serum uric acid assay). Therefore, the integration strategy for uric acid assay is clearly superior to any other uricase method reported.
+
+Uricases from *Candida* sp. with $K_m$ of 6.6 µmol/L (Sigma U0880) and *Bacillus fastidiosus* uricase from A.T.C.C. 29604 with $K_m$ of 0.22 mmol/L are used to test the integration strategy for initial rate assay. The use of uric acid at $S_0$ of 25 µmol/L to monitor reaction curves
\ No newline at end of file
diff --git a/samples/texts/2262004/page_9.md b/samples/texts/2262004/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..5c842ac626004f44ae345ddf83aabc5cf9ef118f
--- /dev/null
+++ b/samples/texts/2262004/page_9.md
@@ -0,0 +1,9 @@
+Fig. 3. Reaction curves (absorbance at 293 nm) at low levels of uric acid and 40 U/L uricase (recombinant uricase in *E. coli* BL21 was as reported before (Zhang, et al., 2010)).
+
within 8.0 min, or at S₀ of 75 µmol/L to monitor reaction curves within 5.0 min, makes the integration strategy to measure initial rates of both uricases feasible; the use of PSC of 93% S₀ to convert Vₘ into initial rates gives a linear range of about two orders of magnitude (Liu, et al., 2009). Therefore, the integration strategy for enzyme initial rate assay is also advantageous.
+
+Fig. 4. Response of absorbance change at 293 nm to preset uric acid levels at 40 U/L uricase.
+
+### 2.5.2 Glutathione-S-transferase reaction
+
+Using purified alkaline GST isozyme from porcine liver as a model, with glutathione (GSH) and 2,4-dinitrochlorobenzene (CDNB) as substrates, GST reaction curves are monitored by absorbance at 340 nm (Kunze, 1997; Pabst, et al., 1974; Zhao, L.N., et al., 2006). To approximate reaction on a single substrate, CDNB is fixed at 1.0 mmol/L while GSH concentrations are kept below 0.10 mmol/L (Zhao, L.N., et al., 2006). Because the concentration of product is calculated from absorbance at 340 nm, the background absorbance before GST reaction is adjusted to zero so that there is no need to treat $A_b$ as a parameter. This treatment of background absorbance eliminates the estimation of $A_b$ and thus avoids the problem of covariance between $A_b$ and $A_m$ in NLSF. However, the GST reaction is more complicated than the uricase reaction because it suffers strong product inhibition with an unreported inhibition constant (Kunze, 1997; Pabst, et al., 1974). Thus, the effectiveness of the two integration strategies for measuring initial rates and GSH levels is tested after the inhibition constant of the product is optimized for kinetic analysis of GST reaction curve.
\ No newline at end of file
diff --git a/samples/texts/2602609/page_1.md b/samples/texts/2602609/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..7608729b64166d46901569c9e5472536e2131a6d
--- /dev/null
+++ b/samples/texts/2602609/page_1.md
@@ -0,0 +1,19 @@
+# Efficient Maximal Privacy in Boardroom Voting and Anonymous Broadcast
+
+Jens Groth¹,²
+
+¹ BRICS*, University of Aarhus, Ny Munkegade bd. 540, 8000 Århus C, Denmark
+² Cryptomathic A/S**, Jægergårdsgade 118, 8000 Århus C, Denmark
+jg@brics.dk
+
+**Abstract.** Most voting schemes rely on a number of authorities. If too many of these authorities are dishonest then voter privacy may be violated. To give stronger guarantees of voter privacy, Kiayias and Yung [1] introduced the concept of elections with perfect ballot secrecy. In this type of election scheme it is guaranteed that the only thing revealed about voters' choices is the result of the election, no matter how many parties are corrupt. Our first contribution is to suggest a simple voting scheme with perfect ballot secrecy that is more efficient than [1]. Considering the question of achieving maximal privacy in other protocols, we look at anonymous broadcast. We suggest the notion of perfect message secrecy, meaning that nothing is revealed about who sent which message, no matter how many parties are corrupt. Our second contribution is an anonymous broadcast channel with perfect message secrecy built on top of a broadcast channel.
+
+## 1 Introduction
+
+Voting schemes are legion in the cryptographic literature. Most of them have in common that they rely on some authorities to conduct the election. Furthermore, if a large group of authorities is dishonest then individual votes may be revealed. To some extent this is unavoidable: some degree of privacy violation is inherent in any election, since a group of voters may subtract their own votes from the result and thereby obtain some information about the remaining voters' choices. In terms of privacy, the best we can hope for is to ensure that nobody can deduce more about the distribution of honest voters' votes than what can be deduced from the result and knowledge of dishonest voters' choices. We call this type of security perfect ballot secrecy.
+
+Kiayias and Yung [1] introduced the notion of perfect ballot secrecy together with self-tallying and dispute-freeness. Self-tallying means there is no need for authorities to tally the votes. Once all votes have been cast, the result can be tallied and verified by anybody. Dispute-freeness says that anybody may
+
+* Basic Research in Computer Science (www.brics.dk),
+funded by the Danish National Research Foundation.
+** www.cryptomathic.com
\ No newline at end of file
diff --git a/samples/texts/2602609/page_10.md b/samples/texts/2602609/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..86e224bb78a9ea46187008df1636feebf2ec5338
--- /dev/null
+++ b/samples/texts/2602609/page_10.md
@@ -0,0 +1,21 @@
+2. Voter 2 selects at random $r_2 \in \mathbb{Z}_q$ and computes $(g^{r_2}, (\prod_{i=2}^n h_i)^{r_2} g^{v_2})$. Multiplying this with the first vote, he gets $(g^{r_1+r_2}, (\prod_{i=2}^n h_i)^{r_1+r_2} g^{v_1+v_2})$. With his knowledge of the secret key $x_2$, he may peel off a layer of this El Gamal encryption of the partial result. In other words, he computes $(g^{r_1+r_2}, (\prod_{i=3}^n h_i)^{r_1+r_2} g^{v_1+v_2})$. He publishes this on the message board.
+
+3. Voter 3 performs the same type of operations as voter 2. He ends up publishing $(g^{r_1+r_2+r_3}, (\prod_{i=4}^n h_i)^{r_1+r_2+r_3} g^{v_1+v_2+v_3})$ on the message board.
+
+...
+
+n. Voter $n$ performs the same type of operations as the previous voters. When he is done, his output is $(g^{\sum_{i=1}^n r_i}, g^{\sum_{i=1}^n v_i})$.
+
+**Tallying:** From the last voter's output we can read off $g^{\sum_{i=1}^n v_i}$. We compute the discrete logarithm, which is possible since the exponent is at most $n$, to get $\sum_{i=1}^n v_i$. This is the number of 1-votes in the election.
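The round-robin encryption, peel-off, and brute-force tally above can be sketched in a few lines. This is a toy illustration with hypothetical small parameters ($p = 2039$, $q = 1019$, $g = 4$) and no zero-knowledge proofs, not the paper's deployed protocol: each voter adds a contribution encrypted under his own and the remaining voters' keys, then strips the layer for his own key.

```python
import random

# Toy parameters (assumption: illustration only, not secure sizes):
# an order-q subgroup of Z_p^* with p = 2q + 1 and generator g.
p, q, g = 2039, 1019, 4

n = 4
xs = [random.randrange(1, q) for _ in range(n)]   # secret keys x_i
hs = [pow(g, x, p) for x in xs]                   # public keys h_i = g^{x_i}
votes = [1, 0, 1, 1]                              # v_i in {0, 1}

u, v = 1, 1                                       # accumulated state on the message board
for i in range(n):
    r = random.randrange(1, q)
    key = 1
    for h in hs[i:]:                              # own key and the remaining voters' keys
        key = key * h % p
    u = u * pow(g, r, p) % p
    v = v * pow(key, r, p) * pow(g, votes[i], p) % p
    v = v * pow(u, -xs[i], p) % p                 # peel off own layer h_i^{r_1+...+r_i}

# The last voter's output is (g^{sum r_i}, g^{sum v_i}); the discrete log is
# found by brute force, feasible because the exponent is at most n.
tally = next(t for t in range(n + 1) if pow(g, t, p) == v)
assert tally == sum(votes)
```

After the last voter acts, the product of public keys still wrapping the state is empty, so the $v$-component is exactly $g^{\sum v_i}$.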
+
+**The full protocol.** The protocol as described is not fair: the last voter can learn the result before casting his own vote. As in [1] we deal with this by requiring that a special election authority act like a voter and cast a zero-vote at the end. Since it is a zero-vote, it does not affect the result. On the other hand, the perfect ballot secrecy of the voting scheme ensures that up to this point nobody but the authority can know any partial tally. Therefore, if the authority is honest, the voting scheme is fair.
+
+To go beyond the honest-but-curious assumption and deal with all kinds of adversaries all we have to do is to add zero-knowledge proofs of knowledge of correctness. These proofs will be the typical 3-move honest verifier proofs ($\Sigma$-protocols [7]), where using the Fiat-Shamir heuristic we can make very efficient non-interactive zero-knowledge proofs. Security of the protocol will be proved in the random oracle model [8].
+
+We wish to support a set $W$ of possible votes. Number the $c$ candidates in $W$ as $0, \dots, c-1$, and encode candidate number $i$ as $(n+1)^i$. From a sum $\sum_{i=1}^n v_i$ of votes with this encoding we can read off the number of votes for each candidate. To compute the result, we have to compute the discrete logarithm of $g^{\sum_{i=1}^n v_i}$. With $n$ voters and $c$ candidates, the number of possible results is $\binom{n+c-1}{c-1}$. With a small number of voters or a small number of candidates, it is possible to compute the discrete logarithm. If we have a larger number of voters and candidates, we may use a cryptosystem similar to the one in [2]. This allows computing discrete logarithms efficiently, but on the other hand the key generation becomes much more complicated. Alternatively, we may use the anonymous broadcast protocol we present in the next section.
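The candidate encoding can be illustrated directly: encoding candidate $i$ as $(n+1)^i$ makes the per-candidate counts exactly the base-$(n+1)$ digits of the summed votes, since at most $n$ votes can land on any candidate so no digit ever carries. A minimal sketch (the variable names are mine):

```python
n, c = 5, 3                                 # 5 voters, 3 candidates

def encode(candidate: int) -> int:
    # candidate i contributes (n+1)^i to the sum of votes
    return (n + 1) ** candidate

ballots = [0, 2, 2, 1, 2]                   # each voter's chosen candidate
total = sum(encode(b) for b in ballots)

counts = []
for _ in range(c):                          # read off the base-(n+1) digits
    total, digit = divmod(total, n + 1)
    counts.append(digit)

assert counts == [1, 1, 3]                  # votes for candidates 0, 1, 2
```

In the protocol this sum sits in the exponent of $g$, which is why the result is only recoverable when the discrete logarithm is feasible.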
+
+The full protocol can be seen in Figure 1.
+
+**Performance.** Let $n$ be the number of voters, $c$ be the number of candidates, and $k$ be the security parameter. We assume that $n^c \le q$.
+
+For each voter it takes $\mathcal{O}(1)$ exponentiations to compute the key $h_i$ and the associated proof. The size of the key is $\mathcal{O}(k)$. Verification of the $n$ keys takes $\mathcal{O}(n)$ exponentiations.
\ No newline at end of file
diff --git a/samples/texts/2602609/page_11.md b/samples/texts/2602609/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..0488ab38fe9438b1b4814575fdc9cf03ce3fd623
--- /dev/null
+++ b/samples/texts/2602609/page_11.md
@@ -0,0 +1,9 @@
+Fig. 1. The voting protocol
+
+In the voting phase, it takes $O(\log c)$ exponentiations to compute the vote and the proof associated with it.³ The vote has size $O(k \log c)$. It takes $O(n \log c)$ exponentiations to verify all the voters' proofs.
+
+In comparison, the protocol in [1] lets the voter do $O(n)$ exponentiations in the key registration phase, the key has size $O(nk)$, and verification of the keys takes $O(n^2)$ exponentiations. In the voting phase, the voter must do $O(\log c)$ exponentiations, the vote has size $O(k \log c)$, and it takes $O(n \log c)$ exponentiations to verify all the votes.
+
+The Kiayias and Yung protocol does have the advantage that many voters can vote at the same time, whereas we demand that they download the current
+
+³ Let us sketch where the $\log c$ factor comes from. In the proof of correctness of a vote the voter has to argue that the encrypted vote is of the form $(1+n)^i$ for $i \in \{0, \dots, c-1\}$. Let $\{b_1, \dots, b_{\log c}\}$ be a set of positive integers with the following property: every number $1, \dots, c-1$ is the sum of some subset, and no subset sums to a number larger than $c-1$. Write the vote as $(1+n)^v = (1+n)^{\sum_{i=1}^{\log c} a_i} = \prod_{i=1}^{\log c} (1+n)^{a_i}$, where for each $i$ either $a_i = b_i$ or $a_i = 0$. This shows that the vote can be built as a product of $\lceil \log c \rceil$ elements. It is possible to prove correctness of such elements and make proofs of products in $O(1)$ exponentiations, giving a total of $O(\log c)$ exponentiations.
\ No newline at end of file
diff --git a/samples/texts/2602609/page_12.md b/samples/texts/2602609/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..dc8cc4597413ed578e42f4ce7f9e4d2ef3a033ba
--- /dev/null
+++ b/samples/texts/2602609/page_12.md
@@ -0,0 +1,15 @@
+state and use that in making their vote. Since the voting protocols are designed
+for self-tallying and demand that all voters participate, we see them as
+realistic only in settings with few voters. With few voters, we believe it is
+reasonable to assume that voters act one at a time; and even if they occasionally
+do not, it is easy to correct.
+
+### 2.3 Security
+
+To argue perfect ballot secrecy of the voting protocol in Figure 1 we will show that a real-life execution of the protocol can be simulated with knowledge of the sum of the honest voters’ votes only. To do so we define two experiments, a real-life experiment, and a simulation experiment.
+
+**Real-life experiment.** In the real-life experiment the voters $V_1, \dots, V_n$ have votes $v_1, \dots, v_n$ that they want to cast. An adversary $\mathcal{A}$ tries to break the protocol. $\mathcal{A}$ has full control over a fixed set of corrupt voters and gets as input a string $z$. $\mathcal{A}$ controls the flow of the protocol, i.e., it decides when to shift to the next phase, and within each phase it can adaptively activate voters. Upon activation, a voter reads the contents of the message board, computes its input according to the voting protocol, and posts it on the message board. After an honest voter has been activated, control is passed back to $\mathcal{A}$. Note that $\mathcal{A}$ may choose not to activate a voter; in that case the voter does not get to submit a vote. Once the election is over, $\mathcal{A}$ computes an output $s$ and halts. The output of the experiment is $(s, \text{cont}, \text{result})$, where cont is the contents of the message board and result is the outcome of the election if this can be computed from cont.
+
+We write $\text{Exp}_{V_1, \dots, V_n, \mathcal{A}}^{\text{real}}(v_1, \dots, v_n, z)$ to denote the distribution of $(s, \text{cont}, \text{result})$ from the real-life experiment.
+
+**Simulation.** In this experiment, a simulator $S$ has to simulate the election. $S$ gets as input a string $z$, including a list of corrupt voters. $S$ controls the random oracle; this enables it to simulate zero-knowledge proofs. In the simulation, we let a trusted party $\mathcal{T}$ handle the message board as well as computation of the result. $\mathcal{T}$ learns the votes $v_1, \dots, v_n$ and which voters are corrupt. In the key registration phase, the voting phase and the fault correction phase, $\mathcal{T}$ also expects to receive the witnesses when $S$ submits a valid key or a valid vote on behalf of a corrupt voter. In particular, this means that $\mathcal{T}$ learns the plaintext vote whenever a corrupt voter tries to cast a vote. Due to the self-tallying property of the voting scheme, the honest voters' partial tally may be revealed at some point. We formulate the following rule for letting $\mathcal{T}$ reveal this partial tally to $S$. First, $\mathcal{T}$ notes which honest voters did not participate in the setup phase or the key-registration phase. In the voting phase, if $S$ is about to activate the last remaining honest voter then it may query $\mathcal{T}$ for the partial tally of the honest voters. Afterwards, we demand that $S$ posts a vote on behalf of this simulated voter. After the election, $S$ halts with output $s$. $\mathcal{T}$ computes the result using the plaintext votes and the honest voters' votes, and outputs the contents of the message board and the result.
\ No newline at end of file
diff --git a/samples/texts/2602609/page_13.md b/samples/texts/2602609/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..435d9e2639e02e5eee88bfd7fbfb384f3e04ec5c
--- /dev/null
+++ b/samples/texts/2602609/page_13.md
@@ -0,0 +1,13 @@
+We write $\mathrm{Exp}_{T,S}^{\mathrm{sim}}(v_1, \ldots, v_n, z)$ to denote the distribution of $(s, \mathrm{cont}, \mathrm{result})$ in the simulation.
+
+The simulator $S$. $S$ runs a copy of $\mathcal{A}$ and simulates everything that $\mathcal{A}$ sees, including the behavior of the honest voters. When $\mathcal{A}$ changes phase in the protocol, so does $S$. If $\mathcal{A}$ lets a corrupt voter post something on the message board, $S$ verifies the proof. If the proof is valid, $S$ uses rewinding techniques to extract the witness. It then submits the entire thing to $\mathcal{T}$. In particular, this means that the vote is submitted in plaintext to $\mathcal{T}$. If $\mathcal{A}$ activates an honest party in the key registration phase, $S$ selects $h_i$ at random and simulates the proof of knowledge of $x_i$. It submits $h_i$ and the simulated proof to $\mathcal{T}$. If $\mathcal{A}$ activates an honest voter in the voting phase, and this is not the last remaining honest voter to vote, $S$ picks $(U,V)$ at random and simulates a proof of knowledge of the corresponding $x_i, r_i, v_i$. If the activated honest voter is the last honest voter to submit a vote, then $S$ queries $\mathcal{T}$ for the partial tally of the honest voters. Knowing the witnesses for the corrupt voters' submissions it can then compute the partial tally of voters that have voted so far. Let $S$ be the set of voters that have voted, including the voter to vote right now. Let $T$ be the set of remaining eligible voters; all of them are corrupt. $S$ picks $U$ at random and computes $V = U^{\sum_{j \in T} x_j} g^{\sum_{i \in S} v_i}$. It then simulates the proof for having computed $(U,V)$ correctly and gives it to $\mathcal{T}$. At some point the simulated $\mathcal{A}$ halts with output $s$. $S$ outputs $s$ and halts.
+
+**Lemma 1.** For any adversary $\mathcal{A}$ there exists a simulator $S$ such that the distributions $\mathrm{Exp}_{V_1, \dots, V_n, \mathcal{A}}^{\mathrm{real}}(v_1, \dots, v_n, z)$ and $\mathrm{Exp}_{T, S}^{\mathrm{sim}}(v_1, \dots, v_n, z)$ are indistinguishable for all $v_1, \dots, v_n, z$.
+
+*Proof.* We use the simulator $S$ described above. To show indistinguishability we will go through a series of intermediate experiments $\mathrm{Exp}_1, \dots, \mathrm{Exp}_3$. We then show that $\mathrm{Exp}_{V_1, \dots, V_n, \mathcal{A}}^{\mathrm{real}}(v_1, \dots, v_n, z) \approx \mathrm{Exp}_1(v_1, \dots, v_n, z) \approx \mathrm{Exp}_2(v_1, \dots, v_n, z) \approx \mathrm{Exp}_3(v_1, \dots, v_n, z) \approx \mathrm{Exp}_{T,S}^{\mathrm{sim}}(v_1, \dots, v_n, z)$.
+
+$\mathrm{Exp}_1$ works like $\mathrm{Exp}_{V_1, \dots, V_n, \mathcal{A}}^{\mathrm{real}}$ except whenever $\mathcal{A}$ submits a valid input on behalf of a corrupt voter. In these cases, we use rewinding techniques to extract the corresponding witnesses in expected polynomial time. This way for each key registration from a corrupt voter we know the corresponding exponent $x_i$, and for each vote we know the vote $v_i$ as well as the randomness $r_i$ and $x_i$. Having knowledge of the witnesses, we may now run the entire protocol using the trusted party $\mathcal{T}$ from the simulation experiment to control the message board. The outputs of the two experiments are the same, so indistinguishability is obvious.
+
+$\mathrm{Exp}_2$ works like $\mathrm{Exp}_1$ except we simulate all proofs made by honest voters. Typically, these proofs are statistical zero-knowledge and then we get statistical indistinguishability between $\mathrm{Exp}_1$ and $\mathrm{Exp}_2$.
+
+Let us consider $\mathrm{Exp}_2$ a little further. Define $g_i = g^{r_i}$ and $h_{ij} = h_j^{r_i}$, where $r_i$ is the randomness used by voter $i$. Consider the voting phase, and at a given time let $S$ denote the voters that have cast votes already and $T$ the voters that
\ No newline at end of file
diff --git a/samples/texts/2602609/page_14.md b/samples/texts/2602609/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..48dd1d4cd442b6e4f67dc3b2514334b1a8968d6a
--- /dev/null
+++ b/samples/texts/2602609/page_14.md
@@ -0,0 +1,23 @@
+have not yet acted in this phase. The state at this time is
+
+$$ (u, v) = \left( \prod_{i \in S} g_i, \left( \prod_{i \in S} \prod_{j \in T} h_{ij} \right) g^{\sum_{i \in S} v_i} \right). $$
+
+Since we are simulating the proofs, we do not need knowledge of $x_i, r_j$ for honest voters. Therefore, to carry out $Exp_2$ we can first compute a table of the $g_i$'s, $h_j$'s and $h_{ij}$'s for the honest voters and then use these values.
+
+Define $Exp_3$ to be $Exp_2$ where we choose the $g_i$'s, $h_j$'s and $h_{ij}$'s randomly from $G_q$. By a hybrid argument using the DDH assumption, the tables of these elements in $Exp_2$ and $Exp_3$ are indistinguishable. Therefore, the two experiments $Exp_2$ and $Exp_3$ are indistinguishable.
+
+It remains that we still use individual votes $v_i$ from honest voters to perform the experiment. However, note that in the voting phase when an honest voter $V_i$ updates from $(u, v)$ to $(U, V)$ he sets $U = ug_i$ and $V = v(\prod_{j \in S} h_{ji}^{-1})(\prod_{j \in T} h_{ij})g^{v_i}$. The elements $\{h_{ij}\}_{j \in T}$ contain new randomness and therefore the vote $v_i$ is perfectly hidden unless $T$ has no honest voters, i.e., $V_i$ is the last honest voter to vote.
+
+These considerations lead us to modify $Exp_3$ in the following way. An honest voter who is not the last honest voter to act in the voting phase computes the new state $(U, V)$ by picking it at random in $G_q \times G_q$. An honest voter $V_i$ who is the last honest voter to vote computes $\sum_{i \in S} v_i$, picks $U$ at random from $G_q$ and sets $V = U^{\sum_{i \in T} x_i} g^{\sum_{i \in S} v_i}$.
+
+This modifies $Exp_3$ into $Exp_{T,S}^{sim}$, so these two experiments are perfectly indistinguishable. $\square$
+
+Lemma 1 says that the election can be simulated without knowledge of the honest voters’ individual votes. Moreover, it forces the simulator to submit plain-text votes on behalf of corrupt voters, so their votes cannot be related to the honest voters’ votes.
+
+**Theorem 1.** *The voting protocol described in Figure 1 is self-tallying, dispute-free, and has perfect ballot secrecy. If the last voter is an honest authority that submits a zero-vote then the protocol is fair.*
+
+*Proof.* It is easy to see that the protocol is self-tallying if all parties act according to the protocol, and the zero-knowledge proofs force the parties to act according to the protocol. Likewise, since the zero-knowledge proofs force parties to act according to the protocol it follows that the protocol is dispute-free. Perfect ballot secrecy follows from Lemma 1. Fairness follows from perfect ballot secrecy, since perfect ballot secrecy implies that we cannot compute any partial result before the authority submits its vote, and if honest the authority does not submit its vote before the end of the election. $\square$
+
+### 2.4 A Veto Protocol
+
+Kiayias and Yung suggested a veto protocol in [5]. By this, we mean a protocol in which any party may veto a proposal; however, it should not be possible to learn who vetoed the proposal or how many vetoed it.
\ No newline at end of file
diff --git a/samples/texts/2602609/page_15.md b/samples/texts/2602609/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..f6e272d9722f785cd4f0027947dfa1785148dcd9
--- /dev/null
+++ b/samples/texts/2602609/page_15.md
@@ -0,0 +1,21 @@
+It is easy to implement such a veto protocol with the voting scheme we have suggested. We let acceptance of the proposal correspond to a 0-vote. A veto, on the other hand, is a vote on a random element from $\mathbb{Z}_q$. This way, if nobody vetoed, the tally is 0; if anybody vetoed, the tally is a random number from $\mathbb{Z}_q$. Discrete logarithms are difficult to compute, but we never need one here: all we need to do is verify that $g^{\text{result}} \neq 1$.
+
+One problem, which also pertains to the scheme in [5], remains: a vetoer knows his own random element and may therefore check whether he is the only one who vetoed. To guard against this, we may rely on the authority that discloses the result to raise $(u, v)$ to a random exponent from $\mathbb{Z}_q^*$ before decrypting. This way it is impossible for a cheating vetoer to tell whether he is the only one to veto the proposal.
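The veto tally can be sketched under these assumptions (hypothetical toy group parameters, the encryption layers omitted): acceptance contributes 0 to the exponent, a veto contributes a random element of $\mathbb{Z}_q$, and the authority's random blinding exponent preserves exactly the zero/nonzero distinction while hiding the tally itself.

```python
import random

# Toy subgroup of Z_p^* with p = 2q + 1 (assumption: illustration only).
p, q, g = 2039, 1019, 4

def outcome(votes):
    """Return True iff someone vetoed; never computes a discrete log."""
    gt = pow(g, sum(votes) % q, p)       # g^{tally}
    # the authority blinds with a random exponent from Z_q^* before
    # disclosure, so a lone vetoer cannot recognize his own random element
    s = random.randrange(1, q)
    return pow(gt, s, p) != 1            # g^{tally * s} == 1 iff tally == 0

accept = 0                               # acceptance is a 0-vote
veto = random.randrange(1, q)            # a veto is a random element of Z_q

assert outcome([accept] * 4) is False    # nobody vetoed
assert outcome([accept, veto, accept, accept]) is True
```

Because the subgroup has prime order $q$, raising to a random $s \in \mathbb{Z}_q^*$ maps $1$ to $1$ and any non-identity element to a non-identity element, so the blinding never flips the outcome.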
+
+## 3 Self-disclosing Anonymous Broadcast with Perfect Message Secrecy
+
+### 3.1 Security Definitions
+
+In this section, we deal with the possibility of building an anonymous broadcast channel on top of an authenticated broadcast channel. We want some strict security requirements to be satisfied. The security requirements are quite similar to those for self-tallying elections with perfect ballot secrecy but we rename the latter notion to stress that anonymous broadcast has many other applications than voting.
+
+**Perfect message secrecy:** Knowledge of the set of messages to be broadcast is only accessible to a coalition of all remaining senders, and this knowledge does not include the connection between senders and messages. This means that a sender is hidden completely among the group of honest senders.
+
+**Self-disclosing:** Once the last sender has submitted his message, anybody may see which messages were broadcast.
+
+**Fairness:** Until the deadline is reached it is impossible to know what messages will be broadcast. Again, we will only demand fairness in a restricted sense, namely it will be ensured by a hopefully honest authority.
+
+**Dispute-freeness:** It is publicly verifiable whether senders follow the protocol or not.
+
+### 3.2 The Anonymous Broadcast Protocol
+
+*Physical analogue.* The senders enter a room one after another, alone. Each brings a box (all boxes look alike) and a padlock for each of the remaining senders. In the room, they write down their message, put it in the box, and lock the box with the padlocks corresponding to the remaining senders. Then they shuffle the boxes around so nobody can tell them apart. In the presence
\ No newline at end of file
diff --git a/samples/texts/2602609/page_2.md b/samples/texts/2602609/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..6a4f3f573dc1a0ce021ef32af5582dcc07e7e957
--- /dev/null
+++ b/samples/texts/2602609/page_2.md
@@ -0,0 +1,19 @@
+of the remaining senders, they then remove one lock from each box, namely the lock that fits their key. As the last sender removes his locks, the messages are revealed.
+
+**_Idea in the protocol._** We use ideas similar to those in the voting protocol. Each sender encrypts his message with the keys of the remaining senders. This means that the message is not revealed until all honest senders have been involved in the protocol and peeled off the layer of encryption corresponding to their secret key. A sender relies on this last honest sender to anonymize his message among all the honest senders.
+
+Since the sender cannot know whether he is the last honest sender, he must also ensure himself that his message is mixed with the messages of the previous senders. Since ElGamal encryption is homomorphic, it is easy to permute and rerandomize (shuffle) all the ciphertexts made up to this point. Furthermore, efficient proofs of a correct shuffle exist, see [9–11].
+
+Summarizing, the protocol works as follows. The senders all register public keys just as in the voting protocol. When a sender wants to add his message to the pool, he encrypts it with the public keys of the remaining senders, including his own key. Then he shuffles all the ciphertexts in a random way. Finally, he peels off a layer of the encryption: he decrypts all the ciphertexts with respect to his own key. He proves in zero-knowledge that all these steps have been performed correctly.
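The add-shuffle-peel pass can be sketched as follows, again with hypothetical toy parameters and the zero-knowledge proofs omitted. The invariant is that after sender $i$ acts, every ciphertext on the board is encrypted under exactly the keys of the remaining senders; after the last sender, all layers are gone and the messages appear in shuffled order.

```python
import random

p, q, g = 2039, 1019, 4                       # toy subgroup parameters (assumption)
n = 3
xs = [random.randrange(1, q) for _ in range(n)]
hs = [pow(g, x, p) for x in xs]               # registered public keys
msgs = [pow(g, m, p) for m in (11, 22, 33)]   # messages encoded as group elements

board = []                                    # list of (u, v) ElGamal-style pairs
for i in range(n):
    key = 1
    for h in hs[i:]:                          # own key and remaining senders' keys
        key = key * h % p
    # add own layered ciphertext to the pool
    r = random.randrange(1, q)
    board.append((pow(g, r, p), pow(key, r, p) * msgs[i] % p))
    # shuffle: random permutation plus rerandomization under the same key
    random.shuffle(board)
    board = [(u * pow(g, s, p) % p, v * pow(key, s, p) % p)
             for (u, v) in board
             for s in [random.randrange(1, q)]]
    # peel off own layer: divide each v-component by u^{x_i}
    board = [(u, v * pow(u, -xs[i], p) % p) for (u, v) in board]

# after the last sender, the v-components are the messages, in shuffled order
assert sorted(v for _, v in board) == sorted(msgs)
```

Each pass costs a linear number of exponentiations in the batch size, which matches the $O(n)$ per-sender cost stated below.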
+
+The full protocol can be seen in Figure 2.
+
+**_Performance evaluation._** Key registration takes $O(1)$ exponentiations for each sender, and each key has size $O(k)$. To verify the correctness of the keys we use $O(n)$ exponentiations.
+
+With respect to message submission, we may use the efficient shuffle proofs of [9–11]. This way it takes $O(n)$ exponentiations to compute the new batch of ciphertexts and the proofs, and such a batch has size $O(nk)$. It takes $O(n^2)$ exponentiations to verify all the senders’ proofs.
+
+**_Simultaneous disclosure._** If we remove the shuffling part of our anonymous broadcast protocol, we get a simultaneous disclosure protocol. We can therefore compare our performance with the simultaneous disclosure protocol of [5], which uses $O(n^2)$ exponentiations for each voter in the registration phase, and $O(n)$ exponentiations for each voter in the message submission phase.
+
+### 3.3 Security
+
+To argue perfect message secrecy we show that the broadcast protocol can be simulated without knowledge of the individual messages. Very similar to the case of the voting protocol we therefore define a real-life experiment and a simulation experiment.
\ No newline at end of file
diff --git a/samples/texts/2602609/page_3.md b/samples/texts/2602609/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..a736ad8cd5c9336233a15e8b375bf024bdc4fd48
--- /dev/null
+++ b/samples/texts/2602609/page_3.md
@@ -0,0 +1,7 @@
+Fig. 2. The anonymous broadcast protocol
+
+*Real-life experiment.* We have parties $P_1, \dots, P_n$ with messages $m_1, \dots, m_n$ that they want to broadcast anonymously. An adversary $\mathcal{A}$ with input $z$ controls a fixed set of these parties. $\mathcal{A}$ also controls the scheduling in the protocol, in other words, $\mathcal{A}$ decides when to proceed to the next phase, and within each phase $\mathcal{A}$ activates parties adaptively. When activated a party receives the contents of the message board, computes its input according to the protocol, and posts it on the message board. Control then passes back to $\mathcal{A}$. In the end, $\mathcal{A}$ outputs some string $s$ and halts.
+
+We denote by $\text{Exp}_{P_1, \dots, P_n, \mathcal{A}}^{\text{real}}(m_1, \dots, m_n; z)$ the distribution of outputs $(s, \text{cont}, \text{messages})$ from the experiment, where cont is the content of the message board, and messages is a sorted list of messages from cont.
+
+*Simulation.* Again, we have a trusted party $T$ and a simulator $S$. $T$ controls the message board and has as input $m_1, \dots, m_n$ and a list of corrupted parties. During the execution of the protocol it expects $S$ to provide witnesses for correctness of the actions performed by corrupted parties. When only one honest party re-
\ No newline at end of file
diff --git a/samples/texts/2602609/page_4.md b/samples/texts/2602609/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..11c3e762e1cfe049fc3da2c59e68a6d87768356a
--- /dev/null
+++ b/samples/texts/2602609/page_4.md
@@ -0,0 +1,13 @@
+mains in the broadcast phase, $\mathcal{S}$ can query $\mathcal{T}$ for the set of messages $m_1, \dots, m_k$ submitted by honest parties. After this $\mathcal{S}$ must then submit this honest party's broadcast to $\mathcal{T}$. In the end, $\mathcal{S}$ halts with output $s$, and $\mathcal{T}$ outputs the contents of the message board and the set of messages submitted in lexicographic order.
+
+We write $\text{Exp}_{\mathcal{T},\mathcal{S}}^{\text{sim}}(m_1, \dots, m_n, z)$ for the distribution of $(s, \text{cont}, \text{messages})$.
+
+The simulator $\mathcal{S}$. $\mathcal{S}$ runs a copy of $\mathcal{A}$ simulating anything $\mathcal{A}$ would see in a real-life execution, including the actions of the honest parties. Whenever $\mathcal{A}$ changes phase, so will $\mathcal{S}$. If $\mathcal{A}$ lets a corrupt party submit something with a valid proof for the message board, $\mathcal{S}$ uses rewinding to extract the witness. This way, in the key registration phase $\mathcal{S}$ learns the exponent $x_i$ when corrupt party $P_i$ registers key $h_i$. Likewise, when corrupt party $P_i$ makes a broadcast, $\mathcal{S}$ learns the randomizers used, the new message that was submitted, and the permutation $\pi_i$. After extracting the witness, $\mathcal{S}$ sends everything to the trusted party $\mathcal{T}$. If $\mathcal{A}$ activates an honest party $P_i$ in the key registration phase, then $\mathcal{S}$ picks $h_i$ at random and simulates a proof that it knows the exponent $x_i$. If $\mathcal{A}$ activates an honest party $P_i$ in the message submission phase, and this is not the last honest party to act, $\mathcal{S}$ selects $(u_i, v_i)$ at random from $G_q \times G_q$. For each $k \in S$, where $S$ is the set of senders that have been active in the protocol, including $P_i$, $\mathcal{S}$ selects $(U_k, V_k')$ and $\bar{V}_k$ at random. It then simulates proofs that it knows the message inside the $(u_i, v_i)$ encryption, that it knows a permutation $\pi_i$ and randomizers such that $((U_k, V_k'))_{k \in S}$ is a shuffle of $((u_k, v_k))_{k \in S}$, and that for each $k \in S$, $(U_k, \bar{V}_k)$ is the decryption of $(U_k, V_k')$ with the key $x_i$ used to form $h_i$. If the sender activated is the last remaining honest sender, $\mathcal{S}$ queries $\mathcal{T}$ for the list of messages of the honest senders. Furthermore, it knows the messages submitted by corrupt parties. It labels these messages $\{m_k\}_{k \in S}$ in random order. It picks $(u_i, v_i)$ at random and picks $(U_k, V_k')$ at random for $k \in S$. Then for $k \in S$ it sets $\bar{V}_k = U_k^{\sum_{j \in T} x_j} m_k$, where $T$ is the set of (corrupt) senders that have not yet been activated. $\mathcal{S}$ simulates the proofs of correctness and submits it all to $\mathcal{T}$. In the end the simulated $\mathcal{A}$ terminates with output $s$. $\mathcal{S}$ outputs $s$ and halts.
+
+**Lemma 2.** For any adversary $\mathcal{A}$ there exists a simulator $\mathcal{S}$ such that the two distributions $\text{Exp}_{P_1, \dots, P_n, \mathcal{A}}^{\text{real}}(m_1, \dots, m_n, z)$ and $\text{Exp}_{\mathcal{T}, \mathcal{S}}^{\text{sim}}(m_1, \dots, m_n, z)$ are indistinguishable for all $m_1, \dots, m_n, z$.
+
+*Proof.* The proof is similar to the proof for Lemma 1. We use the simulator described above. We define three intermediate experiments $\text{Exp}_1$, $\text{Exp}_2$ and $\text{Exp}_3$ and prove that $\text{Exp}_{P_1, \dots, P_n, \mathcal{A}}^{\text{real}}(m_1, \dots, m_n, z) \approx \text{Exp}_1 \approx \text{Exp}_2 \approx \text{Exp}_3 \approx \text{Exp}_{\mathcal{T}, \mathcal{S}}^{\text{sim}}(m_1, \dots, m_n, z)$.
+
+$\text{Exp}_1$ is the real-life experiment where we use rewinding techniques to extract witnesses for valid actions that $\mathcal{A}$ lets corrupt parties make. Having the witnesses, we can then execute this experiment in the trusted message board model, giving $\mathcal{T}$ the witnesses to go along with the messages.
+
+$\text{Exp}_2$ is a modification of $\text{Exp}_1$ where we simulate all proofs that honest parties make. Consider how an honest party $P_i$ computes the new state $((U_j, V_j))_{j \in S}$, where $S$ is the set of parties that have submitted their message.
\ No newline at end of file
diff --git a/samples/texts/2602609/page_5.md b/samples/texts/2602609/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..b35abf32ecba50e270f5de9eba7b087f3e53adf9
--- /dev/null
+++ b/samples/texts/2602609/page_5.md
@@ -0,0 +1,13 @@
+Write $T$ for the set of remaining parties that have not yet made a broadcast. $P_i$ first selects $r_i$ at random and sets $(u_i, v_i) = (g_i, (\prod_{j \in T} h_{ij}) m_i)$, where $g_i = g^{r_i}$ and $h_{ij} = h_j^{r_i}$. Then $P_i$ selects $\pi_i$ as a random permutation over $S$, and computes the pairs $(U_k, V_k') = (u_{\pi_i^{-1}(k)} g_{ik}, v_{\pi_i^{-1}(k)} \prod_{j \in T} h_{ijk})_{k \in S}$, where $g_{ik} = g^{r_{ik}}$ and $h_{ijk} = h_j^{r_{ik}}$, with the $r_{ik}$'s chosen at random from $\mathbb{Z}_q$. Finally, for $k \in S$ it sets $(U_k, V_k) = (U_k, V_k' U_k^{-x_i})$. All this can be computed from a table of $g_i$'s, $h_j$'s, $h_{ij}$'s, $g_{ik}$'s, and $h_{ijk}$'s for the honest parties without knowing the underlying randomizers.
+
+$\text{Exp}_3$ is a modification of $\text{Exp}_2$ where the $g_i$'s, $h_j$'s, $h_{ij}$'s, $g_{ik}$'s and $h_{ijk}$'s for honest parties are selected at random from $G_q$. By a hybrid argument using the DDH assumption, $\text{Exp}_2$ and $\text{Exp}_3$ are indistinguishable.
+
+Looking at $\text{Exp}_3$, we notice that we might as well pick the elements $u_i, v_i, U_k, V_k', V_k$ completely at random from $G_q$ instead of bothering with picking a permutation $\pi_i$ and inserting messages, as long as $P_i$ is not the last honest party to broadcast a message. An honest party $P_i$ that is the last honest party to broadcast a message chooses $u_i, v_i, U_k, V_k'$ at random. It picks a permutation $\pi$ at random and sets $V_k = U_k^{\sum_{j \in T} x_j} m_{\pi(k)}$ for $k \in S$. This last experiment is exactly what happens in the simulation, so $\text{Exp}_3$ and $\text{Exp}_{T,S}^{\text{sim}}$ are perfectly indistinguishable. $\square$
+
+**Theorem 2.** The protocol described in Figure 2 is a self-disclosing, dispute-free anonymous broadcast protocol with perfect message secrecy. If the last sender is an honest authority (who does not submit a message himself) then the protocol is fair.
+
+*Proof.* It is easy to see that the protocol is self-disclosing. The zero-knowledge proofs entail dispute-freeness. Perfect message secrecy follows from Lemma 2. Finally, fairness follows from the perfect message secrecy. $\square$
+
+## 4 Various Comments
+
+*Reusing the public keys.* In both the voting protocol and the anonymous broadcast protocol we may reuse the public keys across many instantiations of the protocols presented here, but some care must be taken. The reason is that we must be able to rewind and extract witnesses from proofs made by the adversary. In the simulation, however, we cannot rewind the trusted party $T$, so we must ensure that we never have to rewind past a point where $T$ gives us a partial tally or a partial set of honest senders' messages. When only a single protocol is running, this is no problem, since in the zero-knowledge proofs we query the random oracle with the current state. When a partial result is released, we always let an honest party act right after it, and this honest party injects new randomness into the state. For this reason an adversary cannot predict what the state will be after the release of a partial result, and therefore cannot make queries before the release of the partial result that it uses after the release. This means that we never have to rewind back before the
\ No newline at end of file
diff --git a/samples/texts/2602609/page_6.md b/samples/texts/2602609/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..bbccc4ff1cba3b4ed375cb7af7c4a8e9e7744796
--- /dev/null
+++ b/samples/texts/2602609/page_6.md
@@ -0,0 +1,30 @@
+release of a partial result. When running multiple protocols we have to query
+the random oracle with the states of all protocols to guarantee not having to
+rewind back past a point where a partial result was released. If we do this, we
+may use the same public keys to run many protocols.
+
+**Universal composability.** The statement of our lemmas is somewhat inspired by the universal composability framework of Canetti [12, 13]. However, we have not proved the protocols to be universally composable. In particular, we do not include a party $Z$ to model the exterior environment. It is possible to make the protocols universally composable against non-adaptive adversaries by generating a key for a public key cryptosystem in the setup phase. After this we can in the key registration phase encrypt the keys $x_i$ and prove to have done so in zero-knowledge. We can set this up so the simulator knows the corresponding secret key for the cryptosystem, and therefore it can make a straight-line extraction of the $x_i$'s. Knowing the $x_i$'s it can then extract votes and messages, and carry on the simulation without ever having to rewind. Unfortunately, the technique above may make the protocols considerably less efficient and we have therefore not pursued this option in the paper.
+
+*Flexibility in participation.* It is easy to set up an election where only a part of the participants is allowed to participate. In that case, we simply ignore the public keys of those not allowed to participate in this instance of the protocol.
+
+In the voting protocol, it is easy to include new voters that may participate in future elections. We can choose the group $G_q \le \mathbb{Z}_p^*$ specified by $p, q, g$ in a publicly verifiable manner, e.g., chosen at random from the binary expansion of $\pi$, or chosen from a string of hashes on some random value. Considering uniform adversaries, it seems reasonable that this gives us a suitably hard group.⁴ Since the new voter can trust this group, he simply needs to register a public key himself in order to join.
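A minimal sketch of such publicly verifiable group generation, in the spirit of hash-based "nothing up my sleeve" parameter derivation: the candidate prime $q$ is read off a hash chain over a public seed, and we search for a safe prime $p = 2q + 1$. The function names, the 32-bit toy size, and the SHA-256 derivation are illustrative assumptions, not the paper's construction; a real group needs a much larger modulus.

```python
# Sketch: derive p, q, g from a public seed so that anyone can re-run
# the search and check no trapdoor was planted.  Toy 32-bit sizes.
import hashlib

def is_prime(n):
    # Deterministic Miller-Rabin for n below ~3.3e24 (bases 2..37).
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def verifiable_group(seed: bytes, bits: int = 32):
    counter = 0
    while True:
        h = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        # Force an odd candidate of exactly `bits` bits.
        q = int.from_bytes(h, "big") >> (256 - bits) | 1 << (bits - 1) | 1
        p = 2 * q + 1
        if is_prime(q) and is_prime(p):
            # A square generates the order-q subgroup of Z_p^*.
            return p, q, pow(2, 2, p), counter
        counter += 1

p, q, g, ctr = verifiable_group(b"any public seed, e.g. digits of pi")
assert p == 2 * q + 1 and pow(g, q, p) == 1
```

Because the search is deterministic in the seed, a new voter can rerun `verifiable_group` and confirm the published $(p, q, g)$ before trusting the group.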
+
+In the anonymous broadcast protocol, we may also include new senders. However, a new sender has to beware of the risk that the commitment scheme may have been chosen with a trapdoor known to the already registered senders. Therefore, the new sender will have to update the commitment key in a publicly verifiable way.
+
+*The authenticated broadcast channel.* We do not need something fancy to form this channel. We may for instance assume that a central server stores all the data, and this central server may act like the authority too.
+
+To ensure correctness of the data we will assume that all communication
+is signed with a digital signature. We cannot rely on a certification authority
+to issue these digital signatures in the strict setting we are working in. Instead,
+
+⁴ While it varies from group to group how hard it is to compute discrete logarithms, we do not know of any groups where the DDH problem can be efficiently solved, provided the group is a subgroup of $\mathbb{Z}_p^*$ where $p$ is a suitably large prime. See also [14] on this issue.
\ No newline at end of file
diff --git a/samples/texts/2602609/page_7.md b/samples/texts/2602609/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..a1faad1f903a3001f0fb0c3a3f6037cd9fd9c5a7
--- /dev/null
+++ b/samples/texts/2602609/page_7.md
@@ -0,0 +1,39 @@
+each participant must certify each other participant's public key. Since we assume
+only a few voters or senders are participating in the protocol, this is a reasonable
+burden to put on the participants.
+
+Imagine now that the central server fails. Since everything is digitally signed,
+the participants may restore the state of the message board from their own
+data. They may now simply set up a new server to run the protocol. It is easy
+to modify the votes in a publicly verifiable manner such that the data fits the
+public key of the new authority.
+
+## References
+
+1. Kiayias, A., Yung, M.: Self-tallying elections and perfect ballot secrecy. In: proceedings of PKC '02, LNCS series, volume 2274. (2002) 141–158
+
+2. Damgård, I., Jurik, M.J.: A length-flexible threshold cryptosystem with applications. In: proceedings of ACISP '03, LNCS series, volume 2727. (2003) 350–364
+
+3. Paillier, P.: Public-key cryptosystems based on composite residuosity classes. In: proceedings of EUROCRYPT '99, LNCS series, volume 1592. (1999) 223–239
+
+4. Kiayias, A., Yung, M.: Robust verifiable non-interactive zero-sharing. In Gritzalis, D., ed.: Secure Electronic Voting. Kluwer Academic Publishers (2003) 139–151
+
+5. Kiayias, A., Yung, M.: Non-interactive zero-sharing with applications to private distributed decision making. In: proceedings of Financial Crypto, LNCS series, volume 2742. (2003) 303–320
+
+6. Algesheimer, J., Camenisch, J., Shoup, V.: Efficient computation modulo a shared secret with application to the generation of shared safe-prime products. In: proceedings of CRYPTO '02, LNCS series, volume 2442. (2002) 417–432
+
+7. Cramer, R., Damgård, I., Schoenmakers, B.: Proofs of partial knowledge and simplified design of witness hiding protocols. In: proceedings of CRYPTO '94, LNCS series, volume 893. (1994) 174–187
+
+8. Bellare, M., Rogaway, P.: Random oracles are practical: A paradigm for designing efficient protocols. In: ACM Conference on Computer and Communications Security 1993. (1993) 62–73
+
+9. Furukawa, J., Sako, K.: An efficient scheme for proving a shuffle. In: proceedings of CRYPTO '01, LNCS series, volume 2139. (2001) 368–387
+
+10. Neff, A.C.: A verifiable secret shuffle and its application to e-voting. In: ACM CCS '01. (2001) 116–125
+
+11. Groth, J.: A verifiable secret shuffle of homomorphic encryptions. In: proceedings of PKC '03, LNCS series, volume 2567. (2003) 145–160
+
+12. Canetti, R.: Security and composition of multi-party cryptographic protocols. Journal of Cryptology **13** (2000) 143–202
+
+13. Canetti, R.: Universally composable security: A new paradigm for cryptographic protocols. In: FOCS 2001. (2001) 136–145
+
+14. Gordon, D.M.: Designing and detecting trapdoors for discrete log cryptosystems. In: proceedings of CRYPTO '92, LNCS series, volume 740. (1992) 66–75
\ No newline at end of file
diff --git a/samples/texts/2602609/page_8.md b/samples/texts/2602609/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..43a61e24e2942fe49225d689503937a4e643d64c
--- /dev/null
+++ b/samples/texts/2602609/page_8.md
@@ -0,0 +1,17 @@
+verify that indeed the parties do follow the protocol. In other words, it is public knowledge whether a party performed correctly or tried to cheat.
+
+Kiayias and Yung [1] presented a self-tallying dispute-free voting scheme with perfect ballot secrecy with security based on the Decisional Diffie-Hellman (DDH) assumption. Later Damgård and Jurik [2] suggested a somewhat similar scheme based on the Decisional Composite Residuosity (DCR) assumption [3]. Both schemes work in the random oracle model and assume an authenticated broadcast channel; in the present paper, we use this model too.
+
+Kiayias and Yung [1, 4, 5] rely on a method they call zero-sharing for achieving maximal privacy. Not only do they build a voting protocol from this, but they also suggest protocols for anonymous vetoing and simultaneous disclosure of secrets.
+
+**_Our contributions._** Our first contribution is a new voting scheme that has the same security properties as [1, 2] but is simpler and more efficient. We base our scheme on the DDH assumption, i.e., ElGamal encryption, but the same ideas can be used in combination with the DCR assumption. The reason for this choice is that it is easy to generate in a distributed manner suitable groups where the DDH assumption is well founded. Distributed generation of a suitable group for the DCR assumption is more complicated [6].
+
+Our second contribution is to construct an anonymous broadcast channel with perfect message secrecy, i.e., no matter which parties are dishonest, they are not able to tell among the honest senders who sent a particular message. This scheme is related to voting in the sense that using this anonymous channel to cast votes gives us a self-tallying voting scheme with perfect ballot secrecy, but it may of course also have many other applications.
+
+## 1.1 Model
+
+Throughout the paper, we assume all parties have access to an authenticated broadcast channel with memory. We imagine this in the form of a message board that all parties can access. Each party has a special designated area where he, and nobody else, can write. No party can delete any messages from the message board. One way of implementing such a message board would be to have a central server on the Internet handling the messages. We discuss this further in Section 4.
+
+When considering security of the protocols we imagine that there is an active polynomial time adversary $\mathcal{A}$ trying to break them. $\mathcal{A}$ is static, i.e., from the beginning of the protocol it has control over a fixed set of parties.
+
+The parties in the protocol work semi-synchronously; the protocol proceeds in phases and in each phase parties may act in random order. We let the adversary decide when to change to the next phase. Since the protocols we design are intended for use with a small number of participants, we find this to be a reasonable assumption. Should several parties by accident happen to execute their action at the same time anyway, then it is quite easy to recover.
\ No newline at end of file
diff --git a/samples/texts/2602609/page_9.md b/samples/texts/2602609/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..6d29a61f8387ad1a231b26db020bd153b95d270f
--- /dev/null
+++ b/samples/texts/2602609/page_9.md
@@ -0,0 +1,29 @@
+# 2 Self-tallying Voting Scheme with Perfect Ballot Secrecy
+
+## 2.1 Security Definitions
+
+The requirements we want the voting scheme to satisfy are the following.
+
+**Perfect ballot secrecy:** This is an extension of the usual privacy requirement.
+In a voting scheme with perfect ballot secrecy the partial tally of a group of voters is only accessible to a coalition consisting of all remaining voters.
+This is the best type of anonymity we can hope for in elections where we publish the result, since a coalition of voters may of course always subtract their own votes.
+
+**Self-tallying:** After all votes have been cast, it is possible for anybody, both voters and third parties, to compute the result.
+
+**Fairness:** Nobody has access to a partial tally before the deadline. We interpret this requirement in a relaxed way: it is guaranteed by a single, hopefully honest, authority.
+
+**Dispute-freeness:** This notion extends universal verifiability. A scheme is dispute-free if everybody can check whether voters act according to the protocol or not. In particular, this means that the result is publicly verifiable.
+
+## 2.2 The voting protocol
+
+*The basic idea.* To quickly describe our idea let us use an analogue with the physical world. Assume a group of people want to vote yes or no to a proposal. To do this the voters take a box with a small slot and each voter puts a padlock on the box. Taking turns the voters one by one drop a white (yes) stone or a black (no) stone into the box and remove their padlock. When the last voter has removed his padlock, they may open the box and see the result of the election. The protocol has perfect ballot secrecy since the box cannot be opened before all honest voters have cast their vote, and thus any honest voter's vote is mixed in with the rest of the honest voters' votes.
+
+*Overview of the protocol.* For simplicity, we first describe the protocol in the honest-but-curious setting, i.e., corrupted voters may leak information but follow the protocol. For simplicity, we also assume there are just two candidates that the voters can choose between.
+
+**Initialization:** First, the voters agree on a group $G_q$ of order $q$ where the DDH problem is hard. Let $g$ be a generator for $G_q$.
+All voters now select at random an element in $\mathbb{Z}_q$. Each voter $j$ keeps his element $x_j$ secret but publishes $h_j = g^{x_j}$.
+
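The initialization above, and the "padlock" intuition behind it, can be sketched in a few lines. The group parameters below are toy values chosen only for illustration (a real deployment needs $q$ of at least 256 bits): $p = 2q + 1$ is a safe prime and $g$ generates the order-$q$ subgroup $G_q$ of $\mathbb{Z}_p^*$. The key identity is that a pad $(\prod_j h_j)^r$ equals $(g^r)^{\sum_j x_j}$, so it can only be removed once every voter's secret key has contributed.

```python
# Toy sketch of the initialization phase with illustrative parameters.
import random

p, q = 2579, 1289                 # toy safe prime pair, p = 2q + 1
g = 4                             # a square, hence a generator of G_q

n = 5                             # number of voters
x = [random.randrange(1, q) for _ in range(n)]   # secret keys x_j
h = [pow(g, xj, p) for xj in x]                  # published keys h_j = g^{x_j}

# "Padlock" identity: (prod_j h_j)^r == (g^r)^{sum_j x_j}, so removing
# the pad requires the cooperation of every key holder.
r = random.randrange(1, q)
pad = 1
for hj in h:
    pad = pad * pow(hj, r, p) % p
assert pad == pow(pow(g, r, p), sum(x) % q, p)
```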
+**Casting votes:** Voters may vote in any adaptively chosen order; however, for simplicity we assume in this example that they vote in the order $1, 2, \dots, n$. Let their choices be $v_1, v_2, \dots, v_n \in \{0, 1\}$.
+The election now proceeds like this:
+
+1. Voter 1 selects at random $r_1 \in \mathbb{Z}_q$ and publishes $(g^{r_1}, (\prod_{i=2}^n h_i)^{r_1} g^{v_1})$.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_1.md b/samples/texts/2704143/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..d115811420fdb712a32686b650c4805667e3510f
--- /dev/null
+++ b/samples/texts/2704143/page_1.md
@@ -0,0 +1,14 @@
+# SEMIPARAMETRIC THEORY FOR CAUSAL MEDIATION ANALYSIS: EFFICIENCY BOUNDS, MULTIPLE ROBUSTNESS AND SENSITIVITY ANALYSIS
+
+BY ERIC J. TCHETGEN TCHETGEN¹ AND ILYA SHPITSER
+
+*Harvard School of Public Health*
+
+While estimation of the marginal (total) causal effect of a point exposure on an outcome is arguably the most common objective of experimental and observational studies in the health and social sciences, in recent years, investigators have also become increasingly interested in mediation analysis. Specifically, upon evaluating the total effect of the exposure, investigators routinely wish to make inferences about the direct or indirect pathways of the effect of the exposure, through a mediator variable or not, that occurs subsequently to the exposure and prior to the outcome. Although powerful semiparametric methodologies have been developed to analyze observational studies that produce double robust and highly efficient estimates of the marginal total causal effect, similar methods for mediation analysis are currently lacking. Thus, this paper develops a general semiparametric framework for obtaining inferences about so-called marginal natural direct and indirect causal effects, while appropriately accounting for a large number of pre-exposure confounding factors for the exposure and the mediator variables. Our analytic framework is particularly appealing, because it gives new insights on issues of efficiency and robustness in the context of mediation analysis. In particular, we propose new multiply robust locally efficient estimators of the marginal natural indirect and direct causal effects, and develop a novel double robust sensitivity analysis framework for the assumption of ignorability of the mediator variable.
+
+**1. Introduction.** The evaluation of the total causal effect of a given point exposure, treatment or intervention on an outcome of interest is arguably the most common objective of experimental and observational studies in the fields of epidemiology, biostatistics and in the social sciences. However, in recent years, investigators in these various fields have become increasingly interested in making inferences about the direct or indirect pathways of the exposure effect, through a mediator variable or not, that occurs subsequently to the exposure and prior to the outcome. Recently, the counterfactual language of causal inference has proven particularly useful for formalizing mediation analysis. Indeed, causal inference
+
+Received March 2011; revised March 2012.
+¹Supported by NIH Grant R21ES019712.
+MSC2010 subject classifications. 62G05.
+*Key words and phrases.* Natural direct effects, natural indirect effects, double robust, mediation analysis, local efficiency.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_10.md b/samples/texts/2704143/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..ae21343d3f8d78d4640f16a018ccc44f93436263
--- /dev/null
+++ b/samples/texts/2704143/page_10.md
@@ -0,0 +1,21 @@
+$$X_2 = Z_2 / \{1 + \exp(Z_1)\} + 10; X_3 = (Z_1 Z_3 / 25 + 0.6)^3$$
+and $X_4 = (Z_2 + Z_4 + 20)^2$, so that $Z$ may be expressed in terms of $X$.
+
+(Model.E) $[E|X_1, X_2, X_3] \sim \text{Bernoulli}([1 + \exp\{(Z_1 - 0.5Z_2 + 0.25Z_3 + 0.1Z_4)\}]^{-1})$;
+
+(Model.M) $[M|E, X_1, X_2, X_3] \sim \text{Bernoulli}([1 + \exp\{-(0.5 - Z_1 + 0.5Z_2 - 0.9Z_3 + Z_4 - 1.5E)\}]^{-1})$;
+
+(Model.Y) $[Y|M, E, X_1, X_2, X_3] \sim 210 + 27.4Z_1 + 13.7Z_2 + 13.7Z_3 + 13.7Z_4 + M + E + N(0, 1)$.
+
+Correctly specified working models were thus obtained by fitting an additive linear regression of $Y$ on $Z$, a logistic regression of $M$ with linear predictor additive in $Z$ and $E$, and a logistic regression of $E$ with linear predictor additive in $Z$, respectively. Incorrect specification involved fitting these models with $X$ replacing $Z$, which produces highly variable weights. For instance, an estimated propensity score as small as $5.5 \times 10^{-33}$ occurred in the simulation study, reflecting an effective violation of positivity; similarly, a mediator predicted probability as small as $3 \times 10^{-20}$ also occurred in the simulation study.
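The data-generating step above can be sketched as follows. The sample size, seed, and the helper name `draw_unit` are illustrative assumptions; $X_1$, which is defined on a preceding page, is omitted here.

```python
# Sketch of the simulation design: a latent Z drives the true models,
# while the analyst observes nonlinear transforms X of Z.
import math
import random

random.seed(2)

def draw_unit():
    z1, z2, z3, z4 = (random.gauss(0, 1) for _ in range(4))
    x2 = z2 / (1 + math.exp(z1)) + 10
    x3 = (z1 * z3 / 25 + 0.6) ** 3
    x4 = (z2 + z4 + 20) ** 2
    # Model.E: P(E=1 | Z) = [1 + exp(z1 - 0.5 z2 + 0.25 z3 + 0.1 z4)]^{-1}
    pe = 1 / (1 + math.exp(z1 - 0.5 * z2 + 0.25 * z3 + 0.1 * z4))
    e = int(random.random() < pe)
    return (z1, z2, z3, z4), (x2, x3, x4), pe, e

units = [draw_unit() for _ in range(1000)]
propensities = [pe for _, _, pe, _ in units]
assert all(0.0 < pe < 1.0 for pe in propensities)
```

A mis-specified analysis regresses on $X$ rather than $Z$; because $X$ distorts $Z$ nonlinearly, fitted propensities can collapse toward 0 or 1, which is the source of the extreme weights reported in the text.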
+
+Tables 4 and 5 summarize simulation results for $\hat{\theta}_0^{\text{ym}}, \hat{\theta}_0^{\text{ye}}, \hat{\theta}_0^{\text{em}}, \hat{\theta}_0^{\text{triple}}, \hat{\theta}_0^{\text{triple},\dagger,1}$ and $\hat{\theta}_0^{\text{triple},\dagger,2}$. When all three working models are correct, all estimators perform well in terms of bias, but there are clear differences between the estimators in
+
+TABLE 4
+Simulation results n = 200
+
+
| | | $\mathcal{M}_{\text{ym}}$ | $\mathcal{M}_{\text{ye}}$ | $\mathcal{M}_{\text{em}}$ | $\mathcal{M}_{\text{union}}$ | $\mathcal{M}_{\text{union}}^{\dagger,1}$ | $\mathcal{M}_{\text{union}}^{\dagger,2}$ |
|---|---|---|---|---|---|---|---|
| All correct | bias | 0.001 | -0.207 | 0.498 | 0.003 | -0.08 | -0.079 |
| | MC s.e.* | 2.614 | 8.333 | 20.214 | 2.6151 | 2.6155 | 2.6153 |
| Y wrong | bias | -9.87 | -10.221 | 0.498 | -0.147 | -0.502 | -0.202 |
| | MC s.e. | 3.322 | 10.539 | 20.214 | 4.461 | 3.177 | 3.141 |
| M wrong | bias | -0.033 | -0.207 | -9.497 | 0.001 | 0.046 | 0.046 |
| | MC s.e. | 2.613 | 8.333 | 15.376 | 2.615 | 2.614 | 2.614 |
| E wrong | bias | -0.001 | 0.132 | 210.450 | 0.066 | -0.089 | -0.087 |
| | MC s.e. | 2.614 | 4.373 | 2336.92 | 4.891 | 2.619 | 2.615 |
| Y, E wrong | bias | -9.869 | -13.535 | 210.454 | -33.090 | -1.4609 | -2.487 |
| | MC s.e. | 3.322 | 5.256 | 2336.92 | 375.334 | 5.187 | 4.245 |
| Y, M wrong | bias | -9.355 | -10.220 | -9.496 | -4.346 | -3.579 | -3.579 |
| | MC s.e. | 3.224 | 10.539 | 15.376 | 3.912 | 3.480 | 3.441 |
| E, M wrong | bias | -0.032 | 0.132 | 205.060 | 0.088 | -0.001 | $-3.77 \times 10^{-5}$ |
| | MC s.e. | 2.614 | 4.373 | 2289.788 | 4.763 | 2.623 | 2.618 |
| Y, E, M wrong | bias | -9.355 | -13.535 | 205.060 | -37.757 | -4.223 | -5.253 |
| | MC s.e. | 3.224 | 5.356 | 2289.78 | 379.122 | 5.835 | 4.828 |
+
+$\mathcal{M}_{\text{ym}}: \hat{\theta}_{0}^{\text{ym}}; \mathcal{M}_{\text{ye}}: \hat{\theta}_{0}^{\text{ye}}; \mathcal{M}_{\text{em}}: \hat{\theta}_{0}^{\text{em}}; \mathcal{M}_{\text{union}}: \hat{\theta}_{0}^{\text{triple}}; \mathcal{M}_{\text{union}}^{\dagger, 1}: \hat{\theta}_{0}^{\text{triple},\dagger, 1}; \mathcal{M}_{\text{union}}^{\dagger, 2}: \hat{\theta}_{0}^{\text{triple},\dagger, 2}$.
+
+*Monte Carlo standard error.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_11.md b/samples/texts/2704143/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..5304392b44aa91432daf2ba6217eb0a66c03400c
--- /dev/null
+++ b/samples/texts/2704143/page_11.md
@@ -0,0 +1,12 @@
+TABLE 5
+Simulation results *n* = 1000
+
| | | $\mathcal{M}_{\text{ym}}$ | $\mathcal{M}_{\text{ye}}$ | $\mathcal{M}_{\text{em}}$ | $\mathcal{M}_{\text{union}}$ | $\mathcal{M}_{\text{union}}^{\dagger,1}$ | $\mathcal{M}_{\text{union}}^{\dagger,2}$ |
|---|---|---|---|---|---|---|---|
| All correct | bias | 0.0324 | 0.004 | -0.106 | 0.034 | -0.047 | -0.047 |
| | MC s.e.* | 1.136 | 3.06 | 6.490 | 1.136 | 1.137 | 1.137 |
| Y wrong | bias | -10.256 | -10.305 | -0.106 | 0.063 | -0.147 | -0.148 |
| | MC s.e. | 1.675 | 4.005 | 6.490 | 1.769 | 1.419 | 1.407 |
| M wrong | bias | $-5 \times 10^{-4}$ | 0.004 | -9.706 | 0.033 | 0.076 | 0.076 |
| | MC s.e. | 1.136 | 3.060 | 5.395 | 1.137 | 1.137 | 1.135 |
| E wrong | bias | 0.032 | 0.135 | $2.4 \times 10^{6}$ | 1908.76 | -0.038 | -0.030 |
| | MC s.e. | 1.136 | 1.794 | $4.3 \times 10^{7}$ | 53911.63 | 1.400 | 1.242 |
| Y, E wrong | bias | -10.256 | -14.011 | $2.4 \times 10^{6}$ | $-1.1 \times 10^{6}$ | 6.201 | 1.024 |
| | MC s.e. | 1.675 | 2.386 | $4.3 \times 10^{7}$ | $2.1 \times 10^{7}$ | 9.406 | 5.097 |
| Y, M wrong | bias | -9.705 | -10.305 | -9.706 | -4.216 | -3.555 | -3.557 |
| | MC s.e. | 1.626 | 4.004 | 5.395 | 1.667 | 1.527 | 1.510 |
| E, M wrong | bias | $5.7 \times 10^{-4}$ | 0.135 | $2.5 \times 10^{6}$ | 2034.83 | 0.0539 | 0.0599 |
| | MC s.e. | 1.136 | 1.794 | $4.6 \times 10^{7}$ | 56090.10 | 1.429 | 1.272 |
| Y, E, M wrong | bias | -9.075 | -14.011 | $2.5 \times 10^{6}$ | $-1.2 \times 10^{6}$ | 4.659 | -0.755 |
| | MC s.e. | 1.626 | 2.386 | $4.6 \times 10^{7}$ | $2.2 \times 10^{7}$ | 10.121 | 5.910 |
+
+$M_{\text{ym}}: \hat{\theta}_{0}^{\text{ym}}; M_{\text{ye}}: \hat{\theta}_{0}^{\text{ye}}; M_{\text{em}}: \hat{\theta}_{0}^{\text{em}}; M_{\text{union}}: \hat{\theta}_{0}^{\text{triply}}; M_{\text{union}}^{\dagger, 1}: \hat{\theta}_{0}^{\text{triply}, \dagger, 1}; M_{\text{union}}^{\dagger, 2}: \hat{\theta}_{0}^{\text{triply}, \dagger, 2}$.
+
+*Monte Carlo standard error.
+
+terms of efficiency. In fact, $\hat{\theta}_0^{\text{ym}}$, $\hat{\theta}_0^{\text{triply}}$, $\hat{\theta}_0^{\text{triply},\dagger,1}$ and $\hat{\theta}_0^{\text{triply},\dagger,2}$ have comparable efficiency for $n = 200, 1000$, but $\hat{\theta}_0^{\text{ye}}$ and $\hat{\theta}_0^{\text{em}}$ are far more variable. Moreover, under mis-specification of a single model, $\hat{\theta}_0^{\text{triply}}$, $\hat{\theta}_0^{\text{triply},\dagger,1}$ and $\hat{\theta}_0^{\text{triply},\dagger,2}$ remain nearly unbiased, and are for the most part substantially more efficient than the corresponding consistent estimator in $\{\hat{\theta}_0^{\text{ym}}, \hat{\theta}_0^{\text{ye}}, \hat{\theta}_0^{\text{em}}\}$. When at least two models are mis-specified, the multiply robust estimators $\hat{\theta}_0^{\text{triply}}$, $\hat{\theta}_0^{\text{triply},\dagger,1}$ and $\hat{\theta}_0^{\text{triply},\dagger,2}$ generally outperform the other estimators, although $\hat{\theta}_0^{\text{triply}}$ occasionally succumbs to the unstable weights, resulting in disastrous mean squared error; see Table 5 when Model M and Model E are both incorrect. In contrast, $\hat{\theta}_0^{\text{triply},\dagger,2}$ generally improves on $\hat{\theta}_0^{\text{triply},\dagger,1}$, which generally outperforms $\hat{\theta}_0^{\text{triply}}$, and for the most part $\hat{\theta}_0^{\text{triply},\dagger,1}$ and $\hat{\theta}_0^{\text{triply},\dagger,2}$ appear to eliminate any deleterious impact of highly variable weights.
+
+**7. A comparison to some existing estimators.** In this section, we briefly compare the proposed approach to some existing estimators in the literature. Perhaps the most common approach for estimating direct and indirect effects when Y is continuous uses a system of linear structural equations; whereby, a linear structural equation for the outcome, given the exposure, the mediator and the confounders, is combined with a linear structural equation for the mediator, given
\ No newline at end of file
diff --git a/samples/texts/2704143/page_12.md b/samples/texts/2704143/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..003f902389c1fcb13ccdd5c2d52bfdefa57ff154
--- /dev/null
+++ b/samples/texts/2704143/page_12.md
@@ -0,0 +1,39 @@
+offers a formal mathematical framework for defining varieties of direct and indirect effects, and for establishing necessary and sufficient identifying conditions of these effects. A notable contribution of causal inference to the literature on mediation analysis is the key distinction drawn between so-called controlled direct effects versus natural direct effects. In words, the controlled direct effect refers to the exposure effect that arises upon intervening to set the mediator to a fixed level that may differ from its actual observed value [Robins and Greenland (1992), Pearl (2001), Robins (2003)]. In contrast, the natural (also known as pure) direct effect captures the effect of the exposure when one intervenes to set the mediator to the (random) level it would have been in the absence of exposure [Robins and Greenland (1992), Pearl (2001)]. As noted by Pearl (2001), controlled direct and indirect effects are particularly relevant for policy making, whereas natural direct and indirect effects are more useful for understanding the underlying mechanism by which the exposure operates. In fact, natural direct and indirect effects combine to produce the exposure total effect.
+
+To formally define natural direct and indirect effects first requires defining counterfactuals. We assume that for each level of a binary exposure $E$, and of a mediator variable $M$, there exists a counterfactual variable $Y_{e,m}$ corresponding to the outcome $Y$ had, possibly contrary to fact, the exposure and mediator variables taken the value $(e, m)$. Similarly, for $E = e$, we assume there exists a counterfactual variable $M_e$ corresponding to the mediator variable had, possibly contrary to fact, the exposure variable taken the value $e$. The current paper concerns the decomposition of the total effect of $E$ on $Y$ in terms of natural direct and natural indirect effects, which, expressed on the mean difference scale, is given by
+
+$$
+\begin{align}
+\overbrace{\mathbb{E}(Y_{e=1} - Y_{e=0})}^{\text{total effect}} &= \mathbb{E}(Y_{e=1, M_{e=1}} - Y_{e=0, M_{e=0}}) \tag{1} \\
+&= \overbrace{\mathbb{E}(Y_{e=1, M_{e=1}} - Y_{e=1, M_{e=0}})}^{\text{natural indirect effect}} + \overbrace{\mathbb{E}(Y_{e=1, M_{e=0}} - Y_{e=0, M_{e=0}})}^{\text{natural direct effect}},
+\end{align}
+$$
+
+where $\mathbb{E}$ stands for expectation.
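The decomposition above is a telescoping identity: adding and subtracting $\mathbb{E}(Y_{e=1, M_{e=0}})$ splits the total effect exactly into the natural indirect and natural direct effects. A tiny simulation confirms this; the potential-outcome model used below is made up purely for illustration.

```python
# Verify the telescoping decomposition (1) on simulated counterfactuals.
import random

random.seed(1)
pop = []
for _ in range(10_000):
    m0 = int(random.random() < 0.3)   # mediator counterfactual M_{e=0}
    m1 = int(random.random() < 0.6)   # mediator counterfactual M_{e=1}
    # Potential outcomes Y_{e,m} for e, m in {0, 1} (illustrative model).
    y = {(e, m): e + 2 * m + random.gauss(0, 1)
         for e in (0, 1) for m in (0, 1)}
    pop.append((m0, m1, y))

def mean(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

total = mean(y[1, m1] - y[0, m0] for m0, m1, y in pop)   # total effect
nie = mean(y[1, m1] - y[1, m0] for m0, m1, y in pop)     # natural indirect
nde = mean(y[1, m0] - y[0, m0] for m0, m1, y in pop)     # natural direct
assert abs(total - (nie + nde)) < 1e-9                   # (1) holds exactly
```

Note the identity holds unit by unit, with no modeling assumptions; the ignorability conditions discussed in the paper are only needed to identify each term from observed data.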
+
+In an effort to account for confounding bias when estimating causal effects, such as the average total effect (1), from nonexperimental data, investigators routinely collect, and adjust for in data analysis, a large number of confounding factors. Because of the curse of dimensionality, nonparametric methods of estimation are typically not practical in such settings, and one usually resorts to one of two dimension-reduction strategies: either one relies on a model for the outcome given exposure and confounders, or alternately one relies on a model for the exposure, that is, the propensity score. Recently, powerful semiparametric methods have been developed to analyze observational studies that produce so-called double robust and highly efficient estimates of the exposure total causal effect [Robins (2000), Scharfstein, Rotnitzky and Robins (1999), Bang and Robins (2005), Tsiatis (2006)] and similar methods have also been developed to estimate controlled direct
\ No newline at end of file
diff --git a/samples/texts/2704143/page_13.md b/samples/texts/2704143/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..7affe8a0a51964d651453b3f4299cf623d4228f7
--- /dev/null
+++ b/samples/texts/2704143/page_13.md
@@ -0,0 +1,43 @@
+the exposure and confounders, to produce an estimator of natural direct and indirect effects. The classical approach of Baron and Kenny (1986) is a particular instance of this approach. In recent work, mainly motivated by Pearl's mediation functional, several authors [Imai, Keele and Tingley (2010), Imai, Keele and Yamamoto (2010), Pearl (2011), VanderWeele (2009), VanderWeele and Vansteelandt (2010)] have demonstrated how the simple linear structural equation approach generalizes to accommodate both the presence of an interaction between exposure and mediator variables, and a nonlinear link function, either in the regression model for the outcome, or in the regression model for the mediator, or both. In fact, when the effect of confounders is also modeled in such structural equations, inferences based on the latter can be viewed as special instances of inferences obtained under a particular specification of model $M_a$ for the outcome and the mediator densities. And thus, as previously shown in the simulations, an estimator obtained under a system of structural equations will generally fail to produce a consistent estimator of natural direct and indirect effects when model $M_a$ is incorrect, whereas, by using the proposed multiply robust estimator, valid inferences can be recovered under the union model $M_b \cup M_c$, even if $M_a$ fails.
+
+A notable improvement on the system of structural equations approach is the double robust estimator of a natural direct effect due to van der Laan and Petersen (2005). Their estimator solves the estimating equation constructed using an empirical version of $S_{\text{NDE,singleton}}^{\text{eff}, M_a \cup M_c}(\theta_0, \delta_0)$ given in the online Appendix. They show their estimator remains CAN in the larger submodel $M_a \cup M_c$ and, therefore, they can recover valid inferences even when the outcome model is incorrect, provided both the exposure and mediator models are correct. Unfortunately, the van der Laan estimator is still not entirely satisfactory because, unlike the proposed multiply robust estimator, it requires that the model for the mediator density is correct. Nonetheless, if the mediator model is correct, the authors establish that their estimator achieves the efficiency bound for model $M_a \cup M_c$ at the intersection submodel $M_a \cap M_c$ where all models are correct; and thus it is locally semiparametric efficient in $M_a \cup M_c$. Interestingly, as we report in the online supplement, the semiparametric efficiency bounds for models $M_a \cup M_c$ and $M_a \cup M_b \cup M_c$ are distinct, because the density of the mediator variable is not ancillary for inferences about the M-functional. Thus, any restriction placed on the mediator's conditional density can, when correct, produce improvements in efficiency. This is in stark contrast with the role played by the density of the exposure variable, which, as in the estimation of the marginal causal effect, remains ancillary for inferences about the M-functional, and thus the efficiency bound for the latter is unaltered by any additional information on the former [Robins, Rotnitzky and Zhao (1994)]. In the online Appendix, we provide a general functional map that relates the efficient influence function for the larger model $M_a \cup M_b \cup M_c$ to the efficient influence function for the smaller model $M_a \cup M_c$ where the model for the mediator is either parametric or semiparametric. Our map is instructive because it makes explicit using
diff --git a/samples/texts/2704143/page_14.md b/samples/texts/2704143/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..fa842aa9c8e825006986dd5a16981cbe9fa521b6
--- /dev/null
+++ b/samples/texts/2704143/page_14.md
@@ -0,0 +1,38 @@
+simple geometric arguments, the information that is gained from increasing restrictions on the law of the mediator. In the online Appendix, we illustrate the map by recovering the efficient influence function of van der Laan and Petersen in the case of a singleton model (i.e., a known conditional density) for the mediator and in the case of a parametric model for the mediator.
+
+**8. A semiparametric sensitivity analysis.** We describe a semiparametric sensitivity analysis framework to assess the extent to which a violation of the ignorability assumption for the mediator might alter inferences about natural direct and indirect effects. Although only results for the natural direct effect are given here, the extension for the indirect effect is easily deduced from the presentation.
+Let
+
+$$t(e, m, x) = \mathbb{E}[Y_{1,m} | E=e, M=m, X=x] - \mathbb{E}[Y_{1,m} | E=e, M \neq m, X=x],$$
+
+then
+
+$$Y_{e',m} \not\perp M \mid E=e, X,$$
+
+that is, a violation of the ignorability assumption for the mediator variable, generally implies that
+
+$$t(e, m, x) \neq 0 \quad \text{for some } (e, m, x).$$
+
+Thus, we proceed as in Robins, Rotnitzky and Scharfstein (2000) and propose to recover inferences by assuming that the selection-bias function $t(e, m, x)$, which encodes the magnitude and direction of the unmeasured confounding for the mediator, is known. In the following, the support $S$ of $M$ is assumed to be finite. To motivate the proposed approach, suppose for the moment that $f_{M|E,X}(M|E, X)$ is known; then, under the assumption that the exposure is ignorable given $X$, we show in the Appendix that
+
+$$
+\begin{align*}
+& \mathbb{E}[Y_{1,m} | M_0 = m, X = x] \\
+&= \mathbb{E}[Y_{1,m} | E = 0, M = m, X = x] \\
+&= \mathbb{E}[Y | E = 1, M = m, X = x] - t(1, m, x)(1 - f_{M|E,X}(m|E = 1, X = x)) \\
+&\quad + t(0, m, x)(1 - f_{M|E,X}(m|E = 0, X = x)),
+\end{align*}
+$$
+
+and therefore the M-functional is identified by
+
+$$
+\begin{equation}
+\begin{aligned}
+& \sum_{m \in S} \mathbb{E}\left\{ \mathbb{E}[Y | E = 1, M = m, X] - t(1, m, X)(1 - f_{M|E,X}(m|E = 1, X)) \right. \\
+& \qquad \left. + t(0, m, X)(1 - f_{M|E,X}(m|E = 0, X)) \right\} \\
+& \quad \times f_{M|E,X}(m|E = 0, X),
+\end{aligned}
+\tag{5}
+\end{equation}
+$$
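+For a finite mediator support, the identification formula (5) is just a weighted sum over $m$, averaged over the covariates. The following is a minimal numerical sketch (not the authors' code); `E_y`, `f_m` and `t` are hypothetical callables standing in for the fitted outcome regression, the known mediator density, and the user-supplied selection-bias function:

```python
import numpy as np

def m_functional_sensitivity(X, support, E_y, f_m, t):
    """Sketch of identification formula (5) with a known selection-bias
    function t(e, m, x).  Hypothetical inputs (names are ours):
      E_y(m, x)    -> E[Y | E=1, M=m, X=x]
      f_m(m, e, x) -> f_{M|E,X}(m | E=e, X=x)
      t(e, m, x)   -> selection-bias function
    X: (n, p) array of covariate rows; support: finite support of M."""
    n = X.shape[0]
    total = np.zeros(n)
    for m in support:  # finite sum over the support of M
        adj = np.array([E_y(m, x)
                        - t(1, m, x) * (1.0 - f_m(m, 1, x))
                        + t(0, m, x) * (1.0 - f_m(m, 0, x)) for x in X])
        total += adj * np.array([f_m(m, 0, x) for x in X])
    return total.mean()  # outer expectation over X replaced by a sample mean
```

+In practice `E_y` and `f_m` would be replaced by fitted parametric models, as in the doubly robust construction described next.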
\ No newline at end of file
diff --git a/samples/texts/2704143/page_15.md b/samples/texts/2704143/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..2c214a586ae9d5a36703e9dd3ce1dd9682ea4b9e
--- /dev/null
+++ b/samples/texts/2704143/page_15.md
@@ -0,0 +1,33 @@
+which is equivalently represented as
+
+$$
+(6) \quad \mathbb{E} \left[ \frac{I\{E=1\} f_{M|E,X}(M|E=0, X)}{f_{E|X}(1|X) f_{M|E,X}(M|E=1, X)} \right. \\
+\qquad \times \left\{ \begin{aligned}[t]
+& Y - t(1, M, X)(1 - f_{M|E,X}(M|E=1, X)) \\
+& + t(0, M, X)(1 - f_{M|E,X}(M|E=0, X))
+\end{aligned} \right\} \right] .
+$$
+
+Below, these two equivalent representations, (5) and (6), are carefully combined to obtain a double robust estimator of the M-functional, assuming $t(\cdot, \cdot, \cdot)$ is known. A sensitivity analysis is then obtained by repeating this process and reporting inferences for each choice of $t(\cdot, \cdot, \cdot)$ in a finite set of user-specified functions $\mathcal{T} = \{t_\lambda(\cdot, \cdot, \cdot) : \lambda\}$ indexed by a finite-dimensional parameter $\lambda$, with $t_0(\cdot, \cdot, \cdot) \in \mathcal{T}$ corresponding to the assumption of no unmeasured confounding, that is, $t_0(\cdot, \cdot, \cdot) \equiv 0$. Throughout, the model $f_{M|E,X}^{\text{par}}(\cdot|E, X; \beta_m)$ for the probability mass function of $M$ is assumed to be correct. Thus, to implement the sensitivity analysis, we develop a semiparametric estimator of the natural direct effect in the union model $\mathcal{M}_a \cup \mathcal{M}_c$, assuming $t(\cdot, \cdot, \cdot) = t_{\lambda^*}(\cdot, \cdot, \cdot)$ for a fixed $\lambda^*$. The proposed doubly robust estimator of the natural direct effect is then given by $\hat{\theta}_0^{\text{doubly}}(\lambda^*) - \hat{\delta}_0^{\text{doubly}}$, where $\hat{\delta}_0^{\text{doubly}}$ is as previously described, and
+
+$$
+\hat{\theta}_0^{\text{doubly}}(\lambda^*) = \mathbb{P}_n \left[ \frac{I\{E=1\} \hat{f}_{M|E,X}^{\text{par}}(M|E=0, X)}{\hat{f}_{E|X}^{\text{par}}(1|X) \hat{f}_{M|E,X}^{\text{par}}(M|E=1, X)} \right. \\
+\qquad \left. \times \{Y - \hat{\mathbb{E}}^{\text{par}}(Y|X, M, E=1)\} + \tilde{\eta}^{\text{par}}(1, 0, X; \lambda^*) \right],
+$$
+
+with
+
+$$
+\begin{align*}
+& \tilde{\eta}^{\text{par}}(1, 0, X; \lambda^*) \\
+&= \sum_{m \in S} \Biggl\{
+ \begin{aligned}[t]
+ & \hat{\mathbb{E}}^{\text{par}}(Y|X, M=m, E=1) + t_{\lambda^*}(0, m, X)(1 - \hat{f}_{M|E,X}^{\text{par}}(m|E=0, X)) \\
+ & \phantom{{}=} - t_{\lambda^*}(1, m, X)(1 - \hat{f}_{M|E,X}^{\text{par}}(m|E=1, X))
+ \end{aligned}
+\Biggr\} \\
+&\quad \times \hat{f}_{M|E,X}^{\text{par}}(m|E=0, X).
+\end{align*}
+$$
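+The two displays above translate directly into a plug-in computation. The following is a minimal numerical sketch of $\hat{\theta}_0^{\text{doubly}}(\lambda^*)$ (not the authors' code); `e_hat`, `f_m_hat`, `mu_hat` and `t_lam` are hypothetical callables standing in for the fitted exposure, mediator and outcome models and the chosen selection-bias function:

```python
import numpy as np

def theta_doubly(Y, E, M, X, support, e_hat, f_m_hat, mu_hat, t_lam):
    """Sketch of the doubly robust estimator theta_0^doubly(lambda*).
    Hypothetical fitted-model callables (names are ours, not the paper's):
      e_hat(x)         -> fitted f_{E|X}(1|x)
      f_m_hat(m, e, x) -> fitted f_{M|E,X}(m|E=e, x)
      mu_hat(m, x)     -> fitted E(Y | X=x, M=m, E=1)
      t_lam(e, m, x)   -> selection-bias function at lambda*"""
    n = len(Y)
    # eta~^par(1, 0, X; lambda*): finite sum over the support of M
    eta = np.zeros(n)
    for i in range(n):
        for m in support:
            adj = (mu_hat(m, X[i])
                   + t_lam(0, m, X[i]) * (1 - f_m_hat(m, 0, X[i]))
                   - t_lam(1, m, X[i]) * (1 - f_m_hat(m, 1, X[i])))
            eta[i] += adj * f_m_hat(m, 0, X[i])
    # inverse-probability-weighted outcome-model residual
    w = np.array([f_m_hat(M[i], 0, X[i]) / (e_hat(X[i]) * f_m_hat(M[i], 1, X[i]))
                  for i in range(n)])
    resid = np.array([Y[i] - mu_hat(M[i], X[i]) for i in range(n)])
    return float(np.mean((E == 1) * w * resid + eta))
```

+Repeating this computation over a grid of $\lambda$ values and collecting the resulting estimates is what produces the sensitivity-analysis set described below.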
+
+Our sensitivity analysis then entails reporting the set $\{\hat{\theta}_0^{\text{doubly}}(\lambda) - \hat{\delta}_0^{\text{doubly}} : \lambda\}$ (and the associated confidence intervals), which summarizes how sensitive inferences are to a deviation from the ignorability assumption $\lambda = 0$. A theoretical justification for the approach is given by the following formal result, which is proved in the supplemental Appendix.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_16.md b/samples/texts/2704143/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..34cd29f6ff8c5c41fb5c7adc7b421cecaaa4a3df
--- /dev/null
+++ b/samples/texts/2704143/page_16.md
@@ -0,0 +1,9 @@
+**THEOREM 4.** Suppose $t(\cdot, \cdot, \cdot) = t_{\lambda^*}(\cdot, \cdot, \cdot)$; then, under the consistency and positivity assumptions and the ignorability assumption for the exposure, $\hat{\theta}_0^{\text{doubly}}(\lambda^*) - \hat{\delta}_0^{\text{doubly}}$ is a CAN estimator of the natural direct effect in $M_a \cup M_c$.
+
+The influence function of $\hat{\theta}_0^{\text{doubly}}(\lambda^*)$ is provided in the Appendix, and can be used to construct a corresponding confidence interval.
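+An influence-function-based interval of the kind alluded to here is just a Wald interval whose standard error is estimated from the empirical influence-function values. A minimal sketch, assuming `if_values` holds the evaluated influence function for each of the $n$ observations (a hypothetical input, not the paper's notation):

```python
import numpy as np

def wald_ci(theta_hat, if_values, z=1.96):
    """Wald-type confidence interval from estimated influence-function
    values: since the estimator is asymptotically linear, its standard
    error is approximately sd(if_values) / sqrt(n)."""
    n = len(if_values)
    se = np.std(if_values, ddof=1) / np.sqrt(n)
    return theta_hat - z * se, theta_hat + z * se
```

+The same recipe applies to each $\lambda$ in the sensitivity analysis, yielding one interval per posited selection-bias function.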
+
+It is important to note that the sensitivity analysis technique presented here differs in crucial ways from previous techniques developed by Hafeman (2008), VanderWeele (2010) and Imai, Keele and Yamamoto (2010). First, the methodology of VanderWeele (2010) postulates the existence of an unmeasured confounder $U$ (possibly vector valued) which, when included in $X$, recovers the sequential ignorability assumption. The sensitivity analysis then requires specification of a sensitivity parameter encoding the effect of the unmeasured confounder on the outcome within levels of $(E, X, M)$, and another parameter for the effect of the exposure on the density of the unmeasured confounder given $(X, M)$. This is a daunting task which renders the approach generally impractical, except perhaps in the simple setting where it is reasonable to postulate that a single binary confounder is unobserved, and one is willing to make further simplifying assumptions about the required sensitivity parameters [VanderWeele (2010)]. In comparison, the proposed approach circumvents this difficulty by concisely encoding a violation of the ignorability assumption for the mediator through the selection-bias function $t_\lambda(e, m, x)$. Thus the approach makes no reference to, and is therefore agnostic about, the existence, dimension and nature of unmeasured confounders $U$. Furthermore, in our proposal, the ignorability violation can arise due to an unmeasured confounder of the mediator-outcome relationship that is also an effect of the exposure variable, a setting not handled by the technique of VanderWeele (2010). The method of Hafeman (2008), which is restricted to binary data, shares some of the limitations given above. Finally, in contrast with our proposed double robust approach, a coherent implementation of the sensitivity analysis techniques of Imai, Keele and Yamamoto (2010), Imai, Keele and Tingley (2010) and VanderWeele (2010) relies on correct specification of all posited models. We refer the reader to VanderWeele (2010) for further discussion of Hafeman (2008) and Imai, Keele and Yamamoto (2010).
+
+**9. Discussion.** The main contribution of the current paper is a theoretically rigorous yet practically relevant semiparametric framework for making inferences about natural direct and indirect causal effects in the presence of a large number of confounding factors. Semiparametric efficiency bounds are given for the nonparametric model, and multiply robust locally efficient estimators are developed that can be used when nonparametric estimation is not possible.
+
+Although the paper focuses on a binary exposure, we note that the extension to a polytomous exposure is trivial. In future work, we shall extend our results
\ No newline at end of file
diff --git a/samples/texts/2704143/page_17.md b/samples/texts/2704143/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..aeee13741462488995f1850207946d1bc8472dbb
--- /dev/null
+++ b/samples/texts/2704143/page_17.md
@@ -0,0 +1,17 @@
+for marginal effects by considering conditional natural direct and indirect effects, given a subset of pre-exposure variables [Tchetgen Tchetgen and Shpitser (2011)]. These models are particularly important in making inferences about so-called moderated mediation effects, a topic of growing interest, particularly in the field of psychology [Preacher, Rucker and Hayes (2007)]. In related work, we have recently extended our results to a survival analysis setting [Tchetgen Tchetgen (2011)].
+
+A major limitation of the current paper is that it assumes that the mediator is measured without error, an assumption that may be unrealistic in practice and, if incorrect, may result in biased inferences about mediated effects. We note that much of the recent literature on causal mediation analysis makes a similar assumption. In future work, it will be important to build on the results derived in the current paper to appropriately account for a mis-measured mediator [Tchetgen Tchetgen and Lin (2012)].
+
+## APPENDIX
+
+PROOF OF THEOREM 1. Let $F_{O;t} = F_{Y|M,X,E;t} F_{M|E,X;t} F_{E|X;t} F_{X;t}$ denote a one-dimensional regular parametric submodel of $M_{\text{nonpar}}$, with $F_{O;0} = F_O$, and let
+
+$$ \theta_t = \theta_0(F_{O;t}) = \iint_{S \times X} \mathbb{E}_t(Y|E=1, M=m, X=x) \\ \times f_{M|E,X;t}(m|E=0, X=x)f_{X;t}(x) d\mu(m,x). $$
+
+The efficient influence function $S_{\theta_0}^{\text{eff,nonpar}}(\theta_0)$ is the unique random variable to satisfy the following equation:
+
+$$ \nabla_{t=0}\, \theta_t = \mathbb{E}\{S_{\theta_0}^{\text{eff,nonpar}}(\theta_0)U\} $$
+
+for $U$ the score of $F_{O;t}$ at $t=0$, and $\nabla_{t=0}$ denoting differentiation with respect to $t$ at $t=0$. We observe that
+
+$$ \begin{align*} \left. \frac{\partial \theta_t}{\partial t} \right|_{t=0} &= \iint_{S \times X} \nabla_{t=0}\, \mathbb{E}_t (Y | E = 1, M = m, X = x) \\ & \quad \times f_{M|E,X}(m|E=0, X=x) f_X(x)\, d\mu(m,x) \\ &\quad + \iint_{S \times X} \mathbb{E}(Y | E = 1, M = m, X = x) \\ & \quad \times \nabla_{t=0}\, f_{M|E,X;t}(m|E=0, X=x) f_X(x)\, d\mu(m,x) \\ &\quad + \iint_{S \times X} \mathbb{E}(Y | E = 1, M = m, X = x) \\ & \quad \times f_{M|E,X}(m|E=0, X=x)\, \nabla_{t=0}\, f_{X;t}(x)\, d\mu(m,x). \end{align*} $$
\ No newline at end of file
diff --git a/samples/texts/2704143/page_18.md b/samples/texts/2704143/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..d5f2045ad382ea769570c59401c57a7ba57cdbb8
--- /dev/null
+++ b/samples/texts/2704143/page_18.md
@@ -0,0 +1,55 @@
+Considering the first term, it is straightforward to verify that
+
+$$
+\begin{align*}
+& \iint_{S \times X} \nabla_{t=0}\, \mathbb{E}_t (Y | E = 1, M = m, X = x)\, f_{M|E,X}(m|E=0, X=x) f_X(x)\, d\mu(m,x) \\
+&= \mathbb{E} \left[ U \frac{I(E=1)}{f_{E|X}(E|X)} \frac{f_{M|E,X}(M|E=0, X)}{f_{M|E,X}(M|E=1, X)} \left\{ Y - \mathbb{E}(Y|E=1, M, X) \right\} \right].
+\end{align*}
+$$
+
+Similarly, one can easily verify that
+
+$$
+\begin{align*}
+& \iint_{S \times X} \mathbb{E}(Y | E = 1, M = m, X = x)\, \nabla_{t=0}\, f_{M|E,X;t}(m | E = 0, X = x) f_X(x)\, d\mu(m, x) \\
+&= \mathbb{E} \left[ U \frac{I(E=0)}{f_{E|X}(E|X)} \left\{ \mathbb{E}(Y | E = 1, M, X) - \eta(1, 0, X) \right\} \right],
+\end{align*}
+$$
+
+and finally, one can also verify that
+
+$$
+\begin{align*}
+& \iint_{S \times X} \mathbb{E}(Y | E = 1, M = m, X = x) f_{M|E,X}(m | E = 0, X = x)\, \nabla_{t=0}\, f_{X;t}(x)\, d\mu(m,x) \\
+&= \mathbb{E}[U\{\eta(1, 0, X) - \theta_0\}].
+\end{align*}
+$$
+
+Thus we obtain
+
+$$
+\nabla_{t=0}\, \theta_t = \mathbb{E}\{S_{\theta_0}^{\text{eff,nonpar}}(\theta_0) U\}.
+$$
+
+Given $S_{\delta_e}^{\text{eff,nonpar}}(\delta_e)$, the results for the direct and indirect effect follow from the fact that the influence function of a difference of two functionals equals the difference of the respective influence functions. Because the model is nonparametric, there is a unique influence function for each functional, and it is efficient in the model, leading to the efficiency bound results. $\square$
+
+PROOF OF THEOREM 2. We begin by showing that
+
+$$
+\mathbb{E}\{S_{\theta_0}^{\text{eff,nonpar}}(\theta_0; \beta_m^*, \beta_e^*, \beta_y^*)\} = 0
+\tag{7}
+$$
+
+under model $\mathcal{M}_{\text{union}}$. First note that $(\beta_y^*, \beta_m^*) = (\beta_y, \beta_m)$ under model $\mathcal{M}_a$. Equality (7) now follows because $\mathbb{E}^{\text{par}}(Y|\tilde{X}, M, E=1; \beta_y) = \mathbb{E}(Y|\tilde{X}, M, E=1)$ and $\eta(1,0,X; \beta_y, \beta_m) = \mathbb{E}[\{\mathbb{E}^{\text{par}}(Y|\tilde{X}, M, E=1; \beta_y)\}|E=0, X] = \eta(1,0,X):$
+
+$$
+\begin{align*}
+& \mathbb{E}\{S_{\theta_0}^{\text{eff,nonpar}}(\theta_0; \beta_m, \beta_e^*, \beta_y)\} \\
+& = \mathbb{E}\Bigl[\frac{I\{E=1\} f_{M|E,X}^{\text{par}}(M|E=0, X; \beta_m)}{f_{E|X}^{\text{par}}(1|X; \beta_e^*) f_{M|E,X}^{\text{par}}(M|E=1, X; \beta_m)}
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/2704143/page_19.md b/samples/texts/2704143/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..d6d51f1bca74a9e85106b5f705d94c0c03871b0a
--- /dev/null
+++ b/samples/texts/2704143/page_19.md
@@ -0,0 +1,34 @@
+$$
+\begin{align*}
+& \times \underbrace{\mathbb{E}\{Y - \mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y)|E=1, M, X\}}_{=0} \\
+& + \mathbb{E}\left[\frac{I(E=0)}{f_{E|X}^{\text{par}}(1|X; \beta_e^*)}\right. \\
+& \qquad \left. \times \underbrace{\mathbb{E}[\{\mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y) - \eta(1, 0, X; \beta_y, \beta_m)\}|E=0, X]}_{=0} \right] \\
+& + \mathbb{E}[\eta(1, 0, X; \beta_y, \beta_m)] - \theta_0 \\
+& = 0.
+\end{align*}
+$$
+
+Second, $(\beta_y^*, \beta_e^*) = (\beta_y, \beta_e)$ under model $\mathcal{M}_b$. Equality (7) now follows because $\mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y) = \mathbb{E}(Y|X, M, E=1)$ and $f_{E|X}^{\text{par}}(1|X; \beta_e) = f_{E|X}(1|X)$:
+
+$$
+\begin{align*}
+& \mathbb{E}\{S_{\theta_0}^{\text{eff,nonpar}}(\theta_0; \beta_m^*, \beta_e, \beta_y)\} \\
+&= \mathbb{E}\Biggl[ \frac{I\{E=1\} f_{M|E,X}^{\text{par}}(M|E=0, X; \beta_m^*)}{f_{E|X}^{\text{par}}(1|X; \beta_e) f_{M|E,X}^{\text{par}}(M|E=1, X; \beta_m^*)} \\
+&\qquad \times \underbrace{\mathbb{E}\{Y - \mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y)|E=1, M, X\}}_{=0} \Biggr] \\
+&\quad + \mathbb{E}\left[ \frac{I(E=0)}{f_{E|X}^{\text{par}}(1|X; \beta_e)} \right. \\
+&\qquad \left. \times \mathbb{E}[\{\mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y) - \eta(1, 0, X; \beta_y, \beta_m^*)\}|E=0, X] \right] \\
+&\quad + \mathbb{E}[\eta(1, 0, X; \beta_y, \beta_m^*)] - \theta_0 \\
+&= \mathbb{E}[\mathbb{E}\{\mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y)|E=0, X\}] - \theta_0 = 0.
+\end{align*}
+$$
+
+Third, equality (7) holds under model $\mathcal{M}_c$ because
+
+$$
+\begin{align*}
+& \mathbb{E}\{S_{\theta_0}^{\text{eff,nonpar}}(\theta_0; \beta_m, \beta_e, \beta_y^*)\} \\
+&= \mathbb{E}\left[\frac{I\{E=1\} f_{M|E,X}^{\text{par}}(M|E=0, X; \beta_m)}{f_{E|X}^{\text{par}}(1|X; \beta_e) f_{M|E,X}^{\text{par}}(M|E=1, X; \beta_m)}\right. \\
+&\qquad \left. \times \mathbb{E}\{Y - \mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y^*)\}\right] \\
+&\quad + \mathbb{E}\left[\frac{I(E=0)}{f_{E|X}^{\text{par}}(1|X; \beta_e)}\right]
+\end{align*}
+$$
\ No newline at end of file
diff --git a/samples/texts/2704143/page_2.md b/samples/texts/2704143/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e8b7e3edc8fa402be206e34509771c28871f43e
--- /dev/null
+++ b/samples/texts/2704143/page_2.md
@@ -0,0 +1,33 @@
+*Strategy 3:* The last strategy is based on a third representation of the M-functional
+
+$$
+\iint_{S \times X} \mathbb{E}(Y|E=1, M=m, X=x)\, dF_{M|E,X}(m|E=0, X=x)\, dF_X(x) \\
+= \sum_{e=0}^{1} \iiint_{Y \times S \times X} y \frac{I(e=1)}{f_{E|X}(e|X=x)} \frac{f_{M|E,X}(m|E=0, X=x)}{f_{M|E,X}(m|E=e, X=x)}\, dF_{Y,M,E,X}(y, m, e, x) \\
+= \mathbb{E}\left\{Y \frac{I(E=1)}{f_{E|X}(E|X)} \frac{f_{M|E,X}(M|E=0, X)}{f_{M|E,X}(M|E, X)}\right\}.
+$$
+
+Thus, our third estimator takes the form
+
+$$
+\hat{\theta}_0^{\text{em}} = \mathbb{P}_n \left\{ Y \frac{I(E=1)}{\hat{f}_{E|X}(E|X)} \frac{\hat{f}_{M|E,X}(M|E=0, X)}{\hat{f}_{M|E,X}(M|E, X)} \right\}.
+$$
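+As a minimal numerical sketch (not the authors' code), $\hat{\theta}_0^{\text{em}}$ is simply a weighted empirical mean of the outcome; `e_hat` and `f_m_hat` are hypothetical callables standing in for the estimated exposure and mediator densities:

```python
import numpy as np

def theta_em(Y, E, M, X, e_hat, f_m_hat):
    """Sketch of the Strategy 3 (IPW) estimator theta_0^em.
    Hypothetical fitted-model callables (names are ours):
      e_hat(x)         -> fitted f_{E|X}(1|x)
      f_m_hat(m, e, x) -> fitted f_{M|E,X}(m|E=e, x)
    Note f_{E|X}(E|X) only matters on the event E=1, where it equals
    e_hat(x), because of the indicator I(E=1)."""
    w = np.array([(E[i] == 1)
                  * f_m_hat(M[i], 0, X[i])
                  / (e_hat(X[i]) * f_m_hat(M[i], 1, X[i]))
                  for i in range(len(Y))])
    return float(np.mean(w * Y))
```

+In a saturated or nonparametric fit this reproduces the same value as the other two plug-in representations, which is the equivalence discussed next.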
+
+At first glance the three estimators $\hat{\theta}_0^{\text{em}}$, $\hat{\theta}_0^{\text{ye}}$ and $\hat{\theta}_0^{\text{ym}}$ might appear to be distinct; however, provided the empirical distribution function $\hat{F}_O = \hat{F}_{Y|E,M,X} \times \hat{F}_{M|E,X} \times \hat{F}_{E|X} \times \hat{F}_X$ satisfies the positivity assumption, so that $\hat{F}_O \in \mathcal{M}_{\text{nonpar}}$, we in fact have $\hat{\theta}_0^{\text{em}} = \hat{\theta}_0^{\text{ye}} = \hat{\theta}_0^{\text{ym}} = \theta_0(\hat{F}_O)$, since the three representations agree on the nonparametric model $\mathcal{M}_{\text{nonpar}}$. Therefore these three estimators are asymptotically efficient in $\mathcal{M}_{\text{nonpar}}$, with common influence function $S_{\theta_0}^{\text{eff,nonpar}}(\theta_0)$. From this observation, one further concludes that (asymptotic) inferences obtained using any one of the three representations are identical to inferences using either of the other two.
+
+At this juncture, we note that the above equivalence no longer applies when, as we have previously argued will likely occur in practice, $(M, X)$ contains three or more continuous variables and/or $X$ is too high dimensional for models to be saturated or nonparametric, so that parametric (or semiparametric) models are specified for dimension reduction. Specifically, for such settings, we observe that three distinct modeling strategies are available. Under the first strategy, the estimator $\hat{\theta}_0^{\text{ym,par}}$ is obtained similarly to $\hat{\theta}_0^{\text{ym}}$ using parametric model estimates $\hat{\mathbb{E}}^{\text{par}}(Y|E, M, X)$ and $\hat{f}_{M|E,X}^{\text{par}}(m|E, X)$ instead of their nonparametric counterparts; under the second strategy, the estimator $\hat{\theta}_0^{\text{ye,par}}$ is obtained similarly to $\hat{\theta}_0^{\text{ye}}$ using estimates of the parametric models $\hat{\mathbb{E}}^{\text{par}}(Y|E=1, M=m, X)$ and $\hat{f}_{E|X}^{\text{par}}(e|X)$; and finally, under the third strategy, $\hat{\theta}_0^{\text{em,par}}$ is obtained similarly to $\hat{\theta}_0^{\text{em}}$ using $\hat{f}_{E|X}^{\text{par}}(e|X)$ and $\hat{f}_{M|E,X}^{\text{par}}(m|E, X)$. It then follows that $\hat{\theta}_0^{\text{ym,par}}$ is CAN under the submodel $\mathcal{M}_a$, but is generally inconsistent if either $\hat{\mathbb{E}}^{\text{par}}(Y|E, M, X)$ or $\hat{f}_{M|E,X}^{\text{par}}(m|E, X)$ fails to be consistent. Similarly, $\hat{\theta}_0^{\text{ye,par}}$ and $\hat{\theta}_0^{\text{em,par}}$ are, respectively, CAN under the submodels $\mathcal{M}_b$ and $\mathcal{M}_c$, but each estimator generally fails to be consistent outside of the corresponding submodel. In the next section, we propose an approach
\ No newline at end of file
diff --git a/samples/texts/2704143/page_20.md b/samples/texts/2704143/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..df10b8baf4a3cb19ec53f8bb269559ffbb954903
--- /dev/null
+++ b/samples/texts/2704143/page_20.md
@@ -0,0 +1,31 @@
+$$
+\begin{align*}
+& \times \mathbb{E}[\{\mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y^*) - \eta(1, 0, X; \beta_y^*, \beta_m)\}|E=0, X] \\
+& + \mathbb{E}[\eta(1, 0, X; \beta_y^*, \beta_m)] - \theta_0 \\
+&= \mathbb{E}[\mathbb{E}\{\mathbb{E}(Y|X, M, E=1)|E=0, X\}] \\
+& \quad - \mathbb{E}[\mathbb{E}\{\mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y^*)|E=0, X\}] \\
+& \quad + \mathbb{E}[\mathbb{E}\{\mathbb{E}^{\text{par}}(Y|X, M, E=1; \beta_y^*)|E=0, X\}] - \mathbb{E}[\eta(1, 0, X; \beta_y^*, \beta_m)] \\
+& \quad + \mathbb{E}[\eta(1, 0, X; \beta_y^*, \beta_m)] - \theta_0 \\
+&= \mathbb{E}[\mathbb{E}\{\mathbb{E}(Y|X, M, E=1)|E=0, X\}] - \theta_0 = 0.
+\end{align*}
+$$
+
+Assuming that the regularity conditions of Theorem 1A in Robins, Mark and Newey (1992) hold for $S_{\theta_0}^{\text{eff,nonpar}}(\theta_0; \beta_m, \beta_e, \beta_y)$, $S_\beta(\beta)$, the expression for $S_{\theta_0}^{\text{union}}(\theta_0, \beta^*)$ follows by standard Taylor expansion arguments, and it now follows that
+
+$$ (8) \qquad \sqrt{n}(\hat{\theta}_0^{\text{triply}} - \theta_0) = \frac{1}{n^{1/2}} \sum_{i=1}^{n} S_{\theta_0, i}^{\text{union}}(\theta_0, \beta^*) + o_p(1). $$
+
+The asymptotic distribution of $\sqrt{n}(\hat{\theta}_0^{\text{triply}} - \theta_0)$ under model $\mathcal{M}_{\text{union}}$ follows from the previous equation by Slutsky's Theorem and the Central Limit Theorem.
+
+We note that $\hat{\delta}_e^{\text{doubly}}$ is CAN in the union model $\mathcal{M}_{\text{union}}$ since it is CAN in the larger model where either the density for the exposure is correct, or the density of the mediator and the outcome regression are both correct and thus $\eta(e, e, X; \beta_y^*, \beta_m^*) = \mathbb{E}(Y|X, E=e)$. This gives the multiply robust result for direct and indirect effects. The asymptotic distribution of direct and indirect effect estimates then follows from similar arguments as above.
+
+At the intersection submodel
+
+$$ \frac{\partial \mathbb{E}\{S_{\theta_0}^{\text{eff,nonpar}}(\theta_0, \boldsymbol{\beta})\}}{\partial \boldsymbol{\beta}^T} = 0 $$
+
+hence
+
+$$ S_{\theta_0}^{\text{union}}(\theta_0, \boldsymbol{\beta}) = S_{\theta_0}^{\text{eff,nonpar}}(\theta_0, \boldsymbol{\beta}). $$
+
+The semiparametric efficiency claim then follows for $\hat{\theta}_0^{\text{triply}}$, and a similar argument gives the result for direct and indirect effects. $\square$
+
+PROOFS OF THEOREMS 3 AND 4. The proofs are given in the online Appendix. $\square$
\ No newline at end of file
diff --git a/samples/texts/2704143/page_21.md b/samples/texts/2704143/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b2c96643f5365552ef483ae69482cf59e2c24e8
--- /dev/null
+++ b/samples/texts/2704143/page_21.md
@@ -0,0 +1,35 @@
+**Acknowledgments.** The authors would like to acknowledge Andrea Rotnitzky who provided invaluable comments that improved the presentation of the results given in Section 7. The authors also thank James Robins and Tyler VanderWeele for useful comments that significantly improved the presentation of this article.
+
+## SUPPLEMENTARY MATERIAL
+
+**Supplemental Appendix to Semiparametric theory for causal mediation analysis** (DOI: 10.1214/12-AOS990SUPP; .pdf). The supplementary material gives the semiparametric efficiency theory for estimation of natural direct effects with a known model for the mediator density. The Appendix also gives the proof of Theorem 3 (stated in the Supplementary Appendix) and of Theorem 4.
+
+## REFERENCES
+
+AVIN, C., SHPITSER, I. and PEARL, J. (2005). Identifiability of path-specific effects. In *IJCAI-05, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, UK, July 30-August 5, 2005* 357–363.
+
+BANG, H. and ROBINS, J. M. (2005). Doubly robust estimation in missing data and causal inference models. *Biometrics* **61** 962–972. MR2216189
+
+BARON, R. M. and KENNY, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. *J. Pers. Soc. Psychol.* **51** 1173–1182.
+
+CAO, W., TSIATIS, A. A. and DAVIDIAN, M. (2009). Improving efficiency and robustness of the doubly robust estimator for a population mean with incomplete data. *Biometrika* **96** 723–734. MR2538768
+
+GOETGELUK, S., VANSTEELANDT, S. and GOETGHEBEUR, E. (2008). Estimation of controlled direct effects. *J. R. Stat. Soc. Ser. B Stat. Methodol.* **70** 1049–1066. MR2530329
+
+HAFEMAN, D. (2008). Opening the black box: A reassessment of mediation from a counterfactual perspective. PhD dissertation, Columbia Univ., New York.
+
+HAFEMAN, D. M. and VANDERWEELE, T. J. (2011). Alternative assumptions for the identification of direct and indirect effects. *Epidemiology* **22** 753–764.
+
+HAHN, J. (1998). On the role of the propensity score in efficient semiparametric estimation of average treatment effects. *Econometrica* **66** 315–331. MR1612242
+
+IMAI, K., KEELE, L. and TINGLEY, D. (2010). A general approach to causal mediation analysis. *Psychological Methods* **15** 309–334.
+
+IMAI, K., KEELE, L. and YAMAMOTO, T. (2010). Identification, inference and sensitivity analysis for causal mediation effects. *Statist. Sci.* **25** 51–71. MR2741814
+
+KANG, J. D. Y. and SCHAFER, J. L. (2007). Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. *Statist. Sci.* **22** 523–539. MR2420458
+
+PEARL, J. (2001). Direct and indirect effects. In *Proceedings of the 17th Annual Conference on Uncertainty in Artificial Intelligence (UAI-01)* 411–442. Morgan Kaufmann, San Francisco, CA.
+
+PEARL, J. (2011). The mediation formula: A guide to the assessment of causal pathways in nonlinear models. Technical report. Available at http://ftp.cs.ucla.edu/pub/statlle/r379.pdf.
+
+PREACHER, K. J., RUCKER, D. D. and HAYES, A. F. (2007). Assessing moderated mediation hypotheses: Strategies, methods, and prescriptions. *Multivariate Behavioral Research* **42** 185–227.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_22.md b/samples/texts/2704143/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..02e9916a4dd44b25e0f3239c926c6dd2a2911e6b
--- /dev/null
+++ b/samples/texts/2704143/page_22.md
@@ -0,0 +1,39 @@
+ROBINS, J. M. (2000). Robust estimation in sequentially ignorable missing data and causal inference models. In *Proceedings of the American Statistical Association Section on Bayesian Statistical Science* **1999** 6–10. Amer. Statist. Assoc., Alexandria, VA.
+
+ROBINS, J. M. (2003). Semantics of causal DAG models and the identification of direct and indirect effects. In *Highly Structured Stochastic Systems* (P. Green, N. Hjort and S. Richardson, eds.) 70–81. Oxford Univ. Press, Oxford.
+
+ROBINS, J. M. and GREENLAND, S. (1992). Identifiability and exchangeability for direct and indirect effects. *Epidemiology* **3** 143–155.
+
+ROBINS, J. M., MARK, S. D. and NEWEY, W. K. (1992). Estimating exposure effects by modelling the expectation of exposure conditional on confounders. *Biometrics* **48** 479–495. MR1173493
+
+ROBINS, J. M. and RICHARDSON, T. S. (2012). Alternative graphical causal models and the identification of direct effects. In *Causality and Psychopathology: Finding the Determinants of Disorders and Their Cures* (P. Shrout, ed.). Oxford Univ. Press. To appear.
+
+ROBINS, J. M. and ROTNITZKY, A. (2001). Comment on "Inference for semiparametric models: Some questions and an answer by P. J. Bickel and J. Kwon." *Statist. Sinica* **11** 920–936.
+
+ROBINS, J. M., ROTNITZKY, A. and ZHAO, L. P. (1994). Estimation of regression coefficients when some regressors are not always observed. *J. Amer. Statist. Assoc.* **89** 846–866. MR1294730
+
+ROBINS, J. M., ROTNITZKY, A. and SCHARFSTEIN, D. O. (2000). Sensitivity analysis for selection bias and unmeasured confounding in missing data and causal inference models. In *Statistical Models in Epidemiology, the Environment, and Clinical Trials* (Minneapolis, MN, 1997). IMA Vol. Math. Appl. **11** 1–94. Springer, New York. MR1731681
+
+ROBINS, J., SUED, M., LEI-GOMEZ, Q. and ROTNITZKY, A. (2007). Comment: Performance of double-robust estimators when "inverse probability" weights are highly variable. *Statist. Sci.* **22** 544–559. MR2420460
+
+SCHARFSTEIN, D. O., ROTNITZKY, A. and ROBINS, J. M. (1999). Adjusting for nonignorable drop-out using semiparametric nonresponse models. *J. Amer. Statist. Assoc.* **94** 1096–1146. MR1731478
+
+TAN, Z. (2010). Bounded, efficient, and doubly robust estimation with inverse weighting. *Biometrika* **97** 661–682.
+
+TCHETGEN TCHETGEN, E. J. (2011). On causal mediation analysis with a survival outcome. *Int. J. Biostat.* **7** Art. 33, 38. MR2843528
+
+TCHETGEN TCHETGEN, E. J. and LIN, S. H. (2012). Robust estimation of pure/natural direct effects with mediator measurement error. Technical report, Dept. Epidemiology, Harvard School of Public Health.
+
+TCHETGEN TCHETGEN, E. J. and SHPITSER, I. (2011). Semiparametric estimation of models for natural direct and indirect effects. Harvard Univ. Biostatistics Working Paper 129. Available at http://biostats.bepress.com/harvardbiostat/paper129.
+
+TCHETGEN TCHETGEN, E. J. and SHPITSER, I. (2012). Supplement to "Semiparametric theory for causal mediation analysis: Efficiency bounds, multiple robustness and sensitivity analysis." DOI:10.1214/12-AOS990SUPP.
+
+TCHETGEN TCHETGEN, E. J. and VANDERWEELE, T. J. (2012). On identification of natural direct effects when a confounder of the mediator is directly affected by exposure. Harvard Univ. Biostatistics Working Paper 148. Available at http://biostats.bepress.com/harvardbiostat/paper148.
+
+TSIATIS, A. A. (2006). *Semiparametric Theory and Missing Data*. Springer, New York. MR2233926
+
+VAN DER LAAN, M. and PETERSEN, M. (2005). Direct effect models. Working Paper 187. Univ. California Berkeley Division of Biostatistics Working Paper Series. Available at http://www.bepress.com/ucbbiostat/paper187.
+
+VAN DER LAAN, M. J. and ROBINS, J. M. (2003). *Unified Methods for Censored Longitudinal Data and Causality*. Springer, New York. MR1958123
+
+VANDERWEELE, T. J. (2009). Marginal structural models for the estimation of direct and indirect effects. *Epidemiology* **20** 18–26.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_23.md b/samples/texts/2704143/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..7934901a2f002f881603ce2d640e8dbfd5c4c01d
--- /dev/null
+++ b/samples/texts/2704143/page_23.md
@@ -0,0 +1,13 @@
+effects [Goetgeluk, Vansteelandt and Goetghebeur (2008)]. An important advantage of a double robust method is that it carefully combines both of the aforementioned dimension reduction strategies for confounding adjustment, to produce an estimator of the causal effect that remains consistent and asymptotically normal, provided at least one of the two strategies is correct, without necessarily knowing which strategy is indeed correct [van der Laan and Robins (2003)]. Unfortunately, similar methods for making semiparametric inferences about marginal natural direct and indirect effects are currently lacking. Thus, this paper develops a general semiparametric framework for obtaining inferences about marginal natural direct and indirect effects on the mean of an outcome, while appropriately accounting for a large number of confounding factors for the exposure and the mediator variables.
+
+Our semiparametric framework is particularly appealing, as it gives new insight on issues of efficiency and robustness in the context of mediation analysis. Specifically, in Section 2, we adopt the sequential ignorability assumption of Imai, Keele and Tingley (2010) under which, in conjunction with the standard consistency and positivity assumptions, we derive the efficient influence function and thus obtain the semiparametric efficiency bound for the natural direct and natural indirect marginal mean causal effects, in the nonparametric model $M_{nonpar}$ in which the observed data likelihood is left unrestricted. We further show that in order to conduct mediation inferences in $M_{nonpar}$, one must estimate at least a subset of the following quantities:
+
+(i) the conditional expectation of the outcome given the mediator, exposure and confounding factors;
+
+(ii) the density of the mediator given the exposure and the confounders;
+
+(iii) the density of the exposure given the confounders.
+
+Ideally, to minimize the possibility of modeling bias, one may wish to estimate each of these quantities nonparametrically; however, as previously argued, when, as we assume throughout, we wish to account for numerous confounders, such nonparametric estimates will likely perform poorly in finite samples. Thus, in Section 2.3 we develop an alternative multiply robust strategy. To do so, we propose to model (i), (ii) and (iii) parametrically (or semiparametrically), but rather than obtaining mediation inferences that rely on the correct specification of a specific subset of these models, we instead carefully combine these three models to produce estimators of the marginal mean direct and indirect effects that remain consistent and asymptotically normal (CAN) in a union model, where at least one but not necessarily all of the following conditions hold:
+
+(a) the parametric or semiparametric models for the conditional expectation of the outcome (i) and for the conditional density of the mediator (ii) are correctly specified;
\ No newline at end of file
diff --git a/samples/texts/2704143/page_24.md b/samples/texts/2704143/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..4e50f0c4390ddf4d775724076f99a23a3e5ccbe1
--- /dev/null
+++ b/samples/texts/2704143/page_24.md
@@ -0,0 +1,19 @@
+VANDERWEELE, T. J. (2010). Bias formulas for sensitivity analysis for direct and indirect effects.
+*Epidemiology* **21** 540–551.
+
+VANDERWEELE, T. J. and VANSTEELANDT, S. (2010). Odds ratios for mediation analysis for a
+dichotomous outcome. *Am. J. Epidemiol.* **172** 1339–1348.
+
+DEPARTMENTS OF EPIDEMIOLOGY AND BIOSTATISTICS
+HARVARD SCHOOL OF PUBLIC HEALTH
+677 HUNTINGTON AVENUE
+BOSTON, MASSACHUSETTS 02115
+USA
+E-MAIL: etchetge@hsph.harvard.edu
+
+DEPARTMENT OF EPIDEMIOLOGY
+HARVARD SCHOOL OF PUBLIC HEALTH
+677 HUNTINGTON AVENUE
+BOSTON, MASSACHUSETTS 02115
+USA
+E-MAIL: ishpitse@hsph.harvard.edu
\ No newline at end of file
diff --git a/samples/texts/2704143/page_25.md b/samples/texts/2704143/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..5cdeae6a295adbe281a0d82157e9ea32c96d2d51
--- /dev/null
+++ b/samples/texts/2704143/page_25.md
@@ -0,0 +1,7 @@
+(b) the parametric or semiparametric models for the conditional expectation of the outcome (i) and for the conditional density of the exposure (iii) are correctly specified;
+
+(c) the parametric or semiparametric models for the conditional densities of the mediator (ii) and of the exposure (iii) are correctly specified.
+
+Accordingly, we define submodels $M_a$, $M_b$ and $M_c$ of $M_{\text{nonpar}}$ corresponding to models (a), (b) and (c) respectively. Thus, the proposed approach is triply robust as it produces valid inferences about natural direct and indirect effects in the union model $M_{\text{union}} = M_a \cup M_b \cup M_c$. Furthermore, as we later show in Section 2.3, the proposed estimators are also locally semiparametric efficient in the sense that they achieve the respective efficiency bounds for estimating the natural direct and indirect effects in $M_{\text{union}}$, at the intersection submodel $M_a \cap M_b \cap M_c = M_a \cap M_c = M_a \cap M_b = M_b \cap M_c \subset M_{\text{union}} \subset M_{\text{nonpar}}$.
+
+Section 3 summarizes a simulation study illustrating the finite sample performance of the various estimators described in Section 2, and Section 4 gives a real data application of these methods. Section 5 describes a strategy to improve the stability of the proposed multiply robust estimator, which directly depends on inverse exposure and mediator density weights, when such weights are highly variable, and Section 6 demonstrates the favorable performance of two modified multiply robust estimators in the context of such highly variable weights. In Section 7, we compare the proposed methodology to the prevailing estimators in the literature. Based on this comparison, we conclude that the new approach should generally be preferred because an inference under the proposed method is guaranteed to remain valid under many more data generating laws than an inference based on each of the other existing approaches. In particular, as we argue below, the approach of van der Laan and Petersen (2005) is not entirely satisfactory because, despite producing a CAN estimator of the marginal direct effect under the union model $M_a \cup M_c$ (and therefore an estimator that is double robust), their estimator requires a correct model for the density of the mediator. Thus, unlike the direct effect estimator developed in this paper, the van der Laan estimator fails to be consistent under the submodel $M_b \subset M_{\text{union}}$. Nonetheless, the estimator of van der Laan is in fact locally efficient in model $M_a \cup M_c$, provided the model for the mediator's conditional density is either known or can be efficiently estimated. 
+This property is confirmed in a supplementary online Appendix [Tchetgen Tchetgen and Shpitser (2012)], where we also provide a general map that relates the efficient influence function for model $M_{\text{union}}$ to the corresponding efficient influence function for model $M_a \cup M_c$, assuming an arbitrary parametric or semiparametric model for the mediator conditional density is correctly specified. In Section 8, we describe a novel double robust sensitivity analysis framework to assess the impact on inferences about the natural direct effect of a departure from the ignorability assumption for the mediator variable. We conclude with a brief discussion.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_26.md b/samples/texts/2704143/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..c58ad140da539664d6dcfd553469b998a84c0e9e
--- /dev/null
+++ b/samples/texts/2704143/page_26.md
@@ -0,0 +1,27 @@
+**2. The nonparametric mediation functional.**
+
+2.1. *Identification*. Suppose i.i.d. data on $O = (Y, E, M, X)$ are collected for $n$ subjects. Recall that $Y$ is an outcome of interest, $E$ is a binary exposure variable, $M$ is a mediator variable with support $\mathcal{S}$, known to occur subsequent to $E$ and prior to $Y$, and $X$ is a vector of pre-exposure variables with support $\mathcal{X}$ that confound the association between $(E, M)$ and $Y$. The overarching goal of this paper is to provide some theory of inference about the fundamental functional of mediation analysis which Judea Pearl calls “the mediation causal formula” [Pearl (2011)] and which, expressed on the mean scale, is
+
+$$ (2) \qquad \theta_0 = \iint_{\mathcal{S} \times \mathcal{X}} \mathbb{E}(Y|E=1, M=m, X=x) \\ \qquad \times f_{M|E,X}(m|E=0, X=x)f_X(x) \, d\mu(m,x), $$
+
+where $f_{M|E,X}$ and $f_X$ are respectively the conditional density of the mediator $M$ given $(E, X)$ and the density of $X$, and $\mu$ is a dominating measure for the distribution of $(M, X)$. Hereafter, to keep with standard statistical parlance, we shall simply refer to $\theta_0$ as the “mediation functional” or “M-functional” since it is formally a functional on the nonparametric statistical model $\mathcal{M}_{\text{nonpar}} = \{\mathcal{F}_O(\cdot) : \mathcal{F}_O \text{ unrestricted}\}$ of all regular laws $\mathcal{F}_O$ of the observed data $O$ that satisfy the positivity assumption given below; that is, $\theta_0 = \theta_0(\mathcal{F}_O) : \mathcal{M}_{\text{nonpar}} \to \mathcal{R}$, with $\mathcal{R}$ the real line. The functional $\theta_0$ is of keen interest here because it arises in the estimation of natural direct and indirect effects as we describe next. To do so, we make the consistency assumption.
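+
+As a concrete illustration, the M-functional can be evaluated by direct summation when $M$ and $X$ are discrete. The probability tables in the following Python sketch are hypothetical choices of ours, serving only to make the integral in (2) explicit.
+
+```python
+# theta_0 = sum_{m,x} E(Y|E=1,M=m,X=x) f_{M|E,X}(m|E=0,X=x) f_X(x)
+f_X = {0: 0.6, 1: 0.4}                       # marginal law of X
+f_M = {                                      # f_{M|E,X}(m|e,x), keyed by (e, x)
+    (0, 0): {0: 0.7, 1: 0.3}, (0, 1): {0: 0.5, 1: 0.5},
+    (1, 0): {0: 0.4, 1: 0.6}, (1, 1): {0: 0.2, 1: 0.8},
+}
+EY = {(0, 0): 1.0, (1, 0): 2.0,              # E(Y|E=1,M=m,X=x), keyed by (m, x)
+      (0, 1): 1.5, (1, 1): 3.0}
+
+def m_functional():
+    # integrate the E=1 outcome regression against the E=0 mediator
+    # density, then against the marginal law of X
+    return sum(EY[(m, x)] * pm * px
+               for x, px in f_X.items()
+               for m, pm in f_M[(0, x)].items())
+
+print(round(m_functional(), 2))  # 1.68
+```
+
+Note that the outcome regression is evaluated at $E=1$ while the mediator density is evaluated at $E=0$; this mismatch is precisely what distinguishes $\theta_0$ from $\delta_1$.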
+
+*Consistency:*
+
+if $E = e$, then $M_e = M$ w.p.1 and
+
+if $E = e$ and $M = m$, then $Y_{e,m} = Y$ w.p.1.
+
+In addition, we adopt the sequential ignorability assumption of Imai, Keele and Tingley (2010), which states that for $e, e' \in \{0, 1\}$:
+
+*Sequential ignorability:*
+
+$$ \begin{gathered} \{Y_{e',m}, M_e\} \perp E | X, \\ Y_{e',m} \perp M | E = e, X, \end{gathered} $$
+
+where $A \perp B | C$ states that $A$ is independent of $B$ given $C$; paired with the following:
+
+*Positivity:*
+
+$f_{M|E,X}(m|E, X) > 0$ w.p.1 for each $m \in \mathcal{S}$ and
+
+$f_{E|X}(e|X) > 0$ w.p.1 for each $e \in \{0, 1\}$.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_27.md b/samples/texts/2704143/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..75ff326e81eb87510d8841401fd8c90498cdfa0f
--- /dev/null
+++ b/samples/texts/2704143/page_27.md
@@ -0,0 +1,27 @@
+Then, under the consistency, sequential ignorability and positivity assumptions,
+Imai, Keele and Tingley (2010) showed that
+
+$$
+\begin{align*}
+\theta_0 &= \mathbb{E}(Y_{1,M_0}) \quad \text{and} \\
+\delta_e &\equiv \int_X \mathbb{E}(Y|E=e, X=x) f_X(x) d\mu(x) \\
+(3) \qquad &= \iint_{S \times X} \mathbb{E}(Y|E=e, M=m, X=x) \\
+&\qquad \times f_{M|E,X}(m|E=e, X=x) f_X(x) d\mu(m,x) \\
+&= \mathbb{E}(Y_e) = \mathbb{E}(Y_{e,M_e}), \quad e=0,1,
+\end{align*}
+$$
+
+so that $\mathbb{E}(Y_{1,M_0})$ and $\mathbb{E}(Y_e)$, $e=0,1$, are identified from the observed data, and so
+are the mean natural direct effect $\mathbb{E}(Y_{1,M_0}) - \mathbb{E}(Y_0) = \theta_0 - \delta_0$ and the mean natural
+indirect effect $\mathbb{E}(Y_1) - \mathbb{E}(Y_{1,M_0}) = \delta_1 - \theta_0$. For binary $Y$, one might alternatively
+consider the natural direct effect on the risk ratio scale $\mathbb{E}(Y_{1,M_0})/\mathbb{E}(Y_0) = \theta_0/\delta_0$
+or on the odds ratio scale $\{\mathbb{E}(Y_{1,M_0})\mathbb{E}(1-Y_0)\}/\{\mathbb{E}(1-Y_{1,M_0})\mathbb{E}(Y_0)\} = \{\theta_0(1-\delta_0)\}/\{\delta_0(1-\theta_0)\}$ and similarly defined natural indirect effects on the risk ratio
+and odds ratio scales. It is instructive to contrast the expression (2) for $\mathbb{E}(Y_{1,M_0})$
+with the expression (3) for $e=1$ corresponding to $\mathbb{E}(Y_1)$, and to note that the two
+expressions bear a striking resemblance, except that the density of the mediator in the
+first expression conditions on the unexposed (with $E=0$), whereas in the
+second expression, the mediator density is conditional on the exposed (with $E=1$).
+As we demonstrate below, this subtle difference has remarkable implications for
+inference.
+
+Pearl (2001) was the first to derive the M-functional $\theta_0 = \mathbb{E}(Y_{1,M_0})$ under a different set of assumptions. Others have since contributed alternative sets of identifying assumptions. In this paper, we have chosen to work under the sequential ignorability assumption of Imai, Keele and Yamamoto (2010), Imai, Keele and Tingley (2010), but note that alternative related assumptions exist in the literature [Robins and Greenland (1992), Pearl (2001), van der Laan and Petersen (2005), Hafeman and VanderWeele (2011)]; however, we note that Robins and Richardson (2012) disagree with the label “sequential ignorability” because its terminology has previously carried a different interpretation in the literature. Nonetheless, the assumption entails two ignorability-like assumptions that are made sequentially. First, given the observed pre-exposure confounders, the exposure assignment is assumed to be ignorable, that is, statistically independent of potential outcomes and potential mediators. The second part of the assumption states that the mediator is ignorable given the observed exposure and pre-exposure confounders. Specifically, the second part of the sequential ignorability assumption is conditional on the observed value of the ignorable treatment and the observed pretreatment confounders. We note that the second part of the sequential ignorability assumption is
\ No newline at end of file
diff --git a/samples/texts/2704143/page_28.md b/samples/texts/2704143/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..674e5cd6f97d1ac9554a273547c8cc8b5ac8c3da
--- /dev/null
+++ b/samples/texts/2704143/page_28.md
@@ -0,0 +1,41 @@
+particularly strong and must be made with care. This is partly because it is always
+possible that there might be unobserved variables that confound the relationship
+between the outcome and the mediator variables, even upon conditioning on the
+observed exposure and covariates. Furthermore, the confounders X must all be
+pre-exposure variables; that is, they must precede E. In fact, Avin, Shpitser and
+Pearl (2005) proved that without additional assumptions, one cannot identify nat-
+ural direct and indirect effects if there are confounding variables that are affected
+by the exposure, even if such variables are observed by the investigator [also see
+Tchetgen Tchetgen and VanderWeele (2012)]. This implies that, similarly to the
+ignorability of the exposure in observational studies, ignorability of the mediator
+cannot be established with certainty, even after collecting as many pre-exposure
+confounders as possible. Furthermore, as Robins and Richardson (2012) point out,
+whereas the first part of the sequential ignorability assumption could, in princi-
+ple, be enforced in a randomized study, by randomizing E within levels of X;
+the second part of the sequential ignorability assumption cannot similarly be en-
+forced experimentally, even by randomization. And thus, for this latter assumption
+to hold, one must entirely rely on expert knowledge about the mechanism under
+study. For this reason, it will be crucial in practice to supplement mediation analy-
+ses with a sensitivity analysis that accurately quantifies the degree to which results
+are robust to a potential violation of the sequential ignorability assumption. Later
+in the paper, we develop a variety of sensitivity analysis techniques that allow the
+analyst to quantify the degree to which his or her mediation analysis results are
+robust to a potential violation of the sequential ignorability assumption.
+
+2.2. *Semiparametric efficiency bounds for $\mathcal{M}_{\text{nonpar}}$*. In this section, we derive the efficient influence function for the M-functional $\theta_0$ in $\mathcal{M}_{\text{nonpar}}$. This result is then combined with the efficient influence function for the functional $\delta_e$ [Robins, Rotnitzky and Zhao (1994), Hahn (1998)] to obtain the efficient influence function for the natural direct and indirect effects on the mean difference scale. Thus, in the following, we shall use the efficient influence function $S_{\delta_e}^{\text{eff,nonpar}}(\delta_e)$ of $\delta_e$, which is well known to be
+
+$$
+\frac{I(E=e)}{f_{E|X}(e|X)} \{Y - \eta(e, e, X)\} + \eta(e, e, X) - \delta_e,
+$$
+
+where for $e, e^* \in \{0, 1\}$, we define
+
+$$
+\eta(e, e^*, X) = \int_S \mathbb{E}(Y|X, M=m, E=e) f_{M|E,X}(m|E=e^*, X) d\mu(m),
+$$
+
+so that $\eta(e, e, X) = \mathbb{E}(Y | X, E = e)$, $e = 0, 1$.
+
+The following theorem is proved in the Appendix.
+
+**THEOREM 1.** *Under the consistency, sequential ignorability and positivity assumptions, the efficient influence function of the M-functional $\theta_0$ in model $\mathcal{M}_{\text{nonpar}}$*
\ No newline at end of file
diff --git a/samples/texts/2704143/page_29.md b/samples/texts/2704143/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..fe4b6ba217dc8de6f107879d30c9a22aecc6e3af
--- /dev/null
+++ b/samples/texts/2704143/page_29.md
@@ -0,0 +1,43 @@
+is given by
+
+$$
+\begin{align*}
+S_{\theta_0}^{\text{eff,nonpar}}(\theta_0) \\
+&= S_{\theta_0}^{\text{eff,nonpar}}(O; \theta_0) \\
+&= \frac{I\{E=1\}f_{M|E,X}(M|E=0, X)}{f_{E|X}(1|X)f_{M|E,X}(M|E=1, X)}\{Y - \mathbb{E}(Y|X, M, E=1)\} \\
+&\quad + \frac{I(E=0)}{f_{E|X}(0|X)}\{\mathbb{E}(Y|X, M, E=1) - \eta(1, 0, X)\} + \eta(1, 0, X) - \theta_0,
+\end{align*}
+$$
+
+and the efficient influence function of the natural direct and indirect effects on the
+mean difference scale in model $\mathcal{M}_{\text{nonpar}}$ are respectively given by
+
+$$
+\begin{align*}
+S_{\text{NDE}}^{\text{eff,nonpar}}(\theta_0, \delta_0) \\
+&= S_{\text{NDE}}^{\text{eff,nonpar}}(O; \theta_0, \delta_0) \\
+&= S_{\theta_0}^{\text{eff,nonpar}}(\theta_0) - S_{\delta_0}^{\text{eff,nonpar}}(\delta_0) \\
+&= \frac{I\{E=1\}f_{M|E,X}(M|E=0, X)}{f_{E|X}(1|X)f_{M|E,X}(M|E=1, X)}\{Y - \mathbb{E}(Y|X, M, E=1)\} \\
+&\quad + \frac{I(E=0)}{f_{E|X}(0|X)}\{\mathbb{E}(Y|X, M, E=1) - Y - \eta(1, 0, X) + \eta(0, 0, X)\} \\
+&\quad + \eta(1, 0, X) - \eta(0, 0, X) - \theta_0 + \delta_0,
+\end{align*}
+$$
+
+and
+
+$$
+\begin{align*}
+S_{\text{NIE}}^{\text{eff,nonpar}}(\delta_1, \theta_0) \\
+&= S_{\text{NIE}}^{\text{eff,nonpar}}(O; \delta_1, \theta_0) \\
+&= \frac{I(E=1)}{f_{E|X}(1|X)} \left\{ Y - \eta(1, 1, X) - \frac{f_{M|E,X}(M|E=0, X)}{f_{M|E,X}(M|E=1, X)} \{Y - \mathbb{E}(Y|X, M, E=1)\} \right\} \\
+&\quad - \frac{I(E=0)}{f_{E|X}(0|X)} \{\mathbb{E}(Y|X, M, E=1) - \eta(1, 0, X)\} \\
+&\quad + \eta(1, 1, X) - \eta(1, 0, X) + \theta_0 - \delta_1.
+\end{align*}
+$$
+
+Thus, the semiparametric efficiency bounds for estimating the natural direct and the natural indirect effects in $\mathcal{M}_{\text{nonpar}}$ are respectively given by $\mathbb{E}\{S_{\text{NDE}}^{\text{eff,nonpar}}(\theta_0, \delta_0)^2\}$ and $\mathbb{E}\{S_{\text{NIE}}^{\text{eff,nonpar}}(\delta_1, \theta_0)^2\}$.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_3.md b/samples/texts/2704143/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..11b33ebc554ea32c90486b765ce57e930cf9faa0
--- /dev/null
+++ b/samples/texts/2704143/page_3.md
@@ -0,0 +1,25 @@
+that produces a triply robust estimator by combining the above three strategies so that only one of models $\mathcal{M}_a, \mathcal{M}_b$ and $\mathcal{M}_c$ needs to be valid for consistency of the estimator.
+
+2.3. *Triply robust estimation*. The proposed triply robust estimator $\hat{\theta}_0^{\text{triply}}$ solves
+
+$$ \mathbb{P}_n \hat{S}_{\theta_0}^{\text{eff,nonpar}} (\hat{\theta}_0^{\text{triply}}) = 0, $$
+
+where $\hat{S}_{\theta_0}^{\text{eff,nonpar}}(\theta)$ is equal to $S_{\theta_0}^{\text{eff,nonpar}}(\theta)$ evaluated at $\{\hat{\mathbb{E}}^{\text{par}}(Y|E, M, X), \hat{f}_{M|E,X}^{\text{par}}(m|E, X), \hat{f}_{E|X}^{\text{par}}(e|X)\}$; that is,
+
+$$ (4) \qquad \begin{aligned} \hat{\theta}_0^{\text{triply}} = {}& \mathbb{P}_n \left[ \frac{I\{E=1\} \hat{f}_{M|E,X}^{\text{par}}(M|E=0, X)}{\hat{f}_{E|X}^{\text{par}}(1|X) \hat{f}_{M|E,X}^{\text{par}}(M|E=1, X)} \right. \\ & \qquad \left. \times \{Y - \hat{\mathbb{E}}^{\text{par}}(Y|X, M, E=1)\} \right. \\ & + \frac{I(E=0)}{\hat{f}_{E|X}^{\text{par}}(0|X)} \left\{ \begin{aligned}[t] & \hat{\mathbb{E}}^{\text{par}}(Y|X, M, E=1) \\ & - \hat{\eta}^{\text{par}}(1, 0, X) \end{aligned} \right\} + \hat{\eta}^{\text{par}}(1, 0, X) \Biggr], \end{aligned} $$
+
+This estimator is CAN in model $\mathcal{M}_{\text{union}} = \mathcal{M}_a \cup \mathcal{M}_b \cup \mathcal{M}_c$, where
+
+$$ \hat{\eta}^{\text{par}}(e, e^*, X) = \int_S \hat{\mathbb{E}}^{\text{par}}(Y|X, M=m, E=e) \hat{f}_{M|E,X}^{\text{par}}(m|E=e^*, X) d\mu(m). $$
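+
+As a rough illustration only, the estimator in the above display can be sketched for binary $E$ and $M$ as follows. The simulated law and the "fitted" working models (we simply plug in the true parametric forms, standing in for the maximum likelihood fits described next) are hypothetical choices of ours, not the authors' implementation.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+n = 4000
+X = rng.normal(size=n)
+E = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.5 * X))))
+M = rng.binomial(1, 1 / (1 + np.exp(-(-0.3 + 0.8 * E + 0.4 * X))))
+Y = 1 + 0.5 * X + E + 2 * M + rng.normal(size=n)
+
+sig = lambda z: 1 / (1 + np.exp(-z))
+pE = lambda x: sig(0.2 + 0.5 * x)                  # "fitted" f_{E|X}(1|x)
+pM = lambda e, x: sig(-0.3 + 0.8 * e + 0.4 * x)    # "fitted" f_{M|E,X}(1|e,x)
+f_E = lambda e, x: np.where(e == 1, pE(x), 1 - pE(x))
+f_M = lambda m, e, x: np.where(m == 1, pM(e, x), 1 - pM(e, x))
+EY = lambda x, m, e: 1 + 0.5 * x + e + 2 * m       # "fitted" E(Y|X,M,E)
+eta = lambda e, es, x: EY(x, 0, e) * f_M(0, es, x) + EY(x, 1, e) * f_M(1, es, x)
+
+# the three pieces of display (4): a density-ratio-weighted residual of the
+# outcome model, an inverse-probability-weighted residual of eta(1,0,X),
+# and the plug-in term eta(1,0,X)
+w = (E == 1) * f_M(M, 0, X) / (f_E(1, X) * f_M(M, 1, X))
+theta_triply = np.mean(
+    w * (Y - EY(X, M, 1))
+    + (E == 0) / f_E(0, X) * (EY(X, M, 1) - eta(1, 0, X))
+    + eta(1, 0, X)
+)
+```
+
+When all three working models are correct, as here, `theta_triply` tracks the plug-in target `np.mean(eta(1, 0, X))`; mis-specifying any one of the three models leaves it consistent, since the remaining two correct models guarantee that one of conditions (a)-(c) still holds.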
+
+In the next theorem, the estimator in the above display is combined with a doubly robust estimator $\hat{\delta}_e^{\text{doubly}}$ of $\delta_e$ [see van der Laan and Robins (2003) or Tsiatis (2006)], to obtain multiply robust estimators of natural direct and indirect effects, where
+
+$$ \hat{\delta}_e^{\text{doubly}} = \mathbb{P}_n \left[ \frac{I(E=e)}{\hat{f}_{E|X}^{\text{par}}(e|X)} \left\{ Y - \hat{\eta}^{\text{par}}(e, e, X) \right\} + \hat{\eta}^{\text{par}}(e, e, X) \right]. $$
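+
+A matching sketch of the doubly robust estimator $\hat{\delta}_e^{\text{doubly}}$, again with hypothetical simulated data and the true model forms standing in for fitted working models:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(7)
+n = 4000
+X = rng.normal(size=n)
+pE = 1 / (1 + np.exp(-0.5 * X))
+E = rng.binomial(1, pE)
+Y = 1 + X + 2 * E + rng.normal(size=n)
+
+def delta_hat_doubly(e):
+    f_e = pE if e == 1 else 1 - pE    # "fitted" f_{E|X}(e|X) (true form here)
+    eta_ee = 1 + X + 2 * e            # "fitted" eta(e,e,X) = E(Y|X,E=e)
+    # augmented IPW: weighted outcome-model residual plus plug-in term
+    return np.mean((E == e) / f_e * (Y - eta_ee) + eta_ee)
+
+print(delta_hat_doubly(1) - delta_hat_doubly(0))  # approximately 2 here
+```
+
+The estimator stays consistent if either the exposure model or the outcome regression is correctly specified, which is exactly the double robustness invoked in the text.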
+
+To state the result, we set $\hat{\mathbb{E}}^{\text{par}}(Y|X, M, E) = \mathbb{E}^{\text{par}}(Y|X, M, E; \hat{\beta}_y) = g^{-1}(\hat{\beta}_y^T h(X, M, E))$, where $g$ is a known link function, and $h$ is a user specified function of $(X, M, E)$ so that $\mathbb{E}^{\text{par}}(Y|X, M, E; \beta_y) = g^{-1}(\beta_y^T h(X, M, E))$ entails a working regression model for $\mathbb{E}(Y|X, M, E)$, and $\hat{\beta}_y$ solves the estimating equation
+
+$$ 0 = \mathbb{P}_n[S_y(\hat{\beta}_y)] = \mathbb{P}_n[h(X, M, E)(Y - g^{-1}(\hat{\beta}_y^T h(X, M, E)))]. $$
+
+Similarly, we set $\hat{f}_{M|E,X}^{\text{par}}(m|E, X) = f_{M|E,X}^{\text{par}}(m|E, X; \hat{\beta}_m)$ for $f_{M|E,X}^{\text{par}}(m|E, X; \beta_m)$, a parametric model for the density of $[M|E, X]$ with $\hat{\beta}_m$, solving
+
+$$ 0 = \mathbb{P}_n[S_m(\hat{\beta}_m)] = \mathbb{P}_n\left[\frac{\partial}{\partial\beta_m} \log f_{M|E,X}^{\text{par}}(M|E, X; \hat{\beta}_m)\right], $$
\ No newline at end of file
diff --git a/samples/texts/2704143/page_30.md b/samples/texts/2704143/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..f519a9239f39a835b74a5d705d66321e78b0ae65
--- /dev/null
+++ b/samples/texts/2704143/page_30.md
@@ -0,0 +1,36 @@
+Although not presented here, Theorem 1 is easily extended to obtain the ef-
+ficient influence functions and the respective semiparametric efficiency bounds
+for the direct and indirect effects on the risk ratio and the odds ratio scales by
+a straightforward application of the delta method. An important implication of
+the theorem is that all regular and asymptotically linear (RAL) estimators of $\theta_0$,
+$\delta_1 - \theta_0$ and $\theta_0 - \delta_0$ in model $M_{\text{nonpar}}$ share the common influence functions
+$S_{\theta_0}^{\text{eff,nonpar}}(\theta_0)$, $S_{\text{NDE}}^{\text{eff,nonpar}}(\theta_0, \delta_0)$ and $S_{\text{NIE}}^{\text{eff,nonpar}}(\delta_1, \theta_0)$, respectively. Specifically,
+any RAL estimator $\hat{\theta}_0$ of the M-functional $\theta_0$ in model $M_{\text{nonpar}}$, shares a com-
+mon asymptotic expansion,
+
+$$n^{1/2}(\hat{\theta}_0 - \theta_0) = n^{1/2} \mathbb{P}_n S_{\theta_0}^{\text{eff,nonpar}}(\theta_0) + o_P(1),$$
+
+where $\mathbb{P}_n[\cdot] = n^{-1} \sum_i [\cdot]_i$. To illustrate this property of nonparametric RAL estimators, and as motivation for multiply robust estimation when nonparametric methods are not appropriate, we provide a detailed study of three nonparametric strategies for estimating the M-functional in a simple yet instructive setting in which $X$ and $M$ are both discrete with finite support.
+
+*Strategy 1:* The first strategy entails obtaining the maximum likelihood estimator upon evaluating the M-functional under the empirical law of the observed data,
+
+$$\hat{\theta}_0^{\text{ym}} = \mathbb{P}_n \sum_{m \in S} \hat{\mathbb{E}}(Y|E=1, M=m, X) \hat{f}_{M|E,X}(m|E=0, X),$$
+
+where $\hat{f}_{Y|E,M,X}$ and $\hat{f}_{M|E,X}$ are the empirical probability mass functions, and
+$\hat{\mathbb{E}}(Y|E=e, M=m, X=x)$ is the expectation of $Y$ under $\hat{f}_{Y|E,M,X}$.
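+
+A minimal sketch of Strategy 1 for discrete data; the data-generating law below is our own hypothetical choice, used only to exercise the estimator.
+
+```python
+import numpy as np
+
+# theta_hat_ym: plug in empirical versions of E(Y|E=1,M=m,X) and
+# f_{M|E,X}(m|E=0,X), then average over the empirical law of X
+rng = np.random.default_rng(0)
+n = 5000
+X = rng.integers(0, 2, n)
+E = rng.binomial(1, 0.3 + 0.4 * X)
+M = rng.binomial(1, 0.2 + 0.3 * E + 0.2 * X)
+Y = 1 + E + 2 * M + 0.5 * X + rng.normal(size=n)
+
+def theta_hat_ym():
+    total = 0.0
+    for x in np.unique(X):
+        inner = 0.0
+        for m in np.unique(M):
+            y_bar = Y[(E == 1) & (M == m) & (X == x)].mean()   # Ehat(Y|1,m,x)
+            f_m = (np.sum((E == 0) & (M == m) & (X == x))
+                   / np.sum((E == 0) & (X == x)))              # fhat(m|0,x)
+            inner += y_bar * f_m
+        total += inner * np.mean(X == x)                       # P_n over X
+    return total
+```
+
+For this hypothetical law the target value works out to $\theta_0 = 2.85$, and the plug-in estimator lands close to it at $n = 5000$.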
+
+*Strategy 2:* The second strategy is based on the following alternative representation of the M-functional:
+
+$$
+\begin{align*}
+& \iint_{S \times X} \mathbb{E}(Y|E=1, M=m, X=x) dF_{M|E}(m|E=0, X=x) dF_X(x) \\
+&= \sum_{e=0}^{1} \iint_{S \times X} \mathbb{E}(Y|E=1, M=m, X=x) \frac{I(e=0)}{f_{E|X}(e|X=x)} dF_{M,E,X}(m,e,x) \\
+&= \mathbb{E}\left\{ \frac{I(E=0)}{f_{E|X}(0|X)} \mathbb{E}(Y|E=1, M,X) \right\}.
+\end{align*}
+$$
+
+Thus, our second estimator takes the form
+
+$$\hat{\theta}_0^{\text{ye}} = \mathbb{P}_n \left\{ \frac{I(E=0)}{\hat{f}_{E|X}(0|X)} \hat{\mathbb{E}}(Y|E=1, M, X) \right\},$$
+
+with $\hat{f}_{E|X}$ the empirical estimate of the probability mass function $f_{E|X}$.
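+
+Strategy 2 can be sketched analogously, using the same kind of hypothetical discrete law:
+
+```python
+import numpy as np
+
+# theta_hat_ye = P_n{ I(E=0)/fhat_{E|X}(0|X) * Ehat(Y|E=1,M,X) },
+# with empirical estimates; contributions are grouped by the (m, x) cell
+rng = np.random.default_rng(0)
+n = 5000
+X = rng.integers(0, 2, n)
+E = rng.binomial(1, 0.3 + 0.4 * X)
+M = rng.binomial(1, 0.2 + 0.3 * E + 0.2 * X)
+Y = 1 + E + 2 * M + 0.5 * X + rng.normal(size=n)
+
+def theta_hat_ye():
+    total = 0.0
+    for x in (0, 1):
+        f0 = np.sum((E == 0) & (X == x)) / np.sum(X == x)      # fhat_{E|X}(0|x)
+        for m in (0, 1):
+            y_bar = Y[(E == 1) & (M == m) & (X == x)].mean()   # Ehat(Y|1,m,x)
+            n_cell = np.sum((E == 0) & (M == m) & (X == x))    # E=0 cell count
+            total += n_cell * y_bar / f0
+    return total / n
+```
+
+Grouping the sum over the $E=0$ observations by the $(m, x)$ cell shows that, with fully empirical estimates, this estimator coincides with the Strategy 1 plug-in in the discrete setting, consistent with the common influence function shared by nonparametric RAL estimators noted above.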
\ No newline at end of file
diff --git a/samples/texts/2704143/page_4.md b/samples/texts/2704143/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..ede79915d9c52a8937ae69b7f7ebd618a68d54c7
--- /dev/null
+++ b/samples/texts/2704143/page_4.md
@@ -0,0 +1,23 @@
+and we set $\hat{f}_{E|X}^{\text{par}}(e|X) = f_{E|X}^{\text{par}}(e|X; \hat{\beta}_e)$ for $f_{E|X}^{\text{par}}(e|X; \beta_e)$, a parametric model for the density of $[E|X]$ with $\hat{\beta}_e$ solving
+
+$$0 = \mathbb{P}_n[S_e(\hat{\beta}_e)] = \mathbb{P}_n\left[\frac{\partial}{\partial\beta_e} \log f_{E|X}^{\text{par}}(E|X; \hat{\beta}_e)\right].$$
+
+**THEOREM 2.** Suppose that the assumptions of Theorem 1 and the regularity conditions stated in the Appendix hold, and that $\beta_m, \beta_e$ and $\beta_y$ are variation independent.
+
+(i) **Mediation functional:** Then $\hat{\theta}_0^{\text{triply}}$ is RAL under model $\mathcal{M}_{\text{union}}$ with influence function
+
+$$S_{\theta_0}^{\text{union}}(\theta_0, \beta^*) = S_{\theta_0}^{\text{eff,nonpar}}(\theta_0, \beta^*) - \left. \frac{\partial \mathbb{E}\{S_{\theta_0}^{\text{eff,nonpar}}(\theta_0, \beta)\}}{\partial \beta^T} \right|_{\beta^*} \mathbb{E}\left\{ \left. \frac{\partial S_\beta(\beta)}{\partial \beta^T} \right|_{\beta^*} \right\}^{-1} S_\beta(\beta^*),$$
+
+and thus $\sqrt{n}(\hat{\theta}_0^{\text{triply}} - \theta_0)$ converges in distribution to $N(0, \Sigma_{\theta_0})$, where
+
+$$\Sigma_{\theta_0}(\theta_0, \beta^*) = \mathbb{E}(S_{\theta_0}^{\text{union}}(\theta_0, \beta^*)^2),$$
+
+with $\beta^T = (\beta_m^T, \beta_e^T, \beta_y^T)$ and $S_\beta(\beta) = (S_m^T(\beta_m), S_e^T(\beta_e), S_y^T(\beta_y))^T$, and with $\beta^*$ denoting the probability limit of the estimator $\hat{\beta} = (\hat{\beta}_m^T, \hat{\beta}_e^T, \hat{\beta}_y^T)^T$.
+
+(ii) **Natural direct effect:** Similarly, $\hat{\theta}_0^{\text{triply}} - \hat{\delta}_0^{\text{doubly}}$ is RAL for $\theta_0 - \delta_0$ under model $\mathcal{M}_{\text{union}}$, with influence function $S_{\text{NDE}}^{\text{union}}(\theta_0, \delta_0, \beta^*)$ defined as $S_{\theta_0}^{\text{union}}(\theta_0, \beta^*)$ with $S_{\text{NDE}}^{\text{eff,nonpar}}(\theta_0, \delta_0, \beta^*)$ replacing $S_{\theta_0}^{\text{eff,nonpar}}(\theta_0, \beta^*)$, and asymptotic variance $\Sigma_{\theta_0-\delta_0}(\theta_0, \delta_0, \beta^*)$ defined accordingly.
+
+(iii) **Natural indirect effect:** Similarly, $\hat{\delta}_1^{\text{doubly}} - \hat{\theta}_0^{\text{triply}}$ is RAL for $\delta_1 - \theta_0$ under model $\mathcal{M}_{\text{union}}$, with influence function $S_{\text{NIE}}^{\text{union}}(\delta_1, \theta_0, \beta^*)$ defined as $S_{\theta_0}^{\text{union}}(\theta_0, \beta^*)$ with $S_{\text{NIE}}^{\text{eff,nonpar}}(\delta_1, \theta_0, \beta^*)$ replacing $S_{\theta_0}^{\text{eff,nonpar}}(\theta_0, \beta^*)$, and asymptotic variance $\Sigma_{\delta_1-\theta_0}(\delta_1, \theta_0, \beta^*)$ defined accordingly.
+
+(iv) $\hat{\theta}_0^{\text{triply}}$, $\hat{\theta}_0^{\text{triply}} - \hat{\delta}_0^{\text{doubly}}$ and $\hat{\delta}_1^{\text{doubly}} - \hat{\theta}_0^{\text{triply}}$ are semiparametric locally efficient in the sense that they are RAL under model $\mathcal{M}_{\text{union}}$ and respectively achieve the semiparametric efficiency bound for $\theta_0$, $\theta_0 - \delta_0$ and $\delta_1 - \theta_0$ under model $\mathcal{M}_{\text{union}}$ at the intersection submodel $\mathcal{M}_a \cap \mathcal{M}_b \cap \mathcal{M}_c$, with respective efficient influence functions $S_{\theta_0}^{\text{eff,nonpar}}(\theta_0, \beta^*)$, $S_{\text{NDE}}^{\text{eff,nonpar}}(\theta_0, \delta_0, \beta^*)$ and $S_{\text{NIE}}^{\text{eff,nonpar}}(\delta_1, \theta_0, \beta^*)$.
+
+Empirical versions of $\Sigma_{\theta_0-\delta_0}(\theta_0, \delta_0, \beta^*)$ and $\Sigma_{\delta_1-\theta_0}(\delta_1, \theta_0, \beta^*)$ are easily obtained, and the corresponding Wald-type confidence intervals can be used to make formal inferences about natural direct and indirect effects. It is also straightforward to extend the approach to the risk ratio and odds ratio scales for binary $Y$. By a theorem due to Robins and Rotnitzky (2001), part (iv) of the theorem implies that when all models are correct, $\hat{\theta}_0^{\text{triply}}, \hat{\theta}_0^{\text{triply}} - \hat{\delta}_0^{\text{doubly}}$ and
\ No newline at end of file
diff --git a/samples/texts/2704143/page_5.md b/samples/texts/2704143/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..052dbcdaf1a763a6abcb62d99cb2cf94f843cae1
--- /dev/null
+++ b/samples/texts/2704143/page_5.md
@@ -0,0 +1,20 @@
+$\hat{\delta}_1^{\text{doubly}} - \hat{\theta}_0^{\text{triply}}$ are semiparametric efficient in model $\mathcal{M}_{\text{nonpar}}$ at the intersection submodel $\mathcal{M}_a \cap \mathcal{M}_b \cap \mathcal{M}_c$.
+
+**3. A simulation study of estimators of the direct effect.** In this section, we report a simulation study which illustrates the finite sample performance of the various estimators described in previous sections. We generated 1000 samples of size $n = 600, 1000$ from the following model:
+
+$$
+\begin{align*}
+\text{(Model.X)} \quad & X_1 \sim \text{Bernoulli}(0.4); \quad [X_2|X_1] \sim \text{Bernoulli}(0.3 + 0.4X_1); \\
+& [X_3|X_1, X_2] \sim -0.024 - 0.4X_1 + 0.4X_2 + N(0, 1); \\
+\text{(Model.E)} \quad & [E|X_1, X_2, X_3] \sim \text{Bernoulli}([1 + \exp\{-0.4 + X_1 - X_2 \\
+& \qquad + 0.1X_3 - 1.5X_1X_3\}]^{-1}); \\
+\text{(Model.M)} \quad & [M|E, X_1, X_2, X_3] \sim \text{Bernoulli}([1 + \exp\{-0.5 - X_1 + 0.5X_2 \\
+& \qquad - 0.9X_3 + E - 1.5X_1X_3\}]^{-1}); \\
+\text{(Model.Y)} \quad & [Y|M, E, X_1, X_2, X_3] \sim 1 + 0.2X_1 + 0.3X_2 + 1.4X_3 \\
+& \qquad - 2.5E - 3.5M + 5EM + N(0, 1).
+\end{align*}
+$$
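+
+One draw from this data-generating mechanism can be coded directly; the seed and the vectorized form are our own choices, and $n = 600$ is the smaller of the two reported sample sizes.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2023)
+n = 600
+X1 = rng.binomial(1, 0.4, size=n)
+X2 = rng.binomial(1, 0.3 + 0.4 * X1)
+X3 = -0.024 - 0.4 * X1 + 0.4 * X2 + rng.normal(size=n)
+pE = (1 + np.exp(-0.4 + X1 - X2 + 0.1 * X3 - 1.5 * X1 * X3)) ** -1
+E = rng.binomial(1, pE)
+pM = (1 + np.exp(-0.5 - X1 + 0.5 * X2 - 0.9 * X3 + E - 1.5 * X1 * X3)) ** -1
+M = rng.binomial(1, pM)
+Y = (1 + 0.2 * X1 + 0.3 * X2 + 1.4 * X3
+     - 2.5 * E - 3.5 * M + 5 * E * M + rng.normal(size=n))
+```
+
+Repeating this draw 1000 times and applying each estimator to every sample reproduces the structure of the study summarized in Tables 1 and 2.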
+
+We then evaluated the performance of the following four estimators of the natural direct effect: $\hat{\theta}_0^{\text{em}} - \hat{\delta}_0^{\text{doubly}}$, $\hat{\theta}_0^{\text{ye}} - \hat{\delta}_0^{\text{doubly}}$, $\hat{\theta}_0^{\text{ym}} - \hat{\delta}_0^{\text{doubly}}$ and $\hat{\theta}_0^{\text{triply}} - \hat{\delta}_0^{\text{doubly}}$. Note that the doubly robust estimator $\hat{\delta}_0^{\text{doubly}}$ was used throughout to estimate $\delta_0 = \mathbb{E}(Y_0)$. To assess the impact of modeling error, we evaluated these estimators in four separate scenarios. In the first scenario, all models were correctly specified, whereas the remaining three scenarios respectively mis-specified only one of Model.E, Model.M and Model.Y. To mis-specify Model.E, we left out the $X_1X_3$ interaction when fitting the model; to mis-specify Model.M, we assumed an incorrect log-log link function. The incorrect model for $Y$ simply assumed no $EM$ interaction.
+
+Tables 1 and 2 summarize the simulation results, which largely agree with the theory developed in the previous sections. Mainly, all proposed estimators performed well at both moderate and large sample sizes in the absence of modeling error. Furthermore, under the partially mis-specified model in which Model.Y was incorrect, both estimators, $\hat{\theta}_0^{\text{ye}} - \hat{\delta}_0^{\text{doubly}}$ and $\hat{\theta}_0^{\text{ym}} - \hat{\delta}_0^{\text{doubly}}$, showed significant bias irrespective of sample size, while $\hat{\theta}_0^{\text{em}} - \hat{\delta}_0^{\text{doubly}}$ and $\hat{\theta}_0^{\text{triply}} - \hat{\delta}_0^{\text{doubly}}$ both performed well. Similarly, when Model.M was incorrect, the estimators $\hat{\theta}_0^{\text{em}} - \hat{\delta}_0^{\text{doubly}}$ and $\hat{\theta}_0^{\text{ym}} - \hat{\delta}_0^{\text{doubly}}$ resulted in large bias, when compared to the relatively small bias of $\hat{\theta}_0^{\text{ye}} - \hat{\delta}_0^{\text{doubly}}$ and $\hat{\theta}_0^{\text{triply}} - \hat{\delta}_0^{\text{doubly}}$. Finally, mis-specifying Model.E led to estimators $\hat{\theta}_0^{\text{ye}} - \hat{\delta}_0^{\text{doubly}}$ and $\hat{\theta}_0^{\text{em}} - \hat{\delta}_0^{\text{doubly}}$ that were significantly more biased than the estimators $\hat{\theta}_0^{\text{ym}} - \hat{\delta}_0^{\text{doubly}}$ and $\hat{\theta}_0^{\text{triply}} - \hat{\delta}_0^{\text{doubly}}$. Interestingly, the efficiency loss of the multiply robust estimator remained relatively small when compared to the consistent nonrobust estimator under the various scenarios, suggesting that, at least in this simulation study, the benefits of robustness appear to outweigh the loss of efficiency.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_6.md b/samples/texts/2704143/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..bb3ee72be62e10adb9bae6221361d7f4cc7f6eec
--- /dev/null
+++ b/samples/texts/2704143/page_6.md
@@ -0,0 +1,165 @@
+TABLE 1
+Simulation results *n* = 600
+
+| | | $M_{\text{ym}}$ | $M_{\text{ye}}$ | $M_{\text{em}}$ | $M_{\text{union}}$ |
+|---|---|---|---|---|---|
+| All correct | bias | 0.002 | 0.008 | 0.002 | 0.005 |
+| | MC s.e.* | 0.005 | 0.007 | 0.006 | 0.006 |
+| Y wrong | bias | -0.500 | -0.500 | 0.0001 | 0.004 |
+| | MC s.e. | 0.005 | 0.006 | 0.006 | 0.006 |
+| M wrong | bias | 0.038 | 0.008 | -0.054 | 0.003 |
+| | MC s.e. | 0.005 | 0.007 | 0.006 | 0.006 |
+| E wrong | bias | 0.003 | 0.027 | 0.059 | 0.004 |
+| | MC s.e. | 0.005 | 0.005 | 0.005 | 0.005 |
+
+$M_{\text{ym}}: \hat{\theta}_{0}^{\text{ym}} - \hat{\delta}_{0}^{\text{doubly}}; M_{\text{ye}}: \hat{\theta}_{0}^{\text{ye}} - \hat{\delta}_{0}^{\text{doubly}}; M_{\text{em}}: \hat{\theta}_{0}^{\text{em}} - \hat{\delta}_{0}^{\text{doubly}}; M_{\text{union}}: \hat{\theta}_{0}^{\text{triply}} - \hat{\delta}_{0}^{\text{doubly}}$
+
+*Monte Carlo standard error.
+
+**4. A data application.** In this section, we illustrate the methods in a real world application from the psychology literature on mediation. We re-analyze data from The Job Search Intervention Study (JOBS II) also analyzed by Imai, Keele and Tingley (2010). JOBS II is a randomized field experiment that investigates the efficacy of a job training intervention on unemployed workers. The program is designed not only to increase reemployment among the unemployed but also to enhance the mental health of the job seekers. In the study, 1801 unemployed workers received a pre-screening questionnaire and were then randomly assigned to treatment and control groups. The treatment group with *E* = 1 participated in job skills workshops in which participants learned job search skills and coping strategies for dealing with setbacks in the job search process. The control group with *E* = 0 received a booklet describing job search tips. An analysis considers
+
+TABLE 2
+Simulation results *n* = 1000
+
+| | | $M_{\text{ym}}$ | $M_{\text{ye}}$ | $M_{\text{em}}$ | $M_{\text{union}}$ |
+|---|---|---|---|---|---|
+| All correct | bias | 0.001 | 0.009 | 0.001 | 0.001 |
+| | MC s.e.* | 0.004 | 0.005 | 0.004 | 0.004 |
+| Y wrong | bias | -0.484 | -0.484 | 0.003 | 0.003 |
+| | MC s.e. | 0.004 | 0.004 | 0.004 | 0.004 |
+| M wrong | bias | 0.136 | -0.008 | 0.056 | 0.01 |
+| | MC s.e. | 0.004 | 0.05 | 0.004 | 0.01 |
+| E wrong | bias | 0.001 | -0.024 | -0.054 | 0.001 |
+| | MC s.e. | 0.004 | 0.004 | 0.004 | 0.004 |
+
+$M_{\text{ym}}: \hat{\theta}_{0}^{\text{ym}} - \hat{\delta}_{0}^{\text{doubly}}; M_{\text{ye}}: \hat{\theta}_{0}^{\text{ye}} - \hat{\delta}_{0}^{\text{doubly}}; M_{\text{em}}: \hat{\theta}_{0}^{\text{em}} - \hat{\delta}_{0}^{\text{doubly}}; M_{\text{union}}: \hat{\theta}_{0}^{\text{triply}} - \hat{\delta}_{0}^{\text{doubly}}.$
+
+*Monte Carlo standard error.
\ No newline at end of file
diff --git a/samples/texts/2704143/page_7.md b/samples/texts/2704143/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..d22ef575b9e07372b899e637f4ad7c5f9bc5c51a
--- /dev/null
+++ b/samples/texts/2704143/page_7.md
@@ -0,0 +1,14 @@
+TABLE 3
+Estimated causal effects of interest using the job search intervention study data
+
+| | | $M_{\text{ym}}$ | $M_{\text{ye}}$ | $M_{\text{em}}$ | $M_{\text{union}}$ |
+|---|---|---|---|---|---|
+| Direct effect | Estimate | -0.0310 | -0.0310 | 0.0280 | -0.0409 |
+| | s.e.* | 0.0124 | 0.0620 | 0.0465 | 0.0217 |
+| Indirect effect | Estimate | -0.0160 | -0.0160 | -0.0750 | -0.0070 |
+| | s.e.* | 0.0372 | 0.0620 | 0.0434 | 0.0217 |
+
+*Nonparametric bootstrap standard errors.
+
+a continuous outcome measure *Y* of depressive symptoms based on the Hopkins Symptom Checklist [Imai, Keele and Tingley (2010)]. In the JOBS II data, a continuous measure of job search self-efficacy represented the hypothesized mediating variable *M*. The data also included baseline covariates *X* measured before administering the treatment including: pretreatment level of depression, education, income, race, marital status, age, sex, previous occupation, and the level of economic hardship.
+
+Note that by randomization, the density of [*E*|*X*] was known by design not to depend on covariates, and therefore its estimation is not prone to modeling error. The continuous outcome and mediator variables were modeled using linear regression models with Gaussian error, with main effects for (*E*, *M*, *X*) included in the outcome regression and main effects for (*E*, *X*) included in the mediator regression. Table 3 summarizes results obtained using $\hat{\theta}_0^{\text{em}}$, $\hat{\theta}_0^{\text{ye}}$, $\hat{\theta}_0^{\text{ym}}$ and $\hat{\theta}_0^{\text{triply}}$ together with $\hat{\delta}_e^{\text{doubly}}$, $e = 0, 1$, to estimate the direct and indirect effects of the treatment.
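Under linear Gaussian working models for the outcome and mediator with no exposure-mediator interaction, the natural direct and indirect effects reduce to combinations of regression coefficients (the classical product method). A sketch on synthetic data (not the JOBS II data; all variable names and effect sizes here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 2))                 # baseline covariates
E = rng.binomial(1, 0.5, size=n)            # randomized exposure
# mediator model: M ~ E + X (true E -> M effect: 0.4)
M = 0.4 * E + X @ np.array([0.3, -0.2]) + rng.normal(size=n)
# outcome model: Y ~ E + M + X (true direct effect: -0.1, M -> Y effect: 0.5)
Y = -0.1 * E + 0.5 * M + X @ np.array([0.2, 0.1]) + rng.normal(size=n)

def ols(design, y):
    """OLS coefficients with an intercept prepended."""
    Z = np.column_stack([np.ones(len(y)), design])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef

a = ols(np.column_stack([E, X]), M)      # mediator regression: M ~ 1 + E + X
b = ols(np.column_stack([E, M, X]), Y)   # outcome regression:  Y ~ 1 + E + M + X
direct = b[1]             # natural direct effect = coefficient of E
indirect = a[1] * b[2]    # product method: (E -> M) x (M -> Y)
```

With these generating values, `direct` should be near -0.1 and `indirect` near 0.4 × 0.5 = 0.2; the robustness discussion in the text concerns exactly what happens when one of these two working models is wrong.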
+
+Point estimates of both natural direct and indirect effects closely agreed under models $M_{ym}$ and $M_{ye}$, and also agreed with the results of Imai, Keele and Tingley (2010). We should note that inferences under our choice of $M_{ym}$ are actually robust to the normality assumption and, as in Imai, Keele and Tingley (2010), only require that the mean structure of [*Y*|*E*, *M*, *X*] and [*M*|*E*, *X*] is correct. In contrast, inferences under model $M_{em}$ require a correct model for the mediator density. This distinction may partly explain the apparent disagreement in the estimated direct effect under $M_{em}$ when compared to the other methods, and also suggests that the Gaussian error model for *M* is not entirely appropriate. The multiply robust estimate of the natural direct effect is consistent with the estimates obtained under models $M_{ym}$ and $M_{ye}$, and is statistically significant, suggesting that the intervention may have beneficial direct effects on participants' mental health. The multiply robust approach suggests a much smaller indirect effect than all the other estimators, although none achieved statistical significance.
+
+**5. Improving the stability of $\hat{\theta}_0^{\text{triply}}$ when weights are highly variable.** The triply robust estimator $\hat{\theta}_0^{\text{triply}}$, which involves inverse probability weights for the exposure and mediator variables, clearly relies on the positivity assumption for good
\ No newline at end of file
diff --git a/samples/texts/2704143/page_8.md b/samples/texts/2704143/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..c34de1d15e37301103853af157ef61e0ea8f5369
--- /dev/null
+++ b/samples/texts/2704143/page_8.md
@@ -0,0 +1,15 @@
+finite sample performance. But as recently shown by Kang and Schafer (2007) in the context of missing outcome data, a practical violation of positivity in data analysis can severely compromise inferences based on such methodology; although their analysis did not directly concern the M-functional $\theta_0$. Thus, it is crucial to critically examine, as we do below in a simulation study, the extent to which the various estimators discussed in this paper are susceptible to a practical violation of the positivity assumption, and to consider possible approaches to improve the finite sample performance of these estimators in the context of highly variable empirical weights. Methodology to enhance the finite sample behavior of $\hat{\delta}_j^{\text{doubly}}$ is well studied in the literature and is not considered here; see, for example, Robins et al. (2007), Cao, Tsiatis and Davidian (2009) and Tan (2010). We first describe an approach to enhance the finite sample performance of $\hat{\theta}_0^{\text{triple}}$, particularly in the presence of highly variable empirical weights. To focus the exposition, we only consider the case of a continuous $Y$ and a binary $M$, but in principle, the approach could be generalized to a more general setting. The proposed enhancement involves two modifications.
+
+The first modification adapts to the mediation context an approach developed for the missing data context (and for the estimation of total effects) in Robins et al. (2007). The basic guiding principle of the approach is to carefully modify the estimation of the outcome and mediator models in order to ensure that the triply robust estimator given by equation (4) has the simple M-functional representation
+
+$$ \hat{\theta}_{0}^{\text {triple}, \dagger}=\mathbb{P}_{n}\{\hat{\eta}^{\text {par}, \dagger}(1, 0, X)\}, $$
+
+where $\hat{\eta}^{\text{par},\dagger}(1,0,X)$ is carefully estimated to ensure multiple robustness. The reason for favoring an estimator with the above representation is that it is expected to be more robust to practical positivity violation because it does not directly depend on inverse probability weights. However, as we show next, to ensure multiple robustness, estimation of $\eta^{\text{par}}$ involves inverse probability weights, and therefore, $\hat{\theta}_0^{\text{triple},\dagger}$ indirectly depends on such weights. Our strategy involves a second step to minimize the potential impact of this indirect dependence on weights.
+
+In the following, we assume, to simplify the exposition, that a simple linear model is used:
+
+$$ \mathbb{E}^{\text{par}}(Y|X, M, E = 1) = \mathbb{E}^{\text{par}}(Y|X, M, 1; \beta_y) = [1, X^T, M]\beta_y. $$
+
+Then, similar to Robins et al. (2007), one can verify that the above M-functional representation of a triply robust estimator is obtained by estimating $f_{M|E,X}^{\text{par}}(M|E=0, X)$ with $\hat{f}_{M|E,X}^{\text{par},\dagger}(M|E=0, X)$ obtained via weighted logistic regression in the unexposed-only, with weight $\hat{f}_{E|X}^{\text{par}}(0|X)^{-1}$; and by estimating $\mathbb{E}^{\text{par}}(Y|X, M, E=1)$ using weighted OLS of $Y$ on $(M, X)$ in the exposed-only, with weight
+
+$$ \hat{f}_{M|E,X}^{\text{par},\dagger}(M|E=0, X)\{\hat{f}_{E|X}^{\text{par}}(1|X)\hat{f}_{M|E,X}^{\text{par},\dagger}(M|E=1, X)\}^{-1}; $$
\ No newline at end of file
diff --git a/samples/texts/2704143/page_9.md b/samples/texts/2704143/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a205f8f500459229db57b5ceae24bf8ca81f688
--- /dev/null
+++ b/samples/texts/2704143/page_9.md
@@ -0,0 +1,25 @@
+provided that both working models include an intercept. The second enhancement, which minimizes the undue influence of variable weights on the M-functional estimator, entails using $\hat{f}_{E|X}^{\text{par},\dagger}$ in the previous step instead of $\hat{f}_{E|X}^{\text{par}}$, where
+
+$$ \operatorname{logit} \hat{f}_{E|X}^{\text{par},\dagger}(1|X) = \operatorname{logit} \hat{f}_{E|X}^{\text{par}}(1|X) + \hat{C}_1 $$
+
+with
+
+$$ \hat{C}_1 = -\log(1 - \mathbb{P}_n(E)) + \log\left(\mathbb{P}_n\left[E\hat{f}_{E|X}^{\text{par}}(0|X)/\hat{f}_{E|X}^{\text{par}}(1|X)\right]\right). $$
+
+This second modification ensures a certain boundedness property of inverse propensity-score weighting. Specifically, for any bounded function $R = r(Y, M)$ of $Y$ and $M$, consider for a moment the goal of estimating the counterfactual mean $E\{r(Y_1, M_1)\}$. It is well known that even though $R$ is bounded, the simple inverse-probability weighting estimator $\mathbb{P}_n\{ER\hat{f}_{E|X}^{\text{par}}(1|X)^{-1}\}$ could easily be unbounded, particularly if positivity is practically violated. In contrast, as we show next, the estimator $\mathbb{P}_n\{ER\hat{f}_{E|X}^{\text{par},\dagger}(1|X)^{-1}\}$ is generally bounded. To see why, note that
+
+$$
+\begin{align*}
+\mathbb{P}_n\{ER\hat{f}_{E|X}^{\text{par},\dagger}(1|X)^{-1}\} &= \mathbb{P}_n\{ER\hat{f}_{E|X}^{\text{par},\dagger}(0|X)\hat{f}_{E|X}^{\text{par},\dagger}(1|X)^{-1}\} + \mathbb{P}_n\{ER\} \\
+&= \mathbb{P}_n\left\{R \frac{E\hat{f}_{E|X}^{\text{par}}(0|X)\hat{f}_{E|X}^{\text{par}}(1|X)^{-1}}{\mathbb{P}_n[E\hat{f}_{E|X}^{\text{par}}(0|X)\hat{f}_{E|X}^{\text{par}}(1|X)^{-1}]} (1 - \mathbb{P}_n(E))\right\} \\
+&\quad + \mathbb{P}_n\{ER\}
+\end{align*}
+$$
+
+which is bounded since the second term is bounded, and the first term is a convex combination of bounded variables, and therefore is also bounded. Furthermore, $\mathbb{P}_n[E\hat{f}_{E|X}^{\text{par},\dagger}(0|X)\hat{f}_{E|X}^{\text{par},\dagger}(1|X)^{-1}]$ converges in probability to $(1 - E(E))$ provided that $\hat{f}_{E|X}^{\text{par}}$ converges to $f_{E|X}$, ensuring that the expression in the above display is consistent for $E\{r(Y_1, M_1)\}$. The nonparametric bootstrap is most convenient for inference using $\hat{f}_{E|X}^{\text{par},\dagger}$.
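The effect of the logit shift can be checked numerically: after the shift, the weighted exposure indicator averages to exactly $1 - \mathbb{P}_n(E)$, and weighting any bounded $R$ stays bounded even with highly variable propensities. A sketch (the true propensities stand in for fitted ones, and all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=n)
pi = 1 / (1 + np.exp(-2.5 * X))     # strongly varying propensity f(1|X)
E = rng.binomial(1, pi)

odds0 = (1 - pi) / pi               # f(0|X) / f(1|X)
# the shift constant C1 from the display above
C1 = -np.log(1 - E.mean()) + np.log(np.mean(E * odds0))
# shifted propensity: logit f_dagger(1|X) = logit f(1|X) + C1
pi_dag = 1 / (1 + np.exp(-(np.log(pi / (1 - pi)) + C1)))

# exact normalization produced by the shift: equals 1 - P_n(E)
norm = np.mean(E * (1 - pi_dag) / pi_dag)
ok_norm = np.isclose(norm, 1 - E.mean())

# weighting a bounded R = r(Y, M) now never exceeds max|R|
R = np.tanh(rng.normal(size=n))     # any bounded function, |R| <= 1
est = np.mean(E * R / pi_dag)
```

The `ok_norm` check holds by construction of $\hat{C}_1$, which is what makes the weighted average a convex combination and hence bounded.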
+
+In the next section, we study, in the context of highly variable weights, the behavior of our previous estimators of $\theta_0$, together with that of the enhanced estimators $\hat{\theta}_0^{\text{triple},\dagger,j} = \mathbb{P}_n\{\hat{\eta}^{\text{par},\dagger,j}(1, 0, X)\}$, $j = 1, 2$, where $\hat{\eta}^{\text{par},\dagger,1}$ is constructed as described above using $\hat{f}_{E|X}^{\text{par}}$, and $\hat{\eta}^{\text{par},\dagger,2}$ uses $\hat{f}_{E|X}^{\text{par},\dagger}$.
+
+**6. A simulation study where positivity is practically violated.** We adapted to the mediation setting the missing data simulation scenarios of Kang and Schafer (2007), which were specifically designed so that mis-specified working models are nonetheless nearly correct, yet yield highly variable inverse probability weights, resulting in a practical violation of positivity at the estimation stage. We generated 1000 samples of sizes $n = 200$ and $n = 1000$ from the following model:
+
+(Model.X) $Z = (Z_1, Z_2, Z_3, Z_4)$, $Z_i \stackrel{\text{i.i.d.}}{\sim} N(0, 1)$; $X_1 = \exp(Z_1/2)$;
\ No newline at end of file
diff --git a/samples/texts/2815108/page_2.md b/samples/texts/2815108/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac838e4b7cbbdaab9781e2ed422b9c8c8689764e
--- /dev/null
+++ b/samples/texts/2815108/page_2.md
@@ -0,0 +1,77 @@
+to satisfy the nonholonomic constraint if $\dot{q}(t) \in D_{q(t)}$ for all $t$.
+
+The system is required to move from an initial state $(q(0) = q_0, \dot{q}(0) = \dot{q}_0)$ to a final state $(q(T) = q_T, \dot{q}(T) = \dot{q}_T)$ during a time interval $[0, T]$ under the influence of a control force $f(t)$ while minimizing a cost function:
+
+$$J(q, \dot{q}, f) = \int_{0}^{T} C(q(t), \dot{q}(t), f(t))dt \quad (1)$$
+
+The motion must satisfy the nonholonomic Lagrange-d'Alembert principle ($L$ is the Lagrangian, $L: TQ \to \mathbb{R}$):
+
+$$\delta \int_{0}^{T} L(q(t), \dot{q}(t))dt + \int_{0}^{T} f(t) \cdot \delta q(t) dt = 0 \quad (2)$$
+
+for variations $\delta q(t)$ such that $\delta q(0) = \delta q(T) = 0$ and $\delta q(t) \in D_{q(t)}$ for each $t \in [0, T]$. Additional nonlinear equality or inequality constraints might be imposed in the form $H(q(t)) \ge 0$.
+
+## II. DISCRETIZATION OF NONHOLONOMIC SYSTEMS
+
+In this section we extend the discrete optimal control formulation of [20] to systems with nonholonomic constraints. The system is discretized by replacing the state space $TQ$ with $Q \times Q$ [24]. Thus, a velocity vector $(q, \dot{q}) \in TQ$ is represented by a pair of points $(q_0, q_1) \in Q \times Q$. A path $q: [0, T] \to Q$ is replaced by a discrete path $q_d: \{kh\}_{k=0}^N \to Q$, $Nh = T$. Discrete configurations are denoted $q_k = q_d(kh)$. Similarly, a continuous force $f: [0, T] \to T^*Q$ is replaced by a discrete force $f_d: \{kh\}_{k=0}^N \to T^*Q$ with corresponding notation $f_k = f_d(kh)$. Based on this discretization, the nonholonomic constraint distribution $\mathcal{D} \subset TQ$ is replaced with $D_d \subset Q \times Q$ such that $(q_k, q_{k+1}) \in D_d$ for all $k=0, ..., N-1$.
+
+### A. Discrete Nonholonomic Lagrange-d'Alembert Principle
+
+The Lagrangian action integral (2) can then be approximated on a time interval $[kh, (k+1)h]$ by the discrete Lagrangian $L_d: Q \times Q \to \mathbb{R}$ according to
+
+$$L_d(q_k, q_{k+1}) \approx \int_{kh}^{(k+1)h} L(q(t), \dot{q}(t))dt \quad (3)$$
+
+The virtual work in (2) can be approximated using
+
+$$f_k^{-} \cdot \delta q_k + f_k^{+} \cdot \delta q_{k+1} \approx \int_{kh}^{(k+1)h} f(t) \cdot \delta q(t) dt \quad (4)$$
+
+where $f_k^-, f_k^+ \in T^*Q$ are called left and right discrete forces.
+
+The discrete version of the Lagrange-d'Alembert principle becomes:
+
+$$\delta \sum_{k=0}^{N-1} L_d(q_k, q_{k+1}) + \sum_{k=0}^{N-1} \left( f_k^{-} \cdot \delta q_k + f_k^{+} \cdot \delta q_{k+1} \right) = 0 \quad (5)$$
+
+such that $\delta q_k \in D_{q_k}$, $(q_k, q_{k+1}) \in D_d$ for all $k=0, ..., N-1$ and $\delta q_0 = \delta q_N = 0$.
+
+Assume that the space $\mathcal{D}$ is defined by $m$ functions $\omega^a: TQ \to \mathbb{R}$, $a = 1, \ldots, m$, that are linear in the velocities and satisfy $\omega^a(q, \dot{q}) = 0$. One can always select coordinates $q = (r, s)$ such that the functions can be expressed as
+
+$\omega^a(q, \dot{q}) = \dot{s}^a + A_\alpha^a(r, s)\dot{r}^\alpha$, $\alpha = 1, \ldots, n-m$, which is equivalent to constraining the variations according to $\delta s^a + A_\alpha^a \delta r^\alpha = 0$. Forces can be expressed in the corresponding dual basis as $f = (f_\alpha, f_a)$, and we assume that $f_a = 0$, i.e. forces enter only in the $r$-coordinates. The resulting equations after substituting the constraint are
+
+$$\frac{d}{dt}\frac{\partial L}{\partial \dot{r}} - \frac{\partial L}{\partial r} - f = A(r, s)\left(\frac{d}{dt}\frac{\partial L}{\partial \dot{s}} - \frac{\partial L}{\partial s}\right) \quad (6)$$
+
+Assume that the functions $\omega^a$ are approximated using corresponding discrete constraint functions $\omega_d^a: Q \times Q \to \mathbb{R}$. Then (5) becomes equivalent to the discrete nonholonomic Euler-Lagrange equations:
+
+$$\begin{aligned} & \frac{\partial L_k}{\partial r_k} + \frac{\partial L_{k-1}}{\partial r_k} + f_k^{-} + f_{k-1}^{+} \\ &= A(r_k, s_k) \left( \frac{\partial L_k}{\partial s_k} + \frac{\partial L_{k-1}}{\partial s_k} \right) \\ & \omega_d^a(r_k, s_k, r_{k+1}, s_{k+1}) = 0, \end{aligned} \quad (7)$$
+
+for $k = 0, \ldots, N-1$, $a = 1, \ldots, m$, where $L_k := L_d(r_k, s_k, r_{k+1}, s_{k+1})$. The above formulation avoids the use of Lagrange multipliers.
+
+### B. Discrete Optimization Problem
+
+The cost function is approximated on each trajectory segment $(q_k, q_{k+1})$ using
+
+$$C_d(q_k, q_{k+1}, f_k, f_{k+1}) \approx \int_{kh}^{(k+1)h} C(q, \dot{q}, f) dt \quad (8)$$
+
+yielding the total cost
+
+$$J_d(q_d, f_d) = \sum_{k=0}^{N-1} C_d(q_k, q_{k+1}, f_k, f_{k+1})$$
+
+Velocity boundary conditions $\dot{q}(0) = \dot{q}_0$ and $\dot{q}(T) = \dot{q}_T$ are enforced using
+
+$$\begin{aligned} & \frac{\partial L}{\partial \dot{r}}(q_0, \dot{q}_0) + \frac{\partial L_0}{\partial r_0} + f_0^{-} \\ &= A(r_0, s_0) \left( \frac{\partial L}{\partial \dot{s}}(q_0, \dot{q}_0) + \frac{\partial L_0}{\partial s_0} \right) \\ & \frac{\partial L}{\partial \dot{r}}(q_T, \dot{q}_T) - \frac{\partial L_{N-1}}{\partial r_N} - f_{N-1}^{+} \\ &= A(r_N, s_N) \left( \frac{\partial L}{\partial \dot{s}}(q_T, \dot{q}_T) - \frac{\partial L_{N-1}}{\partial s_N} \right) \end{aligned} \quad (9)$$
+
+In summary, we have the following constrained nonlinear optimization problem:
+
+**Compute:** $q_d, f_d$
+
+**minimizing** $\sum_{k=0}^{N-1} C_d(q_k, q_{k+1}, f_k, f_{k+1})$
+
+**subject to:**
+
+$q(0) = q_0, q(T) = q_T$
+
+Equations (7)
+
+Equations (9)
+
+$H(q_k) \ge 0,$
+
+¹Using the summation convention $a_i b^i := \sum_i a_i b^i$
\ No newline at end of file
diff --git a/samples/texts/2815108/page_4.md b/samples/texts/2815108/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..46acd413b5cb1738d79638505aa0d5a437a74429
--- /dev/null
+++ b/samples/texts/2815108/page_4.md
@@ -0,0 +1,115 @@
+and the cost function
+
+$$
+\begin{align*}
+C_d(r_k, r_{k+1}, g_k, f_k, f_{k+1}) \\
+&= hC(r_{k+\frac{1}{2}}, \Delta r_k, g_{k+\frac{1}{2}}, g_{k+\frac{1}{2}}\xi_k),
+\end{align*}
+$$
+
+where $\xi_k = -A_{loc}(r_{k+\frac{1}{2}}) \cdot \Delta r_k$ and $g_{k+\frac{1}{2}} = g_k \exp(\frac{h}{2}\xi_k)$.
+
+### B. Reduced Discrete Optimization Problem
+
+The reduced optimization problem can be formulated as follows:
+
+**Compute:** $r_d, f_d$
+
+**minimizing** $\sum_{k=0}^{N-1} C_d(r_k, r_{k+1}, g_k, f_k, f_{k+1})$
+
+**subject to:**
+
+$r(0) = r_0, \quad g(0) = g_0, \quad r(T) = r_T, \quad g(T) = g_T,$
+
+Equations (14)
+
+Equations (15)
+
+$H(r_k, g_k) \ge 0,$
+
+for $k = 0, \ldots, N-1$, where $r_d = \{r_i\}_{i=0}^N$. Group variables $g_k$ need not be included as part of the optimization state vector, since they can be reconstructed internally from the shape trajectories during optimization.
+
+## IV. WHEELED VEHICLES APPLICATIONS
+
+### A. Models
+
+1) *Two wheeled robot:* Consider the two-wheeled mobile robot [1], controlled by applying a torque to each wheel independently and assuming rolling without slipping. The configuration space is $Q = S^1 \times S^1 \times SE(2)$ with coordinates $q = (\phi_R, \phi_L, x, y, \theta)$, where $(\phi_R, \phi_L)$ are the rotation angles of the right and left wheels and $(x, y, \theta) \in SE(2)$ are the position and orientation. The robot is controlled with right and left wheel torques $\tau_R$ and $\tau_L$, respectively. The Lagrangian is
+
+$$
+L(q, \dot{q}) = \frac{1}{2} J_w (\dot{\phi}_R^2 + \dot{\phi}_L^2) + \frac{1}{2} J \dot{\theta}^2 + \frac{1}{2} m (\dot{x}^2 + \dot{y}^2) \quad (18)
+$$
+
+where $m$ is the mass, $J$ and $J_w$ are the moments of inertia, $\rho$ is the wheel radius, and $d$ is the distance from each wheel to the center of mass, which is assumed to coincide with the center of rotation.
+
+The system is symmetric with respect to the action of the group $G = SE(2)$. The shape space is described by coordinates $r = (\phi_R, \phi_L) \in Q/G$. The matrix representation of the local connection in (10) is
+
+$$
+[A_{loc}(r)] = \begin{bmatrix} -\frac{\rho}{2} & -\frac{\rho}{2} \\ 0 & 0 \\ \frac{\rho}{2d} & -\frac{\rho}{2d} \end{bmatrix} \quad (19)
+$$
+
+The constrained Lagrangian is
+
+$$
+\ell_c = \frac{1}{2} C (\dot{\phi}_R^2 + \dot{\phi}_L^2) + D \dot{\phi}_R \dot{\phi}_L,
+$$
+
+where
+
+$$
+C = \left( J_w + \frac{m\rho^2}{4} + \frac{J\rho^2}{4d^2} \right), \quad D = \left( \frac{m\rho^2}{4} - \frac{J\rho^2}{4d^2} \right) \quad (21)
+$$
+
+Substituting the Lagrangian and the connection in equations (14), we see from (13) that the forces $\hat{f}$ vanish and the reduced discrete equations of motion become:
+
+$$
+\begin{align*}
+C (\Delta\phi_{Rk} - \Delta\phi_{Rk-1}) + D (\Delta\phi_{Lk} - \Delta\phi_{Lk-1}) \\
+&= \frac{h}{4} (\tau_{Rk-1} + 2\tau_{Rk} + \tau_{Rk+1})
+\end{align*}
+$$
+
+$$
+\begin{align*}
+D (\Delta\phi_{Rk} - \Delta\phi_{Rk-1}) + C (\Delta\phi_{Lk} - \Delta\phi_{Lk-1}) \\
+&= \frac{h}{4} (\tau_{Lk-1} + 2\tau_{Lk} + \tau_{Lk+1})
+\end{align*}
+$$
+
+$$
+x_{k+1} = x_k + \frac{v_k}{\omega_k}(\sin(\theta_k + h\omega_k) - \sin(\theta_k))
+$$
+
+$$
+y_{k+1} = y_k + \frac{v_k}{\omega_k}(-\cos(\theta_k + h\omega_k) + \cos(\theta_k))
+$$
+
+$\theta_{k+1} = \theta_k + h\omega_k,$
+
+where $v_k = \frac{\rho}{2} (\Delta\phi_{Rk} + \Delta\phi_{Lk})$ and $\omega_k = \frac{\rho}{2d} (\Delta\phi_{Lk} - \Delta\phi_{Rk})$.
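The reconstruction updates above can be turned into a short routine. A sketch, assuming $\Delta\phi_k$ denotes the discrete wheel velocity $(\phi_{k+1} - \phi_k)/h$ (a convention, not stated explicitly in the text) and handling the straight-line limit $\omega_k \to 0$ separately:

```python
import numpy as np

def reconstruct_pose(phi_R, phi_L, rho, d, h, pose0=(0.0, 0.0, 0.0)):
    """Integrate (x, y, theta) from discrete wheel angles using the
    exact SE(2) updates above."""
    x, y, th = pose0
    poses = [tuple(pose0)]
    for k in range(len(phi_R) - 1):
        dR = (phi_R[k + 1] - phi_R[k]) / h   # discrete wheel velocities
        dL = (phi_L[k + 1] - phi_L[k]) / h
        v = rho / 2 * (dR + dL)              # forward speed v_k
        w = rho / (2 * d) * (dL - dR)        # turning rate omega_k
        if abs(w) > 1e-12:
            x += v / w * (np.sin(th + h * w) - np.sin(th))
            y += v / w * (-np.cos(th + h * w) + np.cos(th))
        else:                                # straight-line limit as w -> 0
            x += h * v * np.cos(th)
            y += h * v * np.sin(th)
        th += h * w
        poses.append((x, y, th))
    return np.array(poses)

# equal wheel angles -> straight motion along the initial heading
phi = np.arange(11) * 1.0
traj = reconstruct_pose(phi, phi, rho=0.1, d=0.2, h=0.1)
```

With equal wheel angles, $\omega_k = 0$ at every step, so the heading stays fixed and the robot translates along its initial heading; this exact-integration form avoids the drift of a naive Euler update in $(x, y)$.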
+
+2) *Simple car:* The simple car is controlled using a rear wheel torque $u^\psi$ and a torque $u^\sigma$ steering the front wheels. The configuration space is $Q = S^1 \times S^1 \times SE(2)$ with coordinates $q = (\psi, \sigma, x, y, \theta)$, where $(x, y, \theta) \in SE(2)$ are the position and orientation of the car, $\psi$ is the rolling angle of the rear wheels, and $\sigma$ is defined as $\sigma = \tan(\phi)$, where $\phi$ is the orientation of the front wheels relative to the car orientation $\theta$, i.e. the steering angle. The model assumes that the distance between the left and the right wheels is negligible, as in a bicycle model (e.g. [30]). The Lagrangian is
+
+$$
+L(q, \dot{q}) = \frac{1}{2} I \dot{\psi}^2 + \frac{1}{2} J \dot{\sigma}^2 + \frac{1}{2} m (\dot{x}^2 + \dot{y}^2) + \frac{1}{2} K \dot{\theta}^2 \quad (22)
+$$
+
+where $m$ is the mass, $I$ and $J$ are the moments of inertia, $l$ is the distance between the front and rear wheel axles, and $\rho$ is the radius of the wheels. We choose this parametrization of the steering angle in order to avoid a nonlinear term in the constraint connection, as derived below, and to avoid computing $\tan$ during optimization.
+
+The Lagrangian and constraints (see [1]) are again invariant under the action of the group $G = SE(2)$. The shape coordinates are now $r = (\psi, \sigma) \in M$. The matrix representation of the local connection in (10) is
+
+$$
+[A_{loc}(r)] = \begin{bmatrix} -\rho & 0 \\ 0 & 0 \\ -\frac{\rho}{l}\sigma & 0 \end{bmatrix} \quad (23)
+$$
+
+The constrained Lagrangian is
+
+$$
+\ell_c(r, \dot{r}) = \frac{1}{2} \left( I + m\rho^2 + \frac{K\rho^2\sigma^2}{l^2} \right) \dot{\psi}^2 + \frac{1}{2} \left( \frac{J}{(1+\sigma^2)^2} \right) \dot{\sigma}^2
+$$
+
+Using (13) one can compute
+
+$$
+\hat{f} = \left[ -\frac{K\rho^2\sigma\dot{\psi}\dot{\sigma}}{l^2}, \frac{K\rho^2\sigma\dot{\psi}^2}{l^2} \right] \quad (24)
+$$
+
+The resulting discrete equations of motion are found by expressing the discrete Lagrangian, forces, and constraints in terms of the corresponding quantities $\ell_c$, $A_{loc}$, and $\hat{f}$.
\ No newline at end of file
diff --git a/samples/texts/2926839/page_1.md b/samples/texts/2926839/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..b9757fd05f9d2e40d59d889d90067db9ebbe849b
--- /dev/null
+++ b/samples/texts/2926839/page_1.md
@@ -0,0 +1,71 @@
+Robust multi objective optimization method using satisfying trade-off method
+
+| | |
+|---|---|
+| Authors | Toyoda Masahiro, Kogiso Nozomu |
+| Journal or publication title | Journal of Mechanical Science and Technology |
+| Volume | 29 |
+| Number | 4 |
+| Page range | 1361-1367 |
+| Year | 2015-04-14 |
+| Rights | The final publication is available at link.springer.com via http://dx.doi.org/10.1007/s12206-015-0305-9. |
+| URL | http://hdl.handle.net/10466/15650 |
+
+doi: 10.1007/s12206-015-0305-9
\ No newline at end of file
diff --git a/samples/texts/2926839/page_7.md b/samples/texts/2926839/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..b411a3c3e67cd8f2b345cadf7292575dc520fa26
--- /dev/null
+++ b/samples/texts/2926839/page_7.md
@@ -0,0 +1,30 @@
+Fig. 11. Cross-sectional area distributions
+
+usefulness of this method was demonstrated.
+
+* An accurate Pareto set is obtained by parametrically changing the aspiration level, because each Pareto solution is obtained using a mathematical programming method.
+* It was shown that the proposed method could be used to investigate the effect of the variation of random variables on the shape of the Pareto frontier. In addition, the shift of each Pareto solution with the same aspiration level could be traced with respect to the variation of uncertain parameters.
+* This method makes it possible to investigate how the variation of the design variables and parameters affects the Pareto set. This investigation is possible without obtaining the full Pareto set.
+
+## Acknowledgment
+
+Part of this research was supported by JSPS KAKENHI 26249131. The authors wish to express their appreciation to
+
+Prof. S. Kitayama at Kanazawa University for his valuable comments on this research.
+
+## References
+
+[1] G. Park, T. Lee, K. Lee, and K. Hwang, Robust Design: An Overview, *AIAA Journal*, 44(1) (2006) 181-191.
+[2] H. Beyer and B. Sendhoff, Robust Optimization–A Comprehensive Survey, *Computer Methods in Applied Mechanics and Engineering*, 196(33-34) (2007) 3190-3218.
+[3] K. Miettinen, *Nonlinear Multiobjective Optimization*, Kluwer Academic Publishers (2004).
+[4] H. Nakayama and Y. Sawaragi, Satisficing Trade-Off Method for Multiobjective Programming, *Lecture Notes in Economics and Mathematical Systems*, 229 (1984) 113-122.
+[5] H. Nakayama, K. Kaneshige, S. Takemoto, and Y. Watada, An Application of a Multiobjective Programming Technique to Construction Accuracy Control of Cable Stayed Bridges, *European Journal of Operational Research*, 83(3) (1995) 731-738.
+[6] S. Kitayama and K. Yamazaki, Compromise Point Incorporating Trade-off Ratio in Multiobjective Optimization, *Applied Soft Computing*, 12(8) (2012) 1959-1964.
+[7] H. Nakayama, Trade-off Analysis Using Parametric Optimization Techniques, *European Journal of Operational Research*, 60(1) (1992) 87-98.
+[8] A. H-S. Ang and W. H. Tang, *Probabilistic Concepts in Engineering Planning and Design, Vol. 1, Basic Principles*, John Wiley & Sons (1975).
+[9] W. Chen, M. M. Wiecek, and J. Zhang, Quality Utility - A Compromise Programming Approach to Robust Design, *Journal of Mechanical Design*, 121(2) (1999) 179-187.
+[10] R. T. Haftka and Z. Gürdal, *Elements of Structural Optimization*, Kluwer Academic Publishers (1992).
+
+Masahiro Toyoda is presently M.S. candidate in Department of Aerospace Engineering at Osaka Prefecture University in Japan. He received his B.S. degree in aerospace engineering from Osaka Prefecture University, Japan in 2012. His research interest is multiobjective optimization and its application to aerospace structural systems.
+
+Nozomu Kogiso is presently an Associate Professor in Department of Aerospace Engineering at Osaka Prefecture University in Japan. He received his B.S. degree in physics from Nagoya University, Japan in 1988 and his M.S. and Dr. Eng. from Osaka Prefecture University in 1994 and 1997, respectively. His research interests include robust and reliability-based design optimization.
\ No newline at end of file
diff --git a/samples/texts/2952501/page_30.md b/samples/texts/2952501/page_30.md
new file mode 100644
index 0000000000000000000000000000000000000000..83cdcec1769b97e8c73ab2472a4b8ca74d3cd0ba
--- /dev/null
+++ b/samples/texts/2952501/page_30.md
@@ -0,0 +1,17 @@
+# 7 Distances along the half-plane boundary
+
+To fulfill our goal of showing the asymptotic equivalence between the oriented and non-oriented distances in uniform Eulerian triangulations, we need as a technical ingredient some estimates on the (oriented) distances along the boundary of $\mathcal{L}$.
+
+Note that the vertices on $\partial\mathcal{L}$ are of two types: those with coordinates $(i, 0)$ for some $i \in \mathbb{Z}$, and those with coordinates $(i + 1/2, \varepsilon)$ for some $i \in \mathbb{Z}$. To simplify notation, the results in this section only deal with distances between vertices of the first type: since we are interested in asymptotic estimates, including vertices of the second type only adds 1 or 2 to the distances considered. We will point out this generalization explicitly whenever it is needed later in the paper.
+
+In the sequel, we will use leftmost mirror geodesics, which were defined in Section 5 for finite cylinder triangulations and which we now generalize to $\mathcal{L}$. For any $i \in \mathbb{Z}$, the leftmost mirror geodesic from $(i, 0)$ in $\mathcal{L}$ is an infinite path $\omega$ in $\mathcal{L}$, whose reverse is an oriented geodesic, and which visits a vertex $\omega(n)$ in $\mathcal{L}_n$ at every step $n \ge 0$. It starts at $(i, 0)$ and is obtained by choosing at step $n+1$ the leftmost edge between $\omega(n)$ and $\mathcal{L}_{n+1}$. As before, for $i < j$, the leftmost mirror geodesics from $(i, 0)$ and $(j, 0)$ coalesce before hitting $\mathcal{L}_r$ if and only if the trees $\mathcal{T}_i, \mathcal{T}_{i+1}, \dots, \mathcal{T}_{j-1}$ all have height strictly smaller than $r$.
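The coalescence criterion depends only on the heights of the intermediate trees, which makes it easy to state as a predicate. A toy sketch (tree heights supplied directly as a list; all names hypothetical):

```python
def coalesce_before_level(heights, i, j, r):
    """Leftmost mirror geodesics from (i, 0) and (j, 0), with i < j,
    coalesce before hitting L_r iff every tree T_i, ..., T_{j-1}
    has height strictly smaller than r."""
    return all(heights[k] < r for k in range(i, j))

heights = [2, 5, 1, 3]                            # toy heights of T_0, ..., T_3
early = coalesce_before_level(heights, 0, 2, 6)   # max(2, 5) < 6
late = coalesce_before_level(heights, 0, 2, 5)    # T_1 has height 5, not < 5
```

Here `early` is `True` and `late` is `False`: raising the cutoff level lets the tallest intermediate tree block coalescence.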
+
+## 7.1 Block decomposition and lower bounds
+
+We first want to obtain lower bounds on the distances along the boundary of $\mathcal{L}$. For that purpose, we adapt the **block decomposition** of causal triangulations [14, Section 2.1] to $\mathcal{L}$.
+
+Figure 16: The block of height 3 between $\mathcal{T}_0$ and $\mathcal{T}_1$ in the triangulation of Figure 15. As before, ghost modules are shown in pale grey.
+
+For $r \ge 1$, we define the random map $\mathcal{G}_r$ to be the planar map obtained from $\mathcal{L}_{[0,r]}$ by keeping only the faces and edges that are between $\mathcal{T}_0$ and $\mathcal{T}_{i_r}$, where $i_r$ is the smallest integer $i > 0$ such that $\mathcal{T}_i$ has height at least $r$. More precisely, we only keep the skeleton modules that are at height smaller than or equal to $r$, belonging to trees $\mathcal{T}_i$, with $0 \le i \le i_r$, and the slots that are to the left of all these skeleton modules (see Figure 16 for an example). Thus, $\mathcal{G}_r$ has one boundary that is naturally divided into four parts: the upper and lower parts that it shares with $\mathcal{L}_{[0,r]}$, and the left and right parts.
+
+Note that $\mathcal{L}$ contains many submaps that have the same law as $\mathcal{G}_r$: if $\mathcal{T}_i, \mathcal{T}_j$ are two
\ No newline at end of file
diff --git a/samples/texts/2952501/page_31.md b/samples/texts/2952501/page_31.md
new file mode 100644
index 0000000000000000000000000000000000000000..70d60e5e2d2fcb1d8ff005209e24a19ecba62d94
--- /dev/null
+++ b/samples/texts/2952501/page_31.md
@@ -0,0 +1,37 @@
+consecutive trees reaching height $r$ in the skeleton of $\mathcal{L}$ (with $i < j$), we can define the submap of $\mathcal{L}_{[0,r]}$ encased between $\mathcal{T}_i$ (strictly) and $\mathcal{T}_j$ (included), which is obtained by keeping only the skeleton modules belonging to trees $\mathcal{T}_k$, with $i < k \le j$, and the slots that are to the left of all these skeleton modules. Such a map has the same law as $\mathcal{G}_r$.
+
+We call any map that can be a realization of $\mathcal{G}_r$ a **block of height $r$**.
+
+We define the **diameter** of $\mathcal{G}_r$, denoted $\mathrm{Diam}(\mathcal{G}_r)$, to be the minimal oriented distance from a vertex on its left boundary to a vertex on its right boundary. Note that this diameter is not uniformly large when $r$ is large. However, we will now show that a tall block is also typically wide. To do so, we consider the **median diameter** of a block.
+
+**Definition 7.1.** For any $r \ge 1$, let $f(r)$ be the median diameter of $\mathcal{G}_r$, that is, the largest number such that
+
+$$\mathrm{P}(\mathrm{Diam}(\mathcal{G}_r) \ge f(r)) \ge \frac{1}{2}.$$
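For a random variable taking finitely many values, the quantity in Definition 7.1 can be computed directly. A small illustrative sketch (the distribution below is arbitrary, not that of $\mathrm{Diam}(\mathcal{G}_r)$):

```python
def median_diameter(dist):
    """Largest t with P(X >= t) >= 1/2, for dist = {value: probability}."""
    best = None
    for t in sorted(dist):
        tail = sum(p for v, p in dist.items() if v >= t)
        if tail >= 0.5:
            best = t  # t still satisfies the tail condition
    return best

# arbitrary toy distribution: P(X >= 2) = 0.75 >= 1/2, P(X >= 3) = 0.45 < 1/2
assert median_diameter({1: 0.25, 2: 0.30, 3: 0.45}) == 2
```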
+
+We show the following lower bound on the median diameter, which is similar to the first part of [14, Theorem 5]:
+
+**Theorem 7.2.** There exists $c > 0$ such that
+
+$$f(r) \ge cr,$$
+
+for all $r$ sufficiently large.
+
+Let us introduce a bit of notation before explaining how to prove this theorem. For any $m \ge 1$ and $h \ge 0$, consider the layer $\mathcal{L}_{[h,h+m]}$: it is composed of a (bi-infinite) sequence of blocks of height $m$, $(\mathcal{G}_m(i,h))_{i \in \mathbb{Z}}$. To avoid ambiguities, we set $\mathcal{G}_m(1,h)$ to be the block that has a part of $\mathcal{T}_k$ as its right boundary, where $k$ is the smallest integer $\ge 1$ such that $\mathcal{T}_k$ has height at least $h+m$. For fixed $h,m$, these blocks are independent and distributed as $\mathcal{G}_m$. For $r \ge h+m$, we denote by $N_r(m,h)$ the maximal index $i$ such that the block $\mathcal{G}_m(i,h)$ is a sub-block of $\mathcal{G}_r$. (Note that, by our convention, the minimal such index is $i=1$, so that $N_r(m,h)$ is also the number of blocks $\mathcal{G}_m(i,h)$ that are sub-blocks of $\mathcal{G}_r$.)
+
+To prove Theorem 7.2, we use, as in [14], a renormalization scheme, splitting $\mathcal{G}_r$ into smaller blocks. This relies on an estimate of the numbers $N_r(2m, lm)$:
+
+**Lemma 7.3.** There exists $c > 0$ such that, for every $1 \le m \le cr$, we have
+
+$$\mathrm{P}\left(\inf_{0 \le l \le (r/m)-2} N_r(2m, lm) \ge c \left(\frac{r}{m}\right)^2\right) \ge \frac{7}{8}.$$
+
+The proof of this lemma can be adapted straightforwardly from the proof of [14, Lemma 1], in the case $\beta = 2$.
+
+**Proposition 7.4.** There exists $C > 0$ such that, for any integer $m$ with $1 \le m \le Cr$, we have
+
+$$f(r) \ge C \cdot \min\left\{m, \left(\frac{r}{m}\right)^2 f(m)\right\}.$$
+
+*Proof.* Let us give a sketch of the proof of this result, as it is very similar to that of [14, Proposition 1]. The idea is to consider the shortest (oriented) path going from a vertex on the left boundary of $\mathcal{G}_r$ to its right boundary, and whether or not it leaves a small horizontal layer of a specific type.
+
+More precisely, we pick a vertex $x$ on the left boundary of $\mathcal{G}_r$, at a height $0 \le j \le r$. Then, we can find an integer $l$ such that $x$ is located in the layer $\mathcal{L}_{[lm,(l+2)m]}$, with $0 \le l \le (r/m)-2$ and so that $|lm-j| \ge m/3$ and $|(l+2)m-j| \ge m/3$. Consider then the shortest oriented path from $x$ to the right boundary of $\mathcal{G}_r$. Either it stays in that layer, or it leaves it at some point.
+
+If it leaves the layer, then its length is bounded below by $m/3$.
\ No newline at end of file
diff --git a/samples/texts/2952501/page_32.md b/samples/texts/2952501/page_32.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f3f1f6ab8e59db3a982fe8aa4f96ef2d2f66662
--- /dev/null
+++ b/samples/texts/2952501/page_32.md
@@ -0,0 +1,69 @@
+Convergence of Eulerian triangulations
+
+If it does not leave this layer, then its length is bounded below by
+
+$$
+\sum_{i=1}^{N_r(2m,lm)} \mathrm{Diam}(\mathcal{G}_{2m}(i,lm)).
+$$
+
+Then, from Lemma 7.3, and from the definition of $f$, we get that, for $r/m$ large enough, for some $c' > 0$ independent of $r, l, m$,
+
+$$
+\mathbb{P}\left(\left\{N_r(2m, lm) < c \left(\frac{r}{m}\right)^2\right\} \cup \left\{\sum_{i=1}^{N_r(2m, lm)} \mathrm{Diam}(\mathcal{G}_{2m}(i, lm)) \le c'\, N_r(2m, lm)\, f(2m)\right\} \right) \le \frac{1}{4},
+$$
+
+so that
+
+$$
+\mathbb{P}\left(\mathrm{Diam}(\mathcal{G}_r) \leq \frac{m}{3} \wedge c\, c' \left(\frac{r}{m}\right)^2 f(2m)\right) \leq \frac{1}{4},
+$$
+
+which implies the desired bound, by the definition of $f$.
+
+The details of the proof can be adapted from the proof of [14, Proposition 1]. $\square$
+
+Theorem 7.2 is then a purely analytic consequence of Proposition 7.4, and its proof is
+a straightforward adaptation of that of [14, Theorem 5].
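
To see informally why Proposition 7.4 forces linear growth of $f$, suppose $f(m) \ge a\,m$ for some small $a > 0$ and take $m \approx \sqrt{a}\,r$: the minimum in the proposition is then $\approx \sqrt{a}\,r$, so the linear-growth constant improves from $a$ to $C\sqrt{a}$, and iterating this map converges to the fixed point $C^2 > 0$. This heuristic (with arbitrary constants of our choosing, and ignoring the constraint $m \le Cr$ and integrality) can be checked numerically:

```python
# Heuristic slope iteration behind Theorem 7.2: a -> C * sqrt(a).
# C = 0.1 is an arbitrary illustrative constant; the fixed point is C**2.
C = 0.1
a = 1e-6  # an arbitrarily small initial linear lower bound f(m) >= a * m
for _ in range(60):
    a = C * a ** 0.5  # each renormalization step improves the slope
assert abs(a - C ** 2) < 1e-9  # the iteration converges to C**2 > 0
```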
+
+We can now use Theorem 7.2 to obtain the following lower bounds for the distances
+along the boundary of $\mathcal{L}$:
+
+**Proposition 7.5.** For every $\varepsilon > 0$, there exists an integer $K > 0$ such that, for every $r \ge 1$,
+
+$$
+\mathbb{P} \left( \min_{|j| \ge K r^2} \vec{d}_{\mathcal{L}}((0,0), (j,0)) \ge r \right) \ge 1 - \varepsilon.
+$$
+
+Consequently, for $K' = 9K$, we also have, for every $r \ge 1$,
+
+$$
+\mathbb{P} \left( \min_{|j| \ge 2K'r^2} \min_{-K'r^2 \le i \le K'r^2} \vec{d}_{\mathcal{L}}((i,0), (j,0)) \ge r \right) \ge 1 - 2\varepsilon.
+$$
+
+*Proof.* Let us start with the first assertion. Let $\varepsilon > 0$. Fix $r \ge 1$ and $K \ge 1$. Then, from (5.18), the number $N_{(K,r)}$ of trees that reach height $r$ between $(0,0)$ and $(j,0)$ is bounded below by a binomial variable of parameters $(Kr^2, 3/((r+2)^2-1))$, so that, using Chebyshev's inequality, for any $a > 0$,
+
+$$
+\mathbb{P}\left(N_{(K,r)} \le \frac{3}{8}K - a\right) \le \frac{3K}{a^2}.
+$$
+
+(Note that the binomial variable in question has expectation greater than or equal to
+$3K/8$, with equality when $r=1$, and a variance smaller than $3K$.)
+
+Taking $a = \sqrt{6K/\varepsilon}$, for $K$ large enough that $a \le \frac{1}{8}K - 1$, we get
+
+$$
+\mathbb{P}\left(N_{(K,r)} \le \frac{1}{4}K + 1\right) \le \frac{\varepsilon}{2}. \tag{7.1}
+$$
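
The parenthetical claims on the mean and variance of the binomial variable can be checked exactly with rational arithmetic. A quick sketch (the value of $K$ and the range of $r$ are arbitrary choices of ours):

```python
from fractions import Fraction

def binom_mean_var(K, r):
    # Binomial(K r^2, 3 / ((r+2)^2 - 1)), as in the proof above
    n = K * r * r
    p = Fraction(3, (r + 2) ** 2 - 1)
    return n * p, n * p * (1 - p)

K = 100
for r in range(1, 50):
    mean, var = binom_mean_var(K, r)
    assert mean >= Fraction(3 * K, 8)  # expectation >= 3K/8 ...
    assert var < 3 * K                 # ... and variance < 3K

assert binom_mean_var(K, 1)[0] == Fraction(3 * K, 8)  # equality at r = 1
```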
+
+Now, on the event that $N_{(K,r)} > K/4$, for any $j \ge Kr^2$, we have
+
+$$
+\vec{d}_{\mathcal{L}}((0,0), (j,0)) \geq \sum_{i=1}^{\lfloor \frac{K}{4} \rfloor + 1} \mathrm{Diam}(\mathcal{G}_r(i)) \wedge r,
+$$
+
+so that, using Theorem 7.2,
+
+$$
+\mathbb{P}\left(\vec{d}_{\mathcal{L}}((0,0), (j,0)) < c r \frac{K}{4} \wedge r\right) \leq \frac{1}{2^{K/4}}.
+$$
\ No newline at end of file
diff --git a/samples/texts/2952501/page_35.md b/samples/texts/2952501/page_35.md
new file mode 100644
index 0000000000000000000000000000000000000000..7599df726225371e2035b7116470fe9861c260f7
--- /dev/null
+++ b/samples/texts/2952501/page_35.md
@@ -0,0 +1,33 @@
+The details of the proof can be adapted verbatim from the proof of Proposition 17 in [13]. □
+
+# 8 Asymptotic equivalence between oriented and non-oriented distances
+
+Recall that, for any Eulerian triangulation with a boundary $A$, we write $\vec{d}_A$ for the oriented distance on $A$, and $d_A$ for the usual graph distance. We will show that these two distances are asymptotically proportional, first on the layers of the LHPET $\mathcal{L}$, then on those of the UIPET $\mathcal{T}_{\infty}^{(1)}$, and finally in large finite Eulerian triangulations. To do so, we follow the chain of proofs of Sections 5 and 6 in [13], once again detailing mostly the additional arguments needed in our case.
+
+## 8.1 Subadditivity in the LHPET and the UIPET
+
+Recall that we write $\rho$ for the root vertex $(0, 0)$ of the LHPET $\mathcal{L}$, and that $\mathcal{L}_r$ is the lower boundary of the layer $\mathcal{L}_{[0,r]}$. We have the following result:
+
+**Proposition 8.1.** There exists a constant $c_0 \in [2/3, 1]$ such that
+
+$$r^{-1} d_{\mathcal{L}}(\rho, \mathcal{L}_r) \xrightarrow[r \to \infty]{a.s.} c_0.$$
+
+*Proof.* The proof of this result, apart from the bounds on $c_0$, is essentially the same as that of [13, Proposition 18]. However, as the argument is very short but central to this whole work, we write it here in its entirety.
+
+For integers $0 \le m < n$, recall that $\mathcal{L}_{[m,n]}$ is the infinite planar map obtained by keeping only the layers of $\mathcal{L}$ between the levels $m$ and $n$. The non-oriented distance $d_{\mathcal{L}_{[m,n]}}$ on this strip is defined by considering the shortest non-oriented paths that stay in $\mathcal{L}_{[m,n]}$. Thus, for two vertices $v, v' \in \mathcal{L}_{[m,n]}$, we have $d_{\mathcal{L}_{[m,n]}}(v, v') \ge d_{\mathcal{L}}(v, v')$.
+
+Let then $m, n \ge 1$, and let $x_m$ be the leftmost vertex $x$ of $\mathcal{L}_m$ such that $d_{\mathcal{L}}(\rho, \mathcal{L}_m) = d_{\mathcal{L}}(\rho, x)$. We have
+
+$$d_{\mathcal{L}}(\rho, \mathcal{L}_{m+n}) \le d_{\mathcal{L}}(\rho, \mathcal{L}_m) + d_{\mathcal{L}_{[m,m+n]}}(x_m, \mathcal{L}_{m+n}).$$
+
+As $x_m$ is a function of $\mathcal{L}_{[0,m]}$ only, and the layers in $\mathcal{L}$ are independent, the random variable $d_{\mathcal{L}_{[m,m+n]}}(x_m, \mathcal{L}_{m+n})$ is independent of $\mathcal{L}_{[0,m]}$, and has the same distribution as $d_{\mathcal{L}}(\rho, \mathcal{L}_n)$.
+
+We can then apply Liggett's version of Kingman's subadditive theorem [24] to get the desired convergence; the fact that the limit is a constant follows from Kolmogorov's zero-one law. As for the bounds on $c_0$, it is clear from (2.1) that $c_0 \in [1/2, 1]$. Our proof that $c_0$ must be at least $2/3$ relies on a result of asymptotic proportionality in finite Eulerian triangulations, which will be stated as Theorem 1.2. We thus postpone this argument until after the proof of Theorem 1.2. □
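
The deterministic skeleton of this argument is Fekete's subadditive lemma: if $a_{m+n} \le a_m + a_n$ for all $m, n$, then $a_n/n$ converges to $\inf_n a_n/n$; Kingman's theorem is its ergodic counterpart. A toy check with an arbitrary subadditive sequence of our choosing:

```python
import math

def a(n):
    # a_n = n + sqrt(n) is subadditive, since sqrt(m + n) <= sqrt(m) + sqrt(n)
    return n + math.sqrt(n)

# check subadditivity on a range of pairs
for m in range(1, 40):
    for n in range(1, 40):
        assert a(m + n) <= a(m) + a(n) + 1e-12

# a_n / n approaches inf_n a_n / n = 1
assert abs(a(10**8) / 10**8 - 1.0) < 1e-3
```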
+
+To carry this asymptotic proportionality over to large finite Eulerian triangulations, we will make a stop at the UIPET of the digon $\mathcal{T}_{\infty}^{(1)}$. In the remainder of this subsection, we write $d$ for the non-oriented distance on $\mathcal{T}_{\infty}^{(1)}$, $B_n^{\bullet}$ for $B_n^{\bullet}(\mathcal{T}_{\infty}^{(1)})$ and $\partial^* B_n^{\bullet}$ for $\partial^* B_n^{\bullet}(\mathcal{T}_{\infty}^{(1)})$, to simplify notation.
+
+**Proposition 8.2.** Let $\epsilon, \delta \in (0, 1)$. We can find $\eta \in (0, 1/2)$ such that, for every sufficiently large $n$, the property
+
+$$ (1 - \epsilon)\mathbf{c}_0\eta n \le d(v, \partial^* B_{n-\lfloor\eta n\rfloor}^{\bullet}) \le (1 + \epsilon)\mathbf{c}_0\eta n \quad \forall v \in \partial^* B_n^{\bullet} $$
+
+holds with probability at least $1 - \delta$.
\ No newline at end of file
diff --git a/samples/texts/2952501/page_37.md b/samples/texts/2952501/page_37.md
new file mode 100644
index 0000000000000000000000000000000000000000..0481b752dd92254272123a7706e0debdaf402513
--- /dev/null
+++ b/samples/texts/2952501/page_37.md
@@ -0,0 +1,35 @@
+a distinguished vertex $o_n$. The hull $B_r^\bullet(\bar{T}_n^{(1)})$ is well-defined when $\vec{d}(\rho_n, o_n) > r + 1$, otherwise we set it to be $\bar{T}_n^{(1)}$.
+
+**Lemma 8.5.** There exists a constant $\bar{c} > 0$ such that, for every $n,r,p \ge 1$ and every $\Delta \in C_{1,r}$ with top boundary half-length $p$, such that $n > N(\Delta) + p$,
+
+$$ P(B_r^\bullet(\bar{T}_n^{(1)}) = \Delta) \le \bar{c} \left( \frac{n}{n - N(\Delta) + 1} \right)^{3/2} \cdot P(B_r^\bullet(\mathcal{T}_\infty^{(1)}) = \Delta). \quad (8.1) $$
+
+*Proof.* The proof of this lemma is very similar to that of [13, Lemma 22], with the additional subtlety that, as for Lemma 5.3, we do not start from explicit expressions for probabilities in finite triangulations, as shown in (8.3).
+
+Fix $r \ge 1$ and $\Delta \in C_{1,r}$ with top boundary half-length $p$. We will write $V$ for $\#V(\Delta)$ to simplify notation. Using (5.4) and the fact that $\mathcal{T}_\infty^{(1)}$ is the local limit of $\mathcal{T}_n^{(1)}$, we have
+
+$$ P(B_r^{\bullet}(\mathcal{T}_{\infty}^{(1)}) = \Delta) = \frac{C(p)}{C(1)} 8^{-N(\Delta)}. \qquad (8.2) $$
+
+On the other hand, (5.3) gives the formula
+
+$$ P \left( B_r^\bullet (\bar{T}_n^{(1)}) = \Delta \right) = \frac{B_{n-N,p}}{B_{n,1}} \cdot \frac{\#\text{inner vertices in } \mathcal{T}_n^{(1)} \setminus \Delta}{\#\text{inner vertices in } \mathcal{T}_n^{(1)}} \quad (8.3) $$
+
+$$ \leq \frac{B_{n-N,p}}{B_{n,1}} \cdot \frac{n-V}{n}, \quad (8.4) $$
+
+where the last inequality is given by Euler's formula and the fact that at most $p$ vertices of $\partial^*\Delta$ are identified together in $\mathcal{T}_n^{(1)}$. (We still need $n > N + p$ since $\mathcal{T}_n^{(1)} \setminus \Delta$ will have $n - N - p$ inner vertices if none of these identifications occur.)
+
+Then, using the bounds of (4.5) and the asymptotics of (4.3), we get that
+
+$$ P(B_r^{\bullet}(\overline{\mathcal{T}}_n^{(1)}) = \Delta) \le c^* C(p) \left(\frac{n}{n-N}\right)^{3/2} 8^{-N} $$
+
+for some constant $c^*$. Comparing the last bound with (8.2) gives the desired result. $\square$
+
+*Proof of Proposition 8.4.* Fix $\varepsilon > 0$ and $\nu > 0$. It suffices to prove that, for all $n$ sufficiently large, we have
+
+$$ P\left(\left|\frac{d(\rho_n, o_n)}{\vec{d}(\rho_n, o_n)} - c_0\right| > 2\varepsilon\right) < \nu. \quad (8.5) $$
+
+Indeed, as detailed in Proposition 4.1, the sequence $n^{-1/4}\vec{d}(\rho_n, o_n)$ is bounded in probability, so that the statement of the proposition will follow from (8.5). Note that, in [13], the equivalent tightness is obtained as a consequence of the convergence of usual planar triangulations to the Brownian map: in our case, we had to use the weaker result of Theorem 2.12, as we obviously do not have a convergence at the level of maps yet.
+
+To obtain (8.5), we want to transfer the results of Proposition 8.3 on the UIPET to large finite triangulations. This necessitates the bounds of Lemma 8.5, together with the statement of Proposition 4.2 on the profile of distances in large finite triangulations (which is once again a consequence of Theorem 2.12, as we cannot rely on a convergence to the Brownian map).
+
+We omit the details of the proof of (8.5), as they can be straightforwardly adapted from the equivalent statement in the proof of Proposition 21 in [13], replacing once again $d_{gr}$ by $\vec{d}$, and $d_{fpp}$ by $d$. $\square$
\ No newline at end of file
diff --git a/samples/texts/2952501/page_38.md b/samples/texts/2952501/page_38.md
new file mode 100644
index 0000000000000000000000000000000000000000..6421b3d97415c2a88e94af74ad169f5bc857e068
--- /dev/null
+++ b/samples/texts/2952501/page_38.md
@@ -0,0 +1,33 @@
+We will now derive our final result of asymptotic proportionality between the oriented and non-oriented distances, Theorem 1.2. It is stated in the context of $\mathcal{T}_n$, the uniform rooted plane Eulerian triangulation with $n$ black faces, which is in correspondence with $\mathcal{T}_n^{(1)}$ as shown in Figure 10. As previously, we use $d$ to denote the non-oriented graph distance.
+
+*Proof of Theorem 1.2.* Let us give an idea of the proof of this theorem, which follows the arguments of the proof of Theorem 1 in [13].
+
+From Proposition 8.4 and the correspondence between $\mathcal{T}_n^{(1)}$ and $\mathcal{T}_n$, we get that, if $o_n'$ is a uniform vertex of $\mathcal{T}_n$, we have
+
+$$ \mathbb{P}(|d(\rho_n, o_n') - c_0 \vec{d}(\rho_n, o_n')| > \varepsilon n^{1/4}) \xrightarrow{n \to \infty} 0. \qquad (8.6) $$
+
+Observe now that $\bar{\mathcal{T}}_n$, re-rooted at $\rho_n'$, the origin vertex of a uniform random edge $e_n$ (remember that all edges of $\mathcal{T}_n$ have a canonical orientation), and still pointed at $o_n'$, has the same distribution as $\bar{\mathcal{T}}_n$. This implies that the statement of (8.6) also holds for the distances from $\rho_n'$, which is sampled with probability proportional to its degree:
+
+$$ \mathbb{P}(|d(\rho'_n, o'_n) - c_0 \vec{d}(\rho'_n, o'_n)| > \varepsilon n^{1/4}) \xrightarrow{n \to \infty} 0. $$
+
+As the numbers of edges and vertices of $\mathcal{T}_n$ are fixed, this allows us to deduce a similar statement on distances between two random uniform vertices $o_n'$, $o_n''$ of $\mathcal{T}_n$:
+
+$$ \mathbb{P}\left(|d(o_n', o_n'') - c_0 \vec{d}(o_n', o_n'')| > \varepsilon n^{1/4}\right) \xrightarrow{n \to \infty} 0. \qquad (8.7) $$
+
+We now want to make this statement into a global one on all the vertices of $\mathcal{T}_n$.
+
+Let us fix $\delta \in (0, 1/2)$. We can choose an integer $k \ge 1$ such that, for every $n$ sufficiently large, we can pick $k$ random vertices $(o_n^1, \dots, o_n^k)$ uniformly in $\mathcal{T}_n$ and independently from one another, satisfying
+
+$$ \mathbb{P}\left(\sup_{x \in V(\mathcal{T}_n)} \left(\inf_{1 \le j \le k} \vec{d}(x, o_n^j)\right) < \varepsilon n^{1/4}\right) > 1 - \delta. \qquad (8.8) $$
+
+This follows from Proposition 4.3. Note that, once again, the equivalent property in [13] was obtained as a consequence of the convergence of usual planar triangulations to the Brownian map, whereas here we had to obtain it from the convergence of the rescaled oriented distances from $o_n$ to a Brownian snake, which is a weaker result.
+
+Then, (8.7) implies that we also have, for all sufficiently large $n$,
+
+$$ \mathbb{P}\left(\bigcap_{1 \le i \le j \le k} \{|d(o_n^i, o_n^j) - c_0 \vec{d}(o_n^i, o_n^j)| \le \varepsilon n^{1/4}\}\right) > 1 - \delta. $$
+
+Observe now that
+
+$$ \sup_{x,y \in V(\mathcal{T}_n)} |d(x,y) - c_0 \vec{d}(x,y)| \le \sup_{1 \le i,j \le k} |d(o_n^i, o_n^j) - c_0 \vec{d}(o_n^i, o_n^j)| + 5 \sup_{x \in V(\mathcal{T}_n)} \left( \inf_{1 \le j \le k} \vec{d}(x,o_n^j) \right). $$
+
+Using the previous two bounds, the right-hand side of this inequality can be bounded by $6\varepsilon n^{1/4}$ outside a set of probability at most $2\delta$, for all sufficiently large $n$, which concludes the proof. □
\ No newline at end of file
diff --git a/samples/texts/2952501/page_39.md b/samples/texts/2952501/page_39.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b5ba58838986958b1e6de1fe2e18e40159583f3
--- /dev/null
+++ b/samples/texts/2952501/page_39.md
@@ -0,0 +1,31 @@
+Let us finally give a short proof of why $c_0 \ge 2/3$. Consider $\mathcal{T}_n$, the uniform rooted plane Eulerian triangulation with $n$ black faces. From Theorem 1.2, for any $\varepsilon, \delta \in (0, 1/2)$, for any $n$ large enough,
+
+$$|d_n(x, y) - c_0 \vec{d}_n(x, y)| \le \varepsilon n^{1/4}, \quad \forall x, y \in V(\mathcal{T}_n), \qquad (8.9)$$
+
+outside an event of probability less than $\delta$.
+
+Suppose that $c_0 < 2/3$. Let us fix $n \ge 1$, and consider some $c \in (0, 1)$. Then, on the event of (8.9), for any $x, y \in V(\mathcal{T}_n)$ such that
+
+$$d_n(x, y) \ge cn^{1/4}, \qquad (8.10)$$
+
+we have:
+
+$$\vec{d}_n(x, y) \ge \left( \frac{1}{c_0} - \frac{\varepsilon}{c} \right) d_n(x, y).$$
+
+This means that, for any geodesic $\gamma$ for the distance $d_n$ from $x$ to $y$ in $\mathcal{T}_n$, a fraction at least $(1/c_0 - 1 - \varepsilon/c)$ of the edges of $\gamma$ are oriented from $y$ to $x$. But, as the above bound also applies when we exchange $x$ and $y$, the same fraction of edges of $\gamma$ must be oriented from $x$ to $y$, which is impossible if $(1/c_0 - 1 - \varepsilon/c) > 1/2$, that is, if $\varepsilon/c < 1/c_0 - 3/2$.
+
+Since, for any $\delta \in (0, 1/2)$, there exists $c(\delta) \in (0, 1)$ such that, if $n$ is large enough, outside of an event of probability less than $\delta$, a positive proportion of pairs of vertices of $\mathcal{T}_n$ satisfy (8.10), we deduce that (8.9) cannot hold with high probability for large $n$ if $c_0 < 2/3$.
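
The arithmetic of the contradiction can be made concrete with a hypothetical value $c_0 < 2/3$ (the numbers below are ours, for illustration only):

```python
def backward_fraction(c0, eps_over_c):
    # lower bound on the fraction of edges of a d_n-geodesic that must be
    # oriented "against" the direction of travel, from the display above
    return 1.0 / c0 - 1.0 - eps_over_c

c0 = 0.6                     # hypothetical value < 2/3
margin = 1.0 / c0 - 1.5      # = 1/c0 - 3/2, positive exactly when c0 < 2/3
assert margin > 0
frac = backward_fraction(c0, margin / 2)
# both orientations would have to occupy a fraction > 1/2 of the edges
assert frac > 0.5
```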
+
+It would be interesting to also refine the upper bound on $c_0$. However, this seems to require deeper arguments than our refinement of the lower bound.
+
+# 9 Convergence for the Riemannian distance
+
+We now turn our attention to another distance that can be defined on Eulerian triangulations, the **Riemannian distance** $d^R$. To define this distance, we start by assigning to a triangulation $A$ the piecewise-linear metric space $S(A)$, obtained by gluing equilateral Euclidean triangles with sides of unit length according to the combinatorics of $A$. We call this space the **Euclidean geometric realization** of $A$. It naturally comes endowed with a metric, which we denote by $d^R$, and, by a slight abuse of notation, we also denote by $d^R$ the induced distance on the vertices of $A$.
+
+We want to show that, like the usual graph distance $d$, the Riemannian distance $d^R$ is asymptotically proportional to the oriented distance $\vec{d}$, so that, endowed with $d^R$, the uniform Eulerian triangulation $\mathcal{T}_n$ still converges to the Brownian map. This can be once again proven using the layer decomposition of finite and infinite Eulerian triangulations with respect to $\vec{d}$, together with an ergodic subadditivity argument. Once this argument gives the desired asymptotic proportionality on $\mathcal{L}$, the results of Section 8 can be directly adapted to $d^R$, to obtain the new convergence to the Brownian map.
+
+However, the subadditivity argument presents a hurdle here that was not present in the case of $d$: indeed, while we still have the immediate upper bound $d^R \le \vec{d}$ (and even $d^R \le d$), we have no obvious way to bound $d^R$ from below in terms of $\vec{d}$. Such a bound is crucial, since, without it, the proportionality constant given by the ergodic subadditivity theorem could very well be zero. We therefore prove the following result:
+
+**Proposition 9.1.** Let $A$ be a triangulation, endowed with its graph distance $d$, canonical oriented pseudo-distance $\vec{d}$ and Riemannian distance $d^R$. Then,
+
+$$d^R \ge \frac{\sqrt{3}}{4} d.$$
\ No newline at end of file
diff --git a/samples/texts/2952501/page_4.md b/samples/texts/2952501/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..2305ba945beb48f78feba73270995fe76d668ef1
--- /dev/null
+++ b/samples/texts/2952501/page_4.md
@@ -0,0 +1,13 @@
+Figure 6: Two disjoint curves in $C_n(A)$ cannot encircle one another.
+
+**Lemma 2.13.** Let A be a planar rooted Eulerian triangulation. For a given vertex v at (oriented) distance at least $n + 1$ from the root, there is a unique curve in $C_n(A)$ that separates v from the root.
+
+Moreover, all curves in $C_n(A)$ are simple.
+
+*Proof.* First consider two disjoint curves in $C_n(A)$ that separate the same vertex $v$ from the origin. Necessarily, a geodesic path from the root to a vertex belonging to one of them would have to go through the other, and would thus have length at least $n + 1$, a contradiction (see Figure 6).
+
+Now, if two curves of $C_n(A)$ intersect at a vertex of type $n$, then by our resolution rule, they cannot go counterclockwise around the same region of A (see Figure 7), and, as explained before, these are precisely the regions they separate from the origin.
+
+This rule also implies that a curve $\mathcal{C}$ in $C_n(A)$ cannot go twice through the same type-$n$ vertex. Indeed, if that were the case, then $\mathcal{C}$ would separate from the origin vertices of oriented distance $n-1$ and less, so that any oriented geodesic from the origin to these vertices should be of length at least $n+1$ (see Figure 7). $\square$
+
+We define the **ball** $B_n(A)$ as the submap of $A$ obtained by keeping only the faces and edges of $A$ incident to at least one vertex at distance $n-1$ or less from the origin, cutting along the edges of type $n \to n+1$, and filling in the produced holes with simple faces (see Figure 8 for a local depiction of this procedure). Thus, in $B_n(A)$, for each closed curve $\mathcal{C} \in C_n(A)$, we have replaced all faces that $\mathcal{C}$ separates from the root by a single, simple face. In particular, if two faces of $A$ of type $n$ share a type-($n \to n+1$) edge, in $B_n(A)$ their respective type-($n \to n+1$) edges are not identified, so that their common type-($n+1$) vertex gives rise to two vertices in $B_n(A)$ (see Figure 9). Two type-$n$ faces $f, f'$ may also share a type-($n+1$) vertex $v$ but no edge: in that case, $v$ is also shared by faces of type $n+1$, so that we would need to add these faces and the type-($n \to n+1$) edges they share with $f$ and/or $f'$ in order to identify the type-($n+1$) vertices of $f$ and $f'$ into $v$. Note that, as $B_n(A)$ contains all the type-($n-1 \to n$) edges of $A$, type-$n$ vertices of $A$ are never duplicated in $B_n(A)$.
\ No newline at end of file
diff --git a/samples/texts/2952501/page_41.md b/samples/texts/2952501/page_41.md
new file mode 100644
index 0000000000000000000000000000000000000000..a18846578a26bb7521e0e9ed54acae31b920e00c
--- /dev/null
+++ b/samples/texts/2952501/page_41.md
@@ -0,0 +1,49 @@
+**Proposition 9.2.** There exists a constant $\mathbf{c}_1 \in [(\sqrt{3}/4)\mathbf{c}_0, 1]$ such that
+
+$$r^{-1} d_{\mathcal{L}}^{R}(\rho, \mathcal{L}_r) \xrightarrow[r \to \infty]{a.s.} \mathbf{c}_1.$$
+
+In the sequel, $\mathbf{c}_1$ will refer to the constant of Proposition 9.2.
+
+*Proof.* We proceed as for Proposition 8.1, by considering the layers $\mathcal{L}_{[m,n]}$, for integers $0 \le m \le n$. Such a layer corresponds to a strip in the Euclidean geometric realization $S(\mathcal{L})$ of $\mathcal{L}$: we can define the Riemannian distance $d_{\mathcal{L}_{[m,n]}}^R$ on the vertices of $\mathcal{L}_{[m,n]}$ by considering the shortest paths (**starting and ending at vertices**) in $S(\mathcal{L})$ that stay in this strip. Then, for any two vertices $v, v' \in \mathcal{L}_{[m,n]}$, we have $d_{\mathcal{L}_{[m,n]}}^R(v, v') \ge d_{\mathcal{L}}^R(v, v')$.
+
+Thus, as for the graph distance, if $m, n \ge 1$, and $x_m$ is the leftmost vertex $x$ of $\mathcal{L}_m$ such that $d_{\mathcal{L}}^R(\rho, \mathcal{L}_m) = d_{\mathcal{L}}^R(\rho, x)$, we have
+
+$$d_{\mathcal{L}}^{R}(\rho, \mathcal{L}_{m+n}) \leq d_{\mathcal{L}}^{R}(\rho, \mathcal{L}_{m}) + d_{\mathcal{L}_{[m,m+n]}}^{R}(x_m, \mathcal{L}_{m+n}).$$
+
+As in the case of the graph distance $d$, since $x_m$ is a function of $\mathcal{L}_{[0,m]}$ only, and since the layers in $\mathcal{L}$ are i.i.d., this yields the desired convergence.
+
+The upper bound on $\mathbf{c}_1$ is immediate; let us briefly explain how we obtain the lower bound. Fix $\varepsilon > 0$ and $\delta \in (0, 1/2)$. We have that, for $n$ large enough, outside of an event of probability less than $\delta$,
+
+$$|d_n(x, y) - \mathbf{c}_0 \vec{d}_n(x, y)| \leq \varepsilon n^{1/4} \quad \forall x, y \in V(\mathcal{T}_n),$$
+
+so that we get from Proposition 9.1:
+
+$$d_n^R(x, y) \geq \frac{\sqrt{3}}{4} \mathbf{c}_0 \vec{d}_n(x, y) - \varepsilon n^{1/4}.$$
+
+Now, there exists a constant $0 < C(\delta) < 1$, that depends only on $\delta$, such that, for $n$ large enough, outside of an event of probability less than $\delta$, a positive proportion of pairs of vertices of $\mathcal{T}_n$ satisfy
+
+$$\vec{d}_n(x, y) \geq C(\delta)n^{1/4}.$$
+
+Therefore, for all such pairs, we have
+
+$$d_n^R(x, y) \geq \left( \frac{\sqrt{3}}{4} \mathbf{c}_0 - \frac{\varepsilon}{C(\delta)} \right) \vec{d}_n(x, y),$$
+
+so that, necessarily,
+
+$$\mathbf{c}_1 \geq \frac{\sqrt{3}}{4} \mathbf{c}_0. \quad \square$$
+
+Retracing for $d^R$ the same arguments as the ones used for $d$ in Section 8, we deduce from Proposition 9.2 the following result:
+
+**Theorem 9.3.** Let $\mathcal{T}_n$ be a uniform random rooted Eulerian planar triangulation with $n$ black faces, and let $V(\mathcal{T}_n)$ be its vertex set. For every $\varepsilon > 0$, we have
+
+$$\mathrm{P}\left(\sup_{x,y \in V(\mathcal{T}_n)} |d_n^R(x,y) - \mathbf{c}_1 \vec{d}_n(x,y)| > \varepsilon n^{1/4}\right) \xrightarrow[n \to \infty]{} 0.$$
+
+This allows us to add a third scaling limit to the joint convergence of Theorem 3.1:
+
+**Corollary 9.4.** Let $(\mathbf{m}_\infty, D^*)$ be the Brownian map. We have the following joint convergences
+
+$$\begin{align*}
+& n^{-1/4} \cdot (V(\mathcal{T}_n), \vec{d}_n) \xrightarrow[n \to \infty]{(d)} (\mathbf{m}_\infty, D^*) \\
+& n^{-1/4} \cdot (V(\mathcal{T}_n), d_n) \xrightarrow[n \to \infty]{(d)} \mathbf{c}_0 \cdot (\mathbf{m}_\infty, D^*) \\
+& n^{-1/4} \cdot (S(\mathcal{T}_n), d_n^R) \xrightarrow[n \to \infty]{(d)} \mathbf{c}_1 \cdot (\mathbf{m}_\infty, D^*),
+\end{align*}$$
\ No newline at end of file
diff --git a/samples/texts/2952501/page_42.md b/samples/texts/2952501/page_42.md
new file mode 100644
index 0000000000000000000000000000000000000000..31e3f29ec4a9add38a28f17f4f8bd8590895fa6e
--- /dev/null
+++ b/samples/texts/2952501/page_42.md
@@ -0,0 +1,35 @@
+for the Gromov-Hausdorff distance on the space of isometry classes of compact metric spaces.
+
+Let us sketch very quickly the proof of Corollary 9.4: following the same steps as the ones we took for $d$ in Section 8, the result of Theorem 9.3 implies that the Brownian map is the scaling limit of the vertex set $V(\mathcal{T}_n)$ endowed with the distance induced by $S(\mathcal{T}_n)$, and not of $S(\mathcal{T}_n)$ itself. However, the Gromov-Hausdorff distance between $(S(\mathcal{T}_n), d^R)$ and $(V(\mathcal{T}_n), d^R)$ is at most $\sqrt{3}/4$ (considering $V(\mathcal{T}_n)$ as embedded into $S(\mathcal{T}_n)$), so that this convergence does extend to $S(\mathcal{T}_n)$.
+
+As explained in the introduction, the result of Corollary 9.4 allows us to make a more direct comparison between models of random maps as studied by probabilists, and models of 2D quantum gravity studied by theoretical physicists, such as Causal Dynamical Triangulations, as the latter models focus on the Euclidean geometric realization associated to some combinatorial maps.
+
+Note that the geometric argument in the proof of Proposition 9.1 works for any triangulation, and not just an Eulerian one. Thus, relying on the layer decomposition of usual triangulations of [13], we can prove in the same way as here that usual triangulations, equipped with the Riemannian metric, also converge to the Brownian map. A similar geometric argument should also work for quadrangulations, along with the layer decomposition of [21].
+
+## References
+
+[1] C. Abraham, *Rescaled bipartite planar maps converge to the Brownian map*, Ann. Inst. Henri Poincaré Probab. Stat. **52** (2016), no. 2, 575–595. MR-3498001
+
+[2] L. Addario-Berry and M. Albenque, *Convergence of odd-angulations via symmetrization of labeled trees*, arXiv:1904.04786
+
+[3] L. Addario-Berry and M. Albenque, *The scaling limit of random simple triangulations and random simple quadrangulations*, Ann. Probab. **45** (2017), no. 5, 2767–2825. MR-3706731
+
+[4] M. Albenque and J. Bouttier, *Constellations and multicontinued fractions: application to Eulerian triangulations*, 24th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2012), Discrete Math. Theor. Comput. Sci. Proc., AR, Assoc. Discrete Math. Theor. Comput. Sci., Nancy, 2012, pp. 805–816. MR-2958050
+
+[5] J. Ambjørn, A. Görlich, J. Jurkiewicz, and R. Loll, *Quantum gravity via causal dynamical triangulations*, Springer handbook of spacetime, Springer, Dordrecht, 2014, pp. 723–741. MR-3525043
+
+[6] O. Angel, *Growth and percolation on the uniform infinite planar triangulation*, Geom. Funct. Anal. **13** (2003), 935–974. MR-2024412
+
+[7] J. Bettinelli, E. Jacob, and G. Miermont, *The scaling limit of uniform random plane maps, via the Ambjørn-Budd bijection*, Electron. J. Probab. **19** (2014), no. 74, 16. MR-3256874
+
+[8] M. Bousquet-Mélou and G. Schaeffer, *Enumeration of planar constellations*, Adv. in Appl. Math. **24** (2000), no. 4, 337–368. MR-1761777
+
+[9] J. Bouttier and A. Carrance, *Enumeration of hypermaps with alternating boundaries*, arXiv:2012.14258
+
+[10] J. Bouttier, P. Di Francesco, and E. Guitter, *Planar maps as labeled mobiles*, Electron. J. Combin. **11** (2004), no. 1. MR-2097335
+
+[11] D. Burago, Y. Burago, and S. Ivanov, *A course in metric geometry*, Graduate Studies in Mathematics, vol. 33, American Mathematical Society, 2001. MR-1835418
+
+[12] A. Carrance, *Triangulations colorées aléatoires*, Ph.D. thesis, Université Claude Bernard Lyon 1, 2019.
+
+[13] N. Curien and J.-F. Le Gall, *First-passage percolation and local modifications of distances in random triangulations*, Ann. Sci. ENS **52** (2019), 631–701. MR-3982872
\ No newline at end of file
diff --git a/samples/texts/2952501/page_44.md b/samples/texts/2952501/page_44.md
new file mode 100644
index 0000000000000000000000000000000000000000..b10cbdf077098d0e1a2cd263f2feaf48c2aa573c
--- /dev/null
+++ b/samples/texts/2952501/page_44.md
@@ -0,0 +1,28 @@
+For any map or graph $G$, we will denote by $V(G)$ its vertex set.
+In this paper, we will come upon two specific types of maps:
+
+**Definition 2.5.** A **tree** is a connected graph with no cycle. A **plane tree** is a map $T$ that, as a graph, is a tree. Since $T$ has no cycle, it is necessarily a planar map.
+
+**Definition 2.6.** An **Eulerian triangulation** is a map whose faces all have degree 3, and whose faces can be properly bicolored, i.e., colored black and white so that white faces are only adjacent to black faces, and vice versa.
+
+We will also deal with **Eulerian triangulations with a boundary**, that is, maps with one distinguished face, such that all its other faces have degree 3, and these inner faces can be properly bicolored (i.e., colored in black and white, such that white faces are only adjacent to black faces or to the external face, and similarly for black faces).
+
+By convention, when we root an Eulerian triangulation with a boundary, we do so on an edge adjacent to the external face.
+
+## 2.2 Bijection with trees
+
+We consider here rooted, planar Eulerian triangulations. Bouttier, Di Francesco and Guitter [10] have established a bijection between this family of maps and a particular class of labeled trees, whose construction we now briefly recall and extend.
+
+Let $A$ be a rooted planar Eulerian triangulation. The orientation of the root edge of $A$ fixes a **canonical orientation** of all its edges, by requiring that orientations alternate around each vertex. By construction, edges around a given face are necessarily oriented either all clockwise, or all anti-clockwise (with respect to the plane embedding in which the face on the left of the root edge is the infinite one). This fixes the bicoloration of the faces of $A$, by setting for instance that clockwise faces are black, and anti-clockwise faces, white.
+
+From now on, any mention of orientation refers to this canonical orientation.
+
+Figure 1: A planar Eulerian triangulation with its canonical orientation, bicoloration and oriented geodesic distances.
+
+For any pair $(u, v)$ of vertices of $A$, we define the **oriented distance** $\vec{d}(u, v)$ from $u$ to $v$, as the minimal length of an oriented path from $u$ to $v$.
+
+Let us state a useful fact. Denoting by $d$ the usual graph distance, in any Eulerian triangulation, we always have:
+
+$$d \leq \vec{d} \leq 2d, \tag{2.1}$$
+
+as, in the worst case, the oriented distance forces a path to go through two edges of a triangle instead of just taking the third one.
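+
+Spelling out this factor of 2 (a short verification using only the canonical orientation described above): oriented paths are in particular paths, so $d \leq \vec{d}$. Conversely, let $u = u_0, u_1, \dots, u_{d(u,v)} = v$ be an unoriented geodesic. Each step is either an edge oriented $u_i \to u_{i+1}$, which we keep, or an edge oriented $u_{i+1} \to u_i$; in the latter case, since the edges of each face are cyclically oriented, the two other edges of a triangle incident to this edge give an oriented detour $u_i \to w \to u_{i+1}$ of length 2. Hence
+
+$$\vec{d}(u, v) \leq \sum_{i=0}^{d(u,v)-1} 2 = 2\, d(u, v).$$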
\ No newline at end of file
diff --git a/samples/texts/2952501/page_46.md b/samples/texts/2952501/page_46.md
new file mode 100644
index 0000000000000000000000000000000000000000..b46187462ea3c9df4f2f9433dd63e1bcf82ac5cb
--- /dev/null
+++ b/samples/texts/2952501/page_46.md
@@ -0,0 +1,19 @@
+face into a number of white faces. By construction, the clockwise sequence of labels
+around any of these white faces is of the form $0 \rightarrow 2 \rightarrow \dots \rightarrow 2 \rightarrow 1$, where all the labels
+between the first and last "2" are greater than or equal to 2, and all increments but the first
+are $\pm 1$. For each white face $F$ that is not already a triangle, and for each type-$(2 \rightarrow 3)$
+edge whose right side is adjacent to $F$, we create a black triangle of type 2 to the right of
+this edge, by adding edges between its vertices and the unique vertex labeled 1 around
+$F$. This induces a splitting of $F$ into smaller white faces, and we repeat the procedure
+again, until all labels are exhausted. This yields an Eulerian triangulation $A$ rooted at
+the $0 \rightarrow 1$ edge linking the origin to the root of $T$ (see Figure 3).
+
+Figure 3: The inverse construction of the triangulation of Figure 1 from the labeled tree.
+
+In the sequel, it will be more convenient to deal with trees whose labels are not
+necessarily positive. For that purpose, we point $A$ at some vertex $v_*$, and consider the
+new labeling $\ell$ on $V(A)$ given by:
+
+$$\ell(u) = \vec{d}(v_*, u) - \vec{d}(v_*, \rho).$$
+
+Let us proceed with this new labeling as we did with the oriented geodesic distance: starting from the triangulation $A$, in each black triangle of type $n+1$ (where now the integer $n$ may be nonpositive), we only keep the edge of type $n+1 \rightarrow n+2$. This
\ No newline at end of file
diff --git a/samples/texts/2952501/page_47.md b/samples/texts/2952501/page_47.md
new file mode 100644
index 0000000000000000000000000000000000000000..8af4da2637412cd5340936ec462a86be52d32495
--- /dev/null
+++ b/samples/texts/2952501/page_47.md
@@ -0,0 +1,25 @@
+construction now gives a correspondence between pointed planar Eulerian triangulations with $n$ black triangles, and well-labeled trees with $n$ edges with no constraint on the label signs, that still maps the vertices of the triangulation, minus the distinguished vertex, to those of the tree. Let us now pay attention to the rooting: since the distinguished vertex that we add to the tree is no longer the origin of the triangulation (and, conversely, the root corner of the tree is no longer at a minimal label), we need additional information alongside each object to know how to root the other. Thus, in the triangulation-to-tree direction, we start from a couple $(A, \varepsilon)$, with $\varepsilon \in \{0, 1\}$: depending on the value of $\varepsilon$, we root the tree $T$ at the edge remaining from the (black) root face of $A$, either with its original direction or the reverse one, see Figure 4. Note that we have to shift the labels of $T$ by an integer $L(A)$ between $-2$ and $2$ so that the root corner has label 1; the labeling of the vertices of the tree is thus:
+
+$$l(u) = \vec{d}(v_*, u) - \vec{d}(v_*, \rho) + L(A).$$
+
+Conversely, in the tree-to-triangulation direction, we start from a couple $(T, \delta)$, with $\delta \in \{0, 1, 2\}$: depending on the value of $\delta$, the root edge of $A$ is either the $(n-1) \to n$, the $n \to (n+1)$, or the $(n+1) \to (n-1)$ edge of the black face adjacent to the root edge of $T$ (where this face is itself of type $n$).
+
+It is straightforward to prove, similarly to the case of $\varphi_n$, that this new mapping is a bijection as well:
+
+**Proposition 2.10.** Let us denote by $\bar{\mathcal{T}}_n$ the set of pointed, rooted planar Eulerian triangulations with $n$ faces, and by $\mathbb{T}_n$ the set of well-labeled trees with $n$ edges. The mapping $\psi_n$ detailed above from $\bar{\mathcal{T}}_n \times \{0, 1\}$ to $\mathbb{T}_n \times \{0, 1, 2\}$ is a bijection.
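+
+An immediate consequence of Proposition 2.10, worth recording as a sanity check, is the relation it forces between the cardinalities of the two sides:
+
+$$2\,\bigl|\bar{\mathcal{T}}_n\bigr| = 3\,\bigl|\mathbb{T}_n\bigr|.$$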
+
+Consider now the random variable $(\mathcal{T}_n^\bullet, \varepsilon_n)$, where $\mathcal{T}_n^\bullet$ is picked uniformly at random in $\bar{\mathcal{T}}_n$, and $\varepsilon_n$ is uniform over $\{0, 1\}$. Since these additional decorations of $\psi_n$ only influence the rooting of the obtained map, the image of $(\mathcal{T}_n^\bullet, \varepsilon_n)$ can be written as $(\mathcal{T}_n, \delta_n)$, where $\mathcal{T}_n$ is uniform over $\mathbb{T}_n$, and $\delta_n$ is uniform over $\{0, 1, 2\}$.
+
+Furthermore, note that, if $T = \psi_n(A)$, for every vertex $u$ of $T$, the oriented distance from $v_*$ to $u$ in $A$ is given by:
+
+$$\vec{d}(v_*, u) = l(u) - \min_{v \in V(T)} l(v) + 1. \quad (2.2)$$
+
+To get more general information on the oriented distances in $A$ from the labels of $T$, we need a bit of additional notation.
+
+First observe that, with the construction of $A$ from $T$, a corner $c$ of $T$ is always incident in $A$ to an edge oriented from the first corner $c'$ encountered when going anticlockwise around the unique face of $T$, starting at $c$, and that has label $l(c) - 1$ (by convention, we define the label of a corner in $T$ to be the label of the associated vertex). Indeed, either this corner was already adjacent to $c$ in $T$, or we create an edge between them when adding a black triangle to the right of the edge of type $l(c) \to l(c) + 1$ that starts at $c$.
+
+We call $c'$ the **predecessor** of $c$, and denote it by $p(c)$. (The predecessor of a corner of minimal label is naturally the corner of the origin to which it is linked in the first step of the construction.) We also call $p^k(c)$ the $k$-th predecessor of $c$, whenever it is defined.
+
+Let us denote by $l_0$ the minimal label of the vertices of $T$. For a corner $c$ of $T$, the edge $p(c) \to c$ in $A$ is obviously of type $l(c) - l_0 \to l(c) - l_0 + 1$ (for the oriented distance from $v_*$). This implies that the path from $v_*$ to $c$ going through all its predecessors: $v_* \to p^{l(c)-l_0}(c) \to \dots \to p(c) \to c$ is a geodesic for $\vec{d}$ in $A$.
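+
+This is consistent with (2.2): each predecessor step decreases the label by exactly 1, so the corner of minimal label along the chain is the $(l(c)-l_0)$-th predecessor of $c$, and counting the edges of the path above gives
+
+$$\vec{d}(v_*, u) = \underbrace{1}_{\text{edge from } v_*} + \underbrace{l(c) - l_0}_{\text{predecessor steps}} = l(u) - \min_{v \in V(T)} l(v) + 1.$$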
+
+For any pair of corners $c, c'$ in $T$, we denote by $[c, c']$ the set of corners of $T$ encountered when starting from $c$, going anticlockwise around $T$, and stopping at $c'$. The property (2.2) yields the following bound on oriented distances in $A$:
\ No newline at end of file
diff --git a/samples/texts/2952501/page_48.md b/samples/texts/2952501/page_48.md
new file mode 100644
index 0000000000000000000000000000000000000000..7b5ea29fff825390168fa1c932a96fdca30fcfb0
--- /dev/null
+++ b/samples/texts/2952501/page_48.md
@@ -0,0 +1,7 @@
+Figure 4: Example of the mapping from a pointed, rooted planar Eulerian triangulation to a well-labeled tree: top left, the original triangulation $A$; top right, the triangulation with the shifted labels. To determine the rooting of the associated tree $T$, we need an additional parameter $\varepsilon \in \{0, 1\}$. Choosing it to be 0, we root $T$ at the edge remaining from the root face of $A$, with its original direction: bottom left, the tree with the labeling from top right, and bottom right, the shifted labels that respect the condition that the root corner has label 1. Here, the value of the parameter $\delta$ is 1, as the root edge of $A$ has type $3 \to 4$, and its root face has type 3.
+
+**Proposition 2.11.** Let $c, c'$ be two corners of $T$, with corresponding vertices $u, v$. Then
+
+$$ \vec{d}(u, v) \le 2 \left( l(u) + l(v) - 2 \min_{c'' \in [c, c']} l(c'') + 2 \right). $$
+
+*Proof.* Let $m = \min_{c'' \in [c, c']} l(c'')$, and let $c''$ be the first corner in $[c, c']$ such that $l(c'') = m$. Then $c''$ is the $(l(c)-m)$-th predecessor of $c$. Moreover, by definition, $p(c'')$ does not belong to $[c, c']$, so that it is also the $(l(c') - m + 1)$-th predecessor of $c'$. Thus, the predecessor geodesic $p(c'') \to c'' \to \dots \to c$, concatenated with the similar geodesic $p(c'') \to \dots \to c'$, is a simple path in $A$ made of $l(c) + l(c') - 2m + 2$ edges. However, part of it is not oriented from $c$ to $c'$, so by (2.1) we lose a multiplicative factor of 2 when deducing a bound on the oriented distance from $u$ to $v$. □
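+
+Unpacking the edge count in the proof above: the corner $p(c'')$ has label $m - 1$, so the two predecessor chains meeting at it have lengths $l(c) - (m-1)$ and $l(c') - (m-1)$, whence
+
+$$\bigl(l(c) - m + 1\bigr) + \bigl(l(c') - m + 1\bigr) = l(c) + l(c') - 2m + 2$$
+
+edges in total; applying $\vec{d} \leq 2d$ from (2.1) to this unoriented path yields the stated bound.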
\ No newline at end of file
diff --git a/samples/texts/300903/page_1.md b/samples/texts/300903/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f7103258ae3bfba14036c7e14da81379490aa66
--- /dev/null
+++ b/samples/texts/300903/page_1.md
@@ -0,0 +1,17 @@
+# HOMOLOGICAL STABILITY FOR THE RIBBON HIGMAN–THOMPSON GROUPS
+
+RACHEL SKIPPER AND XIAOLEI WU
+
+**ABSTRACT.** We generalize the notion of asymptotic mapping class groups and allow them to surject to the Higman–Thompson groups, answering a question of Aramayona and Vlamis in the case of the Higman–Thompson groups. When the underlying surface is a disk, these new asymptotic mapping class groups can be identified with the ribbon and oriented ribbon Higman–Thompson groups. We use this model to prove that the ribbon Higman–Thompson groups satisfy homological stability, providing the first homological stability result for dense subgroups of big mapping class groups. Our result can also be treated as an extension of Szymik–Wahl’s work on homological stability for the Higman–Thompson groups to the surface setting.
+
+## INTRODUCTION
+
+The family of Thompson’s groups and the many groups in the extended Thompson family have long been studied for their many interesting properties. Thompson’s group $F$ is the first example of a type $F_\infty$, torsion-free group with infinite cohomological dimension [BG84] while Thompson’s groups $T$ and $V$ provided the first examples of finitely presented simple groups. More recently the braided and labeled braided Higman–Thompson groups have garnered attention in part due to their connections with big mapping class groups [Bro06, Deh06, AC20, SW]. In particular, Thumann constructed the ribbon version of Thompson’s group $V$ and proved that it is of type $F_\infty$ [Thu17]. The authors studied the ribbon Higman–Thompson groups $RV_{d,r}$ and their oriented version $RV_{d,r}^+$ in [SW]. In fact, we identified them with the so-called labeled braided Higman-Thompson groups and proved that they are all of type $F_\infty$.
+
+The homology of Thompson's groups has also been a well-studied topic. Brown and Geoghegan computed the homology of $F$ in [BG84]; Ghys and Sergiescu calculated the homology of $T$ in [GS87]. More recently, Szymik and Wahl showed that $V$ is acyclic [SW19], answering a question due to Brown [Bro92]. One of the key ingredients of their proof was showing that the Higman–Thompson groups $V_{d,1} \hookrightarrow V_{d,2} \hookrightarrow \dots \hookrightarrow V_{d,r} \hookrightarrow \dots$ satisfy homological stability for any fixed $d$. Recall that a family of groups $G_1 \hookrightarrow G_2 \hookrightarrow \dots \hookrightarrow G_n \hookrightarrow \dots$ is said to satisfy homological stability if the induced maps $H_i(G_n) \to H_i(G_{n+1})$ are isomorphisms for sufficiently large $n$. Classical examples of families of groups which satisfy homological stability include symmetric groups [Nak61], general linear groups [vdK80] and mapping class groups of surfaces [Har85].
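+
+To fix ideas, here is the classical instance for symmetric groups, with the commonly quoted stable range (Nakaoka's theorem [Nak61]): the standard inclusions $S_n \hookrightarrow S_{n+1}$ induce isomorphisms
+
+$$H_i(S_n; \mathbb{Z}) \xrightarrow{\ \cong\ } H_i(S_{n+1}; \mathbb{Z}) \qquad \text{for all } n \geq 2i.$$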
+
+In the present paper, we extend Szymik and Wahl’s work to the class of ribbon Higman–Thompson groups. To accomplish this, we first build a geometric model for the ribbon Higman–Thompson groups using Funar–Kapoudjian’s asymptotic mapping class groups [FK04]. These groups are defined using a rigid structure on a surface minus a Cantor set and they sit naturally inside the ambient big mapping class groups. More recently, Aramayona and Funar [AF17] generalized the definition to surfaces with nonzero genus. In fact, Aramayona and Funar showed that the half-twist version of their asymptotic mapping class group (cf. Definition 3.14) is dense in the big mapping class group [AF17, Theorem 1.3]. Another surprising result of Funar and Neretin says that the half-twist asymptotic mapping class group of a closed surface minus a
+
+*Date:* May 2021.
+2010 Mathematics Subject Classification. 20F36, 57M07, 19D23, 20J05.
+Key words and phrases. Braided Higman–Thompson groups, ribbon Higman–Thompson groups, asymptotic mapping class groups, big mapping class groups, finiteness property, homological stability.
\ No newline at end of file
diff --git a/samples/texts/300903/page_10.md b/samples/texts/300903/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..7448442a6e5bc24722c1de4fdc14abc832330864
--- /dev/null
+++ b/samples/texts/300903/page_10.md
@@ -0,0 +1,25 @@
+in the interior of the surface bounded by $\alpha$. Then the surface bounded by $\alpha$ and $A$ must be an annulus. From here one can produce an arc connecting the base point 0 to a point in $A$. Together with $A$, this provides the lollipop.
+
+Finally, we argue that $g$ is injective. Suppose $L_1$ and $L_2$ are two lollipops such that $g(L_1)$ and $g(L_2)$ are isotopic, denote the isotopy by $f$. By the isotopy extension theorem (see for example [FM11, Proposition 1.11]) there exists an isotopy $F: D_{d,r}^{\infty} \times [0, 1] \rightarrow D_{d,r}^{\infty}$ such that $F|_{D_{d,r}^{\infty} \times 0} = id_{D_{d,r}^{\infty}}$ and $F|_{g(L_1) \times [0,1]} = f$. In particular $F|_{D_{d,r}^{\infty} \times 1}$ maps the almost admissible loop $g(L_1)$ to the almost admissible loop $g(L_2)$. Hence $L_1$ is isotoped through $F$ to a lollipop which lies in a small neighborhood of $L_2$ and is bounded by the loop $g(L_2)$. Therefore, one can then isotope $L_1$ to $L_2$. $\square$
+
+We now have the following definition of the lollipop complex.
+
+**Definition 4.21.** The lollipop complex $L_r^\infty(D, D_{d,1}^\infty)$ has lollipops as vertices, and $p+1$ lollipops $L_0, L_1, \dots, L_p$ form a $p$-simplex if they are pairwise disjoint outside the base point 0 and there exists at least one admissible loop which does not lie inside the disks bounded by the $L_i$s.
+
+The following lemma is immediate from Lemma 4.20.
+
+**Lemma 4.22.** The complex $L_r^\infty(D, D_{d,1}^\infty)$ is isomorphic to $T_r^\infty(D, D_{d,1}^\infty)$ as a simplicial complex.
+
+**Lemma 4.23.** Given a $p$-simplex $\sigma$ in $L_r^\infty(D, D_{d,1}^\infty)$, its link $Lk(\sigma)$ is isomorphic to $L_{r_\sigma}^\infty(D, D_{d,1}^\infty)$ for some $r_\sigma > 0$.
+
+**Proof.** By Lemma 4.22, we can just prove the lemma for $T_r^\infty(D, D_{d,1}^\infty)$. Let $\alpha_0, \alpha_1, \dots, \alpha_p$ be the vertices of $\sigma$, which are almost admissible loops. Up to isotopy, we can assume they are pairwise disjoint except at the basepoint 0. Now let $C$ be the complement surface of $\sigma$, whose based boundary is the concatenation of $\alpha_p, \alpha_{p-1}, \dots, \alpha_0$ and $\partial D$. The surface $C$ has a naturally induced $d$-rigid structure. In particular, $C$ is asymptotically rigidly homeomorphic to $D_{d,r_\sigma}^\infty$ for some $r_\sigma > 0$. Thus the link $Lk(\sigma)$ is isomorphic to $T_{r_\sigma}^\infty(D, D_{d,1}^\infty)$. $\square$
+
+Let us summarize the relationships we have so far between our various complexes by the following diagram.
+
+In Proposition 4.27, we will deduce the connectivity of $U_r^\infty(D, D_{d,1}^\infty)$ using the connectivity of the lollipop complex $L_r^\infty(D, D_{d,1}^\infty)$ by applying a bad simplices argument. Our goal now is to show that $L_r^\infty(D, D_{d,1}^\infty)$ is highly connected. Let us make some definitions first.
+
+**Definition 4.24.** Given any lollipop $L: (A, 0) \rightarrow (D_{d,r}^\infty, 0)$, we define the *free height* $\mathfrak{h}_L$ to be the minimal number $m$ such that $L([1, 2])$ is contained in $D_{d,r,m}$ up to free isotopy. We also define the height of an admissible loop to be the minimal number $m$ such that it is contained in $D_{d,r,m}$.
+
+To analyze the connectivity of $L_r^\infty(D, D_{d,1}^\infty)$, we need the following lemma which is a direct translation of [SW19, Lemma 3.8].
+
+**Lemma 4.25.** For any $r,p,N \ge 1$, there exists a number $\mathfrak{h}_{r,p,N} \ge 0$, such that for any $p$-simplex $\sigma$ in $L_r^\infty(D, D_{d,1}^\infty)$, and any $\mathfrak{h} \ge \mathfrak{h}_{r,p,N}$, there are at least $N$ lollipops of free height $\mathfrak{h}$ in $L_r^\infty(D, D_{d,1}^\infty)$ that are in $Lk(\sigma)$.
\ No newline at end of file
diff --git a/samples/texts/300903/page_11.md b/samples/texts/300903/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..528f49ed28ccac0a3124855349d1575252ac2ad0
--- /dev/null
+++ b/samples/texts/300903/page_11.md
@@ -0,0 +1,17 @@
+**Proof.** Note that for any vertex $L$ in $L_r^\infty(D, D_{d,1}^\infty)$, $L|_{[1,2]}$ is an admissible loop in $\oplus_r D_{d,1}^\infty$. Recall the function $q$ defined in Definition 3.16 which maps an admissible loop to an edge midpoint in the tree $\mathcal{T}_{d,r}$. Since each edge has a unique descendent vertex, we can instead map the loop to this vertex which lies in the forest $\mathcal{F}_{d,r}$. Using this connection, we can now choose $\mathfrak{h}_{r,p,N}$ to be the same as in [SW19, Lemma 3.8]. Then we have at least $N$ admissible loops of height $\mathfrak{h} \ge \mathfrak{h}_{r,p,N}$ which lie in the complement of the surface corresponding to $\sigma$ in $\oplus_r D_{d,1}^\infty$. Connecting each of these admissible loops to the base point in the complement surface, we get a set of lollipops in $Lk(\sigma)$. $\square$
+
+We now show that the complex $L_r^\infty(D, D_{d,1}^\infty)$ is in fact contractible. The idea of the proof is similar to that of [SW19, Proposition 3.1], but with significantly more technical difficulties.
+
+**Proposition 4.26.** *The complex $L_r^\infty(D, D_{d,1}^\infty)$ is contractible for any $r \ge 1$.*
+
+**Proof.** The complex $L_r^\infty(D, D_{d,1}^\infty)$ is obviously non-empty. We will show by induction that for all $k \ge 0$, any map $S^k \to L_r^\infty(D, D_{d,1}^\infty)$ is null-homotopic. Assume $L_r^\infty(D, D_{d,1}^\infty)$ is $(k-1)$-connected.
+
+Let $f: S^k \to L_r^\infty(D, D_{d,1}^\infty)$ be a map. As usual, we can assume that the sphere $S^k$ comes with a triangulation such that the map $f$ is simplicial. We first use Lemma 1.7 to make the map $f$ simplexwise injective. For that we need that, for every $p$-simplex $\sigma$ in $L_r^\infty(D, D_{d,1}^\infty)$, its link $Lk(\sigma)$ is $(k-p-2)$-connected. But by Lemma 4.23, $Lk(\sigma)$ can be identified with $L_{r_\sigma}^\infty(D, D_{d,1}^\infty)$ for some $r_\sigma \ge 1$, which is $(k-1)$-connected by the induction hypothesis. Thus by Lemma 1.7, we can homotope $f$ to a map that is simplexwise injective.
+
+Now since $S^k$ is a finite simplicial complex, the free height of the vertices of $S^k$ has a maximum value. We first want to homotope $f$ to a new map such that all the vertices have free height at least $\mathfrak{h} = \mathfrak{h}_{r,k,N}$, where $N = v_0 + v_1 + \dots + v_k + 2$, $v_i$ is the number of $i$-simplices of $S^k$, and $\mathfrak{h}_{r,k,N}$ is determined by Lemma 4.25. For that we use a bad simplices argument.
+
+We call a simplex of the sphere $S^k$ bad if all of its vertices are mapped to vertices in $L_r^\infty(D, D_{d,1}^\infty)$ that have free height less than $\mathfrak{h}$. We will modify $f$ by removing the bad simplices inductively, starting with those of the highest dimension. Let $\sigma$ be a bad simplex of maximal dimension $p$ among all bad simplices. We will modify $f$ and the triangulation of $S^k$ in the star of $\sigma$ in a way that does not add any new bad simplices. In the process, we will increase the number of vertices by at most 1 in each step, and not at all if $\sigma$ is a vertex. This implies that, after doing this for all bad simplices, we will have increased the number of vertices of the triangulation of $S^k$ by at most $v_1 + \dots + v_k$. As $S^k$ originally had $v_0$ vertices, at the end of the process its new triangulation will have at most $v = v_0 + v_1 + \dots + v_k$ vertices. There are two cases.
+
+**Case 1:** $p=k$. If the bad simplex $\sigma$ has the full dimension $k$ of the sphere $S^k$, then its image $f(\sigma)$ has a complement loop which bounds a surface $C$ asymptotically rigidly homeomorphic to $D_{d,r_\sigma}^\infty$ for some $r_\sigma \ge 1$ by Lemma 4.23. Now we can choose a lollipop $y$ in $C$ with free height at least $\mathfrak{h}+1$. In particular $f(\sigma) \cup \{y\}$ forms a $(k+1)$-simplex. We can then add a vertex $a$ in the center of $\sigma$, replacing $\sigma$ by $\partial\sigma * a$ and replacing $f$ by the map $(f|_{\partial\sigma}) * (a \mapsto y)$ on $\partial\sigma * a$. This map is homotopic to $f$ through the simplex $f(\sigma) \cup \{y\}$. We have added a single vertex to the triangulation. Because $y$ has free height $\mathfrak{h}+1$, we have not added any new bad simplices, and we have removed one bad simplex, namely $\sigma$. Moreover, $f$ remains simplexwise injective.
+
+**Case 2:** $p < k$. If the bad simplex $\sigma$ is a $p$-simplex for some $p < k$, by maximality of its dimension, the link of $\sigma$ is mapped to vertices of free height at least $\mathfrak{h}$ in the complement of the subsurface $f(\sigma)$. The simplex $\sigma$ has $p+1$ vertices whose images are pairwise disjoint outside the base point up to based isotopy. By Lemma 4.25 and our choice of $\mathfrak{h}$, there are at least $N = v+2$ lollipops $y_1, \dots, y_N$ of free height $\mathfrak{h}$ such that each $f(\sigma) \cup \{y_i\}$ forms a $(p+1)$-simplex. As there are fewer vertices in the link than in the whole sphere $S^k$, and $S^k$ has at most $v$ vertices, by the pigeonhole principle, the loop parts of the vertices in $f(Lk(\sigma))$ are contained in at most $v$ punctured disks bounded by the corresponding admissible loops with free height $\mathfrak{h}$. As $N = v+2$, there are at least two of the above vertices $y_i$ and $y_j$ of free height $\mathfrak{h}$ such that
\ No newline at end of file
diff --git a/samples/texts/300903/page_12.md b/samples/texts/300903/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..2953f87148441b274038103a69bdf65c34ba74d8
--- /dev/null
+++ b/samples/texts/300903/page_12.md
@@ -0,0 +1,23 @@
+standard ternary Cantor set is in fact isomorphic to its smooth mapping class group [FN18, Corollary 2]. In [AV20, Question 5.37], the following question was raised by Aramayona and Vlamis.
+
+**Question.** Are there other geometrically defined subgroups of $\text{Map}(\Sigma_g)$ which surject to other interesting classes of subgroups of the homeomorphism group of the Cantor set, such as the Higman-Thompson groups, Neretin groups, etc.?
+
+We proceed to construct two new classes of asymptotic mapping class groups, one of which answers their question in the case of Higman-Thompson groups while the other family surjects to the symmetric Higman-Thompson groups $V_{d,r}(\mathbb{Z}/2\mathbb{Z})$.
+
+**Theorem (3.17, 3.20).** Let $\Sigma$ be any compact surface and $C$ be a Cantor set which lies in the interior of a disk in $\Sigma$. Then the mapping class group $\text{Map}(\Sigma \setminus C)$ contains the following two families of dense subgroups: the asymptotic mapping class groups $\mathcal{B}V_{d,r}(\Sigma)$, which surject to the Higman-Thompson groups $V_{d,r}$, and the half-twist asymptotic mapping class groups $\mathcal{H}V_{d,r}(\Sigma)$, which surject to the symmetric Higman-Thompson groups $V_{d,r}(\mathbb{Z}/2\mathbb{Z})$.
+
+When $\Sigma$ is the disk, we identify $\mathcal{H}V_{d,r}(\Sigma)$ with the ribbon Higman-Thompson group $RV_{d,r}$ and $\mathcal{B}V_{d,r}(\Sigma)$ with the oriented ribbon Higman-Thompson group $RV_{d,r}^+$ (cf. Theorem 3.24). Using this geometric model for the ribbon Higman-Thompson groups, we are able to prove the following.
+
+**Theorem (4.30, 4.31).** Suppose $d \ge 2$. Then the inclusion maps induce isomorphisms
+
+$$ \iota_{R,d,r}: H_i(RV_{d,r}, M) \to H_i(RV_{d,r+1}, M) $$
+
+in homology in all dimensions $i \ge 0$, for all $r \ge 1$ and for all $H_1(RV_{d,\infty})$-modules $M$. The same also holds for the oriented ribbon Higman-Thompson groups $RV_{d,r}^+$.
+
+To the best of our knowledge, this is the first homological stability result for dense subgroups of big mapping class groups. Our proof uses a convenient recent framework given by Randal-Williams and Wahl [RWW17]. The core of the proof is similar to [SW19], but with new technical difficulties arising from infinite-type surface topology. We hope our result here can be further used to calculate the homology of ribbon Higman-Thompson groups and shed light on the question of whether braided $V$ is acyclic.
+
+**Outline of paper.** In Section 1, we describe the connectivity tools that will be necessary for the remainder of the paper. In Section 2, we introduce the definition of the Higman-Thompson, ribbon Higman-Thompson, and oriented ribbon Higman-Thompson groups, using paired forest diagrams to define the elements. In Section 3, we generalize the notion of asymptotic mapping class groups and allow them to surject to the Higman-Thompson groups. Finally, in Section 4, we prove homological stability for the ribbon Higman-Thompson groups and their oriented version.
+
+**Notation and convention.** All surfaces in this paper are assumed to be connected and orientable unless otherwise stated. Given a simplicial complex $X$ and a cell $\sigma \in X$, we denote the link of $\sigma$ in $X$ by $Lk_X(\sigma)$ (resp. the star of $\sigma$ by $\text{St}_X(\sigma)$). When the situation is clear, we quite often omit $X$ and simply denote the link by $Lk(\sigma)$ and the star by $\text{St}(\sigma)$. We also use the convention that $(-1)$-connected means non-empty and that every space is $(-2)$-connected. In particular, the empty set is $(-2)$-connected. Finally, we adopt the convention that elements in groups are multiplied from left to right.
+
+**Acknowledgements.** The first part of this project was done while the first author was a visitor in the Unité de mathématiques pures et appliquées at the ENS de Lyon and during a visit to the University of Bonn. She thanks them for their hospitality. She was also supported by the GIF, grant I-198-304.1-2015, “Geometric exponents of random walks and intermediate growth groups” and NSF DMS-2005297 “Group Actions on Trees and Boundaries of Trees”.
\ No newline at end of file
diff --git a/samples/texts/300903/page_13.md b/samples/texts/300903/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..8fb8274e6e9bfc2c71af5695c812a910a9d27e92
--- /dev/null
+++ b/samples/texts/300903/page_13.md
@@ -0,0 +1,11 @@
+FIGURE 7. Replacing $\beta$ by $\beta'$ to reduce the number of intersection points with $y_i$.
+
+any loop parts of vertices in $f(\mathrm{Lk}(\sigma))$ are disjoint from the loop parts of $y_i$ and $y_j$. We can further assume that the arc parts of $y_i$ and $y_j$ never intersect any loop part of the vertices in $f(\mathrm{Lk}(\sigma))$. Up to replacing the loop parts of $y_i$ and $y_j$ with admissible loops lying inside the disks bounded by the loop parts of $y_i$ and $y_j$ (note that this may increase the free height of $y_i$ and $y_j$), we can further assume that the arc parts of vertices in $f(\mathrm{Lk}(\sigma))$ are disjoint from the loop parts of $y_i$ and $y_j$. But unlike the situation in the proof of [SW19, Proposition 3.1], a new problem we are facing here is that the arc parts of $y_i$ or $y_j$ might intersect the arc parts of the vertices in $f(\mathrm{Lk}(\sigma))$, even up to isotopy. In particular, given a simplex $\tau$ lying in the link of $\sigma$, $f(\sigma) \cup f(\tau) \cup \{y_i\}$ does not necessarily form a simplex now.
+
+To handle this, we apply the mutual link trick (cf. Lemma 1.8) to remove the intersections of $f(\mathrm{Lk}(\sigma))$ with $y_i$ via a sequence of homotopies. In the process, we will only modify $f$ on $\mathrm{Lk}(\sigma)$, and the new map still maps $\mathrm{Lk}(\sigma)$ to $\mathrm{Lk}_{L_r^\infty(D,D_{d,1}^\infty)}(f(\sigma))$. Recall that $f$ is simplexwise injective. Up to isotopy, we can further choose representatives for vertices in $f(\mathrm{Lk}(\sigma))$ such that the intersection points of vertices in $f(\mathrm{Lk}(\sigma))$ with $y_i$ are isolated. Moreover, we assume the number of intersection points is minimal for each vertex in $f(\mathrm{Lk}(\sigma))$. Now choose the intersection point $x_0$ in the arc $y_i([0, 1])$ that is closest to $y_i(1)$, and denote the corresponding lollipop by $\beta$; it is the image of some vertex $b \in \mathrm{Lk}(\sigma)$. We choose $\beta'$ to be a variation of $\beta$: it coincides with $\beta$ except near the intersection point with $y_i$, where we replace it by an arc going around the loop part of $y_i$. See Figure 7 for a picture of this. Now we apply Lemma 1.8, for which we need to check the following two conditions:
+
+(1) $f(\mathrm{Lk}_{S^k}(b)) \le \mathrm{Lk}_{L_r^\infty(D, D_{d,1}^\infty)}(\beta')$. This follows from our definition of $\beta'$: if a vertex $v$ of $f(\mathrm{Lk}_{S^k}(b))$ is disjoint from $\beta$, then since the intersection point $x_0$ is the one closest to $y_i(1)$ and $v$ is disjoint from the loop part of $y_i$, the lollipop $\beta'$ is also disjoint from $v$.
+
+(2) $\mathrm{Lk}(\beta) \cap \mathrm{Lk}(\beta')$ is $(k-1)$-connected. The lollipops $\beta$ and $\beta'$ together bound a disk which contains the loop part of $\beta$ and $y_i$. In any event, the complement of these is a surface asymptotically rigidly homeomorphic to some surface $D_{d,r'}^\infty$ for some $r' \ge 1$. By our induction hypothesis, it is $(k-1)$-connected.
+
+Now Lemma 1.8 says we can homotope $f$ to a new map such that $f(b) = \beta'$ and $f(\mathrm{Lk}(\sigma))$ has fewer intersection points with $y_i$. Iterating this process, we end up with a simplexwise injective map $f$ such that every vertex of $f(\mathrm{Lk}(\sigma))$ intersects $y_i$ only at the base point. In particular, for any $\tau \in \mathrm{Lk}(\sigma)$, the union $f(\sigma) \cup f(\tau) \cup \{y_i\}$ forms a simplex in $L_r^\infty(D, D_{d,1}^\infty)$.
\ No newline at end of file
diff --git a/samples/texts/300903/page_14.md b/samples/texts/300903/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..61c7ff5027aeeb63144d1793337ade332df28b7e
--- /dev/null
+++ b/samples/texts/300903/page_14.md
@@ -0,0 +1,27 @@
+We can then replace $f$ inside the star
+
+$$ \mathrm{St}(\sigma) = \mathrm{Lk}(\sigma) * \sigma \simeq S^{k-p-1} * D^p $$
+
+by the map $(f|_{\mathrm{Lk}(\sigma)}) * (a \mapsto y_i) * (f|_{\partial\sigma})$ on
+
+$$ \mathrm{Lk}(\sigma) * a * \partial\sigma \simeq S^{k-p-1} * D^0 * S^{p-1}, $$
+
+which agrees with $f$ on the boundary $\mathrm{Lk}(\sigma) * \partial\sigma$ of the star, and is homotopic to $f$ through the map $(f|_{\mathrm{Lk}(\sigma)}) * (a \mapsto y_i) * (f|_{\sigma})$ defined on
+
+$$ \mathrm{Lk}(\sigma) * a * \sigma \simeq S^{k-p-1} * D^0 * D^p. $$
+
+Now $\mathrm{Lk}(\sigma) * a * \partial\sigma$ has exactly one extra vertex $a$ compared to the star of $\sigma$, unless $\sigma$ is just a vertex, in which case its boundary is empty and it has the same number of vertices. As $y_i$ has height at least $\hbar$, we have not added any new bad simplices. Hence we have reduced the number of bad simplices by one by removing $\sigma$.
+
+By induction, we can now assume that there are no bad simplices for $f$ with respect to a triangulation with at most $v$ vertices. With this assumption, we want to cone off $f$ just as we coned off the links in the above argument. We have more than $N = v + 2$ vertices of free height $\hbar$ in $L_r^\infty(D, D_{d,1}^\infty)$, and at most $v$ vertices in the sphere. The loop parts of these vertices are admissible loops of height at least $\hbar$. By the pigeonhole principle, there are at least two lollipops $z_i$ and $z_j$ of free height $\hbar$ such that the punctured disks bounded by their loop parts are disjoint from the punctured disk bounded by any loop part of the lollipops in the vertices of $f(S^k)$. Just as before, we can further assume that the arc parts of $z_i$ and $z_j$ never intersect any loop part of the vertices in $f(S^k)$, and that the arc parts of vertices in $f(S^k)$ are disjoint from the loop parts of $z_i$ and $z_j$. But the same problem appears again, as we want the vertices of $f(S^k)$ to be disjoint from the whole lollipop $z_i$. To handle this, we apply Lemma 1.8 again, and the same proof as before implies that we can homotope $f$ so that its image is disjoint from $z_i$. In particular, $f(S^k)$ lies in the link of $z_i$. Hence we can homotope $f$ to a constant map since $\mathrm{St}(z_i)$ is contractible. □
+
+**Proposition 4.27.** The complex $U_r^\infty(D, D_{d,1}^\infty)$ is contractible.
+
+**Proof.** As $T_r^\infty(D, D_{d,1}^\infty)$ is a subcomplex of $U_r^\infty(D, D_{d,1}^\infty)$, we can use a bad simplices argument.
+
+We call a vertex of $U_r^\infty(D, D_{d,1}^\infty)$ bad if it does not lie in $T_r^\infty(D, D_{d,1}^\infty)$ and a simplex bad if all of its vertices are bad. Given a bad $p$-simplex $\sigma$, we need to determine the connectivity of the good link $G_\sigma$ (see Subsection 1.2 for the definition of $G_\sigma$). As in the proof of Lemma 4.23, we have a complement surface $C_\sigma$ of $\sigma$ in $D_{d,1}^\infty$. Note that $C_\sigma$ inherits a $d$-rigid structure and is asymptotically rigidly homeomorphic to $\oplus_{r_\sigma} D_{d,1}^\infty$ for some $r_\sigma > 0$. In particular, we can identify $G_\sigma$ with $T_{r_\sigma}^\infty(D, D_{d,1}^\infty)$, which is contractible. Thus by Proposition 1.5, the pair $(U_r^\infty(D, D_{d,1}^\infty), T_r^\infty(D, D_{d,1}^\infty))$ is $i$-connected for every $i \ge 0$. By Proposition 4.26, $T_r^\infty(D, D_{d,1}^\infty) \cong L_r^\infty(D, D_{d,1}^\infty)$ is contractible, so $U_r^\infty(D, D_{d,1}^\infty)$ is contractible as well. □
+
+**Corollary 4.28.** The complex $U_r(D, D_{d,1}^\infty)$ is weakly Cohen-Macaulay of dimension $r-2$.
+
+**Proof.** Note first that a simplicial complex is $(r-3)$-connected if and only if its $(r-2)$-skeleton is. Since $U_r(D, D_{d,1}^\infty)$ has the same $(r-2)$-skeleton as $U_r^\infty(D, D_{d,1}^\infty)$, and $U_r^\infty(D, D_{d,1}^\infty)$ is contractible, in particular $(r-3)$-connected, $U_r(D, D_{d,1}^\infty)$ is indeed $(r-3)$-connected.
+
+Now let $\sigma$ be a $p$-simplex of $U_r(D, D_{d,1}^\infty)$, with vertices $\phi_0, \phi_1, \dots, \phi_p$. We need to check that the link $\mathrm{Lk}_{U_r(D,D_{d,1}^\infty)}(\sigma)$ is $(r-p-4)$-connected. We can assume $p \le r-3$ as any space is $(-2)$-connected. Moreover, it suffices to show that the $(r-p-3)$-skeleton of $\mathrm{Lk}_{U_r(D,D_{d,1}^\infty)}(\sigma)$ is $(r-p-4)$-connected. Since $\phi_0, \phi_1, \dots, \phi_p$ form a $p$-simplex, similarly to the proof of Lemma 4.23, the complement surface of $\sigma$ is asymptotically rigidly homeomorphic to the $d$-rigid surface $D_{d,\kappa_\sigma}^\infty$ for some $\kappa_\sigma > 0$. Then we can identify the $(r-p-3)$-skeleton of $\mathrm{Lk}_{U_r(D,D_{d,1}^\infty)}(\sigma)$ with the $(r-p-3)$-skeleton of $U_{\kappa_\sigma}^\infty(D, D_{d,1}^\infty)$. Since $U_{\kappa_\sigma}^\infty(D, D_{d,1}^\infty)$ is in fact contractible, the required connectivity bound follows. □
\ No newline at end of file
diff --git a/samples/texts/300903/page_15.md b/samples/texts/300903/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..32a6789858cd45fb5d608e1add0eb77013c747dd
--- /dev/null
+++ b/samples/texts/300903/page_15.md
@@ -0,0 +1,41 @@
+Now by Lemma 4.14 and Proposition 1.3, we have the following.
+
+**Corollary 4.29.** The complexes $S_r(D, D_{d,1}^\infty)$ and $W_r(D, D_{d,1}^\infty)_\bullet$ are weakly Cohen-Macaulay of dimension $r-2$.
+
+4.4. **Homological stability.** We are finally ready to prove the homological stability result.
+
+**Theorem 4.30.** Suppose $d \ge 2$. Then the inclusion maps induce isomorphisms
+
+$$\iota_{R^+,d,r} : H_i(RV_{d,r}^+; M) \to H_i(RV_{d,r+1}^+; M)$$
+
+*in homology in all dimensions $i \ge 0$, for all $r \ge 1$ and for all $H_1(RV_{d,\infty}^+)$-modules $M$.*
+
+**Proof.** By Theorem 4.6 and Corollary 4.29, choosing $k=3$, we have for any abelian $RV_{d,\infty}^+$-module $M$ that the map
+
+$$H_i(RV_{d,r}^+; M) \to H_i(RV_{d,r+1}^+; M)$$
+
+induced by the natural inclusion map is surjective if $i \le \frac{r-k+2}{3}$, and injective if $i \le \frac{r-k}{3}$. But we can improve the stability range as in the proof of [SW19, Theorem 3.5] by noticing that we have the same canonical isomorphism between $RV_{d,r}^+$ and $RV_{d,r+d-1}^+$ using the ribbon braid model. This finishes the proof of Theorem 4.30. □
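The ranges above are elementary arithmetic; as an illustration (ours, not the paper's; the helper name `stable_degrees` is made up), one can tabulate the largest degrees in which the quoted bounds guarantee surjectivity and injectivity, before the improvement via the canonical isomorphism:

```python
# Hedged illustration of the stability ranges quoted above, with k = 3:
# the stabilization map is surjective for i <= (r - k + 2)/3 and
# injective for i <= (r - k)/3.

def stable_degrees(r, k=3):
    """Largest integer degrees i with guaranteed surjectivity resp. injectivity."""
    return (r - k + 2) // 3, (r - k) // 3

# E.g. with r = 10 the map is surjective up to degree 3 and injective up to 2.
```

This makes it easy to see how slowly the raw range grows in $r$, and hence why the improvement at the end of the proof matters.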
+
+**Theorem 4.31.** Suppose $d \ge 2$. Then the inclusion maps induce isomorphisms
+
+$$\iota_{R,d,r} : H_i(RV_{d,r}; M) \to H_i(RV_{d,r+1}; M)$$
+
+*in homology in all dimensions $i \ge 0$, for all $r \ge 1$ and for all $H_1(RV_{d,\infty})$-modules $M$.*
+
+**Sketch of Proof.** The proof is exactly the same as that of Theorem 4.30. Note first that by Theorem 3.24, it suffices to prove that the half-twist asymptotic mapping class groups $\mathcal{H}V_{d,r}$ have homological stability. We define the braided monoidal category $\mathcal{G}'_d$ to be the category with objects $\oplus_r D_{d,1}^\infty$, $r \ge 0$, with $\oplus$ as the operation and $D$ as the 0 object. When $r=s$, we define the morphisms $\operatorname{Hom}(\oplus_r D_{d,1}^\infty, \oplus_s D_{d,1}^\infty) = \mathcal{H}V_{d,r}$, which can also be understood as the group of isotopy classes of asymptotically quasi-rigid homeomorphisms of $\oplus_r D_{d,1}^\infty$; when $r \ne s$, let $\operatorname{Hom}(\oplus_r D_{d,1}^\infty, \oplus_s D_{d,1}^\infty) = \emptyset$. We then have a homogeneous category $\mathcal{U}\mathcal{G}'_d$, and to prove homological stability for the sequence of groups $\mathcal{H}V_{d,1} \le \mathcal{H}V_{d,2} \le \dots$, we only need to prove that the associated space $W_r(D, D_{d,1}^\infty)_\bullet$, in fact the associated simplicial complex $S_r(D, D_{d,1}^\infty)$, is highly connected. At this point, the complex is slightly different from the oriented case, but the new complex $S_r(D, D_{d,1}^\infty)$ is still a complete join over the old complex $U_r(D, D_{d,1}^\infty)$. Hence, the connectivity of $S_r(D, D_{d,1}^\infty)$ again follows from Corollary 4.28 and Proposition 1.3. □
+
+## REFERENCES
+
+[AC20] Julio Aroca and María Cumplido. A new family of infinitely braided Thompson's groups. *J. Algebra*, 2020. To appear. arXiv:2005.09593.
+
+[AF17] Javier Aramayona and Louis Funar. Asymptotic mapping class groups of closed surfaces punctured along Cantor sets. *Moscow Math. J.*, 2017. To appear. arXiv:1701.08132.
+
+[AV20] Javier Aramayona and Nicholas G. Vlamis. Big mapping class groups: an overview. In Ken'ichi Ohshika and Athanase Papadopoulos, editors, *In the Tradition of Thurston: Geometry and Topology*, pages 459–496. Springer, 2020.
+
+[BDJ17] Collin Bleak, Casey Donoven, and Julius Jonušas. Some isomorphism results for Thompson-like groups $V_n(G)$. *Israel J. Math.*, 222(1):1–19, 2017.
+
+[BFM$^+$16] Kai-Uwe Bux, Martin G. Fluch, Marco Marschler, Stefan Witzel, and Matthew C. B. Zaremsky. The braided Thompson’s groups are of type $F_\infty$. *J. Reine Angew. Math.*, 718:59–101, 2016. With an appendix by Zaremsky.
+
+[BG84] Kenneth S. Brown and Ross Geoghegan. An infinite-dimensional torsion-free $FP_\infty$ group. *Invent. Math.*, 77(2):367–381, 1984.
+
+[Bro92] Kenneth S. Brown. The geometry of finitely presented infinite simple groups. In *Algorithms and classification in combinatorial group theory* (Berkeley, CA, 1989), volume 23 of *Math. Sci. Res. Inst. Publ.*, pages 121–136. Springer, New York, 1992.
\ No newline at end of file
diff --git a/samples/texts/300903/page_18.md b/samples/texts/300903/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..3df704832d068c9909345a7379e7d194fb7e79b1
--- /dev/null
+++ b/samples/texts/300903/page_18.md
@@ -0,0 +1,31 @@
+We can apply the proposition in the following way.
+
+**Theorem 1.6.** [HV17, Corollary 2.2] Let $Y$ be a subcomplex of a simplicial complex $X$, and suppose $X \setminus Y$ has a set of bad simplices satisfying (1) and (2) above. Then:
+
+(1) If $X$ is $n$-connected and $G_\sigma$ is $(n - \dim(\sigma))$-connected for all bad simplices $\sigma$, then $Y$ is $n$-connected.
+
+(2) If $Y$ is $n$-connected and $G_\sigma$ is $(n - \dim(\sigma) - 1)$-connected for all bad simplices $\sigma$, then $X$ is $n$-connected.
+
+1.3. **The mutual link trick.** In the proof of [BFM$^+$16, Theorem 3.10], there is a beautiful argument for resolving intersections of arcs inspired by Hatcher's flow argument [Hat91]. They attributed the idea to Andrew Putman. Recall that Hatcher's flow argument allows one to "flow" a complex to a subcomplex, but in the process one can only "flow" a vertex to a new one in its link. The mutual link trick allows one to "flow" a vertex to a new one not in its link, provided "the mutual link" is sufficiently connected.
+
+To apply the mutual link trick, we first need a lemma that allows us to homotope a simplicial map to a simplexwise injective one [BFM$^+$16, Lemma 3.9]. Recall a simplicial map is called simplexwise injective if its restriction to any simplex is injective. See also [GRW18, Section 2.1] for more information.
+
+**Lemma 1.7.** Let $Y$ be a compact $m$-dimensional combinatorial manifold. Let $X$ be a simplicial complex and assume that the link of every $p$-simplex in $X$ is $(m-p-2)$-connected. Let $\psi: Y \to X$ be a simplicial map whose restriction to $\partial Y$ is simplexwise injective. Then after possibly subdividing the simplicial structure of $Y$, $\psi$ is homotopic relative $\partial Y$ to a simplexwise injective map.
+
+Note that as discussed in [GLU20, Lemma 5.19], there is a mistake in the connectivity bound given in [BFM$^+$16] that has been corrected here.
+
+**Lemma 1.8 (The mutual link trick).** Let $Y$ be a closed $m$-dimensional combinatorial manifold and $f: Y \to X$ be a simplexwise injective simplicial map. Let $y \in Y$ be a vertex and $f(y) = x$ for some vertex $x \in X$. Suppose $x'$ is another vertex of $X$ satisfying the following two conditions:
+
+(1) $f(\mathrm{Lk}_Y(y)) \leq \mathrm{Lk}_X(x')$,
+
+(2) the mutual link $\mathrm{Lk}_X(x) \cap \mathrm{Lk}_X(x')$ is $(m-1)$-connected.
+
+Then the map $g: Y \to X$ defined by sending $y$ to $x'$ and every other vertex $y'$ to $f(y')$ is again a simplexwise injective simplicial map, and $g$ is homotopic to $f$.
+
+**Proof.** The conditions that $f$ is simplexwise injective and $f(\mathrm{Lk}_Y(y)) \leq \mathrm{Lk}_X(x')$ guarantee that the definition of $g$ can be extended over $Y$ and $g$ is again simplexwise injective.
+
+We need to prove $g$ is homotopic to $f$. The homotopy will be the identity outside $\mathrm{St}_Y(y)$. Note that since $f$ is simplexwise injective, we have $f(\mathrm{Lk}_Y(y)) \leq \mathrm{Lk}_X(x)$. Together with Condition (1), this gives $f(\mathrm{Lk}_Y(y)) \leq \mathrm{Lk}_X(x) \cap \mathrm{Lk}_X(x')$. Since $\mathrm{Lk}_Y(y)$ is an $(m-1)$-sphere and $\mathrm{Lk}_X(x) \cap \mathrm{Lk}_X(x')$ is $(m-1)$-connected, there exists an $m$-disk $B$ with $\partial B = \mathrm{Lk}_Y(y)$ and a simplicial map $\varphi: B \to \mathrm{Lk}_X(x) \cap \mathrm{Lk}_X(x')$ such that $\varphi$ restricted to $\partial B$ coincides with $f$ restricted to $\mathrm{Lk}_Y(y)$. Since the image of $B$ under $\varphi$ is contained in $\mathrm{St}_X(x')$, which is contractible, we can homotope $g$, replacing $g|_{\mathrm{St}_Y(y)}$ with $\varphi$. Since the image of $B$ under $\varphi$ is also contained in $\mathrm{St}_X(x)$, we can similarly homotope $f$, replacing $f|_{\mathrm{St}_Y(y)}$ with $\varphi$. These both yield the same map, so $g$ is homotopic to $f$. $\square$
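The link conditions in this kind of argument are finite combinatorial checks. The following sketch (ours, not the paper's machinery) computes vertex links in a finite simplicial complex given by its maximal simplices, the data one would inspect to verify conditions (1) and (2) of Lemma 1.8 in a toy example:

```python
# Hedged illustration: vertex links in a finite simplicial complex,
# where the complex is given by its maximal simplices (sets of vertices).
from itertools import combinations

def faces(maximal):
    """All nonempty faces of the given maximal simplices, as frozensets."""
    out = set()
    for s in maximal:
        for k in range(1, len(s) + 1):
            out.update(map(frozenset, combinations(sorted(s), k)))
    return out

def link(maximal, v):
    """Link of vertex v: faces not containing v whose join with v is a face."""
    F = faces(maximal)
    return {f for f in F if v not in f and (f | {v}) in F}

# Boundary of the 3-simplex (a 2-sphere): the link of any vertex is the
# boundary of the opposite triangle, i.e. 3 vertices and 3 edges.
sphere = [{0, 1, 2}, {0, 1, 3}, {0, 2, 3}, {1, 2, 3}]
assert len(link(sphere, 0)) == 6
# A mutual link as in Lemma 1.8 is then just link(X, x) & link(X, x').
```

This is only bookkeeping, of course; the substance of the lemma is the connectivity hypothesis on the mutual link.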
+
+## 2. HIGMAN–THOMPSON GROUPS AND THEIR BRAIDED VERSIONS
+
+In this section, we first give an introduction to the Higman–Thompson groups and then introduce their ribbon version.
\ No newline at end of file
diff --git a/samples/texts/300903/page_19.md b/samples/texts/300903/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..cc2670d33f77486646a9991cae43e9545a2322d4
--- /dev/null
+++ b/samples/texts/300903/page_19.md
@@ -0,0 +1,13 @@
+2.1. **Higman–Thompson groups.** The Higman–Thompson groups were first introduced by Higman [Hig74] as a generalization of the groups given earlier in handwritten, unpublished notes of Richard Thompson. First let us recall the definition of the Higman–Thompson groups. Although there are a number of equivalent definitions of these groups, we will use the notion of paired forest diagrams. First we define a *finite rooted d-ary tree* to be a finite tree such that every vertex has degree $d+1$ except the *leaves*, which have degree 1, and the *root*, which has degree $d$ (or degree 1 if the root is also a leaf). Usually we draw such trees with the root at the top and the nodes descending from it down to the leaves. A vertex $v$ of the tree together with its $d$ adjacent descendants will be called a *caret*. If the leaves of a caret are leaves of the tree, we will call the caret *elementary*. A collection of $r$ many $d$-ary trees will be called a $(d,r)$-forest. When $d$ is clear from the context, we may just call it an $r$-forest.
+
+Define a *paired $(d,r)$-forest diagram* to be a triple $(F_-, \rho, F_+)$ consisting of two $(d,r)$-forests $F_-$ and $F_+$ both with $l$ leaves for some $l$, and a permutation $\rho \in S_l$, the symmetric group on $l$ elements. We label the leaves of $F_-$ with $1, \dots, l$ from left to right, and for each $i$, the $\rho(i)^{\text{th}}$ leaf of $F_+$ is labeled $i$.
+
+Define a *reduction* of a paired $(d, r)$-forest diagram to be the following: Suppose there is an elementary caret in $F_-$ with leaves labeled by $i, \dots, i+d-1$ from left to right, and an elementary caret in $F_+$ with leaves labeled by $i, \dots, i+d-1$ from left to right. Then we can “reduce” the diagram by removing those carets, renumbering the leaves and replacing $\rho$ with the permutation $\rho' \in S_{l-d+1}$ that sends the new leaf of $F_-$ to the new leaf of $F_+$ and otherwise behaves like $\rho$. The resulting paired forest diagram $(F'_-, \rho', F'_+)$ is then said to be obtained by *reducing* $(F_-, \rho, F_+)$. See Figure 1 below for an example of a reduction of a paired $(3,2)$-forest diagram. The reverse operation is called *expansion*, so $(F_-, \rho, F_+)$ is an expansion of $(F'_-, \rho', F'_+)$. A paired forest diagram is called *reduced* if no reduction is possible. Define an equivalence relation on the set of paired $(d, r)$-forest diagrams by declaring two paired forest diagrams to be equivalent if one can be reached from the other through a finite series of reductions and expansions. Thus an equivalence class of paired forest diagrams consists of all diagrams having a common reduced representative. Such reduced representatives are unique.
+
+FIGURE 1. Reduction of the top paired (3,2)-forest diagram to the bottom one.
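If we forget the shapes of $F_-$ and $F_+$ and keep only the permutation, the renumbering step of a reduction can be sketched as follows (our illustration, with 0-indexed leaves; the real reduction also requires the two matching elementary carets, which we do not model):

```python
# Hedged sketch of the permutation bookkeeping in a reduction.
# rho is a permutation of {0, ..., l-1} given as a list (rho[i] = image of
# leaf i of F_-), d is the arity. We assume, without modelling the trees,
# that leaves i..i+d-1 of F_- and their images sit under elementary carets.

def can_reduce(rho, i, d):
    """rho must send leaves i..i+d-1 to d consecutive leaves in order."""
    return all(rho[i + j] == rho[i] + j for j in range(1, d))

def reduce_at(rho, i, d):
    """Collapse the caret at leaves i..i+d-1, returning rho' in S_{l-d+1}."""
    assert can_reduce(rho, i, d)
    t = rho[i]                      # first leaf of the image block in F_+
    def renum(x):                   # renumber F_+ leaves after the collapse
        return x if x <= t else x - d + 1
    return [renum(rho[x]) for x in range(len(rho)) if not (i < x < i + d)]

# Example: l = 5 leaves, d = 2; reducing the caret at leaves 0, 1.
assert reduce_at([2, 3, 0, 1, 4], 0, 2) == [2, 0, 1, 3]
```

The condition in `can_reduce` is exactly the requirement that the caret in $F_-$ is paired, in order, with a caret in $F_+$.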
+
+There is a binary operation $\ast$ on the set of equivalence classes of paired $(d, r)$-forest diagrams. Let $\alpha = (F_-, \rho, F_+)$ and $\beta = (E_-, \xi, E_+)$ be reduced paired forest diagrams. By applying repeated expansions to $\alpha$ and $\beta$ we can find representatives $(F'_-, \rho', F'_+)$ and $(E'_-, \xi', E'_+)$ of the equivalence classes of $\alpha$ and $\beta$, respectively, such that $F'_+ = E'_-$. Then we declare $\alpha \ast \beta$ to be $(F'_-, \rho'\xi', E'_+)$. This operation is well defined on the equivalence classes and is a group operation.
+
+**Definition 2.1.** The Higman–Thompson group $V_{d,r}$ is the group of equivalence classes of paired $(d, r)$-forest diagrams with the multiplication $\ast$.
+
+The usual Thompson group $V$ is a special case of Higman–Thompson groups. In fact, $V = V_{2,1}$.
\ No newline at end of file
diff --git a/samples/texts/300903/page_2.md b/samples/texts/300903/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..82d89cdea0e94b3a785c9ba08b9e9ca09be1b0c1
--- /dev/null
+++ b/samples/texts/300903/page_2.md
@@ -0,0 +1,29 @@
+(3) If an admissible boundary component $L$ of $A$ is contained in $\Sigma'$, then the punctured disk component obtained by cutting $\Sigma_{d,r}^\infty$ along $L$ is also contained in $\Sigma'$.
+
+Then $\Sigma'$ has a naturally induced $d$-rigid structure. In fact, we can take $A \cap \Sigma'$ to be the base surface, and the $d$-rigid structure can simply be inherited from $\Sigma_{d,r}^\infty$. We can, of course, choose a different $A$ here, which may give a different induced $d$-rigid structure, but it is unique up to asymptotically rigid homeomorphism.
+
+**3.2. Asymptotic mapping class groups surjecting to Higman–Thompson groups.**
+
+Given a (possibly noncompact) surface $\Sigma$, recall that the mapping class group of $\Sigma$ is defined to be the group of isotopy classes of orientation-preserving homeomorphisms of $\Sigma$ that fix $\partial\Sigma$ pointwise, i.e.
+
+$$ \mathrm{Map}(\Sigma) = \mathrm{Map}(\Sigma, \partial\Sigma) := \mathrm{Homeo}^+(\Sigma, \partial\Sigma)/\mathrm{Homeo}_0(\Sigma, \partial\Sigma). $$
+
+With this, we can now define the asymptotic mapping class group and the half-twist asymptotic mapping class group.
+
+**Definition 3.14.** The asymptotic mapping class group $\mathcal{B}V_{d,r}(\Sigma)$ (resp. the half-twist asymptotic mapping class group $\mathcal{H}V_{d,r}(\Sigma)$) is the subgroup of $\mathrm{Map}(\Sigma_{d,r}^\infty)$ consisting of isotopy classes of asymptotically rigid (resp. quasi-rigid) self-homeomorphisms of $\Sigma_{d,r}^\infty$. When $\Sigma$ is the disk, we sometimes simply denote the group by $\mathcal{B}V_{d,r}$ (resp. $\mathcal{H}V_{d,r}$).
+
+**Definition 3.15.** Let $A$ be an admissible subsurface of $\Sigma_{d,r}^\infty$, and $\mathrm{Map}(A)$ be its mapping class group, which fixes each boundary component pointwise. Each inclusion $A \subseteq A'$ of admissible surfaces induces an injective embedding $j_{A,A'} : \mathrm{Map}(A) \to \mathrm{Map}(A')$. These maps form a direct system whose direct limit we call the *compactly supported pure mapping class group*, denoted by $\mathrm{PMap}_c(\Sigma_{d,r}^\infty)$. The group $\mathrm{PMap}_c(\Sigma_{d,r}^\infty)$ is naturally a subgroup of $\mathcal{B}V_{d,r}(\Sigma)$ and we denote the inclusion map by $j$.
+
+**Definition 3.16.** Let $\mathcal{F}_{d,r}$ be the forest with $r$ copies of a rooted $d$-ary tree and $\mathcal{T}_{d,r}$ be the rooted tree obtained from $\mathcal{F}_{d,r}$ by adding an extra vertex and $r$ extra edges, each connecting this vertex to a root of a tree in $\mathcal{F}_{d,r}$. There is a natural projection $q: \Sigma_{d,r}^\infty \to \mathcal{T}_{d,r}$ such that the pullback of the root is $\Sigma_{d,r,0}$ and the pullback of the midpoint of any edge is an admissible loop.
+
+Now any element in $\mathcal{B}V_{d,r}(\Sigma)$ can be represented by an asymptotically rigid homeomorphism $\varphi: \Sigma_{d,r}^\infty \to \Sigma_{d,r}^\infty$. In particular, we have an admissible subsurface $A$ of $\Sigma_{d,r}^\infty$ such that $\varphi|_A: (A, \partial_b A) \to (\varphi(A), \varphi(\partial_b A))$ is a homeomorphism. Let $F_-$ be the smallest subforest of $\mathcal{F}_{d,r}$ which contains $q(A) \cap \mathcal{F}_{d,r}$, and $F_+$ be the smallest subforest of $\mathcal{F}_{d,r}$ which contains $q(\varphi(A)) \cap \mathcal{F}_{d,r}$. Note that $F_-$ and $F_+$ have the same number of leaves and their leaves are in one-to-one correspondence with the admissible loops of $A$ and $\varphi(A)$. Now let $\rho$ be the map from the leaves of $F_-$ to the leaves of $F_+$ induced by $\varphi$. Together this defines an element $[(F_-, \rho, F_+)] \in V_{d,r}$. We call this map $\pi$. One can show that $\pi$ is well defined. Similarly to [FK04, Proposition 2.4] and [AF17, Propositions 4.2, 4.6], we now have the following proposition.
+
+**Proposition 3.17.** We have the short exact sequences:
+
+$$ 1 \to \mathrm{PMap}_c(\Sigma_{d,r}^\infty) \xrightarrow{j} \mathcal{B}V_{d,r}(\Sigma) \xrightarrow{\pi} V_{d,r} \to 1; $$
+
+$$ 1 \to \mathrm{PMap}_c(\Sigma_{d,r}^\infty) \xrightarrow{j} \mathcal{H}V_{d,r}(\Sigma) \xrightarrow{\pi} V_{d,r}(\mathbb{Z}/2\mathbb{Z}) \to 1. $$
+
+**Remark 3.18.** Here, as in [AF17], $V_{d,r}(\mathbb{Z}/2\mathbb{Z})$ is the twisted version of the Higman–Thompson group where one allows flipping the subtree below every leaf. See for example [BDJ17] for more information.
+
+**Proof.** We will prove the proposition for $\mathcal{B}V_{d,r}(\Sigma)$. The other case is essentially the same. First we show the map $\pi$ is surjective. Given any element $[(F_-, \rho, F_+)] \in V_{d,r}$, let $T_-$ (resp. $T_+$) be the tree obtained from $F_-$ (resp. $F_+$) by adding a single root on the top and $r$ edges
\ No newline at end of file
diff --git a/samples/texts/300903/page_20.md b/samples/texts/300903/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..513890a10047065bf251421bf618f87677e42889
--- /dev/null
+++ b/samples/texts/300903/page_20.md
@@ -0,0 +1,17 @@
+2.2. **Ribbon Higman–Thompson groups.** For convenience, we will think of the forest $F_+$ as drawn beneath $F_-$ and upside down, i.e., with the root at the bottom and the leaves at the top. The permutation $\rho$ is then indicated by arrows pointing from the leaves of $F_-$ to the corresponding paired leaves of $F_+$. See Figure 2 for this visualization of (the unreduced representative of) the element of $V_{3,2}$ from Figure 1.
+
+FIGURE 2. An element of $V_{3,2}$.
+
+Now in the ribbon version of the Higman–Thompson groups, the permutations of leaves are simply replaced by ribbon braids which can twist between the leaves.
+
+**Definition 2.2.** Let $\mathcal{I} = \coprod_{i=1}^l I_i : [0,1] \times \{1, \dots, l\} \to \mathbb{R}^2$ be an embedding which we refer to as the *marked bands*. A *ribbon braid* is a map $R : ([0,1] \times \{1,\dots,l\}) \times [0,1] \to \mathbb{R}^2$ such that for any $0 \le t \le 1$, $R_t : [0,1] \times \{1,\dots,l\} \to \mathbb{R}^2$ is an embedding, $R_0 = \mathcal{I}$, and there exists $\sigma \in S_l$ such that $R_1(s, i) = I_{\sigma(i)}(s)$ or $R_1(s, i) = I_{\sigma(i)}(1-s)$ for all $s \in [0,1]$. The usual product of paths defines a group structure on the set of ribbon braids up to homotopy through ribbon braids. This group, denoted by $RB_l$, does not depend on the choice of the marked bands and is called the ribbon braid group with $l$ bands.
+
+**Remark 2.3.** Note that $RB_l \cong \mathbb{Z}^l \rtimes B_l$, where the action of $B_l$ on $\mathbb{Z}^l$ is induced by the symmetric group action on the coordinates of $\mathbb{Z}^l$.
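To make the semidirect product law concrete, here is a sketch (ours) of the quotient $\mathbb{Z}^l \rtimes S_l$ obtained by projecting a ribbon braid to its underlying permutation; an element is a pair (twist vector, permutation), and multiplication re-indexes the second twist vector by the first permutation:

```python
# Hedged model of the quotient Z^l >| S_l of RB_l: an element is
# (twists, perm), where twists[i] counts full twists of band i and
# perm[i] is the image of band i (0-indexed). The crossing data of B_l
# is invisible at this level, so this only illustrates the group law.

def act(s, w):
    """Permute a twist vector: band i carrying twist w[i] moves to s[i]."""
    out = [0] * len(w)
    for i, wi in enumerate(w):
        out[s[i]] = wi
    return out

def mul(g, h):
    """Semidirect product law: (v, s)(w, t) = (v + s.w, s o t)."""
    (v, s), (w, t) = g, h
    return ([a + b for a, b in zip(v, act(s, w))],
            [s[t[i]] for i in range(len(t))])

# A full twist on band 0 followed by a swap, composed with two twists on band 1:
a = ([1, 0], [1, 0])
b = ([0, 2], [0, 1])
assert mul(a, b) == ([3, 0], [1, 0])
```

One can check directly that this multiplication is associative, as the semidirect product structure guarantees.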
+
+**Definition 2.4.** A *ribbon braided paired $(d,r)$-forest diagram* is a triple $(F_-, b, F_+)$ consisting of two $(d,r)$-forests $F_-$ and $F_+$, both with $l$ leaves for some $l$, and a ribbon braid $b \in RB_l$ connecting the leaves of $F_-$ to the leaves of $F_+$.
+
+The expansion and reduction rules for the ribbon braids come from the natural way of splitting a ribbon band into $d$ components, and the inverse operation. See Figure 3 for how to split a half-twisted band when $d=2$. Note that not only are the two bands themselves twisted, but the bands are also braided. Everything else will be the same as in the braided case, so we omit the details here. As usual, we define two ribbon braided paired forest diagrams to be equivalent if one is obtained from the other by a sequence of reductions or expansions. The multiplication operation $\ast$ on the equivalence classes is defined the same way as for $bV_{d,r}$. We direct the reader to [SW, Section 2] for more details.
+
+FIGURE 3. Splitting a ribbon into 2 ribbons.
+
+**Definition 2.5.** The *ribbon Higman–Thompson group* $RV_{d,r}$ (resp. the *oriented ribbon Higman–Thompson group* $RV_{d,r}^+$) is the group of equivalence classes of (resp. oriented) ribbon braided paired $(d,r)$-forest diagrams with the multiplication $\ast$.
\ No newline at end of file
diff --git a/samples/texts/300903/page_21.md b/samples/texts/300903/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..0c0e172976ec07e497ded90359a7e76cab534693
--- /dev/null
+++ b/samples/texts/300903/page_21.md
@@ -0,0 +1,27 @@
+### 3. ASYMPTOTIC MAPPING CLASS GROUPS RELATED TO THE RIBBON HIGMAN–THOMPSON GROUPS
+
+The purpose of this section is to generalize the notion of asymptotic mapping class groups and allow them to surject to the Higman–Thompson groups. In particular, we will build a geometric model for the ribbon Higman–Thompson groups which will be crucial for proving homological stability in Section 4. Our construction is largely based on the ideas in [FK04, Section 2] and [AF17, Section 3].
+
+**3.1. d-rigid structure.** In this subsection, we generalize the notion of a rigid structure to that of a *d*-rigid structure.
+
+**Definition 3.1.** A *$d$-leg pants* is a surface homeomorphic to a $(d+1)$-holed sphere.
+
+Recall that the usual pair of pants is a 2-leg pants. We will draw a $d$-leg pants with one boundary component at the top. In this way, we can conveniently put a counter-clockwise total order on the boundary components, making the top component the minimal one. See Figure 4 for an example of a 3-leg pants.
+
+We proceed to build some infinite type surfaces using some basic building blocks.
+
+**Definition 3.2.** Let $\Sigma$ be a compact oriented surface. Call the boundary components of $\Sigma$ the *based boundary components*. Then $\Sigma_{d,r}^\infty$ is the infinite surface built up as the inductive limit of the compact surfaces $\Sigma_{d,r,m}$ with $m \ge 0$:
+
+(1) $\Sigma_{d,r,0}$ is obtained from $\Sigma$ by deleting the interior of a disk in $\Sigma$. When $\Sigma$ is a disk $D$, we declare $D_{d,r,0} = \partial D$.
+
+(2) $\Sigma_{d,r,1}$ is obtained from $\Sigma_{d,r,0}$ by attaching a copy of the $r$-leg pants along the newly created boundary of $\Sigma_{d,r,0}$.
+
+(3) For $m \ge 1$, $\Sigma_{d,r,m+1}$ is obtained from $\Sigma_{d,r,m}$ by gluing a $d$-leg pants to every nonbased boundary circle of $\Sigma_{d,r,m}$ along the top boundary of the pants.
+
+The surface $\Sigma_{d,r,1}$ is called the *base* of $\Sigma_{d,r}^\infty$ and the boundary components of $\Sigma_{d,r}^\infty$ coming from the base are the *based boundary components*. For each $m \ge 1$, the nonbased boundary components of $\Sigma_{d,r,m}$ naturally embed in $\Sigma_{d,r}^\infty$ and we call these the *admissible loops*. We call the admissible loops coming from $\Sigma_{d,r,1}$ the *rooted loops*. The surface $\Sigma_{d,r}^\infty$ has a natural induced orientation.
+
+**Remark 3.3.** To define our $d$-rigid structure, we do not really need $\Sigma_{d,r,0}$. But it will be convenient to have $\Sigma_{d,r,0}$ later in Definition 3.16 for defining the map from $\Sigma_{d,r}^\infty$ to the tree $\mathcal{T}_{d,r}$.
+
+**Remark 3.4.** In the special case where the starting surface is a disk, we will use the notation $\Sigma = D$, $\Sigma_{d,r,m} = D_{d,r,m}$, and $\Sigma_{d,r}^\infty = D_{d,r}^\infty$. See Figure 4 for a picture of the surface $D_{3,3}^\infty$. In this case, we can think of $D_{d,r}^\infty$ as a subsurface of a disk $D$. More specifically, let $D = \{(x,y) \mid x^2 + y^2 \le 1\}$ and $x_i = \frac{2i-r-1}{r+1}$, $1 \le i \le r$. We place $r$ disks of radius $r_0 = \frac{1}{4(r+1)}$ centered at the points $(x_i, 0)$. Denote these disks by $D_1, \dots, D_r$. The complement of the interiors of these $r$ disks in $D$ is homeomorphic to the $r$-leg pants $D_{d,r,1}$. Now for each disk $D_i$, $1 \le i \le r$, we equally distribute $d$ points along the $x$-axis inside $D_i$ and place a disk of radius $\frac{r_0}{d}$ centered at each. The complement of the interiors of these $d$ disks in $D_i$ is a $d$-leg pants. We continue this process inductively. In the limit, the disks converge to a Cantor set which we denote by $C$. In particular, $D_{d,r}^\infty$ is homeomorphic to $D \setminus C$. We will refer to this as the *puncture model* for $D_{d,r}^\infty$. See Figure 5 for a picture of $D_{3,3}^\infty$ with this model. The advantage of this model is that we can view $D_{d,r}^\infty$ and all its admissible subsurfaces directly as subsurfaces of $D$.
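The nested disks of the puncture model are completely explicit. The following sketch (ours; the helper names and the exact horizontal placement of the subdisks are our own choices, mirroring the level-1 formula) generates the centers and radii of the first two levels of disks and checks nesting:

```python
# Hedged sketch of the puncture model: disks as (center_x, center_y, radius).
# Level 1: r disks of radius 1/(4(r+1)) centered at ((2i-r-1)/(r+1), 0).

def level1(r):
    r0 = 1 / (4 * (r + 1))
    return [((2 * i - r - 1) / (r + 1), 0.0, r0) for i in range(1, r + 1)]

def children(disk, d):
    """d subdisks of radius rad/d, spaced along the x-axis inside the disk
    (the spacing rule is our assumption, scaled from the level-1 formula)."""
    cx, cy, rad = disk
    return [(cx + (2 * j - d - 1) / (d + 1) * rad, cy, rad / d)
            for j in range(1, d + 1)]

# For D_{3,3}: three level-1 disks of radius 1/16 at x = -1/2, 0, 1/2.
disks = level1(3)
assert disks == [(-0.5, 0.0, 0.0625), (0.0, 0.0, 0.0625), (0.5, 0.0, 0.0625)]
# Each child disk stays strictly inside its parent:
assert all(abs(x) + cr < 0.0625 for (x, y, cr) in children(disks[1], 3))
```

Iterating `children` shrinks the radii geometrically, which is why the limit set is a Cantor set.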
+
+**Remark 3.5.** Now $\Sigma_{d,r}^\infty$ can be obtained from $\Sigma$ by attaching a copy of $D_{d,r}^\infty$ to the nonbased boundary component of $\Sigma_{d,r,0}$. In particular, $\Sigma_{d,r}^\infty$ is obtained from $\Sigma$ by deleting a copy of the Cantor set, and any admissible subsurface of $\Sigma_{d,r}^\infty$ can be viewed directly as a subsurface of $\Sigma$ using the puncture model. Recall that any two Cantor sets are homeomorphic; hence, by the classification of infinite surfaces [AV20, Theorem 2.2], $\Sigma_{d,r}^\infty$ is homeomorphic to $\Sigma \setminus C$
\ No newline at end of file
diff --git a/samples/texts/300903/page_22.md b/samples/texts/300903/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..f398f5744ebe0b791b65df09dda5fd9dfe818512
--- /dev/null
+++ b/samples/texts/300903/page_22.md
@@ -0,0 +1,19 @@
+FIGURE 4. 3-leg pants and the surface $D_{3,3}^{\infty}$ with canonical seams
+
+FIGURE 5. Disk model for the surface $D_{3,3}^{\infty}$
+
+where $C$ is the standard ternary Cantor set sitting inside some disk in $\Sigma$, regardless of the choice of $d$ and $r$.
+
+**Definition 3.6.** A compact subsurface $A \subset \Sigma_{d,r}^{\infty}$ is *admissible* if $\Sigma_{d,r,1} \subseteq A$ and all of its nonbased boundaries are admissible. The subsurfaces $\Sigma_{d,r,m}$ are called the *standard admissible subsurfaces* of $\Sigma_{d,r}^{\infty}$.
+
+**Definition 3.7.**
+
+(1) A *suited d-pants decomposition* of the infinite surface $\Sigma_{d,r}^{\infty}$ is a maximal collection of distinct nontrivial simple closed curves in the interior of $\Sigma_{d,r}^{\infty} \setminus \Sigma_{d,r,1}$ which are not isotopic to the boundary, pairwise disjoint and pairwise non-isotopic, with the additional property that the complementary regions in $\Sigma_{d,r}^{\infty} \setminus \Sigma_{d,r,1}$ are all $d$-leg pants.
+
+(2) A *d-rigid structure* on $\Sigma_{d,r}^{\infty}$ consists of two pieces of data:
+
+• a suited *d*-pants decomposition, and
+
+• a *d-prerigid structure*, i.e. a countable collection of disjoint line segments embedded into $\Sigma_{d,r}^{\infty} \setminus \Sigma_{d,r,1}$, such that the complement of their union in each component of $\Sigma_{d,r}^{\infty} \setminus \Sigma_{d,r,1}$ has 2 connected components.
+
+These pieces must be *compatible* in the following sense: first, the traces of the *d*-prerigid structure on each *d*-leg pants (i.e. the intersections with pants) are made up of $d+1$
\ No newline at end of file
diff --git a/samples/texts/300903/page_23.md b/samples/texts/300903/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..7ae047a048eb8739fdb4664aefd122c690da8ef0
--- /dev/null
+++ b/samples/texts/300903/page_23.md
@@ -0,0 +1,31 @@
+connected components, called *seams*; secondly, each boundary component of the pants intersects exactly two components of the seams at two distinct points; thirdly, the seams cut each pants into two components. Note that these conditions imply that each component is homeomorphic to a disk. One then says that the suited *d*-pants decomposition and the *d*-prerigid structure are *subordinate* to the *d*-rigid structure.
+
+(3) By construction, $\Sigma_{d,r}^\infty$ is naturally equipped with a suited $d$-pants decomposition, which will be referred to below as the *canonical suited d-pants decomposition*. We also fix a $d$-prerigid structure on $\Sigma_{d,r}^\infty$ (called the *canonical d-prerigid structure*) compatible with the canonical suited $d$-pants decomposition. See Figure 4. Using the puncture model, the seams of the canonical $d$-prerigid structure are just the intersections of $[0,1] \times \{0\}$ with each $d$-pants. The resulting $d$-rigid structure is called the *canonical $d$-rigid structure* on $\Sigma_{d,r}^\infty$. Very importantly, for each admissible subsurface, the canonical $d$-rigid structure induces an order on the admissible boundaries. In Figure 4, the induced order on the admissible loops is counterclockwise. Using the puncture model, the admissible loops are ordered from left to right.
+
+(4) The seams cut each component of $\Sigma_{d,r}^\infty \setminus \Sigma_{d,r,1}$ into two pieces; we choose the front piece in each component, and these $r$ pieces together form the *visible side* of $\Sigma_{d,r}^\infty$.
+
+(5) A suited $d$-pants decomposition (resp. $d$-(pre)rigid structure) is *asymptotically trivial* if outside a compact subsurface of $\Sigma_{d,r}^\infty$, it coincides with the canonical suited $d$-pants decomposition (resp. canonical $d$-(pre)rigid structure).
+
+**Remark 3.8.** It is important that the seams cut each $d$-pants into two components and each component is homeomorphic to a disk as the mapping class group of a disk is trivial.
+
+**Definition 3.9.** Let $\Sigma_{d,r}^\infty$ and $\bar{\Sigma}_{d,r'}^\infty$ be two surfaces with $d$-rigid structure and let $\varphi : \Sigma_{d,r}^\infty \to \bar{\Sigma}_{d,r'}^\infty$ be a homeomorphism. One says that $\varphi$ is *asymptotically rigid* if there exists an admissible subsurface $A \subset \Sigma_{d,r}^\infty$ such that:
+
+(1) $\varphi(A)$ is also admissible in $\bar{\Sigma}_{d,r'}^\infty$,
+
+(2) $\varphi|_A$ maps the based boundaries to based boundaries, admissible loops to admissible loops and
+
+(3) the restriction of $\varphi: \Sigma_{d,r}^\infty \setminus A \to \bar{\Sigma}_{d,r'}^\infty \setminus \varphi(A)$ is *rigid*, meaning that it respects the traces of the canonical $d$-rigid structure, mapping the suited $d$-pants decomposition into the suited $d$-pants decomposition, the seams into the seams, and the visible side into the visible side.
+
+If we drop the condition that $\varphi$ should map the visible side into the visible side, $\varphi$ is called *asymptotically quasi-rigid*. The surface $A$ is called a *support* for $\varphi$.
+
+**Remark 3.10.** We are not using the word “support” in the usual sense, as the map outside the support defined above might well not be the identity, but the map is uniquely determined up to isotopy by Remark 3.8.
+
+**Remark 3.11.** In [FK04, Definition 2.3], the support is not actually required to contain the base. This makes no difference, as one can always enlarge the support so that it contains the base.
+
+**Remark 3.12.** The surface $\Sigma_{d,r+d-1}^\infty$ can be identified with the surface $\Sigma_{d,r}^\infty$ such that $\Sigma_{d,r+d-1,m} = \Sigma_{d,r,m+1}$ for any $m \ge 1$, and the $d$-rigid structure of $\Sigma_{d,r}^\infty$ coincides with the $d$-rigid structure on $\Sigma_{d,r+d-1}^\infty$ outside $\Sigma_{d,r,2}$. In this way, $\Sigma_{d,r}^\infty$ is asymptotically rigidly homeomorphic to $\Sigma_{d,r+d-1}^\infty$ through the identity map.
+
+**Remark 3.13.** Let $\Sigma'$ be a subsurface of $\Sigma_{d,r}^\infty$ such that there exists an admissible subsurface $A$ of $\Sigma_{d,r}^\infty$ satisfying:
+
+(1) $A \cap \Sigma'$ is a compact surface,
+
+(2) The boundaries of $\Sigma'$ are disjoint from the admissible boundary components of $A$.
\ No newline at end of file
diff --git a/samples/texts/300903/page_3.md b/samples/texts/300903/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1de5e41e0f556519b1a1bb1117cf541bf20445b
--- /dev/null
+++ b/samples/texts/300903/page_3.md
@@ -0,0 +1,37 @@
+connecting to each root of the trees in $F_-$ (resp. $F_+$). Furthermore, let $T'_-$ (resp. $T'_+$) be
+the tree obtained from $F_-$ (resp. $F_+$) by throwing away the leaves and the open half edge
+connecting to the leaves. Set $A_- = q^{-1}(T'_-)$ and $A_+ = q^{-1}(T'_+)$. Then $A_-$ and $A_+$ are
+both admissible subsurfaces of $\Sigma_{d,r}^\infty$. Now one can produce a homeomorphism $\varphi_0: A_- \to A_+$
+which is identity on the based boundary and maps the admissible loops of $A_-$ to the admissible
+loops of $A_+$ following the information from $\rho$, mapping the visible part to the visible part for
+each admissible loop. From here, we extend $\varphi_0$ to a map $\varphi: \Sigma_{d,r}^\infty \to \Sigma_{d,r}^\infty$ such that $\varphi$ is an
+asymptotically rigid homeomorphism.
+
+If an element $g \in BV_{d,r}(\Sigma)$ is mapped to a trivial element $\pi(g) = [(F_-, \rho, F_+)] \in V_{d,r}$, then
+the two forests $F_-$ and $F_+$ are the same and the induced map $\rho$ is trivial. This means we can
+assume the support $A$ for the asymptotically rigid homeomorphism $\varphi_g$ corresponding to $g$ is
+the same as $\varphi_g(A)$ and that $\varphi_g$ induces the identity map on the admissible boundary components. Thus
+$g \in PMap_c(\Sigma_{d,r}^\infty)$. Finally, given any element $g \in PMap_c(\Sigma_{d,r}^\infty)$, it is clear that $\pi \circ j(g) = 1$. $\square$
+
+The mapping class group $\mathrm{Map}(\Sigma_{d,r}^{\infty})$ has a natural quotient topology coming from the compact-open topology on $\mathrm{Homeo}^{+}(\Sigma_{d,r}^{\infty}, \partial\Sigma_{d,r}^{\infty})$. See [AV20, Section 2.3, 4.1] for more information. In [AF17, Theorem 1.3], Aramayona and Funar showed that when $\Sigma$ is a closed surface, $\mathcal{H}\mathrm{V}_{2,1}(\Sigma)$ is dense in $\mathrm{Map}(\Sigma_{2,1}^{\infty})$. We improve their result to the following.
+
+**Theorem 3.19.** *The groups* $\mathcal{B}V_{d,r}(\Sigma)$ *and* $\mathcal{H}V_{d,r}(\Sigma)$ *are dense in the mapping class group*
+$\mathrm{Map}(\Sigma_{d,r}^{\infty})$.
+
+**Proof.** The proof in [AF17, Section 6] adapts directly to show that $\mathcal{H}V_{d,r}(\Sigma)$ is dense in $\mathrm{Map}(\Sigma_{d,r}^{\infty})$, so we will not repeat it here. To show that $\mathcal{B}V_{d,r}(\Sigma)$ is also dense in $\mathrm{Map}(\Sigma_{d,r}^{\infty})$, it suffices to show that any element of $\mathcal{H}V_{d,r}(\Sigma)$ can be approximated by a sequence of elements of $\mathcal{B}V_{d,r}(\Sigma)$. Since $\mathcal{H}V_{d,r}(\Sigma)$ is generated by $\mathcal{B}V_{d,r}(\Sigma)$ together with the half Dehn twists around the admissible loops in $\Sigma_{d,r}^{\infty}$, it suffices to show that any half Dehn twist around an admissible loop in $\Sigma_{d,r}^{\infty}$ can be so approximated. Given an admissible loop $L$, let $h_L$ be a half Dehn twist around $L$. We will construct a sequence of elements $x_i \in \mathcal{B}V_{d,r}(\Sigma)$ such that for any compact subset $K$ of $\Sigma_{d,r}^{\infty}$, there exists $N$ such that for any $j \ge N$, $x_j$ and $h_L$ coincide on $K$. Recall we have the map $q: \Sigma_{d,r}^{\infty} \to \mathcal{T}_{d,r}$ (cf. Definition 3.16) such that the admissible loops are mapped to midpoints of edges in $\mathcal{T}_{d,r}$. Now consider the admissible loops whose images under $q$ lie below $q(L)$ at distance $i$ from $q(L)$. Note that there are $d^i$ such admissible loops; we list them as $L_{i,1}, \cdots, L_{i,d^i}$. Let $h_{L_{i,k}}$ be the half Dehn twist around $L_{i,k}$ and let $x_i = h_L h_{L_{i,1}} \cdots h_{L_{i,d^i}}$; then $x_i \in \mathcal{B}V_{d,r}(\Sigma)$ and the sequence $\{x_i\}$ has the desired property. $\square$
+
+Now recall by Remark 3.5 that $\Sigma_{d,r}^\infty$ is homeomorphic to $\Sigma \setminus C$ for any $d$ and $r$, hence we have
+the following corollary.
+
+**Corollary 3.20.** Let $\Sigma$ be any compact surface and $C$ be a Cantor set which lies in the interior of a disk in $\Sigma$. Then the mapping class group $\mathrm{Map}(\Sigma \setminus C)$ contains the following two families of dense subgroups: the asymptotic mapping class groups $\mathcal{B}V_{d,r}(\Sigma)$, which surject onto the Higman–Thompson group $V_{d,r}$, and the half-twist asymptotic mapping class groups $\mathcal{H}V_{d,r}(\Sigma)$, which surject onto the symmetric Higman–Thompson group $V_{d,r}(\mathbb{Z}/2\mathbb{Z})$.
+
+**3.3. The asymptotic mapping class group of the disk punctured by the Cantor set.**
+
+In this last subsection, we want to identify the asymptotic mapping class group $\mathcal{B}V_{d,r}(D)$ with
+the oriented ribbon Higman–Thompson groups $\mathcal{R}\mathcal{V}_{d,r}^{+}$ and the half-twist asymptotic mapping
+class group $\mathcal{H}\mathcal{V}_{d,r}(D)$ with the ribbon Higman–Thompson group $\mathcal{R}\mathcal{V}_{d,r}$. The following lemma
+appears in [BT12, Section 2] without a proof, so we provide the details here.
+
+**Lemma 3.21.** Let $D_k$ be the $(k+1)$-holed sphere. Then $\mathrm{Map}(D_k)$ can be naturally identified with the pure oriented ribbon braid group $\mathrm{PRB}_k^+$.
+
+**Proof.** Note that $D_k$ can be identified with a disk with $k$ holes. Let $\partial_b$ denote the boundary of the disk. Let $\bar{D}_k$ be a disk with $k$ punctures obtained from $D_k$ by attaching one punctured disk
\ No newline at end of file
diff --git a/samples/texts/300903/page_4.md b/samples/texts/300903/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb11c6737aaa000f9f0c961edf3ba43dfc6fee85
--- /dev/null
+++ b/samples/texts/300903/page_4.md
@@ -0,0 +1,15 @@
+to each hole. The induced map $Cap : Map(D_k) \to PMap(\bar{D}_k)$ is the capping homomorphism. Note that $PMap(\bar{D}_k) \cong PB_k$. Now applying [FM11, Proposition 3.19] and the fact that the Dehn twists around the holes of $D_k$ commute, one sees that the kernel $K$ is a free abelian group of rank $k$ generated by these $k$ Dehn twists. Moreover, the capping homomorphism splits. To prove this, we first embed $PB_k$ into $PRB_k^+$ by viewing the pure braid group on $k$ strings as the set of ribbon braids on $k$ bands such that the bands have no twists. We can think of $D_k$ as being embedded into $\mathbb{R}^2$ with $\partial_b$ as the unit circle and the $k$ holes in $D_k$ equally distributed inside $\partial_b$ along the $x$-axis. The intersections of these holes with the $x$-axis give $k$ subintervals of the $x$-axis denoted $I_1, \cdots, I_k$. We now put the bands representing a pure braid $x \in PB_k \le PRB_k^+$ in $D \times [0, 1]$, starting and ending at $I_1, \cdots, I_k$. Note that the bands here do not twist at all. Now we comb the bands straight from bottom to top. This induces a homeomorphism of $D_k \times \{0\}$ and hence an element in the mapping class group $Map(D_k)$. One checks that this map is an injective group homomorphism. Since $PB_k$ acts on $K$ trivially, we have $Map(D_k) \cong K \times PB_k \cong \mathbb{Z}^k \times PB_k \cong PRB_k^+$, where the number of Dehn twists around each boundary component is naturally identified with the number of full twists on each band. $\square$
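+
+In summary, the proof assembles into the split short exact sequence (all of whose ingredients appear above)
+
+$$ 1 \longrightarrow K \cong \mathbb{Z}^k \longrightarrow Map(D_k) \xrightarrow{\ Cap\ } PMap(\bar{D}_k) \cong PB_k \longrightarrow 1, $$
+
+and since the splitting exists and $PB_k$ acts trivially on $K$, the extension is a direct product $Map(D_k) \cong \mathbb{Z}^k \times PB_k$.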
+
+To promote Lemma 3.21 so that it works for the ribbon braid group, we need some extra terminology. As in the proof of Lemma 3.21, we identify $D_k$ with the unit disk in $\mathbb{R}^2$ with $k$ small disks, whose centers are equally distributed on the $x$-axis, removed. The $x$-axis cuts the boundary loop of each deleted disk into two components, providing a cell structure on the loops. We will call the part that lies above the $x$-axis the *visible part*. We define the *rigid mapping class group* $RMap_+(D_k)$ of $D_k$ to be the group of isotopy classes of homeomorphisms of $D_k$ which fix $\partial_b D_k$ pointwise and map the visible part of the holes to the visible part of the holes. Note that elements of $RMap_+(D_k)$ are allowed to map one hole boundary to another, just as in the definition of the asymptotic mapping class group. If we only require that the cell structure on the loops be preserved, the resulting group is called the *quasi-rigid mapping class group* of $D_k$ and is denoted by $RMap(D_k)$. With these preparations, the following lemma is now clear.
+
+**Lemma 3.22.** There is a natural isomorphism between the oriented ribbon braid group $RB_k^+$ and $RMap_+(D_k)$ (resp. between the ribbon braid group $RB_k$ and $RMap(D_k)$).
+
+**Proof.** As in the proof of Lemma 3.21, we put the element of the (oriented) ribbon braid group in $D \times [0, 1]$, then we comb the bands straight from bottom to top, which gives the corresponding element in $RMap_+(D_k)$ (resp. $RMap(D_k)$). $\square$
+
+Given two admissible subsurfaces $A$ and $A'$ of $D_{d,r}^\infty$ (possibly with different $r$) with $k$ admissible boundary components, we want to fix a canonical way to identify a homeomorphism $f: A \to A'$ as an element in the ribbon braid group. Note that each boundary loop except the base one inherits a visible side from $D_{d,r}^\infty$. We will use the puncture model for $D_{d,r}^\infty$ going forward.
+
+As above, let $D_k$ be the subsurface of $D$ which is the complement of $k$ disjoint open disks with centers at $(a_i, 0)$, $a_i = \frac{2i-k-1}{k+1}$, of radius $2^{-k}$ for $1 \le i \le k$. Now given any admissible subsurface $A_k$ of $D_{d,r}^\infty$ with $k$ admissible boundaries, denote the centers from left to right by $c_i \in [0, 1] \times \{0\}$, $1 \le i \le k$, with radii $r_1, r_2, \dots, r_k$. Now we define an isotopy $\mathcal{N}_{A_k}: D \times [0, 1] \to D$ such that $\mathcal{N}_{A_k,0} = \mathrm{id}_D$ and $\mathcal{N}_{A_k,1}$ maps $A_k$ to $D_k$ via a homeomorphism. We first shrink the admissible boundary loops of $A_k$ so that they have radius $r$, where $r = \min\{r_1, \dots, r_k, 2^{-k}\}$. Then we isotope $A_k$ by moving the centers $c_i$ to $(a_i, 0)$ along $[0, 1] \times \{0\}$ in $D$. In the last step, we enlarge the radii one by one to $2^{-k}$. The following lemma is now immediate.
+
+**Lemma 3.23.** Let $\phi : D_{d,r}^\infty \to D_{d,r}^\infty$ be an asymptotically rigid (resp. quasi-rigid) homeomorphism which is supported on the admissible subsurface $A_k$. Denote $A'_k = \phi(A_k)$, then
+
+(1) $\mathbf{r}_\phi = \mathcal{N}_{A'_k,1} \circ \phi|_{A_k} \circ \mathcal{N}_{A_k,1}^{-1} : D_k \to D_k$ gives an element in the oriented ribbon braid group $RB_k^+$ (resp. the ribbon braid group $RB_k$). Conversely, given an element $\mathbf{r} \in RB_k^+$ (resp. $RB_k$), we have an asymptotically rigid (resp. quasi-rigid) homeomorphism which is unique up to isotopy, supported on $A_k$, and maps $A_k$ to $A'_k$.
\ No newline at end of file
diff --git a/samples/texts/300903/page_6.md b/samples/texts/300903/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..3ef7602f55bf493a79e7e32fdcc768740de71ff5
--- /dev/null
+++ b/samples/texts/300903/page_6.md
@@ -0,0 +1,41 @@
+**Definition 4.2** ([RWW17, Definition 1.5]). Let $(\mathcal{C}, \oplus, 0)$ be a monoidal category with 0 initial. We say that $\mathcal{C}$ is *prebraided* if its underlying groupoid is braided and for each pair of objects $A$ and $B$ in $\mathcal{C}$, the groupoid braiding $b_{A,B}: A \oplus B \to B \oplus A$ satisfies
+
+$$ b_{A,B} \circ (A \oplus \iota_B) = \iota_B \oplus A : A \to B \oplus A. $$
+
+**Definition 4.3** ([RWW17, Definition 2.1]). Let $(\mathcal{C}, \oplus, 0)$ be a monoidal category with 0 initial and $(A, X)$ a pair of objects in $\mathcal{C}$. Define $W_n(A, X)_\bullet$ to be the semi-simplicial set with set of $p$-simplices
+
+$$ W_n(A, X)_p := \operatorname{Hom}_\mathcal{C}(X^{\oplus p+1}, A \oplus X^{\oplus n}) $$
+
+and with face map
+
+$$ d_i : \operatorname{Hom}_{\mathcal{C}}(X^{\oplus p+1}, A \oplus X^{\oplus n}) \to \operatorname{Hom}_{\mathcal{C}}(X^{\oplus p}, A \oplus X^{\oplus n}) $$
+
+defined by precomposing with $X^{\oplus i} \oplus \iota_X \oplus X^{\oplus p-i}$.
+
+Also define the following property for a fixed pair $(A, X)$ and a slope $k \ge 2$.
+
+**LH3** For all $n \ge 1$, $W_n(A, X)_\bullet$ is $(\frac{n-2}{k})$-connected.
+
+Quite often, we can reduce the semi-simplicial set to a simplicial complex.
+
+**Definition 4.4** ([RWW17, Definition 2.8]). Let $A, X$ be objects of a homogeneous category $(\mathcal{C}, \oplus, 0)$. For $n \ge 1$, let $S_n(A, X)$ denote the simplicial complex whose vertices are the maps $f: X \to A \oplus X^{\oplus n}$ and whose $p$-simplices are $(p+1)$-sets $\{f_0, \dots, f_p\}$ such that there exists a morphism $f: X^{\oplus p+1} \to A \oplus X^{\oplus n}$ with $f \circ i_j = f_j$ for some order on the set, where
+
+$$ i_j = \iota_{X^{\oplus j}} \oplus \mathrm{id}_X \oplus \iota_{X^{\oplus p-j}} : X = 0 \oplus X \oplus 0 \to X^{\oplus p+1}. $$
+
+**Definition 4.5.** Let $\mathrm{Aut}(A \oplus X^{\oplus\infty})$ be the colimit of
+
+$$ \dots \xrightarrow{-\oplus X} \mathrm{Aut}(A \oplus X^{\oplus n}) \xrightarrow{-\oplus X} \mathrm{Aut}(A \oplus X^{\oplus n+1}) \xrightarrow{-\oplus X} \mathrm{Aut}(A \oplus X^{\oplus n+2}) \xrightarrow{-\oplus X} \dots $$
+
+Then any $\mathrm{Aut}(A \oplus X^{\oplus\infty})$-module $M$ may be considered as an $\mathrm{Aut}(A \oplus X^{\oplus n})$-module for any $n$, by restriction, which we continue to call $M$. We say that the module $M$ is *abelian* if the action of $\mathrm{Aut}(A \oplus X^{\oplus\infty})$ on $M$ factors through the abelianization of $\mathrm{Aut}(A \oplus X^{\oplus\infty})$, or in other words, if the derived subgroup of $\mathrm{Aut}(A \oplus X^{\oplus\infty})$ acts trivially on $M$.
+
+We are now ready to quote the theorem that we will use.
+
+**Theorem 4.6** ([RWW17, Theorem 3.1]). Let $(\mathcal{C}, \oplus, 0)$ be a prebraided homogeneous category satisfying **LH3** for a pair $(A, X)$ with slope $k \ge 3$. Then for any abelian $\mathrm{Aut}(A \oplus X^{\oplus\infty})$-module $M$, the map
+
+$$ H_i(\mathrm{Aut}(A \oplus X^{\oplus n}); M) \to H_i(\mathrm{Aut}(A \oplus X^{\oplus n+1}); M) $$
+
+induced by the natural inclusion map is surjective if $i \le \frac{n-k+2}{k}$, and injective if $i \le \frac{n-k}{k}$.
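+
+For instance, taking slope $k = 3$, the bounds of Theorem 4.6 specialize as follows: the map on $H_i$ is surjective once $n \ge 3i + 1$ and injective once $n \ge 3i + 3$, so for $i = 1$ the map
+
+$$ H_1(\mathrm{Aut}(A \oplus X^{\oplus n}); M) \to H_1(\mathrm{Aut}(A \oplus X^{\oplus n+1}); M) $$
+
+is surjective for $n \ge 4$ and an isomorphism for $n \ge 6$.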
+
+**4.2. Homogeneous category for the groups** $RV_{d,r}^+$. The purpose of this section is to produce a homogeneous category for proving homological stability of the oriented ribbon Higman–Thompson groups $RV_{d,r}^+$. Note that by Theorem 3.24, this is the same as proving that the asymptotic mapping class groups $BV_{d,r}$ have homological stability. This allows us to define our homogeneous category geometrically. The category is similar to the ones produced in [RWW17, Section 5.6]. Essentially, we replace the annulus or Möbius band by the infinite surface $D_{d,1}^\infty$.
+
+Recall $D_{d,r}^\infty$ is an infinite surface equipped with a canonical asymptotic rigid structure. Let $I = [-1, 1] \subset \partial_b D_{d,1}^\infty$ be an embedded interval. Let $I^- = [-1, 0]$ and $I^+ = [0, 1]$ be subintervals of $I$. Let $D_{d,1}^\infty \oplus D_{d,1}^\infty$ be the boundary sum of two copies of $D_{d,1}^\infty$ obtained by identifying $I^+$ of the first copy with $I^-$ of the second copy. Inductively, we can similarly define $\oplus_r D_{d,1}^\infty$ for any $r \ge 0$. Here $\oplus_0 D_{d,1}^\infty$ is just the standard disk $D$. Abusing notation, when referring to $I^-$ and $I^+$ on $\oplus_r D_{d,1}^\infty$, we will mean the two copies of $I^-$ and $I^+$ which remain on the boundary. Thus we have an operation $\oplus$ on the set of surfaces $\oplus_r D_{d,1}^\infty$, $r \ge 0$. See Figure 6(b) for a picture of $(\oplus_2 D_{d,1}^\infty) \oplus (\oplus_3 D_{d,1}^\infty)$. In fact, $(\{\oplus_r D_{d,1}^\infty \mid r \ge 0\}, \oplus)$ is the free monoid generated by $D_{d,1}^\infty$. Note
\ No newline at end of file
diff --git a/samples/texts/300903/page_7.md b/samples/texts/300903/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..d420fdcdc9e9e65a60b54533f0dfca67c412ddf4
--- /dev/null
+++ b/samples/texts/300903/page_7.md
@@ -0,0 +1,15 @@
+FIGURE 6. The braided monoidal structure for the category $\mathcal{G}_d$
+
+that $\oplus_r D_{d,1}^\infty$ has a naturally induced $d$-rigid structure and we can identify it with $D_{d,r}^\infty$, which will be of use to us later.
+
+We can now define the category $\mathcal{G}_d$ to be the monoidal category with objects $\oplus_r D_{d,1}^\infty$, $r \ge 0$, $\oplus$ as the operation, and $D$ as the $0$ object. So far it is the same as defining the objects as the natural numbers and addition as the operation. When $r=s$, we define the morphisms $\text{Hom}(\oplus_r D_{d,1}^\infty, \oplus_s D_{d,1}^\infty) = \mathcal{B}\text{V}_{d,r}$ which is the group of isotopy classes of asymptotically rigid homeomorphisms of $D_{d,r}^\infty$; when $r \ne s$, let $\text{Hom}(\oplus_r D_{d,1}^\infty, \oplus_s D_{d,1}^\infty) = \emptyset$. Note that we did not universally define the morphisms to be the sets of isotopy classes of asymptotically rigid homeomorphisms as we want our category to satisfy cancellation, i.e., if $A \oplus C = A$ then $C = 0$, see [RWW17, Remark 1.11] for more information. The category $\mathcal{G}_d$ has a natural braiding as in the usual braid group case, see Figure 6(c).
+
+Now applying [RWW17, Theorem 1.10], we have a homogeneous category $\mathcal{U}\mathcal{G}_d$. The category $\mathcal{U}\mathcal{G}_d$ has the same objects as $\mathcal{G}_d$ and morphisms defined as follows: for any $s \le r$, a morphism in $\text{Hom}(\oplus_s D_{d,1}^\infty, \oplus_r D_{d,1}^\infty)$ is an equivalence class of pairs $(\oplus_{r-s} D_{d,1}^\infty, f)$ where $f: (\oplus_{r-s} D_{d,1}^\infty) \oplus (\oplus_s D_{d,1}^\infty) \to \oplus_r D_{d,1}^\infty$ is a morphism in $\mathcal{G}_d$, and $(\oplus_{r-s} D_{d,1}^\infty, f) \sim (\oplus_{r-s} D_{d,1}^\infty, f')$ if there exists an isomorphism $g: \oplus_{r-s} D_{d,1}^\infty \to \oplus_{r-s} D_{d,1}^\infty$ in $\mathcal{G}_d$ making the evident diagram commute up to isotopy.
+
+We write $[\oplus_{r-s} D_{d,1}^{\infty}, f]$ for such an equivalence class. Now by Theorem 4.6, to prove the homological stability for the oriented ribbon Higman–Thompson groups, we only need to verify Condition **LH3**, i.e. that the complex $W_r(D, D_{d,1}^{\infty})_\bullet$ is highly connected. As a matter of fact, we will show that $W_r(D, D_{d,1}^{\infty})_\bullet$ is $(r-3)$-connected in the next subsection. First, let us further characterize the morphisms in $\mathcal{U}\mathcal{G}_d$. Call $0 = I^- \cap I^+$ the *basepoint* of $\oplus_r D_{d,1}^\infty$.
+
+**Definition 4.7.** Given $s < r$, an injective map $\varphi : (\oplus_s D_{d,1}^\infty, I^+) \to (\oplus_r D_{d,1}^\infty, I^+)$ is called an *asymptotically rigid embedding* if it satisfies the following properties:
+
+(1) $\varphi(\partial D_{d,s}^\infty) \cap \partial D_{d,r}^\infty = I^+$.
+
+(2) $\varphi$ maps $\oplus_s D_{d,1}^\infty$ homeomorphically to $\varphi(\oplus_s D_{d,1}^\infty)$ and there exists an admissible surface $A \subset \oplus_s D_{d,1}^\infty$ such that $\varphi: \oplus_s D_{d,1}^\infty \setminus A \to \varphi(\oplus_s D_{d,1}^\infty) \setminus \varphi(A)$ is rigid.
\ No newline at end of file
diff --git a/samples/texts/300903/page_8.md b/samples/texts/300903/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..481fbbb361086f259f9c37790b64f4c2a67d8668
--- /dev/null
+++ b/samples/texts/300903/page_8.md
@@ -0,0 +1,25 @@
+(3) The closure of the complement of $\varphi(\oplus_s D_{d,1}^\infty)$ in $\oplus_r D_{d,1}^\infty$ with its induced $d$-rigid structure is asymptotically rigidly homeomorphic to $\oplus_{r-s} D_{d,1}^\infty$.
+
+**Lemma 4.8.** For $s < r$, the equivalence classes of pairs $[\oplus_{r-s} D_{d,1}^\infty, f]$ are in one-to-one correspondence with the isotopy classes of asymptotically rigid embeddings of $(\oplus_s D_{d,1}^\infty, I^+)$ into $(\oplus_r D_{d,1}^\infty, I^+)$.
+
+**Remark 4.9.** Here isotopies are carried out among asymptotically rigid embeddings.
+
+*Proof.* Given an equivalence class of a pair $[\oplus_{r-s}D_{d,1}^{\infty}, f]$, the restriction map $f|_{\oplus_s D_{d,1}^{\infty}}$ is an asymptotically rigid embedding. Any two representatives of the same equivalence class induce the same map $f|_{\oplus_s D_{d,1}^{\infty}}$ up to isotopy, hence we have a well-defined map from the set of equivalence classes of pairs to the set of isotopy classes of asymptotically rigid embeddings.
+
+We produce the inverse of the restriction map as follows. If we have an asymptotically rigid embedding $\varphi : (\oplus_s D_{d,1}^\infty, I^+) \to (\oplus_r D_{d,1}^\infty, I^+)$, by part 3 of Definition 4.7, we also have an asymptotically rigid homeomorphism $\phi : C \to \oplus_{r-s} D_{d,1}^\infty$ where $C$ is the closure of the complement of $\varphi(\oplus_s D_{d,1}^\infty)$ in $\oplus_r D_{d,1}^\infty$. Up to isotopy, we can assume $\phi^{-1}|_{I^+}$ coincides with $\varphi|_{I^-}$. Now define a map $\tilde{f} : (\oplus_{r-s} D_{d,1}^\infty) \oplus (\oplus_s D_{d,1}^\infty) \to \oplus_r D_{d,1}^\infty$ by $\tilde{f}|_{\oplus_{r-s} D_{d,1}^\infty} = \phi^{-1}$ and $\tilde{f}|_{\oplus_s D_{d,1}^\infty} = \varphi$. One can check that $\tilde{f}$ is an asymptotically rigid homeomorphism. Then $(\oplus_{r-s} D_{d,1}^\infty, \tilde{f})$ gives a representative of an equivalence class of pairs. $\square$
+
+4.3. **Higher connectivity of the complex $W_r(D, D_{d,1}^\infty)_\bullet$.** We want to prove that the complex $W_r(D, D_{d,1}^\infty)_\bullet$ is highly connected. As explained in the proof of [RWW17, Lemma 5.21], a simplex of $S_r(D, D_{d,1}^\infty)$ has a canonical ordering on its vertices induced by the local orientation of the surfaces near the parameterized interval in their based boundary. Thus the geometric realization $|W_r(D, D_{d,1}^\infty)_\bullet|$ is homeomorphic to $|S_r(D, D_{d,1}^\infty)|$. Our first step now is to simplify the complex $S_r(D, D_{d,1}^\infty)$ further.
+
+**Definition 4.10.** Given $r \ge 2$, we call a loop $\alpha : (I, \partial I) = ([0, 1], \{0, 1\}) \to (\oplus_r D_{d,1}^\infty, 0)$ an *asymptotically rigidly embedded loop* if there exists an asymptotically rigid embedding $\varphi : (D_{d,1}^\infty, I^+) \to (\oplus_r D_{d,1}^\infty, I^+)$ with $\varphi|_{(\partial D_{d,1}^\infty, 0)} = \alpha$ up to based isotopy.
+
+**Remark 4.11.** When $r=1$, we just call a loop asymptotically rigidly embedded if it is isotopic to the boundary.
+
+**Lemma 4.12.** When $r \ge 2$, a loop $\alpha : (I, \partial I) \to (\oplus_r D_{d,1}^\infty, 0)$ is isotopic to an asymptotically rigidly embedded loop if and only if there exists an admissible surface $A \subseteq \oplus_r D_{d,1}^\infty$ such that, up to isotopy, the admissible loops of $A$ are disjoint from $\alpha$, the number of admissible loops of $A$ that lie in the disk bounded by $\alpha$ is $1 + a(d-1)$ for some $a \ge 0$, and some admissible loops of $A$ do not lie inside the disk bounded by $\alpha$.
+
+*Proof.* It is clear that a loop which is isotopic to an asymptotically rigidly embedded loop has the properties given in the lemma.
+
+For the other direction, we can assume up to isotopy that $\alpha(I) \cap \partial(\oplus_r D_{d,1}^\infty) = I^+$. We know that $D_{d,r}^\infty$ is asymptotically rigidly homeomorphic to $D_{d,r+d-1}^\infty$, thus the surface bounded by the loop $\alpha$ is asymptotically rigidly homeomorphic to $D_{d,1}^\infty$. Therefore, the number of admissible loops contained in the complementary disk is congruent to $r-1$ modulo $d-1$, and thus it is asymptotically rigidly homeomorphic to $D_{d,r-1}^\infty$. These two facts together imply that $\alpha$ is an asymptotically rigidly embedded loop. $\square$
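+
+To illustrate the count $1 + a(d-1)$ from Lemma 4.12: each subdivision replaces a single admissible loop by the $d$ admissible loops of the $d$-leg pants attached to it, a net change of $d-1$. Hence the number of admissible loops of $A$ inside the disk bounded by $\alpha$ can only take the values
+
+$$ 1,\; 1+(d-1),\; 1+2(d-1),\; \ldots, $$
+
+which for $d = 3$ gives the odd numbers $1, 3, 5, \ldots$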
+
+Now we define the complex $U_r(D, D_{d,1}^\infty)$ which is the surface version of the complex $U_r$ given in [SW19, Section 2.4].
+
+**Definition 4.13.** For $r \ge 1$, let $U_r(D, D_{d,1}^\infty)$ denote the simplicial complex whose vertices are isotopy classes of asymptotically rigidly embedded loops and a set of vertices $\alpha_0, \dots, \alpha_p$ forms a $p$-simplex if and only if any corresponding asymptotically rigid embeddings $\phi_0, \dots, \phi_p$ form a $p$-simplex in $S_r(D, D_{d,1}^\infty)$.
\ No newline at end of file
diff --git a/samples/texts/300903/page_9.md b/samples/texts/300903/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b247903007337d4dcc41c3f4cc0600698672560
--- /dev/null
+++ b/samples/texts/300903/page_9.md
@@ -0,0 +1,35 @@
+We denote the canonical map from $S_r(D, D_{d,1}^\infty)$ to $U_r(D, D_{d,1}^\infty)$ by $\pi$. The following lemma follows directly from the definition.
+
+**Lemma 4.14.** *The map $\pi$ is a complete join.*
+
+Now by Proposition 1.3, we only need to show that $U_r(D, D_{d,1}^\infty)$ is highly connected. Similar to [SW19, Section 2.4], we will produce several other complexes closely related to $U_r(D, D_{d,1}^\infty)$. We first have the following complex which is analogous to the complex $U_r^\infty$ in [SW19, Definition 2.12].
+
+**Definition 4.15.** Let $U_r^\infty(D, D_{d,1}^\infty)$ be the simplicial complex with vertices given by asymptotically rigidly embedded loops in $\oplus_r D_{d,1}^\infty$ where $\alpha_0, \alpha_1, \dots, \alpha_p$ form a $p$-simplex if the punctured disks bounded by them are pairwise disjoint (outside of the based point) and there exists at least one admissible loop that does not lie in those disks.
+
+**Remark 4.16.** (1) The $(r-2)$-skeleton of $U_r^\infty(D, D_{d,1}^\infty)$ is the same as that of $U_r(D, D_{d,1}^\infty)$. Notice though that $U_r^\infty(D, D_{d,1}^\infty)$ is in fact infinite dimensional.
+
+(2) Since $\oplus_r D_{d,1}^\infty$ is asymptotically rigidly homeomorphic to $\oplus_{r+d-1} D_{d,1}^\infty$, we have $U_r^\infty(D, D_{d,1}^\infty)$ is isomorphic to $U_{r+d-1}^\infty(D, D_{d,1}^\infty)$ as a simplicial complex.
+
+We also need another complex, which is the surface version of the complex $T_r^\infty$ given in [SW19, Definition 2.14]. For convenience, we will orient the admissible loops in $\oplus_r D_{d,1}^\infty$ so that they bound the punctured disk according to the orientation.
+
+**Definition 4.17.** An almost admissible loop is a loop $\alpha : (I, \partial I) \to (\oplus_r D_{d,1}^\infty, 0)$ which is freely isotopic to one of the nonbased admissible loops.
+
+Note that by Lemma 4.12, an almost admissible loop is an asymptotically rigidly embedded loop.
+
+**Definition 4.18.** Define the simplicial complex $T_r^\infty(D, D_{d,1}^\infty)$ to be the full subcomplex of $U_r^\infty(D, D_{d,1}^\infty)$ such that all its vertices are almost admissible loops.
+
+Just as discussed in Remark 4.16, we have $T_r^\infty(D, D_{d,1}^\infty)$ is in fact isomorphic to $T_{r+d-1}^\infty(D, D_{d,1}^\infty)$ as a simplicial complex.
+
+We now want to further characterise the almost admissible loops by building a connection to the usual arc complex. We let $A$ be the quotient $[0,2]/1 \sim 2$. This corresponds to identifying the endpoint 1 of the interval $[0,1]$ with the base point 1 of the circle given by $[1,2]/1 \sim 2$.
+
+**Definition 4.19.** An injective continuous map $L : (A, 0) \to (D_{d,r}^\infty, 0)$ is called a *lollipop* on the surface $D_{d,r}^\infty$ if $L|_{[1,2]}$ is isotopic to an admissible loop in $D_{d,r}^\infty$ and $L|_{[0,1]}$ is an arc connecting the base point 0 to the loop $L([1,2])$. The map $L|_{[0,1]}$ is called the *arc part* of the lollipop $L$ and $L|_{[1,2]}$ is called the *loop part*.
+
+Lollipops are examples of what Hatcher-Vogtmann refer to as tethered curves [HV17].
+
+**Lemma 4.20.** *The set of isotopy classes of almost admissible loops is in one-to-one correspondence with the set of isotopy classes of lollipops.*
+
+*Proof.* We define a map $g$ from the isotopy classes of lollipops to the isotopy classes of almost admissible loops and show that the map is bijective.
+
+Given a lollipop $L : (A, 0) \to (D_{d,r}^\infty, 0)$, we can map it to an almost admissible loop $\alpha : [0, 1] \to (D_{d,r}^\infty, 0)$ as follows. We define $\alpha(0) = 0$ and let $\alpha(t)$ run parallel to $L$ outside the region bounded by $L$. The orientation of $\alpha$ is simply the one that coincides with the loop part of $L$. Since $\alpha$ can be freely homotoped to the admissible loop $L|_{[1,2]}$, it is almost admissible. Any isotopy of $L$ induces an isotopy of $\alpha$, hence the map is well-defined.
+
+Now we show $g$ is surjective. Given any almost admissible loop $\alpha : [0, 1] \to (D_{d,r}^\infty, 0)$, let $A$ be the admissible loop which is freely isotopic to $\alpha$. Up to isotopy, we can assume that $A$ lies
\ No newline at end of file
diff --git a/samples/texts/3349676/page_15.md b/samples/texts/3349676/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..59003d4b0be8728779efbd97d8cd2778a33ad835
--- /dev/null
+++ b/samples/texts/3349676/page_15.md
@@ -0,0 +1,43 @@
+[27] H. Krause and M. Saorín, *On minimal approximations of modules*, in: Trends in the Representation Theory of Finite Dimensional Algebras, E. L. Green and B. Huisgen-Zimmermann (eds.), Contemp. Math. 229, Amer. Math. Soc., 1998, 227–236.
+
+[28] T. Y. Lam, *A First Course in Noncommutative Rings*, Springer, New York, 1991.
+
+[29] M. Y. Prest, *Duality and pure-semisimple rings*, J. London Math. Soc. 38 (1988), 403–409.
+
+[30] C. M. Ringel, *Representations of K-species and bimodules*, J. Algebra 41 (1976), 269–302.
+
+[31] C. M. Ringel and H. Tachikawa, *QF-3 rings*, J. Reine Angew. Math. 272 (1975), 49–72.
+
+[32] M. Schmidmeier, *The local duality for homomorphisms and an application to pure semisimple PI-rings*, Colloq. Math. 77 (1998), 121–132.
+
+[33] D. Simson, *Functor categories in which every flat object is projective*, Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys. 22 (1974), 375–380.
+
+[34] —, *Pure semisimple categories and rings of finite representation type*, J. Algebra 48 (1977), 290–296; Corrigendum, ibid. 67 (1980), 254–256.
+
+[35] —, *Partial Coxeter functors and right pure semisimple hereditary rings*, ibid. 71 (1981), 195–218.
+
+[36] —, *Linear Representations of Partially Ordered Sets and Vector Space Categories*, Algebra Logic Appl. 4, Gordon and Breach, 1992.
+
+[37] —, *On right pure semisimple hereditary rings and an Artin problem*, J. Pure Appl. Algebra 104 (1995), 313–332.
+
+[38] —, *A class of potential counter-examples to the pure semisimplicity conjecture*, in: Advances in Algebra and Model Theory, M. Droste and R. Göbel (eds.), Gordon and Breach, 1997, 345–373.
+
+[39] —, *Dualities and pure semisimple rings*, in: Abelian Groups, Module Theory and Topology, D. Dikranjan and L. Salce (eds.), Lecture Notes in Pure Appl. Math. 201, Dekker, 1998, 381–388.
+
+[40] —, *An Artin problem for division ring extensions and the pure semisimplicity conjecture II*, J. Algebra 227 (2000), 670–705.
+
+[41] —, *On local right pure semisimple rings of length two or three*, Osaka J. Math. 39 (2002), 985–1003.
+
+[42] B. Stenström, *Rings of Quotients*, Springer, 1975.
+
+[43] R. Wisbauer, *Foundations of Module and Ring Theory*, Gordon and Breach, 1991.
+
+Department of Mathematics
+Ohio University-Zanesville
+Zanesville, OH 43701, U.S.A.
+E-mail: nguyend2@ohiou.edu
+
+Department of Mathematics
+University of Murcia
+30100 Espinardo, Murcia, Spain
+E-mail: jlgarcia@um.es
\ No newline at end of file
diff --git a/samples/texts/3349676/page_5.md b/samples/texts/3349676/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..af7513d2d666efeb6741697a06c5891df92a7bca
--- /dev/null
+++ b/samples/texts/3349676/page_5.md
@@ -0,0 +1,17 @@
+$x \in G$, with the usual matrix multiplication; and $\text{End}(C)$ is canonically isomorphic to $\text{End}_G(G) \cong G$, so that $C$ is isomorphic to $G_G$ as a module over its endomorphism ring. It follows that $C$ is endofinite.
+
+Thus, we will assume below that $C$ is not isomorphic to $Q$. It is enough to show that for any finitely generated indecomposable left $R_M$-module $X$, $\text{Hom}_{R_M}(X, C)$ is finitely cogenerated over $\text{End}(C)$ (because if $L$ is any finitely generated left $R_M$-module, then $\text{Hom}_{R_M}(L, C)$ is a finite direct sum of modules of the form $\text{Hom}_{R_M}(X, C)$ with $X$ indecomposable, and the property of being finitely cogenerated as $\text{End}(C)$-modules is preserved under taking finite direct sums). If $X$ is not isomorphic to $Q$, then in view of (ii) above, $\text{Hom}_{R_M}(X, C)$ is isomorphic to $\text{Hom}_{R_B}(S^-(X), S^-(C))$, and because $\text{Hom}_{R_B}(S^-(X), S^-(C))$ is finitely cogenerated over $\text{End}(S^-(C))$ by hypothesis, we conclude that $\text{Hom}_{R_M}(X, C)$ is finitely cogenerated over $\text{End}(C)$. Finally, if $X$ is isomorphic to $Q$, then because $Q$ is simple injective and $C$ is indecomposable non-injective, it is clear that $\text{Hom}_{R_M}(Q, C) = 0$ in this case, completing our proof.
+
+(b) We now assume that $R_B$ is left and right artinian and that all finitely generated indecomposable left $R_B$-modules are finendo. We will show that every finitely generated indecomposable left $R_M$-module is finendo.
+
+Thus let ${}_G U$ and ${}_F V$ be finite-dimensional left vector spaces such that
+
+$$U \xrightarrow{\bar{\mu}} \operatorname{Hom}_F(M, V)$$
+
+corresponds to a finitely generated indecomposable left $R_M$-module $X = (U, V, \mu)$, and assume that the homomorphism $\bar{\mu}$ is injective. If we view $X$ as a left $R_M$-module, the elements of $X$ are column vectors $\left(\begin{smallmatrix} u \\ v \end{smallmatrix}\right)$ with $u \in U$, $v \in V$, and each endomorphism of $X$ takes such an element to $\left(\begin{smallmatrix} u' \\ v' \end{smallmatrix}\right)$ for some $u' \in U$ and $v' \in V$. Therefore $X$ is, as a right module over its endomorphism ring, isomorphic to a direct sum $U \oplus V$. To show that $X$ is finendo, it is enough to prove that each of the endosubmodules $U, V$ is finitely generated over $\text{End}(X)$.
+
+By the construction of the functor $S^-$, we have the exact sequence
+
+$$U \xrightarrow{\bar{\mu}} \operatorname{Hom}_F(M, V) \cong B \otimes_F V \xrightarrow{p} C \to 0$$
+
+and $S^-(X) = (V, C, p)$, being a finitely generated indecomposable left $R_B$-module, is finendo. By the same argument used in the preceding paragraph for the endosubmodules $U, V$ of $X$, both $V$ and $C$ are finitely generated over the endomorphism ring of $S^-(X)$, which is canonically isomorphic to $\text{End}(X)$. It is not hard to see that this entails that $V$ is finitely generated over the endomorphism ring of $X$, and that also $B \otimes_F V$ is finitely generated over $\text{End}(S^-(X)) \cong \text{End}(X)$. But $U$, being a direct summand of $B \otimes_F V$, is then finitely generated over the endomorphism ring $\text{End}(X)$, and we are done.
\ No newline at end of file
diff --git a/samples/texts/3349676/page_8.md b/samples/texts/3349676/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..a916b541b4d7287c4cc75081c9343d3540f38c25
--- /dev/null
+++ b/samples/texts/3349676/page_8.md
@@ -0,0 +1,35 @@
+mentioned in the introduction can be reduced to the problem of whether a left pure semisimple ring $R$ with all finitely generated indecomposable left $R$-modules cofinendo has to be of finite representation type. We know that the answer is positive when $R$ is hereditary (Theorem 3.5). In the general case, we can prove that $R$ is right *Morita*, i.e. $R$ is right artinian and every indecomposable injective right $R$-module is finitely generated. This is the goal of this section, where we follow the steps in [21, Section 6]; in order to do this, we first state and prove some results about triangular matrix rings.
+
+Let $R$ be a semiprime ring with Jacobson radical $J = J(R)$. Suppose that $I$ is a two-sided ideal of $R$ such that $JI = IJ = 0$. Then we may construct the triangular matrix ring
+
+$$T_I(R) = \begin{pmatrix} R/J & 0 \\ I & R/J \end{pmatrix}.$$
+
+As in the preceding section, we shall use the identification of the left $T_I(R)$-modules with the triples $(X, Y, \lambda)$, where $X, Y$ are left $R/J$-modules and $\lambda : I \otimes_{R/J} X \to Y$ is an $R/J$-homomorphism. Following [5, Section 2] or [21, p. 175], we shall say that a left $T_I(R)$-module $(X, Y, \lambda)$ is *Grassmannian* when there are no non-zero elements $x \in X$ with $\lambda(a \otimes x) = 0$ for all $a \in I$. Note that, as observed in [21, p. 175], any left $T_I(R)$-module is a direct sum of a Grassmannian module and a module of the form $(X, 0, 0)$. For any left $R$-module $X$, let $\text{soc}(X)$ denote the socle of $X$, and $\text{ann}_X(I)$ the annihilator $\{x \in X \mid Ix = 0\}$.
+
+We now define the following functor, which is closely related to the functors $F$ or $J$ appearing in [5, Section 3] (see also [20, p. 99] and [41]); in the form below, the definition was given in [21]:
+
+$$
+\operatorname{Gr} : \operatorname{R-Mod} \rightarrow T_I(R)\text{-Mod}, \quad \operatorname{Gr}(X) = (X/\operatorname{ann}_X(I), \operatorname{soc}(X), f),
+$$
+
+where the mapping $f : I \otimes_{R/J} (X/\operatorname{ann}_X(I)) \rightarrow \operatorname{soc}(X)$ is canonical.
+
+LEMMA 4.1. Let $R$ be a left artinian ring with Jacobson radical $J = J(R)$, and let $I$ be a two-sided ideal of $R$ such that $I \subseteq J$ and $JI = IJ = 0$. Let $T_I(R)$ be the triangular matrix ring constructed above.
+
+(a) Each finitely generated indecomposable left $T_I(R)$-module is isomorphic either to $(X, 0, 0)$ for some simple left $R/J$-module $X$ or to $\operatorname{Gr}(X)$ for some finitely generated indecomposable left $R$-module $X$.
+
+(b) If every finitely generated indecomposable left $R$-module is cofinendo, then every finitely generated indecomposable left $T_I(R)$-module is cofinendo.
+
+*Proof.* (a) It follows from the observation above on Grassmannian $T_I(R)$-modules that a finitely generated indecomposable left $T_I(R)$-module is either
\ No newline at end of file
diff --git a/samples/texts/3410193/page_1.md b/samples/texts/3410193/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..f637f9a5371c7b1de181e2bf2ef2bb518b2b36e7
--- /dev/null
+++ b/samples/texts/3410193/page_1.md
@@ -0,0 +1,23 @@
+# Updates & Errata
+
+## May 2021
+
+Based on a comment by Sergiu Carpov, we fixed a bug in our code related to the use of the 80-bit **long double**, and we re-ran all related scenarios. The list of affected content follows:
+
+### Section 5.3:
+
+- in the second scenario, all coefficients are equal to the bound *minus one*,
+
+- for 80-bit **long double**, *all scenarios* were calculated correctly,
+
+- updated Figure 2 with new measurements,
+
+- updated Discussion.
+
+### Section 6:
+
+- acknowledgment for Sergiu Carpov.
+
+## June 2021
+
+We changed the *y*-axis labels in Figure 2 from decadic to binary. We also fixed the corresponding values in the Conclusion.
\ No newline at end of file
diff --git a/samples/texts/3410193/page_10.md b/samples/texts/3410193/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..e9d5ca00587137c3867898e9aaedefef6a61c585
--- /dev/null
+++ b/samples/texts/3410193/page_10.md
@@ -0,0 +1,17 @@
+20. Jakub Klemsa. Benchmarking FFNT. https://gitlab.fit.cvut.cz/klemsjak/ffnt-benchmark, 2021.
+
+21. Vadim Lyubashevsky, Chris Peikert, and Oded Regev. On ideal lattices and learning with errors over rings. In *Annual International Conference on the Theory and Applications of Cryptographic Techniques*, pages 1–23. Springer, 2010.
+
+22. Fast Fourier transform in x86 assembly. https://www.nayuki.io/page/fast-fourier-transform-in-x86-assembly, 2021. Accessed: 2021-01-30.
+
+23. NIST. NIST's Post-Quantum Cryptography Program Enters “Selection Round”. https://www.nist.gov/news-events/news/2020/07/nists-post-quantum-cryptography-program-enters-selection-round, 2020.
+
+24. Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. *Journal of the ACM (JACM)*, 56(6):1–40, 2009.
+
+25. Arnold Schönhage and Volker Strassen. Schnelle Multiplikation großer Zahlen. *Computing*, 7(3):281–292, 1971.
+
+26. Peter W. Shor. Algorithms for quantum computation: discrete logarithms and factoring. In *Proceedings 35th Annual Symposium on Foundations of Computer Science*, pages 124–134. IEEE, 1994.
+
+27. Victor Shoup et al. NTL: A library for doing number theory. https://libntl.org/, 2001.
+
+28. TFHE: Fast Fully Homomorphic Encryption Library over the Torus. https://github.com/tfhe/tfhe, 2016.
\ No newline at end of file
diff --git a/samples/texts/3410193/page_11.md b/samples/texts/3410193/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..db6eed36d1df573bfd3e9ab529e3f9db5bce08c9
--- /dev/null
+++ b/samples/texts/3410193/page_11.md
@@ -0,0 +1,62 @@
+# Appendix
+
+## A Proof of Proposition 1
+
+*Proof.* Let us begin with the cyclic convolution. By (30) and Lemmas 1 and 2, we have
+
+$$
+\begin{align}
+\| \mathrm{Err}(\mathbf{F} \odot \mathbf{G}) \|_{\infty} &\lesssim \left( c_H^{(\mathbf{f})} \cdot (\sqrt{2}+1)^{\nu} \cdot \underbrace{2^{\gamma_0+\nu}}_{\gtrsim \| \mathrm{Err}(\mathbf{F}) \|_{\infty}} + c_H^{(\mathbf{g})} \cdot (\sqrt{2}+1)^{\nu} \cdot 2^{\varphi_0+\nu} \right) \cdot \sqrt{2} = \nonumber \\
+&= (\sqrt{2}+1)^{\nu} \cdot 2^{\nu+\varphi_0+\gamma_0-\chi+2} \cdot (2-\sqrt{2}) \cdot \sqrt{2} =: E_H, \quad \text{and} \tag{42}
+\end{align}
+$$
+
+$$
+\begin{equation}
+\begin{split}
+\mathrm{Var}(\mathrm{Err}(\mathbf{F} \odot \mathbf{G})) &\lesssim \left( \underbrace{d_N^{(\mathbf{f})} \cdot 2^{2\nu}}_{\gtrsim \mathrm{Var}(\mathrm{Err}(\mathbf{F}))} \cdot \underbrace{2^{2\gamma_0+2\nu}}_{\geq \|\mathbf{G}\|_{\infty}^2} + d_N^{(\mathbf{g})} \cdot 2^{2\nu} \cdot 2^{2\varphi_0+2\nu} \right) \cdot 2 = \\
+&= 2/3 \cdot 2^{4\nu+2\varphi_0+2\gamma_0-2\chi} =: V_H,
+\end{split}
+\tag{43}
+\end{equation}
+$$
+
+which we apply as the initial error and variance bounds to (22) and (23), respectively, together with multiplication by $1/N = 2^{-\nu}$, which is the only difference between FFT$^{-1}$ and FFT from the error point of view. We neglect all but the leading terms and get
+
+$$
+\begin{align}
+\|\operatorname{Err}(\mathbf{h})\|_{\infty} &\lesssim 2^{-\nu} \cdot \underbrace{2(\sqrt{2}-1) \cdot E_H}_{\approx c_H^{(\mathbf{H})}} \cdot (\sqrt{2}+1)^{\nu} \lesssim \nonumber \\
+&\lesssim (\sqrt{2}+1)^{2\nu-2} \cdot 2^{\varphi_0+\gamma_0-\chi+4}, \quad \text{and} \tag{44}
+\end{align}
+$$
+
+$$
+\begin{equation}
+\mathrm{Var}(\mathrm{Err}(\mathbf{h})) \lesssim 2^{-2\nu} \cdot \underbrace{1/6 \cdot 2^{2(\varphi_0+\gamma_0+2\nu)-2\chi}}_{= d_N^{(\mathbf{H})}} \cdot 4^\nu = 1/6 \cdot 2^{4\nu+2\varphi_0+2\gamma_0-2\chi},
+\tag{45}
+\end{equation}
+$$
+
+and the cyclic results follow.
+
+For the negacyclic convolution, we feed DFT with a folded and twisted input vector; cf. (31). It enters DFT with error bounded as
+
+$$
+\| \mathrm{Err}(\boldsymbol{f}'') \|_{\infty} \lesssim (1 \cdot 0 + 2^{\varphi_0+1/2} \cdot 2^{-\chi-1}) \cdot \sqrt{2} = 2^{\varphi_0-\chi}. \tag{46}
+$$
+
+Regarding the variance, it turns out that the term with $\mathrm{Var}(\mathrm{Err}(\boldsymbol{f}''))$ can be neglected.
+Next, we precompute
+
+$$
+\begin{align}
+c_H^{(\boldsymbol{f}'')} &= 2(\sqrt{2}-1) \| \mathrm{Err}(\boldsymbol{f}'') \|_\infty + (2-\sqrt{2}) \cdot 2^{\varphi_0+1/2-\chi+1} \lesssim \nonumber \\
+&\lesssim 6(\sqrt{2}-1) \cdot 2^{\varphi_0-\chi}, && \text{and} \tag{47}
+\end{align}
+$$
+
+$$
+d_N^{(\boldsymbol{f}'')} = 1/6 \cdot 2^{2(\varphi_0+1/2)-2\chi}, \tag{48}
+$$
\ No newline at end of file
diff --git a/samples/texts/3410193/page_12.md b/samples/texts/3410193/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..af8afdfc8d545aecc4c39d59af70b96fb26d331f
--- /dev/null
+++ b/samples/texts/3410193/page_12.md
@@ -0,0 +1,21 @@
+# Fast and Error-Free Negacyclic Integer Convolution using Extended Fourier Transform*
+
+Jakub Klemsa
+
+Czech Technical University in Prague, Czech Republic
+
+jakub.klemsa@fel.cvut.cz
+
+**Abstract.** With the rise of lattice cryptography, (negacyclic) convolution has received increased attention. E.g., the NTRU scheme internally employs cyclic polynomial multiplication, which is equivalent to the standard convolution; many Ring-LWE-based cryptosystems, on the other hand, perform negacyclic polynomial multiplication. A method by Crandall implements an efficient negacyclic convolution over a finite field of prime order using an extended Discrete Galois Transform (DGT), a finite-field analogue of the Discrete Fourier Transform (DFT). Compared to DGT, the classical DFT runs faster by an order of magnitude; however, it suffers from inevitable rounding errors due to the finite floating-point number representation. In a recent Fully Homomorphic Encryption (FHE) scheme by Chillotti et al. named TFHE, small errors are acceptable (although not welcome), therefore we decided to investigate the application of DFT to negacyclic convolution.
+
+The primary goal of this paper is to suggest a method for fast negacyclic convolution over integer coefficients using an extended DFT. The key contribution is a thorough analysis of error propagation, as a result of which we derive parameter bounds that can guarantee even error-free results. We also suggest a setup that admits rare errors, which makes it possible to increase the degree of the polynomials and/or their maximum norm at a fixed floating-point precision. Finally, we run benchmarks with parameters derived from a practical TFHE setup. We achieve around 24× better times than the generic NTL library (comparable to Crandall's method) and around 4× better times than a naïve approach with DFT, with no errors.
+
+**Keywords:** Negacyclic Convolution, Fast Fourier Transform, Fully Homomorphic Encryption
+
+## 1 Introduction
+
+In 1994, Peter Shor discovered efficient quantum algorithms for discrete logarithms and factoring [26], which started the quest to design novel quantum-proof algorithms, a.k.a. *Post-Quantum Cryptography*. Since then, there have emerged
+
+* This is the full and updated version of the paper.
\ No newline at end of file
diff --git a/samples/texts/3410193/page_13.md b/samples/texts/3410193/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..4363b7bfb7f57332d017f82083e0a557017fd1d6
--- /dev/null
+++ b/samples/texts/3410193/page_13.md
@@ -0,0 +1,63 @@
+and apply these into
+
+$$
+\begin{align}
+\|\mathrm{Err}(\bar{\mathbf{F}} \odot \bar{\mathbf{G}})\|_{\infty} &\lesssim \left( c_H^{(\mathbf{f}'')} \cdot (\sqrt{2}+1)^{\nu-1} \cdot \underbrace{2^{\gamma_0+1/2+\nu-1}}_{\gtrsim \|\mathrm{Err}(\bar{\mathbf{F}})\|_{\infty}} + \right. \nonumber \\
+&\qquad \left. + c_H^{(\mathbf{g}'')} \cdot (\sqrt{2}+1)^{\nu-1} \cdot 2^{\varphi_0+1/2+\nu-1} \right) \cdot \sqrt{2} = \nonumber \\
+&= 3(\sqrt{2}+1)^{\nu-2} \cdot 2^{\nu+\varphi_0+\gamma_0-\chi+2} =: E_{\bar{\mathbf{H}}}, \quad \text{and} \tag{49}
+\end{align}
+$$
+
+$$
+\begin{equation}
+\begin{split}
+\mathrm{Var}(\mathrm{Err}(\bar{\mathbf{F}} \odot \bar{\mathbf{G}})) &\lesssim \left( d_N^{(\mathbf{f}'')} \cdot 4^{\nu-1} \cdot \underbrace{2^{2\gamma_0+1+2\nu-2}}_{\gtrsim \mathrm{Var}(\mathrm{Err}(\bar{\mathbf{F}}))} + \right. \\
+&\qquad \left. + d_N^{(\mathbf{g}'')} \cdot 4^{\nu-1} \cdot 2^{2\varphi_0+1+2\nu-2} \right) \cdot 2 = \\
+&= 1/3 \cdot 2^{4\nu+2\varphi_0+2\gamma_0-2\chi-1} =: V_{\bar{\mathbf{H}}}.
+\end{split}
+\tag{50}
+\end{equation}
+$$
+
+Next, we apply these estimates as the initial error and variance bounds to (22) and (23), respectively, together with multiplication by $2/N = 2^{-\nu+1}$. We have
+
+$$
+\begin{align*}
+\|\operatorname{Err}(\mathbf{h}'')\|_{\infty} &\lesssim 2^{-\nu+1} \cdot \underbrace{2(\sqrt{2}-1) \cdot E_{\mathbf{H}}}_{\approx c_H^{(\mathbf{H})}} \cdot (\sqrt{2}+1)^{\nu-1} \\
+&\approx 3(\sqrt{2}+1)^{2\nu-4} \cdot 2^{\varphi_0+\gamma_0-\chi+4}, \quad \text{and} \tag{51}
+\end{align*}
+$$
+
+$$
+\begin{equation}
+\begin{aligned}
+\operatorname{Var}(\operatorname{Err}(\mathbf{h}'')) &\lesssim 2^{-2\nu+2} \cdot \underbrace{1/6 \cdot 2^{(2\varphi_0+2\gamma_0+2+4\nu-4)-2\chi}}_{= d_N^{(\bar{\mathbf{H}})}} \cdot 4^{\nu-1} = \\
+&= 1/3 \cdot 2^{4\nu+2\varphi_0+2\gamma_0-2\chi-3},
+\end{aligned}
+\tag{52}
+\end{equation}
+$$
+
+where in (52) it turned out that the term with $V_{\bar{\mathbf{H}}}$ was not the leading term, hence it was neglected. By (31), it remains to untwist and unfold; we have
+
+$$
+\begin{align}
+\|\mathrm{Err}(\mathbf{h}')\|_{\infty} &\lesssim \left( 1 \cdot 3(\sqrt{2}+1)^{2\nu-4} \cdot 2^{\varphi_0+\gamma_0-\chi+4} + \underbrace{2^{\nu+\varphi_0+\gamma_0-1}}_{\geq \|\mathbf{h}''\|_{\infty}} \cdot 2^{-\chi-1} \right) \cdot \sqrt{2} \approx \nonumber \\
+&\approx 3\sqrt{2} \cdot (\sqrt{2}+1)^{2\nu-4} \cdot 2^{\varphi_0+\gamma_0-\chi+4}, \quad \text{and} \tag{53}
+\end{align}
+$$
+
+$$
+\begin{equation}
+\begin{split}
+\mathrm{Var}(\mathrm{Err}(\mathbf{h}')) &\lesssim \left( 1^2 \cdot \underbrace{1/3 \cdot 2^{4\nu+2\varphi_0+2\gamma_0-2\chi-3}}_{\approx \mathrm{Var}(\mathrm{Err}(\mathbf{h}''))} + \underbrace{2^{4\nu+2\varphi_0+2\gamma_0-2}}_{\geq \|\mathbf{h}''\|_\infty^2} \cdot 1/12 \cdot 2^{-2\chi} \right) \cdot 2 = \\
+&= 2^{4\nu+2\varphi_0+2\gamma_0-2\chi-3}.
+\end{split}
+\tag{54}
+\end{equation}
+$$
+
+Since the unfolding operation does not change the error, the negacyclic results follow. □
\ No newline at end of file
diff --git a/samples/texts/3410193/page_14.md b/samples/texts/3410193/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..2faaacb57bbae3669e1e361fa1c28529f7089c38
--- /dev/null
+++ b/samples/texts/3410193/page_14.md
@@ -0,0 +1,40 @@
+many new schemes, which are based on various problems that are believed to be quantum hard, e.g., supersingular elliptic curve isogenies [18], multivariate cryptography [12], or lattice cryptography [2], in particular Learning With Errors (LWE) and its variants [24,21]. In addition, many *Fully Homomorphic Encryption* (FHE) schemes (e.g. [6,8]) belong to the lattice-based ones, including Gentry's first-ever FHE scheme [14]. Most notably, NIST's Post-Quantum Cryptography Standardization Program entered the third "Selection Round" in July 2020 [23], and lattice-based cryptosystems occur among the selected algorithms.
+
+With the popularity of lattice-based cryptography, the need for its fast implementation has risen. Besides linear algebra, many schemes require a fast algorithm for cyclic (i.e., mod $X^N - 1$) or negacyclic (i.e., mod $X^N + 1$) polynomial multiplication. Some schemes work with polynomial coefficients modulo an integer (e.g., NTRU [17]); however, our main interest is in the TFHE scheme [8], where negacyclic multiplication of integer-torus polynomials is performed. Here the *torus* refers to the reals modulo 1, i.e., the fractional part of a real number. In practice, torus elements are represented as unsigned integers, which cover the interval $[0, 1)$ uniformly. It follows that integer-torus polynomial multiplication can be performed with their integer representation. Also note that TFHE accepts small errors: we prefer to avoid them, but their impact is not fatal for decryption.
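To make the representation concrete, here is a minimal sketch of torus arithmetic on unsigned representatives. It is not taken from TFHE; the 64-bit width and the helper names are our own assumptions for illustration.

```python
# A torus element t in [0, 1) is stored as round(t * 2^64) mod 2^64, so
# reduction mod 1 becomes plain wraparound of the unsigned representative.
MASK = (1 << 64) - 1

def torus_encode(t: float) -> int:
    """Map a real t (taken mod 1) to its 64-bit unsigned representative."""
    return round((t % 1.0) * 2**64) & MASK

def torus_decode(x: int) -> float:
    return x / 2**64

def torus_add(x: int, y: int) -> int:
    return (x + y) & MASK          # wraparound == reduction mod 1

def torus_scale(k: int, x: int) -> int:
    return (k * x) & MASK          # integer-torus product k * t mod 1

a, b = torus_encode(0.75), torus_encode(0.5)
assert abs(torus_decode(torus_add(a, b)) - 0.25) < 1e-9   # 0.75 + 0.5 = 1.25 = 0.25 mod 1
assert abs(torus_decode(torus_scale(3, a)) - 0.25) < 1e-9 # 3 * 0.75 = 2.25 = 0.25 mod 1
```

The point of the representation is that integer overflow implements the mod-1 reduction for free, which is why integer-torus products reduce to ordinary integer multiplication.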
+
+Recently, there have emerged efforts to make TFHE work with multivalued plaintexts [7], and applications of TFHE to the homomorphic evaluation of neural networks show promising results [5]. In particular, neural networks are quite error-tolerant (also verified in [5]), which supports the acceptability of errors.
+
+**Problem Statement.** Our goal is to develop a method for fast negacyclic multiplication of univariate integer polynomials. For this method, we aim to estimate and tune its parameters in order to provide certain guarantees of its correctness. As outlined above, we will not focus solely on the error-free case; we will also accept a scenario where errors may rarely occur. Last but not least, as we intend our method also for an FPGA implementation, we derive all results in a generic manner, i.e., without sticking to a concrete platform, although we run our tests on an ordinary 64-bit machine.
+
+**Related Work.** There is a long and rich history of methods for fast multiplication over various rings, ranging from Karatsuba's algorithm [19], through the Fast Fourier Transform (FFT; [9]), to the Schönhage-Strassen algorithm [25]. Most of these methods are based on a similar principle, as Bernstein pointed out in his survey [4].
+
+It was the classical cyclic convolution that was accelerated by FFT and the Convolution Theorem, and it can be employed for polynomial multiplication modulo $X^N - 1$, too. On the contrary, polynomial multiplication modulo
\ No newline at end of file
diff --git a/samples/texts/3410193/page_15.md b/samples/texts/3410193/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..f2acb508677e3a2ae300b20382506cff7f2519e6
--- /dev/null
+++ b/samples/texts/3410193/page_15.md
@@ -0,0 +1,13 @@
+$X^N+1$ (negacyclic convolution) cannot be directly calculated via FFT. One possible approach was implemented as a part of the TFHE Library [28], although not discussed in the paper [8]. However, this method suffers from a fourfold redundancy in its intermediate results. An effective (non-redundant) method for negacyclic convolution was proposed by Crandall [11] and recently improved by Al Badawi et al. [3]. In these methods, polynomials are considered over a finite ring, and both employ a number-theoretic variant of FFT, named DGT, which operates over the field GF($p^2$). On the one hand, DGT calculates exact results (as opposed to FFT, where rounding errors occur and propagate); on the other hand, it runs significantly slower, as it uses modular arithmetic.
+
+**Our Contributions.** We propose an efficient algorithm for negacyclic convolution over the reals, for which we derive estimates of bounds on the maximum error and its variance. Based on our estimates, we show that our method can be used for an error-free negacyclic convolution over the integers. Or, in case we admit errors, we suggest relaxing the estimates in order to achieve higher performance: either in terms of a shorter number representation (useful in particular for FPGA), longer polynomials, or larger polynomial coefficients that can be processed. Finally, we provide experimental benchmarking results of our implementation, and we evaluate its rounding error magnitudes and result correctness, even with remarkably underestimated parameters.
+
+**Paper Outline.** In Section 2, we provide a brief overview of the required mathematical background, i.e., cyclic and negacyclic convolutions, their relation to modular polynomial multiplication, as well as the Discrete Fourier Transform and Convolution Theorem. Next, in Section 3, we revisit a straightforward FFT-based approach for negacyclic polynomial multiplication, and we propose a method that avoids the calculation of redundant intermediates. We analyze error propagation thoroughly in Section 4, where we suggest lower bounds on floating point type bit-precision in order to guarantee certain levels of correctness. In Section 5, we discuss the implementation details and we propose a set of testing parameters with respect to TFHE. Using these parameters, we benchmark our implementation and we also examine the error magnitude and result correctness. Finally, we conclude our paper in Section 6.
+
+## 2 Preliminaries
+
+In this section, we briefly recall some basic mathematical concepts related to convolution and Discrete Fourier Transform.
+
+**Cyclic & Negacyclic Convolution.** Let $\mathbf{f}, \mathbf{g} \in \mathbb{C}^N$ for some $N \in \mathbb{N}$. As opposed to the classical cyclic convolution defined as
+
+$$ (\mathbf{f} * \mathbf{g})_k := \sum_{j=0}^{N-1} f_j g_{(k-j) \bmod N}, \quad (1) $$
\ No newline at end of file
diff --git a/samples/texts/3410193/page_16.md b/samples/texts/3410193/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..398d13619d9675b76f00d4f9ad1f4beca493691c
--- /dev/null
+++ b/samples/texts/3410193/page_16.md
@@ -0,0 +1,31 @@
+*negacyclic convolution* adds a factor of −1 with each wrap of the cyclic index at **g**, i.e.,
+
+$$
+(\mathbf{f} * \mathbf{g})_k := \sum_{j=0}^{N-1} (-1)^{\lfloor \frac{k-j}{N} \rfloor} f_j g_{(k-j) \bmod N}. \quad (2)
+$$
+
+With respect to polynomials, it is easy to verify that the cyclic convolution calculates the coefficients of a product of two polynomials modulo $X^N - 1$. Indeed, their coefficients can be considered cyclic since $X^N = 1$. On the other hand, the *negacyclic* convolution calculates the coefficients of a product of two polynomials modulo $X^N + 1$, since $X^N = -1$ adds a factor of $-1$ with each wrap.
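The correspondence can be checked directly: the sketch below (the helper names are ours, not the paper's) implements (1) and (2) naively and verifies on a small example that the cyclic and negacyclic convolutions fold the full polynomial product back onto the first $N$ coefficients with signs $+1$ and $-1$, respectively.

```python
# Direct O(N^2) implementations of the cyclic convolution (1) and the
# negacyclic convolution (2).
def cyclic_conv(f, g):
    N = len(f)
    return [sum(f[j] * g[(k - j) % N] for j in range(N)) for k in range(N)]

def negacyclic_conv(f, g):
    # The index k - j wraps at most once; each wrap contributes a factor
    # of -1, since X^N = -1 modulo X^N + 1.
    N = len(f)
    return [sum((-1 if k < j else 1) * f[j] * g[(k - j) % N]
                for j in range(N)) for k in range(N)]

f, g = [1, 2, 3, 4], [5, 6, 7, 8]
N = len(f)
full = [0] * (2 * N - 1)              # coefficients of the full product f(X)g(X)
for i in range(N):
    for j in range(N):
        full[i + j] += f[i] * g[j]
full.append(0)                        # pad c_{2N-1} = 0 for uniform folding
assert cyclic_conv(f, g) == [full[k] + full[k + N] for k in range(N)]
assert negacyclic_conv(f, g) == [full[k] - full[k + N] for k in range(N)]
```

The folding in the last two lines is exactly the reduction modulo $X^N - 1$ (coefficient $N+k$ added to $k$) and modulo $X^N + 1$ (subtracted), respectively.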
+
+**Convolution Theorem.** A relation known as the *Convolution Theorem* (CT) states an equality between the Fourier image of convolved vectors and the element-wise (dyadic) product of their respective Fourier images (in the discrete variant). CT reads as follows:
+
+$$
+\mathcal{F}(\mathbf{f} * \mathbf{g}) = \mathcal{F}(\mathbf{f}) \odot \mathcal{F}(\mathbf{g}), \tag{3}
+$$
+
+where $\mathcal{F}(\cdot)$ stands for the *Discrete Fourier Transform* (DFT) and $\odot$ denotes the dyadic multiplication of two vectors. In fact, DFT is a change of basis, defined as
+
+$$
+\mathcal{F}(\mathbf{f})_k := \sum_{j=0}^{N-1} f_j \exp\left(-\frac{2\pi i j k}{N}\right) = F_k, \quad (4)
+$$
+
+$$
+\mathcal{F}^{-1}(\mathbf{F})_j = \frac{1}{N} \sum_{k=0}^{N-1} F_k \exp\left(\frac{2\pi i j k}{N}\right) = f_j. \quad (5)
+$$
+
+The convolution theorem gained its practical significance after the *Fast Fourier Transform* (FFT) was (re)invented¹ in 1965 by Cooley & Tukey [9]. As opposed to a direct calculation of the DFT coefficients, which requires $O(N^2)$ time, FFT runs in $O(N \log N)$. By the convolution theorem, one can then calculate the convolution of two vectors as $\mathbf{f} * \mathbf{g} = \mathcal{F}^{-1}(\mathcal{F}(\mathbf{f}) \odot \mathcal{F}(\mathbf{g}))$, which takes $O(N \log N)$ time, compared to the $O(N^2)$ needed for a direct calculation.
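As a sanity check of (3)–(5), one can compute a cyclic convolution entirely in the Fourier domain. The sketch below uses a naive $O(N^2)$ DFT for clarity (an FFT would be a drop-in replacement); rounding back to integers anticipates the error discussion in Section 4:

```python
import cmath

def dft(f):
    # Forward DFT per (4): F_k = sum_j f_j * exp(-2*pi*i*j*k/N)
    N = len(f)
    return [sum(f[j] * cmath.exp(-2j * cmath.pi * j * k / N) for j in range(N))
            for k in range(N)]

def idft(F):
    # Inverse DFT per (5), including the 1/N scaling
    N = len(F)
    return [sum(F[k] * cmath.exp(2j * cmath.pi * j * k / N) for k in range(N)) / N
            for j in range(N)]

def cyclic_conv_via_dft(f, g):
    # Convolution theorem (3): dyadic product of the Fourier images
    H = [a * b for a, b in zip(dft(f), dft(g))]
    # integer inputs: round away the (small) floating point error
    return [round(x.real) for x in idft(H)]
```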
+
+## 3 Efficient Negacyclic Convolution
+
+First, we describe a method for negacyclic convolution that uses the standard cyclic convolution and FFT. We identify its redundancy and briefly comment on possible workarounds. Next, we outline an approach that involves no redundancy and achieves about a 4× better performance than the previous method.
+
+¹ Goldstine [15] attributes an FFT-like algorithm to C. F. Gauss dating to around 1805.
\ No newline at end of file
diff --git a/samples/texts/3410193/page_17.md b/samples/texts/3410193/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..f1ceb5ec8e89dcc76dd73f91a2c0ffc4080eb03f
--- /dev/null
+++ b/samples/texts/3410193/page_17.md
@@ -0,0 +1,21 @@
+## 3.1 Redundant Approach
+
+Since (negacyclic) convolution is equivalent to (negacyclic) polynomial modular multiplication, we switch to the polynomial point of view for now. Interested in polynomial multiplication modulo $X^N + 1$, we note that $X^{2N} - 1 = (X^N - 1) \cdot (X^N + 1)$. Hence, we can calculate the product first modulo $X^{2N} - 1$ (via cyclic convolution of $2N$ elements) and then only reduce the result modulo $X^N + 1$. This method can be optimized based on the following observations.
+
+**Observation 1 (Redundancy of negacyclic extension).** Let $p \in \mathbb{R}[X]$ be a real-valued polynomial of degree $N-1$, $N \in \mathbb{N}$, and let $\bar{p}(X) := p(X) - X^N \cdot p(X)$ be the negacyclic extension of $p(X)$. Then the Fourier image of $\text{coeffs}(\bar{p})$ contains zeros at even positions (indexed from 0). In addition, the remaining coefficients (at odd positions) are mirrored and conjugated, i.e.,
+
+$$ \mathcal{F}(\text{coeffs}(\bar{p})) = (0, P_1, 0, P_3, \dots, 0, P_{N-1}, 0, \overline{P_{N-1}}, \dots, 0, \overline{P_3}, 0, \overline{P_1}). \quad (6) $$
+
+*Note 1.* Given $N$ input (real-valued) polynomial coefficients, $\mathcal{F}(\text{coeffs}(\bar{p}))$ needs to calculate $2N$ complex values, i.e., $4N$ real values. The redundancy is clearly in the $N$ complex zeros and in the $N/2$ complex conjugates.
+
+**Observation 2 (Convolution of negacyclic extensions).** Let $p, q \in \mathbb{R}[X]$ be real-valued polynomials of degree $N-1$ for some $N \in \mathbb{N}$ and let $\bar{p}, \bar{q}$ be their respective negacyclic extensions. Then it holds
+
+$$ \text{coeffs}(p \cdot q \bmod (X^N+1)) = \frac{1}{2} \mathcal{F}^{-1}\left(\mathcal{F}(\text{coeffs}(\bar{p})) \odot \mathcal{F}(\text{coeffs}(\bar{q}))\right)[0 \dots N-1]. \quad (7) $$
+
+By Observation 1, it follows that the dyadic multiplication in (7) needs to be performed only at the odd positions of the first half; the rest can be copied (with the appropriate sign). Also note that after $\mathcal{F}^{-1}$, the coefficients are negacyclic, hence it suffices to take the first half of the vector. This method is implemented in the original TFHE Library [28].
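The redundant approach can be sketched as follows (for brevity, the $2N$-point cyclic convolution is written as a direct sum instead of FFT; the factor $1/2$ in (7) comes from $(1 - X^N)^2 \equiv 2(1 - X^N)$ modulo $X^{2N} - 1$):

```python
def negacyclic_via_extension(f, g):
    """Section 3.1 sketch: negacyclic product via the 2N-point cyclic
    convolution of the negacyclic extensions [f, -f] and [g, -g],
    then halving; cf. (7)."""
    N, M = len(f), 2 * len(f)
    fbar = list(f) + [-x for x in f]   # coeffs of f(X) - X^N * f(X)
    gbar = list(g) + [-x for x in g]
    c = [sum(fbar[j] * gbar[(k - j) % M] for j in range(M)) for k in range(M)]
    return [x // 2 for x in c[:N]]     # first half of the vector, halved
```

The result agrees with the direct negacyclic convolution, while internally twice as many (and, in the Fourier domain, partly redundant) values are processed.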
+
+**Possible Improvements.** The clear goal is to omit all calculations leading to the redundant values outlined in Note 1. Digging deeper into FFT, we deduced the same initial step as proposed by Crandall [11] in his method for negacyclic convolution (namely, the folding step). However, without the additional twisting step, we ended up with intermediate values from which we were not able to recover the original values efficiently. Therefore, we decided to adapt Crandall's method.
+
+## 3.2 Non-Redundant Approach
+
+The method for negacyclic polynomial multiplication by Crandall [11] is intended for polynomials over $\mathbb{Z}_p$ and it internally employs the Discrete Galois Transform (DGT). DGT is an analogue of DFT that operates over the field
\ No newline at end of file
diff --git a/samples/texts/3410193/page_18.md b/samples/texts/3410193/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..c3b199986ae90b1fd922a860a898a6e773e95891
--- /dev/null
+++ b/samples/texts/3410193/page_18.md
@@ -0,0 +1,29 @@
+GF($p^2$) for a Gaussian prime number $p$, whereas DFT operates over $\mathbb{C}$. Note that recently Al Badawi et al. [3] extended Crandall's method to non-Gaussian primes, too. Crandall's method prepends DGT with two steps: folding and twisting. In the following definition, we propose an analogous transformation using DFT.
+
+**Definition 1.** Let $\mathbf{f} \in \mathbb{R}^N$ for some $N \in \mathbb{N}$, $N$ even. We define the Discrete Fourier Negacyclic Transform (DFNT, denoted $\bar{\mathcal{F}}$) as follows:
+
+$$ \bar{\mathcal{F}}(\mathbf{f}) := \mathcal{F}\left(\underbrace{\left(\mathbf{f}[0 \dots N/2-1] + i \cdot \mathbf{f}[N/2 \dots N-1]\right)}_{\text{folding}} \odot \underbrace{\left(\omega_{2N}^j\right)_{j=0}^{N/2-1}}_{\text{twisting}}\right), \quad (8) $$
+
+where $\omega_{2N}^j = \exp(\frac{2\pi ij}{2N})$ and $\mathcal{F}$ stands for the ordinary DFT. For the inverse DFNT, we have
+
+$$ t := \mathcal{F}^{-1}(\mathbf{F}) \odot (\omega_{2N}^{-j})_{j=0}^{N/2-1}, \quad (9) $$
+
+$$ \bar{\mathcal{F}}^{-1}(\mathbf{F}) = [\Re(t), \Im(t)]. \quad (10) $$
+
+*Note 2.* We will refer to DFNT, where DFT is internally calculated via FFT, as the *Fast Fourier Negacyclic Transform* (FFNT).
+
+With respect to negacyclic convolution, DFNT has two important properties:
+
+1. given $N$ reals at input, it outputs $N/2$ complex numbers, i.e., there is no redundancy, unlike in the previous approach, and
+
+2. it can be used for negacyclic convolution in the same manner as DFT for cyclic convolution, a theorem follows.
+
+**Theorem 1 (Negacyclic Convolution Theorem; NCT).** Let $\mathbf{f}, \mathbf{g} \in \mathbb{R}^N$ for some $N \in \mathbb{N}$, $N$ even. It holds
+
+$$ \bar{\mathcal{F}}(\mathbf{f} * \mathbf{g}) = \bar{\mathcal{F}}(\mathbf{f}) \odot \bar{\mathcal{F}}(\mathbf{g}). \quad (11) $$
+
+For a full description of negacyclic convolution over the reals via NCT see Algorithm 1. Next, we analyze this algorithm from the error propagation point of view, which allows us to apply this method for negacyclic convolution over integers, too.
+
+## 4 Analysis of Error Propagation
+
+Since Algorithm 1 implicitly operates with real numbers (starting from $N = 4$, the $\omega_{2N}$'s are irrational), rounding errors emerge when a standard finite floating-point representation is used. In this section, we analyze Algorithm 1 from the error propagation point of view and we derive estimates of the error bounds as well as of their variance. Based on our estimates, we derive a bound on the sufficient bit-precision of the employed floating point representation, which guarantees error-free convolution over the ring of integers. We also provide an estimate of the bit-precision based on the error variance and the $3\sigma$-rule. In addition, as a byproduct, we derive all bounds for cyclic convolution, too. First of all, we revisit the FFT algorithm, as we will refer to it later.
\ No newline at end of file
diff --git a/samples/texts/3410193/page_19.md b/samples/texts/3410193/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..a3b93d47d282366e822b4dd69f6c6f3e6025373e
--- /dev/null
+++ b/samples/texts/3410193/page_19.md
@@ -0,0 +1,40 @@
+**Algorithm 1 Efficient Negacyclic Convolution over $\mathbb{R}$**
+
+**Input:** $\mathbf{f}, \mathbf{g} \in \mathbb{R}^N$ for some $N \in \mathbb{N}$, $N$ even.
+**Precompute:** $\omega_{2N}^j := \exp(\frac{2\pi i j}{2N})$ for $j = -N/2+1, \dots, N/2-1$.
+**Output:** $\mathbf{h} \in \mathbb{R}^N$, $\mathbf{h} = \mathbf{f} * \mathbf{g}$.
+
+1: **for** $j = 0... N/2 - 1$ **do**
+2: $f_j' = f_j + if_{j+N/2}$ // fold
+3: $g_j' = g_j + ig_{j+N/2}$
+4: **for** $j = 0... N/2 - 1$ **do**
+5: $f_j'' = f_j' \cdot \omega_{2N}^j$ // twist
+6: $g_j'' = g_j' \cdot \omega_{2N}^j$
+7: **F** = $\mathcal{F}_{N/2}(\mathbf{f}'')$, **G** = $\mathcal{F}_{N/2}(\mathbf{g}'')$
+8: **for** $j = 0... N/2 - 1$ **do**
+9: $H_j = F_j \cdot G_j$
+10: **h''** = $\mathcal{F}^{-1}_{N/2}(\mathbf{H})$
+11: **for** $j = 0... N/2 - 1$ **do**
+12: $h'_j = h''_j \cdot \omega_{2N}^{-j}$ // untwist
+13: **for** $j = 0... N/2 - 1$ **do**
+14: $h_j = \Re(h'_j)$ // unfold
+15: $h_{j+N/2} = \Im(h'_j)$
+16: **return** $\mathbf{h}$
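Assuming the transform step may be any (fast) Fourier routine, Algorithm 1 condenses into a few lines. The sketch below substitutes a naive DFT for $\mathcal{F}_{N/2}$ so that the fold/twist/untwist/unfold structure stays visible, and rounds the result back to integers for integer inputs:

```python
import cmath

def negacyclic_conv_alg1(f, g):
    N = len(f)          # N must be even
    M = N // 2
    w = [cmath.exp(2j * cmath.pi * j / (2 * N)) for j in range(M)]  # omega_{2N}^j
    # lines 1-6: fold and twist
    ff = [(f[j] + 1j * f[j + M]) * w[j] for j in range(M)]
    gg = [(g[j] + 1j * g[j + M]) * w[j] for j in range(M)]
    # line 7: forward transforms (naive DFT stands in for F_{N/2})
    F = [sum(ff[j] * cmath.exp(-2j * cmath.pi * j * k / M) for j in range(M))
         for k in range(M)]
    G = [sum(gg[j] * cmath.exp(-2j * cmath.pi * j * k / M) for j in range(M))
         for k in range(M)]
    # lines 8-9: dyadic product
    H = [a * b for a, b in zip(F, G)]
    # line 10: inverse transform (with 1/M scaling)
    hh = [sum(H[k] * cmath.exp(2j * cmath.pi * j * k / M) for k in range(M)) / M
          for j in range(M)]
    # lines 11-15: untwist (multiply by omega_{2N}^{-j}) and unfold
    hp = [hh[j] * w[j].conjugate() for j in range(M)]
    return [round(x.real) for x in hp] + [round(x.imag) for x in hp]
```

The correctness can be seen by working modulo $X^{N/2} - i$: folding identifies $X^{N/2}$ with $i$, and the twist substitution $X = \omega_{2N} Y$ turns multiplication modulo $X^{N/2} - i$ into a plain cyclic convolution in $Y$.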
+
+**FFT in Brief.** FFT [9] is a recursive algorithm, which builds upon the following observation: for $N = n_1 \cdot n_2$ and $k = k_1 + k_2n_1$, we can write the $k$-th Fourier coefficient of an $\mathbf{f} \in \mathbb{C}^N$ as
+
+$$ \mathcal{F}(\mathbf{f})_{k_1+k_2n_1} = \sum_{j_2=0}^{n_2-1} \Bigg( \underbrace{\left( \sum_{j_1=0}^{n_1-1} f_{j_2+j_1n_2}\, \omega_{n_1}^{-j_1k_1} \right)}_{\mathcal{F}\left( (f_{j_2+j_1n_2})_{j_1=0}^{n_1-1} \right)_{k_1}} \omega_N^{-j_2k_1} \Bigg)\, \omega_{n_2}^{-j_2k_2}, \quad (12) $$
+
+where
+
+$$ \omega_N^j = \exp\left(\frac{2\pi i j}{N}\right), \qquad (13) $$
+
+where the $\omega$'s can be precomputed.
+
+*Note 3.* There exist two major FFT data paths for $N$ a power of two: the Cooley-Tukey data path [9] (aka. decimation-in-time) and the Gentleman-Sande data path [13] (aka. decimation-in-frequency). At this point, we describe the decimation-in-time data path; we discuss the implementation consequences of both in Section 5.
+
+For $N$ a power of two, FFT splits its input into two halves (the columns of the matrix in (15)) and proceeds recursively. Next, it multiplies the results by $\omega$'s, and finally it combines the appropriate pairs via $\text{FFT}_2$; see (14) and (15).
+
+At the end of the recursion we have for $N = 2$:
+
+$$ \text{FFT}_2 |f_0 \ f_1| = |f_0 + f_1 \ f_0 - f_1|. \qquad (14) $$
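For $N$ a power of two, the recursion amounts to the classical radix-2 decimation-in-time FFT. A compact reference sketch (naive recursion with fresh allocations per level; the in-place, RAM-friendly variants are discussed in Section 5):

```python
import cmath

def fft(f):
    # Radix-2 decimation-in-time FFT per (15): the two columns of the
    # N/2 x 2 layout are the even- and odd-indexed elements.
    N = len(f)
    if N == 1:
        return list(f)
    even = fft(f[0::2])             # column j2 = 0
    odd = fft(f[1::2])              # column j2 = 1
    out = [0j] * N
    for k1 in range(N // 2):
        t = odd[k1] * cmath.exp(-2j * cmath.pi * k1 / N)  # twiddle omega_N^{-j2*k1}
        out[k1] = even[k1] + t            # FFT_2 on the rows, cf. (14)
        out[k1 + N // 2] = even[k1] - t
    return out
```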
\ No newline at end of file
diff --git a/samples/texts/3410193/page_2.md b/samples/texts/3410193/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..198fefdc526448e87cdf6d9da6c6dc4a3e76ca1b
--- /dev/null
+++ b/samples/texts/3410193/page_2.md
@@ -0,0 +1,80 @@
+which can be bounded as
+
+$$
+\begin{align}
+|\mathrm{Err}(\Re(a \cdot b))| &\lesssim P_0\|\mathrm{Err}(b)\|_{\infty} + R_0\|\mathrm{Err}(a)\|_{\infty} + Q_0\|\mathrm{Err}(b)\|_{\infty} + S_0\|\mathrm{Err}(a)\|_{\infty} \lesssim \nonumber \\
+&\lesssim (P_0+Q_0)\|\mathrm{Err}(b)\|_{\infty} + (S_0+R_0)\|\mathrm{Err}(a)\|_{\infty}. \tag{20}
+\end{align}
+$$
+
+Since $|p+iq| \lesssim A_0$, we can bound $P_0 + Q_0 \lesssim \sqrt{2}A_0$ and the result (17) follows, similarly for (18). $\square$
+
+**Lemma 2.** Let $\mathbf{f} \in \mathbb{C}^N$, where $N = 2^\nu$ for some $\nu \in \mathbb{N}$, $\|\mathbf{f}\|_\infty \le 2^{\varphi_0}$ for some $\varphi_0 \in \mathbb{N}$, and let $\chi$ denote the bit-precision of the $\omega$'s as well as of all intermediate values during the calculation of $\mathrm{FFT}_N(\mathbf{f}) := \mathbf{F}$, represented as a floating point type. Then
+
+$$
+\Vert \mathbf{F} \Vert_{\infty} \leq 2^{\varphi_0 + \nu}, \tag{21}
+$$
+
+$$
+\Vert \operatorname{Err}(\mathbf{F}) \Vert_{\infty} \lesssim c_H \cdot (\sqrt{2} + 1)^{\nu} + c_N \cdot 2^{\nu} \quad (\text{for } \nu \ge 2), \quad \text{and} \quad (22)
+$$
+
+$$
+\mathrm{Var}(\mathrm{Err}(\mathbf{F})) \lesssim d_H \cdot 3^\nu + d_N \cdot 4^\nu \quad (\text{for } \nu \ge 2), \quad (23)
+$$
+
+where
+
+$$
+c_H = 2(\sqrt{2}-1) \cdot \| \mathrm{Err}(\mathbf{f}) \|_{\infty} + (2-\sqrt{2}) \cdot 2^{\varphi_0 - \chi + 1},
+$$
+
+$$
+d_H = \frac{2}{3} \operatorname{Var}(\operatorname{Err}(\mathbf{f})) - \frac{8}{27} \cdot 2^{2\varphi_0 - 2\chi},
+$$
+
+$$
+d_N = \frac{1}{6} \cdot 2^{2\varphi_0 - 2\chi}. \tag{24}
+$$
+
+*Proof.* We write
+
+$$
+\mathrm{FFT}_N : \mathrm{FFT}_2 \circ (\odot \omega_N) \circ \mathrm{FFT}_{N/2}, \qquad (25)
+$$
+
+from where we derive recurrence relations for the bounds on absolute value, error
+and variance.
+
+In each recursion level, the values propagate to a lower level, then they are
+multiplied by a complex unit and two such values are added, or subtracted.
+Firstly, note that in every level the initial bound on the absolute value is doubled,
+hence (21) follows.
+
+Regarding the errors, it is important to note that the final $FFT_2$ acts on two
+values, each of which has been previously multiplied by $\omega_N^{j_2 k_1}$, where $j_2$ ranges
+in $\{0, 1\}$. I.e., one value is multiplied by 1 and only the other is multiplied by
+a (mostly) non-trivial complex unit, which is rounded to $\chi$ bits of precision, i.e.,
+$\|\mathrm{Err}(\omega)\|_\infty \le 2^{-\chi-1}$. Putting things together, we get the following recurrence
+relations for the bounds on the error and its variance after $\nu$ levels, respectively:
+
+$$
+E_{\nu} = \sqrt{2} \cdot (1 \cdot E_{\nu-1} + 2^{\varphi_{0}+\nu-1} \cdot 2^{-\chi-1}) + E_{\nu-1} = (\sqrt{2} + 1) \cdot E_{\nu-1} + \sqrt{2} \cdot 2^{\varphi_{0}+\nu-\chi-2}, \tag{26}
+$$
+
+$$
+E_2 = (E_1 + 2^{\varphi_0+1} \cdot E_{\omega_4}) \cdot \sqrt{2} + E_1 = (\sqrt{2}+1)E_1 = 2(\sqrt{2}+1)E_0, \quad \text{and} \quad (27)
+$$
+
+$$
+V_{\nu} = 2 \cdot (1^2 \cdot V_{\nu-1} + (2^{\varphi_0+\nu-1})^2 \cdot \tfrac{1}{12} (2^{-\chi})^2) + V_{\nu-1} = 3V_{\nu-1} + \tfrac{1}{3} \cdot 2^{2\varphi_0+2\nu-2\chi-3}, \tag{28}
+$$
+
+$$
+V_2 = 3V_1 = 6V_0, \tag{29}
+$$
\ No newline at end of file
diff --git a/samples/texts/3410193/page_20.md b/samples/texts/3410193/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..07ee032f52a4bdbd343076ae30bc6008cf9128b2
--- /dev/null
+++ b/samples/texts/3410193/page_20.md
@@ -0,0 +1,90 @@
+Next, for $N \ge 4$ we have
+
+$$
+\text{FFT}_N(\mathbf{f}) : \left|
+\begin{array}{ccc}
+f_0 & f_1 \\
+f_2 & f_3 \\
+\vdots & \vdots \\
+f_{N-2} & f_{N-1}
+\end{array}
+\right|_{n_1 \times n_2 = N/2 \times 2} \xrightarrow[\text{(recursively)}]{\text{FFT}_{N/2} \text{ columns}} \left|
+\begin{array}{ccc}
+f'_0 & f'_1 \\
+f'_2 & f'_3 \\
+\vdots & \vdots \\
+f'_{N-2} & f'_{N-1}
+\end{array}
+\right| \odot \left|
+\begin{array}{ccc}
+1 & 1 \\
+1 & \omega_N^{-1 \cdot 1} \\
+\vdots & \vdots \\
+1 & \omega_N^{-1 \cdot (N/2-1)}
+\end{array}
+\right|_{\omega_N^{-j_2 k_1}} \longrightarrow
+$$
+
+$$
+\rightarrow \left|
+\begin{array}{cc}
+f_0'' & f_1'' \\
+f_2'' & f_3'' \\
+\vdots & \vdots \\
+f_{N-2}'' & f_{N-1}''
+\end{array}
+\right| \xrightarrow{\text{FFT}_2 \text{ rows}} \left|
+\begin{array}{cc}
+f_0'' + f_1'' & f_0'' - f_1'' \\
+f_2'' + f_3'' & f_2'' - f_3''
+\\
+\vdots & \vdots \\
+f_{N-2}'' + f_{N-1}'' & f_{N-2}'' - f_{N-1}''
+\end{array}
+\right| = \left|
+\begin{array}{cc}
+F_0 & F_{N/2} \\
+F_1 & F_{N/2+1} \\
+\vdots & \vdots \\
+F_{N/2-1} & F_{N-1}
+\end{array}
+\right|. \tag{15}
+$$
+
+$\mathrm{FFT}^{-1}$ proceeds similarly to the direct transformation, with the following exceptions:
+
+1. in the second step, it multiplies by $\omega_N^{j_2 k_1}$ (i.e., with a positive exponent), and
+
+2. the final result is multiplied by $1/N$ (only once at the top level).
+
+## 4.1 Error Propagation through FFT and FFNT
+
+Let us begin with two lemmas, which provide bounds on the error and variance of complex multiplication and FFT, respectively. Note that we will assume for our estimates of variance bounds that the rounding errors are uniformly random and independent.
+
+*Note 4.* We distinguish two types of the maximum norm $\|\cdot\|_\infty$ over $\mathbb{C}^N$: for 1. error vectors and for 2. other complex vectors, we consider:
+
+1. the maximum of real and imaginary parts (i.e., rectangular), and
+
+2. the maximum of absolute values (i.e., circular), respectively.
+
+**Lemma 1.** Let $a, b \in \mathbb{C}$, $|a| \le A_0$ and $|b| \le B_0$ for some $A_0, B_0 \in \mathbb{R}^+$. Then
+
+$$
+|a \cdot b| \le A_0 \cdot B_0, \tag{16}
+$$
+
+$$
+\|\operatorname{Err}(a \cdot b)\|_{\infty} \lesssim \sqrt{2} \cdot (A_0 \cdot \|\operatorname{Err}(b)\|_{\infty} + B_0 \cdot \|\operatorname{Err}(a)\|_{\infty}), \quad \text{and} \tag{17}
+$$
+
+$$
+\mathrm{Var}(\mathrm{Err}(a \cdot b)) \lesssim 2 \cdot (A_0^2 \cdot \mathrm{Var}(\mathrm{Err}(b)) + B_0^2 \cdot \mathrm{Var}(\mathrm{Err}(a))), \quad (18)
+$$
+
+where we neglected second-order error terms and for (18), we further assumed that the errors of *a* and *b* are independent.
+
+*Proof.* Let $a = (p+E_p)+i(q+E_q)$ and $b = (r+E_r)+i(s+E_s)$, where we denote the bounds of the parts as $|p| \le P_0$ etc. According to Note 4, we split the complex error into parts: we write for the real part (similarly for the imaginary part)
+
+$$
+\mathrm{Err}(\Re(a \cdot b)) = pE_r + rE_p - (qE_s + sE_q) + \mathrm{negl}, \quad (19)
+$$
\ No newline at end of file
diff --git a/samples/texts/3410193/page_3.md b/samples/texts/3410193/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..5b7cac806bcd4cb66468d3066c43fac502dbf1d6
--- /dev/null
+++ b/samples/texts/3410193/page_3.md
@@ -0,0 +1,52 @@
+where in (27), we applied the fact that $\omega_4$ is error-free; cf. (13). Also note that the
+error more than doubles in each step (while the bound only doubles), therefore
+the $\chi$ bits of precision are sufficient and rounding errors can be neglected. The
+results follow by solving (26) and (27), and (28) and (29), respectively. $\square$
+
+In the following proposition, we bound the error and variance of the result
+of cyclic and negacyclic convolution via FFT/FFNT, respectively. For a quick
+reference, we provide an overview of these methods in (30) and (31), respectively:
+
+$$
+\begin{align}
+\mathbf{f}, \mathbf{g} &\xrightarrow{\text{FFT}_N} \mathbf{F}, \mathbf{G} \xrightarrow{\odot} \mathbf{H} \xrightarrow{\text{FFT}_N^{-1}} \mathbf{h} = \mathbf{f} * \mathbf{g}, \tag{30} \\
+\mathbf{f}, \mathbf{g} &\xrightarrow{\text{fold}} \mathbf{f}', \mathbf{g}' \xrightarrow{\text{twist}} \mathbf{f}'', \mathbf{g}'' \xrightarrow{\text{FFT}_{N/2}} \bar{\mathbf{F}}, \bar{\mathbf{G}} \xrightarrow{\odot} \bar{\mathbf{H}} \xrightarrow{\text{FFT}_{N/2}^{-1}} \mathbf{h}'' \xrightarrow{\text{untwist}} \mathbf{h}' \xrightarrow{\text{unfold}} \bar{\mathbf{h}} = \mathbf{f} \,\bar{*}\, \mathbf{g}. \tag{31}
+\end{align}
+$$
+
+**Proposition 1.** Let $\mathbf{f}, \mathbf{g} \in \mathbb{R}^N$, where $N = 2^\nu$ for some $\nu \in \mathbb{N}$, $\|\mathbf{f}\|_\infty \le 2^{\varphi_0}$ and $\|\mathbf{g}\|_\infty \le 2^{\gamma_0}$ for some $\varphi_0, \gamma_0 \in \mathbb{N}$, and let $\chi$ denote the bit-precision of the $\omega$'s as well as of all intermediate values during the calculation of $\mathrm{FFT}_N(\cdot)$ and its inverse, represented as a floating point type. We denote $\mathbf{h} := \mathrm{FFT}_N^{-1}(\mathrm{FFT}_N(\mathbf{f}) \odot \mathrm{FFT}_N(\mathbf{g}))$ and $\bar{\mathbf{h}} := \mathrm{FFNT}_N^{-1}(\mathrm{FFNT}_N(\mathbf{f}) \odot \mathrm{FFNT}_N(\mathbf{g}))$, while we consider the errors as $\|\mathrm{Err}(\mathbf{h})\|_\infty = \|\mathbf{h} - \mathbf{f} * \mathbf{g}\|_\infty$ and $\|\mathrm{Err}(\bar{\mathbf{h}})\|_\infty = \|\bar{\mathbf{h}} - \mathbf{f} \,\bar{*}\, \mathbf{g}\|_\infty$, respectively. Then
+
+$$
+\log \|\mathrm{Err}(\mathbf{h})\|_{\infty} \lesssim (2\nu - 2) \cdot \log(\sqrt{2} + 1) + \varphi_0 + \gamma_0 - \chi + 4, \quad (32)
+$$
+
+$$
+\log \operatorname{Var}(\operatorname{Err}(\mathbf{h})) \lesssim 4\nu + 2\varphi_0 + 2\gamma_0 - 2\chi - 1 - \log(3), \quad \text{and} \tag{33}
+$$
+
+$$
+\log \| \mathrm{Err}(\bar{\mathbf{h}}) \|_{\infty} \lesssim (2\nu - 4) \cdot \log(\sqrt{2} + 1) + \varphi_0 + \gamma_0 - \chi + 4 + \log(3) + 1/2, \quad (34)
+$$
+
+$$
+\log \mathrm{Var}(\mathrm{Err}(\bar{\mathbf{h}})) \lesssim 4\nu + 2\varphi_0 + 2\gamma_0 - 2\chi - 3. \quad (35)
+$$
+
+*Proof.* Find the proof in Appendix A. $\square$
+
+We apply our estimates of the error and variance bounds in order to derive
+two basic parameter setups for convolution over integers: an error-free setup and
+a setup with rare errors based on the 3σ-rule; see the following corollary.
+
+**Corollary 1.** *Provided that*
+
+$$
+\chi_0^{(c.)} \geq \underbrace{2\log(\sqrt{2}+1)}_{\approx 2.54} \cdot \nu + \varphi_0 + \gamma_0 + \underbrace{5 - 2\log(\sqrt{2}+1)}_{\approx 2.46}, \quad \text{or} \tag{36}
+$$
+
+$$
+\chi_0^{(nc.)} \geq \underbrace{2\log(\sqrt{2}+1)}_{\approx 2.54} \cdot \nu + \varphi_0 + \gamma_0 + \underbrace{5 + \log(3) + 1/2 - 4\log(\sqrt{2}+1)}_{\approx 2.00}, \quad (37)
+$$
+
+we have $\|\mathrm{Err}(\mathbf{h})\|_\infty \lesssim 1/2$, or $\|\mathrm{Err}(\bar{\mathbf{h}})\|_\infty \lesssim 1/2$, which means an error-free cyclic, or negacyclic, convolution on integers via $\mathrm{FFT}_N$, or $\mathrm{FFNT}_N$, respectively. I.e.,
\ No newline at end of file
diff --git a/samples/texts/3410193/page_4.md b/samples/texts/3410193/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..6d263b07e1340ffe2d36df8b584ee2857a1fd6ca
--- /dev/null
+++ b/samples/texts/3410193/page_4.md
@@ -0,0 +1,42 @@
+for $\mathbf{f}, \mathbf{g} \in \mathbb{Z}^N$, we have
+
+$$
+\left[ \text{FFT}_{N}^{-1} (\text{FFT}_{N}(\mathbf{f}) \odot \text{FFT}_{N}(\mathbf{g})) \right] = \mathbf{f} * \mathbf{g}, \quad \text{or} \quad (38)
+$$
+
+$$
+\left[ \mathrm{FFNT}_N^{-1} (\mathrm{FFNT}_N(\mathbf{f}) \odot \mathrm{FFNT}_N(\mathbf{g})) \right] = \mathbf{f} \,\bar{*}\, \mathbf{g}, \quad (39)
+$$
+
+respectively, up to negligible probability.
+
+Next, if
+
+$$
+\chi_{3\sigma}^{(c.)} \ge 2\nu + \varphi_0 + \gamma_0 + \underbrace{\frac{1}{2}\log(6)}_{\approx 1.29}, \quad \text{or} \quad (40)
+$$
+
+$$
+\chi_{3\sigma}^{(\text{nc.})} \geq 2\nu + \varphi_0 + \gamma_0 + \underbrace{\log(3) - 1/2}_{\approx 1.08}, \quad (41)
+$$
+
+we have $3\sqrt{\mathrm{Var}(\mathrm{Err}(\mathbf{h}))} \lesssim 1/2$, or $3\sqrt{\mathrm{Var}(\mathrm{Err}(\bar{\mathbf{h}}))} \lesssim 1/2$, which estimates the required floating point type precision for the respective convolution variant based on the $3\sigma$-rule.
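The four bounds of Corollary 1 are easy to evaluate; a small helper (our naming; all logarithms are base 2) that reproduces the figures used later in Note 6:

```python
import math

def precision_bounds(nu, phi0, gamma0):
    # Right-hand sides of (36), (37), (40) and (41), respectively.
    L = math.log2(math.sqrt(2) + 1)        # log(sqrt(2)+1), approx 1.272
    s = phi0 + gamma0
    return {
        "error_free_cyclic":     2 * L * nu + s + 5 - 2 * L,                       # (36)
        "error_free_negacyclic": 2 * L * nu + s + 5 + math.log2(3) + 0.5 - 4 * L,  # (37)
        "3sigma_cyclic":         2 * nu + s + 0.5 * math.log2(6),                  # (40)
        "3sigma_negacyclic":     2 * nu + s + math.log2(3) - 0.5,                  # (41)
    }
```

For the parameters suggested in Section 5 ($\nu = 14$, $\varphi_0 = \gamma_0 = 17$), the $3\sigma$ negacyclic bound evaluates to about $63.08$ bits, i.e., above the $\chi = 53$ bits of **double**.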
+
+Note 5. In the most common practical setting with the **binary64** type as per IEEE 754 standard [1] (aka. **double**), we have $\chi = 53$ bits of precision. For the 80-bit variant of the extended precision format (aka. **long double**), we have $\chi = 64$ bits of precision.
+
+## 5 Implementation & Experimental Results
+
+In this section, we briefly comment on how we use the data paths in our implementation (as outlined in Note 3) and we discuss the choice of parameters with respect to TFHE; then we focus on the following:
+
+1. benchmarking with other implementations using chosen parameters,
+
+2. performance on long polynomials using both 64-bit **double** and 80-bit **long double** floating point number representations, and
+
+3. error magnitude and correctness of the results.
+
+**Implementation Remarks.** In our implementation of the Cooley-Tukey data path [9], we adapted the 4-vector approach from the Nayuki Project [22], which optimizes the RAM access for the most common 64-bit architectures. In a similar manner, we implemented the Gentleman-Sande data path [13]. To calculate FFT properly, both data paths require a specific reordering of their input or output, respectively. The reordering is based on bit-reversal of position indexes, counting from 0. E.g., for 16 elements (4 bits), we exchange the elements at positions 5 ↔ 10, since 5 = **0b0101** and 10 = **0b1010**.
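The reordering can be sketched as follows (a naive index-reversal; production code would precompute the permutation or fuse it into the butterflies):

```python
def bit_reverse_permute(a, bits):
    """Swap each element with the position given by reversing the `bits`-bit
    binary representation of its index, e.g. 5 = 0b0101 <-> 10 = 0b1010."""
    for j in range(len(a)):
        r = int(format(j, "0{}b".format(bits))[::-1], 2)
        if j < r:                      # swap each pair only once
            a[j], a[r] = a[r], a[j]
    return a
```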
+
+Since our goal is solely convolution, i.e., we do not care about the exact order
+of the FFT coefficients, the bit-reverse reordering can be omitted, as pointed out
\ No newline at end of file
diff --git a/samples/texts/3410193/page_5.md b/samples/texts/3410193/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b24c6a046554074a9051507f6808ade70b17183
--- /dev/null
+++ b/samples/texts/3410193/page_5.md
@@ -0,0 +1,15 @@
+by Crandall and Pomerance [10]. By construction, it follows that the Gentleman-Sande data path must be used for the direct transformation and the Cooley-Tukey data path for the inverse.
+
+For benchmarking purposes, we also adopted some code from the TFHE Library [28] to compare the redundant and non-redundant approaches; cf. Sections 3.1 and 3.2, respectively.
+
+**Relation to the TFHE Parameters.** The main (cryptographic) motivation of our algorithm for negacyclic convolution over integers is the negacyclic polynomial multiplication in the TFHE scheme [8]. Below we outline a relation of the TFHE parameters to the parameters of negacyclic convolution via FFNT. As a result, we suggest a reasonable parameter setup for benchmarking.
+
+In TFHE, negacyclic polynomial multiplication occurs in the bootstrapping procedure (namely, in the calculation of the external product), where an integer polynomial is multiplied by a torus polynomial. The coefficients of the right-hand side (torus) polynomial can be represented as integers scaled to [0, 1) and bounded by 2 to the power of their bit-precision, denoted by $\tau$. In the left-hand side (integer) polynomial, the coefficients are bounded by $2^{\gamma}$, where $\gamma$ is one of the fundamental TFHE parameters. By construction, the parameter $\gamma$ is smaller than $\tau$, namely, $\gamma \le \tau/l$, where $l$ is another TFHE parameter. In a corner case, it can be $\gamma = 1$ and the bound can be hence as low as $2^0$.
+
+Based on our preliminary calculations for multivalue TFHE, we need the degree of TFHE polynomials to be at least $N = 2^{14}$ for 8-bit plaintexts with 128-bit security, and the torus precision to be at least $\tau = 34$ (both can be smaller for shorter plaintexts). Finally, we suggest running the tests using polynomials with $\varphi_0 = \gamma_0 = \tau/2 = 17$ and $N = 2^{10}, \dots, 2^{14}$.
+
+## 5.1 Benchmarking Results
+
+As a reference for benchmarking our implementation [20] of negacyclic convolution, we have chosen the NTL Library [27] and the redundant method (as used in the original TFHE Library [28]; cf. Section 3.1), for which we used the same implementation of FFT as for our non-redundant method. Note that the implementation by Al Badawi et al. [3] shows similar results to the popular NTL (only about 1.01–1.2× faster) and they also show that NTL is faster than the concurrent FLINT Library [16]. For NTL, we tested both the ZZ\_pX and ZZ\_pE classes, of which the latter showed slightly better performance, hence we used it for benchmarking. Find the results of our benchmarks in Table 1.
+
+*Note 6.* During the parameter setup, we silently passed over the fact that $\chi = 53$ (the bit-precision of double) is lower than our $3\sigma$-rule estimates for all tested $\nu$'s, as per (41) in Corollary 1. Indeed, they dictate $\chi_{3\sigma}^{(nc.)} \gtrsim 2\nu + \varphi_0 + \gamma_0 + 1.08 = 55.08 \dots 63.08$. For this reason, we reran the scenario with $\nu = 14$ 1000 times, checked the results for correctness, and did not detect any error across all tested polynomials.
\ No newline at end of file
diff --git a/samples/texts/3410193/page_6.md b/samples/texts/3410193/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..98fe51cf7563fee4d78b8295bb5c4f0744b401c4
--- /dev/null
+++ b/samples/texts/3410193/page_6.md
@@ -0,0 +1,9 @@
+| Degree ($N$) | $2^{10}$ | $2^{11}$ | $2^{12}$ | $2^{13}$ | $2^{14}$ |
+|---|---|---|---|---|---|
+| NTL [ms] | 0.617 | 1.258 | 2.643 | 6.132 | 12.771 |
+| $\text{FFT}_{2N}$ [ms] | 0.122 | 0.230 | 0.458 | 0.982 | 2.277 |
+| $\text{FFNT}_N$ [ms] | 0.036 | 0.069 | 0.120 | 0.243 | 0.541 |
+| $\text{FFNT}_N$ over $\text{FFT}_{2N}$ | 3.35× | 3.33× | 3.82× | 4.04× | 4.21× |
+| $\text{FFNT}_N$ avg. error [%] | 0.06 | 0.08 | 0.12 | 0.18 | 0.27 |
+| $\text{FFNT}_N$ max. error [%] | 0.37 | 0.55 | 0.98 | 1.47 | 1.95 |
+
+Table 1: Mean time per negacyclic multiplication of uniformly random polynomials with $\|p\|_{\infty} \le 2^{17}$ using NTL (similar times as FLINT), $FFT_{2N}$ on negacyclic extension (implemented in [28]), and $FFNT_N$, both using 64-bit double. Speedup of $FFNT_N$ over $FFT_{2N}$. Average and maximum rounding errors of $FFNT_N$. 1000 runs per degree and method on an Intel Core i7-8550U CPU @ 1.80GHz.
+
+## 5.2 Performance on Long Polynomials
+
+As a reference for other prospective applications of our method, we tested our code on longer polynomials, too. We provide the performance results using both 64-bit double and 80-bit long double in Figure 1.
+
+Fig. 1: Mean time per polynomial multiplication mod $X^N+1$ and speedup factor of double over long double. Uniformly random polynomials with $\|p\|_{\infty} \le 2^{17}$, 1000 measurements.
\ No newline at end of file
diff --git a/samples/texts/3410193/page_7.md b/samples/texts/3410193/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..6bfaf4c2d5103ce0c3718665369397ce543a1c48
--- /dev/null
+++ b/samples/texts/3410193/page_7.md
@@ -0,0 +1,13 @@
+## 5.3 Error Magnitude & Correctness on Long Polynomials
+
+As outlined in Note 6, our experimental setup exceeds the derived theoretical bounds, even for lower-degree polynomials. Hence, our next goal is to evaluate the error magnitude as well as to check the correctness of the results. We tested the following input polynomial scenarios:
+
+1. uniformly random coefficients (bounded by $||p||_{\infty} \le 2^{\varphi_0}$), and
+
+2. all coefficients equal to the bound minus one, i.e., $2^{\varphi_0} - 1$.
+
+Find the results of the random polynomial setup in Figure 2, where we tested both 64-bit **double** and 80-bit **long double** implementations.
+
+Regarding the setup with all coefficients equal to the bound, we ran the same scenarios as for random polynomials (cf. Figure 2). With 64-bit **double**, the only correct results were obtained for the setup with $||p||_{\infty} \le 2^{17}$ and $N = 2^{14}$, or $N = 2^{15}$, respectively. With 80-bit **long double**, all scenarios were calculated correctly, with maximum rounding error $\lesssim 0.109$ for $||p||_{\infty} \le 2^{20}$ and $N = 2^{18}$.
+
+Fig. 2: Median (solid) and Maximum (dashed) rounding errors for uniformly random polynomials. Erroneous results emphasized by empty red circles. 10 measurements per degree, bound and floating point type.
\ No newline at end of file
diff --git a/samples/texts/3410193/page_8.md b/samples/texts/3410193/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..34a724b6aacbd5b008cf16d691a909bb9f4201e8
--- /dev/null
+++ b/samples/texts/3410193/page_8.md
@@ -0,0 +1,15 @@
+**Discussion.** We observed a factor $\sim 4\times$ speedup of $\text{FFNT}_N$ (i.e., the non-redundant approach) over $\text{FFT}_{2N}$ (i.e., the redundant approach). Compared to NTL, which calculates the coefficients exactly using a number-theoretic transform, our FFT-based method performs better by more than an order of magnitude. Even though we ran our tests with underestimated precision, we obtained correct results even for much larger polynomials with uniformly random coefficients. Note that random-like polynomials occur in **TFHE**, hence our benchmarking scenario with random polynomials is representative of the usage with **TFHE**.
+
+In addition, we tested our code with the 80-bit **long double** floating point type. It enabled error-free calculations with polynomials of higher degree and/or with greater coefficient bound, yet it was only about 3–4 times slower than the variant with the 64-bit **double**.
+
+## 6 Conclusion
+
+We showed that FFT-based convolution algorithms can significantly outperform similar algorithms based on number-theoretic transforms, and they can still guarantee error-free results in the integer domain. We derived estimates of the lower bound of the employed floating point type for error-free cyclic and negacyclic convolutions, as well as we suggested the bounds based on the $3\sigma$-rule.
+
+We suggested a set of testing parameters for negacyclic convolution with particular respect to the usage with the **TFHE** Scheme on a multivalue plaintext space. We ran a benchmark that compares the popular NTL Library, the approach that is used in the **TFHE** Library, and our approach. Compared to the generic NTL Library, which employs a number-theoretic transform, and to the **TFHE** Library approach, which calculates redundant intermediate values, we achieved speedups of around $24\times$ and $4\times$, respectively.
+
+Finally, our experiments have shown approximate bounds for practical error-free results. Namely, using **double**, we could multiply polynomials without errors up to degree $N = 2^{17}$ and norm $\|p\|_\infty \le 2^{20}$ with uniformly random coefficients, and up to degree $N = 2^{15}$ with coefficients equal to $2^{17}$. To conclude, we find our approach particularly useful for negacyclic integer polynomial multiplication, not only in **TFHE**.
+
+**Future Directions.** Our aim is to implement a version based on the 64-bit signed integer type instead of **double**, where we would keep the exponent at one place for the entire array. Such an approach requires less demanding arithmetic and would serve as a proof-of-concept for a prospective FPGA implementation.
+
+**Acknowledgments.** We would like to thank Ahmad Al Badawi and Sergiu Carpov for useful comments and remarks.
\ No newline at end of file
diff --git a/samples/texts/3410193/page_9.md b/samples/texts/3410193/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..144b066b3f0b98fcd466449f1cfd9089cc5c2752
--- /dev/null
+++ b/samples/texts/3410193/page_9.md
@@ -0,0 +1,39 @@
+References
+
+1. IEEE Standard for Floating-Point Arithmetic. *IEEE Std 754-2019 (Revision of IEEE 754-2008)*, pages 1–84, 2019.
+
+2. Miklós Ajtai. Generating hard instances of lattice problems. In *Proceedings of the twenty-eighth annual ACM symposium on Theory of computing*, pages 99–108, 1996.
+
+3. Ahmad Al Badawi, Bharadwaj Veeravalli, and Khin Mi Mi Aung. Efficient polynomial multiplication via modified discrete Galois transform and negacyclic convolution. In *Future of Information and Communication Conference*, pages 666–682. Springer, 2018.
+
+4. Daniel J Bernstein. Multidigit multiplication for mathematicians. 2001.
+
+5. Florian Bourse, Michele Minelli, Matthias Minihold, and Pascal Paillier. Fast homomorphic evaluation of deep discretized neural networks. In *Annual International Cryptology Conference*, pages 483–512. Springer, 2018.
+
+6. Zvika Brakerski, Craig Gentry, and Vinod Vaikuntanathan. (leveled) fully homomorphic encryption without bootstrapping. *ACM Transactions on Computation Theory (TOCT)*, 6(3):13, 2014.
+
+7. Sergiu Carpov, Malika Izabachène, and Victor Mollimard. New techniques for multi-value input homomorphic evaluation and applications. In *Cryptographers’ Track at the RSA Conference*, pages 106–126. Springer, 2019.
+
+8. Ilaria Chillotti, Nicolas Gama, Mariya Georgieva, and Malika Izabachène. TFHE: fast fully homomorphic encryption over the torus. *Journal of Cryptology*, 33(1):34–91, 2020.
+
+9. James W Cooley and John W Tukey. An algorithm for the machine calculation of complex Fourier series. *Mathematics of Computation*, 19(90):297–301, 1965.
+
+10. Richard Crandall and Carl B Pomerance. *Prime numbers: a computational perspective*, volume 182. Springer Science & Business Media, 2006.
+
+11. Richard E Crandall. Integer convolution via split-radix fast Galois transform. *Center for Advanced Computation, Reed College*, 1999.
+
+12. Jintai Ding and Dieter Schmidt. Rainbow, a new multivariable polynomial signature scheme. In *International Conference on Applied Cryptography and Network Security*, pages 164–175. Springer, 2005.
+
+13. W Morven Gentleman and Gordon Sande. Fast Fourier transforms: for fun and profit. In *Proceedings of the November 7-10, 1966, fall joint computer conference*, pages 563–578, 1966.
+
+14. Craig Gentry and Dan Boneh. *A fully homomorphic encryption scheme*, volume 20. Stanford University, 2009.
+
+15. Herman H Goldstine. A history of numerical analysis from the 16th through the 19th century. *Bull. Amer. Math. Soc.*, 1:388–390, 1979.
+
+16. William Hart, Fredrik Johansson, and Sebastian Pancratz. FLINT: Fast Library for Number Theory. https://www.flintlib.org/, 2011.
+
+17. Jeffrey Hoffstein, Jill Pipher, and Joseph H Silverman. NTRU: A ring-based public key cryptosystem. In *International Algorithmic Number Theory Symposium*, pages 267–288. Springer, 1998.
+
+18. David Jao and Luca De Feo. Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies. In *International Workshop on Post-Quantum Cryptography*, pages 19–34. Springer, 2011.
+
+19. Anatolii Alekseevich Karatsuba and Yu P Ofman. Multiplication of many-digital numbers by automatic computers. In *Doklady Akademii Nauk*, volume 145, pages 293–294. Russian Academy of Sciences, 1962.
\ No newline at end of file
diff --git a/samples/texts/3474975/page_1.md b/samples/texts/3474975/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..3fcbe1460d65ec9e68012331020ddcfedb9bcad7
--- /dev/null
+++ b/samples/texts/3474975/page_1.md
@@ -0,0 +1,22 @@
+Coordination Variables and Consensus
+Building in Multiple Vehicle Systems
+
+Wei Ren, Randal W. Beard, and Timothy W. McLain
+
+Brigham Young University, Provo, Utah 84602
+{weiren,beard}@ee.byu.edu, mclain@byu.edu
+
+Much of the research focus in the cooperative control community has been on formation control problems [1, 3, 7, 10, 19]. This focus may be due to the fact that the group control problem can be reduced to well-established single-agent control problems by employing a leader-follower type control strategy. For example, single-agent path planning and trajectory generation techniques can be employed for the leader, and conventional trajectory tracking strategies can be employed for the followers. Indeed, formation control problems are much like linear systems theory: we search where the light is the brightest. It can be argued that formation control problems are the simplest type of coordination problems and that even if they were to be completely solved, the solution would be of limited usefulness since the formation concept is of limited utility. This last comment is supported by the observation that humans cooperate to perform a wide variety of tasks, yet we rarely maintain formation with each other.
+
+The usefulness of cooperative control technologies will be greatly enhanced if, as a community, we develop techniques that apply more generally to non-formation cooperative control problems. The first requirement is that we understand the fundamental issues inherent in all coordination problems. Toward that end, we offer the following, intuitively appealing, fundamental axiom:
+
+**Axiom 1** *Shared information is a necessary condition for coordination.*
+
+Underlying this axiom are two important questions: “What information should be shared?” and “With whom should information be shared?” The focus of this chapter is on providing answers to these two questions.
+
+In every cooperative control problem, there must be an identifiable cooperation objective. To achieve the team objective, specific kernels of information must be shared among members of the team. Identification of the key pieces of information to be shared is a critical step in the formulation of a cooperative control solution. Our approach is to collect the information that must be jointly shared to facilitate cooperation into a vector quantity called the coor-
diff --git a/samples/texts/3474975/page_10.md b/samples/texts/3474975/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..029b10886bc492908da5241d4330d2f16ba00d42
--- /dev/null
+++ b/samples/texts/3474975/page_10.md
@@ -0,0 +1,13 @@
+22. Wei Ren and Randal W. Beard. A decentralized scheme for spacecraft formation flying via the virtual structure approach. In *Proceedings of the American Control Conference*, pages 1746–1751, Denver, CO, June 2003.
+
+23. A. Richards, J. Bellingham, M. Tillerson, and J. How. Coordination and control of UAVs. In *Proceedings of the AIAA Guidance, Navigation, and Control Conference*, paper AIAA-2002-4588, Monterey, CA, August 2002.
+
+24. Wilson J. Rugh. *Linear System Theory*. Prentice Hall, Englewood Cliffs, New Jersey, 2nd edition, 1996.
+
+25. Reza Olfati-Saber and Richard M. Murray. Agreement problems in networks with directed graphs and switching topology. In *Proceedings of the IEEE Conference on Decision and Control*, 2003. (to appear).
+
+26. Shahab Sheikholeslam and Charles A. Desoer. Control of interconnected nonlinear dynamical systems: The platoon problem. *IEEE Transactions on Automatic Control*, 37(6):806–810, June 1992.
+
+27. Daniel J. Stilwell and Bradley E. Bishop. Platoons of underwater vehicles. *IEEE Control Systems Magazine*, 20(6):45–52, December 2000.
+
+28. P. K. C. Wang and F. Y. Hadaegh. Coordination and control of multiple microspacecraft moving in formation. *The Journal of the Astronautical Sciences*, 44(3):315–355, 1996.
\ No newline at end of file
diff --git a/samples/texts/3474975/page_11.md b/samples/texts/3474975/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..3500b395efa6d6a2ac9b32678e50229c85e6aeaf
--- /dev/null
+++ b/samples/texts/3474975/page_11.md
@@ -0,0 +1,35 @@
+dination variable. The coordination variable represents the minimal amount of information needed to effect a specific coordination objective.
+
+Although it is known by different names, the notion of a coordination variable is found in many other works on cooperative control. For example, [16, 17] introduce an “action reference” which, if known by each vehicle, facilitates formation keeping. In leader-following applications [26, 28], the states of the leader constitute the coordination variable, since the actions of the other vehicles in the formation are completely specified once the leader states are known. In [3, 18, 19], the notion of a virtual structure is used to derive formation control strategies. The motion of each vehicle is causally dependent on the dynamic states of the virtual structure, therefore the states of the virtual structure are the coordination variables. In [27] a team of autonomous underwater vehicles is controlled to swarm around a desired mean location of the team with a specified standard deviation. The action of each vehicle is dependent on the location of its nearest neighbor, and the desired mean and standard deviation. This information is the coordination variable.
+
+Coordination variables may also be more discrete in nature. For example, in [6, 23], cooperative task allocation is addressed. Individual vehicle behavior is dependent on the task allocation vector, which becomes the coordination variable. Similarly, in [11], the coordination variable is the dynamic role assignment in a robot soccer scenario.
+
+Information necessary for cooperation may be shared in a variety of ways. For example, relative position sensors may enable vehicles to construct state information for other vehicles [8], or knowledge may be communicated between vehicles using a wireless network [12], or joint knowledge might be pre-programmed into the vehicles before a mission begins [2]. In Section 1 we offer some definitions and general principles regarding coordination variables.
+
+For cooperative control strategies to be effective, a team of vehicles must be able to respond to unanticipated situations or changes in the environment that are sensed as a cooperative task is carried out. As the environment changes, the vehicles on the team must be in agreement as to what changes took place. A direct consequence of Axiom 1 is that cooperation requires that the group of agents reach a consensus on the coordination data. In other words, the instantiation of the coordination variable on each agent must asymptotically approach a sufficiently common value.
+
+A critical problem for cooperative control is to determine algorithms so that a team of vehicles can reach consensus on the values of the coordination data in the presence of (1) imperfect sensors, (2) communication dropout, (3) sparse communication topologies, and (4) noisy and unreliable communication links. The question of “With whom does communication take place?” is of great significance in seeking consensus among a team of vehicles. Section 2 states some results on multi-agent consensus seeking for fixed communication topologies.
+
+The consensus problem has recently been addressed in [9, 15, 12, 25]. The work reported in [15] is particularly relevant to the results reported in this
\ No newline at end of file
diff --git a/samples/texts/3474975/page_12.md b/samples/texts/3474975/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..287a469953d84a442f46a106e1d3e9800c2e8084
--- /dev/null
+++ b/samples/texts/3474975/page_12.md
@@ -0,0 +1,19 @@
+paper. Ref. [15] addresses the knowledge consensus problem when teams of agents only have local communication between nearest neighbors. Since the set of nearest neighbors is constantly changing, the overall system becomes a hybrid system. The paper shows that if the union over all bidirectional communication graphs is connected for finite periods of time, then consensus is achieved. While the results in this chapter are not as strong, they assume only unidirectional communication links.
+
+# 1 Coordination Variables and Functions
+
+This section introduces a general approach to coordination problems where the team objectives are coupled through the assigned tasks rather than through dynamic interactions or tight physical constraints.
+
+Cooperative control by a team of vehicles is dependent on the environment or mission scenario in which the vehicles are acting. To characterize the significant elements of the environment, define $\mathcal{X}_i$ to be the situation state space for the $i^{th}$ vehicle and let $\mathbf{x}_i \in \mathcal{X}_i$ be the situation state of the $i^{th}$ vehicle. For many cooperation problems, the situation state would include current information about an agent's position and the environment in which the agent is acting. For a given situation $\mathbf{x}_i$, the set of feasible actions for an agent is given by $\mathcal{U}_i(\mathbf{x}_i)$ and $\mathbf{u}_i \in \mathcal{U}_i$ is the action variable for the $i^{th}$ agent. The choice of the action variable by each agent on the team affects both the feasibility and the quality of the cooperation achieved.
+
+Axiom 1 implies that there is a minimum amount of information needed by the team to effect cooperation. We will call this information the *coordination variable* and denote it by $\theta$. The essential idea is that if every agent knows the coordination variable and responds appropriately, then cooperative behavior will be achieved. The coordination variable is a vector in coordination space $\mathbb{R}^c$.
+
+A representation of the distillation of information from the situation state and influence variables (full information) to the coordination variable (minimal information) is central to this method. If $f_i : \mathcal{X}_i \times \mathcal{U}_i \to \mathbb{R}^c$ is a function that maps situation state and influence vector pairs to $\mathbb{R}^c$, then the set of feasible coordination variables for the $i^{th}$ vehicle at state $\mathbf{x}_i$ is given by
+
+$$ \Theta_i(\mathbf{x}_i) = \bigcup_{\mathbf{u}_i \in \mathcal{U}_i(\mathbf{x}_i)} f_i(\mathbf{x}_i, \mathbf{u}_i). \quad (1) $$
+
+Note that $\Theta_i(\mathbf{x}_i)$ is not necessarily a connected set.
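A toy instance of Eq. (1) may make this concrete (all names and values are invented for illustration): the situation state is a location, the action is a departure hour, and $f_i$ maps the pair to an achievable meeting hour.

```python
# Toy instance of Eq. (1), with invented values: the situation state x is a
# location, the action u is a departure hour, and f_i maps (x, u) to the
# achievable meeting hour; Theta_i(x) is the union of f_i(x, u) over the
# feasible action set U_i(x).
def f_i(x, u):
    travel_hours = {"downtown": 1, "suburb": 2}[x]  # hours to the venue
    return u + travel_hours

def U_i(x):
    return [17, 18, 19]  # feasible departure hours (independent of x here)

def Theta_i(x):
    return {f_i(x, u) for u in U_i(x)}
```

Here $\Theta_i(\text{"suburb"})$ is the set $\{19, 20, 21\}$ of meeting hours that agent can actually achieve.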
+
+We assume that $f_i$ is (pseudo) invertible in the sense that there exists a function $f_i^\dagger : \mathcal{X}_i \times \Theta_i \to \mathcal{U}_i$ (called the pseudo-inverse of $f_i$), such that for every $\vartheta \in \Theta_i(\mathbf{x}_i)$, $f_i(\mathbf{x}_i, f_i^\dagger(\mathbf{x}_i, \vartheta)) = \vartheta$. Simply stated, if the situational state and the coordination variable are known, the decision variable is unique.
+
+In addition to cooperative behavior, the team may have individual performance objectives. Associated with the $i^{th}$ vehicle is a myopic performance
\ No newline at end of file
diff --git a/samples/texts/3474975/page_14.md b/samples/texts/3474975/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..80aa331c5b8a2a871830fa91a115d36d25410b35
--- /dev/null
+++ b/samples/texts/3474975/page_14.md
@@ -0,0 +1,11 @@
+## 1.1 Example: Cooperative Timing
+
+The application of coordination variables and functions can be demonstrated by a simple example. Suppose that a group of friends decides to meet for dinner on a certain date but fails to specify a time and place. On the afternoon of the dinner date, everyone realizes that they are uncertain about where or when to meet for dinner. For the moment, assume that all of the friends can get together on a conference call to make a decision. This group cooperation problem can be used to illustrate the cooperation strategy outlined above.
+
+Clearly the coordination variables that must be determined for the group are the restaurant to eat at and the time to meet. For each individual there is a range of feasible times and places to meet, described by $\Theta_i$. This range is determined by the situation state $\mathbf{x}_i$ (e.g., traffic conditions, work location) and by the individual's actions $\mathbf{u}_i$ (e.g., departure time, choice to change clothes, choice of route).
+
+The coordination function describes the cost to each individual over the range of the coordination variable. Influencing the cost might be the distance to the restaurant, the budget of the individual, the dietary tastes or restrictions of the individual, or the average wait-time at the restaurant. Thus, for every $\vartheta \in \Theta_i$, the coordination function $\phi_i(\mathbf{x}_i, \vartheta)$ describes the feeling of the individual about all of the possible choices of time and place in the form of a numeric cost metric.
+
+In the conference call, coordination functions are exchanged and a decision can be made about the restaurant and meeting time that maximizes the collective happiness of the group. Although the happiness of each individual is unlikely to be maximal, the goal would be to craft the group objective function so that all of the friends are satisfied with the decision. Note that the exchange of information is efficient. The coordination function captures the essential information necessary to come to a group decision. No discussion of work load, dietary tastes, personal finances, or travel routes is necessary. Furthermore, once a group decision for the coordination variable $\theta$ is made, all individual decisions, such as those about the route and departure time, are left to the individual. This decomposition of group and individual decisions streamlines the cooperation process leading to both efficient decision making and communication.
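The conference-call decision can be sketched as minimizing the summed coordination functions over the candidate values of $\theta$ (the restaurants, hours, and costs below are invented for illustration):

```python
# Each friend reports a coordination function phi_i: a cost over the shared
# coordination variable theta = (restaurant, hour). The group then picks the
# theta minimizing total cost (all values are invented for illustration).
options = [("Thai", 18), ("Thai", 19), ("Pizza", 18), ("Pizza", 19)]
phi = [
    {("Thai", 18): 3, ("Thai", 19): 1, ("Pizza", 18): 2, ("Pizza", 19): 2},
    {("Thai", 18): 1, ("Thai", 19): 2, ("Pizza", 18): 4, ("Pizza", 19): 3},
    {("Thai", 18): 2, ("Thai", 19): 1, ("Pizza", 18): 5, ("Pizza", 19): 4},
]
theta = min(options, key=lambda t: sum(p[t] for p in phi))
```

Note that only the cost tables (the coordination functions) are exchanged; each friend's private reasons for those costs never need to be communicated.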
+
+In this example, the requirement that each agent exchange coordination function information with every other agent is limiting. Would it be possible to come to a decision by placing phone calls among individuals? If individual members of the group are not in agreement about the situation state (e.g., a traffic accident on a main thoroughfare or the availability of seating at a restaurant), is coming to an agreement on time and place possible? The following section deals specifically with the questions of who must communicate and how differences in information among members of the team can be resolved.
\ No newline at end of file
diff --git a/samples/texts/3474975/page_15.md b/samples/texts/3474975/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9953c92d62b19809f41049c34af90f92c7fc7ed
--- /dev/null
+++ b/samples/texts/3474975/page_15.md
@@ -0,0 +1,15 @@
+## 2 Consensus Seeking
+
+Coordination variables focus the attention on the minimal amount of information required for cooperation. The essential idea is that if each vehicle has the same instantiation of the coordination variable, then team action will be coordinated provided that each vehicle acts to achieve the desired value of the coordination variable. The consensus problem is to ensure that there is a sufficiently common instantiation of the coordination variable among members of the team. As shown in Figure 1, consensus can be formed on the situational state $\mathbf{x}_i$, or the coordination variable $\theta_i$. To be general, we will let $\xi_i$ represent the $i^{th}$ vehicle's "information" variable over which consensus is to be formed.
+
+**Fig. 1.** Consensus Diagram.
+
+Let $\mathcal{G}$ be a directed graph (c.f. [13]) representing the (possibly unidirectional) communication topology, where vertices, denoted $A_i$, $i = 1, \dots, N$, represent the vehicles and edges represent unidirectional communication links between vehicles. A directed tree is a directed graph where every vertex, except the root, has exactly one parent. A spanning tree of a directed graph is a tree formed by graph edges that includes all the vertices of the graph. We assume unidirectional communication to allow for scenarios where some of the agents do not possess a transmitter, or perhaps do not wish to transmit information, either to conserve energy or to increase stealth.
+
+The linear consensus scheme proposed in [5] is
+
+$$ \dot{\xi}_i = -\sum_{j=1}^{N} k_{ij} G_{ij} (\xi_i - \xi_j), \quad i = 1, \dots, N, \qquad (5) $$
+
+where $k_{ij}$ are positive constants, and $G_{ij}$ is 1 if information flows from $A_j$ to $A_i$, and 0 otherwise. The intuition behind Equation (5) is that when $A_i$ receives information from $A_j$, its information variable is “pushed” toward $A_j$'s information variable with strength $k_{ij}$.
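A minimal numerical sketch of the update law (5), using forward-Euler integration (the three-agent chain, unit gains, and initial values are our own illustration):

```python
import numpy as np

# Forward-Euler simulation of the consensus law (5) for a 3-agent chain
# A1 <- A2 <- A3 (unidirectional links; graph and gains are illustrative).
N = 3
G = np.array([[0, 1, 0],    # G[i][j] = 1 iff information flows from A_j to A_i
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
K = np.ones((N, N))          # gains k_ij
xi = np.array([0.0, 5.0, 10.0])  # initial information variables

dt, steps = 0.01, 5000
for _ in range(steps):
    xi_dot = -np.array([sum(K[i, j] * G[i, j] * (xi[i] - xi[j])
                            for j in range(N)) for i in range(N)])
    xi = xi + dt * xi_dot
```

Because the chain has a spanning tree rooted at $A_3$, whose variable is never updated, every information variable converges to $\xi_3(0)$.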
+
+In the case of $\xi_i \in \mathbb{R}$, Eq. (5) can be written in matrix form as
\ No newline at end of file
diff --git a/samples/texts/3474975/page_16.md b/samples/texts/3474975/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c14a3e2cef689d7de860c7f6dcc9cbd357f4ead
--- /dev/null
+++ b/samples/texts/3474975/page_16.md
@@ -0,0 +1,23 @@
+$$
+\dot{\xi} = C\xi, \tag{6}
+$$
+
+where $\xi = [\xi_1, \dots, \xi_N]^T$, $C = [c_{ij}]$, $(i,j) = 1, \dots, N$, with $c_{ii} = -(\sum_{j \neq i} k_{ij} G_{ij})$, $i = 1, \dots, N$, and $c_{ij} = k_{ij} G_{ij}$, $j \neq i$.
+
+We say that $C$ is the matrix associated with graph $\mathcal{G}$. Note that this update scheme accommodates all possible communication topologies. In [5] the information variable $\xi_i$ is assumed to be a scalar and continuously differentiable in time. For simplicity, we maintain this assumption, but note that all the results are valid for $\xi_i \in \mathbb{R}^p$ by simply multiplying each element of $C$ by an identity matrix $I_p$, so that matrix $C$ has dimension $Np \times Np$ instead of $N \times N$. In [5] the weightings $k_{ij}$ are assumed to be equal. In this chapter, we relax this assumption by allowing $k_{ij}$ to be any positive constant representing the relative confidence between vehicles. We will assume that the graph $\mathcal{G}$ is time-invariant.
+
+We have the following definition from [5].
+
+**Definition 1.** The set of agents $\mathcal{A} = \{A_i | i = 1, \dots, N\}$ is said to be in consensus at time $t_0$, if $t \ge t_0$ implies that $\|\xi_i(t) - \xi_j(t)\| = 0$ for each $(i, j) = 1, \dots, N$. The set of agents $\mathcal{A}$ is said to reach global consensus asymptotically if for any $\xi_i(0)$, $i = 1, \dots, N$, $\|\xi_i(t) - \xi_j(t)\| \to 0$ as $t \to \infty$ for each $(i, j) = 1, \dots, N$. The set $\mathcal{A}$ is said to be global consensus reachable if there exists an information update strategy for each $\xi_i$, $i = 1, \dots, N$ that achieves global consensus asymptotically for $\mathcal{A}$.
+
+Obviously $C$ is diagonally dominant, has zero row sums, and non-positive diagonal elements. Therefore, from the Gersgorin disc theorem (c.f. [14]), $C$ has at least one zero eigenvalue and all the other non-zero eigenvalues are in the open left half plane.
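The construction of $C$ in Eq. (6) and these spectral facts can be checked numerically (the helper function and the directed 3-cycle graph are our own illustration):

```python
import numpy as np

# Build C of Eq. (6) from gains K and adjacency G, then check the quoted
# spectral facts (helper name and example graph are illustrative).
def consensus_matrix(K, G):
    C = K * G                              # off-diagonal c_ij = k_ij * G_ij
    np.fill_diagonal(C, 0.0)
    np.fill_diagonal(C, -C.sum(axis=1))    # c_ii = -sum_{j != i} k_ij * G_ij
    return C

G = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)     # directed 3-cycle
C = consensus_matrix(np.ones((3, 3)), G)

assert np.allclose(C.sum(axis=1), 0.0)          # zero row sums
eigs = np.linalg.eigvals(C)
assert sum(abs(e) < 1e-9 for e in eigs) == 1    # exactly one zero eigenvalue
assert all(e.real < 1e-9 for e in eigs)         # rest in the open left half plane
```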
+
+## 2.1 Consensus and Evolution of Coordination Variables
+
+In this section, we first consider the case when the information variable is inherently constant. We then consider the case when the information variable is dynamically evolving in time. This is the case, for example, in formation control problems where the information variable is the dynamic state of a virtual leader.
+
+### Static Consensus
+
+It has been shown in [5] that the group of vehicles $\mathcal{A}$ reaches consensus asymptotically using the update scheme (5) if matrix $C$ in Eq. (6) has exactly one zero eigenvalue and all the other eigenvalues are in the open left half plane. The following result computes the value of the information variable that is reached through the consensus process.
+
+Before moving on, we need the following definitions from matrix theory (c.f. [14]). A real matrix $M = [a_{ij}]$ is said to be nonnegative, denoted as
\ No newline at end of file
diff --git a/samples/texts/3474975/page_17.md b/samples/texts/3474975/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..ff5fb1c1f88710ca9c89bcf4e6157d4538eacb83
--- /dev/null
+++ b/samples/texts/3474975/page_17.md
@@ -0,0 +1,17 @@
+$M \ge 0$, if all its entries are nonnegative. A nonnegative matrix is said to be a stochastic matrix if all its row sums are 1.
+
+**Lemma 1.** If $C$ is given by Eq. (6), then $e^{Ct}$, $\forall t > 0$, is a stochastic matrix with positive diagonal entries. Furthermore, if $C$ has exactly one zero eigenvalue, then $e^{Ct} \to b\nu^T$ and $\xi_i(t) \to \sum_{j=1}^N (\nu_j \xi_j(0))$ as $t \to \infty$, where $b = [1, \dots, 1]^T_{N \times 1}$, $\nu = [\nu_1, \dots, \nu_N]^T \ge 0$, and $\sum_{j=1}^N \nu_j = 1$.
+
+*Proof:* Given eigenvalues $\lambda_i \in \sigma(C)$ with eigenvectors $z_i$, $i = 1, \dots, N$, where $\sigma(A)$ represents the spectrum of $A$, we know that $e^{\lambda_i t} \in \sigma(e^{Ct})$ with the same eigenvectors as $C$ (c.f. [14]). Noting that $C$ has a zero eigenvalue with an associated eigenvector given by $b$, then $e^{Ct}$ has an eigenvalue 1 with the same eigenvector $b$. Thus we know that $e^{Ct}b = b$, which implies that $e^{Ct}$ always has row sum equal to 1. Also note that $C$ can be written as the sum of a nonnegative matrix $M$ and $-\beta I_N$, where $\beta$ is the maximum absolute value of the diagonal entries of $C$ and $I_N$ is the $N \times N$ identity matrix. We can see that $e^{Ct} = e^{-\beta t} e^{Mt}$, which is obviously nonnegative and has positive diagonal entries. As a result, $e^{Ct}$, $\forall t > 0$, is a stochastic matrix with positive diagonal entries.
+
+Furthermore, if $C$ has exactly one zero eigenvalue, then $e^{Ct}$ has exactly one eigenvalue equal to 1 and all the other eigenvalues have modulus less than 1. Let $J = [j_{ml}]$, $(m, l) = 1, \dots, N$, be the Jordan matrix corresponding to matrix $C$; then $j_{mm} = \lambda_m$. Without loss of generality, assume that $\lambda_N = 0$ and that $\lambda_m$ is in the open left half plane for $m = 1, \dots, N-1$.
+
+Let $C = PJP^{-1}$, where $P = [p_1, \dots, p_N]$ is an $N \times N$ matrix. Note that $p_N$ can correspond to an eigenvector associated with eigenvalue $\lambda_N = 0$. Without loss of generality, choose $p_N = b$ as the eigenvector.
+
+We know that $e^{Ct} = Pe^{Jt}P^{-1}$. It can be verified that
+
+$$e^{Jt} \rightarrow \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$
+
+as $t \to \infty$ from the property of $C$ (c.f. [14]). After some manipulation, we know that $e^{Ct} \to b\nu^T$ as $t \to \infty$, where $\nu_i$, $i = 1, \dots, N$, corresponds to the last row of matrix $P^{-1}$. The result $\sum_{i=1}^N \nu_i = 1$ comes from the fact that $e^{Ct}$ has row sum equal to 1 for any $t$.
+
+We also need to show that $\nu \ge 0$. Now consider the matrix $e^{Ck}$, $k = 0, 1, 2, \dots$. Obviously $e^{Ck}$ should also approach $b\nu^T$ as $k \to \infty$. From Lemma 8.2.7 in [14], $\nu$ should be an eigenvector of matrix $(e^C)^T$ associated with the simple eigenvalue 1. From Theorem 8.3.1 in [14], $(e^C)^T$ has a nonnegative eigenvector $x \ge 0$ associated with the simple eigenvalue 1. Thus it can be seen that $\nu = \alpha x$ for some $\alpha \neq 0$. Since $\sum_{i=1}^N \nu_i = 1$, it must be true that $\alpha > 0$, which implies that $\nu \ge 0$.
\ No newline at end of file
diff --git a/samples/texts/3474975/page_18.md b/samples/texts/3474975/page_18.md
new file mode 100644
index 0000000000000000000000000000000000000000..bae1bea49853c8601a50c8573eae67075812b1f7
--- /dev/null
+++ b/samples/texts/3474975/page_18.md
@@ -0,0 +1,27 @@
+The solution to Eq. (6) is given by $\xi(t) = e^{Ct}\xi(0)$. Therefore, it is obvious that $\xi_i(t) \rightarrow \sum_{j=1}^{N} (\nu_j \xi_j(0))$, $i = 1, \dots, N$, as $t \rightarrow \infty$.
+
+Note that if we replace matrix C with $\gamma C$ in Eq. (6), where $\gamma > 0$, we can increase consensus speed by increasing $\gamma$. The solution to Eq. (6) with this new matrix is given by $\xi = e^{\gamma C t} \xi(0) = e^{C(\gamma t)} \xi(0)$, which converges faster than the original solution if we choose $\gamma > 1$.
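Lemma 1 and the limit $e^{Ct} \to b\nu^T$ can be checked numerically on a directed 3-cycle with unit gains (an illustrative example, not from the chapter; the `expm` helper assumes a diagonalizable matrix):

```python
import numpy as np

# Numerical check of Lemma 1 on the directed 3-cycle with unit gains
# (illustrative example; C is diagonalizable, so e^{At} = V e^{Lt} V^{-1}).
C = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0]])

def expm(A, t=1.0):
    w, V = np.linalg.eig(A)                     # eigenvalues w, eigenvectors V
    return ((V * np.exp(w * t)) @ np.linalg.inv(V)).real

E = expm(C, 1.0)
assert np.all(E > 0)                    # nonnegative with positive diagonal
assert np.allclose(E.sum(axis=1), 1.0)  # stochastic: row sums equal 1

# C has exactly one zero eigenvalue, so e^{Ct} -> b nu^T; by the symmetry
# of the cycle, nu = [1/3, 1/3, 1/3].
E_inf = expm(C, 100.0)
assert np.allclose(E_inf, 1.0 / 3.0, atol=1e-8)
```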
+
+Let $\mathcal{G}_1$ be a communication graph for the group of agents $\mathcal{A}$. Let $\mathcal{G}_2$ be the communication graph obtained by adding one more directed link from any node $m$ to node $\ell$ in graph $\mathcal{G}_1$, where $m \neq \ell$. Also let $Q$ and $S$ be the matrices in the update law (6) associated with graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ respectively. Denote $p_Q(t) = \det(tI - Q)$ and $p_S(t) = \det(tI - S)$ as the characteristic polynomials of $Q$ and $S$ respectively. Let $Q_t = tI - Q$ and $S_t = tI - S$. Given any matrix $M$, denote $M([i, j])$ as the sub-matrix of $M$ formed by deleting the $i$th row and $j$th column.
+
+**Lemma 2.** If matrix *Q* has exactly one zero eigenvalue, then so does matrix *S*.
+
+*Proof:* Without loss of generality, assume that the new directed communication link added to graph $\mathcal{G}_1$ is from node $m$ to node 1, where $m \neq 1$; this is for simplicity, since we can always renumber node $\ell$ as node 1.
+
+Obviously matrix *S* has at least one zero eigenvalue and all the other non-zero eigenvalues are in the open left half plane. Below we will show that *S* has only one zero eigenvalue.
+
+Assume that $Q = [q_{ij}]$, $S = [s_{ij}]$, $Q_t = [q_{tij}]$, and $S_t = [s_{tij}]$, $(i, j) = 1, \dots, N$. From the property of $Q$ and $S$, we know that $s_{11} = q_{11} - k_{1m}$, $s_{1m} = q_{1m} + k_{1m}$, and $s_{ij} = q_{ij}$ otherwise. Accordingly, it can be seen that $s_{t11} = t - s_{11} = t - q_{11} + k_{1m} = q_{t11} + k_{1m}$, $s_{t1m} = -s_{1m} = -q_{1m} - k_{1m} = q_{t1m} - k_{1m}$, and $s_{tij} = q_{tij}$ otherwise. Also note that $\det S_t([1, j]) = \det Q_t([1, j])$, $j = 1, \dots, N$. Then we know that
+
+$$
+\begin{align*}
+\det S_t &= \sum_{j=1}^{N} (-1)^{1+j} s_{t1j} \det S_t([1,j]) \\
+&= \sum_{j=1}^{N} (-1)^{1+j} q_{t1j} \det S_t([1,j]) \\
+&\quad + k_{1m} \det S_t([1,1]) - (-1)^{1+m} k_{1m} \det S_t([1,m]) \\
+&= \det Q_t + k_{1m} (\det S_t([1,1]) + (-1)^m \det S_t([1,m])) .
+\end{align*}
+$$
+
+Consider a matrix $E = [e_{ij}]$, $(i,j) = 1, \dots, N-1$, given by adding $[s_{21}, s_{31}, \dots, s_{N1}]^T$ to the $(m-1)$th column of matrix $S([1,1])$. Matrix $E$ can be denoted as
\ No newline at end of file
diff --git a/samples/texts/3474975/page_2.md b/samples/texts/3474975/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..a978dfdfb0e962d566e96773feb45c088d1a7bff
--- /dev/null
+++ b/samples/texts/3474975/page_2.md
@@ -0,0 +1,24 @@
+$$E = \begin{bmatrix}
+s_{22} & s_{23} & \cdots & s_{2m} + s_{21} & \cdots & s_{2N} \\
+s_{32} & s_{33} & \cdots & s_{3m} + s_{31} & \cdots & s_{3N} \\
+\vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
+s_{N2} & s_{N3} & \cdots & s_{Nm} + s_{N1} & \cdots & s_{NN}
+\end{bmatrix}$$
+
+Thus $e_{i(m-1)} = s_{(i+1)m} + s_{(i+1)1}$, $i = 1, \dots, N-1$. Using the properties of determinants, it can be verified that
+
+$$\det(tI - E) = \det S_t([1, 1]) + (-1)^m \det S_t([1, m]).$$
+
+Obviously matrix *E* has zero row sum and nonpositive diagonal elements. Also matrix *E* is diagonally dominant. From the Geršgorin disc theorem, we know that *E* has at least one zero eigenvalue and all the other non-zero eigenvalues are in the open left half plane. As a result, the Routh stability criterion implies that the characteristic polynomial of *E*, denoted $\det(tI - E)$, has a nonnegative coefficient in the first power of *t*. We also know that matrix *Q* has a positive coefficient for the first power of *t* in its characteristic polynomial $\det Q_t$, since *Q* has exactly one zero eigenvalue and all the others are in the open left half plane.
+
+Noting that $\det S_t = \det Q_t + k_{1m}\det(tI - E)$, it follows that the characteristic polynomial of $S$ also has a positive coefficient for the first power of $t$.
+
+Therefore, $S$ can only have one zero eigenvalue. ■
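+The lemma can also be sanity-checked numerically. Below is a small sketch (assuming `numpy`; the graph and gain values are illustrative, not from the paper): starting from the matrix associated with a directed spanning tree, adding one extra directed link yields a matrix that still has exactly one zero eigenvalue.
+
+```python
+import numpy as np
+
+def consensus_matrix(links, n):
+    """Zero-row-sum matrix for directed links (receiver i, sender j, gain k),
+    with entry (i, j) = k and k subtracted from the diagonal, as for Q and S."""
+    M = np.zeros((n, n))
+    for i, j, k in links:
+        M[i, j] += k
+        M[i, i] -= k
+    return M
+
+def n_zero_eigs(M, tol=1e-9):
+    return int(np.sum(np.abs(np.linalg.eigvals(M)) < tol))
+
+n = 4
+tree = [(2, 3, 1.0), (1, 2, 0.7), (0, 1, 1.3)]  # spanning tree rooted at node 3
+Q = consensus_matrix(tree, n)
+S = consensus_matrix(tree + [(0, 3, 0.5)], n)   # one extra link, gain 0.5
+assert n_zero_eigs(Q) == 1
+assert n_zero_eigs(S) == 1   # the lemma: still exactly one zero eigenvalue
+```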
+
+Ref. [5] shows that the group of agents $\mathcal{A}$ is global consensus reachable if and only if the associated communication graph $\mathcal{G}$ has a spanning tree. The proof for this claim in [5] is constructive in that the linear update law is based on a communication graph which is the spanning tree of $\mathcal{G}$. Of course, there may exist other connections in graph $\mathcal{G}$ which are ignored. Ref. [5] only partially answers the question of whether the update law (5) accounting for all existing connections achieves global consensus asymptotically. The next result provides a complete answer.
+
+**Theorem 2.** The consensus strategy (5) achieves global consensus asymptotically for $\mathcal{A}$ if and only if the associated (static) communication graph $\mathcal{G}$ has a spanning tree.
+
+*Proof:* (Sufficiency.) Obviously $C$ in Eq. (6) associated with graph $\mathcal{G}$ always has at least one zero eigenvalue and all the other non-zero eigenvalues are in the open left half plane. We only need to check the algebraic multiplicity of the zero eigenvalue.
+
+From [5], we know that the matrix associated with the spanning tree has exactly one zero eigenvalue. If graph $\mathcal{G}$ is itself the spanning tree, we know that the update law (5) achieves consensus asymptotically for $\mathcal{A}$. If not, graph $\mathcal{G}$ can be constructed by consecutively adding communication links to the tree. Lemma 2 implies that adding one additional communication link to the spanning tree results in an associated matrix that also has exactly one zero eigenvalue. We can recursively add additional links, where Lemma 2 implies
\ No newline at end of file
diff --git a/samples/texts/3474975/page_3.md b/samples/texts/3474975/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..a16a90a4f824d09fd172a3589c9355eb8d27b18e
--- /dev/null
+++ b/samples/texts/3474975/page_3.md
@@ -0,0 +1,25 @@
+that the matrix associated with the new graph has exactly one zero eigenvalue, until we obtain the graph $\mathcal{G}$. By induction, we know that the update law (5) achieves global consensus asymptotically for $\mathcal{A}$.
+
+(Necessity.) That the consensus strategy (5) achieves global consensus asymptotically for $\mathcal{A}$ implies that $\mathcal{A}$ is global consensus reachable, which in turn implies that graph $\mathcal{G}$ has a spanning tree, following the necessity part of the proof of Theorem 3.1 in [5]. ■
+
+**Corollary 1.** Suppose that $B = [b_{ij}]$, where $b_{ii} \le 0$, $b_{ij} \ge 0$, $\forall i \ne j$, and $\sum_{j=1}^{n} b_{ij} = 0$. Then $B$ has at least one zero eigenvalue and all the other non-zero eigenvalues are in the open left half plane. Furthermore, $B$ has exactly one zero eigenvalue if and only if the directed graph associated with $B$ has a spanning tree.
+
+*Proof:* $B$ has the same property as matrix $C$ in Eq. (6), therefore the corollary follows from Theorem 2. ■
+
+**Corollary 2.** *The Laplacian matrix of a graph has a simple zero eigenvalue if and only if the graph has a spanning tree.*
+
+*Proof:* If we multiply the Laplacian matrix by -1, we get a matrix satisfying the properties defined in Corollary 1. ■
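+Corollary 2 is easy to check numerically for small graphs. A minimal sketch (assuming `numpy`; the example graphs are illustrative): the Laplacian $L = D - A$ of a directed graph with a spanning tree has a simple zero eigenvalue, while a disconnected graph does not.
+
+```python
+import numpy as np
+
+def laplacian(A):
+    """L = D - A, where row i of the adjacency A lists the in-neighbors of node i."""
+    return np.diag(A.sum(axis=1)) - A
+
+def zero_mult(L, tol=1e-9):
+    return int(np.sum(np.abs(np.linalg.eigvals(L)) < tol))
+
+# 0 -> 1 -> 2: has a directed spanning tree (rooted at node 0)
+A_tree = np.array([[0, 0, 0],
+                   [1, 0, 0],
+                   [0, 1, 0]], dtype=float)
+assert zero_mult(laplacian(A_tree)) == 1   # simple zero eigenvalue
+
+# two disconnected pairs: no spanning tree
+A_disc = np.array([[0, 1, 0, 0],
+                   [1, 0, 0, 0],
+                   [0, 0, 0, 1],
+                   [0, 0, 1, 0]], dtype=float)
+assert zero_mult(laplacian(A_disc)) == 2   # zero eigenvalue is not simple
+```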
+
+Note that the linear update law (5) only achieves consensus for constant coordination variables, which may not be suitable for applications where the coordination variable evolves dynamically. For example, in the context of leader-following approaches (c.f. [22]), the group leader's trajectory can act as the coordination variable for the whole group.
+
+## Dynamic Consensus
+
+Suppose that the information variable on each vehicle is driven by the same time-varying input $u(t)$, which might represent an *a priori* known feedforward signal. The associated consensus scheme is given by
+
+$$ \dot{\xi}_i = - \sum_{j=1}^{N} k_{ij} G_{ji} (\xi_i - \xi_j) + u(t), \quad i = 1, \dots, N. \qquad (7) $$
+
+Eq. (7) can also be written in matrix form as
+
+$$ \dot{\xi} = C\xi + Bu(t), \qquad (8) $$
+
+where $C$ is the matrix associated with graph $\mathcal{G}$ and $B = [1, \cdots, 1]^T$. We have the following theorem regarding consensus of the information variables $\xi_i$, $i = 1, \dots, N$.
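+As a quick illustration of Eq. (8), the following forward-Euler sketch (assuming `numpy`; the three-node graph, gains, input, and step size are all illustrative) shows the information variables reaching dynamic consensus under a common time-varying input:
+
+```python
+import numpy as np
+
+# C for a 3-node graph with a spanning tree (0 -> 1 -> 2), illustrative gains
+C = np.array([[0.0,  0.0,  0.0],
+              [1.0, -1.0,  0.0],
+              [0.0,  1.0, -1.0]])
+B = np.ones(3)
+u = lambda t: 0.2 * np.abs(np.sin(t))   # common feedforward input u(t)
+
+dt, T = 0.01, 40.0
+xi = np.array([0.5, -1.0, 2.0])         # distinct initial conditions
+for k in range(int(T / dt)):
+    xi = xi + dt * (C @ xi + B * u(k * dt))
+
+# after the transient, the information variables agree (dynamic consensus)
+assert np.ptp(xi) < 1e-3
+```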
\ No newline at end of file
diff --git a/samples/texts/3474975/page_4.md b/samples/texts/3474975/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..9b5fea07b1071065d7ce7bb7af3763a5ced28416
--- /dev/null
+++ b/samples/texts/3474975/page_4.md
@@ -0,0 +1,21 @@
+**Theorem 3.** The consensus strategy (8) achieves global consensus asymptotically for $\mathcal{A}$ if and only if the associated communication graph $\mathcal{G}$ has a spanning tree. Furthermore the information variables satisfy $\|\xi_i(t) - \zeta(t)\| \to 0$ as $t \to \infty$, where $\zeta(t)$ is the solution of
+
+$$\dot{\zeta} = u(t), \quad \zeta(0) = \mu,$$
+
+where $\mu B$ is the equilibrium of the differential equation
+
+$$\dot{\pi} = C\pi, \quad \pi(0) = \xi(0).$$
+
+*Proof:* (Sufficiency.) The solution to Eq. (8) is given by $\xi(t) = \xi_s(t) + \xi_e(t)$, where $\xi_s(t) = e^{Ct}\xi(0)$ and $\xi_e(t) = \int_0^t e^{C(t-\tau)}Bu(\tau)d\tau$ (cf. [24]). Note that $\xi_s$ represents the zero-input solution to Eq. (8), that is, the solution to $\dot{\xi} = C\xi$. From Theorem 2, it is obvious that each component of $\xi_s$ satisfies $\xi_{si}(t) \to \mu$ as $t \to \infty$, $i = 1, \dots, N$. Also note that $\xi_e$ represents the zero-state solution to Eq. (8). We know that $e^{C(t-\tau)}B = B$ since $e^{C(t-\tau)}$ always has row sum equal to 1. Therefore, it can be seen that each component of $\xi_e$ satisfies $\xi_{ei} = \int_0^t u(\tau)d\tau$, $i = 1, \dots, N$. Combining $\xi_s$ and $\xi_e$ gives $\|\xi_i(t) - \zeta(t)\| \to 0$ as $t \to \infty$.
+
+(Necessity.) The necessary part follows directly from Theorem 2. ■
+
+## Equilibrium Points
+
+We have shown that the linear consensus strategy (5) achieves global consensus asymptotically for $\mathcal{A}$ if the graph $\mathcal{G}$ has a spanning tree. In addition, each $\xi_i(t)$ will converge to $\sum_{j=1}^N \nu_j\xi_j(0)$ as $t \to \infty$, where $\sum_{j=1}^N \nu_j = 1$ and $\nu_j \ge 0$. A natural question is whether each initial condition $\xi_i(0)$ will contribute to the final equilibrium point. In the following we provide a partial answer to this question. We assume that graph $\mathcal{G}$ has a spanning tree in this section.
+
+Observe that if there is a node $A_k$ in $\mathcal{G}$ without an incoming link (there is at most one such node in graph $\mathcal{G}$ from Theorem 2), the linear update law corresponding to this node is given by $\dot{\xi}_k = 0$ from Eq. (5), which implies that $\xi_k(t) = \xi_k(0)$ for all $t$. Therefore, the other nodes must converge to $\xi_k(0)$ for any $k_{ij} > 0$. That is, $\nu_k = 1$ and $\nu_i = 0, \forall i \ne k$.
+
+In general, the initial condition of a node contributes to the equilibrium value if and only if the node has a directed path to all the other nodes in $\mathcal{G}$. Thus $\nu_i \ne 0$ for any node which has directed paths to all the other nodes in $\mathcal{G}$ and $\nu_i = 0$ otherwise. As a special case, the initial condition of each node in a graph contributes to the final equilibrium point if and only if the graph is strongly connected. The above argument can be explained as follows. If there is no path from node $j$ to node $m$ in $\mathcal{G}$, it is impossible for $\xi_m(t)$ to be influenced by $\xi_j(0)$. On the other hand, if there is a path from node $j$ to every other node in $\mathcal{G}$, then $\xi_i(t)$, $\forall i \ne j$, will be influenced by $\xi_j(0)$.
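+The weights $\nu_i$ can be computed as the left eigenvector of $C$ for the zero eigenvalue, normalized to sum to one. A sketch (assuming `numpy`; the example matrices are illustrative) confirming both cases above: a node with no incoming link gets all the weight, and a strongly connected graph gives every node positive weight.
+
+```python
+import numpy as np
+
+def consensus_weights(C):
+    """Left eigenvector nu of C for eigenvalue 0, normalized so sum(nu) = 1;
+    then xi_i(t) -> sum_j nu_j * xi_j(0) for every i."""
+    w, V = np.linalg.eig(C.T)
+    nu = np.real(V[:, np.argmin(np.abs(w))])
+    return nu / nu.sum()
+
+# node 0 has no incoming link (0 -> 1 -> 2): all weight on node 0
+C1 = np.array([[0.0,  0.0,  0.0],
+               [1.0, -1.0,  0.0],
+               [0.0,  1.0, -1.0]])
+nu1 = consensus_weights(C1)
+assert np.allclose(nu1, [1.0, 0.0, 0.0], atol=1e-8)
+
+# strongly connected ring 0 -> 1 -> 2 -> 0: every initial condition contributes
+C2 = np.array([[-1.0,  0.0,  1.0],
+               [ 1.0, -1.0,  0.0],
+               [ 0.0,  1.0, -1.0]])
+nu2 = consensus_weights(C2)
+assert np.all(nu2 > 0)
+```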
+
+The fact that $\nu_i \ge 0$, $i = 1, \dots, N$ can also be explained from the following perspective. Assume that $\nu_\ell < 0$ for some $\ell$. Consider the case $\xi_\ell(0) > 0$ and
\ No newline at end of file
diff --git a/samples/texts/3474975/page_5.md b/samples/texts/3474975/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..63839501cff7739292b9af5a27462e11be7fc414
--- /dev/null
+++ b/samples/texts/3474975/page_5.md
@@ -0,0 +1,15 @@
+$\xi_i(0) = 0, \forall i \neq \ell$. We know that each $\xi_i(t)$ will converge to $\sum_{j=1}^N \nu_j \xi_j(0) = \nu_\ell \xi_\ell(0)$, which is negative. Following the update law (5), $\dot{\xi}_\ell(0) < 0$ if there is any incoming link to $A_\ell$ and $\dot{\xi}_\ell(0) = 0$ otherwise. In the first situation, $\xi_\ell(t)$ will decrease and $\xi_i(t)$, $\forall i \neq \ell$, cannot decrease since $\dot{\xi}_i(0) \ge 0$, which implies that the $\xi_i(t)$ will be synchronized to a value $c$ with $0 \le c < \xi_\ell(0)$. In the second situation, the $\xi_i(t)$ will be synchronized to $\xi_\ell(0)$. Both cases contradict the above result. Therefore, $\nu_i \ge 0$, $i = 1, \dots, N$.
+
+## 2.2 Illustrative Example
+
+In this section, we consider a scenario where six vehicles are to rendezvous at a position along a parameterized trajectory represented by $(r_x(\tau(t)), r_y(s(t)))$. Figure 2 shows the corresponding communication links between these vehicles. Note the existence of a spanning tree.
+
+It is assumed that each vehicle knows the parameterized trajectory. The parameters $\tau$ and $s$ therefore represent the minimum information needed to achieve the coordination objective: i.e., $\tau$ and $s$ are the coordination variables. We will instantiate $\tau$ and $s$ on each vehicle as $\tau_i$ and $s_i$, $i = 1, \dots, 6$. Here we let $\xi_i = [\tau_i, s_i]^T$, $i = 1, \dots, 6$.
+
+Fig. 2. Communication topology.
+
+Based on the communication topology shown in Figure 2, the matrix $C$ is given by
+
+$$ C = \gamma \begin{bmatrix} -1.5 & 1.5 & 0 & 0 & 0 & 0 \\ 2 & -2 & 0 & 0 & 0 & 0 \\ 0.9 & 0 & -2.8 & 0 & 1.9 & 0 \\ 0 & 1.2 & 0 & -2.5 & 0 & 1.3 \\ 0 & 0 & 1.4 & 1.8 & -3.2 & 0 \\ 0 & 0 & 0 & 0 & 0.7 & -0.7 \end{bmatrix} \otimes I_2, $$
+
+where $\gamma > 0$ is a coefficient, $\otimes$ denotes the Kronecker product, and each gain $k_{ij} > 0$, $i, j = 1, \dots, 6$, is chosen arbitrarily. The initial conditions for each instantiation of $\tau$ and $s$ are given by $\tau_i = 0.2i - 0.1$ and $s_i = 0.2i$, $i = 1, \dots, 6$.
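+The static consensus for this example can be reproduced with a short forward-Euler simulation (a sketch assuming `numpy`, with $\gamma = 1$; step size and horizon are illustrative). It also confirms that perturbing the initial conditions of $A_3, \dots, A_6$ leaves the equilibrium unchanged, since only $A_1$ and $A_2$ have directed paths to all the other nodes.
+
+```python
+import numpy as np
+
+W = np.array([[-1.5,  1.5,  0.0,  0.0,  0.0,  0.0],
+              [ 2.0, -2.0,  0.0,  0.0,  0.0,  0.0],
+              [ 0.9,  0.0, -2.8,  0.0,  1.9,  0.0],
+              [ 0.0,  1.2,  0.0, -2.5,  0.0,  1.3],
+              [ 0.0,  0.0,  1.4,  1.8, -3.2,  0.0],
+              [ 0.0,  0.0,  0.0,  0.0,  0.7, -0.7]])
+C = np.kron(W, np.eye(2))   # gamma = 1
+
+def equilibrium(x0, T=100.0, dt=0.002):
+    x = x0.copy()
+    for _ in range(int(T / dt)):
+        x = x + dt * (C @ x)
+    return x
+
+tau0 = 0.2 * np.arange(1, 7) - 0.1
+s0 = 0.2 * np.arange(1, 7)
+xi0 = np.ravel(np.column_stack([tau0, s0]))   # [tau_1, s_1, ..., tau_6, s_6]
+xi = equilibrium(xi0)
+assert np.ptp(xi[0::2]) < 1e-6 and np.ptp(xi[1::2]) < 1e-6   # consensus
+
+# changing the initial conditions of A_3, ..., A_6 does not move the equilibrium
+xi0_b = xi0.copy()
+xi0_b[4:] += 10.0
+xi_b = equilibrium(xi0_b)
+assert np.allclose(xi_b, xi, atol=1e-5)
+```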
\ No newline at end of file
diff --git a/samples/texts/3474975/page_6.md b/samples/texts/3474975/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..dc81651745bcf2ca204f3d2a9b4483395ca89b5a
--- /dev/null
+++ b/samples/texts/3474975/page_6.md
@@ -0,0 +1,5 @@
+Figure 3 shows the consensus scenario using update law (6) for $\gamma = 1$ and $\gamma = 5$ respectively. We can see that only the initial conditions of $A_1$ and $A_2$ affect the equilibrium value, which is consistent with the communication graph shown in Figure 2, where it can be seen that only $A_1$ and $A_2$ have a directed path to all the other nodes. Figure 4 shows the same consensus scenario corresponding to the communication graph formed by deleting the link from $A_2$ to $A_1$ in Figure 2. It can be seen that each instantiation of $\tau$ and $s$ converges to $\tau_1(0)$ and $s_1(0)$ respectively.
+
+**Fig. 3.** Consensus of $\tau_i$ and $s_i$ using update law (6).
+
+Figure 5 illustrates a dynamic consensus scenario using update law (8) for $\gamma = 1$ and $\gamma = 5$ respectively. The common predefined planning schemes for $\tau$ and $s$ are given by $\dot{\tau} = \frac{1}{5}|\sin(t)|$ and $\dot{s} = \frac{1}{4}|\cos(t)|$ respectively. Here we let $u(t) = [\frac{1}{5}|\sin(t)|, \frac{1}{4}|\cos(t)|]^T$ in Eq. (8). It can be seen that consensus is achieved asymptotically and that both $\tau_i$ and $s_i$ follow the appropriate trajectories.
\ No newline at end of file
diff --git a/samples/texts/3474975/page_7.md b/samples/texts/3474975/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..b9b35898bef71a9db2b076284590805da854a942
--- /dev/null
+++ b/samples/texts/3474975/page_7.md
@@ -0,0 +1,11 @@
+Fig. 4. Consensus of $\tau_i$ and $s_i$ without link from $A_2$ to $A_1$ using update law (6).
+
+# 3 Research Challenges and Future Directions
+
+In this chapter we have highlighted some of the challenging problems inherent in coordinated control. In particular, we have argued that coordination is inherently tied to information exchange. This perspective highlights several key problems that need to be addressed.
+
+1. What are the appropriate coordination variables for broad classes of problems?
+
+2. How does a group of vehicles form consensus on those variables when the data is (a) continuous in time, (b) discrete in time, (c) quantized in amplitude, or (d) originates from sources with variable reliability?
+
+3. How do we make the team objectives invariant with respect to the consensus seeking problem? In other words, as consensus is being formed, the vehicles must act on the best information available to them at the time. One way of viewing this is that the individuals understand the team objectives differently. Under what conditions will the “design” objectives be satisfied?
\ No newline at end of file
diff --git a/samples/texts/3474975/page_8.md b/samples/texts/3474975/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..e893ad4deab0bbd00f99f2786d9ac1a9b5f5ba3d
--- /dev/null
+++ b/samples/texts/3474975/page_8.md
@@ -0,0 +1,15 @@
+Fig. 5. Consensus and evolution of $\tau_i$ and $s_i$ using update law (8).
+
+## Acknowledgements
+
+This work was partially funded by AFOSR grants F49620-01-1-0091 and F49620-02-C-0094, and by DARPA grant NBCH1020013.
+
+## References
+
+1. Tucker Balch and Ronald C. Arkin. Behavior-based formation control for multirobot teams. *IEEE Transactions on Robotics and Automation*, 14(6):926–939, December 1998.
+
+2. Tucker Balch and Lynne E. Parker, editors. *Robot Teams: From Diversity to Polymorphism*. A. K. Peters, Ltd., Natick, Massachusetts, 2002.
+
+3. Randal W. Beard, Jonathan Lawton, and Fred Y. Hadaegh. A feedback architecture for formation control. *IEEE Transactions on Control Systems Technology*, 9(6):777–790, November 2001.
+
+4. Randal W. Beard and Timothy W. McLain. Multiple UAV cooperative search under collision avoidance and limited range communication constraints. In *Proceedings of the IEEE Conference on Decision and Control*, 2003. To appear.
\ No newline at end of file
diff --git a/samples/texts/3474975/page_9.md b/samples/texts/3474975/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..04706849007b28097e58a53f50aa5bb84466105b
--- /dev/null
+++ b/samples/texts/3474975/page_9.md
@@ -0,0 +1,33 @@
+5. Randal W. Beard and Vahram Stepanyan. Synchronization of information in distributed multiple vehicle coordinated control. In *Proceedings of the IEEE Conference on Decision and Control*, 2003. To appear.
+
+6. John Bellingham, Michael Tillerson, Arthur Richards, and Jonathan P. How. Multi-task allocation and path planning for cooperating UAVs. In *Cooperative Control: Models, Applications and Algorithms*, pages 1–19. Conference on Coordination, Control and Optimization, November 2001.
+
+7. Calin Belta and Vijay Kumar. Trajectory design for formations of robots by kinetic energy shaping. In *Proceedings of the IEEE International Conference on Robotics and Automation*, pages 2593–2598, Washington DC, May 2002.
+
+8. J. Russell Carpenter. Decentralized control of satellite formations. *International Journal of Robust and Nonlinear Control*, 12:141–161, 2002.
+
+9. Gustavo Ayres de Castro and Fernando Paganini. Convex synthesis of controllers for consensus over a network of agents. In *Proceedings of the IEEE Conference on Decision and Control*, 2003. To appear.
+
+10. Magnus Egerstedt and Xiaoming Hu. Formation constrained multi-agent control. *IEEE Transactions on Robotics and Automation*, 17(6):947–951, December 2001.
+
+11. Rosemary Emery, Kevin Sikorski, and Tucker Balch. Protocols for collaboration, coordination and dynamic role assignment in a robot team. In *Proceedings of the IEEE International Conference on Robotics and Automation*, pages 3008–3015, Washington DC, May 2002.
+
+12. J. Alexander Fax and Richard M. Murray. Graph Laplacians and stabilization of vehicle formations. In *IFAC World Congress, Barcelona, Spain*, 2002.
+
+13. C. Godsil and G. Royle. *Algebraic Graph Theory*, volume 207 of *Graduate Text in Mathematics*. Springer, New York, 2001.
+
+14. Roger A. Horn and Charles R. Johnson. *Matrix Analysis*. Cambridge University Press, 1985.
+
+15. Ali Jadbabaie, Jie Lin, and A. Stephen Morse. Coordination of groups of mobile autonomous agents using nearest neighbor rules. *IEEE Transactions on Automatic Control*, 48(6):988–1001, June 2003.
+
+16. W. Kang, N. Xi, and Andy Sparks. Formation control of autonomous agents in 3D workspace. In *Proceedings of the IEEE International Conference on Robotics and Automation*, pages 1755–1760, San Francisco, CA, April 2000.
+
+17. Wei Kang and Hsi-Han Yeh. Coordinated attitude control of multisatellite systems. *International Journal of Robust and Nonlinear Control*, 12:185–205, 2002.
+
+18. Jonathan Lawton and Randal Beard. A projection approach to spacecraft formation attitude control. In *23rd Annual AAS Guidance and Control Conference, Breckenridge, Colorado*, February 2000. Paper no. AAS 00-011.
+
+19. Naomi Ehrich Leonard and Edward Fiorelli. Virtual leaders, artificial potentials and coordinated control of groups. In *Proceedings of the IEEE Conference on Decision and Control*, pages 2968–2973, Orlando, Florida, December 2001.
+
+20. T. McLain, P. Chandler, S. Rasmussen, and M. Pachter. Cooperative control of UAV rendezvous. In *Proc. of the ACC*, pages 2309–2314, June 2001.
+
+21. Timothy W. McLain and Randal W. Beard. Coordination variables, coordination functions, and cooperative timing missions. In *Proceedings of the American Control Conference*, pages 296–301, Denver, CO, June 2003.
\ No newline at end of file
diff --git a/samples/texts/3826450/page_10.md b/samples/texts/3826450/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..558cac50846c34ddc48dabfd729ca2e0c8689fcb
--- /dev/null
+++ b/samples/texts/3826450/page_10.md
@@ -0,0 +1,37 @@
+Note here that the **W** used above produced a $(\mathbf{WD})_{sym} - \mathbf{I}$ that just barely avoided being negative definite with the original $\mathbf{D}_1$ and $\mathbf{D}_2$, so we will have to increase the values on the off-diagonals a bit for this next example. In fact anything with magnitude larger than 2 will have some $\epsilon > 0$ that will cause a constant metric to be impossible, but for simplicity we will now take
+
+$$ \mathbf{W}_{*} = \begin{bmatrix} 0 & -4 \\ 4 & 0 \end{bmatrix} $$
+
+Note that with $\mathbf{W}_{*}$, even just halving one of the off-diagonals while keeping the other intact will produce a $(\mathbf{WD})_{sym} - \mathbf{I}$ that is not negative definite. Anything less than halving however will keep the identity metric valid. Therefore, we expect that taking $\epsilon$ in $\mathbf{D}_{1*}$ and $\mathbf{D}_{2*}$ to be in the range $0.5 \ge \epsilon > 0$ will also cause issues when trying to obtain a constant metric.
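+These claims about the identity metric are easy to verify numerically. A sketch (assuming `numpy`, and taking $\mathbf{D}_{1*} = \mathrm{diag}(1, \epsilon)$ and $\mathbf{D}_{2*} = \mathrm{diag}(\epsilon, 1)$, consistent with the symmetric-part computation that follows):
+
+```python
+import numpy as np
+
+W = np.array([[0.0, -4.0],
+              [4.0,  0.0]])   # the antisymmetric W_* above
+
+def sym(A):
+    return 0.5 * (A + A.T)
+
+def identity_metric_ok(eps):
+    """Check (W D)_sym - I < 0 for both extreme gain matrices D_1*, D_2*."""
+    ok = True
+    for D in (np.diag([1.0, eps]), np.diag([eps, 1.0])):
+        ok &= np.max(np.linalg.eigvalsh(sym(W @ D) - np.eye(2))) < 0
+    return bool(ok)
+
+assert identity_metric_ok(0.75)      # mild downweighting: identity metric still works
+assert not identity_metric_ok(0.5)   # eps <= 0.5: identity metric fails, as claimed
+assert not identity_metric_ok(0.25)
+```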
+
+We will now show, via a proof similar to the one above, that $\mathbf{M}$ is impossible to find for $\mathbf{W}_{*}$ when $\epsilon \le 0.5$. This result is compelling because it not only shows that $\epsilon$ does not need to be a particularly small value, but it also drives home the point about antisymmetry: the larger in magnitude the antisymmetric weights are, the larger the $\epsilon$ at which we begin to encounter problems.
+
+Working out the matrix multiplication again, we now get
+
+$$ (\mathbf{M}\mathbf{W}_{*}\mathbf{D}_{1*})_{sym} - \mathbf{M} = \begin{bmatrix} 4m-a & 2b-m-2a\epsilon \\ b-m-2a\epsilon & -4m\epsilon-b \end{bmatrix} $$
+
+and
+
+$$ (\mathbf{M}\mathbf{W}_{*}\mathbf{D}_{2*})_{sym} - \mathbf{M} = \begin{bmatrix} 4m\epsilon-a & -(2a+m-2b\epsilon) \\ -(2a+m-2b\epsilon) & -4m-b \end{bmatrix} $$
+
+Resulting in two new main necessary conditions:
+
+$$ |4m - a - b - 4m\epsilon| > 2|2b - m - 2a\epsilon| \quad (9) $$
+
+$$ |4m\epsilon - a - b - 4m| > 2|2a + m - 2b\epsilon| \quad (10) $$
+
+As well as new conditions on the diagonal elements:
+
+$$ 4m - a < 0 \quad (11) $$
+
+$$ -4m - b < 0 \quad (12) $$
+
+We will now proceed with trying to find $a, b, m$ that can simultaneously meet all conditions, setting $\epsilon = 0.5$ for simplicity.
+
+Looking at $m = 0$, we can see again that $\mathbf{M}$ will require off-diagonal elements, as condition (9) is now equivalent to the condition $a+b > |4b-2a|$ and condition (10) is similarly now equivalent to $a+b > |4a-2b|$.
+
+Evaluating these conditions in more detail, if we assume $4b > 2a$ and $4a > 2b$, we can remove the absolute value and the conditions work out to the contradicting $3a > 3b$ and $3b > 3a$ respectively. As an aside, if $\epsilon > 0.5$, this would no longer be the case, whereas with $\epsilon < 0.5$, the conditions would be pushed even further in opposite directions.
+
+If we instead assume $2a > 4b$, this means $4a > 2b$, so the latter condition would still lead to $b > a$, contradicting the original assumption of $2a > 4b$. $2b > 4a$ causes a contradiction analogously. Trying $4b = 2a$ will lead to the other condition becoming $b > 2a$, once again a contradiction. Thus a diagonal $\mathbf{M}$ is impossible.
+
+So now we again break down the conditions into $m > 0$ and $m < 0$ cases, first looking at $m > 0$. Using condition (11) and knowing all unknowns have positive sign, condition (9) reduces to $a+b-2m > |4b-2(a+m)|$ and condition (10) reduces to $a+b+2m > |4a-2(b-m)|$. This looks remarkably similar to the $m=0$ case, except now condition (9) has $-2m$ added to both sides (inside the absolute value), and condition (10) has $2m$ added to both sides in the same manner. If $4b > 2(a+m)$ the $-2m$ term on each side will simply
\ No newline at end of file
diff --git a/samples/texts/3826450/page_11.md b/samples/texts/3826450/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..288a92b14d330679c05c108255eae0a62c8f5b9e
--- /dev/null
+++ b/samples/texts/3826450/page_11.md
@@ -0,0 +1,24 @@
+cancel, and similarly if $4a > 2(b-m)$ the $+2m$ terms will cancel, leaving us with the same contradictory conditions as before.
+
+Therefore we check $2(a+m) > 4b$. This rearranges to $2a > 2(2b-m) > 2(b-m)$, so that from condition (10) we get $b>a$. Substituting condition (11) into $2(a+m) > 4b$ gives $8b < 4a + 4m < 5a$, i.e. $b < \frac{5}{8}a$, a contradiction. The analogous issue arises if trying $2(b-m) > 4a$. Trying $2(a+m) = 4b$ gives $m=2b-a$, which in condition (10) results in $5b-a > |6a-6b|$, while in condition (11) leads to $5a > 8b$, so (10) can further reduce to $5b-a > 6a-6b$, i.e. $11b > 7a$. But $b > \frac{7}{11}a$ and $b < \frac{5}{8}a$ is a contradiction. Thus there is no way for $m>0$ to work.
+
+Finally, trying $m<0$, we now use condition (12) and the signs of the unknowns to reduce condition (9) to $a+b+2|m| > |4b-2(a-|m|)|$ and condition (10) to $a+b-2|m| > |4a-2(b+|m|)|$. These two conditions are directly analogous to those in the $m>0$ case, where $b$ now plays the role of $a$ and condition (12) reads $b > 4|m|$. Therefore the proof is complete.
+□
+
+The main intuition behind this counterexample is that high levels of antisymmetry can prevent a constant metric from being found in the nonlinear system. This is because $\mathbf{D}$ is a diagonal matrix with values between 0 and 1, so the primary functionality it can have in the symmetric part of the Jacobian is to downweight the outputs of certain neurons selectively. In the extreme case of all 0 or 1 values, we can think of this as selecting a subnetwork of the original network, and taking each of the remaining neurons to be single unit systems receiving input from the subnetwork. For a given static configuration of $\mathbf{D}$ (think linear gains), this is a hierarchical system that will be stable if the subnetwork is stable. But as $\mathbf{D}$ can evolve over time when a nonlinearity is introduced, we would need to find a constant metric that can serve completely distinct hierarchical structures simultaneously - which is not always possible.
+
+Put in terms of matrix algebra, $\mathbf{D}$ can zero out columns of $\mathbf{W}$, but not their corresponding rows. So consider a weight pair $w_{ij}, w_{ji}$, whose entry in $\mathbf{W}_{sym}$ is $\frac{w_{ij}+w_{ji}}{2}$: if $D_i = 0$ and $D_j = 1$, the $i,j$ entry in $(\mathbf{WD})_{sym}$ is guaranteed to have lower magnitude if the signs of $w_{ij}$ and $w_{ji}$ are the same, but guaranteed to have higher magnitude if the signs are different. Thus if the linear system would be stable based on magnitudes alone, $\mathbf{D}$ poses no real threat, but if the linear system requires antisymmetry to be stable, $\mathbf{D}$ can make proving contraction quite complicated (if possible at all).
+
+## A.8 Proof of Theorem 8
+
+**Theorem.** Consider any contracting combination of (1), contracting in a block-diagonal, constant metric with blocks $\mathbf{M}_i$. If the recurrent weight matrix $\mathbf{W}_i$ for each subsystem is updated according to the learning rule:
+
+$$ \dot{\mathbf{W}}_i = -\eta \mathbf{M}_i \mathbf{e}_i \phi(\mathbf{x}_i)^T $$
+
+where $\mathbf{e}_i = \mathbf{x}_i - \mathbf{x}_{i,d}$ denotes the local error for each subsystem, $\mathbf{x}_{i,d}$ is the desired trajectory for subsystem $i$, and $\eta > 0$ is the learning rate, then the overall error of the system decreases.
+
+*Proof.* Consider the candidate Lyapunov-like function:
+
+$$ V = \frac{1}{2} \mathbf{e}^T \mathbf{M} \mathbf{e} + \frac{1}{2\eta} \sum_i \operatorname{Tr}(\tilde{\mathbf{W}}_i \tilde{\mathbf{W}}_i^T) $$
+
+where $\mathbf{e} = [\mathbf{e}_1^T \dots \mathbf{e}_P^T]^T$, $\tilde{\mathbf{W}}_i = \mathbf{W}_i - \mathbf{W}_{i,d}$. The time derivative of $V$ is:
\ No newline at end of file
diff --git a/samples/texts/3826450/page_12.md b/samples/texts/3826450/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..ba45d2f5a852ed5bd8904c9abeed82dd4d1c9bfc
--- /dev/null
+++ b/samples/texts/3826450/page_12.md
@@ -0,0 +1,36 @@
+is contracting if
+
+$$
+\mathbf{J}^T \mathbf{M}_{t+1} \mathbf{J} - \mathbf{M}_t \preceq -\beta \mathbf{M}_t
+$$
+
+A primary advantage of contraction analysis is that it is directly applicable to non-autonomous systems, which the vast majority of recurrent models are, allowing in turn modular contraction-preserving combination properties to be derived [33, 31].
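+For a discrete-time linear map $\mathbf{x}_{t+1} = \mathbf{J}\mathbf{x}_t$ with a constant metric, the condition above can be checked directly, and it implies that the $\mathbf{M}$-weighted squared distance between any two trajectories shrinks by a factor of at least $1 - \beta$ per step. A minimal sketch (assuming `numpy`; $\mathbf{J}$, $\mathbf{M}$, and $\beta$ are illustrative values):
+
+```python
+import numpy as np
+
+J = np.array([[0.5, 0.3],
+              [0.0, 0.6]])
+M = np.eye(2)        # constant metric, M_{t+1} = M_t = M
+beta = 0.2
+
+lhs = J.T @ M @ J - M + beta * M
+assert np.max(np.linalg.eigvalsh(lhs)) <= 0   # J^T M J - M <= -beta M holds
+
+# consequence: M-weighted squared distance shrinks by at least (1 - beta) per step
+x, y = np.array([1.0, -2.0]), np.array([0.3, 0.7])
+for _ in range(5):
+    d_before = (x - y) @ M @ (x - y)
+    x, y = J @ x, J @ y
+    assert (x - y) @ M @ (x - y) <= (1 - beta) * d_before
+```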
+
+To briefly review the current literature on application of contraction analysis to recurrent models, we first note that the ‘Echo-State Condition’ introduced in [13] is equivalent to discrete-time contraction in the identity metric. A later generalization of this condition included a diagonal metric [4]. In the context of neuroscience, contraction analysis has been applied to analyzing the dynamics of winner-take-all networks [29, 30] as well as networks with synaptic plasticity [14]. In the machine learning context, Miller and Hardt recently rederived an ‘echo-state property’ for discrete recurrent models, and went on to prove that these contracting recurrent models could be well-approximated by feedforward networks in certain cases [18]. More recently still, in a series of papers [25, 27, 26] Revay, Wang, and Manchester applied contraction analysis to discrete-time recurrent networks and greatly expanded the class of models considered in [18].
+
+Within this line of research, our paper has two main aims: 1) To provide simple contraction conditions for continuous-time recurrent neural networks and 2) To show how these continuous-time contraction conditions imply a combination property. For both aims, we provide direct parameterizations so that these conditions can be 'plugged in' to a modern optimization library (such as PyTorch [23]) and trained with minimal effort.
+
+Ultimately, stability is a basic presupposition of experimental neuroscience research. In particular, the act of having an animal perform many repeated trials of a task and then averaging the neural responses across tasks assumes an underlying stability or consistency to the observed neural dynamics. Moreover, cognitive models are moving increasingly toward study of multi-area dynamics, but many questions remain on how to best train and evaluate such networks [41, 40]. Understanding how different brain regions interact harmoniously to produce a unified percept/action will require new ideas and analysis tools.
+
+## 1.1 Combination Properties
+
+The combination and reuse of primitive “modules” has enabled a great deal of progress in computer science, and is also a key theme in biological evolution, particularly apparent in cortical structure of the human brain. In fact, it is thought that the majority of traits that have developed over the last 400+ million years are the result of evolutionary forces acting on regulatory elements that combine core components, rather than mutations to the core components themselves. This mechanism of action makes meaningful variation in population phenotypes much more feasible to achieve, and is appropriately titled “facilitated variation” [8]. In addition to the biological evidence for facilitated variation, computational models have demonstrated that this approach produces populations that are better able to generalize to new environments [22], an ability that will be critical to further develop in deep learning systems.
+
+While the benefits of building modular systems are clear, in general there is no guarantee that a combination of stable systems will itself be stable. Thus the tractability of these evolutionary processes hinges on some mechanism for ensuring stability of combinations. As contraction analysis tools allow complicated contracting systems to be built up recursively from simpler elements, this form of stability is well suited for neural and other biological systems [33, 32]. Here, we describe two common forms of contracting system combinations, hierarchical and feedback, that automatically guarantee overall system stability, as depicted in Figure 1. In section 2.5, we will put these combinations into the context of our novel contraction conditions. Note also that contracting combinations can be made between systems with very different dynamics, so long as those dynamics are contracting. In our particular case, this means that the contracting RNNs discussed herein can be combined with physical systems with quantified contraction properties.
+
+### 1.1.1 Feedback Combination
+
+Consider two systems, independently contracting in constant metrics **M**₁ and **M**₂, which are combined in feedback:
+
+$$
+\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, t) + \mathbf{B}\mathbf{y}
+$$
\ No newline at end of file
diff --git a/samples/texts/3826450/page_13.md b/samples/texts/3826450/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..ad8dd7f0cce560d459ef5849fbbc179a0988edbd
--- /dev/null
+++ b/samples/texts/3826450/page_13.md
@@ -0,0 +1,37 @@
+$$
+\begin{align*}
+\dot{V} &= \mathbf{e}^T \mathbf{M} \dot{\mathbf{e}} + \frac{1}{\eta} \sum_i \operatorname{Tr}(\tilde{\mathbf{W}}_i \dot{\mathbf{W}}_i^T) \\
+&\le -\mathbf{e}^T \mathbf{M} \mathbf{e} + \sum_i \mathbf{e}_i^T \mathbf{M}_i \tilde{\mathbf{W}}_i \phi(\mathbf{x}_i) + \frac{1}{\eta} \sum_i \operatorname{Tr}(\tilde{\mathbf{W}}_i \dot{\mathbf{W}}_i^T) \\
+&= -\mathbf{e}^T \mathbf{M} \mathbf{e} + \sum_i \operatorname{Tr}(\tilde{\mathbf{W}}_i \phi(\mathbf{x}_i) \mathbf{e}_i^T \mathbf{M}_i) + \frac{1}{\eta} \sum_i \operatorname{Tr}(\tilde{\mathbf{W}}_i \dot{\mathbf{W}}_i^T) \\
+&= -\mathbf{e}^T \mathbf{M} \mathbf{e} + \sum_i \operatorname{Tr}\left(\tilde{\mathbf{W}}_i \left(\phi(\mathbf{x}_i) \mathbf{e}_i^T \mathbf{M}_i + \frac{1}{\eta} \dot{\mathbf{W}}_i^T\right)\right)
+\end{align*}
+$$
+
+Therefore, picking $\dot{\mathbf{W}}_i = -\eta \mathbf{M}_i \mathbf{e}_i \phi(\mathbf{x}_i)^T$ leads to $\dot{V} \le -\mathbf{e}^T \mathbf{M} \mathbf{e}$, which, by Barbalat's Lemma, implies $\mathbf{e}^T \mathbf{M} \mathbf{e} \to 0$ asymptotically and therefore $\|\mathbf{e}\| \to 0$ as well. $\square$
+
+## A.9 Proof of Theorem 9
+
+**Theorem.** In order for a linearly stable RNN to be contracting with general activation function in (1), every subnetwork must also be linearly stable. Put another way, it is necessary for all principal submatrices of the matrix **W** to have only eigenvalues with negative real part.
+
+Additionally, we show that if the nonlinear system (1) is contracting in a diagonal metric, all of its subnetworks are also contracting in a diagonal metric.
+
+*Proof.* Unstable subnetworks can be exploited by the activation functions purely via linear gains, so this necessary condition does not even require a dynamic **D**. If there is an unstable subnetwork within our network, we can simply assign linear activation to those units in that subnetwork, and constant activation to all other units. This will produce a **D** that zeros outputs from any units not in the unstable subnetwork, essentially generating a network that is the hierarchical combination of the unstable subnetwork with each other individual unit. Therefore the unstable portion of the network will have its behavior unchecked, and as it provides inputs to the other units can drive the entire network to blow up.
+
+In matrix algebra terms, suppose that some principal submatrix $W_S$ of an $n \times n$ weight matrix $W$ has an eigenvalue with real part in the right half plane. Without loss of generality (as units can be renumbered), assume that indices 0 through $m-1$ of $W$ are also in $W_S$, and indices $m$ through $n-1$ are not in $W_S$. Then we can set $\phi_i$ to be the identity function for $i < m$, and set $\phi_i$ to be a constant function $f(x) = 1$ for $i \ge m$. Now **D** is a static diagonal matrix with 1 on diagonals up to index $m-1$, and 0 on diagonals after. **WD**, which can also be viewed as a linear system and therefore evaluated by linear stability criteria, is equal to:
+
+$$
+\begin{bmatrix}
+W_S & 0 \\
+W_{S_{out}} & 0
+\end{bmatrix}
+$$
+
+where $W_{S_{out}}$ is a matrix containing the connections from the first $m$ neurons to the last $n-m$ neurons.
+
+Observe that this is a hierarchical combination, with $W_S$ receiving no inputs. Therefore it is impossible for the overall system **WD** to be stable, as it contains an unstable system at the top of its hierarchy. This completes the proof of the necessary condition.
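+This block structure is easy to check numerically. A minimal sketch with NumPy and hypothetical weights (the specific numbers are illustrative, not from the text): an unstable 2 × 2 subnetwork placed at the top of the hierarchy keeps its unstable eigenvalue in **WD**.
+
+```python
+import numpy as np
+
+# Hypothetical 4-unit network whose leading 2x2 principal
+# submatrix W_S is linearly unstable.
+W_S = np.array([[0.5, -1.0],
+                [1.0,  0.5]])              # eigenvalues 0.5 +/- 1j
+W_S_out = np.array([[0.3, 0.1],
+                    [0.2, 0.4]])           # connections from W_S to the rest
+W = np.block([[W_S, np.zeros((2, 2))],
+              [W_S_out, -2.0 * np.eye(2)]])
+
+# Linear activation on the first m = 2 units, constant activation after:
+# D = diag(1, 1, 0, 0), so WD zeros the columns of the remaining units.
+D = np.diag([1.0, 1.0, 0.0, 0.0])
+WD = W @ D
+
+# WD is block lower triangular, so its spectrum is that of W_S plus zeros:
+# the instability of the subnetwork survives in the full linearization.
+assert np.isclose(np.linalg.eigvals(WD).real.max(), 0.5)
+```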
+
+As mentioned, we will additionally prove that diagonal stability of the nonlinear system (1) guarantees all subnetworks of that system will also be stable in a diagonal metric. This extends a known result for linear diagonal stability [20] to the nonlinear RNN, and is of particular relevance here due to both results such as Theorem 1 that guarantee diagonal stability of the nonlinear system, and also the theoretical benefits of subnetwork stability described in section 2.6.
+
+To begin, suppose we are given an RNN with weights **W**, contracting in known diagonal metric **M**. Then we have:
+
+$$(\mathbf{M}\mathbf{W}\mathbf{D})_{\text{sym}} - \mathbf{M} \prec 0$$
\ No newline at end of file
diff --git a/samples/texts/3826450/page_14.md b/samples/texts/3826450/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..2c647ca83c437697ddede06be545b63f32987fff
--- /dev/null
+++ b/samples/texts/3826450/page_14.md
@@ -0,0 +1,45 @@
+uniformly.
+
+Now suppose we want to keep an arbitrary subset, $S$, of the neurons in $\mathbf{W}$. We will write $\mathbf{W}_S$ for the principal submatrix of $\mathbf{W}$ containing only the indices in $S$, and we will prove that this network is contracting in the analogously defined $\mathbf{M}_S$. Since only the activation functions that act on the kept neurons remain relevant, we will need to show:
+
+$$(\mathbf{M}_S \mathbf{W}_S \mathbf{D}_S)_{\text{sym}} - \mathbf{M}_S \prec 0$$
+
+uniformly.
+
+Observe that with diagonal $\mathbf{M}$, taking the principal submatrix of the overall Jacobian is equivalent to taking the principal submatrix of each component independently, i.e.
+
+$$(\mathbf{M}_S \mathbf{W}_S \mathbf{D}_S)_{\text{sym}} - \mathbf{M}_S = ((\mathbf{MWD})_{\text{sym}} - \mathbf{M})_S$$
+
+This can be seen in steps, first focusing on the $\mathbf{M}_S \mathbf{W}_S \mathbf{D}_S$ term, as this equals $(\mathbf{MWD})_S$ due to $\mathbf{M}$ and $\mathbf{D}$ both being diagonal. $\mathbf{M}$ simply weights each row by the corresponding diagonal element, so it is irrelevant whether a row is discarded before or after multiplication - and analogously for $\mathbf{D}$ on the columns.
+
+Next, note that the symmetric part can also be taken before or after subsetting, as it is simply the sum of a matrix and its transpose multiplied by a scalar. The sum operates elementwise, so again there is no reason for discard time to matter. This can also be applied to subtraction of the $\mathbf{M}_S$ term, completing the transformation.
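+The commutation argument above can be spot-checked numerically; a minimal sketch with hypothetical random data (NumPy assumed):
+
+```python
+import numpy as np
+
+# For diagonal M and D, taking the principal submatrix commutes
+# with forming (MWD)_sym - M.
+rng = np.random.default_rng(0)
+n, S = 6, [1, 3, 4]
+W = rng.standard_normal((n, n))
+M = np.diag(rng.uniform(0.5, 2.0, n))      # diagonal metric
+D = np.diag(rng.uniform(0.0, 1.0, n))      # diagonal activation slopes
+
+sym = lambda A: (A + A.T) / 2
+idx = np.ix_(S, S)
+
+lhs = sym(M[idx] @ W[idx] @ D[idx]) - M[idx]   # subset each factor first
+rhs = (sym(M @ W @ D) - M)[idx]                # subset the full Jacobian
+assert np.allclose(lhs, rhs)
+```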
+
+To complete the proof, we use Cauchy's Interlace Theorem as described in [12] to show that:
+
+$$\lambda_{max}\big(((\mathbf{MWD})_{\text{sym}} - \mathbf{M})_S\big) \le \lambda_{max}\big((\mathbf{MWD})_{\text{sym}} - \mathbf{M}\big)$$
+
+with the latter term already known to be uniformly negative definite.
+
+Because we are concerned with the maximum eigenvalues of symmetric matrices, we can guarantee we will only be working with real values. Thus Cauchy's Interlace Theorem can be applied directly, as it states that given a square Hermitian matrix $\mathbf{A}$ and a principal submatrix $\mathbf{B}$ of $\mathbf{A}$, $\lambda_{max}(\mathbf{B}) \le \lambda_{max}(\mathbf{A})$. Since the full matrix is uniformly negative definite, every principal submatrix is as well. $\square$
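+The interlacing bound itself is straightforward to verify numerically; a minimal sketch with a hypothetical random symmetric matrix:
+
+```python
+import numpy as np
+
+# lambda_max of a principal submatrix never exceeds lambda_max
+# of the full symmetric (hence Hermitian) matrix.
+rng = np.random.default_rng(0)
+A = rng.standard_normal((5, 5))
+A = (A + A.T) / 2                          # symmetrize
+B = A[np.ix_([0, 2, 3], [0, 2, 3])]        # principal submatrix
+assert np.linalg.eigvalsh(B).max() <= np.linalg.eigvalsh(A).max() + 1e-12
+```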
+
+## References
+
+[1] Alireza Alemi et al. "Learning nonlinear dynamics in efficient, balanced spiking networks using local plasticity rules". In: *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 32. 1. 2018.
+
+[2] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. *Deep learning*. MIT Press, 2016.
+
+[3] Nicholas M Boffi and Jean-Jacques E Slotine. "Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction". In: *Neural Computation* 33.3 (2021).
+
+[4] Michael Buehner and Peter Young. "A tighter bound for the echo state property". In: *IEEE Transactions on Neural Networks* 17.3 (2006), pp. 820–824.
+
+[5] Bo Chang et al. "AntisymmetricRNN: A dynamical system view on recurrent neural networks". In: *arXiv preprint arXiv:1902.09689* (2019).
+
+[6] Ricky TQ Chen et al. "Neural ordinary differential equations". In: *arXiv preprint arXiv:1806.07366* (2018).
+
+[7] Yimian Dai et al. "Attention as Activation". In: *arXiv preprint arXiv:2007.07729v2* (2020).
+
+[8] John Gerhart and Marc Kirschner. "The theory of facilitated variation". In: *Proceedings of the National Academy of Sciences* 104.1 (2007), pp. 8582–8589. DOI: 10.1073/pnas.0701035104.
+
+[9] Eldad Haber and Lars Ruthotto. "Stable architectures for deep neural networks". In: *Inverse Problems* 34.1 (2017), p. 014004.
+
+[10] Demis Hassabis et al. "Neuroscience-inspired artificial intelligence". In: *Neuron* 95.2 (2017), pp. 245–258.
\ No newline at end of file
diff --git a/samples/texts/3826450/page_15.md b/samples/texts/3826450/page_15.md
new file mode 100644
index 0000000000000000000000000000000000000000..64d8f6018b8202304f7a0e630c0281a78a3f4992
--- /dev/null
+++ b/samples/texts/3826450/page_15.md
@@ -0,0 +1,47 @@
+[11] JB Hutchins and SW Barger. "Why neurons die: cell death in the nervous system". In: *The Anatomical Record* 253.3 (1998), pp. 79-90.
+
+[12] Suk-Geun Hwang. "Cauchy's Interlace Theorem for Eigenvalues of Hermitian Matrices". In: *The American Mathematical Monthly* 111.2 (2004), pp. 157-159.
+
+[13] Herbert Jaeger. "The “echo state” approach to analysing and training recurrent neural networks - with an erratum note". In: *GMD Technical Report* 148 (2001).
+
+[14] Leo Kozachkov et al. "Achieving stable dynamics in neural circuits". In: *PLoS computational biology* 16.8 (2020), e1007659.
+
+[15] Dmitry Krotov and John Hopfield. "Large Associative Memory Problem in Neurobiology and Machine Learning". In: *arXiv preprint arXiv:2008.06996* (2020).
+
+[16] Winfried Lohmiller and Jean-Jacques E Slotine. "On contraction analysis for non-linear systems". In: *Automatica* 34.6 (1998), pp. 683-696.
+
+[17] Kiyotoshi Matsuoka. "Stability conditions for nonlinear continuous neural networks with asymmetric connection weights". In: *Neural networks* 5.3 (1992), pp. 495-500.
+
+[18] John Miller and Moritz Hardt. "Stable recurrent models". In: *arXiv preprint arXiv:1805.10369* (2018).
+
+[19] Kenneth D Miller and Francesco Fumarola. "Mathematical equivalence of two common forms of firing rate models of neural networks". In: *Neural computation* 24.1 (2012), pp. 25-31.
+
+[20] Kumpati S. Narendra and Robert Shorten. "Hurwitz Stability of Metzler Matrices". In: *IEEE Transactions on Automatic Control* 55.6 (2010), pp. 1484-1487.
+
+[21] Hung D. Nguyen et al. "Contraction and Robustness of Continuous Time Primal-Dual Dynamics". In: *arXiv preprint arXiv:1803.05975v2* (2018).
+
+[22] Merav Parter, Nadav Kashtan, and Uri Alon. "Facilitated Variation: How Evolution Learns from Past Environments To Generalize to New Environments". In: *PLOS Computational Biology* 4.11 (2008).
+
+[23] Adam Paszke et al. "Pytorch: An imperative style, high-performance deep learning library". In: *arXiv preprint arXiv:1912.01703* (2019).
+
+[24] Hubert Ramsauer et al. "Hopfield networks is all you need". In: *arXiv preprint arXiv:2008.02217* (2020).
+
+[25] Max Revay and Ian Manchester. "Contracting implicit recurrent neural networks: Stable models with improved trainability". In: *Learning for Dynamics and Control*. PMLR. 2020, pp. 393-403.
+
+[26] Max Revay, Ruigang Wang, and Ian R Manchester. "A Convex Parameterization of Robust Recurrent Neural Networks". In: *IEEE Control Systems Letters* 5.4 (2020), pp. 1363-1368.
+
+[27] Max Revay, Ruigang Wang, and Ian R Manchester. "Recurrent Equilibrium Networks: Unconstrained Learning of Stable and Robust Dynamical Models". In: *arXiv preprint arXiv:2104.05942* (2021).
+
+[28] Giovanni Russo, Mario di Bernardo, and Jean-Jacques Slotine. "A Graphical Approach to Prove Contraction of Nonlinear Circuits and Systems". In: *IEEE Transactions on Circuits and Systems* 58.2 (2011), pp. 336-348.
+
+[29] Ueli Rutishauser, Rodney J Douglas, and Jean-Jacques Slotine. "Collective stability of networks of winner-take-all circuits". In: *Neural computation* 23.3 (2011), pp. 735-773.
+
+[30] Ueli Rutishauser, Jean-Jacques Slotine, and Rodney Douglas. "Computation in dynamically bounded asymmetric systems". In: *PLoS Comput Biol* 11.1 (2015), e1004039.
+
+[31] Jean-Jacques Slotine. "Modular Stability Tools for Distributed Computation and Control". In: *Int. J. Adaptive Control and Signal Processing* 17.6 (2003).
+
+[32] Jean-Jacques Slotine and Yang-Yu Liu. "The missing link". In: *Nature Physics* 8.7 (2012), pp. 512-513.
+
+[33] Jean-Jacques Slotine and Winfried Lohmiller. "Modularity, evolution, and the binding problem: a view from stability theory." In: *Neural Networks* 14.2 (2001), pp. 137-145.
+
+[34] Jean-Jacques Slotine and Winfried Lohmiller. "Nonlinear Process Control Using Contraction Theory". In: *AIChE Journal* (March 2000).
\ No newline at end of file
diff --git a/samples/texts/3826450/page_16.md b/samples/texts/3826450/page_16.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d2ba4472835694c7bbe9455b8ca5b4ca54a3eda
--- /dev/null
+++ b/samples/texts/3826450/page_16.md
@@ -0,0 +1,13 @@
+[35] Jean-Jacques E Slotine and Weiping Li. *Applied Nonlinear Control*. Prentice-Hall, 1991.
+
+[36] Nicolas Tabareau and Jean-Jacques Slotine. "Notes on contraction theory". In: *arXiv preprint nlin/0601011* (2006).
+
+[37] Wei Wang and Jean-Jacques Slotine. "On partial contraction analysis for coupled nonlinear oscillators". In: *Biological Cybernetics* 92 (2005), pp. 38–53.
+
+[38] Eric W. Weisstein. *Positive Definite Matrix*. URL: https://mathworld.wolfram.com/PositiveDefiniteMatrix.html (visited on 06/04/2021).
+
+[39] Ezra Winston and J Zico Kolter. "Monotone operator equilibrium networks". In: *arXiv preprint arXiv:2006.08591* (2020).
+
+[40] Guangyu R Yang and Manuel Molano-Mazon. *Next-generation of recurrent neural network models for cognition*. Apr. 2021. DOI: 10.31234/osf.io/w34n2. URL: psyarxiv.com/w34n2.
+
+[41] Guangyu Robert Yang et al. "A dataset and architecture for visual reasoning with a working memory". In: *Proceedings of the European Conference on Computer Vision (ECCV)*. 2018, pp. 714–731.
\ No newline at end of file
diff --git a/samples/texts/3826450/page_17.md b/samples/texts/3826450/page_17.md
new file mode 100644
index 0000000000000000000000000000000000000000..6e14b95128c48109452fce5c8c0e8044e411250b
--- /dev/null
+++ b/samples/texts/3826450/page_17.md
@@ -0,0 +1,47 @@
+$$
+\dot{\mathbf{y}} = \mathbf{g}(\mathbf{y}, t) + \mathbf{G}\mathbf{x}
+$$
+
+If:
+
+$$
+\mathbf{B} = -\mathbf{M}_{1}^{-1}\mathbf{G}^{T}\mathbf{M}_{2}
+$$
+
+the combined system is contracting as well. This may be seen as a special case of the feedback combination derived in [36]. Importantly, this direct parameterization can be easily implemented in a modern deep learning library.
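+A minimal numerical sketch of this cancellation, assuming diagonal metrics and hypothetical dimensions: with $\mathbf{B} = -\mathbf{M}_1^{-1}\mathbf{G}^T\mathbf{M}_2$, the coupling blocks of the generalized Jacobian cancel in the symmetric part, so overall contraction reduces to that of the two subsystems.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n1, n2 = 3, 2
+M1 = np.diag(rng.uniform(0.5, 2.0, n1))    # metric of subsystem 1
+M2 = np.diag(rng.uniform(0.5, 2.0, n2))    # metric of subsystem 2
+G = rng.standard_normal((n2, n1))          # coupling into subsystem 2
+B = -np.linalg.inv(M1) @ G.T @ M2          # prescribed feedback gain
+
+# Off-diagonal blocks of M J are M1 B (upper right) and M2 G (lower
+# left); they are negative transposes, so the symmetric part of the
+# generalized Jacobian is block diagonal.
+assert np.allclose(M1 @ B + (M2 @ G).T, 0.0)
+```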
+
+### 1.1.2 Hierarchical Combination
+
+Consider again two systems, independently contracting in some metrics, which are combined in hierarchy:
+
+$$
+\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, t)
+$$
+
+$$
+\dot{\mathbf{y}} = \mathbf{g}(\mathbf{y}, t) + \mathbf{h}(\mathbf{x}, t)
+$$
+
+where $\mathbf{h}(\mathbf{x}, t)$ is a function with *bounded* Jacobian. Then this combined system is contracting in a diagonal metric, as shown in [16]. By recursion, this extends to hierarchies of arbitrary depth.
+
+It is straightforward to show that discrete-time and continuous-time contracting systems can be combined hierarchically while preserving contraction.
+
+Figure 1: Our stability certificate implies a modularity principle. It may be used to construct complicated ‘networks of networks’ while automatically maintaining stability.
+
+# 2 Results
+
+## 2.1 Contraction Conditions for Continuous-Time RNNs
+
+Consider the continuous-time RNN described by:
+
+$$
+\tau \dot{\mathbf{x}} = -\mathbf{x} + \mathbf{W}\phi(\mathbf{x}) + \mathbf{v}(t) \quad (1)
+$$
+
+where $\phi$ is a static nonlinearity such that $0 \le \phi' \le g$, $\mathbf{v}(t)$ is some input (potentially time-varying) and $\tau > 0$. We do not constrain the sign of $\phi$. Different units may have different activation functions, as long as they each satisfy these conditions. Example nonlinearities $\phi(x)$ that satisfy the constraints are $\tanh(ax)$, $\log(1+e^x)$, and $\max(0, x)$, with $g=a, 1, 1$ respectively. We seek a stability certificate for this system in terms of the recurrent weight matrix $\mathbf{W}$. As discussed in the introduction, we are specifically interested in restricting $\mathbf{W}$ such that the RNN is globally *contracting*. All the proofs for our theorems are found in appendix A.
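+The slope constraint on the example nonlinearities can be spot-checked by finite differences; a minimal sketch (the value $a = 1.7$ is an arbitrary choice):
+
+```python
+import numpy as np
+
+# Check that each example nonlinearity satisfies 0 <= phi' <= g.
+x = np.linspace(-5.0, 5.0, 2001)
+a = 1.7
+examples = [
+    (lambda z: np.tanh(a * z), a),         # tanh(a x), g = a
+    (lambda z: np.logaddexp(0.0, z), 1.0), # softplus log(1 + e^x), g = 1
+    (lambda z: np.maximum(0.0, z), 1.0),   # ReLU, g = 1
+]
+for phi, g in examples:
+    slopes = np.diff(phi(x)) / np.diff(x)  # finite-difference derivative
+    assert slopes.min() >= -1e-9 and slopes.max() <= g + 1e-9
+```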
+
+Note that in neuroscience, the variable **x** in equation (1) is typically thought of as a vector of neural membrane potentials. It was shown in [19] that the RNN (1) is equivalent via an affine transformation to another commonly used RNN model,
+
+$$
+\tau \dot{\mathbf{y}} = -\mathbf{y} + \phi(\mathbf{W}\mathbf{y} + \mathbf{b}(t)) \quad (2)
+$$
\ No newline at end of file
diff --git a/samples/texts/3826450/page_2.md b/samples/texts/3826450/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..4d7adee4e41a686865342076705ce3a851d5f316
--- /dev/null
+++ b/samples/texts/3826450/page_2.md
@@ -0,0 +1,21 @@
+between predefined "modules", but also explore the use of this combination property within a single nonlinear RNN (1), through finding hierarchical structure in **W**. In section 2.6, we use the word "subnetwork" in this latter sense, to investigate subsets of the units specified by **W**.
+
+## 2.6 Relation of Stability Conditions with Neuron Pruning
+
+While we have mainly focused on sufficient conditions for the system (1) to be contracting regardless of (eligible) activation function, identifying *necessary* conditions is also of great potential use. Interestingly, one such necessary condition, presented in Theorem 9, has clear interpretations in neuroscience and machine learning, mirroring some of the implications of the combination properties described above. Furthermore, the necessary condition again stems from the presence of both positive and negative connections between neurons.
+
+**Theorem 9.** In order for a linearly stable RNN to be contracting with general activation function in (1), every subnetwork must also be linearly stable. Put another way, it is necessary for all principal submatrices of the matrix **W** to have only eigenvalues with negative real part.
+
+Note that in the linear case, diagonal stability guarantees that all principal submatrices will also be diagonally stable [20], so that any diagonally stable network meets this necessary condition. Additionally, we show in the proof of Theorem 9 that the same property applies to the nonlinear system (1) – if it is contracting in a diagonal metric, all of its subnetworks are also contracting in a diagonal metric. This is relevant in the case of Theorem 1, where we can guarantee a diagonal metric for the nonlinear system.
+
+Enforcing stability of subnetworks has a number of functional benefits. Robustness to loss of units has obvious safety implications, and furthermore the ability to remove units may also play a role in learning and development. Dropout, a powerful regularization technique for deep neural networks, involves randomly removing a fraction of units for each training round [2]. Many Neural Architecture Search algorithms include removing units as a potential step. Even in biology, a large number of neurons undergo programmed cell death during human development, and this pattern is conserved across many animals [11]. It is likely important not only to be able to combine stable blocks without concern for stability of the overall system, but also to be able to take blocks away without concern.
+
+# 3 Discussion
+
+Applying the theoretical results described here to deep learning applications is a major next step that is currently underway. It is straightforward to enforce many of these constraints during RNN optimization, as described in the main text. An interesting question is whether the different stability conditions will impact performance in different ways on different tasks. For example, in the case of content addressable memory, symmetric weight matrices may perform better than asymmetric ones.
+
+In addition to utilizing more general properties of stability, we also aim to leverage the advantages of contraction analysis, in particular the combination properties described in section 1.1. Because stability guarantees can easily be extended to feedback combination of networks meeting our conditions, we will train small nonlinear RNNs on specific variations of a task with stability constraint, and then train linear feedback connections between these "modules" on the larger task. We expect this to greatly speed up the training process, and potentially also improve performance.
+
+Beyond the experiments in development, there are numerous future directions enabled by this work. Most notably, Theorem 6 requires further investigation. As argued above, we suspect that diagonal stability of $g\mathbf{W} - \mathbf{I}$ is a sufficient condition for global contraction. However, as we showed in Theorem 7, proving this will require a time-varying metric. The counterexample provided is mainly exploited by changing inputs, so research into input-dependent metrics may prove particularly fruitful.
+
+Our conditions could also be used in other applications distinct from those already outlined. For instance, questions about synchronization of networks could be addressed via a virtual systems perspective as in [37]. Related methods could even be applied to models that include weight updates, such as the RNN with
\ No newline at end of file
diff --git a/samples/texts/3826450/page_21.md b/samples/texts/3826450/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..b29c915f22e7f838de6e1ceffe8fac94f66be01c
--- /dev/null
+++ b/samples/texts/3826450/page_21.md
@@ -0,0 +1,47 @@
+## 2.2 Contracting Neural ODEs
+
+In this section we consider an application of the theory developed above to the case of Neural ODEs. Consider a model which takes in a hidden state $\mathbf{h}_t$ and passes it through a series of nonlinearities, as discussed in [6]:
+
+$$ \mathbf{h}_{t+1} = \mathbf{h}_t + f(\mathbf{h}_t, \theta_t, t) \quad (3) $$
+
+In the limit of many layers and smaller steps, the dynamics of the hidden units become:
+
+$$ \dot{\mathbf{h}} = f(\mathbf{h}, \theta_t, t) $$
+
+which is known as a Neural ODE. Now consider a standard neural network architecture which evolves the hidden state $\mathbf{h}_t$ according to:
+
+$$ \mathbf{h}_{t+1} = \mathbf{W}_t \phi(\mathbf{h}_t) + \mathbf{v}_t $$
+
+Rewriting this in the notation of equation (3), we can identify:
+
+$$ f(\mathbf{h}, \theta_t, t) = -\mathbf{h} + \mathbf{W}(t)\phi(\mathbf{h}) + \mathbf{v}(t) $$
+
+This is the same system as (1), except for the fact that $\mathbf{W}$ varies with time. Of course, one can analyze the above discrete-time system directly using discrete-time contraction tools as was done in [25]. Here, we show that the same contraction result also applies to the limit of continuous layers, i.e. the corresponding Neural ODE.
+
+If any of the above presented theorems hold in a constant metric which is independent of any particular $\mathbf{W}_t$, then for sufficiently large $T$ the output layer $\mathbf{x}_T$ will respond the same way independently of $\mathbf{x}_0$. In other words, the Neural ODE will be contracting. As a concrete example, imagine that $\mathbf{W}(t)$ evolves according to:
+
+$$ \mathbf{W}(t) = g \mathbf{P} \mathbf{U}(t) \mathbf{S}(t) \mathbf{V}(t)^T \mathbf{P}^{-1} $$
+
+where $\mathbf{P}$ is a constant, positive diagonal matrix, $\mathbf{V}(t)$ and $\mathbf{U}(t)$ are orthogonal, and $\mathbf{S}(t)$ is a diagonal, non-negative matrix such that $0 \le S_{ii}(t) < 1$; then the associated Neural ODE is contracting by Theorem 5. Also consider the more commonly used feedforward network:
+
+$$ \mathbf{y}_{t+1} = \phi(\mathbf{W}_t \mathbf{y}_t + \mathbf{v}_t) $$
+
+These two networks are related by a static nonlinearity:
+
+$$ \mathbf{y}_t = \phi(\mathbf{x}_t) $$
+
+To see this, note that:
+
+$$ \mathbf{y}_{t+1} = \phi(\mathbf{x}_{t+1}) = \phi(\mathbf{W}_t \phi(\mathbf{x}_t) + \mathbf{v}_t) = \phi(\mathbf{W}_t \mathbf{y}_t + \mathbf{v}_t) $$
+
+Therefore if the $\mathbf{x}$ Neural ODE is contracting, then the $\mathbf{y}$ system will be as well. This is because $\phi'$ is bounded by assumption, implying that:
+
+$$ \delta\mathbf{x}_T \rightarrow \mathbf{0} \implies \delta\mathbf{y}_T = \phi'\delta\mathbf{x}_T \rightarrow \mathbf{0} $$
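+Both claims in this section admit quick numerical checks; a sketch with hypothetical random data, assuming $\tanh$ activations for the layer identity:
+
+```python
+import numpy as np
+
+# 1) One way to realize the W(t) parameterization: clamp the singular
+#    values of a random matrix below 1 and conjugate by P.
+rng = np.random.default_rng(0)
+n, g = 4, 1.0
+P = np.diag(rng.uniform(0.5, 2.0, n))      # constant positive diagonal
+U, s, Vt = np.linalg.svd(rng.standard_normal((n, n)))
+S = np.diag(s / (s.max() + 1e-2))          # diagonal, 0 <= S_ii < 1
+W = g * P @ U @ S @ Vt @ np.linalg.inv(P)
+assert np.linalg.norm(np.linalg.inv(P) @ W @ P, 2) < g
+
+# 2) The static relation y_t = phi(x_t) between the two discrete
+#    networks propagates through every layer.
+phi = np.tanh
+x = rng.standard_normal(n)
+y = phi(x)
+for _ in range(5):
+    Wt = rng.standard_normal((n, n))
+    v = rng.standard_normal(n)
+    x = Wt @ phi(x) + v                    # x-network update
+    y = phi(Wt @ y + v)                    # y-network update
+    assert np.allclose(y, phi(x))
+```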
+
+## 2.3 On Stability Theorems in Machine Learning
+
+Several recent papers in machine learning, e.g. [9, 5], claim that a sufficient condition for stability of the nonlinear system:
+
+$$ \dot{\mathbf{x}} = f(\mathbf{x}, t) $$
+
+is that the associated Jacobian matrix $J(\mathbf{x}) = \frac{\partial f}{\partial \mathbf{x}}$ has eigenvalues whose real parts are strictly negative, i.e.:
\ No newline at end of file
diff --git a/samples/texts/3826450/page_22.md b/samples/texts/3826450/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..309b9faa28c30bc4615ea3ac2551f100215600ca
--- /dev/null
+++ b/samples/texts/3826450/page_22.md
@@ -0,0 +1,35 @@
+$$ \max_i \operatorname{Re}(\lambda_i(\mathbf{J}(\mathbf{x}))) \leq -\alpha $$
+
+with $\alpha > 0$. This claim is generally false. For a counterexample, see section 4.4.2 in [35].
+
+However, in the specific case of the RNN (1), it appears that the eigenvalues of the symmetric part of $\mathbf{W}$ do provide information on global stability in a number of applications. For example, in [17] it was shown that if $\mathbf{W}_s = \frac{1}{2}(\mathbf{W} + \mathbf{W}^T)$ has all its eigenvalues less than one, and $\mathbf{v}$ is constant, then (1) has a unique fixed point and it is globally asymptotically stable. It is easy to see that this condition also implies that the real parts of the eigenvalues of the Jacobian are uniformly negative. Moreover, in [5] it was shown that setting the symmetric part $\mathbf{W}_s$ almost equal to zero (yet slightly negative) led to rotational, yet stable dynamics in practice. This leads us to the following theorem, which shows that if the slopes of the activation functions change sufficiently slowly as a function of time, then the condition in [17] in fact implies global contraction of (1).
+
+**Theorem 6.** Let $\mathbf{D}$ be a positive, diagonal matrix with $D_{ii} = \frac{d\phi_i}{dx_i}$, and let $\mathbf{P}$ be an arbitrary, positive diagonal matrix. If:
+
+$$ (g\mathbf{W} - \mathbf{I})\mathbf{P} + \mathbf{P}(g\mathbf{W}^T - \mathbf{I}) \preceq -c\mathbf{P} $$
+
+and
+
+$$ \dot{\mathbf{D}} - c g^{-1} \mathbf{D} \preceq -\beta \mathbf{D} $$
+
+for $c, \beta > 0$, then (1) is contracting in metric $\mathbf{D}$ with rate $\beta$.
+
+In theory, bounding $\dot{\mathbf{D}}$ is difficult, because it is implicitly a function of the input (as $\dot{D}_{ii} = \phi_i'' \dot{x}_i$). However, in simulations, we have found that diagonal stability of $g\mathbf{W} - \mathbf{I}$ seems to be a sufficient condition for global contraction. To better characterize why this conjectured stability condition has been so difficult to prove, we present Theorem 7, which shows by way of counterexample that diagonal stability of $g\mathbf{W} - \mathbf{I}$ does not imply global contraction in a constant metric for (1).
+
+**Theorem 7.** $g\mathbf{W}_{sym} - \mathbf{I} = g\frac{\mathbf{W}+\mathbf{W}^T}{2} - \mathbf{I} \prec 0$, i.e. contraction of the linear system in the identity metric, is not a sufficient condition for the general nonlinear system (1) to be contracting in a constant metric. High levels of antisymmetry in $\mathbf{W}$ can make it impossible to find such a metric, which we demonstrate via a $2 \times 2$ counterexample of the form
+
+$$ \mathbf{W} = \begin{bmatrix} 0 & -c \\ c & 0 \end{bmatrix} $$
+
+with $c \ge 2$.
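+For this counterexample, the linear condition of the theorem holds trivially for every $c$, which a short check makes concrete:
+
+```python
+import numpy as np
+
+# W is purely antisymmetric, so g*W_sym - I = -I regardless of c, even
+# though (per the theorem) no constant metric works once c >= 2.
+c, g = 2.0, 1.0
+W = np.array([[0.0, -c], [c, 0.0]])
+W_sym = (W + W.T) / 2                      # = 0: antisymmetric part only
+assert np.allclose(W_sym, 0.0)
+lin = g * W_sym - np.eye(2)                # = -I, uniformly negative definite
+assert np.linalg.eigvalsh(lin).max() < 0
+```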
+
+## 2.4 Local Learning with Contracting Combinations
+
+Consider grouping together many subsystems in a contracting combination, as discussed in section 1.1. In this section, we derive a *local* learning rule for the parameters of each recurrent neural network subsystem which leads to *global* decreasing error for the whole system. Before we analyze the specifics of the neural network case, consider in general the closed-loop plant error dynamics analyzed in [34, 31, 3],
+
+$$ \dot{\mathbf{z}} = \mathbf{f}(\mathbf{z}, t) - \mathbf{f}(\mathbf{z}_d, t) + \mathbf{H}(\mathbf{z}, t)\mathbf{a} - \mathbf{H}(\mathbf{z}, t)\hat{\mathbf{a}} $$
+
+where it is assumed **f** is contracting in constant metric **M** (i.e. its generalized Jacobian is everywhere negative definite) and $\hat{\mathbf{a}}$ is the estimate of parameter vector **a**. The parameter estimate adaptation:
+
+$$ \dot{\hat{\mathbf{a}}} = -\mathbf{H}(\mathbf{z}, t)^T \mathbf{M} \tilde{\mathbf{z}} \quad (4) $$
+
+can be shown to lead to decreasing error between $\mathbf{z}$ and $\mathbf{z}_d$ as well as $\hat{\mathbf{a}}$ and **a** [16]. In the case where **f** represents a contracting combination and the adaptation is assumed to be local to each subsystem, note that **H** has block diagonal structure. If we also assume that the metric has a block diagonal structure, which
\ No newline at end of file
diff --git a/samples/texts/3826450/page_23.md b/samples/texts/3826450/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..d479db78a123dd8d6b6503cf6709968671972f3e
--- /dev/null
+++ b/samples/texts/3826450/page_23.md
@@ -0,0 +1,47 @@
+would be the case in e.g. the feedback combination discussed in section 1.1.1, then substituting this block structure into (4) yields:
+
+$$
+\dot{\hat{\mathbf{a}}}_i = -\mathbf{H}_i(\mathbf{z}_i, t)^T \mathbf{M}_i \tilde{\mathbf{z}}_i
+$$
+
+In other words, contracting combinations allow for local parameter adaptations which decrease the global adaptation error. In the specific case of (1), this observation leads to the following theorem:
+
+**Theorem 8.** Consider any contracting combination of (1), contracting in a block-diagonal, constant metric with blocks $\mathbf{M}_i$. If the recurrent weight matrix $\mathbf{W}_i$ for each subsystem is updated according to the learning rule
+
+$$
+\dot{\mathbf{W}}_i = -\eta \mathbf{M}_i \mathbf{e}_i \phi(\mathbf{x}_i)^T
+$$
+
+where $\mathbf{e}_i \equiv \mathbf{x}_i - \mathbf{x}_{i,d}$ denotes the local error for each subsystem, $\mathbf{x}_{i,d}$ is the desired trajectory for subsystem $i$, and $\eta > 0$ is the learning rate, then the overall error of the system decreases.
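+A minimal simulation sketch of Theorem 8 for a single subsystem (hypothetical sizes, gains, and input, not values from the paper): a $\tanh$ RNN with $\|\mathbf{W}_d\| < 1$ is contracting in the identity metric, so the rule applies with $\mathbf{M} = \mathbf{I}$, and the tracking error shrinks:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n, dt, eta, steps = 3, 0.01, 1.0, 4000
+W_d = rng.standard_normal((n, n))
+W_d *= 0.5 / np.linalg.norm(W_d, 2)        # target weights, ||W_d|| = 0.5
+W = W_d + 0.1 * rng.standard_normal((n, n))
+
+x_d = np.zeros(n)                          # desired trajectory state
+x = x_d + np.array([3.0, -2.0, 1.0])       # learner starts with large error
+e0 = np.linalg.norm(x - x_d)
+for k in range(steps):
+    v = np.sin(0.5 * k * dt) * np.ones(n)  # shared time-varying input
+    e = x - x_d
+    dx_d = -x_d + W_d @ np.tanh(x_d) + v
+    dx = -x + W @ np.tanh(x) + v
+    dW = -eta * np.outer(e, np.tanh(x))    # W_dot = -eta M e phi(x)^T, M = I
+    x_d, x, W = x_d + dt * dx_d, x + dt * dx, W + dt * dW
+assert np.linalg.norm(x - x_d) < 0.5 * e0  # tracking error has shrunk
+```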
+
+Note that this result may also be viewed as generalizing [1] to the case of combinations of recurrent neural networks contracting in potentially different metrics, as can be shown by using the Lyapunov-like function
+
+$$
+V = \frac{1}{2} \mathbf{e}^T \mathbf{M} \mathbf{e} + \frac{1}{2\eta} \sum_i \mathrm{Tr}(\tilde{\mathbf{W}}_i \tilde{\mathbf{W}}_i^T)
+$$
+
+where $\mathbf{e} = [\mathbf{e}_1^T \dots \mathbf{e}_P^T]^T$ and $\tilde{\mathbf{W}}_i = \mathbf{W}_i - \mathbf{W}_{i,d}$.
+
+## 2.5 Ensuring Contraction of Network Combinations
+
+We now present a methodology for using our novel stability conditions in conjunction with the global contraction combination properties described in section 1.1, to recursively construct a stable assembly of the RNNs defined in (1).
+
+Consider $p$ nonlinear subnetworks, independently contracting in metrics $\mathbf{M}_i$. Given a global adjacency matrix **A**, where $A_{ij} = 1$ denotes a linear connection matrix from subnetwork $j$ to subnetwork $i$, how can we constrain the weights between subnetworks to ensure overall contraction, or prove that no such constraint exists?
+
+A simple algorithm is to start with $i = 0$ and then loop through $j > i$, checking for $A_{ij} = A_{ji} = 1$. If no such instance is found, increment $i$. When an $i, j$ pair is found such that $A_{ij} = A_{ji} = 1$, establish a negative feedback combination between these two systems (see section 1.1.1), and then merge them into one larger subsystem. This reduces the size of **A** from $p \times p$ to $(p-1) \times (p-1)$, by combining rows $i$ and $j$ into one row with an OR operation (and analogously for columns $i$ and $j$). Repeat this process from the top with the new **A**, until no more subsystems can be merged.
+
+Having finished the feedback combinations, we now must check for hierarchical combinations. If the final $\mathbf{A}$ is a Directed Acyclic Graph (DAG), then the overall system is contracting, because subnetwork indices can be relabelled such that $\mathbf{A}$ is triangular. A DAG can be detected by running the strongly connected components algorithm on $\mathbf{A}$ and checking that each node belongs to its own component. If $\mathbf{A}$ is indeed a DAG, the topological sort algorithm described in [36] can be used to obtain an appropriate relabelling of $\mathbf{A}$. If it is not, there is no way to use the original input $\mathbf{A}$ to construct a feedback and hierarchical combination of contracting subnetworks.
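The DAG test and relabelling step can be sketched with Kahn's algorithm (our choice of method; the paper's reference [36] may use a different topological-sort procedure). A topological order exists exactly when the connection graph is acyclic, and relabelling the subnetworks in that order makes the adjacency matrix triangular:

```python
def topological_order(A):
    """Return a topological ordering of the subnetworks, or None if the
    connection graph has a cycle. A[i][j] == 1 means j feeds into i."""
    p = len(A)
    indegree = [sum(A[i]) for i in range(p)]        # number of inputs to i
    ready = [i for i in range(p) if indegree[i] == 0]
    order = []
    while ready:
        j = ready.pop()
        order.append(j)
        for i in range(p):                          # remove j's outgoing edges
            if A[i][j]:
                indegree[i] -= 1
                if indegree[i] == 0:
                    ready.append(i)
    return order if len(order) == p else None       # None => not a DAG
```

A `None` result corresponds to the case above where no feedback-plus-hierarchy construction exists for the given adjacency matrix.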
+
+Interestingly, while this approach takes an adjacency matrix as input and returns stable weight assignments if possible, one can also derive an adjacency matrix from a system Jacobian to check for contraction, as in [28].
+
+Note that in this section we have used the word “subnetwork” to refer to an individual RNN as defined in (1); a set of these RNNs is then combined using linear connections designed to match the topology specified by $\mathbf{A}$ and to meet the combination property constraints detailed in section 1.1. We are restricted to linear connections here because contraction of the negative feedback system can no longer be guaranteed when nonlinear connections are allowed. However, the hierarchical combination property does permit nonlinear connections between subnetworks, meaning we can not only expand our repertoire of possible connections
\ No newline at end of file
diff --git a/samples/texts/3826450/page_3.md b/samples/texts/3826450/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d275d19094e024f08eff7c032d772c0afd2a980
--- /dev/null
+++ b/samples/texts/3826450/page_3.md
@@ -0,0 +1,61 @@
+Hebbian and anti-Hebbian learning rules discussed in [14]. Input-dependent stability criteria for updating networks are of particular interest, to provide theoretical groundwork on topics such as curriculum learning and transfer learning, as well as to better understand development in the biological brain.
+
+Additionally, a number of theoretical questions could be pursued by directly using the current conditions.
+Exploring more combination properties would enable study of additional network architectures of interest.
+This might include feedback combination of continuous and discrete systems, or application of singular
+perturbation isolation to characterize combinations of systems that contract on very different timescales (as
+in [21]). Further, while our work focuses primarily on stability of combinations, preservation of function is
+also an important question to address in a model of evolution that acts on edges rather than nodes [32].
+Therefore investigating whether our conditions can produce any guarantees on functionality of combinations
+is another future aim.
+
+More broadly, combination properties resulting from this work could be employed in research on Neural Architecture Search algorithms, and the converse property of stable subnetworks could be investigated in the context of mass pruning techniques. Breakthroughs in deep learning have often coincided with major developments in architecture design, so these avenues of inquiry are critical to future research.
+
+In summary, our key contributions include:
+
+* Novel sufficient conditions for global exponential stability of the continuous-time RNN with nonlinear activation.
+
+* Identification of flaws in prior proofs of supposed stability conditions for RNNs.
+
+* Application of contraction analysis to study combinations (assemblies) of RNNs.
+
+* Connection of theoretical results with concepts in neurobiology and machine learning.
+
+Ultimately, our work represents a step forward in understanding the stability properties of recurrent neural
+networks. Due to the key role nonlinear activation functions have played in the success of deep learning, it is
+important to better understand how they impact network behavior. Stability is a fundamental property of
+dynamical systems, and is inextricably linked to concepts such as generalization, control, predictability, and
+robustness. Therefore, as systems trained with deep learning become more modular, complex, and integrated
+into our lives, understanding the conditions under which these systems are stable will become increasingly
+important.
+
+**Acknowledgements** This work benefited from stimulating discussions with Michael Happ and Quang-Cuong Pham.
+
+## A Proofs for Main Results
+
+### A.1 Proof of Theorem 1
+
+**Theorem.** Let $|W|$ denote the matrix formed by taking the element-wise absolute value of $W$. If there exists a positive, diagonal $P$ such that:
+
+$$
+\mathbf{P}(g|\mathbf{W}| - \mathbf{I}) + (g|\mathbf{W}| - \mathbf{I})^T \mathbf{P} < 0
+$$
+
+then (1) is contracting in metric $\mathbf{P}$. Moreover, if $W_{ii} \le 0$, then $|W|_{ii}$ may be set to zero to reduce conservatism.
+
+*Proof.* Consider the differential, quadratic Lyapunov function:
+
+$$
+V = \delta \mathbf{x}^T \mathbf{P} \delta \mathbf{x}
+$$
+
+where $\mathbf{P} > 0$ is diagonal. The time derivative of $V$ is:
+
+$$
+\dot{V} = 2\delta\mathbf{x}^T\mathbf{P}\,\delta\dot{\mathbf{x}} = 2\delta\mathbf{x}^T\mathbf{P}\mathbf{J}\delta\mathbf{x} = -2\delta\mathbf{x}^T\mathbf{P}\delta\mathbf{x} + 2\delta\mathbf{x}^T\mathbf{P}\mathbf{W}\mathbf{D}\delta\mathbf{x}
+$$
\ No newline at end of file
diff --git a/samples/texts/3826450/page_4.md b/samples/texts/3826450/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..e4012bcf02aed6474e292b940803d478636d1362
--- /dev/null
+++ b/samples/texts/3826450/page_4.md
@@ -0,0 +1,51 @@
+where $\mathbf{D}$ is a diagonal matrix such that $D_{ii} = \frac{d\phi_i}{dx_i} \in [0, g]$. We can upper bound the quadratic form on the right as follows:
+
+$$
+\begin{align*}
+\delta \mathbf{x}^T \mathbf{P} \mathbf{W} \mathbf{D} \delta \mathbf{x} &= \sum_{ij} P_i W_{ij} D_j \delta x_i \delta x_j \\
+&\le \sum_i P_i W_{ii} D_i |\delta x_i|^2 + \sum_{i \neq j} P_i |W_{ij}| D_j |\delta x_i| |\delta x_j| \le g |\delta \mathbf{x}|^T \mathbf{P} |\mathbf{W}| |\delta \mathbf{x}|
+\end{align*}
+$$
+
+If $W_{ii} \le 0$, the term $P_i W_{ii} D_i |\delta x_i|^2$ contributes non-positively to the overall sum, and can therefore be set to zero without disrupting the inequality. Now using the fact that $\mathbf{P}$ is positive and diagonal, and therefore $\delta \mathbf{x}^T \mathbf{P} \delta \mathbf{x} = |\delta \mathbf{x}|^T \mathbf{P} |\delta \mathbf{x}|$, we can upper bound $\dot{V}$ as:
+
+$$
+\dot{V} \leq |\delta\mathbf{x}|^T (-2\mathbf{P} + g\mathbf{P}|\mathbf{W}| + g|\mathbf{W}|^T\mathbf{P})|\delta\mathbf{x}| = |\delta\mathbf{x}|^T \left[\mathbf{P}(g|\mathbf{W}| - \mathbf{I}) + (g|\mathbf{W}| - \mathbf{I})^T\mathbf{P}\right]|\delta\mathbf{x}|
+$$
+
+where $|W|_{ij} = |W_{ij}|$, and $|W|_{ii} = 0$ if $W_{ii} \le 0$ and $|W|_{ii} = |W_{ii}|$ if $W_{ii} > 0$. This completes the proof. Note that $g|\mathbf{W}| - \mathbf{I}$ is Metzler, and therefore will be Hurwitz stable if and only if such a $\mathbf{P}$ exists [20].
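As a sanity check, the theorem's condition can be tested numerically for a given weight matrix (a sketch; the example $\mathbf{W}$, the slope bound $g = 1$, and the candidate $\mathbf{P}$ are our own choices, not values from the paper):

```python
import numpy as np

g = 1.0                                  # assumed bound on the activation slope
W = np.array([[-0.5, 0.4],
              [ 0.2, 0.3]])              # example weights (ours)
absW = np.abs(W)
absW[0, 0] = 0.0                         # W_00 <= 0, so |W|_00 may be zeroed

P = np.diag([1.0, 1.0])                  # candidate diagonal metric
A = g * absW - np.eye(2)
lmi = P @ A + A.T @ P
eigs = np.linalg.eigvalsh(lmi)
print(eigs.max() < 0)                    # True => condition holds for this P
```

In practice one would search for $\mathbf{P}$ (e.g. with a semidefinite-programming solver) rather than guess it, but the check itself is just this eigenvalue test.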
+
+It is also worth noting that highly negative diagonal values in $\mathbf{W}$ will prevent the same metric $\mathbf{P}$ from being used for the nonlinear system. Therefore the method used in this proof cannot feasibly be adapted to further relax the treatment of the diagonal part of $\mathbf{W}$. The intuitive reason behind this is that in the symmetric part of the Jacobian, $\frac{\mathbf{P}\mathbf{W}\mathbf{D} + \mathbf{D}\mathbf{W}^T\mathbf{P}}{2} - \mathbf{P}$, the diagonal self-weights will also be scaled down by small $\mathbf{D}$, while the leak portion $-\mathbf{P}$ remains untouched by $\mathbf{D}$.
+
+Now we actually demonstrate a counterexample, presenting a 2 × 2 symmetric Metzler matrix $\mathbf{W}$ that is contracting in the identity in the linear system, but cannot be contracting in the identity in the nonlinear system (1):
+
+$$
+\mathbf{W} = \begin{bmatrix} -9 & 2.5 \\ 2.5 & 0 \end{bmatrix}
+$$
+
+To see that it is not possible for the more general nonlinear system with these weights to be contracting in
+the identity, take $\mathbf{D} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$. Now
+
+$$
+(\mathbf{W}\mathbf{D})_{\text{sym}} - \mathbf{I} = \begin{bmatrix} -1 & 1.25 \\ 1.25 & -1 \end{bmatrix}
+$$
+
+which has a positive eigenvalue of $\frac{1}{4}$. $\square$
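The counterexample is easy to verify numerically (a NumPy sketch):

```python
import numpy as np

W = np.array([[-9.0, 2.5],
              [ 2.5, 0.0]])

# Linear system: symmetric part of W - I is negative definite
lin = W - np.eye(2)
print(np.linalg.eigvalsh((lin + lin.T) / 2).max() < 0)   # True

# Nonlinear system with D = diag(0, 1): symmetric part of WD - I
D = np.diag([0.0, 1.0])
J = W @ D - np.eye(2)
eigs = np.linalg.eigvalsh((J + J.T) / 2)
print(np.isclose(eigs.max(), 0.25))                      # True: eigenvalue 1/4
```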
+
+### A.2 Proof of Theorem 2
+
+**Theorem.** If $\mathbf{W} = \mathbf{W}^T$ and $g\mathbf{W} \prec \mathbf{I}$, then (1) is contracting.
+
+*Proof.* We begin by writing $\mathbf{W} = \mathbf{R} - \mathbf{P}$ for some unknown $\mathbf{R} = \mathbf{R}^T$ and $\mathbf{P} = \mathbf{P}^T > 0$. The approach of this proof is to show by construction that the condition $g\mathbf{W} \prec \mathbf{I}$ implies the existence of an $\mathbf{R}$ and $\mathbf{P}$ such that the system is contracting in metric $\mathbf{P}$. We consider the $y$ version of the RNN, which as discussed above is equivalent to the $x$ version via an affine transformation.
+
+Consider the contraction condition:
+
+$$
+-2\mathbf{M} + \mathbf{M}\mathrm{D}\mathbf{W} + \mathbf{W}^T\mathrm{D}\mathbf{M} \preceq -\beta\mathbf{M}
+$$
+
+with $\beta > 0$. Substituting in the definitions of $\mathbf{W}$ and $\mathbf{M}$, this condition becomes:
+
+$$
+-2\mathbf{P} + \mathbf{P}\mathrm{D}(\mathbf{R}-\mathbf{P}) + (\mathbf{R}-\mathbf{P})\mathrm{D}\mathbf{P} \preceq -\beta\mathbf{P}
+$$
\ No newline at end of file
diff --git a/samples/texts/3826450/page_5.md b/samples/texts/3826450/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..c9dcd6744c9591a13d1dd06b0b0872feba3effcf
--- /dev/null
+++ b/samples/texts/3826450/page_5.md
@@ -0,0 +1,79 @@
+Simplifying the terms and collecting them all on one side, the above may be written as:
+
+$$
+(\beta - 2)\mathbf{P} + \mathbf{RDP} + \mathbf{PDR} - 2\mathbf{PDP} \preceq 0
+$$
+
+Via the Schur complement, the above condition will be satisfied if:
+
+$$
+(2 - \beta)\mathbf{P} - \mathbf{RDP}(2\mathbf{PDP})^{-1}\mathbf{PDR} = (2 - \beta)\mathbf{P} - \frac{1}{2}\mathbf{RDR} \succeq (2 - \beta)\mathbf{P} - \frac{g}{2}\mathbf{RR} \succeq 0
+$$
+
+We continue by setting $\mathbf{P} = \gamma^2 \mathbf{RR}$ with $\gamma^2 = \frac{g}{2(2-\beta)}$, so that the above inequality is satisfied. At this point,
+we have shown that if $\mathbf{W}$ can be written as:
+
+$$
+\mathbf{W} = \mathbf{R} - \gamma^2 \mathbf{RR}
+$$
+
+then (1) is contracting in metric $\mathbf{M} = \gamma^2 \mathbf{RR}$. What remains to be shown is that if the condition:
+
+$$
+g\mathbf{W} - \mathbf{I} \prec 0
+$$
+
+is satisfied, then this implies the existence of an $\mathbf{R}$ such that the above is true. To show that this is indeed the case, assume that:
+
+$$
+\frac{1}{4\gamma^2} \mathbf{I} - \mathbf{W} \succeq 0
+$$
+
+Substituting in the definition of $\gamma$, this is just the statement that:
+
+$$
+\frac{2(2 - \beta)}{4g} \mathbf{I} - \mathbf{W} \succeq 0
+$$
+
+Setting $\beta = 2\lambda$ with $0 < \lambda < 1$, this yields:
+
+$$
+(1 - \lambda) \mathbf{I} \succeq g \mathbf{W}
+$$
+
+Since $\mathbf{W}$ is symmetric, we have the orthogonal eigendecomposition:
+
+$$
+\frac{1}{4\gamma^2}\mathbf{I}-\mathbf{W}=\mathbf{V}\left(\frac{1}{4\gamma^2}\mathbf{I}-\boldsymbol{\Lambda}\right)\mathbf{V}^T
+$$
+
+where $\mathbf{V}^\mathrm{T}\mathbf{V} = \mathbf{I}$ and $\boldsymbol{\Lambda}$ is a diagonal matrix containing the eigenvalues of $\mathbf{W}$. Denote the symmetric square-root of this expression as $\mathbf{S}$:
+
+$$
+\mathbf{S} = \mathbf{V} \sqrt{\left(\frac{1}{4\gamma^2}\mathbf{I} - \boldsymbol{\Lambda}\right)} \mathbf{V}^T = \mathbf{S}^T
+$$
+
+This implies that:
+
+$$
+\frac{1}{4\gamma^2}\mathbf{I}-\mathbf{W}=\mathbf{S}^T\mathbf{S}
+$$
+
+We now define **R** in terms of **S** as follows:
+
+$$
+\mathbf{R} = \frac{1}{\gamma} \mathbf{S} + \frac{1}{2\gamma^2} \mathbf{I}
+$$
+
+This means that:
+
+$$
+\frac{1}{4\gamma^2}\mathbf{I}-\mathbf{W}=(\gamma\mathbf{R}-\frac{1}{2\gamma}\mathbf{I})(\gamma\mathbf{R}-\frac{1}{2\gamma}\mathbf{I})
+$$
+
+Expanding out the right side, we get:
\ No newline at end of file
diff --git a/samples/texts/3826450/page_6.md b/samples/texts/3826450/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..9de7a373e99c8a93802702ee45903cd438c4b483
--- /dev/null
+++ b/samples/texts/3826450/page_6.md
@@ -0,0 +1,51 @@
+$$ \frac{1}{4\gamma^2}\mathbf{I} - \mathbf{W} = \gamma^2\mathbf{R}\mathbf{R} - \mathbf{R} + \frac{1}{4\gamma^2}\mathbf{I} $$
+
+Subtracting $\frac{1}{4\gamma^2}\mathbf{I}$ from both sides yields:
+
+$$ \mathbf{W} = \mathbf{R} - \gamma^2 \mathbf{R}\mathbf{R} $$
+
+as desired. $\square$
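The construction can be walked through numerically (a sketch; the example $\mathbf{W}$ and the choice $\beta = 1$, i.e. $\gamma^2 = 1/2$, are ours):

```python
import numpy as np

g, beta = 1.0, 1.0
gamma2 = g / (2 * (2 - beta))          # gamma^2 = g / (2(2 - beta))
gamma = np.sqrt(gamma2)

W = np.array([[0.2, 0.3],
              [0.3, -0.4]])            # symmetric, and g*W < I holds here

# symmetric square root S of (1/(4 gamma^2)) I - W via eigendecomposition
vals, V = np.linalg.eigh(np.eye(2) / (4 * gamma2) - W)
S = V @ np.diag(np.sqrt(vals)) @ V.T

R = S / gamma + np.eye(2) / (2 * gamma2)
print(np.allclose(W, R - gamma2 * R @ R))   # True: W = R - gamma^2 R R
```

The final `allclose` check mirrors the algebraic identity derived above, and the metric $\mathbf{M} = \gamma^2 \mathbf{RR}$ can then be formed directly from `R`.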
+
+### A.3 Proof of Theorem 3
+
+**Theorem.** If there exist positive diagonal matrices $P_1$ and $P_2$, as well as $\mathbf{Q} = \mathbf{Q}^T > 0$ such that
+
+$$ \mathbf{W} = -\mathbf{P}_1 \mathbf{Q} \mathbf{P}_2 $$
+
+then (1) is contracting in metric $\mathbf{M} = (\mathbf{P}_1 \mathbf{Q} \mathbf{P}_1)^{-1}$.
+
+*Proof.* Consider again a differential Lyapunov function:
+
+$$ V = \delta\mathbf{x}^T \mathbf{M} \delta\mathbf{x} $$
+
+The time derivative is equal to:
+
+$$ \dot{V} = -2V + 2\delta\mathbf{x}^T \mathbf{M} \mathbf{W} \mathbf{D} \delta\mathbf{x} $$
+
+Substituting in the definitions of $\mathbf{W}$ and $\mathbf{M}$, we get:
+
+$$ \dot{V} = -2V - 2\delta\mathbf{x}^T \mathbf{P}_1^{-1} \mathbf{P}_2 \mathbf{D} \delta\mathbf{x} \leq -2V $$
+
+Therefore $V$ converges exponentially to zero. $\square$
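A numerical spot-check of this argument (a sketch; $\mathbf{P}_1$, $\mathbf{P}_2$, $\mathbf{Q}$ and the slope matrix $\mathbf{D}$ are example choices of ours): with $\mathbf{W} = -\mathbf{P}_1\mathbf{Q}\mathbf{P}_2$ and $\mathbf{M} = (\mathbf{P}_1\mathbf{Q}\mathbf{P}_1)^{-1}$, the term $\mathbf{M}\mathbf{W}\mathbf{D}$ reduces to $-\mathbf{P}_1^{-1}\mathbf{P}_2\mathbf{D}$, whose symmetric part is nonpositive for any diagonal $\mathbf{D} \ge 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
P1 = np.diag([1.0, 2.0, 0.5])
P2 = np.diag([0.3, 1.5, 2.0])
B = rng.standard_normal((3, 3))
Q = B @ B.T + 3 * np.eye(3)            # symmetric positive definite
W = -P1 @ Q @ P2
M = np.linalg.inv(P1 @ Q @ P1)

D = np.diag(rng.uniform(0.0, 1.0, 3))  # arbitrary nonnegative slope matrix
S = M @ W @ D                          # equals -P1^{-1} P2 D (diagonal, <= 0)
print(np.linalg.eigvalsh((S + S.T) / 2).max() <= 1e-12)   # True
```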
+
+### A.4 Proof of Theorem 4
+
+**Theorem.** If $g\mathbf{W} - \mathbf{I}$ is triangular and Hurwitz, then (1) is contracting in a diagonal metric.
+
+*Proof.* Without loss of generality, assume that $\mathbf{W}$ is lower triangular. This implies that $W_{ij} = 0$ if $i < j$. Now consider the generalized Jacobian:
+
+$$ \mathbf{F} = -\mathbf{I} + \Gamma \mathbf{W} \mathbf{D} \Gamma^{-1} $$
+
+with $\Gamma$ diagonal and $\Gamma_{ii} = \epsilon^i$ where $\epsilon > 0$. Because $\Gamma$ and $\mathbf{D}$ are both diagonal and therefore commute, the generalized Jacobian is equal to:
+
+$$ \mathbf{F} = -\mathbf{I} + \Gamma \mathbf{W} \Gamma^{-1} \mathbf{D} $$
+
+Now note that:
+
+$$ (\Gamma \mathbf{W} \Gamma^{-1})_{ij} = \epsilon^i W_{ij} \epsilon^{-j} = W_{ij} \epsilon^{i-j} $$
+
+For $i < j$, we have $W_{ij} = 0$ by assumption, so the only nonzero off-diagonal entries occur where $i > j$ and scale as $\epsilon^{i-j}$ with $i - j > 0$. This means that by making $\epsilon$ arbitrarily small, we can make $\Gamma \mathbf{W} \Gamma^{-1}$ approach a diagonal matrix with $W_{ii}$ along the diagonal. Therefore, if:
+
+$$ \max_i g W_{ii} - 1 < 0 $$
+
+the nonlinear system is contracting. Since $\mathbf{W}$ is triangular, $W_{ii}$ are the eigenvalues of $\mathbf{W}$, meaning that this condition is equivalent to $g\mathbf{W} - \mathbf{I}$ being Hurwitz.
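The $\epsilon$-scaling can be illustrated numerically (a sketch with a toy lower-triangular $\mathbf{W}$ of our choosing):

```python
import numpy as np

W = np.array([[0.5, 0.0],
              [3.0, -0.2]])            # lower triangular example weights
for eps in (1.0, 0.1, 0.001):
    Gamma = np.diag([eps ** (i + 1) for i in range(2)])
    scaled = Gamma @ W @ np.linalg.inv(Gamma)
    print(eps, scaled[1, 0])           # off-diagonal entry is W_10 * eps
```

As `eps` shrinks, the off-diagonal entry vanishes while the diagonal entries of $\Gamma \mathbf{W} \Gamma^{-1}$ stay fixed at $W_{ii}$, matching the argument above.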
\ No newline at end of file
diff --git a/samples/texts/4730718/page_1.md b/samples/texts/4730718/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..cf9bfc7bf2905b8a4162c29852079bef16d7bf73
--- /dev/null
+++ b/samples/texts/4730718/page_1.md
@@ -0,0 +1,29 @@
+On the sub-shock formation in extended thermodynamics
+
+Shigeru Taniguchi¹ and Tommaso Ruggeri²*
+
+¹Department of Creative Engineering,
+National Institute of Technology, Kitakyushu College, Japan
+
+²Department of Mathematics & Alma Mater Research Center on Applied Mathematics,
+University of Bologna, Bologna, Italy
+
+(Dated: July 4, 2021)
+
+In hyperbolic dissipative systems, the solution of the shock structure is not always continuous and a discontinuous part (sub-shock) appears when the velocity of the shock wave is greater than a critical value. In principle, the sub-shock may occur when the shock velocity $s$ reaches one of the characteristic eigenvalues of the hyperbolic system. Nevertheless, Rational Extended Thermodynamics (ET) for a rarefied monatomic gas predicts the sub-shock formation only when $s$ exceeds the maximum characteristic velocity of the system evaluated in the unperturbed state, $\lambda_0^{\text{max}}$. This fact agrees with a general theorem asserting that a continuous shock structure cannot exist for $s > \lambda_0^{\text{max}}$. In the present paper, first, the shock structure is numerically analyzed on the basis of ET for a rarefied polyatomic gas with 14 independent fields. It is shown that, also in this case, the shock structure remains continuous when $s$ meets any characteristic velocity other than the maximum one, and therefore the sub-shock appears only when $s > \lambda_0^{\text{max}}$. This example reinforces the conjecture that the differential systems of ET theories have a special structure such that the sub-shock appears only for $s$ greater than the unperturbed maximum characteristic velocity. However, in the second part of the paper, we construct a counterexample to this conjecture by using a simple 2 × 2 hyperbolic dissipative system which satisfies all requirements of ET. In contrast to previous results, we show clear sub-shock formation with a shock velocity slower than the maximum unperturbed characteristic velocity.
+
+PACS numbers: 47.40.-x, 05.70.Ln, 47.45.-n
+
+# I. INTRODUCTION
+
+Hyperbolic dissipative systems, which are sometimes called hyperbolic systems with relaxation in the mathematical community, describe a large class of physical systems and appear in many fields, in particular in non-equilibrium thermodynamics within the framework of so-called Rational Extended Thermodynamics (hereafter, for simplicity, referred to as ET, instead of RET) [1, 2]. In (parabolic or hyperbolic) dissipative systems, the shock wave is represented by a traveling-wave solution called the *shock structure* because it predicts a thickness of the shock wave. In contrast to the parabolic system with the Navier-Stokes and Fourier (NSF) constitutive equations obtained in the framework of Thermodynamics of Irreversible Processes (TIP), the hyperbolic dissipative system predicts, in general, the formation of a *sub-shock*. In other words, the shock structure is not always continuous and a discontinuous part (sub-shock) appears when the velocity of the shock wave $s$ is greater than a critical value.
+
+For the shock structure in rarefied monatomic gases, the following features have been reported in the literature: Grad proposed the moment method of closure of the field equations [3] and showed that the discontinuity (sub-shock) may appear in the so-called Grad-13 moment system when the Mach number is greater than 1.65, which corresponds to the value of $s$ reaching the maximum characteristic velocity evaluated in the equilibrium unperturbed state [4]. Ruggeri showed that, for any hyperbolic system of balance laws, the shock structure becomes in principle singular when the shock velocity $s$ meets a characteristic velocity and therefore the sub-shock seems to appear when $s$ meets any of the supersonic characteristic velocities of the hyperbolic system [5].
+
+In order to check the theoretical prediction of the sub-shock formation, Weiss performed numerical calculations of the shock structure in a rarefied monatomic gas on the basis of ET with 13, 14 and 21 independent variables, using the Maxwellian-molecule assumption for the production terms. The numerical results showed that, except for the maximum characteristic velocity, the singular points become regular and a continuous solution is obtained until $s$ reaches the maximum characteristic velocity. Weiss concluded, as a conjecture, that for any number of moments the sub-shock appears only beyond the maximum characteristic velocity, at least numerically [6]. This conjecture was reinforced by a theorem of Boillat and Ruggeri in which it was proven that, for a hyperbolic system of balance laws satisfying the convexity of the entropy, no continuous shock-structure solution exists with a shock velocity $s$ larger than the maximum characteristic velocity evaluated in the unperturbed state $\lambda_0^{\text{max}}$ [7].
+
+However, there is no mathematical proof of the absence of the sub-shock when the shock velocity is slower than the maximum characteristic velocity. There still remain the following questions: “Is the above conjecture valid for all systems satisfying the requirements of ET theory?” and “Are there any possibilities to have the
+
+* taniguchi.shigeru@kct.ac.jp, tommaso.ruggeri@unibo.it
\ No newline at end of file
diff --git a/samples/texts/4730718/page_11.md b/samples/texts/4730718/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..e01754224aba34fdd1966d4f676d823e03cfe33f
--- /dev/null
+++ b/samples/texts/4730718/page_11.md
@@ -0,0 +1,74 @@
+where $h, h^1$ and $\Sigma$ are, respectively, the entropy density, the entropy flux and the entropy production density given by
+
+$$
+\begin{aligned}
+h &= -\frac{1}{2}(u^2 + v^2), \\
+h^1 &= -u \frac{\partial K}{\partial u} - v \frac{\partial K}{\partial v} + K, \\
+\Sigma &= \frac{1}{\tau}(u - v)^2.
+\end{aligned}
+\quad (23) $$
+
+Moreover, the concavity of the entropy density $h$ with respect to the field $(u, v)^T$ is automatically satisfied (see (23)$_1$).
+
+It is well known that, by introducing the formal substitution
+
+$$ \partial_t \rightarrow -\lambda \delta \qquad \partial_x \rightarrow \delta $$
+
+and by setting the production terms in (20) (or (21)) to zero, we obtain a linear system of two equations where $\lambda$ represents the characteristic velocity and $(\delta u, \delta v)^T$ is proportional to the characteristic eigenvector of the system associated with $\lambda$:
+
+$$
+\begin{aligned}
+& \left(-\lambda + \frac{\partial^2 K}{\partial u^2}\right) \delta u + \frac{\partial^2 K}{\partial u \partial v} \delta v = 0, \\
+& \frac{\partial^2 K}{\partial u \partial v} \delta u + \left(-\lambda + \frac{\partial^2 K}{\partial v^2}\right) \delta v = 0.
+\end{aligned}
+\quad (24) $$
+
+Therefore the characteristic velocities $\lambda^{(1)}$ and $\lambda^{(2)}$ are obtained as the solutions of the characteristic polynomial $P(\lambda) = 0$, where
+
+$$ P(\lambda) = \lambda^2 - \left\{ \frac{\partial^2 K}{\partial u^2} + \frac{\partial^2 K}{\partial v^2} \right\} \lambda + \frac{\partial^2 K}{\partial u^2} \frac{\partial^2 K}{\partial v^2} - \left( \frac{\partial^2 K}{\partial u \partial v} \right)^2 . $$
+
+In particular, the equilibrium characteristic velocities $\lambda_E^{(1)}$ and $\lambda_E^{(2)}$ are the roots of $P_E(\lambda_E) = 0$, where
+
+$$ P_E(\lambda_E) = \lambda_E^2 - \left\{ \frac{\partial^2 K}{\partial u^2} + \frac{\partial^2 K}{\partial v^2} \right\} \Big|_E \lambda_E + \left\{ \frac{\partial^2 K}{\partial u^2} \frac{\partial^2 K}{\partial v^2} - \left( \frac{\partial^2 K}{\partial u \partial v} \right)^2 \right\} \Big|_E . \quad (25) $$
+
+Here the quantities with subscript $E$ represent the quantities evaluated in the equilibrium state in which $v = u$.
+
+According to the definition given by Boillat and Ruggeri [22], in the present case, the equilibrium subsystem associated with the system (21) is obtained by putting $v = u$ into equation (21)$_1$:
+
+$$ \frac{\partial u}{\partial t} + \frac{1}{2} \frac{\partial}{\partial x} \left( \frac{d\bar{K}}{du} \right) = 0, \quad (26) $$
+
+where $\bar{K} = \bar{K}(u)$ is defined by $\bar{K}(u) = K(u, u)$. The characteristic velocity $\mu$ of the equilibrium subsystem (26) is given by
+
+$$ \mu = \frac{1}{2} \frac{d^2 \bar{K}}{du^2}. \quad (27) $$
+
+FIG. 4. (Case B) Dependence of the characteristic velocities in the perturbed state $\lambda_1$ on the shock speed $s$ for $u_0 = 0.85$.
+
+Taking into account the following identities:
+
+$$
+\begin{aligned}
+\frac{d\bar{K}}{du} &= \left. \left( \frac{\partial K}{\partial u} + \frac{\partial K}{\partial v} \right) \right|_E, \\
+\frac{d^2\bar{K}}{du^2} &= \left. \left( \frac{\partial^2 K}{\partial u^2} + 2\frac{\partial^2 K}{\partial u\partial v} + \frac{\partial^2 K}{\partial v^2} \right) \right|_E,
+\end{aligned}
+$$
+
+we have
+
+$$ P_E(\mu) = -\frac{1}{4} \left\{ \left. \left( \frac{\partial^2 K}{\partial u^2} - \frac{\partial^2 K}{\partial v^2} \right) \right|_E \right\}^2 \le 0. \quad (28) $$
+
+Therefore, we have the sub-characteristic conditions [22]:
+
+$$ \lambda_E^{(1)} \leq \mu \leq \lambda_E^{(2)}. \quad (29) $$
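Identity (28), from which the sub-characteristic conditions follow, can be checked symbolically (a SymPy sketch, with $a$, $b$, $c$ standing for $\partial^2 K/\partial u^2$, $\partial^2 K/\partial u \partial v$, $\partial^2 K/\partial v^2$ evaluated in equilibrium, and $\mu$ taken from (27) via the identities above):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
mu = sp.Rational(1, 2) * (a + 2 * b + c)       # mu = (1/2) d^2 Kbar / du^2
P_E = mu**2 - (a + c) * mu + (a * c - b**2)    # P_E of (25) evaluated at mu
# P_E(mu) should equal -(a - c)^2 / 4, i.e. the sum below simplifies to 0
print(sp.simplify(P_E + sp.Rational(1, 4) * (a - c)**2))   # 0
```

Since $P_E(\mu) \le 0$ and $P_E$ is an upward parabola, $\mu$ must lie between the two roots $\lambda_E^{(1)}$ and $\lambda_E^{(2)}$, which is exactly (29).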
+
+The system (21) also belongs to the general hyperbolic system of balance laws in one-space dimension (1) with
+
+$$
+\mathbf{U} = (u+v,\; u)^T, \qquad \mathbf{F} = \left( \frac{\partial K}{\partial u} + \frac{\partial K}{\partial v},\; \frac{\partial K}{\partial u} \right)^T, \qquad \mathbf{f} = \left( 0,\; -\frac{1}{\tau}(u-v) \right)^T. \quad (30)
+$$
+
+This kind of dissipative hyperbolic system has recently been studied with particular attention to the existence of global smooth solutions. In fact, under the Shizuta-Kawashima coupling condition (K-condition) [35, 36]
+
+$$ \nabla f \cdot r^{(i)}|_{E} \neq 0, \quad \forall i = 1, \dots, N, \quad (31) $$
\ No newline at end of file
diff --git a/samples/texts/4730718/page_2.md b/samples/texts/4730718/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf01016abdf3bda336b7bbcc0a7c9aa61021fa46
--- /dev/null
+++ b/samples/texts/4730718/page_2.md
@@ -0,0 +1,55 @@
+FIG. 6. (Case A) Shock structure (solid curves) for $u_1 = 1.2$ (top) and $u_1 = 1.3$ (bottom). The possible state just after the sub-shock (dotted curves) predicted by the RH conditions is also shown. $u_0 = 1.15$ and $\tau = 1$.
+
+to the shock velocity at the critical characteristic velocity $s_*$, which is smaller than the maximum characteristic velocity in the unperturbed state; $s_* < \lambda_0^{(u)}$. There are two possibilities of the sub-shock formation. The first possibility is the sub-shock when $s_* < s$. The second is the sub-shock when $s > \lambda_0^{(u)}$. The necessary condition (14) holds for $s_* < s < \lambda_0^{(u)}$. Therefore this case is a candidate counterexample, with a sub-shock whose shock velocity is smaller than the maximum characteristic velocity in the unperturbed state, and possibly with multiple sub-shocks. As a typical example, we show the shock velocity dependence of the characteristic velocities in the perturbed state for $u_0 = 0.85$ in Figure 4. In the present case, $\mu_0 = 0.622$, $\lambda_0^{(u)} = 0.723$, $\lambda_0^{(v)} = 0.522$ and $s_* = 0.689$.
+
+### C. Case C
+
+If we choose the state $\mathbf{U}_0 = (u_0, u_0)^T$ with $0 < u_0 < 0.536$, the relationship $\lambda_0^{(u)} > \lambda_0^{(v)}$ holds. The characteristic velocity $\lambda_1^{(v)}$ in the state $\mathbf{U}_1 = (u_1, u_1)^T$ coincides with the shock velocity at the critical characteristic velocity $s_*$, which is larger than the maximum characteristic velocity; $s_* > \lambda_0^{(u)}$. We understand that there are two possibilities of the sub-shock formation, both for $s$ greater than $\lambda_0^{\text{max}}$. The first is the sub-shock appearing when $s > \lambda_0^{(u)}$. The second possibility is the sub-shock when $s > s_*$. The necessary condition (14) is violated. As a typical example, we show the shock velocity dependence of the characteristic velocities in the state $\mathbf{U}_1 = (u_1, u_1)^T$ for $u_0 = 0.3$ in Figure 5. In the present case, $\mu_0 = 0.050$, $\lambda_0^{(v)} = 0.0081$, $\lambda_0^{(u)} = 0.09$ and $s_* = 0.13$.
+
+There is another region of the state $\mathbf{U}_0 = (u_0, u_0)^T$ with $1 < u_0 < 1.06$, which belongs to the Case C. The relationship $\lambda_0^{(v)} > \lambda_0^{(u)}$ holds and the characteristic velocity $\lambda_1^{(u)}$ in the perturbed state meets the shock velocity at the critical characteristic velocity $s_*$ larger than the maximum characteristic velocity; $s_* > \lambda_0^{(v)}$.
+
+## VI. NUMERICAL RESULTS ON THE SHOCK WAVE STRUCTURE
+
+In this section, we perform numerical calculations of the shock structure in order to check the theoretical predictions of the sub-shock formation discussed in the previous section. We numerically solve the Riemann problem with the following initial condition:
+
+$$
+u(x, 0) = v(x, 0) = \begin{cases} u_1 & (x < 0) \\ u_0 & (x \ge 0) \end{cases}
+$$
+
+with $u_1(u_0, s)$ satisfying the RH conditions for the equilibrium subsystem (39), and we analyze the shock-structure solution obtained after a long time, in accordance with the conjecture explained in Sec. III A. Hereafter, we adopt $\tau = 1$.
+
+As it is not easy to distinguish numerically a real sub-shock from a steep change of the profile, we adopt a strategy used in a previous paper [19]. This strategy is based on the fact that, if there exists a sub-shock, the two states $\mathbf{U}_{-}$ and $\mathbf{U}_{+}$ must satisfy the Rankine-Hugoniot conditions for the full system, i.e. [1, 43]:
+
+$$
+-s[\mathbf{U}] + [\mathbf{F}(\mathbf{U})] = 0,
+$$
+
+where $[\psi] = \psi_{+} - \psi_{-}$ represents the jump of a generic quantity $\psi$ across the (discontinuous) shock front. Here $\psi_{+}$ and $\psi_{-}$ are, respectively, the values of $\psi$ in the just right state and in the just left state of the jump. Therefore first we plot the profile of the shock structure and we consider any point of the profile as the state just before a potential sub-shock $(u_{+}, v_{+})^{T}$, and then, from the
\ No newline at end of file
diff --git a/samples/texts/4730718/page_3.md b/samples/texts/4730718/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..b57cda8c7d7012bab14da4f5b9f951258109e5e1
--- /dev/null
+++ b/samples/texts/4730718/page_3.md
@@ -0,0 +1,11 @@
+FIG. 7. (Case B) Shock structure for $u_1 = 0.9$ (top), $u_1 = 0.935$ (middle) and $u_1 = 0.95$ (bottom). The possible state just after the sub-shock (dotted curves) predicted by the RH conditions is also shown. $u_0 = 0.85$ and $\tau = 1$.
+
+FIG. 8. (Case C) Shock structure for $u_1 = 0.4$ (top), $u_1 = 0.55$ (middle) and $u_1 = 0.65$ (bottom). The possible state just after the sub-shock (dotted curves) predicted by the RH conditions is also shown. $u_0 = 0.3$ and $\tau = 1$.
+
+Rankine-Hugoniot conditions for the full system (37),
+
+$$s = \frac{u_{-}^{2} + u_{+}u_{-} + u_{+}^{2}}{3},$$
+
+$$s = \frac{v_{-}^{4} + v_{-}^{3}v_{+} + v_{-}^{2}v_{+}^{2} + v_{-}v_{+}^{3} + v_{+}^{4}}{5},$$
+
+we associate $(u_+, v_+)^T$ with a point $(u_-, v_-)^T$. In this way we have two curves: the profile of the shock structure and the curve of potential states just after the sub-shock. If the two curves never meet, we understand that the
\ No newline at end of file
diff --git a/samples/texts/4730718/page_6.md b/samples/texts/4730718/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..318827b42909420321aef61623dba68356c78a8b
--- /dev/null
+++ b/samples/texts/4730718/page_6.md
@@ -0,0 +1,53 @@
+sub-shock with slower characteristic velocity than the maximum characteristic velocity?" These questions are interesting not only mathematically but also physically due to the following recent progress:
+
+(a) Extended thermodynamics of polyatomic gases has been developed [8–10]. The ET theory with 14 independent variables ($ET_{14}$) explains the shock structure in rarefied polyatomic gases where the internal modes, namely, rotational and vibrational modes, are partially excited [11]. In particular, $ET_{14}$ can explain the structure composed of thin and thick layers [12, 13] in a fully consistent way [11] in contrast to previous Bethe-Teller theory [14]. It is also shown that the very steep change in the thin layer may be described as a sub-shock within the resolution of the simplified ET theory with only 6 independent fields ($ET_6$) [11, 15, 16]. The numerical results based on the kinetic theory also support the theoretical predictions by the ET theories quantitatively [17]. Therefore the sub-shock formation does not necessarily imply the violation of the validity range of the ET theory in a polyatomic gas and the sub-shock may have the physical meaning in this kind of problems.
+
+(b) In the context of a binary mixture of Eulerian monatomic gases, the sub-shock formation with slower shock velocity than the maximum unperturbed characteristic velocity and the multiple sub-shock was observed via numerical analysis [18, 19]. However, the system of balance equations for binary mixtures is very special because the field equations for each component have exactly the same form of a single fluid and the coupling effect is only through the production terms that take the mechanical and thermal diffusions into account.
+
+In the present paper, in order to understand the problem more deeply, we first reconsider the shock structure in a rarefied polyatomic gas predicted by $ET_{14}$, and it will be shown that, also in this case, the singular points where $s$ reaches slower characteristic velocities may become regular and the sub-shock appears only when the shock velocity is greater than the maximum characteristic velocity in the unperturbed state. This example reinforces the conjecture that the differential systems of ET theories have special characteristics such that the sub-shock occurs only for $s$ greater than the unperturbed maximum characteristic velocity.
+
+However, in the second part of the paper, we construct a counterexample to this conjecture by using a simple 2 × 2 hyperbolic dissipative system that satisfies all requirements of extended thermodynamics, that is, the entropy inequality, concavity of the entropy, the sub-characteristic condition and the Shizuta-Kawashima condition. In contrast to previous results, we show clearly the sub-shock formation with a shock velocity slower than the maximum characteristic velocity. Moreover, multiple sub-shocks are also observed in this simple system.
+
+The final section is devoted to concluding remarks and a discussion of some open problems.
+
+## II. SHOCK-STRUCTURE PROBLEM
+
+The system of field equations of ET in one space dimension is a particular case of a general first-order hyperbolic quasi-linear system of balance laws:
+
+$$ \frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x} = \mathbf{f}(\mathbf{U}), \qquad (1) $$
+
+where $\mathbf{U}, \mathbf{F}$ and $\mathbf{f}$ are column vectors of $R^N$. Here $\mathbf{U}(x,t)$ is the unknown field vector with $x$ and $t$ being, respectively, the space and time.
+
+Let us consider a solution of (1) representing a shock structure, that is, the field variable $\mathbf{U}$ depends only on a single variable $z$ (traveling wave):
+
+$$ \mathbf{U} = \mathbf{U}(z), \quad z = x - st $$
+
+with constant equilibrium boundary conditions at infinity:
+
+$$ \lim_{z \to +\infty} \mathbf{U} = \mathbf{U}_0, \quad \lim_{z \to -\infty} \mathbf{U} = \mathbf{U}_1, \qquad (2) $$
+
+where
+
+$$ \mathbf{f}(\mathbf{U}_0) = \mathbf{f}(\mathbf{U}_1) = 0. $$
+
+We call the state $\mathbf{U}_0$ the *unperturbed state* and the state $\mathbf{U}_1$ the *perturbed state*. Hereafter, the quantities with the subscript 0 represent the quantities evaluated in the unperturbed state and the quantities with subscript 1 represent the ones evaluated in the perturbed state. From (1), we have the following ODE system:
+
+$$ (\mathbf{A}(\mathbf{U}) - s\mathbf{I}) \frac{d\mathbf{U}}{dz} = \mathbf{f}(\mathbf{U}), \quad \mathbf{A} = \frac{\partial \mathbf{F}}{\partial \mathbf{U}} \qquad (3) $$
+
+with boundary conditions given by (2).
+
+Following [7], by taking the typical features of extended thermodynamics into account, we may split the system (1) into the blocks of $M$ conservation laws and of $N-M$ balance equations as follows:
+
+$$ \begin{gathered} \frac{\partial \mathbf{V}(\mathbf{U})}{\partial t} + \frac{\partial \mathbf{P}(\mathbf{U})}{\partial x} = 0, \\ \frac{\partial \mathbf{W}(\mathbf{U})}{\partial t} + \frac{\partial \mathbf{R}(\mathbf{U})}{\partial x} = \mathbf{g}(\mathbf{U}). \end{gathered} \qquad (4) $$
+
+We may also choose the field variable $\mathbf{U}$ to coincide with the main field by which the original system becomes symmetric hyperbolic [20, 21]:
+
+$$ \mathbf{U} = (\mathbf{v}, \mathbf{w})^T, \qquad (5) $$
+
+where $\mathbf{v} \in R^M$ and $\mathbf{w} \in R^{N-M}$, such that [7, 22]:
+
+$$ \mathbf{g}(\mathbf{v}, \mathbf{w}) = 0 \iff \mathbf{w} = 0. \qquad (6) $$
+
+The state with $\mathbf{w}=0$ represents the equilibrium state and we associate the system (4) with the corresponding equilibrium subsystem [22]:
+
+$$ \frac{\partial \mathbf{V}(\mathbf{v}, 0)}{\partial t} + \frac{\partial \mathbf{P}(\mathbf{v}, 0)}{\partial x} = 0. \qquad (7) $$
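+
+To fix ideas (an illustration anticipating Sec. III, with $\mathbf{v}$ and $\mathbf{w}$ written in terms of the physical variables for readability): for the one-dimensional $ET_{14}$ system, $N = 6$ and $M = 3$ (conservation of mass, momentum and energy), and
+
+$$ \mathbf{v} = (\rho, v, T), \qquad \mathbf{w} = (\Pi, \sigma, q), \qquad \mathbf{g}(\mathbf{v}, \mathbf{w}) = 0 \iff \Pi = \sigma = q = 0, $$
+
+so that the equilibrium subsystem (7) is the system of Euler equations.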
\ No newline at end of file
diff --git a/samples/texts/4730718/page_7.md b/samples/texts/4730718/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..b6fd0a225fcb1f1718d5634065203a8687d6572c
--- /dev/null
+++ b/samples/texts/4730718/page_7.md
@@ -0,0 +1,109 @@
+Taking (4) into account, we may rewrite (3) as
+
+$$
+\begin{equation}
+\begin{aligned}
+& \frac{d}{dz} \{-s\mathbf{V}(\mathbf{v}, \mathbf{w}) + \mathbf{P}(\mathbf{v}, \mathbf{w})\} = 0, \\
+& -s \frac{d\mathbf{W}(\mathbf{v}, \mathbf{w})}{dz} + \frac{d\mathbf{R}(\mathbf{v}, \mathbf{w})}{dz} = \mathbf{g}(\mathbf{v}, \mathbf{w}).
+\end{aligned}
+\tag{8}
+\end{equation}
+$$
+
+By integrating (8)$_1$, we have
+
+$$
+-s \mathbf{V}(\mathbf{v}, \mathbf{w}) + \mathbf{P}(\mathbf{v}, \mathbf{w}) = \text{const.} \quad (9)
+$$
+
+and, taking into account that the unperturbed and perturbed states are constant states (see (2)), from (8)$_2$ and (6) we have
+
+$$
+\mathbf{w}_1 = \mathbf{w}_0 = 0 \tag{10}
+$$
+
+and, from (9),
+
+$$
+-s \mathbf{V}(\mathbf{v}_0, 0) + \mathbf{P}(\mathbf{v}_0, 0) = -s \mathbf{V}(\mathbf{v}_1, 0) + \mathbf{P}(\mathbf{v}_1, 0). \quad (11)
+$$
+
+This is nothing but the Rankine-Hugoniot (RH) conditions associated with the equilibrium subsystem (7), which permit us to obtain $\mathbf{v}_1 \equiv \mathbf{v}_1(\mathbf{v}_0, s)$. Therefore, once the unperturbed equilibrium state $\mathbf{v}_0$ and the shock velocity $s$ are given, the shock structure is obtained as the solution of (9) and (8)$_2$ under the boundary conditions (10) and (11).
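+
+For instance, when the equilibrium subsystem (7) is the Euler system of a polytropic gas (the case treated below), (11) reduces to the classical normal-shock relations; in terms of the unperturbed Mach number $M_0 = (s - v_0)/c_0$ they read
+
+$$ \frac{\rho_1}{\rho_0} = \frac{(\gamma + 1) M_0^2}{(\gamma - 1) M_0^2 + 2}, \qquad \frac{p_1}{p_0} = \frac{2\gamma M_0^2 - (\gamma - 1)}{\gamma + 1}, \qquad \frac{s - v_1}{s - v_0} = \frac{\rho_0}{\rho_1}, $$
+
+which indeed determine the perturbed state once $\mathbf{v}_0$ and $s$ are given.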
+
+According to [7], a singularity (sub-shock) may appear when a characteristic velocity $\lambda$, which is an eigenvalue of the matrix $\mathbf{A}$ (see (3)), meets the shock velocity for some $z$. More precisely, let $\mathbf{U}_s(z)$ be a solution of (3) for a given $s$:
+
+$$
+\exists \bar{z}, \text{ such that } \lambda(\mathbf{U}_s(\bar{z})) = s. \quad (12)
+$$
+
+Assume that, for a prescribed $s$, the solutions of (3) satisfy the following condition for any genuinely non-linear eigenvalue $\lambda$:
+
+$$
+\lambda_0 \leq \lambda(\mathbf{U}_s(z)) \leq \lambda_1(s), \quad \forall z \in (-\infty, \infty). \quad (13)
+$$
+
+Then a necessary condition for a sub-shock with a shock velocity slower than $\lambda_0^{\max}$ is that, for some $s$, there exists an eigenvalue $\lambda$ such that
+
+$$
+\lambda_0 < s < \lambda_1(s) < \lambda_0^{\max}. \tag{14}
+$$
+
+In fact, if (14) holds, then (12) follows from (13) by continuity.
+
+We notice that, if we increase the shock velocity further, such that $s > \lambda_0^{\max}$, the sub-shock corresponding to the fastest mode also becomes admissible, and therefore we may expect that two or more sub-shocks exist.
+
+## III. SUB-SHOCK FORMATION IN A RAREFIED POLYATOMIC GAS
+
+Let us analyze the shock structure in a rarefied polyatomic gas based on extended thermodynamics with 14 fields ($ET_{14}$): the mass density $\rho$, the velocity $v_i$, the temperature $T$, the dynamic (non-equilibrium) pressure $\Pi$, the shear stress $\sigma_{\langle ij \rangle}$ and the heat flux $q_i$, where $i, j = 1, 2, 3$ and the angular brackets in $\sigma_{\langle ij \rangle}$ indicate that the shear stress is a symmetric traceless tensor. The $ET_{14}$ theory is the simplest and most natural extension of the Navier-Stokes and Fourier (NSF) theory, and $ET_{14}$ includes NSF as a special case.
+
+We adopt the caloric and thermal equations of state for a rarefied polyatomic gas. The specific internal energy $\varepsilon$ and the (equilibrium) pressure $p$ are expressed by
+
+$$
+\varepsilon = \frac{D k_B T}{2 m}, \quad p = \frac{k_B \rho T}{m},
+$$
+
+where $D$, $k_B$ and $m$ are, respectively, the degrees of freedom of a molecule, the Boltzmann constant and the mass of a molecule. Hereafter, we consider a polytropic gas, that is, the specific heat is assumed to be constant ($D$ is constant). For the case of a non-polytropic rarefied gas, the shock structure was studied in [11].
+
+We focus on the one-dimensional (plane) shock waves propagating along the $x$-axis, where the vectorial and tensorial quantities are given by
+
+$$
+v_i \equiv \begin{pmatrix} v \\ 0 \\ 0 \end{pmatrix}, \sigma_{ij} \equiv \begin{pmatrix} \sigma & 0 & 0 \\ 0 & -\frac{1}{2}\sigma & 0 \\ 0 & 0 & -\frac{1}{2}\sigma \end{pmatrix}, q_i \equiv \begin{pmatrix} q \\ 0 \\ 0 \end{pmatrix}
+$$
+
+and in this case, the independent variables are $\mathbf{U} \equiv (\rho, v, T, \Pi, \sigma, q)^T$. The field equations of $ET_{14}$ are sum-
\ No newline at end of file
diff --git a/samples/texts/4730718/page_8.md b/samples/texts/4730718/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..ceec14db813d17d6b398aa43f288a4ca40a10959
--- /dev/null
+++ b/samples/texts/4730718/page_8.md
@@ -0,0 +1,48 @@
+FIG. 1. Mach number dependences of the dimensionless characteristic velocities in the perturbed state for $D = 3$ (left) and for $D = 7$ (right).
+
+marized as follows [8]:
+
+$$
+\begin{aligned}
+& \frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x}(\rho v) = 0, \\
+& \frac{\partial \rho v}{\partial t} + \frac{\partial}{\partial x}(p + \Pi - \sigma + \rho v^2) = 0, \\
+& \frac{\partial}{\partial t}(2\rho\epsilon + \rho v^2) + \\
+& \qquad + \frac{\partial}{\partial x}\{2\rho\epsilon v + 2(p + \Pi - \sigma)v + \rho v^3 + 2q\} = 0, \\
+& \frac{\partial}{\partial t}\{3(p + \Pi) + \rho v^2\} + \\
+& \qquad + \frac{\partial}{\partial x}\{(5p + 5\Pi - 2\sigma)v + \rho v^3 + \frac{5}{1+\hat{c}_v}q\} = -\frac{3\Pi}{\tau_{\Pi}}, \\
+& \frac{\partial}{\partial t}(p + \Pi - \sigma + \rho v^2) + \\
+& \qquad + \frac{\partial}{\partial x}\{3(p + \Pi - \sigma)v + \rho v^3 + \frac{3}{1+\hat{c}_v}q\} = \frac{\sigma}{\tau_S} - \frac{\Pi}{\tau_{\Pi}}, \\
+& \frac{\partial}{\partial t}\{2\rho\epsilon v + 2(p + \Pi - \sigma)v + \rho v^3 + 2q\} + \\
+& \qquad + \frac{\partial}{\partial x}\Bigg\{2\rho\epsilon v^2 + 5(p + \Pi - \sigma)v^2 + \rho v^4 + \\
+& \qquad \qquad + 2\left(\epsilon + \frac{k_B}{m}T\right)p + 2\left(\epsilon + 2\frac{k_B}{m}T\right)(\Pi - \sigma) + \\
+& \qquad \qquad + \frac{10+4\hat{c}_v}{1+\hat{c}_v}qv\Bigg\} = -2\left\{\frac{q}{\tau_q} + \left(\frac{\Pi}{\tau_{\Pi}} - \frac{\sigma}{\tau_S}\right)v\right\},
+\end{aligned}
+\tag{15}
+$$
+
+where $\tau_{\Pi}$, $\tau_S$, and $\tau_q$ are the relaxation times for the dynamic pressure, the shear stress, and the heat flux,
+
+respectively. Here $\hat{c}_v$ is the dimensionless specific heat defined by $\hat{c}_v = (m/k_B)c_v$ with $c_v$ being the specific heat and in the present case $\hat{c}_v = D/2$. The equilibrium state of (15) is achieved when $\Pi = \sigma = q = 0$. The characteristic velocities in the equilibrium state $\lambda_E$ are [2, 23]:
+
+$$ \frac{\lambda_E - v}{c} = 0, 0, \pm\Delta^{(1)}, \pm\Delta^{(2)}, \tag{16} $$
+
+where
+
+$$
+\begin{aligned}
+\Delta^{(1)} &= \sqrt{\frac{\hat{c}_v (7 + 4\hat{c}_v - \sqrt{37 + 32\hat{c}_v + 4\hat{c}_v^2})}{2(1 + \hat{c}_v)^2}}, \\
+\Delta^{(2)} &= \sqrt{\frac{\hat{c}_v (7 + 4\hat{c}_v + \sqrt{37 + 32\hat{c}_v + 4\hat{c}_v^2})}{2(1 + \hat{c}_v)^2}},
+\end{aligned}
+$$
+
+and c is the sound velocity:
+
+$$ c = \sqrt{\gamma \frac{k_B}{m} T}. \tag{17} $$
+
+Here $\gamma$ is the ratio of specific heats, related to $\hat{c}_v$ and $D$ by the following relation:
+
+$$ \gamma = \frac{1 + \hat{c}_v}{\hat{c}_v} = \frac{2 + D}{D}. $$
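+
+As a quick numerical check (our evaluation): for $D = 3$, i.e. $\hat{c}_v = 3/2$ and $\gamma = 5/3$, the expressions above give
+
+$$ \Delta^{(1)} = \sqrt{\frac{\frac{3}{2}\left(13 - \sqrt{94}\right)}{12.5}} \approx 0.630, \qquad \Delta^{(2)} = \sqrt{\frac{\frac{3}{2}\left(13 + \sqrt{94}\right)}{12.5}} \approx 1.650, $$
+
+so the fastest equilibrium characteristic speed is approximately $v + 1.65\,c$, in agreement with the classical value of the 13/14-moment theories.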
+
+The equilibrium subsystem (7) of the system of $ET_{14}$ (15) is the system of Euler equations. The relationship between the unperturbed and perturbed states is given by the RH conditions (9) for the system of the Euler equations. Let $\mathbf{U}_0 = (\rho_0, v_0, T_0, 0, 0, 0)^T$ be the unperturbed state; the unperturbed Mach number $M_0$ is then defined as
+
+$$ M_0 = \frac{s - v_0}{c_0}, $$
\ No newline at end of file
diff --git a/samples/texts/5029325/page_1.md b/samples/texts/5029325/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..7322f79be6517d894cd393e31f37b2f9c40f1cc2
--- /dev/null
+++ b/samples/texts/5029325/page_1.md
@@ -0,0 +1,22 @@
+# Strong ETH and Resolution via Games and the Multiplicity of Strategies
+
+Ilario Bonacina¹ · Navid Talebanfard²
+
+Received: 10 January 2016 / Accepted: 4 October 2016 / Published online: 19 October 2016
+© Springer Science+Business Media New York 2016
+
+**Abstract** We consider a proof system intermediate between *regular Resolution*, in which no variable can be resolved more than once along any refutation path, and general *Resolution*. We call such a system *δ-regular Resolution*: at most a fraction *δ* of the variables can be resolved more than once along each refutation path (however, the re-resolved variables along different paths need not be the same). We show that for *δ* not too large, *δ*-regular Resolution is consistent with the Strong Exponential Time Hypothesis (SETH). More precisely, for large *n* and *k*, we show that there are unsatisfiable *k*-CNF formulas which require *δ*-regular Resolution refutations of size $2^{(1-\epsilon_k)n}$, where *n* is the number of variables, $\epsilon_k = \tilde{O}(k^{-1/4})$, and $\delta = \tilde{O}(k^{-1/4})$ is the fraction of variables that can be resolved multiple times.
+
+**Keywords** Satisfiability · Resolution · Strong ETH
+
+This work was completed while the 1st author was affiliated to the Computer Science Department of Sapienza University of Rome (Italy). The 1st author was funded by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007–2013) / ERC Grant Agreement No. 279611.
+
+Navid Talebanfard
+ntalebanfard@gmail.com
+
+Ilario Bonacina
+ilario@kth.se
+
+¹ School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
+
+² Saarland University and the Cluster of Excellence, MMCI, Saarbrücken, Germany
\ No newline at end of file
diff --git a/samples/texts/5029325/page_3.md b/samples/texts/5029325/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..4915f7c263781414b477173870e525e24ab9f4c5
--- /dev/null
+++ b/samples/texts/5029325/page_3.md
@@ -0,0 +1,43 @@
+$$
+\begin{align*}
+|\mathcal{R}| \stackrel{(\dagger\dagger)}{\ge} |\{\rho_\beta \in \mathcal{R} : \beta \in S_{B^*}\}| &\ge \frac{|S_{B^*}|}{\binom{\ell n}{\delta\ell n} (e^{2\ell})^{\delta\ell n + w}} && \stackrel{\text{(eq. 2)}}{\ge} \frac{2^{w\ell}}{\binom{n}{w} \binom{\ell n}{\delta\ell n} (e^{2\ell})^{\delta\ell n + w}} \\
+&\ge \frac{2^{w\ell}}{(\frac{en}{w})^w (\frac{e}{\delta})^{\delta\ell n} (e^{2\ell})^{\delta\ell n + w}} \\
+&= 2^{w(\ell - \log(\frac{e^3\ell n}{w}) - \frac{\delta\ell n}{w} \log \frac{e^3\ell}{\delta})},
+\end{align*}
+$$
+
+where the inequality $(\dagger\dagger)$ follows by the definition of $\rho_\beta$. □
+
+The next step now is to obtain formulas which require very large Resolution width.
+Such a construction is given by Beck and Impagliazzo in [5] and improved in [8].
+
+**Theorem 4.1** ([8]) For any large *n* and *k*, there exists an unsatisfiable *k*-CNF formula $\varphi$ on *n* variables and some $\zeta_k = \tilde{O}(k^{-1/3})$ such that
+
+$$
+\mathrm{width}(\varphi \vdash \bot) \geq (1 - \zeta_k)n.
+$$
+
+We now have all the preliminary results needed to prove Theorem 1.1: our SETH lower bound for Resolution will follow from the existence of a CNF formula requiring very high Resolution width (Theorem 4.1) and the previous theorem about xorifications (Theorem 1.2).
+
+**Restated Theorem 1.1** (Main theorem) For any large $n$ and $k$, there exists an unsatisfiable $k$-CNF formula $\psi$ on $n' \geq n$ variables such that any $\delta$-regular Resolution refutation of $\psi$ requires size at least $2^{(1-\epsilon_k)n}$ where both $\epsilon_k$ and $\delta$ are $\tilde{O}(k^{-1/4})$.
+
+*Proof* Let $\varphi$ be the $k$-CNF formula given by Theorem 4.1, in particular $\operatorname{width}(\varphi \vdash \bot) \ge (1-\zeta_k)n$ where $\zeta_k = \tilde{O}(k^{-1/3})$. Then $\varphi[\oplus^\ell]$ is a $k'$-CNF formula on $n' = \ell n$ variables where $k'=k\ell$. By the choice of $\ell = \tilde{\Theta}(k^{1/3})$, $\delta = \tilde{O}(k^{-1/3})$ and by Theorem 1.2, it follows that
+
+$$
+\begin{align*}
+\text{size}_{\delta}(\varphi[\oplus^{\ell}] \vdash \bot) &\ge 2^{(1-\zeta_k)n(\ell - \log(\frac{e^3\ell n}{w}) - \frac{\delta\ell n}{w} \log \frac{e^3\ell}{\delta})} \\
+&\stackrel{(\dagger)}{\ge} 2^{(1-\zeta_k)n(\ell - O(\log k) - \ell \tilde{O}(k^{-1/3}))} = 2^{(1-\tilde{O}(k^{-1/3}))n\ell} \\
+&= 2^{(1-\epsilon_{k'})n\ell}.
+\end{align*}
+$$
+
+In particular the inequality $(\dagger)$ follows from the choice of $\ell = \tilde{\Theta}(k^{1/3})$ and $\delta = \tilde{O}(k^{-1/3})$. To obtain the asymptotic behaviour of $\epsilon_{k'}$ with respect to $k'$, just observe that $k' = k\ell = \tilde{\Theta}(k^{4/3})$ and $\epsilon_{k'} = \tilde{O}(k^{-1/3})$, hence $\epsilon_{k'} = \tilde{O}(k'^{-1/4})$. Similarly we get the asymptotic behaviour of $\delta$ as a function of $k'$. So the formula $\psi$ in the statement is the constructed formula $\varphi[\oplus^\ell]$. □
+
+**Acknowledgements** We would like to thank Nicola Galesi for discussions on the topic. We would also like to thank Jakob Nordström and Massimo Lauria for discussions on Resolution size and strong width lower bounds.
\ No newline at end of file
diff --git a/samples/texts/5029325/page_4.md b/samples/texts/5029325/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..933eeaa3b7333c3849dfb85ea06ebab83bc3815b
--- /dev/null
+++ b/samples/texts/5029325/page_4.md
@@ -0,0 +1,53 @@
+# References
+
+1. Atserias, A., Dalmau, V.: A combinatorial characterization of resolution width. J. Comput. Syst. Sci. **74**, 323–334 (2008)
+
+2. Atserias, A., Fichte, J.K., Thurley, M.: Clause-learning algorithms with many restarts and bounded-width resolution. J. Artif. Intell. Res. (JAIR) **40**, 353–373 (2011)
+
+3. Bayardo Jr., R.J.B., Schrag, R.: Using CSP look-back techniques to solve real-world SAT instances. In: Kuipers, B., Webber, B.L. (eds.) Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Innovative Applications of Artificial Intelligence Conference, AAAI 97, IAAI 97, pp. 203–208, AAAI Press/The MIT Press, Providence 27–31 July 1997
+
+4. Beame, P., Beck, C., Impagliazzo, R.: Time-space tradeoffs in resolution: superpolynomial lower bounds for superlinear space. In: Karloff, H.J., Pitassi, T. (eds.) Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, pp. 213–232, ACM, New York, 19–22 May 2012
+
+5. Beck, C., Impagliazzo, R.: Strong ETH holds for regular resolution. In: Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC ’13, pp. 487–494, ACM (2013)
+
+6. Ben-Sasson, E., Wigderson, A.: Short proofs are narrow-resolution made simple. J. ACM **48**, 149–169 (2001)
+
+7. Blake, A.: Canonical Expressions in Boolean Algebra, PhD thesis. University of Chicago (1937)
+
+8. Bonacina, I., Talebanfard, N.: Improving resolution width lower bounds for *k*-CNFs with applications to the strong exponential time hypothesis. Inf. Process. Lett. **116**, 120–124 (2015)
+
+9. Chen, R., Kabanets, V., Kolokolova, A., Shaltiel, R., Zuckerman, D.: Mining circuit lower bound proofs for meta-algorithms. In: IEEE 29th Conference on Computational Complexity, CCC, pp. 262–273 (2014)
+
+10. Chen, R., Kabanets, V., Saurabh, N.: An improved deterministic #SAT algorithm for small de morgan formulas. In: Mathematical Foundations of Computer Science 2014 - 39th International Symposium, MFCS, pp. 165–176 (2014)
+
+11. Chen, S., Scheder, D., Talebanfard, N., Tang, B.: Exponential lower bounds for the PPSZ *k*-SAT algorithm. In: SODA, pp. 1253–1263 (2013)
+
+12. Dantchev, S.S.: Relativisation provides natural separations for resolution-based proof systems. In: Proceedings of Computer Science - Theory and Applications, First International Computer Science Symposium in Russia, pp. 147–158, CSR 2006, St. Petersburg, 8–12 June 2006
+
+13. Dantsin, E., Goerdt, A., Hirsch, E.A., Kannan, R., Kleinberg, J.M., Papadimitriou, C.H., Raghavan, P., Schöning, U.: A deterministic $(2-2/(k+1))^n$ algorithm for $k$-SAT based on local search. Theor. Comput. Sci. **289**, 69–83 (2002)
+
+14. Davis, M., Logemann, G., Loveland, D.W.: A machine program for theorem-proving. Commun. ACM **5**, 394–397 (1962)
+
+15. Davis, M., Putnam, H.: A computing procedure for quantification theory. J. ACM **7**, 201–215 (1960)
+
+16. Haken, A.: The intractability of resolution. Theor. Comput. Sci. **39**, 297–308 (1985)
+
+17. Impagliazzo, R., Paturi, R.: On the complexity of k-SAT. J. Comput. Syst. Sci. **62**, 367–375 (2001)
+
+18. Moskewicz, M.W., Madigan, C.F., Zhao, Y., Zhang, L., Malik, S.: Chaff: engineering an efficient SAT solver. In: Proceedings of the 38th Design Automation Conference, DAC 2001, pp. 530–535, ACM, Las Vegas, 18–22 June 2001
+
+19. Paturi, R., Pudlák, P., Saks, M.E., Zane, F.: An improved exponential-time algorithm for *k*-SAT. J. ACM **52**, 337–364 (2005)
+
+20. Paturi, R., Pudlák, P., Zane, F.: Satisfiability coding lemma. In: 38th Annual Symposium on Foundations of Computer Science, FOCS, pp. 566–574 (1997)
+
+21. Pipatsrisawat, K., Darwiche, A.: On the power of clause-learning SAT solvers as resolution engines. Artif. Intell. **175**, 512–525 (2011)
+
+22. Pudlák, P.: Proofs as games. Am. Math. Mon. **107**, 541–550 (2000)
+
+23. Pudlák, P., Impagliazzo, R.: A lower bound for DLL algorithms for k-SAT (preliminary version). In: Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’00, pp. 128–136 (2000)
+
+24. Robinson, J.A.: A machine-oriented logic based on the resolution principle. J. ACM **12**, 23–41 (1965)
+
+25. Santhanam, R.: Fighting perebor: new and improved algorithms for formula and QBF satisfiability. In: 51st Annual IEEE Symposium on Foundations of Computer Science. FOCS 2010, 183–192 (2010)
+
+26. Schöning, U.: A probabilistic algorithm for k-SAT and constraint satisfaction problems. In: 40th Annual Symposium on Foundations of Computer Science, FOCS, pp. 410–414 (1999)
\ No newline at end of file
diff --git a/samples/texts/5029325/page_5.md b/samples/texts/5029325/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f2d04dc1f19494d049d638af178e6dba4cb90b5
--- /dev/null
+++ b/samples/texts/5029325/page_5.md
@@ -0,0 +1,7 @@
+27. Silva, J.P.M., Sakallah, K.A.: GRASP: a search algorithm for propositional satisfiability. IEEE Trans. Comput. **48**, 506–521 (1999)
+
+28. Urquhart, A.: Hard examples for resolution. J. ACM **34**, 209–219 (1987)
+
+29. Williams, R.: Improving exhaustive search implies superpolynomial lower bounds. In: Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC, pp. 231–240 (2010)
+
+30. Williams, R.: Non-uniform ACC circuit lower bounds. In: Proceedings of the 26th Annual IEEE Conference on Computational Complexity, CCC, pp. 115–125 (2011)
\ No newline at end of file
diff --git a/samples/texts/5029325/page_6.md b/samples/texts/5029325/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..e2f1ffb2e198ab8285951f5650437425dca9887b
--- /dev/null
+++ b/samples/texts/5029325/page_6.md
@@ -0,0 +1,9 @@
+# 1 Introduction
+
+The SAT problem is one of the most fundamental NP-complete problems. Paturi, Pudlák and Zane [20] proved tight depth-3 circuit lower bounds, and from their technique they obtained a k-SAT algorithm which beats exhaustive search. Along similar lines, Santhanam [25] modified a lower bound argument to obtain improved satisfiability algorithms for De Morgan formulas of linear size. Employing stronger lower bound arguments, [9] and [10] gave satisfiability algorithms for formulas of larger size. In a different direction, Williams [29] showed that even small improvements over exhaustive search for satisfiability on certain circuit classes imply lower bounds against those classes. In fact he obtained his seminal NEXP $\not\subseteq$ ACC$^0$ result in [30] by giving a non-trivial ACC$^0$-SAT algorithm.
+
+In this paper we focus on the k-SAT problem. There are several non-trivial algorithms known for this problem, see e.g. [13, 19, 20, 26]. Despite this, however, the exact complexity of k-SAT under suitable assumptions remains unknown. Formalizing what this complexity could be, Impagliazzo and Paturi [17] formulated the following two hypotheses. The *Exponential Time Hypothesis* (ETH) states that there are no sub-exponential time algorithms for the k-SAT problem, for any k. The *Strong Exponential Time Hypothesis* (SETH) states that the complexity of k-SAT grows as k increases and the running time of the best k-SAT algorithms approaches that of exhaustive search. More formally, it says that k-SAT requires running time $2^{(1-\epsilon_k)n}$ where $\epsilon_k \to 0$ as $k \to \infty$.
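+
+These hypotheses are often stated in the notation of [17] (a standard formalization, recalled here): setting
+
+$$ s_k = \inf \{ \delta : k\text{-SAT is solvable in time } 2^{\delta n} \}, $$
+
+ETH asserts that $s_3 > 0$, while SETH asserts that $\lim_{k \to \infty} s_k = 1$.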
+
+Both ETH and SETH are stronger than $P \neq NP$ and hence we do not expect to be able to verify either of them in the near future. We can however ask whether known algorithms are consistent with these hypotheses. For the PPSZ algorithm [19], strong lower bounds supporting SETH were proved in [11]. But one may ask for such a result that holds for a class of algorithms rather than for a specific one. Proof complexity provides a framework to do this. One can think of the run of a SAT algorithm on an unsatisfiable instance as a proof of unsatisfiability; hence, if this proof is structured enough, we can employ tools from proof complexity and obtain lower bounds.
+
+For instance, practical SAT-solvers are based on the *Davis-Putnam-Logemann-Loveland* (DPLL) algorithm, a backtracking method introduced in [14, 15] to search for assignments satisfying a CNF formula. It is a well-known result that DPLL is equivalent to *tree-like Resolution*, a sub-system of the proof system Resolution where only proofs having a tree structure are allowed. Hence size lower bounds for tree-like Resolution transfer to lower bounds for the running time of the DPLL algorithm. In a series of works, [3, 18, 27] introduced the idea of *Conflict Driven Clause Learning* (CDCL) as a way for DPLL SAT-solvers to cut the search space and avoid duplicated work. This is done by performing a *conflict analysis* when the search for an assignment leads to a contradiction and then *learning* a clause encoding a reason for that failure. By definition, Resolution (polynomially) simulates runs of CDCL solvers over unsatisfiable instances, hence lower bounds for Resolution transfer to lower bounds for CDCL solvers. We recall that the converse also holds under certain assumptions on the behaviour of the CDCL solver, see [21] and [2].
\ No newline at end of file
diff --git a/samples/texts/5741006/page_1.md b/samples/texts/5741006/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..26f157f7d085d6fc667564ed65c540031de5608f
--- /dev/null
+++ b/samples/texts/5741006/page_1.md
@@ -0,0 +1,26 @@
+# H13-234
+## DESIGNING A MONITORING SYSTEM FOR A SEMI-URBAN SET OF BUILDINGS
+
+Jean-Pierre Issartel and Tristan Gamel
+
+Délégation Générale pour l'Armement, Maîtrise NRBC, Vert le Petit, France
+
+**Abstract:** A prototype system is being developed for the surveillance of air quality and security at an industrial facility handling chemical compounds. This system is based on a recent approach to data assimilation highlighting the importance of an auxiliary geometry, interpreted as an apparent geometry since the close environment of the detectors is magnified, as happens in the eye's view. The technique allows us to optimally design a network that will rapidly detect an accidental release to the atmosphere. It also allows identifying one or several simultaneous accidental releases. This is important for the immediate management of the consequences and also to later establish the operator's responsibility. In addition to the presentation of the theoretical framework of inverse modelling, the work raises several issues related to the difficulties of urban dispersion modelling and to the operational constraint of real-time computations.
+
+**Key words:** Industrial security, trace species, inverse problem, source identification.
+
+### INTRODUCTION
+
+In this study, we want to monitor accidental chemical releases in order to ensure the security of a set of industrial buildings. Whereas most data assimilation studies have the Earth or continents as their theater, this study addresses a local semi-urban area. The work is illustrated with a set of buildings (figure 1) in Aix en Provence (southern France). The meteorology is chosen to correspond to local average conditions, with Pasquill class C stability and a wind of 3.0 ms⁻¹ from SE at 10 m above the ground and away from the buildings. The wind crosses the 600 m of the modeled area within approximately 4 minutes, and thus a time interval of 15 minutes is used in the computations. The dispersion of a trace species is simulated by the model PANEPR, appropriate for small-scale urban areas. Ten detectors are placed among the buildings, as indicated on the figures by the red dots, 2 m above the ground. Each of them performs one concentration measurement every thirty seconds. We want to use this sequence of artificial observations to retrieve the origin, time and intensity of a point release. This inverse problem is addressed here with a non-Bayesian technique based on a new concept of visibility. A distinction is made between the possible distribution of the emissions and the visibility of these emissions actually provided by the monitoring network in the prevailing meteorological conditions. This visibility is described by a geometric weighting of the various parts of the environment according to a visibility function introduced by Issartel, J.-P. (2005) and developed by Issartel, J.-P. et al. (2007). The artificial data are free from the noise due to the detectors or to the difference between model and reality. The imperfection of the real detectors and of the dispersion model is an essential but separate difficulty addressed, for instance, by Sharan, M. et al. (2009).
+
+Figure 1. The semi-urban area considered for the illustration of the present study. The industrial buildings are indicated in white colour. The arrow indicates the main wind direction (3 m.s⁻¹).
+
+### PRIOR ASSUMPTIONS ABOUT POSSIBLE EMISSIONS: THE FUNDAMENTAL GEOMETRY
+
+The most general description of a tracer source may be given as a function $\sigma$ of the horizontal position $x, y$, altitude $z$ and time $t$, $\sigma(x,y,z,t)$ being a rate of release in units of tracer per kg of air and per second. The mixing ratio of tracer $\chi(x,y,z,t)$ is obtained from the advection diffusion equation (1). Each measurement $\mu_i$ is obtained as an integral in which the sampling function $\pi_i$ describes where and when the air of the sample was taken; $\pi_i$ reduces to a Dirac delta function for an instantaneous measurement at a point.
+
+$$ \frac{\partial \chi}{\partial t} + v \nabla \chi + \frac{\partial}{\partial z} \left( \kappa \frac{\partial \chi}{\partial z} \right) = \sigma \qquad \mu_i = \int_{\Omega \times T} \rho \chi \pi_i(x,y,z,t) dxdydzdt = (\chi, \pi_i) \quad (1) $$
+
+in which $v$, $\kappa$ and $\rho$ are the fields of wind, diffusion coefficient and air density. As explained by Issartel, J.-P. and J. Baverel, (2003) the expression for $\mu_i$ is a scalar product denoted $(\cdot, \cdot)$. It may be transformed using an adjoint function or retroplume $r_i$ subject to the retrograde advection diffusion equation:
+
+$$ -\frac{\partial r_i}{\partial t} - v \nabla r_i + \frac{\partial}{\partial z} \left( \kappa \frac{\partial r_i}{\partial z} \right) = \pi_i \qquad \mu_i = \int_{\Omega \times T} \rho \sigma r_i(x,y,z,t) dxdydzdt = (\sigma, r_i) \quad (2) $$
\ No newline at end of file
diff --git a/samples/texts/5741006/page_2.md b/samples/texts/5741006/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..5dd43f8c3a03b606d3440843aafb55628b40fb5e
--- /dev/null
+++ b/samples/texts/5741006/page_2.md
@@ -0,0 +1,33 @@
+Given the Euclidean geometry of the scalar product (·, ·), the adjoint function describes the sensitivity of the measurement with respect to the various locations and times of the emissions.
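+
+The duality between (1) and (2) can be sketched compactly. Writing $\mathcal{L}$ for the advection diffusion operator on the left-hand side of (1) and $\mathcal{L}^*$ for its formal adjoint appearing in (2) (our shorthand), and noting that the boundary terms produced by integration by parts over $\Omega \times T$ vanish,
+
+$$ \mu_i = (\chi, \pi_i) = (\chi, \mathcal{L}^* r_i) = (\mathcal{L}\chi, r_i) = (\sigma, r_i). $$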
+
+It is assumed that the sought source is located on the ground ($z=0$) and outside the buildings. The emissions are then better described by a flux $s(x,y,t)$ in kg m⁻² s⁻¹. The measurement $\mu_i$ becomes an integral over the time interval $T$ and the ground surface $\Sigma_{ext}$ with the buildings discarded; $\delta$ is the Dirac delta function:
+
+$$ \sigma(x, y, z, t) = \frac{s(x, y, t)\delta(z)}{\rho(x, y, z, t)} \qquad \mu_i = \int_{\Sigma_{ext} \times T} s\, a_i(x, y, t)\,dx\,dy\,dt = (s, a_i)_1 \qquad a_i(x, y, t) = r_i(x, y, 0, t) \quad (3) $$
+
+In the case of an instantaneous point release, the emission function $\sigma(x,y,z,t)$ or $s(x,y,t)$ is proportional to a Dirac delta function. Under the prior hypothesis of superficial outer emissions $s(x,y,t)$, the measurements are accounted for by the new product $(\cdot, \cdot)_1$. This product is the geometric counterpart of the prior hypotheses and we designate it as the *fundamental product*. This concept, raised in (Issartel et al. 2007), was formalized by Sharan et al. (2009).
+
+## EMISSIONS VISIBLE BY THE MONITORING NETWORK: THE RENORMALIZED GEOMETRY
+
+The equations $\mu_i = (s, a_i)_1$ are used to obtain the projection $s_{||} = \sum \lambda_i a_i$ of $s$ on the space spanned by the $a_i$. The coefficients $\lambda_i$ are obtained by inverting a matrix **H**:
+
+$$ \mathbf{\lambda} = \mathbf{H}^{-1}\mathbf{\mu} \quad \text{where} \quad H_{ij} = (a_i, a_j)_1 \qquad \mathbf{\lambda} = \begin{bmatrix} \lambda_1 \\ \vdots \\ \lambda_n \end{bmatrix} \qquad \mathbf{\mu} = \begin{bmatrix} \mu_1 \\ \vdots \\ \mu_n \end{bmatrix} \quad (4) $$
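+As a concrete numerical sketch of equation (4) (the discretization, quadrature weights and random fields below are illustrative stand-ins, not dispersion-model output), the projection $s_{||}$ can be assembled as follows:

```python
import numpy as np

# Hypothetical discretization: each adjoint function a_i is sampled on a grid
# of ground cells x times, flattened into a vector; the fundamental product
# (u, v)_1 is approximated by a quadrature sum with cell weights w.
rng = np.random.default_rng(0)
n_cells, n_meas = 500, 4
w = np.full(n_cells, 1.0 / n_cells)          # quadrature weights (dx dy dt)
a = rng.random((n_meas, n_cells))            # rows: discretized a_i

def product1(u, v):
    """Discrete fundamental product (u, v)_1 = sum_k w_k u_k v_k."""
    return np.sum(w * u * v)

# Gram matrix H_ij = (a_i, a_j)_1 and coefficients lambda = H^{-1} mu
H = np.array([[product1(a[i], a[j]) for j in range(n_meas)] for i in range(n_meas)])
s_true = rng.random(n_cells)                 # synthetic source field
mu = np.array([product1(s_true, a[i]) for i in range(n_meas)])
lam = np.linalg.solve(H, mu)
s_proj = lam @ a                             # projection s_|| = sum_i lambda_i a_i

# The projection reproduces the measurements exactly: (s_||, a_i)_1 = mu_i
mu_check = np.array([product1(s_proj, a[i]) for i in range(n_meas)])
print(np.allclose(mu_check, mu))             # True
```

+By construction, $s_{||}$ is the unique combination of the $a_i$ that reproduces all $n$ measurements.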
+
+This simple method returns an unsatisfactory estimate $s_{||}$ (Issartel et al. 2007) with artifacts in the form of peaks at the detector locations. These peaks are related to singularities of the adjoint functions at the detector locations. They can be interpreted as artificial information, with excessive attention paid to the surroundings of the detectors. This bias in the visibility of the monitoring network is described through an illumination function $E$:
+
+$$ E(x,y,t) = a(x,y,t)^T \mathbf{H}^{-1} a(x,y,t) \quad \text{with} \quad a(x,y,t) = \begin{bmatrix} a_1(x,y,t) \\ \vdots \\ a_n(x,y,t) \end{bmatrix} \quad (5) $$
+
+Since $E$ is positive and $\int_{\Sigma_{ext} \times T} E(x,y,t)\, dx\,dy\,dt = n$ (Issartel, 2005), the illumination is interpreted as a density of information. To remove the singularities of $E$ at the detector locations, a renormalizing function $f(x,y,t) > 0$ is introduced and the measurements are rewritten as:
+
+$$ \mu_i = \int_{\Sigma_{ext} \times T} f s a_{fi}(x, y, t) dxdydt = (s, a_{fi})_f \quad \text{with} \quad a_{fi}(x, y, t) = \frac{a_i(x, y, t)}{f(x, y, t)} \quad (6) $$
+
+This new expression is associated with the renormalized product $(\cdot, \cdot)_f$ weighted by $f$. In the new weighted geometry, the source is estimated according to its projection $s_{||} = \sum \lambda_i a_{fi}$, now with coefficients $\mathbf{\lambda} = \mathbf{H}_{f}^{-1} \mathbf{\mu}$ where $H_{f,ij} = (a_{fi}, a_{fj})_f$. The illumination becomes:
+
+$$ E_f(x,y,t) = f(x,y,t) a_f(x,y,t)^T H_f^{-1} a_f(x,y,t) \quad (7) $$
+
+Again, $E_f$ is positive and $\int_{\Sigma_{ext} \times T} E_f(x,y,t)\, dx\,dy\,dt = n$. The optimal weight $\phi$ obeys the renormalizing condition that for all $x, y, t$:
+
+$$ \phi(x,y,t) = E_\phi(x,y,t) \quad \text{equivalent to} \quad a_\phi(x,y,t)^T H_\phi^{-1} a_\phi(x,y,t) = 1 \quad (8) $$
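+The renormalizing condition (8) suggests a simple fixed-point iteration $\phi \leftarrow E_\phi$. The sketch below illustrates this on a synthetic discretization (the random sensitivity samples and quadrature weights are our own stand-ins, and the iteration scheme is a plausible reading of the condition, not necessarily the authors' numerical method):

```python
import numpy as np

# Hypothetical discrete setting: rows of `a` are the sampled functions a_i,
# `w` are quadrature weights. We iterate phi <- E_phi towards the
# renormalizing condition phi = E_phi of equation (8).
rng = np.random.default_rng(1)
n_cells, n_meas = 400, 5
w = np.full(n_cells, 1.0 / n_cells)
a = rng.random((n_meas, n_cells)) + 0.1

phi = np.ones(n_cells)
for _ in range(200):
    a_phi = a / phi                       # a_phi,i = a_i / phi
    # Gram matrix of the weighted product (u, v)_phi = sum_k w_k phi_k u_k v_k
    H_phi = (a_phi * w * phi) @ a_phi.T
    Hinv = np.linalg.inv(H_phi)
    # Illumination E_phi(k) = phi_k * a_phi(:,k)^T H_phi^{-1} a_phi(:,k)
    E_phi = phi * np.einsum('ik,ij,jk->k', a_phi, Hinv, a_phi)
    phi = E_phi

# The total information is conserved at every iterate:
# sum_k w_k E_phi(k) = trace(H_phi^{-1} H_phi) = n.
print(np.isclose(np.sum(w * phi), n_meas))    # True
```

+At every iterate the total information $\int E_\phi = n$ is conserved, so the iteration only redistributes the information density.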
+
+Figure 2 shows the optimal renormalizing function $\phi$, and figure 3 shows that the renormalized inversion is clearly improved compared to the classical one.
+
+Figure 2. The renormalizing function $\phi$ is shown at a given time with an arbitrary colour scale, the grey colour levels correspond to a factor 10. Notice that $\phi$ decreases away from the monitoring network and indicates a visibility mainly in the upwind direction.
\ No newline at end of file
diff --git a/samples/texts/5741006/page_3.md b/samples/texts/5741006/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a9a58aa306a49fe43b3ec589f6a38885d15e288
--- /dev/null
+++ b/samples/texts/5741006/page_3.md
@@ -0,0 +1,11 @@
+The function $s_{//\varphi}$ is continuously distributed through space and time. Thus, in general it cannot be used to estimate a point release directly. However, $s_{//\varphi}$ has an essential property emphasised in figure 3: in case the observed measurements are produced from a point release, $s_{//\varphi}$ is maximum at the location and date of this release. This makes it possible to identify the release from the observations. Indeed, the estimate associated with observations $\mu$ may be written $s_{//\varphi}(x,y,t) = \mu^T H_\varphi^{-1} a_\varphi(x,y,t)$. If the release is from location and time $(x_0,y_0,t_0)$ with intensity $q_0$, the measurements are proportional to the local value of the sensitivity functions, i.e. $\mu_0 = q_0 a(x_0,y_0,t_0) = q_0 \varphi(x_0,y_0,t_0) a_\varphi(x_0,y_0,t_0)$. From these observations $\mu_0$ we would obtain, writing $C_0 = q_0 \varphi(x_0,y_0,t_0)$, the renormalized estimate:
+
+$$s_{//\varphi 0}(x,y,t) = C_0 a_{\varphi}(x_0,y_0,t_0)^T H_{\varphi}^{-1} a_{\varphi}(x,y,t) \leq C_0 \quad (9)$$
+
+Figure 3. Non-renormalized estimation $s_{//}$ (top panels) and renormalized estimation $s_{//\varphi}$ (bottom panels) at times $t_0-75$ s, $t_0$ and $t_0+75$ s, obtained from 100 measurements generated from a point release (black cross) at $t_0$. The non-renormalized estimation is maximum at the detector locations whereas the renormalized estimation is maximum at the release location. The colour levels (arbitrary units) are separated by a factor 5, blue levels representing negative values.
+
+The Cauchy-Schwarz inequality, $(a^T H_\varphi^{-1} b)^2 \leq (a^T H_\varphi^{-1} a)(b^T H_\varphi^{-1} b)$, and the renormalizing condition (equation 8) imply that $s_{//\varphi 0}$ becomes maximum exactly at the sought location and time $(x_0,y_0,t_0)$. The renormalized assimilation thus provides a unique possibility to determine the origin of an accident from remotely observed concentrations. The quality of this identification is better, with a sharper maximum, if the release happens in a well-illuminated region (Sharan et al., 2009). This is a criterion for network design.
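+This argmax property is easy to check numerically. In the toy sketch below we fabricate sensitivity columns normalized so that condition (8) holds by construction, and verify that the estimate generated from a point release peaks at the release cell (the data are synthetic, not dispersion output):

```python
import numpy as np

# After renormalization every column a_phi(x) satisfies
# a_phi^T H^{-1} a_phi = 1 (equation 8), so by Cauchy-Schwarz the estimate
# produced by a point release at x0 is maximal exactly at x0.
rng = np.random.default_rng(2)
n_meas, n_cells = 5, 300
H = np.cov(rng.random((n_meas, 50)))          # some SPD matrix playing H_phi
Hinv = np.linalg.inv(H)
b = rng.standard_normal((n_meas, n_cells))
norms = np.sqrt(np.einsum('ik,ij,jk->k', b, Hinv, b))
a_phi = b / norms                             # enforce a_phi^T Hinv a_phi = 1

x0 = 137                                      # hypothetical release cell
mu0 = a_phi[:, x0]                            # measurements ~ a_phi(x0), C0 = 1
s_est = mu0 @ Hinv @ a_phi                    # s_//phi,0(x) up to the factor C0

print(int(np.argmax(s_est)) == x0)            # True: peak at the release cell
print(np.isclose(s_est[x0], 1.0))             # True: the Cauchy-Schwarz bound
```

+Every other cell gives a value strictly below the bound $C_0$, which is what makes the maximum identifiable.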
+
+## NUMBER OF MEASUREMENTS REQUIRED FOR IDENTIFYING SIMULTANEOUS RELEASES
+
+In order to obtain criteria for designing a monitoring network, it is necessary to determine the arrangement and number of detectors required. In particular, it is important to discriminate several small releases simultaneously detected by various detectors from a bigger one detected by the whole network. The theory for the identification of single point releases can be extended to the identification of several simultaneous point releases. Figure 4 shows that, when a set of measurements $\mu_0$ has been generated from several point releases, the estimated function $s_{//\varphi}$ displays local maxima close to the location of each of the releases. This property is supported by theoretical arguments and may be exploited to identify the locations and intensities of the releases from efficient computations, with accuracy limited only by the noise in the data.
\ No newline at end of file
diff --git a/samples/texts/7686943/page_1.md b/samples/texts/7686943/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..69ff408d947b1706b7727f9e68b4663d414d1156
--- /dev/null
+++ b/samples/texts/7686943/page_1.md
@@ -0,0 +1,25 @@
+Fatigue-Aware Bandits for Dependent Click Models
+
+Junyu Cao,¹* Wei Sun,² Zuo-Jun (Max) Shen,¹ Markus Ettl²
+
+¹University of California, Berkeley, California 94720
+²IBM Research, Yorktown Heights, New York 10591
+
+Abstract
+
+As recommender systems send a massive amount of content to keep users engaged, users may experience fatigue, which is contributed to by 1) an overexposure to irrelevant content and 2) boredom from seeing too many similar recommendations. To address this problem, we consider an online learning setting where a platform learns a policy to recommend content that takes user fatigue into account. We propose an extension of the Dependent Click Model (DCM) to describe users' behavior. We stipulate that for each piece of content, its attractiveness to a user depends on its intrinsic relevance and a discount factor which measures how much similar content has been shown. Users view the recommended content sequentially and click on the ones that they find attractive. Users may leave the platform at any time, and the probability of exiting is higher when they do not like the content. Based on users' feedback, the platform learns the relevance of the underlying content as well as the discounting effect due to content fatigue. We refer to this learning task as the "fatigue-aware DCM bandit" problem. We consider two learning scenarios depending on whether the discounting effect is known. For each scenario, we propose a learning algorithm which simultaneously explores and exploits, and we characterize its regret bound.
+
+# 1 Introduction
+
+Recommender systems increasingly influence how users discover content. Some well-known examples include Facebook's News Feed, movie recommendations on Netflix, Spotify's Discover Weekly playlists, etc. To compete for users' attention and time, platforms push out a vast amount of content when users browse their websites or mobile apps. While a user sifts through the proliferation of content, her experience could be adversely influenced by: 1) marketing fatigue which occurs due to an overexposure to irrelevant content, and 2) content fatigue which refers to the boredom from seeing too many similar recommendations.
+
+Motivated by this phenomenon, we consider an online learning setting with a “fatigue-aware” platform, i.e., it learns a policy to select a sequence of recommendations for
+
+a user, while being conscious that both the choices of content and their placement could influence the experience. We propose a variant of the Dependent Click Model (DCM), where a user can click on multiple items¹ that she finds attractive. For each piece of content, its attractiveness depends on its intrinsic relevance and a discount factor that measures how many similar recommendations have already been shown to this user. The user may leave the platform at any time, and the probability of exiting is higher when she does not like the content, reflecting the presence of marketing fatigue. Meanwhile, showing too much relevant but similar content could also reduce its attractiveness due to content fatigue and drive bored users to abandon the platform. The platform whose objective is to maximize the total number of clicks needs to learn users' latent preferences in order to determine the optimal sequence of recommendations, based on users' feedback. We refer to this online learning task which the platform faces as fatigue-aware DCM bandits.
+
+The contribution of our work is fourfold. Firstly, we propose a novel model which captures the effects of marketing fatigue and content fatigue on users' behavior. In the original DCM model (Guo, Liu, and Wang 2009; Katariya et al. 2016b), users can only exit the platform upon clicking on a recommendation. In reality, users may leave at any time, and they are more likely to exit when they are not engaged with the recommendations. We extend the DCM model to capture marketing fatigue by incorporating exiting behavior after both non-clicks and clicks. To reflect content fatigue, we incorporate a discount factor into DCM such that it penalizes repetitive content of similar types and promotes diversity in its recommendations. Secondly, even in the offline setting where all the information is known, the optimization problem of selecting a sequence of recommendations, which is central to the learning task, is combinatorial in nature without a straightforward efficient algorithm. We propose a polynomial-time algorithm for this problem. Thirdly, for the online setting, we first consider a scenario where the discount factor is known (e.g., when it can be estimated from historical data). We propose a learning algorithm and quantify the regret bound. Lastly, we consider a more general scenario where both the discount factor and intrinsic relevance
+
+*Correspondence to: Junyu Cao (jycao@berkeley.edu)
+Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+¹We use “content” and “items” interchangeably in this work.
\ No newline at end of file
diff --git a/samples/texts/7686943/page_2.md b/samples/texts/7686943/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..02f1188c6f89233e142e5992e286f3b6ba9c7f6f
--- /dev/null
+++ b/samples/texts/7686943/page_2.md
@@ -0,0 +1,29 @@
+of items need to be learned. This is a significantly more chal-
+lenging setting to analyze because 1) we only observe partial
+feedback on item's attractiveness which depends on both the
+discount factor and the relevance; 2) the discount factor is
+dependent on the content that we recommend. For this sce-
+nario, we also establish a regret bound for the algorithm that
+we propose.
+
+Our paper is organized as follows. In Section 2, we review related work. In Section 3, we introduce the fatigue-aware DCM model and provide an offline algorithm to find the optimal sequence when all parameters are known. In Section 4 and 5, we introduce our learning problems for two scenarios depending on whether the discount factor is known and analyze the regret associated with the proposed algorithms. In Section 6, we perform several numerical experiments to evaluate the performance of our algorithms.
+
+## 2 Related Literature
+
+Cascade Model (Chuklin, Markov, and Rijke 2015; Craswell et al. 2008) is a popular model that describes users' single-click behavior. Several variants of this model have been considered in the bandit literature (Combes et al. 2015; Kveton et al. 2015a; 2015b). One of its limitations is that the positions of the items do not influence the reward, since a user is expected to browse the list of recommendations until she clicks on the content that she likes. DCM (Guo, Liu, and Wang 2009) generalizes the Cascade Model to allow multiple clicks by incorporating a parameter to indicate the probability that a user resumes browsing after clicking on a recommendation. On the other hand, if a user does not click, she views the next item with certainty unless the sequence runs out. Katariya et al. 2016b analyzed one type of DCM bandit setting. However, the reward in Katariya et al. 2016b does not exactly correspond to the number of clicks. In our DCM bandit setting, the rewards exactly correspond to the number of users' clicks, which is a ubiquitous measure of recommenders' effectiveness. In addition, as our model allows users to exit at any time, the position of the items matters, as users may exit the platform early if they do not like what has been shown initially.
+
+Alternative multi-click models include Position-Based Model (PBM) (Richardson, Dominowska, and Ragno 2007). In PBM, the click probability of an item is a product of a static position-based examination probability and the item's intrinsic relevance score. The following papers have investigated PBM bandits. More specifically, in Lagrée, Vernade, and Cappé 2016, the position-based examination probability is assumed known, whereas this quantity has to be learned together with item attractiveness in Komiyama, Honda, and Takeda 2017. In particular, Komiyama, Honda, and Takeda 2017 derive a parameter-dependent regret bound, which assumes that the gap between parameters is strictly positive. While our model also includes a position-dependent parameter which discounts the attractiveness of an item, this quantity does not merely depend on the position. The discount factor also depends on the order and the content types of the recommendations. For our setting, we derive a parameter-independent regret bound.
+
+Our work also bears some resemblance to the stochastic rank-1 bandits (Katariya et al. 2016a; 2017), where a row arm and a column arm are pulled simultaneously in each round, and the reward corresponds to the product of the two values. In our setting, item attractiveness is a product of the discount factor and the item's intrinsic relevance. However, in rank-1 bandits, one is only interested in determining the particular pair that yields the highest reward, whereas we need to learn and rank all items, which is significantly harder.
+
+An important topic of recommender systems is fatigue control (Kapoor et al. 2015; Ma, Liu, and Shen 2016). One way to manage content fatigue is to introduce diversity in recommendations (Radlinski, Kleinberg, and Joachims 2008; Ziegler et al. 2005). The linear submodular function has been used to tackle the diversification problem in bandit setting (Yu, Fang, and Tao 2016; Yue and Guestrin 2011). Warlop, Lazaric, and Mary 2018 propose a reinforcement learning framework that uses a linear reward structure which captures the effect of the recent recommendations on user's preferences. However, instead of recommending specific items as in our work, Warlop, Lazaric, and Mary 2018 restrict to recommending genres or content types to a user. While Cao and Sun 2019; Wang and Tulabandhula 2019 have also studied marketing fatigue in an online setting, their setting only captures a single click, whereas our model allows multiple clicks and incorporates the effect of content fatigue in addition to marketing fatigue.
+
+# 3 Problem Formulation
+
+In this section, we formally introduce the fatigue-aware DCM model and present an algorithm to determine the optimal sequence of recommendations in an offline setting where all underlying parameters are known.
+
+## 3.1 Setting
+
+Suppose there are *N* available items (e.g., songs, videos, articles) for the platform to choose from, denoted as $[N]$. Each item belongs to one of *K* types (e.g., genres, categories, topics). Let $\mathbf{S} = (S_1, \dots, S_m)$ denote a sequence of recommended items. For each item $i$, $u_i$ denotes its intrinsic relevance to a user, and $C(i) = j$ if item $i$ belongs to type $j$. We define $h_i(\mathbf{S})$ as the number of items with type $C(i)$ shown before item $i$. Similar to Warlop, Lazaric, and Mary 2018, we model the impact of showing too much similar content as a discount factor on items' attractiveness. More precisely, we define the attractiveness of item $i$ as $z_i(\mathbf{S}) := f(h_i(\mathbf{S}))u_i$, where $f(\cdot)$ represents the discount factor on users' preferences due to content fatigue, which depends on how many items of the same type have been shown. $f(\cdot)$ is assumed to be a decreasing function. Without loss of generality, we assume $f(0) = 1$.
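+As a minimal sketch (the names and toy numbers are ours, not the paper's), the fatigue-discounted attractiveness $z_i(\mathbf{S}) = f(h_i(\mathbf{S}))u_i$ can be computed in one pass over the sequence:

```python
# Illustrative sketch of the attractiveness model z_i(S) = f(h_i(S)) * u_i,
# where h_i(S) counts how many items of item i's type appear before it.

def attractiveness(seq, u, ctype, f):
    """Return z_i(S) for each item in seq, in display order."""
    shown = {}                                  # type -> count shown so far
    z = []
    for i in seq:
        h = shown.get(ctype[i], 0)              # h_i(S)
        z.append(f(h) * u[i])
        shown[ctype[i]] = h + 1
    return z

# Toy data: 4 items, 2 types, geometric discount f(r) = 0.5**r (f(0) = 1).
u = {0: 0.9, 1: 0.8, 2: 0.6, 3: 0.4}
ctype = {0: 'music', 1: 'music', 2: 'news', 3: 'music'}
f = lambda r: 0.5 ** r
print(attractiveness([0, 1, 2, 3], u, ctype, f))  # [0.9, 0.4, 0.6, 0.1]
```

+Note how the third 'music' item is discounted by $f(2) = 0.25$ even though its intrinsic relevance is unchanged.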
+
+Given a list of recommendations $\mathbf{S} = (S_1, \dots, S_m)$, a user examines it sequentially, starting from $S_1$. The user clicks on item $i$ when she finds it attractive. The platform receives a reward for every click. DCM models the multi-click behavior by incorporating a parameter $g$ which is the probability that the user will see more recommendations after a click. In case of no click or skip, unlike in the original DCM
\ No newline at end of file
diff --git a/samples/texts/7686943/page_3.md b/samples/texts/7686943/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..70a0ce9917a8bfb30940b10b30658623683cb942
--- /dev/null
+++ b/samples/texts/7686943/page_3.md
@@ -0,0 +1,47 @@
+(Guo, Liu, and Wang 2009; Katariya et al. 2016b) where a user examines the next item with probability 1, we use $q$ to denote the probability that the user will resume browsing. As users are more likely to stay on the platform and continue browsing after consuming a piece of good content², we let $q \le g$. The interaction between a user and recommended content is illustrated in Fig. 1.
+
+Figure 1: User behavior in a fatigue-aware DCM.
+
+The probability of clicking on item $i$ from a sequence $\mathbf{S}$, denoted as $\mathbb{P}_i(\mathbf{S})$, depends on its position in the list, its own content, as well as the content of other items shown previously. Formally,
+
+$$ \mathbb{P}_i(\mathbf{S}) = \begin{cases} u_i, & \text{if } i \in S_1 \\ \prod_{k=1}^{l-1} (gz_{I(k)}(\mathbf{S}) + q(1 - z_{I(k)}(\mathbf{S}))) z_i(\mathbf{S}), & \text{if } i \in S_l \end{cases} $$
+
+where $I(\cdot)$ denotes the index function, i.e., $I(k) = i$ if and only if $S_k = \{i\}$. When $i$ is the first item of the sequence, the probability of clicking is simply $u_i$, which is its intrinsic relevance. For the remainder of the sequence, the probability of clicking on $i$ as the $l^{th}$ item is the joint probability that 1) the user finds item $i$ attractive after taking content fatigue into account, $z_i(\mathbf{S})$; and 2) she remains on the platform after examining the first $l-1$ items, $\prod_{k=1}^{l-1} (gz_{I(k)}(\mathbf{S}) + q(1 - z_{I(k)}(\mathbf{S})))$, which accounts for both clicking and skipping behavior.
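+The click probabilities telescope: the probability of reaching slot $l$ is the product of the per-slot continuation probabilities $g z + q(1-z)$. A short sketch (toy parameters of our choosing, not from the paper):

```python
# Sketch of the click probabilities P_i(S) and the expected number of clicks
# sum_i P_i(S): given attractiveness values z in display order, a user
# examines slot l only if she stayed after each of the first l-1 slots.

def click_probs(z, g, q):
    """P of clicking each item, for attractiveness z[0..m-1] in order."""
    probs, stay = [], 1.0                    # stay = P(user reaches this slot)
    for zl in z:
        probs.append(stay * zl)              # examine and find it attractive
        stay *= g * zl + q * (1.0 - zl)      # continue after a click / a skip
    return probs

z = [0.9, 0.4, 0.6, 0.1]                     # e.g. discounted attractiveness
g, q = 0.8, 0.5                              # resume prob. after click / skip
p = click_probs(z, g, q)
expected_reward = sum(p)
print(round(expected_reward, 3))             # 1.527
```

+Because $q \le g$, every skip shortens the expected browsing horizon more than a click does, which is why item order matters in this model.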
+
+## 3.2 Platform's optimization problem
+
+The platform's objective is to maximize the total number of clicks by optimizing the sequence of recommended items. We use $R(\mathbf{S})$ to denote the platform's reward, whose expectation is $E[R(\mathbf{S})] = \sum_{i \in [N]} \mathbb{P}_i(\mathbf{S})$. Thus, the platform's optimization problem can be written as
+
+$$ \begin{align} \max_{\mathbf{S}} & \quad E[R(\mathbf{S})] \tag{1} \\ \text{s.t.} \quad & S_i \cap S_j = \emptyset, \forall i \neq j. \nonumber \end{align} $$
+
+²We provide some evidence of this phenomenon based on the estimates from a real dataset, with more details in Section 6.
+
+The constraint requires no duplicated items in the recommendations. We define $\mathbf{S}^*$ as the optimal sequence of content, and $\mathbb{R}^*$ as the corresponding maximum reward. While this problem is combinatorial in nature, Theorem 3.1 shows that this problem is polynomial-time solvable and the optimal sequence $\mathbf{S}^*$ can be identified by Algorithm 1.
+
+**Algorithm 1:** Determine the optimal sequence $\mathbf{S}^*$ to the offline combinatorial problem
+
+1 **for** *i* = 1 : *K* **do**
+
+2 Sort $u_j$ for $C(j) = i$ in decreasing order $o(\cdot)$, where $o(j) = r$ if item $j$ is at the $(r+1)^{th}$ position;
+
+3 Set $\lambda_j = u_j f(o(j))$;
+
+4 **end**
+
+5 Sort $\lambda_j$ for all $j = 1 : N$ in decreasing order $o'(\cdot)$, where $o'(r) = j$ if item $j$ is at the $r^{th}$ position;
+
+6 Set $\mathbf{S} = (o'(1), o'(2), \dots, o'(N))$.
+
+**Theorem 3.1** *Algorithm 1 finds the optimal sequence S\*.*
+
+Due to the space limit, we present the proof outline in the main paper and the detailed proofs are included in the Supplementary Material.
+
+**Proof outline:** We prove the result by contradiction. If the optimal sequence $\mathbf{S}^*$ is not ordered by Algorithm 1, then there exists a neighboring pair $I(i)$ and $I(i+1)$ such that either 1) $I(i)$ and $I(i+1)$ belong to the same type and $u_{I(i)} < u_{I(i+1)}$ or 2) $I(i)$ and $I(i+1)$ belong to different types and $f(h_{I(i)}(\mathbf{S}^*))u_{I(i)} < f(h_{I(i+1)}(\mathbf{S}^*))u_{I(i+1)}$. We then show that swapping items $I(i)$ and $I(i+1)$ increases the expected reward, which is a contradiction to $\mathbf{S}^*$ being optimal. □
+
+For each category, Algorithm 1 first ranks items based on their intrinsic relevance. If an item has the $(r+1)^{th}$ largest relevance score within its category, we multiply the score by $f(r)$ to compute its attractiveness. Next, the algorithm ranks items across all categories in decreasing order of attractiveness. The complexity of Algorithm 1 is $O(N \log N)$. We also observe that the optimal sequence is completely determined by the items' intrinsic relevance $\mathbf{u}$ and the discount factor $f$, and is independent of the resuming probabilities $g$ and $q$. In other words, the platform only needs to learn $\mathbf{u}$ and $f$ for the online learning task. Nevertheless, the resuming probabilities influence the learning speed, as we will investigate in the following sections.
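+The two sorting passes can be sketched as follows (illustrative data structures; the paper gives only pseudocode):

```python
# Sketch of Algorithm 1: within each type, rank items by relevance and
# discount the r-th ranked item by f(r); then sort all items globally by
# the resulting attractiveness lambda_j.
from collections import defaultdict

def optimal_sequence(u, ctype, f):
    by_type = defaultdict(list)
    for j in u:
        by_type[ctype[j]].append(j)
    lam = {}
    for items in by_type.values():
        items.sort(key=lambda j: -u[j])        # within-type rank by relevance
        for r, j in enumerate(items):
            lam[j] = f(r) * u[j]               # attractiveness after discount
    return sorted(u, key=lambda j: -lam[j])    # global sort, O(N log N)

u = {0: 0.9, 1: 0.8, 2: 0.6, 3: 0.4}
ctype = {0: 'music', 1: 'music', 2: 'news', 3: 'music'}
f = lambda r: 0.5 ** r
print(optimal_sequence(u, ctype, f))           # [0, 2, 1, 3]
```

+Because $f$ is decreasing, the discounted scores within a type preserve the relevance order, so the global sort never interleaves same-type items out of order.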
+
+## 4 Learning with Known Discount Factor $f$
+
+In the previous section, we assume all the parameters are known to the platform. It is natural to ask what the platform should do in the absence of such knowledge. Beginning from this section, we will present learning algorithms to the fatigue-aware DCM bandit problem and characterize the corresponding regret bound. We first investigate a scenario where the discounting effect caused by content fatigue is known (e.g., it could be estimated from historical data), and the platform needs to learn the item’s intrinsic relevance
\ No newline at end of file
diff --git a/samples/texts/7686943/page_4.md b/samples/texts/7686943/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..4ef566e04a573b0afc84b11a95655ff205c440bc
--- /dev/null
+++ b/samples/texts/7686943/page_4.md
@@ -0,0 +1,47 @@
+**Algorithm 2:** [FA-DCM-P] An algorithm for fatigue-aware DCM bandit when *f* is known
+
+1 Initialization: Set $u_{i,0}^{UCB} = 1$ and $T_i(0) = 0$ for all $i \in [N]$; $t = 1$;
+
+2 while $t < T$ do
+
+3 Compute $\tilde{\mathbf{S}}^t = \arg\max_{\mathbf{S}} E[R(\mathbf{S}, \mathbf{u}_{t-1}^{UCB})]$ according to Theorem 3.1;
+
+4 Offer sequence $\tilde{\mathbf{S}}^t$, observe the user's feedback;
+
+5 Update $T_i(t)$; update $\mathbf{u}_t^{UCB}$ according to Equation (2); $t = t + 1$;
+
+6 end
+
+**Lemma 4.1** For any $t$ and $j \in [N]$ we have
+
+$$P\left(u_{j,t}^{UCB} - \sqrt{8 \frac{\log t}{T_j(t)}} < u_j < u_{j,t}^{UCB}\right) \geq 1 - \frac{2}{t^4}.$$
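+A quick empirical sanity check of the Lemma 4.1-style index on synthetic Bernoulli feedback (the simulation setup is ours): with the $\sqrt{8 \log t / T_j(t)}$ radius, the upper confidence bound should essentially never fall below the true relevance.

```python
# Simulate clicks on an item with true relevance u_true and count how often
# the index u_hat + sqrt(8 log t / T) fails to upper-bound u_true.
import math
import random

random.seed(0)
u_true, t = 0.3, 10_000
trials, violations = 500, 0
for _ in range(trials):
    T = 200                                   # observations of item j so far
    u_hat = sum(random.random() < u_true for _ in range(T)) / T
    u_ucb = u_hat + math.sqrt(8 * math.log(t) / T)
    violations += (u_ucb < u_true)
print(violations / trials)                    # 0.0: the bound never fails here
```

+At these values the exploration radius alone (about 0.61) already exceeds $u_{true}$, so the failure event in Lemma 4.1 is far out in the tail.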
+
+**Lemma 4.2** Assume $\mathbf{S}^*$ is the optimal sequence of messages. Under the condition that $0 \leq \mathbf{u} \leq \mathbf{u}'$, we have
+
+$$E[R(\mathbf{S}^*, \mathbf{u}')] \geq E[R(\mathbf{S}^*, \mathbf{u})].$$
+
+**Proof of Lemma 4.2.** We prove this result by induction. For the optimal sequence $\mathbf{S}^* = (S_1, S_2, \dots, S_N)$, since $u'_{I(N)} \geq u_{I(N)}$, we have $E[R(S_N, \mathbf{u}')] \geq E[R(S_N, \mathbf{u})]$. Assume the inequality
+
+$$E[R((S_{k+1}, \dots, S_N), \mathbf{u}')] \geq E[R((S_{k+1}, \dots, S_N), \mathbf{u})]$$
+
+holds; we now prove that $E[R((S_k, \dots, S_N), \mathbf{u}')] \geq E[R((S_k, \dots, S_N), \mathbf{u})]$. Indeed,
+
+$$
+\begin{align*}
+& E[R((S_k, \dots, S_N), \mathbf{u}')] \\
+&= u'_{I(k)} + (g u'_{I(k)} + q(1 - u'_{I(k)})) E[R((S_{k+1}, \dots, S_N), \mathbf{u}')] \\
+&= u'_{I(k)} + ((g-q)u'_{I(k)} + q) E[R((S_{k+1}, \dots, S_N), \mathbf{u}')] \\
+&\geq u_{I(k)} + ((g-q)u_{I(k)} + q) E[R((S_{k+1}, \dots, S_N), \mathbf{u})] \\
+&= E[R((S_k, \dots, S_N), \mathbf{u})],
+\end{align*}
+$$
+
+where the inequality uses $g \geq q$ together with the induction hypothesis. Therefore, we reach the desired result. ■
+
+**Theorem 4.3** *The regret of Algorithm FA-DCM-P during time T is bounded by*
+
+$$\text{Regret}_{\pi}(T) \le C \left( \frac{1-g^N}{1-g} \right)^{3/2} \sqrt{NT \log T}.$$
+
+for some constant $C$.
+
+**Proof outline:** Define the event $A_{i,t} = \{u_{i,t}^{UCB} - \sqrt{8 \frac{\log t}{T_i(t)}} < u_i < u_{i,t}^{UCB}\}$ and $E_t = \bigcap_{i=1}^N A_{i,t}$. On the “large probability” event $E_t$, with Lemma 4.2, we obtain $E[R(\tilde{\mathbf{S}}^t, \mathbf{u})] \le E[R(\mathbf{S}^*, \mathbf{u})] \le E[R(\mathbf{S}^*, \mathbf{u}^{UCB})] \le E[R(\tilde{\mathbf{S}}^t, \mathbf{u}^{UCB})]$. This implies that $E[R(\mathbf{S}^*, \mathbf{u})] - E[R(\tilde{\mathbf{S}}^t, \mathbf{u})] \le E[R(\tilde{\mathbf{S}}^t, \mathbf{u}^{UCB})] - E[R(\tilde{\mathbf{S}}^t, \mathbf{u})]$. Let $I_t(\cdot)$ denote the index function for user $t$. We then show that the cumulative
\ No newline at end of file
diff --git a/samples/texts/7686943/page_5.md b/samples/texts/7686943/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..2b422f242f3b4c0037960a84f6d12982b809aaf0
--- /dev/null
+++ b/samples/texts/7686943/page_5.md
@@ -0,0 +1,90 @@
+difference on event $E_t$ can be bounded from above by
+
+$$
+\begin{align*}
+& E_{\pi} \left[ \sum_{t=1}^{T} \sum_{i=1}^{|\tilde{\mathbf{S}}^t|} \prod_{k=1}^{i-1} \left((g-q)f(h_{I_t(k)}(\tilde{\mathbf{S}}^t)) u_{I_t(k),t}^{UCB} + q\right) f(h_{I_t(i)}(\tilde{\mathbf{S}}^t)) u_{I_t(i),t}^{UCB} 1(E_t) \right] \\
+& \quad - E_{\pi} \left[ \sum_{t=1}^{T} \sum_{i=1}^{|\tilde{\mathbf{S}}^t|} \prod_{k=1}^{i-1} \left((g-q) f(h_{I_t(k)}(\tilde{\mathbf{S}}^t)) u_{I_t(k)} + q\right) f(h_{I_t(i)}(\tilde{\mathbf{S}}^t)) u_{I_t(i)} 1(E_t) \right] \\
+& \leq \frac{1-g^N}{1-g} E_{\pi} \left[ \sum_{t=1}^{T} \sum_{j=1}^{N} \kappa_{j,t} (u_{j,t}^{UCB} - u_{j}) 1(E_t) \right],
+\end{align*}
+$$
+
+where $\kappa_{j,t} = \prod_{k=1}^{l_t(j)-1} ((g-q)f(h_{I_t(k)}(\tilde{\mathbf{S}}^t))u_{I_t(k)} + q)$ denotes the probability that user $t$ observes item $j$, and $l_t(j)$ specifies the position of item $j$ in $\tilde{\mathbf{S}}^t$. We then show that the regret term can be further bounded by $\frac{1-g^N}{1-g}\sqrt{2\log T}\sum_{i\in[N]}\sqrt{E_\pi[T_i(T)]}$. Since $\sum_{i\in[N]} E_\pi[T_i(T)] \le \frac{1-g^N}{1-g}T$, we can bound it by $C_1(\frac{1-g^N}{1-g})^{3/2}\sqrt{NT\log T}$. To complete the proof, we show that on the “small probability” event $E_t^c$, the regret can also be bounded. $\square$
+
+The following results hold under two special cases of the DCM setting.
+
+**Corollary 4.4** When $g = q = 1$, the regret can be bounded by
+
+$$
+\text{Regret}_{\pi}(T) \leq C N^2 \sqrt{T \log T}.
+$$
+
+**Corollary 4.5** When at most *L* items can be recommended to a single user, the regret can be bounded by
+
+$$
+\mathrm{Regret}_{\pi}(T) \le CL^{3/2}\sqrt{NT\log T}.
+$$
+
+Corollary 4.4 characterizes the regret bound under a special setting where users never exit the platform and browse all the recommendations. In Theorem 4.3, we do not limit the length of the sequence, as online recommenders continuously send content to users to keep them on the platforms. If at most *L* items will be recommended, the corresponding regret is shown in Corollary 4.5. The detailed proofs can be found in the Supplementary Material.
+
+# 5 Learning with Unknown Discount Factor f
+
+In this section, we consider a more general scenario where the discount factor $f$ is also unknown and needs to be learned, in conjunction with learning the items’ intrinsic relevance scores $\mathbf{u}$. Before delving into our approach, we first discuss a few alternative approaches and their trade-offs. A straightforward approach towards this problem is to treat each combination of $f(i)$ and $u_j$ as an arm whose expected reward is $z_{ij} := f(i)u_j$. However, it is then not clear how to solve the offline combinatorial problem. An alternative approach is to view the problem as a generalized linear bandit, which can be solved by GLM-UCB (Li, Lu, and Zhou 2017). Taking logarithms, we have $\log(z_{ij}) = \log(f(i)) + \log(u_j)$, which is a linear model. However, this approach is problematic for the following reasons: a) when $u_j$ and $f(i)$ go to 0, $\log(u_j)$ and $\log(f(i))$ can go to negative infinity, which implies that the parameter space is unbounded; b) in each step, GLM-UCB needs to compute the maximum likelihood estimators, which is computationally costly. We now present our approach, a UCB-based algorithm for this learning task which circumvents the limitations associated with the alternative approaches discussed earlier.
+
+## 5.1 Algorithm FA-DCM
+
+Throughout this section, we define *M* as a threshold such that after showing *M* items from the same category, the discounting effect on item attractiveness due to content fatigue stays the same. That is, *f*(*r*) = *f*(*r* + 1) for *r* ≥ *M*. Note that the maximum possible value for *M* could be *N*.
+
+Following the notations introduced in Section 4.1, we define the unbiased estimators for $f_i$ and $u_j$ as
+
+$$
+\hat{f}_i^t = \frac{1}{\hat{T}_i(t)} \sum_j \sum_{r \in T_{ij}(t)} \frac{z_{ij}^r}{\hat{u}_j^t}, \quad \text{and} \quad \hat{u}_j^t = \frac{\sum_{r \in T_{0j}(t)} z_{0j}^r}{T_{0j}(t)},
+$$
+
+where $\hat{T}_i(t) = \sum_j T_{ij}(t)$, the subscript in $T_{0j}$ and $z_{0j}^r$ represents the event $(0,j)$, that is, item $j$ is the first message from category $C(j)$ to be shown. The corresponding upper confidence bounds at time $t$ are defined as
+
+$$
+f_t^{UCB}(i) = \hat{f}_i^t + \Delta_i^t, \quad \text{and} \quad u_{j,t}^{UCB} = \hat{u}_{j,t} + \sqrt{2 \frac{\log t}{T_{0j}(t)}}, \tag{4}
+$$
+
+where
+
+$$
+\Delta_i^t = \sum_j \sum_{r \in T_{ij}(t)} \frac{z_{ij}^r}{\hat{T}_i(t)}
+\frac{1}{(\hat{\mu}_j^t)^2 \left( 1 - \frac{1}{\hat{\mu}_j^t} \sqrt{\frac{\log t}{T_{0j}(t)}} \right)}
+\, \mathbb{1}\left( \hat{\mu}_j^t \ge \sqrt{\frac{\log t}{T_{0j}(t)}} \right) \sqrt{\frac{\log t}{T_{0j}(t)}} + \sqrt{\frac{\log t}{\hat{T}_i(t)}}.
+$$
+
+$\Delta_i^t$ consists of two parts: the first is the exploration term related to $\mu_j$ for all $j \in [N]$, and the second is the exploration term with respect to $f(i)$.
+
+We propose a learning algorithm, which we refer to as FA-DCM, for the scenario where both the function $f$ and $\mathbf{u}$ need to be learned. Define $\theta^{UCB} = (f^{UCB}, \mathbf{u}^{UCB})$ and $\theta = (f, \mathbf{u})$. At time $t$, we recommend a sequence $\tilde{\mathbf{S}}^t$ based on $\theta_t^{UCB}$. Suppose there exists an item $i$ for which we have not collected sufficient feedback, i.e., $T_i(t) < \alpha T^{2/3}$, where $\alpha$ is a tuning parameter. This item is then inserted at the beginning of the sequence $\tilde{\mathbf{S}}^t$ to guarantee that it will be examined. This procedure ensures that we obtain a reliable estimate of $u_i$, since the discount factor satisfies $f(0) = 1$ for the first item. Once the feedback of the user at time $t$ is observed, we update $f_t^{UCB}$ and $\mathbf{u}_t^{UCB}$. Finally, some estimated values of $f_t^{UCB}(i)$ may violate the decreasing property of $f$, i.e., $f_t^{UCB}(i) > f_t^{UCB}(i-1)$; in that case we correct them to enforce the property.
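The UCB indices in Equation (4) can be sketched in code. This is an illustrative sketch only, not the authors' implementation: the function names and arguments are ours, and the $\mu$-related part of $\Delta_i^t$ is collapsed into a single `mu_bonus` argument rather than reproduced in full.

```python
import math

def u_ucb(click_sum: float, t_0j: int, t: int) -> float:
    """UCB index for item relevance u_j (Eq. 4): empirical mean of the
    feedback collected when the item was shown first in its category,
    plus an exploration bonus sqrt(2 log t / T_{0j}(t))."""
    if t_0j == 0:
        return 1.0  # optimistic initialization, as in Algorithm 3
    return click_sum / t_0j + math.sqrt(2 * math.log(t) / t_0j)

def f_ucb(f_hat: float, t_i_hat: int, t: int, mu_bonus: float = 0.0) -> float:
    """UCB index for the discount factor f(i): the estimate plus Delta_i^t.
    `mu_bonus` stands in for the first (mu-related) part of Delta_i^t;
    the second part is sqrt(log t / T_i(t))."""
    if t_i_hat == 0:
        return 1.0  # optimistic initialization
    return f_hat + mu_bonus + math.sqrt(math.log(t) / t_i_hat)
```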
\ No newline at end of file
diff --git a/samples/texts/7686943/page_6.md b/samples/texts/7686943/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..3e4a6edb7618703a0f1b6c4dffe128179542ec0e
--- /dev/null
+++ b/samples/texts/7686943/page_6.md
@@ -0,0 +1,113 @@
+**Algorithm 3:** [FA-DCM] An algorithm for fatigue-aware DCM when f is unknown
+
+1 Initialization: Set $u_{j,0}^{UCB} = 1$ and $f^{UCB}(i) = 1$ for all $j \in [N]$ and $i \in [M]$; $t = 1$;
+
+2 while $t < T$ do
+
+3 Compute $\tilde{S}^t = \operatorname*{argmax}_{\mathbf{S}} E[R(\mathbf{S}, \theta_{t-1}^{UCB})]$ according to Theorem 3.1;
+
+4 if there exists $i \in [N]$ such that $T_i(t) < \alpha T^{2/3}$ then
+
+5 Insert item $i$ at the beginning of $\tilde{S}^t$ as the first item;
+
+6 end
+
+7 Offer sequence $\tilde{S}^t$, observe the user's feedback;
+
+8 Update $u_{j,t}^{UCB}$ for all $j$ and $f_t^{UCB}(i)$ for all $i$ according to Equation (4);
+
+9 for $i=1:M$ do
+
+10 if $f_t^{UCB}(i) > f_t^{UCB}(i-1)$ then
+
+11 $f_t^{UCB}(i) = f_t^{UCB}(i-1)$;
+
+12 end
+
+13 end
+
+14 $t = t + 1$;
+
+15 end
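The correction in lines 9-13 restores the decreasing property of $f$ with a single running-minimum pass over the UCB estimates. A minimal sketch (our own illustration; `f_ucb` is a plain list with `f_ucb[0]` playing the role of $f_t^{UCB}(0)$):

```python
def enforce_decreasing(f_ucb: list) -> list:
    """Clip each f^UCB(i) to its predecessor so the sequence is
    non-increasing, as in lines 9-13 of Algorithm 3."""
    out = list(f_ucb)
    for i in range(1, len(out)):
        if out[i] > out[i - 1]:
            out[i] = out[i - 1]
    return out
```

For example, `enforce_decreasing([1.0, 0.8, 0.9, 0.5])` returns `[1.0, 0.8, 0.8, 0.5]`: only the entry that violates monotonicity is clipped to its predecessor.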
+
+## 5.2 Regret analysis for Algorithm FA-DCM
+
+Define $V_i^t = \frac{1}{\hat{T}_i(t)} \sum_j \sum_{r \in \mathcal{T}_{ij}(t)} \frac{z_{ij}^r}{\mu_j}$. We first have the following lemma to bound the difference between $V_i^t$ and $f(i)$.
+
+**Lemma 5.1** For any $t$, $P(|V_i^t - f(i)| \ge \sqrt{\frac{\log t}{T_i(t)}}) \le \frac{2}{t^2}$.
+
+**Lemma 5.2** For any $1 \le i \le M$, we have $\sum_{t=1}^T P(|\hat{f}_t(i) - f(i)| \ge \Delta_i^t) \le CN$, for some constant $C$, where $\hat{f}_t(i)$ is defined in Equation (3).
+
+**Proof outline:** First we note that
+
+$$
+P(|\hat{f}_t(i) - f(i)| \ge \Delta_i^t) \le P_1 + P_2,
+$$
+
+where
+
+$$
+P_1 = P\left( \left| \frac{1}{\hat{T}_i(t)} \sum_j \sum_{r \in \mathcal{T}_{ij}} z_{ij}^r \left( \frac{1}{\hat{\mu}_j} - \frac{1}{\mu_j} \right) \right| \ge \sum_j \sum_{r \in \mathcal{T}_{ij}} \frac{z_{ij}^r}{\hat{T}_i(t)} \frac{1}{\hat{\mu}_j^2 \left( 1 - \frac{1}{\hat{\mu}_j} \sqrt{\frac{\log t}{T_{0j}(t)}} \right)} \mathbb{1}\left( \hat{\mu}_j \ge \sqrt{\frac{\log t}{T_{0j}(t)}} \right) \sqrt{\frac{\log t}{T_{0j}(t)}} \right) \tag{5}
+$$
+
+and
+
+$$
+P_2 = P\left( \left| \frac{1}{\hat{T}_i(t)} \sum_j \sum_{r \in \mathcal{T}_{ij}} \frac{z_{ij}^r}{\mu_j} - f(i) \right| \ge \sqrt{\frac{\log t}{T_i(t)}} \right). \tag{6}
+$$
+
+Equation (5) can be bounded from above by
+
+$$
+\sum_j \frac{2}{t^2} + \exp\left(-\frac{1}{2} T_{0j}(t) \mu_j^2\right) + P\left(\sqrt{\frac{\log t}{T_{0j}(t)}} < \frac{\mu_j}{2}\right).
+$$
+
+For Equation (6), using Lemma 5.1, we have
+
+$$ P\left(\left|\frac{1}{\hat{T}_i(t)} \sum_j \sum_{r \in \mathcal{T}_{ij}} \frac{z_{ij}^r}{\mu_j} - f(i)\right| \ge \sqrt{\frac{\log t}{T_i(t)}}\right) \le \frac{2}{t^2}. $$
+
+It implies that
+
+$$
+\sum_{t=1}^{T} P(|\hat{f}_t(i) - f(i)| \geq \Delta_i^t) \leq CN
+$$
+
+for some constant $C$. $\square$
+
+**Lemma 5.3** Assume $\mathbf{S}^*$ is the optimal sequence of messages. Under the condition that $0 \le u \le u'$ and $0 \le f \le f'$, we have
+
+$$ E[U(\mathbf{S}^*, f', u')] \ge E[U(\mathbf{S}^*, f, u)]. $$
+
+**Proof:** Similar to the proof of Lemma 4.2, we can prove it by induction. $\square$
+
+**Theorem 5.4** *The regret of Algorithm FA-DCM during time T is bounded by*
+
+$$
+\text{Regret}_{\pi}(T) \le C \left( \frac{1-g^N}{1-g} \right)^2 \sqrt{NT^{4/3} \log T}.
+$$
+
+**Proof outline:** Define the following events: $A_{j,t} = \{u_{j,t}^{UCB} - \sqrt{8\frac{\log t}{T_j(t)}} < u_j < u_{j,t}^{UCB}\}$, $B_{i,t} = \{f_t^{UCB}(i) - 2\Delta_i^t < f(i) < f_t^{UCB}(i)\}$, and $E_t = \bigcap_{j=1}^N A_{j,t} \cap \bigcap_{i=1}^M B_{i,t}$. Let $\tilde{f}_t^{UCB}(i) = \min_{j \le i} f_t^{UCB}(j)$, $\tilde{B}_{i,t} = \{\tilde{f}_t^{UCB}(i) - 2\Delta_i^t < f(i) < \tilde{f}_t^{UCB}(i)\}$ and $\tilde{E}_t = \bigcap_{j=1}^N A_{j,t} \cap \bigcap_{i=1}^M \tilde{B}_{i,t}$. Note that if $E_t$ holds, $\tilde{E}_t$ also holds. We say that time $t$ is in the exploration phase, denoted $t \in E$, if there exists $j \in [N]$ such that $T_{0j}(t) < \alpha t^{2/3}$. Similar to the proof of Theorem 4.3, the regret quantity $E[\sum_{t \notin E} R(\mathbf{S}^*, u) - R(\tilde{\mathbf{S}}^t, u)]$ on event $\tilde{E}_t$ for $t \notin E$ can first be bounded from above by
+
+$$
+\frac{1-g^N}{1-g}
+\left(
+E_\pi
+\left[
+\sum_{t\notin E}
+\sum_{j=1}^N
+\kappa_{j,t} (u_{j,t}^{UCB} - u_j) 1(\tilde{E}_t)
+\right]
++
+E_\pi
+\left[
+\sum_{t\notin E}
+\sum_{i=1}^M
+\tilde{\kappa}_{i,t} (\tilde{f}_t^{UCB}(i) - f(i)) 1(\tilde{E}_t)
+\right]
+\right),
+$$
+
+where $\tilde{\kappa}_{i,t} = \sum_{\tilde{l}_t(i)} \prod_{k=1}^{\tilde{l}_t(i)-1} ((g-q)f(h_{I_t(k)}(\tilde{\mathbf{S}}^t))u_{I_t(k)} + q)$ denotes the probability of observing an item as the $i^{th}$ recommendation within its own category. The regret term can be further bounded by $C (\frac{1-g^N}{1-g})^2 \sqrt{NT^{4/3} \log T}$. Finally using Lemma 5.2, we can bound the “small probability” event $E_t^c$ and the cumulative regret on $t \in E$. $\square$
+
+# 6 Numerical Experiments
+
+In this section, we perform four sets of numerical experiments to evaluate the performance of our online learning algorithms. In the first two experiments, we investigate the robustness of Algorithms FA-DCM-P and FA-DCM, respectively. In the last two experiments, we compare our algorithm with a benchmark using simulated and real datasets, respectively.
\ No newline at end of file
diff --git a/samples/texts/7686943/page_7.md b/samples/texts/7686943/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e769323da64e116f1455800d1564c3c0c55f301
--- /dev/null
+++ b/samples/texts/7686943/page_7.md
@@ -0,0 +1,27 @@
+## 6.1 Robustness study
+
+**Experiment I: With a known discount factor** In this experiment, we investigate the robustness of Algorithm FA-DCM-P when the discount factor *f* is known to the platform. We consider a setting with three categories, each containing 10 items. Items' intrinsic relevance *u* is uniformly generated from [0, 0.5]. The known discount factor is set to $f(r) = \exp(-0.1(r-1))$. We fix the resuming probability after non-clicks to *q* = 0.7, and compare three cases by varying the resuming probability after clicks, i.e., Case 1: *g* = 0.95; Case 2: *g* = 0.85; Case 3: *g* = 0.75. For each case, we run 20 independent simulations.
+
+**Experiment II: With an unknown discount factor** We assume both *f* and *u* are unknown in this experiment, and study the robustness of Algorithm FA-DCM. There are three categories which contain 10 items each. We consider three cases, where we use *q* = 0.7 as in Experiment I and vary *g* and the discount factor *f*. More precisely, Case 4: *g* = 0.85, $f(r) = \exp(-0.1(r-1))$; Case 5: *g* = 0.85, $f(r) = \exp(-0.15(r-1))$; Case 6: *g* = 0.75, $f(r) = \exp(-0.1(r-1))$.
+
+**Result:** The left plot in Fig 2 shows the average regret for Algorithm FA-DCM-P as the solid line and its 95% sampled confidence interval as the shaded area, generated from 20 independent simulations for each case. Average values for the regret at *T* = 10,000 are 307.52, 277.77 and 265.73 for Case 1, 2 and 3 respectively. The result shows that the regret decreases as *g* decreases, which is consistent with the regret bound shown in Theorem 4.3.
+
+The right plot in Fig 2 shows the results for Algorithm FA-DCM, where the average regret at $T = 10,000$ is 1237.82, 1178.24, and 991.97 for Cases 4, 5 and 6, respectively. Comparing Cases 4 and 5, we find that when the discount factor decays faster, the regret decreases. Meanwhile, comparing Cases 4 and 6 shows that the regret increases with *g*, which is consistent with the conclusion of Theorem 5.4. Comparing Cases 2 and 4, as well as Cases 3 and 6, the simulation results show that the regret is much larger when *f* also needs to be learned.
+
+Figure 2: Simulation results for Experiments I and II
+
+## 6.2 Comparison with a benchmark algorithm
+
+Since there is no other algorithm addressing our specific setting, we choose the explore-then-exploit algorithm as a benchmark for Algorithm FA-DCM. During the exploration time steps, items are randomly selected. For the first *t* time steps, we ensure that there are $\beta \log t$ periods of exploration, where $\beta$ is a tuning parameter. That is, if the number of exploration periods before *t* is less than $\beta \log t$, then *t* is chosen as an exploration period in which items are randomly selected. During the exploitation periods, we use the empirical means of *u* and *f* to find the optimal sequence. In both Experiments III and IV, we run 20 independent simulations to compare the performance of our algorithm against the benchmark.
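The benchmark's exploration schedule can be sketched as follows; this is our own reading of the $\beta \log t$ rule, not the authors' code, and the names are illustrative:

```python
import math

def is_exploration_step(t: int, n_explore_so_far: int, beta: float) -> bool:
    """Explore at step t whenever fewer than beta*log(t) exploration
    periods have occurred so far; otherwise exploit."""
    return n_explore_so_far < beta * math.log(t)

# Example schedule over the first ten steps (beta = 2):
n_explore = 0
schedule = []
for t in range(1, 11):
    explore = is_exploration_step(t, n_explore, 2.0)
    schedule.append(explore)
    n_explore += explore  # bool counts as 0/1
```

Under this rule exploration steps become sparse as $t$ grows, since the $\beta \log t$ quota grows only logarithmically.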
+
+**Experiment III: With simulated data** We set parameters $g = 0.75$, $q = 0.7$ and $f(r) = \exp(-0.1(r-1))$. Items' intrinsic relevance *u* is uniformly generated from [0, 0.5].
+
+**Experiment IV: With real data** The data is from Taobao³, which contains 26 million ad display and click logs from 1,140,000 randomly sampled users of the Taobao website over 8 days (5/6/2017-5/13/2017). To fit the data into our framework, we use a long period of inactivity since the last interaction with the website as a proxy for exiting the platform. Based on the data, we estimate that the average probability of resuming browsing after a non-click is 0.823, i.e., $q = 0.823$, while the probability of resuming browsing after a click is higher, i.e., $g = 0.843$. For the discount factor, we use $f(r) = \exp(-0.1(r-1))$. In this experiment, we consider five categories of items, and select the top 20 selling products from each category. We estimate the click probabilities of these 100 products, and use them as the ground-truth values for $u_i$.
+
+**Result:** Fig 3 compares the performance of our algorithm and the benchmark based on 20 independent simulations. Our algorithm outperforms the benchmark in both experiments, highlighting the benefits of simultaneous exploration and exploitation. In particular, in Experiment III, shown as the left plot, the average regret for the benchmark is 1279.88 at $T = 10,000$, which is 29.02% higher than that of Algorithm FA-DCM. Meanwhile, in Experiment IV, shown as the right plot, the average regret for Algorithm FA-DCM is 910.87, while the regret for the benchmark is 1519.20 at $T = 100,000$.
+
+Figure 3: Simulation results for Experiments III and IV
+
+³https://tianchi.aliyun.com/datalab/dataSet.html?spm=5176.100073.0.0.14d53ea7Rleuc9\&dataId=56
\ No newline at end of file
diff --git a/samples/texts/7686943/page_8.md b/samples/texts/7686943/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..47fce400733c770889acbe2fa6e9da3aec74b3ad
--- /dev/null
+++ b/samples/texts/7686943/page_8.md
@@ -0,0 +1,51 @@
+# 7 Conclusion
+
+In this work, we investigated a fatigue-aware DCM bandit problem for an online recommender. The setting takes into account the user's fatigue triggered by overexposure to irrelevant content, as well as the boredom one might experience from seeing too much similar content. Depending on whether the discount factor $f$ is known to the platform, we proposed two online learning algorithms for the DCM bandit setting, established their regret bounds, and used numerical experiments to illustrate their performance.
+
+There are several interesting future directions. One natural extension is to consider the contextual version of this model by incorporating item features and user attributes, such that the system is capable of producing personalized recommendations. Another direction for future work is to approach the fatigue-aware DCM bandits via Thompson Sampling, although the regret analysis remains a challenging problem. Last but not least, as users' preferences often change with time, it would be interesting to incorporate non-stationary item intrinsic relevance ($u_i$) into the model.
+
+## References
+
+Cao, J., and Sun, W. 2019. Dynamic learning of sequential choice bandit problem under marketing fatigue. In *The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)*.
+
+Chuklin, A.; Markov, I.; and Rijke, M. d. 2015. Click models for web search. *Synthesis Lectures on Information Concepts, Retrieval, and Services* 7(3):1–115.
+
+Combes, R.; Magureanu, S.; Proutiere, A.; and Laroche, C. 2015. Learning to rank: Regret lower bounds and efficient algorithms. *ACM SIGMETRICS Performance Evaluation Review* 43(1):231–244.
+
+Craswell, N.; Zoeter, O.; Taylor, M.; and Ramsey, B. 2008. An experimental comparison of click position-bias models. In *Proceedings of the 2008 international conference on web search and data mining*, 87–94. ACM.
+
+Guo, F.; Liu, C.; and Wang, Y. M. 2009. Efficient multiple-click models in web search. In *Proceedings of the second acm international conference on web search and data mining*, 124–131. ACM.
+
+Kapoor, K.; Subbian, K.; Srivastava, J.; and Schrater, P. 2015. Just in time recommendations: Modeling the dynamics of boredom in activity streams. In *Proceedings of the Eighth ACM International Conference on Web Search and Data Mining*, 233–242. ACM.
+
+Katariya, S.; Kveton, B.; Szepesvari, C.; Vernade, C.; and Wen, Z. 2016a. Stochastic rank-1 bandits. *arXiv preprint arXiv:1608.03023*.
+
+Katariya, S.; Kveton, B.; Szepesvari, C.; and Wen, Z. 2016b. DCM bandits: Learning to rank with multiple clicks. In *International Conference on Machine Learning*, 1215–1224.
+
+Katariya, S.; Kveton, B.; Szepesvári, C.; Vernade, C.; and Wen, Z. 2017. Bernoulli rank-1 bandits for click feedback. In *Proceedings of the 26th International Joint Conference on Artificial Intelligence*, 2001–2007. AAAI Press.
+
+Komiyama, J.; Honda, J.; and Takeda, A. 2017. Position-based multiple-play bandit problem with unknown position bias. In *Advances in Neural Information Processing Systems*, 4998–5008.
+
+Kveton, B.; Szepesvari, C.; Wen, Z.; and Ashkan, A. 2015a. Cascading bandits: Learning to rank in the cascade model. In *International Conference on Machine Learning*, 767–776.
+
+Kveton, B.; Wen, Z.; Ashkan, A.; and Szepesvari, C. 2015b. Combinatorial cascading bandits. In *Advances in Neural Information Processing Systems*, 1450–1458.
+
+Lagrée, P.; Vernade, C.; and Cappé, O. 2016. Multiple-play bandits in the position-based model. In *Advances in Neural Information Processing Systems*, 1597–1605.
+
+Li, L.; Lu, Y.; and Zhou, D. 2017. Provably optimal algorithms for generalized linear contextual bandits. In *Proceedings of the 34th International Conference on Machine Learning-Volume 70*, 2071–2080. JMLR.org.
+
+Ma, H.; Liu, X.; and Shen, Z. 2016. User fatigue in online news recommendation. In *Proceedings of the 25th International Conference on World Wide Web*, 1363–1372. International World Wide Web Conferences Steering Committee.
+
+Radlinski, F.; Kleinberg, R.; and Joachims, T. 2008. Learning diverse rankings with multi-armed bandits. In *Proceedings of the 25th international conference on Machine learning*, 784–791. ACM.
+
+Richardson, M.; Dominowska, E.; and Ragno, R. 2007. Predicting clicks: estimating the click-through rate for new ads. In *Proceedings of the 16th international conference on World Wide Web*, 521–530. ACM.
+
+Wang, Y., and Tulabandhula, T. 2019. Thompson sampling for a fatigue-aware online recommendation system. *arXiv preprint arXiv:1901.07734*.
+
+Warlop, R.; Lazaric, A.; and Mary, J. 2018. Fighting boredom in recommender systems with linear reinforcement learning. In *Advances in Neural Information Processing Systems*, 1757–1768.
+
+Yu, B.; Fang, M.; and Tao, D. 2016. Linear submodular bandits with a knapsack constraint. In *Thirtieth AAAI Conference on Artificial Intelligence*.
+
+Yue, Y., and Guestrin, C. 2011. Linear submodular bandits and their application to diversified retrieval. In *Advances in Neural Information Processing Systems*, 2483–2491.
+
+Ziegler, C.-N.; McNee, S. M.; Konstan, J. A.; and Lausen, G. 2005. Improving recommendation lists through topic diversification. In *Proceedings of the 14th international conference on World Wide Web*, 22–32. ACM.
\ No newline at end of file
diff --git a/samples/texts/7691855/page_2.md b/samples/texts/7691855/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..98a6c89793069d359eaeaa470795cbbe3c11a2db
--- /dev/null
+++ b/samples/texts/7691855/page_2.md
@@ -0,0 +1,56 @@
+**Example 3**
+
+Show that $5^n - 1$ is divisible by 4 for every $n \in \mathbb{N}$.
+
+**Solution**
+
+We will prove this by induction on $n$.
+
+Base Case: $n=1$ is true because $4$ divides $5^1 - 1 = 4$.
+
+Induction Step: Assume the result is true for $n=k$. We will prove $n=k+1$ is true.
+Observe that
+
+$$
+\begin{aligned}
+5^{k+1} - 1 &= 5^{k+1} + 5^k - 5^k - 1 \\
+&= 5^k(5-1) + (5^k - 1) \\
+&= 4 \cdot 5^k + (5^k - 1)
+\end{aligned}
+ $$
+
+Since $4 \cdot 5^k$ is divisible by 4 and, by the induction hypothesis, $5^k - 1$ is divisible by 4, it follows that $5^{k+1} - 1$ is divisible by 4. Therefore, by induction, we have shown that $5^n - 1$ is divisible by 4 for all $n \in \mathbb{N}$.
+
+**Remark 3**
+
+Notice that the key point to this solution is to add $5^k$ and subtract $5^k$. This is a common technique in mathematics.
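As a quick sanity check (ours, not part of the notes), the claim of Example 3 can also be verified numerically for small values of $n$:

```python
# Spot-check: 5**n - 1 is divisible by 4 for n = 1, ..., 20.
checks = [(5**n - 1) % 4 == 0 for n in range(1, 21)]
assert all(checks)
```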
+
+# Principle of Strong Mathematical Induction
+
+There will be times when, in trying to prove $P(k+1)$, we need more than just $P(k)$ to be true. Therefore, we need the following:
+
+Let $P(n)$ be a statement that depends on $n \in \mathbb{N}$. If the following two conditions hold
+
+1. $P(1)$ is true. (This is called the base case)
+
+2. $P(1), P(2), \dots, P(k)$ are all true implies $P(k+1)$ is true for all $k \in \mathbb{N}$. (This is called the induction hypothesis and induction step)
+
+then $P(n)$ is true for all $n \in \mathbb{N}$.
+
+**Example 4**
+
+Show that every $n \in \mathbb{N}_{>1}$ is either a prime or a product of primes.
+
+**Solution**
+
+We will use induction on the natural number $n > 1$.
+
+Base Case: $n=2$ is true because 2 is prime. (Notice that we are starting with 2 instead of 1)
+
+Induction Step: Assume the result is true for $n=2, 3, 4, \dots, k$. We will show that $n=k+1$ is also true.
+If $k+1$ is a prime then we are done. So assume that $k+1$ is not prime. Then there exist natural numbers $a, b > 1$ such that $k+1 = ab$. Since $1 < a, b < k+1$, by the induction hypothesis we can write $a$ and $b$ as products of primes, $a = a_1a_2 \cdots a_l$ and $b = b_1b_2 \cdots b_m$ where each $a_i$ and $b_j$ is a prime. Hence, $k+1 = a_1 \cdots a_l b_1 \cdots b_m$ is a product of primes.
+Therefore, by strong induction every $n \in \mathbb{N}_{>1}$ is either a prime or a product of primes.
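The strong-induction argument above translates directly into a recursive factorization procedure. The sketch below (ours, using simple trial division, not part of the notes) mirrors it: if $n$ has a divisor $a$ with $1 < a < n$, factor $a$ and $n/a$ separately; otherwise $n$ is prime, which is the base of the recursion.

```python
def prime_factors(n: int) -> list:
    """Return a prime factorization of n > 1, following the
    strong-induction argument: split a composite n = a * b and
    factor each part recursively."""
    for a in range(2, int(n**0.5) + 1):
        if n % a == 0:
            return prime_factors(a) + prime_factors(n // a)
    return [n]  # no divisor found: n is prime
```

For example, `prime_factors(60)` returns `[2, 2, 3, 5]`.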
+
+**Remark 4**
+
+Notice that an induction doesn't have to start at 1. In fact, it can even start from a negative number.
\ No newline at end of file
diff --git a/samples/texts/7827780/page_1.md b/samples/texts/7827780/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..f984f7b5edbb1eac03d542012271588a8e2ccba6
--- /dev/null
+++ b/samples/texts/7827780/page_1.md
@@ -0,0 +1,26 @@
+Dependence of NaI(Tl) detector intrinsic efficiency
+on source–detector distance, energy and off-axis
+distance: Their implications for radioactivity
+measurements
+
+F O OGUNDARE¹*, E O ONIYA² and F A BALOGUN³
+
+¹Department of Physics, University of Ibadan, Ibadan, Nigeria
+
+²Department of Physics and Electronics, Adekunle Ajasin University, Akungba-Akoko, Ondo State, Nigeria
+
+³Centre for Energy Research and Development, Obafemi Awolowo University, Ile-Ife, Nigeria
+
+*Corresponding author. E-mail: ogun_dare@yahoo.com
+
+MS received 1 May 2007; revised 19 November 2007; accepted 27 December 2007
+
+**Abstract.** In this work the dependence of the intrinsic efficiency of a NaI(Tl) detector of radius 3.82 cm and height 7.62 cm on source-detector distance (d), source off-axis distance (d₀) and γ-photon energy has been investigated using analytical and Monte Carlo methods. The results showed that, for a given off-axis distance, there exists a value of the ratio of source-detector distance (d) to detector radius (R) at which intrinsic efficiency is minimum. This d/R value at which minimum efficiency occurs approaches zero as off-axis distance increases and is almost constant with increasing energy. In the region where d/R < 0.01, a criterion given by Jehouani et al [1] for good photon detection, intrinsic efficiency decreases with increasing off-axis distance. The implications of the results for radioactivity measurement and radiation protection are discussed. Characteristics of intrinsic efficiency in the regions d/R < 0.01 and d/R > 10 are also compared.
+
+**Keywords.** Detections; γ-photons; analytical formula; Monte Carlo; intrinsic efficiency.
+
+PACS Nos 29.40.-n; 29.40.Mc; 21.60.Ka
+
+# 1. Introduction
+
+In γ-ray spectrometry and radiation transmission experiments, bare-surface and well-type HPGe and NaI(Tl) detectors are widely used for γ-photon detection [1,2]. When efficiency is the most important parameter, a NaI(Tl) detector is usually preferred for γ-photon detection [3] due to its high efficiency [4]. High detection efficiency is an important requirement in γ-ray spectrometry for a good signal-to-noise ratio, which increases as detection efficiency increases.
\ No newline at end of file
diff --git a/samples/texts/7827780/page_10.md b/samples/texts/7827780/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..9670f73b1c61cc081488614889bd6926c819d5fa
--- /dev/null
+++ b/samples/texts/7827780/page_10.md
@@ -0,0 +1,9 @@
+*Detector efficiency energy and detector geometrical parameters*
+
+**Figure 3.** Comparison of the analytical method and the Monte Carlo method with the approach of Kaplanis [5] for γ-photon energy 662 keV for two off-axis distances of (a) 0 cm and (b) 1.5 cm.
+
+### 3. Results and discussion
+
+The results from the analytical formula and the Monte Carlo approach employed in this work were compared with those obtained using a Monte Carlo approach described by Kaplanis [5] in order to check the validity of approaches used in this work. The results from the two methods used in this work compare very well with the results from the Monte Carlo approach in Kaplanis [5]. The agreement among the three approaches, shown in figure 3, indicates that the analytical formula and the Monte Carlo approach employed in this work are valid for the calculation of intrinsic efficiency.
+
+Figure 4 shows the variation of intrinsic efficiency with change in $d/R$ for different energies and off-axis distances. As shown in the figure, for all off-axis distances and energies considered in this work, as $d/R$ increases intrinsic efficiency first decreases and then passes through a minimum after which it starts to increase and saturates for all $d/R > 100$. Similar results have been reported by Jehouani et al [1] but just
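To illustrate the kind of Monte Carlo estimate referred to above, here is a deliberately simplified sketch for an on-axis point source and a cylindrical detector. It is our own toy model, not the method of this paper or of Kaplanis [5]: side escape is ignored, and the attenuation coefficient `mu` and all function names are assumptions for illustration.

```python
import math, random

def intrinsic_efficiency(d, R, h, mu, n=100_000, seed=0):
    """Toy Monte Carlo estimate of intrinsic efficiency: an on-axis point
    source at distance d from the face of a cylindrical detector of radius R
    and thickness h, with linear attenuation coefficient mu (per cm).
    Directions are sampled uniformly over the solid angle subtended by the
    face; side escape is ignored, so a photon entering at polar angle theta
    travels h / cos(theta) inside the crystal."""
    rng = random.Random(seed)
    cos_max = math.cos(math.atan2(R, d))  # cone half-angle of the face
    total = 0.0
    for _ in range(n):
        # uniform over the spherical cap: cos(theta) ~ U(cos_max, 1]
        cos_t = 1.0 - rng.random() * (1.0 - cos_max)
        total += 1.0 - math.exp(-mu * h / cos_t)  # interaction probability
    return total / n  # mean over photons that reach the detector
```

With the paper's crystal dimensions (R = 3.82 cm, h = 7.62 cm) this sketch reproduces the high efficiency at small $d/R$ (oblique entry lengthens the path in the crystal) and the saturation at large $d/R$ (near-normal incidence), though not the intermediate minimum, which requires the full off-axis geometry.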
\ No newline at end of file
diff --git a/samples/texts/7827780/page_11.md b/samples/texts/7827780/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..6f1b20536e02d1f8775f2141d4de8c57095cb248
--- /dev/null
+++ b/samples/texts/7827780/page_11.md
@@ -0,0 +1,3 @@
+Figure 4. Variation of intrinsic efficiency with the ratio of source-detector distance to detector radius at four different off-axis distances for $\gamma$-photon with energies (a) 662 keV, (b) 1500 keV and (c) 2750 keV.
+
+for the case of an axial source. On both sides of the minimum efficiency there are two regions ($d/R < 0.01$ and $d/R > 10$) where intrinsic efficiency is relatively high. The fact that intrinsic efficiency is high in the region $d/R < 0.01$ informed the suggestion made by Jehouani et al [1] that for good photon detection, the source and the detector should be arranged such that $d/R < 0.01$. Jehouani et al [1] did not comment on the region $d/R > 10$ where intrinsic efficiency is also high and even
\ No newline at end of file
diff --git a/samples/texts/7827780/page_12.md b/samples/texts/7827780/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..f37aa4761900c013c73b8fa1c196123b76636163
--- /dev/null
+++ b/samples/texts/7827780/page_12.md
@@ -0,0 +1,7 @@
+*Detector efficiency energy and detector geometrical parameters*
+
+**Figure 5.** Dependence of ratio of source-detector distance to detector radius at which minimum intrinsic efficiency occur on $\gamma$-photon energy.
+
+higher than for $d/R < 0.01$. In some situations, however, the use of the criterion $d/R > 10$ becomes more justifiable than $d/R < 0.01$. One such situation is when the area or the radiation source to be monitored presents a high radiation risk to the personnel. Examples are *in situ* measurements of high background radiation level areas and of contaminated surfaces in industries such as nuclear power plants. Radiation protection regulation requires that for any operation, radiation exposure of personnel should be kept as low as reasonably achievable. In these cases, the use of the criterion $d/R > 10$ instead of $d/R < 0.01$ will ensure compliance with regulation. Another advantage in favour of $d/R > 10$ over $d/R < 0.01$ is that in the region where $d/R > 10$ intrinsic efficiency is independent of off-axis distance. A possible explanation for this is that at these large source-detector distances, the directions of all the photons reaching the surface of the detector are parallel to the axis of the detector. This means that all the photons that reach the surface of the detector travel the same distance inside the detector. Therefore, for a given energy the intrinsic efficiency calculated using eq. (16) or (26) becomes independent of off-axis distance. Where possible, the best choice is $d/R > 100$, where intrinsic efficiency is almost constant. $d/R > 10$ is suggested mainly because it is easier to arrange, intrinsic efficiency in this region is usually greater than in the region $d/R < 0.01$ suggested by Jehouani et al [1], and intrinsic efficiency in the region is independent of off-axis distance.
+
+For the region where $d/R < 0.01$, intrinsic efficiency decreases as $d_0$ increases. From a practical point of view, the implication of this is that when monitoring a large surface area that requires measurements at several points, the off-axis distances of the points that constitute the area to be monitored, taken with reference to the centre of the detector, will differ, and more than one efficiency value (depending on the number of points with different off-axis distances) will be needed for activity calculation if source and detector are positioned such that $d/R < 0.01$. Using one
\ No newline at end of file
diff --git a/samples/texts/7827780/page_2.md b/samples/texts/7827780/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..f0d741a1eac4a831a74ad6362e00f68bf935eb96
--- /dev/null
+++ b/samples/texts/7827780/page_2.md
@@ -0,0 +1,3 @@
+Figure 6. Variation of ratio of source-detector distance to detector radius at which minimum intrinsic efficiency occur with off-axis distance for different $\gamma$-photon energies (a) 662 keV and (b) 1500 keV.
+
+efficiency value in this situation will not yield accurate results. However, if the source and the detector are positioned such that $d/R > 10$, only one efficiency value is needed irrespective of the number of points where measurements are made. For low-level radioactivity measurements in the laboratory, where the use of the criterion $d/R < 0.01$ is more justified, it means that for accurate results the off-axis distance of the standard source container during measurements for the determination of efficiency should be the same as the off-axis distance of the sample container during
\ No newline at end of file
diff --git a/samples/texts/7827780/page_3.md b/samples/texts/7827780/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b77ad1fd39ae31297bec4bfe87f41ccdc016ad6
--- /dev/null
+++ b/samples/texts/7827780/page_3.md
@@ -0,0 +1,9 @@
+*Detector efficiency energy and detector geometrical parameters*
+
+**Figure 7.** Variation of (a) intrinsic efficiency with ratio of source-detector distance to detector radius and (b) ratio of source-detector distance to detector radius at which minimum intrinsic efficiency occur with off-axis distance and energy for a disk source of radius 0.38 cm and thickness 0.1 cm.
+
+measurements. This is to make sure that the off-axis distances in both cases are the same; otherwise the efficiency determined using the standard source would not adequately represent the efficiency of photon detection from the sample. This is not a problem if standard containers (e.g. the Marinelli beaker), which are designed in such a way that their position on the detector is always the same, are used. However, the containers that are being used in many laboratories [9–11] are not designed this way. In such laboratories it should be noted that for accurate results, the off-axis distance must always be the same from sample to sample.
+
+Figure 5 shows the variation in the value of $d/R$ at which minimum intrinsic efficiency occurs ($d_m$) as a function of energy for four different off-axis distances. Knowledge of this parameter is important in γ-spectrometry because it gives information about where to expect minimum efficiency. As can be seen in figure 5, $d_m$ is generally constant with increasing energy except for a slight decrease at small energies. The dependence of $d_m$ on off-axis distance is clearly shown in figure 6. The figure shows that as off-axis distance increases, $d_m$ first decreases slowly and then at a faster rate. The fact that $d_m$ approaches zero as $d_0$ increases indicates that care must be taken when using the criterion $d/R < 0.01$, because the minimum efficiency can shift into this region. Probably the only way to ensure good photon detection efficiency in this region is to set $d/R = 0$.
+
+All the results presented so far were based on a point source. In figure 7, we present similar calculations for a disk source of radius 0.38 cm and thickness 0.1 cm.
\ No newline at end of file
diff --git a/samples/texts/7827780/page_4.md b/samples/texts/7827780/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..6230dac7fae812800c96e8b45063f705cff7c20f
--- /dev/null
+++ b/samples/texts/7827780/page_4.md
@@ -0,0 +1,29 @@
+As shown in figure 7a, the variation of the intrinsic efficiency with $d/R$ for a disk source is qualitatively similar to that for a point source. The data in figure 7a were obtained with the source placed 2.0 cm off the detector axis. Variations of $d_m$ with off-axis distance and energy, as shown in figure 7b, were also similar to those for a point source. For the variation of $d_m$ with energy, the source was placed 2.0 cm off the detector axis. This shows that the results presented in this work are also valid for distributed sources.
+
+## 4. Conclusion
+
+The dependence of intrinsic efficiency on source-detector distance, off-axis distance and $\gamma$-photon energy is reported in this work. The $d/R$ value at which minimum intrinsic efficiency occurs approaches zero as $\gamma$-photon energy and/or off-axis distance increases. The implication is that, when following the criterion $d/R < 0.01$, the source should be placed on the detector (i.e. $d = 0$) to ensure that this point of minimum efficiency is avoided. Also, for accurate determination of activity and dose rate from radioactivity measurements, the off-axis distance of the standard source container during measurements for efficiency determination should be the same as that of the sample to be measured. For compliance with radiation protection regulations when the radiation source to be monitored can cause over-exposure of personnel, the source-detector distance should satisfy $d/R > 10$ instead of $d/R < 0.01$.
+
+## References
+
+[1] A Jehouani, R Ichaoui and M Boulkheir, *Appl. Radiat. Isot.* **53**, 887 (2000)
+
+[2] I A Mahmoud, *Appl. Radiat. Isot.* **55**, 245 (2001)
+
+[3] P N Cooper, *Introduction to nuclear radiation detectors* (Cambridge University Press, UK, 1986)
+
+[4] N Tsoulfanidis, *Measurement and detection of radiation* (McGraw-Hill, New York, 1983)
+
+[5] S N Kaplanis, *Int. J. Appl. Radiat. Isot.* **33**, 127 (1982)
+
+[6] C C Scheffing and A Krichinsky, U.S. Department of Energy, *J. Undergraduate Research* **4**, 109 (2004)
+
+[7] J Rodenas, M C Burgos, I Zarza and S Gallardo, *Radiat. Prot. Dosim.* **116**, 55 (2005)
+
+[8] M J Berger and S M Seltzer, XCOM: Photon cross sections software (http://physics.nist.gov/PhysRefData/Xcom/Text/version.html) (1999)
+
+[9] J A Ademola and P O Oguneletu, *J. Environ. Radioactivity* **81**, 107 (2005)
+
+[10] I P Farai and J A Ademola, *J. Environ. Radioactivity* **79**, 119 (2005)
+
+[11] N N Jibiri and A O Ajao, *J. Environ. Radioactivity* **78**, 105 (2005)
\ No newline at end of file
diff --git a/samples/texts/7827780/page_5.md b/samples/texts/7827780/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..6ece5fbcbee5cf85c3d1af6b2d205c1ba01114ac
--- /dev/null
+++ b/samples/texts/7827780/page_5.md
@@ -0,0 +1,42 @@
+F O Ogundare, E O Oniya and F A Balogun
+
+Many analytical and Monte Carlo methods that can be used to predict the efficiency of a detector for different source and detector geometries have been reported [1,2,5]. Jehouani et al [1] studied the dependence of the efficiency of a NaI(Tl) detector on its geometrical parameters for a radiation source placed on the axis of the detector. Based on their studies, they suggested that for good photon detection the ratio of source-detector distance ($d$) to detector radius ($R$) should be less than 0.01. It should be noted that, from the point of view of radiation protection, this criterion cannot be justified in certain circumstances, for instance when the area to be monitored presents a high background radiation level or is contaminated by sources of high radiation level [6,7]. Radiation protection regulations require that the exposure of personnel to radiation be kept as low as reasonably achievable. This is possibly the reason why Scheffing and Krichinsky [6] used $d$ values of 192 inches (4876.8 mm) and 254 inches (6451.6 mm) for a detector of radius 25 mm in their measurements, corresponding to $d/R = 195$ and 258 respectively. There is therefore a need for another criterion that does not violate this radiation protection regulation in cases where sources of very high activity are to be monitored. Furthermore, since the study of Jehouani et al [1] covered only the case where the source axis and that of the detector coincide, there is a need to study the dependence of intrinsic efficiency on off-axis distance ($d_0$), i.e. the distance between the lines that pass through the centres of the source and the detector. This may be useful for measurements where it is not possible to align the two axes.
+
+In the present work, we present the dependence of the intrinsic efficiency of a NaI(Tl) detector on source-detector distance, off-axis distance and energy. From the results we hope to suggest another criterion for good photon detection in γ-spectrometry for cases where the ratio of source-detector distance ($d$) to detector radius ($R$) must be greater than 0.01. The cases considered in this work are those for which $d_0 < R$. The results from this work are important for low-level radioactivity measurements and also for monitoring areas or radiation sources of very high activity.
+
+## 2. Methods
+
+## 2.1 Analytical formula
+
+The analytical formula used in this work to calculate the intrinsic efficiency ($\varepsilon$) of a bare-surface detector at different off-axis distances is adapted from the analytical formulae presented by Mahmoud [2] for the total efficiency ($\varepsilon_{\text{total}}$) of a well-type NaI(Tl) detector. These formulae for a well-type detector reduce to those of the bare-surface detector shown in figure 1 if $k$ is set to 0 and $R_i$ to $R_o$, where $k$, $R_i$ and $R_o$ stand for the well depth, inner radius and outer radius of the well-type detector, respectively. With $k = 0$ and $R_i = R_o$, the analytical formula presented by Mahmoud [2] becomes
+
+$$ \varepsilon_{\text{total}} = \frac{W_1 + W_2 + W_3}{4\pi} \qquad (1) $$
+
+The geometric solid angle [5] is also obtained to be
\ No newline at end of file
diff --git a/samples/texts/7827780/page_6.md b/samples/texts/7827780/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..f043e2ebbeb6c0788115ca709c4e5738e6c1a703
--- /dev/null
+++ b/samples/texts/7827780/page_6.md
@@ -0,0 +1,18 @@
+**Figure 1.** Geometrical configuration of the point source at off-axis distance above the bare-surface NaI(Tl) detector.
+
+$$ \Omega = B_1 + B_2 \qquad (2) $$
+
+with
+
+$$ W_1 = 2\pi \int_{0}^{\theta_1} (1 - \exp(-\mu(E)l_1)) \sin\theta d\theta, \qquad (3) $$
+
+$$
+\begin{aligned}
+W_2 = {}& 2 \int_{\theta_1}^{\theta_3} \int_{0}^{\pi} (1 - \exp(-\mu(E)l_2)) \sin\theta d\phi d\theta \\
+& + 2 \int_{\theta_1}^{\theta_2} (\phi_{\max(d+L)}(1 - \exp(-\mu(E)l_1)) \sin\theta \\
+& \qquad - \int_{0}^{\phi_{\max(d+L)}} (1 - \exp(-\mu(E)l_2)) \sin\theta d\phi) d\theta,
+\end{aligned}
+\qquad (4)
+$$
+
+$$ W_3 = 2 \int_{\theta_3}^{\theta_4} \int_{0}^{\phi_{\max(d)}} (1 - \exp(-\mu(E)l_2)) \sin\theta d\phi d\theta, \qquad (5) $$
\ No newline at end of file
diff --git a/samples/texts/7827780/page_7.md b/samples/texts/7827780/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..8a7841417712b3ea6eb5e2b3182303b242fbadf7
--- /dev/null
+++ b/samples/texts/7827780/page_7.md
@@ -0,0 +1,36 @@
+$$B_1 = 2\pi \int_{0}^{\theta_3} \sin \theta d\theta, \qquad (6)$$
+
+$$B_2 = 2 \int_{\theta_3}^{\theta_4} \int_{0}^{\phi_{\max(d)}} \sin \theta d\phi d\theta, \qquad (7)$$
+
+where $\mu(E)$ is the $\gamma$-photon attenuation coefficient at the energy $E$. $\theta_1$, $\theta_2$, $\theta_3$ and $\theta_4$ are as shown in figure 1 and are given by
+
+$$\theta_1 = \arctan\left(\frac{R - d_0}{d + L}\right), \qquad (8)$$
+
+$$\theta_2 = \arctan\left(\frac{R+d_0}{d+L}\right), \qquad (9)$$
+
+$$\theta_3 = \arctan\left(\frac{R-d_0}{d}\right), \qquad (10)$$
+
+$$\theta_4 = \arctan\left(\frac{R+d_0}{d}\right) \qquad (11)$$
+
+and
+
+$$\phi_{\max(x)} = \arccos\left(\frac{d_0^2 - R^2 + x^2 \tan^2 \theta}{2d_0 x \tan \theta}\right). \qquad (12)$$
+
+A relation between intrinsic efficiency, which is of interest in this work, and total efficiency is obtained by the following two expressions [5]:
+
+$$\varepsilon_{\text{total}} = \frac{\Omega_{\text{ef}}}{4\pi} \qquad (13)$$
+
+and
+
+$$\varepsilon = \frac{\Omega_{\text{ef}}}{\Omega}, \qquad (14)$$
+
+where $\Omega_{\text{ef}}$ is the effective solid angle and $\Omega$ is the geometric solid angle.
+From eqs (13) and (14), it follows that the intrinsic efficiency ($\varepsilon$) is
+
+$$\varepsilon = \frac{4\pi\varepsilon_{\text{total}}}{\Omega}. \qquad (15)$$
+
+Using eqs (1) and (2) in eq. (15), the intrinsic efficiency is given by
+
+$$\varepsilon = \frac{W_1 + W_2 + W_3}{B_1 + B_2}. \qquad (16)$$
\ No newline at end of file
diff --git a/samples/texts/7827780/page_8.md b/samples/texts/7827780/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..48413ad3f37365ed789064c6aefa137e745d5427
--- /dev/null
+++ b/samples/texts/7827780/page_8.md
@@ -0,0 +1,31 @@
+The configuration of the source and detector used in this work is shown in figure 1. The arbitrarily positioned point source is defined by the quantities ($d_0$, $d$). The direction of incidence of a $\gamma$-ray photon entering the detector's top is defined by the polar ($\theta$) and azimuthal ($\phi$) angles. $l(\theta, \phi)$ is the path length of the photon through the detector's active volume until it emerges from the crystal. The effective rays may enter the top of the detector and
+
+(1) emerge from the bottom, in which case the path length is
+
+$$l_1 = \frac{L}{\cos \theta}, \qquad (17)$$
+
+(2) emerge from the detector's sides, in which case the path length is
+
+$$l_2 = \frac{Y}{\sin \theta} - \frac{d}{\cos \theta}, \qquad (18)$$
+
+where $Y$ is the distance from the source axis on the detector to the edge of the detector (OA and CB in figure 1) and is defined as
+
+$$Y = d_0 \cos \phi + \sqrt{R^2 - d_0^2 \sin^2 \phi}. \qquad (19)$$
+
+Equation (16) was solved using the 10-point Gauss quadrature numerical integration formula. Attenuation coefficients used in this work were calculated with the XCOM program and database of Berger and Seltzer [8].
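
As an illustration of the quadrature step, the sketch below (Python with NumPy; the helper name and the test dimensions are ours) evaluates the simplest term, $W_1$ of eq. (3), with 10-point Gauss-Legendre nodes mapped onto $[0, \theta_1]$:

```python
import math
import numpy as np

def w1(mu, d, d0, R, L):
    """W1 of eq. (3): photons entering the top of the detector and
    leaving through the bottom, with path length l1 = L / cos(theta)
    (eq. (17)), integrated by 10-point Gauss-Legendre quadrature."""
    theta1 = math.atan((R - d0) / (d + L))       # eq. (8)
    x, w = np.polynomial.legendre.leggauss(10)   # nodes/weights on [-1, 1]
    theta = 0.5 * theta1 * (x + 1.0)             # map nodes onto [0, theta1]
    integrand = (1.0 - np.exp(-mu * L / np.cos(theta))) * np.sin(theta)
    return 2.0 * math.pi * 0.5 * theta1 * np.dot(w, integrand)
```

In the opaque-detector limit (very large $\mu$) the exponential vanishes and $W_1$ tends to the bare solid-angle integral $2\pi(1-\cos\theta_1)$, a convenient correctness check. $W_2$, $W_3$ and $B_2$ follow the same pattern with an additional quadrature over $\phi$.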
+
+## 2.2 Monte Carlo method
+
+The Monte Carlo simulation approach used in this work is now described. $\gamma$-photon emission is sampled over a hypothetical detector of radius $R+d_0$ centred at $O^1$, with the point $\gamma$-photon source S on its axis, as shown in figure 2. The real detector, of radius $R$ and centre $O$, is considered part of this hypothetical detector, as shown in figure 2. Only $\gamma$-photons that impinge on the surface of the real detector are considered detected.
+
+With respect to the hypothetical detector of radius $R+d_0$, the polar angle ($\theta$) and the azimuthal angle ($\phi$) defining the $\gamma$-photon emission direction take values from 0 to $\theta_4$ and from 0 to $2\pi$ respectively, while with respect to the real detector the polar angle ($\theta$) takes values from 0 to $\theta_{2\max}$ and the azimuthal angle ($\phi$) from 0 to $2\pi$. For $\phi=0$, $\theta_{2\max}=\theta_4$. The limiting angles are
+
+$$\theta_{2\max} = \arctan\left(\frac{Y}{d}\right) \qquad (20)$$
+
+and
+
+$$\theta_{1\max} = \arctan\left(\frac{Y}{d+L}\right). \qquad (21)$$
+
+The $\gamma$-photon emission direction ($\theta, \phi$) towards detector's surface is sampled by
\ No newline at end of file
diff --git a/samples/texts/7827780/page_9.md b/samples/texts/7827780/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..268a2e2acbd2b7a2aee8173c70e0db36d3c9c598
--- /dev/null
+++ b/samples/texts/7827780/page_9.md
@@ -0,0 +1,17 @@
+**Figure 2.** Geometrical configuration of the hypothetical detector for point $\gamma$-photon sampling.
+
+$$ \theta = \arccos(1 - r_2(1 - \cos \theta_4)), \quad (22) $$
+
+$$ \phi = \pi(2r_1 - 1), \quad (23) $$
+
+where $r_1$ and $r_2$ are random numbers in [0,1]. The $\gamma$-photon path length $l(R, d, \theta, \phi)$ inside the detector is as follows:
+
+$$ l = \frac{L}{\cos \theta}, \quad \text{if } \theta \le \theta_{1\max} \quad (24) $$
+
+$$ l = \frac{Y - d \tan \theta}{\sin \theta}, \quad \text{if } \theta_{1\max} < \theta \le \theta_{2\max}. \quad (25) $$
+
+If $\theta > \theta_{2\max}$, the $\gamma$-photon is not detected by the real detector and so is not included in the efficiency calculation. The efficiency is calculated using
+
+$$ \varepsilon = \frac{\sum_{i=1}^{N} (1 - \exp(-\mu(E)l(\theta, \phi)_i))}{N}, \quad (26) $$
+
+where $N$ is the number of histories.
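
Putting eqs (19)–(25) together, a minimal Monte Carlo sketch (Python; the function name, parameter values, and the choice to count only histories that reach the real detector in $N$ are our assumptions, not prescriptions from the paper) looks like:

```python
import math
import random

def mc_intrinsic_efficiency(mu, d, d0, R, L, n_histories=40_000, seed=1):
    """Monte Carlo estimate of intrinsic efficiency: sample emission
    directions over the hypothetical detector of radius R + d0
    (eqs (22)-(23)), keep photons that hit the real detector, and
    average the interaction probability 1 - exp(-mu * l), eq. (26)."""
    rng = random.Random(seed)
    theta4 = math.atan((R + d0) / d)                    # eq. (11)
    hits = 0
    total = 0.0
    for _ in range(n_histories):
        phi = math.pi * (2.0 * rng.random() - 1.0)      # eq. (23)
        theta = math.acos(1.0 - rng.random() * (1.0 - math.cos(theta4)))  # eq. (22)
        s = d0 * math.sin(phi)
        y = d0 * math.cos(phi) + math.sqrt(R * R - s * s)  # eq. (19)
        theta2max = math.atan(y / d)                    # eq. (20)
        theta1max = math.atan(y / (d + L))              # eq. (21)
        if theta > theta2max:
            continue                                    # misses the real detector
        hits += 1
        if theta <= theta1max:
            l = L / math.cos(theta)                     # eq. (24), exits bottom
        else:
            l = (y - d * math.tan(theta)) / math.sin(theta)  # eq. (25), exits side
        total += 1.0 - math.exp(-mu * l)
    return total / hits if hits else 0.0
```

Here $N$ is taken as the number of histories that reach the real detector, so the average is per photon incident on the crystal, i.e. an intrinsic efficiency; dividing by all sampled histories would instead give a total-efficiency-like quantity.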
\ No newline at end of file