+| Quantity | Formula |
+| --- | --- |
+| Execution time | $\mathcal{E}xe(\tau, c, f) = \lceil \mathcal{E}xe_{nom}(\tau, c)/f \rceil$ |
+| Failure rate | $\lambda_{sys} = \lambda_0 \cdot 10^{\frac{b(1-f)}{1-f_{min}}} \cdot e^{-\frac{E_a}{K}\left(\frac{1}{T(t)} - \frac{1}{T_0}\right)}$ |
+| Reliability (computed with Reliability Block Diagrams) | $R(\tau, \mathcal{K}, t) = 1 - \prod_{i=1}^{k} \left(1 - e^{-\lambda_{sys}(c_i) \cdot \mathcal{E}xe(\tau, c_i, f_{c_i})}\right)$ |
+| GSFR | $\Lambda(S) = -\log(R(S))/U(S)$ |
+| Utilization | $U(S) = \sum_{(\tau,c,f) \in S} \mathcal{E}xe(\tau,c,f)$ |
+| Power | $P_{sys}(t) = \underbrace{\alpha \cdot T(t) + \beta_h}_{leakage} + \underbrace{\gamma \cdot C_{ef} \cdot V^2 \cdot f}_{dynamic}$ |
+| Temperature differential equation | $C \cdot \frac{dT_c(t)}{dt} + G \cdot (T_c(t) - T_{amb}) + 2D\_heat = P(t)$ |
+| Heat transfer from neighbor cores | $2D\_heat = \sum_{c' \in nbr(c)} \kappa(c, c') \cdot (T_c(t) - T_{c'}(t))$ |
+| Solution | $T_c(t) = T_{\infty}^{heat} + (T_0 - T_{\infty}^{heat}) \cdot e^{-A(t-t_0)}$ |
+| Steady-state temperature | $T_{\infty}^{heat} = B/A$ |
+
+Table 1: Summary of all the computations.
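To make the chaining of these computations concrete, here is a minimal executable sketch of the first rows of Table 1 (execution time, failure rate, reliability, and GSFR). All constant values below are illustrative placeholders, not the paper's calibrated parameters:

```python
from math import ceil, exp, log

# Minimal sketch of the Table 1 computations; all constants below are
# illustrative placeholders, not the paper's calibrated parameters.

def exe_time(exe_nom, f):
    """Exe(tau, c, f) = ceil(Exe_nom(tau, c) / f), where f is the scaling factor."""
    return ceil(exe_nom / f)

def failure_rate(lambda0, b, f, f_min, Ea_over_K, T, T0):
    """lambda_sys = lambda0 * 10^{b(1-f)/(1-f_min)} * exp(-(Ea/K)(1/T - 1/T0))."""
    return lambda0 * 10 ** (b * (1 - f) / (1 - f_min)) \
                   * exp(-Ea_over_K * (1 / T - 1 / T0))

def reliability(replicas):
    """R = 1 - prod over the k replicas of (1 - exp(-lambda_i * exe_i))."""
    prod_fail = 1.0
    for lam, exe in replicas:
        prod_fail *= 1.0 - exp(-lam * exe)
    return 1.0 - prod_fail

def gsfr(R, U):
    """Global System Failure Rate: Lambda(S) = -log(R(S)) / U(S)."""
    return -log(R) / U

# Sanity checks: at nominal frequency (f = 1) and reference temperature the
# failure rate reduces to lambda0; with a single replica, R = exp(-lam * exe).
lam = failure_rate(1e-5, 2.0, 1.0, 1 / 3, 1000.0, 300.0, 300.0)
R = reliability([(lam, exe_time(10, 0.5))])
```

A lower scaling factor lengthens the execution (first row) and, through the $10^{b(1-f)/(1-f_{min})}$ term, also raises the failure rate, which is why frequency scaling trades power against reliability.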
+
+## 4.6 Integer Linear Program
+
+We now propose an ILP formulation of our scheduling problem, with the purpose of comparison
+with the heuristic algorithm presented in Section 4.2. The models and the assumptions used in
+Section 4.2 are also used here for the ILP program. The decision variables are the following:
+
+$$
+\begin{align*}
+S_{ik} &\in \mathbb{N}: && \text{start time of replica } k \text{ of task } i \\
+F_{ik} &\in \mathbb{N}: && \text{finish time of replica } k \text{ of task } i \\
+Sb_{ik} &\in \mathbb{N}: && \text{start time of replica } k \text{ of data dependency } i \\
+Fb_{ik} &\in \mathbb{N}: && \text{finish time of replica } k \text{ of data dependency } i \\
+W &\in \mathbb{N}: && \text{total execution time of the application}
+\end{align*}
+$$
+
+$$
+x_{ikc} = \begin{cases} 1 & \text{if replica } k \text{ of task } i \text{ is assigned to core } c \\ 0 & \text{otherwise} \end{cases}
+$$
\ No newline at end of file
diff --git a/samples/texts/4011427/page_2.md b/samples/texts/4011427/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..348d9ccd27927cdeee6a1f3988eac76773ed3c37
--- /dev/null
+++ b/samples/texts/4011427/page_2.md
@@ -0,0 +1,9 @@
+Figure 1: Two transformation methods to compute the Pareto front (2D case): (a) $\varepsilon$-constraint method; (b) aggregation method.
+
+Building the whole Pareto front and considering all constraints in a multi-criteria problem is a complicated task. Several approaches exist for doing so [16], including the *aggregation* method, which combines all the criteria in a single cost function; the *hierarchization* method, which optimizes one criterion at a time; and the *transformation* method, which transforms all the criteria except one into thresholds and optimizes the remaining criterion under the constraint that the thresholds are satisfied (this last method is also called “budget optimization”). It is also possible to use population-based methods (e.g., genetic algorithms, particle swarm, ant colony) or the Normal-Boundary Intersection (NBI) method [17].
+
+Varying the cost function in the aggregation method or varying the order of the criteria in the hierarchization method can lead to computing several Pareto points, but not the entire Pareto front, a major theoretical drawback. The aggregation method is illustrated in Fig. 1(b), where the aggregation function is $f(Z_1, Z_2) = \alpha_1Z_1 + \alpha_2Z_2$. For two given values of $\alpha_1$ and $\alpha_2$, the Pareto point that is found is the one that minimizes $f$: geometrically, it is the point of the Pareto front intersecting the line of slope $-\alpha_1/\alpha_2$ that has the smallest value at the origin (the point $p$ in Fig. 1(b)). The problem is that the concave portions of the Pareto front are missed, e.g., the point $x_4$ in Fig. 1(b); more generally, this is always the case when the aggregation function is convex (as is the case for $f$). Conversely, if the aggregation function is not convex, then there is no guarantee that the computed points are on the Pareto front: for instance, a non-convex aggregation function could return the point $y_1$.
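The convexity argument can be checked by brute force on a small, invented point set: no pair of positive weights ever makes a linear aggregation select a Pareto point lying in a concave dent of the front (the analogue of $x_4$):

```python
# Made-up 2D front: (2.9, 2.9) is Pareto-optimal but lies in a concave
# dent, above the segment joining (2.0, 3.0) and (4.0, 1.0).
front = [(1.0, 5.0), (2.0, 3.0), (2.9, 2.9), (4.0, 1.0)]

def aggregation_winner(a1, a2, points):
    """Point minimizing the linear aggregation a1*Z1 + a2*Z2."""
    return min(points, key=lambda p: a1 * p[0] + a2 * p[1])

# Sweep the weights a1 in (0, 1), with a2 = 1 - a1.
winners = {aggregation_winner(i / 100, 1 - i / 100, front) for i in range(1, 100)}

assert (2.9, 2.9) not in winners   # the concave point is never selected
```

Whatever the weights, the supporting line of the minimum passes below the dent, which is exactly why the aggregation method misses $x_4$ in Fig. 1(b).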
+
+Overall, the transformation method is an effective way to build the entire Pareto front when used iteratively. With two criteria, it is known as the $\varepsilon$-Constraint Method ($\varepsilon$CM) [12], depicted in Fig. 1(a). The criterion $Z_1$ is transformed into a constraint. At iteration 1, the threshold for $Z_1$ is set to $K_1^1 = +\infty$, yielding the Pareto optimum $x_1$. At iteration 2, the threshold $K_1^2$ is set to the horizontal coordinate of $x_1$, thereby excluding the emphasized (pink) portion of the plane and yielding the Pareto optimum $x_2$. This process repeats until all the points of the Pareto front have been found (if there is a finite number of them), or until a pre-decided number of Pareto points has been found. Under the two conditions that (i) the number of Pareto optima is *finite* and (ii) the minimization algorithm for $Z_2$ computes the *optimal* result, $\varepsilon$CM computes the entire Pareto front.
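Under assumption (ii), the iteration can be sketched over a finite set of candidate points, with an exact minimizer of $Z_2$ standing in for the single-criterion optimization (the point set is invented for illustration):

```python
def ecm(points):
    """epsilon-Constraint Method: repeatedly minimize Z2 subject to Z1 < K1,
    tightening K1 to the Z1-coordinate of the last optimum found."""
    front, k1 = [], float("inf")
    while True:
        feasible = [p for p in points if p[0] < k1]
        if not feasible:
            return front
        opt = min(feasible, key=lambda p: (p[1], p[0]))  # exact minimizer of Z2
        front.append(opt)
        k1 = opt[0]   # next threshold: the Z1 coordinate of the new optimum

pts = [(1.0, 5.0), (2.0, 3.0), (2.9, 2.9), (4.0, 1.0), (3.5, 2.0), (5.0, 0.9)]
result = ecm(pts)   # the entire front, from largest Z1 to smallest
```

Because each iteration strictly decreases the threshold, the method also recovers Pareto optima lying in concave portions of the front, which the aggregation method cannot.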
+
+$\varepsilon$CM has since been generalized to more than two criteria in [13], but at a very high computational cost: $k^{m-1}\mathcal{O}(\text{opt})$, where $k$ is the number of points in the Pareto front, $m$ is the number of criteria, and $\mathcal{O}(\text{opt})$ is the complexity of the single-criterion optimization algorithm. This computational complexity makes the generalized $\varepsilon$CM infeasible for our problem (if only for the
\ No newline at end of file
diff --git a/samples/texts/4011427/page_20.md b/samples/texts/4011427/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..a6811e3988bc5709a0df5fc8db827813c8f6e1e2
--- /dev/null
+++ b/samples/texts/4011427/page_20.md
@@ -0,0 +1,65 @@
+$$
+\begin{align*}
+x_{ikcfs} &= \begin{cases} 1 & \text{if replica } k \text{ of task } i \text{ is assigned to core } c \text{ at frequency } f \text{ and after a cooling time of } s \text{ time units} \\ 0 & \text{otherwise} \end{cases} \\
+\sigma_{ijkk'} &= \begin{cases} 1 & \text{if replica } k \text{ of task } i \text{ starts before replica } k' \text{ of task } j \\ 0 & \text{otherwise} \end{cases} \\
+Y_{iK} &= \begin{cases} 1 & \text{if task } i \text{ is replicated } K \text{ times} \\ 0 & \text{otherwise} \end{cases} \\
+B_{ik} &= \begin{cases} 1 & \text{if replica } k \text{ of task } i \text{ has an outgoing data dependency} \\ 0 & \text{otherwise} \end{cases}
+\end{align*}
+$$
+
+The main objective of our optimization problem is to minimize the total execution time. Two kinds of ILP constraints must then be formulated. The first kind guarantees schedulability:
+
+1. Every replica *k* of task *i* should be assigned to exactly one core *c*:
+
+$$
+\forall i, \forall k, \sum_c x_{ikc} = 1 \tag{23}
+$$
+
+2. Every replica *k* of task *i* on core *c* should be assigned to exactly one level of frequency and be preceded by exactly one cooling time (possibly of size 0):
+
+$$
+\forall i, \forall k, \forall c, \sum_{f,s} x_{ikcfs} = x_{ikc} \quad (24)
+$$
+
+3. The finish time of every replica *k* of task *i* should be less or equal than the total execution time:
+
+$$
+\forall i, \forall k, F_{ik} \le W \tag{25}
+$$
+
+4. The finish time of every replica *k* of task *i* is computed based on its execution time and its start time:
+
+$$
+\forall i, \forall k, F_{ik} = S_{ik} + \sum_{c,f,s} x_{ikcfs} \cdot exe_c(i,c,f,s) + Fb_{ik} \quad (26)
+$$
+
+where $exe_c(i, c, f, s)$ is the execution time of task $i$ on core $c$ at the $f$-th frequency level, after a cooling time of size $s$: $exe_c(i, c, f, s) = \mathcal{E}xe(i, c, f) + s$.
+
+5. Tasks cannot overlap and must obey their precedence order (*M* is a constant greater than the largest number appearing in the ILP program, the so-called “big *M*” method [39]):
+
+$$
+\forall i \neq j, \forall k, \forall k', \sigma_{ijkk'} + \sigma_{jik'k} \leq 1 \tag{27}
+$$
+
+$$
+\forall i, \forall j, \forall k, \forall k', S_{ik} \leq S_{jk'} + (1 - \sigma_{ijkk'}) \cdot M \quad (28)
+$$
+
+$$
+\begin{equation}
+\begin{split}
+&\forall i, \forall j, \forall k, \forall k', \forall c, F_{ik} \le (2 - x_{ikc} - x_{jk'c}) \cdot M \\
+&\phantom{\forall i, \forall j, \forall k, \forall k', \forall c, F_{ik}} + S_{jk'} + (1 - \sigma_{ijkk'}) \cdot M
+\end{split}
+\tag{29}
+\end{equation}
+$$
+
+$$
+\forall i \in pred(j), \forall k, \forall k', F_{ik} \leq S_{jk'} \quad (30)
+$$
+
+$$
+\forall i \in pred(j), \forall k, \forall k', \sigma_{ijkk'} = 1
+\quad (31)
+$$
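As a sanity check of the big-M linearization in Eqs. (27)-(29), the sketch below verifies a candidate schedule against these constraints for fixed core assignments and ordering binaries (one replica per task for brevity; the instance is hypothetical):

```python
M = 10_000  # "big M": larger than any other number in the program

def bigM_ok(S, F, core, sigma):
    """Check Eqs. (27)-(29) for fixed start times S, finish times F,
    core assignments, and ordering binaries sigma[(i, j)]."""
    for i in S:
        for j in S:
            if i == j:
                continue
            if sigma[(i, j)] + sigma[(j, i)] > 1:          # (27)
                return False
            if S[i] > S[j] + (1 - sigma[(i, j)]) * M:      # (28)
                return False
            # (29): active only when i and j share a core and sigma = 1;
            # otherwise a big-M slack disables the constraint.
            core_slack = 0 if core[i] == core[j] else M
            if F[i] > core_slack + S[j] + (1 - sigma[(i, j)]) * M:
                return False
    return True

core = {"A": "c1", "B": "c1"}
sigma = {("A", "B"): 1, ("B", "A"): 0}
ok = bigM_ok({"A": 0, "B": 5}, {"A": 5, "B": 9}, core, sigma)    # serialized
bad = bigM_ok({"A": 0, "B": 3}, {"A": 5, "B": 7}, core, sigma)   # overlap
```

With the ordering binary set, the first schedule (A finishes at 5, B starts at 5) passes, while the second (B starts at 3, before A finishes) violates Eq. (29).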
\ No newline at end of file
diff --git a/samples/texts/4011427/page_21.md b/samples/texts/4011427/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..a27951d34d1d8fd5712bd13bafcaeadaaa836d78
--- /dev/null
+++ b/samples/texts/4011427/page_21.md
@@ -0,0 +1,69 @@
+6. If task *j* is a successor of *i* and both are assigned to different cores, then this data dependency must be transmitted on the bus:
+
+$$
+\forall i, \forall k, \forall j \in \text{pred}(i), \forall k', \forall c' \neq c,
+$$
+
+$$
+B_{ik} = \bigvee_{c} \left( x_{ikc} \wedge \left( \bigvee_{j,k',c'} x_{jk'c'} \right) \right) \quad (32)
+$$
+
+where the logical operators $\wedge$ and $\vee$ are linearized [39].
+
+7. The start time of data dependency *i* is computed based on the first idle time of the bus and on the previous data dependencies transmitted on the bus:
+
+$$
+\forall i, \forall k, \forall b, Sb_{ik} = \sum_{j,k'} ((\sigma_{jik'k} \land B_{ik} \land B_{jk'}) \cdot exe_b(j,b)) \quad (33)
+$$
+
+where $exe_b(j,b)$ is the transmission time of data-dependency $j$ on bus $b$: $exe_b(j,b) = \mathcal{E}xe(j, b, f_b)$ (recall that buses operate at the fixed frequency $f_b$, and that we do not insert cooling times on the buses).
+
+8. The finish time of each data dependency is the sum of its start time and its transmission time:
+
+$$
+\forall i, \forall b, \forall k, Fb_{ik} = Sb_{ik} + B_{ik} \cdot exe_b(i, b) \quad (34)
+$$
+
+9. Data dependencies must be serialized on the bus:
+
+$$
+\begin{array}{l}
+\forall i, \forall k, \forall j \ge i, \forall k', \forall b, \\
+Sb_{ik} \le Sb_{jk'} - exe_b(i, b) + (1 - B_{ik} + \sigma_{ijkk'}) \cdot M
+\end{array}
+\tag{35}
+$$
+
+The second kind are the ILP constraints that guarantee that the GSFR / power consumption / temperature remain below $\Lambda_{obj}/P_{obj}/T_{obj}$:
+
+1. The GSFR must be less than or equal to $\Lambda_{obj}$:
+
+$$
+\forall i, \sum_{K} Y_{iK} = 1
+\quad (36)
+$$
+
+$$
+\forall i, \forall c, \sum_k x_{ikc} \le 1
+\quad (37)
+$$
+
+$$
+\forall i, \sum_{k,c} x_{ikc} = \sum_{K} K \cdot Y_{iK} \quad (38)
+$$
+
+$$
+\begin{aligned}
+\forall i, \quad &\sum_{k,c,f,s} x_{ikcfs} \cdot \text{GSFR}(c, f, s) \\
+&\quad + \sum_{k,b} B_{ik} \cdot \text{GSFR}(b, f_b, 0) \leq \Lambda_{obj}
+\end{aligned}
+\quad (39)
+$$
+
+2. The power consumption must be less than $P_{obj}$:
+
+$$
+\begin{aligned}
+&\sum_{i,k,c,f,s} x_{ikcfs} \cdot exe_c(i,c,f,s) \cdot P(f,s) \\
+&\quad + \sum_{i,k,b} B_{ik} \cdot P(f_b, 0) \cdot exe_b(i,b) \leq P_{obj} \cdot W
+\end{aligned}
+\quad (40)
+$$
+
+where $P(f,s)$ is the sum of leakage and dynamic power consumption when the task runs at frequency $f$ and is preceded by a cooling time of size $s$.
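Constraint (40) is an energy budget: it caps the *average* power of the schedule by comparing the total energy of the task executions and bus transfers against $P_{obj} \cdot W$. A small numeric sketch with made-up durations and powers:

```python
def power_constraint_ok(task_execs, bus_transfers, P_obj, W):
    """Eq. (40): total energy <= P_obj * W.
    task_execs and bus_transfers are lists of (duration, power) pairs."""
    energy = sum(d * p for d, p in task_execs) + \
             sum(d * p for d, p in bus_transfers)
    return energy <= P_obj * W

tasks = [(20, 1.8), (15, 1.2)]   # (exe_c(i,c,f,s), P(f,s)) pairs, made up
bus = [(5, 0.4)]                 # (exe_b(i,b), P(f_b, 0)) pairs, made up
# total energy = 36 + 18 + 2 = 56; a 2 W budget over W = 40 allows 80
ok = power_constraint_ok(tasks, bus, 2.0, 40)
```

Note that a longer schedule (larger $W$) relaxes the right-hand side, one of the couplings that makes the execution time and the power consumption antagonistic.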
\ No newline at end of file
diff --git a/samples/texts/4011427/page_22.md b/samples/texts/4011427/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..8fcda9d9a21c2b0e838ce56dbc8c6e3e0ccdb3a8
--- /dev/null
+++ b/samples/texts/4011427/page_22.md
@@ -0,0 +1,29 @@
+3. The temperature on each hardware component (cores and bus) must be less than or equal to $T_{obj}$:
+
+$$
+\begin{aligned}
+& \forall i, \forall k, \log(T_{\infty}^{\text{heat}} - T_0) - a \cdot F_{ik} + C \cdot M \ge \\
+& \qquad \log(T_{\infty}^{\text{heat}} - T_{obj})
+\end{aligned}
+\tag{41} $$
+
+$$
+\begin{aligned}
+& \forall i, \forall k, \log(T_{\infty}^{\text{cool}} - T_0) - a \cdot F_{ik} \le \\
+& \qquad \log(T_{obj} - T_{\infty}^{\text{cool}}) + (1-C) \cdot M
+\end{aligned}
+\tag{42} $$
+
+where $T_0$, $T_\infty^{\text{heat}}$, and $T_\infty^{\text{cool}}$ represent respectively the initial temperature at $t_0$, the heating steady state temperature, and the cooling steady state temperature. Eqs. (41) and (42) are for the cores; for the bus it suffices to replace $F_{ik}$ by $Fb_{ik}$ and to take the value of parameter $a$ corresponding to the bus.
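Eqs. (41) and (42) are log-linearizations of the exponential temperature solution. For the heating case, assuming $T_0 < T_{obj} < T_\infty^{heat}$, requiring the temperature at the finish time $F_{ik}$ to stay below $T_{obj}$ unfolds as:

```latex
\begin{aligned}
T_\infty^{heat} + (T_0 - T_\infty^{heat})\, e^{-a F_{ik}} \le T_{obj}
  &\iff (T_\infty^{heat} - T_0)\, e^{-a F_{ik}} \ge T_\infty^{heat} - T_{obj} \\
  &\iff \log(T_\infty^{heat} - T_0) - a F_{ik} \ge \log(T_\infty^{heat} - T_{obj})
\end{aligned}
```

which is Eq. (41) up to the big-$M$ term that deactivates the constraint when the binary selector picks the cooling case of Eq. (42).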
+
+Based on these equations, the main objective of ILP is to minimize the total execution length (the *W* variable in our ILP formulation), under the constraints specified by Eqs (23) to (42). In Section 5.4, we will compare the Pareto fronts computed respectively by our quad-criteria heuristic ERPOT and by an ILP program.
+
+# 5 Simulation results
+
+We ran several kinds of experiments to evaluate our ERPOT heuristic. In Section 5.1, we assess the influence of the *temperature*, *power consumption*, and *reliability* constraints on the *execution time*. In Section 5.2, we show a whole Pareto front for a given problem instance. In Section 5.3, we compare ERPOT with the PowerPerf-PET scheduling heuristic from [4]. Finally, in Section 5.4, we compare ERPOT with the ILP program of Section 4.6.
+
+The target multicore chip is shown in Fig. 3(b) and the parameter values are provided in Table 2, taken in part from [8] and [7].
+
+| Value(s) | Component |
+| --- | --- |
+| $\lambda_0 = 10^{-5}$, $C = 0.03\ \mathrm{J\,K^{-1}}$, $G = 0.3\ \mathrm{W\,K^{-1}}$, $\beta_h = -11\ \mathrm{W}$ | for each core |
+| $C = 0.01\ \mathrm{J\,K^{-1}}$, $G = 0.1\ \mathrm{W\,K^{-1}}$, $\beta_h = -4\ \mathrm{W}$, $\beta_c = -8\ \mathrm{W}$, $\alpha = 0.04\ \mathrm{W\,K^{-1}}$ | for the bus |
+| $C_{ef} = 10^{-8}\ \mathrm{J\,V^{-2}}$ | same for the cores and the bus |
+| $\kappa(bus, c_i) = 0.03\ \mathrm{W\,K^{-1}}$, $\kappa(c_1, c_2) = \kappa(c_3, c_4) = 0.1\ \mathrm{W\,K^{-1}}$ | thermal conductivity |
+| {(900 MHz, 1.20 V), (600 MHz, 1.10 V), (300 MHz, 1.06 V)} | (voltage, frequency) pairs for the cores |
+| {$f_{max} = f_3 = 1$, $f_2 = 2/3$, $f_{min} = f_1 = 1/3$} | scaling factors |
+| (300 MHz, 1.06 V) | (voltage, frequency) pair for the bus |
+| $f_b = 1/3$ | scaling factor for the bus |
+
+Table 2: Parameter values.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_23.md b/samples/texts/4011427/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/samples/texts/4011427/page_24.md b/samples/texts/4011427/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..79ad9e3ac3def8a81187dfa61147665b3d77796a
--- /dev/null
+++ b/samples/texts/4011427/page_24.md
@@ -0,0 +1,13 @@
+## 5.1 Influence of the constraints on the schedules
+
+Fig. 8 has been obtained with an *Alg* graph consisting of 41 nodes, generated randomly with TGFF [40] (with the maximum value of in and out degree set to 4), and scheduled on the fully connected quad-core chip specified above. The nominal WCETs of the tasks are in the range [$5\ ms$, $15\ ms$] while the nominal WCCTs of the data-dependencies are in the range [$3\ ms$, $5\ ms$]$^3$.
+
+Fig. 8(a) shows the variation of the chip temperature as a function of the execution time and the effect of inserting cooling times in the schedule, for two different values of the initial temperature $T_{init}$: 298 K and 357 K. In both cases, $T_{obj} = 360\ K$, $P_{obj} = 2\ W$, and $\Lambda_{obj} = 10^{-8}$. When $T_{init} = 298\ K$, the temperature increases steadily during a transient phase, and then stabilizes just below $T_{obj}$, by virtue of the cooling times. When $T_{init} = 357\ K$, the temperature remains just below $T_{obj}$ during the whole schedule, again by virtue of the cooling times. The initial temperature has a significant impact on the schedule length, from 451 ms for 298 K (indicated by the dashed vertical line) to 608 ms for 357 K, a 35% increase.
+
+Figure 8: (a) Evolution of the temperature when $T_{init} = 298\ K$ and $T_{init} = 357\ K$. (b) Evolution of the temperature of each component.
+
+Fig. 8(b) depicts the temperature variation of the five hardware components of the chip (bus, C1, C2, C3, and C4) during a schedule produced with the same parameters as Fig. 8(a). The temperatures of the four cores remain in a very small interval, [356 K, 360 K], demonstrating the effectiveness of our scheduling heuristic at capping the peak temperature. The bus temperature is significantly lower, for the simple reason that the bus is often idle. The fact that the temperature variations are very small, both over time and between the cores, also helps limit the aging of the chip [7].
+
+Fig. 9 has been obtained with 50 DAGs generated randomly, each with 50 tasks having an $\mathcal{E}xe_{nom}$ in the range [3, 12], and such that the total sum of the $\mathcal{E}xe_{nom}$ of their tasks is in the range [540, 560]. Each point is the average value of the $C_{max}$ over the 50 DAGs and each vertical bar shows the range around the average value. The schedule length increases when $\Lambda_{obj}$ decreases (Fig. 9(a)). This is expected since more replications are required to satisfy the lower failure rate constraint: the two criteria are antagonistic. Moreover, the schedule length increases when $P_{obj}$ decreases (Fig. 9(b)). This is expected since lowering the power consumption requires lowering the frequencies used by the cores, which increases the execution time. Again, the two criteria are antagonistic.
+
+³From now on, the time unit will be the millisecond (ms).
\ No newline at end of file
diff --git a/samples/texts/4011427/page_25.md b/samples/texts/4011427/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..37b88c8b891f46e783fa832a61c1601354f61a11
--- /dev/null
+++ b/samples/texts/4011427/page_25.md
@@ -0,0 +1,15 @@
+Figure 9: (a) Influence of $\Lambda_{obj}$ and (b) of $P_{obj}$ on the execution time.
+
+## 5.2 Pareto fronts obtained with ERPOT
+
+In this section, we compute the whole Pareto front for an *Alg* graph with 41 nodes onto the quad-core *Arc* graph of Fig. 3(b) with the parameters of Table 2. Ideally, we would like to visualize this Pareto front in 4D, but such a plot is very hard to read on paper. To circumvent this difficulty, we show several 3D Pareto fronts in the (execution time, GSFR, temperature) space and vary the fourth dimension, the power consumption, across them. We use 10 different values for each criterion. These threshold values must be provided by the user because they are application and platform dependent.
+
+• $\Lambda_{obj} \in \{10^{-9}, 3.16 \cdot 10^{-9}, 10^{-8}, \dots, 3.16 \cdot 10^{-5}\}$;
+
+• $P_{obj} \in \{1.3, 1.6, 1.9, \dots, 4.0\}$, in Watts;
+
+• $T_{obj} \in \{340, 345, 350, \dots, 385\}$, in Kelvin.
+
+Algorithm 2 implements the grid method for our four criteria. The function ERPOT with the parameters $\Lambda_i$, $P_j$, $T_k$ returns the Pareto point that minimizes the execution time under the constraints $\Lambda < \Lambda_i$, $P < P_j$, and $T < T_k$. Since $\Lambda_{obj}$ follows a logarithmic scale, $\Lambda_{incr}$ is used as a multiplier.
+
+Fig. 10 shows the resulting Pareto front in 3D for three different values of $P_{obj}$, 1.3 W, 2.5 W, and 4.0 W. A lower value of $P_{obj}$ implies higher values for the $C_{max}$. This is expected because the power consumption and the execution time are antagonistic.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_26.md b/samples/texts/4011427/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..84d07739db8d8417a556cb5a4a1f656432f7c1f1
--- /dev/null
+++ b/samples/texts/4011427/page_26.md
@@ -0,0 +1,25 @@
+**Algorithm 2** Grid method algorithm for 4 criteria.
+
+**input:** The range [$\Lambda_{min}, \Lambda_{max}$] and the increment $\Lambda_{incr}$
+**input:** The range [$P_{min}, P_{max}$] and the increment $P_{incr}$
+**input:** The range [$T_{min}, T_{max}$] and the increment $T_{incr}$
+**output:** The list of Pareto points *Res*
+
+1: **function** GRID($\Lambda_{min}, \Lambda_{max}, \Lambda_{incr}, P_{min}, P_{max}, P_{incr}, T_{min}, T_{max}, T_{incr}$)
+2: Res ← ∅; $\Lambda_1$ ← $\Lambda_{min}$; i ← 1
+3: **while** $\Lambda_i \le \Lambda_{max}$ **do**
+4: $P_1$ ← $P_{min}$; j ← 1
+5: **while** $P_j \le P_{max}$ **do**
+6: $T_1$ ← $T_{min}$; k ← 1
+7: **while** $T_k \le T_{max}$ **do**
+8: Res ← Res ∪ ERPOT($\Lambda_i, P_j, T_k$)
+9: $T_k$ ← $T_k + T_{incr}$
+10: **end while**
+11: $P_j$ ← $P_j + P_{incr}$
+12: **end while**
+13: $\Lambda_i$ ← $\Lambda_i \times \Lambda_{incr}$
+14: **end while**
+15: **return** REMOVEDOMINATEDPOINTS(*Res*)
+16: **end function**
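Algorithm 2 translates directly into Python. The sketch below replaces the ERPOT call with a stub that merely records the thresholds, uses a multiplier of 10 for $\Lambda$ instead of the experiments' $\sqrt{10} \approx 3.16$, and picks power steps that are exact in binary; the tolerance on the $\Lambda$ bound guards against floating-point drift in the multiplicative loop:

```python
def grid(l_min, l_max, l_incr, p_min, p_max, p_incr, t_min, t_max, t_incr, erpot):
    """Algorithm 2: enumerate threshold triples and collect the points
    returned by erpot (the final non-dominated filtering is omitted here)."""
    res = []
    lam = l_min
    while lam <= l_max * (1 + 1e-9):   # tolerance for floating-point drift
        p = p_min
        while p <= p_max:
            t = t_min
            while t <= t_max:
                res.append(erpot(lam, p, t))
                t += t_incr
            p += p_incr
        lam *= l_incr                  # logarithmic scale: multiplicative step
    return res

# Stub standing in for ERPOT: just record the thresholds of each call.
points = grid(1e-9, 1e-5, 10.0, 1.0, 4.0, 0.5, 340, 385, 5,
              erpot=lambda lam, p, t: (lam, p, t))
```

With 5 values of $\Lambda$, 7 of $P$, and 10 of $T$, the stub is called 350 times, one ERPOT run per grid cell.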
+
+Figure 10: Pareto fronts in 3D for three different values of $P_{obj}$: 1.3 W, 2.5 W, and 4.0 W.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_27.md b/samples/texts/4011427/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..d1d252c85d9b8a9f29d7dde46bfd9c5b9bf7af46
--- /dev/null
+++ b/samples/texts/4011427/page_27.md
@@ -0,0 +1,19 @@
+| Section | Title | Page |
+| --- | --- | --- |
+| 1 | Introduction | 4 |
+| 2 | Pareto optimization | 6 |
+| 3 | System model | 9 |
+| 3.1 | Application and architecture models | 9 |
+| 3.2 | Static mapping and scheduling | 10 |
+| 3.3 | Reliability | 10 |
+| 3.4 | Power consumption | 13 |
+| 3.5 | Temperature | 14 |
+| 4 | ERPOT: The Proposed Quad-Criteria Optimization Scheduling Heuristic Method | 16 |
+| 4.1 | General principles of ERPOT | 16 |
+| 4.2 | Quad-criteria scheduling heuristic algorithm | 18 |
+| 4.3 | Soundness of our scheduling heuristic | 19 |
+| 4.4 | Dealing with reactive systems | 20 |
+| 4.5 | Taking into account the temperature of the adjacent cores | 22 |
+| 4.6 | Integer Linear Program | 23 |
+| 5 | Simulation results | 26 |
+| 5.1 | Influence of the constraints on the schedules | 26 |
+| 5.2 | Pareto fronts obtained with ERPOT | 27 |
+| 5.3 | Comparison with PowerPerf-PET | 30 |
+| 5.4 | Evaluation of the ILP model | 31 |
+| 6 | Related work | 32 |
+| 7 | Conclusion | 34 |
+
+# 1 Introduction
+
+Multicores are widely used in modern safety critical embedded systems design. Their advantages over super-scalar processor architectures are lower power consumption, higher performance, and lower design complexity [1]. When designing safety critical applications, many non-functional criteria must be addressed. The most important ones are the *total execution time* (because these systems must react to inputs within a fixed delay), the *reliability* (because failures could have fatal consequences), the *power consumption* (to maximize the autonomy of the system when it operates on a battery), and the *temperature* (because of its negative influence on processing speed, reliability, and power consumption) [1, 2, 3, 4]. There are many real-life applications that motivate our study, including satellite systems, portable medical devices, and full authority digital engine control (FADEC) in aircraft.
+
+Considering these four criteria simultaneously during the design phase is very difficult because they are *antagonistic* [5, 1, 6, 2, 7, 4, 8, 9]. For instance, the total execution time and reliability are antagonistic because increasing the reliability requires some form of redundancy (be it spatial or temporal), which negatively impacts the execution time. Similarly, the execution time and the temperature are antagonistic because adding idle times to cool the cores obviously has a negative impact on the execution time. Finally, the execution time and the power consumption are antagonistic because reducing the power consumption requires lowering the operating voltage
\ No newline at end of file
diff --git a/samples/texts/4011427/page_4.md b/samples/texts/4011427/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..36ca5ea57dccdb4c3e55ed2c8f732b2db4f4988d
--- /dev/null
+++ b/samples/texts/4011427/page_4.md
@@ -0,0 +1,30 @@
+**Algorithm 1** Grid method algorithm for 2 criteria.
+
+**input:** The range [$K_1^{min}$, $K_1^{max}$] and the decrement $\Delta$
+
+**output:** The list of Pareto points *Res*
+
+1: **function** GRID($K_1^{min}$, $K_1^{max}$, $\Delta$)
+2: *Res* ← ∅; $K_1^1$ ← $K_1^{max}$; *i* ← 1
+3: **while** $K_1^i \ge K_1^{min}$ **do**
+4: *Res* ← *Res* ∪ OPT($K_1^i$)
+5: $K_1^i$ ← $K_1^i$ − $\Delta$
+6: **end while**
+7: **return** REMOVEDOMINATEDPOINTS(*Res*)
+8: **end function**
+
+between $Z_1$ and $Z_2$, because no Pareto optimum has been found that dominates $x_4$.
+
+# 3 System model
+
+## 3.1 Application and architecture models
+
+An application is modeled as a directed acyclic graph (DAG) $Alg = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges. Each node represents a computing task, and each edge represents a data-dependency between two tasks. All tasks are assumed to be side-effect free (this assumption is required for active replication). If $X \rightarrow Y$ is a data-dependency, then $X$ is a predecessor of $Y$ and $Y$ is a successor of $X$; $X$ is called the source of the data-dependency and $Y$ its destination. We also define the sets $pred(X) = \{Y \mid (Y, X) \in \mathcal{E}\}$ and $succ(X) = \{Y \mid (X, Y) \in \mathcal{E}\}$. Tasks with no predecessor are called input tasks, and those with no successor are called output tasks.
+
+Fig. 3(a) shows an example of a DAG with two input tasks ($I_1$ and $I_2$), one output task ($O_1$) and four regular tasks ($A$, $B$, $C$ and $D$).
+
+Figure 3: (a) A sample application graph. (b) A sample architecture graph. (c) The corresponding coarse grain floorplan.
+
+An *architecture* is a possibly heterogeneous multicore chip with one or more communication buses. It is modeled as a graph *Arc* = (*C*, *B*, *L*), where *C* is the set of cores, *B* is the set of communication buses, and each *e* ∈ *L* is a pair (*c*,*b*) ∈ *C* × *B* specifying that the core *c* is connected to the bus *b*. We assume that there exists a path between any two cores *c* and *c'*. An example of a target architecture made of four cores and one bus is shown in Fig. 3(b).
+
+We are also given a function $\mathcal{E}xe_{nom}$ that returns the nominal (corresponding to the highest frequency) worst case execution times (WCETs) of all the tasks of *Alg* onto all the cores of *Arc*,
\ No newline at end of file
diff --git a/samples/texts/4011427/page_40.md b/samples/texts/4011427/page_40.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e3e37523ff36e69ef5c1eb02cd37d12f7c1f5c0
--- /dev/null
+++ b/samples/texts/4011427/page_40.md
@@ -0,0 +1,17 @@
+and frequency of the cores, which increases the execution time. Those tradeoffs are easy to grasp (but difficult to address), but other tradeoffs are less obvious: for instance, lowering the operating voltage and frequency of a core (which lowers the power consumption) increases the nominal failure rate per time unit of this core. The reason is that the sensitivity of processors to energy particles leads to an increase of the failure rate at low voltage/frequency operating points [10, 11], because lowering the voltage decreases the critical charge of the circuit. As a consequence, the power consumption and the reliability are also antagonistic. Failing to take into account these antagonisms could result in bad design choices.
+
+These antagonisms call for the computation of as many tradeoffs as possible, rather than a single one, so that the user has a choice. We must therefore produce a *set* of solutions in the 4-dimensional space (execution time, reliability, power consumption, temperature). We rely on the notion of *Pareto dominance*, and we use a variant of the $\varepsilon$-constraint method [12, 13] coupled with a scheduling algorithm that accounts for the four criteria to produce the *Pareto front* in this 4D space. More precisely, we transform three criteria into *constraints* (the reliability, the power consumption, and the peak temperature), and we *minimize* the fourth one (the execution time of the schedule) under these three constraints.
+
+Although several studies have addressed some of these parameters, none have considered these four criteria jointly in an optimization problem. For instance, some studies completely ignore the reliability [14, 4] or the temperature [2, 9]. Other studies tackle the problem as a hardware/software co-design problem, jointly optimizing the floorplan of the multicore and the schedule of the application task graph to minimize the peak temperature [14], but without considering the reliability.
+
+We therefore propose a static scheduling heuristic method called ERPOT, an acronym that stands for Execution time, Reliability, Power consumption and Temperature. Given an application modeled as a Directed Acyclic Graph (DAG) of tasks, a multicore architecture, and thresholds on the reliability, the power consumption, and the temperature, ERPOT generates a static schedule of this DAG onto this multicore such that each constrained criterion remains below its threshold and the execution time is as small as possible. Each schedule is interpreted as a point in the 4D space (execution time, reliability, power consumption, temperature). By varying the values of the thresholds and iteratively calling ERPOT, we produce a full Pareto front in this 4D space.
+
+The problem of scheduling a DAG of tasks onto a distributed architecture is known to be NP-complete [15], and so is the multi-criteria scheduling problem, which motivates the design of a heuristic algorithm. Additionally, we present an ILP program of the optimization problem, which is used to validate ERPOT (i.e., both algorithms produce the same schedule on the same problem instance) and to assess experimentally how good ERPOT is. Comparing the results of ERPOT with the optimal results obtained by the ILP program shows that the average difference is less than 10%. However, ERPOT is much faster than the ILP program, which fails to complete even for application graphs of relatively small sizes (8 tasks at most).
+
+The key contributions of this paper are:
+
+* The ERPOT quad criteria scheduling heuristic, which optimizes the *execution time*, the *reliability*, the *power consumption*, and the *temperature*.
+
+* A 4D variant of the $\varepsilon$-constraint method [12] to build the Pareto front of the solutions in the 4D space (execution time, reliability, power, temperature).
+
+* An ILP program of the quad criteria optimization problem to compare the solution computed by ERPOT with the optimal solution.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_41.md b/samples/texts/4011427/page_41.md
new file mode 100644
index 0000000000000000000000000000000000000000..5746a35927ce5325304b5e16e93bd6743658c313
--- /dev/null
+++ b/samples/texts/4011427/page_41.md
@@ -0,0 +1,17 @@
+ERPOT extends the heuristics proposed in [2] by taking into account the peak temperature. The first challenge in doing so lies in the intricate dependence of the temperature on the other criteria of [2], namely the failure rate, the power consumption, and the execution time. The second challenge is in the scheduling heuristic itself: each scheduling decision is made by “predicting” what the temperature, power consumption, and failure rate will be at the end of the task being scheduled. However, the temperature varies during the execution of the task, because it obeys the classical thermal differential equation. Since the power consumption (and similarly the failure rate) depends on the temperature, computing the power consumption is inexact unless it is performed continuously during the execution of the task being scheduled, which is much too expensive. Addressing this challenge requires an over-approximation of the temperature and a proof that this over-approximation is safe for the power-consumption constraint. This was not needed when only the power consumption and the failure rate were considered, which made the scheduling heuristic of [2] much simpler. The third challenge resides in maintaining the peak temperature below a given threshold, which involves a combination of lowering the voltage/frequency (thanks to DVFS), inserting cooling intervals, and over-estimating the temperature when there are “holes” at the end of the schedule under construction. A final contribution compared to [2] is that the ILP program of [2] does not consider the cost of the communications, while the ILP program of Section 4.6 does, so the comparison performed in Section 5.4 is more relevant than the one presented in [2].
+
+The rest of this paper is organized as follows. Section 2 recalls the basics about Pareto dominance and how to compute the Pareto front with the $\epsilon$-constraint method. Section 3 provides the required preliminaries including the application and architecture models and the interplay between the reliability, the power consumption, the temperature, and the execution time. Section 4 provides the proposed scheduling heuristic ERPOT, along with its ILP counterpart. Section 5 presents the results of our simulations, performed both with syntactic benchmarks and with real-life benchmarks. Finally, Section 6 surveys the related work and Section 7 gives some concluding remarks.
+
+## 2 Pareto optimization
+
+Before detailing our problem formulation, solutions, and algorithms, we give foundational background on Pareto optimization. When optimizing more than one criterion, there can be several non-comparable solutions, e.g., (42, 13) versus (9, 78) in the case of two criteria that must be minimized. The principle of Pareto optimization is to explore the design space by providing as many solutions as possible, to study the tradeoffs between these solutions. To compare solutions, we rely on the notion of dominance and Pareto optima, presented below in the case of two criteria that must be minimized (see Fig. 1(a)):
+
+* The point $(x, y)$ weakly dominates the point $(x', y')$ iff $(x < x' \land y = y') \lor (x = x' \land y < y')$. E.g., $x_2$ weakly dominates $x_1$.
+
+* The point $(x, y)$ strongly dominates the point $(x', y')$ iff $(x < x' \land y < y')$. E.g., $x_3$ strongly dominates $y_1$.
+
+* A point is a weak Pareto optimum iff there does not exist another point that strongly dominates it. E.g., $x_1, ..., x_5$ are weak Pareto optima.
+
+* A point is a strong Pareto optimum iff there does not exist another point that dominates it (weakly or strongly). E.g., $x_2, ..., x_5$ are strong Pareto optima.
+
+* The Pareto front is the set of all weak and strong Pareto optima.
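These dominance relations translate directly into code. The sketch below (hypothetical helper names, two criteria to be minimized as in Fig. 1(a)) implements weak and strong dominance exactly as defined above, and filters a point set down to its weak Pareto optima:

```python
def weakly_dominates(p, q):
    # equal on one criterion and strictly better on the other
    (x, y), (xp, yp) = p, q
    return (x < xp and y == yp) or (x == xp and y < yp)

def strongly_dominates(p, q):
    # strictly better on both criteria
    (x, y), (xp, yp) = p, q
    return x < xp and y < yp

def pareto_front(points):
    # weak Pareto optima: points strongly dominated by no other point
    return [p for p in points
            if not any(strongly_dominates(q, p) for q in points)]
```

For instance, `pareto_front([(1, 5), (2, 2), (3, 1), (4, 4)])` drops only `(4, 4)`, which is strongly dominated by `(2, 2)`; the three remaining points are mutually incomparable.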
\ No newline at end of file
diff --git a/samples/texts/4011427/page_5.md b/samples/texts/4011427/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..3f8c70ae083f32fe5e663dcb2a82ebd5ce2e1bc0
--- /dev/null
+++ b/samples/texts/4011427/page_5.md
@@ -0,0 +1,43 @@
+as well as the worst case communication times (WCCTs) of all the data-dependencies of *Alg* onto
+all the communication buses of *Arc*. An intra-core communication takes no time to execute. For
+the sake of simplicity, all execution times are assumed to be integer numbers.
+
+Computing the WCET of a given task on a processor has been the topic of much work. It
+involves finding the sequence of instructions in the program of the task that leads to the longest
+execution time. This is achieved by extracting the control flow graph (CFG) of the program,
+then by giving a duration (i.e., a number of clock cycles) to each basic block of the CFG. These
+durations are computed based on a model of the micro-architecture of the processor. This step
+includes some pessimism because of the hardware abstraction, be it in the cache replacement
+policy, the pipeline, the branch predictor, or the prefetch buffer. Based on this, the WCET is the
+length of the maximum-weight path in the annotated CFG. In general, the CFG contains backward
+edges, corresponding to the loops of the program. In this case, it is necessary to analyze the
+program in order to bound the number of iterations of each loop, which is classically done with
+abstract interpretation [18].
+
+WCET analysis has been applied with success to real-life single-core processors actually used
+in embedded systems, with branch prediction [19] or with caches and pipelines [20]. These
+methods have later been adapted to multicores [21, 22, 23], taking into account the shared
+resources in the multicore (e.g., the shared memory or the bus).
+
+Finally, the multicore is equipped with per-core DVFS. For each core, a set of (voltage,frequency)
+pairs $\{(V_i, f_i)\}_{1 \le i \le l}$ is given. For the sake of simplicity, we assume that all the cores have the
+same set of (voltage,frequency) pairs. The actual execution time of a task $\tau$ on a core $c$ de-
+pends on the frequency $f$ (in contrast, the buses are assumed to run at a fixed frequency de-
+noted $f_b$). To ease the computations, we transform the frequencies into scaling factors. E.g.,
+if the set of available frequencies is {900 MHz, 600 MHz, 300 MHz}, then we use the scaling
+factors {$f_{max} = f_3 = 1$, $f_2 = \frac{2}{3}$, $f_{min} = f_1 = \frac{1}{3}$}. As a result, the WCET of task $\tau$ at frequency $f$ is
+given by:
+
+$$
+\mathcal{E}xe(\tau, c, f) = \lceil \mathcal{E}xe_{nom}(\tau, c)/f \rceil \quad (1)
+$$
+
+where the $\lceil \cdot \rceil$ function guarantees that $\mathcal{E}xe$ always returns an integer number.
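With rational scaling factors, Eq. (1) can be evaluated exactly; the sketch below (hypothetical function name) assumes the rounding bracket denotes rounding up, the conservative choice for a worst-case time:

```python
from fractions import Fraction
import math

def exe(exe_nom: int, f: Fraction) -> int:
    # Eq. (1): the WCET grows as the scaling factor f decreases; rounding
    # up keeps the result integer (assumed reading of the bracket function,
    # the safe direction for a worst-case execution time)
    return math.ceil(Fraction(exe_nom) / f)
```

With the scaling factors {1, 2/3, 1/3} of the example above, `exe(30, Fraction(1, 3))` gives 90, i.e., three times the nominal WCET; using `Fraction` avoids the floating-point round-off that a literal `30 / (1/3)` would introduce.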
+
+## 3.2 Static mapping and scheduling
+
+The specification of the system consists of *Alg*, *Arc*, and $\mathcal{E}xe_{nom}$. Scheduling *Alg* onto *Arc* involves two steps. First, we must choose one or several cores of *Arc* for each task of *Alg* and one or several communication buses of *Arc* for each data-dependency: this is the *mapping*. During this phase, we take into account (i) the reliability constraint by choosing how many cores must execute each task, (ii) the power consumption constraint by choosing at what frequency/voltage each component (core or bus) should execute each task and data-dependency, and (iii) the temperature constraint by inserting cooling times whenever necessary. Second, we must compute the starting time for each pair (task,proc) and each pair (data dep.,bus): this is the *scheduling*. This paper solves these two steps *statically*, i.e., at compile time, based on a ready list scheduling heuristic. Finally, as said in the introduction, we schedule under constraints on the failure rate, the power consumption, and the temperature. We denote these constraints by $\Lambda_{obj}$, $P_{obj}$, and $T_{obj}$, respectively.
+
+## 3.3 Reliability
+
+Both the cores and the buses are assumed to be *fail-silent*. Classically, we adopt the failure model of Shatz and Wang [24]: failures are *transient*, and the maximal duration of a failure is
\ No newline at end of file
diff --git a/samples/texts/4011427/page_6.md b/samples/texts/4011427/page_6.md
new file mode 100644
index 0000000000000000000000000000000000000000..22296cd9fc34f83b193431e04d73fa4816209bed
--- /dev/null
+++ b/samples/texts/4011427/page_6.md
@@ -0,0 +1,28 @@
+such that it affects only the current task executing onto the faulty core and not the subsequent
+tasks (same for the buses); this is known as the “hot” failure model.
+
+Since the real-time systems we target are safety critical, the occurrence of failures is not acceptable and the system's reliability must be as close as possible to 1. One of the main causes of system failure is transient faults [25], which are commonly modeled by a Poisson distribution with a constant rate denoted $\lambda$ [26]. Accordingly, the reliability of a single task or data-dependency $\tau$ mapped onto a hardware component $c$ (either a core or a bus) running at frequency $f$ is:
+
+$$R(\tau, c, f) = e^{-\lambda_c \cdot \mathcal{E}xe(\tau, c, f)} \quad (2)$$
+
+where $\lambda_c$ is the *failure rate per time unit* of the hardware component $c$, and $\mathcal{E}xe(\tau, c, f)$ is the execution time of $\tau$ on $c$ at frequency $f$, computed with Eq. (1). When $\tau$ is not replicated, we use Eq.(2). When $\tau$ is actively replicated on a set $K$ of $k$ hardware components numbered $\{c_i\}_{1 \le i \le k}$, each of them operating at frequency $f_{c_i}$, its reliability is:
+
+$$R(\tau, K) = 1 - \left( \prod_{i=1}^{k} \left( 1 - e^{-\lambda_{c_i} \cdot \mathcal{E}xe(\tau, c_i, f_{c_i})} \right) \right) \quad (3)$$
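Eqs. (2) and (3) transcribe directly into code; the helper names below are hypothetical and the rates and times in the usage note are purely illustrative:

```python
import math

def reliability_single(lam, exe_time):
    # Eq. (2): probability that one replica finishes without a transient failure
    return math.exp(-lam * exe_time)

def reliability_replicated(replicas):
    # Eq. (3): at least one of the k replicas succeeds;
    # replicas is a list of (failure_rate, execution_time) pairs
    prob_all_fail = 1.0
    for lam, exe_time in replicas:
        prob_all_fail *= 1.0 - math.exp(-lam * exe_time)
    return 1.0 - prob_all_fail
```

Going from one to two identical replicas squares the failure probability: with $\lambda = 10^{-5}$ and an execution time of 100, it drops from roughly $10^{-3}$ to roughly $10^{-6}$.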
+
+However, because of the operating frequency $f$, $\lambda$ is not constant anymore but is instead a function of the frequency [10]:
+
+$$\lambda_f = \lambda_0 \cdot \rho_f \quad \text{with} \quad \rho_f = 10^{\frac{b(1-f)}{1-f_{min}}} \qquad (4)$$
+
+where $\lambda_0$ is the nominal failure rate per time unit, $\rho_f$ is the frequency-dependent factor, $b$ is a strictly positive constant that accounts for the susceptibility of the hardware to transient faults due to frequency scaling, $f$ is the operational frequency level, and $f_{min}$ is the lowest frequency of the system. Recall that the frequency value $f$ is normalized in the range [0, 1] with $f_{max} = 1$. This is consistent with Eq. (1).
+
+Many articles have studied the impact of the temperature on the rate of transient faults [27, 28, 29]. In addition, there are several mechanisms that lead to permanent failures, most notably electro-migration, negative bias temperature instability, stress migration, time-dependent dielectric breakdown, and thermal cycling [30, 3]. All of these phenomena can be characterized by a failure rate as an exponential function of the temperature. We take into account the effect of the temperature on the failure rate per time unit with the Arrhenius equation [3]:
+
+$$\lambda_T = \lambda_0 \cdot \rho_T \quad \text{with} \quad \rho_T = e^{\frac{-E_a}{K} \left( \frac{1}{T(t)} - \frac{1}{T_0} \right)} \qquad (5)$$
+
+where again $\lambda_0$ is the nominal failure rate per time unit, $\rho_T$ is the temperature-related factor, $E_a$ is the activation energy, $K$ is Boltzmann's constant, $T(t)$ is the temperature of the system at time $t$ in Kelvin, and $T_0$ is the initial temperature. Of course, we will also have to take into account the effect of each core's temperature on the other cores (see Section 3.5).
+
+Finally, we combine Eqs. (4) and (5) to provide a global equation of the failure rate per time unit as a function of the frequency and the temperature. Since the frequency factor $\rho_f$ and the temperature factor $\rho_T$ are both dimensionless, the dimension of $\lambda_{sys}$ is the same as that of $\lambda_0$, hence $\lambda_{sys}$ is also a failure rate per time unit:
+
+$$\lambda_{sys} = \lambda_0 \cdot \rho_f \cdot \rho_T = \lambda_0 \cdot 10^{\frac{b(1-f)}{1-f_{min}}} \cdot e^{\frac{-E_a}{K} \left( \frac{1}{T(t)} - \frac{1}{T_0} \right)} \quad (6)$$
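Eq. (6) composes the two factors multiplicatively, which the following sketch makes explicit (the function name is hypothetical and the values in the usage note are illustrative, not calibrated to any platform):

```python
import math

def lambda_sys(lam0, b, f, f_min, Ea, K, T, T0):
    rho_f = 10.0 ** (b * (1.0 - f) / (1.0 - f_min))     # Eq. (4)
    rho_T = math.exp(-(Ea / K) * (1.0 / T - 1.0 / T0))  # Eq. (5)
    return lam0 * rho_f * rho_T                         # Eq. (6)
```

At the nominal frequency ($f = 1$) and the initial temperature ($T = T_0$), both factors equal 1 and `lambda_sys` reduces to `lam0`; lowering $f$ or raising $T$ strictly increases the rate.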
+
+When computing the reliability of a given task or data-dependency $\tau$ on a single hardware component $c_i$ (resp. a set $K = \{c_i\}_{1 \le i \le k}$), we therefore use Eq. (2) (resp. Eq. (3)) by replacing
\ No newline at end of file
diff --git a/samples/texts/4011427/page_7.md b/samples/texts/4011427/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..bd59b2b783e77ff350cb403d7d37bb5000e76c3f
--- /dev/null
+++ b/samples/texts/4011427/page_7.md
@@ -0,0 +1,19 @@
+$\lambda_{c_i}$ by $\lambda_{sys}(c_i):$
+
+$$R(\tau, c_i, f_{c_i}, t) = e^{-\lambda_{sys}(c_i) \cdot \mathcal{E}xe(\tau, c_i, f_{c_i})} \qquad (7)$$
+
+$$R(\tau, K, t) = 1 - \left( \prod_{i=1}^{k} \left(1 - e^{-\lambda_{sys}(c_i) \cdot \mathcal{E}xe(\tau, c_i, f_{c_i})}\right) \right) \qquad (8)$$
+
+where $t$ is shown to make explicit the dependency of the temperature of $c_i$ on the time in $\lambda_{sys}(c_i)$. In the entire paper, we take the temperature at the *task granularity*, i.e., we assume that $T(t)$ remains constant for the entire duration of $\tau$. We will prove at the end of this section that doing so is safe for the $\Lambda_{obj}$ constraint.
+
+It has been demonstrated in [5, 2] that using the reliability as a constraint in the $\epsilon$-constraint method does not work. Intuitively, this is because the reliability is not invariant with respect to the number of scheduled tasks. Indeed, computing the reliability of a schedule involves, at each mapping decision, a multiplication by a factor that is strictly less than 1: see Eq. (3). This is illustrated in Fig. 4(a), where the horizontal axis counts the task numbers in their mapping order (recall that we use a ready list scheduling algorithm). As long as the reliability is above the threshold $R_{obj}$, the tasks are not replicated, because this is what minimizes the schedule length; thus the replication level of tasks 1 to 4 is 1 (red dashed line). This results in a multiplicative factor significantly below 1, which causes the system's reliability to drop (blue solid line). Once task 4 has been scheduled, the reliability is very close to $R_{obj}$; this causes the replication level to skyrocket up to a value sufficient for the multiplying factor to be close enough to 1, so that the system's reliability remains above $R_{obj}$. We call this the “funnel effect” [2].
+
+Figure 4: Funnel effect: (a) when using the reliability, (b) when using the energy.
+
+For this reason, instead of the reliability, we use the Global System Failure Rate (GSFR) [5]. Intuitively, the GSFR of a possibly partial schedule is the failure rate of the system operating under this schedule as if it was a single task mapped on a single core. As a consequence, we schedule under a constraint $\Lambda_{obj}$ on the GSFR instead of a constraint $R_{obj}$ on the reliability. For a single task $\tau$, the GSFR is denoted $\Lambda(\tau)$ and is computed as:
+
+$$\Lambda(\tau, c, f, t) = \frac{-\log(R(\tau, c, f, t))}{\mathcal{E}xe(\tau, c, f)} \qquad (9)$$
+
+And for a schedule $S$, the GSFR $\Lambda(S)$ is computed as:
+
+$$\Lambda(S) = \frac{-\log(R(S))}{U(S)} \quad \text{with} \quad U(S) = \sum_{(\tau,c,f) \in S} \mathcal{E}xe(\tau,c,f) \qquad (10)$$
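For a schedule without replication, $R(S)$ is the product of the per-entry reliabilities of Eq. (7), so Eq. (10) reduces to a time-weighted average of the failure rates. A minimal sketch (hypothetical function name; replication, i.e., Eq. (8), is deliberately omitted):

```python
import math

def gsfr(schedule):
    # Eq. (10): schedule is a list of (failure_rate, execution_time) pairs,
    # one per (task, core, frequency) triple; without replication,
    # log R(S) = -sum(lam_i * t_i), so Lambda(S) is a weighted average
    log_R = -sum(lam * t for lam, t in schedule)
    U = sum(t for _, t in schedule)
    return -log_R / U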
\ No newline at end of file
diff --git a/samples/texts/4011427/page_8.md b/samples/texts/4011427/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..3173f8fd20444069277829d3dcd3a1647dced54a
--- /dev/null
+++ b/samples/texts/4011427/page_8.md
@@ -0,0 +1,35 @@
+where $R(S)$ is the reliability of the schedule $S$ and $U(S)$ is the overall sum of the execution times of the cores in $S$. The notation $(\tau, c, f) \in S$ means that, in the schedule $S$, task $\tau$ is executed on core $c$ at frequency $f$. Eq. (10) is equivalent to $R(S) = e^{-\Lambda(S) \cdot U(S)}$, which is the same as Eq. (2) but for a schedule $S$ instead of a single task $\tau$.
+
+One key aspect of Eq. (10) is that it uses $U(S)$ and not the schedule length. There are two reasons behind this choice: first it makes the computation of the GSFR *compositional* with respect to the structure of the schedule, and second it is consistent with the “hot” failure model [5].
+
+The consequence of this shift from the reliability to the GSFR is that, from now on, our state space will be the 4D space (execution time, GSFR, power, temperature).
+
+We are now ready to prove that assuming the temperature on each core $c_j$ and on the bus $b$ to remain constant during the duration of each task/data-dependency $\tau$ is safe w.r.t. the $\Lambda_{obj}$ constraint.
+
+**Proposition 1** Let $\tau$ be a task or a data-dependency scheduled on a hardware component $c$ at frequency $f$, starting at time $t_0$ and finishing at time $t_f = t_0 + \mathcal{E}xe(\tau, c, f)$. The reliability of $\tau$ on $c$ is computed with Eq. (7) and the GSFR with Eq. (9). (i) If the temperature increases over the interval $[t_0, t_f]$, then fixing $T(t) = T(t_f)$ is safe regarding the $\Lambda_{obj}$ constraint. (ii) If the temperature decreases over the interval $[t_0, t_f]$, then fixing $T(t) = T(t_0)$ is safe regarding the $\Lambda_{obj}$ constraint.
+
+**Proof:** (i) In the heating mode, the temperature increases during the execution of $\tau$, and when it does, $\lambda_{sys}(c)$ increases too. Since $R$ is decreasing in function of $\lambda_{sys}$, we have:
+
+$$ \forall t \in [t_0, t_f], R(\tau, c, f, t) \geq R(\tau, c, f, t_f) $$
+
+Since $R(\tau, c, f, t) \geq R(\tau, c, f, t_f) \iff \Lambda(\tau, c, f, t) \leq \Lambda(\tau, c, f, t_f)$, we therefore have:
+
+$$ \Lambda(\tau, c, f, t_f) \leq \Lambda_{obj} \implies \forall t \in [t_0, t_f], \Lambda(\tau, c, f, t) \leq \Lambda_{obj} $$
+
+which proves that assuming that $T(t)$ remains constant and equal to $T(t_f)$ is safe regarding the $\Lambda_{obj}$ constraint.
+
+(ii) In the cooling mode, the proof is identical since $T(t)$ decreases so $\lambda_{sys}(t)$ decreases, hence assuming that $T(t)$ remains constant and equal to $T(t_0)$ is safe regarding the $\Lambda_{obj}$ constraint. $\square$
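Proposition 1 can also be checked numerically: with the Arrhenius factor of Eq. (5), a rising temperature yields a falling reliability and a rising GSFR, so evaluating at $T(t_f)$ is indeed the pessimistic choice. All constants below are purely illustrative, not taken from the paper:

```python
import math

def lam(T, lam0=1e-6, Ea=0.5, K=8.617e-5, T0=300.0):
    # Arrhenius temperature factor of Eq. (5), illustrative constants
    return lam0 * math.exp(-(Ea / K) * (1.0 / T - 1.0 / T0))

exe_time = 50
temps = [300.0, 310.0, 320.0]                       # temperature rising during the task
R = [math.exp(-lam(T) * exe_time) for T in temps]   # Eq. (7)
G = [-math.log(r) / exe_time for r in R]            # Eq. (9)

assert R[0] > R[1] > R[2]   # reliability falls as the temperature rises
assert G[0] < G[1] < G[2]   # the GSFR rises, so T(t_f) is the pessimistic bound
```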
+
+## 3.4 Power consumption
+
+The power consumption of a single task (or data-dependency) running on a hardware component is composed of two aspects [10, 31]: (i) the leakage power and (ii) the dynamic power. The former depends on the leakage current, which itself mostly depends on the chip temperature, while the latter depends on the chosen pair (voltage $V$, frequency $f$). The overall power consumption $P_{sys}$ is equal to $P_{leak} + P_{dyn}$, computed by Eq. (11):
+
+$$
+\begin{cases}
+P_{sys}(t) = \alpha \cdot T(t) + \beta_h + \gamma \cdot C_{ef} \cdot V^2 \cdot f & \text{if heating} \\
+P_{sys}(t) = \alpha \cdot T(t) + \beta_c + \gamma \cdot C_{ef} \cdot V^2 \cdot f & \text{if cooling}
+\end{cases}
+\quad (11)
+$$
+
+Regarding the leakage power, $\alpha$, $\beta_h$, and $\beta_c$ are architecture-dependent coefficients and are determined based on the characteristics of the platform; $\beta_h$ is used in the heating mode and $\beta_c$ in the cooling mode [8]. Finally, $T(t)$ is the chip temperature at time $t$, in Kelvin. Regarding the dynamic power, $V$ is the supply voltage, $f$ is the frequency, $C_{ef}$ is the switching capacitance (a constant that depends on the chip technology), and $\gamma$ is the activity ratio, which varies from
\ No newline at end of file
diff --git a/samples/texts/4011427/page_9.md b/samples/texts/4011427/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..9a07eed724d29c69b4501c71a4776b5da340e051
--- /dev/null
+++ b/samples/texts/4011427/page_9.md
@@ -0,0 +1,25 @@
+0 (no activity) to 1 (all gates are active at each cycle). In theory, there should be a different $\gamma$ for each task, and our scheduling algorithm can handle it. In practice, for the sake of simplicity we take an average $\gamma$ value, identical for all the tasks.
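Eq. (11) can be transcribed directly; every coefficient below is a placeholder (the real $\alpha$, $\beta_h$, $\beta_c$, $\gamma$, and $C_{ef}$ are platform-dependent and would have to be characterized):

```python
def p_sys(T, V, f, heating, alpha=0.1, beta_h=2.0, beta_c=1.5,
          gamma=0.5, C_ef=1e-9):
    # Eq. (11): temperature-dependent leakage power plus dynamic power;
    # all coefficients are illustrative placeholders
    beta = beta_h if heating else beta_c
    return alpha * T + beta + gamma * C_ef * V ** 2 * f
```

The leakage term grows linearly with the temperature, the heating and cooling modes differ only in the constant term ($\beta_h$ versus $\beta_c$), and the dynamic term grows quadratically with the supply voltage.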
+
+Recall that we take the temperature at the *task granularity*, i.e., we assume that $T(t)$ remains constant for the entire duration of $\tau$. The following property states that doing this is safe regarding the $P_{obj}$ constraint.
+
+**Proposition 2** Let $\tau$ be a task or a data-dependency scheduled on a hardware component $c$ at frequency $f$, starting at time $t_0$ and finishing at time $t_f = t_0 + \mathcal{E}xe(\tau, c, f)$. The power consumption of $c$ during the execution of $\tau$ is computed with Eq. (11). (i) If the temperature increases over the interval $[t_0, t_f]$, then fixing $T(t) = T(t_f)$ is safe regarding the $P_{obj}$ constraint. (ii) If the temperature decreases over the interval $[t_0, t_f]$, then fixing $T(t) = T(t_0)$ is safe regarding the $P_{obj}$ constraint.
+
+**Proof:** (i) In the heating mode, the temperature increases during the execution of $\tau$, and when it does, $P_{sys}(t)$ increases too. It follows that assuming $T(t)$ to remain constant over the interval $[t_0, t_f]$ and equal to $T(t_f)$ yields $\forall t, P_{sys}(t) \le P_{sys}(t_f)$. Therefore, we have:
+
+$$P_{sys}(t_f) \le P_{obj} \implies \forall t \in [t_0, t_f], P_{sys}(t) \le P_{obj}$$
+
+which proves that assuming that $T(t)$ remains constant and equal to $T(t_f)$ is safe regarding the $P_{obj}$ constraint.
+
+(ii) In the cooling mode, the proof is identical since $T(t)$ decreases so $P_{sys}(t)$ decreases, hence assuming that $T(t)$ remains constant and equal to $T(t_0)$ is safe regarding the $P_{obj}$ constraint. $\square$
+
+From Eq. (11), we can then compute the energy consumed by the system when executing a schedule (possibly partial). However, the same funnel effect as with the reliability occurs if one uses the energy as a constraint in the $\epsilon$-constraint method [2]. The reason again is that the energy is not invariant with respect to the number of scheduled tasks. Indeed, computing the energy consumed by a schedule involves, at each mapping decision, the addition of a term that is strictly positive. This is illustrated in Fig. 4(b): the horizontal axis counts the task numbers in their mapping order; the blue solid line depicts the cumulative energy consumed by the system; up to task 6, the energy is below the energy constraint $E_{obj}$ so everything is fine; however, there is no way to schedule task 7 without violating the energy constraint. For this reason, in our multi-criteria scheduling heuristic we use the power consumption, with a constraint $P_{obj}$, which is invariant with respect to the number of scheduled tasks.
+
+## 3.5 Temperature
+
+The instantaneous temperature of a computing system depends on the power consumption and on the current temperature (and its variations in time). For a given hardware component $c$ (core or bus), it is computed based on the following differential equation [32]:
+
+$$C \cdot \left( \frac{dT_c(t)}{dt} \right) + G(T_c(t) - T_{amb}) = P(t) \quad (12)$$
+
+where $C$ and $G$ are architecture-dependent constants (the thermal capacitance and conductance, respectively); $T_c$, $t$, $T_{amb}$, and $P$ are respectively the temperature of $c$, the time, the ambient temperature (assumed to be less than $T_{obj}$¹), and the instantaneous power consumption of the system. The power consumption is the sum of the static and dynamic power, as given by Eq. (11).
+
+¹If $T_{amb} > T_{obj}$, then putting the component in the idle mode does not allow it to cool down.
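For a constant power $P$ (a single task at a fixed frequency, neighbour-core heat transfer ignored), Eq. (12) rewrites as $dT_c/dt = -A \cdot T_c(t) + B$ with $A = G/C$ and $B = (P + G \cdot T_{amb})/C$, whose solution is the exponential recalled in Table 1 with steady state $T_{\infty}^{heat} = B/A$. The sketch below cross-checks that closed form against a forward-Euler integration of Eq. (12); all constants are illustrative:

```python
import math

# Illustrative constants (not calibrated): thermal capacitance C, thermal
# conductance G, ambient temperature, constant task power, initial temperature.
C, G, T_amb, P, T0 = 0.03, 0.3, 300.0, 9.0, 310.0

A = G / C                  # decay rate of the exponential
B = (P + G * T_amb) / C
T_inf = B / A              # steady state: T_inf = T_amb + P/G (330.0 here)

def T_closed(t):
    # closed-form solution of Eq. (12) for constant P (cf. Table 1)
    return T_inf + (T0 - T_inf) * math.exp(-A * t)

def T_euler(t, dt=1e-4):
    # forward-Euler integration of C*dT/dt = P - G*(T - T_amb)
    T = T0
    for _ in range(int(t / dt)):
        T += dt * (P - G * (T - T_amb)) / C
    return T
```

Both computations agree to well under a tenth of a degree, which is the kind of sanity check one would run before trusting the closed form inside a scheduling loop.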
\ No newline at end of file
diff --git a/samples/texts/5195943/page_1.md b/samples/texts/5195943/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..237428e958ee9564f465ad111b5f68aa6894092d
--- /dev/null
+++ b/samples/texts/5195943/page_1.md
@@ -0,0 +1,28 @@
+# The competition between simple and complex evolutionary trajectories in asexual populations
+
+Ian E Ochs and Michael M Desai*
+
+**Abstract**
+
+**Background:** On rugged fitness landscapes where sign epistasis is common, adaptation can often involve either individually beneficial "uphill" mutations or more complex mutational trajectories involving fitness valleys or plateaus. The dynamics of the evolutionary process determine the probability that evolution will take any specific path among a variety of competing possible trajectories. Understanding this evolutionary choice is essential if we are to understand the outcomes and predictability of adaptation on rugged landscapes.
+
+**Results:** We present a simple model to analyze the probability that evolution will eschew immediately uphill paths in favor of crossing fitness valleys or plateaus that lead to higher fitness but less accessible genotypes. We calculate how this probability depends on the population size, mutation rates, and relevant selection pressures, and compare our analytical results to Wright-Fisher simulations.
+
+**Conclusion:** We find that the probability of valley crossing depends nonmonotonically on population size: intermediate size populations are most likely to follow a "greedy" strategy of acquiring immediately beneficial mutations even if they lead to evolutionary dead ends, while larger and smaller populations are more likely to cross fitness valleys to reach distant advantageous genotypes. We explicitly identify the boundaries between these different regimes in terms of the relevant evolutionary parameters. Above a certain threshold population size, we show that the probability that the population finds the more distant peak depends only on a single simple combination of the relevant parameters.
+
+**Keywords:** Epistasis, Rugged fitness landscape, Fitness valley
+
+**Background**
+
+In an adapting population, evolution often has the potential to follow many distinct mutational trajectories. In order to predict how the population will adapt, we must understand how evolution chooses among these possibilities. Many experimental and theoretical studies have analyzed this question, focusing primarily on the simple case where epistasis is absent, so that each mutation has some fixed fitness effect [1-6]. This work can explain the probability that a given mutation will fix as a population adapts, as a function of its fitness effect, the population size, mutation rate, distribution of fitness effects of
+
+other mutations, and other parameters of the evolutionary process.
+
+However, the fitness effect of a mutation often depends on the genetic background in which it occurs. A particularly interesting form of this phenomenon, *sign epistasis*, occurs when several mutations are individually neutral or deleterious but their combination is beneficial [7]. Sign epistasis has been observed repeatedly in experiments [8-13], and plays a central role in the evolution of complex phenotypes that involve multiple interacting components. When sign epistasis is present, adaptation can involve passing through genotypes of lower fitness — i.e. a population may have to cross a fitness valley or plateau. Thus the fate of a mutation depends not only on its fitness, but also on its adaptive potential [14].
+
+Several recent theoretical studies have analyzed the evolutionary dynamics of fitness valley crossing [15-20]. This
+
+*Correspondence: mmdesai@fas.harvard.edu
+Department of Organismic and Evolutionary Biology, Department of Physics,
+and FAS Center for Systems Biology, Harvard University, 02138 Cambridge, MA,
+USA
\ No newline at end of file
diff --git a/samples/texts/5195943/page_2.md b/samples/texts/5195943/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..cd9720e7068fb4543ee8cfd099edc388a7b2a1dc
--- /dev/null
+++ b/samples/texts/5195943/page_2.md
@@ -0,0 +1,25 @@
+work has focused on calculating the rate at which adapting populations cross a valley or plateau, in the absence of any other possible mutational trajectories. However, individually beneficial mutations may often compete with more complex evolutionary trajectories. We must then ask how likely evolution is to eschew the immediately uphill paths, and instead cross valleys or plateaus to reach better but less accessible genotypes. In other words, when the fitness landscape is rugged, we wish to understand whether evolution will take the more “farsighted” path to reach distant advantageous genotypes, rather than a “greedy” trajectory that fixes immediately beneficial mutations regardless of whether these may lead to evolutionary dead ends.
+
+In this article, we analyze this evolutionary choice between immediately beneficial mutations and more complex mutational trajectories that ultimately lead to higher fitness. We calculate the probability that an adapting population will follow each type of competing trajectory, as a function of the population size, mutation rates, and selection pressures. We focus on asexual populations, where the only way for a population to acquire a complex adaptation is for a single lineage to acquire each mutation in turn. Our analysis is similar in spirit to earlier work which also considered the tradeoff between short-term and long-term fitness advantages [21-24]. However, these earlier studies dealt with competition between different strictly uphill or neutral paths, and considered the case where the less beneficial initial mutation led to better long-term evolutionary opportunities. In contrast, our analysis describes the competition between uphill mutations and more complex trajectories. While these two cases can be qualitatively similar in very small populations, they lead to very different dynamics in larger populations where the sign of the effect of the intermediate mutation can play a crucial role.
+
+Our results show that population size has a crucial impact on how “farsighted” evolution can be. This dependence is not monotonic: evolution at intermediate population sizes is most “greedy”, while both larger and smaller populations are more likely to eschew uphill paths in favor of complex trajectories. In large populations, our results show that a single parameter reliably predicts the extent of this evolutionary “foresight” across a wide range of parameters. Finally, we describe how our analysis can be generalized to predict how evolution will choose among even more complex trajectories, such as broad fitness valleys with multiple intermediate genotypes, and we discuss evolution in genotype spaces with many possible evolutionary paths.
+
+## Methods
+
+We are interested in how a population makes an evolutionary choice when confronted with multiple possible mutational trajectories. Specifically, we focus on
+
+the extent to which adaptation proceeds by crossing fitness valleys rather than acquiring immediately beneficial (uphill) mutations. Of course, the relative frequency of valley crossing will depend on the number of available fitness valleys, their depth, and the fitness advantage of the multiple-mutants, as well as the distribution of fitness effects (DFE) of the uphill mutants. Our goal is to understand how the prevalence of valley crossing depends on these factors.
+
+### Model
+
+Throughout most of this article, we consider the simplest context in which we can address this question: the choice between a single uphill path and a single fitness valley. Specifically, we consider a haploid asexual population of constant size *N* which can either acquire an uphill mutation (*u*) that confers an immediate fitness advantage $s_u$, or alternatively acquire a deleterious fitness valley intermediate (*i*) with fitness deficit $\delta_i$ on which background a double-mutant (*v*) with fitness $s_v > s_u$ can arise. This scenario is illustrated in Figure 1. We also consider the case of a fitness plateau, where $\delta_i = 0$.
+
+Because we are interested in the evolutionary choice between competing mutational trajectories, we assume that these two trajectories are mutually exclusive (i.e. the mixed genotypes *ui* and *uv* are strongly deleterious), so that only one genotype (either *u* or *v*) can eventually fix in the population. As a measure of evolutionary foresight, we analyze the probability that the double-mutant *v* fixes as a function of the relevant mutation rates, selection coefficients, and population size. In some situations, we could imagine that after either genotype *u* or *v* fixes, another set of competing potential trajectories become available. In this case, our analysis predicts the long-term relative ratio of fixed uphill versus valley-crossing mutations. In the Discussion, we consider how this model can be extended to the situation where there are many different competing uphill paths and valleys, and to broader fitness valleys involving multiple intermediate genotypes.
+
+### Simulations
+
+In addition, we compare our analytical predictions for valley crossing probability to Wright-Fisher simulations. Each simulated population was evolved until either the uphill genotype or valley-crossing genotype fixed. Valley crossing probabilities were then inferred from the number of trials in which the valley-crossing genotype fixed, out of 1000 trials per parameter set.
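The simulation procedure just described can be sketched as a bare-bones Wright-Fisher loop over the four genotypes of Figure 1 (the function name is hypothetical, the parameter values in the tests are illustrative, and the trial count is kept far below the 1000 trials of the actual experiments):

```python
import random

def valley_crossing_prob(N, mu_u, mu_i, mu_v, s_u, delta_i, s_v,
                         trials=100, seed=0):
    """Fraction of trials in which the double-mutant v fixes before u."""
    rng = random.Random(seed)
    fitness = {'w': 1.0, 'u': 1.0 + s_u, 'i': 1.0 - delta_i, 'v': 1.0 + s_v}
    crossed = 0
    for _ in range(trials):
        pop = {'w': N, 'u': 0, 'i': 0, 'v': 0}
        while pop['u'] < N and pop['v'] < N:
            # selection + drift: resample N individuals weighted by fitness
            genotypes = list(pop)
            weights = [pop[g] * fitness[g] for g in genotypes]
            offspring = rng.choices(genotypes, weights=weights, k=N)
            pop = {g: offspring.count(g) for g in genotypes}
            # mutation: w -> u at mu_u, w -> i at mu_i, i -> v at mu_v
            new = dict(pop)
            for _ in range(pop['w']):
                r = rng.random()
                if r < mu_u:
                    new['w'] -= 1
                    new['u'] += 1
                elif r < mu_u + mu_i:
                    new['w'] -= 1
                    new['i'] += 1
            for _ in range(pop['i']):
                if rng.random() < mu_v:
                    new['i'] -= 1
                    new['v'] += 1
            pop = new
        crossed += pop['v'] == N
    return crossed / trials
```

The mixed genotypes *ui* and *uv* never arise because mutation only occurs from *w* (to *u* or *i*) and from *i* (to *v*), which matches the mutual-exclusion assumption of the model.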
+
+### Results
+
+In the absence of the uphill genotype, fitness valley crossing can be modeled as a homogeneous Poisson process with rates as calculated by [17]. In small populations, the primary role of the uphill genotype is to introduce an
\ No newline at end of file
diff --git a/samples/texts/5195943/page_3.md b/samples/texts/5195943/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..3d2ad03dd7b79162528c348c71bc13fd4bdb00d8
--- /dev/null
+++ b/samples/texts/5195943/page_3.md
@@ -0,0 +1,51 @@
+**Figure 1** Model and characteristic trajectories. (a) The model to study fitness valley crossing prevalence. The population starts as wild type (w), and then acquires uphill mutations (u) at rate $\mu_u$ that confer an immediate fitness advantage $s_u$, and acquires deleterious fitness valley intermediates (i) at rate $\mu_i$ with fitness deficit $\delta_i$ on which background double-mutants (v) with fitness $s_v > s_u$ arise at rate $\mu_v$. (b)-(e) The four main forms of fitness valley crossing. (b) Small populations are characterized by low genetic diversity and strong genetic drift, leading sequential fixation of intermediates to dominate the dynamics. (c) For larger populations, genetic diversity is maintained longer, and double mutants will tend to arise on transient single-mutant backgrounds, in a process known as stochastic tunneling. (d) If the drift time is small compared to the timescale of change in the background fitness, we can approximate the drift time of the intermediate by its expectation, dramatically simplifying the mathematical analysis. (e) For very large populations, we can treat single-mutants deterministically, in a process dubbed semi-deterministic tunneling.
+
+effective *time limit* on this process: once an uphill mutation destined to survive drift first occurs, it very quickly fixes, leading to the extinction of the wild-type. The probability of valley-crossing can thus be calculated as the probability that the intermediate *i* fixes before the uphill genotype *u*. An example of this is shown in Figure 1b.
+
+In larger populations, the dynamics are more complex, as illustrated in Figure 1c. Rather than leading to a single cutoff time for valley-crossing to occur, the single-mutant occurs and gradually increases in frequency. This leads to a decline in the size of the wild-type background on which intermediate and valley-crossing mutants can arise, and a corresponding increase in the mean fitness of the population (Figure 1c). These effects gradually reduce the rate at which intermediates are produced, and make these intermediates effectively more deleterious relative to the mean fitness. These factors reduce the rate of the valley-crossing process. Thus valley-crossing becomes an *inhomogeneous* Poisson process, with a rate that depends on the random appearance time T