Monketoo committed on
Commit 89d0916 · verified · 1 Parent(s): 6011a54

Add files using upload-large-folder tool

Files changed (50)
  1. samples/pdfs/1239855.pdf +0 -0
  2. samples/pdfs/3332461.pdf +0 -0
  3. samples/texts/1228241/page_1.md +16 -0
  4. samples/texts/1228241/page_10.md +31 -0
  5. samples/texts/1228241/page_11.md +18 -0
  6. samples/texts/1228241/page_12.md +19 -0
  7. samples/texts/1228241/page_13.md +35 -0
  8. samples/texts/1228241/page_14.md +33 -0
  9. samples/texts/1228241/page_15.md +29 -0
  10. samples/texts/1228241/page_16.md +13 -0
  11. samples/texts/1228241/page_17.md +24 -0
  12. samples/texts/1228241/page_19.md +13 -0
  13. samples/texts/1228241/page_2.md +16 -0
  14. samples/texts/1228241/page_20.md +11 -0
  15. samples/texts/1228241/page_21.md +11 -0
  16. samples/texts/1228241/page_22.md +15 -0
  17. samples/texts/1228241/page_23.md +50 -0
  18. samples/texts/1228241/page_24.md +27 -0
  19. samples/texts/1228241/page_25.md +9 -0
  20. samples/texts/1228241/page_3.md +19 -0
  21. samples/texts/1228241/page_5.md +19 -0
  22. samples/texts/1228241/page_6.md +32 -0
  23. samples/texts/1228241/page_7.md +17 -0
  24. samples/texts/1228241/page_8.md +11 -0
  25. samples/texts/1228241/page_9.md +30 -0
  26. samples/texts/1469251/page_1.md +33 -0
  27. samples/texts/1469251/page_2.md +27 -0
  28. samples/texts/1469251/page_3.md +27 -0
  29. samples/texts/1469251/page_4.md +25 -0
  30. samples/texts/1469251/page_5.md +19 -0
  31. samples/texts/1754951/page_30.md +17 -0
  32. samples/texts/1754951/page_34.md +15 -0
  33. samples/texts/1754951/page_37.md +25 -0
  34. samples/texts/1754951/page_42.md +29 -0
  35. samples/texts/1754951/page_45.md +17 -0
  36. samples/texts/1754951/page_49.md +12 -0
  37. samples/texts/1754951/page_56.md +15 -0
  38. samples/texts/1754951/page_58.md +21 -0
  39. samples/texts/1754951/page_62.md +19 -0
  40. samples/texts/1754951/page_63.md +17 -0
  41. samples/texts/1754951/page_72.md +15 -0
  42. samples/texts/1754951/page_74.md +21 -0
  43. samples/texts/1754951/page_78.md +17 -0
  44. samples/texts/1754951/page_83.md +17 -0
  45. samples/texts/2262004/page_1.md +34 -0
  46. samples/texts/2262004/page_10.md +17 -0
  47. samples/texts/2262004/page_11.md +7 -0
  48. samples/texts/2262004/page_12.md +17 -0
  49. samples/texts/2262004/page_13.md +11 -0
  50. samples/texts/2262004/page_14.md +7 -0
samples/pdfs/1239855.pdf ADDED
Binary file (91.9 kB).
 
samples/pdfs/3332461.pdf ADDED
Binary file (99.4 kB).
 
samples/texts/1228241/page_1.md ADDED
@@ -0,0 +1,16 @@
+ Reclaiming the energy of a schedule: models and algorithms
+
+ Guillaume Aupy, Anne Benoit, Fanny Dufossé, Yves Robert
+
+ ► To cite this version:
+
+ Guillaume Aupy, Anne Benoit, Fanny Dufossé, Yves Robert. Reclaiming the energy of a schedule: models and algorithms. Concurrency and Computation: Practice and Experience, Wiley, 2013, 25, pp. 1505-1523. 10.1002/cpe.2889. hal-00763388
+
+ HAL Id: hal-00763388
+ https://hal.inria.fr/hal-00763388
+
+ Submitted on 3 Sep 2013
+
+ **HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
+
+ The open multidisciplinary archive **HAL** is intended for the deposit and dissemination of research-level scientific documents, published or not, originating from French or foreign teaching and research institutions, and from public or private laboratories.
samples/texts/1228241/page_10.md ADDED
@@ -0,0 +1,31 @@
+ ### 4.3. General DAGs
+
+ For arbitrary execution graphs, we can rewrite the MINENERGY($G, D$) problem as follows:
+
+ $$
+ \begin{array}{ll}
+ \text{Minimize} & \displaystyle\sum_{i=1}^{n} u_i^{-2} \times w_i \\
+ \text{subject to} & (i) \quad b_i + w_i \times u_i \leq b_j \text{ for each edge } (T_i, T_j) \in E \\
+ & (ii) \quad b_i + w_i \times u_i \leq D \text{ for each task } T_i \in V \\
+ & (iii) \quad u_i \geq \frac{1}{s_{max}} \text{ for each task } T_i \in V \\
+ & (iv) \quad b_i \geq 0 \text{ for each task } T_i \in V
+ \end{array}
+ \qquad (2)
+ $$
+
+ Here, $u_i = 1/s_i$ is the inverse of the speed to execute task $T_i$. We now have a convex optimization problem to solve, with linear constraints in the non-negative variables $u_i$ and $b_i$. In fact, the objective function is a posynomial, so we have a geometric programming problem [16, Section 4.5] for which efficient numerical schemes exist. In addition, such an optimization problem with a smooth convex objective function is known to be well-conditioned [35].
+
+ However, as illustrated on simple fork graphs, the optimal speeds are not expected to be rational numbers but instead arbitrarily complex expressions (we have the cubic root of the sum of cubes for forks, and nested expressions of this form for trees). From a computational complexity point of view, we do not know how to encode such numbers in polynomial size of the input (the rational task weights and the execution deadline). Still, we can always solve the problem numerically and get fixed-size numbers that are good approximations of the optimal values.
+
+ In the following, we show that the total power consumption of any optimal schedule is constant throughout execution. While this important property does not help to design an optimal solution, it shows that a schedule with large variations in its power consumption is likely to waste a lot of energy.
+
+ We need a few notations before stating the result. Consider a schedule for a graph $G = (V, E)$ with $n$ tasks. Task $T_i$ is executed at constant speed $s_i$ (see Lemma 1) and during interval $[b_i, c_i]$: $T_i$ begins its execution at time $b_i$ and completes it at time $c_i$. The total power consumption $P(t)$ of the schedule at time $t$ is defined as the sum of the power consumed by all tasks executing at time $t$:
+
+ $$ P(t) = \sum_{1 \le i \le n,\ t \in [b_i, c_i]} s_i^3 . $$
+
+ **Theorem 4.** *Consider an instance of CONTINUOUS, and an optimal schedule for this instance, such that no speed is equal to $s_{max}$. Then the total power consumption of the schedule throughout execution is constant.*
+
+ **Proof.** We prove this theorem by induction on the number of tasks of the graph. First we prove a preliminary result:
+
+ **Lemma 2.** *Consider a graph $G = (V, E)$ with $n \ge 2$ tasks, and any optimal schedule of deadline $D$. Let $t_1$ be the earliest completion time of a task in the schedule. Similarly, let $t_2$ be the latest starting time of a task in the schedule. Then, either $G$ is composed of independent tasks, or $0 < t_1 \le t_2 < D$.*
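As a side check of the closed forms quoted above for the CONTINUOUS model, here is a minimal numerical sketch (made-up weights, not the paper's code): for a fork where a source task of weight $w_0$ precedes two independent leaves $w_1, w_2$, the leaves behave like a single task of equivalent weight $(w_1^3 + w_2^3)^{1/3}$ ("cubic root of the sum of cubes"), and the reduced two-task chain runs at the constant speed $(w_0 + W)/D$, giving energy $(w_0 + W)^3/D^2$.

```python
# Sketch (illustrative values): numerically check the fork closed form
# for the CONTINUOUS model. The source T0 (weight w0) finishes at time t;
# the two leaves then run in parallel over [t, D].
# Energy(t) = w0^3/t^2 + (w1^3 + w2^3)/(D - t)^2.

def fork_energy(t, w0, w1, w2, D):
    return w0**3 / t**2 + (w1**3 + w2**3) / (D - t) ** 2

def min_fork_energy(w0, w1, w2, D, iters=200):
    lo, hi = 1e-9, D - 1e-9
    for _ in range(iters):  # ternary search: the energy is convex in t
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if fork_energy(m1, w0, w1, w2, D) < fork_energy(m2, w0, w1, w2, D):
            hi = m2
        else:
            lo = m1
    return fork_energy((lo + hi) / 2, w0, w1, w2, D)

w0, w1, w2, D = 2.0, 1.0, 1.5, 3.0
W = (w1**3 + w2**3) ** (1 / 3)        # equivalent leaf weight
closed_form = (w0 + W) ** 3 / D**2    # energy of the reduced chain
assert abs(min_fork_energy(w0, w1, w2, D) - closed_form) < 1e-6
```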
samples/texts/1228241/page_11.md ADDED
@@ -0,0 +1,18 @@
+ **Proof.** Task $T_i$ is executed at speed $s_i$ and during interval $[b_i, c_i]$. We have $t_1 = \min_{1 \le i \le n} c_i$ and $t_2 = \max_{1 \le i \le n} b_i$. Clearly, $0 \le t_1, t_2 \le D$ by definition of the schedule. Suppose that $t_2 < t_1$. Let $T_1$ be a task that ends at time $t_1$, and $T_2$ one that starts at time $t_2$. Then:
+
+ • $\nexists T \in V, (T_1, T) \in E$ (otherwise, $T$ would start after $t_2$); therefore, $t_1 = D$;
+
+ • $\nexists T \in V, (T, T_2) \in E$ (otherwise, $T$ would finish before $t_1$); therefore, $t_2 = 0$.
+
+ This also means that all tasks start at time $0$ and end at time $D$. Therefore, $G$ is only composed of independent tasks. $\square$
+
+ Back to the proof of the theorem, we consider first the case of a graph with only one task. In an optimal schedule, the task is executed in time $D$, and at constant speed (Lemma 1), hence with constant power consumption.
+
+ Suppose now that the property is true for all DAGs with at most $n - 1$ tasks. Let $G$ be a DAG with $n$ tasks. If $G$ is exactly composed of $n$ independent tasks, then we know that the power consumption of $G$ is constant (because all task speeds are constant). Otherwise, let $t_1$ be the earliest completion time, and $t_2$ the latest starting time of a task in the optimal schedule. Thanks to Lemma 2, we have $0 < t_1 \le t_2 < D$.
+
+ Suppose first that $t_1 = t_2 = t_0$. There are three kinds of tasks: those beginning at time 0 and ending at time $t_0$ (set $S_1$), those beginning at time $t_0$ and ending at time $D$ (set $S_2$), and finally those beginning at time 0 and ending at time $D$ (set $S_3$). Tasks in $S_3$ execute during the whole schedule duration, at constant speed, hence their contribution to the total power consumption $P(t)$ is the same at each time-step $t$. Therefore, we can suppress them from the schedule without loss of generality. Next we determine the value of $t_0$. Let $A_1 = \sum_{T_i \in S_1} w_i^3$, and $A_2 = \sum_{T_i \in S_2} w_i^3$. The energy consumption between 0 and $t_0$ is $\frac{A_1}{t_0^2}$, and between $t_0$ and $D$, it is $\frac{A_2}{(D-t_0)^2}$. The optimal energy consumption is obtained with $t_0 = \frac{A_1^{1/3}}{A_1^{1/3}+A_2^{1/3}} D$. Then, the total power consumption of the optimal schedule is the same in both intervals, hence at each time-step: we derive that $P(t) = \left(\frac{A_1^{1/3}+A_2^{1/3}}{D}\right)^3$, which is constant.
+
+ Suppose now that $t_1 < t_2$. For each task $T_i$, let $w'_i$ be the number of operations executed before $t_1$, and $w''_i$ the number of operations executed after $t_1$ (with $w'_i + w''_i = w_i$). Let $G'$ be the DAG $G$ with execution costs $w'_i$, and $G''$ be the DAG $G$ with execution costs $w''_i$. The tasks with a cost equal to 0 are removed from the DAGs. Then, both $G'$ and $G''$ have strictly fewer than $n$ tasks. We can therefore apply the induction hypothesis. We derive that the power consumption in both DAGs is constant. Since we did not change the speeds of the tasks, the total power consumption $P(t)$ in $G$ is the same as in $G'$ if $t < t_1$, hence a constant. Similarly, the total power consumption $P(t)$ in $G$ is the same as in $G''$ if $t > t_1$, hence a constant. Considering the same partitioning with $t_2$ instead of $t_1$, we show that the total power consumption $P(t)$ is a constant before $t_2$, and also a constant after $t_2$. But $t_1 < t_2$, and the intervals $[0, t_2]$ and $[t_1, D]$ overlap. Altogether, the total power consumption is the same constant throughout $[0, D]$, which concludes the proof. $\square$
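In the two-interval case, minimizing $E(t_0) = A_1/t_0^2 + A_2/(D-t_0)^2$ yields $t_0 = D\,A_1^{1/3}/(A_1^{1/3}+A_2^{1/3})$ and the constant power $((A_1^{1/3}+A_2^{1/3})/D)^3$ on both sides. A quick numerical sketch (illustrative values, not from the paper):

```python
# Sketch: check that t0 = D * A1^(1/3) / (A1^(1/3) + A2^(1/3)) minimizes
# E(t) = A1/t^2 + A2/(D-t)^2 and equalizes the power on both intervals.

def energy(t, A1, A2, D):
    return A1 / t**2 + A2 / (D - t) ** 2

A1, A2, D = 8.0, 27.0, 5.0
t0 = D * A1 ** (1 / 3) / (A1 ** (1 / 3) + A2 ** (1 / 3))

# t0 beats a fine grid of alternative split points
best_grid = min(energy(0.001 + k * (D - 0.002) / 9999, A1, A2, D)
                for k in range(10000))
assert energy(t0, A1, A2, D) <= best_grid + 1e-9

# power is the same constant on both intervals
P1 = A1 / t0**3              # sum of speed^3 over S1 tasks
P2 = A2 / (D - t0) ** 3      # sum of speed^3 over S2 tasks
P = ((A1 ** (1 / 3) + A2 ** (1 / 3)) / D) ** 3
assert abs(P1 - P2) < 1e-9 and abs(P1 - P) < 1e-9
```

With $A_1 = 8$, $A_2 = 27$, $D = 5$, the split lands at $t_0 = 2$ and the constant power is $1$.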
samples/texts/1228241/page_12.md ADDED
@@ -0,0 +1,19 @@
+ # Reclaiming the energy of a schedule: models and algorithms
+
+ Guillaume Aupy¹, Anne Benoit*¹,², Fanny Dufossé¹ and Yves Robert¹,²
+
+ ¹ ENS Lyon, Université de Lyon, LIP laboratory, UMR 5668, ENS Lyon-CNRS-INRIA-UCBL, Lyon, France
+
+ ² Institut Universitaire de France
+
+ ## SUMMARY
+
+ We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs (DAGs). We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to an NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the Vdd-hopping model, which allows switching between different supply voltages ($V_{DD}$) while executing a task, leads to a polynomial solution. Finally, we provide an approximation algorithm for the incremental model, which we extend for the general DVFS model.
+
+ KEY WORDS: Energy models, complexity, bi-criteria optimization, algorithms, scheduling.
+
+ *Correspondence to: Anne Benoit, LIP, ENS Lyon, 46 allée d'Italie, 69364 Lyon Cedex 07, France.
+ †E-mail: {Guillaume.Aupy, Anne.Benoit, Fanny.Dufosse, Yves.Robert}@ens-lyon.fr.
+ This work was supported in part by the ANR StochaGrid and RESCUE projects. A two-page extended abstract of this work appears as a short presentation in SPAA'2011.
samples/texts/1228241/page_13.md ADDED
@@ -0,0 +1,35 @@
+ **5. Discrete models**
+
+ In this section, we present complexity results on the three energy models with a finite number of possible speeds. The only polynomial instance is for the VDD-HOPPING model, for which we write a linear program in Section 5.1. Then, we give NP-completeness results in Section 5.2, and approximation results in Section 5.3, for the DISCRETE and INCREMENTAL models.
+
+ **5.1. The Vdd-Hopping model**
+
+ **Theorem 5.** *With the VDD-HOPPING model, MINENERGY($G, D$) can be solved in polynomial time.*
+
+ **Proof.** Let $G$ be the execution graph of an application with $n$ tasks, and $D$ a deadline. Let $s_1, \dots, s_m$ be the set of possible processor speeds. We use the following rational variables: for $1 \le i \le n$ and $1 \le j \le m$, $b_i$ is the starting time of the execution of task $T_i$, and $\alpha_{(i,j)}$ is the time spent at speed $s_j$ for executing task $T_i$. There are $n + n \times m = n(m+1)$ such variables. Note that the total execution time of task $T_i$ is $\sum_{j=1}^{m} \alpha_{(i,j)}$. The constraints are:
+
+ * $\forall 1 \le i \le n$, $b_i \ge 0$: starting times of all tasks are non-negative numbers;
+
+ * $\forall 1 \le i \le n$, $b_i + \sum_{j=1}^{m} \alpha_{(i,j)} \le D$: the deadline is not exceeded by any task;
+
+ * $\forall 1 \le i, i' \le n$ such that $T_i \rightarrow T_{i'}$, $b_i + \sum_{j=1}^{m} \alpha_{(i,j)} \le b_{i'}$: a task cannot start before its predecessor has completed its execution;
+
+ * $\forall 1 \le i \le n$, $\sum_{j=1}^{m} \alpha_{(i,j)} \times s_j \ge w_i$: task $T_i$ is completely executed.
+
+ The objective function is then $\min \left( \sum_{i=1}^{n} \sum_{j=1}^{m} \alpha_{(i,j)} s_j^3 \right)$.
+
+ The size of this linear program is clearly polynomial in the size of the instance, all $n(m+1)$ variables are rational, and therefore it can be solved in polynomial time [36]. $\square$
+
23
+ **5.2. NP-completeness results**
24
+
25
+ **Theorem 6.** With the INCREMENTAL model (and hence the DISCRETE model), MINENERGY(G, D) is NP-complete.
26
+
27
+ **Proof.** We consider the associated decision problem: given an execution graph, a deadline, and a bound on the energy consumption, can we find an execution speed for each task such that the deadline and the bound on energy are respected? The problem is clearly in NP: given the execution speed of each task, computing the execution time and the energy consumption can be done in polynomial time.
28
+
29
+ To establish the completeness, we use a reduction from 2-Partition [37]. We consider an instance $\mathcal{I}_1$ of 2-Partition: given $n$ strictly positive integers $a_1, \dots, a_n$, does there exist a subset $I$ of $\{1, \dots, n\}$ such that $\sum_{i \in I} a_i = \sum_{i \notin I} a_i$? Let $T = \frac{1}{2} \sum_{i=1}^{n} a_i$.
30
+
31
+ We build the following instance $\mathcal{I}_2$ of our problem: the execution graph is a linear chain with $n$ tasks, where:
32
+
33
+ * task $T_i$ has size $w_i = a_i$;
34
+
35
+ * the processor can run at $m = 2$ different speeds;
samples/texts/1228241/page_14.md ADDED
@@ -0,0 +1,33 @@
+ • $s_1 = 1$ and $s_2 = 2$ (i.e., $s_{min} = 1$, $s_{max} = 2$, $\delta = 1$);
+
+ • $D = 3T/2$;
+
+ • $E = 5T$.
+
+ Clearly, the size of $\mathcal{I}_2$ is polynomial in the size of $\mathcal{I}_1$.
+
+ Suppose first that instance $\mathcal{I}_1$ has a solution $I$. For all $i \in I$, $T_i$ is executed at speed 1, otherwise it is executed at speed 2. The execution time is then $\sum_{i \in I} a_i + \sum_{i \notin I} a_i / 2 = \frac{3}{2} T = D$, and the energy consumption is $\sum_{i \in I} a_i + \sum_{i \notin I} a_i \times 2^2 = T + 4T = 5T = E$. Both bounds are respected, and therefore the execution speeds are a solution to $\mathcal{I}_2$.
+
+ Suppose now that $\mathcal{I}_2$ has a solution. Since we consider the DISCRETE and INCREMENTAL models, each task runs either at speed 1 or at speed 2. Let $I = \{i \mid T_i \text{ is executed at speed } 1\}$. Note that we have $\sum_{i \notin I} a_i = 2T - \sum_{i \in I} a_i$.
+
+ The execution time is $D' = \sum_{i \in I} a_i + \sum_{i \notin I} a_i / 2 = T + (\sum_{i \in I} a_i) / 2$. Since the deadline is not exceeded, $D' \le D = 3T/2$, and therefore $\sum_{i \in I} a_i \le T$.
+
+ For the energy consumption of the solution of $\mathcal{I}_2$, we have $E' = \sum_{i \in I} a_i + \sum_{i \notin I} a_i \times 2^2 = 2T + 3 \sum_{i \notin I} a_i$. Since $E' \le E = 5T$, we obtain $3 \sum_{i \notin I} a_i \le 3T$, and hence $\sum_{i \notin I} a_i \le T$.
+
+ Since $\sum_{i \in I} a_i + \sum_{i \notin I} a_i = 2T$, we conclude that $\sum_{i \in I} a_i = \sum_{i \notin I} a_i = T$, and therefore $\mathcal{I}_1$ has a solution. This concludes the proof. $\square$
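The reduction can be checked by brute force on a toy instance (hypothetical numbers, not from the paper): with $a = (1, 2, 3, 4)$ and $T = 5$, a speed assignment meets both $D = 3T/2$ and $E = 5T$ exactly when the speed-1 tasks sum to $T$.

```python
# Sketch: brute-force check of the 2-Partition reduction on a toy instance.
# Each chain task T_i (w_i = a_i) runs at speed 1 or 2; on one processor,
# time = sum w_i/s_i and energy = sum w_i * s_i^2. A schedule is feasible
# (time <= D and energy <= E) iff the speed-1 tasks sum to T.
from itertools import product

a = [1, 2, 3, 4]
T = sum(a) // 2          # = 5; {1, 4} and {2, 3} form a 2-partition
D, E = 3 * T / 2, 5 * T

for speeds in product([1, 2], repeat=len(a)):
    time = sum(w / s for w, s in zip(a, speeds))
    energy = sum(w * s**2 for w, s in zip(a, speeds))
    feasible = time <= D and energy <= E
    partition = sum(w for w, s in zip(a, speeds) if s == 1) == T
    assert feasible == partition
```

All 16 assignments satisfy the equivalence, matching both directions of the proof.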
+
+ ### 5.3. Approximation results
+
+ Here we explain, for the INCREMENTAL and DISCRETE models, how the solution to the NP-hard problem can be approximated. Note that, given an execution graph and a deadline, the optimal energy consumption with the CONTINUOUS model is always lower than that with the other models, which are more constrained.
+
+ **Theorem 7.** *With the INCREMENTAL model, for any integer $K > 0$, the MINENERGY$(G, D)$ problem can be approximated within a factor $(1 + \frac{\delta}{s_{\min}})^2 (1 + \frac{1}{K})^2$, in a time polynomial in the size of the instance and in $K$.*
+
+ **Proof.** Consider an instance $\mathcal{I}_{inc}$ of the problem with the INCREMENTAL model. The execution graph $G$ has $n$ tasks, $D$ is the deadline, $\delta$ is the minimum permissible speed increment, and $s_{min}$, $s_{max}$ are the speed bounds. Moreover, let $K > 0$ be an integer, and let $E_{inc}$ be the optimal value of the energy consumption for this instance $\mathcal{I}_{inc}$.
+
+ We construct the following instance $\mathcal{I}_{vdd}$ with the VDD-HOPPING model: the execution graph and the deadline are the same as in instance $\mathcal{I}_{inc}$, and the speeds can take the values
+
+ $$ \left\{ s_{\min} \times \left( 1 + \frac{1}{K} \right)^i \right\}_{0 \le i \le N}, $$
+
+ where $N$ is such that $s_{max}$ is not exceeded: $N = \lfloor (\ln(s_{max}) - \ln(s_{min})) / \ln(1 + \frac{1}{K}) \rfloor$. As $N$ is asymptotically of order $O(K \ln(s_{max}/s_{min}))$, the number of possible speeds in $\mathcal{I}_{vdd}$, and hence the size of $\mathcal{I}_{vdd}$, is polynomial in the size of $\mathcal{I}_{inc}$ and $K$.
+
+ Next, we solve $\mathcal{I}_{vdd}$ in polynomial time thanks to Theorem 5. For each task $T_i$, let $s_i^{(vdd)}$ be the average speed of $T_i$ in this solution: if the execution time of the task in the solution
samples/texts/1228241/page_15.md ADDED
@@ -0,0 +1,29 @@
+ is $d_i$, then $s_i^{(vdd)} = w_i/d_i$; $E_{vdd}$ is the optimal energy consumption obtained with these speeds. Let $s_i^{(algo)} = \min\{s_{min} + u \times \delta \mid u \in \mathbb{N},\ s_{min} + u \times \delta \ge s_i^{(vdd)}\}$ be the smallest speed in $\mathcal{I}_{inc}$ that is larger than or equal to $s_i^{(vdd)}$. There exists such a speed since, because of the values chosen for $\mathcal{I}_{vdd}$, $s_i^{(vdd)} \le s_{max}$. The values $s_i^{(algo)}$ can be computed in time polynomial in the size of $\mathcal{I}_{inc}$ and $K$. Let $E_{algo}$ be the energy consumption obtained with these values.
+
+ In order to prove that this algorithm is an approximation of the optimal solution, we need to prove that $E_{algo} \le (1 + \frac{\delta}{s_{min}})^2 (1 + \frac{1}{K})^2 \times E_{inc}$. For each task $T_i$, $s_i^{(algo)} - \delta \le s_i^{(vdd)} \le s_i^{(algo)}$. Since $s_{min} \le s_i^{(vdd)}$, we derive that $s_i^{(algo)} \le s_i^{(vdd)} \times (1 + \frac{\delta}{s_{min}})$. Summing over all tasks, we get
+
+ $$E_{algo} = \sum_i w_i (s_i^{(algo)})^2 \le \sum_i w_i \left(s_i^{(vdd)} \times (1 + \frac{\delta}{s_{min}})\right)^2 \le E_{vdd} \times (1 + \frac{\delta}{s_{min}})^2.$$
+
+ Next, we bound $E_{vdd}$ thanks to the optimal solution with the CONTINUOUS model, $E_{con}$. Let $\mathcal{I}_{con}$ be the instance where the execution graph $G$, the deadline $D$, and the speeds $s_{min}$ and $s_{max}$ are the same as in instance $\mathcal{I}_{inc}$, but now admissible speeds take any value between $s_{min}$ and $s_{max}$. Let $s_i^{(con)}$ be the optimal continuous speed for task $T_i$, and let $0 \le u \le N$ be the value such that:
+
+ $$s_{min} \times \left(1 + \frac{1}{K}\right)^u \le s_i^{(con)} \le s_{min} \times \left(1 + \frac{1}{K}\right)^{u+1} = s_i^*.$$
+
+ In order to bound the energy consumption for $\mathcal{I}_{vdd}$, we assume that $T_i$ runs at speed $s_i^*$, instead of $s_i^{(vdd)}$. The solution with these speeds is a solution to $\mathcal{I}_{vdd}$, and its energy consumption is $E^* \ge E_{vdd}$. From the previous inequalities, we deduce that $s_i^* \le s_i^{(con)} \times (1 + \frac{1}{K})$, and by summing over all tasks,
+
+ $$
+ \begin{aligned}
+ E_{vdd} \le E^* &= \sum_i w_i (s_i^*)^2 \le \sum_i w_i \left(s_i^{(con)} \times (1 + \frac{1}{K})\right)^2 \\
+ &\le E_{con} \times (1 + \frac{1}{K})^2 \le E_{inc} \times (1 + \frac{1}{K})^2.
+ \end{aligned}
+ \quad \square
+ $$
+
+ **Proposition 3.**
+
+ • *For any integer $\delta > 0$, any instance of MINENERGY$(G, D)$ with the CONTINUOUS model can be approximated within a factor $(1 + \frac{\delta}{s_{min}})^2$ in the INCREMENTAL model with speed increment $\delta$.*
+
+ • *For any integer $K > 0$, any instance of MINENERGY$(G, D)$ with the DISCRETE model can be approximated within a factor $(1 + \frac{\alpha}{s_1})^2 (1 + \frac{1}{K})^2$, with $\alpha = \max_{1 \le i < m}\{s_{i+1} - s_i\}$, in a time polynomial in the size of the instance and in $K$.*
+
+ **Proof.** For the first part, let $s_i^{(con)}$ be the optimal continuous speed for task $T_i$ in instance $\mathcal{I}_{con}$; $E_{con}$ is the optimal energy consumption. For any task $T_i$, let $s_i$ be the speed of $\mathcal{I}_{inc}$ such that $s_i - \delta < s_i^{(con)} \le s_i$. Then $s_i \le s_i^{(con)} + \delta \le s_i^{(con)} \times (1 + \frac{\delta}{s_{min}})$, so the energy consumption $E$ obtained with the speeds $s_i$ (which still meet the deadline, since no task is slowed down) satisfies $E \le E_{con} \times (1 + \frac{\delta}{s_{min}})^2$. The optimal energy consumption $E_{inc}$ of $\mathcal{I}_{inc}$ then satisfies $E_{inc} \le E \le E_{con} \times (1 + \frac{\delta}{s_{min}})^2$.
+
+ For the second part, we use the same algorithm as in Theorem 7. The same proof leads to the approximation ratio with $\alpha$ instead of $\delta$. $\square$
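The two rounding steps behind Theorem 7's factor can be sketched directly (hypothetical parameters, not from the paper): project a continuous speed up onto the geometric grid $\{s_{min}(1+1/K)^i\}$, then round up to the INCREMENTAL grid $\{s_{min}+u\delta\}$, and check the per-speed bounds that give $(1+\frac{1}{K})(1+\frac{\delta}{s_{min}})$ per task, hence the squared factor on energy.

```python
# Sketch (hypothetical parameters): the two rounding steps behind the
# (1 + delta/s_min)^2 * (1 + 1/K)^2 approximation factor.
import math

s_min, s_max, delta, K = 1.0, 8.0, 0.25, 4

def round_up_geometric(s):
    """Smallest s_min*(1+1/K)^i that is >= s (the VDD-HOPPING speed grid)."""
    i = math.ceil(math.log(s / s_min) / math.log(1 + 1 / K))
    return s_min * (1 + 1 / K) ** max(i, 0)

def round_up_incremental(s):
    """Smallest s_min + u*delta that is >= s (the INCREMENTAL speed grid)."""
    u = math.ceil((s - s_min) / delta)
    return s_min + max(u, 0) * delta

for k in range(1, 1000):                       # sample speeds in [s_min, s_max]
    s = s_min + (s_max - s_min) * k / 1000
    s_geo = round_up_geometric(s)              # loses at most a (1 + 1/K) factor
    s_algo = round_up_incremental(s_geo)       # loses at most (1 + delta/s_min)
    assert s * (1 - 1e-9) <= s_geo <= s * (1 + 1 / K) * (1 + 1e-9)
    assert s_geo * (1 - 1e-9) <= s_algo <= s_geo * (1 + delta / s_min) * (1 + 1e-9)
```

Since energy grows as the square of the speed for fixed work, the per-task speed bound translates into the $(1+\frac{\delta}{s_{min}})^2(1+\frac{1}{K})^2$ energy factor.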
samples/texts/1228241/page_16.md ADDED
@@ -0,0 +1,13 @@
+ ## 6. Conclusion
+
+ In this paper, we have assessed the tractability of a classical scheduling problem, with task preallocation, under various energy models. We have given several results related to CONTINUOUS speeds. However, while these are of conceptual importance, they cannot be achieved with physical devices, and we have analyzed several models enforcing a bounded number of achievable speeds, a.k.a. modes. In the classical DISCRETE model that arises from DVFS techniques, admissible speeds can be irregularly distributed, which motivates the VDD-HOPPING approach that mixes two consecutive modes optimally. While computing optimal speeds is NP-hard with discrete modes, it has polynomial complexity when mixing speeds. Intuitively, the VDD-HOPPING approach allows for smoothing out the discrete nature of the modes. An alternate (and simpler in practice) solution to VDD-HOPPING is the INCREMENTAL model, where one sticks with a unique speed during each task execution as in the DISCRETE model, but where consecutive modes are regularly spaced. Such a model can be made arbitrarily efficient, according to our approximation results.
+
+ Altogether, this paper has laid the theoretical foundations for a comparative study of energy models. In recent years, we have observed an increased concern for green computing, and a rapidly growing number of approaches. It will be very interesting to see which energy-saving technological solutions will be implemented in future processor chips.
+
+ Regardless of the (future) energy model, there are two important future research directions that can already be envisioned:
+
+ * For those situations where the optimal solutions or approximation algorithms provided in this paper would be too costly, fast heuristics can easily be introduced. Typically, such heuristics would greedily perform local changes in the schedule until a local optimum has been reached. It would be very interesting to assess the energy savings achieved by such "fast" solutions with respect to the gain provided by the optimal solution.
+
+ * This paper has dealt with a fixed (given) mapping of the task graph. In some situations, the user may well have the possibility to choose, say, the list-schedule that assigns tasks to physical resources. Given a deadline, the problem is already NP-complete without energy considerations. Introducing variable speeds together with an energy-oriented objective dramatically increases the combinatorial difficulty of the problem. Still, designing and evaluating fast yet efficient heuristics would be of great practical significance.
+
+ **Acknowledgement.** We thank the reviewers for their observations and suggestions that greatly improved the final version of the paper.
samples/texts/1228241/page_17.md ADDED
@@ -0,0 +1,24 @@
+ REFERENCES
+
+ 1. Mills MP. The internet begins with coal. *Environment and Climate News* 1999.
+ 2. Ge R, Feng X, Cameron KW. Performance-constrained distributed DVS scheduling for scientific applications on power-aware clusters. *Proceedings of the ACM/IEEE conference on SuperComputing (SC)*, IEEE Computer Society, 2005; 34.
+ 3. Skadron K, Stan MR, Sankaranarayanan K, Huang W, Velusamy S, Tarjan D. Temperature-aware microarchitecture: modeling and implementation. *ACM Transactions on Architecture and Code Optimization* 2004; 1(1):94–125.
+ 4. Hotta Y, Sato M, Kimura H, Matsuoka S, Boku T, Takahashi D. Profile-based optimization of power performance by using dynamic voltage scaling on a PC cluster. *Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS)*, IEEE Computer Society Press: Los Alamitos, CA, USA, 2006; 340, doi:10.1109/IPDPS.2006.1639597.
+ 5. Ishihara T, Yasuura H. Voltage scheduling problem for dynamically variable voltage processors. *Proceedings of International Symposium on Low Power Electronics and Design (ISLPED)*, ACM Press, 1998; 197–202.
+ 6. Pruhs K, van Stee R, Uthaisombut P. Speed scaling of tasks with precedence constraints. *Theory of Computing Systems* 2008; 43:67–80.
+ 7. Chandrakasan AP, Sinha A. JouleTrack: A Web Based Tool for Software Energy Profiling. *Design Automation Conference*, IEEE Computer Society Press: Los Alamitos, CA, USA, 2001; 220–225.
+ 8. Aydin H, Yang Q. Energy-aware partitioning for multiprocessor real-time systems. *Proceedings of the International Parallel and Distributed Processing Symposium (IPDPS)*, IEEE CS Press, 2003; 113–121.
+ 9. Chen JJ, Kuo TW. Multiprocessor energy-efficient scheduling for real-time tasks. *Proceedings of International Conference on Parallel Processing (ICPP)*, IEEE CS Press, 2005; 13–20.
+ 10. Rayward-Smith VJ, Burton FW, Janacek GJ. Scheduling parallel programs assuming preallocation. *Scheduling Theory and its Applications*, Chrétienne P, Coffman Jr EG, Lenstra JK, Liu Z (eds.), John Wiley and Sons, 1995.
+ 11. Wang L, von Laszewski G, Dayal J, Wang F. Towards Energy Aware Scheduling for Precedence Constrained Parallel Tasks in a Cluster with DVFS. *Proceedings of CCGrid'2010, the 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing*, 2010; 368–377, doi:10.1109/CCGRID.2010.19.
+ 12. Prathipati RB. Energy efficient scheduling techniques for real-time embedded systems. Master's Thesis, Texas A&M University, May 2004.
+ 13. Bansal N, Kimbrel T, Pruhs K. Speed scaling to manage energy and temperature. *Journal of the ACM* 2007; **54**(1):1–39, doi:10.1145/1206035.1206038.
+ 14. Okuma T, Yasuura H, Ishihara T. Software energy reduction techniques for variable-voltage processors. *IEEE Design & Test of Computers* Mar 2001; **18**(2):31–41, doi:10.1109/54.914613.
+ 15. Miermont S, Vivet P, Renaudin M. A Power Supply Selector for Energy- and Area-Efficient Local Dynamic Voltage Scaling. *Integrated Circuit and System Design. Power and Timing Modeling, Optimization and Simulation, Lecture Notes in Computer Science*, vol. 4644, Azémard N, Svensson L (eds.). Springer Berlin / Heidelberg, 2007; 556–565. URL http://dx.doi.org/10.1007/978-3-540-74442-9_54.
+ 16. Boyd S, Vandenberghe L. *Convex Optimization*. Cambridge University Press, 2004.
+ 17. Lee S, Sakurai T. Run-time voltage hopping for low-power real-time systems. *Proceedings of DAC'2000, the 37th Conference on Design Automation*, 2000; 806–809.
+ 18. Lahiri K, Raghunathan A, Dey S, Panigrahi D. Battery-driven system design: a new frontier in low power design. *Proceedings of ASP-DAC 2002, the 7th Asia and South Pacific Design Automation Conference and the 15th International Conference on VLSI Design*, 2002; 261–267, doi:10.1109/ASPDAC.2002.994932.
+ 19. Grosse P, Durand Y, Feautrier P. Methods for power optimization in SOC-based data flow systems. *ACM Trans. Des. Autom. Electron. Syst.* June 2009; **14**:38:1–38:20, doi:10.1145/1529255.1529260. URL http://doi.acm.org/10.1145/1529255.1529260.
+ 20. Jejurikar R, Pereira C, Gupta R. Leakage aware dynamic voltage scaling for real-time embedded systems. *Proceedings of DAC'04, the 41st annual Design Automation Conference*, ACM: New York, NY, USA, 2004; 275–280, doi:10.1145/996566.996650.
+ 21. Chen JJ, Kuo CF. Energy-Efficient Scheduling for Real-Time Systems on Dynamic Voltage Scaling (DVS) Platforms. *Proceedings of the International Workshop on Real-Time Computing Systems and Applications*, IEEE Computer Society: Los Alamitos, CA, USA, 2007; 28–38, doi:10.1109/RTCSA.2007.37.
+ 22. Kim KH, Buyya R, Kim J. Power Aware Scheduling of Bag-of-Tasks Applications with Deadline Constraints on DVS-enabled Clusters. *Proceedings of CCGRID 2007, the 7th IEEE International
samples/texts/1228241/page_19.md ADDED
@@ -0,0 +1,13 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # 1. Introduction
2
+
3
+ The energy consumption of computational platforms has recently become a critical problem, both for economic and environmental reasons [1]. As an example, the Earth Simulator requires about 12 MW (Mega Watts) of peak power, and PetaFlop systems may require 100 MW of power, nearly the output of a small power plant (300 MW). At $100 per MW.Hour, peak operation of a PetaFlop machine may thus cost $10,000 per hour [2]. Current estimates state that cooling costs $1 to $3 per watt of heat dissipated [3]. This is just one of the many economical reasons why energy-aware scheduling has proved to be an important issue in the past decade, even without considering battery-powered systems such as laptops and embedded systems. As an example, the Green500 list (www.green500.org) provides rankings of the most energy-efficient supercomputers in the world, therefore raising even more awareness about power consumption.
4
+
5
+ To help reduce energy dissipation, processors can run at different speeds. Their power consumption is the sum of a static part (the cost for a processor to be turned on) and a dynamic part, which is a strictly convex function of the processor speed, so that the execution of a given amount of work costs more power if a processor runs in a higher mode [4]. More precisely, a processor running at speed *s* dissipates *s*³ watts [5, 6, 7, 8, 9] per time-unit, hence consumes *s*³ × *d* joules when operated during *d* units of time. Faster speeds allow for a faster execution, but they also lead to a much higher (supra-linear) power consumption.
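To make the cubic trade-off concrete (an illustrative sketch, not from the paper): since a task of weight $w$ run at speed $s$ takes $d = w/s$ time-units, its energy is $E = s^3 \times d = s^2 \times w$, so halving the speed doubles the execution time but divides the energy by four.

```python
# Dynamic energy of a task of weight w executed at constant speed s,
# under the classical model P = s^3, hence E = s^3 * (w/s) = s^2 * w.
def energy(w: float, s: float) -> float:
    return s ** 2 * w

w = 8.0
e_fast = energy(w, 6.0)   # full speed
e_slow = energy(w, 3.0)   # half speed: twice as long, four times cheaper
assert e_fast == 4 * e_slow
print(e_fast, e_slow)
```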
6
+
7
+ Energy-aware scheduling aims at minimizing the energy consumed during the execution of the target application. Obviously, it makes sense only if it is coupled with some performance bound to achieve; otherwise, the optimal solution is always to run each processor at the slowest possible speed.
8
+
9
+ In this paper, we investigate energy-aware scheduling strategies for executing a task graph on a set of processors. The main originality is that we assume that the mapping of the task graph is given, say by an ordered list of tasks to execute on each processor. There are many situations in which this problem is important, such as optimizing for legacy applications, or accounting for affinities between tasks and resources, or even when tasks are pre-allocated [10], for example for security reasons. In such situations, assume that a list-schedule has been computed for the task graph, and that its execution time should not exceed a deadline *D*. We do not have the freedom to change the assignment of a given task, but we can change its speed to reduce energy consumption, provided that the deadline *D* is not exceeded after the speed change. Rather than using a local approach such as backfilling [11, 12], which only reclaims gaps in the schedule, we consider the problem as a whole, and we assess the impact of several speed variation models on its complexity. More precisely, we investigate the following models:
10
+
11
+ **Continuous model.** Processors can have arbitrary speeds, and can vary them continuously: this model is unrealistic (any possible value of the speed, say $\sqrt{e^{\pi}}$, cannot be obtained) but it is theoretically appealing [13]. A maximum speed, $s_{max}$, cannot be exceeded.
12
+
13
+ **Discrete model.** Processors have a discrete number of predefined speeds (or frequencies), which correspond to different voltages that the processor can be subjected to [14]. Switching frequencies is not allowed during the execution of a given task, but two different tasks scheduled on a same processor can be executed at different frequencies.
samples/texts/1228241/page_2.md ADDED
@@ -0,0 +1,16 @@
1
+ We conclude the study of this simple example with a short discussion on the energy savings that can be achieved. All the models have a maximum speed $s_{max} = 6$. Executing the four tasks at maximum speed leads to consuming an energy $E_{max} = 8 \times 6^2 = 288$. Such an execution completes within a delay $D = 1$. We clearly see the trade-off between execution time and energy consumption here, since we gain more than half the energy by slowing down the execution from $D = 1$ to $D = 1.5$. Note that with $D = 1$, we can still slow down task $T_2$ to speed 4, and still gain a little over the brute force solution. Hence, even such a toy example allows us to illustrate the benefits of energy-aware schedules. Obviously, with larger examples, the energy savings will be even more dramatic, depending upon the range of available speeds and the tightness of the execution deadline. In fact, the maximal energy gain that can be achieved is not bounded: when executing each task as slowly as possible (instead of as fast as possible), the energy consumption drops from $s_{max}^2 \times W_{total}$ to $s_{min}^2 \times W_{total}$, where $W_{total}$ is the sum of all task weights, hence a gain by a factor $\left(\frac{s_{max}}{s_{min}}\right)^2$, which can be arbitrarily large. One of the main contributions of this paper is to provide optimal energy-aware algorithms for each model (or guaranteed polynomial approximations for NP-complete instances).
2
+
3
+ ## 4. The Continuous model
4
+
5
+ With the CONTINUOUS model, processor speeds can take any value between 0 and $s_{max}$. First we prove that, with this model, the processors do not change their speed during the execution of a task (Section 4.1). Then, we derive in Section 4.2 the optimal speed values for special execution graph structures, expressed as closed form algebraic formulas, and we show that these values may be irrational (as already illustrated in the example in Section 3.3). Finally, we formulate the problem for general DAGs as a convex optimization program in Section 4.3.
6
+
7
+ ### 4.1. Preliminary lemma
8
+
9
+ **Lemma 1 (constant speed per task)** *In any optimal solution with the CONTINUOUS model, each task is executed at constant speed, i.e., a processor does not change its speed during the execution of a task.*
10
+
11
+ **Proof.** Suppose that in the optimal solution, there is a task whose speed changes during the execution. Consider the first time-step at which the change occurs: the computation begins at speed $s$ from time $t$ to time $t'$, and then continues at speed $s'$ until time $t''$. The total energy consumption for this task in the time interval $[t; t'']$ is $E = (t' - t) \times s^3 + (t'' - t') \times (s')^3$. Moreover, the amount of work done for this task is $W = (t' - t) \times s + (t'' - t') \times s'$.
12
+
13
+ If we run the task during the whole interval $[t; t'']$ at constant speed $W/(t'' - t)$, the same amount of work is done within the same time. However, the energy consumption during this interval of time is now $E' = (t'' - t) \times (W/(t'' - t))^3$. By strict convexity of the function $x \mapsto x^3$, we obtain $E' < E$ since $s \neq s'$ and $t < t' < t''$. This contradicts the hypothesis of optimality of the first solution, which concludes the proof. □
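The exchange step in this proof can be checked on concrete numbers (illustrative values: a change from speed $s = 2$ to $s' = 4$ at time $t' = 1$):

```python
# Compare a two-speed execution of a task against the constant-speed
# execution doing the same work W within the same total time.
t, tp, tpp = 0.0, 1.0, 2.0          # t < t' < t''
s, sp = 2.0, 4.0                    # speeds on [t, t'] and [t', t'']
E_split = (tp - t) * s**3 + (tpp - tp) * sp**3
W = (tp - t) * s + (tpp - tp) * sp  # total work done
E_const = (tpp - t) * (W / (tpp - t)) ** 3
assert E_const < E_split            # strict, since s != s'
```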
14
+
15
+ Copyright © 2011 John Wiley & Sons, Ltd.
16
+ Prepared using cpeauth.cls
samples/texts/1228241/page_20.md ADDED
@@ -0,0 +1,11 @@
1
+ **Vdd-Hopping model.** This model is similar to the DISCRETE one, except that switching modes during the execution of a given task is allowed: any rational speed can be simulated, by simply switching, at the appropriate time during the execution of a task, between two consecutive modes [15]. Note that $V_{DD}$ usually represents the supply voltage, hence the name VDD-HOPPING.
2
+
3
+ **Incremental model.** In this variant of the DISCRETE model, we introduce a value $\delta$ that corresponds to the minimum permissible speed increment, induced by the minimum voltage increment that can be achieved when controlling the CPU voltage. This new model aims at capturing a realistic version of the DISCRETE model, where the different modes are spread regularly instead of being arbitrarily chosen.
4
+
5
+ Our main contributions are the following. For the CONTINUOUS model, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem [16] for general DAGs. For the VDD-HOPPING model, we show that the optimal solution for general DAGs can be computed in polynomial time, using a (rational) linear program. Finally, for the DISCRETE and INCREMENTAL models, we show that the problem is NP-complete. Furthermore, we provide approximation algorithms that rely on the polynomial algorithm for the VDD-HOPPING model, and we compare their solution with the optimal CONTINUOUS solution.
6
+
7
+ The paper is organized as follows. We start with a survey of related literature in Section 2. We then provide the formal description of the framework and of the energy models in Section 3, together with a simple example to illustrate the different models. The next two sections constitute the heart of the paper: in Section 4, we provide analytical formulas for continuous speeds, and the formulation into the convex optimization problem. In Section 5, we assess the complexity of the problem with all the discrete models: DISCRETE, VDD-HOPPING and INCREMENTAL, and we discuss approximation algorithms. Finally we conclude in Section 6.
8
+
9
+ ## 2. Related work
10
+
11
+ Reducing the energy consumption of computational platforms is an important research topic, and many techniques at the process, circuit design, and micro-architectural levels have been proposed [17, 18, 19]. The dynamic voltage and frequency scaling (DVFS) technique has been extensively studied, since it may lead to efficient energy/performance trade-offs [20, 2, 13, 21, 22, 23, 11]. Current microprocessors (for instance, from AMD [24] and Intel [25]) allow the speed to be set dynamically. Indeed, by lowering supply voltage, hence processor clock frequency, it is possible to achieve important reductions in power consumption, without necessarily increasing the execution time. We first discuss different optimization problems that arise in this context. Then we review energy models.
samples/texts/1228241/page_21.md ADDED
@@ -0,0 +1,11 @@
1
+ ## 2.1. DVFS and optimization problems
2
+
3
+ When dealing with energy consumption, the most usual optimization function consists in minimizing the energy consumption, while ensuring a deadline on the execution time (i.e., a real-time constraint), as discussed in the following papers.
4
+
5
+ In [14], Okuma et al. demonstrate that voltage scaling is far more effective than the shutdown approach, which simply stops the power supply when the system is inactive. Their target processor employs just a few discretely variable voltages. De Langen and Juurlink [26] discuss leakage-aware scheduling heuristics that investigate both dynamic voltage scaling (DVS) and processor shutdown, since static power consumption due to leakage current is expected to increase significantly. Chen et al. [27] consider parallel sparse applications, and they show that when scheduling applications modeled by a directed acyclic graph with a well-identified critical path, it is possible to lower the voltage during non-critical execution of tasks, with no impact on the execution time. Similarly, Wang et al. [11] study the slack time for non-critical jobs, they extend their execution time and thus reduce the energy consumption without increasing the total execution time. Kim et al. [22] provide power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints, based on dynamic voltage scaling. Their goal is to minimize power consumption as well as to meet the deadlines specified by application users.
6
+
7
+ For real-time embedded systems, slack reclamation techniques are used. Lee and Sakurai [17] show how to exploit slack time arising from workload variation, thanks to a software feedback control of supply voltage. Prathipati [12] discusses techniques to take advantage of run-time variations in the execution time of tasks: the approach determines the minimum voltage under which each task can be executed, while guaranteeing its deadline. Then, experiments are conducted on the Intel StrongArm SA-1100 processor, which has eleven different frequencies, and the Intel PXA250 XScale embedded processor with four frequencies. In [28], the goal of Xu et al. is to schedule a set of independent tasks, given a worst case execution cycle (WCEC) for each task, and a global deadline, while accounting for time and energy penalties when the processor frequency changes. The frequency of the processor can be lowered when some slack is obtained dynamically, typically when a task runs faster than its WCEC. Yang and Lin [23] discuss algorithms with preemption, using DVS techniques; substantial energy can be saved using these algorithms, which succeed in reclaiming the static and dynamic slack time, with little overhead.
8
+
9
+ Since an increasing number of systems are powered by batteries, maximizing battery life also is an important optimization problem. Battery-efficient systems can be obtained with similar techniques of dynamic voltage and frequency scaling, as described by Lahiri et al. in [18]. Another optimization criterion is the energy-delay product, since it accounts for a trade-off between performance and energy consumption, as for instance discussed by Gonzalez and Horowitz in [29]. We do not discuss further these latter optimization problems, since our goal is to minimize the energy consumption, with a fixed deadline.
10
+
11
+ In this paper, the application is a task graph (directed acyclic graph), and we assume that the mapping, i.e., an ordered list of tasks to execute on each processor, is given. Hence, our problem is closely related to slack reclamation techniques, but instead on focusing on non-critical tasks as for instance in [11], we consider the problem as a whole. Our contribution is
samples/texts/1228241/page_22.md ADDED
@@ -0,0 +1,15 @@
1
+ to perform an exhaustive complexity study for different energy models. In the next paragraph, we discuss related work on each energy model.
2
+
3
+ ## 2.2. Energy models
4
+
5
+ Several energy models are considered in the literature, and they can all be categorized in one of the four models investigated in this paper, i.e., CONTINUOUS, DISCRETE, VDD-HOPPING or INCREMENTAL.
6
+
7
+ The CONTINUOUS model is used mainly for theoretical studies. For instance, Yao et al. [30], followed by Bansal et al. [13], aim at scheduling a collection of tasks (with release time, deadline and amount of work), and the solution is the time at which each task is scheduled, but also, the speed at which the task is executed. In these papers, the speed can take any value, hence following the CONTINUOUS model.
8
+
9
+ We believe that the most widely used model is the DISCRETE one. Indeed, current processors support only a small discrete set of possible frequencies [24, 25, 14, 12]. Therefore, most of the papers discussed above follow this model. Some studies exploit the continuous model to determine the smallest frequency required to run a task, and then choose the closest upper discrete value, as for instance [12] and [31].
10
+
11
+ Recently, a new local dynamic voltage scaling architecture has been developed, based on the VDD-HOPPING model [15, 32, 33]. It was shown in [17] that significant power can be saved by using two distinct voltages, and architectures using this principle have been developed (see for instance [34]). Compared to traditional power converters, a new design with no needs for large passives or costly technological options has been validated in a STMicroelectronics CMOS 65nm low-power technology [15].
12
+
13
+ To the best of our knowledge, this paper introduces the INCREMENTAL model for the first time. The main rationale is that future technologies may well have an increased number of possible frequencies, and these will follow a regular pattern. For instance, note that the SA-1100 processor, considered in [12], has eleven frequencies that are equidistant, i.e., they follow the INCREMENTAL model. Lee and Sakurai [17] exploit discrete levels of clock frequency as $f, f/2, f/3, ...$, where *f* is the master (i.e., the highest) system clock frequency. This model is closer to the DISCRETE model, although it exhibits a regular pattern similar to the INCREMENTAL model.
14
+
15
+ Our work is the first attempt to compare these different models: on the one hand, we assess the impact of the model on the problem complexity (polynomial vs NP-hard), and on the other hand, we provide approximation algorithms building upon these results. The closest work to ours is the paper by Zhang et al. [31], in which the authors also consider the mapping of directed acyclic graphs, and compare the DISCRETE and the CONTINUOUS models. We go beyond their work in this paper, with an exhaustive complexity study, closed-form formulas for the continuous model, and the comparison with the VDD-HOPPING and INCREMENTAL models.
samples/texts/1228241/page_23.md ADDED
@@ -0,0 +1,50 @@
1
+ ## 3. Framework
2
+
3
+ First we detail the optimization problem in Section 3.1. Then we describe the four energy
4
+ models in Section 3.2. Finally, we illustrate the models and motivate the problem with an
5
+ example in Section 3.3.
6
+
7
+ ### 3.1. Optimization problem
8
+
9
+ Consider an application task graph $\mathcal{G} = (V, \mathcal{E})$, with $n = |V|$ tasks denoted as $V = \{T_1, T_2, \ldots, T_n\}$, and where the set $\mathcal{E}$ denotes the precedence edges between tasks. Task $T_i$ has a cost $w_i$ for $1 \le i \le n$. We assume that the tasks in $\mathcal{G}$ have been allocated onto a parallel platform made up of identical processors. We define the *execution graph* generated by this allocation as the graph $G = (V, E)$, with the following augmented set of edges:
10
+
11
+ • $\mathcal{E} \subseteq E$: if an edge exists in the precedence graph, it also exists in the execution graph;
12
+
13
+ • if $T_1$ and $T_2$ are executed successively, in this order, on the same processor, then
14
+ $(T_1, T_2) \in E$.
15
+
16
+ The goal is to minimize the energy consumed during the execution while enforcing a
17
+ deadline *D* on the execution time. We formalize the optimization problem in the simpler case
18
+ where each task is executed at constant speed. This strategy is optimal for the CONTINUOUS
19
+ model (by a convexity argument) and for the DISCRETE and INCREMENTAL models (by
20
+ definition). For the VDD-HOPPING model, we reformulate the problem in Section 5.1. For
21
+ each task $T_i \in V$, $b_i$ is the starting time of its execution, $d_i$ is the duration of its execution,
22
+ and $s_i$ is the speed at which it is executed. We obtain the following formulation of the
23
+ MINENERGY(*G*, *D*) problem, given an execution graph *G* = (*V*, *E*) and a deadline *D*; the
24
+ $s_i$ values are variables, whose values are constrained by the energy model (see Section 3.2).
25
+
26
+ $$
27
+ \begin{array}{ll}
28
+ \text{Minimize} & \displaystyle\sum_{i=1}^{n} s_i^3 \times d_i \\
29
+ \text{subject to} & (i) \quad w_i = s_i \times d_i \quad \text{for each task } T_i \in V \\
30
+ & (ii) \quad b_i + d_i \leq b_j \quad \text{for each edge } (T_i, T_j) \in E \\
31
+ & (iii) \quad b_i + d_i \leq D \quad \text{for each task } T_i \in V \\
32
+ & (iv) \quad b_i \geq 0 \quad \text{for each task } T_i \in V
33
+ \end{array}
34
+ \tag{1}
35
+ $$
36
+
37
+ Constraint (i) states that the whole task can be executed in time $d_i$ using speed $s_i$.
38
+ Constraint (ii) accounts for all dependencies, and constraint (iii) ensures that the execution
39
+ time does not exceed the deadline $D$. Finally, constraint (iv) enforces that starting times
40
+ are non-negative. The energy consumed throughout the execution is the objective function.
41
+ It is the sum, for each task, of the energy consumed by this task, as we detail in the next
42
+ section. Note that $d_i = w_i/s_i$, and therefore the objective function can also be expressed as
43
+ $\sum_{i=1}^{n} s_i^2 \times w_i$.
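A quick numerical check that the two forms of the objective agree (the weights are those of the example of Section 3.3, the speeds are arbitrary feasible values):

```python
# Since d_i = w_i / s_i, we have s_i^3 * d_i = s_i^2 * w_i for each task,
# so both expressions of the objective function coincide.
ws = [3.0, 2.0, 1.0, 2.0]          # task weights
ss = [4.0, 2.5, 3.5, 3.5]          # candidate execution speeds
ds = [w / s for w, s in zip(ws, ss)]
obj_time = sum(s**3 * d for s, d in zip(ss, ds))
obj_work = sum(s**2 * w for s, w in zip(ss, ws))
assert abs(obj_time - obj_work) < 1e-9
```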
44
+
45
+ Note that, whatever the energy model, there is a maximum speed that cannot be exceeded,
46
+ denoted $s_{max}$. We point out that there is a solution to the minimization problem if and only
47
+ if there is a solution with $s_i = s_{max}$ for all $1 \le i \le n$. Such a solution would correspond to
48
+ executing each task as early as possible (according to constraints (ii) and (iv)) and as fast as
49
+ possible. The optimal solution then slows down tasks to save as much energy as possible, while
50
+ enforcing the deadline constraint. There is no guarantee on the uniqueness of the solution,
samples/texts/1228241/page_24.md ADDED
@@ -0,0 +1,27 @@
1
+ since it may be possible to modify the beginning time of a task without affecting the energy
2
+ consumption, if some of the constraints (ii) are not tight.
3
+
4
+ ### 3.2. Energy models
5
+
6
+ In all models, when a processor operates at speed $s$ during $d$ time-units, the corresponding consumed energy is $s^3 \times d$, which is the dynamic part of the energy consumption, following the classical models of the literature [5, 6, 7, 8, 9]. Note that we do not take static energy into account, because all processors are up and alive during the whole execution. We now detail the possible speed values in each energy model, which should be added as a constraint in Equation (1).
7
+
8
+ * In the CONTINUOUS model, processors can have arbitrary speeds, from 0 to a maximum value $s_{max}$, and a processor can change its speed at any time during execution.
9
+
10
+ * In the DISCRETE model, processors have a set of possible speed values, or modes, denoted as $s_1, ..., s_m$. There is no assumption on the range and distribution of these modes. The speed of a processor cannot change during the computation of a task, but it can change from task to task.
11
+
12
+ * In the VDD-HOPPING model, a processor can run at different speeds $s_1, ..., s_m$, as in the previous model, but it can also change its speed during a computation. The energy consumed during the execution of one task is the sum, on each time interval with constant speed $s$, of the energy consumed during this interval at speed $s$.
13
+
14
+ * In the INCREMENTAL model, we introduce a value $\delta$ that corresponds to the minimum permissible speed (i.e., voltage) increment. That means that possible speed values are obtained as $s = s_{min} + i \times \delta$, where $i$ is an integer such that $0 \le i \le \frac{s_{max}-s_{min}}{\delta}$. Admissible speeds lie in the interval [$s_{min}, s_{max}$]. This new model aims at capturing a realistic version of the DISCRETE model, where the different modes are spread regularly between $s_1 = s_{min}$ and $s_m = s_{max}$, instead of being arbitrarily chosen. It is intended as the modern counterpart of a potentiometer knob!
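The admissible speed set of the INCREMENTAL model is easy to enumerate (a sketch; the helper name is ours, and the values $s_{min} = 2$, $s_{max} = 6$, $\delta = 2$ are those used in the example of Section 3.3):

```python
def incremental_speeds(s_min, s_max, delta):
    # Admissible speeds: s = s_min + i * delta, with 0 <= i <= (s_max - s_min)/delta.
    n = int((s_max - s_min) / delta)
    return [s_min + i * delta for i in range(n + 1)]

assert incremental_speeds(2, 6, 2) == [2, 4, 6]
```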
15
+
16
+ ### 3.3. Example
17
+
18
+ Consider an application with four tasks of costs $w_1 = 3$, $w_2 = 2$, $w_3 = 1$ and $w_4 = 2$, and one precedence constraint $T_1 \rightarrow T_3$. We assume that $T_1$ and $T_2$ are allocated, in this order, onto processor $P_1$, while $T_3$ and $T_4$ are allocated, in this order, on processor $P_2$. The resulting execution graph $G$ is given in Figure 1, with two precedence constraints added to the initial task graph. The deadline on the execution time is $D = 1.5$.
19
+
20
+ We set the maximum speed to $s_{max} = 6$ for the CONTINUOUS model. For the DISCRETE and VDD-HOPPING models, we use the set of speeds $s_1^{(d)} = 2$, $s_2^{(d)} = 5$ and $s_3^{(d)} = 6$. Finally, for the INCREMENTAL model, we set $\delta = 2$, $s_{min} = 2$ and $s_{max} = 6$, so that possible speeds are $s_1^{(i)} = 2$, $s_2^{(i)} = 4$ and $s_3^{(i)} = 6$. We aim at finding the optimal execution speed $s_i$ for each task $T_i$ ($1 \le i \le 4$), i.e., the values of $s_i$ that minimize the energy consumption.
21
+
22
+ With the CONTINUOUS model, the optimal speeds are irrational values, and we obtain
23
+
24
+ $$s_1 = \frac{2}{3}(3 + 35^{1/3}) \approx 4.18; \quad s_2 = s_1 \times \frac{2}{35^{1/3}} \approx 2.56; \quad s_3 = s_4 = s_1 \times \frac{3}{35^{1/3}} \approx 3.83.$$
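These values can be cross-checked numerically (a sketch exploiting the structure of this particular execution graph, not the general method of Section 4): once the completion time $t$ of $T_1$ is fixed, each processor should run its remaining load at constant speed, so the total energy reduces to the one-variable convex function $E(t) = w_1^3/t^2 + \left(w_2^3 + (w_3+w_4)^3\right)/(D-t)^2 = 27/t^2 + 35/(D-t)^2$, which a simple ternary search minimizes.

```python
# Two processors, D = 1.5: P1 runs T1 (w=3) then T2 (w=2);
# P2 waits for T1, then runs T3 and T4 (total weight 3) back to back.
# If T1 finishes at time t: s1 = 3/t, s2 = 2/(D-t), s3 = s4 = 3/(D-t),
# hence E(t) = 3*s1^2 + 2*s2^2 + 3*s3^2 = 27/t^2 + 35/(D-t)^2.
D = 1.5
def E(t):
    return 27.0 / t**2 + 35.0 / (D - t)**2

lo, hi = 1e-6, D - 1e-6
for _ in range(200):                 # ternary search on the convex E
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if E(m1) < E(m2):
        hi = m2
    else:
        lo = m1
t = (lo + hi) / 2
s1 = 3.0 / t
print(round(s1, 2), round(E(t), 1))
```

The search recovers $s_1 \approx 4.18$, matching the closed form above.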
25
+
samples/texts/1228241/page_25.md ADDED
@@ -0,0 +1,9 @@
1
+ Figure 1: Execution graph for the example.
2
+
3
+ Note that all speeds are lower than the maximum $s_{max}$. These values are obtained thanks to the formulas derived in Section 4. The energy consumption is then $E_{opt}^{(c)} = \sum_{i=1}^{4} w_i \times s_i^2 = 3 \cdot s_1^2 + 2 \cdot s_2^2 + 3 \cdot s_3^2 \approx 109.6$. The execution time is $\frac{w_1}{s_1} + \max(\frac{w_2}{s_2}, \frac{w_3+w_4}{s_3})$, and with this solution, it is equal to the deadline *D* (actually, both processors reach the deadline, otherwise we could slow down the execution of one task).
4
+
5
+ For the DISCRETE model, if we execute all tasks at speed $s_2^{(d)} = 5$, we obtain an energy $E = 8 \times 5^2 = 200$. A better solution is obtained with $s_1 = s_3^{(d)} = 6$, $s_2 = s_3 = s_1^{(d)} = 2$ and $s_4 = s_2^{(d)} = 5$, which turns out to be optimal: $E_{opt}^{(d)} = 3 \times 36 + (2+1) \times 4 + 2 \times 25 = 170$. Note that $E_{opt}^{(d)} > E_{opt}^{(c)}$, i.e., the optimal energy consumption with the DISCRETE model is much higher than the one achieved with the CONTINUOUS model. Indeed, in this case, even though the first processor executes during $3/6 + 2/2 = D$ time units, the second processor finishes at $3/6 + 1/2 + 2/5 = 1.4 < D$ and remains idle afterwards. The problem turns out to be NP-hard (see Section 5.2), and the solution has been found by performing an exhaustive search.
6
+
7
+ With the VDD-HOPPING model, we set $s_1 = s_2^{(d)} = 5$; for the other tasks, we run part of the time at speed $s_2^{(d)} = 5$, and part of the time at speed $s_1^{(d)} = 2$ in order to use the idle time and lower the energy consumption. $T_2$ is executed at speed $s_1^{(d)}$ during time $\frac{5}{6}$ and at speed $s_2^{(d)}$ during time $\frac{2}{30}$ (i.e., the first processor executes during time $3/5 + 5/6 + 2/30 = 1.5 = D$, and all the work for $T_2$ is done: $2 \times 5/6 + 5 \times 2/30 = 2 = w_2$). $T_3$ is executed at speed $s_2^{(d)}$ (during time $1/5$), and finally $T_4$ is executed at speed $s_1^{(d)}$ during time $0.5$ and at speed $s_2^{(d)}$ during time $1/5$ (i.e., the second processor executes during time $3/5 + 1/5 + 0.5 + 1/5 = 1.5 = D$, and all the work for $T_4$ is done: $2 \times 0.5 + 5 \times 1/5 = 2 = w_4$). This set of speeds turns out to be optimal (i.e., it is the optimal solution of the linear program introduced in Section 5.1), with an energy consumption $E_{opt}^{(v)} = (3/5 + 2/30 + 1/5 + 1/5) \times 5^3 + (5/6 + 0.5) \times 2^3 = 144$. As expected, $E_{opt}^{(c)} \le E_{opt}^{(v)} \le E_{opt}^{(d)}$, i.e., the VDD-HOPPING solution stands between the optimal CONTINUOUS solution, and the more constrained DISCRETE solution.
8
+
9
+ For the INCREMENTAL model, the reasoning is similar to the DISCRETE case, and the optimal solution is obtained by an exhaustive search: all tasks should be executed at speed $s_2^{(i)} = 4$, with an energy consumption $E_{opt}^{(i)} = 8 \times 4^2 = 128 > E_{opt}^{(c)}$. It turns out to be better than DISCRETE and VDD-HOPPING, since its discrete speed values happen to be more appropriate for this example.
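The discrete optima quoted above can be reproduced by brute force on this four-task example (a sketch assuming the execution graph of Figure 1; `best_energy` is our helper, and such enumeration is only viable because the instance is tiny, the general problem being NP-hard):

```python
from itertools import product

w = [3.0, 2.0, 1.0, 2.0]   # weights of T1..T4
D = 1.5

def best_energy(speeds):
    # P1 runs T1 then T2; P2 runs T3 then T4, with T3 waiting for T1.
    best = None
    for s in product(speeds, repeat=4):
        t1 = w[0] / s[0]                      # completion time of T1
        p1 = t1 + w[1] / s[1]                 # finish time of P1
        p2 = t1 + w[2] / s[2] + w[3] / s[3]   # finish time of P2
        if p1 <= D + 1e-12 and p2 <= D + 1e-12:
            e = sum(wi * si**2 for wi, si in zip(w, s))
            best = e if best is None else min(best, e)
    return best

print(best_energy([2, 5, 6]))   # DISCRETE speed set
print(best_energy([2, 4, 6]))   # INCREMENTAL speed set
```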
samples/texts/1228241/page_3.md ADDED
@@ -0,0 +1,19 @@
1
+ ## 4.2. Special execution graphs
2
+
3
+ ### 4.2.1. Independent tasks
4
+
5
+ Consider the problem of minimizing the energy of *n* independent tasks (i.e., each task is mapped onto a distinct processor, and there are no precedence constraints in the execution graph), while enforcing a deadline *D*.
6
+
7
+ **Proposition 1 (independent tasks)** When *G* is composed of independent tasks {$T_1, \dots, T_n$}, the optimal solution to MINENERGY(*G*, *D*) is obtained when each task $T_i$ ($1 \le i \le n$) is computed at speed $s_i = \frac{w_i}{D}$. If there is a task $T_i$ such that $s_i > s_{max}$, then the problem has no solution.
8
+
9
+ **Proof.** For task $T_i$, the speed $s_i$ corresponds to the slowest speed at which the processor can execute the task, so that the deadline is not exceeded. If $s_i > s_{max}$, the corresponding processor will never be able to complete its execution before the deadline, therefore there is no solution. To conclude the proof, we note that any other solution would meet the deadline constraint, and therefore the $s_i$'s should be such that $\frac{w_i}{s_i} \le D$, which means that $s_i \ge \frac{w_i}{D}$. These values would all be higher than the $s_i$'s of the optimal solution, and hence would lead to a higher energy consumption. Therefore, this solution is optimal. $\square$
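Proposition 1 translates directly into code (a sketch; the function name, weights and deadline are illustrative):

```python
def independent_speeds(w, D, s_max):
    # Each task runs as slowly as the deadline permits: s_i = w_i / D.
    speeds = [wi / D for wi in w]
    if any(s > s_max for s in speeds):
        return None                  # some task cannot meet the deadline
    return speeds

speeds = independent_speeds([3.0, 2.0, 1.0], D=2.0, s_max=6.0)
assert speeds == [1.5, 1.0, 0.5]
energy = sum(wi * si**2 for wi, si in zip([3.0, 2.0, 1.0], speeds))
assert abs(energy - 9.0) < 1e-12
```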
10
+
11
+ ### 4.2.2. Linear chain of tasks
12
+
13
+ This case corresponds for instance to *n* independent tasks {$T_1, \dots, T_n$} executed onto a single processor. The execution graph is then a linear chain (order of execution of the tasks), with $T_i \to T_{i+1}$, for $1 \le i < n$.
14
+
15
+ **Proposition 2 (linear chain)** When *G* is a linear chain of tasks, the optimal solution to MINENERGY(*G*, *D*) is obtained when each task is executed at speed *s* = *W/D*, with *W* = $\sum_{i=1}^{n} w_i$. If *s* > *s*<sub>max</sub>, then there is no solution.
16
+
17
+ **Proof.** Suppose that in the optimal solution, tasks $T_i$ and $T_j$ are such that $s_i < s_j$. The total energy consumption is $E_{opt}$. We define *s* such that the execution of both tasks running at speed *s* takes the same amount of time as in the optimal solution, i.e., $(w_i + w_j)/s = w_i/s_i + w_j/s_j$, which gives $s = \frac{(w_i+w_j)\, s_i s_j}{w_i s_j + w_j s_i}$. Note that $s_i < s < s_j$ (it is the barycenter of two points with positive mass).
18
+
19
+ We consider a solution such that the speed of task $T_k$, for $1 \le k \le n$, with $k \neq i$ and $k \neq j$, is the same as in the optimal solution, and the speed of tasks $T_i$ and $T_j$ is *s*. By definition of *s*, the execution time has not been modified. The energy consumption of this solution is *E*, where $E_{opt} - E = w_i s_i^2 + w_j s_j^2 - (w_i + w_j)s^2$, i.e., the difference of energy with the optimal solution is only impacted by tasks $T_i$ and $T_j$, for which the speed has been modified. By convexity of the function $x \mapsto x^2$, we obtain $E_{opt} > E$, which contradicts its optimality. Therefore, in the optimal solution, all tasks have the same execution speed. Moreover, the energy consumption is minimized when the speed is as low as possible, while the deadline is not exceeded. Therefore, the execution speed of all tasks is $s = W/D$. $\square$
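The exchange argument can be observed on concrete numbers (illustrative values $s_i = 2$, $s_j = 5$): replacing two distinct speeds by the time-preserving common speed $s$ strictly decreases the energy.

```python
wi, wj = 3.0, 2.0
si, sj = 2.0, 5.0                    # two distinct speeds
# Common speed preserving the total execution time of the two tasks:
s = (wi + wj) * si * sj / (wi * sj + wj * si)
assert si < s < sj                   # barycenter property
E_before = wi * si**2 + wj * sj**2
E_after = (wi + wj) * s**2
assert E_after < E_before            # strict, by convexity of x -> x^2
```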
samples/texts/1228241/page_5.md ADDED
@@ -0,0 +1,19 @@
1
+ Finally, we compute the exact expression of $\mathbf{minE}(G, D) = f(s_0)$, when $s_0 \le s_{max}$:
2
+
3
+ $$f(s_0) = s_0^2 \left( w_0 + \frac{W_3}{(s_0 D - w_0)^2} \right) = \left( \frac{W_3^{1/3} + w_0}{D} \right)^2 \left( \frac{W_3}{W_3^{2/3}} + w_0 \right) = \frac{\left( W_3^{1/3} + w_0 \right)^3}{D^2},$$
4
+
5
+ which concludes the proof. $\square$
6
+
7
+ **Corollary 2 (equivalent tasks for speed)** Consider a fork or join graph with tasks $T_i$, $0 \le i \le n$, and a deadline $D$, and assume that the speeds in the optimal solution to $\text{MINENERGY}(G, D)$ do not exceed $s_{max}$. Then, these speeds are the same as in the optimal solution for $n+1$ independent tasks $T'_0, T'_1, \dots, T'_n$, where $w'_0 = (\sum_{i=1}^n w_i^3)^{1/3} + w_0$, and, for $1 \le i \le n$, $w'_i = w_i + w_0 \cdot \frac{w_i}{(\sum_{i=1}^n w_i^3)^{1/3}}$.
8
+
9
+ **Corollary 3 (equivalent task for energy)** Consider a fork or join graph $G$ and a deadline $D$, and assume that the speeds in the optimal solution to $\text{MINENERGY}(G, D)$ do not exceed $s_{max}$. We say that the graph $G$ is equivalent to the graph $G^{(eq)}$, consisting of a single task $T_0^{(eq)}$ of weight $w_0^{(eq)} = \left(\sum_{i=1}^n w_i^3\right)^{1/3} + w_0$, because the minimum energy consumptions of both graphs are identical: $\mathbf{minE}(G, D)=\mathbf{minE}(G^{(eq)}, D)$.
10
+
11
+ ### 4.2.4. Trees
12
+
13
+ We extend the results on fork graphs to a tree $G = (V, E)$ with $|V| = n + 1$ tasks. Let $T_0$ be the root of the tree; it has $k$ children, each of which is itself the root of a tree. A tree can therefore be seen as a fork graph, where the tasks of the fork are trees.
14
+
15
+ The previous results for fork graphs naturally lead to an algorithm that peels off branches of the tree, starting with the leaves, and replaces each fork subgraph in the tree, composed of a root $T_0$ and $k$ children, by one task (as in Corollary 3) that becomes the unique child of $T_0$'s parent in the tree. We say that this task is equivalent to the fork graph, since the optimal energy consumption will be the same. The computation of the equivalent cost of this task is done thanks to a call to the **eq** procedure, while the **tree** procedure computes the solution to $\text{MINENERGY}(G, D)$ (see Algorithm 1). Note that the algorithm computes the minimum energy for a tree, but it does not return the speeds at which each task must be executed. However, the algorithm returns the speed of the root task, and it is then straightforward to compute the speed of each child of the root task, and so on.
16
+
17
+ **Theorem 2 (tree graphs)** When $G$ is a tree rooted in $T_0$ ($T_0 \in V$, where $V$ is the set of tasks), the optimal solution to $\text{MINENERGY}(G, D)$ can be computed in polynomial time $O(|V|^2)$.
18
+
19
+ **Proof.** Let $G$ be a tree graph rooted in $T_0$. The optimal solution to $\text{MINENERGY}(G, D)$ is obtained with a call to **tree** $(G, T_0, D)$, and we prove its optimality recursively on the depth of the tree. Similarly to the case of the fork graphs, we reduce the tree to an equivalent task that, if executed alone within a deadline $D$, consumes exactly the same amount of energy. The procedure **eq** is the procedure that reduces a tree to its equivalent task (see Algorithm 1).
samples/texts/1228241/page_6.md ADDED
@@ -0,0 +1,32 @@
+ **Algorithm 1:** Solution to MINENERGY(G, D) for trees.
+
+ procedure tree (tree G, root T₀, deadline D)
+ begin
+ Let w=eq (tree G, root T₀);
+ if $w/D \le s_{max}$ then
+ return $w^3/D^2$;
+ else
+ if $w_0/s_{max} > D$ then
+ return Error: No Solution;
+ else
+ /* T₀ is executed at speed $s_{max}$ */
+ return $w_0 \times s_{max}^2 + \sum_{G_i \text{ subtree rooted in } T_i \in \text{children}(T_0)} \text{tree}(G_i, T_i, D - \frac{w_0}{s_{max}})$;
+ end
+ end
+ end
+
+ procedure eq (tree G, root T₀)
+ begin
+ if $\text{children}(T_0)=\emptyset$ then
+ return $w_0$;
+ else
+ return $\left(\sum_{G_i \text{ subtree rooted in } T_i \in \text{children}(T_0)} (\mathbf{eq}(G_i, T_i))^3\right)^{\frac{1}{3}} + w_0$;
+ end
+ end
+
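Algorithm 1 can be transcribed directly into executable form. The following Python sketch is ours (the paper gives only pseudocode); a tree is encoded as a pair `(w0, children)`, where `children` is a list of subtrees.

```python
# A transcription of Algorithm 1 (illustrative sketch; the encoding is ours).
# A tree is (w0, [subtree, ...]); a leaf task of weight w is (w, []).

def eq(tree):
    """Equivalent cost of a tree, as in procedure eq."""
    w0, children = tree
    if not children:
        return w0
    return sum(eq(c) ** 3 for c in children) ** (1 / 3) + w0

def tree_energy(tree, D, s_max=float("inf")):
    """Minimum energy to execute the tree within deadline D, as in procedure tree."""
    w = eq(tree)
    if w / D <= s_max:
        return w**3 / D**2
    w0, children = tree
    if w0 / s_max > D:
        raise ValueError("No solution")
    # The root is executed at speed s_max; each subtree gets the remaining time.
    rem = D - w0 / s_max
    return w0 * s_max**2 + sum(tree_energy(c, rem, s_max) for c in children)
```

For a single task of weight 2 and deadline 4 this returns $2^3/4^2 = 0.5$, and for a two-task chain of weights 1 and 2 with $D = 3$ it returns $3^3/3^2 = 3$, matching Proposition 1 and Corollary 1.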
+ If the tree has depth 0, then it is a single task, **eq** (G, T₀) returns the equivalent cost $w_0$, and the optimal execution speed is $w_0/D$ (see Proposition 1). There is a solution if and only if this speed is not greater than $s_{max}$, and then the corresponding energy consumption is $w_0^3/D^2$, as returned by the algorithm.
+
+ Assume now that for any tree of depth $i < p$, **eq** computes its equivalent cost, and **tree** returns its optimal energy consumption. We consider a tree $G$ of depth $p$ rooted in $T_0$: $G = T_0 \cup \{G_i\}$, where each subgraph $G_i$ is a tree, rooted in $T_i$, of maximum depth $p - 1$. As in the case of forks, we know that each subtree $G_i$ has a deadline $D - x$, where $x = w_0/s_0$, and $s_0$ is the speed at which task $T_0$ is executed. By induction hypothesis, we suppose that each graph $G_i$ is equivalent to a single task, $T'_i$, of cost $w'_i$ (as computed by the procedure **eq**). We can then use the results obtained on forks to compute $w_0^{(eq)}$ (see proof of Theorem 1):
samples/texts/1228241/page_7.md ADDED
@@ -0,0 +1,17 @@
+ $$w_0^{(eq)} = \left( \sum_i (w'_i)^3 \right)^{\frac{1}{3}} + w_0.$$
+
+ Finally the tree is equivalent to one task of cost $w_0^{(eq)}$, and if $\frac{w_0^{(eq)}}{D} \le s_{max}$, the energy consumption is $\frac{(w_0^{(eq)})^3}{D^2}$, and no speed exceeds $s_{max}$.
+
+ Note that the speed of a task is always greater than the speed of its successors. Therefore, if $\frac{w_0^{(eq)}}{D} > s_{max}$, we execute the root of the tree at speed $s_{max}$ and then process each subtree $G_i$ independently. Of course, there is no solution if $\frac{w_0}{s_{max}} > D$, and otherwise we perform the recursive calls to **tree** to process each subtree independently. Their deadline is then $D - \frac{w_0}{s_{max}}$. $\square$
+
+ ### 4.2.5. Series-parallel graphs
+
+ We can further generalize our results to series-parallel graphs (SPGs), which are built from a sequence of compositions (parallel or series) of smaller-size SPGs. The smallest SPG consists of two nodes connected by an edge (such a graph is called an *elementary SPG*). The first node is the source, while the second one is the sink of the SPG. When composing two SPGs in series, we merge the sink of the first SPG with the source of the second one. For a parallel composition, the two sources are merged, as well as the two sinks, as illustrated in Figure 2.
+
+ We can extend the results for tree graphs to SPGs, by replacing step by step the SPGs by an equivalent task (procedure **cost** in Algorithm 2): we can compute the equivalent cost for a series or parallel composition.
+
+ However, since it is no longer true that the speed of a task is always larger than the speed of its successor (as was the case in a tree), we have not been able to find a recursive property on the tasks that should be set to $s_{max}$, when one of the speeds obtained with the previous method exceeds $s_{max}$. The problem of computing a closed form for an SPG with a finite value of $s_{max}$ remains open. Still, we have the following result when $s_{max} = +\infty$:
+
+ **Theorem 3 (series-parallel graphs)** *When $G$ is an SPG, it is possible to compute recursively a closed form expression of the optimal solution of MINENERGY(G, D), assuming $s_{max} = +\infty$, in polynomial time $O(|V|)$, where $V$ is the set of tasks.*
+
+ **Proof.** Let $G$ be a series-parallel graph. The optimal solution to $\text{MINENERGY}(G, D)$ is obtained with a call to **SPG** $(G, D)$, and we prove its optimality recursively. Similarly to trees, the main idea is to peel the graph off, and to transform it until there remains only a single equivalent task that, if executed alone within a deadline $D$, would consume exactly
samples/texts/1228241/page_8.md ADDED
@@ -0,0 +1,11 @@
+ Figure 2: Composition of series-parallel graphs (SPGs).
+
+ the same amount of energy. The procedure **cost** is the procedure that reduces an SPG to its equivalent task (see Algorithm 2).
+
+ The proof is done by induction on the number of compositions $p$ required to build the graph $G$. If $p = 0$, $G$ is an elementary SPG consisting of two tasks, the source $T_0$ and the sink $T_1$. It is therefore a linear chain, and thus equivalent to a single task whose cost is the sum of both costs, $w_0+w_1$ (see Corollary 1 for linear chains). The procedure **cost** therefore returns the correct equivalent cost, and **SPG** returns the minimum energy consumption.
+
+ Let us assume that the procedures return the correct equivalent cost and minimum energy consumption for any SPG consisting of $i < p$ compositions. We consider an SPG $G$ with $p$ compositions. By definition, $G$ is a composition of two smaller-size SPGs, $G_1$ and $G_2$, and both of these SPGs have strictly fewer than $p$ compositions. We consider $G'_1$ and $G'_2$, which are identical to $G_1$ and $G_2$, except that the costs of their source and sink tasks are set to 0 (these costs are handled separately), and we can reduce both of these SPGs to an equivalent task, of respective costs $w'_1$ and $w'_2$, by induction hypothesis. There are two cases:
+
+ * If $G$ is a series composition, then after the reduction of $G'_1$ and $G'_2$, we have a linear chain in which we consider the source $T_0$ of $G_1$, the sink $T_1$ of $G_1$ (which is also the source of $G_2$), and the sink $T_2$ of $G_2$. The equivalent cost is therefore $w_0 + w'_1 + w_1 + w'_2 + w_2$, thanks to Corollary 1 for linear chains.
+
+ * If $G$ is a parallel composition, the resulting graph is a fork-join graph, and we can use Corollaries 1 and 3 to compute the cost of the equivalent task, accounting for the source $T_0$ and the sink $T_1$: $w_0 + ((w'_1)^3 + (w'_2)^3)^{\frac{1}{3}} + w_1$.
samples/texts/1228241/page_9.md ADDED
@@ -0,0 +1,30 @@
+ **Algorithm 2:** Solution to MINENERGY(G, D) for series-parallel graphs.
+
+ procedure **SPG** (series-parallel graph G, deadline *D*)
+ begin
+ return $\frac{(\mathbf{cost}(G))^3}{D^2}$;
+ end
+
+ procedure **cost** (series-parallel graph G)
+ begin
+ Let $T_0$ be the source of G and $T_1$ its sink;
+ if G is composed of only two tasks, $T_0$ and $T_1$ **then**
+ return $w_0 + w_1$;
+ else
+ /* G is a composition of two SPGs $G_1$ and $G_2$. */
+ For $i = 1, 2$, let $G'_i = G_i$ where the cost of source and sink tasks is set to 0;
+ $w'_1 = \mathbf{cost}(G'_1)$; $w'_2 = \mathbf{cost}(G'_2)$;
+ if G is a series composition **then**
+ Let $T_0$ be the source of $G_1$, $T_1$ be its sink, and $T_2$ be the sink of $G_2$;
+ return $w_0 + w'_1 + w_1 + w'_2 + w_2$;
+ else
+ /* It is a parallel composition. */
+ Let $T_0$ be the source of G, and $T_1$ be its sink;
+ return $w_0 + ((w'_1)^3 + (w'_2)^3)^{\frac{1}{3}} + w_1$;
+ end
+ end
+ end
+
+ Once the cost of the equivalent task of the SPG has been computed with the call to **cost** (*G*), the optimal energy consumption is $\frac{(\mathbf{cost}(G))^3}{D^2}$.
+
+ Contrary to the case of tree graphs, since there is no constraint on $s_{max}$, the **SPG** procedure is never called recursively, and the time complexity of the algorithm is that of the **cost** procedure. There is exactly one call to **cost** for each composition, and the number of compositions in the SPG is in $O(|V|)$. All operations in **cost** can be done in $O(1)$, hence a complexity in $O(|V|)$. □
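Algorithm 2 admits a direct transcription as well. The Python sketch below uses an SPG encoding of our own (not the paper's): an elementary SPG is `("leaf", w_source, w_sink)`, and compositions are `("series", G1, G2)` or `("parallel", G1, G2)`; for a parallel composition we assume both operands carry the same merged source and sink weights.

```python
# Illustrative transcription of Algorithm 2 (encoding is ours, s_max = +infinity).
def src(G):
    return G[1] if G[0] == "leaf" else src(G[1])

def snk(G):
    return G[2] if G[0] == "leaf" else snk(G[2])

def zero_src(G):
    if G[0] == "leaf":
        return ("leaf", 0.0, G[2])
    if G[0] == "series":
        return ("series", zero_src(G[1]), G[2])
    return ("parallel", zero_src(G[1]), zero_src(G[2]))

def zero_snk(G):
    if G[0] == "leaf":
        return ("leaf", G[1], 0.0)
    if G[0] == "series":
        return ("series", G[1], zero_snk(G[2]))
    return ("parallel", zero_snk(G[1]), zero_snk(G[2]))

def cost(G):
    """Equivalent cost of an SPG, as in procedure cost."""
    if G[0] == "leaf":
        return G[1] + G[2]
    w0, w_sink = src(G), snk(G)
    # Zero out source/sink costs of the components; they are handled separately.
    c1 = cost(zero_snk(zero_src(G[1])))
    c2 = cost(zero_snk(zero_src(G[2])))
    if G[0] == "series":
        w_mid = snk(G[1])  # merged sink of G1 / source of G2
        return w0 + c1 + w_mid + c2 + w_sink
    return w0 + (c1**3 + c2**3) ** (1 / 3) + w_sink

def spg_energy(G, D):
    """Minimum energy, as in procedure SPG."""
    return cost(G) ** 3 / D**2
```

For a diamond (source of weight 1, parallel middle tasks of weights 2 and 3, sink of weight 4), this yields the fork-join cost $1 + (2^3 + 3^3)^{1/3} + 4$ from the parallel-composition case.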
samples/texts/1469251/page_1.md ADDED
@@ -0,0 +1,33 @@
+ Divisors in algebraic geometry
+
+ C.S. Seshadri
+
+ **Translator's note.**
+
+ This text is one of a series* of translations of various papers into English. The translator takes full responsibility for any errors introduced in the passage from one language to another, and claims no rights to any of the mathematical content herein.
+
+ What follows is a translation of the French seminar talk:
+
+ SESHADRI, C. S. "Diviseurs en géométrie algébrique". Séminaire Claude Chevalley, Volume 4 (1958-1959), Talk no. 4. http://www.numdam.org/item/SCC_1958-1959_4__A4_0
+
+ Contents
+
+ 1 Preliminaries 1
+
+ 2 Dévissage theorem 3
+
+ 3 Divisors (Generalities) 5
+
+ p. 4-01
+
+ In the first part of this talk, we will prove a theorem of Serre on complete varieties [6], following the methods of Grothendieck [4]. The second part is dedicated to generalities on divisors. In the literature, the divisors studied here are often called "locally principal" divisors.
+
+ The algebraic spaces considered here are defined over an algebraically closed field $K$. By "variety", we mean an irreducible algebraic space. If $X$ is an algebraic space, we denote by $\mathcal{O}(X)$, $\mathcal{R}(X)$, etc. (or simply $\mathcal{O}$, $\mathcal{R}$, etc.) the structure sheaf, the sheaf of rational functions, etc. on $X$ (to define $\mathcal{R}(X)$ we assume that $X$ is a variety). By "coherent sheaf" on $X$, we mean a coherent sheaf of $\mathcal{O}$-modules on $X$.
+
+ # 1 Preliminaries
+
+ [4, 5, 6]
+
+ If $M$ is a module over an integral ring $A$ (commutative and with 1), then we say that an element $m \in M$ is a *torsion element* if there exists some non-zero $a \in A$ such that $a \cdot m = 0$. We say that $M$ is a *torsion module* (resp. *torsion-free module*) if every element of $M$ is a torsion element (resp. if $M \neq 0$ and no non-zero element of $M$ is a torsion element). The torsion elements of $M$ form a submodule of $M$, the *torsion submodule* (denoted by $T(M)$); if $M \neq 0$, then
+
+ *https://thosgood.com/translations
samples/texts/1469251/page_2.md ADDED
@@ -0,0 +1,27 @@
+ $M/T(M)$ is a torsion-free module. If $M$ is a torsion module of finite type over $A$, then the ideal $\text{ann}\,M$ of $A$ (the ideal of $A$ given by the elements $a \in A$ such that $aM = 0$) is non-zero.
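A simple example may help fix these notions (it is added here for illustration and is not in the original): over the polynomial ring $A = K[t]$, take

$$M = A \oplus A/(t), \qquad T(M) = 0 \oplus A/(t), \qquad \operatorname{ann} T(M) = (t) \neq 0, \qquad M/T(M) \cong A.$$

Here the summand $A/(t)$ is a torsion module (annihilated by the non-zero element $t$), while the quotient $M/T(M) \cong A$ is torsion-free, as the general statements above require.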
+ Let $X$ be an algebraic space and $\mathcal{F}$ a sheaf of $\mathcal{O}$-modules on $X$. We define $\text{supp}\,\mathcal{F}$ to be the set of points $x \in X$ such that $\mathcal{F}_x \neq 0$. If $\mathcal{F}$ is coherent, then $\text{supp}\,\mathcal{F}$ is a closed subset of $X$. If $X$ is affine, then $\text{supp}\,\mathcal{F}$ is the set defined by the ideal $\text{ann}\,H^0(X, \mathcal{F})$ of the affine algebra $H^0(X, \mathcal{O})$, where $H^0(X, \mathcal{F})$ is considered as a module over $H^0(X, \mathcal{O})$.
+
+ A sheaf $\mathcal{F}$ of $\mathcal{O}$-modules on a variety $X$ is said to be a *torsion sheaf* (resp. *torsion-free sheaf*) if, for every $x \in X$, the module $\mathcal{F}_x$ over the ring $\mathcal{O}_x$ is a torsion module (resp. torsion-free module).
+
+ p. 4-02
+
+ **Proposition 1.** If $\mathcal{F}$ is a coherent sheaf on a variety $X$, then there exists a coherent subsheaf $T(\mathcal{F})$ of $\mathcal{F}$ (and only one) such that $(T(\mathcal{F}))_x = T(\mathcal{F}_x)$.
+
+ *Proof.* The uniqueness is trivial. The existence is a consequence of the fact that, if $X$ is affine, then $T(\mathcal{F}_x)$ is given by localisation of the module $T(H^0(X, \mathcal{F}))$ with respect to the maximal ideal of $H^0(X, \mathcal{O})$ that defines $x$. $\square$
+
+ **Corollary.** If $\mathcal{F} \neq 0$ then $\mathcal{F}/T(\mathcal{F})$ is a torsion-free coherent sheaf.*
+
+ **Proposition 2.** If $\mathcal{F}$ is a coherent sheaf on the variety $X$, then $\text{supp}\,\mathcal{F} \neq X$ if and only if $\mathcal{F}$ is a torsion sheaf.
+
+ *Proof.* This is a trivial consequence of the fact that, if $U$ is an affine open subset, then $\text{supp}\,\mathcal{F} \cap U$ is defined by the ideal $\text{ann}\,H^0(U, \mathcal{F})$ of $H^0(U, \mathcal{O})$, where $H^0(U, \mathcal{F})$ is considered as a module over $H^0(U, \mathcal{O})$. $\square$
+
+ **Proposition 3.** If $\mathcal{F}$ is a torsion-free coherent sheaf on a variety $X$, with $\mathcal{F} \subset \mathcal{R}^n$, then there exists a coherent sheaf $\mathcal{I} \neq 0$ of ideals of $\mathcal{O}$ such that $\mathcal{I} \cdot \mathcal{F} \subset \mathcal{O}^n$.
+
+ *Proof.* Let $\mathcal{I}_x$ be the ideal $\left[\mathcal{O}_x^n : \mathcal{F}_x\right]$ of $\mathcal{O}_x$, i.e. the ideal of elements $i_x$ of $\mathcal{O}_x$ such that $i_x \mathcal{F}_x \subset \mathcal{O}_x^n$. Since $\mathcal{F}_x$ is of finite type over $\mathcal{O}_x$, we know that $\mathcal{I}_x \neq 0$. If we take an affine open subset $U$ of $X$, then we can prove that $\mathcal{I}_x$ is given by localisation of the ideal $[H^0(U, \mathcal{O}^n) : H^0(U, \mathcal{F})]$ of $H^0(U, \mathcal{O})$ by the maximal ideal of $H^0(U, \mathcal{O})$ that defines $x$. Thus $\{\mathcal{I}_x\}_{x \in X}$ defines a coherent sheaf $\mathcal{I}$ of ideals of $\mathcal{O}$ such that $\mathcal{I} \cdot \mathcal{F} \subset \mathcal{O}^n$. $\square$
+
+ Let $\mathcal{F}$ be a torsion-free coherent sheaf on a variety $X$. Then the canonical homomorphism $\mathcal{F} \to \mathcal{F} \otimes_{\mathcal{O}} \mathcal{R}$ is injective. The sheaves $\mathcal{R}$ and $\mathcal{F} \otimes_{\mathcal{O}} \mathcal{R}$ are locally constant sheaves, and thus constant ([5, page 229]). We can then identify $\mathcal{F} \otimes_{\mathcal{O}} \mathcal{R}$ with a vector space of finite dimension over $\mathcal{R}$ (we identify the field of rational functions with the sheaf $\mathcal{R}$ since $\mathcal{R}$ is constant). We call this dimension the *rank* of $\mathcal{F}$, and we can then consider $\mathcal{F}$ as a subsheaf of $\mathcal{R}^n$, where $n = \text{rank}\,\mathcal{F}$.
+
+ **Proposition 4.** Under the same hypotheses as in Proposition 3, there exists a coherent sheaf $\mathcal{I} \neq 0$ of ideals of $\mathcal{O}$ such that $\mathcal{I} \cdot \mathcal{F} \subset \mathcal{O}^n$, where $n = \text{rank}\,\mathcal{F}$; then $\mathcal{O}^n/(\mathcal{I} \cdot \mathcal{F})$ and $\mathcal{F}/(\mathcal{I} \cdot \mathcal{F})$ are torsion sheaves.
+
+ * [Trans.] The condition that $\mathcal{F} \neq 0$ is unnecessary, but we include it here since it is in the original. Note that the zero sheaf is indeed a torsion-free sheaf, otherwise any coherent torsion sheaf $\mathcal{F}$ provides a counterexample to this corollary.
samples/texts/1469251/page_3.md ADDED
@@ -0,0 +1,27 @@
+ *Proof.* The proof is immediate. $\square$
+
+ If $Y$ is a closed subset of an algebraic space $X$, then we denote by $\mathcal{I}_Y$ the coherent sheaf of ideals of $\mathcal{O}$ defined by $Y$.
+
+ **Proposition 5.** Let $Y$ be a closed subset of an algebraic space $X$, and $\mathcal{F}$ a coherent sheaf on $X$, with $\text{supp}\,\mathcal{F} \subset Y$; then there exists an integer $k$ such that $\mathcal{I}_Y^k \mathcal{F} = 0$.
+
+ *Proof.* We can reduce to the case where $X$ is affine, since there exists a finite cover of $X$ by affine opens. In this case, the hypothesis implies that the set defined by the ideal $\text{ann}\,H^0(X, \mathcal{F})$ is contained in $Y$. This implies, as is well known, that $\text{ann}\,H^0(X, \mathcal{F}) \supset \mathcal{I}_Y^k$. $\square$
+
+ **Proposition 6.** Let $\mathcal{F}$ be a coherent sheaf of fractional ideals on a variety $X$ (i.e. a coherent subsheaf of $\mathcal{R}$) such that, for every $x$ outside of a closed subset $Y$ of $X$, $\mathcal{F}_x$ is an ideal of $\mathcal{O}_x$. Then there exists an integer $k$ such that $\mathcal{I}_Y^k \cdot \mathcal{F} \subset \mathcal{O}$.
+
+ *Proof.* By Proposition 3 and the hypothesis, there exists a coherent sheaf $\mathcal{J}$ of ideals of $\mathcal{O}$ such that $\mathcal{J}_x = \mathcal{O}_x$ if $x \notin Y$, and such that $\mathcal{J} \cdot \mathcal{F} \subset \mathcal{O}$. Thus $\text{supp}(\mathcal{O}/\mathcal{J}) \subset Y$, and, by Proposition 5, there exists an integer $k$ such that $\mathcal{I}_Y^k(\mathcal{O}/\mathcal{J}) = 0$. This implies that $\mathcal{I}_Y^k \subset \mathcal{J}$. $\square$
+
+ ## 2 Dévissage theorem
+
+ Let $\mathscr{C}$ be an abelian category, and $\mathscr{C}'$ a subcategory of objects of $\mathscr{C}$. We say that $\mathscr{C}'$ is left exact in $\mathscr{C}$ if
+
+ 1. every subobject of an object of $\mathscr{C}'$ is in $\mathscr{C}'$;
+
+ 2. for every exact sequence $0 \to \mathscr{A}' \to \mathscr{A} \to \mathscr{A}'' \to 0$ in $\mathscr{C}$, the object $\mathscr{A}$ is in $\mathscr{C}'$ if the other two objects are in $\mathscr{C}'$.²
+
+ Let $X$ be an algebraic space. We denote by $\mathscr{C}(X)$ the abelian category of coherent sheaves on $X$. If $Y$ is a closed subset of $X$, then a coherent sheaf on $Y$ has a canonical extension to a coherent sheaf on $X$ (extending by 0 outside of $Y$), and so we can consider $\mathscr{C}(Y)$ as a subcategory of $\mathscr{C}(X)$. With this notation, we have the following theorem:
+
+ **Theorem (Dévissage).** Let $\mathcal{D}$ be a left-exact subcategory of $\mathscr{C}(X)$ that has the following property: for every closed irreducible subset $Y$ of $X$, there exists a coherent sheaf $\mathcal{M}_Y$ of $\mathscr{C}(Y)$ that belongs to $\mathcal{D}$, and that is torsion-free as a sheaf on $Y$. Then $\mathcal{D} = \mathscr{C}(X)$.
+
+ *Proof.* The proof works by induction on the dimension of $X$. If $\dim X = 0$, then $X$ consists of a finite number of points $P_1, \dots, P_r$, and a coherent sheaf on $X$ can be identified with a system $\{N_i\}_{i=1,\dots,r}$, where $N_i$ is a vector space of finite dimension over $K$. Thus the sheaf $\mathcal{M}_{P_i}$ on $P_i$ that we have, by hypothesis, is a vector space of finite dimension over $K$. By the axioms of a left-exact subcategory, it is trivial to show that every system $\{N_i\}_{i=1,\dots,r}$,
+
+ ²The axioms here that define a left-exact subcategory are slightly stronger than those of Grothendieck [4].
samples/texts/1469251/page_4.md ADDED
@@ -0,0 +1,25 @@
+ where $N_i$ is a vector space of finite dimension over $K$, considered as a coherent sheaf on $X$, belongs to $\mathcal{D}$.
+
+ Now assume that we have proven the theorem for all dimensions $\le (n-1)$. Let $\dim X = n$. Let $Y$ be a closed subset of $X$ such that $\dim Y \le (n-1)$. We can easily show that $\mathcal{D} \cap \mathscr{C}(Y)$ is a left-exact subcategory of $\mathscr{C}(Y)$ that satisfies the hypotheses of the theorem. So, by the induction hypothesis, $\mathcal{D} \supset \mathscr{C}(Y)$.
+
+ We will now prove that, if $\mathcal{F}$ is a coherent sheaf on $X$ with $\text{supp}\,\mathcal{F} = Y$, then $\mathcal{F} \in \mathcal{D}$. If $\mathcal{I}_Y \cdot \mathcal{F} = 0$, then $\mathcal{F} \in \mathscr{C}(Y)$, and, by the above, $\mathcal{F} \in \mathcal{D}$. In any case, by Proposition 5, there exists an integer $k \ge 1$ such that $\mathcal{I}_Y^k \mathcal{F} = 0$. We will complete the proof by induction on $k$. Suppose that the claim has been proven for every coherent sheaf $\mathcal{G}$ on $X$ such that $\mathcal{I}_Y^{k-1} \mathcal{G} = 0$. For $\mathcal{F}$, we have an exact sequence
+
+ $$0 \to \mathcal{I}_Y \cdot \mathcal{F} \to \mathcal{F} \to \mathcal{F}/(\mathcal{I}_Y \cdot \mathcal{F}) \to 0.$$
+
+ The sheaf $\mathcal{I}_Y \mathcal{F}$ is annihilated by $\mathcal{I}_Y^{k-1}$, and the sheaf $\mathcal{F}/(\mathcal{I}_Y \mathcal{F})$ is annihilated by $\mathcal{I}_Y$. Thus $\mathcal{I}_Y \mathcal{F}$ and $\mathcal{F}/(\mathcal{I}_Y \mathcal{F})$ belong to $\mathcal{D}$. This implies that $\mathcal{F} \in \mathcal{D}$.
+
+ Suppose that $X$ is a variety, and that $\mathcal{F}$ is a torsion-free sheaf on $X$. We can consider $\mathcal{F}$ as a coherent subsheaf of $\mathcal{R}^n$, where $n = \text{rank}\,\mathcal{F}$, and, by Proposition 4, there then exists a coherent sheaf of ideals $\mathcal{I}$ such that $\mathcal{I} \cdot \mathcal{F} \subset \mathcal{O}^n$, and such that the sheaves $\mathcal{F}/(\mathcal{I}\mathcal{F})$ and $\mathcal{O}^n/(\mathcal{I}\mathcal{F})$ are torsion sheaves. Since $\mathcal{F}/(\mathcal{I}\mathcal{F})$ is a torsion sheaf, $\mathcal{F}/(\mathcal{I}\mathcal{F}) \in \mathcal{D}$; thus $\mathcal{F} \in \mathcal{D}$ if and only if $\mathcal{I}\mathcal{F} \in \mathcal{D}$. Analogously, $\mathcal{I}\mathcal{F} \in \mathcal{D}$ if and only if $\mathcal{O}^n \in \mathcal{D}$, and, by the axioms of a left-exact subcategory, if and only if $\mathcal{O} \in \mathcal{D}$. Thus $\mathcal{F} \in \mathcal{D}$ if and only if $\mathcal{O} \in \mathcal{D}$. If we repeat the same argument for the torsion-free sheaf $\mathcal{M}_X$, which we have by hypothesis, then we see that $\mathcal{O} \in \mathcal{D}$, which implies that $\mathcal{F} \in \mathcal{D}$.
+
+ Suppose again that $X$ is a variety, but now that $\mathcal{F}$ is an arbitrary coherent sheaf. We will show that $\mathcal{F} \in \mathcal{D}$. We can assume that $\mathcal{F} \ne 0$, and we then have
+
+ $$0 \to T(\mathcal{F}) \to \mathcal{F} \to \mathcal{F}/T(\mathcal{F}) \to 0$$
+
+ where $T(\mathcal{F})$ is a torsion sheaf, and $\mathcal{F}/T(\mathcal{F})$ is a torsion-free sheaf. By Proposition 2, $\text{supp}\,T(\mathcal{F}) \ne X$, and, since $X$ is a variety, $\dim \text{supp}\,T(\mathcal{F}) < \dim X$. We then have, by the induction hypothesis, that $T(\mathcal{F}) \in \mathcal{D}$, and we have just proven that $\mathcal{F}/T(\mathcal{F}) \in \mathcal{D}$. Thus $\mathcal{F} \in \mathcal{D}$.
+
+ Now let $X$ be an arbitrary algebraic space, and $X_1, \dots, X_p$ its irreducible components. If $\mathcal{F}$ is a coherent sheaf on $X$, then $\mathcal{F}/(\mathcal{I}_{X_i}\mathcal{F})$ can be identified with a sheaf on the variety $X_i$ (where $\mathcal{I}_{X_i}$ is the sheaf of ideals of $\mathcal{O}(X)$ determined by $X_i$), and, by the above, $\mathcal{F}/(\mathcal{I}_{X_i}\mathcal{F}) \in \mathcal{D}$. Thus the sheaf $\mathcal{G} = \sum_{i=1}^p \mathcal{F}/(\mathcal{I}_{X_i}\mathcal{F})$ belongs to $\mathcal{D}$. We have a canonical homomorphism $\varphi: \mathcal{F} \to \mathcal{G}$. The image of $\varphi$ is a coherent subsheaf of $\mathcal{G}$, and so the image of $\varphi$ belongs to $\mathcal{D}$.
+
+ We know that $\text{supp}\,\ker \varphi \subset \bigcup_{i \ne j} X_i \cap X_j$, and so $\dim \text{supp}\,\ker \varphi < \dim X$, and, by the induction hypothesis, $\ker \varphi \in \mathcal{D}$. Thus $\mathcal{F} \in \mathcal{D}$, and the theorem is proven. $\square$
+
+ **Corollary (Serre's Theorem).** If $\mathcal{F}$ is a coherent sheaf on a complete algebraic space $X$, then $H^0(X, \mathcal{F})$ is a vector space of finite dimension over $K$.
+
+ *Proof.* We take $\mathcal{D}$ to be the category of all coherent sheaves $\mathcal{F}$ on $X$ such that $H^0(X, \mathcal{F})$ is of finite dimension over $K$. We can prove that $\mathcal{D}$ is a left-exact subcategory of $\mathscr{C}(X)$. Also,
samples/texts/1469251/page_5.md ADDED
@@ -0,0 +1,19 @@
+ we know that, if $Y$ is an irreducible closed subset of $X$, then $Y$ is a complete variety. Thus the coherent sheaf $\mathcal{O}(Y)$ on $Y$ is a torsion-free sheaf with the property that $H^0(Y, \mathcal{O}(Y)) \cong K$, and so $H^0(X, \mathcal{O}(Y)) = H^0(Y, \mathcal{O}(Y))$ is of finite dimension over $K$ (we denote also by $\mathcal{O}(Y)$ the canonical extension of $\mathcal{O}(Y)$ to $X$). By the theorem, the corollary is proven. $\square$
+
+ ### 3 Divisors (Generalities)
+
+ Let $X$ be an algebraic variety, and $\mathcal{R}^\times(X)$ and $\mathcal{O}^\times(X)$ (or simply $\mathcal{R}^\times$ and $\mathcal{O}^\times$) the constant sheaf on $X$ of non-zero rational functions and the sheaf on $X$ of invertible regular functions (respectively). The sheaves $\mathcal{R}^\times$ and $\mathcal{O}^\times$, endowed with their multiplicative structure, are sheaves of abelian groups.
+
+ A divisor $D$ on $X$ is a section of the quotient sheaf $\mathcal{R}^\times/\mathcal{O}^\times$. An element of $\mathcal{R}^\times$ that is a representative of the value $D(x)$ of $D$ at $x$ is called a *definition function of $D$ at $x$*. More generally, a function $f \in \mathcal{R}^\times$ is called a *definition function of $D$ in an open subset $U$* if, for all $x \in U$, $f$ is a representative of $D(x)$; then $f$ is determined up to an invertible regular function on $U$. Since we can locally lift a section of $\mathcal{R}^\times/\mathcal{O}^\times$ to a section of $\mathcal{R}^\times$, a divisor $D$ is determined by the following data: a cover $\{U_i\}$ by open subsets, and non-zero rational functions $f_i$ on $U_i$ such that, on $U_i \cap U_j$, $f_{ij} = f_i/f_j$ is an invertible regular function. We have that $f_{ij}f_{jk}f_{ki} = 1$ on $U_i \cap U_j \cap U_k$, and, as is well known, this allows us to construct a locally trivial fibre bundle with $K^\times$ as the structure group; it is easy to see that this fibre bundle is determined up to equivalence [7]. We also know that the coherent sheaves of fractional ideals (i.e. the coherent subsheaves of $\mathcal{R}$) generated by $f_i$ and $f_j$ (respectively) agree on $U_i \cap U_j$, and do not depend on the choice of definition functions of $D$ in $U_i$ and $U_j$. This implies that the divisor $D$ canonically determines a coherent sheaf of locally principal fractional ideals. We can easily see that the converse is true, and this gives us an equivalent definition of a divisor [1].
+
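A standard example of this data (added here for illustration; it is not in the original): on $X = \mathbb{P}^1$ with homogeneous coordinates $(x_0 : x_1)$, take the cover $U_0 = \{x_1 \neq 0\}$, $U_1 = \{x_0 \neq 0\}$, and the affine coordinate $t = x_0/x_1$ on $U_0$. The data

$$f_0 = t \ \text{on } U_0, \qquad f_1 = 1 \ \text{on } U_1$$

defines a divisor $D$ supported at the point $(0 : 1)$: on $U_0 \cap U_1$ the function $f_{01} = f_0/f_1 = t$ is invertible and regular, and it is the transition function of the associated locally trivial fibre bundle with structure group $K^\times$.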
+ A divisor $D$ is said to be *positive* if, for each $x \in X$, $D(x) \in \mathcal{O}_x/\mathcal{O}_x^\times$ (i.e. if all the definition functions of $D$ at $x$ are regular functions at $x$).
+
+ Since $\mathcal{R}^\times/\mathcal{O}^\times$ is a sheaf of abelian groups on $X$, there is a canonical structure of an abelian group on the set of divisors on $X$; this group is called the *group of divisors on $X$*. The composition law in this group is written additively, and the identity element in this group is thus called the *zero divisor*, and is denoted by (0).
+
+ If $f$ is a non-zero rational function on $X$, then it defines a divisor $\text{div}\,f$ by setting $(\text{div}\,f)(x)$ to be the image of $f$ in $\mathcal{R}^\times/\mathcal{O}_x^\times$. The divisors obtained in this way are called *principal divisors*, and form a subgroup of the group of divisors on $X$; the quotient group is called the *group of classes of divisors on $X$*. Two divisors $D_1$ and $D_2$ are said to be equivalent if they are equivalent modulo the group of principal divisors; we write $D_1 \sim D_2$. We have seen that a divisor defines, up to equivalence, a locally trivial algebraic fibre bundle with structure group $K^\times$. On the other hand, it is easy to see that a locally trivial algebraic fibre bundle with $K^\times$ as its structure group defines, up to equivalence, a divisor [7]. Thus the group of classes of divisors on $X$ is equal to $H^1(X, \mathcal{O}^\times)$, the group of classes of equivalent algebraic fibre bundles with $K^\times$ as their structure group.
+
+ We can define, in an analogous way, an *additive divisor* on a variety $X$ as a section of the sheaf $\mathcal{R}/\mathcal{O}$ (the divisors defined above are called *multiplicative divisors*, or simply *divisors*). The additive divisors form an abelian group, and even a vector space over $K$. An additive divisor is determined by the following data: a cover $\{U_i\}$ of $X$ by open subsets,
+
+ p. 4-06
+
+ p. 4-07
samples/texts/1754951/page_30.md ADDED
@@ -0,0 +1,17 @@
+ Figure 17: The clockwise / anti-clockwise enumeration of parallel edges with respect to $R$.
+
+ important in how we relate a solution of the given instance of **Planar Disjoint Paths** to a weak linkage in $H$. We remind that for a pair of adjacent vertices $u, v \in V(H)$, we denoted the $4n + 1$ parallel copies of edges between them by $e_{-2n}, e_{-2n+1}, \dots, e_{-1}, e_0, e_1, e_2, \dots, e_{2n}$ where $e = \{u, v\}$, such that when the edges incident to $u$ (or $v$) are enumerated in cyclic order, the occurrences of $e_i$ and $e_{i+1}$ are consecutive (that is, $e_i$ appears immediately before $e_{i+1}$ or vice versa) for every $i \in \{-2n, -2n+1, \dots, 2n-1\}$, and $e_{-2n}$ and $e_{2n}$ are the outermost copies of $e$. We say that such an embedding is *valid*. Now, we further refine the embedding of $H_G$ (so that it remains valid)—notice that for each edge, there are two possible ways to order its copies in the embedding so that it satisfies the condition above. Here, we will specify for the edges of $R$ a particular choice among the two to embed their copies. Towards this, for a vertex $v \in V(R)$, let $\tilde{E}_R(v) = \{e \in E_H(v) \mid e \text{ is parallel to an edge } e' \in E(R)\}$. (The set $E_H(v)$ contains all edges in $E(H)$ incident to $v$.)
+
+ For the definition of the desired embedding, we remind that any tree can be properly colored in two colors (that is, every vertex is assigned a color different than the colors of its neighbors), and that in such a coloring, for every vertex, all the neighbors of the vertex get the same color. We let **color**: $V(R) \rightarrow \{\text{red, green}\}$ be some such coloring of $R$. Then, we embed parallel copies such that for every $v \in V(R)$, the following conditions hold (see Fig. 17).
+
+ * If **color**(v) = red, then when we enumerate $\tilde{E}_R(v)$ in clockwise order, for every $e \in E_R(v)$, the $4n + 1$ copies of e are enumerated in this order: $e_{-2n}, e_{-2n+1}, \dots, e_0, \dots, e_{2n}$. We let **order**$_v$ denote such an enumeration starting with an edge indexed $-2n$.
+
+ * If **color**(v) = green, then when we enumerate $\tilde{E}_R(v)$ in counter-clockwise order, for every $e \in E_R(v)$, the $4n+1$ copies of e are enumerated in this order: $e_{-2n}, e_{-2n+1}, \dots, e_0, \dots, e_{2n}$. We let **order**$_v$ denote such an enumeration starting with an edge indexed $-2n$.
+
+ Let us observe that the above scheme is well defined.
+
+ **Observation 6.9.** Let $(G, S, T, g, k)$ be a good instance of Planar Disjoint Paths with a backbone Steiner tree $R$. Then, there is a valid embedding of $H$ such that, for every $v \in V(R)$, the enumeration order$_v$ is well defined with respect to some proper coloring **color**: $V(R) \to \{\text{red, green}\}$. Furthermore, such an embedding can be computed in time $\mathcal{O}(n^2)$.
+
+ *Proof.* Since $e = \{u, v\} \in E(R)$, $u$ and $v$ get different colors under **color**. Let us assume that **color**(u) = red and **color**(v) = green. Then the parallel copies of $e$ are enumerated in clockwise order in **order**$_u$ and in counter-clockwise order in **order**$_v$. Hence, the two enumerations agree and, by construction, the embedding is valid. Finally, to bound the time required to obtain such an embedding, observe that it can be obtained by starting with any arbitrary embedding of $H$ and then renaming the edges. Since the total number of edges in $E(H)$ (including parallel copies) is at most $\mathcal{O}(n^2)$, this can be done in $\mathcal{O}(n^2)$ time. □
+
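The proper 2-coloring that the embedding relies on can be computed by a standard breadth-first traversal of the tree. A minimal sketch under the assumption that $R$ is given as an adjacency dictionary (names hypothetical, not the paper's code):

```python
from collections import deque

def two_color_tree(adj, root):
    """Properly 2-color a tree given as an adjacency dict, by BFS from root:
    each vertex gets the color opposite to its parent's."""
    color = {root: "red"}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in color:
                color[w] = "green" if color[v] == "red" else "red"
                queue.append(w)
    return color

# A small path-shaped tree: a - b - c
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
color = two_color_tree(adj, "a")
# In a proper coloring, every edge joins differently colored endpoints.
assert all(color[v] != color[w] for v in adj for w in adj[v])
```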
+ From now on, we assume that $H$ is embedded in a way so that the enumerations **order**$_v$ are well defined. We also remind that $R$ only contains the 0-th copies of edges in $H$. Finally, we have the following observation.
samples/texts/1754951/page_34.md ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ to Disjoint Paths can be replaced by an equivalent one whose paths avoid v.
+
+ This result says that if the treewidth of the input planar graph is (roughly) $\Omega(2^k)$, then we can find an irrelevant vertex and remove it. A natural question is whether we can guarantee an irrelevant vertex even if the treewidth is $\Omega(\text{poly}(k))$. Adler and Krause [3] exhibited a planar graph $G$ with $k+1$ terminal pairs such that $G$ contains a $(2^k+1) \times (2^k+1)$ grid as a subgraph, Disjoint Paths on this input has a unique solution, and the solution uses all vertices of $G$; in particular, no vertex of $G$ is irrelevant. This implies that the irrelevant vertex technique can only guarantee a treewidth of $\Omega(2^k)$, even if the input graph is planar.
+
+ Combining items (1) and (3), we conclude that the known methodology for Disjoint Paths can only guarantee an algorithm with running time $2^{2^{\mathcal{O}(k)}}n^2$ for Planar Disjoint Paths. Thus, a $2^{\text{poly}(k)}n^{\mathcal{O}(1)}$-time algorithm for Planar Disjoint Paths appears to require entirely new ideas. As this obstacle was known to Adler et al. [1], it is likely to be the main motivation for Adler to pose the existence of a $2^{\text{poly}(k)}n^{\mathcal{O}(1)}$ time algorithm for Planar Disjoint Paths as an open problem.
+
+ **Our Methods.** Our algorithm is based on a novel combination of two techniques that do not seem to give the desired outcome when used on their own. The first ingredient is the treewidth reduction theorem of Adler et al. [2], which proves that given an instance of Planar Disjoint Paths, the treewidth can be brought down to $2^{\mathcal{O}(k)}$ (explained in item (3) above). This by itself is sufficient for an FPT algorithm (this is what Adler et al. [2] do), but as explained above, on its own it seems unlikely to yield a $2^{\text{poly}(k)}n^{\mathcal{O}(1)}$-time algorithm.
+
+ We circumvent the obstacle by using an algorithm for a more difficult problem with a worse running time, namely, Schrijver's $n^{\mathcal{O}(k)}$-time algorithm for Disjoint Paths on directed planar graphs [42]. Schrijver's algorithm has two steps: a "guessing" step where one (essentially) guesses the homology class of the solution paths, and then a surprising homology-based algorithm that, given a homology class, finds a solution in that class (if one exists) in polynomial time. Our key insight is that for Planar Disjoint Paths, if the instance that we are considering has been reduced according to the procedure of Adler et al. [2], then we only need to iterate over $2^{\mathcal{O}(k^2)}$ homology classes in order to find the homology class of a solution, if one exists. The proof of this key insight is highly non-trivial, and builds on a cornerstone ingredient of the recent FPT algorithm of Cygan et al. [14] for Disjoint Paths on directed planar graphs. To the best of our knowledge, this is the first algorithm that exploits the small treewidth of the input graph in a way other than dynamic programming in order to find an exact solution to a problem. A technical overview of our methods appears in the next section. In our opinion, a major strength of the paper is that it breaks not only a barrier in running time, but also a longstanding methodological barrier. Since there are many algorithms that use the irrelevant vertex technique in some way, there is reasonable hope that they could benefit from the methods developed in this work.
+
+ We remark that we have made no attempt to optimize the polynomial factor in this paper. Doing so, and in particular achieving linear dependency on $n$ while keeping the dependency on $k$ single-exponential, is the natural next question for future research. In particular, this might require "opening up" the black boxes that we use, whose naive analysis yields a large polynomial dependency on $n$; there is no reason to believe that it cannot be made linear, although most likely this will require extensive independent work on these particular ingredients. Having both the best dependency on $k$ and the best dependency on $n$ simultaneously may be critical to achieve a practical exact algorithm for large-scale instances.
+
+ ## 2 Overview
+
+ **Homology.** In this overview, we explain our main ideas in an *informal* manner. Our starting point is Schrijver's view [42] of a collection of "non-crossing" (but possibly not vertex- or even edge-disjoint) sets of walks as flows. To work with flows (defined immediately), we deal with
samples/texts/1754951/page_37.md ADDED
@@ -0,0 +1,25 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ *Proof.* We think of $Q$ as oriented from its endpoint on $I_{\text{in}}$ to its endpoint on $I_{\text{out}}$. Let $a$ be the last vertex on $Q$ that lies on $\hat{I}_{\text{in}}$ and $b$ be the first vertex on $Q$ that lies on $\hat{I}_{\text{out}}$. Further, let $Q_{\text{in}}$ be the prefix of $Q$ from its start to $a$, and $Q_{\text{out}}$ be the suffix of $Q$ from $b$ to its end. By Claim 7.2, $Q_{\text{in}}$ is entirely contained in the ring induced by $I_{\text{in}}$ and $C_{20\ell}$. By the claim analogous to Claim 7.2, $Q_{\text{out}}$ is entirely contained in the ring induced by $I_{\text{out}}$ and $C_{p-20\ell+1}$. Since $p > 40\ell$, it follows that $Q_{\text{in}}$ and $Q_{\text{out}}$ are disjoint, and in particular $a$ appears before $b$ on $Q$. Let $\hat{Q}$ be the infix of $Q$ between $a$ and $b$. Then, $\hat{Q}$ is a path in $\text{Ring}(\hat{I}_{\text{in}}, \hat{I}_{\text{out}})$ that traverses this ring, so it suffices to check that $\lvert \overline{\text{WindNum}}(\hat{Q}, \hat{\eta}) - \overline{\text{WindNum}}(Q, \eta) \rvert \le 40\ell$.
+
+ Observe that every crossing of $Q$ and $\eta$ that is not a crossing of $\hat{Q}$ and $\hat{\eta}$ has to occur on either $Q_{\text{in}}$ or $Q_{\text{out}}$. However, $Q_{\text{in}}$ and $Q_{\text{out}}$ can have at most $40\ell$ vertices in common with $\eta$, because these must be among the intersections of $\eta$ with the cycles $C_1, \dots, C_{20\ell}, C_{p-20\ell+1}, \dots, C_p$, of which there are $40\ell$. Each such crossing can contribute $+1$ or $-1$ to the difference between the winding numbers $\overline{\text{WindNum}}(\hat{Q}, \hat{\eta})$ and $\overline{\text{WindNum}}(Q, \eta)$, hence the difference between these winding numbers is at most $40\ell$. ◇
+
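The counting step above can be illustrated numerically: informally, a winding number here is a sum of signed crossings with $\eta$, so dropping at most $40\ell$ crossings changes the total by at most $40\ell$. A toy sketch of this arithmetic (an illustration only, not the paper's formal definition of $\overline{\text{WindNum}}$):

```python
def winding_number(signed_crossings):
    """Winding number as the sum of signed crossings (+1/-1) with eta."""
    return sum(signed_crossings)

full = [+1, +1, -1, +1, +1]   # crossings of Q with eta
kept = full[1:-1]             # crossings of the infix Q-hat (2 dropped)
dropped = len(full) - len(kept)
# Each dropped crossing shifts the total by at most 1.
assert abs(winding_number(full) - winding_number(kept)) <= dropped
```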
+ For every path $Q \in \mathcal{Q}$, fix the path $\hat{Q}$ provided by Claim 7.3, and let $\hat{\mathcal{Q}} \subseteq \{\hat{Q} \mid Q \in \mathcal{Q}\}$ be such that $\lvert \hat{\mathcal{Q}} \rvert = \lvert \mathcal{P}_{\text{traverse}} \rvert$. Then, $\hat{\mathcal{Q}}$ is a traversing linkage in $\text{Ring}(\hat{I}_{\text{in}}, \hat{I}_{\text{out}})$. Apply Proposition 7.2 to the linkages $\mathcal{P}_{\text{traverse}}$ and $\hat{\mathcal{Q}}$ in $\text{Ring}(\hat{I}_{\text{in}}, \hat{I}_{\text{out}})$, yielding a linkage $\mathcal{P}'_{\text{traverse}}$ that is aligned with $\mathcal{P}_{\text{traverse}}$ and such that
+
+ $$ \lvert \overline{\text{WindNum}}(\mathcal{P}'_{\text{traverse}}, \hat{\eta}) - \overline{\text{WindNum}}(\hat{\mathcal{Q}}, \hat{\eta}) \rvert \le 6. $$
+
+ Clearly, by construction, the paths of $\mathcal{P}'_{\text{traverse}}$ are disjoint from the paths of $\mathcal{P}_{\text{visitor}}$. Furthermore, the paths in $\mathcal{P}'_{\text{traverse}}$ traverse $\text{Ring}(I_{\text{in}}, I_{\text{out}})$ since they are aligned with $\mathcal{P}_{\text{traverse}}$ (i.e., they have the same endpoints). Finally, by Claim 7.3 we have
+
+ $$ \lvert \overline{\text{WindNum}}(\hat{\mathcal{Q}}, \hat{\eta}) - \overline{\text{WindNum}}(\mathcal{Q}, \eta) \rvert \le 40\ell, $$
+
+ and by Claim 7.1 (applied to the paths in $\mathcal{P}'_{\text{traverse}}$) we have
+
+ $$ \lvert \overline{\text{WindNum}}(\mathcal{P}'_{\text{traverse}}, \eta) - \overline{\text{WindNum}}(\mathcal{P}'_{\text{traverse}}, \hat{\eta}) \rvert \le 20\ell. $$
+
+ Combining the three inequalities above via the triangle inequality, we conclude that
+
+ $$ \lvert \overline{\text{WindNum}}(\mathcal{P}'_{\text{traverse}}, \eta) - \overline{\text{WindNum}}(\mathcal{Q}, \eta) \rvert \le 60\ell + 6, $$
+
+ which completes the proof. □
+
+ ## 7.3 Rings of the Backbone Steiner Tree
+
+ Based on the results of the previous subsections, we proceed to show that if the given instance admits a solution, then it also admits a solution of small winding number. Recall the backbone Steiner tree $R$ constructed in Section 6. Let $P = \textbf{path}_R(u, v)$ be a long maximal degree-2 path in $R$, where $u, v \in V_{=1}(R) \cup V_{\ge 3}(R)$, and assume without loss of generality (under the supposition that we are given a *Yes* instance) that the subtree of $R-(V(P) \setminus \{u, v\})$ containing $v$ also contains the terminal $t^* \in T$ lying on the outer face of $H$. Recall the (minimal) separators $S_u = \textbf{Sep}_R(P, u)$ and $S_v = \textbf{Sep}_R(P, v)$ in $H$. Hence, $H[S_u]$ and $H[S_v]$ form two cycles in $H$, and $H[S_u]$ is contained in the strict interior of $H[S_v]$. Further, recall that $|S_u|, |S_v| \le \alpha_{\text{sep}}(k)$. Consider the ring induced by $H[S_u]$ and $H[S_v]$, i.e., $\text{Ring}(S_u, S_v) := \text{Ring}(H[S_u], H[S_v])$. Let $V(S_u, S_v)$ denote the set of all vertices (in $V(H)$) that lie in this ring, including those in $S_u$ and $S_v$. Then, $\text{Ring}(S_u, S_v) = H[V(S_u, S_v)]$. Note that by definition it contains $\textbf{Sep}_R(P, u)$ and $\textbf{Sep}_R(P, v)$. Let $G_{u,v}$ denote the restriction of this graph to $G$, i.e., $G_{u,v} = G[V(G) \cap V(S_u, S_v)]$. Additionally, recall that there are two distinct vertices $u'$ and $v'$ in $P$ that lie in $S_u$ and $S_v$, respectively, such that $P = \textbf{path}_R(u, u')-\textbf{path}_R(u', v')-\textbf{path}_R(v', v)$ (by Lemma 6.5 and Definition 6.5). Lastly, we remind that $A^*_{R,P,u}$ and $A^*_{R,P,v}$ are the two components of $R-(V(\textbf{path}_R(u', v')) \setminus \{u', v'\})$ that contain $u$ and $v$, respectively (Definition 6.5). Then, the following observation is immediate.
samples/texts/1754951/page_42.md ADDED
@@ -0,0 +1,29 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ **Definition 8.7 (Segment).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $W$ be a walk in $H$ that is edge-disjoint from $R$. A crossing of $W$ with $R$ is a crossing $(v, e, \hat{e}, e', \tilde{e})$ of $W$ and some path in $R$.¹⁴ Then, a segment of $W$ is a maximal subwalk of $W$ that has no crossings with $R$. Let $\text{Seg}(W)$ denote the set¹⁵ of segments of $W$.
+
+ We remind that $R$ only contains 0-copies of edges, hence we can ensure that we deal with walks that are edge-disjoint from $R$ by avoiding the usage of 0-copies. Towards the definition of potential for a weak linkage, we group segments together as follows (see Fig. 19).
+
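The decomposition of a walk into segments can be pictured concretely: cutting the walk's edge sequence at each position where it crosses $R$ yields the maximal crossing-free subwalks. A minimal sketch under the assumption that crossing positions (between consecutive edges) are already known (names and representation hypothetical):

```python
def segments(walk_edges, crossing_positions):
    """Split a walk (list of edges) into maximal subwalks, cutting at the
    positions (between edges) where the walk crosses the Steiner tree R."""
    segs, start = [], 0
    for pos in sorted(crossing_positions):
        segs.append(walk_edges[start:pos])
        start = pos
    segs.append(walk_edges[start:])
    return [s for s in segs if s]  # drop empty pieces

walk = ["e1", "e2", "e3", "e4", "e5"]
# Crossings after the 2nd and 4th edges give three segments.
assert segments(walk, {2, 4}) == [["e1", "e2"], ["e3", "e4"], ["e5"]]
```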
+ **Definition 8.8 (Segment Group).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $W$ be a walk in $H$ that is edge-disjoint from $R$. A segment group of $W$ is a maximal subwalk $W'$ of $W$ such that either (i) $\text{Seg}(W') \subseteq \text{Seg}(W)$ and all of the endpoints of all of the segments in $\text{Seg}(W')$ are internal vertices of the same maximal degree-2 path of $R$, or (ii) $W' \in \text{Seg}(W)$ and the two endpoints of $W'$ are not internal vertices of the same maximal degree-2 path in $R$.¹⁶ The set of segment groups of $W$ is denoted by $\text{SegGro}(W)$.
+
+ Observe that the set of segments, as well as the set of segment groups, defines a partition of a walk. We define the "potential" of a segment group based on its winding number in the ring that corresponds to its path (in case it is a long path where a ring is defined). To this end, recall the labeling function in Definition 7.4. Note that the labeling is defined for any two walks irrespective of the existence of a ring.
+
+ **Definition 8.9 (Potential of Segment Group).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $W$ be a walk in $H$ that is edge-disjoint from $R$ and whose endpoints are in $V_{=1}(R)$. Let $W' \in \text{SegGro}(W)$. If $|\text{Seg}(W')| = 1$, then the potential of $W'$, denoted by $\text{Potential}(W')$, is defined to be 1. Otherwise, it is defined as follows.
+
+ $$ \text{Potential}(W') = 1 + \sum_{(e,e') \in E(W^*) \times E(W')} \text{label}_P^{W'}(e, e'), $$
+
+ where $W^*$ is the walk obtained from $W'$ by adding two edges to $W'$: the edge consecutive to the first edge of $W'$ in $W$ and the edge consecutive to the last edge of $W'$ in $W$, and $P$ is the maximal degree-2 path of $R$ such that all of the endpoints of all of the segments in $\text{Seg}(W')$ are its internal vertices.
+
+ The potential of a segment group is well defined as we use the function label only for edges incident to internal vertices of the maximal degree-2 paths in $R$. For an example of the potential of a segment group, see Fig. 19. Now, we generalize the notion of potential from segment groups to weak linkages as follows.
+
+ **Definition 8.10 (Potential of Weak Linkage).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $\mathcal{W}$ be a weak linkage. Then, the potential of $\mathcal{W}$ is
+
+ $$ \text{Potential}(\mathcal{W}) = \sum_{W' \in \text{SegGro}(\mathcal{W})} \text{Potential}(W'), $$
+
+ where $\text{SegGro}(\mathcal{W}) = \bigcup_{W \in \mathcal{W}} \text{SegGro}(W)$.
+
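The two-level structure of the potential (per-group value, then a sum over all groups) can be sketched in a few lines, assuming the label sums of Definition 8.9 are already computed (a toy illustration with hypothetical values, not the paper's code):

```python
def group_potential(label_sum, num_segments):
    """Potential of a segment group per Definition 8.9: 1 for a
    single-segment group, else 1 plus the sum of labels."""
    return 1 if num_segments == 1 else 1 + label_sum

def linkage_potential(groups):
    """Potential of a weak linkage: sum over its segment groups,
    each given as (label_sum, num_segments)."""
    return sum(group_potential(ls, ns) for ls, ns in groups)

# Two groups: a lone segment, and a group of 3 segments with label sum -2.
assert linkage_potential([(0, 1), (-2, 3)]) == 1 + (1 - 2)
```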
+ To upper bound the potential of a solution of a small winding number, we first upper bound the number of segment groups.
+
+ ¹⁴The path might not be a maximal degree-2 path, thus $(v, e, \hat{e}, e', \tilde{e})$ may concern a vertex $v \in V_{\ge 3}(R)$.
+
+ ¹⁵Because we deal with walks that do not repeat edges, $\text{Seg}(W)$ is necessarily a set rather than a multiset.
+
+ ¹⁶That is, the two endpoints of $W'$ are internal vertices in different maximal degree-2 paths in $R$ or at least one endpoint of $W'$ is a vertex in $V_{=1}(R) \cup V_{\ge 3}(R)$.
samples/texts/1754951/page_45.md ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ Figure 1: Flow at a vertex and its reduction.
+
+ Figure 2: Two different ways of extracting a walk from a flow.
+
+ directed graphs. (In this context, undirected graphs are treated as directed graphs by replacing each edge by two parallel arcs of opposite directions.) Specifically, we denote an instance of Directed Planar Disjoint Paths as a tuple $(D, S, T, g, k)$ where $D$ is a directed plane graph, $S, T \subseteq V(D)$, $k = |S|$ and $g: S \to T$ is bijective. Then, a *solution* is a set $\mathcal{P}$ of pairwise vertex-disjoint directed paths in $D$ containing, for each vertex $s \in S$, a path directed from $s$ to $g(s)$.
+
+ In the language of flows, each arc of $D$ is assigned a word with letters in $T \cup T^{-1}$ (that is, we treat the set of vertices $T$ also as an alphabet), where $T^{-1} = \{t^{-1} : t \in T\}$. The collection of all such words is denoted by $(T \cup T^{-1})^*$, and we let $1$ denote the empty word. A word is *reduced* if, for all $t \in T$, the letters $t$ and $t^{-1}$ do not appear consecutively. Then, a *flow* is an assignment of reduced words to arcs that satisfies two constraints. First, when we concatenate the words assigned to the arcs incident to a vertex $v \notin S \cup T$ in clockwise order, where words assigned to ingoing arcs are reversed and their letters negated, the result (when reduced) is the empty word $1$ (see Fig. 1). This is an algebraic interpretation of the standard flow-conservation constraint. Second, when we perform the same operation with respect to a vertex $v \in S \cup T$, then when the vertex is some $s \in S$, the result is $g(s)$ (rather than the empty word), and when it is some $t \in T$, the result is $t$. There is a natural association of flows with solutions: for every $t \in T$, assign the letter $t$ to all arcs used by the path from $g^{-1}(t)$ to $t$.
+
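The word reduction used above is the standard free reduction: repeatedly cancel adjacent occurrences of a letter and its inverse. A self-contained sketch with a stack (the string encoding of inverses is our own, hypothetical choice):

```python
def reduce_word(word):
    """Freely reduce a word over T and T^{-1}: cancel adjacent t, t^{-1}
    pairs. Letters are strings; the inverse of 't' is 't^-1' and vice versa."""
    def inverse(letter):
        return letter[:-3] if letter.endswith("^-1") else letter + "^-1"
    stack = []
    for letter in word:
        if stack and stack[-1] == inverse(letter):
            stack.pop()  # a letter followed by its inverse cancels
        else:
            stack.append(letter)
    return stack  # [] represents the empty word 1

assert reduce_word(["t1", "t1^-1"]) == []                 # reduces to 1
assert reduce_word(["t1", "t2", "t2^-1", "t1^-1"]) == []  # nested cancellation
assert reduce_word(["t1", "t2"]) == ["t1", "t2"]          # already reduced
```

The stack works because a cancellation can expose a new cancellable pair, which is exactly what comparing against the top of the stack captures.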
+ Roughly speaking, Schrijver proved that if a flow $\phi$ is given along with the instance $(D, S, T, g, k)$, then in *polynomial time* we can either find a solution or determine that there is no solution “similar to $\phi$”. Specifically, two flows are *homologous* (which is the notion of similarity) if one can be obtained from the other by a *set* of “face operations” defined as follows.
+
+ **Definition 2.1.** Let $D$ be a directed plane graph with outer face $f$, and denote the set of faces of $D$ by $\mathcal{F}$. Two flows $\phi$ and $\psi$ are homologous if there exists a function $h : \mathcal{F} \to (T \cup T^{-1})^*$ such that (i) $h(f) = 1$, and (ii) for every arc $e \in A(D)$, $h(f_1)^{-1} \cdot \phi(e) \cdot h(f_2) = \psi(e)$ where $f_1$ and $f_2$ are the faces at the left-hand side and the right-hand side of $e$, respectively.
+
+ Then, a slight modification of Schrijver's theorem [42] readily gives the following corollary.
+
+ **Corollary 2.1.** There is a polynomial-time algorithm that, given an instance $(D, S, T, g, k)$ of Directed Planar Disjoint Paths, a flow $\phi$ and a subset $X \subseteq A(D)$, either finds a solution of $(D - X, S, T, g, k)$ or decides that there is no solution of it such that the “flow associated with it” and $\phi$ are homologous in $D$.
+
+ **Discrete Homotopy and Our Objective.** While the language of flows and homology can be used to phrase our arguments, it also makes them substantially longer and somewhat obscure because it gives rise to multiple technicalities. For example, different sets of non-crossing walks may correspond to the same flow (see Fig. 2). Instead, we define a notion of *discrete homotopy*, inspired by (standard) homotopy. Specifically, we deal only with collections of non-crossing *and edge-disjoint* walks, called *weak linkages*. Then, two weak linkages are *discretely homotopic* if
samples/texts/1754951/page_49.md ADDED
@@ -0,0 +1,12 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
+ $\mathcal{W}'$ is sensible, well-behaved, shallow and outer-terminal, has the same potential as $\mathcal{W}$, and $\sum_{\hat{S} \in \text{Seq}(\mathcal{W}')} \text{Volume}(\hat{S}) < \sum_{\hat{S} \in \text{Seq}(\mathcal{W})} \text{Volume}(\hat{S})$.
+
+ *Proof.* We first argue that the cycle move/pull operation is applicable to $(\mathcal{W}, C)$. Let $P_1, P_2$ and $P_3$ be the partition of $C$ in Definition 8.17. Note that because $S$ is innermost, its projecting cycle does not contain any edge in $E(\mathcal{W})$ that is not parallel to some edge of $R$, and hence neither does $C$, as it is drawn in the interior (including the boundary) of the projecting cycle. Furthermore, the only edges parallel to $R$ that $C$ can contain are those parallel to the edge $e_i$ of $P_3$ whose subscripts have absolute value larger than $|i|$. However, none of these edges belong to $\mathcal{W}$ because $i \in \{-n+\ell-1, n-h+1\}$ where $\ell$ and $h$ are as in Definition 8.16, and $\mathcal{W}$ is shallow. Lastly, note that $P_1$ is either $S$ (which might be a cycle) or a subpath of $S$, and hence it is a subwalk of $\mathcal{W}$. Thus, the cycle move or pull (depending on whether $P_1$ is a cycle) is applicable to $(\mathcal{W}, C)$. Furthermore, the new weak linkage $\mathcal{W}'$ that results from the application is the modification of $\mathcal{W}$ that replaces $P_1$ by the path consisting of $P_2$ and $P_3$.
+
+ Because $\mathcal{W}$ is sensible and the endpoints of no walk in $\mathcal{W}$ are changed in $\mathcal{W}'$, we have that $\mathcal{W}'$ is sensible as well. Moreover, the vertices of $P_2$ are not used by any walk in $\mathcal{W}$ apart from $\mathcal{W}'$ and only in its subwalk that traverses $P_2$, and therefore, as $\mathcal{W}$ is well-behaved, so is $\mathcal{W}'$. Additionally, note that $\mathcal{W}$ is shallow and that each edge belongs to at most as many projecting cycles of sequences in $\mathcal{W}'$ as it does in $\mathcal{W}$. Thus, if $P_3$ does not contain an edge (the only edge parallel to an edge of $R$ that might be used by $\mathcal{W}'$ but not by $\mathcal{W}$ is the edge $e_i$ of $P_3$, if it exists), it is immediate that $\mathcal{W}'$ is shallow. Now, suppose that $e_i$ exists. Let $b \in \{-1, 1\}$ have the same sign as $i$. Recall that $i \in \{-n+\ell-1, n-h+1\}$ where $\ell$ and $h$ are as in Definition 8.16, thus to conclude that $\mathcal{W}'$ is shallow, we only need to argue that $e_b$ belongs to the interior of fewer projecting cycles of sequences in $\mathcal{W}'$ than it does in $\mathcal{W}$. However, this holds since the only difference between the sequences of $\mathcal{W}$ and $\mathcal{W}'$ is that the sequence $S$ occurs in $\mathcal{W}$ (and contains $e_b$ in the interior of its projecting cycle), but is transformed into (one or two) other sequences in $\mathcal{W}'$, and these new sequences, by the definition of $\mathcal{W}'$, no longer contain $e_b$ in their projecting cycles. In this context, also note that the projecting cycles of the (one or two) new sequences enclose disjoint areas contained in the area enclosed by the projecting cycle of $S$, and the projecting cycles of the new sequences do not enclose the faces enclosed by $C$, but the projecting cycle of $S$ does enclose them. Thus, $\sum_{\hat{S} \in \text{Seq}(\mathcal{W}')} \text{Volume}(\hat{S}) < \sum_{\hat{S} \in \text{Seq}(\mathcal{W})} \text{Volume}(\hat{S})$.
+
+ It remains to show that $\mathcal{W}'$ is outer-terminal and that it has the same potential as $\mathcal{W}$. The second claim is immediate since $\mathcal{W}$ and $\mathcal{W}'$ have precisely the same crossings with $R$. For the first claim, note that since $\mathcal{W}$ is outer-terminal, it uses exactly one edge incident to $t^*$. The only vertex of $R$ that can possibly be incident to more edges in $E(\mathcal{W}')$ than in $E(\mathcal{W})$ is the other endpoint, say, $w$, of the edge of $P_3$ in the case where $P_3$ contains an edge. So, suppose that $P_3$ does contain an edge and that $w = t^*$, else we are done. Since $t^*$ is a leaf of $R$ that belongs to the boundary of the outer face of $H$, it cannot be enclosed in the strict interior of the projecting cycle of $S$ and therefore it must be a vertex of $S$. However, this together with the maximality of the number of cycles enclosed by the shrinking cycle $C$ implies that $C$ is equal to the projecting cycle of $S$. Thus, by the definition of $\mathcal{W}'$, the only difference between the edges incident to $t^*$ in $\mathcal{W}$ compared to $\mathcal{W}'$ is that in $\mathcal{W}$ it is incident to an edge of $S$, while in $\mathcal{W}'$ it is incident to the edge of $P_3$. In particular, this means that $\mathcal{W}'$ has exactly one edge incident to $t^*$ and therefore it is outer-terminal. □
+
+ Having Lemmas 8.2, 8.4, 8.5 and 8.6 at hand, we are ready to push a solution onto $R$. Since this part is only required to be existential rather than algorithmic, we give a simpler proof by contradiction rather than an explicit process to push the solution. Notice that once the solution has already been pushed, rather than using the notion of shallowness, we only demand multiplicity at most $2n$.
+
+ **Lemma 8.7.** Let $(G, S, T, g, k)$ be a good Yes-instance of Planar Disjoint Paths, and $R$ be a backbone Steiner tree. Then, there exists a sensible outer-terminal weak linkage $\mathcal{W}$ in $H$ that
samples/texts/1754951/page_56.md ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ Figure 3: Moving a walk of a weak linkage (in blue) onto the Steiner tree (the walk in purple) with “face operations” (e.g. a sub-path of the blue path is pushed giving the green sub-path).
+
+ one can be obtained from the other by using “face operations” that push/stretch its walks across faces and keep them non-crossing and edge-disjoint (see Fig. 3). More precisely, discrete homotopy is an equivalence relation that consists of three face operations, whose precise definition (not required to understand this overview) can be found in Section 5. We note that the order in which face operations are applied is important in discrete homotopy (unlike homology)—we cannot stretch a walk across a face if no walk passes its boundary, but we can execute operations that will move a walk to that face, and then stretch it. In Section 5, we translate Corollary 2.1 to discrete homotopy (and undirected graphs) to derive the following result.
+
+ **Lemma 2.1.** There is a polynomial-time algorithm that, given an instance $(G, S, T, g, k)$ of *Planar Disjoint Paths*, a weak linkage $W$ in $G$ and a subset $X \subseteq E(G)$, either finds a solution of $(G - X, S, T, g, k)$ or decides that no solution of it is discretely homotopic to $W$ in $G$.
+
+ In light of this result, our objective is reduced to the following task.
+
+ Compute a collection of weak linkages such that if there exists a solution, then there also exists a solution (*possibly a different one!*) that is discretely homotopic to one of the weak linkages in our collection. To prove Theorem 1.1, the size of the collection should be upper bounded by $2^{\mathcal{O}(k^2)}$.
+
+ **Key Player: Steiner Tree.** A key to the proof of our theorem is a very careful construction (done in three steps in Section 6) of a so-called *Backbone Steiner tree*. We use the term Steiner tree to refer to any tree in the *radial completion* of $G$ (the graph obtained by placing a vertex on each face and making it adjacent to all vertices incident to the face) whose set of leaves is precisely $S \cup T$. In the first step, we consider an arbitrary Steiner tree as our Steiner tree $R$. Having $R$ at hand, we have a more focused goal: we will zoom into weak linkages that are "pushed onto $R$", and we will only generate such weak linkages to construct our collection. Informally, a weak linkage is *pushed onto* $R$ if all of the edges used by all of its walks are *parallel to* edges of $R$. We do not demand that the edges belong to $R$, because then the goal described next could not be achieved. Instead, we make $4n+1$ parallel copies of each edge in the radial completion (the number $4n+1$ arises from considerations in the "pushing process"), and then impose the above weaker demand. Now, our goal is to show that, if there exists a solution, then there also exists one that can be pushed onto $R$ by applying face operations (in discrete homotopy) so that it becomes *identical* to one of the weak linkages in our collection (see Fig. 3).
+
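The radial completion described above admits a direct construction once the faces are known. A minimal sketch under the assumption that each face is given as its list of incident vertices (computing the face lists themselves requires a planar embedding; names hypothetical):

```python
def radial_completion(edges, faces):
    """Sketch of the radial completion: for each face (given as a list of
    incident vertices), add a new face-vertex adjacent to all of them."""
    new_edges = list(edges)
    for idx, face in enumerate(faces):
        f = ("face", idx)  # a fresh vertex placed inside face number idx
        new_edges += [(f, v) for v in face]
    return new_edges

# A triangle with its two faces (inner and outer), each incident to all
# three vertices.
edges = [(1, 2), (2, 3), (3, 1)]
completed = radial_completion(edges, [[1, 2, 3], [1, 2, 3]])
assert len(completed) == 3 + 6  # 3 original edges + 3 new edges per face
```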
+ At this point, one remark is in order. Our Steiner tree $R$ is a subtree of the radial completion of $G$ rather than of $G$ itself. Thus, if there exists a solution discretely homotopic to one of the weak linkages that we generate, it might not be a solution in $G$. We easily circumvent this issue by letting the set $X$ in Lemma 2.1 contain all "fake" edges.
+
+ **Partitioning a Weak Linkage Into Segments.** For the sake of clarity, before we turn to present the next two steps taken to construct $R$, we begin with the (non-algorithmic) part of
samples/texts/1754951/page_58.md ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ *Proof.* Let $e_i$ and $e'_j$ be the first and last edges of $S$, and denote the endpoints of $S$ by $u$ and $v$ where $u$ is an endpoint of $e_i$. Because $S$ is a swollen segment, both these edges are on the same side of $\text{path}_R(u, v)$. Let $P$ be the unique subpath in $R$ between $u$ and $v$. Because $\mathcal{W}$ does not have any special U-turn, we have that one of the following cases occurs (see Fig. 27): (i) $S$ traverses a path that starts at $e_i$, consists of edges parallel to $P$ and ends at $e'_j$; (ii) $S$ traverses a path that starts at $e_i$, consists of edges parallel to $P$ but does not end at $e'_j$, and hence (to reach $e'_j$ without having U-turns) $S$ traverses at least two copies (on opposite sides) of every edge of $R$; (iii) the first edge that $S$ traverses after $e_i$ is not parallel to an edge of $P$, and hence (to reach $e'_j$ without having U-turns) $S$ traverses at least two copies (on opposite sides) of every edge of $R$ except possibly for the edges of $P$. In the first case, we are done. In the other two cases, we have that $E(\mathcal{W})$ contains more than one copy of the edge incident to $t^*$ in $R$, which contradicts the assumption that $\mathcal{W}$ is outer-terminal. □
+
+ The segment chosen to move at each step is an innermost one, formally defined as follows.
+
+ **Definition 8.21 (Innermost Swollen Segment).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $\mathcal{W}$ be a weak linkage that is pushed onto $R$. Let $S \in \text{Seg}(\mathcal{W})$ be swollen. Then, $S$ is innermost if there do not exist parallel edges $e_i \in E(S)$ and $e_j \in E(\mathcal{W}) \setminus E(S)$ such that $i$ and $j$ have the same sign and $|j| < |i|$.
+
+ We now argue that if there is a swollen segment, then there is also an innermost one.
+
+ **Lemma 8.14.** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $\mathcal{W}$ be an outer-terminal weak linkage that has no special U-turns and is pushed onto $R$, such that $\text{Seg}(\mathcal{W})$ contains at least one swollen segment. Then, $\text{Seg}(\mathcal{W})$ contains at least one innermost swollen segment.
+
+ *Proof.* Let $S$ be a swollen segment of $\mathcal{W}$ such that the sum of the absolute values of the indices of the edge copies it uses is minimized. By Lemma 8.13, $S$ is parallel to a subpath $P$ of a maximal degree-2 path of $R$. Thus, because $S$ is a segment, all the edge copies it uses are on the same side. We claim that $S$ is innermost. Suppose, by way of contradiction, that this claim is false. Thus, there exist parallel edges $e_i \in E(S)$ and $e_j \in E(\mathcal{W}) \setminus E(S)$ such that $i$ and $j$ have the same sign and $|j| < |i|$. Let $S'$ be the segment of $\mathcal{W}$ to which $e_j$ belongs. Because $\mathcal{W}$ has no special U-turns and because weak linkages contain neither crossings nor repeated edges, it follows that $S'$ is parallel to a subpath $Q$ of $P$ and consists only of edge copies whose indices have strictly smaller absolute value than those of the edges of $P$ they are parallel to. However, this implies that $S'$ is a swollen segment of $\mathcal{W}$ such that the sum of the absolute values of the indices of the edge copies it uses is smaller than that of $S$. This contradicts the choice of $S$. □
+
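The extremal choice at the start of the proof above (a swollen segment minimizing the sum of the absolute values of its copy indices) can be sketched as follows, with segments represented simply by the lists of indices they use (a hypothetical representation, not the paper's code):

```python
def innermost_candidate(swollen_segments):
    """Pick a swollen segment minimizing the sum of the absolute values of
    the indices of the edge copies it uses, as in the proof of Lemma 8.14."""
    return min(swollen_segments, key=lambda seg: sum(abs(i) for i in seg))

# Segments given by the indices of the copies they use.
segments = [[3, 3, 2], [1, 1, 1], [2, 4]]
assert innermost_candidate(segments) == [1, 1, 1]  # smallest index-sum (3)
```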
+ Given an innermost swollen segment whose copies all lie on one side of $R$, we would like to move the segment to "the other side" of $R$. We know that these copies will be free in case we handle an extremal weak linkage. We now define a tuple of cycles on which we will perform move operations (see Fig. 26). The fact that this notion is well-defined (in the sense that the indices $\ell$ in the definition exist) will be argued in the lemma that follows it.
+
19
+ **Definition 8.22 (Move-Through Tuple).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $\mathcal{W}$ be an outer-terminal extremal weak linkage that has no special U-turns and is pushed onto $R$, and let $S \in \text{Seg}(\mathcal{W})$ be an innermost swollen segment. Let $e_{i_1}^1, e_{i_2}^2, \dots, e_{i_t}^t$, where $t = |E(S)|$, be the edges of $S$ in the order in which they occur when $S$ is traversed from one endpoint to another.¹⁹ Then, the move-through tuple of $S$ is $T = (C_1, \dots, C_t)$ where for every $j \in \{1, \dots, t\}$, $C_j$ is a cycle that consists of two parallel edges: $e_{i_j}^j$ and $e_\ell^j$ where $\ell$ is
20
+
21
+ ¹⁹To avoid ambiguity in the context of this definition, suppose that we have a fixed choice (e.g., lexicographic) of which endpoint is traversed first.
samples/texts/1754951/page_62.md ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ **Lemma 8.20.** Let $(G, S, T, g, k)$ be a good Yes-instance of Planar Disjoint Paths, and $R$ be a backbone Steiner tree. Then, there exists a sensible, outer-terminal, U-turn-free weak linkage $W$ in $H$ that is pushed onto $R$, discretely homotopic in $H$ to some solution of $(G, S, T, g, k)$, and $|\text{Seg}(W)| \le \alpha_{\text{potential}}(k)$.
2
+
3
+ *Proof.* By Lemma 8.17, there exists a sensible, outer-terminal, extremal weak linkage in $H$ that is pushed onto $R$, has no special U-turns and no swollen segments, is discretely homotopic in $H$ to some solution of $(G, S, T, g, k)$, and whose potential is upper bounded by $\alpha_{\text{potential}}(k)$. By Lemma 8.18, its number of segments is also upper bounded by $\alpha_{\text{potential}}(k)$. Among all such weak linkages that are sensible, outer-terminal, pushed onto $R$ and whose number of segments is upper bounded by $\alpha_{\text{potential}}(k)$, let $W$ be one with a minimum number of edges. To conclude the proof, it suffices to argue that $W$ is U-turn-free.
4
+
5
+ Suppose, by way of contradiction, that $W$ has at least one U-turn. Then, by Lemma 8.10, $W$ has an innermost U-turn $U = \{e_i, e_j\}$. Let $W_0$ be the walk in $W$ that uses $e_i$ and $e_j$, and $C$ be the cycle in $H$ that consists of $e_i$ and $e_j$. Then, by Lemma 8.11, the cycle pull operation is applicable to $(W_0, C)$. Furthermore, by Lemma 8.11, the resulting weak linkage $W'$ is sensible, outer-terminal, pushed onto $R$, has fewer edges than $W$, and its number of segments is upper bounded by the number of segments of $W$. Since discrete homotopy is an equivalence relation, $W'$ is discretely homotopic to some solution of $(G, S, T, g, k)$. However, this is a contradiction to the choice of $W$. $\square$
6
+
7
+ Now, we prove that having no U-turns implies that each segment can use only two parallel copies of every edge.
8
+
9
+ **Lemma 8.21.** Let $(G, S, T, g, k)$ be a nice instance of Planar Disjoint Paths, and $R$ be a Steiner tree. Let $W$ be an outer-terminal, U-turn-free weak linkage that is pushed onto $R$. Then, each segment $S \in \text{Seg}(W)$ uses at most two copies of every edge in $E(R)$.
10
+
11
+ *Proof.* Consider some segment $S \in \text{Seg}(W)$. Suppose, by way of contradiction, that there exists some edge $e_0 \in E(R)$ such that $S$ contains at least three edges parallel to $e_0$ (but it cannot contain $e_0$ itself, as $W$ is pushed onto $R$). Then, without loss of generality, suppose that it contains two copies with positive subscript, $e_i$ and $e_j$, and let $S'$ be the subwalk of $S$ having these edge copies as its extreme edges. Then, because $S$ is U-turn-free, when we traverse $S'$ from $e_i$ to $e_j$, we must visit the positive and negative copy of every other edge in $E(R)$ exactly once. However, this means that $E(W)$ contains more than one copy of the edge incident to $t^*$ in $R$, which contradicts the assumption that $W$ is outer-terminal. $\square$
12
+
13
+ Having established Lemmas 8.8, 8.20 and 8.21, we are ready to prove Lemma 8.19.
14
+
15
+ *Proof of Lemma 8.19.* By Lemma 8.20, there exists a sensible, outer-terminal, U-turn-free weak linkage $W'$ in $H$ that is pushed onto $R$, discretely homotopic in $H$ to some solution of $(G, S, T, g, k)$, and whose number of segments is upper bounded by $\alpha_{\text{potential}}(k)$. By Lemma 8.21, the multiplicity of $W'$ is upper bounded by $2\alpha_{\text{potential}}(k) = \alpha_{\text{mul}}(k)$. By Lemma 8.8, there exists a weak linkage $W$ that is sensible, pushed onto $R$, canonical, discretely homotopic to $W'$, and whose multiplicity is upper bounded by the multiplicity of $W'$. Thus, $W$ is simplified. Moreover, since discrete homotopy is an equivalence relation, $W$ is discretely homotopic to some solution of $(G, S, T, g, k)$. $\square$
16
+
17
+ # 9 Reconstruction of Pushed Weak Linkages from Templates
18
+
19
+ In this section, based on the guarantee of Lemma 8.19, we only attempt to reconstruct simplified weak linkages. Towards this, we introduce the notion of a template (based on another notion called a pairing). Roughly speaking, a template indicates how many parallel copies of each edge
samples/texts/1754951/page_63.md ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ incident to a vertex in $V_1(R) \cup V_2(R)$ are used by the walks in the simplified weak linkage $\mathcal{W}$ under consideration, and how many times, for each pair $(e, e')$ of non-parallel edges sharing a vertex, the walks in $\mathcal{W}$ traverse from a copy of $e$ to a copy of $e'$. Observe that a template does not indicate which edge copy is used by each walk, but only specifies certain numbers. Nevertheless, we will show that this is sufficient for faithful reconstruction of simplified weak linkages. The great advantage of templates, proved later, is that there are only a few of them.
2
+
3
+ ## 9.1 Generic Templates and Templates of Simplified Weak Linkages
4
+
5
+ We begin with the definition of the notion of a pairing, which will form the basis of a template. Let $V^*(R) = V_{\ge 3}(R) \cup V_2^*(R)$ where $V_2^*(R) = \{v \in V_{\ge 2}(R) | \exists u \in V_{\ge 1}(R) \cup V_{\ge 3}(R) \text{ such that } \{u, v\} \in E(R)\}$. Observe that $|V_2^*(R)| \le 2(|V_{\ge 1}(R)| + |V_{\ge 3}(R)| - 1) \le 8k$, by Observation 6.1. Therefore, $|V^*(R)| \le 12k$. Let $E^*(R)$ denote the set of edges in $E(R)$ that are incident on a vertex of $V^*(R)$, and observe that $|E^*(R)| \le 24k$ (since $R$ is a tree).
6
+
7
+ **Definition 9.1 (Pairing).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths with a Steiner tree $R$. For a vertex $v \in V_{\ge 3}(R)$, a pairing at $v$ is a set $\textbf{pairing}_v$ of unordered pairs of distinct edges in $R$ incident to $v$. For a vertex $v \in V_2^*(R)$, a pairing at $v$ is a collection of pairs of (possibly non-distinct) edges in $E_R(v)$. And, for a vertex $v \in V_{\le 1}(R)$, it is either the empty set or the singleton set containing the pair in which the (unique) edge incident to $v$ in $R$ occurs twice. A collection $\{\textbf{pairing}_u\}_{u \in V^*(R)}$, where $\textbf{pairing}_u$ is a pairing at $u$ for every vertex $u \in V^*(R)$, is called a pairing.
8
+
9
+ As we will see later, simplified weak linkages can only give rise to a specific type of pairings, which we call non-crossing pairings.
10
+
11
+ **Definition 9.2 (Non-Crossing Pairing).** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths. Let $R$ be a Steiner tree. Consider a vertex $v \in V^*(R)$, and let $e^1, e^2, \dots, e^r$ be the edges in $E(R)$ incident to $v$ in clockwise order where the first edge $e^1$ is chosen arbitrarily. A pairing $\textbf{pairing}_v$ at $v$ is non-crossing if there do not exist two pairs $(e^i, e^j)$ and $(e^x, e^y)$ in $\textbf{pairing}_v$, where $i < j$ and $x < y$, such that $i < x < j < y$ or $x < i < y < j$. More generally, a pairing $\{\textbf{pairing}_u\}_{u \in V^*(R)}$ is non-crossing if, for every $u \in V^*(R)$, the pairing $\textbf{pairing}_u$ is non-crossing.
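The non-crossing condition of Definition 9.2 is the classical condition that chords of a circle do not interleave. As a minimal sketch (ours, not the paper's; the function name and the input encoding are assumptions), a pairing at a vertex can be checked as follows, with each pair given by the positions $(i, j)$, $i < j$, of its two edges in the clockwise order around $v$:

```python
def is_non_crossing(pairing_v):
    """Definition 9.2 check: no two pairs (i, j) and (x, y) may
    interleave in the cyclic (clockwise) order of edges around v."""
    pairs = list(pairing_v)
    for a in range(len(pairs)):
        for b in range(a + 1, len(pairs)):
            (i, j), (x, y) = pairs[a], pairs[b]
            # Interleaved endpoints mean the two chords cross.
            if i < x < j < y or x < i < y < j:
                return False
    return True
```

For example, $\{(1,2), (3,4)\}$ and the nested $\{(1,4), (2,3)\}$ are non-crossing, while $\{(1,3), (2,4)\}$ is not; Lemma 9.1's outerplanarity argument then bounds the number of such pairs linearly in the number of edges around $v$.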
12
+
13
+ We now show that a non-crossing pairing can contain only $\mathcal{O}(k)$ pairs, which is better than a trivial bound of $\mathcal{O}(k^2)$. This bound will be required to attain a running time of $2^{\mathcal{O}(k^2)} n^{\mathcal{O}(1)}$ rather than $2^{\mathcal{O}(k^3)} n^{\mathcal{O}(1)}$.
14
+
15
+ **Lemma 9.1.** Let $(G, S, T, g, k)$ be an instance of Planar Disjoint Paths. Let $R$ be a Steiner tree. Let $\{\textbf{pairing}_v\}_{v \in V^*(R)}$ be a non-crossing pairing. Then, $\left|\bigcup_{v \in V^*(R)} \textbf{pairing}_v\right| \le \alpha_{npair}(k) := 48k$.
16
+
17
+ *Proof.* Towards the bound on $\left|\bigcup_{v \in V^*(R)} \textbf{pairing}_v\right|$, we first obtain a bound on each individual set $\textbf{pairing}_v$. To this end, consider some vertex $v \in V_{\ge 3}(R)$, and let $e^0, e^1, \dots, e^{r-1}$ be the edges in $E(R)$ incident to $v$ in clockwise order. Consider the undirected graph $C$ on vertex set $\{u_{e^i} | i \in \{0, 1, \dots, r-1\}\}$ and edge set $\{\{u_{e^i}, u_{e^{(i+1) \bmod r}}\} | i \in \{0, 1, \dots, r-1\}\} \cup \{\{u_{e^i}, u_{e^{j}}\} | (e^i, e^j) \in \textbf{pairing}_v\}$. Now, notice that $C$ is an outerplanar graph (Fig. 28). To see this, draw the vertices of $C$ on a circle on the plane, so that the curves on the circle that connect them correspond to the drawing of the edges in $\{\{u_{e^i}, u_{e^{(i+1) \bmod r}}\} | i \in \{0, 1, \dots, r-1\}\}$. Now, for each edge in $\{\{u_{e^i}, u_{e^{j}}\} | (e^i, e^j) \in \textbf{pairing}_v\}$, draw a straight line segment inside the circle that connects $u_{e^i}$ and $u_{e^{j}}$. The condition that asserts that $\textbf{pairing}_v$ is non-crossing ensures that no two line segments among those drawn previously intersect (except for at their endpoints). As an outerplanar graph on $q$ vertices can have at most $2q-3$ edges, we have that $|E(C)| < 2|V(C)| = 2r$. Because $\left|\textbf{pairing}_v\right| \le |E(C)|$, we have that $\left|\textbf{pairing}_v\right| \le 2r$. For a vertex $v \in V_2^*(R)$, since it has only two edges incident on it, $\left|\textbf{pairing}_v\right| \le 3$. Finally, for $v \in V_{\le 1}(R)$, $\left|\textbf{pairing}_v\right| \le 1$.
samples/texts/1754951/page_72.md ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Let us now consider the edges in $E^*(v)$. Since $\mathcal{W}$ is a weak linkage, there is exactly one walk, say $W_1$, that has the vertex $v$ as an endpoint. Any other walk in $\mathcal{W}$ contains an even number of edges from $E_H(v)$, and since $\mathcal{W}$ is pushed onto $R$, these edges are all parallel copies of $e^*$. Hence, the walks in $\mathcal{W}$ contain an odd number of parallel copies of $e^*$ in total, i.e. $\ell(e^*)$ is an odd number. Since $v$ is an endpoint of $W_1 \in \mathcal{W}$, there is exactly one edge $e_z^* \in E^*(v)$ such that $\text{stitch}_v(e_z^*) = e_z^*$.
2
+
3
+ We claim that $z = \frac{\ell(e^*)+1}{2}$. Towards this, let us argue that for any edge $e_i^*$, where $i < z$, if $\text{stitch}_v(e_i^*) = e_j^*$ then $j > z$. Suppose not, and without loss of generality assume that $i < j < z$. Let us choose $i$ such that $|j - i|$ is minimized, and note that $j \neq i$. Consider the collection of edges $e_p^*$ such that $i < p < j$. If this collection is empty, i.e. $j = i + 1$, then observe that the edges $(e_i^*, e_j^*)$ form a U-turn (which is impossible, as $\mathcal{W}$ has no U-turns), since $\text{stitch}_v(e_i^*) = e_j^*$ only if they were consecutive edges of some walk in $\mathcal{W}$, and there is no edge in the strict interior of the cycle formed by the parallel edges $e_i^*$ and $e_j^*$. Otherwise, this collection is non-empty; then observe that if $\text{stitch}_v(e_p^*) = e_q^*$ then $i < q < j$. Indeed, if this were not the case, then the pairs $(e_i^*, e_j^*)$ and $(e_p^*, e_q^*)$ would be crossing at $v$, since they occur as $e_i^* < e_p^* < e_j^* < e_q^*$ in $\text{order}_v$; this contradicts the fact that the weak linkage $\mathcal{W}$ is non-crossing. So $i < q < j$, and hence $|q - p| < |j - i|$. But this contradicts the choice of $i$. Hence, for every $i < z$, $\text{stitch}_v(e_i^*) = e_j^*$ where $j > z$. A symmetric argument holds for the other case, i.e. if $i > z$ then $\text{stitch}_v(e_i^*) = e_j^*$ where $j < z$. Therefore, we can conclude that
4
+ $$z = \frac{\ell(e^*)+1}{2}, \text{ and hence } \text{stitch}_v(e_z^*) = e_{\ell(e^*)+1-z}^*$$
5
+
6
+ Let us now consider the other edges in $E^*(v)$. Suppose that there exist integers $1 \le i, p \le \ell(e^*)$ with $\text{stitch}_v(e_i^*) = e_j^*$ and $\text{stitch}_v(e_p^*) = e_q^*$ such that $i < p < z$ and $z < j < q$. Then it is clear that the pairs $(e_i^*, e_j^*)$ and $(e_p^*, e_q^*)$ are crossing at $v$, which is a contradiction. Therefore, if $i < p < z$ then $z < q < j$, and this holds for every choice of $i$ and $p$. A symmetric argument holds in the other direction, i.e. if $i > p > z$, $\text{stitch}_v(e_i^*) = e_j^*$ and $\text{stitch}_v(e_p^*) = e_q^*$, then $j < q < z$. Now we claim that for any $i \in \{1, 2, \dots, \ell(e^*)\}$, if $\text{stitch}_v(e_i^*) = e_j^*$ then $j = \ell(e^*) + 1 - i$. Suppose not; consider the case $i < z$, and further let $j < \ell(e^*) + 1 - i$. Then observe that, for any edge $e_p^* \in \{e_{i+1}^*, \dots, e_{z-1}^*\}$, $\text{stitch}_v(e_p^*) \in \{e_{z+1}^*, \dots, e_{j-1}^*\}$. But $\{e_{i+1}^*, \dots, e_{z-1}^*\}$ is strictly larger than $\{e_{z+1}^*, \dots, e_{j-1}^*\}$, which is a contradiction to the definition of $\text{stitch}_v$. Hence, $j \ge \ell(e^*) + 1 - i$. A symmetric argument implies that $j \le \ell(e^*) + 1 - i$. Therefore, for any $i < z$, $\text{stitch}_v(e_i^*) = e_{\ell(e^*)+1-i}^*$. We can similarly argue that for $i > z$, $\text{stitch}_v(e_i^*) = e_{\ell(e^*)+1-i}^*$. Since we have already shown that $\text{stitch}_v(e_z^*) = e_z^*$, this concludes the proof of this lemma. $\square$
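In other words, at such a vertex the stitching simply reverses the order of the parallel copies of $e^*$. A toy sketch (our own encoding, not the paper's: copies are represented by their indices $1, \dots, \ell(e^*)$ with $\ell(e^*)$ odd):

```python
def stitch_at_endpoint(ell):
    """For odd ell, copy i of e* is stitched to copy ell + 1 - i;
    the middle copy z = (ell + 1) // 2 is fixed (the walk's endpoint)."""
    assert ell % 2 == 1, "ell(e*) must be odd at a walk endpoint"
    return {i: ell + 1 - i for i in range(1, ell + 1)}
```

For $\ell(e^*) = 5$ this yields $\{1 \mapsto 5,\ 2 \mapsto 4,\ 3 \mapsto 3,\ 4 \mapsto 2,\ 5 \mapsto 1\}$, matching the displayed formula with $z = 3$.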
7
+
8
+ **Lemma 9.8.** Let $(G, S, T, g, k)$ be a good Yes-instance of Planar Disjoint Paths, and let $R$ be a backbone Steiner tree. Let $\mathcal{W}$ be a simplified weak linkage in $H$ and let $\ell$ be the multiplicity function of $\mathcal{W}$. Consider the collection $\mathcal{A} = \{(\text{pairing}_v, \text{template}_v)\}|_{v \in V(R)} \in \tilde{\text{ALL}}$ consisting of the pairings and templates of $\mathcal{W}$. Let $\{f_v\}_{v \in V(R)}$ be the stitching extracted from $\mathcal{A}$. Then, for every vertex $v \in V_2(R) \cup V_{\ge 3}(R)$, $\text{stitch}_v(e) = f_v(e)$ for all edges $e \in E_H(v)$.
9
+
10
+ *Proof.* Let $\ell$ be the multiplicity function of the simplified weak linkage $\mathcal{W}$. Then, as $\mathcal{W}$ is canonical, for each edge $e \in E_R(v)$ with a parallel copy $e_i$, $\text{stitch}_v(e_i) \neq \perp$ if and only if $i \in \{1, 2, \dots, \ell(e)\}$. Since $\mathcal{W}$ is sensible and $v \notin V_1(R)$, $v$ cannot be the endpoint of any walk in $\mathcal{W}$. Hence, any walk contains an even number of edges from $E_H(v)$, and further any such edge is a parallel copy of an edge in $E_R(v) = \{e^1, e^2, \dots, e^r\}$, where these edges are enumerated according to $\text{order}_v$. Note that the collections of parallel copies of these edges also occur in the same manner in $\text{order}_v$. We present our arguments in three steps.
11
+
12
+ **Claim 9.1.** Consider a pair of edges $(e, e') \in \text{pairing}_v$ such that $\text{template}_v(e, e') > 0$. Then $\text{stitch}_v$ maps each edge in $\{e_{x_{e,e'}+1}, \dots, e_{x_{e,e'}+\text{template}_v(e,e')}\}$ to some edge in $\{e'_1, e'_2, \dots, e'_{\ell(e')}\}$, and vice versa.
14
+
15
+ *Proof.* Suppose not, and consider the case where $e$ occurs before $e'$ in $\text{order}_v$. Consider a parallel copy of $e$, say $e_i \in \{e_{x_{e,e'}+1}, \dots, e_{x_{e,e'}+\text{template}_v(e,e')}\}$, such that $\text{stitch}_v(e_i) = \hat{e}_j$, where $\hat{e} \in$
samples/texts/1754951/page_74.md ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ to the edges in $\{e_{x_{e,e'}+i+1}, \dots, e_{x_{e,e'}+\text{template}_v(e,e')}\}$. If not, then consider an edge $e'_p \in \{e'_{y_{e,e'}-\text{template}_v(e,e')}, \dots, e'_{j-1}\}$ such that $\text{stitch}_v(e'_p) = e_q$ where $q < x_{e,e'} + i$. Then consider the pairs $(e_{x_{e,e'}+i}, e'_j)$ and $(e'_p, e_q)$ in $\text{order}_v$, and observe that $e_q < e_{x_{e,e'}+i} < e'_p < e'_j$ in $\text{order}_v$. Then these pairs of edges are crossing at $v$, which is a contradiction to the fact that $\mathcal{W}$ is a weak linkage. On the other hand, $\left|\{e'_{y_{e,e'}-\text{template}_v(e,e')}, \dots, e'_{j-1}\}\right|$ is strictly larger than $\left|\{e_{x_{e,e'}+i+1}, \dots, e_{x_{e,e'}+\text{template}_v(e,e')}\}\right|$, which is again a contradiction, since all edges in $\{e'_1, \dots, e'_{\ell(e')}\}$ are mapped to distinct edges by $\text{stitch}_v$, and they are not mapped to $\perp$. By symmetric arguments, the case when $j < y_{e,e'} - i$ also leads to a contradiction. ◇
2
+
3
+ Now, by considering all pairs in $\text{pairing}_v$ and applying the above claims, we obtain that $\text{stitch}_v = f_v$ for all $v \in V_2(R) \cup V_{\ge 3}(R)$. This concludes the proof of this lemma. □
4
+
5
+ The following lemma is a corollary of Lemma 9.7 and Lemma 9.8.
6
+
7
+ **Lemma 9.9.** Let $(G, S, T, g, k)$ be a nice instance of Planar Disjoint Paths. Let $R$ be a Steiner tree. Let $\mathcal{W}$ be a simplified weak linkage, and let $\mathcal{A} = \{(\text{pairing}_v, \text{template}_v)\}|_{v \in V(R)} \in \tilde{\text{ALL}}$ be the pairing and template of $\mathcal{W}$. Let $\{f_v\}_{v \in V(R)}$ be the stitching extracted from $\mathcal{A}$. Then $f_v = \text{stitch}_v$ for every vertex $v \in V(R)$.
8
+
9
+ Now, we consider the computational aspect of the definitions considered so far in this section.
10
+
11
+ **Lemma 9.10.** Let $(D, S, T, g, k)$ be a nice instance of Directed Planar Disjoint Paths. Let $R$ be a Steiner tree. Let $\mathcal{A} = \{(\text{pairing}_v, \text{template}_v)\}|_{v \in V(R)} \in \tilde{\text{ALL}}$. Then, the multiplicity function extracted from $\mathcal{A}$ can be computed in time $k^{\mathcal{O}(1)}n$, and the stitching extracted from $\mathcal{A}$ can be computed in time $2^{\mathcal{O}(k)}n$.
12
+
13
+ *Proof.* First we consider the computation of the multiplicity function $\ell = \{\ell_v\}|_{v \in V(R)}$ extracted from $\mathcal{A}$ according to Definition 9.11. Note that, for every vertex $v \in V(R)$, we have that $|\text{pairing}_v| = \mathcal{O}(k)$ and the numbers assigned by $\text{template}_v$ are bounded by $2^{\mathcal{O}(k)}$. Therefore, $\ell_v$ can be computed in $2^{\mathcal{O}(k)}$ time for each $v \in V(R)$, taking a total of $2^{\mathcal{O}(k)}n$ time. Now, note that for any vertex $v \in V(R)$, it holds that $\ell_v(e) = 2^{\mathcal{O}(k)}$ for any edge $e \in E_H(v)$ (because $\text{template}_v$ is a $2^{\mathcal{O}(k)}$-template).
14
+
15
+ Let $\{f_v\}_{v \in V(R)}$ be the stitching extracted from $\mathcal{A}$ by Definition 9.12. Observe that when describing the stitching $f_v$ extracted at a vertex $v \in V(R)$, we only need to describe it for the parallel copies of edges in $E(R)$, and then only for the parallel copies $\{e_1, e_2, \dots, e_{\ell(e)}\}$ of $e \in E(R)$, where $\ell$ is the multiplicity function extracted from $\mathcal{A}$. For all other edges and parallel copies, the stitching maps them to $\perp$. Since $\ell(e) \le \alpha_{\text{mul}}(k)$, and the tree $R$ has at most $2k$ leaves, the stitching at each vertex can be described by a collection of $\mathcal{O}(k \cdot \alpha_{\text{mul}}(k)) = 2^{\mathcal{O}(k)}$ pairs of edges in $E_H(v) \times E_H(v)$. Further, by the construction described in Definitions 9.12 and 9.13, the stitching $f_v$ at each vertex $v \in V(R)$ can be constructed in $2^{\mathcal{O}(k)}$ time. Therefore, the collection $\{f_v\}_{v \in V(R)}$ can be constructed in $2^{\mathcal{O}(k)}n$ time. Finally, we need to test if this collection is a valid stitching, as described in Definition 9.14, which can be done by picking each edge $e \in E(R)$ and testing the parallel copies $\{e_1, e_2, \dots, e_{\ell(e)}\}$ one by one, which again takes $2^{\mathcal{O}(k)}n$ time. Hence the total time required to extract the stitching is $2^{\mathcal{O}(k)}n$. □
16
+
17
+ ## 9.3 Reconstruction of Weak Linkages from Templates
18
+
19
+ Now we describe the construction of a weak linkage from a valid stitching.
20
+
21
+ **Definition 9.15 (Weak Linkage of a Stitching).** Let $(G, S, T, g, k)$ be a good instance of Planar Disjoint Paths, and let $R$ be a backbone Steiner tree. Let $\{f_v\}_{v \in V(R)}$ be a stitching and suppose that it is valid. Then the weak linkage $\mathcal{W}$ constructed from $\{f_v\}_{v \in V(R)}$ is obtained as follows.
samples/texts/1754951/page_78.md ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Figure 6: Detours in the Steiner tree.
2
+
3
+ guarantee their existence. Specifically, we will ensure that $R$ does not have any *detour*, which roughly means that each of its maximal degree-2 paths is a shortest path connecting the two subtrees obtained once it is removed. More formally, we define a detour as follows (see Fig. 6).
4
+
5
+ **Definition 2.3.** A detour in $R$ is a pair of vertices $u, v \in V_{\ge 3}(R) \cup V_{=1}(R)$ (i.e. the non-degree-2 vertices in $R$) that are endpoints of a maximal degree-2 path $L$ of $R$, and a path $P$ in the radial completion of $G$, such that (i) $P$ is shorter than $L$, (ii) one endpoint of $P$ belongs to the component of $R - (V(L) \setminus \{u, v\})$ containing $u$, and (iii) one endpoint of $P$ belongs to the component of $R - (V(L) \setminus \{u, v\})$ containing $v$.
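A detour can be detected with a breadth-first search in the radial completion. The following sketch is ours, not the paper's (the adjacency-dict encoding, the function name, and passing the two subtrees' vertex sets plus the length of $L$ in edges are all assumptions):

```python
from collections import deque

def has_detour(adj, L_len, side_u, side_v):
    """BFS from the subtree containing u; a detour (Definition 2.3)
    exists iff some vertex of the subtree containing v is reachable
    by a path strictly shorter than the maximal degree-2 path L."""
    dist = {w: 0 for w in side_u}
    queue = deque(side_u)
    while queue:
        x = queue.popleft()
        for y in adj.get(x, ()):
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    reached = [dist[w] for w in side_v if w in dist]
    return bool(reached) and min(reached) < L_len
```

The short-cutting step described next simply repeats such a check, replacing $L$ by a shorter connecting path whenever one is found, until no detour remains.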
6
+
7
+ By repeatedly “short-cutting” $R$, a process that terminates in a linear number of steps, we obtain a new Steiner tree $R$ with no detour. Now, if the separator $S_u$ is large, then there is a large number of vertex-disjoint paths that connect the two subtrees separated by $S_u$, and all of these paths are “long”, namely, of length at least $2^{c_2k} - 2^{c_1k}$. Based on a result by Bodlaender et al. [5] (whose application requires working in the radial completion of $G$ rather than in $G$ itself), we show that the existence of these paths implies that the treewidth of $G$ is large. Thus, if the treewidth of $G$ were small, all of our separators would also be small. Fortunately, to guarantee this, we just need to invoke the following known result in a preprocessing step:
8
+
9
+ **Proposition 2.1 ([2]).** There is a $2^{\mathcal{O}(k)}n^2$-time algorithm that, given an instance $(G, S, T, g, k)$ of Planar Disjoint Paths, outputs an equivalent instance $(G', S, T, g, k)$ of Planar Disjoint Paths where $G'$ is a subgraph of $G$ whose treewidth is upper bounded by $2^{ck}$ for some constant $c$.
10
+
11
+ Having separators of size $2^{\mathcal{O}(k)}$, because segments going across different long paths must intersect these separators (or have an endpoint at distance $2^{\mathcal{O}(k)}$ in $R$ from some endpoint of a maximal degree-2 path), we immediately deduce the following.
12
+
13
+ **Observation 2.2.** Let $\mathcal{P}$ be a solution. Then, its number of segments that have one endpoint on one long path, and a second endpoint on a different long path, is upper bounded by $2^{\mathcal{O}(k)}$.
14
+
15
+ **Segments with Both Endpoints on the Same Long Path.** We are now left with segments whose both endpoints belong to the same long path, which have two different kinds of behavior: they may or may not spiral around $R$, where spiraling means that the two endpoints of the segment belong to different “sides” of the path (see Fig. 4 and Fig. 8). By making sure that at least one vertex in $S \cup T$ is on the outer face of the radial completion of $G$, we ensure that the cycle formed by any non-spiraling segment together with the subpath of $R$ connecting its two endpoints does not enclose all of $S \cup T$; specifically, we avoid having to deal with segments as the one in Fig. 7.
16
+
17
+ While it is tempting to try to devise face operations that transform a spiraling segment into a non-spiraling one, this is not always possible. In particular, if the spiral “captures” a path $P$ (of a solution), then when $P$ and the spiral are pushed onto $R$, the spiral is not reduced to a simple path between its endpoints, but to a walk that “flanks” $P$. Due to such scenarios, dealing with spirals (whose number we are not able to upper bound) requires special attention. Before we turn to this task, let us consider the non-spiraling segments.
samples/texts/1754951/page_83.md ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Figure 7: A bad segment that contains all of $S \cup T$ in its cycle.
2
+
3
+ **Non-Spiraling Segments.** To achieve our main goal, we aim to push a (hypothetical) solution onto $R$ so that only a few parallel copies of each edge will be used. Now, we argue that non-spiraling segments do not pose a real issue in this context. To see this, consider a less refined partition of a solution where some non-spiraling segments are “grouped” as follows (see Fig. 4).
4
+
5
+ **Definition 2.4.** A subwalk of a walk $W$ is a preliminary group of $W$ if either (i) it has endpoints on two different maximal degree-2 paths of $R$, or it has an endpoint in $V_{\ge 1}(R) \cup V_{\ge 3}(R)$, or it is spiraling; or (ii) it is the union of an inclusion-wise maximal collection of segments not of type (i).
6
+
7
+ The collection of preliminary groups of $W$ is denoted by $\text{PreSegGro}(W)$. Clearly, it is a partition of $W$. For a weak linkage $\mathcal{W}$, $\text{PreSegGro}(\mathcal{W}) = \bigcup_{W \in \mathcal{W}} \text{PreSegGro}(W)$. Then,
8
+
9
+ **Observation 2.3.** Let $\mathcal{W}$ be a weak linkage. The number of type-(ii) preliminary groups in $\text{PreSegGro}(\mathcal{W})$ is at most 1 plus the number of type-(i) preliminary groups in $\text{PreSegGro}(\mathcal{W})$.
10
+
11
+ Roughly speaking, a type-(i) preliminary group is easily pushed onto $R$ so that it becomes merely a simple path (see Fig. 4). Thus, by Observation 2.3, all type-(ii) preliminary groups of a solution in total do not give rise to the occupation of more than $x+1$ copies of an edge, where $x$ is the number of type-(i) preliminary groups.
12
+
13
+ **Rollback Spirals and Winding Number.** Unfortunately, the number of spirals can be huge. Nevertheless, we can pair-up *some* of them so that they will “cancel” each other when pushed onto $R$ (see Fig. 8), thereby behaving like a type-(ii) preliminary group. Intuitively, we pair-up two spirals of a walk if one of them goes from the left-side to the right-side of the path, the other goes from the right-side to the left-side of the same path, and “in between” them on the walk, there are only type-(ii) preliminary groups and spirals that have already been paired-up. We refer to paired-up spirals as *rollback spirals*. (Not all spirals can be paired-up in this manner.) This gives rise to the following strengthening of Definition 2.4.
14
+
15
+ **Definition 2.5.** A subwalk of a walk $W$ is called a group of $W$ if either (i) it is a non-spiral type-(i) preliminary group, or (ii) it is the union of an inclusion-wise maximal collection of segments not of type (i) (i.e., all endpoints of the segments in the group are internal vertices of the same maximal degree-2 path of $R$). The potential of a group is (roughly) 1 plus its number of non-rollback spirals.
16
+
17
+ Now, rather than upper bounding the total number of spirals, we only need to upper bound the number of non-rollback spirals. To this end, we use the notion of *winding number* (in Section 7), informally defined as follows. Consider a solution $\mathcal{P}$, a path $Q \in \mathcal{P}$, and a long path $P$ of $R$ with separators $S_u$ and $S_v$. As $S_u$ and $S_v$ are minimal separators in a triangulated graph (the radial completion is triangulated), they are cycles, and as at least one vertex in $T$ belongs to the outer face, they form a ring (see Fig. 9). Each maximal subpath of $Q$ that lies inside this ring can either *visit* the ring, which means that both its endpoints belong to the same separator, or *cross* the ring, which means that one of its endpoints belongs to $S_u$ and the other to $S_v$ (see Fig. 9). Then, the (absolute value of the) *winding number* of a crossing subpath is the number
samples/texts/2262004/page_1.md ADDED
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ We are IntechOpen,
2
+ the world's leading publisher of
3
+ Open Access books
4
+ Built by scientists, for scientists
5
+
6
+ 5,300
7
+ Open access books available
8
+
9
+ 131,000
10
+ International authors and editors
11
+
12
+ 160M
13
+ Downloads
14
+
15
+ Our authors are among the
16
+ TOP 1%
17
+ most cited scientists
18
+
19
+ 154
20
+ Countries delivered to
21
+
22
+ 12.2%
23
+ Contributors from top 500 universities
24
+
25
+ WEB OF SCIENCE™
26
+
27
+ Selection of our books indexed in the Book Citation Index
28
+ in Web of Science™ Core Collection (BKCI)
29
+
30
+ Interested in publishing with us?
31
+ Contact book department@intechopen.com
32
+
33
+ Numbers displayed above are based on latest data collected.
34
+ For more information visit www.intechopen.com
samples/texts/2262004/page_10.md ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ The following symbols are assigned: C to the instantaneous concentration of CDNB, B to the instantaneous concentration of GSH, Q to the instantaneous concentration of the product, $K_{ma}$ to the K<sub>m</sub> of GST for CDNB, $K_{mb}$ to the K<sub>m</sub> of GST for GSH, $K_{ib}$ to the dissociation constant of GSH, $K_{iq}$ to the dissociation constant of the product, A to the instantaneous absorbance, $A_m$ to the maximal absorbance of the product, $\varepsilon$ to the difference in absorptivity between the product and CDNB, and $V_m$ to the maximal reaction rate of GST. The differential rate equation for the GST reaction is Equ.(13). After the definition of $M_1$, $M_2$ and $M_3$, the integrated rate equation with the predictor variable of reaction time is Equ.(19) if the GST reaction is irreversible and a process similar to that for Equ.(4) is employed (Zhao, L.N., et al., 2006).
2
+
3
+ $$ \frac{1}{V} = \left( \frac{K_{mb}}{V_m} \right) \times \left[ 1 + K_{ib} \times \frac{K_{ma} \times Q}{K_{iq} \times K_{mb} \times C} \right] / B + \left[ 1 + K_{ma} \times \frac{1+Q}{K_{iq}} \right] / V_m \quad (13) $$
4
+
5
+ $$ M_1 = \frac{K_{ma}}{\varepsilon \times K_{iq}} \quad (14) $$
6
+
7
+ $$ M_2 = K_{ma} - K_{ib} \times \frac{K_{ma}}{K_{iq}} - A_m \times \frac{K_{ma}}{\varepsilon \times K_{iq}} + C - A_0 \times \frac{K_{ma}}{\varepsilon \times K_{iq}} \quad (15) $$
8
+
9
+ $$ M_3 = K_{ma} \times A_m + \varepsilon \times K_{mb} \times C + C \times A_m - K_{ib} \times K_{mb} \times A_0 / K_{iq} - K_{ma} \times A_m \times A_0 / (K_{iq} \times \varepsilon) \quad (16) $$
10
+
11
+ $$ \frac{M_1 \times A^2 + M_2 \times A - M_3}{A - A_m} \times dA = C \times \varepsilon \times V_m \times dt \quad (17) $$
12
+
13
+ $$ Y = M_1 \times (A - A_m)^2 / 2 + (2 \times M_1 \times A_m + M_2) \times (A - A_m) + (M_1 \times A_m^2 + M_2 \times A_m - M_3) \times \ln|A - A_m| \quad (18) $$
14
+
15
+ $$ Y = C \times \varepsilon \times V_m \times (t - T_{lag}) = a + b \times t \quad (19) $$
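To make the transformation concrete, here is a small sketch (ours, not the authors' code; the function name is an assumption) that evaluates the left-hand side of Equ. (18). By Equ. (19), plotting this $Y$ against reaction time $t$ should give a straight line with slope $C \times \varepsilon \times V_m$:

```python
import math

def gst_Y(A, A_m, M1, M2, M3):
    """Y of Equ. (18): the antiderivative of (M1*A^2 + M2*A - M3)/(A - A_m)
    with respect to A, which Equ. (17) equates to C*eps*Vm*dt."""
    return (M1 * (A - A_m) ** 2 / 2
            + (2 * M1 * A_m + M2) * (A - A_m)
            + (M1 * A_m ** 2 + M2 * A_m - M3) * math.log(abs(A - A_m)))
```

A quick consistency check: differentiating `gst_Y` with respect to A (numerically or by hand) reproduces the integrand of Equ. (17).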
16
+
17
+ Fig. 5. Estimated $V_m$ to changes in data ranges for analyses with 60 µmol/L GSH.
samples/texts/2262004/page_11.md ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ As demonstrated in the definition of $M_1$, $M_2$ and $M_3$, kinetic parameters preset as constants for kinetic analysis of the GST reaction curve should have strong covariance. Except for $K_{iq}$, which is an unknown kinetic parameter to be optimized, the other kinetic parameters are those reported (Kunze, 1997; Pabst, et al, 1974). To optimize $K_{iq}$, two criteria are used. The first is the consistency of predicted $A_m$ at a series of GSH concentrations using data of 6.0-min reaction with that by the equilibrium method after 40 min reaction (GST activity is optimized to complete the reaction within 40 min). The second is the resistance of $V_m$ to reasonable changes in data ranges for analyses. After stepwise optimization, $K_{iq}$ is fixed at 4.0 µmol/L; $A_m$ predicted for GSH from 5.0 µmol/L to 50 µmol/L is consistent with that by the equilibrium method (Zhao, L.N., et al. 2006); the estimation of $V_m$ is resistant to changes of data ranges (Fig. 5). Therefore, $K_{iq}$ is optimized and fixed as a constant at 4.0 µmol/L.
2
+
3
+ Fig. 6. Response of determined GSH concentrations to preset GSH concentrations (the equilibrium method uses data with 6.0 min reaction).
4
+
5
+ Fig. 7. Response of initial rates to quantities of purified porcine alkaline GST.
6
+
7
+ Kinetic analysis of the GST reaction curve can predict $A_m$ for GSH over 4.0 µmol/L, but there are insufficient data for analyses at GSH below 3.0 µmol/L. After optimization of GST activity for complete conversion of GSH at 5.0 µmol/L within 6.0 min, the reaction curve within 5.0 min for GSH at 5.0 µmol/L can be used for kinetic analysis to predict $A_m$. With the optimized GST activity for reaction within 5.0 min, the linear range for GSH assay is from 1.5 µmol/L to over 90.0 µmol/L by the integration strategy while it is from 4.0
samples/texts/2262004/page_12.md ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Kinetic Analyses of Enzyme Reaction Curves
2
+ with New Integrated Rate Equations
3
+ and Applications
4
+
5
+ Xiaolan Yang, Gaobo Long, Hua Zhao and Fei Liao*
6
+
7
+ College of Laboratory Medicine, Chongqing Medical University,
8
+ Chongqing,
9
+ China
10
+
11
+ # 1. Introduction
12
+
13
+ A reaction system of a Michaelis-Menten enzyme acting on a single substrate can be characterized by the initial substrate concentration before enzyme action ($S_0$), the maximal reaction rate ($V_m$) and the Michaelis-Menten constant ($K_m$), besides some other required parameters. The estimations of $S_0$, $V_m$ and $K_m$ can be used to measure enzyme substrates, enzyme activities, epitopes or haptens (enzyme immunoassay), irreversible inhibitors and so on. During an enzyme reaction, the changes of substrate or product concentrations can be monitored; continuous monitoring of such changes provides a reaction curve while discontinuous monitoring provides signals just for the starting point and the terminating point of the enzyme reaction. It is an end-point method when only signals for the starting point and the terminating point are analyzed. It is a kinetic method when a range of data from a reaction curve is analyzed, and can be classified into the initial rate method and kinetic analysis of reaction curve. The initial rate method only analyzes data for the initial rate phase, whose instantaneous rates are constant; kinetic analysis of reaction curve analyzes data whose instantaneous rates show obvious deviations from the initial rate (Bergmeyer, 1983; Guilbault, 1976; Marangoni, 2003). To estimate the parameters of an enzyme reaction system, kinetic analysis of reaction curve is favoured because the analysis of one reaction curve can concomitantly provide $V_m$, $S_0$ and $K_m$. Hence, methods for kinetic analysis of reaction curve to estimate parameters of enzyme reaction systems are widely studied.
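For the single-substrate case, the link between a reaction curve and ($S_0$, $V_m$, $K_m$) is the classical integrated Michaelis-Menten equation $K_m \ln(S_0/S) + (S_0 - S) = V_m t$. A minimal sketch (not the chapter's own code) that generates a progress curve from it by bisection, showing why one curve encodes all three parameters jointly:

```python
import math

def mm_substrate_at(t, S0, Vm, Km):
    """Solve Km*ln(S0/S) + (S0 - S) = Vm*t for S by bisection.
    The left-hand side grows monotonically as S decreases, so the
    root is bracketed between ~0 and S0."""
    lo, hi = 1e-12, S0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        consumed = Km * math.log(S0 / mid) + (S0 - mid)
        if consumed > Vm * t:   # mid is below the true S
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# A progress curve generated this way depends on S0, Vm and Km jointly,
# which is why fitting a whole curve can return all three at once
# (illustrative units: µmol/L and min).
curve = [(t, mm_substrate_at(t, 100.0, 5.0, 20.0)) for t in range(0, 61, 5)]
```

Fitting this implicit equation to measured data is what "kinetic analysis of reaction curve" does for the simplest reaction systems; the chapter's later sections extend the idea to reversible and product-inhibited enzymes.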
14
+
15
+ An enzyme reaction curve is a function of dependent variables, which are proportional to concentrations of a substrate or product, with respect to reaction time as the predictor variable. In general, there are two types of enzyme reaction curves. The first type involves the action of just one enzyme, and employs either a selective substrate to detect the activity of one enzyme of interest or a specific enzyme to act on a unique substrate of interest. The second type involves the actions of at least two enzymes, and requires at least one auxiliary enzyme as a tool to continuously monitor a reaction curve. The second type is an enzyme-coupled reaction system. For kinetic analysis of reaction curve, there are many reports on
16
+
17
+ * Corresponding Author
samples/texts/2262004/page_13.md ADDED
@@ -0,0 +1,11 @@
 
 
 
 
 
 
 
 
 
 
 
 
1
+ µmol/L to over 90.0 µmol/L by kinetic analysis of reaction curve alone (Fig. 6, unpublished). By the equilibrium method alone for reaction within 5.0 min, the assay of 80.0 µmol/L GSH requires a GST activity that is 50-fold higher due to the inhibition of GST by the accumulated product. Therefore, the integration strategy for GSH assay is obviously advantageous.
2
+
3
+ The integration strategy for measuring GST initial rates is tested. For convenience, $S_0$ of the final GSH is fixed at 50 µmol/L and the duration of monitoring the reaction curve is optimized. After analyses of reaction curves recorded within 10 min, it is found that reaction for 6.0 min is sufficient to provide the required overlapped region of GST activities measurable by both methods. By using $K_{iq}$ fixed at 4.0 µmol/L as a constant, the reaction duration of 6.0 min and PSC at 48 µmol/L to convert $V_m$ to initial rates, the integration strategy gives a linear range from 2.0 U/L to 60 U/L; kinetic analysis of reaction curve alone gives a linear range from 5.0 U/L to 60 U/L while the classical initial rate method alone gives a linear range from 1.0 U/L to 5.0 U/L (Fig. 7, unpublished). Clearly, for an enzyme suffering strong product inhibition, the integration strategy for enzyme initial rate assay is advantageous.
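The integration strategy covers a wide activity range by combining the two estimators. A schematic of the switching logic follows; the hyperbolic conversion of $V_m$ to an initial rate and all numerical thresholds are illustrative assumptions, not the chapter's exact procedure:

```python
def vm_to_initial_rate(Vm, S_std, Km_app):
    # Convert a fitted Vm to the initial rate at the standard substrate
    # concentration S_std (hypothetical hyperbolic conversion).
    return Vm * S_std / (Km_app + S_std)

def integrated_initial_rate(v0_classical, Vm_fit, S_std=50.0, Km_app=25.0,
                            v0_reliable_max=5.0):
    """Use the classical initial rate where it is reliable (slow
    reactions, low activities); otherwise use the rate converted from
    the Vm given by kinetic analysis of the reaction curve. Units and
    thresholds here are placeholders."""
    if v0_classical is not None and v0_classical <= v0_reliable_max:
        return v0_classical
    return vm_to_initial_rate(Vm_fit, S_std, Km_app)
```

The essential requirement, as the text notes, is an overlapped region of activities measurable by both methods so that the two branches can be cross-calibrated.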
4
+
5
+ ### 2.5.3 Alcohol dehydrogenase reaction
6
+
7
+ ADH is widely used for serum ethanol assay. ADH kinetics is complicated due to the reversibility of the reaction and the inhibition by both acetaldehyde and NADH as products. To simplify ADH kinetics, some special approaches are employed to make the ADH reaction apparently irreversible on a single substrate (alcohol). Thus, the reaction pH is optimized to 9.2 to scavenge hydrogen ions; semicarbazide at a final 75 mmol/L is used to remove acetaldehyde as completely as possible; final nicotinamide adenine dinucleotide (NAD+) is 3.0 mmol/L; final ADH is about 50 U/L (Liao et al., 2007a). By assigning the maximal absorbance at 340 nm for reduced nicotinamide adenine dinucleotide (NADH) by the equilibrium method to $A_{me}$ and that by kinetic analysis of reaction curve to $A_{mk}$, kinetic analysis of the ADH reaction curve should predict $A_{mk}$ consistent with $A_{me}$, but this requires some special efforts.
8
+
9
+ Fig. 8. Response of $F$ values to preset $C_{ald}$ for kinetic analysis of reaction curve for 0.31 mmol/L ethanol (reproduced with permission from Liao et al., 2007a).
10
+
11
+ The use of semicarbazide reduces concentrations of acetaldehyde ($C_{ald}$) to unknown levels, and thus complicates the treatment of acetaldehyde inhibition of ADH. The integrated rate equation with the predictor variable of reaction time can be worked out for ADH (Liao, et
samples/texts/2262004/page_14.md ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ al., 2007a). All kinetic parameters and NAD+ concentrations are preset as those used or reported (Ganzhorn et al., 1987). However, there are multiple maxima of the goodness of fit with the continuous increase in steady-state $C_{\text{ald}}$ for kinetic analysis of reaction curve (Fig. 8). Thus, $C_{\text{ald}}$ cannot be concomitantly estimated by kinetic analysis of reaction curve, and a special approach is used to approximate the steady-state $C_{\text{ald}}$ for predicting $A_{\text{mk}}$.
2
+
3
+ Fig. 9. Correlation function of the best steady-state $C_{\text{ald}}$ with $A_{\text{me}}$ (reproduced with permission from Liao et al., 2007a).
4
+
5
+ Under the same reaction conditions, the equilibrium method can determine $A_{\text{me}}$ for ethanol below 0.20 mmol/L after reaction for 50 min. For kinetic analyses of such reaction curves, the lag time for steady-state reaction is estimated to be over 40 s and is used to select data of steady-state reaction for analysis. Using the equilibrium method as the reference method, the best steady-state $C_{\text{ald}}$ for data of 6.0-min reaction is obtained for consistency of $A_{\text{mk}}$ with $A_{\text{me}}$ at each tested ethanol level from 10 µmol/L to 0.17 mmol/L. After dilution and determination by the equilibrium method, $A_{\text{me}}$ for each tested ethanol level from 0.17 mmol/L to 0.30 mmol/L is also available. Consequently, an exponential additive function is obtained to approximate the correlation of $A_{\text{me}}$ with the best $C_{\text{ald}}$ for predicting $A_{\text{mk}}$ consistent with $A_{\text{me}}$ (Fig. 9). This correlation function for $C_{\text{ald}}$ and $A_{\text{mk}}$ is used as a restriction function to iteratively adjust $C_{\text{ald}}$ for predicting $A_{\text{mk}}$; namely, iterative kinetic analysis of reaction curve, with $C_{\text{ald}}$ predicted from the restriction function using the previous $A_{\text{mk}}$, finally gives the desired $A_{\text{mk}}$. Such an artificial intelligence approach to the steady-state $C_{\text{ald}}$ for kinetic analysis of reaction curve can hardly be found in publications.
6
+
7
+ To start kinetic analysis of an ADH reaction curve, the highest absorbance under analysis is taken as $A_{\text{mk}}$ to predict the best $C_{\text{ald}}$ for the current run of kinetic analysis of reaction curve. The estimated $A_{\text{mk}}$ is then used to predict the second $C_{\text{ald}}$ for the second run of kinetic analysis of reaction curve (Fig. 10). Such iterative kinetic analysis of reaction curve can predict $A_{\text{mk}}$ consistent with $A_{\text{me}}$ for 0.31 mmol/L ethanol when the reaction duration is just 6.0 min and the convergence criterion is set as an absorbance change below 0.0015 in $A_{\text{mk}}$. Usually convergence is achieved within 7 runs of the iterative kinetic analysis. Moreover, the analysis is resistant to 50% changes in ADH activities, and coefficients of variation (CV) are below 5% for final ethanol levels from 20 µmol/L to 310 µmol/L in reaction solutions.
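The iterative scheme can be sketched as a fixed-point loop. The restriction function and the curve-analysis step below are toy stand-ins (the fitted exponential additive function of Fig. 9 is not reproduced here), chosen only so that the loop converges the way the text describes:

```python
def iterative_Amk(curve, analyze, restriction, tol=0.0015, max_runs=20):
    """Iterative scheme from the text: seed Amk with the highest
    observed absorbance, predict Cald from the restriction function,
    reanalyze the curve with that Cald fixed, and stop once Amk
    changes by less than tol (cf. the 0.0015 criterion)."""
    Amk = max(curve)
    for run in range(1, max_runs + 1):
        Cald = restriction(Amk)
        new_Amk = analyze(curve, Cald)
        if abs(new_Amk - Amk) < tol:
            return new_Amk, run
        Amk = new_Amk
    return Amk, max_runs

# Toy stand-ins, NOT the chapter's fitted functions: Cald grows with
# Amk, and a larger Cald pulls the analyzed Amk down slightly, so the
# iteration contracts to a fixed point.
restriction = lambda Amk: 0.02 * Amk
analyze = lambda curve, Cald: 0.80 / (1.0 + Cald)
Amk, runs = iterative_Amk([0.10, 0.40, 0.60], analyze, restriction)
```

With contracting stand-ins like these the loop settles within a few runs, mirroring the reported convergence within 7 runs of the real analysis.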